The Arbitrators are contacting Indexer address 0xc55c63563efb36f7cc65ac3060c52987c6694b37 (figment-prime.eth) and fisherman 0x1ed0c56b75838e9788ae83b26d80c0c3a353e249 about a new Dispute filed in the protocol.
To the fisherman, could you share the data you gathered that led to you filing the dispute? Please provide all relevant information and records about the open dispute. This will likely include POIs generated for the affected subgraph(s).
Purpose of the Requirement
This requirement is related to the following dispute:
The Arbitration Charter regulates the arbitration process. You can find it in Radicle project ID rad:git:hnrkrhnth6afcc6mnmtokbp4h9575fgrhzbay or at GIP-0009: Arbitration Charter.
Thank you @tmigone. This dispute is about POI 0xa3da80a275bf91366181a60c6763835070c914fc171e09e54234c3303c572729 having been used for a third time on a healthy subgraph. This can be verified with this query on the Graph Network Arbitrum subgraph:
{
  allocations(where: {poi: "0xa3da80a275bf91366181a60c6763835070c914fc171e09e54234c3303c572729"}) {
    poi
    indexer {
      id
    }
    closedAt
  }
}
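For anyone wanting to reproduce this check, the query above can be sent to the subgraph's HTTP endpoint with a plain POST. A minimal sketch in Python, assuming a hypothetical gateway URL (the `<api-key>` and `<subgraph-id>` placeholders must be filled in with your own values; the function names here are illustrative, not part of any official tooling):

```python
import json
import urllib.request

# Hypothetical endpoint -- replace the placeholders with a real API key and
# the Graph Network Arbitrum subgraph deployment ID.
SUBGRAPH_URL = "https://gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>"

def build_poi_query(poi: str) -> dict:
    """Build the GraphQL payload listing all allocations closed with a given POI."""
    query = f"""
    {{
      allocations(where: {{poi: "{poi}"}}) {{
        poi
        indexer {{
          id
        }}
        closedAt
      }}
    }}
    """
    return {"query": query}

def fetch_allocations(poi: str) -> list:
    """POST the query and return the list of allocations sharing this POI.

    More than one returned allocation means the POI was reused.
    """
    payload = json.dumps(build_poi_query(poi)).encode()
    req = urllib.request.Request(
        SUBGRAPH_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]["allocations"]

# Example usage (requires network access and a valid API key):
#   for alloc in fetch_allocations("0xa3da80a275bf91366181a60c6763835070c914fc171e09e54234c3303c572729"):
#       print(alloc["indexer"]["id"], alloc["closedAt"])
```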
The subgraph state is healthy, and other indexers active on the subgraph correctly submitted non-duplicate POIs on it during the same timeframe.
Note that the indexer has contacted me privately; they are waiting for their forum account to be activated before they can join the conversation here.
Hi there,
My name is Thomas Bonnin and I represent the indexer in this dispute.
We have been called out regarding an issue affecting our allocation process on The Graph Network, which resulted in repeated submissions of the same Proof of Indexing (POI). We understand this exposes us to potential disputes and penalties within the community.
The root cause stems from a combination of automated process failures and temporarily muted monitoring systems. Upon discovery, we immediately initiated corrective actions and are actively working to resolve the situation.
Our current focus includes manually addressing allocations to prevent further issues, restoring monitoring systems, and improving our automation processes to ensure this does not happen again.
We take this matter seriously and remain committed to maintaining trust and reliability within The Graph ecosystem. Further updates will be shared as we progress toward full resolution.
The root cause stems from a combination of automated process failures and temporarily muted monitoring systems. Upon discovery, we immediately initiated corrective actions and are actively working to resolve the situation.
Can you expand a bit on this? It would be helpful if you described what your automated processes look like and how it led to POIs being reused. Are you using the indexer-agent to close allocations?
I can confirm that we are using the indexer-agent to close allocations and that it was operational.
For the automation script, I will focus my answer on the faulty part we identified.
We encountered a problem automatically closing allocations with a 0x0 POI for subgraphs that failed to fully synchronize on our side (IE67 error).
The automation tried to force-close them, which resulted in an error loop.
Based on our analysis of the different timestamps (automation changes, automation runs, and the faulty POI being reused), our understanding is that this is the cause of the issue.
Somehow the automation went through without processing those IE67 errors and the stuck queued actions, which eventually resulted in the faulty behaviour.
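To make the described failure mode concrete, here is a hypothetical reconstruction in Python. This is not the indexer's actual script; every name in it (resolve_poi, close_stuck_allocations, the queue shape) is invented for illustration. It shows how a fallback path that is supposed to substitute the zero POI for unsynced deployments can instead keep resubmitting the last accepted POI on every retry of a stuck action:

```python
# Hypothetical sketch of the failure mode described above -- illustrative only.

ZERO_POI = "0x" + "00" * 32  # the POI expected when force-closing an unsynced deployment

def resolve_poi(deployment: dict, last_valid_poi: str) -> str:
    """Pick the POI to close an allocation with.

    BUG (illustrative): for a deployment that failed to sync (the IE67 case),
    this should return ZERO_POI, but the fallback instead reuses the last
    accepted POI -- so every retry of a stuck queued action resubmits the
    same stale, non-zero POI.
    """
    if deployment["synced"]:
        return deployment["poi"]
    # Intended: return ZERO_POI
    return last_valid_poi  # faulty fallback: reuses an old POI

def close_stuck_allocations(queue: list, last_valid_poi: str) -> list:
    """Process queued close actions; return the (allocation, poi) pairs submitted."""
    submitted = []
    for action in queue:
        poi = resolve_poi(action["deployment"], last_valid_poi)
        submitted.append((action["allocation"], poi))
    return submitted
```

Under this sketch, each pass over the stuck queue emits the same non-zero POI for every unsynced deployment, which matches the duplicate submissions visible on-chain.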
Thank you @ThomasBonnin. I still struggle to understand the details here. An IE67 error means you either fully sync and provide a valid POI, or you force close with 0x00. What appears to have happened here is a force closure using a non-zero POI. Is that what your automation script was doing? Any chance you can share that code snippet?