The Arbitrators are contacting Indexer address 0x3e1536fc83cd5bed83a521a26034ff3e59c6a7c4 and Fisherman 0x4208ce4ad17f0b52e2cadf466e9cf8286a8696d5 regarding a new dispute filed in the protocol.
Fisherman, could you share the insights or data you gathered that led to you filing the dispute? Please provide all relevant information and records about the open dispute. This will likely include POIs generated for the affected subgraph(s).
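As a reference for the kind of records we are asking for: a POI for the affected deployment can normally be regenerated from graph-node's index-node status API. The sketch below is only illustrative and makes assumptions: the endpoint (commonly port 8030 at /graphql) and the block number/hash values are placeholders, and should be replaced with the epoch start block each allocation was closed against.

```python
import requests

# Placeholders: the index-node status API of your graph-node (commonly port 8030),
# the disputed deployment, the epoch start block the allocation was closed against,
# and the indexer address being checked.
STATUS_ENDPOINT = "http://localhost:8030/graphql"
DEPLOYMENT = "Qmc2hgE7f9otZvD2xp1SuzatKwXKFgrr9rRp8zW85EoFGM"
BLOCK_NUMBER = 316708444                   # placeholder epoch start block
BLOCK_HASH = "0x<epoch-start-block-hash>"  # placeholder hash of that block
INDEXER = "0x3e1536fc83cd5bed83a521a26034ff3e59c6a7c4"

# Build the proofOfIndexing query with the values above inlined.
query = f"""{{
  proofOfIndexing(
    subgraph: "{DEPLOYMENT}",
    blockNumber: {BLOCK_NUMBER},
    blockHash: "{BLOCK_HASH}",
    indexer: "{INDEXER}"
  )
}}"""

resp = requests.post(STATUS_ENDPOINT, json={"query": query}, timeout=30)
print(resp.json())  # data.proofOfIndexing holds the POI, or null if it cannot be produced
```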
About the Procedure
The Arbitration Charter regulates the arbitration process. You can find it in Radicle project ID rad:git:hnrkrhnth6afcc6mnmtokbp4h9575fgrhzbay or at GIP-0009: Arbitration Charter.
I am disputing differential-graph.eth (indexer 0x3e1536fc83cd5bed83a521a26034ff3e59c6a7c4) for repeatedly closing allocations on the subgraph Qmc2hgE7f9otZvD2xp1SuzatKwXKFgrr9rRp8zW85EoFGM with the same POI 0xcb8a79bbd1b4d43dcc41c6418fbf8174eb0181f42da2bdf0c2615c69a5545b0d. This has occurred a total of 5 times.
It is also worth noting that prior to this POI, another POI, 0x06e3c15dfe32bf23b6e38955c90c79ee36153916856b2faf002d4ab9ffaa9924, was itself reused a total of 8 times.
A review of this subgraph's full allocation history shows that not a single other indexer has submitted a duplicate POI. This strongly suggests that the subgraph is healthy and has no history of deterministic errors.
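For transparency, the check can be reproduced against the network subgraph. This is a minimal sketch only: the endpoint URL is a placeholder, and the field names assume the public graph-network subgraph schema (closed allocations exposing a poi field), so adjust to whichever endpoint and schema version you normally query.

```python
import requests
from collections import Counter

# Placeholder endpoint: substitute the Graph Network (Arbitrum One) subgraph
# endpoint you normally query.
NETWORK_SUBGRAPH = "https://example.com/graph-network-arbitrum"
DEPLOYMENT = "Qmc2hgE7f9otZvD2xp1SuzatKwXKFgrr9rRp8zW85EoFGM"

# Field names assume the public graph-network subgraph schema.
query = """
query ($deployment: String!) {
  allocations(
    first: 1000
    orderBy: closedAtEpoch
    where: { subgraphDeployment_: { ipfsHash: $deployment } }
  ) {
    indexer { id }
    poi
    closedAtEpoch
  }
}
"""

resp = requests.post(
    NETWORK_SUBGRAPH,
    json={"query": query, "variables": {"deployment": DEPLOYMENT}},
    timeout=30,
)
allocations = resp.json()["data"]["allocations"]

# Count how often each indexer reused the same POI when closing allocations.
repeats = Counter((a["indexer"]["id"], a["poi"]) for a in allocations if a["poi"])
for (indexer, poi), n in sorted(repeats.items(), key=lambda kv: -kv[1]):
    if n > 1:
        print(f"{indexer} closed {n} allocations with POI {poi}")
```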
Hi,
I am attaching a screenshot of the Grafana dashboard. As I replied in Discord, on our indexer this subgraph failed with a deterministic error. As far as I know, the indexer-agent does not generate a POI unless the failure is deterministic. Let me know if any other clarification is needed.
Could you please provide more detailed context regarding the deterministic error? Information such as the block_hash, the errored block, the error message, and the handler would be helpful.
Your Graph Node version, Indexer-Agent version, and Indexer-Agent logs would also be very useful.
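For convenience, most of the error details can be read from graph-node's index-node status API. This is a minimal sketch, assuming the usual port 8030 endpoint and the indexingStatuses fields exposed by recent graph-node releases:

```python
import requests

# Placeholder: your graph-node index-node status API endpoint (commonly port 8030).
STATUS_ENDPOINT = "http://localhost:8030/graphql"

# Asks for the health and fatal-error details of the disputed deployment.
query = """
{
  indexingStatuses(subgraphs: ["Qmc2hgE7f9otZvD2xp1SuzatKwXKFgrr9rRp8zW85EoFGM"]) {
    subgraph
    health
    synced
    fatalError {
      message
      deterministic
      handler
      block { number hash }
    }
    chains { network latestBlock { number } chainHeadBlock { number } }
  }
}
"""

resp = requests.post(STATUS_ENDPOINT, json={"query": query}, timeout=30)
for status in resp.json()["data"]["indexingStatuses"]:
    print(status["health"], status["fatalError"])
```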
Thanks @Inspector_POI, that information would certainly be useful context here.
I would also like to note the following:
The subgraph in question appears to be failing with a deterministic error, or at least none of the current indexers report that they can sync it fully. At the time the dispute was originally raised, the subgraph appeared healthy.
There are currently three indexers indexing the subgraph:
StreamingFast and Ellipfra both report the same repeating POI since block 316708444 (epoch 839), likely indicating a deterministic failure.
The disputed indexer, however, appears to have diverged from the other two earlier, at block height 250504233 (epoch 647).
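If both indexers are willing to run a cross-check, the first divergent block can be narrowed down by comparing POIs at intermediate heights. The sketch below is only illustrative: all endpoints are placeholders, it assumes each party can query its own index-node status API (commonly port 8030) plus any standard Arbitrum One JSON-RPC endpoint, and in practice the parties would exchange the POIs rather than expose their nodes to each other.

```python
import requests

# All endpoints are placeholders: each party queries its own index-node status API
# (commonly port 8030) and any Arbitrum One JSON-RPC endpoint.
RPC = "https://example-arbitrum-one-rpc"
NODE_A = "http://indexer-a:8030/graphql"   # e.g. the disputed indexer's node
NODE_B = "http://indexer-b:8030/graphql"   # e.g. a reference indexer's node
DEPLOYMENT = "Qmc2hgE7f9otZvD2xp1SuzatKwXKFgrr9rRp8zW85EoFGM"


def block_hash(number: int) -> str:
    """Canonical block hash for a block number via standard eth_getBlockByNumber."""
    resp = requests.post(RPC, json={
        "jsonrpc": "2.0", "id": 1,
        "method": "eth_getBlockByNumber",
        "params": [hex(number), False],
    }, timeout=30)
    return resp.json()["result"]["hash"]


def poi(endpoint: str, number: int, hash_: str) -> str | None:
    """One node's POI for the deployment at the given block."""
    query = f"""{{
      proofOfIndexing(
        subgraph: "{DEPLOYMENT}",
        blockNumber: {number},
        blockHash: "{hash_}"
      )
    }}"""
    resp = requests.post(endpoint, json={"query": query}, timeout=30)
    return resp.json()["data"]["proofOfIndexing"]


def first_divergent_block(lo: int, hi: int) -> int:
    """Binary-search the earliest block at which the two nodes' POIs differ.
    Assumes the POIs agree at `lo` and differ at `hi`."""
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        h = block_hash(mid)
        if poi(NODE_A, mid, h) == poi(NODE_B, mid, h):
            lo = mid   # still in agreement: divergence is after mid
        else:
            hi = mid   # already diverged: divergence is at or before mid
    return hi


# Example range: between an agreed-upon earlier block and epoch 839's block.
print(first_divergent_block(250_000_000, 316_708_444))
```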
Hi,
I am attaching the Grafana table with the subgraph status. The subgraph in question is on the first line. The start and last hashes are not clearly visible, so I am copying them here:
Start Hash: 6dc645b7ce22e1b635d5ebf3a8caadc617b5fca4830f644909910ab0275fd893
Last Hash: d34c86d62fe21fec24fe31137e4fd51bce5c992acb7c0c596e74c4f436d98c0a
Could you please tell me how to get the other info?
The graph-node version is 0.35.1, while the indexer-agent is currently 0.22.0. How can I get the other two? I can print some lines from the Docker logs. Is that what you want?
Initially, I planned to inquire about your RPC provider details (e.g., versions). However, you mentioned using an outdated Graph Node version, 0.35.1, released 9 months ago. The latest version is 0.37.0, released 3 weeks ago, and several indexers still use 0.36.1, released 2 months ago. Notably, 0.36.0 is unstable, which makes 0.35.1 effectively two versions behind.
It’s possible that a PoI or Attestation provided by an Indexer is invalid due to the Indexer running an outdated version of the protocol software. As described in GIP-0008, protocol upgrades of the Indexer software and Subgraph API will specify a grace period during which the previous official version of the Indexer software may still be run. For disputes involving a PoI or Attestation that is only correct with respect to the previous official version of the Indexer software, the Arbitrator must settle any such dispute as a Draw.
I have approached this dispute with neutrality, aiming to gather and analyze all relevant facts to determine whether this issue stems from a software malfunction or an operational oversight by the indexer. I leave this to the Arbitration Team to decide.
Hi,
We tried to upgrade to version 0.36.0 of graph-node when it came out, but this broke the database. With the downgrade script we managed to recover it, but further recent attempts to run newer versions of graph-node were not successful.
Since we were already planning to move to a bigger server, we accelerated that process, and on the new server we are already syncing subgraphs with the latest version of graph-node. We have not completed the transition yet, partly because of this arbitration.
Could you provide additional details or screenshots from the Deterministic Error section of your Grafana dashboard related to this subgraph? You may click on it to expand.
Could you specify the details of your RPC provider, such as the version/type in use, for the Arbitrum One chain? (A quick way to check this is sketched after these questions.)
Graph Node version 0.36.1 was released on January 28, 2025, and version 0.37.0 was released approximately three weeks ago. Are you unable to upgrade to either of these versions? Many indexers are now running at least version 0.36.1 without having gone through 0.36.0.
According to your allocation history for this subgraph, you closed 8 allocations with the recurring POI 0x06e3c15dfe32bf23b6e38955c90c79ee36153916856b2faf002d4ab9ffaa9924 from September to October 2024. Additionally, the POI 0xcb8a79bbd1b4d43dcc41c6418fbf8174eb0181f42da2bdf0c2615c69a5545b0d was submitted 5 times from January to March 2025. Were you aware that this subgraph was experiencing deterministic errors when you closed these allocations?
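Regarding the RPC question above, a quick way to report the client in use is the standard web3_clientVersion JSON-RPC call; the endpoint below is a placeholder for whichever Arbitrum One RPC your graph-node is actually configured with.

```python
import requests

# Placeholder: the Arbitrum One RPC endpoint configured in your graph-node.
RPC = "https://example-arbitrum-one-rpc"

resp = requests.post(RPC, json={
    "jsonrpc": "2.0",
    "id": 1,
    "method": "web3_clientVersion",  # standard JSON-RPC method reporting client name/version
    "params": [],
}, timeout=30)
print(resp.json()["result"])
```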