Request for information about disputes #GDR-24

The Arbitrators are contacting Indexer address 0xdecba5154aab37ae5e381a19f804f3af4d1bcbb5 (decisionbasis.eth) and fisherman 0x4208ce4ad17f0b52e2cadf466e9cf8286a8696d5 (@Inspector_POI) about a new Dispute filed in the protocol.

Dispute ID: 0x488aac24d17c6b89f17db3fa4db97c16205efccdd1cfd80a791a7e396386b011

Subgraph deployment ID: Qmb27RY3RqP98UMKbTgScf6F7hhokfMuS9fV7VAtPiZHwF

To the fisherman, could you share the data you gathered that led you to file the dispute? Please provide all relevant information and records about the open dispute. This will likely include POIs generated for the affected subgraph(s).

Purpose of the Requirement

This requirement is related to the following dispute:

Dispute (0x488aac24d17c6b89f17db3fa4db97c16205efccdd1cfd80a791a7e396386b011)
├─ Type: Indexing
├─ Status: Undecided (0.74 days ago) [24 epochs left to resolve]
├─ Indexer: 0xdecba5154aab37ae5e381a19f804f3af4d1bcbb5
├─ Fisherman: 0x4208ce4ad17f0b52e2cadf466e9cf8286a8696d5
├─ SubgraphDeployment
│  └─ id: 0xbc6819c290cf2e48340e63f6dec80e9e8e9d579f2b812ce3c81b341bd23c8c1a (Qmb27RY3RqP98UMKbTgScf6F7hhokfMuS9fV7VAtPiZHwF)
├─ Economics
│  ├─ indexerSlashableStake: 126054.068493163038042542 GRT
│  └─ indexingRewardsCollected: 31.439073893523433108 GRT
├─ Allocation
│  ├─ id: 0x57da6d449d27bba187b8a93798c9a128545f8119
│  ├─ createdAtEpoch: 714
│  ├─ createdAtBlock: 0x75f1f52ddfdf9c54d199124bc92cf02eb1d77c6056937fba473d1a5dd222d808
│  ├─ closedAtEpoch
│  │  ├─ id: 718
│  │  └─ startBlock: 0xc209e05949448e4cecc079d81bb2ed0564e84de1484e2cca606931b9073c976f (#21195137)
│  └─ closedAtBlock: 0xbe50c670aeb5d5f716bd3af89e75a05cc6114f5f9ef53f0abef0e0be88d951f4 (#21198881)
└─ POI
   ├─ submitted: 0x5d087cdb84c09ab8ba790f947e6ce0735109a505ba692a58673981a5cf6ada4f
   ├─ match: Not-Found
   ├─ previousEpochPOI: Not-Found
   └─ lastEpochPOI: Not-Found
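
For reference, the requested POI for the disputed allocation can typically be regenerated from the indexer's own graph-node through the index-node status API, using the start block of the closing epoch shown above (epoch 718, block #21195137). The sketch below is a minimal example of that query; the endpoint URL and port are assumptions about a typical local setup, and the exact API shape can vary slightly between graph-node versions.

```typescript
// Minimal sketch: regenerate the POI for the disputed allocation via graph-node's
// index-node status API. INDEX_NODE_URL is an assumed local endpoint (graph-node
// commonly serves this API on port 8030 at /graphql); adjust for your deployment.
const INDEX_NODE_URL = "http://localhost:8030/graphql";

const query = `query poi($subgraph: String!, $blockNumber: Int!, $blockHash: Bytes!, $indexer: Bytes) {
  proofOfIndexing(subgraph: $subgraph, blockNumber: $blockNumber, blockHash: $blockHash, indexer: $indexer)
}`;

const variables = {
  // Values taken from the dispute details above.
  subgraph: "Qmb27RY3RqP98UMKbTgScf6F7hhokfMuS9fV7VAtPiZHwF",
  blockNumber: 21195137, // start block of epoch 718, the allocation's closing epoch
  blockHash: "0xc209e05949448e4cecc079d81bb2ed0564e84de1484e2cca606931b9073c976f",
  indexer: "0xdecba5154aab37ae5e381a19f804f3af4d1bcbb5",
};

const res = await fetch(INDEX_NODE_URL, {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({ query, variables }),
});
console.log(JSON.stringify(await res.json(), null, 2)); // should contain the POI bytes
```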

About the Procedure

The Arbitration Charter regulates the arbitration process. You can find it in Radicle project ID rad:git:hnrkrhnth6afcc6mnmtokbp4h9575fgrhzbay or at GIP-0009: Arbitration Charter.


Please use this forum for all communications.

Arbitration Team.

gm all

thanks @indexer_payne for dming and pinging on that one

it’s been some years since i last played with bogus poi and signal pumps (properly slashed there haha)

for now all subs that don’t sync i either kill with 0x0, or leave them hanging until killed by someone else

so, as i leave my allocation handling to indexer-agent, i hope this one will be resolved in my favor. Why would someone slash allocation that got 30grt rewards, hoping to get 120k grt from it? clearly predatory behavior, count me in for his 10k deposit. i recently failed a rescue for drained mnemonic, would be happy to compensate the victim there

btw, some proofs from my grafana, please observe “deterministic” flag

i’ll be resyncing that subgraph hoping to navigate around the issue


Dear Arbitration Team,

I am disputing the indexer decisionbasis.eth (0xdecba5154aab37ae5e381a19f804f3af4d1bcbb5) for repeatedly closing the subgraph Qmb27RY3RqP98UMKbTgScf6F7hhokfMuS9fV7VAtPiZHwF with the same POI (0x5d087cdb84c09ab8ba790f947e6ce0735109a505ba692a58673981a5cf6ada4f). This has occurred a total of 8 times (technically 7 at the time of filing; the most recent closure happened after this dispute was raised, and I believe the disputed indexer had not yet noticed the dispute when making the 8th closure).

Upon reviewing this subgraph’s full allocation history, not a single other indexer has submitted a duplicate POI. This strongly suggests that the subgraph is healthy and does not have a history of deterministic errors.
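
For anyone who wants to reproduce this allocation-history review, the closed allocations and their POIs for the deployment can be pulled from the network subgraph. Below is a minimal sketch; NETWORK_SUBGRAPH_URL is a placeholder for whichever network-subgraph endpoint you use, and the field names follow the graph-network subgraph schema as I understand it, so adjust if your schema version differs.

```typescript
// Sketch: list closed allocations (and their POIs) for the disputed deployment
// from the network subgraph, then flag POIs repeated by the same indexer.
const NETWORK_SUBGRAPH_URL = "https://<your-network-subgraph-endpoint>"; // placeholder

const query = `{
  allocations(
    first: 1000
    orderBy: closedAtEpoch
    orderDirection: desc
    where: {
      # Deployment entity id (bytes32) from the dispute details above
      subgraphDeployment: "0xbc6819c290cf2e48340e63f6dec80e9e8e9d579f2b812ce3c81b341bd23c8c1a"
      status: Closed
    }
  ) {
    id
    indexer { id }
    createdAtEpoch
    closedAtEpoch
    poi
  }
}`;

const res = await fetch(NETWORK_SUBGRAPH_URL, {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({ query }),
});
const { data } = await res.json();

// Count how often each (indexer, POI) pair occurs to spot duplicate submissions.
const counts = new Map<string, number>();
for (const a of data.allocations) {
  const key = `${a.indexer.id}:${a.poi}`;
  counts.set(key, (counts.get(key) ?? 0) + 1);
}
console.log([...counts.entries()].filter(([, n]) => n > 1));
```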


According to the screenshots, The Graph Explorer shows that the disputed indexer is the only one failing to sync, while all other indexers are synced and up to chainhead.

(Sorted by latest to oldest)
Allocation ID: 0xc3ad07fbabc57864335b5335881bba6d87d17d65
PUBLIC POI : 0x8c42bb96e744715430a2ae6114375de501e46773b5b46184fbc51e02b06bdc9e
POI : 0x5d087cdb84c09ab8ba790f947e6ce0735109a505ba692a58673981a5cf6ada4f
Closed Epoch : 750
Closed Start Block : 21425537

Allocation ID: 0x57da6d449d27bba187b8a93798c9a128545f8119
PUBLIC POI : 0x8c42bb96e744715430a2ae6114375de501e46773b5b46184fbc51e02b06bdc9e
POI : 0x5d087cdb84c09ab8ba790f947e6ce0735109a505ba692a58673981a5cf6ada4f
Closed Epoch : 718
Closed Start Block : 21195137

Allocation ID: 0x4e257818af72eb2bf9867fbecd90ec579c5c73a7
PUBLIC POI : 0x8c42bb96e744715430a2ae6114375de501e46773b5b46184fbc51e02b06bdc9e
POI : 0x5d087cdb84c09ab8ba790f947e6ce0735109a505ba692a58673981a5cf6ada4f
Closed Epoch : 714
Closed Start Block : 21166337

Allocation ID: 0x6e0036ea6495c9cb99339652d43529629e68babd
PUBLIC POI : 0x8c42bb96e744715430a2ae6114375de501e46773b5b46184fbc51e02b06bdc9e
POI : 0x5d087cdb84c09ab8ba790f947e6ce0735109a505ba692a58673981a5cf6ada4f
Closed Epoch : 706
Closed Start Block : 21108737

Allocation ID: 0x7fcb4f971c992c402c04a500a94e8dccec31f23e
PUBLIC POI : 0x8c42bb96e744715430a2ae6114375de501e46773b5b46184fbc51e02b06bdc9e
POI : 0x5d087cdb84c09ab8ba790f947e6ce0735109a505ba692a58673981a5cf6ada4f
Closed Epoch : 694
Closed Start Block : 21022337

Allocation ID: 0x8a5f73352d1b8ed28334400fa47d252326e26375
PUBLIC POI : 0x8c42bb96e744715430a2ae6114375de501e46773b5b46184fbc51e02b06bdc9e
POI : 0x75a1c9d26630732e5b29ed9c740ab11f8c71d869a18959f6053eda5829df0bc9 (Unique)
Closed Epoch : 665
Closed Start Block : 20813537

Allocation ID: 0x3f54b1bdb610d109611335a70202ada75217cda8
PUBLIC POI : 0x8c42bb96e744715430a2ae6114375de501e46773b5b46184fbc51e02b06bdc9e
POI : 0x5d087cdb84c09ab8ba790f947e6ce0735109a505ba692a58673981a5cf6ada4f
Closed Epoch : 642
Closed Start Block : 20647937

Allocation ID: 0x167d6df8dddf8e554bf7f11a95fa5954dc2fc4db
PUBLIC POI : 0x8c42bb96e744715430a2ae6114375de501e46773b5b46184fbc51e02b06bdc9e
POI : 0x5d087cdb84c09ab8ba790f947e6ce0735109a505ba692a58673981a5cf6ada4f
Closed Epoch : 631
Closed Start Block : 20568737

Allocation ID: 0x51b627bf85d9635e7ba8f453dc41b5693393a562
PUBLIC POI : 0x8c42bb96e744715430a2ae6114375de501e46773b5b46184fbc51e02b06bdc9e
POI : 0x5d087cdb84c09ab8ba790f947e6ce0735109a505ba692a58673981a5cf6ada4f
Closed Epoch : 624
Closed Start Block : 20518337

Allocation ID: 0x8a33b85335972f05f2662512089db042fecba2a6
PUBLIC POI : 0x8c42bb96e744715430a2ae6114375de501e46773b5b46184fbc51e02b06bdc9e
POI : 0x096079a657901029aa9217e9d5060244559cf2564b3744bcd53c8fe0ae3b0a87 (Unique)
Closed Epoch : 601
Closed Start Block : 20352737

Allocation ID: 0xc5d75a35a81e710f22a779a3ccf4fa713c25b873
PUBLIC POI : 0x8c42bb96e744715430a2ae6114375de501e46773b5b46184fbc51e02b06bdc9e
POI : 0x2c1ce2de884183ecb3bde8481f5715d45f8feeb2754dfaa71bbd9743dda5da31 (Unique)
Closed Epoch : 599
Closed Start Block : 20338337

Upon further investigation, PUBLIC POIs that do not match those of other indexers were found for every allocation the disputed indexer closed on this subgraph, checked at the startBlock of the closedEpoch on Ethereum Mainnet. This includes the additional 3 allocations with unique POIs. See below:


Based on the disputed indexer’s screenshot, the error occurs at block #20100104, so the disputed indexer would be expected to have correct data, matching all other indexers, prior to this block. The attachment below shows the latter.


It is plausible that the disputed indexer failed to sync and halted at block #20100104, since all of the disputed indexer’s PUBLIC POIs after this block remain identical up to chainhead.

By definition, a deterministic error results in the subgraph halting at the same block; if the error persists, indexer-agent would provide the same POI and the PUBLIC POI would remain consistent even up to chainhead.
According to the allocations list, however, there is a history of 3 UNIQUE POIs, yet still with the same PUBLIC POIs. The expected behaviour from indexer-agent for a deterministic error would be the same POI for all 11 allocations.
Disputed indexer, have you previously attempted to rewind this subgraph or resync it from the beginning?

However, a deeper analysis reveals that the divergence began at block #12742160. Before this block, the disputed indexer’s PUBLIC POI matched that of all other indexers from block #1 to block #12742159.
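
For completeness, this is the kind of check behind that divergence claim: comparing POIs computed with the zero address as the indexer argument (commonly treated as a "public" POI) at a given block, between two graph-node status endpoints. The sketch below makes several assumptions: the endpoint URLs are placeholders, block hashes are fetched from an Ethereum JSON-RPC node, and whether the zero-address POI reproduces the exact PUBLIC POI values quoted above depends on how those were generated.

```typescript
// Sketch: compare zero-address ("public") POIs at a given block between two
// graph-node index-node endpoints, e.g. around the reported divergence block.
// All three URLs below are placeholders for your own infrastructure.
const ETH_RPC = "https://<ethereum-mainnet-rpc>";
const STATUS_A = "http://<indexer-a>:8030/graphql";
const STATUS_B = "http://<indexer-b>:8030/graphql";
const DEPLOYMENT = "Qmb27RY3RqP98UMKbTgScf6F7hhokfMuS9fV7VAtPiZHwF";
const ZERO_INDEXER = "0x0000000000000000000000000000000000000000";

async function blockHash(n: number): Promise<string> {
  // Fetch the canonical block hash for block number n via standard JSON-RPC.
  const res = await fetch(ETH_RPC, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0", id: 1,
      method: "eth_getBlockByNumber",
      params: ["0x" + n.toString(16), false],
    }),
  });
  return (await res.json()).result.hash;
}

async function publicPoi(statusUrl: string, n: number, hash: string): Promise<string | null> {
  const query = `query($s: String!, $n: Int!, $h: Bytes!, $i: Bytes) {
    proofOfIndexing(subgraph: $s, blockNumber: $n, blockHash: $h, indexer: $i)
  }`;
  const res = await fetch(statusUrl, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ query, variables: { s: DEPLOYMENT, n, h: hash, i: ZERO_INDEXER } }),
  });
  return (await res.json()).data?.proofOfIndexing ?? null;
}

// Check the blocks around the reported divergence point (#12742159 vs #12742160).
for (const n of [12742159, 12742160]) {
  const h = await blockHash(n);
  console.log(n, await publicPoi(STATUS_A, n, h), await publicPoi(STATUS_B, n, h));
}
```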

Regarding the disputed indexer’s claim:
Why would someone slash allocation that got 30grt rewards, hoping to get 120k grt from it?
The slash is not based on a single allocation but on all of the above cases of duplicate POIs, combined with PUBLIC POI data that does not match any other indexer. The slashing amount is determined by the network design and is at the discretion of the arbitration team.
I acknowledge that it may seem excessive when all the indexing rewards you have claimed for this subgraph are considered. However, the arbitration team should also take into account that this subgraph is ranked #32 in terms of query fees.

Clearly predatory behavior
You are correct that the situation is deemed predatory. I am in the process of identifying incorrect POIs, including those resulting from deterministic errors. While I trust your expertise as an experienced indexer, despite your past slashing history, I hope that you, the arbitration team, or others can provide additional context on how this subgraph could result in a “deterministic error.”
If such context is provided and proven, I am willing to accept the decision, whether it is a draw or a loss. However, the single piece of evidence presented by the disputed indexer does not explain why only 1 out of 13 indexers encountered issues, nor does it address why the PUBLIC POIs are all identical while the submitted POIs include 3 unique values alongside 8 duplicates.

TL;DR:

  • 11 allocations were submitted, all after the disputed indexer’s claimed error block (#20100104), with 8 duplicate POIs and 3 unique POIs, but still with the same PUBLIC POI (0x8c42bb96e744715430a2ae6114375de501e46773b5b46184fbc51e02b06bdc9e), indicating that the disputed indexer is stuck at the same block for some reason.
  • Indexer-agent behavior: If a deterministic error is encountered, the same POI will be submitted repeatedly, making the presence of 3 unique POIs inconsistent with this claim. Moreover, the claimed error at block #20100104 occurred well before the first allocation by another indexer.
  • No duplicate POIs were submitted by any other indexers.
  • All other indexers show matching data, except for the disputed indexer.

Thanks @Inspector_POI for the investigation. To summarize your findings I’ve put together the following table:

| Allocation ID | Created At Epoch | Closed At Epoch | POI |
| --- | --- | --- | --- |
| 0xc5d75a35a81e710f22a779a3ccf4fa713c25b873 | 562 | 599 | 0x2c1ce2de884183ecb3bde8481f5715d45f8feeb2754dfaa71bbd9743dda5da31 |
| 0x8a33b85335972f05f2662512089db042fecba2a6 | 599 | 601 | 0x096079a657901029aa9217e9d5060244559cf2564b3744bcd53c8fe0ae3b0a87 |
| 0x51b627bf85d9635e7ba8f453dc41b5693393a562 | 608 | 624 | 0x5d087cdb84c09ab8ba790f947e6ce0735109a505ba692a58673981a5cf6ada4f |
| 0x167d6df8dddf8e554bf7f11a95fa5954dc2fc4db | 624 | 631 | 0x5d087cdb84c09ab8ba790f947e6ce0735109a505ba692a58673981a5cf6ada4f |
| 0x3f54b1bdb610d109611335a70202ada75217cda8 | 631 | 642 | 0x5d087cdb84c09ab8ba790f947e6ce0735109a505ba692a58673981a5cf6ada4f |
| 0x8a5f73352d1b8ed28334400fa47d252326e26375 | 642 | 665 | 0x75a1c9d26630732e5b29ed9c740ab11f8c71d869a18959f6053eda5829df0bc9 |
| 0x7fcb4f971c992c402c04a500a94e8dccec31f23e | 665 | 694 | 0x5d087cdb84c09ab8ba790f947e6ce0735109a505ba692a58673981a5cf6ada4f |
| 0x6e0036ea6495c9cb99339652d43529629e68babd | 702 | 706 | 0x5d087cdb84c09ab8ba790f947e6ce0735109a505ba692a58673981a5cf6ada4f |
| 0x4e257818af72eb2bf9867fbecd90ec579c5c73a7 | 707 | 714 | 0x5d087cdb84c09ab8ba790f947e6ce0735109a505ba692a58673981a5cf6ada4f |
| 0x57da6d449d27bba187b8a93798c9a128545f8119 | 714 | 718 | 0x5d087cdb84c09ab8ba790f947e6ce0735109a505ba692a58673981a5cf6ada4f |
| 0xc3ad07fbabc57864335b5335881bba6d87d17d65 | 742 | 750 | 0x5d087cdb84c09ab8ba790f947e6ce0735109a505ba692a58673981a5cf6ada4f |
| 0x17a59a0b6b80b41e283e1447e8468ee0b60910aa | 750 | - | - |

@inflex, the error log you shared shows the subgraph deployment being flagged with a deterministic error at block height 20100104 (block hash 0x260e18eef4ecb61a0b7bb7e796fc70c55664a04112ed094a8e6388d2f952de87), which is approximately epoch 565, i.e. somewhere during the first allocation in the table above.
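
As a sanity check on that epoch estimate, the epoch length can be derived from the closed-epoch start blocks quoted earlier in this thread; a quick back-of-the-envelope calculation:

```typescript
// Epoch length derived from two closed-epoch start blocks quoted above:
// epoch 714 starts at #21166337 and epoch 718 at #21195137.
const EPOCH_LEN = (21195137 - 21166337) / (718 - 714); // = 7200 blocks per epoch

// The deterministic error was flagged at block #20100104; estimate its epoch
// by counting back from epoch 718's start block.
const estEpoch = 718 - Math.ceil((21195137 - 20100104) / EPOCH_LEN);
console.log(EPOCH_LEN, estEpoch); // 7200, ~565
```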

The two points of concern at the moment we would need to answer are:

  • Why are no other indexers hitting this determinism issue? Can you share the specific versions of graph-node and indexer-agent you are using? Did you take any specific actions on this subgraph that could help explain this behavior, such as pruning, rewinding, etc.? (A status-query sketch that may help capture the current error state follows this list.)
  • As @Inspector_POI mentioned, since the deterministic error you shared, there have been 3 unique POIs presented, even switching away from 0x5d087cdb84c09ab8ba790f947e6ce0735109a505ba692a58673981a5cf6ada4f and back to it again. This is not normal behavior for indexer-agent when a deterministic error is found. Are you positive all the allocations listed above were closed using indexer-agent? Can you provide logs for this?
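
To help document the error state graph-node currently records for this deployment (including the deterministic flag and the block it halted at), a minimal status query sketch follows; the endpoint URL is an assumption about a typical local setup.

```typescript
// Sketch: fetch graph-node's recorded indexing status (including any deterministic
// fatal error) for the disputed deployment. The URL assumes the default index-node
// status API on port 8030.
const INDEX_NODE_URL = "http://localhost:8030/graphql";

const query = `{
  indexingStatuses(subgraphs: ["Qmb27RY3RqP98UMKbTgScf6F7hhokfMuS9fV7VAtPiZHwF"]) {
    subgraph
    health
    synced
    fatalError { message handler deterministic block { number hash } }
    chains { network latestBlock { number } chainHeadBlock { number } }
  }
}`;

const res = await fetch(INDEX_NODE_URL, {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({ query }),
});
console.log(JSON.stringify(await res.json(), null, 2));
```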

Thanks.

I’m not making excuses for anyone, but there is definitely something wrong with the agent and indexer. I’ve encountered a similar problem where some indexers could successfully synchronize a subgraph while I always got a deterministic error. I’ve already started a thread about it on Discord, trying to understand what’s going on.

Here is an example.
Subgraph: Orbs TWAP - BSC | Graph Explorer
My logs: https://gist.githubusercontent.com/0x19dG87/687427f4cad3e093321e8383f1d7c8d3/raw/13feab77a1c9ebd9a709bf79434663a0da869ac1/QmPXBSGJV4ptejsATQw4udeF2QqJ5wRWkx995FNjexMFWy.log
I tried restarting, rewinding, and even, recently, dropping it and syncing it again from scratch, but I still got the error:

transaction a65237ddb7cc9cc26f30c898369a91bc0d372b9ca52cfdd5a6cb7fbe75cc8117: error while executing at wasm backtrace:
    0: 0x3b56 - <unknown>!~lib/@graphprotocol/graph-ts/chain/ethereum/ethereum.SmartContract#call
    1: 0x6b2c - <unknown>!src/twap/handleOrderFilled: Mapping aborted at ~lib/@graphprotocol/graph-ts/chain/ethereum.ts, line 681, column 7, with message: Call reverted, probably because an `assert` or `require` in the contract failed, consider using `try_order` to handle this in the mapping.

Moreover, the agent generated 2 different POIs (where there should be only one if the synchronization failed). You can see it in the attached screenshot. All of this was with agent v0.21.11 and indexer node v0.36.0. I use RPC and Firehose. I open and close allocations manually, without any automation.

And this is not the only example. Perhaps someone will be able to figure it out and explain what the reason is.