The Arbitrators are contacting Indexer address 0x0fd8fd1dc8162148cb9413062fe6c6b144335dbf and fisherman 0xbace05744f1d075ba6bb82ebf561c1c3915f5cd3 for a new dispute filed in the protocol.
Fisherman, could you share the insights or data you gathered that led to you filing the dispute? Please provide all relevant information and records about the open dispute. This will likely include POIs generated for the affected subgraph(s).
About the Procedure
The Arbitration Charter regulates the arbitration process. You can find it in Radicle project ID rad:git:hnrkrhnth6afcc6mnmtokbp4h9575fgrhzbay or at GIP-0009: Arbitration Charter.
This one's about Protofire using a manual POI on the Quickswap subgraph that doesn't line up with the closed epoch; they used the latest POI from their synced block instead, as mentioned in Discord.
There are quite a few indexers synced to chainhead, so the subgraph looks healthy from our side. Unless Protofire can show it’s actually a deterministic issue and not just something on their end, it seems off.
The Arbitration team has contacted Protofire requesting a prompt response here.
@alexbadaoui the linked Discord channel is private. If there is anything relevant to this dispute please feel free to copy the information into this forum thread.
The info’s shared in the subgraphs-health channel, but I’ll just drop it here too:
Brian L. | Protofire — 5/27/25, 1:32 PM
We couldn't fix it guys, we tried rewinding/reindex the subgraph many times and also we tried using polygon3 and also polygon2 but it was the same unfortunately, we used the latest valid POI to close this allocation
Adding a bit more: you can check Protofire's operator wallet 0x7a361db89c9419699def3349b5f6f1cba294267d. If you trace tx ID 0xaa175ca9747bf8717e916acc132db73608df870dfc54b1d18b95fea4ef6e692a, they used the closeAllocation contract method to close this subgraph, as well as some others with a 0 POI. For healthy ones they used the multicall method that comes from the agent. So it looks like they manually called closeAllocation with a manual POI instead of letting the agent do it; a sketch of how to reproduce this check follows below.
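For anyone who wants to reproduce that check, here is a minimal sketch, assuming ethers v6, a placeholder RPC endpoint, and the public closeAllocation(address,bytes32) and multicall(bytes[]) fragments of The Graph's Staking contract; it classifies the transaction by its function selector:

import { JsonRpcProvider, Interface, dataSlice } from "ethers";

// Function fragments of The Graph's Staking contract that matter here.
const staking = new Interface([
  "function closeAllocation(address allocationID, bytes32 poi)",
  "function multicall(bytes[] data)",
]);
const CLOSE_SELECTOR = staking.getFunction("closeAllocation")!.selector;
const MULTICALL_SELECTOR = staking.getFunction("multicall")!.selector;

async function classifyClose(txHash: string): Promise<void> {
  // Placeholder: substitute your own archive RPC endpoint.
  const provider = new JsonRpcProvider("https://YOUR-RPC-ENDPOINT");
  const tx = await provider.getTransaction(txHash);
  if (!tx) throw new Error("transaction not found");

  const selector = dataSlice(tx.data, 0, 4);
  if (selector === CLOSE_SELECTOR) {
    // A direct, manual close: decode which allocation and which POI were submitted.
    const [allocationID, poi] = staking.decodeFunctionData("closeAllocation", tx.data);
    console.log(`direct closeAllocation: allocation=${allocationID} poi=${poi}`);
  } else if (selector === MULTICALL_SELECTOR) {
    console.log("multicall: consistent with the agent's batched close path");
  } else {
    console.log(`other method, selector=${selector}`);
  }
}

classifyClose("0xaa175ca9747bf8717e916acc132db73608df870dfc54b1d18b95fea4ef6e692a").catch(console.error);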
In light of the evidence presented by the Fisherman and the lack of response from the disputed Indexer, we have decided to accept the dispute. We would also like to note that the Arbitration team reached out privately to the Indexer on two occasions requesting a prompt response.
Sorry for being late, guys. Here are the details of why we closed this subgraph using the latest valid POI:
Subgraph
QmQEYSGSD8t7jTw4gS2dwC4DLvyZceR9fYQ432Ff1hZpCp
Error
transaction 0588d242a328f85ff6272dce269e515069d284fd42de5f6a9300d0c6b94c10ff: error while executing at wasm backtrace:
    0: 0x5b9c - !~lib/@graphprotocol/graph-ts/chain/ethereum/ethereum.SmartContract#call
    1: 0x8ae9 - !src/mappings/core/handleSwap
Mapping aborted at ~lib/@graphprotocol/graph-ts/chain/ethereum.ts, line 440, column 7, with message: Call reverted, probably because an assert or require in the contract failed, consider using try_totalFeeGrowth1Token to handle this in the mapping.
in handler handleSwap at block #71470303 (54b61a94d1cc00f96e2e22dca8c4bbd55698ae9224d3ce2d212707a36db13a7b)
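As an aside for readers: that message is graph-node pointing at the standard reversion-safe call pattern in mappings. A minimal AssemblyScript sketch of what that would look like, with the caveat that the Pool binding name, the import path, and the handler shape are assumptions rather than the actual Quickswap sources:

import { log } from "@graphprotocol/graph-ts";
// Hypothetical import path; graph codegen produces one such binding per contract.
import { Pool, Swap } from "../generated/templates/Pool/Pool";

export function handleSwap(event: Swap): void {
  let pool = Pool.bind(event.address);
  // A direct pool.totalFeeGrowth1Token() call aborts the deployment
  // deterministically if the eth_call reverts at this block; the try_
  // variant surfaces the revert instead of failing the subgraph.
  let result = pool.try_totalFeeGrowth1Token();
  if (result.reverted) {
    log.warning("totalFeeGrowth1Token reverted at block {}", [
      event.block.number.toString(),
    ]);
    return; // skip this event rather than failing the deployment
  }
  let feeGrowth = result.value;
  // ...continue updating entities with feeGrowth
}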
POI
We proceeded to close this allocation using the latest POI, obtained on 14/05 in epoch 897.
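For reference, this is roughly how a POI for a given block is read back from graph-node's index-node API (served on port 8030 by default). The endpoint, block number, and block hash below are placeholders; which block you query is exactly where "latest POI from the synced block" and "POI lining up with the closed epoch" diverge:

// Placeholder: graph-node's index-node server, not the query port.
const INDEX_NODE = "http://localhost:8030/graphql";

// Ask graph-node for the POI of a deployment at a specific block.
async function fetchPoi(
  deployment: string,
  blockNumber: number,
  blockHash: string,
  indexer: string,
): Promise<string | null> {
  const query = `{
    proofOfIndexing(
      subgraph: "${deployment}",
      blockNumber: ${blockNumber},
      blockHash: "${blockHash}",
      indexer: "${indexer}"
    )
  }`;
  const res = await fetch(INDEX_NODE, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const body = await res.json();
  return body.data?.proofOfIndexing ?? null;
}

fetchPoi(
  "QmQEYSGSD8t7jTw4gS2dwC4DLvyZceR9fYQ432Ff1hZpCp",
  71470302, // placeholder: a block of the relevant epoch, not just the last synced one
  "0x...",  // placeholder block hash
  "0x0fd8fd1dc8162148cb9413062fe6c6b144335dbf",
).then(console.log);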
We spent several days testing and troubleshooting: we tried switching RPC endpoints between different archive nodes, rewinding the subgraph multiple times, and even clearing the Polygon chain cache in Postgres, but nothing worked.
It’s important to note that we weren’t the only ones affected — several other indexers also encountered the same issue and were unable to fix the subgraph.
We had indexed the subgraph for many days without any issues, so it didn’t feel fair to close with a zero POI.
At first, we thought the issue might have been caused by the Polygon upgrade from version 2 to version 3, but despite our efforts, we couldn’t find a working fix.
Every Indexer feels like this when it happens and it does seem unfair at times.
With the way indexing verification works today, that's the price you pay to play the game, and it keeps the system as fair and relatively ungameable as was believed possible when the PoI function was developed. You win most and lose some. The more subgraphs you sync and the more you spread your allocations out, the less impact a single failing subgraph has on your economics.
Given the above, I don’t think the PoI behavior is excusable by a fairness justification. I wanted to comment because Protofire has been contributing to the protocol for longer than nearly any Indexer I know, so they have earned the benefit of doubt on their motivations as far as I am concerned.
That being said, if there is a group of indexers that believe these situations are a great injustice that must be changed, then the answer is not surreptitiously submitting false PoIs - it’s making a bunch of noise to gather support for your cause and proposing changes.
My recommendation to Indexers that worry about this is:
Spread your allocations out as is reasonable for your total stake. There are tools that make this a relatively painless job now.
Don't immediately allocate to something new - check entities and let it sync offchain so you get a feel for its size before you make an allocation decision (see the sketch after this list).
The 28-day mental model we have all leaned on actually makes Indexers lazy - consider a more frequent cadence for refreshing your allocations. We stagger ours as much as we can. Indexer-tools makes that a job an intern can do with some guidance.
Don't assume that nobody will notice your PoIs. I'm sure it happens, but the fishermen are watching.
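On the "check entities" point above: graph-node's status API exposes an entityCount per deployment, so sizing up a subgraph before allocating can be scripted. A rough sketch, assuming a local index-node status endpoint (placeholder) and the indexingStatuses field as I understand that API:

// Placeholder: your graph-node index-node / status endpoint.
const STATUS_ENDPOINT = "http://localhost:8030/graphql";

// Look up size and health for a deployment before deciding to allocate.
async function checkBeforeAllocating(deployment: string): Promise<void> {
  const query = `{
    indexingStatuses(subgraphs: ["${deployment}"]) {
      subgraph
      health
      synced
      entityCount
      chains { chainHeadBlock { number } latestBlock { number } }
    }
  }`;
  const res = await fetch(STATUS_ENDPOINT, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const body = await res.json();
  for (const s of body.data?.indexingStatuses ?? []) {
    console.log(
      `${s.subgraph}: health=${s.health} synced=${s.synced} entities=${s.entityCount}`,
    );
  }
}

checkBeforeAllocating("QmQEYSGSD8t7jTw4gS2dwC4DLvyZceR9fYQ432Ff1hZpCp");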
What would you advise if the subgraph had synced successfully and there were no problems with the allocation, but at some point it failed while you already had an open allocation? That effectively forces you to close with a 0 POI, because some indexers were somehow able to work around the error while others were not. And this despite the fact that we tried every available option: rewind, drop, full resync, changing RPCs, clearing the chain cache, etc. I also have questions about why some indexers were able to sync while others were not, and why the successfully synced indexers are not the subject of a dispute.
And last but not least: the indexing service marked the error as deterministic, so does that mean we should not rely on that flag and should manually check whether other indexers were able to get past the error? And what should we do if we have an automation script that distributes/opens/closes allocations?
In the current system, that's one we close with 0x0, and it's probably the most frustrating type.
We see it on our indexer frequently, and the possible combinations of components across indexers syncing a subgraph can make it complex, or, as you are experiencing, near impossible to figure out why one indexer's stack can sync it and another's cannot.
I imagine that if a person had the skills to look at the specific indexing step that is failing, they could figure out the root cause. I don't have those skills and maybe know two or three people that do.
I don’t understand why you think synced indexers should be disputed - unless you are saying that they all have different PoIs?
All of these issues are crosses we bear in not running one giant web2 stack. They are extremely hard problems to solve.
Ah, I see what you mean - if a single indexer were to do that (manually skipping a block), it results in PoI divergence. Maybe you are suggesting they modify their PoIs in the database to go unseen? Given enough attention from fishermen, evidence of those types of malicious behavior lives in the onchain data as far as I understand it, but is anyone watching closely enough? Questionable.
First of all, I have closed many subgraphs with 0x0, even when I had been indexing for weeks, sometimes months. It's a pain, yes, but better that than a slash. The last one was actually quite recent, after 72 days.
As for closing allocations - from what I have seen, in a lot of cases it doesn't really matter whether the error says deterministic or non-deterministic; there is a good chance more than one indexer gets the same error, even if it's deterministic.
Overall, and I mentioned this before, I am firmly in the camp where, if the software provided (e.g. the agent) allows you to close an allocation (automatically, not with a manual POI added), I consider it non-slashable. Why? Because you used the software as-is and there was no malicious intent, even if something diverges or breaks down on a different block than someone else's (many such cases lately).
But if I had to put in a manual POI, I would think carefully about whether and why before doing it. I would probably at least flag it in Discord as well, to see what others think first. That said, apart from 0x0 POIs, I can't remember the last time I had to enter a manual POI; it was a long time ago.