The arbitrators are contacting indexer address 0xae9bfdf9eeec808f4f3f6f455cb1968445cc6f2f (indexafrica.eth) and Fisherman 0x4208ce4ad17f0b52e2cadf466e9cf8286a8696d5 (@InspectorPOI) about a dispute filed in the protocol.
@InspectorPOI, could you share the data you gathered that led you to file the dispute? Please provide all relevant information and records about the open disputes. This will likely include the POIs generated for the affected subgraph(s).
About the Procedure
The Arbitration Charter regulates the arbitration process. You can find it in the Radicle project with ID rad:git:hnrkrhnth6afcc6mnmtokbp4h9575fgrhzbay or at GIP-0009: Arbitration Charter on HackMD.
I am disputing the IndexAfrica indexer (0xae9bfdf9eeec808f4f3f6f455cb1968445cc6f2f) for closing two allocations on two different subgraphs with the same POI.
Both allocations were closed on the same day, with a time difference of only 8 minutes between the closures.
Here are a few points I’d like to highlight:
1. Uniswap subgraphs are substantial and require a significant amount of time to sync.
2. It doesn't make sense to have the same POI on two different subgraphs.
3. IndexAfrica also force-closed (0x0) an allocation on the same Uniswap V3 Arbitrum (4.0.1_1.5.3) subgraph (Allocation ID: 0x4c88baefab86ee35c58f68e5b8bc4aaac24c2f4f), which remained active for 117 days, well beyond the epoch limit, from June 25th to October 20th.
While point 2 is self-explanatory, I would still like to question the disputed indexer about the reason behind having a duplicate POI on two different subgraphs.
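As a quick sanity check on the 117-day figure above, the date arithmetic can be reproduced directly (the year is hypothetical here, since the thread only gives month and day; it is assumed both dates fall in the same calendar year):

```python
from datetime import date

# Hypothetical year: the thread states only "June 25th" and "October 20th".
opened = date(2023, 6, 25)   # allocation opened
closed = date(2023, 10, 20)  # force-closed with POI 0x0

days_active = (closed - opened).days
print(days_active)  # 117
```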
I went back and looked at what could have happened, and it seems that while closing the allocation, I had copied/pasted the last POI incorrectly for the Uniswap subgraph when issuing the close command.
As correctly stated, our Uniswap subgraph was still syncing (as it is enormous), so I checked the last POI we had for it; instead of copying the newly calculated one, I still had a prior POI in my clipboard.
While not malicious in intention, I do acknowledge the fact that it is a duplicate POI and that I should have double-checked before running through the steps. I sincerely apologise for this and assure you it will never happen again.
We do use the indexer-agent for the vast majority of allocation closes (99.99%). However, this one wouldn't close, and the logs kept showing it couldn't calculate a POI, so I went the above route to work around it.
In the future, though, I won't do anything manually; I will only use the indexer-agent.
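A simple safeguard against this class of copy/paste mistake is to record every POI already submitted and refuse a manual close that reuses one for a different subgraph. The function and storage below are hypothetical, not part of the indexer tooling; this is a minimal sketch of the idea:

```python
# Hypothetical duplicate-POI guard: remembers which subgraph each POI was
# submitted for, and rejects reuse of that POI on a different subgraph.
submitted_pois: dict[str, str] = {}  # poi -> subgraph deployment ID

def check_poi(subgraph: str, poi: str) -> None:
    """Raise ValueError if `poi` was already used for a different subgraph."""
    prev = submitted_pois.get(poi)
    if prev is not None and prev != subgraph:
        raise ValueError(f"POI {poi} was already submitted for subgraph {prev}")
    submitted_pois[poi] = subgraph
```

Run before issuing a manual close command, this would have caught a stale clipboard POI being pasted for a second subgraph before anything reached the chain.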
Although the previous allocations have all expired due to the epoch limit, I believe it’s relevant to gather more information about the series of duplicate POIs on different subgraphs over the past few months.
Can you explain this pattern?
POI: 0x8f1fa9aed3ca1cbc249591c990fbde487a37d53290e43cee7d6d612ed06d1ef8
Allocation IDs closed with this POI:
- 0x1c824235b9922ebd398aa5fffcab9f6517b27988
- 0x9918f3a28bdf9f4d009755ce925ffc75692034c3
- 0x404e7a4d8268ed3278a922a1b702193ebf9ab858
- 0xccfb7d79704bfa1646a4dabf64847e42da402542
- 0xc1dab8857553f1e60a56e5f4bf232dd824a6ff66
- 0xff01afae690c2caff8b8a10a919daa6fc9e1cf6a
- 0xd6f7a157c9ebb6debffb310dc67448bb45210dba
- 0x32ec27af1634264f1c061f35a8e624f9ce5ad7a5
- 0x5e308a1dd349dac03336524cc67106d7d64438b9

POI: 0xb50411d5900941df490f0d13bb4f3a8dc5c128367b64265bad97a2225f1cfa75
Allocation IDs closed with this POI:
- 0x3a5c194abb61625ca2ed10004b9ba3cff2de168b
- 0x0482d912d90798dc38554275776dbac3a5eda48e
- 0x33f9b84f6e59c82ef83da8f6cb950ba583e3f258

POI: 0x3a55433504db48ba5f2fcc514fd312cd087e8e4843855e4685d168730f4f913f
Allocation IDs closed with this POI:
- 0x2945946c9608b88a21ae63d6de236be033c9ff0d
- 0xe20180a77648e2c5801f8c34a91101363a384388
- 0x44b9e65c11713eaf5d8a6838cc21662ab2a8d6d5

POI: 0x586a762f8c7dfbb1d0b4e8f08cff8de45878a9e93bf8bda8ac94af9d2274e8bf
Allocation IDs closed with this POI:
- 0x5373eb256fb258c9aba5b05f54ee727e22966a5f
- 0x05152845b2a96500303b002f85ec848b957454ce
- 0xcebcedd16a67396d391831a67cb29f249a7d729d
- 0xe89baf82539784b583e070d2230d0466082a7746
- 0xbf930565cc24f45297b7507dff2b82127b18aa02
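The pattern above can be surfaced mechanically by grouping historical allocation closes by their POI; any POI mapped to more than one allocation is a candidate for exactly the kind of mistake discussed here. A sketch, assuming the close events have already been fetched (for example from the network subgraph; the abbreviated IDs below are illustrative placeholders):

```python
from collections import defaultdict

# (allocation_id, poi) pairs, e.g. fetched from the network subgraph.
# IDs are abbreviated placeholders for illustration.
closes = [
    ("0x1c82...", "0x8f1f..."),
    ("0x9918...", "0x8f1f..."),
    ("0x3a5c...", "0xb504..."),
]

# Group allocation IDs under the POI each was closed with.
by_poi: defaultdict[str, list[str]] = defaultdict(list)
for alloc, poi in closes:
    by_poi[poi].append(alloc)

# Any POI shared by multiple allocations is suspicious.
duplicates = {poi: allocs for poi, allocs in by_poi.items() if len(allocs) > 1}
print(duplicates)  # {'0x8f1f...': ['0x1c82...', '0x9918...']}
```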
Thanks for bringing this to our attention, @InspectorPOI. I think this is very relevant context, and I'm interested to hear the explanation. I assume these other allocations you list are all outside the statute of limitations for disputing, no?
I've gone through the logs for our containers, and unfortunately a system update was done around two weeks ago, so no container logs from before that have been preserved. If it happens again, I'll send them to whoever is best placed to review them. It happens often when closing/re-opening allocations, so I'll export all logs at that stage and report accordingly.
As for the others, I can't say with 100% certainty, but it must have been a mix-up with the manually closed allocations (as with this one); I can't think of any other reason. Apologies for not being able to provide more, but having no remaining logs doesn't help this matter.
Dear all, given the evidence provided by the Fisherman, which has been validated by the arbitration team, we resolve to slash the indexer. We understand that honest operator mistakes can happen; however, in this case it has been shown that the same error occurred several times in the past, enough to consider this a broader systemic issue that needs addressing.
We ask the affected indexer to review their processes for closing allocations. It's always recommended to use the indexer-agent to manage allocations unless there is a specific need not to. If the agent is failing, please report the issue in a relevant channel so it can be investigated and a course of action suggested.
@InspectorPOI once more we appreciate the diligence in your investigative work.