Request for information about disputes #GDR-17

The arbitrators are contacting Indexer address 0x918fcc24e6b7f5ec73b4cf766e2393d8fe707541 (dacm.eth) and Fisherman 0x4208ce4ad17f0b52e2cadf466e9cf8286a8696d5 about disputes filed in the protocol.
Two disputes have been filed, disputing the POIs submitted when closing allocations in epochs 630 and 642 on the Silo Finance Arbitrum subgraph (QmTMKqty5yZvZtB3SwzXUG92aZUH1YQw3VjByGw4wgaMhW). Please see below for details.

There has already been brief discussion in the indexer channel of the graphprotocol Discord server, but for the sake of posterity and continuity let’s try to start the conversation from scratch here.

Disputed indexer, dacm.eth, could you provide context on your indexer stack setup at the time you closed the disputed allocations, and on the process you used to generate the POIs and close them?

Fisherman, could you share the data you gathered that led you to file the disputes, and the reasons you believe the allocations are subject to slashing?

Purpose of the Request

This request relates to the following disputes:

Dispute (0xac6e28504cbef99b6ca93223640f839ae2fad08c0582a446346efc9a4e9c38aa)
├─ Type: Indexing
├─ Status: Undecided (8.04 days ago) [27 epochs left to resolve]
├─ Indexer: 0x918fcc24e6b7f5ec73b4cf766e2393d8fe707541
├─ Fisherman: 0x4208ce4ad17f0b52e2cadf466e9cf8286a8696d5
├─ SubgraphDeployment
│  └─ id: 0x4a76af16bbbd2e09e46f13a0a52e17c99ffd68b00bb79d4eababb56e515b7d15 (QmTMKqty5yZvZtB3SwzXUG92aZUH1YQw3VjByGw4wgaMhW)
├─ Economics
│  ├─ indexerSlashableStake: 38631.6017612231034381 GRT
│  └─ indexingRewardsCollected: 6850.8538073616006058 GRT
├─ Allocation
│  ├─ id: 0x56f5c726378de882a7de457817e0c78e516e5152
│  ├─ createdAtEpoch: 611
│  ├─ createdAtBlock: 0x2e64329f37c4632048f69e460f12f8168ffae7f11e635714cd9fc792aee4e5dd
│  ├─ closedAtEpoch
│  │  ├─ id: 630
│  │  └─ startBlock: 0x45e817c771e4c97934fb604590f74acbd1571a3805717795bd22a0f24e2660f6 (#244438686)
│  └─ closedAtBlock: 0xbfba5d8cdfdb24d673804bc8f99cfd96d28503bde5c4781aa7d86c6bd8293bce (#244750312)
└─ POI
   ├─ submitted: 0x55d52a2607fbd2f07fd230a30eb9706b69f3400cd1175a76e0ffe225f4d3fcb6
   ├─ match: false
   ├─ closedEpochPOIReferences: [0xcc5f5e534810172b112fc1b6f35b209592300ae6b6dc42f18b611588bfb6c71f, 0x1a09fc994d903c9f426fe4cf59c83530668cc30e59df1d8e1e4ddfd151966d9f]
   └─ previousEpochPOIReferences: [0xb303f1e191b18f46c20ba67cd3a962740baa62446e669d86c2925b6403f7ccad, 0x1419a1bc17de6f3d49c102275876805cf42964f410b43ab641dea50bbfdc8fa1]
Dispute (0x0fc357b5a1f3a79736c891e575c636133d7bb6a3df1c774495d0fc7f09078939)
├─ Type: Indexing
├─ Status: Undecided (8.04 days ago) [39 epochs left to resolve]
├─ Indexer: 0x918fcc24e6b7f5ec73b4cf766e2393d8fe707541
├─ Fisherman: 0x4208ce4ad17f0b52e2cadf466e9cf8286a8696d5
├─ SubgraphDeployment
│  └─ id: 0x4a76af16bbbd2e09e46f13a0a52e17c99ffd68b00bb79d4eababb56e515b7d15 (QmTMKqty5yZvZtB3SwzXUG92aZUH1YQw3VjByGw4wgaMhW)
├─ Economics
│  ├─ indexerSlashableStake: 38631.6017612231034381 GRT
│  └─ indexingRewardsCollected: 4676.7018404839417828 GRT
├─ Allocation
│  ├─ id: 0x9e88996ecb7341a8500bb9c8be8cdf9d32c5cf10
│  ├─ createdAtEpoch: 630
│  ├─ createdAtBlock: 0xd9c29f03b87ed541aa05473462eed8cf111daed9b44f76a4d71e4971d4b91661
│  ├─ closedAtEpoch
│  │  ├─ id: 642
│  │  └─ startBlock: 0x9973b1846e37819f954f67b3ad7065aca8d98c7e9eb7d372df618c07730d3155 (#248593514)
│  └─ closedAtBlock: 0x8f7936b91fd18b70ace074afc0f03e7488948cc1dcad9e91a175c17bd566206f (#248599999)
└─ POI
   ├─ submitted: 0xcbf34812defef0f2f39067066c2f010160db3df22efbf4c7f8f61f3c8e67dfde
   ├─ match: false
   ├─ closedEpochPOIReferences: [0xf2f3b8553c35ec13c53df992496da2a10c24f25aa4df75c4474ca66a37ea42ad, 0x353b1dbd3c70b0bb34c23947bff716d774867994345f76c0458709adaca9d8d7]
   └─ previousEpochPOIReferences: [0x353b1dbd3c70b0bb34c23947bff716d774867994345f76c0458709adaca9d8d7, 0xaf2c911af6f618ee2adca1f9034ceecc8a897b6fd151f35d92ea60c7413b671b]

About the Procedure
The Arbitration Charter regulates the arbitration process. You can find it in Radicle project ID rad:git:hnrkrhnth6afcc6mnmtokbp4h9575fgrhzbay or at https://hackmd.io/@4Ln8SAS4RX-505bIHZTeRw/BJcHzpHDu.

For communications, please use this forum. Additionally, please monitor the #arbitration channel in Graph Protocol’s Discord in case an arbitrator reaches out for more information.


Hello, Inspector POI here; I am also the fisherman in these disputes.

Firstly, I would like to thank the arbitration team for looking into this matter.

I am disputing the DACM indexer for closing allocations on the Silo Finance Arbitrum subgraph without being fully synced. The Graph Explorer shows they are not synced; attached below are screenshots I took on August 20th and August 31st.

dacm1.png
dacm2.png

I also used a public POI checking tool provided by Ricky to verify this. It shows “POI: None,” which indicates that the indexer service is either unavailable or unsynced. Since the Explorer does show their sync status, the likelier explanation is that they are indeed unsynced.

dacm3.png
dacm4.png
dacm5.png
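For reference, a deployment’s sync status can also be queried directly from an indexer’s public status endpoint. Below is a minimal sketch using httpie, assuming the indexer exposes the standard /status GraphQL API; the host is a placeholder:

http -b post https://<indexer-host>/status query='{ indexingStatuses(subgraphs: ["QmTMKqty5yZvZtB3SwzXUG92aZUH1YQw3VjByGw4wgaMhW"]) { synced health chains { chainHeadBlock { number } latestBlock { number } } } }'

A synced deployment reports synced: true with latestBlock at or near chainHeadBlock; a large gap between the two is what the Explorer screenshots above show.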

While I understand that these screenshots alone may not be solid enough as finalized proof, as Ford mentioned earlier, the arbitration team’s POI check shows their submitted POIs do not match. So there is no basis for DACM to claim that they imported a synced POI from the old indexer and submitted it.

Let’s rewind a little to a Discord message from the #indexers channel.

DACM claimed that the subgraph was syncing properly on their old node, but after migration, they were unable to determine the subgraph’s sync status. However, The Graph Explorer clearly shows their sync status, leading me to believe this is either a false claim or an excuse.

DACM then mentioned that they manually generated the POI for closing with this method:
http -b post http://query-node-0:8030/graphql query='{proofOfIndexing(blockHash: "", blockNumber: , subgraph: "", indexer: "")}'

I see no reason why DACM needed to submit a POI manually when the indexer-agent is capable of handling it, other than that DACM was trying to claim the indexing rewards even though the subgraph was unsynced.

Next, DACM claimed that they were undergoing a server migration, which led to their indexer being unsynced. Had they properly closed using the indexer-agent, they would definitely have gotten an IE067 error (Failed to resolve POI: POI not available for deployment at current epoch start block).
Source: indexer/docs/errors.md at main Β· graphprotocol/indexer Β· GitHub
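For context, the normal path is to let the indexer-agent resolve and verify the POI at closing time. Below is a minimal sketch using the indexer CLI, with the first disputed allocation ID as the example; exact flags may vary between indexer-cli versions:

# Let the agent resolve and cross-check the POI; this is where IE067
# surfaces if no valid POI is available at the current epoch start block.
graph indexer allocations close 0x56f5c726378de882a7de457817e0c78e516e5152

# For a deployment that is not fully synced, forfeit the rewards by
# closing with the zero POI (bypassing the POI consistency check).
graph indexer allocations close 0x56f5c726378de882a7de457817e0c78e516e5152 0x0000000000000000000000000000000000000000000000000000000000000000 --force

Closing with the zero POI collects no indexing rewards, which is precisely why bypassing the agent and submitting a manual POI is suspicious here.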

Finally, another person from DACM eventually claimed that the subgraph was healthy according to their Grafana. They closed the subgraph with a POI generated at the block they were synced up to, claiming indexing rewards, apparently unaware that this is not allowed. If it were, every indexer could sync just a few blocks, manually fetch the POI, and submit it. That makes no sense, does it?

Now they have admitted that this was a MISTAKE and a valuable lesson, clearly acknowledging that they closed improperly, evidently with the intention of claiming indexing rewards. This was not a one-time occurrence: there is another subgraph (camelot-amm-v3) that they also closed while unsynced. I am not wealthy enough to dispute a third one, or more.

TL;DR:

  1. As Suntzu put it: “Subgraph not fully synced → Must close with 0x0 POI.”
  2. Determining the sync status of its subgraphs is the indexer’s responsibility. It is illogical that others can see their status while they themselves cannot.
  3. They knowingly closed the subgraphs without confirming they were 100% synced, and manually submitted an illegitimate POI instead of using the indexer-agent software, even though the subgraph was not in a deterministic error state.
  4. They closed the subgraph with a POI generated at the block they were synced up to.
  5. Each response gives a different reason, and in the end they claim it was all just a MISTAKE. I am really curious what the next reason will be.

That’s all I have to say for now. I’m pretty sure other reputable indexers have witnessed this situation themselves: closing while unsynced.

Fun fact: this particular subgraph had very high indexing-reward potential, due to low allocation volume from other indexers combined with a very high curation signal during the time DACM was allocated on it. The subgraph version has since been updated.

EDIT : For some reason I can only upload one image per post, I’ll attach 5 images below this post.


Hello,

Thanks for creating the discussion and asking for clarification on this issue.

Our GRT indexer was initially set up on Hetzner (indexer A), and during the period of the disputed incident we were in the process of migrating to a Sydney-based bare-metal provider (indexer B). The subgraph in question is QmTMKqty5yZvZtB3SwzXUG92aZUH1YQw3VjByGw4wgaMhW.

During the period of the dispute, indexer A was synced for the subgraph in question; however, post-migration we had difficulty confirming the sync status of that subgraph on indexer B. Referring to our internal monitoring in Grafana, we operated under the assumption that, post-migration, indexer B would be synced up to the latest block we were seeing on indexer A, and as such we manually closed the allocation with a POI at that block height.

Our intent as an indexer should not be questioned, as we have been successfully running an indexer on ETH since March 2022, and, as mentioned at the beginning of our post, the disputed period coincided with our migration to a new provider, a process that is both costly and time-consuming for our team but one we deemed necessary in order to improve our quality of service to the network. We also run our own dedicated RPC nodes for ETH and ARB and utilise backup RPCs from other providers in order to provide maximum uptime.

Since this incident was raised, we have revised several internal processes in order to further improve the quality of service and uptime we provide. We are happy to provide further information on the incident if required.

Thanks,
DACM Infra Team

Dear disputed indexer, I have a few questions for you:

  1. You mentioned having difficulty confirming the sync status, but honestly I do not understand why this was difficult for you; yet you were confident enough to close an unsynced subgraph manually, thinking it was synced. Your sync status can also be seen easily on the Graph Explorer, which is what led to these disputes. You were at least 60% of the blocks behind the chain head.

  2. You mentioned indexer A was already synced before this dispute happened. Given your assumption that, post-migration, indexer B would be synced up to the latest block you were seeing on indexer A, why did you still need to close the allocation manually instead of using the indexer-agent?

  3. During your migration, indexer A was synced but indexer B was still syncing. Are you running two indexers with the same address? If so, why did you close the allocation from indexer B instead of indexer A?

I do not question your intent as an indexer, but having found all your reasoning invalid, none of it supports the claim that you were legitimately eligible for the indexing rewards.

  1. As already mentioned, in addition to having access to the external tools widely available to the public (like the Graph Explorer the inspector is referencing), we also have access to in-house monitoring tools for determining a node’s overall health.

  2. Our justification and reasoning for closing the POI manually was already given in our initial post.

  3. We have never run two indexers with allocations at the same address in parallel. As part of the migration we temporarily stopped the indexer on one server to set up and test the second one. This involves several tasks, including handling failed DB migrations, testing firewall configurations and connectivity, setting up the required RPC nodes, etc.

Feel free to provide more information on the query tool you are using in your screenshot, and we can review it and include it in our processing checks.

  1. You relied on your own in-house monitoring tool and closed the subgraph manually, thinking it was synced, but it was not. After this dispute arose and the subgraph was updated, you closed the allocation the right way, with 0x0. Prior to that, however, you closed the allocation twice while it was unsynced, which is slashable.

  2. That does not answer my question about why you did not use the indexer-agent to close the subgraph; the obvious explanation is that you closed it manually because you knew you could not close it with the indexer-agent.

  3. Now that you mention that indexer A was not running and migration takes time, how can you be so sure that indexer A’s data was not already far behind the chain head post-migration, given that you relied on indexer A’s data for indexer B? FYI, Arbitrum One produces about 4 blocks per second (see the arithmetic below).
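To put that rate in perspective (illustrative arithmetic, not a measurement of the actual downtime):

4 blocks/s × 3,600 s ≈ 14,400 blocks behind per hour offline
4 blocks/s × 86,400 s ≈ 345,600 blocks behind per day offline

Even a short migration window therefore leaves the last block seen on indexer A far behind the chain head.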

In your previous message in the Discord channel, you stated: “We now understand through various discussions internally and within the community that the manual POI should be generated in very limited cases, and our case DOES NOT QUALIFY for one where it should have been closed with a manual POI.” You also mentioned that you have been running an indexer since 2022 on Ethereum Mainnet, so you should know better than most indexers.

Query tool: Hex

After the dispute happened, you unallocated from the subgraph with 0x0. This gives the impression that you are now finally aware that you are not allowed to manually close an unsynced subgraph.

Dear Arbitration Team @Ford @absolutelyNot, I do not think the disputed indexer is providing relevant answers to my questions and disputes, and their statements are drifting further and further from the main point. I do not believe I need to prove this any further.

The way I see it, it is clear that the disputed indexer closed manually with the sole purpose of claiming indexing rewards, even though the subgraph was unsynced.

The disputed indexer was aware the subgraph was unsynced and, twice, manually closed with an incorrect POI instead of using the indexer-agent; hence the two disputes for slashing. End of story.

Hopefully this can be resolved fairly. Much appreciated!


Hi Ford,

We have been transparent about our operations and our decision-making throughout this dispute; however, we do not believe the fisherman has provided sufficient evidence for his claim.

The fisherman is disputing our sync status, and in attempting to prove the lack of sync is relying on screenshots of a block explorer with no timestamps. Can the fisherman provide concrete proof showing (a) that we were out of sync and (b) how many blocks behind we actually were?

The fisherman also uses a POI checking tool in their first post to make the same claim, yet in the same post admits this is not finalized proof, and then nonetheless continues to use this “proof” to assume that we cannot claim we performed the migration.

Despite these claims, we have been open to learning more about this process in order to continue improving our quality of service; however, we have not received any further information in response to this request.

We also want to clarify that manual closure of a subgraph is not an illegal practice. While we acknowledge that manual closure is not ideal under certain circumstances, it is not inherently a violation. The request to be slashed twice for the same subgraph therefore seems highly unrealistic, yet the fisherman is also claiming there is a third subgraph they are interested in disputing, which we again consider an extreme outcome given that we have shown no malicious intent or attempt to manipulate rewards.

The fisherman has questioned our transparency and our intent since the first post, and in every response has continued to disregard our rationale in favour of their own reasoning, which we do not consider founded on concrete proof.

Since the issue arose, we have been open to feedback and have actively sought out any tools or methods that could help us verify our block sync status more effectively.

It is correct that we initially used our in-house monitoring tool to track the sync status of the subgraph; it gives us real-time insight into our operational environment, and we relied on it for efficiency. The decision to close the subgraph manually was based on information from that tool, which indicated that the subgraph was synced.

Thanks,
DACM Infra Team


Dear all,

Given the data provided by both @Inspector_POI and @arysar above, the Arbitrators resolve to accept one of the disputes (slashing the indexer) and to draw the other.

We found that the disputed indexer manually closed allocations for an unsynced subgraph using an arbitrary POI, since a valid POI could not be obtained at the moment of closing. We think they could have pursued an alternative course of action, determining the accepted practice for handling such issues rather than collecting rewards with an invalid POI. Valid actions would have been to close with 0x0 and forfeit the rewards, or to ensure the subgraph was fully synced and obtain the correct POI.

We recommend that every indexer turn to the community in Discord when they run into issues, since the core devs and other indexers are always willing to help.

The decision is grounded in Section 9 of the Arbitration Charter (GIP-0009: Arbitration Charter - HackMD).

Slashing Penalty:

The disputed indexer collected 11527.5556478 GRT in rewards from these allocations and will be slashed 38631.6017612231034381 GRT.

The Arbitrators believe the resulting net loss (27104.0461134 GRT) is a sufficient penalty and that it will encourage careful consideration under similar circumstances going forward.
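Using the figures from the dispute details above, the penalty nets out as follows:

rewards collected: 6850.8538073616 + 4676.7018404839 ≈ 11527.5556478 GRT
net loss: 38631.6017612231 − 11527.5556478 ≈ 27104.0461134 GRT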

Sincerely,
The Arbitrators


Dear Arbitration Team,

I would like to extend my sincere gratitude for your prompt attention to this matter within the epoch limit, ensuring fairness for all indexers.

Could you please provide me with the transaction ID (TXID) for the disputed indexer balance that was burnt?

Additionally, I would greatly appreciate it if you could also provide the TXIDs for the two deposit refunds, including the 50% reward from the slashing.

Thank you for your assistance in this matter.

@Inspector_POI, thank you for your patience on this dispute. We are waiting on a signature and will promptly share the transaction IDs when they have been executed.

Executed Transaction:

Well received and confirmed. Thank you very much.

At this moment, there is only one remaining dispute to be resolved, which dates back 7 months (Dispute ID: 0x0ea585e0e9a4e0d8bc59440c9cc4bcb197dfe4f46b639559be7ad87b0a88c408).

The indexer involved in that dispute has already ceased operations and withdrawn everything. I believe the other fisherman should at least receive a draw and have their deposit refunded.

@Inspector_POI, thank you for bringing this to our attention. I will bring this to the Arbitrators and get back to you as soon as possible.


Executed Transaction:

Thank you again @Inspector_POI
