Request for Information about Disputes #GDR-36

The Arbitrators are contacting Indexer address 0x9082f497bdc512d08ffde50d6fce28e72c2addcf and fisherman 0xfe56931ed1cd3021ef1162bae2b3872e8694d1da for a new dispute filed in the protocol.

Dispute ID: 0x37688090bea7b0acade036a9af824da517773f6e45dbca044da07ec6ed0dcbad

Allocation: 0x0ec48d7e0a5d3e94a3c2747856f0f00e493f53ec

Subgraph: QmZCXBToPx7Tymkv7wexog35WYqQo8Q2BPVGZvJ11KmKCq

Fisherman, could you share the insights or data you gathered that led you to file the dispute? Please provide all relevant information and records about the open dispute. This will likely include POIs generated for the affected subgraph(s).
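For reference, POIs of the kind requested here can be read from graph-node's index-node status API. Below is a minimal sketch in Python; the endpoint URL, block number, and block hash are placeholders (assumptions for illustration, not values from this dispute), while the deployment hash and indexer address are the ones listed above.

```python
import json
import urllib.request

# Placeholder: the index-node status endpoint is commonly served on port 8030.
STATUS_ENDPOINT = "http://localhost:8030/graphql"

def build_poi_query(subgraph: str, block_number: int, block_hash: str, indexer: str) -> dict:
    """Build a proofOfIndexing GraphQL query for the index-node status API."""
    return {
        "query": f"""{{
  proofOfIndexing(
    subgraph: "{subgraph}"
    blockNumber: {block_number}
    blockHash: "{block_hash}"
    indexer: "{indexer}"
  )
}}"""
    }

def fetch_poi(payload: dict) -> str:
    """POST the query to the status endpoint and return the POI."""
    req = urllib.request.Request(
        STATUS_ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]["proofOfIndexing"]

# Deployment and indexer from this dispute; block values are placeholders.
payload = build_poi_query(
    "QmZCXBToPx7Tymkv7wexog35WYqQo8Q2BPVGZvJ11KmKCq",
    22102337,
    "0x<block-hash-placeholder>",
    "0x9082f497bdc512d08ffde50d6fce28e72c2addcf",
)
```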

About the Procedure

The Arbitration Charter regulates the arbitration process. You can find it in Radicle project ID rad:git:hnrkrhnth6afcc6mnmtokbp4h9575fgrhzbay or at GIP-0009: Arbitration Charter.


Please use this forum for all communications.

Arbitration Team.

Hi @tmigone, it’s me 0xfe56931ed1cd3021ef1162bae2b3872e8694d1da,
This dispute concerns the indexer 0x9082f497bdc512d08ffde50d6fce28e72c2addcf closing an allocation on a subgraph that was not 100% synced. The indexer had only synced up to around block 12388000, while the correct block should have been 22102337 (epoch 844).

The indexer reported a database failure on April 12. The initial sync also took at least 20 days, based on the first allocation being opened on February 27 and closed on March 19.

Today is April 24, 12 days after the failure, and I don’t see much progress toward the indexer being 100% synced.
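The timeline above can be checked with quick date arithmetic (all dates taken from the posts in this thread, year 2025):

```python
from datetime import date

# Dates reported in the thread.
first_allocation_opened = date(2025, 2, 27)
first_allocation_closed = date(2025, 3, 19)
database_failure = date(2025, 4, 12)
today = date(2025, 4, 24)

# Duration of the original sync window and of the resync so far.
original_sync_days = (first_allocation_closed - first_allocation_opened).days
days_since_failure = (today - database_failure).days

print(original_sync_days)  # days between opening and closing the first allocation
print(days_since_failure)  # days of resyncing since the database failure
```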

Hi tonymontana,

Can you please clarify what the actual fault underlying the dispute is?

By March 19th, 2025, when we closed the allocation on this subgraph, we had it fully synced, and the POI is a valid one.

On April 12, 2025, we had a database failure, which meant we had to resync all of our subgraphs. This is a heavy subgraph, taking up 109 GB so far and still syncing.

I don’t see any relation between syncing speed and the validity of a POI. We might have started syncing offchain before the first allocation. Can you please expand on your observation?

As of the last check, @megatron was synced up to around block 12388000, which is almost four years old. It took @megatron at least 20 days to sync to 100%. Now, 13 days after the database failure, the subgraph is nowhere close to being 100% synced.

If @megatron started syncing at the same time the allocation was created, then the current progress seems illogical. If the offchain syncing method was used, can any graph-node logs or Grafana charts from the past two months be provided to support this statement?

@megatron agreed to provide logs from the failed database for @Inspector_POI’s disputes. I hope to see the same here.

Hi tonymontana,

Below you can find the indexer-agent logs related to the syncing start date of the subgraph, as well as table data from the live database and the crashed one.

Indexer Agent Logs

{"level":20,"time":1740667563543,"pid":1,"hostname":"722a8eb22524","name":"IndexerAgent","component":"ActionManager","function":"executeApprovedActions","protocolNetwork":"eip155:42161","prioritizedActions":[{"id":110,"type":"allocate","status":"approved","priority":0,"deploymentID":"QmZCXBToPx7Tymkv7wexog35WYqQo8Q2BPVGZvJ11KmKCq","allocationID":null,"amount":"388386.4","poi":null,"force":null,"source":"indexerCLI","reason":"manual","createdAt":"2025-02-27T14:45:45.656Z","updatedAt":"2025-02-27T14:45:52.418Z","transaction":null,"failureReason":null,"protocolNetwork":"eip155:42161"}],"startTimeMs":4,"msg":"Executing batch action"}
	
{"level":30,"time":1740666362392,"pid":1,"hostname":"722a8eb22524","name":"IndexerAgent","component":"GraphNode","name":"indexer-agent/ZvJ11KmKCq","deployment":{"bytes32":"0xa15b65bbb45ce64c7eecc0ad665b53510cf0520ee6d71a451e77c772a2375cb6","ipfsHash":"QmZCXBToPx7Tymkv7wexog35WYqQo8Q2BPVGZvJ11KmKCq"},"msg":"Successfully deployed subgraph deployment"}

{"level":30,"time":1740666359854,"pid":1,"hostname":"722a8eb22524","name":"IndexerAgent","component":"GraphNode","name":"indexer-agent/ZvJ11KmKCq","deployment":{"bytes32":"0xa15b65bbb45ce64c7eecc0ad665b53510cf0520ee6d71a451e77c772a2375cb6","ipfsHash":"QmZCXBToPx7Tymkv7wexog35WYqQo8Q2BPVGZvJ11KmKCq"},"msg":"Deploy subgraph deployment"}

{"level":20,"time":1740666359850,"pid":1,"hostname":"722a8eb22524","name":"IndexerAgent","component":"GraphNode","name":"indexer-agent/ZvJ11KmKCq","deployment":"QmZCXBToPx7Tymkv7wexog35WYqQo8Q2BPVGZvJ11KmKCq","msg":"Subgraph deployment not found, creating subgraph name and deploying..."}
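The `time` fields in these log entries are Unix epoch milliseconds; converting them confirms the syncing start date they record:

```python
from datetime import datetime, timezone

# "time" values from the three indexer-agent log entries above (epoch ms).
log_times_ms = [1740666359850, 1740666362392, 1740667563543]

for ms in log_times_ms:
    # Convert epoch milliseconds to a UTC timestamp.
    print(datetime.fromtimestamp(ms / 1000, tz=timezone.utc).isoformat())
# All three fall on 2025-02-27 (UTC), matching the action's createdAt date.
```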

Live Database Table results

| table_name | total_size | table_size | index_size | estimated_rows |
| --- | --- | --- | --- | --- |
| swap | 67 GB | 29 GB | 38 GB | 57486864 |
| token | 22 GB | 6652 MB | 15 GB | 45057912 |
| poi2$ | 2534 MB | 921 MB | 1613 MB | 2827763 |
| liquidity_pool | 70 MB | 25 MB | 44 MB | 51337 |
| dex_amm_protocol | 34 MB | 9632 kB | 24 MB | 50754 |
| data_sources$ | 8816 kB | 4312 kB | 4464 kB | 51337 |
| liquidity_pool_daily_snapshot | 216 kB | 0 bytes | 208 kB | 0 |
| liquidity_pool_hourly_snapshot | 216 kB | 0 bytes | 208 kB | 0 |
| financials_daily_snapshot | 192 kB | 0 bytes | 184 kB | 0 |
| usage_metrics_daily_snapshot | 168 kB | 0 bytes | 160 kB | 0 |
| usage_metrics_hourly_snapshot | 160 kB | 0 bytes | 152 kB | 0 |
| withdraw | 120 kB | 0 bytes | 112 kB | 0 |
| deposit | 120 kB | 0 bytes | 112 kB | 0 |
| liquidity_pool_fee | 104 kB | 0 bytes | 96 kB | 0 |
| reward_token | 104 kB | 0 bytes | 96 kB | 0 |
| account | 32 kB | 0 bytes | 24 kB | 0 |
| active_account | 32 kB | 0 bytes | 24 kB | 0 |

Crashed Database Table results

| table_name | total_size | table_size | index_size | estimated_rows |
| --- | --- | --- | --- | --- |
| swap | 262 GB | 115 GB | 147 GB | 0 |
| token | 81 GB | 25 GB | 56 GB | 0 |
| poi2$ | 10 GB | 3826 MB | 6724 MB | 0 |
| liquidity_pool | 556 MB | 203 MB | 352 MB | 0 |
| dex_amm_protocol | 271 MB | 73 MB | 198 MB | 0 |
| data_sources$ | 69 MB | 34 MB | 35 MB | 0 |
| liquidity_pool_daily_snapshot | 216 kB | 0 bytes | 208 kB | 0 |
| liquidity_pool_hourly_snapshot | 216 kB | 0 bytes | 208 kB | 0 |
| financials_daily_snapshot | 192 kB | 0 bytes | 184 kB | 0 |
| usage_metrics_daily_snapshot | 168 kB | 0 bytes | 160 kB | 0 |
| usage_metrics_hourly_snapshot | 160 kB | 0 bytes | 152 kB | 0 |
| withdraw | 120 kB | 0 bytes | 112 kB | 0 |
| deposit | 120 kB | 0 bytes | 112 kB | 0 |
| liquidity_pool_fee | 104 kB | 0 bytes | 96 kB | 0 |
| reward_token | 104 kB | 0 bytes | 96 kB | 0 |
| account | 32 kB | 0 bytes | 24 kB | 0 |
| active_account | 32 kB | 0 bytes | 24 kB | 0 |
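One way to put the two tables side by side is to parse the reported sizes (assuming pg_size_pretty-style, 1024-based units) and compare the largest table. This is an illustrative sketch, not part of the submitted evidence:

```python
# Multipliers for pg_size_pretty-style units (assumed 1024-based).
UNITS = {"bytes": 1, "kB": 1024, "MB": 1024**2, "GB": 1024**3}

def to_bytes(pretty: str) -> int:
    """Parse a size string like '67 GB' or '6652 MB' into bytes."""
    value, unit = pretty.split()
    return int(float(value) * UNITS[unit])

# 'swap' is the largest table in both databases (values from the tables above).
live_swap = to_bytes("67 GB")      # live database, still resyncing
crashed_swap = to_bytes("262 GB")  # crashed database, fully synced at failure

# Rough resync progress by on-disk size of the largest table.
print(f"{live_swap / crashed_swap:.0%}")
```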

Best,

@tmigone @absolutelyNot Can you please review and verify the evidence?

After reviewing the evidence and considering the information available, we have decided to resolve this dispute as a draw. The disputed indexer’s activity is consistent with the operational failure they described. We see no reason to believe that the POI they presented is invalid, or that the subgraph was not fully synced at the moment the associated allocation was closed. The indexer’s status endpoint provides valuable information about current subgraph deployments; it should not be used to infer information about previous syncing attempts for a deployment.
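For reference, the status endpoint mentioned here is graph-node's index-node API (commonly served on port 8030). A minimal sketch of querying it follows; the endpoint URL is an assumption, the deployment hash is the one from this dispute, and, per the note above, the result reflects only the current deployment:

```python
import json
import urllib.request

# Assumption: index-node status API on the default port.
STATUS_ENDPOINT = "http://localhost:8030/graphql"

# Sync status for the disputed deployment.
STATUS_QUERY = """{
  indexingStatuses(subgraphs: ["QmZCXBToPx7Tymkv7wexog35WYqQo8Q2BPVGZvJ11KmKCq"]) {
    subgraph
    synced
    health
    chains {
      latestBlock { number }
      chainHeadBlock { number }
    }
  }
}"""

def fetch_statuses() -> list:
    """POST the status query and return the list of indexing statuses."""
    req = urllib.request.Request(
        STATUS_ENDPOINT,
        data=json.dumps({"query": STATUS_QUERY}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]["indexingStatuses"]
```

Comparing `latestBlock` with `chainHeadBlock` shows how far the current sync has progressed, but says nothing about a deployment that was synced and later removed.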

We appreciate the fisherman’s thoroughness and everyone’s patience with this dispute. We’ll post the transaction shortly.

Thanks, Arbitration team.
