Request for Information about Disputes #GDR-35

The Arbitrators are contacting Indexer address 0x9082f497bdc512d08ffde50d6fce28e72c2addcf and fisherman 0xfe56931ed1cd3021ef1162bae2b3872e8694d1da for a new dispute filed in the protocol.

Dispute ID: 0xac7bca2867db359d5ee17df34a6e3f37e39516bf83d7ceb927cbfd256990dbdd

Allocation: 0xce314e3a43b35f13b562fb813d31e32ee458d1f1

Subgraph: QmeBQRHngJLJ2r84Vvp3mgKbgYXHmAqc3dWkXoywU9ze3d

Fisherman, could you share the insights or data you gathered that led you to file the dispute? Please provide all relevant information and records about the open dispute. This will likely include POIs generated for the affected subgraph(s).
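For reference, one common way to produce or cross-check such a POI is via graph-node's index-node GraphQL API (usually served on port 8030). The sketch below is illustrative only: the endpoint URL and block hash are placeholders, the block number is simply the epoch start block cited later in this thread, and the field signature reflects the index-node schema as I understand it rather than anything confirmed in this dispute.

```python
# Illustrative sketch only: fetch a POI from graph-node's index-node status API.
# The endpoint URL and block hash are placeholders, not values from this dispute.
import requests

STATUS_ENDPOINT = "http://localhost:8030/graphql"  # assumed index-node GraphQL server

QUERY = """
query ($subgraph: String!, $blockNumber: Int!, $blockHash: Bytes!, $indexer: Bytes) {
  proofOfIndexing(
    subgraph: $subgraph
    blockNumber: $blockNumber
    blockHash: $blockHash
    indexer: $indexer
  )
}
"""

variables = {
    "subgraph": "QmeBQRHngJLJ2r84Vvp3mgKbgYXHmAqc3dWkXoywU9ze3d",
    "blockNumber": 321468804,  # example: the epoch 853 start block cited later in this thread
    "blockHash": "0x...",      # placeholder: hash of that block
    "indexer": "0x9082f497bdc512d08ffde50d6fce28e72c2addcf",
}

resp = requests.post(STATUS_ENDPOINT, json={"query": QUERY, "variables": variables}, timeout=30)
resp.raise_for_status()
print(resp.json()["data"]["proofOfIndexing"])
```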

About the Procedure

The Arbitration Charter regulates the arbitration process. You can find it in Radicle project ID rad:git:hnrkrhnth6afcc6mnmtokbp4h9575fgrhzbay or at GIP-0009: Arbitration Charter.


Please use this forum for all communications.

Arbitration Team.

Hi @tmigone, this is 0xfe56931ed1cd3021ef1162bae2b3872e8694d1da.
This dispute concerns the indexer 0x9082f497bdc512d08ffde50d6fce28e72c2addcf closing an allocation on a subgraph that was not 100% synced. The indexer had only synced up to around block 88610000, while the correct startBlock should have been 321468804 (epoch 853).

According to #GDR-33 and #GDR-34, indexer 0x9082f497bdc512d08ffde50d6fce28e72c2addcf reported a database failure on April 12. Based on the first allocation, opened on February 24 and closed on March 18, the indexer took at least 22 days to sync the subgraph.

Today is April 24, 12 days after the reported failure, and the indexer is nowhere near half synced.
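For completeness, the day counts cited above can be checked directly; a trivial sketch, assuming all dates fall in 2025:

```python
# Verify the day counts cited above (year assumed to be 2025 for all dates).
from datetime import date

print((date(2025, 3, 18) - date(2025, 2, 24)).days)  # 22 (first allocation opened -> closed)
print((date(2025, 4, 24) - date(2025, 4, 12)).days)  # 12 (reported failure -> "today")
```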

Hi tonymontana,

Can you please clarify what the actual fault for the dispute is?

By March 31st, 2025, when we closed the last allocation on this subgraph, it was fully synced and the POI is a valid one.

On April 12, 2025 we had a database failure, which means we had to resync all of our subgraphs. This is a heavy subgraph, taking up 234 GB so far and still syncing.

I don’t see any relation between syncing speed and the validity of a POI. We might have started syncing offchain before the first allocation. Syncing speed can also be affected by the RPC provider (we are using Pinax for Arbitrum). Can you please expand on your observation?

I’d like to share some thoughts here.

Using the time or day difference as a measure of whether the disputed indexer is progressing isn’t the most reliable approach. One could also argue that after a database failure, resyncing all subgraphs from scratch makes the load significantly heavier than it was when the indexer first started syncing this subgraph.

I agree with this.

As of my last check, @megatron had synced up to around block 88610000, which corresponds to a block produced almost two years ago. It took @megatron at least 22 days to sync to 100%. Now, 13 days after the database failure, the subgraph is only about 30% synced.

If @megatron started syncing at the time the allocation was created, then the current progress seems illogical. If the offchain syncing method was used, can any graph-node logs or Grafana charts from the past two months be provided to support this statement?
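For anyone who wants to reproduce the kind of progress check referenced above ("synced up to around block 88610000", "about 30% synced"), here is a minimal sketch querying an indexer's public status endpoint. The endpoint URL is a placeholder; the deployment hash is the one from this dispute.

```python
# Sketch: check a deployment's sync progress via an indexer's status endpoint.
# The endpoint URL is a placeholder; the deployment hash is the one from this dispute.
import requests

STATUS_URL = "https://indexer.example.com/status"  # placeholder public status endpoint

QUERY = """
query ($deployments: [String!]!) {
  indexingStatuses(subgraphs: $deployments) {
    subgraph
    synced
    health
    chains {
      network
      latestBlock { number }
      chainHeadBlock { number }
    }
  }
}
"""

deployment = "QmeBQRHngJLJ2r84Vvp3mgKbgYXHmAqc3dWkXoywU9ze3d"
resp = requests.post(
    STATUS_URL,
    json={"query": QUERY, "variables": {"deployments": [deployment]}},
    timeout=30,
)
resp.raise_for_status()

for status in resp.json()["data"]["indexingStatuses"]:
    chain = status["chains"][0]
    latest = int(chain["latestBlock"]["number"]) if chain["latestBlock"] else 0
    head = int(chain["chainHeadBlock"]["number"])
    print(
        f"{status['subgraph']}: synced={status['synced']} health={status['health']} "
        f"latest={latest} head={head} (~{latest / head:.1%} of chain head by block count)"
    )
```

Note that the block-count ratio is only a rough proxy for sync percentage, and, as pointed out later in the thread, the status endpoint only reflects the deployment's current syncing attempt.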

@Inspector_POI You’ve filed two disputes against the same indexer, @megatron; I’m just wondering what led to the change in your approach toward my disputes. @megatron agreed to provide logs from the failed database for @Inspector_POI’s disputes, and I hope to see the same here.

There hasn’t been any change in my approach. Just because I said this isn’t the most reliable method doesn’t mean I’m siding with the disputed indexer.

As stated in all my previous disputes,

I’ve approached this matter with neutrality, aiming to gather and analyze all relevant facts to determine whether the issue stems from a software malfunction or an operational oversight by the indexer.

The disputed indexer has been cooperative so far by providing all the requested logs for MY disputes, which certainly adds credibility to his claim.

You are the fisherman for THIS dispute; whatever logs or other materials are necessary, it’s YOUR / the Arbitration Team’s responsibility to request them.

Please do not conflate this with my disputes or my requirements.

Hi tonymontana,

Below you can find the indexer-agent logs related to the subgraph’s syncing start date, along with the table data from the live database and the crashed one.

Indexer Agent Logs

{"level":30,"time":1739325004291,"pid":1,"hostname":"9e2ae438aa23","name":"IndexerAgent","component":"GraphNode","name":"indexer-agent/XoywU9ze3d","deployment":{"bytes32":"0xeb5c9b1960366a59bd64f7db97692c2bb24e01662be4a5f09b0d995781a19594","ipfsHash":"QmeBQRHngJLJ2r84Vvp3mgKbgYXHmAqc3dWkXoywU9ze3d"},"msg":"Successfully deployed subgraph deployment"}

{"level":30,"time":1739325003354,"pid":1,"hostname":"9e2ae438aa23","name":"IndexerAgent","component":"GraphNode","name":"indexer-agent/XoywU9ze3d","deployment":{"bytes32":"0xeb5c9b1960366a59bd64f7db97692c2bb24e01662be4a5f09b0d995781a19594","ipfsHash":"QmeBQRHngJLJ2r84Vvp3mgKbgYXHmAqc3dWkXoywU9ze3d"},"msg":"Deploy subgraph deployment"}

{"level":20,"time":1739325003347,"pid":1,"hostname":"9e2ae438aa23","name":"IndexerAgent","component":"GraphNode","name":"indexer-agent/XoywU9ze3d","deployment":"QmeBQRHngJLJ2r84Vvp3mgKbgYXHmAqc3dWkXoywU9ze3d","msg":"Subgraph deployment not found, creating subgraph name and deploying..."}

Live Database Table Results

| table_name | total_size | table_size | index_size | estimated_rows |
|---|---|---|---|---|
| token | 25 GB | 9712 MB | 16 GB | 24092336 |
| token_day_data | 24 GB | 6690 MB | 17 GB | 24089172 |
| pair | 23 GB | 6168 MB | 17 GB | 13152432 |
| pair_day_data | 17 GB | 4579 MB | 13 GB | 13119903 |
| swap | 17 GB | 4861 MB | 12 GB | 14930782 |
| pair_hour_data | 12 GB | 3205 MB | 8594 MB | 13145427 |
| poi2$ | 10 GB | 3747 MB | 6565 MB | 11485764 |
| day_data | 8392 MB | 2346 MB | 6046 MB | 10963946 |
| factory | 8028 MB | 2450 MB | 5577 MB | 11003978 |
| transaction | 7315 MB | 3161 MB | 4148 MB | 12174802 |
| liquidity_position_snapshot | 4609 MB | 1183 MB | 3426 MB | 2708577 |
| liquidity_position | 2321 MB | 606 MB | 1715 MB | 2472719 |
| mint | 847 MB | 244 MB | 603 MB | 712973 |
| bundle | 517 MB | 155 MB | 362 MB | 2093307 |
| burn | 203 MB | 59 MB | 144 MB | 161576 |
| user | 72 MB | 21 MB | 51 MB | 233264 |
| data_sources$ | 4448 kB | 2144 kB | 2264 kB | 25949 |
| token_hour_data | 168 kB | 0 bytes | 160 kB | 0 |
| hour_data | 152 kB | 0 bytes | 144 kB | 0 |

Crashed Database Table Results

| table_name | total_size | table_size | index_size | estimated_rows |
|---|---|---|---|---|
| token | 34 GB | 13 GB | 21 GB | 0 |
| token_day_data | 33 GB | 9399 MB | 24 GB | 0 |
| pair | 33 GB | 8526 MB | 24 GB | 0 |
| pair_day_data | 24 GB | 6338 MB | 18 GB | 0 |
| swap | 24 GB | 6969 MB | 17 GB | 0 |
| pair_hour_data | 16 GB | 4481 MB | 12 GB | 0 |
| poi2$ | 14 GB | 5322 MB | 9307 MB | 0 |
| day_data | 11 GB | 3329 MB | 8069 MB | 0 |
| factory | 11 GB | 3489 MB | 7779 MB | 0 |
| transaction | 10 GB | 4434 MB | 5872 MB | 0 |
| liquidity_position_snapshot | 6495 MB | 1674 MB | 4821 MB | 0 |
| liquidity_position | 3443 MB | 915 MB | 2528 MB | 0 |
| mint | 897 MB | 257 MB | 640 MB | 0 |
| bundle | 580 MB | 179 MB | 400 MB | 0 |
| burn | 255 MB | 74 MB | 181 MB | 0 |
| user | 78 MB | 24 MB | 54 MB | 0 |
| data_sources$ | 5400 kB | 2608 kB | 2752 kB | 0 |
| token_hour_data | 168 kB | 0 bytes | 160 kB | 0 |
| hour_data | 152 kB | 0 bytes | 144 kB | 0 |
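For reproducibility, per-table reports of this shape can be generated from the Postgres system catalogs. Below is a hedged sketch, assuming psycopg2, a placeholder connection string, and a placeholder sgdNNN deployment schema (graph-node keeps each deployment's tables in its own sgd-prefixed schema); it is not necessarily the exact query the indexer ran.

```python
# Sketch: reproduce a per-table size report from the Postgres system catalogs.
# The connection string and schema name are placeholders.
import psycopg2

SCHEMA = "sgd123"  # placeholder: the deployment's sgdNNN schema in graph-node's store

SQL = """
SELECT
    c.relname                                       AS table_name,
    pg_size_pretty(pg_total_relation_size(c.oid))   AS total_size,
    pg_size_pretty(pg_relation_size(c.oid))         AS table_size,
    pg_size_pretty(pg_indexes_size(c.oid))          AS index_size,
    c.reltuples::bigint                             AS estimated_rows
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = %s
  AND c.relkind = 'r'
ORDER BY pg_total_relation_size(c.oid) DESC;
"""

with psycopg2.connect("postgresql://user:password@localhost:5432/graph-node") as conn:
    with conn.cursor() as cur:
        cur.execute(SQL, (SCHEMA,))
        for row in cur.fetchall():
            print("\t".join(str(col) for col in row))
```

If a query of this kind was used, estimated_rows would come from planner statistics (pg_class.reltuples), which could explain why the crashed database reports 0 rows for every table despite substantial table sizes.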

Best,

@tmigone @absolutelyNot
Can you please review and verify the evidence?

After reviewing the evidence and considering the information available, we have decided to resolve this dispute as a draw. The disputed indexer’s activity is in line with the operational failure they have described. We don’t see any reason to believe the POI they presented is invalid, nor that the subgraph was not fully synced at the moment the associated allocation was closed. The indexer’s status endpoint provides valuable information about current subgraph deployments; it should not be used to derive information about previous syncing attempts for a deployment.

We appreciate the fisherman’s thoroughness and everyone’s patience on this dispute. We’ll be posting the transaction shortly.

Thanks, Arbitration team.
