Request for Information about Disputes #GDR-34

The Arbitrators are contacting Indexer 0x9082f497bdc512d08ffde50d6fce28e72c2addcf and Fisherman 0x4208ce4ad17f0b52e2cadf466e9cf8286a8696d5 regarding a new dispute filed in the protocol.

Dispute ID: 0xc8dacc9928389ce216c05640b0ffed69ec617d361544bc1718ef3e4b47176680

Allocation: 0xb56ca95858b7c1519ac5f703cb5726f9fa76a562

Subgraph: QmTpSFUKL7QM6DLNdHfGHqjjTYfu3SofpTqaQYEr9kH8ne

@InspectorPOI, could you share the insights or data you gathered that led you to file the dispute? Please provide all relevant information and records about the open dispute. This will likely include the POIs generated for the affected subgraph(s).

About the Procedure
The Arbitration Charter regulates the arbitration process. You can find it in Radicle project ID rad:git:hnrkrhnth6afcc6mnmtokbp4h9575fgrhzbay or at GIP-0009: Arbitration Charter.

Please use this forum for all communications.

Arbitration Team.

Dear Arbitration Team,

I am submitting a dispute against holographic-indexer (0x9082f497bdc512d08ffde50d6fce28e72c2addcf) for closing an allocation on the arbitrum-rpc-revert-hs subgraph without being fully synced.
This case is similar to dispute #33, but I will still provide all the necessary details.

The allocation was closed against start block #321468804 on the Arbitrum One network. Cross-referencing with the Query Cross-Checking Tool shows that the indexer was not fully synced up to this block at the time of closure.
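
For reference, this kind of check can be reproduced against the indexer’s public status endpoint. Below is a minimal sketch, assuming a placeholder endpoint URL; the indexingStatuses query is graph-node’s standard indexing-status API.

# Minimal sketch: check whether an indexer's deployment has synced past the disputed
# block via its public status endpoint. The endpoint URL is a placeholder; the
# indexingStatuses query is graph-node's standard indexing-status API.
import requests

STATUS_URL = "https://indexer.example.com/status"  # placeholder status endpoint
DEPLOYMENT = "QmTpSFUKL7QM6DLNdHfGHqjjTYfu3SofpTqaQYEr9kH8ne"
DISPUTED_BLOCK = 321468804

query = """
query ($deployments: [String!]!) {
  indexingStatuses(subgraphs: $deployments) {
    subgraph
    synced
    health
    chains { network latestBlock { number } chainHeadBlock { number } }
  }
}
"""

resp = requests.post(
    STATUS_URL,
    json={"query": query, "variables": {"deployments": [DEPLOYMENT]}},
    timeout=30,
)
resp.raise_for_status()

for status in resp.json()["data"]["indexingStatuses"]:
    chain = status["chains"][0]
    latest = int(chain["latestBlock"]["number"]) if chain["latestBlock"] else None
    print(f"synced={status['synced']} health={status['health']} latest_block={latest}")
    if latest is None or latest < DISPUTED_BLOCK:
        print("Indexer had NOT reached the disputed block.")
    else:
        print("Indexer had reached the disputed block.")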

Dear Arbitration Team,

The arbitrum-rpc-revert-hs subgraph was fully synced by the time our indexer closed the allocation, and the POI was presented by the indexer agent without any manual intervention. The only reason it does not appear synced at the time dispute #34 was opened on April 18th, 2025 is that we had to resync all of our subgraphs after a critical database failure on April 12th, 2025. Most subgraphs are back to a synced state, while others are still syncing.

This is the crash log from our database, which failed to replay the WAL during restart attempts. Although we eventually reset the WAL, the database remained inconsistent due to table-level failures during indexing. Following this incident, we decided to resync all subgraphs from scratch.

2025-04-12 17:08:58.654 UTC [1] LOG:  listening on IPv6 address "::", port 5432
2025-04-12 17:08:58.655 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2025-04-12 17:08:58.685 UTC [30] LOG:  database system was interrupted while in recovery at 2025-04-12 17:08:21 UTC
2025-04-12 17:08:58.685 UTC [30] HINT:  This probably means that some data is corrupted and you will have to use the last backup for recovery.

2025-04-12 17:09:28.325 UTC [1101] FATAL:  the database system is not yet accepting connections
2025-04-12 17:09:28.325 UTC [1101] DETAIL:  Consistent recovery state has not been yet reached.
2025-04-12 17:09:28.325 UTC [1102] FATAL:  the database system is not yet accepting connections
2025-04-12 17:09:28.325 UTC [1102] DETAIL:  Consistent recovery state has not been yet reached.
2025-04-12 17:09:29.517 UTC [1] LOG:  startup process (PID 30) was terminated by signal 11: Segmentation fault

For clarity, we attach a timeline of the events regarding this subgraph:

  • March 17th, 2025: Opened allocation 0xb56ca95858b7c1519ac5f703cb5726f9fa76a562
  • March 31st, 2025: Closed (synced) allocation 0xb56ca95858b7c1519ac5f703cb5726f9fa76a562
  • March 31st, 2025: Opened allocation 0xfc55813265f6077629e00473412eed125fb38582
  • April 12th, 2025: Critical database failure. We started a resync of subgraphs from scratch.
  • April 18th, 2025: Dispute filed

We did not close any allocation with an arbitrary POI or in an unsynced state, nor have we closed 0xfc55813265f6077629e00473412eed125fb38582, as that subgraph is still syncing.

Please advise what additional proof you need. We have retained the database directory from before the resync. An effective validation of the POIs requires comparing them against other indexers’ submissions rather than checking the current sync state.

Best.

Dear @megatron,

Thank you for your response and the details provided.

The POI comparison tool indicates that a majority of indexers have reached consensus on block #321468804, except for the disputed indexer, which currently shows no POI found. Additionally, The Graph Explorer displays that synchronization is still in progress.
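
For transparency, a comparable cross-indexer check can be sketched as follows. The endpoint URLs are placeholders, and the publicProofsOfIndexing field (part of graph-node’s indexing-status API in recent versions) may differ in shape between versions, so treat this as illustrative only.

# Minimal sketch: fetch the public POI for the disputed block from several indexers'
# status endpoints and tally consensus. Endpoint URLs are placeholders; the exact
# shape of publicProofsOfIndexing may vary between graph-node versions.
from collections import Counter
import requests

DEPLOYMENT = "QmTpSFUKL7QM6DLNdHfGHqjjTYfu3SofpTqaQYEr9kH8ne"
DISPUTED_BLOCK = 321468804
STATUS_ENDPOINTS = [  # placeholder list of indexer status endpoints
    "https://indexer-a.example.com/status",
    "https://indexer-b.example.com/status",
]

query = """
query ($subgraphs: [String!]!, $blockNumbers: [Int!]!) {
  publicProofsOfIndexing(subgraphs: $subgraphs, blockNumbers: $blockNumbers) {
    deployment
    proofOfIndexing
    block { number }
  }
}
"""

pois = {}
for url in STATUS_ENDPOINTS:
    try:
        resp = requests.post(url, json={
            "query": query,
            "variables": {"subgraphs": [DEPLOYMENT], "blockNumbers": [DISPUTED_BLOCK]},
        }, timeout=30)
        results = resp.json()["data"]["publicProofsOfIndexing"]
        pois[url] = results[0]["proofOfIndexing"] if results else None
    except Exception as err:  # unreachable endpoint, schema mismatch, etc.
        pois[url] = None
        print(f"{url}: {err}")

consensus = Counter(v for v in pois.values() if v)
print("POIs by indexer:", pois)
print("Consensus POI:", consensus.most_common(1))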

While I understand that database corruption is an unfortunate and sometimes unavoidable issue, I would like to confirm my understanding of the timeline you have provided, to ensure accuracy and prevent any misunderstanding.

To elaborate:

  • On March 17th, 2025, you began syncing the arbitrum-rpc-revert-hs subgraph and opened the first allocation, 0xb56ca95858b7c1519ac5f703cb5726f9fa76a562.
  • By March 31st, 2025, the arbitrum-rpc-revert-hs subgraph was fully synced and allocation 0xb56ca95858b7c1519ac5f703cb5726f9fa76a562 was closed.
  • The subgraph remained synced until a critical database failure occurred on April 12th, 2025, after which you initiated a resync of subgraphs from scratch.

Could you please confirm if this understanding is accurate?

Hi Inspector_POI,

The timeline is correct. The reason the subgraph is not returning the POI through the GraphQL interface is that it is still re-syncing; we started that process on April 12th, 2025, after the crash. We are currently at block #320881375 for that subgraph, while you are querying #321468804.

However, the POI was present on March 31st, 2025, since we had the fully synced version at that time.

Thank you for your response.

Could you please run the following SQL query on both the database directory you retained from before the failure (the subgraph database) and the database that is currently re-syncing? This will help us verify the state of the data before the incident. You may attach the results here.

SELECT
  c.relname AS table_name,
  pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size,
  pg_size_pretty(pg_relation_size(c.oid)) AS table_size,
  pg_size_pretty(pg_indexes_size(c.oid)) AS index_size,
  COALESCE(s.n_live_tup, 0) AS estimated_rows
FROM
  pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  LEFT JOIN pg_stat_user_tables s ON s.relid = c.oid
WHERE
  n.nspname = 'sgdxxx' -- replace with the subgraph deployment's schema name (sgdNNN)
  AND c.relkind = 'r'
ORDER BY
  pg_total_relation_size(c.oid) DESC;
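
In case it helps, here is a minimal sketch of running a condensed version of the query above against both databases from Python with psycopg2; the connection strings and the sgdxxx schema name are placeholders to replace with your own.

# Hypothetical sketch: run a condensed size/row-count query against both the retained
# (pre-crash) database and the currently syncing one, and print the results side by
# side. Connection strings and the schema name are placeholders.
import psycopg2

QUERY = """
SELECT c.relname,
       pg_total_relation_size(c.oid),
       COALESCE(s.n_live_tup, 0)
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
LEFT JOIN pg_stat_user_tables s ON s.relid = c.oid
WHERE n.nspname = %s AND c.relkind = 'r'
ORDER BY pg_total_relation_size(c.oid) DESC;
"""

DATABASES = {
    "retained (pre-crash)": "postgresql://graph:secret@old-host:5432/graph-node",
    "currently syncing":    "postgresql://graph:secret@new-host:5432/graph-node",
}
SCHEMA = "sgdxxx"  # the subgraph deployment's schema

for label, dsn in DATABASES.items():
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(QUERY, (SCHEMA,))
        print(f"-- {label} --")
        for relname, total_bytes, rows in cur.fetchall():
            print(f"{relname:<20} {total_bytes / 1024 / 1024:>10.1f} MB  ~{rows} rows")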

Additionally, if you happen to have any indexer-agent logs related to the allocation closure, we would greatly appreciate it if you could share them. These logs are crucial for understanding the sequence of events leading up to the failure.

Hi Inspector_POI,

Below you can find the indexer agent logs related to the closed allocation and the table data from the live database.

We are still working on moving the backed-up crashed database to a new server, where we will run a Postgres instance and get you the results of the other query you requested.

{"level":30,"time":1743429086408,"pid":1,"hostname":"722a8eb22524","name":"IndexerAgent","component":"AllocationManager","protocolNetwork":"eip155:42161","action":266,"deployment":{"bytes32":"0x51689c0eade265f736c7b06a7dfaa7a26d37d18017873265b0b1ae26919a5b43","ipfsHash":"QmTpSFUKL7QM6DLNdHfGHqjjTYfu3SofpTqaQYEr9kH8ne"},"allocation":"0xB56CA95858B7c1519Ac5F703CB5726f9FA76A562","indexer":"0x9082F497Bdc512d08FFDE50d6FCe28e72c2AdDcf","amountGRT":"145800.0","poi":"0x00836b67045e09b79257f2d22f576dacfd6f7bcdb669e38ab5810795d3a4c421","transaction":"0x8b53b160b895404a2880bf74f9b46829dfedee833deeb822b48a1913e3ddfd12","indexingRewards":{"type":"BigNumber","hex":"0x342193e0cf1836f630"},"msg":"Successfully closed allocation"}
table_name     total_size   table_size   index_size   estimated_rows
poi2$          1767 MB      560 MB       1207 MB      5375287
global         1105 MB      350 MB       755 MB       5375707
data_sources$  32 kB        0 bytes      24 kB        0

Thank you for your cooperation. Looking forward to the results from the crashed database.

Hey Inspector_POI,

We have not yet been able to restore the database, since we are copying 7 TB from a storage disk to a server and it is taking longer than we hoped. In the meantime, the subgraph (QmTpSFUKL7QM6DLNdHfGHqjjTYfu3SofpTqaQYEr9kH8ne) sync has just caught up with the block in dispute for allocation 0xb56ca95858b7c1519ac5f703cb5726f9fa76a562.

Can you please run your tooling again to check the POI validity? I just tried with a local script and it matches the one we presented for that allocation.
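
For reference, a minimal sketch of what such a local check could look like: the index-node and RPC URLs are placeholders, the submitted POI is taken from the agent log above, and proofOfIndexing is graph-node’s indexing-status API.

# Minimal sketch: recompute the POI for the disputed block on the local index node and
# compare it with the POI submitted on-chain (taken from the agent log above). The
# index-node and RPC URLs are placeholders; proofOfIndexing is exposed on graph-node's
# index-node port (commonly 8030).
import requests

INDEX_NODE = "http://localhost:8030/graphql"        # placeholder local index-node endpoint
ARBITRUM_RPC = "https://arbitrum-rpc.example.com"   # placeholder Arbitrum One RPC
DEPLOYMENT = "QmTpSFUKL7QM6DLNdHfGHqjjTYfu3SofpTqaQYEr9kH8ne"
INDEXER = "0x9082f497bdc512d08ffde50d6fce28e72c2addcf"
DISPUTED_BLOCK = 321468804
SUBMITTED_POI = "0x00836b67045e09b79257f2d22f576dacfd6f7bcdb669e38ab5810795d3a4c421"

# The POI is computed over a specific block hash, so fetch it from the chain first.
block = requests.post(ARBITRUM_RPC, json={
    "jsonrpc": "2.0", "id": 1, "method": "eth_getBlockByNumber",
    "params": [hex(DISPUTED_BLOCK), False],
}, timeout=30).json()["result"]

query = """
query ($subgraph: String!, $blockNumber: Int!, $blockHash: Bytes!, $indexer: Bytes) {
  proofOfIndexing(subgraph: $subgraph, blockNumber: $blockNumber,
                  blockHash: $blockHash, indexer: $indexer)
}
"""
poi = requests.post(INDEX_NODE, json={
    "query": query,
    "variables": {"subgraph": DEPLOYMENT, "blockNumber": DISPUTED_BLOCK,
                  "blockHash": block["hash"], "indexer": INDEXER},
}, timeout=30).json()["data"]["proofOfIndexing"]

print("Locally computed POI:", poi)
print("POI submitted on-chain:", SUBMITTED_POI)
print("MATCH" if poi and poi.lower() == SUBMITTED_POI.lower() else "MISMATCH")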

If required, we can also pg_dump the current tables, which include all the entities and POIs that led to that result. Let me know.

Thanks

For this subgraph, I’ve checked your status, and the public POI checks out. Regarding the migration of the crashed database, do you have an ETA for its completion?

Dear Arbitration Team,

I don’t believe I have anything further to add beyond the last requested SQL query results from the crashed database. Please advise whether this is still required or if the materials provided by the disputed indexer thus far are deemed sufficient.

Once again thank you for your efforts @megatron.

Inspector_POI, this is the result of the requested query on the crashed database.

table_name     total_size   table_size   index_size   estimated_rows
poi2$          3096 MB      1160 MB      1936 MB      0
global         2036 MB      730 MB       1305 MB      0
data_sources$  32 kB        0 bytes      24 kB        0

After reviewing the evidence, including logs from @megatron and observations and data from @Inspector_POI, the Arbitration Council has decided to resolve the dispute as a draw.

We appreciate the detailed evidence, logs, and observations provided, as they were invaluable in helping the Arbitration Council reach an informed decision. We also recognize the time and effort you both dedicated to submitting clear and thorough information, which ultimately strengthens the integrity and transparency of the dispute resolution process.

The Arbitrators

The dispute has been drawn here: Arbitrum One Transaction Hash: 0x7c488f1702...
