Request for information about disputes #GDR-12

The arbitrators are contacting indexer address 0x1a99dd7d916117a523f3ce6510dcfd6bceab11e7 (p-ops) about disputes filed in the protocol.

To make the investigation as brief as possible, please provide the following information and any other relevant records about the open disputes:

  • Version of graph-node used.
  • Graph-node configuration in full.
  • Type and version of Ethereum node.
  • Table of PoIs generated by the affected subgraphs.
  • A dump of the call cache data from the database for the affected subgraphs.
  • Entity values as of the divergent block once identified.

This is not an all-inclusive list. Requests for additional information may be made if considered necessary.

How to Get Relevant Data

You can use the following queries to extract the information needed for the evaluation from the indexer software's database.

# Get the subgraph's deployment schema
SELECT name FROM public.deployment_schemas WHERE subgraph = 'QmSEeNzFx6bimFBwtdbj9uvcx4Hrdsx84XYWSFawy8CV2V';

# Move to the subgraph's schema (the returned name, e.g. sgd4, is the <SUBGRAPH_SCHEMA> used below)

# Dump the PoI table (run outside the psql console, but on the database server)
pg_dump --dbname=<YOUR_DB_NAME> --host=<DB_HOST> --port=5432 --username=<USERNAME> --table='<SUBGRAPH_SCHEMA>."poi2$"' --file='<FILE_TO_SAVE_TO>'

# Dump the call cache (on some setups this table may be in a different schema; check with `select * from public.chains`)
pg_dump --dbname=<YOUR_DB_NAME> --host=<DB_HOST> --port=5432 --username=<USERNAME> --table='public.eth_call_cache' --file='<FILE_TO_SAVE_TO>'

Once a divergent block is identified:

# Loop through all entity tables and get the changes for that block.
# For each entity table in the subgraph's deployment schema:
select * from <entity_table> where lower(block_range) = <DIVERGENT_BLOCK>;
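The per-table loop described in the comments above can be sketched as a small shell script. This is illustrative only: the schema name sgd4 comes from the dump shared later in this thread, the block number is the reference block from the dispute, and the table names are hypothetical stand-ins; in practice the table list would be pulled from information_schema for the deployment schema.

```shell
# Sketch only: emit one query per entity table for the divergent block.
# SCHEMA and BLOCK are values from this thread; TABLES is a hypothetical stand-in.
# In practice, fetch the real table list with something like:
#   psql -Atc "select table_name from information_schema.tables where table_schema = '$SCHEMA'"
SCHEMA=sgd4
BLOCK=15048900
TABLES="pool swap token"

QUERIES=$(for t in $TABLES; do
  echo "select * from ${SCHEMA}.${t} where lower(block_range) = ${BLOCK};"
done)
echo "$QUERIES"
```

Each emitted statement can then be run in psql and the results attached to the dispute thread.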

Purpose of the Request

This request relates to the following disputes:

Dispute (0x40852da89f1ae64ccf68a1ae0614f6f16aa1c35fc6ec4f9b0537796d054d69e5)
β”œβ”€ Type: Indexing
β”œβ”€ Status: Undecided (2.98 days ago)
β”œβ”€ Indexer: p-ops (0x1a99dd7d916117a523f3ce6510dcfd6bceab11e7)
β”œβ”€ Fisherman: 0xc56961836857210e256d71c91a62e90865075380
β”œβ”€ SubgraphDeployment
β”‚  └─ id: 0x39e4e3dfec58b8e4ddc213ffb907922b6a46e1475312046cd3a1796fafa4d66e (QmSEeNzFx6bimFBwtdbj9uvcx4Hrdsx84XYWSFawy8CV2V)
β”œβ”€ Economics
β”‚  β”œβ”€ indexerSlashableStake: 60138.972274680226652204 GRT
β”‚  └─ indexingRewardsCollected: 469.950294852107025 GRT
β”œβ”€ Allocation
β”‚  β”œβ”€ id: 0x1911a590488db5f9dcfe5182fe40239611b5d9f6
β”‚  β”œβ”€ createdAtEpoch: 539
β”‚  β”œβ”€ createdAtBlock: 0x2ebd8da7212fa781459a1248eb759db039225be7fa7dcd02c88a2f63a8813a6b
β”‚  β”œβ”€ closedAtEpoch
β”‚  β”‚  β”œβ”€ id: 542
β”‚  β”‚  └─ startBlock: 0xc7f97ac06b38c4732e499be5c54c15fd1231a313b2108baae87fbbf7c74bb5b7 (#15048900)
β”‚  └─ closedAtBlock: 0x4120b9606c58b5eb4a231fcfb47a2ed6d2491180a6abf892b0a6d41bbc1bacae (#15055510)
└─ POI
   β”œβ”€ submitted: 0x4945c638a879a0c0b646c4ae04a2526a64521137af77152c9e889354154d78d7
   β”œβ”€ match: Not-Found
   β”œβ”€ previousEpochPOI: Not-Found
   └─ lastEpochPOI: Not-Found

About the Procedure

The Arbitration Charter regulates the arbitration process. You can find it in Radicle project ID rad:git:hnrkrhnth6afcc6mnmtokbp4h9575fgrhzbay

For communications, please use this forum. Additionally, please monitor the #arbitration channel on Graph Protocol’s Discord in case an arbitrator reaches out for more information.


As most of you here know, we have been an exemplary indexer so far, without any issues. This is the first time we are being called upon, so please bear with us.

First of all, let me explain that we had just moved our servers for The Graph. I am not sure exactly when we fully migrated, but it was before the allocation in question happened, so this was on our current setup. We had lots of issues even setting up, as we used the latest versions, as follows:

  • graph-node 0.26.0
  • graph indexer and agent 0.19.3
  • erigon 2022.06.05-alpha (it may have been on beta at the time, as we uncovered a bug where subgraphs synced up to a point with the new software, then stopped until the erigon daemon was restarted)

Now, to get to the point. I am usually the person that opens and closes allocations, and for the allocation in question I closed it via the agent (never’d the subgraph and restarted the agent to speed it up, as I usually do). Before I close an allocation I always check our Grafana dashboard to see whether the subgraph I am trying to close is in sync or having an issue; there was no issue here, and in fact there still isn’t.

Given what I saw on Discord about this subgraph, I have no idea how this is possible, but the fact is that this particular subgraph is somehow still syncing/in sync for us, as you can see here:

I know this is just a screenshot, but if needed we would be willing to give access to Grafana, or even to our indexer server, to @Ford, as we know and trust him. Here is also the link to the table dump I did, but from here on out I have literally no idea what to do: sgd4 - Google Drive
Call cache file: callcache - Google Drive

We are happy to participate and try to help find out what happened, but I want to stress that whatever happened here did not happen maliciously; I closed the allocation because I saw that the subgraph was in sync on our dashboard (the screenshot above is from a few minutes ago, so it is still in sync). I think both the fisherman, who knows us well, or at least me, and @Ford know we are not here to do anything malicious, especially not risking our reputation held since genesis, and the slashable stake, for those measly 400 GRT tokens.


This subgraph (Curve Factory Pools) is broken and not indexing (the indexer node says DeploymentNotFound), and everyone closes it with a 0x0 POI. I have a small allocation open, and my agent (v0.19.3) fails to close it automatically, erroring that it failed to retrieve a valid POI. If what the indexer claims is right (and it seems to be, as 400 GRT of reward is indeed not worth the reputation) and the agent closed it as the result of a software bug, I’d vote for drawing the dispute.


Hello, thanks for the shout out!

This is really odd, yes. I have been reading that it can’t sync because graph-node 0.26.0 doesn’t support version 0.0.5 of the manifest spec that subgraph needs, so I panic-checked three times whether our graph-node is really the latest version that is allowed, and it is: 0.26.0.

Another thing that bugs me is why the subgraph is still in the active subgraph list and shows as synced, like all the other healthy subgraphs we index. And lastly, I usually rely on our own system and Grafana reports for broken subgraphs, as with the latest one on Unicrypt. I don’t check subgraph health often, so I didn’t even know there was an issue with it, since it never broke down for us.

Anyway, as I said, I know the screenshot doesn’t prove much, since it’s just a screenshot, but either I or a teammate can give access to our Grafana (and, in the extreme case, to our indexer server) so that a person like Ford could confirm that what I’m saying is true.

p-ops is a reputable indexer, and they wouldn’t do this on purpose for just 400 GRT of reward. I have checked some of the information they provided, and I believe this is a bug in the software.

  1. The subgraph synced on their side.
  2. The indexer-agent closed it automatically with a valid POI.

I’d vote for drawing the dispute.


@mindstyle_p-ops Thanks for the info!


Reference epoch and block info for POI:
Epoch: 542
Block: 15048900
Hash: 0xc7f97ac06b38c4732e499be5c54c15fd1231a313b2108baae87fbbf7c74bb5b7

POI Query:

http post http://localhost:8030/graphql query='{
  proofOfIndexing(
    subgraph: "QmSEeNzFx6bimFBwtdbj9uvcx4Hrdsx84XYWSFawy8CV2V",
    blockHash: "0xc7f97ac06b38c4732e499be5c54c15fd1231a313b2108baae87fbbf7c74bb5b7",
    blockNumber: 15048900,
    indexer: "0x1a99dd7d916117a523f3ce6510dcfd6bceab11e7"
  )
}'

Response:

    "data": {
        "proofOfIndexing": "0x4945c638a879a0c0b646c4ae04a2526a64521137af77152c9e889354154d78d7"
    }

Indexer’s POI: 0x4945c638a879a0c0b646c4ae04a2526a64521137af77152c9e889354154d78d7
My POI Result: 0x4945c638a879a0c0b646c4ae04a2526a64521137af77152c9e889354154d78d7
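As a quick mechanical sanity check, the two values can be compared in a one-off script; both hashes below are copied from this thread.

```shell
# Compare the POI the indexer submitted on-chain with the locally generated one.
# Both values are taken from this thread.
SUBMITTED=0x4945c638a879a0c0b646c4ae04a2526a64521137af77152c9e889354154d78d7
LOCAL=0x4945c638a879a0c0b646c4ae04a2526a64521137af77152c9e889354154d78d7
if [ "$SUBMITTED" = "$LOCAL" ]; then
  RESULT="POI match"
else
  RESULT="POI mismatch"
fi
echo "$RESULT"
```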


When running Graph Node v0.26.0, the subgraph fails to deploy with the following:

subgraph_deploy failed, params: SubgraphDeployParams { name: SubgraphName("indexer-agent/SFawy8CV2V"), ipfs_hash: DeploymentHash("QmSEeNzFx6bimFBwtdbj9uvcx4Hrdsx84XYWSFawy8CV2V"), node_id: Some(NodeId("erigon")), debug_fork: None }, error: ResolveError(ResolveError(This Graph Node only supports manifest spec versions between 0.0.2 and 0.0.4, but subgraph `QmSEeNzFx6bimFBwtdbj9uvcx4Hrdsx84XYWSFawy8CV2V` uses `0.0.5`)), component: JsonRpcServer

However, when running the Graph Node fraction2 GitHub tag, the subgraph is successfully indexed and a POI match is found.

If Fisherman 0xc56961836857210e256d71c91a62e90865075380 would like to provide additional info, please try to do so in short order so the Arbitration Council can address this in a timely manner.


Hello all, throwing in my response as the fisherman for this dispute.

First off, I’d like to echo that p-ops is a productive indexer with a terrific reputation. I do not believe there is any malice in his activity.

Taking a pragmatic approach to what I saw: we attempted to sync Curve Factory Pools (as well as a few other recent deployments using spec version 0.0.5) and were unable to do so, hitting the same error mentioned above. Thus the submitted POI seemed invalid. I have been reaching out to subgraph developers who are publishing with spec version 0.0.5 to help them downgrade to 0.0.4 where possible.

The fact that the fraction2 tag can provide a valid POI, AND that it matches what the indexer submitted, is fascinating to me. @mindstyle_p-ops, are you running a backup to help test tagged versions? Should indexers consider a POI provided by an unofficial release to be a valid POI? If so, it would seem the indexer would actually win rather than draw.

The Arbitration Charter has language on what to do when an indexer is running an outdated official version, but it doesn’t address running unofficial releases in production.

It’s possible that a PoI or Attestation provided by an Indexer is invalid due to
the Indexer running an outdated version of the protocol software. As described in
GIP-XXXX (TBD), protocol upgrades of the Indexer software and Subgraph API will
specify a grace period during which the previous official version of the Indexer
software may still be run. For disputes involving a PoI or Attestation that is only
correct with respect to the previous official version of the Indexer software, the
Arbitrator must settle any such dispute as a Draw.

I’d also like to echo @KonstantinRM’s comment on this forum post: if we required a valid POI in order to open an allocation, we could avoid faulty allocations at the protocol level.


Hey @DataNexus !

Ok, first, to answer your question: we unfortunately don’t run any backups. We are pretty swamped as is, and half of our team is either down with covid or on vacation, but we might set one up later on.

As far as versions go, we have 0.26.0 installed, even with the help of @Ford, as we use binaries instead of docker, and those always come with many installation issues, but we still prefer them to the dockerized version. Well, @JB273 actually installed the software, but, like I said, with quite some help from Ford. I won’t claim we didn’t check out the wrong version, but when checking the graph-node version it does say 0.26.0:

$ graph-node -V
graph-node 0.26.0 :: hosted-current+87 (28580bb76 2022-06-21)

Now, it could be that the last part, or the number in brackets, means that this version has fraction2 included, but even if so, versioning should be super clear, and if something isn’t recommended for mainnet (e.g. 0.26.0 with fraction2), it should NOT be called 0.26.0 at all.

We also agree with what Konstantin said, although many of these issues could also be solved by simply providing good, stable, and clearly labeled releases.

Overall, I feel like there are many things still to work out here, including some of the rules, so that they’re clear, as well as the versions and the channels where these versions can be checked out. This has been an ongoing pain point, I feel, as is the fact that those (like us) who prefer to run binaries usually have a ton of issues setting everything up properly (I can’t remember all the issues we had with this installation, but there were many, and it took quite some hours to finally have everything working). So I would also ask for this part to be streamlined better in the future.

As far as this dispute goes: if, as Derek mentioned, the indexer were to win, we do not want to see Derek lose his deposit, as this case is too unclear for him to lose it over. If the Arbitrators can take this into account when making the decision, please do so. Let’s try not to hurt people who are actively making The Graph better.


Can we get some closure on this one soon? This very much reads as yet another learning exercise for all with no malicious intent.

A list of action items to help to avoid this scenario in the future, and a ruling on the outcome would be awesome.

Thanks, Jim! I was just about to gently ask the same, since it has been months now.

Also agreed. I think that, by way of the discovery process, we have seen that there was no malicious intent, but rather that a more structured version-upgrade process needs to be defined.