Request for information about disputes #GDR-7

The arbitrators are contacting indexer address 0x453b5e165cf98ff60167ccd3560ebf8d436ca86c about disputes filed in the protocol.

To make the investigation as brief as possible, please provide the following information and any other relevant records about the open disputes:

  • Version of graph-node used.
  • Graph-node configuration in full.
  • Type and version of Ethereum node.
  • Table of PoIs generated by the affected subgraphs.
  • A dump of the call cache data from the database for the affected subgraphs.
  • Entity values as of the divergent block once identified.

This is not an all-inclusive list. Requests for additional information may be made if considered necessary.

How to Get Relevant Data

You can use the following queries to extract the information needed for the evaluation from the indexer software.

# Get the subgraph's schema
SELECT name FROM public.deployment_schemas WHERE subgraph = 'QmTj6fHgHjuKKm43YL3Sm2hMvMci4AkFzx22Mdo9W3dyn8';

# Move to the subgraph's schema
SET SEARCH_PATH TO <RESULT_FROM_ABOVE>;

# Dump the PoI table (run from outside the psql console, but on the server)
pg_dump --dbname="<YOUR_DB_NAME>" --host=<DB_HOST> --port=5432 --username=<USERNAME> --table='<SUBGRAPH_SCHEMA>."poi2$"' --file='<FILE_TO_SAVE_TO>'

# Dump the call cache (on some setups this table may be in a different schema; check with `select * from public.chains`)
pg_dump --dbname="<YOUR_DB_NAME>" --host=<DB_HOST> --port=5432 --username=<USERNAME> --table='public.eth_call_cache' --file='<FILE_TO_SAVE_TO>'

Once a divergent block is identified:

# Loop through all entity tables and get the changes for that block,
# i.e. for each entity table in the subgraph deployment schema:
select * from <entity_table> where lower(block_range) = <DIVERGENT_BLOCK>;
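A minimal shell sketch of that loop, assuming psql access, the deployment schema name returned by the first query, and an already-identified divergent block (all placeholder values; block_range is the graph-node convention for entity tables):

# Loop over all entity tables in the deployment schema and dump the rows
# created at the divergent block (poi2$ is excluded, it is dumped separately)
SCHEMA='<SUBGRAPH_SCHEMA>'   # e.g. sgd123, from deployment_schemas
BLOCK='<DIVERGENT_BLOCK>'    # block number identified as divergent
for TABLE in $(psql --dbname=<YOUR_DB_NAME> -At -c "SELECT table_name FROM information_schema.tables WHERE table_schema = '$SCHEMA' AND table_name <> 'poi2\$';"); do
  echo "--- $SCHEMA.$TABLE at block $BLOCK ---"
  psql --dbname=<YOUR_DB_NAME> -c "SELECT * FROM $SCHEMA.\"$TABLE\" WHERE lower(block_range) = $BLOCK;"
done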

Purpose of the Request

This request relates to the following disputes:

├─ 0xacb9026d7aba202bdf26776d7b21bbc8215282f39d10a06b2dc20e6a36d2207a
│  ├─ Type: Indexing
│  ├─ Indexer: 0x453b5e165cf98ff60167ccd3560ebf8d436ca86c
│  ├─ Fisherman: 0xec022aa5960c97764237f3dde3ed4065dd7ecf2f
│  ├─ SubgraphDeployment
│  │  └─ id: 0x1d69c84478129a01ca1ab08c2337e640ff42d959637f21f8f2414cb7512086fa (QmQKU61VgfxvD1zsRKXydV1cDrirRJnjfmz6jnP3fonYZw)
│  ├─ Allocation
│  │  ├─ id: 0x7bb9cc693a6e1523fee00b99f4f405dd3df4f876
│  │  ├─ createdAtEpoch: 204
│  │  ├─ createdAtBlock: 0x8726ec4151c89d5bf022a904829d6f6de67591c46d4a53f30a6c53862f76713c
│  │  ├─ closedAtEpoch: 219
│  │  └─ closedAtBlock: 0xcd47ca957cda0bc46434ae8e5e05ab1abd06645ccb089bdfb565f9e6829623a1 (#12908408)
│  └─ POI
│     ├─ submitted: 0x5cc307ecd069223429de491e4d6a5a91e6d630fa3590dcfa1f5e3a99210bfe3b
│     └─ match: Not-Found

About the Procedure

The Arbitration Charter regulates the arbitration process. You can find it in Radicle project ID rad:git:hnrkrhnth6afcc6mnmtokbp4h9575fgrhzbay

For communications, please use this forum. Additionally, please monitor Graph Protocol’s Discord #arbitration (https://discord.gg/9gg7XvfggW ) for any arbitrator reaching out for more information.

Hey @ari, we will prepare the information and get back to you asap!

As the fisherman of these disputes, I have some explanation to offer about this case.

First of all, there are two disputes.

 0xacb9026d7aba202bdf26776d7b21bbc8215282f39d10a06b2dc20e6a36d2207a
│  ├─ Type: Indexing
│  ├─ Status: Undecided
│  ├─ Indexer: 0x453b5e165cf98ff60167ccd3560ebf8d436ca86c
│  ├─ Fisherman: 0xec022aa5960c97764237f3dde3ed4065dd7ecf2f
│  ├─ SubgraphDeployment
│  │  └─ id: 0x1d69c84478129a01ca1ab08c2337e640ff42d959637f21f8f2414cb7512086fa (QmQKU61VgfxvD1zsRKXydV1cDrirRJnjfmz6jnP3fonYZw)
│  ├─ Allocation
│  │  ├─ id: 0x7bb9cc693a6e1523fee00b99f4f405dd3df4f876
│  │  ├─ createdAtEpoch: 204
│  │  ├─ createdAtBlock: 0x8726ec4151c89d5bf022a904829d6f6de67591c46d4a53f30a6c53862f76713c
│  │  ├─ closedAtEpoch: 219
│  │  └─ closedAtBlock: 0xcd47ca957cda0bc46434ae8e5e05ab1abd06645ccb089bdfb565f9e6829623a1 (#12908408)
│  └─ POI
│     ├─ submitted: 0x5cc307ecd069223429de491e4d6a5a91e6d630fa3590dcfa1f5e3a99210bfe3b
│     └─ match: Not-Found

And

─ 0xb856a651b6ceec4d41f3903ca84544b32ccc452eedc5a4de2a523984b09cb2aa
│  ├─ Type: Indexing
│  ├─ Status: Undecided
│  ├─ Indexer: 0x453b5e165cf98ff60167ccd3560ebf8d436ca86c
│  ├─ Fisherman: 0xec022aa5960c97764237f3dde3ed4065dd7ecf2f
│  ├─ SubgraphDeployment
│  │  └─ id: 0x1d69c84478129a01ca1ab08c2337e640ff42d959637f21f8f2414cb7512086fa (QmQKU61VgfxvD1zsRKXydV1cDrirRJnjfmz6jnP3fonYZw)
│  ├─ Allocation
│  │  ├─ id: 0x26a964d7fd08092c4b5088e196e6519e3e9f4880
│  │  ├─ createdAtEpoch: 219
│  │  ├─ createdAtBlock: 0x168cb67d88aeed2e9de8cd469c864b56afb8511ae9d3d6558487bf4e669416ad
│  │  ├─ closedAtEpoch: 226
│  │  └─ closedAtBlock: 0x41cdd1a51905c1e8611ec86604b26fb4111d82034ff5dc1b162c7054c755c6ce (#12953001)
│  └─ POI
│     ├─ submitted: 0x5cc307ecd069223429de491e4d6a5a91e6d630fa3590dcfa1f5e3a99210bfe3b
│     └─ match: Not-Found

As you can see, there are two different allocations with different “created” and “closed” epochs, but they share the same POI, which simply cannot be accepted.

Anyway, even allocation 0x7bb9cc693a6e1523fee00b99f4f405dd3df4f876 has no valid POI.
The valid POI for this allocation is
0xbec9d3ebaaba99f6d4b7c6f61244c1322ae2bef498a741df4a3ddcc5917a7373

On the basis of the above, we see a direct and repeated violation of the rules, for which the indexer should be penalized. Thank you.


Below we provide the requested information for this dispute.

We are just not really sure what output is expected for the “Entity values as of the divergent block once identified” item. We would appreciate further help and information on that part.

Version of graph-node used.

graphprotocol/graph-node:v0.23.1

Graph-node configuration in full.

thegraph:
    image: graphprotocol/graph-node:v0.23.1
    environment:
      BLOCK_INGESTOR: thegraph-index-0001
      ethereum: mainnet:https://api.anyblock.tools/ethereum/ethereum/mainnet/rpc/******/
      ETHEREUM_RPC_MAX_PARALLEL_REQUESTS: 256
      GRAPH_ETHEREUM_TARGET_TRIGGERS_PER_BLOCK_RANGE: 256
      ETHEREUM_BLOCK_BATCH_SIZE: 100
      GRAPH_ETHEREUM_MAX_BLOCK_RANGE_SIZE: 2048
      GRAPH_ETHEREUM_MAX_EVENT_ONLY_RANGE: 1024
      GRAPH_ETHEREUM_MAX_PARALLEL_TRIGGERS_PER_BLOCK_RANGE: 800
      GRAPH_ALLOW_NON_DETERMINISTIC_FULLTEXT_SEARCH: "true"
      RUST_LOG: warn
      GRAPH_KILL_IF_UNRESPONSIVE: "true"
      GRAPH_IPFS_SUBGRAPH_LOADING_TIMEOUT: 180
      GRAPH_IPFS_TIMEOUT: 180
      GRAPH_MAX_IPFS_CACHE_SIZE: 256
      GRAPH_ENTITY_CACHE_SIZE: 40000
      GRAPH_QUERY_CACHE_BLOCKS: 2
      GRAPH_QUERY_CACHE_MAX_MEM: 5000
      GRAPH_GRAPHQL_MAX_FIRST: 5000
      ELASTICSEARCH_URL: http://****
      EXPERIMENTAL_SUBGRAPH_VERSION_SWITCHING_MODE: synced
      postgres_host: ****
      postgres_user: postgres
      postgres_pass: ****
      postgres_db: thegraph
      STORE_CONNECTION_POOL_SIZE: 128
      node_role: index-node
      node_id: thegraph-index-0001
      ipfs: https://ipfs.network.thegraph.com
      ETHEREUM_POLLING_INTERVAL: 125

Type and version of Ethereum node.

Anyblock RPC-API

Table of PoIs generated by the affected subgraphs.

A dump of the call cache data from the database for the affected subgraphs.

Entity values as of the divergent block once identified.

We are not entirely sure what output is expected here, how to identify a divergent block, or how to run the expected steps. We would appreciate some help here.

Statement

First of all, I would like to thank you for implementing this system and the role of the fishermen and arbitrators. They serve an important purpose and enable the decentralized system to function reasonably.

To clarify this situation satisfactorily, I would like to give some information and background. We at Anyblock Analytics are currently automating and optimizing the allocation process, and we are implementing this project with the help of a grant (see GitHub - anyblockanalytics/thegraph-allocation-optimization: Allocation Optimization The Graph).

We decided on a strategy for indexing subgraphs where we perform a small allocation on all new subgraphs that show up. We had a global rule to allocate 10 (later 2) GRT to ALL subgraphs with a minSignal of 500 (minStake did not work reliably). We have since changed the global rule to ‘never’ and hand-pick subgraphs, which ultimately hurts the overall ecosystem in our opinion. The original strategy served the purpose of providing new subgraphs directly with an indexer that is in sync and can process queries. However, over time we realized that there were some subgraphs that were either broken or created with bad intentions and then deleted (see also Request for information about disputes #GDR-3 - #5 by indexer_payne). We were not prepared for this case at the time; in other words, we were “baited” and indexed all the defective subgraphs. (We have since developed a script that blacklists most defective subgraphs, and we have shared this tool with the other indexers in Discord.)

In the course of this, at some point it was no longer possible through the indexer agent to place new manual allocations, or to close those old allocations for which we had no valid POI for the current epoch.

We came across two sources:

  1. https://hackmd.io/MMHefHW5TxOOLtqqutP-VA?both#9-Valid-Proofs-of-Indexing-for-a-given-epoch, which mentions that an indexer must submit a valid POI for the FIRST BLOCK of an epoch. If a POI is valid for previous blocks, then a dispute is a draw. It is also mentioned that an indexer can submit the last valid POI if the subgraph contains a bug.

  2. In Failed subgraphs - Manually Closing Allocations - The Graph Academy it is mentioned that, in order to maximize the rewards from a failed subgraph, every epoch should be checked to find the last valid POI.

For this purpose, we wrote a tool that, for each subgraph we had to close manually because the indexer agent was blocked, iterates through each allocation and looks for the last valid POI (thegraph-allocation-optimization/poi.py at develop · anyblockanalytics/thegraph-allocation-optimization · GitHub). If none is found, a 0x0 POI is output. If a last valid POI is found, it is used to close the allocation. Based on research in the two sources above, as well as in Discord, we were convinced that it is OK to submit the last valid POI when manually closing an allocation.
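In essence, the search that poi.py performs boils down to the following loop (a simplified sketch in shell rather than the tool's Python, assuming the epoch start blocks and hashes have already been fetched from the network subgraph; the example entry uses values from this dispute):

# Walk epochs backwards and return the first POI the graph-node can still
# produce; if none is found, fall back to the zero POI.
DEPLOYMENT='QmQKU61VgfxvD1zsRKXydV1cDrirRJnjfmz6jnP3fonYZw'
INDEXER='0x453b5e165cf98ff60167ccd3560ebf8d436ca86c'
# "epoch:startBlock:blockHash", newest first
EPOCHS=( "219:12902242:0x985c628817706e2b14fdba75b47c5794d4484ec4c593dda76719e02291f730fe" )
for ENTRY in "${EPOCHS[@]}"; do
  IFS=: read -r EPOCH BLOCK HASH <<< "$ENTRY"
  POI=$(http -b post http://localhost:8030/graphql query="query { proofOfIndexing(subgraph: \"$DEPLOYMENT\", blockNumber: $BLOCK, blockHash: \"$HASH\", indexer: \"$INDEXER\") }" | jq -r '.data.proofOfIndexing')
  if [ "$POI" != "null" ]; then echo "last valid POI (epoch $EPOCH): $POI"; exit 0; fi
done
echo "no POI found, closing with 0x0"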

Let us now move on to the first dispute:

This dispute is about allocation 0x7bb9cc693a6e1523fee00b99f4f405dd3df4f876. This allocation was created in epoch 204 and manually closed in epoch 219. The various sources said that for the epoch in which the allocation is closed, one must submit a valid POI for the FIRST BLOCK. If we check Graph Explorer, we see that for epoch 219 the starting block is “12902242”.
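For reference, the epoch start block can also be fetched programmatically from the network subgraph instead of reading it off the Explorer UI (a sketch; we assume the hosted-service endpoint of the mainnet network subgraph, and the exact URL may differ):

# Query the network subgraph for the start block of epoch 219
http -b post https://api.thegraph.com/subgraphs/name/graphprotocol/graph-network-mainnet query='{ epoch(id: "219") { startBlock endBlock } }'
# expected: {"data": {"epoch": {"startBlock": 12902242, "endBlock": ...}}}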

If we now query the indexer server for the POI for this block in the respective epoch, using this query:

http -b post http://localhost:8030/graphql query='query poi { proofOfIndexing(subgraph: "QmQKU61VgfxvD1zsRKXydV1cDrirRJnjfmz6jnP3fonYZw", blockNumber: 12902242, blockHash: "0x985c628817706e2b14fdba75b47c5794d4484ec4c593dda76719e02291f730fe", indexer: "0x453b5e165cf98ff60167ccd3560ebf8d436ca86c")}'

Then you get this POI as result: 0x5cc307ecd069223429de491e4d6a5a91e6d630fa3590dcfa1f5e3a99210bfe3b

We also submitted this POI to close the allocation. Since the allocation was closed manually and not through the agent, it was sadly re-allocated automatically and continued to sync.

We can also show a valid POI for the block at which the allocation was closed. The allocation was closed at block 12908408. For this block we get the POI 0x5d6a290ea0e8d23ce55e85a7e6ed7de883611b38be8630cb9eec6b43796a5f7b. This can be done with the query:

http -b post http://localhost:8030/graphql query='query poi { proofOfIndexing(subgraph: "QmQKU61VgfxvD1zsRKXydV1cDrirRJnjfmz6jnP3fonYZw", blockNumber: 12908408,blockHash: "0xcd47ca957cda0bc46434ae8e5e05ab1abd06645ccb089bdfb565f9e6829623a1", indexer: "0x453b5e165cf98ff60167ccd3560ebf8d436ca86c")}'

This block is also in epoch 219.

Let us now turn to Dispute 2:

This is allocation 0x26a964d7fd08092c4b5088e196e6519e3e9f4880, which was created in epoch 219 and closed in epoch 226. It is true that in this case we have no POI for epoch 226. However, we proceeded as recommended by the link from The Graph Academy: we determined the last valid POI for an epoch. The last valid POI was in epoch 219; it is the same POI as in the previous case because it is the same block and the same epoch. However, we can also provide a POI for the block at which the allocation was closed:

http -b post http://localhost:8030/graphql query='query poi { proofOfIndexing(subgraph: "QmQKU61VgfxvD1zsRKXydV1cDrirRJnjfmz6jnP3fonYZw", blockNumber: 12953001, blockHash: "0x41cdd1a51905c1e8611ec86604b26fb4111d82034ff5dc1b162c7054c755c6ce", indexer: "0x453b5e165cf98ff60167ccd3560ebf8d436ca86c")}'

The POI for this block would be 0xd1ddb0d49b62d6a678daa636969cb3ac081d9a89e5a535a6fc1241336c8d4096. With this we can at least show that we can produce a POI at all. We also had to close this allocation manually because of the same problem with the clogged indexer agent, and therefore used the last valid POI.

Technically, we might have made a mistake, but not with bad intent. It is very important to us to give something back to the community, and that is why we develop all these tools (blacklisting tool, automatic allocation tooling, last-valid-POI generation tooling, …) to help indexers avoid as much manual work as possible and to automate it without running into malicious bait subgraphs.

Ultimately, if you have a look at the transaction history of our operator account (https://etherscan.io/txs?a=0x6bb16952cf5754651a5b1334422c95e3f55e2c95) you can see that to close all pending and broken allocations on 2021-08-03, we spent over 1 ETH to claim roughly 60 GRT, which is not the best strategy to get rich quick :wink:


Hello, and thank you very much for your answer. I was impressed, but you made several mistakes, so let’s figure it out together )

Number 1: The automation and optimization tool. An excellent goal, but we have the Testnet for testing such tools. Just imagine what would happen to mainnet if everyone tested things there with impunity. It cannot be accepted at all. Insufficient checks or a tired developer cannot be an excuse for a breach of the rules.

Number 2: You see, the rule that “If a POI is valid for previous blocks, then a Dispute is a Draw. It is also mentioned that an indexer can submit the last valid POI if the subgraph contains a bug.” applies only if the subgraph is broken or contains a bug.
Subgraph “QmQKU61VgfxvD1zsRKXydV1cDrirRJnjfmz6jnP3fonYZw” is not broken and does not contain any bug, so you could not close it with the previous valid POI. If you have any problem with a subgraph, you have to close it with 0x0.

Number 3: Let’s go to the first dispute :slight_smile:
It’s true, you can get your own POI with the query:
http -b post http://localhost:8030/graphql query='query poi { proofOfIndexing(subgraph: "QmQKU61VgfxvD1zsRKXydV1cDrirRJnjfmz6jnP3fonYZw", blockNumber: 12902242, blockHash: "0x985c628817706e2b14fdba75b47c5794d4484ec4c593dda76719e02291f730fe", indexer: "0x453b5e165cf98ff60167ccd3560ebf8d436ca86c")}'

But take a look: you use your own graph-node and database servers to do that. How can you be sure you don’t have an error in your database? Indexers use special tools:

For example, this is the result of “graphprotocol-poi-checker”:

Or, if you insist, this is the result of the query:

I put the picture in the next post because of a rule of the forum.

See? Something different. And as you may notice, the POIs are valid for everyone but you. Try running “graphprotocol-poi-checker” on your graph-node.
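Conceptually the cross-check is nothing magic: it is the same proofOfIndexing query, run both against your own graph-node and against an independent reference node, with the results compared (a sketch; REF_ENDPOINT is a placeholder for any trusted indexer’s index-node status endpoint):

# Compare the locally generated POI against a reference graph-node
Q='query { proofOfIndexing(subgraph: "QmQKU61VgfxvD1zsRKXydV1cDrirRJnjfmz6jnP3fonYZw", blockNumber: 12902242, blockHash: "0x985c628817706e2b14fdba75b47c5794d4484ec4c593dda76719e02291f730fe", indexer: "0x453b5e165cf98ff60167ccd3560ebf8d436ca86c") }'
MINE=$(http -b post http://localhost:8030/graphql query="$Q" | jq -r '.data.proofOfIndexing')
REF=$(http -b post "$REF_ENDPOINT" query="$Q" | jq -r '.data.proofOfIndexing')
[ "$MINE" = "$REF" ] && echo "POI matches reference" || echo "POI diverges: $MINE vs $REF"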

On the basis of the above, unfortunately, you do not have a valid POI for this allocation.

Number 4: The second allocation. It’s clear: as I wrote in Number 2, you can close your allocation with the last valid POI only if the subgraph is broken. Anyway, because the POI of the first allocation is not valid, even this justification cannot be accepted.

So, let’s summarise:

  • You tested your tool in mainnet and the tool made mistakes.
  • You turned off your indexer agent and closed your allocations manually, so it’s not an application error.
  • You didn’t check your POI, and you closed a lot of allocations with the wrong POI.
  • You closed your allocation with the last valid POI (but it wasn’t actually valid), and you couldn’t do that in this situation anyway.

I’m sure this case should be a showcase to the rest of us that indexers should check POIs before closing an allocation, and that you can’t do whatever you want on mainnet. I hope for a fair judgement, and thank you again.


The picture

Hey Fisherman!

Thank you for the further clarification and your detailed representation of the mistakes made. This really helps us track down what went wrong and hopefully avoid such situations going forward.

Allow me to elaborate a bit more on what actually happened on our side.

Number 1:

I am totally with you that one should test such a tool on the Testnet; however, this was not the cause of the problem here. The allocation tool is not yet used in an automated way at all; it only gathers information and suggests rules to be added to the indexer agent. And we didn’t even do that.

Instead, we set up a global rule decisionBasis=rules allocationAmount=10.0 parallelAllocations=1 minSignal=500 to pick up and pre-index all new subgraphs before actually allocating to them and taking them into production, thus providing the community access to new subgraphs as fast as possible. In the meantime we learned that pre-syncing is also possible with parallelAllocations=0, but we will rather hand-pick subgraphs going forward.
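For context, setting such a global rule with the indexer CLI looks roughly like this (from memory; the exact parameter syntax depends on the indexer CLI version in use):

# Global rule we used for pre-indexing all new subgraphs
graph indexer rules set global decisionBasis rules allocationAmount 10.0 parallelAllocations 1 minSignal 500
# What we have since switched to: hand-picking only
graph indexer rules set global decisionBasis never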

This led to a situation where the indexer agent picked up all the newly released broken and, even worse, bot-bait subgraphs that were immediately removed again. In this case the indexer agent was not able to automatically close the allocation, because the subgraph was no longer available. We first tried just setting the rule for these subgraphs to never, but due to missing metadata, the indexer agent just errored and stopped trying until the next restart.

We then tried manually removing only the failing allocation, but this just led to the indexer agent getting stuck on the next missing subgraph, which is why we ultimately decided to take the hit, manually close all open allocations (roughly 50, each transaction costing around $30), and start fresh.

Number 2:

The devil is probably in the details here. Yes, we closed the allocation with the wrong POI. Why it was wrong in the first place is what we still need to figure out.

However, as said, our indexer agent was completely clogged with broken and bait allocations and thus not working at all, so we saw this as a bug and followed the recommended procedures.

Honestly, we really tried to do the right thing here. We could have just closed everything with 0x0, but we thought it would be cleaner to provide the POI we were able to produce.

Number 3:

First of all, I would like to thank you for explaining this so clearly and sending the links along. In the future, I hope on the one hand that we no longer have to manually close allocations, and on the other hand that we will use these tools if it comes to that.

However, I would like to ask how these differences in the POIs can arise. To us this looks like a certain information asymmetry. If you consult sources like The Graph Academy (which was also sponsored by a grant from the Foundation), you will only get the information to figure out the POIs the way we did (Failed subgraphs - Manually Closing Allocations - The Graph Academy). Neither there nor in the official documentation (Indexer | Graph Docs) is it mentioned that one should question this POI again. There is a section in the official documentation that deals with security, infrastructure and POIs, but there is no mention of this.

Don’t get me wrong, we will use the graphprotocol-poi-checker and keep this in mind in the future. However, this is information that we feel is not readily available. If you rely on the common sources, there is nothing about counter-checking the POIs with an additional tool (as you would expect to get a correct result with the query shown at Graph Academy).

We do not want to discredit anyone with this statement. We appreciate the work of everyone in this community and think it’s fantastic how much has been and is being done. We try to do our part as well. Nevertheless, in this case it is a mistake that was not foreseeable and was not made with negative or malicious intent.

This was a mistake that could have happened to many indexers who were in our situation and tried to index as many subgraphs as possible. Also, our transaction history shows that we were not aiming to cheat any indexing rewards by doing this. By being forced to close the allocations manually, we rather lost money.

Basically, this problem leads to many new subgraphs not finding indexers directly. Our goal was also to help the ecosystem and new subgraphs by allocating to them directly and indexing them. Because the indexer software can’t handle the circumstance of broken subgraphs, we were forced to close allocations manually. In the future, we will have to select subgraphs manually, as anything else would pose too great a risk (in terms of time and money).

Number 4:

This is “just” a follow-up error, since the indexer agent somehow managed to pick up the freshly closed allocation and create a new one for this subgraph. The POI is obviously still invalid, since nothing changed in the meantime.

Next Steps:

All that being said, we still need to figure out what actually went wrong with the POI and how to proceed. We provided the requested data, but I’m still unsure how to figure out at which block our indexer diverged and how to locate the erroneous data. Wouldn’t we need a correct database dump to even make the comparison?

Is there any guide or forum thread we could follow? With the current tooling at hand, we can only say that the data is somehow wrong, but not why this happened, IMHO. And figuring out the why and fixing the root cause seems to be the most important issue, since obviously no one wants to serve incorrect data.
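The only systematic approach we can think of so far is a binary search over the block range, comparing our POI against a trusted reference at each step. A rough sketch, under the assumptions that such a reference endpoint exists, that divergence persists once introduced, and with poi_at and block_hash as hypothetical helpers wrapping the proofOfIndexing query shown earlier and eth_getBlockByNumber:

# Bisect to the first block where our POI stops matching the reference
LO=<SUBGRAPH_START_BLOCK>; HI=12902242   # 12902242 is known to mismatch
while [ $((HI - LO)) -gt 1 ]; do
  MID=$(( (LO + HI) / 2 ))
  HASH=$(block_hash $MID)                # hypothetical helper: eth_getBlockByNumber
  if [ "$(poi_at http://localhost:8030/graphql $MID $HASH)" = "$(poi_at $REF_ENDPOINT $MID $HASH)" ]; then
    LO=$MID                              # still matching at MID
  else
    HI=$MID                              # divergence at or before MID
  fi
done
echo "first divergent block: $HI"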

So, any help and pointers really appreciated here!

Thanks again for your explanation and the valuable information you provided. We are very curious to see how this procedure will turn out. We would like to emphasize again that we had no bad intentions. Of course we hope for a fair judgement and even more important: We hope that this dispute will show transparently how we work and what are also various aspects where the information situation, the information distribution and clarification as well as the indexer agent could be improved after the dispute.


Thank you for your elaboration too )
Now I would like to go back to the factual level, based on the Arbitration Charter.

I want to repeat the previous summary:

  • You tested your tool in mainnet and the tool made mistakes, even if it is only a “recommendation tool”.
  • You turned off your indexer agent and closed your allocations manually, so it’s not an application error, which means it’s not a determinism bug or anything like that.
  • You didn’t check your POI, and you closed a lot of allocations with the wrong POI.
  • You closed your allocation with the last valid POI (but it wasn’t actually valid), and you couldn’t do that in this situation anyway, because the subgraph was healthy.

Speaking about the “lack of information about POI cross-checks”: it was discussed many times in Discord, on this forum, and during office hours (if I’m not mistaken). We are working in a decentralized environment and can’t expect that everything will be fully described in one source.

Anyway, as you may know, there is the Arbitration Charter.

The motivation part of the Arbitration Charter tells us that Indexers should perform their work correctly and be honest: “The substance of the Arbitration charter is intended to ensure that the Arbitrator is fulfilling their role of supporting a healthy and functioning network where Indexers perform their work correctly, while minimizing the risk to honest Indexers of being economically penalized while interacting with the protocol in good faith.”

Unfortunately, there are a huge number of cases in which you provided the wrong POI.

And as we know now, it was not a determinism bug.

I believe that breaking the rules several times, without any attempt to verify the served data, is incorrect work. And speaking about failed or broken subgraphs when the disputed allocations don’t relate to them at all is also not the most honest behavior.

Also, the charter has rule №9, which says: “If the Indexer is unable to produce a valid PoI for whatever reason, then they must close the allocation with a so-called “zero PoI,” which is a PoI comprising all zeros.” It says nothing about how many rewards you have earned. You did not follow this rule: you didn’t check your POI, so, frankly speaking, you can’t be sure that your POI was valid.

By the way, that mistake was made for several subgraphs:


Hey :slight_smile:
just a quick update: we are unfortunately still stuck on the same step and don’t know exactly how to find the blocks where our indexer diverges.

We would greatly appreciate support in doing this, so that we can provide the requested data as soon as possible and thus resolve the dispute faster.

Hey everyone. Again, we want to emphasize that we want to help resolve this dispute. We are still stuck on the steps above and would appreciate support. @ari, we would like to ask about the current status of this dispute.

@ari
it seems this dispute is stuck. I think it should be closed as a draw.
Thank you.

We resolved the dispute as a draw, adapting the original interpretation of the charter based on further discussion on the forums by the community and core contributors.

As discussed on multiple occasions, we consider that there is value in serving old data for some use cases, and the curation/query market can decide about that. However, the Arbitrators suggest that clause “9. Valid Proofs of Indexing for a given epoch” should be reviewed, updated, and eventually put to a vote for ratification. In addition, a modification to graph-node is needed so that a failed block produces a deterministic POI, making it standard across multiple indexing runs.

Hello!

There is one more thing: there were two disputes about the same issue.

0xacb9026d7aba202bdf26776d7b21bbc8215282f39d10a06b2dc20e6a36d2207a
and
0xb856a651b6ceec4d41f3903ca84544b32ccc452eedc5a4de2a523984b09cb2aa

0xacb9026d7aba202bdf26776d7b21bbc8215282f39d10a06b2dc20e6a36d2207a was closed as a “Draw”,

but 0xb856a651b6ceec4d41f3903ca84544b32ccc452eedc5a4de2a523984b09cb2aa is still “Undecided”.

Can you close the second dispute as a “Draw” as part of this GDR?

Thank you.

@fisherman thanks for letting us know, the dispute is now closed as a draw.