Request for information about disputes #GDR-29

Dear Arbitration Team,
If the scope of this arbitration is to clarify whether there is

we believe that we have extensively demonstrated that we experienced a deterministic error and that the indexer agent acted accordingly. If this discussion is instead continuing in order to understand technically what happened so as to improve the protocol, we are happy to take it forward with the Graph team, providing all the data required. We do not think that @Inspector_POI is a necessary interlocutor in this case. Nevertheless, we will answer, once again, the points raised by @Inspector_POI, while also trying to explain why, in our opinion, this discussion has gone beyond the scope of this arbitration.

  1. We double-checked both the Grafana dashboard and the indexer status output at the same time, and the screenshots we posted are correct. Despite the “noticeable” differences, both point to the same source of error: “error while executing, at wasm backtrace” with “Mapping aborted at src/handlers/GNSTradingCallbacks/index.ts”. We therefore do not understand what you are trying to prove with this point. Perhaps that we are skilled enough to use Photoshop, yet careless enough not to photoshop both images with the exact same error message? If requested by the Graph team, we are more than willing to give them temporary access to the Grafana dashboard (a sketch of how such a status check could be reproduced independently is included after this list). This line of argument seems deliberately in bad faith and not at all directed towards a technical understanding of the problem.

  2. First: the picture you found digging through Discord shows 0.36.0 simply because one of the attempts to solve the “not a valid IPFS server” issue was upgrading the node. Once that issue was solved with the help of @DaMandal0rian, 0.36.0 was still not working properly, hence we downgraded. What you found is an (out-of-context) error log, not a log showing correct functioning of the 0.36.0 version of the node. This seems to be further evidence of acting in bad faith. We are currently running graph-node 0.35.1. Second: we are well aware of the instructions posted with the 0.36.1 release but, even following those instructions, we were not able to successfully upgrade the node (see above). As mentioned earlier, since the decision to move to a larger server had already been made, we did not consider it worth the effort to spend more time upgrading the node on the old server, and instead chose to restart on the new server with the latest version of the node. Nevertheless, we do not think we owe you any additional explanation about our internal decisions and, since you are so quick and diligent in spitting out dates, I am afraid that you do not really understand the time and effort necessary to run a node. Third: you keep citing the “proof” of duplicate POIs while at the same time mentioning the case of decisionBasis (@inflex) which, by the way, you started. From what we understood from that arbitration, duplicate POIs and deterministic errors are not something new. In addition, we can say that we were using erigon3 as the beacon layer for arbitrum-one but switched to erigon2 given the widely discussed common issues. Given that you are well aware of that other arbitration, we believe that you are acting in bad faith, showing a

to which we would like to draw the attention of the Arbitration Team.
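For completeness, below is a minimal sketch of how the status check described in point 1 could be reproduced against any graph-node instance. It assumes the index-node status API is exposed on its usual port 8030; the endpoint URL and the deployment hash are placeholders chosen for illustration, and the field names reflect our understanding of the status schema rather than output taken from our setup.

```typescript
// Minimal sketch: query graph-node's indexing status API to confirm a
// deterministic fatal error on a deployment. The endpoint and the
// deployment hash below are assumptions/placeholders, not values taken
// from this dispute.

const STATUS_ENDPOINT = "http://localhost:8030/graphql"; // assumed index-node URL
const DEPLOYMENT = "QmPlaceholderDeploymentHash";         // hypothetical deployment ID

const query = `
  query ($deployments: [String!]!) {
    indexingStatuses(subgraphs: $deployments) {
      subgraph
      health
      fatalError {
        message
        deterministic
        block { number hash }
        handler
      }
    }
  }
`;

async function checkDeterministicError(): Promise<void> {
  const res = await fetch(STATUS_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { deployments: [DEPLOYMENT] } }),
  });
  const { data } = await res.json();

  for (const status of data.indexingStatuses) {
    console.log(`deployment: ${status.subgraph}, health: ${status.health}`);
    const err = status.fatalError;
    if (err) {
      // A deterministic fatal error means every indexer running the same
      // deployment should fail at the same block with the same message.
      console.log(`deterministic: ${err.deterministic}`);
      console.log(`failed at block ${err.block.number}: ${err.message}`);
    }
  }
}

checkDeterministicError().catch(console.error);
```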

  1. If requested by the Graph team, we can open and close the allocation (a sketch of how the resulting POI could be cross-checked is included below). Old logs are no longer available, given the repeated Docker cache pruning performed due to the already mentioned lack of memory on the current server. Nevertheless, we struggle to see how this request can be made in good faith. We have proved the deterministic nature of the error; if the Graph development team (it is our understanding that @Inspector_POI does not belong to this team, but please correct us if we are wrong) deems it necessary to have access to this log in order to potentially fix a bug, we are more than willing to provide it.
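If the allocation is re-opened and closed as offered above, the resulting POI could be cross-checked directly against the indexer's graph-node. Below is a sketch of such a check under the assumption that the index-node status API exposes a proofOfIndexing field with the arguments shown; the endpoint, deployment, block, and indexer address are all placeholders, and the field and argument names should be verified against the graph-node version actually in use.

```typescript
// Minimal sketch: ask graph-node for the proof of indexing (POI) of a
// deployment at a given block, so it can be compared with the POI that
// is submitted on-chain when an allocation is closed. Every value below
// is a placeholder; none of them come from the dispute itself.

const STATUS_ENDPOINT = "http://localhost:8030/graphql"; // assumed index-node URL

// Hypothetical inputs.
const deployment = "QmPlaceholderDeploymentHash";
const blockNumber = 123456789;
const blockHash =
  "0x0000000000000000000000000000000000000000000000000000000000000000";
const indexer = "0x0000000000000000000000000000000000000000";

// Values are inlined to avoid assuming the exact scalar types of the schema.
const poiQuery = `{
  proofOfIndexing(
    subgraph: "${deployment}",
    blockNumber: ${blockNumber},
    blockHash: "${blockHash}",
    indexer: "${indexer}"
  )
}`;

async function fetchPoi(): Promise<void> {
  const res = await fetch(STATUS_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: poiQuery }),
  });
  const { data, errors } = await res.json();
  if (errors) {
    console.error("status API returned errors:", errors);
    return;
  }
  // If the error we hit is truly deterministic, any indexer syncing the
  // same deployment should obtain the same POI at this block.
  console.log("proofOfIndexing:", data?.proofOfIndexing);
}

fetchPoi().catch(console.error);
```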