Recommendations for MIPs-V2

Hi ellipfra,

I appreciate the time you took to provide such a thorough list of general recommendations and a proposed scoring methodology. I will do my best to address each of these below.

Goals of the program and expectations

Before I dive into your feedback, I want to reiterate that the goal of the program is, indeed, to bring new chains to the network. As a consequence, we needed to be sure we’d have a sufficient number of Indexers on the network serving all of these chains with the desired quality of service. Assuming we could do so with pre-MIPs Indexers alone would be too risky, which is why we decided to onboard many new stakeholders to the protocol in Phase 0.

Program structure and priorities

You make a good point about optimizing for maximum parallelization of chain integration. We do want to integrate more chains in parallel, and getting ready for that has been the focus for at least the last 6-7 weeks. However, we didn’t want to simply announce the next batch of chains and have Indexers scrambling to get started without minimal guidance. While experienced Indexers - and infrastructure operators in general - could get by with little information, we wanted to ensure newcomers would have the minimum information required to get started. In hindsight, we could have communicated the rationale behind this approach more clearly.

Beginning the program with only one chain (Gnosis) allowed us to focus on the most important thing at the time: ensuring the network and protocol were multi-chain ready. We’re past this stage now (a great achievement!) and, moving forward, the Indexer community, core developers, Foundation, and the Council can work more efficiently when adding new chains to the network at a quicker pace. Furthermore, the program attracted a large number of Indexers, which meant we also had to quickly scale the operations team to handle applications. With that behind us, we’re all focused on bringing new chains to the network and working more closely with participants.

Here’s a high-level list of things we’ve been actively working on with core contributors to gather the minimum information needed to onboard Indexers to these new chains:

  • Minimum hardware requirements to run an archive node.

  • Snapshots (ideally sourced through our collaboration with protocol teams).

  • Guides on running archive nodes. This involves testing new clients, working with the different teams (as we did with Nethermind, who were extremely helpful), and writing new docs when the official ones are outdated or lack critical details. In recent weeks we have worked on Docker images, bare-metal installation guides, and Helm charts for 4 different chains.

  • Validating that clients work reliably with Graph Node and expose the required interface to support all subgraph features (a rough sketch of this kind of check follows below).
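
For anyone bootstrapping a new chain, here is roughly what that validation looks like in practice. This is a minimal sketch under a few assumptions: the RPC URL is a local placeholder, and the exact set of methods Graph Node relies on (archive-depth block access, log scanning, and tracing for call handlers) varies by chain and client, so treat the list below as illustrative rather than an official checklist.

```python
# Minimal sketch: probe a node's JSON-RPC endpoint for calls Graph Node
# typically relies on. RPC_URL and the method list are illustrative only;
# the exact requirements vary by chain and client.
import json
import urllib.request

RPC_URL = "http://localhost:8545"  # hypothetical local archive node

CHECKS = [
    # historical block access (archive depth)
    ("eth_getBlockByNumber", ["0x1", False]),
    # log scanning used for event handlers
    ("eth_getLogs", [{"fromBlock": "0x1", "toBlock": "0x1"}]),
    # tracing, needed for call handlers on clients that support it
    ("trace_filter", [{"fromBlock": "0x1", "toBlock": "0x1"}]),
]

def rpc_call(method, params):
    payload = json.dumps(
        {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    ).encode()
    req = urllib.request.Request(
        RPC_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())

for method, params in CHECKS:
    try:
        result = rpc_call(method, params)
        if "error" in result:
            status = "error: " + result["error"]["message"]
        else:
            status = "ok"
    except Exception as exc:  # connection failure, timeout, etc.
        status = f"failed: {exc}"
    print(f"{method}: {status}")
```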

Regarding the roadmap, we have been working on a way to share timelines for all future chains, phases, and missions. A plan for the next 6-8 weeks was released a few days ago when the four new chains were announced. It will be continuously updated, and we’ll make an effort to include new chains and phases weeks in advance so Indexers can plan accordingly.

Also, in your original Forum post you called for clearer communication. Our current process has been to send updates through email and Discord as information becomes available, plus a dedicated section during IOH every week. To address your point, we are going to introduce scheduled touchpoints as well (such as weekly digest updates) to bring more structure to communication. We are open to any other ideas you might have in this regard! At the same time, we try to avoid communication overload, so as not to create confusion.

Rating system

I want to start by saying we’re already doing 90% of what’s recommended under C2 and C3 of your post, both in how we conduct testing and monitor progress on the two environments and in the proposed scoring criteria. I understand, however, that participants had no visibility into the methodology. To address this, we’ve been working on a more transparent solution.

The framework adopted to score participants on how they fare on the network is quite simple and QoS driven. While it is true we’re looking at Gateway-based metrics, we aren’t taking Indexers’ allocation size into account, only the following metrics gathered at query time (regardless of how many queries each Indexer serves individually):

  • Data freshness (number of blocks behind the head of the chain).
  • Query latency.
  • Success rate.

We’re essentially leveraging the monitoring systems core developers already had in place, while removing the ISA (Indexer Selection Algorithm) from the equation.
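
To make that concrete, here is a toy example of how these three signals could be folded into a single score. The weights, cutoffs, and normalization below are assumptions made for illustration, not the actual MIPs scoring formula; the point is simply that freshness, latency, and success rate are the only inputs, and allocation size is not.

```python
# Illustrative only: a toy score built from the three Gateway-measured QoS
# signals. Weights and normalization cutoffs are assumptions, not the real
# MIPs methodology.
from dataclasses import dataclass

@dataclass
class QosSample:
    blocks_behind: int    # data freshness at query time
    latency_ms: float     # query latency
    success: bool         # whether the query succeeded

def score(samples: list[QosSample]) -> float:
    """Return a 0-1 score; higher is better. All weights are hypothetical."""
    if not samples:
        return 0.0
    success_rate = sum(s.success for s in samples) / len(samples)
    avg_latency = sum(s.latency_ms for s in samples) / len(samples)
    avg_behind = sum(s.blocks_behind for s in samples) / len(samples)
    # Normalize latency and freshness into 0-1 (cutoffs are assumptions).
    latency_score = max(0.0, 1.0 - avg_latency / 5000.0)  # 5 s treated as worst case
    freshness_score = max(0.0, 1.0 - avg_behind / 100.0)  # 100 blocks behind scores 0
    # Note: allocation size never appears as an input.
    return 0.4 * success_rate + 0.3 * latency_score + 0.3 * freshness_score

print(score([QosSample(2, 350.0, True), QosSample(5, 800.0, False)]))
```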

Gateway-based public QoS data

To gather the data for the metrics mentioned above, a subgraph was built specifically for this purpose back in October: gateway-mips-qos-oracle. Its QoS data has been our source for scoring since Gnosis Phase 1. Furthermore, a public real-time dashboard exposing all of these metrics (and more) has recently been made available here: https://mips-qos.streamlit.app/. The main focus of this dashboard is to give mainnet participants better insight into Gateway-measured QoS across all chains. We hope MIPs participants (and mainnet Indexers in general) can use the dashboard to understand how QoS changes over time and how they rank against all others. The dashboard complements the public leaderboard; it is not a replacement. Please give it a try - we’d be happy to have your feedback on it. And if you build other dashboards on top of this QoS Oracle subgraph, please do share them with the community.
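
For anyone who wants to build on top of the QoS Oracle subgraph, the pattern is an ordinary GraphQL query against its endpoint. The sketch below uses a placeholder URL and hypothetical entity and field names; please check the actual schema of gateway-mips-qos-oracle before relying on it.

```python
# Minimal sketch of pulling QoS data from a subgraph's GraphQL endpoint.
# SUBGRAPH_URL and the entity/field names are placeholders; consult the
# gateway-mips-qos-oracle schema for the real ones.
import json
import urllib.request

SUBGRAPH_URL = "https://example.com/subgraphs/name/gateway-mips-qos-oracle"  # placeholder

QUERY = """
{
  indexerDataPoints(first: 5) {  # hypothetical entity name
    indexer
    chain
    avgQueryLatencyMs
    blocksBehind
    querySuccessRate
  }
}
"""

def fetch(query: str) -> dict:
    payload = json.dumps({"query": query}).encode()
    req = urllib.request.Request(
        SUBGRAPH_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    print(json.dumps(fetch(QUERY), indent=2))
```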

Again, we appreciate all feedback shared thus far, and we hope this post helps clarify some aspects of the program. We’ll continue iterating while making progress with the program. Please don’t hesitate to reach out!
