Suspending our participation in the MIPs program

As strong supporters of The Graph and its long-term vision, we believe the best way for us to contribute to The Graph’s success is by building our multi-chain operation and integrating other Graph products on our own terms. The MIPs program, in its current form, prevents us from doing what we think is best for the protocol. Participating in the MIPs program has been frustrating and has diverted our focus from the important work of bringing new chains to the network.

We will still be testing new chains on the testnet and supporting most of them on mainnet, but we will no longer be attempting to follow the MIPs schedule, fill out forms, create special endpoints, or meet any other MIPs-specific requirements. We will instead use the current and projected on-chain signal to guide our work schedule. Unfortunately, this also means we will be unable to support other indexers on Discord as effectively. We might revise this position if the Graph Foundation makes substantial changes to the program.

It is not a secret that our inability to obtain explanations for the scoring of phase 0 was a great source of frustration. Despite widespread criticisms of phase 0, the organizers stubbornly maintained the same opaque and flawed methodology for phase 1. More recently, the organizers’ silence on critical issues raised by us and others about the unfolding of phase 2 has irrevocably shattered our hope of a fair reward for our participation in the program.

We believe that the MIPs program is hindering rather than benefiting The Graph network. Here are the reasons why.


  • Instead of fostering a strong collaboration between indexers and developers to bring new chains to the network, the program has actually increased the distance between the Graph’s core teams and the indexer community. The influx of a large number of new participants to the network has overwhelmed the developers and has probably caused them to become less active in the community. Additionally, due to concerns about participants potentially gaining an unfair advantage or attempting to game the scoring system through MIPs, communication has been minimized.
  • Criticisms and offers of assistance are being ignored by the organizers, which has demotivated long-standing members of the community who feel that their contributions are not being recognized. As a result, The Graph is trying to navigate the biggest transition in indexing without the help of some of its most experienced experts.


  • The current program does not incentivize indexers to actually improve the software, contribute to the community, or build a setup that will scale to support the billions of requests that a multi-chain operation will entail. Nothing in the program ensures that indexers will support the chains for more than a few weeks. We think it instead favours “fly-by-night” indexers that deploy minimal setups using standard, simple configurations. We fear that these indexers will shut down and leave as soon as their GRT is distributed, and move on to the next incentivized testnet.
  • The program is focused on bringing new indexers to The Graph, instead of bringing new chains to The Graph. We applaud and encourage all initiatives that aim to bring in new Indexers, as we think it is for the best of the protocol, but using the MIPs program to do this is a distraction from the primary objectives and an inefficient allocation of resources. If the Foundation wants to provide grants for new indexers, it should do so in a separate, more focused program.

Scoring methodology

  • There has been no information shared about the scoring methodology, which is not in line with the core values of transparency in the blockchain and cryptocurrency space. Scores are published weeks after a phase has completed, long after the next phase has begun, making it impossible for indexers to address any issues in a timely manner. The organizers have also been unable to provide explanations to indexers about their scores, preventing them from improving their performance. Due to the lack of feedback, it is unlikely that the MIPs program will improve the quality of service on the network.
  • There has been an unnecessary focus on tokenomics, such as allocation size and rebate pool, which are well-known to the community and developers and are not being changed by the MIPs program. Additionally, gateways are not an effective way to evaluate the quality of service provided by indexers in the context of the program, due to the secret Indexer Selection Algorithm. In order to have confidence in the scores, indexers need access to the selection algorithm. As this is unlikely to be possible, an open source tool should be used instead. Furthermore, gateways rely on signals like economic security to choose indexers, which disadvantages smaller indexers.


  • The MIPs program could be greatly simplified by focusing on the right objectives. The current complexity is a strain on the organizers’ resources, and the opaque scoring mechanisms require organizers to spend a lot of time answering legitimate questions from participants.
  • There are no effective communication channels between program participants and organizers. Organizers are not engaging in Discord channels, and it takes weeks for them to respond to emails and private messages. This poor communication strains the trust of participants in the program.
  • The program was scheduled to begin in September 2022 and finish in March 2023. As of January 2023, more than halfway through the program, we have not yet finished the first chain. Indexers who had planned and budgeted for a six-month program will have to revise their estimates, and as a result, others may have to reconsider their participation.

As strong believers in the mission of The Graph, we are deeply disappointed by how poor execution is jeopardizing the goals of the program. We are calling on the Graph Foundation to make immediate changes.

Marc-André Dumas


We published our recommendations for a MIPs-V2 program in a separate post.


Thank you for taking the time to post this. More long-term mainnet Indexers have privately shared similar frustrations about the program; I respect the bravery of putting it out there in public.

Wavefive has also been “checked out” of the MIPs schedule for some time. This is both for personal health reasons unrelated to the program and due to many of the pain points that Marc-Andre has outlined above. The personal stress is simply not worth it, and every time we do work through the night to hit a MIPs goal, we get burned the next day by some new announcement or another technical issue that has not been considered.

The result is we are mostly absent in helping other Indexers get spun up because we are just doing things on our own schedule. If that fits with MIPs so be it; if not, we no longer get anxious and stressed about it.

I am not entirely sure how we fix this, but I do know two things:

  1. The MIPs project team are humans trying as best they can. Keep this in mind when you respond.

  2. Express your issues and frustrations here, but deliver them with an idea on how to solve the problem. Anyone who cares about MIPs and has found themselves disconnecting due to anxiety, stress, frustration, etc. needs to consider contributing to Marc-Andre’s proposal to correct course on MIPs, here.


@ellipfra As a member of the Foundation team working on MIPs, I wanted to let you know that I have read your post and I appreciate the time you took to share your frustrations and concerns (same to you, @cryptovestor). I want you to know that I am fully committed to working together and addressing the concerns of participants in the MIPs program.


(1) Clearly participants are unhappy and unsatisfied with how the score has been calculated, mainly because it has not been made transparent. Marc-Andre has outlined and proposed a rating system with three key criteria, which sounds great and useful. However, I foresee (once again) a “quantification” problem with these key criteria. It will be difficult to put concrete numbers/scores to these key criteria UNLESS a baseline or base criteria is clearly defined and established. Scoring is a relative term for which comparison MUST be done. If we have no base criteria to compare with, then it will NEVER be fair.

(2) One of the goals of the MIP program is to attract new potential indexers. The MIP program should be conducted in such a way that an open fight between a senior indexer, who has been accumulating delegations and rewards over the past years, and a newcomer is avoided during the execution of the MIP program. Senior, more experienced indexers will obviously have plenty of opportunities to make use of their strong muscles OUTSIDE of the MIP program.

(3) Talking about fairness in an incentivized program like MIP: create a baseline (maybe a strong, tough one that allows the team to gather all the information they need from the MIP program to do their complex analysis). Reward all the participants equally if they meet this baseline. This, in my opinion, is what we can term “EQUITY” and not “EQUALITY”.

(4) Everybody here wants the team to be more active, transparent and more organized. It has been a tough journey (no bed of roses) for us so far.


Just wanted to lend my +1 to this topic, as we at 0xFury also decided to quietly drop out of participation approx 1 month ago.

The ever-moving timelines and changes to docs made it hard to schedule our involvement around other daily tasks. This was the main (but not only) factor that ultimately led to the decision that we could no longer continue to follow along with the program.

Our main goal for participation was to be part of the process of moving into the multichain phase of the network as each new chain came along. But for now, we will instead follow this process from the sidelines and focus solely on mainnet.

When asking participants to dedicate their time and efforts to such a program, some things should be a given. Such as:

  • Open communication, with enough manpower behind it to provide fairly prompt replies to those taking part in the program. This is a two-way benefit.
  • Transparent scoring, preferably ‘before the fact’. We already have enough black box systems to contend with.
  • A schedule that respects and values the time of those who participate.
    We all know that, testnet or otherwise, things happen and delays occur in this space… frequently. However, please err on the side of overestimating delay periods and coming in under-time. And statements such as “we will give 24 hours+ notice” should mean exactly that. There are real people on this side of the fence too, often with a multitude of ‘other’ responsibilities to juggle alongside their participation in MIPs.

However, we don’t wish to focus on the negative or blame any single factor, but merely to highlight the biggest points of contention from our experience.

The important part is finding a solution and a willingness to make the changes required to keep long-standing members feeling just as engaged as any of our enthusiastic newcomers. We know that this team is capable of it, as Mission Control is something we Indexers have remembered fondly over the past 2 years.

Therefore, I’ll stop here and try to commit some time to your v2 post this week. And thank you @ellipfra for being the first one to bring this up, when it’s much easier to quietly ‘unparticipate’, as we and a number of others already did.


I believe there is a conflict in the goals of trying to bring on new chains and many new indexers at the same time. Ideally, these would have been two different programs.

Newcomers need a fair chance to participate in the program, and I agree that this is important. In my proposed changes, I have removed the Gateway and its token economic elements that could unfairly favor indexers with larger stakes. This also has the added benefit of simplifying things for organizers, in my opinion. These things are not easy to get right, and that is why I would love to see the community refine and improve upon this proposal.


I have two things to point out:

  1. Don’t mention that indexers can reuse their testnet indexer account and operator for mainnet; that should not be advised for security reasons.

  2. The mainnet ENS names are sometimes taken versus those on testnet, so I suggest you simply make a form so everyone can submit their indexer address, regardless of whether the ENS matches or not (I know of at least one case where the mainnet ENS name is taken, and I’m sure there are more).


Just wanted to comment in support of this post and any proposed improvements.

I’m not an indexer but I read this post and agree that we should improve the program especially considering the GRT investment into it.


Thank you @ellipfra for this post; all my thoughts about the MIPs program are directly mentioned here. Unclear criteria for evaluating work are always very annoying. You use all your resources and try to do well, then you see a low rating, you don’t understand the reasons, and the easiest decision is to just quit, because you can’t influence it. Evaluation criteria need to be made clear so that indexers try to work on quality. I support the changes suggested by @ellipfra; it is a very detailed list.


I completely agree with every word in the post.
With neither transparency in points nor answers to obvious questions, I would say MIPs is one of the worst-organized testnets ever. For instance, there were lots of questions about how points for phase 2 would be distributed (by collected or by claimed fees?), and the only answer was complete silence.


I want to begin by expressing appreciation to @ellipfra and all the others who took the time to share feedback regarding their experience participating in the MIPs program. Since @ellipfra first posted, I have had several conversations with Indexers in the MIPs program so that I can understand the different perspectives and collect more feedback. With hundreds of Indexers participating in MIPs, it’s important that the Foundation receives participant feedback, takes it seriously, and looks for ways to improve participants’ experience.

The objective of the MIPs program is to add support for new chains on the decentralized network and enable the migration of multi-chain subgraphs. As such, MIPs is clearly a necessary step for the protocol to realize its mission. And the experience of Indexers participating in the program needs to be a core metric for how we measure the program’s success.

Based on the feedback provided here, The Graph Foundation is reevaluating several aspects of the MIPs program with the intention of improving the Indexer experience. We’re focusing on improving (i) communication with program participants, (ii) how the scoring methodology is communicated, and (iii) mission objectives and timelines. We also intend to increase collaboration with program participants, core dev teams, and other members of the ecosystem. More on this in the recommendations forum post.

In order to establish reasonable expectations moving forward, it’s important to note that being open to feedback and improving the experience of participating Indexers does not mean that The Graph Foundation will be able to address every concern or satisfy every request, but we’ll do our best. As an example, some participants argue that MIPs should not have been used as a way for new Indexers to join the protocol. Whether this position is correct could be debated, but removing new Indexers from participating in MIPs is not something we can change or implement. When addressing feedback related to scoring methodology, incentive structure, or communications with participants, we must balance any improvements with equal attention to ensuring fairness and equity.

The Graph Foundation is committed to improving the MIPs program experience for all participants. We acknowledge the frustrations expressed by those who have shared their concerns and, once again, we look forward to working with the community to improve the program.

The fact that members of our Indexer community and participants in the MIPs program care enough about the network to share their feedback and take the lead in suggesting improvements speaks volumes about the quality of those building this community. We’re grateful for the feedback and appreciate your continued support of The Graph Network.


First and foremost, I want to thank @martintel and the Graph Foundation for taking the time to respond to our feedback. We appreciate the Foundation’s commitment to improving the indexer experience. The way The Graph handled this situation improved our trust in the program, and as part of the conversation we gained perspective from the Foundation’s point of view and came to better appreciate some aspects of the program.

We are glad to hear that the foundation is focusing on improving communication, how the scoring methodology is communicated, and mission objectives and timelines. We hope that this will lead to increased collaboration with program participants, core dev teams, and other members of the ecosystem. We, and we believe the indexer community as a whole, are also ready to step up and help more.

We are happy to announce that after careful consideration and following the recent developments in the MIPs program, we have decided to rejoin the program starting with Polygon. The recent launch of the gateway subgraph and accompanying dashboard has been a game changer for the protocol, as it shares the key metrics used in the scoring, which directly addresses some of our concerns about the lack of transparency in the scoring methodology. We are excited to be a part of it and bring new chains to the network.

We have noticed an improvement in communication since our previous post, however, we still believe that there is room for more collaboration between indexers, developers, and the foundation. We understand that building trust and fostering collaboration takes time, and we look forward to continuing to work together for the benefit of the protocol. We encourage the foundation to keep the conversation flowing between all stakeholders to ensure the continued growth and success of The Graph network.


It has been two months since our initial post and the community discussions that followed. We appreciate the efforts of Martin and Pedro in announcing three areas of improvement and making available a QoS subgraph and dashboard on Jan 24.

As of today, March 9th, we would like to share our observations on the progress made in the following areas:

1 - Communication with program participants:
We appreciate the improved quality of communication, especially the regular email updates and keeping the Notion page up-to-date. It’s helpful to have the Notion page as a reference. However, we suggest the team also post quick Discord announcements when updates are being made.

While we appreciate the Explorer team for manning the #mips-arbitrum-layer2 channel, we would like to see more regular participation from the Foundation and the Core Teams in Discord. Tagging individual team members often results in delayed responses, and when not tagged, there is no one available to address concerns. We hope to see more participation from the team with the community.

2 - How the scoring methodology is communicated:
While some general information on the scoring methodology has been shared, most requests for clarification are still left unanswered. We appreciate the improvements made to the leaderboard, which now shows individual phase scores. However, this is insufficient, as it does not provide any data beyond what participants can already determine on their own. We urge the team to improve the communication and transparency of the scoring methodology so that participants do not need to email them individually for clarifications. Also, the leaderboard contains errors, with duplicate entries and inconsistent data, which need immediate attention to maintain participants’ trust.

3 - Mission objectives and timeline:
We appreciate the improved communication of the mission timeline and the completion of several parallel missions. The new month-long mission on syncing and exploring a new chain is a step in the right direction. However, we still do not know when the program will end, which is causing uncertainty for indexer resources, especially new indexers. We recommend providing a high-level timeline for better visibility and clarity.

4 - QoS Data:
We were ecstatic about the availability of real-time QoS data, which could address several of our concerns. However, it was still incomplete when announced, and as of today, we do not consider it delivered. The subgraph and dashboard are often down, and data is missing. We understand the complexity of this deliverable, but we would appreciate better reliability and functionality to improve the quality of service on both testnet and mainnet for all Graph users. Also, we would like to point out that the Gnosis, Celo, Arbitrum, and Avalanche phases did not benefit from live QoS data.