It’s hard to draw up a precise timeline for something this big, and it will likely become clearer as we write down the specs for each approach (working on it these days). But here’s what I think regarding scope of work and timelines:
Greenfield: Needs a completely new Staking contract, a different Indexer Agent, and changes in the gateway code. There are many open questions in the details of how that Staking contract would work, which we might be able to simplify if we default to something closer to the current protocol. So I’d estimate a couple of months to finish the first iteration of the implementation, and then we’d need to do more than one audit and test it intensively. Once we deploy, we have a protocol that lives in parallel to the current one, with no Indexing Rewards. Then maybe we build tools for indexers to move stake between the two protocols? And this new protocol would have no UIs (e.g. Graph Explorer, Studio) unless those are implemented in parallel, and these would be completely new products to build.
Brownfield: The changes in the Staking contract are significant but manageable. We also need updates to Indexer Agent and gateway code. We change the allocations interface, change delegation and slashing, and we need to be very careful about the transitions. We can do this in a few steps, first introducing permissionless data services and later adapting the subgraph data service to the new mechanism. This obviously also needs audits and testing, and while the transitions require lots of care it will likely be a lot less code. As soon as we deploy each change, everyone can immediately take advantage of the improvements. We’d also need some changes to Explorer and Studio to integrate with the new allocations format as this will involve a few breaking changes. Going through the GIP process at each step can take time, but I’m optimistic we can move forward quickly considering the positive feedback we’ve received so far.
So I think in terms of “time to mainnet” both approaches are similar, but in the greenfield case once we get to mainnet it will take time for people to start using it (and rewards in the current protocol will incentivize against it).
(Edit: note by “mainnet” here I mean Arbitrum One)
Is there any reason why the gateway responsibilities proposed for the new DAO couldn’t just be assigned to the Core Devs as a collective?
I’m very skeptical of governance DAOs in general, as most seem to be ineffective, highly political, captured, and/or decentralized in name only. One of my favorite things about The Graph is that protocol governance isn’t held captive by a DAO. Not sure why we’d invite that headache into the ecosystem.
I totally agree with your sentiment regarding DAOs. In my mind, this organisation should be more akin to an industry association that sets standards for its members than a DAO that puts every governance decision to a vote.
This is something we’re thinking about at GraphOps and will likely share more in the coming months as third-party Gateways become a reality.
Hey Pablo (and Zac), thanks for putting this proposal together. It’s been great to see all the discussion so far, and I know a lot of great thinking by many people went into this.
I’m particularly encouraged to see so much momentum behind the “Brownfield” approach. Others have already made a good case for this, but I’ll add my strong support for the following reasons:
Unbundles higher priority work items from lower priority items, which gives the higher priority items a chance to ship as soon as possible.
Avoids another heavy migration—between hosted service migration and L2 migration, I think The Graph’s developers will have migration fatigue by now.
Gives each proposal an opportunity to be evaluated on its own merits and discussed by the community, rather than “tagging along” with other proposals that have broad support.
Preserves optionality if other high priority work items arise in the meantime that we would want to move towards the front of the queue.
Given the support there seems to be behind the Brownfield approach, I gather the next step will be to post independent GIPs to the forums, so I will reserve much of my proposal-specific feedback until then. For now, I’ll share a high level reaction to some of the framing I’ve seen attached to the Horizon proposals.
I want to preface by saying that on a long enough time horizon (pun intended) I’m supportive of much, if not most, of what’s being proposed in the Horizon bundle of proposals.
That said, I would like to challenge the idea that “permissionlessness” is a strategic objective our ecosystem should be orienting around in the short-to-medium term.
When this proposal was presented in Panama, permissionlessness featured heavily as one of the top three design goals, and it shows up again here as a stated benefit. Since I have not seen this reasoning questioned publicly anywhere yet, I would like to do so here.
To be clear, I’ve always thought building on The Graph (i.e., individual subgraphs or substreams modules) should be fully permissionless and censorship resistant. What I’m skeptical of is that building The Graph itself (i.e., unilaterally specifying the verifiability and economic properties of data services) should be fully permissionless at all, much less a top priority.
Rather, I believe The Graph should be focused on delivering the best possible service to as many developers and end-users as possible, while remaining true to the values of decentralization. To do that, it must:
Facilitate the development of new data services in response to market demand.
Achieve and maintain a high quality bar for all data services in the network, in terms of performance, liveness, and data correctness.
Avoid centralization vectors that would disintermediate the decentralized network and recreate the platform monopolies of web2.
Make sure rewards in the protocol flow to where value is actually being created, and establish robust economic flywheels.
…there is still quite a bit of work required to achieve this, much of it involving on-chain upgrades to the protocol.
Not only do I believe permissionlessness as a strategic objective is orthogonal to the above goals and thus presents a real opportunity cost, but it actually could be counterproductive:
Removing all barriers to a data service being considered part of The Graph potentially lowers the network’s quality bar, diluting The Graph’s brand and even harming the perceived quality of the more reliable data services.
Fragmenting governance of the protocol makes it harder for the protocol to be opinionated on how economic rewards or verifiability guarantees should flow across data services, which is very important for establishing economic flywheels in a world of composable data services.
Allowing any data service developer to define their own economics opens up vectors for unaligned actors to disintermediate the decentralized network. (You don’t need to believe these actors are malicious, just rational and self-interested).
Furthermore, I believe that orienting around this kind of “permissionlessness” as a path forward does harm by creating the illusion of progress:
It presents governance fragmentation as governance decentralization, without actually improving the underlying trust assumptions of using the protocol. Arbitration secured by some smaller DAO or centralized entity is not, in my opinion, much preferable to arbitration guaranteed by The Graph Council. We should be investing in true verifiability using refereed games or ZKPs, both of which require on-chain work.
For years, we have struggled as a core dev ecosystem to get data services that are arguably far simpler than Subgraphs, like Firehose or JSON-RPC, onto the network, and I’ve often heard a lack of permissionlessness or on-chain functionality offered as an explanation for why it cannot be done. As @Pablo alludes to above, as long as a data service can only be secured through arbitration or similar trust assumptions (which is the case for all of The Graph’s data services currently), it can be initially supported by the V1 protocol without any on-chain changes. The on-chain protocol really isn’t even aware of the concept of a subgraph per se. Furthermore, there has never been an instance that I’m aware of where The Graph Council or a lack of permissionlessness has blocked a data service from being supported on the network.
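To make the “the on-chain protocol isn’t even aware of the concept of a subgraph” point concrete, here is a toy sketch (hypothetical names, not the real Staking contract API) of how allocations can be keyed by an opaque deployment ID: from the protocol’s perspective, a subgraph deployment and any other data service deployment are indistinguishable identifiers.

```typescript
// Toy model only — the class and method names are illustrative, not
// The Graph's actual contracts. The point: the on-chain side sees an
// opaque 32-byte deployment ID and never inspects what service it names.
type DeploymentId = string; // stand-in for bytes32

interface Allocation {
  indexer: string;
  deploymentId: DeploymentId; // opaque to the protocol
  tokens: number;
}

class ToyStaking {
  private allocations: Allocation[] = [];

  allocate(indexer: string, deploymentId: DeploymentId, tokens: number): void {
    // No check on what kind of data service the ID refers to.
    this.allocations.push({ indexer, deploymentId, tokens });
  }

  totalAllocated(deploymentId: DeploymentId): number {
    return this.allocations
      .filter((a) => a.deploymentId === deploymentId)
      .reduce((sum, a) => sum + a.tokens, 0);
  }
}

// A subgraph deployment and a (hypothetical) Firehose deployment look
// identical on-chain: both are just IDs an indexer allocates against.
const staking = new ToyStaking();
staking.allocate("indexer-1", "0xSubgraphDeployment...", 1000);
staking.allocate("indexer-1", "0xFirehoseDeployment...", 500);
console.log(staking.totalAllocated("0xSubgraphDeployment...")); // 1000
```

Under this (simplified) view, what gates new data services is the off-chain machinery around the ID, not the on-chain allocation bookkeeping itself.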
What’s actually needed, and what has blocked supporting new data services on the network, is designing and building the off-chain data services, the negotiation protocol, the service pricing, etc. that make the data service market work off-chain. All the same things we had to do for subgraphs. This work has largely been neglected until recently, and while it’s great to see some progress finally happening, I feel we would have been there months or years sooner had we as an ecosystem not let ourselves get distracted by the red herrings I’m describing.
I’ll reiterate that I support much, if not most, of what’s in the Horizon proposal, especially direct indexer payments (DIPs) and modular verifiability, both of which I feel are important to The Graph’s multi data service future.
My intention here is not to criticize the Horizon proposal as a whole, nor the great work the team has put into the bundle of proposals, but rather the “permissionless” meme that has been developing in our ecosystem around this proposal, which in my opinion has impeded and will continue to impede The Graph’s progress if left unquestioned.
Update: After sharing an earlier draft of this response with @Pablo, he indicated openness to reintroducing some opinionated criteria for data services around economics and verifiability through off-chain social conventions, Gateway standards, front-ends, use of The Graph’s brand, etc. He also explained that for him, permissionlessness was more about having a frictionless pathway for experimental data services to be deployed before they are able to implement the more opinionated economic and verifiability requirements of the network (please correct me @Pablo if I have not accurately conveyed your points). This is a far more nuanced and pragmatic view of permissionlessness as an objective than I’ve heard stated previously, and one which I’m firmly supportive of (an idea in the direction of experimental data services was actually already formally enshrined in GIP-0008).