Thanks for starting this thread @flynntropolis. In addition to what @rburdett and @ari have said, which I largely agree with, I'll introduce a few other considerations.
DeFi Interoperability. As noted by others, we'll need to design a partitioning of protocol state across L1 and L2. This should account for the benefits of keeping DeFi interoperability for things like GRT, Curation Shares, Delegation Shares, etc. on L1.
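To make the partitioning question a bit more concrete, here is a minimal sketch (TypeScript, with entirely hypothetical component names and layer assignments, not a proposal) of treating the partition as an explicit, checkable design artifact that keeps the DeFi-composable assets on L1:

```typescript
// Hypothetical sketch of a protocol state partition across L1 and an L2 rollup.
// Component names and layer assignments are illustrative, not a proposal.
type Layer = "L1" | "L2";

interface ComponentPlacement {
  component: string;
  layer: Layer;
  rationale: string;
}

const statePartition: ComponentPlacement[] = [
  { component: "GRT (ERC-20)", layer: "L1", rationale: "DeFi composability (DEXes, lending, etc.)" },
  { component: "Curation shares", layer: "L1", rationale: "potential DeFi interoperability" },
  { component: "Delegation shares", layer: "L1", rationale: "potential DeFi interoperability" },
  { component: "Allocation management", layer: "L2", rationale: "high-frequency, gas-sensitive" },
  { component: "PoI submission", layer: "L2", rationale: "high-frequency, gas-sensitive" },
];

// Simple invariant check: anything we want composable with L1 DeFi stays on L1.
const defiComposable = new Set(["GRT (ERC-20)", "Curation shares", "Delegation shares"]);
const violations = statePartition.filter((p) => defiComposable.has(p.component) && p.layer !== "L1");
console.log(violations.length === 0 ? "Partition respects the L1 DeFi constraint" : violations);
```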
Vesting Contracts. Many Indexers and Delegators in the network today interact with the protocol via special vesting contracts that allowlist certain functions of the protocol running on L1. We need to think through how these users will participate when some of that functionality moves to L2.
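For illustration only, here is a toy model of that allowlist pattern (this is not the actual token lock contract logic; addresses and selectors are made up). The point is that an allowlist scoped to L1 contract addresses and function selectors doesn't automatically extend to functionality that moves to L2:

```typescript
// Toy model of the allowlist pattern (NOT the actual token lock contract):
// a vesting wallet only forwards calls whose target contract + function selector
// have been explicitly approved.
type Selector = string; // 4-byte function selector, hex-encoded

class VestingWalletModel {
  private allowed = new Set<string>(); // key: `${target}:${selector}`

  approve(target: string, selector: Selector): void {
    this.allowed.add(`${target.toLowerCase()}:${selector}`);
  }

  canCall(target: string, selector: Selector): boolean {
    return this.allowed.has(`${target.toLowerCase()}:${selector}`);
  }
}

// Hypothetical addresses/selectors: a wallet allowlisted for an L1 staking call
// cannot reach the equivalent L2 function without new plumbing (a bridge-aware
// allowlist, a migration path for locked tokens, etc.).
const wallet = new VestingWalletModel();
const STAKE_SELECTOR = "0xa694fc3a"; // placeholder selector
wallet.approve("0x1111111111111111111111111111111111111111", STAKE_SELECTOR); // L1 staking
console.log(wallet.canCall("0x1111111111111111111111111111111111111111", STAKE_SELECTOR)); // true
console.log(wallet.canCall("0x2222222222222222222222222222222222222222", STAKE_SELECTOR)); // false (L2 staking)
```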
Timing. There are many dimensions to this: How long until rollups see large adoption and increase gas costs on L1 through induced demand? Projecting forward an increase in the number of subgraphs on the network and the number of Indexers per subgraph, how long until the gas cost improvements from something like Arbitrum/Optimism are completely washed out? How long should we keep investing in 1.X of the protocol, and how long should we wait for improvements in ZK Rollups and EVM -> ZK Rollup compilers to emerge?
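One toy way to frame the "washed out" question: if an L2 cuts per-transaction costs by some factor while protocol transaction volume grows at some rate, the savings are exhausted after roughly log(reduction) / log(1 + growth) periods. Both numbers below are placeholder assumptions, not measurements:

```typescript
// Back-of-envelope: how long until volume growth "washes out" a rollup's gas savings?
// Both inputs are placeholder assumptions, not measurements.
const rollupCostReduction = 20;   // assume the L2 is ~20x cheaper per transaction
const monthlyTxGrowthRate = 0.10; // assume protocol transaction volume grows ~10% per month

// Aggregate spend returns to today's L1 level when (1 + g)^m = reduction,
// i.e. m = ln(reduction) / ln(1 + g).
const monthsUntilWashout = Math.log(rollupCostReduction) / Math.log(1 + monthlyTxGrowthRate);
console.log(`~${monthsUntilWashout.toFixed(0)} months under these assumptions`); // ~31
```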
Subgraph/Indexer Migrations. We've already seen how much work it is just to migrate subgraphs from Edge & Node's hosted service to The Graph decentralized network. For L2, we'll have to do this again for both subgraphs and Indexers. Can we do it in a way that doesn't lead to outages in any production apps relying on The Graph? Related to the timing considerations above: how many times do we want to go through such a heavy migration? Is it worth migrating to a stopgap solution, or should we hold off until the long-term, higher-throughput solutions are more stable?
Multiblockchain Bridges. Assuming we move PoI submission and allocation management to the L2, we'll likely need a bridge to the various other L1s The Graph supports in order to have deterministic PoIs and indexing rewards. Which L2 designs lend themselves better to bridging to other (potentially high-throughput) L1 chains?
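A hypothetical sketch of what that bridge requirement looks like from the protocol's perspective: for PoIs to be deterministic, every Indexer (and any disputer) has to compute them against the same block of the indexed chain, which means the L2 needs some agreed-upon, bridged view of each supported chain's head. The interface below is purely illustrative, not an existing API:

```typescript
// Illustrative-only interface: for deterministic PoIs, the L2 needs an agreed-upon
// view of each indexed chain's head (e.g. per epoch), posted by some bridge/relayer.
interface EpochBlockOracle {
  epochBlock(chainId: string, epoch: number): Promise<{ blockNumber: number; blockHash: string }>;
}

// Every honest Indexer (and any disputer) derives the PoI reference from the same
// bridged (chainId, epoch) -> block mapping, so the PoI itself is deterministic.
async function poiReference(oracle: EpochBlockOracle, chainId: string, epoch: number) {
  const { blockNumber, blockHash } = await oracle.epochBlock(chainId, epoch);
  return { chainId, epoch, blockNumber, blockHash };
}
```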
Personally, I'm very intrigued by the ZK Porter/Volition lines of research, because they offer orders of magnitude greater throughput by introducing the option to store data off-chain.
I still have a lot of questions as to precisely what kinds of data availability guarantees these will offer over the longer term, i.e., will they follow a LazyLedger model, an Arweave model, or something else? The Graph needs a source of truth on data availability for other protocol logic, so a built-in source of truth on data availability could be desirable for other reasons (e.g. storing an entire subgraph manifest in L2 storage, not just the subgraph ID), depending on what the storage costs and guarantees are.
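To illustrate the ID-vs-manifest distinction (the real subgraph ID scheme is content-addressed via IPFS; SHA-256 below is just a stand-in): storing only a content hash on-chain is cheap but pushes data availability to an external layer, while storing the manifest itself makes the chain the source of truth on availability, at a much larger storage cost.

```typescript
// Sketch of the ID-vs-manifest tradeoff. The real subgraph ID scheme is IPFS/CID
// based; SHA-256 via Node's crypto module is used here only as a stand-in.
import { createHash } from "crypto";

const manifest = [
  "specVersion: 0.0.4",
  "dataSources:",
  "  - kind: ethereum/contract",
  "    name: ExampleContract   # hypothetical, truncated manifest",
].join("\n");

const subgraphId = createHash("sha256").update(manifest).digest("hex");

// Storing only the 32-byte ID on-chain is cheap, but availability of the manifest
// itself then depends on an external layer (IPFS today). Storing the full manifest
// in L2 storage makes the chain the source of truth on availability, at the cost
// of the much larger payload.
console.log(`ID: ${subgraphId} (${subgraphId.length / 2} bytes on-chain)`);
console.log(`Manifest: ${Buffer.byteLength(manifest)} bytes`);
```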
A higher-throughput chain would also potentially allow for higher-resolution bridging to other fast L1s. While rollups increase the throughput of using Ethereum, they don't improve latency (i.e. you are still dependent on L1 block times for block production frequency).
I also like Arbitrum a lot for the simple reasons that it is EVM compatible, gives a large throughput gain, is simpler to reason about, and will be available sooner. One nice thing about designing our L2 architecture with Arbitrum in mind as the deployment target is that many of the decisions would likely carry over to using zkSync (zkPorter) or Starknet (Volition), once both of those chains support EVM compilation. By the time we make serious headway on our L2 design/implementation, we'll likely have a lot more data on how both of those projects have developed, as well as how gas costs for Indexers, Subgraph Developers, and others evolve as more subgraphs are deployed to The Graph's decentralized network.