As The Graph is being built in public and in a decentralized way, we’ve put together a new, publicly accessible The Graph Core R&D Workspace. You can use this page to follow each working group’s progress on significant workstreams and to access meeting notes and recordings.
Monthly Core Dev updates [NEW!]
Core development teams are now posting monthly updates in the Core Team Updates section of The Graph forum.
Soon you’ll also find regular updates from all Working Groups and Task Forces on all major components of The Graph stack. Stay tuned!
The Graph’s Core Devs Call #18 featured updates on L2 migration and a quick demo of subgraph deployment on L2, a presentation on File Data Sources, a demo and updates on the Subgraph Bridge from SoulBound Labs, and updates on Substreams from StreamingFast.
File Data Sources Update
Subgraph Bridge Update
L2 Support (5:39)
Try this version of The Graph Explorer out for yourself on the Arbitrum Goerli testnet. Please provide feedback to the team in the explorer feedback channel on Discord!
L2 Contracts Status
The L2 bridge and the network have already been deployed to Arbitrum. An audited PR to enable L2 Indexer rewards is going through formal testing and will then be put forward for a Council vote. MIPs program participants currently have missions on L2 and are completing them to provide feedback.
To make the transition to L2 easier for network participants, migration helper contracts are being developed. These will condense what would otherwise be a multi-step transition into a simplified process.
MIPs missions for Arbitrum mainnet are coming soon!
The Graph is great for on-chain data, but dapps also need to query off-chain data. File Data Sources – shipping to Indexers in the next release of graph-node – allow Indexers to sync data from files stored on decentralized storage networks, with initial support for IPFS.
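To make the idea concrete, here is a hedged sketch of the general shape of a file handler. It is plain TypeScript rather than the AssemblyScript that real graph-node handlers use, and the names (`handleMetadata`, `TokenMetadata`) are our own illustration: the key idea is that graph-node fetches the file from IPFS and hands the handler its raw bytes.

```typescript
// Illustrative sketch only: real File Data Source handlers are written in
// AssemblyScript against the graph-ts API. Here we show the conceptual shape:
// given the raw bytes of a file fetched from IPFS, parse it and produce an
// entity-like object.

interface TokenMetadata {
  name: string;
  image: string;
}

// Stand-in for the handler graph-node would invoke once the file
// referenced by an ipfs:// URI has been fetched.
function handleMetadata(fileContents: Uint8Array): TokenMetadata {
  const json = JSON.parse(new TextDecoder().decode(fileContents));
  return { name: String(json.name ?? ""), image: String(json.image ?? "") };
}

// Usage: simulate graph-node handing the handler a fetched IPFS file.
const file = new TextEncoder().encode(
  JSON.stringify({ name: "Token #1", image: "ipfs://example/1.png" })
);
const meta = handleMetadata(file);
console.log(meta.name); // prints Token #1
```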
Check out this demo of File Data Sources in action!
“The Graph Foundation is currently trying to RFP this Arweave integration. As Leo says, it’s super extensible, so it’s easy to plug in other data sources following this API. If there’s anyone that wants to do this work please reach out to me.” – Pedro | The Graph Foundation
SoulBound Labs has been working on the Subgraph Bridge, a way to use subgraphs as a data source for smart contracts and to offload computation to The Graph Network as if it were an L2.
Subgraphs are technically off-chain, so developers would usually implement an oracle service to bring that data back on-chain. Because of the cryptoeconomic security guarantees The Graph Network already has in place, SoulBound Labs built the Subgraph Bridge, which brings subgraph data back on-chain at no extra cost to users.
When an Indexer serves a query request on the decentralized network, it includes a signature in the response headers that covers a hash of the specific request it received. This enables a dispute process within the Subgraph Bridge, ensuring that all data bridged on-chain is correct.
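A minimal sketch of that idea, with our own names and plain SHA-256 hashing (the real protocol uses signed attestations, which we omit here): the Indexer commits to the exact request/response pair it served, and anyone can later recompute the hashes to detect a mismatch and open a dispute.

```typescript
import { createHash } from "crypto";

// Simplified, hypothetical model of request/response attestation. Real
// attestations are cryptographically signed by the Indexer; here we only
// show the hash-commitment part that makes disputes checkable.

const sha256 = (s: string): string =>
  createHash("sha256").update(s).digest("hex");

interface Attestation {
  requestHash: string;  // hash of the query the Indexer claims it served
  responseHash: string; // hash of the response it returned
}

// Indexer side: commit to the exact request/response pair.
function attest(request: string, response: string): Attestation {
  return { requestHash: sha256(request), responseHash: sha256(response) };
}

// Verifier side: recompute both hashes; a mismatch is grounds for a dispute.
function matches(att: Attestation, request: string, response: string): boolean {
  return att.requestHash === sha256(request) && att.responseHash === sha256(response);
}

const att = attest('{ tokens { id } }', '{"data":{"tokens":[]}}');
console.log(matches(att, '{ tokens { id } }', '{"data":{"tokens":[]}}'));      // prints true
console.log(matches(att, '{ tokens { id } }', '{"data":{"tokens":["bad"]}}')); // prints false
```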
“Did you cover what sort of use cases you’re aiming for?”
“For reputation this is particularly nice. If we are defining a set of soulbound NFTs which can be used to represent voting power in a DAO, we can define how those should be tracked in the subgraph. We can merkleize all eligible users and put them on chain, and then we have a static, fixed piece that updates as the subgraph indexes. Anyone can bridge the Subgraph Bridge data over because we specify this query, and if it passes all the security checks that we have, it’s eligible for use. There are no manual overrides that need to be done, and no admin control. We can specify how this should be done, how these reputation badges should be awarded, and we can print them out to people.” – Alexander Gusev | SoulBound Labs
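The “merkleize all eligible users” step can be sketched as follows. This is our own illustration, not SoulBound Labs’ implementation: the choice of SHA-256, the pairing order, and the duplicate-last-leaf rule for odd levels are all assumptions, shown only to make the idea of committing a whole eligibility set to a single on-chain root concrete.

```typescript
import { createHash } from "crypto";

// Hypothetical sketch: derive a single Merkle root from a list of eligible
// users, so only the root needs to go on-chain and users can later prove
// membership against it. Hashing and layout are illustrative choices.

const sha256 = (data: string): string =>
  createHash("sha256").update(data).digest("hex");

function merkleRoot(leaves: string[]): string {
  if (leaves.length === 0) return sha256("");
  let level = leaves.map(sha256); // hash each leaf once
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) {
      const right = level[i + 1] ?? level[i]; // duplicate last node if odd
      next.push(sha256(level[i] + right));
    }
    level = next;
  }
  return level[0];
}

// Usage: fake eligible addresses, clearly hypothetical.
const eligible = ["0xaaa1", "0xbbb2", "0xccc3"];
const root = merkleRoot(eligible);
console.log(root.length); // prints 64
```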
The documentation on Substreams has been revamped with more information and references! You can find all the information regarding StreamingFast’s work here!
StreamingFast has introduced production mode to Substreams. Substreams now have two execution modes, development and production, which differ in time to first byte, in the module logs and outputs sent back to the client, and in parallel execution.

In development mode, the client receives all module logs, all modules are re-executed, backward parallel execution applies, and every module’s output is returned. In production mode, logs are not available, modules whose output is found in cache are skipped, both backward and forward parallel execution apply, and only the root module’s output is returned.

Development is the default; the mode is specified at the gRPC request level. Running production mode against a compiled .spkg file is recommended for stability and proper use of cached outputs.
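The two modes can be summarized as a small lookup table. The field names below are our own shorthand, not the Substreams gRPC schema; in a real request the mode is simply a flag on the gRPC call, set for you by the tooling.

```typescript
// Illustrative summary of Substreams execution modes as data. Field names
// are assumptions for readability, not actual protocol fields.

interface ModeBehavior {
  moduleLogs: boolean;        // are module logs streamed to the client?
  skipCachedModules: boolean; // reuse cached outputs instead of re-executing?
  forwardParallel: boolean;   // parallelize within the requested range?
  outputs: "all-modules" | "root-module-only";
}

const MODES: Record<"development" | "production", ModeBehavior> = {
  development: {
    moduleLogs: true,
    skipCachedModules: false,
    forwardParallel: false, // only backward parallel execution
    outputs: "all-modules",
  },
  production: {
    moduleLogs: false,
    skipCachedModules: true,
    forwardParallel: true, // backward and forward parallel execution
    outputs: "root-module-only",
  },
};

console.log(MODES.production.forwardParallel); // prints true
```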
Substreams have two types of parallel execution: backward and forward. Backward parallel execution runs in parallel from a module’s start block to the start block of the request, for dependencies of type store, and occurs in both development and production modes. Forward parallel execution runs in parallel from the start block of the request to the last known final block or the request’s stop block; it only occurs in production mode and returns cached output, which means logs are not accessible. In short: in production mode, parallel execution happens both before and within the requested range, while in development mode it only happens before the requested range.
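The block ranges described above can be sketched with a small helper. This is our own function, not a Substreams API, and it simplifies the forward bound to the request’s stop block (ignoring the last-known-final-block case) to keep the example short.

```typescript
// Hypothetical helper: given a store module's start block and the request's
// start/stop blocks, compute the backward and forward parallel execution
// ranges. Forward execution only applies in production mode.

interface Ranges {
  backward: [number, number] | null; // [moduleStart, requestStart)
  forward: [number, number] | null;  // [requestStart, stopBlock]
}

function parallelRanges(
  moduleStart: number,
  requestStart: number,
  stopBlock: number,
  productionMode: boolean
): Ranges {
  return {
    // Backward execution fills in store state before the requested range.
    backward: moduleStart < requestStart ? [moduleStart, requestStart] : null,
    // Forward execution covers the requested range itself, production only.
    forward: productionMode ? [requestStart, stopBlock] : null,
  };
}

// Usage: a module starting at block 0, requested range 1000..2000.
const prod = parallelRanges(0, 1000, 2000, true);  // backward and forward
const dev = parallelRanges(0, 1000, 2000, false);  // backward only
console.log(dev.forward); // prints null
```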
“We have synced Uniswap v3 with that… We’ve done that in 86 hours; that’s without a bunch of optimization that is still to come. A thing that would normally take many weeks was able to be synced in a few days.” – Alexandre Bourget | StreamingFast