The Graph’s Core R&D Call #11


Summary

In Core Devs Meeting #11 we receive updates from the core development teams on multi-chain integration, hear about progress on Firehose development, and learn about Semiotic AI’s work on automated cost modeling.

Topics Include:

  • Multi-Chain Updates
    • Solana
    • Tendermint (Cosmos Ecosystem)
  • Firehose
  • Epoch Block Oracle
  • Automated Cost Modeling

Detailed notes regarding each topic with timestamps can be found below.

Multi-Chain Updates (5:58)

Solana

StreamingFast continues to work on Solana integration. Their testing node has been running and producing Firehose data for approximately one month with only minor hiccups along the way. Most of the integration work within graph-node has been completed already, and the focus will now be on revising the data model.

Running a Solana node requires a lot of resources. How difficult will running the Firehose on top of that be?

“I’m not sure exactly. We need to do some benchmarking optimizations for that. It’s not so bad… It’s done in parallel too, there’s a lot of work that is done in each of these threads that scale with the number of CPUs too.” -StreamingFast

Tendermint (13:23)

Cosmos is an ecosystem of interoperable and sovereign blockchain apps and services. Each blockchain created in the Cosmos ecosystem is application-specific. The Cosmos team has created Tendermint, which packages the networking and consensus layers, and the Cosmos SDK, which is used to build the application layer.

Figment has been working on integrating Tendermint so that, each time a new application is released, only the application layer will need to be integrated. All the Tendermint PRs have been reviewed and merged. Next, Figment will be working on Cosmos Hub.

Schema standardization is something that the core dev teams will also be exploring as they continue their work. Since all applications in the Cosmos ecosystem utilize Tendermint, many share schema similarities that could eventually be standardized across all the networks.

Firehose (20:14)

The Ethereum Firehose implementation has undergone a lot of testing and is succeeding at indexing individual subgraphs. Proof of Indexing (POI) reliability testing has been taking place on many subgraphs, making sure that results are consistent whether indexing with an RPC or with the Firehose. The Firehose is often discussed through the lens of speed and faster indexing, but it can also pass much more contextual data performantly, instead of relying on trace filtering, which isn’t supported by all Ethereum endpoints.
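
For illustration, a consistency check along those lines might look like the following sketch, which asks the index-node status endpoints of two graph-node instances (one RPC-backed, one Firehose-backed) for the POI of the same subgraph at the same block and compares the results. The endpoint URLs, deployment ID, and block values are placeholders, and the exact shape of the status API may differ between graph-node versions.

```typescript
// Sketch: compare POIs from an RPC-backed and a Firehose-backed graph-node.
// Endpoints, deployment ID, and block values are placeholders; the
// proofOfIndexing field is part of graph-node's index-node status API,
// but its exact shape may vary by version.
const POI_QUERY = `
  query Poi($subgraph: String!, $blockNumber: Int!, $blockHash: String!) {
    proofOfIndexing(subgraph: $subgraph, blockNumber: $blockNumber, blockHash: $blockHash)
  }`;

async function fetchPoi(statusEndpoint: string): Promise<string | null> {
  const res = await fetch(statusEndpoint, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      query: POI_QUERY,
      variables: {
        subgraph: "QmPlaceholderDeploymentId", // hypothetical deployment ID
        blockNumber: 15000000,                 // hypothetical block number
        blockHash: "0x0000...0000",            // hypothetical block hash
      },
    }),
  });
  const { data } = await res.json();
  return data?.proofOfIndexing ?? null;
}

async function main(): Promise<void> {
  const [rpcPoi, firehosePoi] = await Promise.all([
    fetchPoi("http://rpc-node:8030/graphql"),      // RPC-backed graph-node
    fetchPoi("http://firehose-node:8030/graphql"), // Firehose-backed graph-node
  ]);
  console.log(rpcPoi === firehosePoi ? "POIs match" : "POI mismatch!");
}

main().catch(console.error);
```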

Epoch Block Oracle (33:49)

When indexing Ethereum subgraphs, there is a simple rule: to receive indexing rewards, Indexers must use the first block of the current epoch when they close their allocation. This ensures that Indexers are indexing with a certain amount of recency and enables POIs to be cross-checked across the network. As multi-chain indexing becomes a reality, it is unclear which block Indexers should use to close their allocations on other networks, due to differing block speeds.
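
On Ethereum today, an Indexer can read the first block of the current epoch from the protocol’s EpochManager contract. Here is a minimal sketch using ethers.js; the contract address and RPC URL are placeholders, and the one-function ABI is an assumption based on The Graph’s protocol contracts.

```typescript
import { ethers } from "ethers"; // sketch assumes ethers v5

// Placeholders: substitute the real EpochManager address and a mainnet RPC.
const EPOCH_MANAGER = "0x0000000000000000000000000000000000000000";
const provider = new ethers.providers.JsonRpcProvider("https://mainnet.example/rpc");

// One-function ABI, assumed from The Graph's protocol contracts.
const abi = ["function currentEpochBlock() view returns (uint256)"];
const epochManager = new ethers.Contract(EPOCH_MANAGER, abi, provider);

async function main(): Promise<void> {
  // The first block of the current epoch: the block whose POI an Indexer
  // submits when closing an allocation on Ethereum.
  const startBlock = await epochManager.currentEpochBlock();
  console.log(`Close allocations with the POI as of block ${startBlock.toString()}`);
}

main().catch(console.error);
```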

The Epoch Block Oracle would fix this problem by utilizing DataEdge (GIP-0025). DataEdge is a gas-efficient way to bridge data into subgraphs, including data about when an epoch block occurs. The Epoch Block Oracle would update the subgraph with the applicable blocks for allocation closures across networks.
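
To give a rough feel for the pattern (a sketch, not GIP-0025’s actual wire format): the oracle submits an ABI-encoded payload as calldata to a contract that does nothing with it, which keeps gas costs low, and subgraphs decode the calldata off-chain. The address, payload layout, and encoding below are illustrative assumptions.

```typescript
import { ethers } from "ethers"; // sketch assumes ethers v5

// DataEdge pattern, roughly: the payload travels purely as calldata (the
// contract executes no logic on it), and a subgraph decodes the calldata
// from the transaction off-chain. The address, fields, and encoding here
// are hypothetical, not GIP-0025's actual format.
const DATA_EDGE = "0x0000000000000000000000000000000000000000";

async function postEpochStartBlock(wallet: ethers.Wallet): Promise<void> {
  // Hypothetical payload: (network id, epoch number, epoch start block).
  const payload = ethers.utils.defaultAbiCoder.encode(
    ["string", "uint256", "uint256"],
    ["eip155:1", 650, 15000000]
  );
  const tx = await wallet.sendTransaction({ to: DATA_EDGE, data: payload });
  await tx.wait();
  console.log(`Posted epoch start block in tx ${tx.hash}`);
}
```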

Automated Cost Modeling (46:59)

Indexers need a way to express fees for a given query. To do this, they utilize Agora, a cost model language. The cost of serving a query can depend on many variables, and Indexers would prefer not to write their cost models by hand.

Semiotic AI is developing a tool, currently called “Automated Agora,” that analyzes the queries an Indexer receives based on variables such as the peak memory used by a query and the number of CPU cycles it consumes. The tool utilizes a query table that hosts the skeletons of queries so that analysis can take place. After analysis, the Indexer can use a cost model based on the measured costs for each query shape. There is also the possibility of a frequency discount for frequent queries, or of cost increases when a server comes under heavier load and the Indexer does not want to sacrifice indexing quality.
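
To make the aggregation step concrete, here is a simplified sketch: group measured executions by query skeleton, average their resource usage, and emit Agora-style cost model entries. The measurements, pricing weights, and output values are illustrative assumptions rather than Semiotic AI’s actual pipeline.

```typescript
// Simplified sketch of automated cost modeling: average measured resource
// usage per query skeleton and emit Agora-style entries. Measurements,
// pricing weights, and output values are illustrative assumptions.
interface QueryStat {
  skeleton: string;     // query shape with literal values stripped out
  cpuCycles: number;    // CPU cycles measured for one execution
  peakMemoryMb: number; // peak memory (MB) measured for one execution
}

function buildCostModel(stats: QueryStat[]): string {
  // Group executions by skeleton.
  const groups = new Map<string, QueryStat[]>();
  for (const s of stats) {
    const group = groups.get(s.skeleton) ?? [];
    group.push(s);
    groups.set(s.skeleton, group);
  }

  // Price each skeleton from its average measured cost (arbitrary weights).
  const lines: string[] = [];
  for (const [skeleton, group] of groups) {
    const avgCpu = group.reduce((sum, s) => sum + s.cpuCycles, 0) / group.length;
    const avgMem = group.reduce((sum, s) => sum + s.peakMemoryMb, 0) / group.length;
    const price = (avgCpu * 1e-9 + avgMem * 1e-4).toFixed(6);
    lines.push(`${skeleton} => ${price};`);
  }
  lines.push("default => 0.1;"); // fallback price for unseen query shapes
  return lines.join("\n");
}

// Example: two measured executions of the same skeleton.
console.log(buildCostModel([
  { skeleton: "query { pairs(skip: $skip) { id } }", cpuCycles: 2e6, peakMemoryMb: 40 },
  { skeleton: "query { pairs(skip: $skip) { id } }", cpuCycles: 3e6, peakMemoryMb: 60 },
]));
```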

The goal is to provide an end-to-end system that Indexers can tweak, and that would automatically ingest queries, continuously run statistics, update the Agora models, and push them to indexer agents.

Stay Tuned!

Join us March 31st for The Graph’s Core Devs Meeting #12!

Keep up to date by joining discussions in the forum, following The Graph on Twitter, or joining the Discord server.

Watch the full recording of Core Devs meeting #11 on YouTube!