In July, the Timeline Aggregation Protocol (TAP) for Verifiable Payments saw major advancements with the completion of core components, internal testing, the initiation of smart contract auditing, and collaborations for integration. The Verifiable Firehose effort also progressed with designs, Rust module development, and community engagement. Additionally, we began initial experiments to bring SQL queries to the network, including ClickHouse trials, sinking-speed improvements, and planning for analytics applications. The focus for August is on continued TAP development and deployment, a proof-of-concept for Verifiable Firehose, and an assessment of other analytics-oriented databases.
Last month saw significant advancements in the Timeline Aggregation Protocol (TAP) for Verifiable Payments within The Graph’s ecosystem. Key achievements include the completion of the TAP Core, TAP Manager, and Aggregator service implementation (repository), and internal testing with mockups. The proposal GIP-0054 introducing TAP was finalized, announced on The Graph forum (post), and presented to core devs and The Graph Council. Collaborations were started on TAP’s integration with the Indexer Service (with E&N and GraphOps) and the Gateway (with E&N). Smart contracts for the TAP verifier and Escrow (formerly Collateralization) were implemented (repository), and auditing of the smart contracts has begun (OpenZeppelin). The Permissionless Payers GIP remains a work in progress, and the associated post was shared on The Graph forum.
TAP Core:
- Detailed adapter traits docs #147 #150 #157
- Make everything async #161
- Core + Aggregator integration tests #149
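To illustrate what the aggregator does conceptually: a sender issues many small signed receipts, and the service periodically collapses all receipts newer than the previous voucher into a single updated aggregate. The sketch below is a simplified, hypothetical model of that timeline aggregation step — the struct and function names are illustrative and do not reflect the actual TAP crate API, and signatures/signature checking are omitted entirely.

```rust
// Hypothetical sketch of timeline aggregation. A sender issues
// micro-payment receipts over time; the aggregator folds all receipts
// newer than the previous voucher into one updated aggregate voucher.
// Names are illustrative, not the real TAP API; signatures are omitted.

#[derive(Debug)]
struct Receipt {
    timestamp_ns: u64, // when the receipt was issued
    value: u128,       // payment amount (e.g. in wei)
}

#[derive(Debug, PartialEq)]
struct AggregateVoucher {
    last_timestamp_ns: u64, // newest receipt covered by this voucher
    total_value: u128,      // running sum of all aggregated receipts
}

/// Fold receipts newer than the previous voucher into an updated voucher.
fn aggregate(prev: Option<&AggregateVoucher>, receipts: &[Receipt]) -> AggregateVoucher {
    let cutoff = prev.map_or(0, |v| v.last_timestamp_ns);
    let mut total = prev.map_or(0, |v| v.total_value);
    let mut last = cutoff;
    for r in receipts.iter().filter(|r| r.timestamp_ns > cutoff) {
        total += r.value;
        last = last.max(r.timestamp_ns);
    }
    AggregateVoucher { last_timestamp_ns: last, total_value: total }
}
```

The point of the design is that only the single small voucher (plus any receipts issued since) needs to be retained and eventually settled on-chain, rather than every individual receipt.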
TAP Smart Contracts:
- Escrow (previously Collateral): #4 (initial PR) → #30 (PR renaming to Escrow)
- Adds an authorized sender role that blocks sender actions from unauthorized addresses #26 (open)
- This PR is optional and only needed if the Permissionless Payers GIP is rejected [link]
Link to The Graph forum post on GIP-0054 [link]
This month saw significant progress on the Verifiable Firehose effort, starting with the sharing of the Verifiable Firehose (VFH) Problem Statement. We are currently requesting feedback and questions on our proposed design. Following the preliminary plan given in the problem statement, we implemented a Rust module for verifying the correctness of receipt and transaction data stored in flat files, and we are currently implementing a PoC that uses the module to verifiably sync Firehose directly from flat files (the first milestone in the VFH Problem Statement).
Our current benchmarking suggests that it will take less than 5 ms to verify the correctness of receipt (events) and transaction data for a single block stored in flat files. We also started work on a potential solution for verifying receipt and transaction data streamed/queried from Firehose (the second milestone in VFH). The approach uses inclusion proofs along with a SNARK to prove that events streamed from Firehose are correct relative to Ethereum consensus. We are implementing a PoC using Noir, a zkSNARK programming language. We are also investigating other “off-the-shelf” solutions for verifiable computation (particularly relevant for substreams verification).
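The core check in flat-file verification can be sketched in a few lines: recompute a commitment over the receipt data read from the flat file and compare it against the root recorded in the (trusted) block header. Note that Ethereum actually commits to receipts with a keccak-256 Merkle–Patricia trie and RLP encoding; the binary Merkle tree over a standard-library hasher below is a stand-in, so this illustrates the shape of the check rather than the real codec or our actual module.

```rust
// Simplified sketch of flat-file verification: recompute a commitment
// over receipts read from a flat file and compare it to the root carried
// in the trusted block header. A binary Merkle tree over std's hasher
// stands in for Ethereum's keccak-256 Merkle-Patricia trie.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn leaf_hash(data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    h.finish()
}

fn hash_pair(a: u64, b: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (a, b).hash(&mut h);
    h.finish()
}

/// Fold non-empty leaves into a single root, duplicating the last
/// node on levels with an odd count.
fn merkle_root(leaves: &[&[u8]]) -> u64 {
    let mut level: Vec<u64> = leaves.iter().map(|l| leaf_hash(l)).collect();
    while level.len() > 1 {
        let next: Vec<u64> = level
            .chunks(2)
            .map(|c| hash_pair(c[0], *c.last().unwrap()))
            .collect();
        level = next;
    }
    level[0]
}

/// Verify that receipts read from a flat file match the header's commitment.
fn verify_flat_file(receipts: &[&[u8]], header_root: u64) -> bool {
    merkle_root(receipts) == header_root
}
```

Because the check is a handful of hashes per block, it is cheap — consistent with the sub-5 ms per-block figure from our benchmarking, where the real cost is dominated by keccak hashing and trie construction.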
This month we also conducted tests using zkWASM. Testing showed that zkWASM requires approximately 1 minute to prove a simple computation, e.g. y = x^6, and does not yet seem ready to prove statements of the size we are interested in. Additionally, we are dedicating effort to communicating our work to the broader crypto community. We are applying for speaking slots at relevant conferences, and we started writing an ethresear.ch post to explain how our solutions will benefit the crypto community, e.g. EIP-4444, with the goal of bringing attention to our work and soliciting engagement from others interested in verifying blockchain data.
Last month, during a core dev leads planning meeting, we decided to bring SQL queries to the network. The roadmap for this new data service is still a work in progress. The most straightforward approach to bringing SQL queries to the network is to expose the existing Postgres databases that are already populated during the indexing of subgraphs.
Also related to the SQL effort, we are currently experimenting with writing substreams to sink to ClickHouse, an OLAP database that is excellent for analytics applications. We measured an approximate 10x speed increase for analytics queries using ClickHouse vs. Postgres. Additionally, during the course of our ClickHouse experiments, we made improvements to substreams-sink-postgres which made sinking speeds 10x faster for our test application. In the future, Indexers could specialize in running OLAP databases to serve high-performance analytics use cases.
- We added new commands to substreams-sink-postgres (#27). During indexing of ERC-20 transfers, these commands improved insert speed by 10x using StreamingFast’s stack. We expect the change to benefit all append-only workloads.
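One common reason append-only sinking gets dramatically faster is batching: instead of one `INSERT ... VALUES (...)` round-trip per ERC-20 transfer, rows are buffered and flushed as a single multi-row statement. The sketch below illustrates that general technique; it is an assumption that this mirrors the actual substreams-sink-postgres change, and the table and column names are hypothetical. (A production sink would also use parameterized statements or `COPY` rather than string formatting.)

```rust
// Illustrative sketch of batched inserts for an append-only sink.
// Table/column names are hypothetical; a real sink would use
// parameterized queries or COPY instead of string formatting.

struct Transfer {
    from: String,
    to: String,
    amount: u128,
}

/// Render a batch of transfers as one multi-row INSERT statement,
/// amortizing the per-statement round-trip over the whole batch.
fn batch_insert_sql(rows: &[Transfer]) -> String {
    let values: Vec<String> = rows
        .iter()
        .map(|t| format!("('{}', '{}', {})", t.from, t.to, t.amount))
        .collect();
    format!(
        "INSERT INTO transfers (sender, receiver, amount) VALUES {};",
        values.join(", ")
    )
}
```

Since blockchain event data is append-only, batches never conflict with updates, which is what makes this kind of optimization safe for the indexing workload.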
The next few months will be filled with continued development and deployments. Planned milestones include the completion of the Escrow subgraph and integration with the Gateway by mid-August, the conclusion of the smart contract audit by mid-September, and the porting of the Indexer Service to Rust by the end of September. Staged deployment plans include a staging Testnet deployment at the end of August, production Testnet by the end of September, and eventual Mainnet deployment in November. These efforts are aimed at seamlessly integrating and deploying the TAP Scalar in production.
We will share the design, implementation, and results of our PoC for Firehose flat file verification (first milestone). We will also: continue work on the PoC for streaming/querying data from Firehose (second milestone) and likewise document and share the current design; start scoping changes required for Firehose to enable verifiable streaming/querying; and submit an ethresear.ch post and continue applying for speaking slots. We are also preparing to write a GIP proposing any changes needed to the core protocol to enable the Verifiable Firehose solution we are proposing.
We will continue to experiment with ClickHouse. We will also meet with PingCAP to discuss TiDB, another OLAP database recommended by Messari. Finally, we will work with other core devs to better document and understand the SQL data service market and its technical requirements.