GIP-0014: Batch GNS transactions

Hi all, this is a proposal to allow batch transactions on the GNS contract.


One of the issues raised by the community is that a subgraph publisher often wants to publish a new subgraph and deposit the initial signal tokens in a single transaction. Today, that’s only possible by using a Multisig or some other contract to batch those transactions.

This proposal allows batching transactions on the GNS, based on the Multicall pattern seen in Uniswap (v3-periphery/Multicall.sol at main · Uniswap/v3-periphery · GitHub) and recently implemented in OpenZeppelin (Utilities - OpenZeppelin Docs).


A new contract called MultiCall is introduced, inspired by the one used by Uniswap. The payable keyword was removed from multicall(), as the protocol does not deal with ETH. Additionally, a payable multicall can be insecure if any batched function relies on msg.value, since the same msg.value is visible to every delegatecalled function in the batch.

The GNS inherits from MultiCall, which exposes a public multicall(bytes[] calldata data) function that receives an array of payloads to send to the contract itself. This allows batching ANY publicly callable contract function.
Client-side, one can build such payloads like this (using ethers.js; argument lists elided for brevity):

// Build payloads for each call
const tx1 = await gns.populateTransaction.publishNewSubgraph(/* ... */)
const tx2 = await gns.populateTransaction.mintNSignal(/* ... */)

// Send both calls in a single batch transaction
await gns.multicall([tx1.data, tx2.data])


The changes are implemented in the following PR: Allow batch transactions using multicall in the GNS by abarmat · Pull Request #485 · graphprotocol/contracts · GitHub


Interesting! Presumably batching using an existing generic multicall deployment might break some msg.sender logic in the GNS?

Yes, by doing a delegatecall within multicall() we preserve msg.sender.
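The distinction can be sketched with a toy TypeScript model (illustrative only; this is not EVM code, and the addresses and helper names are made up): a regular call replaces msg.sender with the calling contract's address, while a delegatecall runs the callee in the caller's own context, so the original sender survives.

```typescript
// Toy model of EVM call semantics (illustrative assumption, not real EVM code).
type Address = string;

interface CallContext {
  sender: Address; // msg.sender as seen by the callee
}

// A regular call: the callee sees the calling contract as msg.sender.
function call(callerContract: Address, _ctx: CallContext): CallContext {
  return { sender: callerContract };
}

// A delegatecall: the callee inherits the caller's context unchanged.
function delegatecall(_callerContract: Address, ctx: CallContext): CallContext {
  return { sender: ctx.sender };
}

const user: Address = "0xUser";
const genericMulticall: Address = "0xGenericMulticall";
const gns: Address = "0xGNS";

// User -> external generic multicall contract -> GNS (regular call):
// the GNS now sees the multicall contract, not the user.
const viaGeneric = call(genericMulticall, { sender: user });

// User -> GNS.multicall() -> GNS function (delegatecall to self):
// the GNS function still sees the user as msg.sender.
const viaInherited = delegatecall(gns, { sender: user });

console.log(viaGeneric.sender);   // "0xGenericMulticall"
console.log(viaInherited.sender); // "0xUser"
```

This is why an existing generic multicall deployment would break the GNS's msg.sender-based access checks, while an inherited multicall does not.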


Any concern that this strategy, although effective for deterring bots, may lead to the deployment of more predatory subgraphs?

If these predatory persons know that they will automatically be guaranteed the first minted shares, I would think that they’ll begin to flood the network with “traps” for unsuspecting curators.

I do not know much about the technical side of any of this though, by the way. Just a thought - Thank you for your hard work!


For visibility, here is additional feedback on this proposal captured in this forum post: Deploy & Signal - Solution to Curator Bot. Please continue to provide feedback here in this thread to consolidate responses. Thank you!


Thank you Oliver.

@ariel, I am in full support of this. For the purpose of getting this done, what steps would be needed to make it an official protocol item?


I’m in support. I would suggest putting a limit on the self-signal, perhaps something in the 5-10k range.


Limiting the self-signal would only serve bots looking to be the second curator on the bonding curve.

A developer would be able to curate with a higher amount right after deployment. (If a limit is put on the deploying address, another address would be used)

Developers themselves having a large self-signal can be very healthy for the network:


  • Limits the impact of bots - A large self-stake limits the impact of malicious actors that signal early.
  • Less volatile curation markets - If the developers intend to keep their curation shares, this would push the bonding curve into an area of less relative steepness.
  • More data - A subgraph with self-stake can indicate an intent to query the subgraph.


  • More Predictable service - Indexers allocate resources according to the signal on a subgraph. A subgraph with volatile signal will also receive varying service. Developers that hold a significant self-signal would receive a more predictable service;
    • No matter the market conditions, the developers can ensure a “base amount” of signal
    • Subsequent curators are pushed into a “less volatile” area of the bonding curve.

Indexers and Delegators

  • More predictable signal - For the reasons mentioned above, the indexers will also see a more predictable signal, allowing them to better optimize their allocations, tune their infrastructure, write good cost models and more. This would in turn lead to higher indexing rewards, which benefits both indexers and their delegators.

It is also important to realize that The Graph is not a zero-sum game. Every stakeholder group is better off when working together to deliver a great service.

I am heavily in favor of batch transactions.


I agree, putting a limit will give everyone a fair shake in share minting.

Bots can get rugged themselves if they signal second.


Bots will be able to curate with an amount smaller than what the developer paid in the 2.5% curation tax.

Sure - the subgraph developer would be able to unsignal, and get some of the bots’ funds. However, the developer would do so at a loss.

And again - it would not serve any purpose, other than allowing a bot to get early on the curve. The developer (and the bot) are the only ones that know the exact block in which the subgraph is published. The bot and the developer themselves (on a second account if need be) would be able to curate before any other human curator.
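To see why "rugging" a small bot is a losing move for the developer, here is a toy worked example in TypeScript. It assumes a simple square-root bonding curve (shares = sqrt(reserve)) with a 2.5% tax on deposits and no tax on exit; the curve shape, numbers, and names are illustrative assumptions, not the real GNS parameters.

```typescript
// Toy bonding-curve model (illustrative assumptions, not the real GNS math).
const CURATION_TAX = 0.025; // 2.5% tax on deposits

interface Curve {
  reserve: number;      // GRT held by the curve
  totalShares: number;  // curation shares outstanding
}

// Mint shares for a deposit: tax is taken first, then shares are minted so
// that totalShares == sqrt(reserve) holds (a square-root toy curve).
function mint(curve: Curve, deposit: number): number {
  const net = deposit * (1 - CURATION_TAX);
  const newReserve = curve.reserve + net;
  const minted = Math.sqrt(newReserve) - curve.totalShares;
  curve.reserve = newReserve;
  curve.totalShares += minted;
  return minted;
}

// Burn shares and withdraw the corresponding reserve (no tax on exit).
function burn(curve: Curve, shares: number): number {
  const remaining = curve.totalShares - shares;
  const payout = curve.reserve - remaining * remaining;
  curve.totalShares = remaining;
  curve.reserve = remaining * remaining;
  return payout;
}

// Developer self-signals 10,000 GRT at publish time (paying 250 GRT in tax),
// then a bot signals only 100 GRT right after.
const curve: Curve = { reserve: 0, totalShares: 0 };
const devShares = mint(curve, 10_000);
mint(curve, 100); // the bot's small deposit

// If the developer burns everything to capture the bot's funds, the tax they
// already paid exceeds what they can recover from the bot's tiny deposit.
const payout = burn(curve, devShares);
console.log((payout - 10_000).toFixed(2)); // ≈ -152.74, a net loss
```

Under these toy assumptions the developer captures roughly 97 GRT of the bot's deposit but has already paid 250 GRT in tax, so unsignaling is a net loss, which is the point made above.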


My team and I are heavily in favor of this proposal. It mirrors DataNexus’s proposal, with benefits to all stakeholders.


I fully support this proposal that will allow batch transactions on the GNS contract.


To be fair, this proposal came before my post. This was not my idea; I’m just very much in support of it.


This proposal is recorded as GIP-0014. It was approved by the Council and went live yesterday, 9/27.