Subgraph Showroom

Thanks @Oliver!

Problem: The way the bonding curve currently works means that the person who signals first to a subgraph has the highest reward with the lowest risk, irrespective of the legitimacy of the subgraph. This has resulted in bots jumping in while skipping the process of verification.

It’s worth clarifying that the “issue” we’re looking to solve here is frontrunning opportunities created by the technical peculiarities of how blockchains work (e.g. public mempools, PGAs, MEV, etc).

What’s great about the pool is that it effectively democratises access to the first buy, which would otherwise be dominated by curators with access to tech that enables them to frontrun on the basis of value extraction, as opposed to real “signalling” in the spirit of the role.

Looking forward to discussing further :slight_smile:

8 Likes

Thanks Oliver for creating this. Chris, I appreciate your take on this. I feel like this system has a lot of merit in that it rewards curators who perform their function well (verification & traffic speculation) while penalizing those who perform it poorly (no verification or poor speculation).

Regarding a subgraph’s economics: until we are able to expedite query-volume analytics, the first 28 days are likely to be very volatile. From the curator side, the 30-day Query Fee metric is the confirmation point for how we speculated. Between deployment and a full cycle of indexers closing out allocations, we are going to operate off of inconclusive analytics. Many curators are noticing a trend on legitimate subgraphs of an initial spike, a 30-50% drop, and then a slow dwindle. I expect growth in a subgraph’s signal once we confirm query traffic.

Addressing the risk aspect of Stage 1, I feel a higher tax is warranted during this stage, at which point the economics should play out as follows:

  1. Curators bid as to how much signal they expect the subgraph should receive. There would need to be a system for a curator’s signal to cancel if the total spikes higher than their ceiling, to prevent curators from waiting until the last minute (a sketch of this cancellation rule follows the outcomes below). At the end of the period the total signal is determined, and anyone whose ceiling was not surpassed receives signal shares. Indexers should be able to start syncing at this point if they see a profitable subgraph dynamic. A higher tax is warranted during this period; I propose a 5% tax.
  2. Participating curators who did not have their ceiling surpassed each mint shares at the 1:1 position on the curve.

Outcome A: Subgraph undervalued. Curators who decided to signal in stage 1 (call them Group A) each get their shares and, upon confirmation of query traffic, receive a high Query Fee vs. Signaled GRT ratio. This draws new curators (Group B) to the subgraph, providing an exit strategy for Group A to go inspect new subgraphs.
Outcome B: Subgraph is over-signaled. As the query fees start to roll in, the split query fees do not substantiate the amount of signal Group A agreed upon. Group A curators will start to exit their positions for a loss of the tax (and gas fees). Those who remain will slowly see their share of the query fees balance out to the market’s fair assessment. Group B curators will start to enter once this balance is complete, providing Group A the chance to exit and inspect new subgraphs for a potential profit (subgraph stabilizing).
Outcome C: Subgraph receives no signal; it is bogus, do not index.
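Here is a minimal sketch of how the stage 1 cancellation in point 1 might settle. It assumes (my interpretation) that a bid cancels permanently once the running total exceeds its ceiling, and that cancellations repeat until the remaining set is stable; all names and numbers are hypothetical:

```python
def stage1_qualifiers(bids):
    """bids: list of (curator, amount, ceiling) tuples.
    A bid cancels when the total signal exceeds its ceiling; since a
    cancellation lowers the total, repeat until the set is stable."""
    active = list(bids)
    while True:
        total = sum(amount for _, amount, _ in active)
        kept = [b for b in active if b[2] >= total]
        if len(kept) == len(active):
            return total, kept
        active = kept

# A 2M GRT whale bid knocks out a curator whose ceiling was 1M:
total, kept = stage1_qualifiers([("A", 50_000, 1_000_000),
                                 ("B", 2_000_000, 5_000_000)])
print(total, [name for name, _, _ in kept])   # 2000000 ['B']
```

One wrinkle: cancelling a large bid lowers the total, which could have re-qualified a bid that already cancelled; a real implementation would need to pin down that edge case.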

5 Likes

Very new to everything, so please take this with a grain of salt and expect some very naive assumptions about the current model.

After reading over this thread and the one leading up to it regarding Curation feedback, it has got me thinking about various other ways we could incentivize curation whilst also mitigating the risk of rug pulls by bots or whales in general.

The idea is to introduce a “Thawing Period” similar to delegation’s, but with a bit more sophistication. Essentially, given a logarithmic model proportional to the overall signaled stake within a given subgraph, one could calculate a curation “inertia” for a given curator and enforce a thawing period relative to that inertia. That way a larger bot/whale signaling 10k GRT into a 12k GRT curation pool is forced to thaw for a week before they can pull out, since they have such an influence on the other participants. On the other side of things, a curator signaling just 100 GRT could unsignal in, say, 30 minutes.

Now this could lead to bots/whales simply breaking up their signals across multiple wallets to perform the curation, but we could also impose an inversely proportional tax (burn) based on signal size. That way you are incentivized not to make 100 small 100 GRT signals instead of one 10k signal, since each small signal will cost you more in fees. This is essentially a “buy in bulk” discount.
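To make the two mechanisms concrete, here is a rough sketch; the log-scaled thaw curve, the 7-day cap, and the flat per-signal burn are all hypothetical parameters of mine:

```python
import math

def thaw_days(signal, pool_total, max_days=7.0):
    """Hypothetical 'inertia' thaw: grows with the log of your share of
    the pool (pool_total includes the new signal). 10k GRT in a 12k
    pool thaws for roughly a week; 100 GRT for a fraction of a day."""
    share = signal / pool_total
    return max_days * math.log1p(9 * share) / math.log(10)

def signal_tax(amount, base_rate=0.025, flat_burn=100.0):
    """Hypothetical inversely proportional tax: the flat per-signal
    burn makes a hundred 100 GRT signals far costlier than one 10k
    signal (the 'buy in bulk' discount)."""
    return base_rate * amount + flat_burn

print(thaw_days(10_000, 12_000))                   # ~6.5 days
print(100 * signal_tax(100), signal_tax(10_000))   # 10250.0 vs 350.0
```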

I think some combination of these 2 mechanisms could help stabilize the curation model and mitigate exploitation.

Please rip my ideas to shreds :stuck_out_tongue:

2 Likes

Some good thoughts in this thread so far trying to address the “good behavior curator” catch-22 (the system currently allows short-term profit taking and in effect punishes the desired long-term curation).
In this sense a “curator score” system could be useful.
A few examples:
You will not be able to cash out quickly until you reach a certain score, where you’ve shown yourself to be a long-term player. This would deter bots and newly created bot wallets from “rug pulling”.
If you cash out quickly a few times, your score lowers, whereas protocol-positive behavior is rewarded in the score. This would continue to allow all curators to take profit, but not 5 times in a row.
A higher score could provide privileges with new subgraphs, and a lower score would put you in a waiting room, for example.
A lot of ways to make this work, a lot of ideas here.
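For concreteness, one rough sketch of what such a score could look like; the point values and thresholds are entirely hypothetical:

```python
class CuratorScore:
    """Hypothetical reputation sketch: quick exits lower the score,
    longer holds raise it; privileges unlock above a threshold."""
    def __init__(self):
        self.score = 0.0

    def on_unsignal(self, days_held):
        if days_held < 7:                        # a "quick" cash-out
            self.score -= 10.0
        else:                                    # protocol-positive hold
            self.score += min(days_held / 7, 10.0)

    def has_new_subgraph_privileges(self, threshold=50.0):
        # below the threshold you sit in the "waiting room"
        return self.score >= threshold
```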
Let me know your thoughts.

2 Likes

I like the idea of a “showroom”, because it helps to mitigate the issues described in the top-level post. However, I think there’s a bit of mental gymnastics being done to bring in a new system that would also work alongside the current one. But if we think (which I strongly do) that a bonding curve being used in this situation isn’t achieving the desired outcome, why are we trying to keep it? Why not move to a 1:1 pro-rata solution and keep it that way?

The GRT that is being used to signal, along with the tax, are the economic incentives that drive the community forward as a whole, provide some signalling towards indexers, and keep curators motivated to continue their role. Why should we value an early opinion over a later one? From this standpoint, why not just remove the curation share token entirely, so that it’s simply the amount of GRT signalled, and you as the curator have a pro-rata claim on the query fees collected for curators?
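As a toy illustration of that pro-rata model (a sketch, not a concrete protocol proposal):

```python
def distribute_query_fees(fee_pool, signals):
    """With no bonding curve, a curator's claim is simply their GRT
    share of the total signal, regardless of when they signalled."""
    total = sum(signals.values())
    return {curator: fee_pool * amount / total
            for curator, amount in signals.items()}

# An early and a late curator with equal GRT earn exactly the same:
print(distribute_query_fees(620, {"early": 10_000, "late": 10_000}))
# {'early': 310.0, 'late': 310.0}
```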

I also think that adding a thaw period of 28 days (using this number simply to match the delegation thaw and keep things simple) would also drive out short-term yield seekers, as there’s simply no way to know the return on your curation signal, which can vary wildly before being returned to you. This will incentivize those who believe that their due diligence on that subgraph is sufficiently thorough, giving indexers even more reason to trust the amount of signal as an informative tool.

2 Likes

@Josh-StreamingFast This is along the lines I was thinking as well.

I’d love to get your thoughts on this: Global Curator Improvement Thread - #19 by jona

Thanks! :slight_smile:

1 Like

I still worry about a static thawing period, because it is still FIFO in that case, which means it’s almost worse for others: the bots (who can get in within seconds of a deployment) will still be able to unsignal first, while legitimate curators pile on during a whole 28 days of waiting.

Wouldn’t that make the rug pull even worse than it is now, because you would be forced to sit there as 5 bots unsignal 50% of the total signalled GRT all at once after 28 days and 1 second, no? I could definitely be missing something here, but that is why I thought the thawing period would work out best as a value relative to your overall stake in the game.

1 Like

But if we move the curation system to no longer work on a bonding curve, it wouldn’t matter if they pulled out at 28 days plus 1 second. It would no longer really affect you. And as such, there would be no incentive to run these bots anymore, because there would be no incentive to get in early/first.

While I see the merit in what you’re describing, I think this creates a system that is overly complicated. If you want adoption, complicated maths is not going to help drive us there. And I still see no reason why we’re looking to incentivize early curation more than later curation. The goal is to show indexers which subgraphs are good and worth indexing, not to boost returns for curators (that is just a by-product). Focus on the initial reason.

2 Likes

Just for clarification: the context of the earlier suggestion to introduce a thawing period was a sliding scale. Example: 1st curator = 28d, 2nd curator = 25d, 3rd curator = 22d, etc. The spirit was for early curators to be tied to the bonding curve longer, with later curators in fact able to exit sooner than earlier ones.
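In code, that sliding scale might look like this (the 3-day step simply extrapolates the numbers above; the 1-day floor is my assumption):

```python
def thaw_days_by_entry(entry_index, first=28, step=3, floor=1):
    """Sliding-scale thaw: 1st curator = 28d, 2nd = 25d, 3rd = 22d,
    and so on, never dropping below the floor."""
    return max(first - step * (entry_index - 1), floor)

print([thaw_days_by_entry(i) for i in range(1, 6)])  # [28, 25, 22, 19, 16]
```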

1 Like

Ok, I understand the idea behind this if there is no bonding curve. However, I do wonder if not incentivizing early curators would be detrimental, as there is obviously the most risk early on when a subgraph is new, has low/no query fees, etc., correct? So naturally people will want to wait for a subgraph to grow before investing in it. If everyone does that, then there is a “deadlock” and subgraphs never get signaled as organically as they should.

As you say, yes, the primary focus should be to provide indexers with guidance on which subgraphs to ingest, and I wonder if it would be difficult to get an initial signal if there is no good reason to “get in early”.

2 Likes

The incentivization would come from having a larger share of the total amount signalled on that subgraph. So they would earn a larger share of the query fees generated, as at the beginning there wouldn’t (presumably) be as many others signalling on it. This is the same as when a new liquidity pool is introduced on DeFi platforms, or a new token is added to a lending protocol. There is a period at the beginning that tends to show outsized returns for those in early.

Hey Josh. While we have had difficulties with the bonding curve in its current state, I do not think a static share value relying only on query fees is a solution that can pan out under our current compensation plan.

Livepeer, which is currently a well-respected subgraph (and currently #1), has 30-day query fees of 6,200 GRT. If the curator community were to split the 10% curator cut of 620 GRT, a total signal of only 7,750 GRT could sustain an 8% return per 30-day period (roughly 94,000 GRT if we annualize the fees against an 8% APY). The bonding curve is essential to our progress, as it incentivizes correct future speculation and makes it worth spending time curating instead of delegating.
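For transparency, the arithmetic (the annualization is my extrapolation, assuming fees continue at the 30-day rate):

```python
monthly_fees = 6_200                  # Livepeer 30-day query fees (GRT)
curator_cut = 0.10 * monthly_fees     # 620 GRT per 30 days

signal_8pct_per_period = curator_cut / 0.08   # 7,750 GRT
annual_cut = curator_cut * 365 / 30           # ~7,543 GRT per year
signal_8pct_apy = annual_cut / 0.08           # ~94,300 GRT
```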

3 Likes

An additional point worth mentioning: one indexer raised the concern of the risk of failed deployments during stage 1 (the showroom, or bootstrapping, as it was called in previous conversations).

I believe this is a risk that curators should take on IF they have the time to inspect for obvious syncing issues, such as referencing unsupported structures (Pickle on Polygon, 0x using IPFS). Currently, after verifying legitimacy (or, as we’re often seeing, ignoring it), people are jumping in and then evaluating the project’s volume to see if they should stay in. More time to do proper due diligence can save indexers leg time, reallocations and wasted stake.

As I understand it, the bonding curve has an important design intent: to incentivise predictive signalling, or in other words, to incentivise curators to seek out and surface subgraphs that will have high query fees in the future.

This is important because it creates incentives for the resources of the network (driven by signal) to be allocated to subgraphs ahead of demand. For example, if a hot new subgraph came out, you’d want the network to bootstrap supply-side capacity (indexers indexing said subgraph) ahead of demand for that subgraph. If the only incentive for signalling was query fees, then there’s a chicken and egg problem. Curators would only be incentivised to signal when query fees are high, but without existing supply-side capacity to serve that demand, it’s unlikely that query fees would be able to scale to a level that would incentivise curators to shift their signal onto said subgraph.

It’s important that the risk-reward tradeoff for this predictive behaviour is right. Without strong enough incentives, signal will concentrate on subgraphs that have the highest levels of existing demand, with little reason to be predictive. Without the bonding curve, it’d make much more sense to only move signal once query fees have reached a level that warrants the move. The tradeoff of signalling early would come with very little upside (since signal would quickly saturate the subgraph once demand is proven), but quite a lot of downside (the opportunity cost of query fees from demand-proven subgraphs).
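To illustrate the early-entry incentive with a toy curve (a square-root curve in the spirit of the mechanism, not the exact production parameters):

```python
import math

def shares_outstanding(reserve):
    # Toy curve: total shares grow with the square root of the GRT
    # reserve, so each marginal share costs more than the last.
    return math.sqrt(reserve)

def mint(reserve, deposit):
    """Shares received for a deposit at the current curve position."""
    return shares_outstanding(reserve + deposit) - shares_outstanding(reserve)

early = mint(0, 10_000)         # ~100.0 shares
late = mint(90_000, 10_000)     # ~16.2 shares for the same GRT
```

The same 10k GRT buys roughly six times the shares (and thus six times the future query-fee claim) for the curator who signals before demand is proven, which is exactly the predictive incentive described above.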

The bonding curve certainly presents other issues, but as we explore alternatives, we should have the design intent of the existing solution front of mind.

3 Likes

This is very true, and a good lens for thinking about how the curator incentives should work.

I don’t know that there’s a route for curators to achieve the APY that delegators currently see purely off of query fees on a subgraph. But through the bonding curve’s share appreciation from curators signaling (which is negative-sum for the network) + query fees added to the bonding curve (positive-sum), we should strive to reach an APY that rewards the time & research it takes to be a good curator, as opposed to a more passive role in the network.
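As a back-of-the-envelope decomposition of those two streams (my framing, not an official formula):

```python
def curator_return(deposit, exit_value, fees_earned):
    """Curve appreciation (paid by later entrants, negative-sum for
    the network after the burn tax) plus query fees (positive-sum,
    new GRT flowing into the reserve), relative to GRT signalled."""
    appreciation = exit_value - deposit
    return (appreciation + fees_earned) / deposit

# e.g. shares exited for 10,500 on a 10,000 deposit, plus 300 in fees:
print(curator_return(10_000, 10_500, 300))   # 0.08 -> an 8% return
```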

1 Like

Chris, an excellent point that I admit I hadn’t fully considered all the way through (and hadn’t seen mentioned in a clear way that resonated with me like this). I appreciate you laying it out like that for me. I agree with you that the predictive nature of the intent should be kept in mind.

4 Likes

This only clicked for me recently, so glad it was helpful for you too.

1 Like

What does everyone think an appropriate timeframe would be? I would say at least 1 day but no more than 2 days. A full 24-hour period gives all timezones time to inspect. Beyond 2 days you may cause delays for dApps looking to utilize their new subgraph.

I think the showroom signals should work through a bidding system (otherwise people will be incentivized to wait until the end so they can see how much signal there is before entering). The way the bidding system should work is that curators have the ability to state the amount they want to signal and a ceiling which prevents their entry beyond a certain point (i.e. “I will signal 5,000 as long as the total isn’t over 100,000”).

Then at the end of the showroom period, the winning scenario is the one that results in the most signal (if 10 people bid 5,000 with a ceiling of 70,000 and 2 people bid 20,000 with a ceiling of 40,000, we can achieve a 50,000 signal by accepting the first 10 people, and thus the 2 from the other group are not considered). After we determine who remains, winners should be picked by signal date (in the event that 100 people bid 5,000 with a ceiling of 20,000, only the first 4 would make it in).
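A greedy sketch of that settlement, with the signal-date rule as the tie-break (not necessarily optimal, but it reproduces both examples above):

```python
def settle_showroom(bids):
    """bids: list of (curator, amount, ceiling, timestamp).
    Try each distinct ceiling as the binding cap on total signal;
    within a cap, admit qualifying bids in signal-date order; keep
    the scenario with the most total signal."""
    best_total, best = 0, []
    for cap in sorted({ceiling for _, _, ceiling, _ in bids}):
        total, chosen = 0, []
        for b in sorted(bids, key=lambda b: b[3]):     # earliest first
            if b[2] >= cap and total + b[1] <= cap:
                chosen.append(b)
                total += b[1]
        if total > best_total:
            best_total, best = total, chosen
    return best_total, best

# 10 bids of 5,000 (ceiling 70,000) vs 2 bids of 20,000 (ceiling 40,000):
bids = ([(f"c{i}", 5_000, 70_000, i) for i in range(10)]
        + [("w1", 20_000, 40_000, 10), ("w2", 20_000, 40_000, 11)])
print(settle_showroom(bids)[0])   # 50000: the ten smaller bids win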

Thoughts?

Some feedback on the details for who qualifies with how much signal in stage 1:

  • My interpretation is that the first-come-first-served principle by signal date could potentially recreate the dynamic we are currently trying to solve for, in that it includes a front-running incentive within stage 1.
  • The introduction of a hard cap may potentially lead to the following dynamics:
    • Indiscriminate Bidding: curators could be incentivized to bid on every validated subgraph that is deployed, without preference. The thought being that you could set the ceiling equal to your bid, minimizing downside risk (the curator essentially bids the strategy of “if I’m the only one signaling, then I’m in; otherwise I’m out”).
    • Whale Attack Exposure: The open-ended form of a ceiling (everyone setting their own limit) may make it difficult for each curator to establish a number that matches their risk/reward profile. Example: Curator A wants to signal a fixed amount on subgraph A regardless of the total signal amount in stage 1, and hence sets a ceiling of 1M GRT, assuming it’s a safe number. Curator B signals with 2M GRT, rejecting Curator A’s stage 1 signal.

Here an alternative thought on the details of the stage 1 bidding process that should still capture the spirit of the show room as it has been discussed here:

Curators set a minimum and a maximum for their bid. The maximum is also accompanied by a ceiling number, the difference being that any total amount above that ceiling would reduce the curator’s bidding amount proportionately, down until it reaches the set minimum. Example:
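Below is a sketch with hypothetical numbers; the linear ceiling/total ratio is my reading of “proportionately”:

```python
def scaled_bid(min_amt, max_amt, ceiling, total_at_max):
    """If total demand at everyone's maximum exceeds this curator's
    ceiling, shrink their bid by ceiling/total, never below their min."""
    if total_at_max <= ceiling:
        return max_amt
    return max(min_amt, max_amt * ceiling / total_at_max)

# Three hypothetical curators; total demand at max is 55,000 GRT.
bids = [("A", 0, 5_000, 100_000),
        ("B", 2_000, 10_000, 50_000),
        ("C", 1_000, 40_000, 40_000)]
total = sum(mx for _, _, mx, _ in bids)
for name, mn, mx, cap in bids:
    print(name, round(scaled_bid(mn, mx, cap, total)))
# A keeps 5,000; B scales to ~9,091; C scales to ~29,091
```

A real implementation would have to iterate, since scaling one bid down changes the total the others see; this one-pass version just keeps the idea visible.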

Effects:

  1. Curators are required to commit to signaling to a subgraph, as even a minimum of zero would result in some signal (no indiscriminate bidding incentive)
  2. Straightforward method to commit to a fixed signal amount (if so desired by a curator) via setting minimum amount = maximum amount
  3. Eliminates need for time-based selection process within stage 1 that may crowd out some curators

I also see some additional general open questions regarding the showroom proposal to gain consensus around:

  • Should aggregate bidding action be openly visible to everyone or should every curator do their bidding in isolation and without visibility to other bids?
  • If a deploy & signal proposal were implemented, would that signal amount be part of stage 1, or should it be considered as already existing (i.e. stage 0)? If already existing, would the stage 1 amount then be the start of the bonding curve?

1 Like