Subgraph Showroom

Some feedback on the details of who qualifies, and with how much signal, in stage 1:

  • My interpretation is that a first-come-first-served principle based on signal date could recreate the very dynamic we are currently trying to solve for, in that it builds a front-running incentive into stage 1
  • The introduction of a hard cap may lead to the following dynamics:
    • Indiscriminate Bidding: curators could be incentivized to bid on every validated subgraph that is deployed, without preference. The thinking is that you could set the ceiling equal to your bid, minimizing downside risk (the curator essentially bids the strategy of "if I'm the only one signaling, I'm in; otherwise I'm out")
    • Whale Attack Exposure: With an open-ended ceiling (everyone setting their own limit), each curator faces the difficult task of establishing a number that matches their risk/reward profile. Example: Curator A wants to signal a fixed amount on subgraph A regardless of the total stage 1 signal, and sets a ceiling of 1M GRT on the assumption that it is a safe number. Curator B then signals with 2M GRT, knocking out Curator A's stage 1 signal.

Here is an alternative thought on the details of the stage 1 bidding process that should still capture the spirit of the showroom as it has been discussed here:

Curators set a minimum and a maximum for their bid. The maximum signal is also accompanied by a ceiling; the difference is that any total amount above the ceiling would reduce the curator's bid proportionately, down until it reaches the set minimum. A sketch of this logic follows below.
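To make this concrete, here is a minimal sketch in Python of how the proportional reduction could work; the exact scaling formula (scale the max bid by ceiling/total, floored at the minimum) is my interpretation, not a settled specification:

```python
# Sketch of the min/max/ceiling settlement for a single bid. The scaling
# formula (scale the max bid by ceiling/total, floored at the curator's
# minimum) is my interpretation of the proposal, not an agreed spec.

def settled_bid(min_bid: float, max_bid: float, ceiling: float,
                total_signal: float) -> float:
    """Return a curator's settled stage 1 signal for a given total."""
    if total_signal <= ceiling:
        return max_bid                             # ceiling not breached
    scaled = max_bid * ceiling / total_signal      # proportional reduction
    return max(min_bid, scaled)                    # never below the minimum

# A curator good for up to 1,000 GRT while the total stays under 20,000:
print(settled_bid(0, 1_000, 20_000, 15_000))  # 1000.0 (under the ceiling)
print(settled_bid(0, 1_000, 20_000, 50_000))  # 400.0  (scaled down)
# Note: the total itself depends on everyone's settled bids, which is why a
# round-based settlement comes up later in this thread.
```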

Effects:

  1. Curators are required to commit to signaling on a subgraph, since even a minimum of zero would still result in some signal (removing the indiscriminate bidding incentive)
  2. A straightforward way to commit to a fixed signal amount (if a curator so desires): set the minimum amount equal to the maximum amount
  3. Eliminates the need for a time-based selection process within stage 1 that might crowd out some curators

I also see some additional open questions regarding the showroom proposal that we need to build consensus around:

  • Should aggregate bidding activity be openly visible to everyone, or should every curator bid in isolation, without visibility into other bids?
  • If a deploy & signal proposal were implemented, would that signal amount be part of stage 1, or should it be considered as already existing (i.e. stage 0)? If already existing, would the stage 1 amount then form the start of the bonding curve?

I think people should be able to view the number of bids, but the bid parameters should not be publicly visible. This gives active indexers a clue as to the amount of interest, but it won't tell them how much to stake; at that point we'll likely start seeing a stake of 10 GRT to get the sync process started, with stake reallocated after we've reached a signal equilibrium.

Though this may be open for debate, I feel the subgraph dev or the project should be guaranteed a position in the showroom stage at a discounted rate (enough for a dev to profit from their work, but low enough that they are unable to rug the others who partake). Their position should be open data to the curators so they understand the potential risk.


Regarding the Indiscriminate Bidding:
If the curator does the work to validate that it is a bona fide subgraph and no other curators take part, I feel that is fine. They still need to do the work of validating and judging how much to signal. A small dApp will produce few query fees to recover their position. If others missed the boat, then it is their loss for taking 2nd position. Even if a curator bids this way on every project, they are still providing a service: winning when they correctly verify a project and signal to it, and losing on unverified subgraphs or when over-signaling a small project without the query volume to substantiate the signal.

Regarding Whale Attack Exposure: two solutions come to mind. 1: allow for an infinite ceiling option, and/or 2: require the ceiling to be a certain % higher than the signal.
Personally I would never use an infinite ceiling. If I want to signal 3,000 and per my research a fair market total signal is 300k, I would not want to wait for query fees to recover my position if a whale drops 1M.

I was trying to express this logic mathematically in an attempt to provide a clean algorithm that can be coded. I wasn't able to come up with anything, as the selection process includes time-based priority logic as well. The best I can come up with is an iterative process where bids are evaluated in rounds until no changes occur between rounds; a sketch follows below.
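For illustration, here is a minimal sketch of that round-based process in Python, reusing the min/max/ceiling scaling rule sketched earlier; the convergence tolerance standing in for "no changes between rounds" is an assumption of mine:

```python
# Round-based settlement sketch: recompute the total from current bids each
# round, rescale every bid against its own ceiling, and stop at a fixed point.

def settle_rounds(bids, tol=1e-9):
    """bids: list of dicts with 'min', 'max', 'ceiling' (GRT amounts)."""
    current = [b["max"] for b in bids]
    rounds = 0
    while True:
        total = sum(current)
        nxt = [
            b["max"] if total <= b["ceiling"]
            else max(b["min"], b["max"] * b["ceiling"] / total)
            for b in bids
        ]
        rounds += 1
        if all(abs(a - b) < tol for a, b in zip(nxt, current)):
            return nxt, rounds  # lock: no change between rounds
        current = nxt

bids = [
    {"min": 0,     "max": 1_000,  "ceiling": 20_000},
    {"min": 5_000, "max": 5_000,  "ceiling": 50_000},  # fixed bid: min == max
    {"min": 0,     "max": 40_000, "ceiling": 100_000},
]
print(settle_rounds(bids))
```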

Naturally, if 50 or 100 curators are bidding on a subgraph, we would see a much higher number of rounds to iterate before arriving at a lock. I would be interested to hear from devs on the anticipated scope for this type of process, and/or whether there is a better method to solve for the expressed requirements.


Further aspects for which the desired behavior should be specified:

  • Can a bid be withdrawn and/or modified during stage 1?
  • Can a second bid with differing bid/ceiling values be placed on the same subgraph from the same wallet?
  • Possibly the big one: transaction process details:
    • How exactly should a bid be processed? As a signal txn like today, with txn costs and taxes?
    • How do rejected bids get processed? Do they result in a second txn with additional txn costs? Who bears the costs of the return to the wallet?

Hey Oliver, here's how I pictured it should work: order the bids by the block # in which the transaction occurs, then calculate the total signal for each candidate ceiling, taking bids in block order until that ceiling is reached. After calculating the total signal for each possible outcome, use the one with the highest signal. I can prepare a T-SQL script that will do this, though someone would need to translate it to Solidity (I am still new to this language).

In the example below, the 84,000 ceiling was the winning estimate, and curators 1, 7, 8, 9, 10, 12, 14 and 15 will have shares minted at an equal value.
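Here is a minimal sketch of that selection rule in Python, under my reading of it (a bid is admitted for a candidate ceiling only if its own ceiling is at least that high and the running total stays within it); the bids are made up for illustration, not the figures above:

```python
# Sketch of the block-ordered ceiling selection (assumed reading of the rule
# above): for each candidate ceiling, walk the bids in block order, admitting
# a bid while the running total stays within that ceiling and within the
# bidder's own ceiling. The candidate producing the highest total wins.

def select_ceiling(bids):
    """bids: list of (block_number, amount, ceiling) tuples."""
    ordered = sorted(bids, key=lambda b: b[0])  # priority by block #
    best_total, best_set = 0, []
    for _, _, candidate in ordered:  # each bid's ceiling is a candidate
        total, accepted = 0, []
        for block, amount, ceiling in ordered:
            if ceiling >= candidate and total + amount <= candidate:
                total += amount
                accepted.append(block)
        if total > best_total:
            best_total, best_set = total, accepted
    return best_total, best_set

bids = [(100, 10_000, 50_000), (101, 30_000, 80_000), (103, 25_000, 60_000)]
total, winners = select_ceiling(bids)
print(total, winners)  # 55000 [101, 103] -> the 60,000 ceiling wins
```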

  • For initial development, I feel bids should be final. I do not like the concept of withdrawing bids, and thus do not like the idea of modifying bids either.
  • A second bid with differing bid/ceiling values should be allowed; preventing it would be too easy to game anyway (e.g. by bidding from a second wallet), so there is no need to try.
  • The bid should be processed by writing to a smart contract (the curator covers the gas fee on this transaction). I lean toward the tax being applied only if you make it into the winning bid, but this tax should be higher than the standard signal tax (5%). At the end of the contract period, this places the initial signal on the subgraph and then opens it to the public (see the settlement sketch after this list).
  • Rejected bids will have their GRT sitting in the contract, at which point the curator should go back and 'reclaim' the GRT. The gas for reclaiming these funds should be up to the curator.
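A minimal sketch of that settlement flow, assuming a 5% showroom tax charged only on winning bids (the contract mechanics and tax rate come from the bullets above; everything else is illustrative):

```python
# Settlement sketch under the assumptions above: winning bids pay a 5%
# showroom tax (higher than the standard signal tax) and their net GRT
# becomes the subgraph's initial signal; rejected bids sit in the contract
# until the curator reclaims them (gas on the curator).

SHOWROOM_TAX = 0.05  # assumed showroom tax rate, per the bullet above

def settle(bids, winners):
    """bids: dict curator -> GRT; winners: set of winning curators."""
    initial_signal = 0.0
    reclaimable = {}
    for curator, amount in bids.items():
        if curator in winners:
            initial_signal += amount * (1 - SHOWROOM_TAX)  # tax on entry
        else:
            reclaimable[curator] = amount  # awaits a reclaim transaction
    return initial_signal, reclaimable

signal, pending = settle({"c1": 10_000, "c2": 30_000, "c3": 5_000}, {"c1", "c2"})
print(signal, pending)  # 38000.0 {'c3': 5000}
```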

@DataNexus, @ari and I met for a brief workshop today to review the scope of the discussion thus far and gain a better understanding of the feasibility and effort of implementing the showroom concept. A key conclusion is that advanced in-showroom logic and parameters tend to have negative impacts on both the implementation timeline and protocol costs. The introduction of a ceiling to further improve the initial idea is thereby a key component adding complexity: it would require more in-depth specification and would likely extend the dev timeline and possibly the audit scope.

Recalling: these are the primary issues that the showroom concept aims to solve:

  • Minimize/eliminate risk for frontrunning the bonding curve
  • Provide reasonable amount of time for subgraph validation to be performed by curators

We have discussed several valid ideas in this thread to optimize the signaling process in the showroom, and the ceiling concept was a key suggestion among them. Going back to the very initial thinking of the showroom would remove development complexities and thus provide a quicker path to releasing a minimum viable product (MVP), which in turn would give the curator community a solution to their primary pain points in a more timely manner. To recap, the MVP requirements would be what @DataNexus described in his initial post:

The one identified concern thus far to solve for remains what @chris described in his initial response.

I would recommend the community provide feedback on whether to proceed with an MVP or continue discussing a larger-scope, optimal solution. It is difficult to project the exact difference between the two; from a timeline perspective it could be weeks to months, depending on what exact solution the community settles on.

Please share your feedback here in this thread. Also, please present any other ideas (outside of the showroom) that would address the primary issues. I have added the poll below to aggregate community feedback.

Question: How should we now proceed in scoping out the showroom for the initial release?

  • MVP (faster)
  • Full scope solution (longer)

In the first two days, people should be allowed to buy shares on a "temporary" bonding curve. At the end of the two days, we will have a number of shares purchased and an amount of GRT signaled, so an actual price for the shares can be set; everyone who signaled during the showroom then gets shares based on how much they signaled. This way there is no rush to signal when an important subgraph gets deployed, because you have two days to buy.

That is the correct idea. Everyone who participated in the showroom stage enters the curve at the same share value. Curators need to assess legitimacy, market effect and estimated query volume accurately to win in this game (all things we're expected to do).

I think it's critical for the showroom to have the ceiling functionality before release, as signaling with no ceiling leaves curators without the ability to control a price range for the shares; I believe this may be just as undesirable as the current bot attacks.

I would propose a short-term solution whereby we implement a punitive thaw period for early signalers. I saw that Oliver proposed a "sliding scale" thaw period whereby the 1st signaler assumes the longest thaw period, which then reduces for subsequent signalers. If we made the 1st signaler assume a 90-day thaw period, for example, I believe that would make genuine curators and bots alike much more prudent with their decisions; a sketch of one possible schedule follows below.
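As an illustration only, here is a minimal sketch of such a sliding-scale schedule; the 90-day starting point comes from the post above, while the decay step and floor are hypothetical parameters:

```python
# Hypothetical sliding-scale thaw schedule: the 1st signaler gets the longest
# thaw (90 days, per the suggestion above); each later position thaws sooner,
# down to an assumed floor. The step and floor values are illustrative only.

FIRST_THAW_DAYS = 90   # from the post above
STEP_DAYS = 10         # assumption: each later signaler thaws 10 days sooner
FLOOR_DAYS = 28        # assumption: never below the protocol's normal thaw

def thaw_days(position: int) -> int:
    """Thaw period for the Nth signaler on a subgraph (1-indexed)."""
    return max(FLOOR_DAYS, FIRST_THAW_DAYS - STEP_DAYS * (position - 1))

print([thaw_days(p) for p in range(1, 9)])
# [90, 80, 70, 60, 50, 40, 30, 28]
```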

For the long-term solution, I love the concept of the showroom with ceiling.

Thanks for all the work on this; once we get this aspect of curation optimized, we will be much closer to a mature and effective process!


I agree, as I too believe signaling with no ceiling leaves curators without the ability to hold a set price range for the shares.

The sliding scale seems like a great idea; curators would be forced to think twice before just hopping into a subgraph without verification.

While the ceiling functionality is inarguably a better system, I guess the question is: do you like the ceiling more than you dislike the bots/front-running?

I don't think it's necessarily an either/or choice. That's why I'm proposing the highly punitive thaw concept as a way to deal with the bot attacks in the short term while we work on the full showroom product (ceiling).

If we can't do the thaw functionality, then I think it's about a toss-up for me between bot attacks and a showroom with no ceiling.

Forgive me, as perhaps I'm missing a bit of the maths behind the ceiling concept, but I'd like to understand how the ceiling would actually change things.
If, for example, the showroom ended and there were 100k GRT signalled, and 100 shares minted, then each share on that subgraph would equate to 1000 GRT (at the 1:1 initial price once it came out of the showroom).
In a second subgraph, if the showroom ends with 20k GRT signalled, and again 100 shares minted, then the curation shares for that subgraph would each be worth 200 GRT.
They donā€™t need to correspond, right?
Similar to how, on an AMM like Uniswap, the number of LP tokens in one pool has nothing to do with the number in the next.
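In other words, each pool prices its shares independently as pool GRT divided by shares outstanding; a trivial sketch with the numbers above:

```python
# Per-subgraph share value after the showroom: pool GRT / shares minted.
# The pools are independent, like LP tokens in separate AMM pools.

def share_value(pool_grt: float, shares_minted: float) -> float:
    return pool_grt / shares_minted

print(share_value(100_000, 100))  # 1000.0 GRT per share
print(share_value(20_000, 100))   # 200.0 GRT per share
```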


My initial idea was that the ceiling is a precaution for curators to say "I'm good for 1,000 as long as it doesn't go above _______". If a new subgraph is posted, I would verify legitimacy and then try to predict a fair market signal based on how much GRT I expect it to produce. A small project may only warrant 20k total signal. If I'm the first to participate in the showroom and signal 1,000, it opens the door to someone coming in after me and signaling 50k, which is now a bad position for me.

In regards to the initial shares, that amount would depend on the total signal (the pool brought in 50k, which mints shares equal to what I would mint if I signaled 50k on a subgraph with 0 signal), but the shares would be split proportionally.
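A sketch of that pro-rata split; the total-shares figure is a placeholder for whatever the curve would mint for a 50k deposit at zero signal (I am not reproducing the actual bonding-curve math here):

```python
# Pro-rata split of the showroom mint: the pool mints what a single 50k
# signal on an empty curve would, then splits the shares by contribution.

def split_shares(contributions, total_shares):
    total = sum(contributions.values())
    return {who: total_shares * amt / total for who, amt in contributions.items()}

# Placeholder share count for a 50k deposit on an empty curve:
print(split_shares({"a": 10_000, "b": 15_000, "c": 25_000}, total_shares=100.0))
# {'a': 20.0, 'b': 30.0, 'c': 50.0}
```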


Since the pool would be bootstrapped by the initial funds raised in the showroom, the basement price when you pull out should be the GRT/curation-share price as defined after the showroom closes (minus the curation tax, of course). The shares represent a pro-rata portion of the pool, so if you get in and the amount signalled is higher than you'd like, you could simply sell off your shares immediately and you should get the same rate (minus the tax). It's not a two-sided reserve like on an AMM, so you should be in at the basement price.


That's right. Since everyone has the same share price at first entry, your share value stays the same when you go to redeem shares. If you don't like where your position is once it starts, you exit and lose the tax. I do think a higher tax is warranted at this point of entry (5%).

Similar to how, if you are the only person signaling on a subgraph and you sell half your shares, the remaining shares don't go down in value.
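A quick worked example of that max-loss bound, assuming the 5% entry tax discussed above and no depreciation in share value:

```python
# Worked example (assumptions above): the 5% tax is charged on entry, so an
# immediate exit at the unchanged showroom share value loses exactly the tax.

deposit = 1_000.0
tax = 0.05
position_value = deposit * (1 - tax)  # 950.0: value at the showroom price
exit_proceeds = position_value        # same rate out, no depreciation
loss = deposit - exit_proceeds
print(loss / deposit)  # 0.05 -> max loss is the 5% tax
```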

So is that not effectively providing the same thing that a ceiling bid would provide?

Ok, I admit I didn't catch this nuance earlier. If the bidder is only out the 5% tax, with no depreciation in share price, then I believe I might actually be on board with this concept. Although I think all subgraphs would tend to get overbid during the showroom phase, because there is really very little penalty for being wrong.

I think it is a workable system for now: your max loss is 5%, or you can hold your position until query fees make up your position, or until people enter after the showroom stage is done. This dynamic shifts the incentives:

The first position currently has the lowest risk and highest reward potential; this change would increase their risk (from 2.5% to 5%) and lower their reward potential (since the initial mint is shared with many others), but it would also focus people more on query fees. If the subgraph receives no position 2 entries (no one entering after the showroom), you will rely solely on query fees to recover the 5% tax. All while eliminating the bad behavior of skipping the verification and evaluation process.


I see your point: in one example your max loss is 5%; in the ceiling example, I was assuming the tax would only apply if you made it into the initial mint.