One product suggestion is to add some details about the creator of the subgraph on the explorer. At the moment, there’s not enough info to enable curators to dig into the backgrounds of subgraph creators before deciding to curate.
Expanding from the Discord:
An idea that could possibly become an RFP: something like a “graphgecko”, a page which tracks advanced data and metrics on subgraphs. For example (apart from fee history and GRT signalling, which are already available):
-More detailed usage metrics
How many unique addresses query the subgraph / how frequently / average fees per query
Which are the most called subgraphs of the hour/day/week/month?
-Signaling turnover stats:
Average time a curator stays signalling (or median time, to account for long-time/dev signallers)
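The usage metrics above could, in principle, be aggregated from raw gateway query logs. Here is a minimal sketch of that aggregation, assuming hypothetical log records with `subgraph`, `address`, and `fee_grt` fields (these field names are assumptions, not an actual Graph API):

```python
from collections import defaultdict

def usage_metrics(query_log):
    """Aggregate per-subgraph usage stats from a list of query records.

    Each record is assumed to look like:
      {"subgraph": "...", "address": "0x...", "fee_grt": 0.001}
    """
    stats = defaultdict(lambda: {"queries": 0, "fees": 0.0, "addresses": set()})
    for rec in query_log:
        s = stats[rec["subgraph"]]
        s["queries"] += 1
        s["fees"] += rec["fee_grt"]
        s["addresses"].add(rec["address"])
    # Reduce to the headline numbers a "graphgecko" page would show
    return {
        name: {
            "unique_addresses": len(s["addresses"]),
            "avg_fee_per_query": s["fees"] / s["queries"],
        }
        for name, s in stats.items()
    }
```

Ranking the result by query count over a sliding hour/day/week window would give the “most called subgraphs” leaderboard mentioned above.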
Also, being one step removed from the “official” graph explorer (which understandably should stay as “neutral” as possible) could allow for new possibilites:
-Identity verification for projects:
Devs can signal “official” dApp graphs
Also individual developers which create multi-dApp, task-specific graphs can verify their work in a more visible way than an address
(may become a moot point if some kind of ENS support like with indexers is implemented)
Users can downvote bad graphs / flag malicious addresses
Profitability analytics: How many of current curators are in profit? Did some OG signalers “cash out” big, leaving the rest, or is it more distributed?
Just throwing around ideas here, but there seem to be a lot of possibilities. I think there is a discussion to be had about how much should be “enforced by the protocol” and how much moderated by the community.
I think it would be possible to add a verified badge on a subgraph through https://github.com/ceramicstudio/identitylink-services/. If we have a list with the official partner deployment addresses we can do a check on the address and the Github account of the deployer.
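The check itself could be very simple once a partner list exists. A minimal sketch of the idea (the partner list and function names here are placeholders, not the actual identitylink-services API):

```python
# Hypothetical mapping of official partner deployment addresses to the
# GitHub account each partner is known to use. In practice this list
# would be maintained and the GitHub link proven via identitylink.
OFFICIAL_PARTNERS = {
    "0xPartnerDeployAddress": "partner-github-org",
}

def is_verified(deployer_address, linked_github_account):
    """True only if the deployer address is on the partner list AND its
    identity-linked GitHub account matches the expected one."""
    expected = OFFICIAL_PARTNERS.get(deployer_address)
    return expected is not None and expected == linked_github_account
```

A subgraph passing both checks would get the verified badge; failing either check simply means “unverified”, not necessarily malicious.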
What data points do you look at?
-How long have they been in The Graph
-What subgraphs have they made, if any have generated query fees
-Are they indexing or delegating in addition to their new deployment
I like those metrics and I can see how those would help with selecting subgraphs to signal on. Have you seen the graphtronauts site? Network - Graphtronauts
As a curator, if there is a good subgraph, I have no problem leaving/signaling my GRT for an extended period of time.
But I’m up against this bonding curve for curation rewards. Why? It defeats the purpose of digging deeper into whether a subgraph is worthy of curation or not. Add to that the fact that it gives bad actors an opportunity to game the curation system: if they get to signal first, they can exit the moment there is substantial profit. Late entrants are at the bad actors’ mercy.
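To make the early-entrant advantage concrete, here is a toy square-root bonding curve (total shares = sqrt of total reserve). This is a simplification for illustration only, not the protocol’s actual curve parameters:

```python
import math

def shares_for_deposit(reserve, deposit):
    """Shares minted for `deposit` GRT on a toy curve where
    total_shares = sqrt(total_reserve). Earlier deposits sit on the
    steep part of the curve and mint far more shares per GRT."""
    return math.sqrt(reserve + deposit) - math.sqrt(reserve)

early = shares_for_deposit(0, 1000)       # first signaler, 1k GRT
late = shares_for_deposit(10_000, 1000)   # same 1k after 10k already signaled
```

With these numbers the first signaler mints several times more shares than a later one for the identical deposit, which is exactly the edge a front-running bot captures.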
A way to flag malicious subgraphs is a solid idea, but there would need to be increased awareness among newer entrants to curation (like myself) to be able to sniff out something malicious. I would imagine that as more and more subgraphs are created, it will become much harder to discern a bad subgraph from a lesser-known subgraph that provides real utility. It was a little more obvious on day 1 that something was up, because only 10 subgraphs had been moved over, but as lesser-known projects use the Subgraph Studio, etc., “what constitutes maliciousness?” and “how can it be identified?” would be good info to have. There have been some good suggestions on this thread, and I liked your suggestion of allowing devs to create “official” subgraphs (I envision something like the Twitter check mark).
I think the initial suggestion, adding a thawing period to the earliest minted shares, would be a sufficient deterrent.
I do not fully know the unintended ramifications the following might have, and perhaps it’s not even feasible, but could introducing a slashing mechanism be useful? Seeing as the only person who stands to gain from deploying a bogus subgraph is the user who deploys it (by signaling to it either with the same address or an associated one), could there be a requirement to stake in order to deploy a subgraph? In the event a subgraph was flagged as malicious, the creator could then be slashed.
This is also probably overkill, but could a reputation score be introduced? There is utility in reviewing the current state of a subgraph creator’s profile, but something succinct like a score could make it easier to discern good from bad actors. Positive items could be things like subgraph creation, amount signaled to created subgraphs, query fees (which could also affect a curator’s score, rewarding them for participating and for signaling to a useful subgraph), time active in the ecosystem, etc. The score could be negatively impacted by users down-voting bad subgraphs, lack of query fees or signal on a subgraph over an extended period of time, using an address new to the ecosystem, etc.
The user who created the MakerDao, Compound, and Balancer subgraphs, for example, used a fresh Ethereum address so as not to use the address they had been using in the ecosystem since February (at least that’s my guess). That’s an easily noticeable red flag.
You could deploy and rugpull from different addresses. That’d be too easy to game.
I feel the biggest issue put simply is that the structure in place rewards opposite behavior of what the curator role is supposed to achieve.
I believe a 28 day thawing period would only allow the bot to get in first once again, and then lock in a whole bunch of people for 28 days as well. The bot would then most likely leave after 28 days to the second before anyone else could get out, effectively creating a bigger rug pull. I like the idea having three phases to curating. The first being the developer can curate on deployment, the second being a period of time where everyone can verify and vet over a course of 1-2 days (the show room) and then all of those people (bot included) can get in at the same price, and then third regular bonding curve as is now.
It is also interesting to think about having the earlier you are, the longer you are locked in… however, I believe the bot would still sell out as soon as it could, leaving the long-term investor looking for QFs still bag holding.
There shouldn’t be a greater reward for apeing/botting in one minute after a subgraph is deployed than there is an hour after, once you have done minimal research, confirmed that the devs are legitimate, and completed basic estimations on query fees; or two weeks after, once you’ve reviewed the code, seen the query fees the subgraph generates, and confirmed that it is working as intended.
I came here to pretty much say this also.
A thawing period won’t fix the auto-bot behaviour; it will just compound the bot’s profit and make them wait 28 days to release it.
The bonding-curve makes the first entry far too powerful for the following curators.
How about a 24-hour grace period for any new subgraph, where curators can do their due diligence on the subgraph, and any early signaler gets a linear curve share dependent on how much GRT they want to signal?
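A minimal sketch of what that linear grace-period allocation could look like (all names illustrative): during the window, everyone pays the same price per share, so entry order confers no advantage.

```python
def grace_period_shares(deposits):
    """Linear allocation during the grace period: shares are simply
    proportional to GRT signaled, regardless of entry order.

    `deposits` maps curator address -> GRT signaled during the window.
    Returns each curator's fraction of the subgraph's shares.
    """
    total = sum(deposits.values())
    return {addr: grt / total for addr, grt in deposits.items()}
```

A bot signaling in the first second and a curator signaling in hour 23 with the same GRT end up with identical shares, which removes the speed incentive for the opening window.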
2.5% tax on signal - Is this the correct amount? Should it be 0, lower, higher, it’s just right?
My take is that we’re still too early to know, but my gut says that 2.5% isn’t a big deterrent.
Should this amount go somewhere specific or just be burned?
Should there be a tax when burning your shares back into GRT?
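For reference, the arithmetic of the current one-time tax is straightforward; a sketch assuming the tax is deducted from the deposit before signal is minted:

```python
TAX_RATE = 0.025  # the 2.5% curation tax under discussion

def net_signal(grt_in):
    """Split a GRT deposit into the amount actually converted to
    signal and the amount taken as tax (currently burned)."""
    tax = grt_in * TAX_RATE
    return grt_in - tax, tax
```

So a 10,000 GRT deposit loses 250 GRT up front, meaning the position must appreciate roughly 2.56% just to break even before any exit-side costs.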
There’s a lot being talked about in this thread, but i’d like to just drop some thoughts on this one aspect.
In general, for any participant, I would like to see GRT defined as being ‘in protocol’ when it’s deposited for use in one of the functions/roles of the network. Once ‘in protocol’, you should be able to move it around as you please, or at least with much less penalization than ‘exiting’ the protocol.
In the case of Delegators, this would give them more freedom to move between different Indexers without a thawing period, or a reduced one at least. (Just an example, sticking to the topic…)
For Curators - I don’t think the tax should apply until they exit the protocol. Perhaps they could cash their signal/shares in and out within the protocol (for GRT), and thus move their collateral between different subgraphs, without full taxation being applied as it is currently.
***Disclaimer: I’m unaware of the technical challenges/contract limitations on such changes.
These kinds of changes for participants (of either kind) could compound and create an environment that’s much more forgiving to those who partake in long-term involvement within the protocol. This is the outcome we’d like to see as the norm, after all.
I’m not sure I’d say there should be no tax or no restrictions on moving around within the protocol, but I think the vast bulk of the taxation and thawing should mostly be a consideration that’s made when a participant wants to exit the protocol, lessening the impact on those who stick around performing their roles.
As to the rest of what you said - I too think it’s too early to gauge whether the current variables are too high/low or not. But I also think the way the mechanics work needs some consideration first. Any changes to the mechanics could move the goalposts enough for any currently ‘good’ numbers to suddenly become irrelevant.
All subgraphs must migrate via the new Subgraph Studio. If each subgraph was subject to a mandatory 28 day “viewing period” prior to migration, Curators would have ample time to verify its veracity and legitimacy. This would eliminate the necessity of making rapid decisions based on minimal info. It would eliminate forked subgraphs and allow the community to suggest alterations that they would find beneficial- which raises the quality level of the protocol.
A key element of this method is that it does not require any alterations to the protocol, which removes a massive amount of cost, labor, and time required to create, test and implement protocol alterations.
What if we replaced the bonding curve with what I would call a timed-release curve?
If you signal 1k, that’s the share you get. The longer you leave it in, the more that share increases.
So curators that signal early get a higher rate of increase as it compounds. As the subgraph matures, the rate of compounding decreases for those that signal after but doesn’t decrease for those that are already signaling. Basically it locks that rate once you signal. This would keep the incentive of wanting to signal early to get the better rate but the maturity of that rate would happen right away.
This would incentivize curators to think long term and signal subgraphs that will last (causing curators to do more research before signaling).
I think it would also disincentivize over-signaling, since the more mature the subgraph got, the less incentive there would be to signal.
You could possibly even tie it to the allocation an indexer has given it so that if allocation was high it might mean there’s high demand therefore more signal could be warranted and so the curve would slightly increase to incentivize more curation.
Each curator’s addition to the subgraph would not directly affect other curators, basically sandboxing the experience for each curator, disincentivizing bots and bad actors from rug pulls.
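A rough sketch of that timed-release idea, with illustrative constants (the rate formula and numbers are assumptions, not a worked-out design): each curator locks in a growth rate at entry that decays with subgraph maturity, and each position compounds independently.

```python
def locked_rate(total_signal_at_entry, base_rate=0.10, decay=1e-5):
    """Per-epoch growth rate locked in at signal time. The rate is
    lower the more total signal the subgraph already has, so early
    curators keep a better rate, but it never changes afterwards."""
    return base_rate / (1 + decay * total_signal_at_entry)

def share_value(principal, rate, epochs):
    """Each curator's position compounds independently at their locked
    rate -- 'sandboxed', so no one else's entry or exit affects it."""
    return principal * (1 + rate) ** epochs

# Early curator (0 GRT already signaled) vs late curator (1M GRT signaled)
early = share_value(1000, locked_rate(0), 10)
late = share_value(1000, locked_rate(1_000_000), 10)
```

The early position grows at the full base rate while the late one compounds much more slowly, preserving the early-signal incentive without letting anyone’s exit move another curator’s value.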
Questions posed to me:
| How do you think this will affect the relationship between signal and query fees?
Bottom line: query fees are king. The share value is only relevant in the context of curating. The moment curators want to exit, they would take their original 1k plus any query fees they received based on the share value. The share value is based on:
-when they started to signal
-the amount they signaled
-the total amount of signal on the subgraph
As more people begin to signal, existing curators’ share value goes up, but they also have to share the query fees. A curator that signals later has to put more GRT up front in order to match the first curator’s shares. Mutual interests with a separation in value.
| Would this make it possible to farm APY on a subgraph without it ever getting queries?
In this scenario no but maybe there’s a way to accommodate for that idea if it creates value for the long term vision of the ecosystem.
Because this model relies heavily on query fees two things should be considered:
Long term - fees need to be lucrative enough for curators to stick around and offset any expenses incurred during the process
Short term - the Hosted Service becomes a crutch that teams can continue to use for the foreseeable future. One idea could be to create a grants program that provides initial GRT funds to teams that migrate over to mainnet. If we are here to make The Graph ecosystem grow into a force that truly enables and empowers teams in a Web3 world, then we should invest in that future by providing some stepping stones out of a centralized system. Probably lots to consider here, as I’m sure there are complexities that I might not be aware of.
It would be a considerable shift in what it is now so not sure how feasible it would be but maybe there’s some ideas here that could be cherry picked or polished to be better suited for where the system is now. Just some of my thoughts at early morning cause I couldn’t sleep and stop thinking about it lol.
Would love to hear opinions, thoughts, things I didn’t think about or know about.
I am in no way technically qualified to know if something like this would work or not, but here's what I was thinking..
Would it be possible to place a share:epoch hold on a curator’s signal? Ie - a bot quickly jumps into a promising (or even illegitimate) subgraph with a large amount of GRT. As we know, the initial shares are heavily discounted, per the bonding curve. Let’s say the bot obtains 30 shares - would it be possible/beneficial to place a hold on the signal funds that correlates with the amount of shares obtained? Maybe it’s a 1:1 ratio of shares:epochs or something similar. This would require earlier investors (receiving discounted shares) to commit to their investment for a longer period of time, and allow for “late” investors to perhaps not fall victim to profit scalping bots/individuals.
Even if it were a progressive unlock… 30 shares, 30 days, 1 share unlocks per day.
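The progressive unlock described above is easy to express; a minimal sketch assuming a 1 share : 1 epoch schedule (the ratio is just the example from this post):

```python
def unlocked_shares(shares_minted, epochs_elapsed, shares_per_epoch=1):
    """Progressive unlock: shares become withdrawable linearly over
    time, capped at the total minted. With the 1:1 default, a curator
    holding 30 shares needs 30 epochs for a full exit."""
    return min(shares_minted, epochs_elapsed * shares_per_epoch)
```

Because the lock scales with shares obtained, the bot that mints the most discounted shares is also the one locked in longest, while small late signalers can exit almost immediately.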
Could be an awful idea, but I wanted to put it out there nonetheless
Hey ohmyjog, an inverse bonding curve to apply thaw on early signaled shares is something that we’ve been discussing and right now it has pretty high acceptance in the community that this is a good idea.
Although I believe I was the first to bring it up since launch, I am currently more in favor of the 2 stage curating system (I called it showroom and production, I think the idea was originally called bootstrapping and production).
Appreciate the feedback, I will look into this method. Thanks!
One feature request: Would love some kind of bonding curve visualization to help me choose whether to signal or not on a subgraph.
Right now it’s kinda just raw numbers and hard to parse.
The simple Curator bonding curve example in this Figment.io article is a perfect example of what would be useful on the dashboard for us to visualize our potential share of deposits and historical price(s) over time on a subgraph:
@TheBondsmith alluded to this with the idea of some sort of verification. We have a list of meaningful indicators of a subgraph’s validity, so why don’t we charge a verification fee on subgraph deployment that funds a search/query of the project’s Discord page, their GitHub, etc., and provides a reliability ranking for each of the subgraphs posted, similar to what the SEC/auditors do?