Why did you delete the part mentioning how your legal team swiftly approached you?
It was very interesting
Part 1: The purpose of curation & foreshadowing some Horizon concepts.
The Graph protocol, stripped to the bones, requires at least three roles, which I will simplify to caricatures:
- The Subgraph developer authors and publishes subgraphs. They play a vital role in growing the usage of The Graph in the same way that smart contract developers grow usage for blockchains. So, ideally, we would like to incentivize subgraph authorship because our whole thesis of decentralization revolves around incentivizing value-add behavior. It is possible that a subgraph developer is altruistic and publishes subgraphs for free as a public good. But ideally, they have some financial motivation. They could derive extrinsic benefits from the Subgraph existing, for example, when the Subgraph enables a dapp. Or, if the Subgraph is valuable to the network, maybe they could take a cut of the Subgraph's query fees themselves. (Unfortunately, the latter incentive mechanism is problematic in the context of The Graph, as we will see. Nonetheless, it is helpful to keep this idea in mind when considering the design goals of curation.) The Subgraph developer wants to publish their Subgraph to The Graph rather than run infrastructure. (If the dapp developer is the only one that can run the infrastructure, the dapp is not decentralized.) Hopefully, subgraph developers want their Subgraph to bring value to consumers.
- The consumer demands queries for data indexed by subgraphs. They might be interested in raw data for analytics. More likely, they are using a dapp that depends on dynamic blockchain data. The consumer wants to get interesting data cheaply, verifiably, at low latency, and with high reliability.
- The indexer is profit-motivated and should be incentivized to serve data to consumers. Ideally, they are rewarded when they bring consumers value (interesting, cheap, low-latency, verifiable queries.) Their core competency is running infrastructure. So, they want decision-making to be automated when possible.
These roles form a symbiotic relationship. Each depends on and benefits from the specialized work of others through the medium of exchange using The Graph protocol! Because of the symbiotic relationship, value for one group can accrue to the others in a flourishing ecosystem. (I apologize if this is pedantic. I promise all of this setup is critical to understanding curation.)
An indexer wants to automate their decision-making but needs to make intelligent choices about which subgraphs to index. They want to profit by maximizing revenue and minimizing costs. To do this, they must know how expensive subgraphs are to index, how much demand there is for queries, and how well they would fare in the market against competing indexers. None of these can be predicted perfectly, but there is value in minimizing uncertainty and risk. (Remember - when you reduce uncertainty, this value can accrue to all protocol users because of the symbiotic relationships that exist through the medium of exchange.)
How can indexers reduce uncertainty to automate this decision-making process? Here are a couple of ideas that don't pass muster.
Maybe an Indexer could observe historical data about query fees. By observing on-chain payments, an Indexer may gain insight into how much value flows through the network for a particular subgraph and which indexers compete in that market. There are a few problems with this idea. First, if The Graph is permissionless, someone could send large payments for spoofed queries through the system. They could impersonate a consumer and indexer by sending payments to themselves without actually serving queries. There are various reasons to do this, from creating the illusion of demand to just griefing indexers. So, indexers should not trust historical query volume indicators.

There is a mitigation, but it's not foolproof. The protocol could burn some percentage of query fees. (The protocol does this today, unfortunately.) The more you burn, the more of a deterrent to fee spoofing, but also the more friction for real consumers and indexers, who eat the extra cost for legitimate traffic. Punishing everyone rather than offenders (which, I repeat, is how The Graph approaches problems today) is not just distasteful. It raises a serious question. Why should consumers and indexers send query fees through The Graph at all!?!? Shouldn't users prefer the cheaper option of transacting via ERC-20 token transfers? (This problem is called "protocol disintermediation" and is one primary motivator for Horizon's design, but we're getting ahead of ourselves.)

Another problem with observing historical query fees is that it doesn't solve the bootstrapping problem. When a subgraph is first deployed, there is no historical data that indexers can look at to reduce revenue uncertainty. Yet another problem is that the solution incurs a linear cost from regularly publishing query fees on-chain per Subgraph. Linear recurring overhead hurts everybody in the ecosystem. The high prices resulting from this design make The Graph unviable for the long tail. Even without these problems, this solution would be incomplete because it does not address the variable cost of indexing.
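To make the burn tradeoff concrete, here is a toy model (the burn rate and amounts are invented, and gas and other costs are ignored): the very burn that deters a spoofer is paid in full by honest traffic, too.

```typescript
// A toy model of the fee-burn tradeoff described above. The burn rate and
// numbers are hypothetical; this is a sketch, not the protocol's actual math.

/** Net cost for an attacker who routes `spoofed` GRT of fake query fees
 *  back to themselves: they lose only the burned portion. */
function spoofCost(spoofed: number, burnRate: number): number {
  return spoofed * burnRate;
}

/** Extra cost legitimate participants pay on `realFees` GRT of genuine
 *  query fees, relative to a plain ERC-20 transfer. */
function legitimateFriction(realFees: number, burnRate: number): number {
  return realFees * burnRate;
}

// A 1% burn makes faking 1M GRT of demand cost only 10k GRT...
console.log(spoofCost(1_000_000, 0.01)); // 10000
// ...while honest participants eat the same 1% on real traffic, which is
// exactly the disintermediation pressure described above.
console.log(legitimateFriction(1_000_000, 0.01)); // 10000
```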
One option that should come off the table immediately is central planning. There could be oracles, or we could have permissions on who can pay into The Graph to limit query spoofing. The Graph employs these solutions today, which is unacceptable for a decentralized protocol. Decentralized implies permissionless.
One idea to remove the problem of cost uncertainty would be to use static analysis. Predicting the amount of compute required to execute a function in the general case is impossible due to the halting problem. We could resort to Turing-incomplete languages, but the tradeoff is sacrificing expressiveness, power, and familiarity. Subgraph developers may need to learn new concepts or languages before developing a subgraph. Some Subgraphs may not be possible to build at all. Since subgraph developers grow the ecosystem, taking on this kind of restriction would necessarily reduce the size of the market. Even if we have static analysis, we do not know the size of the data because of data dependencies between the code and chain, as in the case of spawned data sources. Also, some data to be crunched will be produced in future transactions. This is a non-starter.
With a few ideas tossed aside, curation was conceived as a prediction market for query fees. A cut of each Subgraph's fees is paid into a revenue stream for curators. Profit-motivated curators bring outside information to the chain by purchasing shares. In equilibrium, we might expect that the shares purchased correlate with the size of the revenue stream. This is because if one Subgraph has too much curation and another too little, individual curators would be better off rebalancing. Mix in the wisdom of the crowds and get reliable signals. At first glance, this appears to be a brilliant and elegant solution to our problem. Spam prevention happens naturally through the cost of capital required to own shares over time. The revenue stream offsets that cost of capital. There is an exchange and specialization of services - indexers and consumers pay curators through the curation tax and receive valuable off-chain information predicting demand.
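Here is a minimal sketch of that intended equilibrium (the fee streams and signal amounts are invented for illustration): when per-token yields diverge across subgraphs, curators profit by rebalancing, which pushes signal toward proportionality with fees.

```typescript
// A sketch of the intended equilibrium: each subgraph pays a fee stream to
// its curators, so a curator's yield is their share of signal times the
// stream. If yields differ, rebalancing pays. All numbers are illustrative.

interface SubgraphMarket {
  feeStream: number; // GRT per period paid to curators
  totalSignal: number; // GRT signaled by all curators
}

/** Yield per GRT of signal on a subgraph. */
const curatorYield = (m: SubgraphMarket) => m.feeStream / m.totalSignal;

const a: SubgraphMarket = { feeStream: 100, totalSignal: 10_000 }; // 1.0%
const b: SubgraphMarket = { feeStream: 100, totalSignal: 40_000 }; // 0.25%

// Subgraph B is "over-curated": a curator there earns less per token, so
// they are better off moving signal to A until yields equalize. In that
// equilibrium, signal is proportional to the fee stream - the signal
// indexers were meant to read.
console.log(curatorYield(a), curatorYield(b));
```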
A couple of additions are needed to solve problems with this idea, but these fixes will come at a cost. The first problem is that since each Subgraph is a separate market, we cannot expect enough liquidity or trading volume on each to enable accurate price discovery. Without price discovery, we would lose the accuracy of the signal - the purpose of this system. This is where bonding curves come into play. With bonding curves, the protocol becomes an automated market maker, ensuring there is always liquidity to meet demand for price discovery. Additionally, bonding curves enable the idea that a curator could be rewarded for curating on a subgraph early.
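For intuition, here is a bonding-curve sketch with a made-up linear price function - not The Graph's actual curve or parameters - showing how the contract can always quote a price and why early curators pay less per share:

```typescript
// A minimal bonding-curve sketch. The curve shape and constant are
// illustrative: price rises linearly with outstanding shares, so the
// contract can always make a market, and earlier buyers pay less.

const SLOPE = 0.001; // hypothetical price slope, in GRT per share per share

/** Spot price for the next share when `supply` shares already exist. */
const spotPrice = (supply: number) => SLOPE * supply;

/** GRT required to mint shares from `supply` up to `supply + amount`
 *  (the area under the linear price curve). */
function mintCost(supply: number, amount: number): number {
  const s1 = supply + amount;
  return (SLOPE / 2) * (s1 * s1 - supply * supply);
}

// The first 1000 shares cost 500 GRT; the next 1000 cost 1500 GRT.
// This is the "reward for curating early": the same shares, bought
// sooner, are cheaper and can later be sold back up the curve.
console.log(mintCost(0, 1000), mintCost(1000, 1000));
```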
Indexing rewards are not necessary for curation to exist. Reading the above, it is not clear they fix a problem with curation per se. In fact, I will later argue that the two concepts are mutually incompatible and must not be integrated. But, it would be remiss to leave them out of the description of the current system, and I will grit my teeth and try to justify their inclusion as a part of curation.
Indexing rewards were conceived as a mechanism to bootstrap the network. Initially, consumers won't use the network without supply, and indexers won't use the network without demand. This is a chicken-and-egg problem. We could bootstrap by incentivizing either side to come to the network first. The supply side was chosen. There needs to be some mechanism to distribute these rewards, and what better way than curation because it outputs a prediction of future query fees? (This reasoning is faulty, as is much of the reasoning in this post - but I want to explain it first from the perspective of how things are intended to work as originally explained to me before the network launched.) Indexing rewards were first communicated to me as a temporary measure to solve the narrow problem of bootstrapping. But, later, someone read the Arweave paper and concluded that indexing rewards are necessary to enable "permanent dapps." At some point, "bootstrap" also became "grow." 2% became 3%. And rewards started to sound more like a permanent fixture.
These are the original ideas behind curation in the network. Some new requirements have been appropriated into the existing mechanisms since. But rather than try to justify this history, I will leave those ideas for the next section on problems.
To be continued…
I'm refreshing this page every day in anticipation!
Sorry for the delay. I'll try to drip-feed some posts rather than write a single post on problems with curation to get an update out sooner.
Product Market Fit
The first problem with curation is a failure to find product-market fit.
The envisioned use case for The Graph was that end-users would pay for their queries. End-user is defined in this context as the user of a dapp through a browser. Although this goal is worthwhile in the long term, the customer persona does not describe most paying users today. We have a long way to go to validate that market. We haven't even built the products required to enable this use case or test our hypothesis. Curation was designed (poorly) with this end goal in mind.
The customers (paying users) we see in practice today are not end-users, but instead are dapp developers who deploy subgraphs and subsidize queries to enable their dapps. The protocol must meet their needs to survive long enough to bridge the gap between now and when end users pay for queries.
The mechanisms of curation do not meet the needs of dapp developers. A prediction market is unsuitable for "incentivizing indexers to prepare to serve queries." Prediction markets are volatile, risky, complex, and have an unfamiliar UX (in terms of using the mechanism by manipulating the prediction market to indirectly achieve the end goal of getting a subgraph indexed). None of those words are what dapp developers want to hear. They need a reliable, simple, cost-effective tool - not some game. If a customer shows up to purchase a scarf, you don't ask them to first participate in a yarn volume prediction market.
To illustrate the struggle a dapp developer experiences when using curation, consider questions they ask, like "How much curation is necessary to get my subgraph indexed?" There isn't a straightforward answer to this question, and it's a moving target. What may be a sufficient amount of curation today may be inadequate as more subgraphs migrate to the network, leading to outages. Or, "How can I ensure that I will retain a sufficiently diverse set of geographically distributed indexers that will meet our dapp's quality-of-service needs?" Again, the protocol offers no method, even though this question is critical to customer success. The current protocol can't differentiate between a single indexer in Antarctica with 300M staked GRT vs. three indexers serving major population centers across the globe with 100M staked GRT each. One of those indexer selections results in customer success, and the other is failure. The fact that the protocol has no mechanism to incentivize success is a problem.
Subgraphs found product-market fit through years of iteration and listening to feedback. I think there was an assumption that the protocol would find product-market fit by extension of the success of subgraphs, rather than through the process that earned subgraphs product-market fit in the first place. Indexing fees are a response to overwhelming negative customer feedback and data from curation. Curation hasn't even found product-market fit with curators. All but a few have already left. That fact alone should be enough to question any motivation for keeping this system around.
Everyone wants to shut down the hosted service. But we can't do that if the protocol does not meet the hosted service users' needs!
More to come, hopefully daily.
No recognition of "useful work"
The protocol cannot recognize (and therefore cannot reward) "useful work." The assumption that it can is what underlies keeping curation and indexing rewards around in their current form. It is easy to believe a myth that the protocol is an all-knowing benefactor capable of distributing a subsidy. The protocol is actually quite limited.
To demonstrate, I'll present an exploit that could conceivably be used today to farm indexing rewards. This edge case will be illustrative, but even in the general case, there is no notion of useful work in the protocol. The actual subsidy distribution uses a tragically inept mechanism.
Imagine a subgraph that uses a cryptographic trapdoor function such that it is difficult to calculate the PoI without knowing a secret. Suppose an attacker secretly chooses and multiplies two primes. They publish a subgraph attempting to factor the composite and save an entity with an ID for each factor. The attacker can execute this subgraph trivially to produce a correct PoI and collect indexing rewards because they know the entity IDs a priori. Other indexers would have to run the algorithm brute force and have no hope of collecting the reward cost-effectively.
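A toy version of this attack, with tiny numbers so it runs instantly (a real attack would use a cryptographically large semiprime), shows the asymmetry:

```typescript
// The attacker already knows the factors, so "indexing" is free for them;
// everyone else must factor n by brute force before they can write the
// entities and compute a matching PoI.

const p = 104729n; // secret primes, chosen by the attacker
const q = 1299709n;
const n = p * q; // published in the subgraph: "find the factors of n"

// Honest indexers have to do this work (infeasible for real key sizes).
function bruteForceFactor(n: bigint): bigint {
  for (let d = 2n; d * d <= n; d++) {
    if (n % d === 0n) return d;
  }
  return n;
}

// The attacker skips straight to the answer they planted...
const attackerEntities = [p, q];
// ...while everyone else pays the full cost for the same PoI.
const f = bruteForceFactor(n);
const honestEntities = [f, n / f];
console.log(attackerEntities, honestEntities);
```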
The malicious indexer would use separate pseudonymous IDs to publish, curate, index, and maybe even query the malicious subgraph. They would split up their stake such that most of it was used to curate and direct indexing rewards to the subgraph, with minimal stake allocated toward indexing.
Due to the large amount of stake some indexers control compared to the amount in curation, the protocol would think this is a massively valuable subgraph and send lots of indexing rewards there for the malicious indexer to claim! There's only 5.5M GRT in curation globally right now, and many indexers could front that much or more to claim >50% of all rewards. Note, though, that no "useful" work is performed. I'm defining "useful" as "there is a market" - nobody is willing to pay the indexer to execute their trapdoor attack because there is no public good created with any meaningful value to consumers of The Graph.
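The back-of-the-envelope math, under the simplifying assumption that a subgraph's indexing rewards are proportional to its share of total curation signal (numbers illustrative):

```typescript
// Capture arithmetic for the attack above. Assumes rewards on a subgraph
// are proportional to its share of total curation signal.

const existingSignal = 5_500_000; // roughly all GRT in curation today
const attackerSignal = 6_000_000; // stake an attacker redirects to signal

const attackerShareOfRewards =
  attackerSignal / (existingSignal + attackerSignal);

// The attacker's trapdoor subgraph now attracts >52% of all indexing
// rewards, and only the attacker can cheaply produce its PoI.
console.log(attackerShareOfRewards); // ~0.522
```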
What's worse is that no attributable fault takes place. It's impossible to prove that the indexer played a role in executing an attack. Therefore, they cannot be slashed based on verifiable grounds. The indexer could be an innocent bystander with super-fast hardware compelled by curation to index the subgraph. They didn't necessarily collude with the subgraph author or curator. We kind of know they must have, but it can't be proven - especially by the protocol.
Some have argued we may attempt to mitigate this attack through sophisticated arbitration. But that would require centralized governance and human arbitration in The Graph to continue (a non-starter for what we're trying to build). Furthermore, arbitrators must be able to reason about pseudonymous identities and be omniscient to prove fault. Don't go and try this attack, though. I bet arbitrators will find a way to slash you even if they can't prove malicious behavior. But again, proof is a prerequisite to removing centralized governance.
So, if the protocol doesn't reward useful work, what does it reward instead? It turns out that if you take curation signal and feed it into indexing rewards, curation ceases to function as a prediction market. Instead of being an indicator of how much value a subgraph may have, curation is now only a voting mechanism for distributing indexing rewards. It's ironic, but you can't say "take the predicted value of this subgraph and direct rewards to it" without redefining the mechanism so that it no longer predicts the value of the subgraph - breaking the very link that made connecting the two attractive in the first place. Curation becomes voting to distribute subsidies, where more GRT = more votes. Nothing more. Similarly, allocations are votes to distribute subsidies.
If curation and allocations are votes to distribute subsidies, who do voters vote for? Themselves, obviously. Curators (subgraph developers) vote to distribute rewards on their subgraphs, and allocations are votes to distribute rewards to one's indexer. By this mechanism, a well-funded indexer receives a disproportionately large reward for indexing the same subgraph as a GRT-poor indexer. Unequal pay for the same work. The rich get richer for no reason other than being rich. At least in capitalism, the rich must deploy capital to productive use to profit from it. Here, capital votes to receive more capital unto itself without any validation from the market or value produced. Is that what people mean when they say "decentralization"? That's not what I signed up for.
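To spell out the "unequal pay for the same work" point, assume the simplified rule that a subgraph's rewards are split pro rata by allocated stake (the stakes here are invented):

```typescript
// Two indexers do identical work on the same subgraph; only their stake
// differs. Rewards split pro rata by allocated stake.

const subgraphRewards = 10_000; // GRT per period directed to this subgraph

const richIndexerStake = 9_000_000;
const poorIndexerStake = 1_000_000;
const totalAllocated = richIndexerStake + poorIndexerStake;

// Both indexers sync the same subgraph and could serve the same queries,
// yet one is paid 9x the other purely for being richer.
console.log((subgraphRewards * richIndexerStake) / totalAllocated); // 9000
console.log((subgraphRewards * poorIndexerStake) / totalAllocated); // 1000
```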
What is incentivized in this system? Having a lot of GRT, not producing value. Who benefits? The largest indexers, the largest dapps, and those who have access to cheaper capital. It's almost funny if we remember that one goal of indexing rewards is to grow The Graph. I thought that to grow The Graph, we would need to fund up-and-coming projects, not permanently fund the established players that can afford to pay. I want a system that incentivizes producing value.
Stay tuned for more.
I agree with a lot of the reasoning here, but I would like to also add some additional perspective.
I agree that today, "The Graph" has found product-market fit with subgraph developers who are building subgraphs for their own applications. The current protocol design is not ideal for this use case. We want protocol mechanisms that work better for this scenario where the developer is the subgraph developer and the consumer, who is paying fees.
I also agree that we have not solved the problem of curation. I think the issue here is twofold: 1) we haven't created a sufficient mechanism for a Curator role to meaningfully organize information on The Graph, and 2) we haven't proven whether or not our economic design is the right one to incentivize this behavior.
The issue of curation is a deep one, and it gets to the fundamental question of what The Graph is. Is part of the mission of The Graph to organize the world's public knowledge and information? Or is it simply to give developers easy access to blockchain data? This is going to be a big question for us to answer, and I'm going to have more perspective to share on this.
But regardless, I agree that for the first use case, for which we already have product market fit today, we need better mechanisms that address the needs of these users.
Hi Yaniv, good to see you.
I think your problem statement for Curation is interesting and worthwhile. (Organize the world's public knowledge and information and, based on previous conversations with you, I would add something like - to create a trustworthy source of information upon which we can build institutions for the internet age.) I earnestly await concrete proposals for what kinds of mechanisms we can use to tackle this problem with incentives and ad-hoc coordination.
I think before we can even consider this problem we need a reliable foundation that gives developers easy access to blockchain data. The good news is that with Horizon's modular design, we will not only have a reliable foundation to build access to blockchain data but we will also have a springboard allowing us to deploy and test mechanisms to address Curation as a means to organize data. You won't need to have community consensus or council approval to try your ideas. Questions like whether such a mechanism is or is not a part of "The Graph" will become irrelevant semantics. The only thing that will really matter is whether it is useful to enough people to gain traction.
Now I have to be a jerk and correct some nits with your post.
The current protocol design is not ideal for this use case.
The current design is not suitable, much less ideal, for any use case.
we haven't proven whether or not our economic design is the right one to incentivize this behavior
"Proven" is doing a lot of work in this sentence. Because The Graph is an open system (one which interacts with the outside world, as opposed to a closed system like chess), it's very difficult to prove anything about it. But we can still do science. Before the network launched, hypotheses were presented that theorized curation was broken. Those hypotheses were accompanied by predictions, like that bots would use sandwich attacks to leech GRT from participants. The data is consistent with the predictions, which increases our confidence that the mechanisms of curation as implemented in The Graph today are indeed not the right design to incentivize the behaviors you have in mind to achieve curated data. If curators have left, it is because they were not incentivized to engage in the intended behavior. That's as close to proof as we are going to get.
I wish we had responded to these incidents sooner. But I feel that a straightforward reading of your language may unintentionally cast doubt on how much is known about the mechanisms of The Graph and thereby lead people toward inaction rather than responding to the continued crisis that is unfolding in the protocol. It seems you are saying that we don't know for sure whether "our economic design" (the protocol as implemented) "incentivize[s] this behavior" (rewards people for) "organiz[ing] the world's public knowledge and information." But we actually know quite a bit about what the protocol does and does not incentivize.
I understand from talking to you out of band that your comment was not meant to be in response to the specific mechanisms of curation as implemented in The Graph. Your idea of what curation should be is a mechanism to surface valuable information. We both agree that a protocol that incentivizes behavior which would surface valuable information would be useful. We also both agree that the protocol does not do that today.
But, without a specific mechanism design your idea of curation is still just a goal. We must not conflate a goal you want to achieve with a specific mechanism that exists today in the context of this discussion. The goal you want to achieve is a cause worth championing, but curation as implemented is doing significant harm. Curation is SO bad The Graph would be better off simply deleting it even without offering a replacement like Indexing Fees.
I appreciate you sharing thoughts on this and have been looking forward to each successive post. As someone who initially found the network through curation, this has always been a topic of interest. While more thoughts may be coming, I couldn't help but chime in on this part.
Isn't this solution so simple it just might work? Deprecate signal entirely. Indexing rewards on all allocated stake accrue at an equal rate. Utilize the upgrade indexer(s) to bootstrap new subgraphs. Let the decentralized indexer market compete on productive subgraphs. Build dashboards that help indexers identify actively querying deployments to inform indexer decisions. As a subgraph becomes more productive, it becomes more decentralized & redundant.
Subgraph Developers can deploy without having to learn about "how will this be indexed". Indexers focusing on the product of the network (query fees) will naturally accrue more delegation since they have a second income stream (assuming equal reward cuts).
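A minimal sketch of the flat-rate idea above (the issuance rate, names, and stakes are hypothetical; this is the shape of the proposal, not a spec):

```typescript
// Drop signal weighting entirely: rewards accrue per allocated token at
// one global rate, regardless of which subgraph the stake is allocated to.

const rewardsPerPeriod = 100_000; // total GRT issued per period (made up)

interface Allocation { indexer: string; subgraph: string; stake: number }

function flatRewards(allocations: Allocation[]): Map<string, number> {
  const totalStake = allocations.reduce((sum, a) => sum + a.stake, 0);
  const out = new Map<string, number>();
  for (const a of allocations) {
    const share = (rewardsPerPeriod * a.stake) / totalStake;
    out.set(a.indexer, (out.get(a.indexer) ?? 0) + share);
  }
  return out;
}

// No signal, no voting: which subgraph you index no longer changes your
// reward, so the choice is driven purely by expected query fees.
console.log(flatRewards([
  { indexer: "A", subgraph: "popular", stake: 1_000_000 },
  { indexer: "B", subgraph: "long-tail", stake: 1_000_000 },
]));
```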
I also just want to chime in here and say that I've enjoyed reading the thoughts being put into the posts in this discussion. I don't have anything to add at this point, but look forward to continuing to read.
I want to apologize for some anger and ego that came out in a previous reply to @yaniv. I'm going to be working to revise that post before adding the next one. I don't think the tone that I used was appropriate or conducive to convincing people of my position. I talked to Yaniv out of band and got a better understanding of his position, which I hope to reply to as well.
Actually, this is pretty much where we landed when designing Horizon. Every single other method we evaluated for distributing rewards came with significant drawbacks. Other methods typically would disincentivize useful work and disproportionately reward gaming the system.
There are some arguments for why this solution is unsatisfying. But, it's the least bad option if we want to keep protocol rewards around at all.
The part that's really great about this option is that the focus of indexers can shift from farming rewards to seeking profit opportunities and not be penalized, but rewarded. Right now, even if you knew which subgraphs would bring the most value to the community, the protocol forces you to take another option by incentivizing subgraphs other than the valuable ones. So, yes, we would be better off without Curation. But, better still if we present yet more profit opportunities - which is what Indexing Fees is… an additional profit opportunity that is aligned with consumer demand.
When I'm done trashing Curation (which, yes, there is more to say here) I can't wait to tell you how exciting this profit opportunity is because, for one, indexing rewards don't by themselves scale with the size of the indexing market. But indexing fees can scale arbitrarily high and create profit opportunities for indexers.
@yaniv - I gave the second half of this post a complete overhaul. I spoke for you a bit in there, so please correct me if I've misrepresented you. Thanks again for your feedback.
Public recognition and some consensus on the shortcomings of the protocol after (nearly) three years of operation are healthy and necessary for us to build effective, useful products. Thank you to everyone here willing to participate in the conversation, and especially to @That3Percent for taking thoughtful writing to a public forum.
I accept as fact that Curation has not delivered, that this fact could have been expressed (by many of us, including me) earlier, and that it needs to be addressed now. I also accept @yaniv's point on the ultimate goal being way beyond just delivering APIs.
That leads me to the following question - maybe too big for this specific GIP, but entirely relevant:
Is lengthy failure identification and lengthy course-correction acceptable and sustainable?
If not, how do we recognize and accept failure faster and iterate faster in an environment that is almost allergic to both of those things? An environment that sometimes feels like wading through soup (council, governance), coupled with a public narrative that often feels like it hasn't caught up to reality?
Thanks for your comment, @cryptovestor. Your question is a perfect lead-in to Horizon Core - the foundation of Graph Horizon.
The problem you are alluding to is that evolving the current protocol is necessarily a bureaucratic process. The protocol is a complex monolith in which local changes can unexpectedly result in global, adverse outcomes. Due to this design, allowing just anyone to upgrade it is impossible. The protocol developer and integrator roles are not permissionless. Those roles, like several others in the protocol (arbitrator, gateway, oracles), are centralized. Only the council holds the key to these powers. The council is a censorship vector that threatens the long-term existence of The Graph. In the event their keys are stolen, an attacker could deploy unwanted upgrades, shut down the protocol, and steal every staked token. This is not just about bureaucracy and slow iteration; the status quo does not uphold our values of decentralization, permissionlessness, or efficiency.
One of the primary goals of Graph Horizon, therefore, is to make every role permissionless. We will ask the council to relinquish their powers, including the power to restrict protocol development. Only then can the community rapidly iterate on the protocol design to achieve product market fit.
It's not immediately obvious that we could ever fearlessly allow anyone to be a protocol developer if you consider the current protocol a starting point. All those governance parameters, oracles, and permissions exist for a reason. But the reason isn't a good one. The reason is to patch over systemic design flaws.
Horizon, by contrast, offers an immutable, modular, composable design. The immutability removes the council's ability to censor The Graph. The modularity enables complexity and risk to be encapsulated. Lastly, the composability allows core devs and other protocol integrators to independently layer new value-added products into the protocol without relying on a central bottleneck. All of these properties greatly speed up development.
At the root of Horizon's design is an immutable smart contract called Horizon Core. The responsibility of Horizon Core is to provide a simple and secure staking mechanism that stores time-locked slashable collateral. The threat of slashing serves as a deterrent to violating warranties. The unique design of Horizon Core enables us to build components on top in a composable manner. The components include a world of data services (e.g., indexing fees, query fees, and others), financial services (such as lending or the delegation needed to provide indexers decentralized access to capital), verifiability primitives (e.g., refereed games, SNARKs, arbitration, and similar mechanisms to automate the enforcement of warranties), payment services (such as subscriptions and Scalar TAP), and more to broaden the scope of The Graph beyond just APIs. Each of these components will be built not as part of a monolith but as independent building blocks that can be pieced together like Legos and adopted by the community according to preference rather than being foisted upon all protocol users. For the first time, The Graph will unlock powerful network effects, similar to how smart contracts work in Ethereum.
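To give a feel for the shape of this, here is a conceptual sketch of the Horizon Core surface. Every name and field below is hypothetical, chosen only to illustrate "time-locked slashable collateral with composable services on top," not the actual contract interface:

```typescript
// A conceptual sketch only: the core holds time-locked, slashable
// collateral, and everything else (data services, payments,
// verifiability) composes on top of it.

interface Provision {
  serviceProvider: string; // e.g., an indexer
  verifier: string; // the module allowed to slash (arbitration, a game...)
  tokens: number; // collateral backing the warranty
  thawingPeriod: number; // time-lock before withdrawal, in seconds
}

interface HorizonCoreSketch {
  // Lock collateral against a specific verifier.
  provision(p: Provision): void;
  // Begin the time-lock; tokens stay slashable until it elapses.
  thaw(provider: string, verifier: string, tokens: number): void;
  // Only the named verifier can burn the provider's collateral.
  slash(provider: string, tokens: number): void;
}

// A data service like indexing fees would check a provider's collateral
// before paying them, and its verifier would call `slash` on a proven
// fault - no council approval or protocol upgrade in the loop.
```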
We will build documentation, how-tos, and best practices to enable anyone to build upon and into The Graph - just as we did with subgraphs. Our goal will be to arm the community with knowledge so they can build, in contrast to previous hesitance to communicate about the protocol. New blockchains will be able to integrate themselves into the ecosystem. Research groups will be able to integrate new verifiability primitives, and so on. With an immutable foundation, they can do so fearlessly.
This path is necessary to match the needs of decentralized development, such that the core devs become independent and unlock their potential.
In my previous post, I only answered how to iterate faster. Accepting failure and being open and honest in communication should also be addressed.
Right now, the incentives are misaligned. Can we, on the one hand, say everyone should upgrade to the protocol and, on the other, be forthcoming about its shortcomings?
Something that compounds the difficulty is that the GIP and approval process heavily emphasizes design up front. Within this process, there is a long time between acknowledging a problem publicly and deploying a solution. Between those two times, customers may be dissuaded from using the protocol. This is a chicken-and-egg problem - we hesitate to communicate for lack of a finished solution, and we can't start on the solution due to the hesitance to communicate.
But we can adopt a "show, don't tell" approach with Horizon. We can build MVPs first and, only when those MVPs are usable, start to market them with arguments about why the new solution is better. It is then up to the community to decide whether to use those products or stick with the existing solution. That's not to say we would stop listening to feedback. But there are far better ways to get feedback than the GIP process.
I want to share this article which was originally circulated internally by @Tegan.
The parallels between The Graph and the author's situation are striking. And I wholly agree with the principles it outlines, which are key to The Graph's success. This is better written and more salient than anything I could write.
If we're using Curation as an example, it's complicated. I appreciate you saying:
I accept as fact that Curation has not delivered, that this fact could have been expressed (by many of us, including me) earlier, and that it needs to be addressed now. I also accept @yaniv's point on the ultimate goal being way beyond just delivering APIs.
This is not a view that has gotten wide alignment, and so it's been hard to address. I shared privately after launch that I didn't think the curation design was doing what it needed to do, and that it's important for us to solve organizing information in order to deliver on the mission of The Graph. I was not able to get people aligned on that view. There have been CEO changes at E&N, which in some ways expose differences in perspective on the direction of the protocol. Changes like that take time to play out.
Ultimately, we need to get alignment on the mission of the protocol. When we have that alignment, it's easier to evaluate our designs against the stated mission. When there are disagreements on the mission, it forces some people to do their work in private so that it doesn't get shut down. I think that's what happened here. When we have agreement on the mission, we can do our work in public and the best ideas can rise to the top.
I think we can rise to that challenge.
Thank you for the detailed proposal. I'm wondering if this should be presented as a standalone GIP. My concern is that its significance might be overlooked if it's only included as a reply to GIP-0058. I believe it's crucial for the wider ecosystem to be aware of this.
Yes, yes it should. Working on it.
I am posting my idea for a possible upgrade to the curation model.
Proposal Title: Introduction of Tiered Self-Signaling System and Adjusted Curation Query Fees for Subgraph Curation
Introduction
The purpose of this proposal is to introduce a tiered self-signaling system for subgraph curation within the Graph Protocol and adjust curation query fees. This combined approach aims to enhance the curation process by providing clearer signals of the importance of subgraphs, creating more equitable rewards for indexers and delegators, and maintaining a sustainable economic model.
Motivation
The current curation system on the Graph Protocol allows anyone to signal on subgraphs, and curation query fees are set at 10%. However, I believe that a tiered self-signaling system, combined with a reduction in curation query fees, can offer several advantages:
- Improved Signal Quality: Tiered signaling provides more granularity in indicating the importance of a subgraph, enabling users to express their preferences more accurately.
- Fairer Rewards: Indexers and delegators will be rewarded based on the tiered signal, which encourages them to prioritize subgraphs with higher signals, ultimately leading to a fairer and more efficient allocation of resources.
- Reduced Curation Query Fees: Lowering curation query fees to 5% ensures a more attractive economic model for users, potentially increasing the overall adoption of the Graph Protocol.
Proposal Details
I propose the following tiered self-signaling system:
- Tier 1 - Low Importance: Users must provide a minimal self-signal to indicate low importance. The self-signal required is X tokens.
- Tier 2 - Medium Importance: For medium importance, a self-signal of Y tokens is required.
- Tier 3 - High Importance: Subgraphs deemed of high importance require a self-signal of Z tokens.
- Tier 4 - Critical Importance: The highest tier, reserved for critical subgraphs, necessitates a self-signal of W tokens.
Curation Query Fees Adjustment
I propose reducing the curation query fees from 10% to 5%. The remaining 5% of the fees collected could be split amongst indexers and delegators as follows (but can be adjusted as needed):
- Indexers: Indexers will receive 3% of the curation query fees.
- Delegators: Delegators will receive 2% of the curation query fees.
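Applied to a single query, the proposed split would look like this (the 5/3/2 percentages come from the proposal above; the helper itself is just illustrative):

```typescript
// Per-query fee split under the proposed adjustment. The curation share
// drops from 10% to 5%, with the freed 5% going to indexers (3%) and
// delegators (2%). The "remainder" label is a simplification.

function splitQueryFee(fee: number) {
  return {
    curators: fee * 0.05, // reduced from the current 10%
    indexers: fee * 0.03,
    delegators: fee * 0.02,
    remainder: fee * 0.9, // flows through the protocol as it does today
  };
}

console.log(splitQueryFee(100));
// { curators: 5, indexers: 3, delegators: 2, remainder: 90 }
```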