Set The Great Celo Halvening Parameters

  • Receiver Entity: Celo Governance
  • Status: DRAFT
  • Title: Set The Great Celo Halvening Parameters
  • Author(s): cLabs Team
  • Type of Request: Network Decisions & Protocol Improvements
  • Funding Request: N/A

1. Summary

This proposal implements the tokenomics changes outlined in “The Great Celo Halvening” by setting specific configuration parameters across several smart contracts. These changes were previously approved through a temperature check and are designed to be executed shortly after Celo’s transition to an L2 network.

2. Motivation

The Great Celo Halvening represents a significant adjustment to Celo’s tokenomics model as the network transitions to an L2 architecture. These parameter changes are crucial for maintaining economic balance, incentivizing network participation, and ensuring the long-term sustainability of the Celo ecosystem in this new phase of development.

3. Specification

Changes are explained in the original “The Great Celo Halvening” forum post.

4. Metrics and KPIs

N/A

5. Current Status

This proposal follows the forum discussion and temperature check approval of “The Great Celo Halvening” tokenomics model. The parameter changes are ready to be proposed before the network transitions to an L2 and executed shortly after.

6. Timeline and Milestones

Dependent on L2 transition timing.

7. Detailed Budget

N/A - This proposal does not request funding.

8. Payment Terms

N/A - This proposal does not involve payments.

9. Team

N/A

10. Additional Support/Resources

The full body of the proposal can be found here.

Thanks @martinvol and team for the proposal. From my end, this proposal fulfills all the requirements:

  • Post a proposal in the Celo Forum and leave it open for discussion for at least seven (7) days.
  • Present Proposal in a Governance Call and address the feedback received: Proposal was presented during Celo Governance Call #63 | March 13th, 2025

With the above said, from my end the proposal is ready to move into the voting phase whenever the proposer wants to move forward or considers it appropriate.


:bangbang: Remember Current Celo Governance Overview & Procedures

To proceed to the submission and voting phase at least two Celo Governance Guardians must post explicitly that the proposal fulfills the requirements to be able to move into the Voting Stage in the proposal thread on the Celo Forum.


Remember next steps

  • Submission of PR to Celo Governance Repository
    Proposers need to fork the Celo Governance Repository and open a PR including the proposal .md file and .json file.
  • Approval of PR by Celo Governance Guardians and merge into the main branch of the Celo Governance Repository.
    Celo Guardians are responsible for conducting a comprehensive review of every Pull Request (PR) to ensure that there is complete alignment and consistency between the final proposal posted in the forum post and the specific files that are being requested to be merged.
    This review process is strictly technical in nature, focusing solely on verifying the authenticity and good faith of the proposers. It does not involve any personal opinions or biases regarding the merits or content of the proposal itself. To maintain the integrity of the Celo Governance repository, it is mandatory to obtain approval from a minimum of two Governance Guardians for each PR before it can be merged into the main branch.
  • OnChain Submission of Proposal
    After the PR is merged into the main Governance Repo, the proposers need to clone the Celo Governance Repository locally and submit the proposal onchain following the guidelines described in the Celo Docs.

CC: Governance Working Group (@annaalexa @Wade @0xGoldo)

Hello everyone!

As validators shift into the role of community RPC providers, it’s essential to establish a clear framework for performance monitoring, rewards, and enforcement to ensure network reliability.

To that end, we’re proposing a Validator Scoring and Slashing Mechanism, overseen by an independent Score Management Committee. This system will:

  • Track validator RPC uptime and allocate rewards based on performance.
  • Enforce penalties for inadequate uptime, ensuring validators remain reliable infrastructure providers.
  • Operate via a multisig-controlled ScoreManager contract, with validator representatives overseeing scoring adjustments and slashing where necessary.

This structured approach ensures that validators continue to play a critical role in Celo’s decentralized future while maintaining a stable, high-performing RPC network during the transition to decentralized sequencing.

Full details on scoring, committee responsibilities, and enforcement mechanisms are outlined below. We welcome feedback from the community as we refine this approach.

Validator Operations

1. How It Works

  • Validators who register an RPC will be allocated 82.19178082 cUSD per day at a perfect score (1). These allocations can later be claimed by each validator (link).
  • Scores are monitored off-chain by an independent working group of validators (the Score Management Committee) running custom software based on the Community RPCs page at Vido by Atalma.
  • This committee will create a Safe Multisig which itself has permissions to manage the on-chain ScoreManager.sol smart contract.
  • Every week, the Score Management Committee will collate their measurements and aggregate and average the scores for each validator for the prior week (a minimal sketch of this weekly averaging follows after this list). If the score is 1 (100%), no adjustments need to be made for that validator.
  • If a validator’s score is less than 1, their score will be updated by the committee and will apply to their payments for the following week, at which time the score will be adjusted again.
  • Registering a validator RPC is not optional. If you do not have a configured RPC with sufficient uptime, you will be slashed. The Score Management Committee’s multisig will also be given permissions to slash validators.
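
To make the cadence above concrete, here is a minimal sketch of the weekly averaging and the per-day allocation. The 82.19178082 cUSD figure comes from this proposal; the function names and example numbers are purely illustrative assumptions, not the committee's actual tooling, and any banding from the Metrics section would be applied separately.

```typescript
// Illustrative sketch only: averaging committee measurements into a weekly score
// and computing the daily allocation a validator can claim at that score.

const DAILY_ALLOCATION_CUSD = 82.19178082; // allocation per day at a perfect score of 1

// Each committee member submits one measured uptime fraction (0..1) per validator per week.
function averagedScore(memberMeasurements: number[]): number {
  const sum = memberMeasurements.reduce((acc, m) => acc + m, 0);
  return Math.min(1, sum / memberMeasurements.length);
}

function dailyAllocation(score: number): number {
  return DAILY_ALLOCATION_CUSD * score;
}

// Example: three members measured 0.98, 1.0 and 0.97 for a validator last week.
const score = averagedScore([0.98, 1.0, 0.97]);
console.log(score.toFixed(2), "->", dailyAllocation(score).toFixed(2), "cUSD/day");
```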

2. The Score Management Committee

The Score Management Committee will control a multisig which has the power given by Governance to call functions on the ScoreManager and GovernanceSlasher contract.

Currently, the multisig consists of three members, and we welcome additional participants to share responsibilities:

  • Aaron Boyd – Prezenti founder, independent validator operator, Vido creator and maintainer
  • Clemens (clemens.eth) – Independent validator operator, creator of celo-community.org
  • Dee – Independent validator operator

Multisig: 0x68ce71d4CECA3003701ca6844D9a345925407455

Please contact us in this forum post if you are interested in being involved.

Responsibilities:

  • Running and maintaining additional infrastructure including DB and indexer for uptime tracking. The stack utilized will be open-sourced for anyone to verify.
  • Dedicated time for performance evaluation, collaboration, multisig operations, and transparent communication

To cover operational expenses for monitoring uptime, maintaining additional infrastructure, and managing Scoring and Slashing, each committee member will receive $2k cUSD per month.

3. Metrics

Validator rewards will be allocated automatically after each epoch based on the score defined by the ScoreManager contract and can afterwards be claimed manually by the validator.

The committee is proposing the following weekly score breakdown:

  RPC Uptime    Score
  100% - 80%    1.00
  79% - 60%     0.80
  59% - 40%     0.60
  39% - 20%     0.40
  19% - 0%      0 and Slash
  • Validators with uptime below 20% for 7 days will be slashed.
  • Running a community RPC node is mandatory for elected validators.
  • If an elected validator does not run a properly configured RPC node, they will be slashed, and a portion of their locked Celo will be forfeited.

Additionally, after the transition to Layer 2, we propose a 1-week grace period for validators to configure their RPC nodes. After that, the performance scoring system will be fully enforced.
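
As a rough illustration only, this sketch shows how the bands in the table above could be applied to a validator's weekly uptime percentage. The band boundaries follow the table; everything else (names, the example value) is an assumption.

```typescript
// Illustrative only: map a weekly uptime percentage to the proposed score bands.

type BandResult = { score: number; slash: boolean };

function bandedScore(uptimePct: number): BandResult {
  if (uptimePct >= 80) return { score: 1.0, slash: false };
  if (uptimePct >= 60) return { score: 0.8, slash: false };
  if (uptimePct >= 40) return { score: 0.6, slash: false };
  if (uptimePct >= 20) return { score: 0.4, slash: false };
  return { score: 0, slash: true }; // below 20% uptime for the week: slashable
}

console.log(bandedScore(72)); // { score: 0.8, slash: false }
```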

4. Other Considerations

  • We opted for a weekly cadence, as daily updates and multi-sig coordination are outside the scope of a single small team.
  • We opted for only 5 score bands to avoid the transaction overhead and workload of precisely updating every validator every week.
  • Validator slashing remains part of the process. The idea behind this process is to maintain a competent and secure set of technical contributors for potential future operations. If a validator cannot continuously maintain this load, then they need to be ejected from the elected set and replaced with someone who can.
  • This process will run for 6 months, or until decentralized sequencing has been introduced, to allow us to learn and adapt, and potentially change the management structure.
  • Future technical solutions that could be explored include: updating the Celo stats websocket push from full nodes and managing performance through that site (which would require work to update the code), or even integrating natively with Lavanet - a decentralized RPC management protocol that allows custom incentives for RPC providers.
  • RPC requests to community nodes will eventually be semi-random and include state queries, etc., so that bad-faith validators cannot simply proxy eth_blockNumber from another node - even Forno itself.

5. References

5 Likes

Great to see this initiative come to life! Huge thanks to everyone involved in creating this proposal.

3 Likes

Amazing proposal, it looks great!

A few suggestions:

  1. I’d love to see a small report of learnings around the 6-month mark, or even before, so we have time to improve this process.
  2. Can we define a grace period for enforcing this policy after the transition? The L2 is already a big coordination effort among the community, so I don’t want to complicate this with RPC providers worrying about getting slashed / losing score at a time when the network may have other priorities.
  3. What about disputes of the score? Can we give a warning before slashing?
  4. What about compensation for the members of this committee, covering operation costs and development of the tooling? I think it’d be good to include that in the proposal.

Follow-up for the mid term: it’d be great if, with a request, we could get some sort of auth (for example, asking the node what accounts are unlocked or something like this). I would consider that if someone is pointing their RPC URL to someone else’s node, that’s grounds for an immediate slash.

3 Likes

Hi Martin,

  1. We can definitely create a report over time and share on the forum here.
  2. We are proposing a 1 week grace period after L2 launch to make sure everyone is configured and understands the process.
  3. We haven’t defined a dispute policy yet, but by all three of us averaging our measurements and having large bands, we hope to mostly remove jitter / network latency / patch / upgrade downtime from the scoring mechanism - both on the RPC side and on the measurement / checking side.
  4. We have proposed payment for the committee in the post to be $2K per person per month. Happy to hear any feedback from the group but we did some rough calculations based on the time spent managing and signing the scores every week, cost of developing and maintaining the measurement process, time spent communicating and reporting on what we’re doing.

Regarding node identification, would love to hear any ideas on this. The personal RPC interface is going away but there might be something else we can call.

3 Likes

You can view a beta/staging version of what I’ve developed already for Vido checking the Baklava RPCs here: https://dev.vido.atalma.io/celo-baklava/rpc

What it does:

  • There is a back-end process that uses @celo/celocli to retrieve the current list of community RPCs.
  • Then it uses this list to call every RPC every 5 minutes, recording the eth_blockNumber (for now) and measuring the response time (a minimal probe sketch follows after this list). If it returns, the RPC gets an “Up” measurement (green on the chart).
  • If the RPC fails to return before timing out or some other error, a “Down” measurement is scored (red on the chart).
  • The process also compares the current list of RPCs with the current list of elected validators. If you are elected and don’t have an RPC configured, you also get a “Down” measurement.
  • The data is exposed via an API for Vido to chart, and there is also a Download button for each validator where you can select the data range and inspect the values measured.
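
For readers curious what such a probe looks like, here is a minimal sketch assuming a plain JSON-RPC POST with a timeout. It is not the actual Vido implementation, and the timeout value is an assumption.

```typescript
// Illustrative sketch of the up/down probe: POST eth_blockNumber with a timeout.
// Intended to be run on a schedule (e.g. every 5 minutes) for each community RPC.

async function probe(rpcUrl: string, timeoutMs = 10_000): Promise<"Up" | "Down"> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(rpcUrl, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "eth_blockNumber", params: [] }),
      signal: controller.signal,
    });
    const body = await res.json();
    // Any well-formed hex block number counts as "Up" in this simple up/down model.
    return typeof body?.result === "string" ? "Up" : "Down";
  } catch {
    return "Down"; // timeout, transport error, or invalid JSON
  } finally {
    clearTimeout(timer);
  }
}
```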

Coming Soon :trade_mark:

  • Metrics in the “Metrics” tab showing daily/weekly summaries for all RPCs
  • Open-sourcing and simplifying the code for use in the management committee and community to verify everything looks OK

There are a couple of minor bugs I’m working through but should have a complete version up by next week.

4 Likes

@celogovernance The Score Management Committee would like to present this proposal at the next governance call.

2 Likes

This dashboard rocks

I do not feel good about having this additional committee proposal added to the already presented Celo Halvening parameters proposal without proper community discussion. This addition feels to me like a way to bypass the formal governance process: it turns a technical proposal of changes previously approved through a temperature check into one that also requests 36k cUSD for an RPC nodes committee that only posted a comment on the forum 2 days ago.

2 Likes

Hi @0xGoldo , appreciate the concern and I might have similar thoughts reviewing this from the other side.

Yes, this concept of the management committee was not fully formed when the original temperature check occurred. There were, however, placeholders for these values. What we have appended to the existing governance proposal is:

  1. The address of the multisig that will be managing the scores
  2. An approval for the management committee for administration expenses and time

Some background: personally, I had a bit of confusion around how this entire process was planned to operate. I had assumed the payments to validators (now RPC operators) were handled either by the protocol as normal, or by cLabs. That’s on me, as I didn’t attend all the validator discussions recently due to other commitments.

Only after a very recent discussion with @martinvol did I understand that no, the payments don’t happen automatically, and scores will need to be managed off-chain by a trusted committee (not ideal, but it’s a placeholder). I had already decided I wanted to visualize something related to the community RPCs on my company’s monitoring page Vido, so in the past I had put my hand up to help collect and manage scores.

@Dee and @clemens reached out to me in the same week, as they were also both working on some ideas related to the community RPCs, including a load balanced public endpoint and some other independent metrics collection.

With only a week to go until mainnet by this time, we quickly workshopped a solution and some starting metrics, and added the details to this forum post.

There’s nothing nefarious going on, simply time pressure to be prepared for mainnet. I would be very happy to extend the grace period to something like 1 month if the community wants more time to collaborate on the metrics and slashing parameters. @marek mentioned to me that he wanted the scoring and slashing to be fairly close to the current validator slashing parameters (12 hours or 8640 consecutive missed blocks), so what we’ve proposed here is 100% a suggestion, and we would welcome discussion, or even a technically better or trustless solution.

Regarding the payments, we did some rough calculations based on: developing, operating, patching and paying for infrastructure for the scores; meeting at least once a week, every week, to perform potentially dozens of multi-sig transactions to update scores for the validator set and to slash non-performing nodes; plus reporting, communication, and all the unplanned work that goes along with it. Honestly, we’ve probably undershot the real-world impact on our other full-time jobs. I know you didn’t criticize the amount exactly, but I think it’s more than fair for the funds that it manages and the constant workload.

None of us are going to burn our reputations to try and grift a few thousand from the treasury, but we do value our time and experience. We’re all long-term contributors to the Celo ecosystem and just want to help with an interim solution while a better one is devised.

I understand operationally, this should have been proposed months ago for wider discussion, I agree. If the community doesn’t support it, I’m happy to hand it back to cLabs or another solution.

I think if the community needs more time, a good compromise could be to increase the grace period to something like 1 month or more, while the committee still collects scores, then we all can collaborate on a metrics/score/slashing setup. Happy to hear all suggestions or alternative proposals :+1:

5 Likes

Just want to add a little clarification.

The RPC rewards do not need to be distributed by the committee. It would be a nice touch, but as it is permissionless, it is up to anyone to be a good citizen, or we can just leave the operators themselves to collect their own rewards.

I’d actually prefer to collect my own rewards, so my accounting gets easier. My personal opinion would be not to distribute them automatically every epoch.

On the other side, I really wish we had more time to discuss this, but unfortunately it wasn’t the case. Transitioning Celo to an L2 is an incredible engineering effort in uncharted territory. Timelines and people’s bandwidth were a lot more difficult to predict than originally thought.

Just as an example, Ethereum’s Pectra hardfork was announced when the engineering effort was essentially done, and Holesky (the network Celo’s testnets rely on) had multi-week downtime shortly after. I think every piece of feedback or criticism should consider this, rather than framing it as “a way to bypass the formal governance process”. And on top of that, pretty much every single partner that has ever built on Celo reached out to make sure they got the transition right.

Once the transition is completed, I would like to take the time to write about all the complexities that we had to deal with.

I think this extraordinary situation shows that the governance process we have is not flexible enough for our needs; I invite us all to discuss how we can improve it going forward.

I’ve already said this in a governance call, but these new governance rules were news to me after they went into effect. I attend pretty much every single governance call and have written dozens of proposals, so if I didn’t know about them, I really wonder who, other than the people writing them, did. I do not recall being asked for feedback while they were drafted.

Those are my honest thoughts, but now, back to building.

3 Likes

Yes, I should have been clearer: the RPC/validator scores need to be managed by an off-chain process, which is used to calculate each validator’s payment. But the payments themselves can be claimed by the validator.

If there was no off-chain process of collecting and tallying the scores, my understanding is that everyone’s score would be permanently fixed at 100%, meaning everyone could collect their payments regardless of whether they are actually running an RPC or not.

So there needs to be something in place. This proposal is an attempt to fill the gap until a better solution is found.

3 Likes

Couple of comments:

  1. There is no agreed upon standard on how uptime checking should be done, which could result in regular disputes. In the current validator setup, there is exactly one way to determine the score and we have several independent software implementations around it (Ryabina Bot, Vido, Celo stats). There should be a refined technical standard around the probes that everyone is comfortable with.
  2. A dispute policy should be created. E.g., if an RPC provider claims they have been unfairly penalized/slashed, how should the evidence from both sides be weighed?
  3. Without any authentication or verification that a community RPC provider is running a full node as per the required standards, there is nothing stopping bad actors from piggybacking on Forno or other RPC node providers. I think this is a hard stopper and I would recommend rallying the community into contributing such a feature into celo-op-geth or coming up with intrusive software ASAP.

I also think the grace period for any uptime checking should be extended to 2-3 weeks to gather more feedback from the wider Celo community. Preferably in a separate forum post and governance proposal, as this seems to have less to do with the halvening parameters.

Also I’d like to get involved (if there are any available slots) specifically around solving problems 1 and 3 above.

3 Likes

Hi Thylacine,

I would be interested in participating in the Score Management Committee.

There are also a few comments from my side:

  • Please update this section to state here as well that it only applies to elected validators:
* Registering a validator RPC is not optional. If you do not have a configured RPC with sufficient uptime, you will be slashed. The Score Management Committee’s multisig will also be given permissions to slash validators.
  • Uptime scoring - I think we should also add block height (delta from the chain tip) to the scoring. If a node is up but stale (not ingesting blocks), it should be considered down.

  • Do I understand correctly that, in order to have a perfect score, it is enough to be online > 5.6 days (134.4h), i.e. you can have up to 1.4 days (33.6h) of downtime?
    My calculation: 20% * 7 * 24h = 33.6h

  • I am also in favor of a grace period of a few days during the migration from L1 => L2.

3 Likes

Initially I was thinking about proposing a small budget to develop a better solution for serving RPC to the public.

Forno is also regionally load balanced, which is a much better approach than randomly choosing an RPC from the validator set.
I would also like to see some protection for the validators’ RPCs; otherwise, some may be overloaded with requests while others sit idle.
So the goal was to offer a centralized endpoint like Forno, but backed by the validator RPC set.

My idea was to have:

  • Global Load Balancer → Regional Load Balancers → private connection → RPC Nodes
  • DDoS protection and rate limiting at the load balancer level
  • Comprehensive monitoring using Prometheus and Grafana
  • Block-level health checks for improved reliability (a rough sketch of this check follows below)
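
As a rough sketch of the block-level health check idea (not a concrete design): the pool could be filtered to upstreams whose head block is within a small tolerance of the best observed head. The tolerance value and function names here are assumptions.

```typescript
// Illustrative block-level health check for a load balancer pool:
// keep only upstream RPCs whose head is close to the best observed head.

const HEAD_TOLERANCE_BLOCKS = 5n; // assumed tolerance, not part of the proposal

async function headOf(url: string): Promise<bigint | null> {
  try {
    const res = await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "eth_blockNumber", params: [] }),
    });
    return BigInt((await res.json()).result);
  } catch {
    return null; // unreachable upstreams are simply dropped from the pool
  }
}

async function healthyUpstreams(urls: string[]): Promise<string[]> {
  const heads = await Promise.all(urls.map(headOf));
  const best = heads.reduce<bigint>((max, h) => (h !== null && h > max ? h : max), 0n);
  return urls.filter((_, i) => {
    const h = heads[i];
    return h !== null && best - h <= HEAD_TOLERANCE_BLOCKS;
  });
}
```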

1 Like

I think proposals to make the system better are highly valuable. Maybe we can do some request signing using nginx or some other proxy; I haven’t researched this particular thing.

Regarding the grace period, I think one week is enough. There’s no way to stop the payments and I think one week to provide a decent RPC should be enough.

I do think there should be a dispute process, but at the end of the day in most cases it will be fairly obvious if someone is down and it’s nobody’s intention to slash people over technicalities.

I also wanted to add a little theory: it is not possible to prove that someone is offline, because you can’t prove that the problem is not in the connection from the tester to the provider. Therefore, it’s also not possible for the chain to do this automatically (as we did for validator downtime). But at the end of the day, if it looks like a duck and quacks like a duck, it’s likely a duck.

3 Likes

Good comments. There are some extra details of the implementation I built for Vido that I didn’t include earlier:

  • Checks every validator every 5 minutes, so there are 288 measurements per day.
  • Not having an RPC configured is counted as down for every measurement taken while it’s not configured.
  • If the RPC returns eth_blockNumber, it is considered up. I agree it would be better if the blockNumber had to be within a few blocks of a neutral endpoint like Forno, perhaps. I could easily upgrade this to a tristate: up/stale/down instead of simply up/down (sketched after this list).
  • Total score will just be a flat percentage over the payment period (proposed as a week here).
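
To make the up/stale/down idea concrete, here is a rough sketch that compares a node’s head block against a neutral reference endpoint. The reference URL parameter and the staleness threshold are assumptions, not part of this proposal.

```typescript
// Illustrative up/stale/down check: compare the node's head block against a
// neutral reference endpoint. STALE_THRESHOLD_BLOCKS is an assumed value.

const STALE_THRESHOLD_BLOCKS = 10n;

async function headBlock(rpcUrl: string): Promise<bigint> {
  const res = await fetch(rpcUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "eth_blockNumber", params: [] }),
  });
  const { result } = await res.json();
  return BigInt(result); // hex string like "0x1a2b3c"
}

async function tristate(
  rpcUrl: string,
  referenceUrl: string
): Promise<"up" | "stale" | "down"> {
  try {
    const [node, reference] = await Promise.all([headBlock(rpcUrl), headBlock(referenceUrl)]);
    return reference - node > STALE_THRESHOLD_BLOCKS ? "stale" : "up";
  } catch {
    return "down"; // unreachable or invalid response
  }
}
```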

This is just one software implementation I built and it’s by no means perfect. The fact that anyone can spin up a fullnode on Quicknode for like $10/month and proxy requests to it, or even anyone else’s node, is a bit of a problem, regardless of the implementation.

As I said, I was not fully aware of how the end to end process worked until recently so it was more luck that I was building this. Perhaps it’s better to have completely independent measurements with different software implementations, make them all open source and then aggregate/average the results? (Rather than relying on a quick and dirty visualization I made).

An even better solution would be to onboard all of us to Lavanet, since this protocol is explicitly designed for managing and rewarding RPC fleets and probably already handles uptime and identification (I haven’t investigated in detail).

1 Like

Regarding dispute policy.

Would it even be worth pursuing from a person-hours point of view? Let’s say the management committee says you scored 79% and you now get a reduced payment. However, you are certain you were up for 81% of the week and should get the full payment.

One RPC receives around $90/day. So $630 per week. If your score is reduced to 50% for the following week, you lose $315.

Apart from each member of the management committee providing the data they used to calculate the score, what else could be done? If it ends up in long meetings and arbitration, past one or two hours of work with multiple people in meetings, the ROI on a dispute just becomes a waste of time. I understand this is not a very “clean” or verifiable approach, but this entire process already suffers from having humans involved rather than code.

This is why we have a large band near the top to get the 100% payment. The best result for the management committee is that every week we rarely have to adjust anyone’s scores, even if they are down for many hours while patching or having configuration issues.

Part of the vote for this committee is a vote that the members will do a fair and decent job. It sucks that it’s an off-chain process, it’s not very clean.

Maybe a workflow could be:

  1. Sunday XX:YY:ZZ UTC: committee collects last week’s values and reports in forum
  2. Dispute period opens
  3. Monday XX:YY:ZZ UTC: dispute period closes, committee updates scores and performs slashing

… so there’s some window for disputes. I’m just really wary of blowing this entire process out into a lot of extra work for everyone. But it’s good to have a stated process, nonetheless.

2 Likes

Adding to Aaron’s @Thylacine points above. While the current solution is not perfect, the endgame for Celo L2, as it was communicated, is to have a decentralized sequencer. So, by the time someone comes up with a more complex solution for tracking RPC uptime, it might become obsolete. Also, let’s not discount community validators’ integrity. We have all been in the trenches for many years, and I hope no one is willing to risk their reputation and stake by pointing their RPCs to someone else’s node.

1 Like