Score Management Committee Results
Week of 13/04/2025
This is the first of a weekly series of reports on our off-chain score-keeping system for the elected validator/RPC set. Each week on Sunday we will publish our next actions for the score management multi-sig and open the dispute period.
Changed Scores
We only report deviations from a full score. If you are not listed here, no action is needed and your validator score remains 1.0 (full payment, claimable each epoch).
- Chorus One Validator1 (0x5098A28bFa2Da3183E36C009c7D63093b66441a5), RPC uptime for the week was 0%, set score to 0.0
- Keyko1 (0x233c06EDc757003cA4ec078012F517F21A24C55c), RPC uptime for the week was ~55%, set score to 0.6
- missionquorum-0 (0x39b01620ecdEB8347270ee78dC962b0EA583b57a), RPC uptime for the week was 0%, set score to 0.0
- missionquorum-2 (0xdb2e80840f40033761A5cdFa3B09fE24f95a4aaa), RPC uptime for the week was 0%, set score to 0.0
Next Steps
The dispute period is now open for approximately 24 hours, after which we will execute the above score changes if there are no challenges. Those changes will apply for another week, until we review the scores again on 20/04/2025.
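For anyone wondering how a weekly uptime figure like the ones above can be produced: the committee has not published its exact monitoring stack, but a simple JSON-RPC health probe is enough to arrive at a comparable number. Below is a minimal sketch; the endpoint URL, probe interval, and sample count are placeholders, not the committee's actual configuration.

```python
# Hypothetical uptime probe, not the committee's actual tooling.
# Polls an RPC endpoint at a fixed interval and reports the fraction of
# successful eth_blockNumber responses over the sample window.
import time
import requests

ENDPOINT = "https://validator-rpc.example.com"  # placeholder URL
SAMPLES = 60            # e.g. one probe per minute for an hour
INTERVAL_SECONDS = 60

def probe(url: str) -> bool:
    """Return True if the endpoint answers eth_blockNumber within 5 seconds."""
    payload = {"jsonrpc": "2.0", "id": 1, "method": "eth_blockNumber", "params": []}
    try:
        resp = requests.post(url, json=payload, timeout=5)
        return resp.ok and "result" in resp.json()
    except (requests.RequestException, ValueError):
        return False

successes = 0
for _ in range(SAMPLES):
    if probe(ENDPOINT):
        successes += 1
    time.sleep(INTERVAL_SECONDS)

print(f"uptime over window: {successes / SAMPLES:.1%}")
```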
Asa here from MissionQuorum. Sorry we dropped the ball; we had a bit of a misunderstanding with respect to the L2 migration. I will be working on getting our RPC nodes up this weekend.
Hi Asa,
Not sure if you’re still configuring or not, but your two validator RPCs are registered to the same endpoint.
The intention is that validator payments are on a one-node-to-one-validator basis. Otherwise, everyone will just pay $10/month for a full node on Quicknode and proxy the RPC requests.
Apart from providing many different community RPCs (we are working on a top-level load balancer for these RPCs too), part of the motivation for Celo to reward elected validators in this manner is to have a mobilized set of technical contributors who know how to operate the node, have copies of state and so on, and can stay in the loop if cLabs makes progress on decentralized sequencing.
I know it might seem like redundant resource usage to have duplicate full nodes simply sitting there syncing, but without upgrading op-node/op-geth to support node identifiers or signed RPC responses, at the moment we are relying on social consensus that operators are not just proxying requests to a third party or to a single internal node.
Looks like you’re still down as of right now, but just reach out if you have any trouble.
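(As an aside on the "same endpoint" point above: once each elected validator registers its own RPC URL, duplicates are trivial to spot. The sketch below is purely illustrative; the validator-to-endpoint mapping is hard-coded as a placeholder rather than read from the registered metadata.)

```python
# Illustrative duplicate-endpoint check. The mapping is a hard-coded placeholder;
# in practice it would be built from each validator's registered RPC metadata.
from collections import defaultdict

registered_endpoints = {
    "validator-a": "https://rpc.example-operator.com",
    "validator-b": "https://rpc.example-operator.com",  # same endpoint as validator-a
    "validator-c": "https://rpc.other-operator.io",
}

validators_by_endpoint = defaultdict(list)
for validator, endpoint in registered_endpoints.items():
    validators_by_endpoint[endpoint].append(validator)

for endpoint, validators in validators_by_endpoint.items():
    if len(validators) > 1:
        print(f"{endpoint} is registered by multiple validators: {', '.join(validators)}")
```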
Thanks @Thylacine, I am indeed still working on this. I have multiple nodes behind a load balancer, hence the same endpoint. I can expose the individual nodes instead, though I think that is probably an inferior setup.
Yeah, if the goal were just having a load-balanced RPC available to the public that people consistently feel comfortable using, or setting in their browser wallet and so on, then quality of service would be the priority and we should really be focusing on zero-downtime RPCs - more on the proxying and cluster setup of the API than on redundant copies of op-node for no reason.
But part of the goal is also the other things I mentioned. It's difficult to justify rewarding a group with 5 validators five times more than a solo validator simply for having one node and 5 routes all pointing to the same machine behind a reverse proxy.
There's no way we can stop people from doing this though, or indeed from not having a node at all and simply proxying to someone else's RPC (or Forno) from an $8/month VPS. We're somewhat stuck in a bit of a worst-of-both-worlds scenario until we come up with some better rules.
This scoring committee is only temporary, for 6 months, and we can review the rules after that. (We might have a node ID / de-duplication PR ready in that time as well.)
No problem, I’ve updated the metadata to reflect endpoints for the individual nodes rather than the load balancer.
I think both nodes should be synced now, so we should be back up and running. Thanks for your patience.
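For anyone who wants to double-check a node after bringing it back up, the standard eth_syncing call is the quickest test - it returns false once op-geth reports itself as caught up. A minimal sketch, with the endpoint URL as a placeholder:

```python
# Quick sync check against an op-geth (or any Ethereum JSON-RPC) endpoint.
# eth_syncing returns false once the node considers itself fully synced.
import requests

ENDPOINT = "https://rpc.example-operator.com"  # placeholder URL

payload = {"jsonrpc": "2.0", "id": 1, "method": "eth_syncing", "params": []}
result = requests.post(ENDPOINT, json=payload, timeout=5).json()["result"]

if result is False:
    print("node reports fully synced")
else:
    # While syncing, geth returns an object with hex-encoded block numbers.
    current = int(result["currentBlock"], 16)
    highest = int(result["highestBlock"], 16)
    print(f"still syncing: block {current} of {highest}")
```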