Score Management Committee - Results for week ending 31/08/2025

This is the 21st in a series of weekly reports on our off-chain score-keeping system for the elected validator / community RPC set. Each week on Sunday, we publish the next actions for the score management multi-sig and open the dispute period.

Changed Scores

We only report deviations from a full score. If you are not listed here, no action is needed and your validator score remains 1.0 (full payment, claimable each epoch).

Scores below 1

  • Bi23 (0x997490F08B9b99dB00657e2B1BbEc180Cf804A27)
    RPC downtime for the week was above 20%. Score set to 0.8

  • coinwar (0xC4C51aACa587F16006E9a5F6ccc74a99eBEB37A1)
    RPC downtime for the week was around 20%. Score set to 0.8

  • ReFiMedellin1 (0x58362102FE07FB4d0E2287f5eB6b9F50e3a9739A)
    RPC downtime for the week was above 40%. Score set to 0.6

  • ChainLayer1 (0x4FC4EA624db2E4A1d6195a03744D505CbcD9431b)
    RPC downtime for the week was above 40%. Score set to 0.6


Scores back to 1

  • celvaly-0 (0xe8302a78c4eB56Ac1cd1d6aee25719dC54C63E59)
    RPC uptime for the week was above the required threshold; score set back to 1.0

  • celvaly-1 (0x989F3F2684F96B8cDeC308B4E7538A5a062890f0)
    RPC uptime for the week was above the required threshold; score set back to 1.0

  • celvaly-2 (0x9038D66a9b101C32Bd661e8861B8e61086A74c52)
    RPC uptime for the week was above the required threshold; score set back to 1.0

  • missionquorum-0 (0x39b01620ecdEB8347270ee78dC962b0EA583b57a)
    RPC uptime for the week was above the required threshold; score set back to 1.0

  • missionquorum-2 (0xdb2e80840f40033761A5cdFa3B09fE24f95a4aaa)
    RPC uptime for the week was above the required threshold; score set back to 1.0

  • Validator.Capital1 (0x0F412760759a2fAD4f07ceE36bfdAa218814DE89)
    RPC uptime for the week was above the required threshold; score set back to 1.0

  • Validator.Capital2 (0x07adF41F00dD7Ec9C63c445f3faD5cCbdc2a7c85)
    RPC uptime for the week was above the required threshold; score set back to 1.0

  • Validator.Capital3 (0x3ABE54fe87f77D18b472e47920934CE63b1F6bfc)
    RPC uptime for the week was above the required threshold; score set back to 1.0


Next Steps

The dispute period is now open for approximately 24 hours, after which we will execute the above score changes if there are no challenges. Those changes will apply for another week until we review the scores again on 07/09/2025.


Hello team @swiftstaking, thanks for the report.

We would like to clarify the situation regarding ReFiMedellin1:

  • The validator has been set up for over a year, but it only received delegation around mid-week, without any advance notice (an amazing surprise :smiling_face_with_three_hearts: )

  • Once the delegation came in, we realized we needed to migrate the RPC, which was done on Thursday.

  • Because of this timing, the RPC was not running consistently earlier in the week, which may have affected the measured uptime.

Going forward, we will actively manage this validator under Celo Colombia, ensuring reliable uptime and performance. Our plan is to allocate 20% of validator earnings to ReFi Colombia, from which we will support all ReFi nodes including Medellin, Bogota, Amazonas, and Atlantico, and the remaining 80% to Celo Colombia activities, strengthening both local and ecosystem-wide engagement.

Given that the RPC only became active mid-week, we kindly ask that this be taken into account so that the validator is not scored down for this period :folded_hands:

Thanks for your understanding and for keeping this system fair.


Thank you for the feedback and rapid response.

With the Foundation Validator Voting Program winding down, significant changes are underway in both vote distribution and the elected validator set.


Thanks for your response @swiftstaking
Just to clarify: is this a yes or a no regarding our request not to be scored down this week, given the RPC only became active on Thursday?

Hi @CeloColombiano,

Just explaining a bit more about how the scoring works. We count downtime as a proportion of the calendar week for this very reason: to keep it fair to validators elected only partway through the week.

Downtime is calculated as a proportion of the calendar week, 12:00:00 UTC Sunday to 11:59:00 UTC Sunday. The reason we count it as a proportion of the week rather than as a proportion of “the time you were elected in a week” is the following problem:

  • Validator A: Elected for the entire week, and “down” 10% of the week (which, since they were elected all week, is also 10% of their “elected” time), approximately 16.8 hours. Either way you measure it, their downtime is below 20% and they get the full score.
  • Validator B: Elected on Saturday evening, say at 23:59:59 UTC, has some configuration trouble, is down for 6 hours, and finally comes back up at 06:00:00 UTC on Sunday. Measured against their “elected” period of 12 hours, they were down 50% of the time and would have their score reduced to 0.6.

This is the problem with pro-rating by your “elected” time. Validator B was only down for 6 hours in this example, while Validator A gave a worse quality of service: they were down for 16.8 hours in the week but received no penalty for it. That is unfair to Validator B, who was only briefly down for 6 hours in comparison!
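
To make the arithmetic concrete, here is a minimal sketch in Python of the two possible metrics, using only the illustrative numbers from the example above (these are not real measurements):

```python
# Minimal sketch: compare "downtime as a fraction of elected time" with
# "downtime as a fraction of the calendar week" for the two example validators.
# All numbers are the illustrative ones from the example above.

HOURS_PER_WEEK = 168.0  # calendar week, Sunday 12:00:00 UTC to Sunday 11:59 UTC

examples = {
    # name: (hours elected this week, hours down while elected)
    "Validator A": (168.0, 16.8),  # elected all week, down 10% of it
    "Validator B": (12.0, 6.0),    # elected ~12h on Saturday night, down 6h
}

for name, (elected_hours, down_hours) in examples.items():
    frac_of_elected = down_hours / elected_hours
    frac_of_week = down_hours / HOURS_PER_WEEK
    print(f"{name}: {frac_of_elected:.0%} of elected time, "
          f"{frac_of_week:.1%} of the calendar week")

# Validator A: 10% of elected time, 10.0% of the calendar week
# Validator B: 50% of elected time, 3.6% of the calendar week
```

Measured against the whole week, Validator B’s 6-hour outage counts for far less than Validator A’s 16.8 hours, which is the fairer outcome.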

So we updated the formula to account for this, calculating downtime only as a proportion of the entire week. This makes it fair for everyone: because downtime is counted purely in hours, everyone gets penalised at the same effective “rate”.
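
As a rough sketch of how a weekly downtime fraction then maps to a score, here is a small Python function. The thresholds are inferred from the changes listed in this report (under 20% keeps 1.0, above 20% gives 0.8, above 40% gives 0.6) and are not an official specification:

```python
# Hypothetical sketch of the weekly-downtime -> score mapping, using the
# thresholds implied by this week's report. Not the committee's actual code.

def weekly_score(downtime_hours: float, hours_per_week: float = 168.0) -> float:
    """Map downtime, measured as a share of the whole calendar week, to a score."""
    downtime_fraction = downtime_hours / hours_per_week
    if downtime_fraction > 0.40:
        return 0.6
    if downtime_fraction > 0.20:
        return 0.8
    return 1.0

print(weekly_score(16.8))         # Validator A above: 10% of the week   -> 1.0
print(weekly_score(6.0))          # Validator B above: ~3.6% of the week -> 1.0
print(weekly_score(0.436 * 168))  # ~43.6% of the week down              -> 0.6
```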

This also aligns much better with the “old” system, which was much harsher. After 12 hours of continuous downtime (regardless of how long you had been elected), you would be eligible to be slashed. Additionally, under the old system, even if you came back up after 11 hours without being slashed, your score would have decayed exponentially and very quickly: after 11 hours you would be near a single-digit validator score and would have to wait many epochs to build back up to 99.9%.
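
For intuition only, here is the shape of that kind of exponential decay. The per-hour decay factor below is a made-up, purely illustrative constant and is not the actual L1 scoring parameter:

```python
# Purely illustrative: how an exponentially decaying score collapses during a
# long outage. DECAY_PER_HOUR is a hypothetical constant chosen only to show
# the shape of the curve, not the real L1 scoring parameter.

DECAY_PER_HOUR = 0.66  # hypothetical multiplicative decay per hour of downtime

score = 1.0
for hour in range(1, 12):
    score *= DECAY_PER_HOUR
    if hour in (1, 6, 11):
        print(f"after {hour:2d}h down: score ~ {score:.1%}")

# after  1h down: score ~ 66.0%
# after  6h down: score ~ 8.3%
# after 11h down: score ~ 1.0%
```

With a factor like that, a long outage leaves you near a single-digit percentage score and many epochs away from 99.9%, which is roughly the behaviour described above.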

In your case, your validator was down for over 40% of the calendar week (43.6% by my records, with very similar results from the other score keepers).

We’re reviewing internally, but in my opinion there are no extenuating circumstances here other than operator error. This program of rewarding RPC providers is already very generous, given that it is not strictly required to run the L2 network, and the rules are far more lax than those for actually contributing to consensus on the L1. We are compensated for the devops/secops work of running a performant RPC node for the community. It’s on all of us to be technically on top of everything.

I’m sorry if that’s not the answer you were looking for, but once again, the penalty is much less severe than on the L1. Your score will only be 0.6 for the following week, after which (assuming your downtime is below 20%) your score will recover to 1.0 and full reward.

Happy to field any questions about the calculations or our methodologies, or any technical questions on running the node safely.

Edit: Note that we do not count unelected time as downtime; only time when you are both elected and down counts. So for all the time you were counted as down, you were elected.

Edit2: Wait, you might be on to something. I’m re-checking my script against the database.


OK, we checked all the raw data and everything looks correct as per my original response. It appears you were actually elected from 24/08/2025 at approximately 19:50:00 UTC, when your data first appears as an elected and configured validator. I believe there were domain resolution problems when you were elected, so all your data records until sometime on Wed/Thu were empty/NULL.

There is a minor display bug on Vido that shows you as blue (not elected) for this period from the 24th, but it’s just a display bug: it doesn’t cater for “elected but domain problems” and falls back to the “not elected” color. I will fix this display error, but the underlying data looks sound from our end.


Thanks for the detailed explanation. From now on we’ll make sure to be technically on top of everything. Our only point was that the delegation arrived unexpectedly, so we weren’t fully prepared at first, but fortunately we were able to handle it and get everything solved.

We’ll focus on maintaining reliable uptime going forward, and are confident we’ll be back at a 1.0 score soon. :flexed_biceps:


Ahh I understand, yeah that’s a bit unfortunate. Anyway everything looks configured properly now and you’ll be back to 1.0 in no time.