Gingerbread Hard Fork: Timeline & Specification Update [All Core Devs call on Aug 18]

Dear Celo community, dear validators and node operators,

This is an update on A) the timeline for the Gingerbread hard fork mainnet activation and B) the content of the hard fork, as well as some other topics.

A) Updated Timeline

In the original post, we set out a fairly aggressive timeline targeting a mainnet activation on August 18. However, we have accrued a delay of about one month versus that timeline, mostly due to difficulties in completing the code in the expected time and some issues that surfaced during testing.

As described below, we feel it would be beneficial to include more updates in this hard fork, which introduces additional delay. Our expected timeline for the hard fork, including all the proposed additional changes below, is the following:

  1. August 15: Code complete for all HF features
  2. August 21: Baklava release published
  3. September 1: Baklava upgrade date
  4. September 4: Mainnet release published
  5. September 26: Hard Fork date

B) Additional Goals & Features

As it took more time to release the Gingerbread hard fork on Baklava, a number of additional items surfaced that would be beneficial to include in the hard fork. Importantly, all of these items are relevant to our parallel work to upgrade to an L2. A high-level overview follows here; a more technical overview of these items is available on GitHub (see links below).

1. Removal of CIPs 20, 25, 30 & 31

These CIPs lack an Ethereum implementation and thus increase the differences between Celo and Ethereum, which cLabs tries to minimize as much as possible. To the best of our knowledge, these CIPs are not used on Celo. Therefore, we suggest removing them.

See github for details.

2. Introduction of a new transaction type: the “envelope transaction”

Historically, Celo has had two types of transactions, namely Ethereum-type and Celo-native (an envelope transaction with three additional input fields). For the upgrade to an L2, we will remove two of these three fields (namely “gatewayFee” and “gatewayFeeRecipient”, as we remove full node incentives) and keep only the fee currency field. To that end, we introduce a new envelope transaction reduced to one additional input field. Releasing this new envelope transaction as soon as possible lets all DApps start the migration early, providing a long migration window for all ecosystem participants.
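To illustrate the field reduction, here is a minimal Python sketch. The class and field names are illustrative only, not the actual wire format (the real transaction types are RLP-encoded in the Go client):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LegacyCeloTx:
    # Standard Ethereum fields elided; the three Celo-specific additions:
    fee_currency: Optional[str]           # ERC-20 address used to pay gas
    gateway_fee: int                      # full-node incentive (being removed)
    gateway_fee_recipient: Optional[str]  # full-node incentive (being removed)

@dataclass
class NewEnvelopeTx:
    # Only the fee-currency field survives the reduction
    fee_currency: Optional[str]

def migrate(tx: LegacyCeloTx) -> NewEnvelopeTx:
    """Drop the two gateway fields; keep only fee_currency."""
    return NewEnvelopeTx(fee_currency=tx.fee_currency)

# Hypothetical legacy transaction paying gas in an ERC-20 fee currency:
old = LegacyCeloTx(fee_currency="0xFeeToken", gateway_fee=0, gateway_fee_recipient=None)
print(migrate(old))
```

DApps that never set the gateway fields can adopt the reduced type with no behavior change, which is why an early release gives the longest possible migration window.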

See github for details.

3. Block size limitation

As part of Celo’s pursuit of processing more transactions per second, we constantly search for ways to increase the gas limit without degrading the performance of every node. Upgrading to an L2 will unlock further opportunities to increase throughput. For now, the focus lies on increasing the block gas limit and decreasing the time between blocks. Increasing the block gas limit can theoretically increase the block size (in terms of MB) to a level that hinders communication between nodes. To prevent that, we propose adding an overall block size limit of 5MB.

Note that this limit is of a rather theoretical nature, as observed block sizes are currently a factor of ~50x below it. We assume that only a malicious actor would trigger block sizes above 5MB, and that the limit will not affect Celo usage as we have historically observed it.
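As a rough sketch of the proposed rule (assuming the cap applies to the encoded block size; the function name and the binary interpretation of “5MB” are assumptions, not the actual client code):

```python
# Proposed cap from the post; whether "5MB" means 5 * 10**6 or 5 * 2**20
# bytes is an assumption here -- we use the binary interpretation.
MAX_BLOCK_BYTES = 5 * 1024 * 1024

def block_accepted(encoded_size: int) -> bool:
    """A block is valid only if its encoded size is within the cap."""
    return encoded_size <= MAX_BLOCK_BYTES

# Observed blocks are ~50x below the cap, i.e. on the order of 100KiB:
typical_block = MAX_BLOCK_BYTES // 50
print(typical_block, block_accepted(typical_block), block_accepted(MAX_BLOCK_BYTES + 1))
```

The ~50x headroom is why honest traffic should never hit the cap: only a deliberately stuffed block would approach 5MB.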

See github for details.

Updated recommended node specs

Based on the 5MB block size limitation (see 3. above) and potential future increases of the per-block gas limit, we propose the following updated recommended specs for validators and nodes:

Validator and validator proxy recommended specs:

  • CPU: 4 cores / 8 threads, x86_64 3GHz (Cascade Lake or Ryzen 3000 or newer)
  • Memory: 32GB
  • Disk: At least 512GB SSD (resizable). Current chain size as of August 2023 is ~190GB.
  • Network: 1 GigE with low-latency fiber Internet connection, ideally redundant connections and switches

Some cloud instances that meet the above requirements are:

  • GCP: n2-highmem-4, n2d-highmem-4 or c3-highmem-4
  • AWS: r6i.xlarge, r6in.xlarge, or r6a.xlarge
  • Azure: Standard_E4_v5, or Standard_E4d_v5 or Standard_E4as_v5
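For operators scripting their upgrade checks, the recommendations above can be encoded as a simple threshold comparison (thresholds from this post; the host-probing side is omitted and the sample host values are made up):

```python
# Recommended validator/proxy profile from this post
VALIDATOR_SPECS = {"threads": 8, "memory_gb": 32, "disk_gb": 512}

def meets(host: dict, specs: dict = VALIDATOR_SPECS) -> bool:
    """True if the host meets or exceeds every recommended threshold."""
    return all(host.get(key, 0) >= minimum for key, minimum in specs.items())

# Hypothetical host with 8 threads, 32GB RAM and a 1TB disk:
print(meets({"threads": 8, "memory_gb": 32, "disk_gb": 1024}))  # True
# Hypothetical under-provisioned host:
print(meets({"threads": 4, "memory_gb": 16, "disk_gb": 512}))   # False
```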

Full Node recommended specs:

  • CPU: 4 cores / 8 threads, x86_64 3GHz (Cascade Lake or Ryzen 3000 or newer)
  • Memory: 16GB
  • Disk: At least 512GB SSD (resizable). Current chain size as of August 2023 is ~190GB.
  • Network: 1 GigE with low-latency fiber Internet connection

Some cloud instances that meet the above requirements are:

  • GCP: n2-standard-4, n2d-standard-4 or c3-standard-4
  • AWS: M6i.xlarge, M6in.xlarge, or M6a.xlarge
  • Azure: Standard_D4_v5, or Standard_D4_v4 or Standard_D4as_v5

Archive Node recommended specs:

  • CPU: 4 cores / 8 threads, x86_64 3GHz (Cascade Lake or Ryzen 3000 or newer)
  • Memory: 16GB
  • Disk: 3TB SSD or NVMe (resizable). Current chain size as of August 2023 is ~2.1TB.
  • Network: 1 GigE with low-latency fiber Internet connection

Some cloud instances that meet the above requirements are:

  • GCP: n2-standard-4, n2d-standard-4 or c3-standard-4
  • AWS: M6i.xlarge, M6in.xlarge, or M6a.xlarge
  • Azure: Standard_D4_v5, or Standard_D4_v4 or Standard_D4as_v5

Request for validators: Form update

Regarding the hardware specs upgrade, it would be great if you could fill out this form, so we can gather feedback on who has already upgraded, or plans to upgrade, to the increased specs.

All Core Devs Call & Validator Q&A

Historically, we’ve had two types of calls where we discuss CIPs: the All Core Devs Call and the Validator Q&A session. This time we will combine these two, and hold two calls to accommodate people around the world. The currently planned schedule:

Call: August 18, 2023, 5pm CET, 8am PT

Link to join: https://meet.google.com/shx-pqjw-xzx (if the link does not work, please check the Celo Signal public calendar)

Call details subject to change: check Celo Signal public calendar for latest details

We’re looking forward to your feedback, both here and in the calls on August 18.

Matt

For the blockchain team

Should a proxy node be considered a “validator” or a “full node” for purposes of hardware selection?

Good question!
The proxy should be considered a validator rather than a full node. Memory consumption is slightly higher on proxies than on full nodes, so they will run better (and more safely) with 32GB of memory during periods of high traffic.

The Baklava release including the updates discussed above is live. Please update if you agree with these changes to the protocol!

I am a bit confused by these recommendations. “-4” machines only have 4 “vCPUs”. That is not 4 real CPUs; a vCPU is “implemented as a single hardware thread, or logical core, on one of the available CPU platforms.”

So if you actually want 4 CPUs / 8 threads, you want the “-8” type of machines, which are twice as large as “-4” machines.

And if you are using “-8” machines, you will want the regular ones, because they come with 32GB of memory as standard.

Hi @thezviad! You’re right, we’re referring to vCPUs. Indeed, the base for our tests has been n2-highmem-4 and n2d-highmem-4 instances.
The comment was more intended for operators running on dedicated hosting services like Hetzner, or for on-premise validators, to avoid running on relatively old CPU families.

Sorry about the confusion and thanks for highlighting it!

@bongui Is 32GB RAM truly required, or is this just the safest recommendation for the upcoming changes?

I’ve been running since genesis on essentially 8GB dedicated and 12GB swap space for both proxy and validator, without memory ever really being a bottleneck. I appreciate that the potential max block size is changing, though, as listed in this post.

Also, should we expect 32GB to be the standard for mainnet, with Baklava possibly a little smaller? Obviously I don’t want to run so close to the wire that there’s no room for increased load, but running machines of the new recommended sizes for a modified geth client feels overkill-ish?

Hi @Thylacine ! Thanks for the question.

The recommendation of 32GB is a safe value for pressure conditions on the network. I can share some dashboard snapshots from our tests: under high network traffic (in terms of gas usage, with a diverse blockchain load), memory usage will go over 16GB. If network activity is lower, 16GB will be enough, but if it increases, there is a risk that validators with only 16GB won’t have enough memory.

For Baklava it is OK to keep lower resources (16GB, or even a bit less, will probably be enough for Baklava nodes).

Thanks, appreciate it