Results from blockchain scalability testing

Over the last few months, the cLabs blockchain team has been evaluating the potential throughput of the Celo blockchain client. The tests were conducted with a range of transaction workloads, blockchain parameter settings, and validator hardware configurations. We wanted to share a few more of the details of these tests for anyone interested, as background for the recent recommendations of validator hardware configurations, and as an indication of how Celo throughput is likely to change in the future.

Testing environment

We use Google Cloud to stand up and tear down large, Kubernetes-based clusters of validators. Our testing environment consists of 110 validators across 5 geographical regions, 22 validators in each:

  • South East Asia
  • Australia
  • Europe
  • South America
  • US

Load is generated in the US region.

The load testing system supports four different modes:

  • data - generates transactions carrying a large amount of calldata
  • transfers - generates many small transactions performing token transfers
  • contract calls - generates many small transactions that make contract calls
  • mixed - a mix of the three workloads above

In our tests we used data and mixed workloads.

:warning: Important note: in order to stress test the system, the EIP-1559 gas price adjustment mechanism was effectively turned off by setting the adjustmentSpeed parameter to 0 and targetDensity to 1. On mainnet, the values are set to 0.5 and 0.7 respectively.

This was done so we could keep gas usage per block at the highest possible level, without the EIP-1559 price increases quickly draining the load test bots’ accounts.

Test Objectives

The goal of these tests was to determine the highest block gas limit the system can reliably handle while sustaining the desired 5 second block rate.

Test 1: 50M block gas limit

Parameters: mixed load mode, 4 vCPU (n2-highmem-4 instances), 32 GB RAM, state db size ~475 GB
Result: success, 624 TXs per block on average
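For context, transactions per block convert directly to throughput via the 5 second block time; a tiny illustrative helper:

```python
BLOCK_TIME_S = 5  # target block time from the test objective

def tps(txs_per_block: float) -> float:
    """Convert average transactions per block into transactions per second."""
    return txs_per_block / BLOCK_TIME_S

print(tps(624))  # 124.8 -> roughly 125 TPS at the 50M gas limit
```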

Subsequent tests with higher gas limits demonstrated degraded performance and increased block times due to CPU and I/O bottlenecks.

For tests with gas limits higher than 50M we switched to 8 vCPU (c2d-standard-8 or c2-standard-8 depending on the region), 32 GB RAM machines. Because GCP ties its disk I/O limits to vCPU count, a higher number of vCPUs results in improved I/O performance of the storage layer, which alleviates the bottlenecks.

Test 2: 70M block gas limit

Parameters: mixed load mode, 8 vCPU, 32 GB RAM, state db size ~475 GB
Result: success, 932 TXs per block on average

Test 3: 85M block gas limit

Parameters: mixed load mode, 8 vCPU, 32 GB RAM, state db size ~475 GB
Result: success, 1134 TXs per block on average

The little waves of iowait at the bottom of the chart mark state db compactions, which result in increased I/O operations. To increase the load on the I/O system and find the point where it becomes a bottleneck, we turned on randomization of recipient addresses, which increases load on the state db, as the next test shows.

Test 4: 90M block gas limit

Parameters: mixed load mode, random addresses, 8 vCPU, 32 GB RAM, state db size ~475 GB
Result: partial success, degraded block rate, 1162 TXs per block on average

Notice the elevated number of read operations at the moment of switching from test 3 to test 4, when randomization and the 90M block gas limit were enabled.
The spikes on the chart again correspond to state db compactions; addressing them would open the door to higher block gas limits.

Test 5 (bonus): 90M block gas limit + TX data size limit

Parameters: data load mode, 8 vCPU, 32 GB RAM, state db size ~475 GB
Result: success, 91 TXs per block on average

Although the block gas limit was set to 90M, the actual gas used was about 22.8M per block due to the total transaction data limit mechanism introduced in CIP-65.

The load test script generated transactions that use a lot of calldata, and the total limit of 5 MB per block effectively capped the number of data-heavy transactions in a block, which indicates that the protective mechanism works as expected.
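The back-of-the-envelope relationship between the data limit and transactions per block can be sketched as follows (assuming "5 MB" means 5 * 1024 * 1024 bytes; CIP-65 may define the limit slightly differently):

```python
# Assumption: the CIP-65 per-block data limit interpreted as binary megabytes.
BLOCK_DATA_LIMIT_BYTES = 5 * 1024 * 1024

def max_data_heavy_txs(avg_calldata_bytes: int) -> int:
    """Upper bound on calldata-heavy transactions that fit under the
    per-block data limit, regardless of the 90M gas limit."""
    return BLOCK_DATA_LIMIT_BYTES // avg_calldata_bytes

# The observed ~91 txs per block implies an average payload of roughly 56 KB:
print(BLOCK_DATA_LIMIT_BYTES // 91)  # 57614 bytes
```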


These tests have shown that, with the changes coming in the Gingerbread hard fork, there’s significant scope for safely raising the block gas limit further, up to 50M, with the current minimum recommended hardware spec of 4 vCPUs and 32 GB RAM.

To increase the block gas limit further, up to 90M, the recommended hardware specification would need to be updated to 8 vCPUs and a more performant storage layer.

With the EIP-1559 mechanism enabled and the adjustmentSpeed and targetDensity values tuned accordingly, Celo can safely move to higher block gas limits, since the economic cost of denial-of-service attacks becomes prohibitive.

The blockchain team continues working on optimizations that would further increase possible block gas limits. In particular, we’re focused on additional work and testing around optimizations of the state DB performance.

Please reply to this post if you have any questions or suggestions, or are interested in contributing to future work!


@bongui Thanks for the charts. Are there screenshots/info of memory usage for 32M and/or 50M blocks? Since there were 110 validators, it is safe to assume the net.peerCount was ~90-110 for any given validator, right?

Hi @kamikazechaser, welcome to the forum!

Yes, of course. Here is a memory usage chart for 50M. Memory would grow during cache warmup, flattening around ~12 GB under sustained load.

The peerCount was ~125 at all times, including the nodes used for load generation.
