Kinglory

Layering and Sharding

Why layering and sharding is the only consistent way to solve the scalability problem

Layering and data sharding (RADS) offers better security and decentralization, but that is only part of the story. The real reason RADS is the only solution is its scalability: it is the only way to sustain millions of TPS over the long run. Specifically, I will focus on zkRollups, since optimistic layering has intrinsic limitations. The argument has two parts: technical consistency and economic consistency.

Technical consistency

A technically consistent blockchain node needs to

  1. Keep up with the chain tip and stay synchronized
  2. Be able to synchronize from the genesis block in a reasonable time
  3. Keep state growth under control

For a decentralized network, none of these is negotiable, and each one creates a bottleneck (the rough sketch below puts illustrative numbers on them). [Note: some argue that 2. is not strictly necessary, and I agree; snapshots with consensus verification are a fine substitute.] Ethereum achieves all three while pushing the edge of what is possible, and it is still not enough. Even shard chains that meet all three only get to thousands of TPS, which is still not enough.
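To make those bottlenecks concrete, here is a rough back-of-envelope sketch; every figure in it is an assumption chosen for illustration, not a measurement of any particular chain.

```python
# Back-of-envelope sketch of why requirements 2. and 3. cap throughput.
# Every number below is an assumption chosen only for illustration.

tps = 5_000                    # hypothetical sustained throughput on one chain
bytes_per_tx = 150             # assumed average transaction size
state_growth_per_tx = 50       # assumed bytes of new state left behind per tx
sync_speed_tx_per_s = 20_000   # assumed replay speed of a syncing full node

seconds_per_year = 60 * 60 * 24 * 365
history_tb_per_year = tps * bytes_per_tx * seconds_per_year / 1e12
state_tb_per_year = tps * state_growth_per_tx * seconds_per_year / 1e12
days_to_sync_one_year = tps * seconds_per_year / sync_speed_tx_per_s / 86_400

print(f"history growth: {history_tb_per_year:.1f} TB/year")                     # ~23.7 TB/year
print(f"state growth:   {state_tb_per_year:.1f} TB/year")                       # ~7.9 TB/year
print(f"genesis sync:   {days_to_sync_one_year:.0f} days per year of history")  # ~91 days

# At a few thousand TPS a plain full node already struggles with 2. and 3.;
# pushing a single chain toward millions of TPS makes them impossible.
```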

Centralized solutions and their limitations

A more centralized network compromises on each of these requirements.

  1. Not every node needs to keep up with the chain; only a minimal set of validators does.
  2. There is no need to synchronize from genesis when snapshots and other shortcuts are available.
  3. State expiry is a good solution and will eventually land in most blockchains. Until then, measures such as state regeneration will help.

Such networks are no longer decentralized, but in this article we focus on scalability.

Among these, 1. is unavoidable, and RAM, CPU, disk I/O, and network bandwidth all remain potential bottlenecks for those nodes. More importantly, keeping the required node count to a minimum determines how far those bottlenecks can be pushed.

Layering can easily surpass centralized L1s

A zkRollup can push node requirements even higher than most centralized L1s, because validity proofs keep it at least as secure as them regardless. At any given time only one node needs to be active, and security remains high. For censorship resistance and resilience we want multiple sequencers, but they do not need to reach consensus among themselves; Hermez and Optimism, for example, plan to have one sequencer active at a time and rotate between them. In addition, zkRollups can adopt every innovation that makes full node clients more efficient, whether built for zkRollups or for L1s. zkRollups can also be especially creative with state expiry, since their history can be rebuilt from L1. Innovations in sharding and history compression could even make it possible to run a zkRollup directly on data shards.
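As a minimal illustration of the state-expiry point, the sketch below shows how an L2's current state can always be re-derived from the batch data it posted to L1. The batch layout, transaction fields, and balance model here are hypothetical, not any real rollup's format.

```python
# Minimal sketch (not any specific rollup's implementation) of why a zkRollup
# can be aggressive with state expiry: every batch it ever posted lives on L1,
# so a fresh node can rebuild the current L2 state by replaying that data.

from dataclasses import dataclass

@dataclass
class Tx:
    sender: str
    recipient: str
    amount: int

@dataclass
class Batch:
    txs: list  # transactions published as L1 calldata (or, later, shard data)

def rebuild_state(batches_from_l1):
    """Replay every posted batch in order to recover the latest L2 balances."""
    state = {"alice": 10}  # hypothetical genesis allocation (e.g. from L1 deposits)
    for batch in batches_from_l1:
        for tx in batch.txs:
            state[tx.sender] = state.get(tx.sender, 0) - tx.amount
            state[tx.recipient] = state.get(tx.recipient, 0) + tx.amount
    return state

# A node that expired old state can always re-derive it, because the batch
# data above is permanently available on L1.
batches = [Batch(txs=[Tx("alice", "bob", 5)]), Batch(txs=[Tx("bob", "carol", 2)])]
print(rebuild_state(batches))  # {'alice': 5, 'bob': 3, 'carol': 2}
```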

However, there are still hard limits. Whether the bar is 1 TB of RAM or 2 TB, it can only be raised so far, and we also have to consider how many infrastructure providers can actually keep up with such a chain.

So yes, a zkRollup scales far beyond any L1, but a single zkRollup still cannot reach global scale on its own.

Scaling further with multiple zkRollups

You can run multiple zkRollups on top of data shards. Once data shards ship, they will greatly expand data availability and can keep expanding as demand grows. A single zkRollup cannot reach extreme capacity, but many zkRollups together can.

Does splitting activity across multiple zkRollups hurt composability? For now, yes. But a lot of work is going into this, from bridges such as Hop, Connext, cBridge, and Biconomy to innovations like dAMM that let liquidity be shared across multiple zkRollups. These breakthroughs are harder or even impossible on L1, so whatever a centralized L1 can do, zkRollups can do better and at higher TPS. And together, multiple zkRollups can reach global scale.

Economic consistency

Transaction fees need to cover the payments to validators and the inflation handed to delegators. Speculative enthusiasm and token premiums can subsidize a blockchain's development for a while, but a resilient, decentralized blockchain should aim for economic consistency.
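To put the break-even condition in numbers, here is an illustrative sketch; the throughput, fee, and security-budget figures are assumptions, not data from any real chain.

```python
# Illustrative break-even arithmetic, not data from any real chain.
# Economic consistency here means: fee revenue covers the security budget,
# so the chain does not rely on inflating its token to pay validators.

tps = 1_000                          # assumed sustained transactions per second
avg_fee_usd = 0.05                   # assumed average fee per transaction
security_budget_usd = 2_000_000_000  # assumed yearly payment to validators/delegators

yearly_fee_revenue = tps * avg_fee_usd * 60 * 60 * 24 * 365
shortfall = security_budget_usd - yearly_fee_revenue

print(f"fee revenue: ${yearly_fee_revenue:,.0f}/year")
print(f"shortfall covered by inflation: ${max(shortfall, 0):,.0f}/year")

# With these numbers, fees bring in ~$1.58B/year against a $2B budget, so
# ~$0.42B/year must still be printed as inflation; the chain is not yet
# economically consistent.
```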

You can't just increase throughput to the technical limit

The usual argument is that a blockchain will simply process more transactions, collect more fees, reduce inflation, and eventually break even. The reality is more complicated. Hardware and network bandwidth cap a blockchain's performance, and with current technology the ideal TPS simply is not reachable.

Another problem is that the extra transactions are not free: they raise bandwidth requirements, inflate state, and, in short, push up system requirements. Some will say there is already headroom to do more, but that is a shaky assumption when 128 GB of RAM can only keep up with a few hundred TPS. Hardware does keep getting cheaper, true, but that is no miracle cure: you still have to choose between higher capacity and lower cost, or some balance of the two. And zkRollups benefit from Moore's Law and Nielsen's Law just the same.
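Here is a rough sketch of why that headroom argument breaks down, taking the 128 GB / few-hundred-TPS figure above at face value and assuming, purely for illustration, that the burden scales roughly linearly with throughput.

```python
# Back-of-envelope sketch of why raw TPS cannot just be cranked up on one L1.
# The current figures echo the "128 GB RAM for a few hundred TPS" point above;
# everything else is an assumption for illustration.

current_tps = 300            # assumed throughput a high-end node keeps up with today
current_ram_gb = 128         # assumed RAM that throughput already demands
target_tps = 1_000_000       # the "global scale" target discussed in this article

# Naive linear scaling of the state/bandwidth burden with throughput:
scale = target_tps / current_tps
ram_needed_gb = current_ram_gb * scale

print(f"scale factor: {scale:,.0f}x")                               # ~3,333x
print(f"naively required RAM: {ram_needed_gb / 1024:,.0f} TB")      # ~417 TB

# Thousands of times more throughput implies hundreds of TB of RAM per node
# under this naive model; no plausible pace of hardware improvement closes
# that gap on its own.
```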

In the end, every centralized L1 has to raise costs

In the end, there are only two options: a) the network becomes more centralized, or b) as the network hits its limits, costs go up. As discussed above, a) has its own limits, so b) is unavoidable, and reaching economic consistency gets expensive. We are only talking about economic consistency here; there are many other variables, such as the token's value rising and falling. It is a simplified view, but the logic holds.

How RADS increases efficiency

With RADS, the maintenance cost on the layering side is very low. At any given time only a few nodes need to run and no expensive consensus mechanism is needed for security, yet the throughput exceeds any L1. The rollup can therefore charge low L2 transaction fees and still keep the network profitable. On the data availability side, the highly efficient Beacon chain consensus means that even limited activity is enough to keep inflation minimal.
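A simplified sketch of why L2 fees can stay low while the network remains profitable: the fixed L1 costs of a batch are shared across every transaction in it. The batch size, gas figures, and prices below are assumptions for illustration, not any rollup's actual parameters.

```python
# Illustrative sketch of zkRollup fee amortization: the L1 cost of publishing
# a batch's data and verifying one validity proof is split across every
# transaction in the batch. All numbers below are assumptions.

gas_price_gwei = 30
eth_price_usd = 2_000
wei_per_gwei = 1e9

batch_size = 2_000                 # assumed transactions per batch
calldata_gas_per_tx = 16 * 12      # assumed ~12 bytes of compressed data per tx at 16 gas/byte
proof_verification_gas = 500_000   # assumed gas to verify the batch's validity proof on L1

gas_per_tx = calldata_gas_per_tx + proof_verification_gas / batch_size
fee_per_tx_usd = gas_per_tx * gas_price_gwei / wei_per_gwei * eth_price_usd

print(f"L1 gas amortized per L2 tx: {gas_per_tx:,.0f}")   # ~442 gas
print(f"approx L1 cost per L2 tx: ${fee_per_tx_usd:.3f}")  # ~$0.03 at these assumed prices

# Larger batches amortize the proof term toward zero, and data shards cut the
# data-publication term as well.
```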

Therefore, the whole RADS ecosystem can stay consistent without sacrificing scalability or cost. Its costs are far lower than a centralized L1's, which is what allows it to offer orders of magnitude more throughput.

Short-term outlook

It is important to note that RADS will take years to fully mature.

In the short term, there are two choices.

  1. Consistent layering (rollups)
  2. Inconsistent centralized L1s

Option 1 is still too expensive today. Option 2 is cheaper but not consistent. There is, however, a third choice: validiums. Validiums offer costs comparable to Polygon PoS and Solana. Admittedly, a validium's data availability today is also inconsistent, just like a centralized L1's; but because it relies on lightweight mechanisms such as data availability committees, it is much cheaper. Another point in validiums' favor is that once data shards go live, they are forward compatible with layering; an L1 has that option too, but the transition would be disruptive. On top of that, a validium is still more secure than such an L1.

In summary
  1. Some projects offer very low transaction fees that are effectively subsidized by speculation on their tokens. For users who just want low costs, they are fine choices today, but it is not a consistent model, to say nothing of the compromises in centralization and security.
  2. Even for these projects, costs will have to rise once they gain real traction. Then ever newer and more centralized L1s will be needed, an endless race that is anything but consistent.
  3. For now, optimized layering offers the consistent choice, at a cost of roughly $0.10 to $1 per transaction.
  4. In the long term, RADS is the only way to scale to millions of TPS and reach global scale while preserving both technical and economic consistency. Kinglory can deliver this while guaranteeing high security, decentralization, and credible neutrality, without permissions or trust. It sounds like magic; as the saying goes, "any sufficiently advanced technology is indistinguishable from magic", and layering plus sharding is that magic.