Somnia pushes higher TPS than most rivals by splitting “write” from “agree”: every validator streams transactions in parallel while a fast BFT layer orders them. It then executes EVM code as native machine code and shrinks network traffic with streaming compression and signature aggregation. The combination keeps CPUs busy, cuts bytes on the wire, and commits batches in under a second, so throughput scales with the validator set instead of stalling behind a single leader.

Parallel intake, then fast ordering

  • MultiStream design lets each validator append its own data chain continuously, so the network ingests many streams at once instead of waiting for a single leader’s block window.

  • A lightweight BFT ordering chain periodically snapshots all stream heads and deterministically merges them, giving one global sequence with sub‑second finality.
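The two bullets above can be sketched as a toy model: each validator owns an append-only stream, and a periodic ordering tick snapshots every stream head and merges the new entries deterministically. All names and the merge rule here are illustrative assumptions, not Somnia's actual protocol.

```python
# Toy MultiStream-style intake + periodic deterministic merge.
# Hypothetical sketch; names and merge rule are invented for illustration.

class ValidatorStream:
    """Append-only data chain owned by one validator."""
    def __init__(self, validator_id):
        self.validator_id = validator_id
        self.entries = []      # transactions appended continuously
        self.consumed = 0      # how far the ordering chain has read

    def append(self, tx):
        self.entries.append(tx)  # ingestion needs no consensus round

def ordering_tick(streams):
    """One BFT 'tick': snapshot all stream heads, then merge the new
    entries into a single deterministic global sequence."""
    # Snapshot heads first so every validator agrees on the same cut.
    cuts = {s.validator_id: len(s.entries) for s in streams}
    batch = []
    for s in sorted(streams, key=lambda v: v.validator_id):
        new = s.entries[s.consumed:cuts[s.validator_id]]
        batch.extend((s.validator_id, i, tx)
                     for i, tx in enumerate(new, s.consumed))
        s.consumed = cuts[s.validator_id]
    # Deterministic interleave: by position, ties broken by validator id.
    batch.sort(key=lambda e: (e[1], e[0]))
    return [tx for _, _, tx in batch]

a, b = ValidatorStream("A"), ValidatorStream("B")
a.append("tx1"); a.append("tx2"); b.append("tx3")
assert ordering_tick([a, b]) == ["tx1", "tx3", "tx2"]
```

The point of the sketch: appending to a stream never waits on a vote, and the merge rule is a pure function of the snapshotted heads, so every honest node derives the same global order.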

Compiled EVM, not just an interpreter

  • Hot contracts get compiled from EVM bytecode to native machine code, so execution behaves like optimized C++ rather than a slow generic VM.

  • The engine bets on a super‑fast sequential path to avoid complex runtime conflict schedulers that can thrash under state‑heavy workloads.
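As a rough analogy (a toy stack machine, not Somnia's actual compiler), the gap between interpreting bytecode and compiling it to straight-line host code looks like this:

```python
# Toy illustration of interpretation vs. compilation; the opcode set and
# the 'compiler' are invented for this sketch, unrelated to Somnia's engine.

def interpret(bytecode, x):
    """Generic dispatch loop: one branch test per opcode, every execution."""
    stack = [x]
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "MUL":
            stack.append(stack.pop() * stack.pop())
    return stack.pop()

def compile_once(bytecode):
    """'Compile' once: translate the opcode sequence into straight-line
    host code (here, generated Python source), removing dispatch overhead."""
    body = ["s = [x]"]
    for op, arg in bytecode:
        if op == "PUSH":
            body.append(f"s.append({arg})")
        elif op == "ADD":
            body.append("s.append(s.pop() + s.pop())")
        elif op == "MUL":
            body.append("s.append(s.pop() * s.pop())")
    src = "def f(x):\n    " + "\n    ".join(body) + "\n    return s.pop()"
    ns = {}
    exec(src, ns)
    return ns["f"]

prog = [("PUSH", 3), ("ADD", None), ("PUSH", 2), ("MUL", None)]  # (x + 3) * 2
native = compile_once(prog)
assert interpret(prog, 5) == native(5) == 16
```

The compiled version pays the translation cost once; every later call runs the contract's logic directly, which is the trade a hot-contract compiler is making.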

Lean bytes on the wire

  • Streaming compression works across continuous validator streams, gaining higher compression ratios than per‑block methods and lowering bandwidth per transaction as load rises.

  • Signature aggregation (e.g., BLS) collapses many signatures into compact proofs, reducing both payload size and verification cost per batch.
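The compression point can be demonstrated with Python's zlib: a compressor that keeps its window across batches beats one that restarts per block on repetitive transaction payloads. (Signature aggregation is omitted here because BLS requires a pairing library; the payloads below are synthetic.)

```python
import zlib

# Contrast per-block compression with a streaming compressor that keeps
# its history across batches. Payloads are synthetic, for illustration only.

txs = [f'{{"from":"0xabc{i % 7}","to":"0xdef{i % 5}","gas":21000}}'.encode()
       for i in range(200)]
blocks = [b"".join(txs[i:i + 20]) for i in range(0, len(txs), 20)]

# Per-block: each block restarts the dictionary from scratch.
per_block = sum(len(zlib.compress(b)) for b in blocks)

# Streaming: one compressor, history carried across every batch.
stream = zlib.compressobj()
streamed = 0
for b in blocks:
    # Z_SYNC_FLUSH emits a decodable chunk but keeps the window, so
    # repeated field names compress better as the stream grows.
    streamed += len(stream.compress(b) + stream.flush(zlib.Z_SYNC_FLUSH))

assert streamed < per_block  # shared history beats restarting every block
```

The same mechanism explains the "bandwidth per transaction falls as load rises" claim: the longer the stream runs, the more back-references the compressor can exploit.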

State I/O that doesn’t choke

  • A custom state database (IceDB) is tuned for tens‑of‑nanoseconds reads and writes with deterministic performance, so execution and replay aren’t starved by disk latency.

  • Fast snapshots and catch‑up paths keep new or lagging nodes from becoming throughput bottlenecks during spikes.
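A minimal sketch of the snapshot and catch-up idea (IceDB's internals are not detailed here, so this only shows the shape): a snapshot hands a new node a point-in-time copy, while a write log lets a lagging node fetch only what it missed.

```python
# Minimal key-value state store with versioned snapshots and catch-up.
# Hypothetical sketch of the idea, not IceDB's actual design.

class StateDB:
    def __init__(self):
        self.state = {}
        self.version = 0
        self.log = []                        # (version, key, value) write log

    def put(self, key, value):
        self.version += 1
        self.state[key] = value
        self.log.append((self.version, key, value))

    def snapshot(self):
        """Point-in-time copy a joining node can download wholesale."""
        return self.version, dict(self.state)

    def catch_up(self, their_version):
        """Only the writes a lagging node missed, not the whole state."""
        return [e for e in self.log if e[0] > their_version]

db = StateDB()
db.put("alice", 10); db.put("bob", 5)
ver, snap = db.snapshot()
db.put("alice", 7)
assert db.catch_up(ver) == [(3, "alice", 7)]
```

The design choice this illustrates: syncing cost is proportional to what a node missed, so a spike in load doesn't force everyone to replay history from genesis.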

Why this outruns typical designs

  • Most chains optimize a single block stream with parallel execution inside that stream. Somnia widens the intake itself, then orders quickly, so the limiting factor shifts from leader bandwidth to aggregate validator bandwidth.

  • End‑to‑end latency stays low because ingestion doesn’t stall for votes, ordering runs on tight ticks, execution is compiled, and network payloads are smaller—so confirmations land in sub‑second windows even at high volume.
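The bandwidth argument reduces to simple arithmetic. The figures below are hypothetical inputs chosen for illustration, not measured Somnia numbers.

```python
# Back-of-envelope intake ceilings; all inputs are hypothetical.
tx_bytes    = 200     # average transaction size on the wire
compression = 0.5     # assumed streaming compression ratio
validators  = 100
link_mbps   = 100     # per-validator uplink

bytes_per_tx = tx_bytes * compression         # 100 bytes after compression
link_Bps     = link_mbps * 1_000_000 / 8      # uplink in bytes per second

# Single-leader design: intake capped by one node's uplink.
leader_bound_tps = link_Bps / bytes_per_tx    # 125,000 tx/s

# Multi-stream design: every validator's uplink ingests in parallel.
aggregate_tps = validators * leader_bound_tps  # 12,500,000 tx/s
```

Ordering still has to keep pace, but the intake ceiling moves from one link to the sum of all links, which is exactly the shift described above.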

Practical implications for apps

  • Real‑time apps like on‑chain games, social feeds, auctions, and orderbooks get consistent low latency while pushing huge transaction counts.

  • Fees remain stable under load because bandwidth per transaction is reduced and ordering isn’t a bottleneck, so TPS scales without gas prices spiking as quickly.

Bottom line: Higher TPS is the result of architectural separation (parallel streams + quick BFT ordering), a faster execution path (compiled EVM on a nanosecond‑class state DB), and lighter networking (streaming compression + signature aggregation). Together, these choices raise sustainable throughput while holding finality under a second.

$SOMI

@Somnia Official #Somnia