Asentum

Concepts

Architecture

The whole stack at a glance · Estimated read time: 8 minutes

TL;DR

Asentum is five layers stacked on JavaScript: HTTP peer fan-out at the bottom, Tendermint-style BFT consensus on top of that, a Sparse Merkle Tree for state, a Hardened JavaScript sandbox for contract execution, and a JSON-RPC interface at the top. Every layer is JavaScript, every layer uses post-quantum cryptography for signatures, and the whole thing runs as a single Node.js process that fits on a Raspberry Pi 4.

The stack

┌─────────────────────────────────────────────┐
│  JSON-RPC (HTTP + WebSocket)                │
├─────────────────────────────────────────────┤
│  Hardened JavaScript VM (SES Compartments)  │
├─────────────────────────────────────────────┤
│  Sparse Merkle Tree state · LevelDB         │
├─────────────────────────────────────────────┤
│  Tendermint-style BFT · ML-DSA-65 sigs      │
├─────────────────────────────────────────────┤
│  HTTP peer fan-out · TLS · static peer list │
└─────────────────────────────────────────────┘

Each layer is small enough to read in an afternoon. None of the layers depend on C++ native modules — the whole chain runs on pure Node.js with a handful of WASM crypto primitives for hot-path hashing and signing.

Networking — HTTP fan-out

Peer networking runs over plain HTTP, using the same JSON-RPC endpoint nodes serve to clients. Block sync is pull-based: replicas poll a configured peer's /block-raw/:n endpoint and fetch any missing blocks. Consensus votes are push-based: when a validator emits a pre-vote or pre-commit, it POSTs to the /consensus/vote endpoint on every peer in its static ASENTUM_PEERS list. De-duplication at the receiving engine prevents relay loops.
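The pull-based half of this can be sketched in a few lines. This is an illustrative sketch, not the node's real code: `syncFromPeer`, its parameters, and the injected `fetchBlock`/`applyBlock` callbacks are hypothetical names; only the `/block-raw/:n` endpoint comes from the description above.

```javascript
// Sketch of pull-based block sync: fetch every block the replica is
// missing from one configured peer, oldest first. fetchBlock would wrap
// a GET of the peer's /block-raw/:n endpoint; applyBlock would validate
// and commit the block locally. Both are injected here for clarity.
async function syncFromPeer(peer, localHeight, peerHeight, fetchBlock, applyBlock) {
  for (let n = localHeight + 1; n <= peerHeight; n++) {
    const block = await fetchBlock(peer, n); // e.g. GET `${peer}/block-raw/${n}`
    await applyBlock(block);                 // validate + commit locally
  }
  return peerHeight; // new local height after catch-up
}
```

A real `fetchBlock` would be little more than `fetch(`${peer}/block-raw/${n}`)` plus error handling; the push-based vote path is the same idea in reverse, a POST per peer in the static list.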

This is deliberately boring. It works across any NAT, any firewall, any cloud network, with no mesh negotiation, no peer discovery, no idle timeouts. It is sized for the current 5–10 validator testnet, not a 10,000-node public network. A proper libp2p transport is on the roadmap for when it's needed; until then, HTTP fan-out removes an entire class of operational failure.

Consensus — BFT

Blocks are finalized by a Tendermint-style BFT committee of up to ~100 validators — propose → pre-vote → pre-commit → finality, with more than 2/3 of the committee's voting power required at each phase. Finality is instant and there are no reorgs.

Every vote is signed with ML-DSA-65 (Dilithium3). Each signature is ~3.3 KB, so roughly 100 signatures per block (about 330 KB of vote data) is the load-bearing figure that caps committee size.
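The quorum rule at each phase is a single inequality. A minimal sketch, assuming an illustrative validator/vote shape (the real engine's types are not shown here):

```javascript
// Sketch of the >2/3 voting-power quorum check applied at each BFT phase.
// Validators carry a voting power; a phase passes only when the distinct
// voters seen so far hold strictly more than 2/3 of the total power.
function hasQuorum(validators, votes) {
  const total = validators.reduce((sum, v) => sum + v.power, 0);
  const powerByAddr = new Map(validators.map((v) => [v.addr, v.power]));
  let voted = 0;
  for (const addr of new Set(votes.map((v) => v.addr))) {
    voted += powerByAddr.get(addr) ?? 0; // unknown voters contribute nothing
  }
  return voted * 3 > total * 2; // strictly more than 2/3, in integer math
}
```

The `Set` is the de-duplication step: a validator that re-broadcasts its vote counts once, which is also what lets the HTTP fan-out layer relay freely.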

State — Sparse Merkle Tree

Account balances, nonces, contract code, and contract storage all live in a single Sparse Merkle Tree keyed by hash. The root of the tree after every block is part of the block header, so every validator agrees on exactly the same state at every height.

Underneath, we persist to LevelDB — the same storage engine many Ethereum clients have used. State is pruned opportunistically on full nodes; archive mode keeps every historical state root queryable.

Execution — Hardened JavaScript

Contract bodies are plain JavaScript source. When a transaction calls a method, the VM evaluates the source inside a fresh SES Compartment and invokes the named function. Storage, events, and cross-contract calls are injected as globals; everything that could break determinism is removed.

Gas metering runs on a simple cost model — per-tx-kind base cost, plus per-storage-write, per-event-byte, and a cold-load surcharge for large modules. See the fee market.
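The cost model above is additive, so it can be sketched as one function. All constants and field names here are placeholders for illustration, not the chain's real fee schedule:

```javascript
// Sketch of the additive gas model: per-tx-kind base cost, plus
// per-storage-write, per-event-byte, and a cold-load surcharge for
// large contract modules. Every number below is a placeholder.
const GAS = {
  base: { transfer: 21_000, call: 50_000, deploy: 120_000 },
  perStorageWrite: 5_000,
  perEventByte: 8,
  coldLoadPerKB: 100,      // surcharge per KB of module loaded cold
  coldLoadThresholdKB: 16, // modules at or under this size pay no surcharge
};

function estimateGas(tx) {
  let gas = GAS.base[tx.kind] ?? GAS.base.call;
  gas += (tx.storageWrites ?? 0) * GAS.perStorageWrite;
  gas += (tx.eventBytes ?? 0) * GAS.perEventByte;
  const kb = Math.ceil((tx.moduleBytes ?? 0) / 1024);
  if (kb > GAS.coldLoadThresholdKB) {
    gas += (kb - GAS.coldLoadThresholdKB) * GAS.coldLoadPerKB;
  }
  return gas;
}
```

The appeal of a model this flat is that wallets can estimate fees exactly, with no simulation pass.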

Interface — JSON-RPC

At the top of the stack sits a JSON-RPC 2.0 interface deliberately shaped like Ethereum's. Most eth_* methods work unchanged — MetaMask, ethers.js, viem, and every Ethereum-ecosystem explorer can read Asentum without modification.
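Because the surface is standard JSON-RPC 2.0 with `eth_*` method names, a client needs no Asentum-specific tooling to read the chain. A minimal sketch (the node URL is a placeholder; `eth_blockNumber` is a standard Ethereum method):

```javascript
// Sketch of a bare JSON-RPC 2.0 call against a node's HTTP endpoint,
// using Node's built-in fetch. Any Ethereum-shaped read method works
// the same way.
async function rpc(url, method, params = []) {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ jsonrpc: '2.0', id: 1, method, params }),
  });
  const { result, error } = await res.json();
  if (error) throw new Error(error.message);
  return result;
}

// e.g. const height = await rpc('http://localhost:8545', 'eth_blockNumber');
```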

Signing is the one place compatibility breaks: MetaMask produces ECDSA, Asentum requires Dilithium3. For that, use the Asentum Wallet extension or the SDK.

How a transaction flows

  1. A wallet builds an SSZ-encoded transaction and signs it with Dilithium3.
  2. The wallet POSTs an eth_sendRawTransaction request to any full node.
  3. The node validates the signature, nonce, balance, and gas, then fans the tx out over HTTP to every configured peer.
  4. The current proposer picks mempool txs into a candidate block and broadcasts it to the committee.
  5. Committee members pre-vote, pre-commit, and finalize the block — all signed with Dilithium3 and broadcast to peers over HTTP.
  6. The VM executes each tx in the finalized block, updating the SMT state.
  7. Receipts, events, and the new state root are committed to LevelDB and streamed over JSON-RPC.
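Step 3's admission checks can be sketched in one function. The `tx`/`account` shapes and the injected `verifySignature` are hypothetical stand-ins for the node's real types:

```javascript
// Sketch of mempool admission (step 3): signature, nonce, and balance
// checks before a tx is fanned out to peers. The balance check covers
// the worst case: value plus the full gas limit at the quoted price.
function admitToMempool(tx, account, verifySignature) {
  if (!verifySignature(tx)) return { ok: false, reason: 'bad Dilithium3 signature' };
  if (tx.nonce !== account.nonce) return { ok: false, reason: 'nonce mismatch' };
  const maxCost = tx.value + tx.gasLimit * tx.gasPrice;
  if (account.balance < maxCost) return { ok: false, reason: 'insufficient balance' };
  return { ok: true };
}
```

Everything after admission is the consensus and execution layers' job; a node that admits a tx has only promised it is well-formed and fundable, not that it will succeed.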

Read next