@gbaciX has distilled the key points into a detailed summary that helps you quickly grasp Polkadot's future execution-layer ideas!
Title: In-Depth Understanding of JAM and CoreVM
Subtitle: Integrating the advantages of Polkadot and Ethereum
Architecture Visualization
1 Ordinary Computer / CPU
Straight lines represent unobstructed processing and execution flows.
2 Ethereum
Compared to an ordinary computer, Ethereum's blockchain architecture constrains processing and execution in time: computation can only run within a block, and all computation must halt at the end of each block.
3 Polkadot
Polkadot improves on Ethereum, allowing longer computation times and the ability to save state for later use. However, it is still bound by the blockchain paradigm and cannot achieve continuous computation.
4 JAM
Breakthroughs in architecture have overcome the limitations of Polkadot and Ethereum, achieving continuous computation:
On the left is a 1.6 PB data lake
On the right is a state machine - including smart contracts and other processes (stateful, looking like Ethereum)
In the middle are the cores - stateless processing units
The cores can read from and write to the data lake, which serves as an efficient storage layer.
JAM Pipeline
Authorize (stateless): Identify whether a particular transaction/computation has been paid for
Refine (stateless): Perform the valuable computation for the paid items
Accumulate & Transfers (stateful): The blockchain part, such as ledger records
JAM Pipeline: Data
Item: An unsigned transaction. JAM does not require external data to be signed, as it is not a transaction-centric system like Ethereum.
Each item is associated with a service, analogous to a smart contract
Package: A combination of a set of items and a credential/token (analogous to a signature in Ethereum); the token proves that these items have been paid for.
Digest: The output result after computation is executed in the service
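The three data types above can be sketched as simple records. The field names (`service_id`, `payload`, `auth_token`, `output`) are illustrative assumptions for this sketch, not JAM's actual encoding:

```python
from dataclasses import dataclass


@dataclass
class Item:
    """An unsigned work item, addressed to a service (fields are illustrative)."""
    service_id: int   # the service (analogous to a smart contract) this item targets
    payload: bytes    # the computation input; no external signature is required


@dataclass
class Package:
    """A set of items plus a token proving the items were paid for."""
    items: list       # the work items bundled together
    auth_token: bytes # payment proof (plays the role an Ethereum signature would)


@dataclass
class Digest:
    """The output produced by executing an item's computation in its service."""
    service_id: int
    output: bytes
```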
JAM Pipeline: Transformation Process
Authorize: Authorize a work package and generate a trace
Refine: Combine trace and item to generate digest
Join: Classify all digests and traces by service type
Accumulate: Input the aggregated digest + trace into the service code for stateful execution
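As a rough illustration, the four transformation steps can be wired together as below. The function signatures and dictionary shapes are assumptions made for this sketch, not JAM's real interfaces:

```python
from collections import defaultdict


def run_pipeline(package, authorize, refine, accumulate, state):
    """Illustrative JAM-style pipeline flow (all signatures are assumptions)."""
    # Authorize (stateless): check the package was paid for; yields a trace.
    trace = authorize(package)
    # Refine (stateless): per-item computation producing digests.
    digests = [refine(trace, item) for item in package["items"]]
    # Join: group the digests by the service each item targets.
    by_service = defaultdict(list)
    for item, digest in zip(package["items"], digests):
        by_service[item["service"]].append(digest)
    # Accumulate (stateful): run each service's code over its digests.
    for service, service_digests in by_service.items():
        state = accumulate(state, service, service_digests, trace)
    return state
```

For example, with a trivial `refine` that doubles each payload and an `accumulate` that sums digests per service, two items for service "a" and one for "b" end up grouped and accumulated separately.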
Refine Phase Details
On a single item
Can check authorization information / trace
Can view other items in the same package
Can check preimage / call PVM
Each item can read at most 12 MB of data from external sources
Each item can write at most 12 MB of data to the data lake
Each item gets roughly 5 seconds' worth of PVM gas and produces at most 48 KB of digest, which is passed to the accumulate phase
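A minimal sketch of enforcing the per-item limits just listed. The function name and error handling are invented for illustration; real PVM execution is metered in gas rather than checked after the fact:

```python
# Per-item refine-phase caps from the talk (illustrative constants).
MAX_READ = 12 * 1024 * 1024    # 12 MB read from external sources
MAX_WRITE = 12 * 1024 * 1024   # 12 MB written to the data lake
MAX_DIGEST = 48 * 1024         # 48 KB of digest output


def check_refine_budget(bytes_read: int, bytes_written: int, digest: bytes) -> bool:
    """Reject an item whose refine-phase resource use exceeds the stated caps."""
    if bytes_read > MAX_READ:
        raise ValueError("item read more than 12 MB of external data")
    if bytes_written > MAX_WRITE:
        raise ValueError("item wrote more than 12 MB to the data lake")
    if len(digest) > MAX_DIGEST:
        raise ValueError("digest exceeds 48 KB")
    return True
```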
Accumulate Phase Details
For one or more digests belonging to a service
Can check all traces
Each digest can consume up to roughly 10 ms worth of PVM gas
Can perform fund transfers, state changes, and read other service states
Can create services and update code
Execution is asynchronous: fund transfers take effect afterwards (post), while reads of external state see prior values (pre)
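The "transfers are post, reads are pre" semantics can be illustrated with a toy balance model: every read during a round sees the pre-round snapshot, and all transfers land only after the round ends. This is a conceptual sketch, not JAM's actual transfer mechanism:

```python
def accumulate_round(balances, transfers):
    """Toy model of asynchronous accumulate semantics.

    balances:  {service: balance}
    transfers: list of (source, destination, amount)
    """
    snapshot = dict(balances)   # all reads during the round see the PRE-round state
    # A transfer is only queued if the source could afford it pre-round.
    queued = [(s, d, a) for s, d, a in transfers if snapshot[s] >= a]
    new = dict(balances)
    for s, d, a in queued:      # transfers take effect POST-round
        new[s] -= a
        new[d] += a
    return new
```

Note the consequence: a service that receives funds in a round cannot spend them in that same round, because its pre-round balance is what every check sees.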
Key Technology: JAM's Security Mechanism - ELVES
The security of JAM relies on ELVES, a cryptoeconomic mechanism designed for computation cores that store no data: staked validator nodes running on consumer hardware scale out correctness verification, surpassing the 'everyone verifies everything' model.
How ELVES Works:
Validators act as guarantors, vouching for the correctness of reports
Validators submit statelessly verifiable bundles (containing all the data needed to verify a report) to a secure data-availability layer
Once the bundle is available, the auditing process begins
Validators select and check other validators' bundles through VRF (Verifiable Random Function), broadcasting their judgment upon completion
If the bundle audit fails, the submitter and guarantor will be penalized (slashed)
This architecture achieves security approaching that of the 'everyone verifies everything' model.
Compared to ZK scaling mechanisms, ELVES provides more than 2 million times the cost-performance improvement and avoids the centralization and high latency issues of ZK.
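To give a feel for VRF-based auditor selection, here is a toy version that ranks validators by a hash of (seed, bundle, validator). A real VRF output is verifiable and keyed to each validator's secret key; SHA-256 here is only a stand-in to show unpredictable-but-deterministic selection:

```python
import hashlib


def select_auditors(validators, bundle_id, epoch_seed, k):
    """Pick k auditors for a bundle via a hash-based stand-in for a VRF."""
    def score(validator):
        # Deterministic but unpredictable-without-the-seed ranking value.
        h = hashlib.sha256(f"{epoch_seed}:{bundle_id}:{validator}".encode()).digest()
        return int.from_bytes(h[:8], "big")

    # The k lowest-scoring validators audit this bundle.
    return sorted(validators, key=score)[:k]
```

Because the score depends on the bundle and the epoch seed, each bundle gets a different, unbiased auditor subset, yet anyone holding the seed can recompute and verify the selection.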
CoreVM
The first service / operating system on JAM
Can be seen as JAM's Docker
Accompanying Tools:
Testnet: launches a local test network with 6 nodes and 2 cores
jamtop: a CLI monitoring tool in the style of the Unix top CPU monitor
jamt: a general-purpose CLI tool for creating and interacting with services, VMs, and authorizers
CoreVM Player: monitors the output streams (audio, video, console) of CoreVM services
CoreVM Builder: analyzes service status and RAM page requirements, and constructs work packages to advance execution
CoreVM allows JAM to simulate a machine:
Close to native x64 execution speed
RISC-V compatible
Paged memory
Continuous execution
Messaging support
In summary, CoreVM can run ordinary software compiled with conventional languages and abstractions.
Service Architecture: Logical Division of Labor Between Refine and Accumulate
Refine:
Correctness is guaranteed
Code is fixed in advance
Cannot access storage
Input is provided by the builder (i.e., the transaction builder)
The builder can be vetted by the authorizer
Accumulate:
Correctness is guaranteed
Code is fixed in advance
Input is guaranteed by the digests from the refine phase
Can access storage
Storage Architecture: Data Lake vs Service Storage
Data Lake:
Accessible only in the refine phase
Data cannot be named
Data can only be referenced by cryptographic hash
High throughput (628 MB/s)
Service Storage:
Accessible only in the accumulate phase
Data can be named, using the service UUID as a namespace
Lower throughput (about 64 MB/s)
Therefore:
Refine should do its reads and writes against the data lake wherever possible
Data-lake references are passed to accumulate, which then writes to service storage
Service storage is used to track program states
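The recommended pattern, bulk data into the hash-addressed data lake during refine and only small named references into service storage during accumulate, can be sketched as follows. The two dictionaries and both function names are illustrative stand-ins, not real JAM host calls:

```python
import hashlib

DATA_LAKE = {}      # high-throughput, hash-addressed store (refine phase only)
SERVICE_STORE = {}  # named, lower-throughput store (accumulate phase only)


def refine_write(blob: bytes) -> str:
    """Refine phase: write bulk data to the data lake; return its hash reference."""
    ref = hashlib.sha256(blob).hexdigest()
    DATA_LAKE[ref] = blob
    return ref  # only this small reference travels in the digest, not the blob


def accumulate_record(service_id: int, key: str, ref: str) -> None:
    """Accumulate phase: store just the named reference in service storage."""
    SERVICE_STORE[(service_id, key)] = ref
```

This keeps the heavy traffic on the fast path (the data lake) while service storage tracks only the program state needed to find the data again.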
Quake Demonstration
After the speech, Gavin demonstrated the performance of Quake running on JAM + CoreVM.
To achieve this, JAM made some changes:
Ported musl (a lightweight C standard library) to run on PVM
Created better schedulers and memory managers
Built a 'virtual hard drive' to simulate a file system