Scaling has always been an unavoidable topic for Ethereum. If Ethereum wants to become a true "world computer", it needs scalability, security, and decentralization at the same time. Achieving all three at once, however, is known in the industry as the "blockchain impossible triangle" (the trilemma), and it remains an unsolved problem for the entire industry.
But the new Ethereum sharding solution Danksharding, proposed by Ethereum researcher and developer Dankrad Feist at the end of 2021, appears to offer a revolutionary answer to the "blockchain impossible triangle" and may even rewrite the rules of the game for the entire industry.
This research report tries to explain in plain language what Ethereum's new sharding solution Danksharding is and the background behind it. The reason for writing it is that there are very few Chinese articles about Danksharding, and most of them assume a lot of prior knowledge. Spinach will therefore break down the complex principles behind it and use simple language, so that even a Web3 beginner can understand Ethereum's new sharding solution Danksharding and its predecessor EIP-4844.
Author: Spinach Spinach
Word count: This research report is over 10,000 words, and the estimated reading time is 21 minutes
Table of contents
Why does Ethereum need to expand?
Ethereum Scaling Background
What is the blockchain impossible triangle?
What are the current scaling solutions for Ethereum?
Ethereum's initial sharding solution Sharding 1.0
How does Ethereum's POS consensus mechanism work?
What is the initial sharding solution Sharding1.0?
What are the disadvantages of the initial sharding solution Sharding1.0?
What is Danksharding, Ethereum’s new sharding solution?
Preliminary solution EIP-4844: Proto-Danksharding — New transaction type Blob
Danksharding — Complete Scaling Solution
Data Availability Sampling
Erasure Coding
KZG Commitment
Proposer/Builder Separation
Censorship-Resistant List (crList)
Two-Slot PBS (Two-Slot Proposer-Builder Separation)
Summary
References
Why does Ethereum need to expand?
After Ethereum founder Vitalik Buterin published the Ethereum white paper "A Next-Generation Smart Contract and Decentralized Application Platform" in 2014, blockchain ushered in a new era. The birth of smart contracts allows people to create decentralized applications (DApps) on Ethereum, and it also brought a series of innovations to the blockchain ecosystem, such as NFT, DeFi, and GameFi.
Ethereum Scaling Background
As the Ethereum ecosystem grows and more and more people use Ethereum, its performance problems have begun to show. When many people interact on Ethereum at the same time, the blockchain becomes "congested". It is like a road with fixed traffic-light timing: a moderate number of vehicles causes no jam, but during peak hours far more cars enter the road than can leave it on each green light, and everyone's travel time gets stretched. The same happens on the blockchain: the confirmation time of everyone's interaction requests is extended.
On a blockchain, though, it is not just time that gets stretched; gas fees also spike (a gas fee can be understood as the payment to miners, who are responsible for packaging and processing all transactions on the blockchain). Because miners prioritize the highest-bidding transactions, everyone raises their gas fee to get faster confirmation, triggering a "gas war". A famous example is the CryptoKitties NFT craze in 2017, which drove gas fees up dramatically; during heavy congestion, a single interaction on Ethereum can cost tens or even hundreds of dollars in gas.
The main reason gas fees get so expensive is that Ethereum's performance can no longer meet the interaction demands of its existing users. In terms of throughput, Ethereum differs from Bitcoin: since Bitcoin is just a simple ledger processing transfers, its TPS is roughly fixed at about 7 transactions per second, but Ethereum works differently.
Because Ethereum has smart contracts, the content of each transaction is different, so how many transactions a block can process (TPS) depends on how much data the block contains, and the data in each transaction is determined by real-time demand. The following points about Ethereum's performance mechanism are worth knowing (they will help in understanding Danksharding later):
Ethereum caps the amount of data in a block by gas: a block can carry at most 30 million gas worth of data.
Ethereum does not want every block to be that large, so each block has a Gas Target of 15 million gas.
Ethereum has a set of gas-consumption rules: different types of data consume different amounts of gas. In practice, blocks range from roughly 5KB to 160KB, with an average size of about 60-70KB.
Once a block's gas consumption exceeds the gas target of 15 million gas, the base fee of the next block rises by up to 12.5%; if it is below the target, the base fee falls. This automated dynamic adjustment raises costs to ease congestion during peak hours and lowers them to attract more transactions during quiet hours (see the code sketch below).
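As a rough illustration of this adjustment rule, here is a minimal sketch in Python (simplified from the EIP-1559 mechanism; the values and starting fee are made up for illustration):

```python
# Minimal sketch of the EIP-1559-style base fee adjustment (illustrative only).
GAS_TARGET = 15_000_000          # per-block gas target
GAS_LIMIT = 30_000_000           # per-block gas cap
MAX_CHANGE = 1 / 8               # at most +/-12.5% per block

def next_base_fee(base_fee: float, gas_used: int) -> float:
    """Raise the base fee when a block is above target, lower it when below."""
    deviation = (gas_used - GAS_TARGET) / GAS_TARGET   # ranges from -1.0 to +1.0
    return base_fee * (1 + MAX_CHANGE * deviation)

fee = 20.0  # gwei, arbitrary starting point
for gas_used in (GAS_LIMIT, GAS_LIMIT, GAS_TARGET, 5_000_000):
    fee = next_base_fee(fee, gas_used)
    print(f"gas used {gas_used:>10,} -> next base fee {fee:.2f} gwei")
# A completely full block pushes the next base fee up by the full 12.5%;
# a block exactly at target leaves it unchanged; an under-target block lowers it.
```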
From the mechanism above, we can see that Ethereum's TPS floats. We can use a blockchain explorer to look at the number of transactions in each block and estimate TPS. As the figure below shows, a block at the Gas Target contains roughly 160 transactions on average, and the busiest blocks exceed 300. With a 12-second block time, that works out to roughly 13 to 30 TPS, and the commonly cited ceiling for Ethereum today is about 45 transactions per second.

Image source: Mainnet | Beacon Chain Explorer (Phase 0) for Ethereum 2.0 – BeaconScan
Compare that with VISA, a globally used payment system that can process tens of thousands of transactions per second: Ethereum, which aspires to be the "world computer", can handle only about 45 transactions per second. Ethereum therefore urgently needs to scale, and its future depends on it. But scaling is not easy, because of the industry's "impossible triangle".
What is the blockchain impossible triangle?
The "Blockchain Impossible Triangle" refers to the fact that a public blockchain cannot simultaneously meet three characteristics: decentralization, security, and scalability.
Decentralization: refers to the degree of decentralization of nodes. The more nodes there are, the more decentralized they are.
Security: refers to the security of the entire blockchain network. The higher the attack cost, the safer it is.
Scalability: refers to the performance of blockchain in processing transactions. The more transactions that can be processed per second, the more scalable it is.
If we look at the importance of these three points, we will find that decentralization and security are the most important. Decentralization is the cornerstone of Ethereum. It is decentralization that gives Ethereum neutrality, anti-censorship, openness, data ownership and nearly unbreakable security. The importance of security is naturally self-evident, but Ethereum's vision is to achieve scalability under the premise of decentralization and security. The difficulty of achieving this can be imagined, so this is also called the "blockchain impossible triangle."

Image source: Ethereum Vision | ethereum.org
What are the current scaling solutions for Ethereum?
We know that within the "blockchain impossible triangle", Ethereum can only scale on the premise of preserving decentralization and security. To preserve them, scaling must not raise the hardware requirements for nodes too much: nodes are indispensable for maintaining the Ethereum network, and demanding nodes would prevent more people from running one, making the network increasingly centralized. The lower the threshold for running a node, the more people can participate, and the more decentralized and secure Ethereum becomes.
There are currently two scaling directions for Ethereum: Layer2 and sharding. Layer2 is an off-chain scaling solution for the underlying blockchain (Layer1); the principle is to execute requests off-chain instead of on the blockchain. There are several Layer2 approaches, but this research report focuses on one of them, Rollup: a Rollup bundles ("rolls up") hundreds of off-chain transactions into a single transaction and posts it to Ethereum. The cost of posting to Ethereum is shared across all of those transactions, so it becomes very cheap for each user, while the security of Ethereum is inherited.
Rollups currently come in two types: Optimistic Rollup and ZK Rollup (zero-knowledge-proof Rollup). The difference, put simply, is that an Optimistic Rollup assumes all transactions are honest, compresses many transactions into one, and submits it to Ethereum. After submission there is a challenge period (currently about one week) during which anyone can dispute a transaction and challenge its validity. However, a user who wants to withdraw ETH from an Optimistic Rollup back to Ethereum has to wait until the challenge period ends for final confirmation.
A ZK Rollup instead generates a zero-knowledge proof that all transactions are valid and uploads the final state changes after execution to Ethereum. Compared with Optimistic Rollups, ZK Rollups are considered more promising: they do not need to upload all of the compressed transaction details, only a zero-knowledge proof plus the final state-change data, so they can compress more data, and there is no week-long challenge period to wait through. Their biggest disadvantage is that they are extremely difficult to develop, so in the short term Optimistic Rollups will hold a large share of the L2 market.
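As a rough illustration of the "roll up many transactions into one" idea, here is a conceptual sketch (the function and field names are made up for illustration; real rollups use far more sophisticated compression plus either fraud proofs or validity proofs):

```python
import hashlib
import json

def batch_transactions(txs: list[dict]) -> dict:
    """Conceptual sketch: bundle many L2 transactions into one L1 submission."""
    payload = json.dumps(txs, separators=(",", ":")).encode()
    return {
        "tx_count": len(txs),                                      # 300 L2 transactions...
        "payload_bytes": len(payload),                             # ...posted as one batch
        "batch_commitment": hashlib.sha256(payload).hexdigest(),   # commitment to the batch
    }

txs = [{"from": f"user{i}", "to": "dex", "amount": i} for i in range(300)]
print(batch_transactions(txs))
# One L1 transaction now carries 300 users' activity, so the L1 cost is amortized
# across all of them; the rollup separately proves (optimistically or with ZK)
# that the resulting state transition is correct.
```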
Besides Layer2, the other scaling solution is sharding, the protagonist of this article. Layer2 moves the execution of Ethereum transactions off-chain, but no matter how Layer2 processes data, the performance of Ethereum itself stays the same, so the scaling effect Layer2 alone can achieve is ultimately limited.
Sharding is to achieve expansion at the Layer 1 level of Ethereum, but we know that the premise of achieving expansion on Ethereum is to ensure the decentralization and security of Ethereum, so we cannot increase the burden on the nodes too much.
The specific implementation of sharding has long been a topic of discussion in the Ethereum community, and the latest plan is Danksharding, the subject of this article. Before getting to Danksharding, let us briefly look at what the old sharding plan was and why it was not adopted.
Ethereum's initial sharding solution Sharding 1.0
Before discussing the Sharding 1.0 solution, I need to first introduce how Ethereum's current POS consensus mechanism works, because it is prerequisite knowledge for understanding both Sharding 1.0 and Danksharding. A rough understanding of what each role does is enough.
How does Ethereum’s POS consensus mechanism work? [7]
A consensus mechanism is the system that lets all nodes in a blockchain reach agreement, and its importance is self-evident. On September 15, 2022, Ethereum completed "The Merge" phase of the Ethereum 2.0 upgrade: the proof-of-work (POW) Ethereum mainnet was merged with the proof-of-stake (POS) Beacon Chain, and proof of stake officially replaced proof of work as Ethereum's consensus mechanism.
Under POW, miners compete for the right to produce blocks by stacking up computing power. Under POS, participants instead stake 32 ETH to become Ethereum validators and compete for the right to produce blocks (the staking process is not covered in detail here).
In addition to the change of consensus mechanism, Ethereum's block time changed from a floating interval to a fixed schedule measured in two units, the slot and the epoch: a slot is 12 seconds and an epoch is 6.4 minutes. An epoch contains 32 slots, meaning one block is produced every 12 seconds and 32 blocks are produced per 6.4-minute epoch.
When a validator has staked 32 ETH, the Beacon Chain uses a random algorithm to select one validator as the block proposer for each slot. At the same time, in every epoch the Beacon Chain evenly and randomly assigns all validators to "committees" of at least 128 validators for each slot.
In other words, each slot is assigned roughly 1/32 of all validators. These committees verify and vote on the block packaged by that slot's proposer, and a block is successfully produced when more than two-thirds of the committee's validators vote in favor.
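A minimal sketch of the timing and committee arithmetic described above (the validator count and the shuffling here are toy stand-ins; the real Beacon Chain derives its shuffle from RANDAO randomness):

```python
import random

SECONDS_PER_SLOT = 12
SLOTS_PER_EPOCH = 32              # one epoch = 32 * 12s = 6.4 minutes
MIN_COMMITTEE_SIZE = 128

def assign_committees(validators: list[int], seed: int) -> list[list[int]]:
    """Shuffle the whole validator set once per epoch and split it across the 32 slots."""
    rng = random.Random(seed)
    shuffled = validators[:]
    rng.shuffle(shuffled)
    per_slot = len(shuffled) // SLOTS_PER_EPOCH
    return [shuffled[i * per_slot:(i + 1) * per_slot] for i in range(SLOTS_PER_EPOCH)]

validators = list(range(16_384))                  # toy validator set
committee = assign_committees(validators, seed=2022)[0]
votes_in_favor = int(len(committee) * 0.70)       # suppose 70% of the committee attests

print(f"epoch length: {SLOTS_PER_EPOCH * SECONDS_PER_SLOT / 60:.1f} minutes")
print(f"validators per slot: {len(committee)} (minimum committee size is {MIN_COMMITTEE_SIZE})")
print("block accepted:", 3 * votes_in_favor >= 2 * len(committee))   # two-thirds threshold
```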

What is the initial sharding solution Sharding 1.0? [7]
In the design of the initial sharding solution, Sharding 1.0, Ethereum was to be split from a single main chain into up to 64 shard chains, achieving scaling by adding many new chains. In this design, each shard chain processes its own share of Ethereum's data and hands it over to the Beacon Chain, which coordinates the whole of Ethereum; the block proposers and committees of each shard chain are randomly assigned by the Beacon Chain.

The Beacon Chain and the shard chains are linked through crosslinks: the hash of a shard block is recorded in the beacon block of the same slot, and if it misses that beacon block it is included in a later one, so that the Beacon Chain anchors every shard chain's data.

What are the disadvantages of the initial sharding solution Sharding1.0?
In simple terms, the sharding 1.0 solution is to cut Ethereum into many shard chains to process data together and then hand over the data to the beacon chain to achieve capacity expansion, but this solution has many disadvantages:
**Development difficulty:** Splitting Ethereum into 64 shard chains while keeping everything running normally is technically very hard to achieve, and the more complex the system, the more likely it is to contain unforeseen vulnerabilities; once a problem occurs, it is very troublesome to repair.
**Data synchronization problem:** The beacon chain will reshuffle the "committees" responsible for verification every Epoch. Therefore, each reassignment of verification nodes is a large-scale network data synchronization, because if a node is assigned to a new shard chain, it needs to synchronize the data of this shard chain. Since the performance bandwidth of nodes varies, it is difficult to ensure that the synchronization is completed within the specified time. However, if the node is allowed to directly synchronize the data of all shard chains, it will greatly increase the burden on the node, which will make Ethereum more and more centralized. [2]
**Data volume growth problem:** Although Ethereum's processing speed would increase a lot, having multiple shard chains process data simultaneously also means the amount of stored data grows much faster than before, and the storage requirements for nodes would keep rising, leading to more centralization.
**Unable to solve the MEV problem:** Maximum extractable value (MEV) refers to the maximum value that can be extracted from block production beyond the standard block reward and gas fees by adding and excluding transactions in the block and changing the order of transactions in the block. After a transaction is initiated in Ethereum, the transaction will be placed in the mempool (a pool that stores pending transactions) waiting to be packaged by miners. Then miners can see all transactions in the mempool, and miners have great power. Miners control the inclusion, exclusion, and order of transactions. If someone makes a profit by paying more gas fees to bribe miners to adjust the order of transactions in the transaction pool, this is a maximum extractable value MEV. [6]
for example:
One MEV technique is the "sandwich attack" (also called a "pincer attack"). It works by monitoring large DEX trades on-chain. Suppose someone wants to buy $1 million of an altcoin on Uniswap; a trade that size will push the altcoin's price up considerably. When the transaction enters the mempool, a monitoring bot detects it, bribes the miner packaging the block to place the bot's own buy order just before the victim's trade, and places a sell order just after it, sandwiching the large trader in the middle. The attacker pockets the price impact created by the victim's large trade, while the victim ends up worse off. [6]
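To see where the sandwich profit comes from, here is a toy constant-product AMM simulation (a sketch with made-up pool sizes and trade amounts; real attacks also have to account for pool fees, slippage limits, and gas costs):

```python
def buy_token(pool_usd: float, pool_token: float, usd_in: float):
    """Constant-product AMM (x * y = k): spend usd_in, receive tokens."""
    k = pool_usd * pool_token
    new_usd = pool_usd + usd_in
    new_token = k / new_usd
    return new_usd, new_token, pool_token - new_token      # tokens received

def sell_token(pool_usd: float, pool_token: float, token_in: float):
    """Sell token_in back into the pool, receive USD."""
    k = pool_usd * pool_token
    new_token = pool_token + token_in
    new_usd = k / new_token
    return new_usd, new_token, pool_usd - new_usd          # USD received

usd, tok = 10_000_000.0, 10_000_000.0          # toy pool: price starts at $1

usd, tok, attacker_tokens = buy_token(usd, tok, 100_000)        # 1) attacker front-runs
usd, tok, _ = buy_token(usd, tok, 1_000_000)                    # 2) victim's $1M buy moves the price
usd, tok, attacker_usd = sell_token(usd, tok, attacker_tokens)  # 3) attacker back-runs and sells

print(f"attacker profit ~ ${attacker_usd - 100_000:,.0f}")      # roughly $20k on these toy numbers
```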
The existence of MEV has brought a series of negative effects to Ethereum: the losses and worse user experience caused by sandwich attacks, network congestion from front-running competition, high gas fees, and even node-centralization problems. Nodes that capture more MEV can keep expanding their share of the network through that income, because more income = more ETH = more staked weight. The high costs MEV imposes (congestion and high gas from front-running) also keep driving users away from Ethereum, and if the value of MEV significantly exceeds the block reward, it can even destabilize Ethereum's consensus and security. The Sharding 1.0 solution does nothing to solve this series of MEV problems.
What is Danksharding, Ethereum's new sharding solution?
After Ethereum researcher and developer Dankrad Feist proposed the new sharding solution Danksharding at the end of 2021, it was widely recognized by the Ethereum community as the best path to sharding-based scaling, and it may even bring a new revolution to Ethereum.
Danksharding uses a new sharding idea to solve Ethereum's scalability problem, namely a sharding solution based on Layer2's Rollup. This new sharding solution can solve the scalability problem without significantly increasing the node burden and ensuring decentralization and security, while also solving the negative impact of MEV.
We can see from the figure below that the goals of the next Ethereum upgrade stages "The Surge" and "The Scourge" are: to achieve 100,000+ TPS in Rollup and to avoid centralization brought by MEV and other protocol risks.

Image source: vitalik.eth; translation: ethereum.cn
So how does Danksharding solve the Ethereum expansion problem? Let’s start with Danksharding’s predecessor EIP-4844: Proto-Danksharding.
Preliminary solution EIP-4844: Proto-Danksharding — New transaction type Blob
EIP-4844 introduces a new transaction type to Ethereum, the Blob Transaction. This new transaction type, Blob, provides Ethereum with an additional, plug-in data store:
The size of a Blob is approximately 128KB
A transaction can carry up to two blobs - 256KB
Each block targets 8 Blobs (about 1MB) and can carry at most 16 Blobs (about 2MB) (the concept of a target was covered in the scaling background above)
Blob data is stored temporarily and will be cleared after a period of time (currently the community recommends 30 days)

Currently, the average Ethereum block is only about 85KB, so the extra storage space Blobs bring to Ethereum is huge. For perspective, the entire Ethereum ledger accumulated since genesis is only about 1TB, while Blobs can add roughly 2.5TB to 5TB of data to Ethereum every year, several times the size of the whole existing ledger.
The Blob transaction introduced by EIP-4844 can be said to be tailor-made for Rollup. The Rollup data is uploaded to Ethereum in the form of Blob. The additional data space enables Rollup to achieve higher TPS and lower costs, while also releasing the block space originally occupied by Rollup to more users.
Since Blob data is stored only temporarily, the surge in data volume does not put an ever-growing burden on node storage. If only a month's worth of Blob data is kept, then in terms of synchronization each block adds roughly 1MB to 2MB of downloads, which is not a real burden on a node's bandwidth; in terms of storage, a node only needs to keep a fixed rolling window of roughly 200GB to 400GB (one month of data). Decentralization and security are preserved, the added node burden is small, and the gains in TPS and cost reduction are measured in multiples of tens or even hundreds, which makes this an excellent answer to Ethereum's scalability problem.
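These storage figures can be checked with some back-of-the-envelope arithmetic (a sketch using the target and maximum blob capacities quoted in this article, not final protocol constants):

```python
SECONDS_PER_SLOT = 12
BLOCKS_PER_DAY = 24 * 3600 // SECONDS_PER_SLOT    # 7,200 blocks per day
TARGET_MB_PER_BLOCK = 1     # target: 8 blobs * 128KB, about 1MB
MAX_MB_PER_BLOCK = 2        # cap:   16 blobs * 128KB, about 2MB

def thirty_day_gb(mb_per_block: float) -> float:
    return mb_per_block * BLOCKS_PER_DAY * 30 / 1000

def yearly_tb(mb_per_block: float) -> float:
    return mb_per_block * BLOCKS_PER_DAY * 365 / 1_000_000

print(f"30-day retention: {thirty_day_gb(TARGET_MB_PER_BLOCK):.0f} GB "
      f"to {thirty_day_gb(MAX_MB_PER_BLOCK):.0f} GB")       # ~216 GB to ~432 GB
print(f"yearly blob data: {yearly_tb(TARGET_MB_PER_BLOCK):.1f} TB "
      f"to {yearly_tb(MAX_MB_PER_BLOCK):.1f} TB")            # ~2.6 TB to ~5.3 TB
```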
What if the data is cleared and the user wants to access the previous data?
First of all, the purpose of the Ethereum consensus protocol is not to ensure that all historical data is stored forever. Instead, its purpose is to provide a highly secure real-time bulletin board and leave long-term storage space for other decentralized protocols. The bulletin board exists to ensure that the data posted on the bulletin board stays there long enough so that any user or protocol that wants this data has enough time to grab the data and save it. Therefore, the responsibility of saving this Blob data is given to other roles such as Layer2 project parties, decentralized storage protocols, etc. [3]
Danksharding — Complete Scaling Solution
EIP-4844 takes the first step of Ethereum's Rollup-centric scaling, but the scaling it achieves is still far from enough. The complete Danksharding solution further expands the Blob capacity from 1-2MB per block to 16MB-32MB and introduces a new mechanism, Proposer-Builder Separation (PBS), to address the problems caused by MEV.
Then we need to know what difficulties will arise if we continue to expand capacity based on EIP-4844:
**Nodes are overburdened:** The 1-2MB of extra Blob data in EIP-4844 is an entirely acceptable extra burden for nodes, but if the Blob capacity grows sixteenfold to 16-32MB, the burden of both data synchronization and data storage would overload nodes and reduce Ethereum's decentralization.
**Data availability problem:** If nodes do not download all of the Blob data, a data availability problem arises, because the data is then no longer openly accessible on-chain at any time. For example, suppose an Ethereum node doubts a transaction on an Optimistic Rollup and wants to challenge it, but the Rollup operator does not hand over the data; without the original data, the node cannot prove the transaction is faulty. Solving data availability therefore means ensuring the data is published and accessible at any time.
So how does Danksharding solve these problems?
Data Availability Sampling
Danksharding proposes Data Availability Sampling (DAS) to reduce the node burden while still guaranteeing data availability.
The idea of Data Availability Sampling (DAS) is to cut the data in a Blob into fragments and have each node randomly spot-check Blob fragments instead of downloading whole Blobs. The fragments of a Blob are thus scattered across Ethereum's nodes, while the complete Blob data still exists across the Ethereum network as a whole, provided there are enough nodes and they are sufficiently decentralized.
For example, suppose a Blob's data is cut into 10 fragments and the network has 100 nodes. Each node randomly selects and downloads one fragment and reports which fragment number it holds to the block. As long as all fragment numbers can be collected within a block, Ethereum assumes the Blob's data is available, since the original data can be reconstructed by piecing the fragments together. There is an extremely small probability that none of the 100 nodes draws a particular fragment, in which case that data would be missing; this reduces security slightly, but the probability is acceptably low.
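That "extremely small probability" is easy to quantify for this toy example (a sketch; the real DAS design has each node sample many fragments and relies on erasure coding, so the real failure probability is far lower still):

```python
# Each of the 100 nodes independently downloads one of the 10 fragments at random.
NUM_FRAGMENTS = 10
NUM_NODES = 100

# Probability that one specific fragment is never picked by any node: (9/10)^100
p_one_missed = (1 - 1 / NUM_FRAGMENTS) ** NUM_NODES
# Union bound over all fragments gives an upper estimate for "some fragment is missing".
p_any_missed = NUM_FRAGMENTS * p_one_missed

print(f"P(a given fragment is never sampled)  ~ {p_one_missed:.2e}")   # ~2.7e-05
print(f"P(any fragment is never sampled)     <= {p_any_missed:.2e}")   # ~2.7e-04
```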

Danksharding uses two technologies to achieve Data Availability Sampling (DAS): Erasure Coding and KZG Commitment
Erasure Coding
Erasure coding is a fault-tolerant encoding technique. Splitting data with erasure coding lets Ethereum nodes reconstruct the original data as long as at least 50% of the fragments are available, which greatly reduces the probability of data loss. The full implementation is fairly involved, so here is a small mathematical example that roughly illustrates the principle: [2]
First, construct a linear function f(x) = ax + b and evaluate it at four points, x = 0, 1, 2, 3
Let m = f(0) = b and n = f(1) = a + b, so b = m and a = n - m
Let p = f(2) and q = f(3); then p = 2a + b = 2n - m and q = 3a + b = 3n - 2m
The four fragments m, n, p, and q are then scattered among the nodes of the whole network
By the math above, any two of the fragments are enough to work out the other two
If we find m and n, we can directly compute p = 2n - m and q = 3n - 2m
If we find p and q, then 2p - q = (4n - 2m) - (3n - 2m) = n, and from n we get m = 2n - p
Simply put, erasure coding uses these mathematical relationships to cut Blob data into many fragments; Ethereum nodes do not need to collect all of them, only more than 50%, to reconstruct the original Blob data. This makes the probability of failing to gather enough fragments negligible.
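The worked example above translates directly into code (a toy sketch of the idea; real erasure coding of blobs uses Reed-Solomon codes over a finite field rather than floating-point arithmetic):

```python
def make_fragments(a: float, b: float) -> dict[int, float]:
    """Encode the 'data' (a, b) as four evaluations of f(x) = a*x + b."""
    return {x: a * x + b for x in (0, 1, 2, 3)}   # the m, n, p, q from the text

def recover(fragments: dict[int, float]) -> tuple[float, float]:
    """Any 2 of the 4 fragments are enough to recover a and b, and hence every fragment."""
    (x1, y1), (x2, y2) = list(fragments.items())[:2]
    a = (y2 - y1) / (x2 - x1)      # slope through two surviving points
    b = y1 - a * x1
    return a, b

original = make_fragments(a=3, b=5)            # {0: 5, 1: 8, 2: 11, 3: 14}
surviving = {2: original[2], 3: original[3]}   # only p and q survive
a, b = recover(surviving)
print("recovered coefficients:", a, b)         # 3.0 5.0
print("rebuilt fragments:", make_fragments(a, b))
```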

KZG Commitment
A KZG commitment is a cryptographic primitive used to guarantee the integrity of the erasure-coded data. Since nodes only spot-check random fragments, they have no way of knowing on their own whether a fragment really comes from the original Blob data, so the party doing the encoding must also publish a KZG polynomial commitment proving that each erasure-coded fragment is indeed part of the original data. KZG plays a role similar to a Merkle root, but with a different shape: all of the proofs are against one and the same polynomial.
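The real KZG construction relies on elliptic-curve pairings and is beyond the scope of this article, but the interface it offers to data availability sampling can be sketched as follows (all names here are hypothetical and the internals are deliberately left as stubs; this only shows where the commitment and the per-fragment proofs fit into the flow):

```python
from dataclasses import dataclass

@dataclass
class CommitmentStub:
    """Stand-in for a real KZG commitment: one short value that commits to the
    entire polynomial whose evaluations are the blob's erasure-coded fragments."""
    value: bytes

def commit(blob_polynomial) -> CommitmentStub:
    """Produce the single commitment for a blob (stubbed)."""
    ...

def prove(blob_polynomial, index: int) -> bytes:
    """Produce a small proof that fragment `index` evaluates the committed polynomial (stubbed)."""
    ...

def verify(commitment: CommitmentStub, index: int, fragment: bytes, proof: bytes) -> bool:
    """Check a single fragment against the commitment without seeing the whole blob (stubbed)."""
    ...

# Flow implied by the text:
#  1. The encoder commits to the blob once:              C = commit(poly)
#  2. It publishes a proof alongside every fragment i:   pi = prove(poly, i)
#  3. A sampling node that downloads fragment i checks   verify(C, i, fragment_i, pi)
#     and is convinced the fragment belongs to the committed blob,
#     even though it never downloads the blob in full.
```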

Danksharding implements Data Availability Sampling (DAS) through erasure codes and KZG polynomial commitments, which greatly reduces the burden on nodes when the additional data carried by Blob is expanded to 16MB~32MB. The Ethereum community has also proposed a solution called 2D KZG scheme to further cut data fragments to reduce bandwidth and computing requirements, but the community is still discussing the specific algorithm to be used, including the design of DAS, which is also being continuously optimized and improved.
For Ethereum, Data Availability Sampling (DAS) solves the problem of expanding the Blob data volume to 16MB~32MB while reducing the burden on nodes, but there seems to be a problem: who will encode the original data?
To erasure-code the original Blob data, the encoding node must first hold the complete original data, which raises the requirements on that node. As mentioned earlier, Danksharding introduces a new mechanism, **Proposer-Builder Separation (PBS)**, to solve the problems caused by MEV; as it turns out, this mechanism solves not only the MEV problem but also the encoding problem.
Proposer/Builder Separation
First, we know that Data Availability Sampling (DAS) reduces the burden of verifying Blobs, enabling low-spec, decentralized verification. But someone still has to build the block, which requires holding the complete Blob data and encoding it, and that raises the requirements well above what many Ethereum full nodes can meet. Proposer-Builder Separation (PBS) therefore splits nodes into two roles: high-performance nodes can act as builders, while lower-performance nodes can act as proposers.
Currently, there are two types of Ethereum nodes: full nodes and light nodes. Full nodes need to synchronize all data on Ethereum, such as transaction lists and block bodies, and they play two roles: block packaging and block verification. Since full nodes can see all the information in a block, they can reorder or add or delete transactions in a block to obtain MEV value. Light nodes do not need to synchronize all data, they only need to synchronize the block header and verify the block. [1]
After Proposer-Builder Separation (PBS) is implemented:
Nodes with high-performance hardware can become builders. Builders are responsible only for downloading the full Blob data, encoding it, creating blocks, and broadcasting them for other nodes to spot-check. Because of the high data-synchronization and bandwidth requirements, the builder role will be relatively centralized.
Nodes with lower-performance hardware can become proposers. Proposers only need to verify that the data is valid and to create and broadcast the block header. Their data and bandwidth requirements are low, so the proposer role stays decentralized.

PBS divides the work among nodes by separating block building from verification: high-spec nodes download all the data, encode it, and distribute it, while low-spec nodes spot-check and verify it. So how does this solve the MEV problem?
Censorship-Resistant List (crList)
Because PBS separates building from verification, the builder actually gains greater power to censor transactions: it could deliberately ignore certain transactions and freely order or insert transactions to extract MEV. The censorship-resistant list (crList) addresses these problems.
The censorship-resistant list (crList) works as follows (a code sketch follows the list): [1]
Before the Builder packages the block transaction, the Proposer will first publish a censorship-resistant list (crList), which contains all the transactions in the mempool.
The builder can only choose how to order and package the transactions in the crList; it cannot insert its own private transactions to extract MEV, nor deliberately exclude a transaction (unless the block gas limit is reached)
After packaging the transaction list, the builder broadcasts the hash of the final transaction list to the proposer, and the proposer picks one of the transaction lists, generates the block header, and broadcasts it
When a node synchronizes data, it obtains the block header from the proposer, and then obtains the block body from the builder to ensure that the block body is the final selected version.
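Here is a toy sketch of the constraint the crList places on the builder (the types and numbers are illustrative, not actual protocol objects; note the builder can still choose the ordering, which is addressed by the bidding design in the next section):

```python
from dataclasses import dataclass

@dataclass
class Tx:
    sender: str
    gas: int

def build_block(crlist: list[Tx], block_gas_limit: int) -> list[Tx]:
    """Toy builder: it may reorder crList transactions, but it may not drop them
    or insert private transactions of its own, unless the block gas limit is hit."""
    ordered = sorted(crlist, key=lambda tx: tx.gas, reverse=True)  # builder's chosen order
    block, gas_used = [], 0
    for tx in ordered:
        if gas_used + tx.gas > block_gas_limit:
            break                     # only a full block excuses leaving transactions out
        block.append(tx)
        gas_used += tx.gas
    return block

crlist = [Tx("alice", 21_000), Tx("bob", 1_500_000), Tx("carol", 300_000)]
print(build_block(crlist, block_gas_limit=30_000_000))
```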
The negative impact of MEV such as the "sandwich attack" is solved through the censorship-resistant list (crList), and nodes can no longer obtain similar MEV by inserting private transactions.

Ethereum's specific implementation plan for PBS is still under discussion, and the current possible initial implementation plan is a dual-slot PBS.
Two-Slot PBS (Two-Slot Proposer-Builder Separation)
Two-slot PBS uses a bidding model to decide the block (a code sketch follows this list): [2]
After receiving crList, the Builder creates the block header of the transaction list and makes a bid.
The proposer selects the winning block header and builder, and the proposer receives the winning bid unconditionally (regardless of whether a valid block is ultimately produced)
The verification committee (Committees) confirms the winning block header
The winning builder then discloses the block body
The committee confirms the winning block body and votes on it (if the vote passes, the block is produced; if the builder deliberately withholds the block body, the block is treated as non-existent)
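A toy sketch of the bidding flow described above (the structures and numbers are made up, and the real design also involves committee attestations on both the header and the body; this only illustrates the auction plus the reveal step):

```python
from dataclasses import dataclass

@dataclass
class Bid:
    builder: str
    block_header: str
    amount: float        # fee the builder commits to pay the proposer

def two_slot_pbs_round(bids: list[Bid], builder_reveals_body: bool) -> str:
    """Slot 1: the proposer picks the winning header by auction and keeps the bid
    unconditionally. Slot 2: the builder must reveal the body before the committee votes."""
    winner = max(bids, key=lambda bid: bid.amount)
    if not builder_reveals_body:
        return f"{winner.builder} won the auction but withheld the body; block treated as absent"
    return f"{winner.builder}'s block confirmed; proposer earned {winner.amount} ETH"

bids = [Bid("builder_A", "0xaaa...", 0.31), Bid("builder_B", "0xbbb...", 0.42)]
print(two_slot_pbs_round(bids, builder_reveals_body=True))
print(two_slot_pbs_round(bids, builder_reveals_body=False))
```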
Builders can still extract MEV by adjusting transaction order, but the bidding mechanism of two-slot PBS forces them to compete fiercely with each other: since every builder has to bid for the right to build a block, the profit centralized builders make from MEV is continually squeezed, and it ends up being paid out to decentralized proposers. This addresses the problem of builders becoming ever more centralized by capturing MEV.
However, two-slot PBS has a design flaw: as the name suggests, it spans two slots, so the effective block time is stretched to 24 seconds (one slot = 12 seconds). How to solve this has been hotly debated in the Ethereum community.

Summary
Danksharding provides a revolutionary solution for Ethereum to solve the "blockchain impossible triangle", that is, to achieve scalability while ensuring the decentralization and security of Ethereum:
The new transaction type Blob is introduced through the pre-solution EIP-4844: Proto-Danksharding. The 1MB~2MB of additional data carried by Blob can help Ethereum achieve higher TPS and lower costs on Rollup.
Data Availability Sampling (DAS) is implemented through erasure coding and KZG polynomial commitment, allowing nodes to verify data availability by spot-checking only some data fragments and reducing the burden on nodes.
By implementing Data Availability Sampling (DAS), the additional data volume of Blob is expanded to 16MB~32MB, making the expansion effect even better.
Through Proposer-Builder Separation (PBS), the work of building and verifying blocks is split between two node roles, accepting a degree of centralization for block builders while keeping block verification decentralized.
The negative impact of MEV is greatly reduced through the censorship-resistant list (crList) and two-slot PBS; the builder can no longer insert private transactions or censor a transaction.
If nothing unexpected happens, Danksharding's preliminary solution EIP-4844 will go live in the Cancun upgrade following Ethereum's Shanghai upgrade. Once EIP-4844 lands, the most direct beneficiaries are the Layer2 Rollups and the ecosystems built on them: higher TPS and lower costs are exactly what high-frequency on-chain applications need, and it is easy to imagine some "killer applications" being born. The combination Danksharding achieves of centralized block building, decentralized verification, and censorship resistance will bring a new round of public-chain narrative to Ethereum. And beyond Layer2, what kind of chemistry will modular blockchains create with a post-Danksharding Ethereum?
We believe that the implementation of Danksharding will rewrite the rules of the game, and Ethereum will lead the blockchain industry into a new era!
References
[1] Data Availability, storage expansion of blockchain
[2] An article to understand the new Ethereum upgrade plan Danksharding
[3] Proto-Danksharding FAQ – HackMD
[4] What is “Danksharding”?
[5] Vitalik recommends: To gain a deeper understanding of Ethereum's sharding roadmap, just read this report
[6] Buidler DAO article: How to rescue NFTs from hackers after a wallet is stolen?
[7] History repeats itself? A detailed explanation of Ethereum 2.0 and hard forks