Editor | DappLearning

On April 7, 2025, Vitalik and Xiaowei appeared together at the Pop-X HK Research House event, co-hosted by DappLearning, ETHDimsum, Panta Rhei, and UETH.

During the event, DappLearning community founder Yan interviewed Vitalik, covering topics including Ethereum's POS, Layer 2, cryptography, and AI. The interview was conducted in Chinese, which Vitalik speaks very fluently.

Below is the content of the interview (for ease of reading and comprehension, the original content has been slightly edited):

1. Views on the POS upgrade

Yan: Vitalik, hello, I am Yan from the DappLearning community. I am very honored to interview you here.

I've been learning about Ethereum since 2017, and I remember the very heated discussion about POW versus POS in 2018 and 2019, a debate that may well continue.

Looking back, (ETH) POS has been running stably for more than four years, with over a million validators in the consensus network. However, the ETH/BTC exchange rate has been declining steadily, so there are both positives and challenges.

So standing at this point in time, how do you view Ethereum's POS upgrade?

Vitalik: I think the prices of BTC and ETH are completely unrelated to POW and POS.

There are many different voices in both the BTC and ETH communities; what the two communities do is completely different, and so are their ways of thinking.

Regarding the price of ETH, I think there is a problem: ETH has many possible futures, and in these futures, there will be many successful applications on Ethereum, but these successful applications may not bring enough value to ETH.

This is a concern for many in the community, but the situation is actually quite normal. Google, for example, builds many products and interesting things, yet over 90% of its revenue is still tied to its search business.

The relationship between Ethereum ecosystem applications and ETH (the price) is similar. Some applications pay a lot of transaction fees and burn a lot of ETH, while many others may be quite successful without contributing correspondingly to ETH's success.

So this is a problem we need to think about and continue to optimize. We need to support more applications that are beneficial for Ethereum holders and have long-term value for ETH.

So I think the future success of ETH might appear in these areas. I don't think there is much correlation with improvements in consensus algorithms.

2. Concerns about the PBS architecture and centralization

Yan: Yes, the prosperity of the ETH ecosystem is also an important reason developers are attracted to build on it.

OK, then what do you think of ETH 2.0's PBS (Proposer-Builder Separation) architecture? It is a very good direction: in the future, everyone can use a phone as a light node to verify (ZK) proofs, and anyone can stake 1 ether to become a validator.

But builders may become more centralized: they have to handle MEV protection and generate ZK proofs, and if Based Rollups are adopted, even more may be asked of builders, such as acting as sequencers.

In this case, will builders become too centralized? Validators are already sufficiently decentralized, but this is a pipeline: if one link in the middle has a problem, the operation of the whole system is affected. So how do we solve the censorship-resistance problem here?

Vitalik: Yes, I think this is a very important philosophical question.

In the early days of Bitcoin and Ethereum, there was a subconscious assumption:

Constructing a block and verifying a block are essentially the same operation.

Suppose you are constructing a block that contains 100 transactions; your own node has to process all of that gas. When you finish constructing the block and broadcast it to the world, every node in the world then has to do the same amount of work (consume the same gas). Therefore, when we set the gas limit, we must ensure that an ordinary laptop or MacBook, or a reasonably sized server, can build a block, and that comparably configured nodes can verify those blocks.

That was the previous state of the technology; now we have ZK, DAS, statelessness (stateless verification), and many other new techniques.

Before these technologies, constructing a block and verifying a block had to be symmetric; now they can be asymmetric. The difficulty of constructing a block may become very high, while the difficulty of verifying one becomes very low.

Take stateless clients as an example: if we use statelessness to raise the gas limit tenfold, the computing power needed to construct a block becomes huge, and an ordinary computer may no longer be able to handle it; it may take a particularly high-performance Mac Studio or a more powerful server.

But the cost of verification becomes lower, because verification no longer requires storage, only bandwidth and CPU. If ZK is added, even the CPU cost of verification can be eliminated; add DAS, and the cost of verification becomes extremely low. So the cost of constructing a block may rise, but the cost of verifying one becomes very small.

So is this better compared to the current situation?

This question is quite complex. I think if there are some super nodes in the Ethereum network, that is, some nodes with higher computing power, we need them to perform high-performance calculations.

So how do we prevent them from doing evil? For example, there are several types of attacks:

First: a 51% attack.

Second: a censorship attack. If they refuse to accept some users' transactions, how can we reduce that risk?

Third: how can we reduce the risks around MEV?

On the 51% attack: since validation is done by attesters, and those attester nodes verify using DAS, ZK proofs, and stateless clients, the cost of verification is very low, so the threshold for running a consensus node remains relatively low.

For instance, suppose some super nodes are constructing blocks, and 90% of them are yours, 5% are his, and 5% belong to others. Even if you flatly refuse certain transactions, it might not be a particularly bad thing. Why? Because you cannot interfere with the consensus process itself.

Therefore, you cannot conduct a 51% attack, and the only thing you can do is to reject certain users' transactions.

Users may only need to wait ten or twenty blocks until someone else includes their transaction in a block. That is the first point.

The second point is that we have the concept of FOCIL (Fork-Choice enforced Inclusion Lists); what does FOCIL do?

FOCIL separates the role of selecting transactions from the role of executing them, so choosing which transactions go into the next block can be far more decentralized. Through FOCIL, smaller nodes gain the power to independently force transactions into the next block, and even if you are a larger node, your power is actually very limited.
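To make the mechanism concrete, here is a minimal sketch of the inclusion-list idea in Python. Everything here (structures, field names, the gas numbers) is invented for illustration; it is not the actual EIP-7805 specification:

```python
# Minimal sketch of the FOCIL idea: small, decentralized nodes publish
# inclusion lists, and a block from a powerful builder is only valid if
# it honors them (or is genuinely full). Illustrative only; not the
# actual EIP-7805 rules.
from dataclasses import dataclass, field

@dataclass
class InclusionList:
    proposer: str                          # a small consensus node
    txs: set = field(default_factory=set)  # transactions it insists on

@dataclass
class Block:
    builder: str                           # possibly a centralized super node
    txs: set = field(default_factory=set)
    gas_used: int = 0
    gas_limit: int = 30_000_000

def block_is_valid(block: Block, lists: list[InclusionList]) -> bool:
    # The builder controls ordering and most content, but cannot censor:
    # every transaction demanded by an inclusion list must appear,
    # unless the block has no room left.
    demanded = set()
    for il in lists:
        demanded |= il.txs
    missing = demanded - block.txs
    return not missing or block.gas_used >= block.gas_limit
```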

This approach is more complex than before; we used to think of each node as a personal laptop. But look at Bitcoin: it now has a rather hybrid architecture too, because Bitcoin mining happens in data centers.

So in POS it works like this: some nodes need more computing power and resources, but the rights of those nodes are limited, while the other nodes can be very dispersed and decentralized, preserving the security and decentralization of the network. This approach is more complex, though, so it is also a challenge for us.

Yan: A very good way to think about it. Centralization is not necessarily a bad thing, as long as we can limit its ability to act maliciously.

Vitalik: Yes.

3. Layer 1 and Layer 2: issues and future direction

Yan: Thank you for resolving a long-standing confusion of mine. On to the second part of my questions. As a witness to Ethereum's journey, Layer 2 has indeed been very successful, and the TPS problem has largely been solved, unlike in the ICO era, when transactions were congested.

I personally think L2 is quite usable now; however, there is the problem of liquidity fragmentation across L2s, and many solutions have been proposed. How do you see the relationship between Layer 1 and Layer 2? Is the Ethereum mainnet currently too laid-back, too hands-off, with no constraints on Layer 2? Should Layer 1 set rules for Layer 2, establish some profit-sharing model, or adopt solutions like Based Rollups? Justin Drake recently proposed that on Bankless, and I agree with it. What do you think, and when might the corresponding solutions go live?

Vitalik: I think there are several problems with our Layer 2 now.

The first is that their progress on security is not fast enough. I have been pushing all Layer 2s to upgrade to Stage 1, and I hope they can reach Stage 2 this year. I keep urging them to do this, while also supporting L2BEAT's transparency work in this area.

The second issue is L2 interoperability, that is, cross-chain transactions and communication between two L2s. If two L2s are in the same ecosystem, interoperability should be simpler, faster, and cheaper than it is now.

We started this work last year; it is now called the Open Intents Framework, along with chain-specific addresses, and it is mostly UX work.

Actually, I think the cross-chain issue of L2 may be 80% a UX problem.

Solving UX problems can be painful, but as long as the direction is right, we can make complex problems simple. That is the direction we are working toward.

Some things need to go further. For example, the withdrawal time for an Optimistic Rollup is one week: if you hold a token on Optimism or Arbitrum and want to move it to L1 or to another L2, you have to wait a week.

You can let market makers do the waiting (and pay them a fee in return). For small transfers, ordinary users can cross from one L2 to another through the Open Intents Framework, for example via the Across Protocol, and that works. For large transfers, though, market makers still face liquidity limits, so the fees they demand are relatively high. Last week I published an article supporting a 2-of-3 verification method, the OP + ZK + TEE approach.

Because with that kind of 2-of-3, you can satisfy three requirements at once.

First, it is completely trustless, with no need for a Security Council; TEE technology plays only an auxiliary role, so it does not have to be fully trusted.

Second, we can start using ZK technology even though it is still early, because we do not yet have to rely on it fully.

Third, we can reduce the withdrawal time from a week to 1 hour.
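As a rough illustration of such a 2-of-3 rule (a minimal sketch; the function and its inputs are hypothetical, not the actual interface from the article):

```python
# Sketch of a 2-of-3 settlement rule over OP, ZK, and TEE verdicts.
# Each system independently judges a claimed L2 state root; the root
# is accepted once any two agree.

def accept_state_root(op_ok: bool, zk_ok: bool, tee_ok: bool) -> bool:
    return int(op_ok) + int(zk_ok) + int(tee_ok) >= 2

# Consequences of the threshold:
# - the TEE alone decides nothing, so it stays auxiliary (requirement 1);
# - a buggy ZK prover alone cannot finalize a bad root (requirement 2);
# - when ZK and TEE agree, there is no need to sit out the one-week
#   fraud-proof window, so withdrawals can drop to ~1 hour (requirement 3).
assert not accept_state_root(False, False, True)   # TEE alone: rejected
assert accept_state_root(False, True, True)        # ZK + TEE: fast exit
```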

You can imagine that if users go through the Open Intents Framework, the liquidity cost for market makers falls by a factor of 168, because the time they wait (before they can rebalance) drops from one week to one hour. Longer term, we plan to cut the withdrawal time from one hour to 12 seconds (the current block time), and with SSF (single-slot finality) it could fall further to 4 seconds.
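The factors are easy to check with back-of-envelope arithmetic (the capital-cost model here, funds locked in proportion to waiting time, is a simplification):

```python
# A market maker's funds are locked for the withdrawal period, so the
# liquidity cost per unit of bridged volume scales with that period.
week_hours = 7 * 24
print(week_hours / 1)   # 1 week -> 1 hour: 168.0x less locked capital
print(3600 / 12)        # 1 hour -> 12 s slots: a further 300.0x
print(12 / 4)           # 12 s -> 4 s with SSF: another 3.0x
```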

We will also use methods like zk-SNARK aggregation to parallelize proof generation and shave off some latency. Of course, if users use ZK directly in this way, they will not need to go through intents; but going through intents will be very cheap. All of this is part of interoperability.

Regarding the role of L1: in the early stages of the L2 roadmap, many people believed we could simply copy Bitcoin's roadmap, in which L1 does very little, only verifying proofs (a small amount of work), while L2 does everything else.

However, we have found that if L1 does not play any role at all, it is dangerous for ETH.

As we discussed before, one of our biggest concerns is that the success of Ethereum applications cannot translate into the success of ETH.

If ETH is not successful, our community will have no funds and will be unable to support the next round of applications. So if L1 plays no role at all, the user experience and the entire architecture will be controlled by L2s and a few applications, with no one representing ETH. Therefore, if we can give L1 a larger role in some applications, it will be better for ETH.

The next question we need to answer is what L1 will do and what L2 will do.

In February, I published an article arguing that even in an L2-centric world there are many important things L1 must do. For example, L2s need to send proofs to L1; if an L2 runs into problems, users need to exit through L1 to another L2; keystore wallets and oracle data can also live on L1; and so on. Many such mechanisms need to rely on L1.

There are also some high-value applications, such as DeFi, which are actually more suitable for L1. One important reason why some DeFi applications are more suitable for L1 is their Time Horizon (investment period); users need to wait a long time, such as one, two, or three years.

This is especially evident in prediction markets, where sometimes prediction markets ask questions like what will happen in 2028.

Here a problem arises: if the governance of an L2 goes wrong, in theory all of its users can exit to L1 or to another L2. But if an application on that L2 has assets locked in long-term smart contracts, its users cannot exit. So many theoretically secure DeFi applications are not that secure in practice.

For these reasons, some applications should still be done on L1, so we are paying more attention to L1's scalability.

We currently have a roadmap with roughly four or five methods to enhance L1's scalability by 2026.

The first is delayed execution (separating block verification from execution): we verify a block in one slot and execute it in the next. The advantage is that the maximum acceptable execution time can grow from about 200 milliseconds to 3 or 6 seconds, allowing much more processing time.
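A toy scheduling sketch of this pipelining idea (my own illustration, not client code; the timings are placeholders):

```python
# Delayed execution as a two-stage pipeline: attest to block n with
# cheap static checks now, while the expensive execution of block n
# is allowed to run until block n+1 arrives.
import time
from concurrent.futures import ThreadPoolExecutor

def static_checks(block) -> bool:
    # signature, gas-limit, and availability checks: cheap, milliseconds
    return True

def execute(block):
    # stands in for full EVM execution, which may take seconds
    time.sleep(0.01)

def slot_loop(blocks):
    pool = ThreadPoolExecutor(max_workers=1)
    prev = None
    for block in blocks:
        assert static_checks(block)          # attest to block n right away
        if prev is not None:
            prev.result()                    # block n-1 must be done by now
        prev = pool.submit(execute, block)   # block n gets a full slot to run
    if prev is not None:
        prev.result()
    pool.shutdown()

slot_loop(range(5))
```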

The second is the block-level access list: each block must declare which accounts, and which of their storage slots, it will read. It is somewhat like a stateless design without the witnesses. The advantage is that EVM execution and I/O can then run in parallel, which is a relatively simple way to implement parallel processing.
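A sketch of why declaring accesses up front enables parallelism (names and the flat key-value state are invented; a real client would operate on the state trie):

```python
# With a block-level access list, a node knows every (account, slot)
# the block will touch before executing it, so all disk reads can be
# issued in parallel and the EVM then runs serially against warm state.
from concurrent.futures import ThreadPoolExecutor

def execute_block(db: dict, block: dict) -> dict:
    # 1) Parallel I/O: warm every declared key at once.
    keys = block["access_list"]
    with ThreadPoolExecutor(max_workers=8) as pool:
        warm = dict(zip(keys, pool.map(db.get, keys)))
    # 2) Serial EVM execution against pre-warmed state; any access
    #    outside the declared list is a protocol violation.
    for tx in block["txs"]:
        for key in tx["touches"]:
            assert key in warm, "undeclared state access"
            warm[key] = (warm[key] or 0) + 1   # toy state transition
    return warm

db = {"A.balance": 100, "B.storage.0x1": 7}
block = {"access_list": ["A.balance", "B.storage.0x1"],
         "txs": [{"touches": ["A.balance"]}]}
print(execute_block(db, block))               # {'A.balance': 101, ...}
```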

Third is multidimensional gas pricing, which lets us set a separate maximum capacity for each resource in a block; this is very important for security.
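In sketch form, a per-resource cap might look like this (the resource names and limits are illustrative):

```python
# Multidimensional gas: a block must fit every resource budget
# separately, rather than folding compute, data, and state growth
# into one scalar gas number.
LIMITS = {"compute": 30_000_000, "blob_data": 786_432, "state_growth": 1_000_000}

def block_fits(usage: dict) -> bool:
    return all(usage.get(r, 0) <= cap for r, cap in LIMITS.items())

print(block_fits({"compute": 25_000_000, "blob_data": 262_144}))  # True
print(block_fits({"compute": 1, "state_growth": 2_000_000}))      # False
```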

Another is historical data handling (EIP-4444), so that not every node has to store all history permanently. For example, each node might keep only 1%: with a p2p approach, your node stores one part and someone else's node stores another, so the history is stored in a much more distributed way.
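A toy version of the distributed-history idea (the hash-based slice assignment is purely illustrative):

```python
# No node keeps all history; each node deterministically stores a small
# slice and fetches the rest from peers over p2p.
import hashlib

SLICES = 100                                    # ~1% of history per node

def slice_of(block_number: int) -> int:
    return block_number % SLICES

def node_keeps(node_id: bytes, block_number: int) -> bool:
    mine = int.from_bytes(hashlib.sha256(node_id).digest()[:4], "big") % SLICES
    return slice_of(block_number) == mine
```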

So if we can combine these solutions, we now believe it may be possible to raise L1's gas limit by 10x, letting all our applications rely more on L1 and do more there, which is good for both L1 and ETH.

Yan: Okay. The next question: might we see the Pectra upgrade this month?

Vitalik: Actually, we hope to do two things: conduct the Pectra upgrade around the end of this month and plan for the Fusaka upgrade in Q3 or Q4.

Yan: Wow, so soon?

Vitalik: I hope so.

Yan: My next question is related to this. Watching Ethereum grow, we know that to ensure safety, each layer (consensus and execution) has about five or six client implementations being developed simultaneously, which leads to a lot of coordination work and longer development cycles.

This has its pros and cons; compared to other L1s, it may indeed be slow, but it is also safer.

But what solutions do we have so that upgrades don't have to wait a year and a half? I've seen you propose some; could you elaborate?

Vitalik: Yes, there is a plan: we can improve coordination efficiency. We now have more people who move between different teams to keep communication between them efficient.

If a client team has a problem, they can raise it so the research team knows. One advantage of Tomasz becoming one of our new executive directors is that he comes from a client team and is now also inside the EF, so he can coordinate this. That is the first point.

The second point is that we can be stricter with client teams. Currently, with five teams, we wait until all five are fully ready before announcing the next hard fork (network upgrade). We are now considering starting the upgrade as soon as four teams are ready, so we don't wait for the slowest one, which should also boost everyone's motivation.

4. On cryptography and AI

Yan: So there should still be appropriate competition. That's great; I genuinely look forward to every upgrade, just don't keep everyone waiting too long.

Next, I'd like to ask some questions related to cryptography; they will be fairly broad.

When our community was founded in 2021, we gathered developers from major exchanges in China and researchers from venture firms to discuss DeFi. In 2021, everyone really was taking part in understanding, learning, and designing DeFi; it was a nationwide wave of participation.

Looking forward, regarding ZK: whether for the public or for developers, learning ZK, Groth16, Plonk, Halo2 and so on, gets harder and harder to keep up with, and the technology itself is advancing fast.

Additionally, ZKVMs are now developing quickly, so the ZKEVM direction is not as popular as before. As ZKVMs mature, developers may no longer need to focus much on the ZK internals.

What are your suggestions and views on this?

Vitalik: I think for the ZK ecosystem, the best direction is for most ZK developers to work in a high-level language (HLL). They write their application code in the HLL, while those researching proof systems keep modifying and optimizing the underlying algorithms. Development should be layered; one layer doesn't need to know what happens at the layer below.

Currently there is a problem: Circom and Groth16 have a very well-developed ecosystem, but this significantly limits ZK applications, because Groth16 has many drawbacks; for example, each application has to handle its own trusted setup, and its efficiency is not very high. So we are considering allocating more resources to help modern HLLs succeed.

Another thing: the ZK RISC-V route is also very good, because RISC-V has in effect become a common target beneath the HLLs; many applications, including the EVM and others, can be written and compiled to RISC-V.

Yan: Okay, so developers just need to learn Rust, which is great. I also heard about the progress in applied cryptography at Devcon in Bangkok last year, which was eye-opening for me.

Regarding the aspects of applied cryptography, what do you think about the combination of ZKP, MPC, and FHE, and what advice do you have for developers?

Vitalik: Yes, this is very interesting. I think FHE has a good future now, but I have one concern: MPC and FHE always need a committee, meaning you must choose, say, seven or more nodes. If those nodes can be 51% or 33% attacked, your system has a problem. It amounts to the system having a Security Council, and it is actually worse than a Security Council, because for a Stage 1 L2, 75% of the Security Council must be compromised before there is a problem.

The second point is that a reliable Security Council keeps most of its keys in cold wallets, so its members are mostly offline. In most MPC and FHE setups, however, the committee has to stay online to keep the system running, so it may be deployed on VPSes or other servers, which makes it easier to attack.

This makes me a bit worried. I think many applications can still be developed; they have advantages but are not perfect.
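For concreteness, the arithmetic behind this worry, with illustrative committee sizes:

```python
# How many members must an attacker compromise under each trust model?
# The committee sizes are illustrative.
import math
print(math.ceil(7 * 1 / 3))    # MPC/FHE committee of 7 at 33%: 3 members
print(math.ceil(7 * 0.51))     # the same committee at 51%: 4 members
print(math.ceil(12 * 0.75))    # a 12-member Security Council at 75%: 9
```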

Yan: Finally, let me ask a relatively light question. I've seen you paying attention to AI recently, and I'd like to raise a few points.

For example, Elon Musk said that humans might just be the bootloader for a silicon-based civilization.

And from our experience in the crypto world, the premise of decentralization is that everyone follows the rules, restrains one another, and understands the risks, which may ultimately lead to rule by elites. What do you think of these views? Feel free to just discuss them.

Vitalik: Yes, I am thinking about where to start answering.

The field of AI is very complex. For example, five years ago no one could have predicted that the U.S. would have the world's best closed-source AI while China would have the best open-source AI. AI can enhance everyone's abilities, and sometimes it also enhances the power of certain countries.

But AI can sometimes also have a rather democratizing effect. When I use AI myself, I find that in areas where I am already among, say, the top thousand people in the world, such as some parts of ZK development, AI helps me less; I still have to write most of the code myself. But in areas where I am a novice, AI helps a lot. For example, I had never done native Android development; ten years ago I made an app with a framework, writing it in JavaScript and converting it into an app, but beyond that I had never written a native Android app.

At the beginning of this year, I ran an experiment: I tried using GPT to write an app, and it was finished within an hour. It shows that AI has significantly narrowed the gap between experts and novices, and it can open up many new opportunities.

Yan: Let me add something; I'm quite grateful for that new perspective. I used to think AI would let experienced programmers learn faster while being unfriendly to novices, but in some respects it actually raises novices' abilities. It may be a force for equality rather than division, right?

Vitalik: Yes, but now there is a very important question that also needs to be thought about: what effects will some of the technologies we are working on, including blockchain, AI, cryptography, and some other technologies, have on society?

Yan: So you hope humanity will not only be ruled by elites, right? You also hope to achieve Pareto efficiency for the entire society, where ordinary people become super individuals through the empowerment of AI and blockchain.

Vitalik: Yes, yes, super individuals, super communities, super humans.

5. Expectations for the Ethereum ecosystem and advice for developers

Yan: OK, then we proceed to the last question. What are your expectations and messages for the developer community? Do you have anything to say to the developers in the Ethereum community?

Vitalik: Ethereum application developers need to think about this.

There are many opportunities to develop applications in Ethereum now; many things that were previously impossible can now be done.

There are many reasons for this, such as:

First: Previously, the TPS of L1 was completely insufficient, but now this issue no longer exists;

Second: Previously, there was no way to solve privacy issues, but now there is;

Third: AI has lowered the difficulty of developing anything. Although the complexity of the Ethereum ecosystem has grown somewhat, AI can help everyone understand Ethereum better.

So I think many things that failed before, including ten years ago or five years ago, may now succeed.

In the current blockchain application ecosystem, I think the biggest problem is that we have two types of applications.

The first type is very open, decentralized, secure, and particularly idealistic, but has only 42 users. The second type is best described as a casino. The problem is that both extremes are unhealthy.

So what we hope for is applications with two properties.

First, users actually like using them and they deliver real value; such applications are better for the world.

Second, they have a real business model, an economics that can keep running without depending on funds from a limited set of foundations or other organizations; this is also a challenge.

But now I think everyone has more resources than before, so if you can find a good idea and execute it well, your chances of success are very high.

Yan: Looking back, I think Ethereum has been quite successful, continuously leading the industry and striving to solve the problems the industry encounters, all under the premise of decentralization.

Another thing I feel deeply is that our community has always been non-profit, supported by Gitcoin grants in the Ethereum ecosystem, Optimism's retroactive rewards, and airdrops from other projects. We have found that building in the Ethereum community brings a lot of support, and we keep thinking about how to keep the community running sustainably and steadily.

Building Ethereum is truly exciting, and we hope to see the true realization of the world computer soon. Thank you for your valuable time.