Hello everyone, welcome to this special report from PolkaWorld.

This month's Polkadot Technical Fellowship update is a little unusual — instead of the familiar Google Meet screen, it was a real in-person gathering. After the intensive, three-hour face-to-face meeting, community hosts Alice and Bob spoke with the Fellowship technical members present, and PolkaWorld brings the latest progress and behind-the-scenes thinking to the Chinese community.

In this issue, you will see:

✅ Why do Polkadot SDK version updates make developers groan? How is the Fellowship streamlining the release process?

✅ Elastic Scaling is ready, just waiting for the final deployment! Key attack resistance features are about to go live!

✅ Progress on BastiBlocks, the mechanism behind 500ms block production: how does it decouple 'computational resources' from 'block frequency'?

✅ Polkadot Hub will support a dual-stack smart contract architecture of PVM + EVM. Why is the AssetHub migration being delayed by a month?

✅ The next round of stress testing may utilize 46 cores; let's look forward to Polkadot's 300,000 TPS!

✅ BLS Beefy is officially restarting, providing high-performance decentralized support for HyperBridge!

This interview is packed with information, and we have organized the technical details and design rationale for you. We hope it conveys the Fellowship's thinking and pace of progress on architectural evolution, performance optimization, and governance integration.

Optimizing the headache-inducing Polkadot SDK version release process

Tommy: Basti, hello!

Basti: Hey, Tommy.

Tommy: Thank you for having us here today.

Basti: No problem, you're very welcome.

Tommy: So this time you held an offline, face-to-face meeting? Why not stick with Google Meet?

Basti: Because we wanted to embody the idea of 'Proof of Personhood'. You could see it as the first step of DIM 0 (Decentralized Identity Management).

Tommy: So it means confirming that everyone really is a human being in the real world?

Basti: That's right. Hahaha

Tommy: Could you share what you discussed in today's meeting?

Basti: From the outside, today's discussion might not seem very 'exciting', but it matters to us. For example, at the end of the meeting we discussed a very practical issue — the version release process.

The current process has a significant pain point: whenever a new Polkadot SDK is released, the Fellowship runtime needs to be updated, and at that point everyone practically groans and no one wants to touch it. This burdens not only the Fellowship team but every developer running a parachain.

We are considering how to optimize this process, not only for ourselves but also for the technical maintainers of the entire ecosystem. We hope the future release experience can be smoother and not something to avoid.

To be honest, today's meeting had too much content, and I’m starting to forget all the details (laughs).

We also invited Bryan to join remotely, though we joked, 'Who knows if he really exists?' 😄 He walked us through some technical details of PVQ (PolkaVM Query).

We also had a more in-depth discussion about the necessity of PVQ. For example, is this mechanism really necessary? What benefits does it bring? What costs might it incur?

From the discussion, PVQ may indeed introduce some latency, which is a downside. Whether that cost is worth it, or whether more elegant alternatives exist, was debated back and forth today.

This is probably the core content of our discussion today. At least for me, this is the most interesting part of today. Of course, others may have different opinions.

Elastic Scaling: Everything is ready, just waiting for the final 'Start' button to be pressed.

Tommy: Okay, I understand that today’s focus is on internal discussions about process and mechanism improvements. Next, we will move into today’s formal agenda, focusing on two key topics:

  • Polkadot Cloud

  • Polkadot Hub

Let’s start with Polkadot Cloud, focusing on the progress of the feature everyone is most concerned about: Elastic Scaling. How is the progress of this feature now? I heard it has gone live, but is it still undergoing security and attack resistance testing?

Robert, can you walk us through it?

Robert: Yes, overall, the chain already has the capability for elastic scaling. However, we still have a feature that has not been enabled, which is meant to enhance the system's attack resistance — preventing malicious behavior from obstructing the normal operation of elastic scaling.

This feature was previously launched, but it caused problems in the Kusama network, so we decided to postpone the deployment.

Over the past month we have tested and fixed this feature intensively, mainly through stress testing on the testnet. Its stability is now excellent — the system was designed to tolerate up to one-third of validators misbehaving, and in testing we even pushed past that threshold while the system continued to operate normally.
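PolkaWorld Note: the one-third tolerance Robert mentions is the classic Byzantine fault-tolerance bound — with n validators, the system stays safe as long as at most f = ⌊(n − 1) / 3⌋ of them misbehave. A minimal sketch of that arithmetic (the function names and the 300-validator figure are illustrative, not Polkadot APIs or actual network parameters):

```python
def max_faulty(n: int) -> int:
    """Maximum number of Byzantine (misbehaving) validators an
    n-validator BFT system can tolerate: f = floor((n - 1) / 3)."""
    return (n - 1) // 3

def is_safe(n: int, faulty: int) -> bool:
    """The network is guaranteed to keep operating while faulty <= f."""
    return faulty <= max_faulty(n)

# Example: with 300 validators, up to 99 may go abnormal.
print(max_faulty(300))    # 99
print(is_safe(300, 99))   # True
print(is_safe(300, 100))  # False
```

Exceeding the threshold in a test without failure, as Robert describes, means the guarantee held with margin to spare — the bound is a worst-case promise, not a cliff.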

So we now believe this feature is ready. Next, we just need to enable one runtime feature to ensure the system's final stability, which is expected to be completed on the Polkadot mainnet within about 20 hours.

This will also mark the landing of the last key component of elastic scaling.

Tommy: Does this 'final step' mean that a new version needs to be released?

Robert: Strictly speaking, there is no need to release a new version. This runtime feature has been developed but is not yet enabled; it needs to be activated through the OpenGov governance process. So everything is ready, just waiting for governance approval.

Tommy: Understood, thank you. Then who will be responsible for releasing the new version in the formal process of enabling Elastic Scaling?

Robert: There is no need to release a new version, just activate the existing functionality. Because we have prepared all the code, it just needs to be enabled through governance.

So it can be said that everything is ready, just waiting for the final 'Start' button to be pressed.

Parity introduces NOMT to Polkadot, boosting performance by 10 times! MVP will be launched by the end of the year!

Tommy: Okay. I remember you announced two months ago that Parity and the NOMT team had reached a cooperation agreement to bring NOMT into Polkadot and increase performance by 10 times. You have some recent updates, right?

Robert: Yes, I just checked the latest developer discussion content. The entire work is roughly divided into two parts:

  • One part is modifying NOMT itself so that it fits Substrate's design;

  • Another part is to truly integrate it into Substrate.

The modifications to NOMT itself have made good progress, and the Substrate-level integration work will begin soon.

Tommy: So does this project have a clear timeline, or is it just 'whenever it's done'?

Robert: It depends on how you define 'completion.' Personally, I hope we can at least launch an MVP (Minimum Viable Product) by the end of this year so that everyone can see initial results. But from MVP to actual deployment on the production chain, it may take more time to refine.

500ms block production: an MVP will be deployed to the Westend testnet as soon as possible for everyone to try out.

Tommy: Got it, thank you Rob.

Speaking of MVP, we also have another heavyweight developer on site who is pushing for the minimum viable product for BastiBlocks.

But... I heard you’ve been really busy lately, and everyone is asking you, 'When can you review my PR?' It seems like you have a mountain of things to review?

PolkaWorld Note: BastiBlocks is the parachain block-production logic Basti is developing to implement 500ms blocks and elastic scaling, and is one of the key components of Polkadot's elastic-computing architecture. The name began as a nickname for Basti's new block-building mechanism during Fellowship meetings and community technical discussions — as in, 'We'll just call your block production model BastiBlocks.'

Basti: Haha, yes, this is indeed one of the challenges I am currently facing.

Tommy: Do you still have time to advance the development of BastiBlocks recently?

Basti: Yes, I have always wanted to get this finished. It isn't completely done yet — a few PRs remain unmerged, and I'm asking everyone to help review them.

Fortunately, I wrote some unit tests myself, which made the development process much easier. The overall code is relatively stable now, and we are mainly dealing with a few final tasks that are technically challenging but necessary — although they are not very large, each one must be resolved.

For example: when producing blocks, each parachain is assigned a core, and it may even be assigned several.

Three or four months ago, Sebastian proposed a model: one core corresponds to one block, three cores produce three blocks — that is, the number of blocks corresponds one-to-one with the number of cores.

But the logic I am developing is brand new. Under this model, you can freely set the block rate; for example, producing a block every 500 milliseconds means two blocks per second, or up to 12 per cycle. The block rate no longer depends directly on how many cores you have.

For example, even if you only have one core, you can still set this 500ms block frequency; but if you add another core, you will have more computational resources and storage capacity.

Tommy: So you first define the block frequency, like producing a block every 500ms, and then expand resources by increasing cores, right?

Basti: That's right, it's exactly this idea.

The key point of this model is that it decouples the dimensions of 'block frequency' and 'resource quantity':

You can independently set the block frequency, such as setting it to produce a block every 500ms. This setting itself is feasible, but if you only have one core, the system will automatically limit the resources you can use — because you need to complete computations and storage within the specified time.

But if you want to use more resources, it's also simple; just 'buy another core'.

The system or collator (block producer) will automatically utilize these new resources, for example, evenly distributing 12 blocks across two cores, with each core handling 6 blocks — achieving true parallel processing.

The mechanism I am implementing now can already support this operational mode.
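PolkaWorld Note: the decoupling Basti describes can be put into numbers. A 500ms block time means 2 blocks per second — 12 parachain blocks per 6-second relay-chain slot — and those 12 blocks are spread evenly across however many cores the chain holds. A toy model of that split (illustrative only, assuming a 6-second relay slot; this is not the actual collator code):

```python
def blocks_per_slot(block_time_ms: int, relay_slot_ms: int = 6000) -> int:
    """Number of parachain blocks produced per relay-chain slot
    for a given target block time."""
    return relay_slot_ms // block_time_ms

def distribute(blocks: int, cores: int) -> list[int]:
    """Spread the blocks evenly across the available cores;
    any remainder goes to the first cores."""
    base, extra = divmod(blocks, cores)
    return [base + (1 if i < extra else 0) for i in range(cores)]

n = blocks_per_slot(500)  # 12 blocks per 6-second slot
print(distribute(n, 1))   # [12] — one core carries everything
print(distribute(n, 2))   # [6, 6] — two cores, six blocks each
```

The block frequency stays fixed at 500ms in both cases; buying a second core changes only how much compute and storage backs each slot, which is exactly the decoupling of 'block frequency' from 'resource quantity'.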

However, the current configuration of Polkadot's runtime still has a static setting for block frequency, meaning that the block time is fixed when each chain goes live.

But the block frequency actually determines how many resources you can use; for example, we limit it to avoid someone building an 'oversized block' that takes an hour to verify on my computer.

So now we will set a cap for standard hardware, for example, Polkadot currently allows a maximum of 2 seconds per block.

But if we want to introduce an elastic scaling mechanism, we must support dynamic resources — for example, the number of cores is dynamic, and the number of blocks you intend to produce is also dynamic, so 'static time constraints' cannot be used to limit it.

Therefore, we are now developing a dynamic resource evaluation and allocation mechanism.

In fact, the version I implemented is even simpler than what I just described (laughs), not complicated. It has progressed quite well.

Although I am not yet sure of the exact launch time, I hope to soon release an MVP (Minimum Viable Product) and deploy it to Westend for testing.

I know many people are looking forward to it, including myself.

Tommy: So your current development approach is to first set a basic model, and then Sebastian always jumps out to say, 'Have you considered this issue?' and you all engage in a big discussion?

Basti: Haha, that's pretty much the model. We are currently dealing with some core design issues and challenging many previous default assumptions.

The system is becoming increasingly complex, with more variables involved, making it more difficult to deduce various boundary scenarios: 'If A happens, how will B operate?'

But there's nothing to be done; this is the natural process of system growth.

Tommy: After you complete the MVP or PoC (Proof of Concept), will it be handed over to others for integration? I heard it’s almost ready to use?

Basti: The integration work shouldn't be too complicated.

Sebastian previously developed a module we call a slot-based collator, which is now the default collator type being used, or will soon become the default.

The logic I developed is based on improvements to this slot-based collator, adding support for features like 500ms block production.

If everyone is also using slot-based collators (for example, deployed through OmniNode), then integrating this new logic into parachains is quite easy.

Of course, some small adjustments still need to be made at the runtime level, but the amount is not large and not complex, so the whole integration process should be very smooth.

Next, I might take the lead in completing the integration on Atop, after which other teams can continue to expand and deploy based on this.

Polkadot Hub will support both PVM and EVM.

Tommy: Okay, thank you! I think we have covered the content related to Polkadot Cloud and parachains.

Next, we can move into the Hub section, and I would like to invite Kian to talk about it! The migration related to smart contracts is currently underway; could you briefly summarize the overall progress?

Kian: Of course, let me first talk about the smart contract part.

First, our Polkadot smart contract technology, that is, Revive and its supporting toolset, has now gone live on Kusama. This should be what has happened since our last meeting, right?

This is a noteworthy advancement. To put it a bit more technically, it uses PVM as the virtual machine for smart contracts, so the contracts are written for PVM and executed within it.

At the same time, we have also discovered some issues or potential bottlenecks that may limit the system's usability, such as the contract code potentially being too large.

Therefore, for the deployment plan on the Polkadot mainnet, we are looking at adopting the same technology stack — still supporting PVM, but possibly adding EVM as a secondary backend.

In other words, in the future, you can write Solidity contracts (compatible with Ethereum) and then choose to deploy them as:

  • PVM bytecode (uploaded to Polkadot Hub, which is AssetHub), or

  • EVM bytecode.
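PolkaWorld Note: the dual-stack idea — one Solidity source, a choice of two bytecode targets — can be pictured as a simple dispatch. The names and byte tags below are purely illustrative (the real toolchain is Revive and the Hub runtime), used only to show that the deployment target, not the source language, is what varies:

```python
from enum import Enum

class Backend(Enum):
    PVM = "pvm"  # PolkaVM bytecode (the primary backend)
    EVM = "evm"  # EVM bytecode (a possible secondary backend)

def compile_contract(solidity_source: str, backend: Backend) -> bytes:
    """Toy stand-in for the compiler: tag the 'bytecode' with its
    target so the runtime knows which VM should execute it."""
    header = b"PVM0" if backend is Backend.PVM else b"EVM0"
    return header + solidity_source.encode()

code = compile_contract("contract Foo {}", Backend.PVM)
print(code[:4])  # b'PVM0'
```

In this picture a developer writes Ethereum-compatible Solidity once and picks the backend at deploy time — which is the flexibility Kian describes.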

The details of this plan are still being further refined. I am not an expert in this module, and I am speaking on behalf of other colleagues, so I won’t comment further for now. However, if there are changes to the timeline later, they should be announced on the official Polkadot forum, so please pay attention.

That's it for the smart contract part for now. As you mentioned, the Hub work mainly covers two areas: contracts and AssetHub.

Next, I will hand the microphone over to Oliver to introduce the specific progress of the migration.

Oliver: Thank you, Kian. Hello, everyone.

Regarding the migration part, our goal is to complete the relevant migration to Paseo by the 28th of this month.

We are now making all the preparations on Paseo — in effect, preparing the new runtimes for Polkadot. We have a script that migrates Polkadot's runtime to Paseo, including renaming and adjusting constants.

The purpose of doing this is to ensure that the migration code that will run on Polkadot in the future can be thoroughly tested on Paseo first.

As usual, we have a principle: content that has not been fully tested on Kusama is not deployed to Paseo first (since Paseo is meant to be a more stable environment). But this time we received a 'special exemption' to run this test on Paseo first.

Of course, we have done a complete rehearsal on Westend before to ensure that it doesn’t crash immediately upon going live. But overall, this time still carries slightly higher risks than usual.

So, July 28 is our planned migration day for Paseo.

At that time, Paseo's AssetHub will complete the migration and enable asynchronous staking and multiple staking modules.

That way, we can later integrate smart contracts with these staking pallets. For example, you will be able to call governance-related logic directly from a contract — one of the big attractions.

As for the subsequent timeline, our original plan was:

  • August 15: Kusama

  • September: Polkadot mainnet

But there's an update: some partners' integrations are not ready, and we don't want to go live before they are — that wouldn't be good for anyone.

From what I understand, this timeline will at least be delayed by a month, but I am not very sure about the specific progress, as I have not seen the latest updates.

Currently, that is the overall situation.

Kian: Perhaps I can add a small detail. Regarding the status of Paseo, I've seen some confusion and comments in the community, so I want to clarify.

Currently, a parachain called PassetHub has been deployed on Paseo. I think it is necessary to explain its origins — we now have a Paseo relay chain, and as Oliver just mentioned, by the end of this month, asset migration will proceed according to the established process: a large amount of content from the Paseo relay chain will be migrated to Paseo's AssetHub, and smart contract functionality will be added to AssetHub afterward.

And PassetHub is a parachain specifically added for Paseo, which only includes part of the functionalities of AssetHub and smart contract modules, mainly to allow everyone to use it in developer activities (such as hackathons) in advance.

I am not sure what will happen to PassetHub in the future — it will probably 'meet its end' very soon (RIP 😅).

But I want to emphasize: PassetHub and Paseo AssetHub are different things.

https://x.com/polkaworld_pro/status/1942866040108806587

Oliver: One more thing about the contract functionality on Kusama.

Everyone should really try this feature — I am the first person to deploy a contract on it!

I deployed a standard ERC20 Token contract: just copy and paste the ERC20 template code into Remix, compile it, and then click deploy; MetaMask will pop up a payment request.

The entire process was very smooth; if you want to deploy contracts on Kusama, it's definitely worth a try.

The next spamming event may use 46 cores for TPS testing.

Tommy: Thank you, this is really interesting. It’s great to see how you deploy a chain, configure it, use it as a test chain, and migrate all content... I think this might be the first attempt in the world to migrate such a large amount of state data from one chain to another.

I have a side question: who took part in the last 'spamming' (stress test)? I've heard a rumor recently that another round is coming (for example... I'm the one spreading it 😄). But this time, the spamming probably can't be done as casually as on Kusama. Can we discuss what scale the next round might reach?

Robert: Since the last spamming until now, we have optimized network performance considerably. I expect that this time we can use more cores while still maintaining network stability — that’s something I am quite confident about.

However, our current priority for such tests is actually not that high; we are currently focusing more on block confirmation and low latency.

Once we can get more cores, we will test on Kusama first — it's a lower priority, but it is already in the plan. Last time we used about 23 cores; this time we can theoretically double that. And it won't stay theoretical — we will run the tests in practice.
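PolkaWorld Note: a back-of-the-envelope sketch of where the headline '300,000 TPS' figure could come from — assuming throughput scales roughly linearly with cores (an assumption; real throughput depends on block fullness, transaction weight, and network conditions). The per-core figure below is a hypothetical parameter chosen for illustration, not a measured number:

```python
def estimated_tps(cores: int, tps_per_core: float) -> float:
    """Naive linear-scaling estimate: total TPS = cores * per-core TPS."""
    return cores * tps_per_core

# Hypothetical: if each core sustained ~6,500 TPS, doubling from
# 23 to 46 cores would put ~300,000 TPS within reach.
print(estimated_tps(46, 6500))  # 299000.0
```

Doubling the cores doubles the estimate under this model, which is why moving from ~23 to ~46 cores is the interesting step.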

As for conducting similar tests on the Polkadot mainnet, technically it's not more difficult, but it is costly and risky:

  • On Kusama, people have a psychological expectation of chaos; if the network crashes, we will say, 'We warned you beforehand.'

  • But Polkadot is the mainnet, and the community's requirement for it is 'stable and reliable', so we must be more cautious.

But if we can stably run 40 cores on Kusama, then we can also consider gradually increasing on Polkadot, starting with 30 cores.

Tommy: So will testing begin soon?

Robert: For example, there was a problem called 'quantum' that has now been fixed, and we just need to wait for the next quantum operating cycle to begin.

In theory, we can soon get more cores. Once we have them, we will first conduct some internal tests and not open them to the public immediately. The main goal is to observe the network's performance under high load, which heavily relies on bandwidth and system resources.

I still hope to complete a full round of testing this year, but to be honest, this is not our current top priority.

Tommy: The reason I ask this is that we are recently preparing a small project called the Polkadot 200 Launch Campaign, which will relate to Elastic Scaling.

We are considering whether we can create some highlight content to attract more developers to participate. This event is a bit like spamming, but not exactly the same — it's more like everyone writing scripts together and collaboratively conducting stress tests. We are still in the incubation stage, and I am collecting various creative ideas.

Alright, I feel that the content related to Polkadot Cloud and Polkadot Hub can be concluded here.

BLS Beefy is restarting, and HyperBridge is taking a critical step towards decentralization.

Tommy: Let's get an update on the latest progress of BLS Beefy. We happen to have two key people here: one responsible for the bridge and one for consensus — the perfect combination for discussing BLS Beefy.

Left: Seyed Middle: Tommy (Alice and Bob) Right: Seun

So I want to ask what you are doing now? I just saw all those dense mathematical formulas on your table... are you doing theoretical derivation or have you already entered the coding phase?

Seun: We are still debating the design of the cryptographic scheme.

He always says, 'No, no, no, our plan is better!' But honestly, I have not been completely convinced. His reason is that their solution already has an SDK implementation, making it more practical.

But I think his reasons for opposing our plan are not sufficient.

Tommy: I remember the starting point of this debate was when you first joined HyperBridge, right? You said back then that in order for HyperBridge to truly have high performance, we must implement BLS Beefy; otherwise, the performance could not be sustained. In fact, this project was initially slow to advance, with many early bottlenecks. But now it has finally gotten back on track and is moving forward overall.

Seyed: This has been stuck for a long time, and during that period nobody really owned it. I was also tied up with other projects, and code reviews were delayed again and again. The core code is already written: you can now enable BLS key functionality, and validators will sign with BLS keys.

But there is still one key issue unresolved — the Proof of Possession has not been implemented. As long as this is not in place, the BLS keys are insecure.

We are currently modifying the relevant BLS code so that it can support this mechanism. Once implemented, BLS signatures will be usable and secure.
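PolkaWorld Note: a proof of possession guards aggregated BLS signatures against rogue-key attacks — each validator must prove it actually knows the secret key behind its registered public key, typically by signing its own public key. The sketch below shows only that protocol flow, using hashing and HMAC as stand-ins for real BLS operations (toy code, not a cryptographic implementation; in real BLS, verification needs only the public key via a pairing check):

```python
import hashlib
import hmac

def toy_pubkey(secret: bytes) -> bytes:
    """Toy key derivation: stand-in for pk = sk * G in real BLS."""
    return hashlib.sha256(b"pub:" + secret).digest()

def toy_sign(secret: bytes, message: bytes) -> bytes:
    """Toy signing: stand-in for a BLS signature over `message`."""
    return hmac.new(secret, message, hashlib.sha256).digest()

def make_pop(secret: bytes) -> bytes:
    """Prove possession of the secret key by signing your own public key."""
    return toy_sign(secret, toy_pubkey(secret))

def verify_pop(secret: bytes, pubkey: bytes, pop: bytes) -> bool:
    """HMAC is symmetric, so this toy verifier needs the secret too;
    real BLS verification uses only the public key."""
    return hmac.compare_digest(pop, toy_sign(secret, pubkey))

sk = b"validator-secret"
pk = toy_pubkey(sk)
pop = make_pop(sk)
print(verify_pop(sk, pk, pop))  # True
```

Without this step, an attacker could register a crafted public key it never generated and bias an aggregated signature — which is why Seyed says the BLS keys remain insecure until proof of possession lands.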

Of course, we still have to go through some audit processes, and there are two RFCs (technical standards) that need to be passed. Overall, the code is already there, but we need to go through the complete process before it can be officially enabled.

If you plan to deploy on the Westend test network, I think you can consider skipping the audit first, try it out, push the formal process while using it, and then go live on the mainnet.

Tommy: So your focus this week is to fully assist in pushing this project forward, completing the processes one by one?

Seun: It looks like it really has to be me doing it (laughs).

In fact, BLS Beefy should have been implemented long ago. When we launched the HyperBridge project, it was based on the assumption that BLS Beefy would definitely go live in the future. With it, HyperBridge can become the fastest and safest cross-chain bridge in the industry.

Can you imagine my surprise? I found out after joining that no one had been working on this!

So I had to proactively ask during the previous Fellowship meetings: 'Everyone, who is actually in charge of this? Can we get it done?'

I remember Seyed had indeed submitted quite a bit of code related to signatures, but then it stagnated.

However, he has now started to advance the Proof of Possession and RFC-related processes. Once these processes are completed, we can truly decentralize HyperBridge.

Because the current HyperBridge is actually only 'partially decentralized'.

We also need to perform zk conversion (zero-knowledge proof) for ECDSA (secp256k1) signatures, which is resource-intensive.

But with BLS Beefy, it is completely different:

  • You can directly submit Polkadot signatures to Ethereum,

  • without needing zk conversion,

  • with no extra overhead,

  • no performance bottlenecks,

  • and a higher degree of decentralization!

Tommy: Awesome, thank you for sharing! I hope you can smoothly advance the project progress this week!

That’s all for today’s program. Thank you all for watching, and see you next time!

Original video: https://www.youtube.com/watch?v=bF_5zacRpQc

#Polkadot