Binance Square

0x_Todd

---
A few days ago, @arbitrum_cn and @arbitrum completed their own Pectra upgrade, codenamed ArbOS 40 Callisto. We have touched on this topic before: how do L2s stay in sync with ETH L1?

As we all know, Pectra went live on ETH L1 on May 7, but L1 having these features does not mean L2s automatically inherit them.

Therefore, L2s like Arb also need to push their own Pectra upgrades to ensure that they always maintain a consistent user experience with ETH L1.

Of course, some proposals in Pectra are of no use to Arb, such as raising the staking cap per validator from 32 ETH to 2,048 ETH.

However, other proposals, such as EIP-7702 (account abstraction for smart wallets) and EIP-2537 (precompiles for BLS12-381 operations, i.e., on-chain BLS signatures), matter a great deal to Arb.

Currently, the leading applications on Arb are concentrated in Perps, such as GMX, Ostium, Aark, and Rho. Arb needs other kinds of applications to change the game.

Once account abstraction goes live, it enables gasless transactions, batched transactions, social login, and so on, all of which tackle the last-mile problem of onboarding.

Then there are the on-chain BLS precompiles, which make BLS signature verification and zero-knowledge-proof verification faster and cheaper. L2 is already cheap and fast, but it can always get cheaper and faster.
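To make that concrete, here is a minimal off-chain sketch of BLS aggregation using the py_ecc library (my choice for illustration; a contract on Arb would instead call the EIP-2537 precompiles):

```python
# Toy demo of why BLS matters: many signatures over the same message
# collapse into ONE signature that verifies in a single check.
# Uses py_ecc, the reference implementation behind the eth2 specs.
from py_ecc.bls import G2ProofOfPossession as bls

message = b"example L2 state root"

# Three independent signers (insecure demo keys from fixed 32-byte seeds).
secret_keys = [bls.KeyGen(bytes([i]) * 32) for i in range(1, 4)]
public_keys = [bls.SkToPk(sk) for sk in secret_keys]

# Each signer signs the same message.
signatures = [bls.Sign(sk, message) for sk in secret_keys]

# Aggregate the three signatures into a single 96-byte signature...
agg_sig = bls.Aggregate(signatures)

# ...and verify all three signers at once. EIP-2537's precompiles make
# exactly this kind of pairing check affordable in gas terms.
assert bls.FastAggregateVerify(public_keys, message, agg_sig)
print("aggregate signature verified")
```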

I remember someone once mentioned that you could run an L2 for $50, but that only buys the most basic L2.

If you want to run a good L2, you still have to put real effort into maintaining it. Every ETH L1 mainnet upgrade forces the L2 to redevelop, retest, and re-audit, and all of that costs money.

Otherwise, long-tail L2s that fail to keep up with mainnet updates will come to feel neglected, like the dilapidated buildings of a cyberpunk world.
---
Today is the first day of the college entrance examination.

It is said that many people who left campus long ago still have anxious dreams of being back in the examination hall.

However, I recently discovered a change in myself.

I still occasionally dream of returning to the exam, but my mindset has made a 180° turn.

For example, in my dreams I am now clearly aware that I have already graduated, and that returning to the examination hall is just to test how many questions I can still answer, as relaxed as taking an MBTI test, without any anxiety.

According to Freudian theory, such displacement in dreams usually serves as an inspection of one's inner defenses.

For me, this dream marks that my journey of self-cultivation has entered a new stage.

Reflecting on it, I believe this is related to the major position adjustment I made at the beginning of this year: roughly half in $BTC and half in stablecoin mining, with other tokens only as small positions; I also left many group chats.

Since then I have felt liberated, with the sense of ease growing by the day. I am happy when the price goes up (my fiat value increases) and happy when it goes down (my coin-denominated value increases). There are dreams (the big coin) and stable cash flow (mining), plus a few small coins I am optimistic about, just to keep in practice.

Of course, this is a digression.

Finally, I still wish my family members a smooth and victorious college entrance examination in 2025.
---
Project Analysis of Sahara AI

The reason for focusing on this project is that its investors include Binance Labs, Polychain, and Pantera, among others.

The most straightforward understanding is that Sahara is a one-stop AI tool, which includes a blockchain (Sahara chain).

So what does this one-stop AI include:

1. A data labeling platform, where users collect, refine, and label datasets and earn incentives for doing so.

What is data labeling?

The data labeling process identifies raw data (images, text files, videos, etc.) and attaches informational tags as context. For example: you, a human, look at a picture of a bird and write a note telling the AI that this is a lark.

2. A data tokenization platform, where users upload datasets, register them on-chain, and mint an ownership NFT, establishing provenance for data and AI assets.

Why tokenize data?

For example, if you hold some ready-made data (say, the human DNA sample data discussed before), others can purchase or rent it to train specialized AI directly. Tokenization establishes ownership, letting your data generate profit.
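As a hypothetical illustration of the idea (not Sahara's actual schema or API): registering a dataset's provenance usually boils down to committing to a content hash on-chain, roughly like this:

```python
# Hypothetical dataset-provenance sketch: a labeled record is serialized
# deterministically, hashed, and the hash is what an ownership NFT would
# commit to on-chain. Illustrative only.
import hashlib
import json

labeled_record = {
    "image": "bird_0001.png",
    "label": "lark",          # the human-provided annotation
    "annotator": "0xAbc...",  # placeholder: who labeled it (and gets rewarded)
}

# Canonical serialization so identical data always yields the same hash.
canonical = json.dumps(labeled_record, sort_keys=True).encode()
content_hash = hashlib.sha256(canonical).hexdigest()
print(f"dataset content hash: {content_hash}")

# Minting an NFT against this hash proves WHAT was registered and WHEN,
# without putting the (large or private) dataset itself on-chain.
```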

3. An AI deployment platform, where developers use the various data on the platform to launch their own AI.

Thanks to the many open-source projects, developers today no longer need to research neural networks and algorithms themselves; they can take an existing model, such as DeepSeek, and feed it different data to train vertical-domain AI. Accordingly, Sahara also plans to provide supporting GPU and CPU computing power services.

4. An L1 blockchain, the Sahara chain. The links in the one-stop pipeline above, such as trading data assets and AI assets, will most likely happen on this chain.

Many years ago the industry dreamed that every platform would have its own chain; that now seems basically realized. The Sahara chain's virtual machine is EVM-based and its development components come from the Cosmos SDK, suggesting a standard PoS chain. Its token has not been issued yet (though a public offering is coming soon), and presumably one of its important roles will be paying gas fees.

Thus this so-called one-stop AI development stack, from data labeling to AI models to GPU computing power, its four components combined, forms the basic framework of Sahara AI @SaharaLabsAI.
---
Today while researching L2, I found that the standards for L2BEAT are quite strict.

Not many of the top 10 L2s have made it to Stage 1, and even the likes of Base, OP, and Unichain have been flagged: if they do not fix the issues within 53 days, they drop back to Stage 0. Of course, they all run the OP Stack, so being flagged together is normal.

Among the top 5, only Arbitrum stands firm, the backbone of Stage 1 among L2s...

The three-stage 'training wheels' roadmap proposed by @VitalikButerin (Stage 0 → Stage 1 → Stage 2) has been turned by L2BEAT into a concrete, measurable checklist (a functional proof system, exit window, upgrade timelock, Security Council design, etc.).

Specifically, what Stage an L2 is in depends on a core question: who can veto or change the state?

In Stage 0, the core team (or a low-threshold multisig) can override the proof system.
In Stage 1, only a Security Council supermajority (≥ 75% of signatures) can override it; everyone else (including the core team) must go through the proof system.
In Stage 2, the proof system itself is the final arbiter; the Security Council may only step in to fix bugs that are provable on-chain.

For Stage 1 in particular, L2BEAT has quantified the requirements:

≥ 8 Security Council members, at least half of them external
≥ 7-day upgrade timelock (the 'exit window')
≥ 5 external validators
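That checklist is simple enough to encode directly; here is a minimal sketch (the field names and example values are mine, not L2BEAT's):

```python
# Toy encoding of L2BEAT's quantified Stage 1 requirements (above).
from dataclasses import dataclass

@dataclass
class L2Profile:
    council_members: int
    external_members: int
    exit_window_days: float
    external_validators: int

def meets_stage1(l2: L2Profile) -> bool:
    return (
        l2.council_members >= 8
        and 2 * l2.external_members >= l2.council_members  # at least half external
        and l2.exit_window_days >= 7
        and l2.external_validators >= 5
    )

# Arbitrum roughly as described in this post: 12 council members, half of
# them external (the exit-window figure here is just an example value).
arb = L2Profile(council_members=12, external_members=6,
                exit_window_days=7, external_validators=5)
print(meets_stage1(arb))  # True
```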

So I took a look at Arbitrum's current Security Council. Besides foundation members, it includes L2BEAT co-founder Bartek Kiepuszewski and representatives of security and governance organizations such as OpenZeppelin, Immunefi, and Gauntlet, 12 members in total.

PS: Amusingly, they invited the L2BEAT co-founder himself to join, saving him the trouble of downgrading them every few days...

Routine upgrades require a 9-of-12 multisig. The council is split into two cohorts of 6, with half the seats up for election each year.

After Arbitrum BoLD (Bounded Liquidity Delay) officially launched this February, the contracts let anyone stake to participate in submitting and challenging assertions, which essentially checks off the last Stage 1 requirement.

This is what has made Arbitrum one of the few truly Stage 1 L2s.

@arbitrum_cn @arbitrum
---
(Possible collateral damage) But many garbage projects be like:

Sorry, our frontend/backend/snapshot/contract had issues
So the agreed IXO
Has to be delayed a bit
---
Judging by current results, now that gas fees on the Ethereum L1 mainnet have fallen to a 5-year low, many projects feel that deploying directly on L1 is no longer unthinkable. So many people ask: what core problem is L2 actually solving?

There is an old topic called the Blockchain Trilemma, which, according to Vitalik's explanation, means that one can only choose two out of three: [Security], [Decentralization], and [Scalability].

Returning to the essence of technology, this is the problem that L2 should solve:

First, state summaries are placed on L1, with the mainnet maintaining [Security];
Second, efforts are made on the sequencer to maintain [Decentralization] as much as possible;
Finally, [Scalability] is implemented as cleverly as possible off-chain by L2.

Different L2 solutions each have their merits, and everyone is familiar with OP Rollups and ZK Rollups. Today I want to talk about something different: the Based Rollup.

The Based L2 solution was also proposed by Vitalik early on, and L2 projects like Taiko have been promoting the Based Rollup idea.

PS: Note that this is 'Based', which has nothing to do with Coinbase's Base (itself an OP Stack chain).

As we all know, in a standard L2 design (OP-style, for instance) the sequencer holds significant power: it decides whose transactions go first and whose go last, and even without acting maliciously it can profit from MEV. This is why projects like Metis propose decentralized sequencers.

Different L2s handle MEV differently: Arb advocates treating MEV fairly (strictly first-come, first-served), while OP is more permissive, treating MEV as free-market behavior and taxing it. Either way, the L2 sequencer holds a prominent position.

The Based Rollup, therefore, takes aim at the sequencer itself: it lets ETH L1 do the sequencing, limiting the power of L2 sequencers.

Quoting a diagram from @taikoxyz’s documentation:

You can see that it follows a three-step process:

First, L2 searchers bundle L2 transactions and send them to L2 block builders;
Second, the L2 block builders construct L2 blocks;
Third, L1 searchers include those L2 blocks in the blocks they build on L1.

Here, the L1 searchers and L2 builders can be the same person.

This is a clever 'one person, two jobs' arrangement: L1 searchers' hardware has spare capacity anyway, so building an extra L2 block for Taiko costs them essentially nothing.
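A toy sketch of the dual-role idea (purely illustrative; the real flow lives in MEV pipelines and Taiko's contracts, and every name here is made up):

```python
# Toy model of based sequencing: the same actor builds the L2 block and,
# wearing its L1-searcher hat, gets that block included on L1, so L1
# itself ends up sequencing the L2.
from dataclasses import dataclass, field

@dataclass
class L2Block:
    txs: list

@dataclass
class L1Block:
    l1_txs: list
    embedded_l2_blocks: list = field(default_factory=list)

class DualRoleBuilder:
    """Plays L2 block builder and L1 searcher at the same time."""

    def build_l2_block(self, l2_mempool: list) -> L2Block:
        # Job 1: order L2 transactions into an L2 block (toy ordering).
        return L2Block(txs=sorted(l2_mempool))

    def carry_to_l1(self, l1_txs: list, l2_block: L2Block) -> L1Block:
        # Job 2: include the L2 block in the L1 block being built.
        return L1Block(l1_txs=l1_txs, embedded_l2_blocks=[l2_block])

builder = DualRoleBuilder()
l2_block = builder.build_l2_block(["swap()", "transfer()"])
print(builder.carry_to_l1(["deposit()"], l2_block))
```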

To use an imperfect analogy: if ETH and an L2 are a province and a city, the Based Rollup idea is to let the mayor (L2 builder) also serve as deputy provincial governor (L1 searcher), using L1's resources to safeguard L2's security.

It has been exactly a year since Taiko's TGE, and token unlocks are about to begin, so over the past year Taiko has been brewing a new idea called the Based Booster Rollup (BBR).

A Booster Rollup can also act as a mirror of L1, which is quite an interesting idea. Space is limited, though, so the analysis of Booster Rollups will continue in the next article.
---
I feel that this snapshot of Cookie has a problem
Do I chat about Pancake every day?
---
Bitcoin has three major holidays:

Birthday: October 31

On October 31, 2008, Satoshi Nakamoto published the paper "Bitcoin: A Peer-to-Peer Electronic Cash System."

Holiday Customs: Re-read the white paper 🔖

Genesis Day: January 3

On January 3, 2009, the genesis block of the Bitcoin blockchain was mined, and the Bitcoin network officially started.

Holiday Customs: Tease that former British Chancellor (the genesis block embeds the headline "Chancellor on brink of second bailout for banks") 🤦

Pizza Day: May 22

On May 22, 2010, Bitcointalk forum user Laszlo used 10,000 bitcoins to buy two pizzas, the first recorded use of Bitcoin as a real-world payment medium.

Holiday Customs: Eat two slices of pizza! 🍕
---
Need Chinese to Chinese translation
How is this formula from Binance calculated? 😂
What is the hard cap for a single account in $BNB?
---
Recently I parked some money in Uniswap V4, so I took the opportunity to seriously study Uni's hooks.

Many people have told me privately that when Uni launched V4 they did not feel the amazement V3 delivered at launch. The abstract concept of the 'hook' itself takes much of the blame.

Rather than translating it literally as 'hook', I personally think 'plugin' is the better reading.

A hook exists to add functionality to a pool beyond what Uni itself provides. The documentation emphasizes when hooks can be called, but most people do not care about that; it is more useful to explain what hooks can actually do.

[Examples of Hook Usage]

-- For example, a hook can restrict your newly created pool, say ETH-USDT, so that only specific addresses can access it;

-- Or it can let your pool charge higher fees during busy hours and lower fees during idle hours;

-- It can even let your pool run on a curve other than x*y=k (PS: probably inspired by Curve 😂).
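Real hooks are Solidity contracts implementing callbacks such as beforeSwap; purely to illustrate the dynamic-fee idea from the second example, here is a toy model (all names and numbers are mine):

```python
# Toy dynamic-fee logic: scale the swap fee with how busy the pool is.
# A real Uniswap V4 hook would return the fee override from its callback.
BASE_FEE_BPS = 30   # 0.30%: hypothetical idle-time fee
MAX_FEE_BPS = 100   # 1.00%: hypothetical busy-time cap

def dynamic_fee_bps(utilization: float) -> int:
    """utilization: 0.0 = idle pool, 1.0 = peak activity."""
    u = min(max(utilization, 0.0), 1.0)
    return round(BASE_FEE_BPS + (MAX_FEE_BPS - BASE_FEE_BPS) * u)

for u in (0.0, 0.5, 1.0):
    print(f"utilization {u:.0%} -> fee {dynamic_fee_bps(u)} bps")
# utilization 0% -> 30 bps; 50% -> 65 bps; 100% -> 100 bps
```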

In short, you can freely build whatever functionality you need, including things Uni's official team might never ship.

It's a bit like the Steam Workshop: the official team stops producing and lets everyone create freely.

Another change: in the past, there were only two money-making roles on Uni, LPs and traders, each taking money from the other 😂.

After V4, hooks give even script kiddies a money-making role.

You write a hook, and others can pay to use your hook when they create pools (selling... hooks?).

Each pool can use one hook plugin, but one set of hook plugins can be subscribed to by countless pools, with a relatively low marginal cost.

There is a website called Hook Rank that includes hundreds of hooks, showing how much money various hooks have earned. Currently, one of the most commonly used hooks, Flaunch, has already made the developer over a million dollars.

What does it do? With its hook, you can create a pool for a meme coin and direct the fees of that pool in any proportion, for example, 80% to your own wallet and 20% for buybacks.

Fortunately, when the Trump family launched $Trump they didn't know about this feature; otherwise wouldn't they have routed all the fees to World Liberty Financial 😂?

Moreover, as the saying goes, no one understands you better than a competitor. Pancake later decisively introduced hooks as well, though instead of calling it V4 they named it Pancake Infinity. But that's a story for another time.

In conclusion, hooks are quite an interesting thing, worthy of being named V4.
---
A round of eat-and-take 😂
Yesterday I set up some $SKYAI/$BNB LP on Pancake V3.

This morning the price broke out of my range; in less than a day, I earned nearly 3,000 U in fees and my principal grew by 7 BNB.

The family members are fiercely racking up points on Binance Alpha;
I didn't expect all the fees to end up with the LPs 😂

The 0.25% I took
still saved the family some fees.
Does that count as conscientious? 😂

----Divider----

After doing arbitrage for a long time, I have a theory:

That is, once an arbitrage opportunity gradually becomes public,
everyone will grind it down to the point where no profit is left.
The same goes for new listings and airdrops on Binance Alpha.

Once it starts getting competitive,
you need to figure out who is actually footing the bill:
Binance's new projects subsidize the studios,
but the studios are paying the LPs.

New projects on Alpha
need "trading-volume activity" to push for Binance futures and spot listings.

Although LPing here has a gambling element,
there are things to weigh:
(1) someone is propping it up (the studios)
(2) someone has ambitions (the project team)

Then the win rate is sufficient.

PS: The 5,128% APR in the screenshot is not accurate; I had just withdrawn funds, so the denominator had shrunk.
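The back-of-envelope arithmetic behind those numbers, assuming the 0.25% fee tier (the principal figures below are hypothetical):

```python
# Implied volume: fees earned divided by the fee rate.
fees_earned_u = 3_000
fee_rate = 0.0025                     # Pancake V3 0.25% tier
print(f"implied volume: {fees_earned_u / fee_rate:,.0f} U")  # 1,200,000 U

# Why a screenshot APR can be wildly inflated: APR annualizes one day's
# fees over whatever principal is CURRENTLY left in the position.
def apr_pct(daily_fees: float, principal: float) -> float:
    return daily_fees / principal * 365 * 100

print(f"{apr_pct(fees_earned_u, 60_000):,.0f}% APR on 60,000 U principal")
print(f"{apr_pct(fees_earned_u, 20_000):,.0f}% APR after withdrawing down to 20,000 U")
```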
---
Seeing $BTC return to 100K really makes me happy
I feel warm inside
---
First of all, congratulations to $ETH on successfully completing the Pectra upgrade last night. Let's briefly walk through it.

This Pectra upgrade mainly revolves around three dimensions: Staking, L2, and account abstraction.

1. First, let's talk about the upgrades related to Staking.

This upgrade raises the cap per validator from 32 ETH to 2,048 ETH (EIP-7251, maximum effective balance).

This matters in two ways:

1. It makes Staking deposits and withdrawals faster during peak times.

ETH has always emphasized decentralization, capping each validator's effective balance at 32 ETH. Even as you earned staking rewards, the network would automatically sweep out the excess, keeping you at 32 ETH.

However, this brought about a problem where deposits and withdrawals were too slow during peak times.

Because ETH also limits how many validators can enter or exit per epoch (a churn limit that has been as low as 8), deposits at peak times could wait about ten days, with withdrawals queuing just as long.
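The queue math, with illustrative numbers (the churn limit and queue length here are example values):

```python
# Illustrative validator-queue arithmetic; all inputs are example values.
EPOCH_MINUTES = 6.4                        # 32 slots * 12 s
EPOCHS_PER_DAY = 24 * 60 / EPOCH_MINUTES   # = 225

churn_limit = 8        # validators admitted/exited per epoch (example)
queue_length = 18_000  # validators waiting at a hypothetical peak

per_day = churn_limit * EPOCHS_PER_DAY     # 1,800 validators per day
print(f"estimated wait: {queue_length / per_day:.1f} days")  # ~10 days
```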

PS: However, it is currently a low period, and Staking deposits and withdrawals are completed within ten minutes.

After changing to 2048, the speed of deposits and withdrawals during peak times will significantly increase.

PS2: Faster withdrawals may not necessarily be a good thing for L2; those that lock for 21 days are the real slow-release capsules 😂.

There is one more detail: existing validators all sit at 32 ETH, so the effect only kicks in as stakers gradually consolidate into 2,048-ETH validators.

2. Preparing for future L1 expansion.

ETH has over a million validators, more than any other blockchain in the universe. It is normal for its L1 peak throughput to be capped around 60 TPS; you cannot reach decisions quickly with a million validators involved.

However, as the nodes gradually develop to 2048 ETH, a model of [large node unanimous voting] + [small node random voting] may be adopted.

There will be a slight decrease in security (the staking amount is still sufficient), but performance can be greatly liberated.

PS: As shown in the diagram, adopting the Orbit model may be the approach for ETH L1 expansion.

This is also why it is crucial for Pectra to push for 2048.

In addition, there are two EIPs related to Staking that are beneficial for LST protocols, such as Lido, Etherfi, etc.

In the past, we often said that due to certain limitations in ETH's underlying mechanism, xxxx could only be forced to do so and so.

Now ETH has opened two doors for LSTs right in the protocol: EIP-6110 and EIP-7002. The former processes validator deposits directly on-chain; the latter lets smart contracts trigger validator withdrawals.

In simple terms, it allows LST protocols like Lido to automate and decentralize certain parts that were previously handled centrally and manually.

This is good news for LSTs, as it removes some of their shackles. Although the foundation may not favor LSTs, it is still paving the way for them.

3. Now, let's talk about ETH's preference for L2.

This upgrade roughly doubles Blob capacity and makes the capacity adjustable (EIP-7742 and EIP-7691).

As we all know, the Blob area mainly stores transaction backups from L2.

Although L2 is already very cheap now, starting from yesterday's Blob expansion, L2 will change from [very cheap] to [very, very cheap].
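For a sense of scale, the raw data-throughput numbers (blob size is fixed by the protocol; the target counts are the pre- and post-Pectra values):

```python
# Blob throughput before vs. after Pectra.
BLOB_BYTES = 4096 * 32   # 131,072 bytes = 128 KiB per blob
SLOT_SECONDS = 12

for label, target in (("pre-Pectra", 3), ("post-Pectra", 6)):
    kib_per_s = target * BLOB_BYTES / 1024 / SLOT_SECONDS
    print(f"{label}: target {target} blobs/block = {kib_per_s:.0f} KiB/s of L2 data")
# pre-Pectra: 32 KiB/s; post-Pectra: 64 KiB/s
```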

Of course, this further reduces ETH's already low daily burn, since the biggest gas payers on ETH today are the L2s posting their data.

4. Account abstraction (EIP-7702)

Thanks to everyone for the long-term popularization efforts. Although the name 'account abstraction' itself is very abstract 😂, everyone has some understanding of it now.

Wallets built on the new account abstraction can send batch transactions and pay fees in other tokens (for example, pay a USDT transfer's fee in USDT, fronted by a third party), and even support social recovery.
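One concrete, checkable detail: after EIP-7702, a delegated EOA's account code becomes the designator 0xef0100 followed by the delegate's address. A sketch with web3.py (the RPC URL and address are placeholders):

```python
# Check whether an address has an EIP-7702 delegation in effect.
# After a type-4 (set-code) transaction, the EOA's code becomes
# 0xef0100 || <20-byte delegate address>.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://YOUR_RPC_URL"))    # placeholder
addr = "0x0000000000000000000000000000000000000000"      # placeholder

code = w3.eth.get_code(Web3.to_checksum_address(addr))
if code[:3] == bytes.fromhex("ef0100"):
    print("EOA delegates to smart-account logic at 0x" + code[3:23].hex())
else:
    print("plain EOA (no 7702 delegation)")
```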

In summary, after the implementation of 7702, it opens a door for wallets that utilize account abstraction.

Wallets used to be fixed-function; with account abstraction, developers can freely add new features to them, much like a game that lets players build mods.

So, aside from what we mentioned above, there will likely be many new ways to play, with smoother experiences.

Perhaps after May 2025, newcomers will no longer have to struggle with wallets like MetaMask; they will register a wallet with a phone number and use it without acquiring gas in advance, solving the last-mile problem 🥳.
---
Recently $OBOL was listed on Binance; let's take the opportunity to walk through Obol's technical architecture.

First of all, the significance of the Obol project is to help increase fault tolerance in ETH Staking.

To put it imprecisely, Obol essentially creates a multi-signature mechanism specifically for managing Staking.

The most common scenario: if you run an Ethereum client and stake ETH directly, then once your node goes offline, you not only earn nothing for the downtime but may also be penalized with ETH deductions.

Therefore, preventing ETH Staking nodes from going offline has become a research topic for everyone.

Obol's solution is:

While running the ETH client, you run an additional Obol client (middleware).

After running the Obol client, you can choose 3-10 node operators to jointly manage your validator, so that even if a few of them go offline, the rest keep it operating normally.

PS: Of course, theoretically, you can also run multiple servers yourself or have your friends help you run them.

However, while this process seems easy, it has its challenges in implementation.

The ETH Staking mechanism was already set in stone, and its original design reserved no space for DVT (Distributed Validator Technology) protocols, so DVT protocols must work within the constraints of the existing mechanism.

Obol's workaround is to have the nodes run an extra client, forming an Obol network that operates independently of the ETH network.

(Charon is the name of the Obol client)

This network is a peer-to-peer network that solves the communication security issue between different nodes.

Then, after the nodes in the same group complete mutual handshaking, they will hold a DKG ceremony.

DKG stands for Distributed Key Generation, and the design achieves (taking 3-of-4 signatures as an example):

1. No intermediaries
2. The four nodes never learn each other's key fragments
3. Any three nodes online can perform the validator's duties

This way, the key material lives only locally on the nodes. Moreover, Obol never performs a [splitting] step: each node generates its own fragment locally, so the full key is never assembled or exposed on the network.
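For intuition about the threshold property, here is a toy dealer-based Shamir 3-of-4 sharing in Python. Note the deliberate difference from Obol: in a real DKG no party ever holds the full secret, whereas this toy has a dealer; it only demonstrates that any 3 of the 4 fragments can reconstruct the key:

```python
# Toy 3-of-4 threshold sharing (Shamir). CAVEAT: a real DKG has no
# dealer, so the full `secret` below would never exist in one place.
import random

PRIME = 2**127 - 1  # demo-sized prime field

def make_shares(secret: int, threshold: int = 3, n: int = 4):
    # Random polynomial of degree threshold-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

secret = 123456789
shares = make_shares(secret)
assert reconstruct(shares[:3]) == secret   # any 3 of 4 suffice
assert reconstruct(shares[1:]) == secret
print("3-of-4 reconstruction works")
```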

Furthermore, ETH Staking itself involves two keys, the [asset (withdrawal) key] and the [validator (signing) key].

As with non-custodial Staking, the DVT protocol is only responsible for the validator key; it never touches the asset key, which stays with the ETH holder at all times.

So even if multiple parties in the DVT protocol collude, the worst they can do is disrupt the validator's duties; technically they cannot outright steal your staked ETH and cause a loss of principal.

Conversely, if your node accidentally goes offline, the other three can still keep your validator alive.

There are also some interesting small design choices. For example, Obol lets each client decide on upgrades at its own discretion, so different versions can run side by side without a coordinated upgrade.

So when Obol releases a new version, the 3-of-4 that have not upgraded can keep running, as long as the clients within a cluster stay consistent. This minimizes the chance of a fault introduced by Obol itself.

This is how Obol increases the fault tolerance of ETH Staking through its technical principles.
---
What optimizations has Binance made recently? 😂
Withdrawals have suddenly become very fast.
After entering the verification code,
it takes almost 2 seconds for the funds to arrive in the on-chain wallet.
---
Still the same old saying: true profit comes from
Facts ✔ Public ❌ Self ✔
(it is true, the crowd does not know it, and you do)

Whereas a trade like 【Binance delists it, so short it】 looks very much like:
Facts ✔ Public ✔ Self ✔

Three ✔s mean thin profit margins.
From the moment you participate,
there is no big cake to share;
on the contrary, a slight mistake may turn it into a loss.

In fact, there was no need to short it in the first place;
the risk and the reward are not proportional.

And if you run into something like the recent Alpaca,
where someone can absorb the delisting announcement and market-make in reverse,
then the pumpers have forcibly created their own:
Facts ✔ Public ❌ Self ✔
and are about to capture the maximum profit instead.