Is data annotation, this 'hard and tiring work', quietly becoming a hot commodity? With over $11.2 million in funding led by Polychain, @OpenledgerHQ aims to address the long-neglected pain point of 'data value distribution' through its unique PoA + infini-gram mechanism. Let's explore this from a technical perspective:
1) To be honest, the biggest 'original sin' in the current AI industry is the unfair distribution of data value. OpenLedger's PoA (Proof of Attribution) aims to establish a 'copyright tracking system' for data contributions.
Specifically: Data contributors upload content to specific domain DataNets, and each data point is permanently recorded along with contributor metadata and content hash.
When a model is trained on these datasets, attribution happens during inference, i.e., when the model generates output. PoA tracks which data points influenced that output by analyzing match spans or impact scores, and these records determine each contributor's proportional influence.
When the model generates revenue through inference, PoA ensures that profits are accurately distributed based on each contributor's influence—creating a transparent, fair, and on-chain reward mechanism.
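To make that split concrete, here is a minimal sketch in Python (my own illustration, not OpenLedger's actual contracts or API) of how inference revenue could be divided pro rata by attribution scores:

```python
# Hypothetical sketch of PoA-style revenue sharing: attribution scores per
# contributor are normalized and used to split inference revenue. The names
# and numbers are illustrative, not OpenLedger's actual parameters.

def distribute_revenue(revenue: float, attribution_scores: dict[str, float]) -> dict[str, float]:
    """Split `revenue` among contributors in proportion to their attribution scores."""
    total = sum(attribution_scores.values())
    if total == 0:
        return {contributor: 0.0 for contributor in attribution_scores}
    return {
        contributor: revenue * score / total
        for contributor, score in attribution_scores.items()
    }

if __name__ == "__main__":
    # Example: one inference call earned 10 USD; three data points influenced the output.
    scores = {"alice": 0.62, "bob": 0.25, "carol": 0.13}   # influence / match scores
    payouts = distribute_revenue(10.0, scores)
    print(payouts)  # roughly {'alice': 6.2, 'bob': 2.5, 'carol': 1.3}
```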
In other words, PoA addresses the fundamental contradiction in data economics. The logic in the past was simple and crude—AI companies obtained massive amounts of data for free and then made a fortune through model commercialization, while data contributors received nothing. But PoA realizes 'data privatization' through technical means, allowing each data point to generate clear economic value.
I believe that once this shift from 'free-riding' to 'distribution according to labor' is successfully implemented, the incentive logic for data contribution will change completely.
Moreover, PoA adopts a layered strategy for attribution across models of different scales: for small models with millions of parameters, the impact of each data point can be estimated via influence-function analysis at a bearable computational cost, but for medium and large models this approach becomes infeasible and inefficient. This is where the powerful Infini-gram comes into play.
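For intuition on the small-model case, here is a toy, hedged sketch of gradient-based attribution: it uses a TracIn-style gradient dot product as a cheap stand-in for the full influence function (which would also require an inverse-Hessian term), with a made-up linear model and data:

```python
# Toy illustration of gradient-based data attribution on a small model.
# A gradient dot product is a cheap proxy for the full influence function;
# the model and data below are invented purely for demonstration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                     # 100 training points, 5 features
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + rng.normal(scale=0.1, size=100)  # noisy targets

# "Train" the small model (ordinary least squares stands in for training).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def grad_loss(x, target, weights):
    """Gradient of one example's squared error with respect to the weights."""
    return 2.0 * (x @ weights - target) * x

# Attribution: score each training point by how well its loss gradient aligns
# with the gradient of a query example the model is asked about.
x_query, y_query = X[0], y[0]
g_query = grad_loss(x_query, y_query, w)
scores = np.array([grad_loss(X[i], y[i], w) @ g_query for i in range(len(X))])

print("most influential training indices:", np.argsort(-scores)[:5])
```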
2) So here is the question: what exactly is infini-gram? The problem it aims to solve sounds rather extreme: accurately tracing the data source of every output token in medium and large parameter black-box models.
Traditional attribution methods rely mainly on analyzing the model's influence function, but they basically fall apart on large models. The reason is simple: the larger the model, the more complex its internal computations, and the analysis cost grows exponentially, making the computation infeasible and inefficient. That is entirely unrealistic for commercial applications.
Infini-gram completely changes the approach: since the internal workings of the model are too complex, it directly searches for matches in the raw data. It builds an index based on suffix arrays, using dynamically selected longest matching suffixes instead of traditional fixed-window n-grams. Simply put, when the model outputs a certain sequence, Infini-gram will identify the longest exact match in the training data for each Token context.
The performance data brought about by this is indeed impressive; for a dataset of 1.4 trillion Tokens, querying takes only 20 milliseconds, and storage is just 7 bytes per Token. More importantly, there is no need to analyze the internal structure of the model or perform complex calculations to achieve precise attribution. For AI companies that view models as trade secrets, this is practically a tailor-made solution.
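To show the matching logic only (the real system builds a suffix-array index over trillions of tokens; this naive scan is purely illustrative), a toy version might look like this:

```python
# Toy illustration of the infini-gram idea: for a generated token, find the
# longest suffix of its context that appears verbatim in the training corpus.
# A production index uses suffix arrays; this brute-force scan only shows the logic.

def longest_suffix_match(context: list[str], corpus: list[str]) -> tuple[int, list[str]]:
    """Return (match_length, matched_suffix) for the longest context suffix found in corpus."""
    for start in range(len(context)):            # try the longest suffix first
        suffix = context[start:]
        n = len(suffix)
        for i in range(len(corpus) - n + 1):
            if corpus[i:i + n] == suffix:
                return n, suffix
    return 0, []

corpus = "the cat sat on the mat and the dog sat on the rug".split()
context = "yesterday the dog sat on the".split()
length, suffix = longest_suffix_match(context, corpus)
print(length, suffix)   # 5 ['the', 'dog', 'sat', 'on', 'the']
```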
It's worth noting that existing data attribution solutions are either inefficient, lack precision, or require access to the internal model. Infini-gram has found a balance across these three dimensions.
3) Additionally, I feel that OpenLedger's concept of DataNets (on-chain datasets) is particularly trendy. Unlike traditional one-time data transactions, DataNets allow data contributors to sustainably enjoy a share of profits when their data is used in inference.
In the past, data annotation was a laborious task with thin rewards and was one-off. Now, it has transformed into an asset with continuous income, fundamentally changing the incentive logic.
While most AI + Crypto projects are still focusing on relatively mature directions like computing power leasing and model training, OpenLedger has chosen to tackle the hardest problem: data attribution. This technology stack may redefine the supply side of AI data.
After all, in an era where data quality reigns supreme, whoever can solve the data value distribution problem will be able to attract the highest quality data resources.
In summary, the combination of OpenLedger's PoA + Infini-gram not only solves technical challenges but, more importantly, provides a completely new value distribution logic for the entire industry.
As the compute arms race gradually cools and competition over data quality intensifies, this kind of technical path is unlikely to be a one-off. The track will see multiple solutions competing in parallel: some focusing on attribution precision, some on cost efficiency, others on usability, each exploring the optimal formula for data value distribution.
Ultimately, which company will emerge victorious will still depend on whether they can truly attract enough data providers and developers.
$ZKJ and $KOGE were both manipulated and plummeted, jolting awake a large number of retail investors who had been farming trading volume on Binance's Alpha platform. They had been farming volume to earn airdrop 'interest', but ended up losing even their principal. What exactly happened behind this? Who should pay for this disaster? Let me try to analyze it in depth:
1) Let's start with the gossip: what exactly happened? Binance launched a volume-farming airdrop campaign on the Alpha platform. ZKJ and KOGE, as popular projects featured on Alpha, drew a large number of retail investors who began farming volume aggressively in anticipation of airdrops.
However, just when the Alpha campaign was in full swing and retail funds were flooding in, a large holder withdrew about $3.6 million worth of tokens from OKX and dumped them directly on the market. ZKJ collapsed first, and because the KOGE pool was highly correlated, KOGE was dragged down with it. Retail investors saw the crash and panic-sold, accelerating the collapse. In the end, the users who had been 'diligently' farming volume on Binance Alpha for the airdrop never saw their returns and lost their principal as well.
2) Who should bear the responsibility in this 'evil process'?
The project team might say: we didn't ask the big holder to dump; that's market behavior. But a TGE valued at $2B being manipulated by a few large holders is simply unbelievable;
The dumping whale might say: my money is mine to do with as I please, and whoever loses money has only themselves to blame. But choosing timing this precise, knowing it would trigger a chain collapse, raises questions about intent;
The Binance Alpha platform could also say: we only provide a trading venue, and users bear their own risks. But without Binance's endorsement, would users have dared to pour in that much money? Now that things have blown up, can it really wash its hands of the matter?
You see, every stakeholder in this chain seems to have a reason to distance themselves, except for the retail investors who are left bewildered: Why did this hot Alpha Summer end before it even started? Where is my principal?
3) So where did the problem actually lie? On the surface, it seems like an accidental market risk, but in reality, it is a premeditated systematic harvesting:
The project party 'designed' a correlation trap, the large holder chose a precise 'timing' to strike, and Binance provided a 'legitimate' harvesting platform, while retail investors bore all the losses.
Specifically:
Binance Alpha made strategic errors under competitive anxiety. Seeing OKX making inroads in the Web3 DEX and wallet sectors, they were anxious as their on-chain trading share was being eroded. Alpha was originally designed quite well—to give project parties a testing period, users an observation period, and themselves a risk control period.
But Binance clearly overestimated its risk-control capabilities and underestimated the 'malice' of market participants. In a bid to quickly reclaim market share, it abruptly turned Alpha from an 'observation deck' into a 'battlefield'. To put it bluntly, wasn't Alpha designed less to make Binance better than to build a new 'Binance' on-chain?
Even more critically, Binance was overly idealistic about the market environment when designing the Alpha mechanism. The 'win-win-win' model it envisioned sounds beautiful: project teams test the market through Alpha, users farm volume for returns, and the platform earns fees. The logic sounds great, but it rests on a fatal assumption: that everyone will 'act according to the script'. The reality? In a liquidity-starved small-cap market, any artificially created heat is false prosperity, and one poke is enough to burst it.
Binance seems to have forgotten that while the Alpha platform provides convenience, it also creates a perfect 'hunting ground' for malicious operators—after all, with Binance's endorsement increasing credibility, an incentive mechanism gathering retail funds, and ample liquidity available for harvesting, everything was in place.
With this combination, Alpha—originally an observation area meant for 'risk isolation'—turned into a breeding ground for large holders to 'precisely harvest'.
In the end, the whole incident exposed the structural flaws of the current market ecology, where each participant was pursuing short-term profit maximization: project parties wanted to quickly exit liquidity for cashing out, large holders wanted precise arbitrage, trading platforms wanted to increase trading volume and revenue, and retail investors always wanted to grab excess returns. Everyone was calculating their own interests, ultimately leading to a 'perfect' defeat in a multi-party game.
But after all, this happened on the Binance platform, the world's largest exchange, which should have been the 'stabilizing force' for the entire industry, but instead became the main stage for this harvesting drama.
Binance's Alpha strategy essentially used its brand credibility to underwrite other people's harvesting. It wanted market share, trading volume, and fee income, and ended up shooting itself in the foot.
Alas, if 'top players' act this recklessly and no one takes responsibility for maintaining order, when can the industry truly mature? The answer is likely further away than we think.
Everyone says that Ethereum's Rollup-Centric strategy seems to have failed? And they deeply resent this L1-L2-L3 nesting doll game. But interestingly, the development of the AI track over the past year has also gone through a rapid evolution of L1—L2—L3. Comparing the two, where exactly does the problem lie?
1) The hierarchical logic of AI is that each layer addresses core issues that the upper layer cannot solve.
For example, L1's LLMs provide the foundational abilities of language understanding and generation, but logical reasoning and mathematical calculation are real weaknesses; so at L2, reasoning models specifically tackle this shortcoming. DeepSeek R1 can solve complex math problems and debug code, directly filling the cognitive blind spots of LLMs. On top of that groundwork, L3's AI Agents naturally integrate the capabilities of the first two layers, transforming AI from passive answering to active execution: planning tasks, calling tools, and handling complex workflows on their own.
You see, this kind of layering is “ability progression”: L1 lays the foundation, L2 addresses shortcomings, and L3 integrates. Each layer produces a qualitative leap based on the previous layer, and users can clearly feel that AI becomes smarter and more useful.
2) The hierarchical logic of Crypto is that each layer patches the problems of the previous layer, but unfortunately, this brings about brand new and larger problems.
For instance, when L1 public-chain performance falls short, the natural response is layer-2 scaling. But after a wave of layer-2 infra inflation, Gas is lower and TPS is higher, yet liquidity has fragmented and ecosystem applications remain scarce, so the surplus of layer-2 infra itself becomes the problem. Then come layer-3 vertical application chains, but each application chain governs itself and cannot enjoy the ecological synergies of the general-purpose infra chain, leaving the user experience even more fragmented.
As a result, this kind of layering has become "problem shifting": L1 has bottlenecks, L2 patches them, L3 scatters into chaos. Each layer merely moves the problem somewhere else, as if every solution were ultimately aimed at "issuing a token."
At this point, everyone should understand what the crux of this paradox is: AI layering is driven by technological competition, with OpenAI, Anthropic, and DeepSeek all competing fiercely in model capabilities; Crypto layering is hijacked by Tokenomics, where the core KPI of each L2 is TVL and token price.
So, fundamentally one is solving technical challenges, while the other is packaging financial products? There may not be a clear answer as to which is right or wrong; it may vary by perspective.
Of course, this abstract analogy isn’t so absolute; it just seems that the comparison of the development paths of the two is very interesting, a little thought exercise for the weekend 💆.
After observing various trends in the AI field over the past month, I found an interesting evolution logic: web2AI is moving from centralization to distribution, while web3AI is transitioning from proof of concept to practicality. The two are accelerating their integration.
1) First, let's look at the development dynamics of web2AI. Apple's local intelligence and the popularization of various offline AI models reflect that AI models are becoming lighter and more convenient. This tells us that the carriers of AI are no longer limited to large cloud service centers, but can be deployed on smartphones, edge devices, and even IoT terminals.
Moreover, Claude and Gemini achieve AI-AI dialogue through MCP, marking an innovation that signifies AI is transitioning from unitary intelligence to collaborative clusters.
The question arises: as the carriers of AI become highly distributed, how can we ensure data consistency and decision credibility among these decentralized AI instances?
Here lies a layer of demand logic: technological advancement (model lightweight) → changes in deployment methods (distributed carriers) → emergence of new demands (decentralized verification).
2) Now, let's look at the evolution path of web3AI. Most early AI Agent projects were primarily based on MEME attributes, but recently, the market has shifted from pure hype of launchpads to systematic construction of underlying architecture for AI layer1 infrastructure.
Projects are beginning to engage in specialized division of labor in various functional areas such as computing power, inference, data labeling, and storage. For instance, we previously analyzed @ionet focusing on decentralized computing power aggregation, Bittensor building a decentralized inference network, @flock_io making strides in federated learning and edge computing, @SaharaLabsAI focusing on distributed data incentives, and @Mira_Network reducing AI hallucinations through decentralized consensus mechanisms, etc.;
Here, a gradually clear supply logic emerges: cooling of MEME speculation (bubble clearing) → emergence of infrastructure demands (driven by necessity) → emergence of specialized division of labor (efficiency optimization) → ecological synergy effects (network value).
You see, the "shortcomings" of web2AI's demands are gradually approaching the "strengths" that web3AI can provide. The evolution paths of web2AI and web3AI are gradually intersecting.
Web2AI is becoming increasingly mature technologically, but lacks economic incentives and governance mechanisms; web3AI has innovations in economic models but lags behind web2 in technical implementation. Their integration can complement each other's advantages.
In fact, the integration of the two is giving rise to a new paradigm of AI that combines "efficient computing" off-chain and "rapid verification" on-chain.
In this paradigm, AI is no longer just a tool, but an economic identity participant; resources such as computing power, data, and inference will be focused off-chain, but a lightweight verification network will also be needed.
This combination is quite clever: it maintains the efficiency and flexibility of off-chain computing while ensuring credibility and transparency through lightweight on-chain verification.
Note: To this day, some still regard web3AI as a pseudo-proposition, but anyone paying close attention, with a bit of foresight, will understand that as AI develops rapidly there has never really been a hard line between web2 and web3; only human bias draws one.
On one side, Meta has spent $14.8 billion to acquire nearly half of Scale AI, and the entire Silicon Valley is exclaiming that the giant has revalued "data labeling" with sky-high prices; on the other side is the upcoming TGE of @SaharaLabsAI, still trapped under the Web3 AI bias label of "riding on concepts and unable to self-validate." What has the market overlooked behind this huge contrast?
First of all, data labeling is a more valuable sector than decentralized computing power aggregation.
The story of challenging cloud computing giants with idle GPUs is indeed exciting, but computing power is essentially a standardized commodity, with differences mainly in price and availability. Price advantages may seem to find gaps in the giants' monopoly, but availability is restricted by geographic distribution, network latency, and insufficient user incentives. Once the giants lower prices or increase supply, such advantages will be instantly wiped out.
Data labeling is completely different - it is a differentiated field that requires human wisdom and professional judgment. Each high-quality label carries unique professional knowledge, cultural background, cognitive experience, and more, and cannot be "standardized" and replicated like GPU computing power.
A precise cancer imaging diagnosis label requires the professional intuition of a senior oncologist; a seasoned financial market sentiment analysis relies on the practical experience of a Wall Street trader. This inherent scarcity and irreplaceability give "data labeling" a moat depth that computing power can never reach.
On June 10, Meta officially announced the acquisition of a 49% stake in data labeling company Scale AI for $14.8 billion, making it the largest single investment in the AI field this year. What is even more noteworthy is that Scale AI's founder and CEO, Alexandr Wang, will also serve as the head of Meta's newly established "Super Intelligence" research lab.
This 28-year-old Chinese-American entrepreneur dropped out of MIT to found Scale AI in 2016, and today his company is valued at about $30 billion. Scale AI's client list is a "dream team" of the AI world: OpenAI, Tesla, Microsoft, and the US Department of Defense are all long-term partners. The company specializes in providing high-quality data labeling services for AI model training, with over 300,000 professionally trained labelers.
You see, while everyone is still arguing about whose model scores higher, the real players have quietly shifted the battlefield to the source of the data.
A "cold war" over the future control of AI has already begun.
The success of Scale AI exposes a neglected truth: computing power is no longer scarce, model architectures are becoming homogenized, and what truly determines the ceiling of AI intelligence are those carefully "tamed" data. What Meta bought at a sky-high price is not an outsourcing company, but the "oil rights" of the AI era.
There's always a rebel against monopoly.
Just as cloud computing aggregation platforms attempt to disrupt centralized cloud computing services, Sahara AI is trying to completely rewrite the value distribution rules of data labeling with blockchain. The fatal flaw of the traditional data labeling model is not a technical issue, but an incentive design issue.
A doctor spends several hours labeling medical images and may receive only a few dozen dollars in fees, while the AI models trained on that data are worth billions of dollars; the doctor never sees a penny of that upside. This extreme unfairness in value distribution severely suppresses the willingness to supply high-quality data.
With web3 token incentives as a catalyst, data contributors are no longer cheap data "workers" but genuine "shareholders" of the AI LLM network. Clearly, web3's advantage in transforming production relations suits data labeling scenarios far better than it suits computing power.
Interestingly, Sahara AI's TGE happens to land right at the moment of Meta's expensive acquisition. Coincidence, or a carefully planned move? In my view, it actually reflects a market turning point: both Web3 AI and Web2 AI have moved from "competing on compute" to the crossroads of "competing on data quality."
While traditional giants build data barriers with money, Web3 is constructing a larger "data democratization" experiment with Tokenomics.
What should a good project's "ecological niche" look like? Recently, after deep conversations with several project founders, I found that most projects have not found their ecological niche:
1. High technical barriers, but there must be deep application scenarios. For example, ZK zero-knowledge proofs can be used for zkVM, cross-chain bridges, and verifiable computation, but considering the overall cost and efficiency, only zk-Rollup layer 2 expansion has been successfully implemented. Other directions lack deep application scenarios; no matter how advanced the technology is, it remains a castle in the air.
The FHE technology of Mind Network seems to have high barriers, but it has always struggled to find application scenarios. Most projects working on ZK co-processors also face this issue;
2. Market demands must be grounded, not driven by assumptions. Some projects often hypothesize that if 1% of users use our product, the commercial imagination space would be vast, but in reality, even this 1% demand could be fabricated.
Huma’s PayFi cuts into accounts receivable and cross-border payments, which is relatively reliable based on its compliance background. But projects claiming to be "decentralized Stripe"—what's wrong with traditional payments?;
3. The business model should be able to bridge B2B and B2C. Pure B2C enjoys the FOMO bonus but cannot survive the cold winter; pure B2B leaves retail investors feeling excluded, with high marketing costs. The smartest approach is to cater to both sides, with institutions paying and retail investors engaging, to navigate through cycles.
Backpack's wallet + exchange + NFT community caters to both institutions and retail. Particle's chain abstraction + application products also balance B2B and B2C. In contrast, pure infrastructure plays aiming to build a full-chain DA layer can only survive on institutional life support;
4. The business vision only needs to be "unfalsifiable," don’t be greedy for completeness. What does unfalsifiable mean? In the short term, you cannot prove that I am wrong. Some layer 2 projects always say, "We just need a wave of mass adoption to explode," but such grand visions without short-term verifiability equate to having no prospects.
KaitoAI may not even be an AI company, but it has tapped into the attention economy gap of KOLs and project parties, thus possessing unfalsifiability. Don’t just say you want to “redefine XXX”;
5. Timing is crucial. The intersection of three variables: technology maturity, market education, and competitive landscape, defines the time window. Why is AI Agent popular now? LLMs are sufficient, TEE is mature, and user acceptance has increased. Three years ago, discussing AI as a game-changer was pure hype.
Amidst the Solana MEME craze, there are still projects focusing on GameFi, hoping that sector rotation will favor them. Consider the operational logic behind MEME, and you’ll understand why projects like games, which have slow implementation and long cycles, struggle to get into the spotlight;
6. The ecosystem must have self-growth attributes and cannot rely on operations forever. Airdrops to attract users, grants to subsidize developers—these are just starting methods. The real network effect is that the more users there are, the greater the value, and the more developers there are, the stronger the ecosystem.
Those layer 2 projects that rely on point wars to maintain heat, from zkSync, Scroll to Linea, where are the real users after losing the opportunists?
Recently, many people have asked me how I view the new rising star of the Base ecosystem, @b3dotfun? Under the operation of the 'old team' from former Coinbase employees, can this L3 designed specifically for on-chain games truly solve the 'island' dilemma of Web3 games? Let me discuss this in detail:
—— A New Concept of Open Gaming in Web3
The concept of 'Open Gaming' proposed by B3 has a clear goal: to break the current isolated state of Web3 games, each operating independently. This is indeed the case; if you look at leading on-chain games like Axie Infinity, StepN, and Parallel, which of them is not engaged in a closed loop within their own ecosystems? Users need to switch chains, handle different tokens, and adapt to different wallets when playing different games, resulting in a fragmented experience.
B3's solution is to maintain the independence of each game while achieving interoperability through the GameChains architecture. For instance, Parallel's Prime chain and Infinigods' God chain can operate independently on B3, while still sharing liquidity and user incentives at the underlying level. This 'both-and' idea is quite idealistic; it mainly depends on whether it can be implemented.
Here comes the issue: for GameChains to truly achieve interoperability, various game parties need to reach consensus on technical standards, asset definitions, economic models, etc. This is not a technical problem; it is a matter of profit distribution.
Fortunately, B3 has an inherent advantage with the support of the Coinbase ecosystem, having traffic access through Base and regulatory endorsement, which can indeed attract many game parties to actively integrate.
—— Technical Combination of L3 Architecture + Chain Abstraction
From a technical architecture perspective, B3 has taken a relatively prudent yet distinctive route. As an L3 on Base, the cost per transaction is controlled at around $0.001, which is indeed very attractive for on-chain games.
B3's AnySpend technology allows users to access cross-chain assets instantly through a single account, without manually switching networks or bridging tokens.
In other words, it is essentially a hybrid model of 'sharding + cross-chain', where each GameChain maintains an independent state but achieves atomic cross-chain operations through B3's unified settlement layer, avoiding the security risks and time delays of traditional bridging solutions.
In plain terms, B3 is engaged in the business of game operation, not in the infrastructure business of selling shovels.
However, the competition in the L3 track is fierce. You have the Base ecosystem, while others have Arbitrum's Orbit and Polygon's CDK. B3's differentiated moat may lie in its deep understanding of game scenarios and unified entry points for operational services like https://t.co/8wAhsmoQuu.
—— Tokenomics Design and Business Model
B3's token distribution is relatively balanced: 34.2% for the community ecosystem, with only 19% released at TGE, and the remaining part has a 4-year lock-up plan to avoid short-term selling pressure. The application scenarios for $B3 include staking for GameChains rewards, funding game projects, and paying transaction fees, making the logic quite complete.
From a business model perspective, B3 adopts a 'platform economy + network effect' model. Unlike traditional game publishers that take a 30-70% cut, B3 attracts ecosystem participants through a lower transaction fee (0.5%) and token incentives. The key value flywheel is: more games integrate → more players gather → stronger network effects → higher demand for $B3 → more resources invested in the ecosystem.
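As a back-of-envelope illustration of why that fee gap matters (the sales figure below is hypothetical, not B3 data):

```python
# Rough comparison (illustrative numbers only) of developer revenue under a
# traditional 30% store cut versus a 0.5% transaction fee model.

gross_sales = 1_000_000   # hypothetical yearly in-game sales in USD

store_cut = 0.30          # traditional publisher / app-store take rate
platform_fee = 0.005      # the 0.5% transaction fee cited above

dev_revenue_store = gross_sales * (1 - store_cut)
dev_revenue_fee = gross_sales * (1 - platform_fee)

print(f"developer keeps under a 30% cut:  ${dev_revenue_store:,.0f}")  # $700,000
print(f"developer keeps under a 0.5% fee: ${dev_revenue_fee:,.0f}")    # $995,000
```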
What I am particularly concerned about is B3's positioning as 'the main circulating token of the entire chain game ecosystem'. Most blockchain games currently have their own token economies; how does B3 convince these projects to accept $B3 as a universal currency? From a valuation perspective, B3 resembles a 'game version of the App Store', with its value derived not only from technical fees but also from the scale effect of the ecosystem.
That’s all.
The biggest highlight of the B3 project is not its technological innovation, but its systematic attempt to solve the structural problems of the Web3 gaming industry. From the team's background and resource integration capabilities, the Coinbase team, support from the Base ecosystem, and $21 million in financing are all solid advantages. With 6 million active wallet users, over 80 integrated games, and 300 million cumulative transactions, it shows that B3 indeed has a solid strategy for user acquisition and ecosystem building.
B3's differentiation lies in taking an intermediate route that 'does not fully rely on a single game IP, nor does it engage purely in technical infrastructure', theoretically allowing for greater imaginative space, but it also faces the risk of being 'unsupported on both ends'.
Of course, the Web3 gaming track is still in the early exploratory stage; whether B3 can truly realize the vision of 'open gaming' depends on its ability to continuously attract quality game content and real users. After all, no matter how good the infrastructure is, its value ultimately relies on the prosperity of the application ecosystem.
About 3 months ago, $FLOCK @flock_io was delisted from Binance Alpha, but today some friends reported that FLock has been 'quietly' relisted. This wave of back-and-forth actions is likely to leave the project team confused.
There's no need to analyze the reasons; after all, exchanges have their own rights to explain listings and delistings. However, this does give the project team a direction:
In fact, there is no need to consider being listed on Binance spot or Alpha as an ultimate milestone and to pay an unbearable price for it.
The fact is that the key factors determining whether a ticker can be listed are real and valid holding addresses, trading volume, turnover rate, and other fundamental indicators. Theoretically, if there are concerns about not being listed, one only needs to continue building and strengthening these indicators. This is the bargaining chip for the project team to negotiate equally with any exchange.
FLock is a typical example. Originally, it performed very well on Bybit TGE and was listed on Binance Alpha. However, as the market trend declined, it was suddenly delisted. Who would have thought that just a few days ago, it was relisted on Upbit and Bithumb, with a single-day trading volume exceeding $100 million and a cumulative increase of over 222% within 5 days. Perhaps it was this outstanding performance that led it to be included in Binance Alpha again.
Therefore, rather than racking their brains to meet the conditions for listing on Alpha, the project team should focus on building a strong market performance for the project first.
Seeing https://t.co/sqAa0uamkt planning to issue tokens at a $4 billion valuation to raise $1 billion evokes mixed feelings. It's hard to imagine that a MEME launch platform is valued higher than most DeFi blue-chip protocols. Is such an exorbitant valuation reasonable? Here are a few points of view:
1) The inflated, bubble-like market valuation is quite unreasonable.
From the data, it's clear that https://t.co/sqAa0uamkt is indeed the largest beneficiary of this MEME super cycle, with monthly revenue peaks reaching tens of millions of dollars, a wealth creation effect that is phenomenal even in traditional internet terms.
However, the attention economy business of https://t.co/sqAa0uamkt relies on the short-term, irrational product of market MEME coin FOMO. In simple terms, it relies on "gambling" driven traffic monetization. This means that the monetization ability of https://t.co/sqAa0uamkt's business model is entirely a product of the short-term spotlight effect of the market, rather than a sustainable, normalized profit logic.
Based on this, is the $4 billion valuation reasonable? This pricing far exceeds that of most DeFi blue-chip protocols, making it hard to imagine how a platform that has been mocked for harvesting retail investors would have a valuation crushing that of innovative blue-chip protocols. Once the MEME craze fades or the market returns to rationality, the revenue model of https://t.co/sqAa0uamkt could collapse instantly. So, what exactly does https://t.co/sqAa0uamkt offer the market by issuing tokens at this moment of MEME cooling?
2) A fragile business moat is easily surpassed.
The success of https://t.co/sqAa0uamkt seems accidental but is actually inevitable; it seized the technical dividend of Solana's high performance and low cost, as well as the era dividend of MEME culture moving from niche to mainstream.
But how deep is this "first-mover advantage" moat? Technically, similar token issuance platforms can be quickly replicated; operationally, the MEME launch platform is essentially a traffic business, and once the hot spots shift or regulations tighten, the cost for users to migrate is extremely low.
More critically, https://t.co/sqAa0uamkt is highly dependent on the Solana ecosystem. Once there are significant changes in the Solana ecosystem, the fragility of its business model will be fully exposed. This business model, built on others' infrastructure, is essentially a "dependent" business, and given its extreme unsustainability, how can it support a $4 billion independent valuation?
3) Launchpad's tool-like nature makes it hard to build an ecosystem.
Currently, even if https://t.co/sqAa0uamkt is "making money," it is still just a "token issuance tool." Supporting a $4 billion valuation would require at least a large MEME economic ecosystem, and since that is clearly out of reach, it is hard to see what the $1 billion raise is actually for.
What many overlook is that transforming from a pure Launchpad into a complex MEME economic ecosystem contains an inherent paradox: the core of MEME culture is precisely simplicity, directness, and viral spread; piling on features will only cause the platform to lose its original "wildness."
In fact, balancing the "short and quick" characteristics of MEMEs with the platform's long-term value accumulation is challenging. Products that attempt to evolve from tools to platforms often lose themselves in the pursuit of being "big and complete," ultimately becoming something unrecognizable. With $1 billion in hand, https://t.co/sqAa0uamkt may be heading toward such a fate.
4) Extremely high valuations will disrupt the original value innovation system.
The extremely high valuation of https://t.co/sqAa0uamkt sends a dangerous signal to the entire industry: in the current Crypto ecosystem, the value of "traffic aggregation + speculative monetization" may exceed that of "technological innovation + infrastructure." One must ask, when creating gambling platforms is more profitable than promoting technological innovation, who will still chew on the tough bone of infrastructure? It is hard to imagine what kind of disastrous industry chain reaction this new value orientation will produce.
On one hand, more capital and talent will flood into MEME-related infrastructure construction; on the other hand, it may also exacerbate the industry's "entertainment" trend, marginalizing true technological innovation.
In summary, the token issuance by https://t.co/sqAa0uamkt is both a sign of the maturation of the MEME economy and possibly a signal of the collapse of industry values.
The key is whether it can truly build a sustainable business moat after obtaining massive capital. Otherwise, this distorted valuation will inflict tremendous damage on innovation across the industry, heralding a Crypto future that is more utilitarian, more short-sighted, and further removed from its technical, geek roots.
What's scarier than traditional KOLs calling trades is that trained AI Agents are directly entering the trading frontline. It's no longer a game of 'information asymmetry', but rather a crushing gap in 'execution power'.
No matter how powerful traditional KOLs are, they still rely on their own eyes to monitor the market and manually place orders, facing network delays from discovery of opportunities to completion of trades, as well as limitations of hand speed and uncertainty in decision-making;
But AI Agents compress this entire process to milliseconds—multi-chain synchronous monitoring, algorithmic identification of arbitrage windows, automated execution of trades, all seamlessly connected.
So, is it only a matter of time before AI Agents replace traditional KOLs?
This time window is rapidly opening but is not yet fully established. From a technical foundation perspective, TEE (Trusted Execution Environment), DeFi liquidity infrastructure, and cross-chain protocol stacks are still in exploratory and adaptation phases:
1. Most AI Agents are still in the 'single-point breakthrough' stage, and the true Multi-Agent combat mode is still under construction;
2. The web3 industry still lacks unified API and standardized permission management across protocols;
3. Regulatory boundaries are still a mess; no one knows when a blanket crackdown might come, which makes many funding sources hesitant;
4. On-chain gas fees and MEV (Maximal Extractable Value) front-running costs still eat up most small arbitrage opportunities; only large funds can afford this game;
5. There is still a scarcity of technical teams that truly understand the native logic of Crypto and can write reliable Agents; most projects rigidly apply web2 thinking to web3 scenarios.
At this development speed, I estimate the second half of 2025 will be a watershed. To avoid becoming the 'chives' harvested in this AI trading war, there is only one way out: everyone must cultivate their own AI Agent assistants and shift completely to Agentic thinking when navigating Crypto.
This is not some lofty concept; it's about gradually outsourcing the tasks you used to personally monitor and trade manually to the AI Agents you've trained— from price monitoring, risk alerts, to multi-pool arbitrage and cross-chain trading, making AI Agents your 'digital avatars' in the on-chain world.
In other words, the era of traditional KOLs calling trades is coming to an end, and what follows will be the reign of Agentic opinion leaders.
Regarding the airdrop from @humafinance to Kaito Yappers, the intricacies behind it run much deeper than they appear on the surface. Three points:
1) The transition from "interaction-based farming" to the "algorithm-based farming" era. In the past, farming relied on "diligence"—opening multiple wallets, increasing interactions, and piling up TVL. Now, we have directly entered an era where "algorithm weight" and Mindshare are what matter.
Platforms like @KaitoAI and @cookiedotfun essentially create a "digital profile" for each KOL, quantifying the value of content, audience quality, interaction efficiency, and other impact dimensions through machine learning.
To some extent, this upgrades the KOL selection mechanism, which originally depended on "insider relationships" and "subjective judgment," to a precision targeting driven by AI data.
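As a toy sketch of what such a "digital profile" score might look like (the feature names and weights below are my own assumptions; Kaito's actual model is not public):

```python
# Hypothetical sketch of an algorithmic KOL "digital profile": a weighted score
# over content value, audience quality, and interaction efficiency. Feature
# names and weights are illustrative, not the platform's real model.

WEIGHTS = {"content_value": 0.5, "audience_quality": 0.3, "interaction_efficiency": 0.2}

def mindshare_score(profile: dict[str, float]) -> float:
    """Combine normalized (0-1) signals into a single mindshare score."""
    return sum(WEIGHTS[k] * profile.get(k, 0.0) for k in WEIGHTS)

kol_a = {"content_value": 0.9, "audience_quality": 0.7, "interaction_efficiency": 0.4}
kol_b = {"content_value": 0.3, "audience_quality": 0.4, "interaction_efficiency": 0.95}

print(round(mindshare_score(kol_a), 3))  # 0.74 - depth-heavy account
print(round(mindshare_score(kol_b), 3))  # 0.46 - engagement-farming account
```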
However, early algorithm assessments are often unsatisfactory. For example, small-scale collusion is still possible through mutual-liking groups, follower boosting, and comment exchanges, which has drawn a short-term influx of farming studios eager to seize the opportunity.
But remember, algorithms can be continuously optimized. In interaction-based farming, minding the link between IP addresses and assets could help avoid being flagged; with algorithm-based farming, especially under "black box" conditions, the odds of being flagged only go up. Treating this as a "farming business strategy" calls for caution.
2) The "layered differentiation" of the KOL ecosystem on platforms will accelerate. Frankly speaking, top KOLs already have the alpha research capabilities and opportunities to participate deeply in quality projects early on, and can monetize their influence through consulting, investments, and on-chain finance.
Therefore, these major influencers tend to be quite "aloof", posting infrequently and interacting cautiously, which may lead them to be classified as "inactive users" in the eyes of algorithms. Meanwhile, some mid-tier and lower-tier KOLs are frequently reposting, commenting, and interacting daily, achieving high scores in the algorithm's activity ratings.
This actually exposes a core bug in the current algorithm assessment—mistaking "quantity" for "quality" and treating "frequency" as "value". In the short term, this will indeed bring a wave of benefits to those KOLs willing to frequently promote projects.
However, algorithms ultimately need to rely on objective impact assessments to succeed. As algorithms continue to optimize, "interaction frequency" will inevitably give way to the weight of "content value"; otherwise, top KOLs and high-quality projects will leave, which is something platform providers controlling the algorithm black box definitely do not want to see. The key is how to balance content value and interaction frequency, avoiding serious differentiation in KOL resources.
3) The "implicit inflation" of marketing costs for project parties has already begun. On the surface, moving from finding agencies to package KOL resources to directly using platforms like Kaito for precise targeting indeed cuts out the middleman. But what is the reality? Project parties must pay booth fees to participate in this "algorithm arms race," and as competition for bidding positions intensifies, the hidden costs will only rise.
Even worse, algorithms overly rely on quantitative indicators—like the interaction numbers of Smart Followers—while neglecting truly valuable elements, such as content depth, audience quality, and brand match. The issues caused by algorithm bias are quite apparent:
First, marketing ROI declines—airdropping to accounts whose influence value does not match will definitely yield lower conversion results than expected; second, brand reputation risks—overemphasizing interaction quantity over content quality might damage the market perception that project parties have painstakingly established.
Of course, this is also a dynamic game process. Algorithm models will continuously optimize, and project parties can intervene manually, ultimately returning to the two-way match of brand value and user value, allowing the business strategies of algorithm platforms like Kaito and Cookie to truly grow and strengthen.
Note: Personally, I have obtained my Yap points in a very relaxed manner; in the past week, I have clearly felt that content with substance has been weighted more heavily, and my ranking is quite high. Such AI algorithm platforms play a significant role in the allocation of “ecological niches” in the attention economy's Mindshare.
However, it’s best to avoid monopolization; therefore, supporting more platforms like Cookie to join the competitive landscape is very necessary. (I have 10 beta test invitation codes; DM or comment if you need one.)
Just finished chatting with a few big names in the industry, and everyone is discussing the same thing...
The theory of "a four-year cycle" is completely outdated!
If you're still holding onto the idea of getting rich quickly, still fantasizing about "the opportunities to effortlessly win in a bull market by tenfold or hundredfold," you may have already been completely abandoned by the market. Why?
Because smart money discovered a secret long ago: the current Crypto market no longer runs on a single playbook; four completely different gameplay cycles are running simultaneously 🧵:
The rhythm, gameplay, and logic of profit-making for each gameplay cycle are completely different.
—— Bitcoin Super Cycle: Retail investors exit, a decade-long slow bull market may become the norm
The "script" of the traditional halving cycle? Completely ineffective! BTC has evolved from being a "speculative target" to an "institutional allocation asset". The funding volume and allocation logic from Wall Street, publicly listed companies, and ETFs are fundamentally different from the retail investors' approach of "bull-bear switching".
Where is the key change? Retail holdings are being handed over at scale, while institutional money, represented by MicroStrategy, floods in. This fundamental restructuring of the holder structure is redefining BTC's price discovery mechanism and volatility profile.
What are retail investors facing? The dual squeeze of "time cost" and "opportunity cost". Institutions can sit through a 3-5 year holding period waiting for BTC's long-term value to materialize; retail investors obviously lack that kind of patience and capital to position for it.
In my view, we may well see a BTC super slow bull lasting more than a decade: annualized returns holding steady in the 20-30% range, with noticeably reduced intraday volatility, resembling a steadily growing tech stock. As for how high BTC's ceiling goes, from where retail sits today it is hard even to predict.
—— MEME Attention Short Wave Cycle: From Slum Paradise to Professional Harvesting Grounds
The MEME long-bull thesis actually holds: whenever technical narratives go quiet, MEME narratives step in to fill the market's "boredom vacuum" with their own rhythm of emotion, capital, and attention.
What is the essence of MEME? It is a speculative vehicle for "instant gratification". No need for white papers, no need for technical validation, no need for roadmaps, just a symbol that can make people smile or resonate is enough. From cat and dog culture to political MEMEs, from AI concept packaging to community IP incubation, MEME has evolved into a complete "emotional monetization" industrial chain.
The deadly reality is that MEME's "short, flat, and fast" characteristic has made it a barometer of market sentiment and a reservoir for capital. When funds are abundant, MEME becomes the first choice testing ground for hot money; when funds are scarce, MEME turns into the last refuge for speculation.
However, reality is harsh; the MEME market is evolving from "grassroots carnival" to "professional competition". The difficulty for ordinary retail investors to profit in this high-frequency rotation is increasing exponentially.
Stories of small-time players idly minting legends may become increasingly rare, as studios, bot-running "scientists", and whales move in, turning this once "slum paradise" hyper-competitive.
—— Technological Narrative Leap Long Cycle: Bottom-Fishing the Valley of Death, 10x in Three Years as the Starting Point?
Has the technical narrative disappeared? Not at all. Innovations with real technical barriers, such as Layer 2 scaling, ZK technology, AI infrastructure, etc., require 2-3 years or even longer build time to see actual results. These types of projects follow the technology maturity curve (Gartner Hype Cycle), rather than the emotional cycle of the capital market—there is a fundamental time misalignment between the two.
The market's criticism of the technical narrative is entirely due to overvaluation when the project is still in the concept stage, and then undervaluation occurs during the "Valley of Death" phase when the technology truly begins to take shape. This determines that the value release of technical projects presents a non-linear leap characteristic.
For patient investors with technical judgment, laying out truly valuable technical projects in the "Valley of Death" phase may be the best strategy for obtaining excess returns. But the prerequisite is that you must endure a long waiting period and market trials, as well as potential mockery and ridicule.
—— Innovative Small Hotspot Short Cycle: 1-3 Month Window Period, Brewing the Major Uptrend Narrative
Before the mainline technical narrative takes shape, various small narratives rotate quickly, from RWA to DePIN, from AI Agent to AI Infra (MCP + A2A), each small hotspot may only have a 1-3 month window.
This fragmentation of narratives and high-frequency rotation reflects the current dual constraints of market attention scarcity and capital rent-seeking efficiency.
In fact, it is not difficult to find that typical small narrative cycles follow a six-stage model: "Concept Validation → Capital Testing → Public Opinion Amplification → FOMO Entry → Valuation Overdraft → Capital Withdrawal". Want to profit in this model?
The key is to enter at the "Concept Validation" to "Capital Testing" stage and exit at the peak of "FOMO Entry".
The competition between small narratives is essentially a zero-sum game for attention. However, there are technological correlations and conceptual progressions between narratives; for example, MCP (Model Context Protocol) and A2A (Agent-to-Agent) interaction standards in AI Infra are effectively a rebuild of the technical foundation beneath the AI Agent narrative. If subsequent narratives can carry earlier hotspots forward, form systemic upgrade linkages, and genuinely settle a sustainable value loop along the way, a super narrative on the scale of DeFi Summer may well emerge.
From the existing small narrative pattern, the AI infrastructure layer is most likely to achieve breakthroughs first. If MCP protocols, A2A communication standards, distributed computing, inference, data networks, and other underlying technologies can be organically integrated, there is indeed the potential to construct a super narrative similar to "AI Summer".
That's all.
In summary, recognizing the essence of these four parallel gameplay cycles is essential to finding suitable strategies within their respective rhythms. Undoubtedly, the single-minded "four-year cycle" thinking has completely failed to keep up with the complexity of the current market.
Adapting to the new normal of "multiple gameplay cycles running in parallel" may be the key to truly profiting in this bull market.
Recently, @MMTFinance's rapid rise on the Sui chain is truly impressive, with TVL surpassing $55 million in just one month, making it the fastest-growing project in Sui chain history. In the current DEX landscape, where homogeneity is severe and user stickiness is insufficient, how can Momentum break through quickly? The answer lies in its adoption of the ve(3,3) mechanism. Next, let me share my thoughts:
—— The DEX track urgently needs a new breakthrough point, and ve(3,3) comes at the right time.
In my view, the current DEX market faces two major dilemmas: 1) The incentives of the traditional AMM model are unsustainable, with most projects falling into a "mine, withdraw, sell" death spiral; 2) Liquidity fragmentation is severe, user loyalty is low, and there is a lack of long-term value capture mechanisms.
The emergence of the ve(3,3) mechanism provides solutions to these dilemmas. Simply put, it combines Curve's Vote Escrow locking mechanism with the (3,3) game theory of Olympus DAO, allowing users to lock tokens to gain voting rights, guiding protocol resource allocation through voting. Project teams can "bribe" to attract votes, with all transaction fees distributed to voters.
The brilliance of this mechanism lies in its construction of a self-reinforcing Flywheel: increased staking ratio → reduced selling pressure on tokens → price increase → improved market-making APR → attracting more liquidity → increased trading volume → increased fee rebates, forming a positive cycle. To some extent, this is the "holy grail" mechanism for the sustainable development of DeFi protocols.
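For readers unfamiliar with the mechanics, here is a minimal sketch of the Curve-style vote-escrow math that ve(3,3) systems reuse; the parameters are illustrative, not Momentum's actual contract values:

```python
# Minimal sketch of vote-escrow math: voting power scales with both the amount
# locked and the remaining lock time, and the pool's fees + bribes are paid out
# pro rata to the voting power cast on it. Numbers are illustrative only.

MAX_LOCK_WEEKS = 4 * 52  # 4-year maximum lock, as in veCRV

def voting_power(amount: float, weeks_remaining: int) -> float:
    """Voting power = locked amount scaled by remaining lock time / max lock."""
    weeks_remaining = max(0, min(weeks_remaining, MAX_LOCK_WEEKS))
    return amount * weeks_remaining / MAX_LOCK_WEEKS

def split_fees_and_bribes(total_rewards: float, voters: dict[str, float]) -> dict[str, float]:
    """Distribute a pool's fees + bribes pro rata to voting power."""
    total_power = sum(voters.values()) or 1.0
    return {name: total_rewards * power / total_power for name, power in voters.items()}

# Example: two lockers vote for the same pool; the longer lock earns a larger share.
alice = voting_power(1_000, weeks_remaining=4 * 52)   # full 4-year lock -> 1000.0
bob = voting_power(1_000, weeks_remaining=52)         # 1-year lock      -> 250.0
print(split_fees_and_bribes(500.0, {"alice": alice, "bob": bob}))
# {'alice': 400.0, 'bob': 100.0}
```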
—— Sui's technical foundation: The natural advantages of Move-based public chains.
Momentum's choice of Sui is not accidental. The ve(3,3) mechanism involves complex interactions of voting, "bribing", reward distribution, etc., which require extremely high underlying performance and security.
1) In terms of performance, Sui achieves 297,000 TPS through parallel transaction processing, supporting the high-frequency complex interactions of ve(3,3);
2) In terms of security, the resource ownership model and object-centric design of the Move language make dynamic weight management and complex state changes more secure and reliable;
3) In terms of user experience, Sui's sub-second confirmation ensures the timely execution of voting and reward distribution, which is crucial for the timeliness of the "bribing" mechanism.
In layman's terms, if we compare ve(3,3) to a precision flywheel machine, Sui is the high-performance engine tailor-made for this machine.
—— Strong backing: The dual buff of the Qatari royal family and foundation.
The cold start of a ve(3,3) mechanism is a classic chicken-and-egg problem that requires strong initial resource support. Momentum's investor lineup is impressive: @Coinbase Ventures, the @SuiNetwork Foundation, @jump_Crypto, and Varys Capital, which is backed by the Qatari royal family, as the lead investor.
More critically, founder @ChefMMT_X was involved in Meta's Diem project, which is the predecessor of Sui. This technical heritage has enabled Momentum to gain dual support from the Sui Foundation and Mysten Labs, not only in terms of funding but also in comprehensive ecological resource tilt.
In fact, successful ve(3,3) projects (like Aerodrome on the Base chain) cannot do without foundation support. The strong backing provides key assistance for Momentum to quickly accumulate TVL and activate the flywheel.
—— Ecological positioning: Not just a DEX, but also a resource distribution center for Sui.
From the performance of breaking into the top ten of Sui chain TVL in just one month, Momentum clearly has ambitions to become the DEX infrastructure of the Sui ecosystem. The ve(3,3) mechanism makes it not just a trading platform but more like an ecological traffic distribution center.
Through the "bribing" mechanism, new projects in the Sui ecosystem can obtain liquidity support through Momentum, which gives Momentum the potential to replicate Curve's success in the Ethereum ecosystem, becoming an important component of Sui's DeFi Lego.
Undoubtedly, in the competition of high-performance L1s, the chain that establishes a strong DeFi infrastructure first is more likely to gain ecological dividends, and Momentum is an important weapon for Sui in this competition.
That's all.
However, there are also significant challenges that need to be addressed. The recent @CetusProtocol hacking incident is a typical example; although MMT itself is secure, the overall risk aversion sentiment in the Sui ecosystem has led to a noticeable drop in TVL. It currently appears to be a code-level issue with Cetus, not involving the underlying security of Sui, but the transmission effect of ecological risk cannot be ignored.
To some extent, this also confirms the risks we mentioned earlier: firstly, the controversy over centralization at the Sui ecosystem level requires the market to digest for a while; secondly, the sustainability of the ve(3,3) mechanism itself still needs to be verified. Historically, many similar projects faced the risk of flywheel reversal after an initial FOMO, and once TVL growth slows down, the positive cycle may turn into a negative spiral.
However, crises often carry opportunities. As the risk-off sentiment from the Cetus incident dissipates, a liquidity rebound may open a window for unaffected projects like Momentum. Finally, the upcoming July TGE will be a key test: token distribution and the design of long-term incentives will directly determine whether the project can grow sustainably.
Overall, Momentum's rapid rise through the ve(3,3) mechanism on Sui represents an important attempt at DeFi innovation for Move-based public chains. This is both a test that Sui must face and a crucial examination of whether the old DeFi model of ve(3,3) can regain vitality on new soil.
Recently, the Binance listing of @SpaceandTimeDB has sparked widespread discussion in the market, reigniting the previously rather niche narrative of 'ZK data infrastructure.' As a bridge connecting smart contracts and off-chain data, $SXT is attempting to address a more fundamental pain point of the on-chain world: the trustworthy execution and verification of data. Here are my observations:
1) Essentially, Space and Time is a decentralized Layer 1 blockchain (SXT Chain), but its core value lies not in building a general-purpose smart contract platform; it takes a different path and focuses on one specific problem: trustworthy data processing under zero-knowledge proofs. Its killer feature, Proof of SQL, takes tamper-proof data tables from theory to practice, using ZK technology to guarantee both the verifiability of queries and the integrity of the data.
From another perspective, this completely overturns the inherent thinking of handling data in the blockchain world: in the past, smart contracts either had to endure exorbitant Gas costs for on-chain storage, or were forced to trust centralized APIs and oracles. SXT provides a third path: building a dedicated decentralized data layer that combines on-chain cryptographic commitments and off-chain SQL execution, making data processing both secure and trustworthy, as well as efficient and low-cost.
2) From a technical architecture perspective, the SXT network consists of three key components:
1. Indexer Nodes: acting as data collectors, responsible for obtaining real-time and historical data from mainstream blockchains and converting it into SQL relational format;
2. Prover Nodes: acting as the computational engine, handling query requests, executing ZK-proven SQL queries on tamper-proof tables, and generating sub-second ZK proofs;
3. SXT Chain Validators: serving as data notaries, maintaining network integrity, handling data insertion, and collectively endorsing on-chain cryptographic commitments through BFT consensus.
This architecture allows on-chain storage to only retain cryptographic commitments (similar to data fingerprints), rather than complete data, significantly reducing on-chain storage costs. More importantly, these commitments are updatable/homomorphic, meaning that when updating data, there is no need to recompute the fingerprint of the entire data set, only to overlay changes on the original fingerprint—this is the key move to solving the performance bottleneck encountered by traditional ZK solutions in big data processing.
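As a rough intuition for what "updatable/homomorphic" means here, below is a toy Pedersen-style commitment in Python: commit to a column total, then fold in a new row by multiplying commitments, without re-reading the old data. This is a textbook construction with deliberately small, insecure parameters; it is not SXT's actual commitment scheme.

```python
# Toy additively homomorphic (Pedersen-style) commitment.
# Deliberately simplified parameters for illustration; NOT SXT's production scheme.
import secrets

P = (1 << 127) - 1   # Mersenne prime 2**127 - 1, used as a toy modulus
G, H = 3, 7          # fixed "generators" chosen only for the demo

def commit(value: int, blinding: int) -> int:
    """C = g^value * h^blinding mod p : a compact 'fingerprint' of the data."""
    return (pow(G, value, P) * pow(H, blinding, P)) % P

# Commit to an existing table column sum.
old_sum, old_blind = 1_000, secrets.randbelow(P)
c_old = commit(old_sum, old_blind)

# A new row arrives; commit to the delta only.
delta, delta_blind = 42, secrets.randbelow(P)
c_delta = commit(delta, delta_blind)

# Homomorphic update: multiply commitments instead of recomputing over all data.
c_new = (c_old * c_delta) % P
assert c_new == commit(old_sum + delta, old_blind + delta_blind)
print("fingerprint updated by folding in the delta, without re-reading the table")
```

The multiplicative update is the whole trick: the on-chain fingerprint can track a growing table at constant cost per insert, which is what lets the validators keep commitments current without the chain ever storing the data itself.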
3) SXT's Proof of SQL is not just a technical innovation but also solves the core pain points of current ZK proof systems when dealing with large-scale data:
1. Scalability: traditional ZK proofs are inefficient on large datasets, whereas SXT claims millisecond-level ZK proof generation; if on-chain verification really costs as little as ~150k gas, that is a meaningful breakthrough for the ZK proving field (see the rough cost math after this list);
2. Developer Friendliness: providing developers with a familiar SQL interface rather than complex ZK circuit programming, significantly lowering the development threshold;
3. Universality: applicable not only to SXT's own decentralized database but also to traditional databases (such as PostgreSQL, Snowflake), expanding the technological applicability.
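For a sense of scale, here is the back-of-the-envelope cost of one on-chain verification at ~150k gas. The gas price and ETH price are assumptions made purely for the arithmetic, not quotes from the project.

```python
# Rough cost of one on-chain proof verification at ~150k gas.
# Gas price and ETH price below are assumed inputs, not live quotes.
VERIFY_GAS = 150_000
gas_price_gwei = 5           # assumed L1 gas price
eth_price_usd = 2_500        # assumed ETH price

cost_eth = VERIFY_GAS * gas_price_gwei * 1e-9   # gwei -> ETH
cost_usd = cost_eth * eth_price_usd
print(f"~{cost_eth:.6f} ETH (~${cost_usd:.2f}) per verified query")
# ~0.000750 ETH (~$1.88) per verified query under these assumptions.
```

At those assumed prices a verified query costs on the order of a dollar or two of L1 gas, which is what makes per-query verification commercially plausible rather than a research demo.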
From an abstract perspective: SXT is essentially creating a 'trusted data computing platform' for the blockchain world, breaking through the inherent data blind spots of smart contracts, allowing on-chain applications to no longer be data islands. It is like a 'query co-processor' that resolves the inherent limitations of smart contracts in directly accessing historical on-chain data, cross-chain data, off-chain data, or complex aggregated data.
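To picture how an application would consume such a "query co-processor," here is a hypothetical Python sketch of the request/verify flow: submit SQL, receive rows plus a ZK proof, and check the proof against the commitment stored on-chain. The class, method names, and endpoint are invented for illustration; they are not SXT's actual SDK.

```python
# Hypothetical client flow for a verifiable-SQL data layer.
# Class and method names are invented for illustration; not SXT's real SDK.
from dataclasses import dataclass

@dataclass
class VerifiedResult:
    rows: list          # query output
    proof: bytes        # ZK proof that the SQL ran correctly over committed data
    commitment: bytes   # data "fingerprint" the proof is bound to

class VerifiableSqlClient:
    def __init__(self, prover_url: str, onchain_commitment: bytes):
        self.prover_url = prover_url
        self.onchain_commitment = onchain_commitment

    def query(self, sql: str) -> VerifiedResult:
        # In a real system this would be a network call to a prover node.
        raise NotImplementedError("network call to a prover node goes here")

    def verify(self, result: VerifiedResult) -> bool:
        # 1) the proof must bind to the commitment the chain actually stores;
        # 2) the proof must verify against the query result.
        return (result.commitment == self.onchain_commitment
                and zk_verify(result.proof, result.rows))

def zk_verify(proof: bytes, rows: list) -> bool:
    """Placeholder for the actual cryptographic verification; not implemented here."""
    ...

# Intended usage (illustrative):
# client = VerifiableSqlClient("https://prover.example", commitment_from_chain)
# res = client.query("SELECT SUM(volume) FROM dex_trades WHERE chain = 'ethereum'")
# assert client.verify(res)
```

The key design point is that the consumer never trusts the prover's answer by itself: the answer is only accepted if the proof checks out against the commitment anchored on-chain.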
4) Setting aside the technical narrative, SXT's commercial value may deserve more attention. Its application scenarios almost cover all current hot topics in Web3:
1. ZK-Rollup/L2 optimization: serving as an L2 data layer, reducing Gas costs and improving scalability;
2. Secure cross-chain bridging: providing multi-chain data verification to strengthen bridge security;
3. Decentralized DApp backends: replacing traditional centralized backends with verifiable data services;
plus data-driven DeFi, RWA, GameFi, and SocialFi, and in general any application running into on-chain storage bottlenecks.
5) Finally, let's look at the design of SXT's token economic model, which I find quite reminiscent of a traditional PoS system combined with a data marketplace:
1. Validators: stake SXT to participate in network security, earning network fees and token emission rewards; 2. Table Owners: create and maintain tamper-proof data tables, profiting through insertion fees and query fees; 3. Users: pay query fees to use network services.
The most brilliant aspect of this model is the division of 'query fees' between data providers and validators, forming a self-driven data market ecosystem—the more valuable the data, the larger the query volume, the more all parties benefit, thus attracting more high-quality data into the ecosystem, completing a positive cycle.
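A toy back-of-the-envelope version of that fee loop is sketched below. The split percentages are assumptions chosen for the arithmetic, not SXT's published parameters.

```python
# Toy query-fee split; the percentages are assumptions, not SXT's published parameters.
QUERY_FEE_SXT = 10.0          # fee a user pays for one query (illustrative)
TABLE_OWNER_SHARE = 0.5       # assumed share to the table owner (data provider)
VALIDATOR_SHARE = 0.3         # assumed share to validators / provers
TREASURY_SHARE = 0.2          # assumed protocol / treasury share

def split_fee(fee: float) -> dict:
    shares = {
        "table_owner": fee * TABLE_OWNER_SHARE,
        "validators": fee * VALIDATOR_SHARE,
        "treasury": fee * TREASURY_SHARE,
    }
    assert abs(sum(shares.values()) - fee) < 1e-9
    return shares

# The flywheel: more valuable data -> more queries -> more fees to every role.
for daily_queries in (1_000, 10_000, 100_000):
    total = daily_queries * QUERY_FEE_SXT
    print(daily_queries, split_fee(total))
```

Whatever the real ratios turn out to be, the mechanism only works if the data provider's share is large enough to keep high-quality tables flowing in; that parameter choice is the economic heart of the model.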
In summary, the greatest innovative value of $SXT lies in creating a solution that combines the traditional database tool SQL with the Web3 zero-trust architecture, enabling the blockchain ecosystem to handle more complex data logic. This not only addresses the 'inherent data shortcomings' of smart contracts but also provides a feasible path for enterprise applications with strict requirements on data quality and processing capabilities to go on-chain.
With the project's deep binding to leading ecosystems like zkSync, Avalanche, and Chainlink, coupled with the prestigious Binance brand, SXT has indeed secured a 'ticket' to challenge mainstream infrastructure. Of course, challenges are also evident, as the technical implementation still needs to overcome the inherent contradictions between decentralization and performance, and market education and developer adoption will take time.
With @solana's Alpenglow protocol upgrade set to cut transaction confirmation times to an astonishing 100-150 milliseconds, the competition between the two public-chain giants, Solana and Ethereum, enters a new stage: no longer just a race on technical metrics, but a real test of business models and application adoption.
Solana's challenge: Having achieved millisecond-level confirmation speed, Solana faces an ecological landing challenge similar to that once faced by Ethereum. When your engine is already faster than the demands of the track, what should come next?
Solana needs to prove to the market what truly revolutionary applications can be brought by millisecond-level confirmations, aside from MEME. Currently, most DeFi and NFT applications are running well in sub-second confirmation environments, making it difficult to fully utilize Solana's technical strengths. It needs to innovate categories of applications that can only be realized with millisecond-level confirmations; otherwise, its technological advantage will be greatly underestimated.
Ethereum's challenge: With the Layer2 ecosystem taking shape and performance bottlenecks gradually alleviated, Ethereum needs to more strongly demonstrate the actual value of its decentralization and security advantages in terms of institutional adoption. How much is the consensus on decentralization and security really worth?
Ethereum needs to prove that its security and decentralization are not just technical concepts, but core competitive advantages that can translate into real commercial value. Especially in the fields of stablecoins, DeFi, and RWA (real-world assets), Ethereum needs to accelerate the integration of traditional financial institutions.
A symbiotic and prosperous outcome is that the two will no longer fight to the death, but instead will have functional division of labor: Solana is likely to become the preferred platform for 'performance-intensive' applications, while Ethereum consolidates its position as the 'value storage layer.' For the entire industry, this competition will drive blockchain technology towards a more diverse and mature future.
While chatting with some builders in the AI + Crypto space, I found that everyone has deep grievances about the definition of 'AI Agent', whereas the term 'AI Copilot' seems more pragmatic?
1) An AI Agent is, by definition, an autonomous AI: the goal is for it to analyze, reason, and even execute tasks independently, without user supervision.
This sounds cool, but when achieving it in practice means constantly fine-tuning, correcting, and backstopping the AI's workflow, it becomes clear that the AI Agent concept may be overly idealistic. It would be more pragmatic to admit that AI autonomy has been over-marketed and to frame these products as AI Copilots instead;
2) As the name suggests, the AI Copilot positions humans as the decision-makers and operators, while AI is merely a tool. Just like flying a plane, the co-pilot can handle many standardized procedures, but key decisions still need to be made by the captain.
Positioning AI this way sacrifices some of the 'intelligence' halo, and it places significant demands on the human side: the ability to write prompts, to combine various LLM tools, and to judge and manage AI hallucinations. But this approach maximizes the utility of AI tools and genuinely augments human capability with AI assistance.
3) Of course, downgrading the AI Agent to an AI Copilot is not a verdict on where the boundaries of AI's autonomous decision-making lie. Rather, it gives most people a golden window period to coexist and thrive with AI.
For us personally, rather than passively accepting the day when we are eventually replaced by AI, it is better to learn how to wield AI before it truly achieves superintelligence.
For entrepreneurs in the AI space, a mindset correction is in order: rather than blindly hyping AI autonomy, focus on making AI a better assistant, and avoid misleading users with promises of full autonomy and zero human intervention.
Ultimately, the debate of AI Agent vs AI Copilot is not just a naming issue but a fundamental reflection on the relationship between AI and humans throughout the AI industry.
The recent significant rise of Ethereum $ETH has led many to ask whether it is related to the Pectra upgrade. The answer is probably no.
The Pectra upgrade is more like the "finishing touches" of the Cancun upgrade, primarily involving some underlying optimizations and refinements rather than breakthrough technological innovations.
From a technical perspective, the headline EIPs bundled into Pectra all point in the same direction: making Ethereum run more smoothly and efficiently. EIP-7251 raises the validator effective-balance cap to streamline staking operations, EIP-7691 increases blob throughput as a follow-on to Cancun's blob design, EIP-6110 streamlines how new validator deposits are brought on-chain, and EIP-7702 lets EOAs temporarily behave like smart accounts to improve wallet UX. These are typical "patch-up" refinements that tidy up loose ends left after the Cancun upgrade, not breakthrough technological innovations.
The logic that truly determines the price trend of Ethereum this time is actually a "value recovery" after being excessively FUDed.
In the past few months, Ethereum has indeed gone through a round of concentrated skepticism: the fragmentation of Layer2 liquidity was exaggerated into an ecosystem split, performance comparisons with Solana were read as a failure of the technical roadmap, application growth across the various Layer2 ecosystems fell short of expectations, and narratives around Restaking, modularity, ZK, and other technologies failed to capture value, and so on;
When all the focus is on Ethereum's problems, people overlook some key facts: the total locked value in DeFi remains stable at $119B, the Cancun upgrade has indeed significantly reduced layer2 costs, ETF fund inflows continue to strengthen, and new narratives such as RWA and PayFi are also primarily developing within the Ethereum ecosystem.
The fundamentals of Ethereum are not as bad as the market sentiment reflects.
Institutional investors have clearly seen through this emotional imbalance. A typical example is Abraxas Capital's massive purchase of 242,652 ETH (approximately $561 million). Moreover, during the period from May 9 to 14, large ETH transfers (>$1M) also significantly increased, and the ETH balances of institutional wallets have noticeably grown, all of which indicate a planned large-scale accumulation by institutions.
So, if we must find a logic for this round of Ethereum's rise: Ethereum was excessively FUDed and its existing value is being rediscovered, while institutions took the opportunity to buy the dip?
Chatting with friends, I discovered something interesting: this round of $ETH vs $SOL looks just like the last round of $DOT vs $ETH.
Last round: Polkadot had all sorts of ceiling-breaking technologies—cross-chain, parachains, governance mechanisms, and a roadmap that was more beautiful than poetry. Ethereum was just a "slow student," and DeFi Summer told you in the wildest way—"as long as it works, it's fine."
This round: Ethereum has become a "tech geek" with Rollup Centric, zkRollup, DA modularity, Based Rollup... Developers are left confused. Solana, on the other hand, resembles Ethereum from back in the day, with MEME flying everywhere, smooth DEX, and newcomers getting the hang of it in no time—"as long as it's fun, it's fine."
Thinking carefully, it's really a case of the wheel of fortune turning. Back then, Ethereum used "rough and quick" to defeat Polkadot's perfectionism, and now Solana is challenging Ethereum's tech worship with simplicity and brutality.
Users don’t care how advanced your technology is, as long as it makes them feel good???
As for whether Solana can become the second-in-command across cycles, and whether Ethereum will fall into the decline of empty technical talk like Polkadot?
If this cycle doesn't work, there will definitely be a conclusion in the next cycle. In fact, the technical narrative is not wrong; the mistake is detaching from user experience and community.
Last night's AI + web3 Space event was quite successful, lasting a full 3 hours and once again breaking the previous record for the duration of informative Spaces.
I was in high-gear thinking mode almost the whole time, asking plenty of in-depth questions, and several builders from AI startups were deeply engaged in the discussion. It is exactly this kind of authentic, informative, non-performative Space, oriented toward solving real problems, that is meaningful.
AI + web3 is a very special industry, with the grandest prospects but many current issues; many people are optimistic, while many others stigmatize it; there are many entrepreneurs with a firm belief in building, but also many who are just looking to make a quick profit; there are a host of technical problems in AI infrastructure waiting to be solved, and all sorts of strange and chaotic AI agents everywhere... that's why it was so informative.
In the future, we may randomly continue to organize similar events, so please stay tuned.
I have to say, partnering with the all-rounder @Leoninweb3 is unbeatable. Thanks to all the participants: @sunny_unifAI @YilunZhang4 @cosmeticfish @YFYkuner @jeffrey_hu @scottshics @rickawsb @yyyzzgc ! The recording of the Space is as follows:
After what feels like an eternity of waiting, Ethereum $ETH has finally shown some strength. So, as the leader of altcoins, which narrative directions in the Ethereum ecosystem are worth looking forward to in the coming market after its resurgence?
1) zkEVM/zkVM: The market will revolve around the upgrade from EVM to RISC-V and the planned zkSNARKs integration in the entire Ethereum Roadmap. As a technical narrative that has not yet been fully explored, ZK modular combinations, interoperable layers, chain abstraction, etc., will become the mainline of development;
2) RWAFi/PayFi: The market will continue to build on Ethereum's already large DeFi + stablecoin base, seeking institutional inflows around the mainline of the Ethereum ETF moving from hype to substance in the second half of the year. It will explore applications of web3 technology in the real economy, with RWA assets bridging on-chain DeFi yield demand and off-chain real business scenarios, bringing a new wave of growth opportunities;
3) Layer2 + AI: As Ethereum L1 enters a phase of self-strengthening, Ethereum Layer2s need to seek breakthroughs as independent chains, and with the AI Agent narrative sweeping the Crypto industry, the various Ethereum Layer2 ecosystems will naturally not miss this AI feast. The combination of the application preconditions demanded by AI scenarios + high performance + ZK integration + infrastructure gaps will provide many Layer2s with new business growth points.