Some friends say the continued price decline of web3 AI Agent tokens such as #ai16z and $arc was triggered by the recently popular MCP protocol. At first glance it's confusing: WTF does one have to do with the other? But after thinking it over, there is a real logic to it: the valuation and pricing logic of existing web3 AI Agents has changed, and both the narrative direction and the product delivery route need adjustment. Below I'll share my personal views:

1) MCP (Model Context Protocol) is an open, standardized protocol that lets various AI LLMs/Agents seamlessly connect to all kinds of data sources and tools. It works like a plug-and-play 'universal' USB interface, replacing the previous end-to-end 'bespoke' integration approach.

Simply put, AI applications used to live in obvious data silos: for Agents/LLMs to interoperate, each pair had to build its own calling API. That complicates operations, lacks bidirectional interaction, and usually comes with limited model access and tight permission restrictions.

The emergence of MCP provides a unified framework that lets AI applications break out of these data silos and gain 'dynamic' access to external data and tools, which can significantly reduce development complexity and improve integration efficiency, particularly for automated task execution, real-time data querying, and cross-platform collaboration. Hearing this, many people immediately think: if Manus, with its innovative multi-Agent collaboration, is combined with MCP to drive multi-Agent cooperation, wouldn't that be unstoppable?
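To make the 'universal interface' point concrete, here is a rough sketch of what an MCP-style tool invocation looks like on the wire. MCP builds on JSON-RPC 2.0, and `tools/call` is a method name from the MCP specification; the tool name and its arguments below are hypothetical, purely for illustration.

```python
import json

def mcp_request(req_id: int, method: str, params: dict) -> str:
    """Frame a JSON-RPC 2.0 message, the wire format MCP builds on."""
    return json.dumps({"jsonrpc": "2.0", "id": req_id,
                       "method": method, "params": params})

# One uniform verb ("tools/call") regardless of which backend tool is hit,
# instead of a bespoke REST API per integration. Tool name is hypothetical.
req = mcp_request(1, "tools/call",
                  {"name": "search_docs", "arguments": {"query": "MCP spec"}})
print(req)
```

The point of the sketch: the client speaks one message shape to every server, which is exactly what replaces the per-pair 'bespoke' APIs described above.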

That's right: Manus + MCP is exactly what is hitting web3 AI Agents.

2) However, surprisingly, both Manus and MCP are frameworks and protocol standards aimed at web2 LLMs/Agents. They solve data interaction and collaboration between centralized servers, and their permission and access control still depend on each server node 'voluntarily' opening up. In other words, they are purely open-source tooling.

Logically speaking, this completely contradicts the central ideas pursued by web3 AI Agents, such as 'distributed servers, distributed collaboration, distributed incentives'. How can centralized artillery knock down a decentralized fortress?

The reason is that the first phase of web3 AI Agents became too 'web2-like'. Partly this is because many teams come from web2 backgrounds and lack a sufficient understanding of web3-native needs. Take the ElizaOS framework: it was originally a packaging framework that helps developers quickly deploy AI Agent applications, integrating platforms like Twitter and Discord and APIs like OpenAI, Claude, and DeepSeek, plus general-purpose Memory and Character frameworks, so developers can quickly build and ship AI Agent applications. But, if we're honest, how does this service framework differ from web2 open-source tools? What differentiated advantage does it have?

Uh, is the advantage just a set of Tokenomics incentives? Using a framework that web2 could entirely replace to incentivize a batch of AI Agents that exist mainly to issue new tokens? Scary... Following this logic, you can roughly see why Manus + MCP can hit web3 AI Agents: many web3 AI Agent frameworks and services only address quick-development needs similar to web2 AI Agents, yet cannot keep up with web2's pace of innovation in technical services, standards, or differentiated advantages. So the market/capital has re-evaluated and repriced the previous batch of web3 AI Agents.

3) At this point, we have found the crux of the problem. But how do we break the deadlock? There is only one way: focus on building web3-native solutions, because the operation and incentive architecture of distributed systems are web3's real differentiated advantage.

Take distributed cloud compute, data, and algorithm service platforms as an example. On the surface, compute and data aggregated under the banner of idle resources cannot meet short-term engineering-innovation needs. And while large numbers of AI LLMs are locked in a centralized compute arms race over performance breakthroughs, a service model pitched as 'idle resources, low cost' will naturally be looked down on by web2 developers and VC teams.

But once web2 AI Agents move past the race for raw performance, they will inevitably chase vertical application scenarios and fine-tuned model optimization, and that is when the advantages of web3 AI resource services will truly show. In fact, once web2 AI, having climbed to its current stage through resource monopolies, finds it hard to go back to a 'surround the cities from the countryside' strategy of grinding out one segmented scenario at a time, that is when surplus web2 AI developers and web3 AI resources will join forces.

Therefore, the opportunity space for web3 AI Agents is now quite clear: before web3 AI resource platforms receive the overflow of web2 developer demand, explore and validate feasible solutions and paths for which a web3 distributed architecture is genuinely indispensable. Beyond the 'web2 quick deployment + multi-Agent collaboration framework + Tokenomics issuance' narrative, there are many web3-native directions worth exploring:

For example, an Agent stack equipped with a distributed consensus framework. Given the characteristics of off-chain LLM computation plus on-chain state storage, many adapted components are needed:

1) A decentralized DID identity-verification system that gives Agents verifiable on-chain identities, similar to the unique addresses the execution VM generates for smart contracts, primarily so that subsequent Agent states can be continuously tracked and recorded;
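A minimal sketch of the identity idea above, using the contract-address analogy: derive a stable Agent ID deterministically from its deployer and a nonce. This is purely illustrative; sha256 stands in for the keccak256/RLP scheme the EVM actually uses, and the field layout is an assumption.

```python
import hashlib

def derive_agent_id(deployer: str, nonce: int) -> str:
    """Deterministically derive an Agent identity from deployer + nonce,
    analogous to EVM contract-address derivation (sha256 as a stand-in
    for keccak256/RLP)."""
    digest = hashlib.sha256(f"{deployer}:{nonce}".encode()).hexdigest()
    return "0x" + digest[-40:]  # keep 20 bytes, address-style

aid = derive_agent_id("0xDeployerPubKeyHash", 0)
print(aid)
# Same inputs always yield the same ID, so later state updates can be
# tracked and verified against one stable identity.
```

The design point is determinism: anyone can recompute the ID from public inputs, so no registry has to be trusted to map Agents to identities.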

2) A decentralized oracle system, mainly responsible for the trustworthy acquisition and verification of off-chain data. Unlike previous oracles, one adapted for AI Agents may need a combined architecture of multiple Agents across a data-collection layer, a decision-consensus layer, and an execution-feedback layer, so that the data needed for Agents' on-chain and off-chain computation and decision-making can be obtained and agreed on in real time;
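The decision-consensus layer described above can be sketched in a few lines: several collection-layer Agents report a value, the consensus layer accepts the median, and deviant reporters are flagged for the execution-feedback layer. The agent names, tolerance, and median rule are all assumptions for illustration, not a real oracle design.

```python
from statistics import median

def consensus(reports: dict[str, float], tolerance: float = 0.05):
    """Decision-consensus sketch: agree on the median of the collectors'
    reports; flag reporters deviating beyond `tolerance` (relative) for
    the execution-feedback layer to penalize."""
    agreed = median(reports.values())
    outliers = [agent for agent, value in reports.items()
                if abs(value - agreed) > tolerance * agreed]
    return agreed, outliers

value, bad = consensus({"agentA": 100.0, "agentB": 101.0, "agentC": 250.0})
print(value, bad)  # agentC's report is far off the agreed value
```

Median-based aggregation is a common oracle choice because a minority of faulty or malicious reporters cannot move the agreed value arbitrarily.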

3) A decentralized storage (DA) system. Because the state of an AI Agent's knowledge base is uncertain at run time and its reasoning process is somewhat ephemeral, a system is needed to record the key state libraries and reasoning paths behind the LLM in distributed storage, with a cost-controllable data-proof mechanism to ensure data availability during public-chain verification;
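One standard way to get the 'cost-controllable data proof' mentioned above is a Merkle commitment: the full reasoning trace lives in cheap distributed storage, while only a 32-byte root goes on-chain. A minimal sketch (the step labels are hypothetical; real DA layers use more elaborate proofs such as erasure coding and sampling):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Commit to an Agent's reasoning path: pairwise-hash the leaves
    upward until one 32-byte root remains."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

steps = [b"prompt", b"tool_call:search", b"observation", b"final_answer"]
root = merkle_root(steps)
print(root.hex())
```

Any single step can later be proven to belong to the committed trace with a logarithmic-size path, which keeps on-chain verification costs bounded.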

4) A zero-knowledge proof (ZKP) privacy-computing layer that can link up with privacy-computing solutions such as TEE and FHE, achieving real-time private computation plus data-proof verification. This would let Agents tap a wider range of vertical data sources (medical, financial), on top of which more professionally customized service Agents could emerge;
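A real ZKP circuit is far beyond a few lines, but the basic commit-then-verify flow it enables can be illustrated with a salted hash commitment: the Agent publishes only a commitment to private data, and a check later confirms a claimed value matches it. Note this simplification reveals the data at verification time, which a true ZKP specifically avoids; it is a loose stand-in, not a ZKP.

```python
import hashlib
import secrets

def commit(private_data: bytes) -> tuple[bytes, bytes]:
    """Salted hash commitment: publish `c` on-chain, keep (data, salt) private."""
    salt = secrets.token_bytes(16)
    c = hashlib.sha256(salt + private_data).digest()
    return c, salt

def verify(c: bytes, private_data: bytes, salt: bytes) -> bool:
    """Check that an opened (data, salt) pair matches the commitment."""
    return hashlib.sha256(salt + private_data).digest() == c

c, salt = commit(b"patient_record_123")
print(verify(c, b"patient_record_123", salt))  # True
print(verify(c, b"tampered_record", salt))     # False
```

The salt prevents dictionary attacks on low-entropy data; swapping this commitment for a ZK proof would let the Agent prove properties of the record without ever opening it.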

5) A cross-chain interoperability protocol, somewhat similar to the framework defined by the open-source MCP protocol, except that this one needs a relay and communication-scheduling mechanism adapted to Agent operation, transmission, and verification, capable of handling asset transfer and state synchronization for Agents across different chains, especially complex state such as Agent context, Prompt, knowledge base, and Memory;
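The relay idea above can be sketched as a message format: serialize the Agent's complex state canonically, attach an integrity digest, and have the destination side recompute it before accepting. Field names and chain identifiers are assumptions for illustration; a production bridge would add signatures and replay protection on top.

```python
import hashlib
import json

def pack_agent_state(agent_id: str, src: str, dst: str, state: dict) -> dict:
    """Relay-message sketch: canonical JSON payload + integrity digest."""
    payload = json.dumps(state, sort_keys=True, separators=(",", ":"))
    return {
        "agent_id": agent_id,
        "src_chain": src,
        "dst_chain": dst,
        "payload": payload,
        "digest": hashlib.sha256(payload.encode()).hexdigest(),
    }

def accept(msg: dict) -> bool:
    """Destination-side check: recompute the digest before syncing state."""
    return hashlib.sha256(msg["payload"].encode()).hexdigest() == msg["digest"]

msg = pack_agent_state("agent-42", "chainA", "chainB",
                       {"prompt": "...", "memory": ["obs1", "obs2"]})
print(accept(msg))  # True
```

Canonical serialization (sorted keys, fixed separators) matters here: both sides must hash byte-identical payloads or honest state transfers would be rejected.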

……

In my view, the key breakthrough for web3 AI Agents is making the 'complex workflows' of AI Agents fit as closely as possible with blockchain's 'trust verification flows'. These incremental solutions could come either from upgrades of existing old-narrative projects or from newly forged projects on the AI Agent narrative track; both are possible.

This is the direction web3 AI Agents should work to build: the fundamental innovation ecosystem under the macro narrative of AI + Crypto. Without related innovation and differentiated competitive barriers, every slight stir in the web2 AI track could turn the web3 AI landscape upside down.