
"Have you ever questioned the nature of your reality?"
(Westworld)
1. The role of AI agents in Web3
The rise and trend of decentralized AI
With the development of blockchain and artificial intelligence, decentralized AI is becoming an emerging trend. Traditional AI is controlled by a handful of giants that hold the models and the data; under a decentralized approach, multiple parties can collaborate to contribute computing power and data, alleviating data monopolies and enhancing security and inclusiveness.
For example, public chain ecosystems such as BNB Chain are actively exploring decentralized AI infrastructure, pooling idle GPUs and other computing resources through incentive mechanisms to create distributed "supercomputers" for training large models. In this model, individuals and organizations can contribute idle resources in exchange for returns, which can be more cost-effective than centralized cloud computing. There are also attempts to split large models so that multiple nodes jointly host inference (similar to the Petals project, which distributes the layers of an LLM across different nodes), reducing single-point pressure. These explorations show that decentralized AI infrastructure is taking shape, providing a new path for open innovation in AI models.
Application scenarios of AI and blockchain
The trust mechanism of blockchain provides a reliable execution environment for AI systems, making "trusted AI" possible. Integrating AI in smart contracts helps to achieve trustless computing on the chain: AI agents can perform complex calculations or decisions off-chain, and then submit trusted results to the chain through cryptographic proofs or multi-node consensus.
For example, the decentralized AI oracle protocol Ora allows multiple AI agents to run inference in a distributed manner, with results confirmed by consensus before being written on-chain. Another example is Modulus Labs' "Leela vs. the World", which uses zero-knowledge circuits to verify an AI's chess-move decisions, achieving verifiable AI output in a prediction market. These solutions make the AI computation process and its results transparent and reliable, solving the problem that traditional black-box AI is difficult to use directly in high-value scenarios.
Blockchain also provides digital identity and ownership for AI agents, and smart contracts can specify the permissions and incentives of agents, making their behavior traceable and the distribution of benefits clear.
For example, the AI-Agent Contract framework proposed by Phala Network integrates AI capabilities with the decentralized nature of blockchain, providing a platform for AI agents to operate autonomously and create value, while ensuring the security, privacy and trustworthiness of interactions with on-chain applications. This smart contract-level support allows developers to create, own and profit from AI agents, pushing blockchain applications towards a more intelligent and autonomous form.
Application of AI Agents in Various Fields of Web3
DeFi
AI agents can act as intelligent trading assistants and risk managers. They can automatically monitor market conditions 24/7 and perform multi-chain arbitrage or rebalancing operations to improve capital efficiency.
For example, Merlin Chain's "Eliza" AI agent framework has achieved deep integration of AI agents with blockchain, supporting automatic execution of Bitcoin main chain transactions, cross-chain lending and asset exchange operations, and accelerating the process of intelligent finance. These AI agents analyze the user's transaction intentions and place orders across protocols at the best time, optimizing user benefits while reducing manual operation errors.
NFT Field
AI agents are giving rise to new concepts such as "autonomous artists":
For example, the Botto project is an autonomous artist that uses an AI agent to create artworks, which are selected by community vote and sold as on-chain NFTs. The AI produces paintings every week, and the token-holding community votes to decide which one will be minted as an NFT and sold on-chain, with proceeds distributed to the community in proportion to token holdings. This reflects AI autonomy in both content creation and value distribution. Other projects apply reinforcement-learning AI to game NFTs, allowing agents to interact with players autonomously as game characters and create new entertainment experiences.
DAO Governance
AI agents can assist decision-making: some DAOs have begun to introduce "proxy voters", where an AI automatically analyzes proposal content, community discussions, and historical data, then votes on behalf of token holders. This alleviates information overload for governance participants and improves decision-making efficiency.
For example, projects such as Autonolas provide governance AI tools that can summarize proposal points, predict the impact of different decisions, and help DAOs make more informed decisions.
Decentralized Identity (DID)
AI combined with cryptography can be used for identity authentication and anti-Sybil mechanisms. Traditional KYC often relies on centralized institutions for verification, while AI agents can provide trustless identity proofs for on-chain users through technologies such as biometric recognition.
For example, Privasea AI's #ImHuman protocol uses facial-recognition AI to verify that a user is a unique real person, and uses fully homomorphic encryption so that the data remains encrypted throughout the verification process to protect privacy. Zero-knowledge proofs can also be combined with AI for "proof of personhood", verifying that a user is unique and not using deepfake technology without leaking private information. This type of AI-powered DID solution is expected to be used in scenarios such as anti-Sybil protection for airdrops and on-chain reputation systems, ensuring that each identity corresponds to a real person.
2. Multi-agent systems and consensus security
Multi-agent collaboration and game theory
A multi-agent system (MAS) is a framework in which multiple autonomous agents interact to accomplish complex goals. Different AI agents can cooperate or compete, and incentive mechanisms and protocols often need to be designed so that they work toward a common goal that benefits overall system performance. This involves game theory and mechanism design: for example, promoting collaboration through reward sharing, or reaching consensus on task allocation among agents through auction or voting mechanisms.
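As a minimal sketch of auction-based task allocation, consider a reverse second-price (Vickrey-style) sealed-bid auction among agents. All names here are illustrative, not from any particular framework:

```python
# Sketch: reverse second-price sealed-bid auction for allocating a task.
# Each agent bids its private cost; the lowest bidder wins the task but
# is paid the second-lowest bid, which makes truthful bidding a dominant
# strategy (a standard mechanism-design result).

def allocate_task(bids: dict[str, float]) -> tuple[str, float]:
    """Return (winning_agent, payment) for a reverse second-price auction."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1])
    winner, _ = ranked[0]
    payment = ranked[1][1]   # winner is paid the second-lowest bid
    return winner, payment

winner, payment = allocate_task({"agent_a": 5.0, "agent_b": 3.0, "agent_c": 7.5})
# agent_b wins the task and is paid 5.0 (agent_a's bid)
```

Because a winner's payment does not depend on its own bid, no agent gains by misreporting its cost, which is exactly the incentive-compatibility property mechanism design aims for.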
In terms of security, it is necessary to prevent malicious behavior or mistakes by individual agents from damaging the whole. Systems are therefore designed so that even if some agents fail or act maliciously, the whole remains robust (consistent with the Byzantine fault tolerance concept in blockchain).
In collaborative scenarios, multiple agents can use information-sharing protocols (such as publish-subscribe or blackboard systems) to improve collaboration efficiency, but care must be taken to prevent information from being tampered with or forged. To enhance credibility, on-chain records or third-party verification are often introduced: each agent's key decisions can be submitted to the blockchain as evidence, making its behavior transparent and traceable.
At the same time, multi-agent systems use the concept of game equilibrium to analyze stability: the mechanism is designed so that deviating from the protocol (cheating) is unprofitable, thereby incentivizing agents to act honestly. For example, in distributed learning, reputation scores or penalty mechanisms are introduced to deter free-riding or poisoning participants.
Consensus issues and security challenges among AI agents
When multiple AI agents need to agree on a decision or data, agent consensus is involved. This is similar to distributed system consensus, but the agents may be more autonomous and intelligent, which also brings new attack surfaces.
One common challenge is the Sybil attack: malicious actors forge a large number of fake agent identities in an attempt to gain undue advantage in voting or consensus. For example, attackers may create many fake AI nodes posing as independent agents in order to manipulate decision outcomes. To combat Sybil attacks, the system needs to introduce cost mechanisms such as identity authentication or staking (similar to blockchains requiring miners to commit computing power or stake), or use decentralized identities to ensure that each agent is unique and trustworthy.
Another major threat is Byzantine failure, where an agent intentionally or unintentionally provides wrong information. This calls for Byzantine Fault Tolerant (BFT) consensus algorithms, which ensure that as long as a sufficient majority of agents are honest, the system can ignore bad actors and reach a correct consensus. For example, when agents jointly perform tasks or vote, PBFT-style algorithms can require more than 2/3 of agents to agree before a resolution passes, thereby tolerating a bounded fraction of malicious agents. (Crash-fault-tolerant protocols such as Raft, by contrast, do not defend against Byzantine behavior.)
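The two defenses above can be combined in one check. The sketch below (illustrative, not a full consensus protocol) tallies votes weighted by stake, so Sybil identities are costly, and requires strictly more than 2/3 of total weight, the classical BFT quorum:

```python
# Sketch: a PBFT-style quorum check over stake-weighted votes.
# A proposal passes only if agents holding strictly more than 2/3 of
# total voting weight agree, tolerating up to 1/3 Byzantine weight.
# Stake-weighting (rather than one vote per identity) raises the cost
# of forging Sybil identities.

def quorum_reached(votes: dict[str, bool], stakes: dict[str, float]) -> bool:
    total = sum(stakes.values())
    in_favor = sum(stakes[agent] for agent, v in votes.items() if v)
    return in_favor > 2 * total / 3

stakes = {"n1": 10, "n2": 10, "n3": 10, "n4": 10}
assert quorum_reached({"n1": True, "n2": True, "n3": True, "n4": False}, stakes)
assert not quorum_reached({"n1": True, "n2": True, "n3": False, "n4": False}, stakes)
```

With four equal stakes, three affirmative votes (30 of 40) clear the 2/3 threshold while two (20 of 40) do not, matching the PBFT intuition that one faulty node out of four can be tolerated.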
In addition, it is necessary to guard against agent collusion and manipulation attacks: multiple malicious AI agents may conspire to manipulate the market or decision-making (for example, colluding to report false information in the prediction market for profit), so the consensus mechanism may introduce randomness or diversity to reduce the success rate of collusion.
It is worth noting that when AI agents are connected to the blockchain, they not only inherit traditional Web2 attacks (such as data poisoning and adversarial samples that cause AI to make mistakes), but also superimpose blockchain-specific attacks (contract vulnerability exploitation, etc.). Attackers may take advantage of AI agents' reliance on external tools to conduct supply chain attacks or abuse of authority. Therefore, the AI agent ecosystem requires in-depth defense, which not only protects the security of models and data, but also ensures the security of interactive interfaces, identity authentication, and other links.
AI-driven blockchain consensus mechanism
Introducing AI into blockchain consensus is giving rise to some new ideas. For example, the concept of "intelligent proof of work" or "useful proof of work" is proposed, which uses AI computing to replace the useless computing power competition in traditional mining, allowing nodes to obtain accounting rights while completing actual valuable AI tasks (such as model training), thereby converting consensus energy consumption into AI model training benefits.
The academic community has already proposed a "Proof of Deep Learning (PoDL)" scheme, which requires miners to train a specified deep model; the work is considered complete when training finishes. In this way, the energy consumed by miners produces a useful model rather than being wasted on hash puzzles. This AI consensus can be seen as an improvement on PoW that raises overall efficiency.
Another approach is to directly replace consensus voting with AI model decision-making, the so-called "Proof of Intelligence". In this framework, trained AI models are responsible for verifying blocks and transactions, analyzing patterns and predicting network behavior to determine transaction validity. AI can monitor on-chain data in real time, detect abnormal patterns or potential attacks, and achieve automated security monitoring. For example, AI agents can promptly detect signs of 51% attacks or traces of double-spending transactions, and issue early warnings and reject suspicious blocks.
Unlike PoW/PoS, which rely on computing power or staking, "AI consensus" relies on model intelligence to reach agreement quickly, with the potential advantages of high efficiency and low latency. Simulation studies suggest that AI-verified transaction processing can scale to thousands of transactions per second, outpacing manually designed protocols. At the same time, AI's predictive capabilities can improve security by detecting and responding to attacks before they occur, and are thus expected to reduce the success rate of 51% attacks.
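As a minimal stand-in for the "AI monitors on-chain data" idea, the sketch below flags anomalous transaction volumes with a simple z-score test. Real systems would use learned models; the data and threshold here are purely illustrative:

```python
# Sketch: flagging anomalous per-block transaction volumes with a
# z-score test. Any block whose volume deviates from the mean by more
# than `threshold` population standard deviations is reported.
import statistics

def anomalies(volumes: list[float], threshold: float = 3.0) -> list[int]:
    mean = statistics.mean(volumes)
    std = statistics.pstdev(volumes)
    if std == 0:
        return []
    return [i for i, v in enumerate(volumes) if abs(v - mean) / std > threshold]

history = [100.0] * 20 + [100000.0]   # a sudden spike at block index 20
print(anomalies(history))             # [20]
```

A production monitor would look at richer features (hash-rate shifts, reorg depth, double-spend patterns) and a trained detector, but the interface is the same: a stream of chain metrics in, a set of suspicious indices out.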
However, the introduction of AI also brings risks and challenges:
Centralization risk: Training and operating advanced AI models requires powerful computing resources that may be available only to a few institutions, thereby concentrating consensus in the hands of a few large compute providers.
Bias problem: AI models are trained by data. If the training data is biased, the model decision will also have systematic bias, which may treat certain transactions or addresses unfairly.
Transparency and trust issues: Blockchain consensus emphasizes transparency and simplicity, but the AI decision-making process is complex and hard to explain. How can nodes trust that the AI is not manipulating outcomes behind the scenes?
Adversarial attack: AI itself is vulnerable to adversarial examples or model poisoning attacks. Hackers may look for model loopholes to interfere with consensus decision-making.
Therefore, AI-driven consensus is still in the exploratory stage. There are practical applications such as using AI to assist in consensus parameter adjustment and anomaly detection, but block generation based entirely on AI decision-making is not yet mature.
However, some fusion solutions have emerged:
For example, DAG-based designs are combined with AI to optimize transaction ordering, and machine learning is used to predict network congestion and dynamically adjust block size or fees, improving DAG consensus performance. These attempts show that AI can act as an assistant to improve the efficiency and security of existing PoW/PoS/DAG mechanisms, for instance by using machine learning models to predict the optimal block-producing node in the next time window or to detect malicious nodes, thereby enhancing the robustness of existing consensus.
Other studies have used AI in Rollup scenarios to improve the efficiency and security of the second-layer network: for example, using AI to screen batch transactions submitted to the Rollup chain and intelligently identify suspicious transactions, thereby helping the Fraud Proof mechanism to run more efficiently.
These explorations provide room for imagination for the future blockchain + AI consensus: perhaps in the future, the dual competition of "computing power + intelligence" will determine the generation of blocks, and network nodes will compete on both hardware and algorithm models, prompting the blockchain network to more intelligently maintain its own security and performance.
3. Research on FHE (Fully Homomorphic Encryption) and AI Agent Consensus
The role of FHE in AI agent computing
Fully homomorphic encryption (FHE) is a cryptographic technique that allows computations to be performed directly on ciphertext. For AI agents, FHE provides an ideal means to resolve the contradiction between privacy and collaboration: agents can process and analyze data without decrypting it, thereby protecting data privacy.
This is particularly critical in areas involving sensitive data (finance, healthcare, etc.), where agents can collaborate to complete calculations without exposing their own data.
For example, AI agents of multiple banks can jointly train risk control models with the support of FHE. The customer data of each bank is always encrypted, and no party can see the data of others, but can obtain model update results through homomorphic operations. This realizes "data is available but invisible", breaking down data silos while ensuring compliance and security.
Introducing FHE in blockchain smart contracts can also enable contracts to process privacy-sensitive data and AI models: smart contracts can perform FHE ciphertext operations, and the output results are also ciphertext, which can only be decrypted by authorized parties. This kind of on-chain privacy AI contract can manage digital assets or execute complex strategies without leaking any intermediate information.
For example, users can submit their encrypted biometrics to the contract for matching and verification, and the contract completes the authentication through FHE calculation without exposing the original biometrics, thus achieving private identity authentication on the chain. In general, FHE allows AI agents to fully utilize the value of data while following the principle of "minimum disclosure", thus ensuring the data sovereignty of all parties involved.
FHE improves AI agent consensus security
In the process of reaching consensus among multiple agents, the introduction of FHE can greatly enhance security and trust. First, the full encryption of data ensures that agents cannot peek into each other's private information when collaborating or voting. Each AI agent submits an encrypted result or decision to the consensus mechanism, and other agents and verifiers cannot reverse the source data from it, but the correctness and consistency of these ciphertext results can be verified through the FHE network.
This means that even if a malicious agent attempts to speculate or attack by analyzing other people's messages, it will be unable to do so due to data encryption, greatly reducing the possibility of collusion and cheating.
Secondly, FHE supports the construction of encrypted consensus: after the agent submits the ciphertext results, the network can homomorphically aggregate or vote to calculate the ciphertext of the overall decision, and use the verification between ciphertexts to ensure the reliability of the results. For example, when multiple AI investment advisory agents jointly decide on a trading strategy, each agent gives an encrypted score to an asset, and the on-chain consensus mechanism uses FHE to aggregate these scores and select the highest one without decrypting individual scores. Only after the final decision is reached, the network or authorized party decrypts and announces the results.
Sensitive information is never exposed during the entire process, which ensures the correctness of decision-making while protecting the confidentiality of models and data. Under this architecture, even if a Sybil node submits forged information, honest nodes will not disclose their secrets; at the same time, encrypted-vote consistency checks can discover and reject inconsistent false information, ensuring that the final consensus is safe and reliable.
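The encrypted-score aggregation described above can be illustrated with a toy additively homomorphic Paillier cryptosystem (a simpler relative of FHE that supports addition on ciphertexts). The tiny primes are for illustration only; real deployments use FHE libraries with production-size parameters:

```python
# Sketch: summing encrypted agent scores with a toy Paillier scheme.
# Paillier is additively homomorphic: E(a) * E(b) mod n^2 = E(a + b),
# so the network can aggregate scores without seeing any individual one.
import math, random

p, q = 61, 53                       # toy primes -- NOT secure
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def encrypt(m: int) -> int:
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# Each agent encrypts its score; ciphertexts are multiplied to sum them.
scores = [7, 9, 5]
aggregate = math.prod(encrypt(s) for s in scores) % n2
print(decrypt(aggregate))           # 21
```

Only the holder of the decryption key (the network, or a threshold of nodes sharing it) ever sees the total; no verifier learns any single agent's score, which is the property the consensus design relies on.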
Thirdly, FHE can also protect model intellectual property: many AI agents are driven by proprietary models and are reluctant to share model parameters. Traditional multi-party consensus may require agents to explain their decision-making basis, resulting in the risk of model confidentiality leakage. With FHE, agents can prove their decision-making basis in encrypted form or provide encrypted evidence, allowing others to verify the rationality of the decision without exposing model details.
For example, in a multi-AI trading decision scenario, each agent gives trading suggestions based on its own private quantitative model, submits votes through FHE encryption, and finally the network selects the consensus result based on the encrypted vote count and decrypts it for publication. In this way, the model logic and parameters of each agent are protected (because only the ciphertext of the model output is provided), while achieving group intelligent decision-making and improving accuracy and reliability.
This "encrypted crowd intelligence" is very valuable in fields such as finance and healthcare that require the integration of multiple expert opinions. In summary, FHE provides a new paradigm for establishing secure consensus between AI agents: all intermediate processes are completed in the encrypted domain, which ensures privacy and enables collaboration. As the study points out, the unique advantage of FHE is that it allows multiple agents to exchange information securely and achieve consensus in an encrypted state. This is seen as a breakthrough direction for improving the efficiency and security of multi-agent systems, and there are already projects (such as Mind Network and Swarms) exploring related solutions.
Combination of FHE with ZKP, MPC and other technologies
To achieve the ideal decentralized secure AI, multiple cutting-edge encryption technologies are often needed to work together. FHE is good at "calculation", while zero-knowledge proof (ZKP) is good at "proof". The combination of the two can complement each other.
For example, after using FHE to allow an AI agent to complete a decision calculation on a ciphertext, ZKP can be used to prove that the decision was indeed calculated according to a correct AI model and satisfies specific properties without exposing the details of the decision-making process. Such a ZK proof allows other nodes to be sure that the agent's output is valid and has not violated any regulations without having to repeat the calculation.
This is particularly important for verifying AI model reasoning on the chain, a direction called ZKML (zero-knowledge machine learning). The core idea is to let large models calculate off-chain, and then submit the results and a validity proof to the chain to prove that "this result is obtained by the AI model calculation that complies with the specification." In this way, the on-chain smart contract can trust the result for subsequent consensus or decision-making without having to run complex models in person, greatly reducing the amount of calculation and gas costs.
The current difficulty of ZKML lies in the efficiency and size of generating proofs, but with the advancement of algorithms, it can be used to prove the correctness of smaller models and gradually expanded to more complex models.
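The interface ZKML targets can be illustrated with a naive commit-and-recompute check. Note the important caveat: the verifier below re-runs the model, which a real SNARK proof would avoid, and nothing here is zero-knowledge; only the commitment/verification flow is representative, and all names are hypothetical:

```python
# Sketch: the "off-chain compute, on-chain verify" flow that ZKML aims
# for, using a hash commitment and naive recomputation in place of a
# real zero-knowledge proof (which would let verification skip re-running
# the model and hide the inputs).
import hashlib, json

def model(x: float) -> float:        # a toy "AI model": y = 2x + 1
    return 2 * x + 1

def commit(obj) -> str:
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

MODEL_COMMITMENT = commit({"weights": [2, 1]})   # published on-chain

def prove(x: float) -> dict:
    """Off-chain prover: run the model, attach the model commitment."""
    return {"input": x, "output": model(x), "model": MODEL_COMMITMENT}

def verify(claim: dict) -> bool:
    """On-chain stand-in: accept only outputs of the committed model."""
    return (claim["model"] == MODEL_COMMITMENT
            and claim["output"] == model(claim["input"]))

assert verify(prove(3.0))                                   # honest claim
assert not verify({"input": 3.0, "output": 999.0,
                   "model": MODEL_COMMITMENT})              # forged output
```

A SNARK replaces the recomputation inside `verify` with a succinct proof check, which is what makes the pattern affordable on-chain for models too large to re-run in a contract.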
Another important technology is multi-party secure computation (MPC), which allows multiple parties to jointly compute the output of a function while keeping their inputs confidential. MPC can be used in AI agent scenarios for key management and joint decision-making.
For example, the AI agent developed by Coinbase uses MPC or TEE to securely manage private keys, so that the agent does not expose the private key to a single node when signing a transaction. MPC can also be used for joint reasoning of models: several agents each hold part of the data and use the MPC protocol to run an inference algorithm, so that no single party can obtain the complete input.
Compared with FHE, MPC performs better when the number of participants is small and can prevent a minority of malicious participants from tampering with results. The two are therefore often combined: MPC first aggregates encrypted data within a small group, and FHE then processes it at larger scale.
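The basic building block of many MPC protocols is additive secret sharing over a prime field, sketched below. Each party splits its private input into random shares; parties sum shares locally, and only the combined total is ever revealed:

```python
# Sketch: additive secret sharing. A secret is split into n random
# shares that sum to it modulo a prime; any subset of fewer than n
# shares reveals nothing about the secret.
import random

PRIME = 2**61 - 1   # field modulus (illustrative choice)

def share(secret: int, n_parties: int) -> list[int]:
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def mpc_sum(secrets: list[int]) -> int:
    """Jointly compute the sum of private inputs without revealing them."""
    n = len(secrets)
    all_shares = [share(s, n) for s in secrets]
    # party i locally sums the i-th share of every input...
    partials = [sum(row[i] for row in all_shares) % PRIME for i in range(n)]
    # ...and only the partial sums are combined into the public result
    return sum(partials) % PRIME

print(mpc_sum([42, 7, 100]))   # 149, with no party seeing another's input
```

Real MPC frameworks add malicious-security checks (e.g., MACs on shares) and support multiplication, but the privacy argument is the same: every value a party sees is uniformly random on its own.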
Finally, Trusted Execution Environment (TEE) is also a technology that can be used in conjunction with other technologies. It provides near-plaintext computing efficiency through hardware isolation while ensuring computing integrity through remote attestation.
In general, FHE/ZKP/MPC/TEE each has its own strengths: FHE protects data throughout the entire process, ZKP provides a trust bridge for the correctness of the results, MPC coordinates multi-party collaborative computing, and TEE provides an efficient execution sandbox.
In a decentralized AI system, the possible architecture is: running AI models in TEE, encrypting sensitive parameters with FHE, outputting results with ZK proof for network-wide verification, and multiple parties (nodes) sharing keys and control rights through MPC to avoid single-point tampering. This multi-layer protection will provide unprecedented security and privacy protection for consensus and interaction between AI agents.
Although the implementation is complex, as technologies such as FHE mature, we are expected to see an AI agent collaboration network where "privacy computing + trusted verification" coexist, allowing decentralized AI to maximize its effectiveness while ensuring security.
4. Use cases and technical analysis
Existing AI Agent Application Cases in Web3
In recent years, a variety of projects have put the concept of AI agents into practice, covering areas such as data, finance, and the Internet of Things.
Fetch.ai
An early representative is Fetch.ai, which has built an autonomous agent economy platform based on blockchain. Developers can deploy intelligent agents on Fetch.ai to automatically perform real-world tasks, such as booking parking spaces, flights, or electric vehicle charging stations. These agents use the decentralized search and communication network provided by Fetch.ai to directly connect with agents on the service provider side to reach the best transaction.
The Fetch.ai network's native token FET is used to pay agent service fees and for staking to secure the network. On Fetch.ai's Agentverse platform, agents have been used in practical scenarios such as on-chain asset management (e.g., automatic staking and reinvestment), oracles (pulling off-chain data to trigger contracts), and wallet assistants (automatically prompting a top-up when the balance runs low). Fetch.ai's vision is an M2M economy with massive participation by machine agents, each able to negotiate autonomously to provide seamless services to humans.
Ocean Protocol
Another typical example is Ocean Protocol, which is committed to building a decentralized data and AI model market. Ocean allows data providers to issue data sets as NFTs and tokens (Datatokens) for AI developers to purchase and use, while ensuring that data is not directly copied through "Compute-to-Data".
That is to say, after obtaining authorization, the AI agent can bring the algorithm to the data location to run training or analysis, and the data itself does not leave the owner's environment. This mechanism combines the access control and payment of blockchain to achieve the secure circulation of data and the efficient development of models.
Ocean also launched the "Ocean Nodes" network, which provides decentralized computing resources, supports AI model training and inference in a secure environment, and rewards those who contribute computing power. Through Ocean, many previously closed data sources have gradually been opened up for AI use, easing the "data hunger" problem in the AI field and creating a new economic model in which data is an asset.
SingularityNET
The SingularityNET project is a long-established decentralized AI service market. Developers can publish AI algorithm service interfaces on its platform, and anyone can call these services and pay with AGIX tokens to create, share and monetize AI services.
SingularityNET's vision is to bring together AI algorithms from around the world and combine them to form a more powerful artificial intelligence - called "decentralized AGI". Currently, the platform has a variety of services such as computer vision, language processing, robot control, and even creative AI such as AI painting and music generation. Users verify the quality of services and pay for them through blockchain. This AI App Store model gives small and medium-sized AI developers the opportunity to directly access the market and monetization models, which increases the openness and innovation speed of the AI field.
Other vertical applications
In addition to the above platform projects, there are also many AI agent applications in vertical fields:
- Numerai in decentralized finance (using crowdsourced AI models to predict the market and coordinate model integration and rewards through blockchain)
- Matrix AI Network in the IoT field (trying to use AI to optimize consensus and communication of IoT devices)
- DeepBrain Chain in decentralized AI computing (provides an AI computing power network supporting DApps such as voice recognition)
These cases together depict the ecological outline of the integration of AI+Web3: the agent is both a service provider and a consumer, forming a self-sufficient economic closed loop on the chain.
It is worth mentioning that Merlin Chain’s recent exploration of combining AI has attracted attention: its open source Eliza framework has realized AI agent cross-chain transactions, enabling an AI agent to seamlessly call protocols on different blockchains to complete complex financial operations. This is seen as the prototype of AI agents acting as "brokers" between chains, which is expected to improve the intelligence level of blockchain interoperability.
In general, the practical application of AI agents is moving from concept to implementation, and various innovative cases are emerging one after another, injecting intelligent power into the Web3 ecosystem.
Analysis of key technologies
To support the implementation of the above applications, many underlying technical problems need to be solved, which has spawned a series of specialized technical solutions and frameworks.
Optimizing AI training and inference
Due to the special limitations of the blockchain environment (consensus overhead, gas fees, etc.), it is difficult to perform large-scale model training or reasoning directly on the chain. Therefore, the paradigm of off-chain computing + on-chain verification is usually adopted: model training and optimization are completed by high-performance nodes or networks off-chain, and only necessary model summaries or result proofs are stored on the chain.
For example, the zero-knowledge machine learning (ZKML) mentioned above is a typical solution that ensures the credibility of off-chain model calculations through validity proofs. Another idea is distributed training, which decomposes the model training task to multiple nodes, each of which processes a part, and then aggregates the results using on-chain mechanisms (such as smart contracts or consortium chains). This is similar to federated learning, but uses blockchain to record each round of parameter updates and incentive rewards to ensure that the process is open and transparent.
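The aggregation step of the federated-style distributed training just described can be sketched as federated averaging (FedAvg): each node trains locally, only weight vectors (not raw data) are shared, and the aggregate is weighted by local dataset size. The hash at the end stands in for the per-round on-chain record the text mentions; all values are illustrative:

```python
# Sketch: federated averaging (FedAvg) of model weights. Each update is
# a (weights, num_samples) pair from one node; the global model is the
# sample-weighted mean. Hashing the result yields a commitment that a
# blockchain could record each round for auditability.
import hashlib

def fed_avg(updates: list[tuple[list[float], int]]) -> list[float]:
    """updates: [(weights, num_samples), ...] -> sample-weighted mean."""
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    return [sum(w[i] * n for w, n in updates) / total for i in range(dim)]

round_updates = [([1.0, 2.0], 100), ([3.0, 4.0], 300)]
global_weights = fed_avg(round_updates)
print(global_weights)                                     # [2.5, 3.5]

round_commitment = hashlib.sha256(repr(global_weights).encode()).hexdigest()
```

Weighting by sample count means a node with 3x the data pulls the average 3x as hard, which is the standard FedAvg rule; the on-chain commitment lets anyone later verify which aggregate each round produced.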
Some projects have also tried to use incentive mechanisms to improve training: for example, some have proposed using tokens to reward users who participate in model annotation, fine-tuning or optimization, thereby establishing a community-driven model optimization cycle. OpenAI's decentralized alternatives such as Collective Learning are concepts that incentivize the public to improve models in Web3.
Encryption computing technology
This is the core of ensuring the security and privacy of AI agents. We have discussed the role of FHE, MPC, and TEE in the third part: in short, they make it possible to "use data but not see it" and "trusted and confidential processes".
In terms of concrete implementations, existing projects already provide development frameworks. For example, Zama has released the TFHE-based homomorphic encryption library and an EVM-compatible solution called "fhEVM", which allows smart contracts to invoke homomorphic operations directly. Developers can use this to write "privacy smart contracts" that keep AI model weights and user inputs encrypted, produce encrypted outputs, and authorize only specific parties to view results. On-chain AI computation thus no longer needs to worry about data privacy leaks.
Another example is the blockchains that use TEE technology, such as Secret Network and Phala Network, which support private contracts on the chain (executing contract logic in a hardware-isolated environment). Developers can deploy AI reasoning code as private contracts, enabling them to access sensitive data and call external AI libraries, while only exposing the results to the outside world.
Phala's Phat Contract (now upgraded to AI-Agent Contract) is a representative of this type of technology. It provides pre-built computing templates to cope with various scenarios (DeFi, NFT, social, etc.), and opens up channels for data acquisition and result writing on and off the chain. Through Phala, AI agents can securely obtain off-chain data (such as calling on-chain data indexed by The Graph, or accessing Internet APIs), complete AI calculations in TEE, and then submit the processing results back to the blockchain. This realizes a trust bridge between on-chain and off-chain, and provides important support for AI agents to participate in on-chain financial and social applications.
Beyond homomorphic encryption and TEEs, multi-party computation (MPC) has also seen recent breakthroughs, such as Tencent's open-source Angel and the FedAI frameworks, which apply MPC to federated learning so that multiple institutions can jointly train models with accuracy close to centralized training. In a Web3 setting, MPC can interact with smart contracts: a contract can schedule multiple parties to jointly execute a function and then post the ciphertext result on chain, letting on-chain logic use the output without ever touching the raw data.
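A minimal sketch of the building block behind such joint computation is additive secret sharing: three parties learn the sum of their private inputs without any party revealing its own value. This is a teaching toy, not a production MPC protocol:

```python
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(secret: int, n_parties: int) -> list:
    """Split a secret into n random shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

# Each party secret-shares its private input with the others.
inputs = [42, 17, 99]
all_shares = [share(x, 3) for x in inputs]

# Party i locally sums the i-th share of every input; each partial sum
# on its own is uniformly random and leaks nothing...
partial_sums = [sum(s[i] for s in all_shares) % PRIME for i in range(3)]

# ...and only combining all partial sums reveals the total.
total = sum(partial_sums) % PRIME
print(total)  # 158, with no party having seen another's input
```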
On-chain AI reasoning capabilities
Fully deploying AI models on chain is currently limited by performance and cost, but there have been attempts in specific scenarios. Internet Computer (ICP), for example, claims to support AI smart contracts that can run small machine learning models on chain for simple inference.
Some Ethereum developers have also experimented with writing simplified models (such as regression models or small neural networks) directly into Solidity contracts, letting the chain compute outputs from inputs in real time. This only works for very small models, however; complex AI must still run off chain.
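The kind of miniature model such a contract can evaluate looks like this: a fixed-point linear regression using only integer arithmetic, since the EVM has no floating point. The weights below are made up for illustration; a real deployment would import weights trained off chain:

```python
SCALE = 10**6  # fixed-point scale, like using 6 "decimals" on-chain

# Pretend these came from off-chain training: y = 0.5*x1 + 2*x2 + 1
WEIGHTS = [int(0.5 * SCALE), int(2.0 * SCALE)]
BIAS = int(1.0 * SCALE)

def predict(features):
    """Evaluate the model; inputs and output are fixed-point integers,
    mirroring what a Solidity contract would do with uint arithmetic."""
    acc = BIAS * SCALE
    for w, x in zip(WEIGHTS, features):
        acc += w * x
    return acc // SCALE  # rescale back to a single SCALE factor

# Prediction for x1 = 4, x2 = 3: 0.5*4 + 2*3 + 1 = 9
x = [4 * SCALE, 3 * SCALE]
print(predict(x) // SCALE)  # 9
```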
What is more practical is verifying off-chain inference on chain. ZKML offers one verification path; another goes through a trusted intermediary or a decentralized oracle. Chainlink and other oracle networks are considering "AI oracle" services: multiple nodes run the same AI model, reach consensus on its output, and feed the result to on-chain contracts. This essentially treats the inference result as data requiring consensus; multiple nodes compare and confirm it before it goes on chain, much like oracle price feeds.
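The aggregation step of such an AI oracle can be sketched as a simple majority vote over node outputs. The node answers below are simulated; a real network would also weight stake and slash dishonest nodes:

```python
from collections import Counter

def aggregate(node_outputs, quorum=0.5):
    """Return the majority answer across oracle nodes, or None if no
    answer clears the quorum threshold (nothing is posted on-chain)."""
    winner, count = Counter(node_outputs).most_common(1)[0]
    return winner if count / len(node_outputs) > quorum else None

# Five oracle nodes ran the same model; one returned a deviating answer.
outputs = ["rain", "rain", "rain", "sun", "rain"]
print(aggregate(outputs))     # rain  (4/5 agree, result is accepted)
print(aggregate(["a", "b"]))  # None  (no majority, result is discarded)
```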
The open problem is how to guarantee that every node runs the same model and does not falsify its output. This can be ensured by remote attestation (such as TEE proofs) or ZK-SNARKs. The Ora Protocol and Modulus Labs cases mentioned earlier are early practices, achieving verified results in text question answering and game-AI output respectively.
In addition, with the popularity of large-model APIs, some DApps simply call centralized AI services (such as the OpenAI API) to enhance their functionality and then upload the results to the chain. To reduce the trust placed in any single provider, the community is also exploring "multiple comparison of results": for example, have three different AI providers answer the same input, accept the result for chaining only if the answers are consistent, and discard it otherwise.
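Unlike the majority-vote oracle, this cross-provider check is usually strict: all answers must match. A minimal sketch, with provider calls stubbed out:

```python
def cross_check(answers):
    """Accept a result for chaining only if every provider agrees;
    in real code each answer would come from a separate API call."""
    return answers[0] if len(set(answers)) == 1 else None

print(cross_check(["42", "42", "42"]))  # 42   -> valid, upload to chain
print(cross_check(["42", "42", "41"]))  # None -> discarded
```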
In short, realizing on-chain (or chain-usable) AI inference is a systems-level undertaking. A variety of solutions have gradually evolved, including homomorphic computation, oracle networks, and ZK verification; each has its own trade-offs, and they can be combined to support the AI agent applications described above.
AI agent protocol and development framework in Web3 ecosystem
To make it easier for developers to build AI agents, many protocols and frameworks are emerging. In addition to the platforms detailed earlier, such as Fetch.ai, Ocean, and SingularityNET, this section introduces several other representative projects:
Autonolas
Provides multi-agent collaboration infrastructure that lets developers combine multiple AI containers into decentralized services, focusing on agent orchestration and governance. For example, an oracle agent group in a decentralized prediction market can use Autonolas to coordinate updates, voting, and reward distribution, making it possible to build complex AI DAOs.
Oraichain
Claiming to be the first AI-centric blockchain, it has a built-in AI model library and execution engine that let smart contracts call AI APIs directly. Oraichain also has its own ORAI token to incentivize third parties to upload AI APIs and maintain their performance. A developer can invoke an AI service (such as image recognition) with a single line of contract code; an Oraichain node executes it, returns the result, and deducts the fee. This simplifies integrating AI agents into contracts.
OpenAGI
A decentralized AI collaboration alliance protocol. Unlike a simple marketplace, it lets multiple AI agents work together on a task and settle in tokens according to contribution. Its goal is to mimic how humans collaborate on complex AI projects: one party cleans the data, another trains the model, a third evaluates it, and together they produce a high-quality model, with each party rewarded in AGI tokens according to its contribution.
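A contribution-weighted settlement like the one described can be sketched in a few lines. The contribution scores and reward pool below are made-up numbers; a real protocol would derive them from on-chain records:

```python
def settle(pool, contributions):
    """Split an integer token pool proportionally to contribution,
    assigning any rounding remainder to the largest contributor."""
    total = sum(contributions.values())
    payouts = {who: pool * c // total for who, c in contributions.items()}
    remainder = pool - sum(payouts.values())
    top = max(contributions, key=contributions.get)
    payouts[top] += remainder
    return payouts

pool = 1000  # AGI tokens to distribute for the finished task
work = {"data_cleaning": 20, "training": 50, "evaluation": 30}
print(settle(pool, work))
# {'data_cleaning': 200, 'training': 500, 'evaluation': 300}
```

Integer division plus an explicit remainder keeps the payouts exactly equal to the pool, which matters when the "tokens" are indivisible on-chain units.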
Fetch.ai uAgents Framework
Fetch.ai also provides the open-source uAgents Python framework, which lets developers create lightweight agent entities locally. These agents connect easily to the Fetch network and interact with other agents and contracts. The framework has built-in communication, security, and common behavior patterns, so developers can focus on custom agent logic. With uAgents, for example, one can quickly develop an automated market-making agent that monitors decentralized exchange prices, places orders, and communicates with the Fetch chain to complete settlement.
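The watch-decide-settle pattern such an agent follows can be sketched in plain Python. Everything here (PriceFeed, Chain, the thresholds) is a mock for illustration, not the actual uAgents API:

```python
class PriceFeed:
    """Stand-in for a DEX price stream the agent monitors."""
    def __init__(self, prices):
        self.prices = iter(prices)
    def next_price(self):
        return next(self.prices, None)

class Chain:
    """Stand-in for on-chain settlement; just records placed orders."""
    def __init__(self):
        self.orders = []
    def settle(self, side, price):
        self.orders.append((side, price))

def run_agent(feed, chain, low=95.0, high=105.0):
    """Toy strategy: buy below `low`, sell above `high`, else wait."""
    while (price := feed.next_price()) is not None:
        if price < low:
            chain.settle("buy", price)
        elif price > high:
            chain.settle("sell", price)

chain = Chain()
run_agent(PriceFeed([100.0, 92.5, 110.0, 101.0]), chain)
print(chain.orders)  # [('buy', 92.5), ('sell', 110.0)]
```

In uAgents proper, the loop body would instead live in an event handler, and messaging and chain settlement would go through the framework.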
Mind Network
This encrypted-computing platform, mentioned above, focuses on data privacy protection for decentralized AI applications. It provides SDKs that let developers integrate FHE into AI agent applications, supporting functions such as training models on multi-party data and running inference on ciphertext. Developers can use Mind Network's API to encrypt sensitive data and invoke homomorphic operations with one click, without a deep cryptography background, lowering the threshold for building privacy-preserving AI agents.
The protocols and tools above enrich the Web3 AI technology stack. From the data layer through the computing layer to the application layer, each link is gradually building fertile ground for AI agents to grow: Ocean addresses data acquisition, Phala privacy computing, Fetch agent communication, Autonolas collaborative governance, and Mind Network encrypted training, and they are closely interlinked.
These frameworks are still iterating, but cross-project cooperation is already appearing (for example, Swarms and Mind Network cooperating to apply FHE to multi-agent collaboration). Mature AI agent applications in the future will likely be built on multiple underlying protocols, each serving its own function and seamlessly connected, just as today's web applications depend on the combination of TCP/IP, databases, cloud services, and more.
5. Summary and Future Outlook
The integration of AI agents and Web3 is creating a new paradigm in which autonomous intelligent entities participate in economic activity. In decentralized networks, AI agents play increasingly important roles as data analysts, decision makers, and executors, with applications ranging from finance to art, and from governance to identity.
Future Trends
More powerful large models (such as the GPT series) will be adapted to run on chain or on off-chain nodes through techniques such as pruning and distillation, becoming DAO advisors, personal assistants, and more
AI agents will be deeply involved in DAO governance, acting as decision support and even automatically executing simple decisions, making organizational operations more efficient and transparent
The ubiquitous on-chain AI economy will emerge, where agents can autonomously trade services and data, forming a self-sufficient intelligent agent market
For example, an agent could sell the model parameters it has learned to other agents that need them, or rent other agents to complete its own subtasks, with all settlement and contract execution completed automatically on the blockchain.
In this scenario we may see the "AI-DAO", a decentralized organization governed by AI agents that hold tokens and vote, and the "smart economy", an economic network formed by many AI agents competing and cooperating with one another. By then, AI will no longer be merely a human tool; it will be an autonomous participant in the blockchain ecosystem, creating value alongside humans.
Technical Challenges
To achieve the above vision, many technical challenges need to be overcome:
Computational efficiency
How to provide computing power comparable to centralized clouds without sacrificing decentralization is an urgent problem. Current decentralized computing networks still lag in performance and latency, and GPU shortages persist.
In the future, hardware-software co-design may be needed: dedicated AI acceleration chips in nodes, sharded parallelism to reduce coordination overhead, and more advanced model compression algorithms, so that AI agents can run smoothly even in resource-constrained environments.
Energy consumption
Although AI-driven consensus is expected to reduce blockchain energy consumption, a large number of continuously running AI models will still consume considerable energy overall. Pursuing green AI, such as driving AI nodes with idle electricity or renewable energy, or developing more energy-efficient algorithms, is a necessary future direction.
At the same time, the necessity of each task should be weighed to avoid wasting computing power.
Privacy Protection
As the data and transactions handled by AI agents grow more sensitive (agents may handle medical records and financial decisions), privacy and security matter more than ever. Beyond the continued maturing of technologies such as FHE and ZKP, we also need standardized AI security audits and governance mechanisms.
There is currently no unified security framework for AI agents. Industry-consensus security standards may emerge that codify the conduct of AI agents in Web3 and assign responsibility for information leaks. Preventing AI from getting out of control or being abused will also be a governance focus: we must ensure that the goals of these autonomous agents stay aligned with the interests of humans and the overall ecosystem, and prevent agents from pursuing profit by any means or being manipulated by attackers. This may require ethical constraints or emergency kill switches on agents, along with on-chain community oversight mechanisms.
Conclusion
Despite the challenges, the prospect of combining Web3 with AI remains exciting. Imagine a decentralized autonomous organization (DAO) composed of hundreds or thousands of AI agents and human members: AI handles daily operations and data analysis while humans handle high-level decisions and value guidance, each complementing the other's strengths.
These AI agents could serve as financial advisors, legal advisors, content creators, and more, always on call for the DAO. When decisions are needed, AI provides objective analysis and humans vote accordingly; once a decision passes, AI automatically executes the specifics, such as investment, recruitment, or marketing. Some startup DAOs are already trying such prototypes; for example, some have proposed that GPT-4 serve as an assistant for the initial review of DAO proposals, filtering out bad ones and summarizing the key points for human reference.
Looking further ahead, an AI-driven autonomous economy may emerge: AI agents trading data and services with one another, forming a true machine-to-machine market that accounts for a significant share of Web3 transaction volume. This effectively opens up a "fourth industry", an intelligence industry following agriculture, manufacturing, and the information industry, in which value is created and consumed by autonomous intelligent agents. Blockchain will act as the key link, providing the value settlement layer and trust foundation for these AI participants.
As the saying goes, "blockchain reshapes production relations, and artificial intelligence improves productivity"; their integration has the potential to trigger the next industrial revolution. In the decentralized networks of the future, humans, AI agents, and smart contracts will together form a complex collaborative network.
We need to keep researching and practicing safe and efficient mechanisms so that AI serves humanity well while we enjoy the open innovation that decentralization brings. The combination of AI agents and Web3 has only just begun, and its possibilities deserve our continued attention and exploration.
By: ChatGPT DeepSearch