
Author: Turbo Guo Reviewers: Mandy, Joshua
Kernel Ventures is a crypto venture capital fund driven by its research and development community, with over 70 early-stage investments focused on infrastructure, middleware, and dApps, especially ZK, Rollup, DEX, and modular blockchains, as well as the verticals that will serve billions of future crypto users, such as account abstraction, data availability, and scalability. For the past seven years, we have been committed to supporting the growth of core development communities and university blockchain associations around the world.
TLDR:
Modulus Labs implements verifiable AI by performing ML computation off-chain and generating a zero-knowledge proof (zkp) for it. This article re-examines that solution from an application perspective, analyzing the scenarios in which it is strictly needed and those in which demand is weak, and then extends the discussion to two public-chain-based AI ecosystem models, vertical and horizontal. The main points are:
Whether verifiable AI is needed depends on two things: whether on-chain data is modified, and whether fairness or privacy is involved.
When the AI does not affect on-chain state, it can act as a suggester, and people can judge the quality of the AI service by its actual results without verifying the computation process.
When the AI does affect on-chain state, if the service targets only individuals and does not touch privacy, users can still judge the quality of the AI service directly without checking the computation process.
When AI output affects fairness or personal privacy among multiple people (for example, using AI to evaluate community members and distribute rewards, using AI to optimize an AMM, or processing biometric data), people will want to audit the AI's computation. This is where verifiable AI may find product-market fit (PMF).
Vertical AI application ecosystem: since one end of a verifiable AI is a smart contract, verifiable AI applications, and even AI and native dapps, may be able to call each other trustlessly. This points to a composable AI application ecosystem.
Horizontal AI application ecosystem: the public chain system can handle payments to AI service providers, coordination of payment disputes, and matching of user needs with service offerings, giving users a freer and more decentralized AI service experience.
1. Introduction and application cases of Modulus Labs
1.1 Introduction and core solutions
Modulus Labs is an "on-chain" AI company that believes AI can significantly enhance the capabilities of smart contracts and make web3 applications more powerful. But there is a contradiction when AI is applied to web3: running AI requires a lot of computing power, and AI's off-chain computation is a black box, which fails web3's basic requirement of being trustworthy and verifiable.
Therefore, Modulus Labs draws on zk rollup's [off-chain processing + on-chain verification] design and proposes a verifiable AI architecture: the ML model runs off-chain, and a zkp is generated off-chain for the ML computation. Through this zkp, the architecture, weights, and inputs of the off-chain model can be verified. The zkp can also be published on-chain for verification by smart contracts, at which point AI and on-chain contracts can interact more trustlessly; this is what realizing "on-chain AI" means.
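The [off-chain computation + proof + on-chain verification] flow can be sketched as a toy pipeline. Everything here is illustrative: `run_model`, `generate_proof`, and `onchain_verify` are hypothetical stand-ins (a hash commitment plays the role of the zkp), not Modulus Labs' actual tooling.

```python
import hashlib

def run_model(weights, x):
    # Off-chain ML inference; a single linear layer stands in for a real model.
    return sum(w * xi for w, xi in zip(weights, x))

def generate_proof(weights, x, y):
    # Stand-in for zkp generation: a hash commitment over (weights, input, output).
    # A real zkp would prove the computation was done correctly without
    # requiring the verifier to re-run it.
    return hashlib.sha256(repr((weights, x, y)).encode()).hexdigest()

def onchain_verify(weights, x, y, proof):
    # Stand-in for the on-chain verifier contract: accept the claimed output
    # only if the proof checks out. A real contract verifies the zkp itself.
    return proof == generate_proof(weights, x, y)

weights, x = [0.5, -1.0], [2.0, 1.0]
y = run_model(weights, x)                    # 0.5*2.0 + (-1.0)*1.0 = 0.0
proof = generate_proof(weights, x, y)
assert onchain_verify(weights, x, y, proof)          # honest output accepted
assert not onchain_verify(weights, x, y + 1, proof)  # tampered output rejected
```

The design point is that the heavy computation stays off-chain while the chain only checks a small proof, which is exactly the zk rollup pattern the article describes.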
Based on this idea of verifiable AI, Modulus Labs has launched three "on-chain AI" applications so far and has also proposed many possible application scenarios.
1.2 Application cases
The first to launch was Rocky bot, an automated trading AI. Rocky is trained on historical data of the WETH/USDC trading pair and predicts future WETH trends from it. After making a trading decision, it generates a zkp for the decision-making (computation) process and sends a message to L1 to trigger the trade.
The second is the on-chain chess game "Leela vs the World". The two sides are an AI and humanity, and the board state is stored in a contract; players move by interacting with the contract through their wallets. The AI reads the new board state, decides its move, and generates a zkp for the entire computation. Both steps are completed on AWS, and the zkp is verified by a contract on-chain; once verification succeeds, the chess contract is called to play the move.
The third is an "on-chain" AI artist that has launched the NFT series zkMon. The core idea is that the AI generates NFTs, publishes them on-chain, and produces a zkp at the same time; users can check via the zkp whether their NFT was generated by the corresponding AI model.
Additionally, Modulus Labs mentioned some other use cases:
Use AI to evaluate personal on-chain data and other information, generate a personal reputation score, and publish a zkp for users to verify;
Use AI to optimize AMM performance and publish a zkp for users to verify;
Use verifiable AI to help privacy projects cope with regulatory pressure without exposing private data (for example, using ML to prove that a transaction is not money laundering without exposing information such as user addresses);
An AI oracle that also publishes zkp so everyone can check the reliability of its off-chain data;
An AI model competition in which contestants submit their own architectures and weights, the models are run on a unified test input, and a zkp is generated for each run; finally, the contract automatically sends the prize to the winner;
Worldcoin has said that in the future users may be able to download the iris-code model to a local device, run it locally, and generate a zkp. The on-chain contract can then use the zkp to verify that the user's iris code was generated by the correct model from a valid iris, while the biometric data never leaves the user's own device.
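The model-competition scenario in the list above can be sketched as a toy settlement contract: only zk-verified runs count as entries, and the prize is paid automatically to the best verified score. All names and values here are hypothetical illustrations, not any project's actual contract.

```python
class PrizeContract:
    """Toy stand-in for an on-chain competition contract."""
    def __init__(self, prize):
        self.prize = prize
        self.entries = []

    def submit(self, entrant, score, proof_valid):
        # Only entries whose zkp verified are admitted; a real contract
        # would check the proof itself rather than take a boolean.
        if proof_valid:
            self.entries.append((score, entrant))

    def settle(self):
        # Pay the prize to the highest verified score automatically.
        score, winner = max(self.entries)
        return winner, self.prize

c = PrizeContract(prize=100)
c.submit("alice", score=0.91, proof_valid=True)
c.submit("bob",   score=0.95, proof_valid=False)  # rejected: proof failed
c.submit("carol", score=0.88, proof_valid=True)
assert c.settle() == ("alice", 100)
```

Because the zkp binds each score to a real run of the submitted model on the shared test input, no trusted judge is needed to rule out cheating.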

Image source: Modulus Labs
1.3 Discuss different application scenarios based on the need for verifiable AI
1.3.1 Scenarios where verifiable AI may not be needed
In the Rocky bot scenario, users may have no need to verify the ML computation.
First, users lack the expertise to perform real verification. Even with verification tools, all a user sees is that they pressed a button and a pop-up told them the AI service was indeed produced by a certain model; they cannot personally confirm its authenticity.
Second, users have no motive to verify, because what they care about is whether the AI's returns are high. Users migrate when yields are low and always pick the best-performing model. In short, when users pursue only the end result, the verification process means little: they simply move to whichever service performs best.
One possible product form is that the AI acts only as a suggester while users execute trades themselves: people input their trading goals, the AI computes a better trading path or direction off-chain, and the user decides whether to execute it. They need not verify the model behind it; they just pick the product with the highest returns.
Another dangerous but very likely situation is that people simply do not care about control over their assets or about the AI's computation. When a robot that automatically makes money appears, people are even willing to entrust their money to it directly, just as it is common to deposit coins into a CEX or a traditional bank for wealth management. People do not care about the principles behind it, only how much money they end up with, or even just how much the project team shows them. Such a service may acquire a large number of users quickly, perhaps even iterating faster than projects that use verifiable AI.
Taking a step further back, if the AI does not modify on-chain state at all and only fetches on-chain data and pre-processes it for users, there is no need to generate a zkp for the computation. This type of application can be called a [data service]; a few examples:
The chatbox provided by Mest is a typical data service: users can explore their own on-chain data through question and answer, for example asking how much they have spent on NFTs;
ChainGPT is a multifunctional AI assistant that can interpret a smart contract before you trade, tell you whether you are trading with the correct pool, and warn you if the transaction is likely to be sandwiched or front-run. ChainGPT is also preparing AI news recommendations, prompt-to-image generation with NFT publishing, and other services;
RSS3 provides AIOP, which lets users select the on-chain data they want and pre-process it, making it easy to train AI on specific on-chain data;
DefiLlama and RSS3 have also built ChatGPT plug-ins that let users fetch on-chain data through conversation;
1.3.2 Scenarios that require verifiable AI
This article argues that scenarios involving multiple parties, fairness, or privacy require zkp for verification. Here we discuss several applications mentioned by Modulus Labs:
When a community distributes rewards based on AI-generated personal reputation, members will inevitably demand a review of the evaluation process, which is the ML computation;
AI-optimized AMMs involve the distribution of interests among multiple parties, so the AI computation also needs to be checked regularly;
When balancing privacy and regulation, zk is currently one of the better solutions; if a service provider uses ML to process private data, it needs to generate a zkp for the entire computation;
Since oracles have wide-ranging influence, if one is driven by AI, zkp needs to be generated regularly so anyone can check whether the AI is functioning normally;
In competitions, the public and the other participants need to check whether the ML computation meets the competition rules;
Among Worldcoin's potential use cases, protecting personal biometric data is likewise a hard requirement.
Generally speaking, when AI acts as a decision-maker whose output has wide impact and touches the fairness of many parties, people will demand a review of the decision process, or at least assurance that it contains no major problems; protecting personal privacy is an even more direct need.
Therefore, [whether the AI output modifies on-chain state] and [whether it affects fairness/privacy] are the two criteria for judging whether a verifiable AI solution is needed.
When the AI output does not modify on-chain state, the AI can act as a suggester, and people can judge the quality of the AI service by its results without verifying the computation;
When the AI output modifies on-chain state, if the service targets only individuals and does not touch privacy, users can still judge the quality of the AI service directly without checking the computation;
When the AI output directly affects fairness among multiple parties and the AI automatically modifies on-chain data, the community and the public need to examine the AI's decision-making;
When the data processed by the ML involves personal privacy, zk is also needed to protect privacy and meet regulatory requirements.
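The criteria above can be condensed into a small decision helper. This is just the article's own rubric encoded as code; the parameter names are illustrative, not a formal standard.

```python
def needs_verifiable_ai(modifies_chain_state: bool,
                        affects_multi_party_fairness: bool,
                        touches_private_data: bool) -> bool:
    # Privacy is the most direct need: zk both protects the data and
    # answers regulators, regardless of the other criteria.
    if touches_private_data:
        return True
    # No on-chain state change: the AI is only a suggester, judged by results.
    if not modifies_chain_state:
        return False
    # State changes that affect only the individual user also need no proof;
    # multi-party fairness is what creates the demand for verification.
    return affects_multi_party_fairness

assert needs_verifiable_ai(True, True, False)        # e.g. AI-optimized AMM
assert not needs_verifiable_ai(False, False, False)  # e.g. a pure data service
assert needs_verifiable_ai(False, False, True)       # e.g. iris data
```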

Image source: Kernel Ventures
2. Two AI ecological models based on public chains
In any case, Modulus Labs' solution is inspiring for how AI can combine with crypto and deliver practical value. But the public chain system can do more than enhance individual AI services; it also has the potential to support a new AI application ecosystem, in which the relationships between AI services, between AI services and users, and even among upstream and downstream links differ from Web2. We can summarize the potential AI application ecosystem models into two types: the vertical model and the horizontal model.
2.1 Vertical mode: Focus on achieving composability between AIs
The on-chain chess use case "Leela vs the World" has a special feature: people can bet on either the human or the AI, and tokens are distributed automatically after the game. Here, the zkp's significance is not only to let users verify the AI computation, but also to serve as a trust guarantee for triggering on-chain state transitions. With that guarantee, dapp-level composability becomes possible between AI services, and between AI and crypto-native dapps.

Image source: Kernel Ventures, referenced from Modulus Labs
The basic unit of composable AI is [off-chain ML model - zkp generation - on-chain verification contract - main contract]. This unit borrows the framework of "Leela vs the World", but the actual architecture of a single AI dapp may differ from what the figure above shows.
First, the chess scenario needs a contract to hold the game state, but in reality an AI service may not need an on-chain contract at all. From the perspective of a composable AI architecture, however, if the main business is recorded through contracts, it may be easier for other dapps to compose with it.
Second, the main contract does not necessarily need to feed back into the AI dapp's own ML model, because an AI dapp's influence may be one-way: after the ML model finishes its computation, it can trigger its own business contract, and that contract is then called by other dapps.
Viewed more broadly, calls between contracts are calls between different web3 applications: calls on personal identity, assets, financial services, and even social information. We can imagine a concrete combination of AI applications:
Worldcoin uses ML to generate iris codes and produces zkp for personal iris data;
A reputation-scoring AI application first verifies that a real person (backed by iris data) is behind a DID, then allocates an NFT to the user based on their on-chain reputation;
A lending service adjusts the user's borrowing share based on the NFTs the user owns;
......
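The chain of calls above can be sketched as a toy pipeline. Every class, method, and scoring formula below is hypothetical, and real composable AI would run as contracts on-chain with zkp-gated state changes rather than as Python objects.

```python
class IrisRegistry:
    """Stand-in for a Worldcoin-style contract: records proof-of-personhood."""
    def __init__(self):
        self.verified = set()
    def register(self, did, proof_ok):
        # proof_ok stands in for verifying the iris-code zkp on-chain.
        if proof_ok:
            self.verified.add(did)

class ReputationNFT:
    """Stand-in for a reputation-scoring AI dapp composing on the registry."""
    def __init__(self, registry):
        self.registry = registry
        self.scores = {}
    def mint(self, did, onchain_activity):
        # Composability: trustlessly consult another dapp's contract state.
        if did not in self.registry.verified:
            return False
        self.scores[did] = min(100, 10 * onchain_activity)  # toy scoring rule
        return True

class LendingService:
    """Stand-in for a lending dapp composing on the reputation NFT."""
    def __init__(self, reputation):
        self.reputation = reputation
    def credit_line(self, did, base=1000):
        return base * (1 + self.reputation.scores.get(did, 0) / 100)

registry = IrisRegistry()
reputation = ReputationNFT(registry)
lending = LendingService(reputation)
registry.register("did:alice", proof_ok=True)
reputation.mint("did:alice", onchain_activity=5)
assert lending.credit_line("did:alice") == 1500.0
```

Each service only reads the previous contract's state and never has to trust the off-chain AI behind it, because the zkp already guaranteed the state transition.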
Interaction between AIs under a public chain framework is not a new topic. Loaf, a contributor to the full-chain game ecosystem Realms, once proposed that AI NPCs could trade with each other like players, so that the whole economic system could self-optimize and run automatically. AI Arena has built an AI auto-battle game: users first buy an NFT representing a combat robot backed by an AI model, play the game themselves, then feed the data to the AI for imitation learning; when the user judges the AI strong enough, it can be placed in the arena to fight other AIs automatically. Modulus Labs mentioned that AI Arena hopes to turn all of these AIs into verifiable AIs. In both cases we can see the possibility of AIs interacting with each other and directly modifying on-chain data as they do.
However, many issues remain in the concrete implementation of composable AI, such as how different dapps can use each other's zkp or verification contracts. Fortunately, there are many excellent projects in the zk field; for example, RISC Zero has made a lot of progress in performing complex computation off-chain and publishing zkp on-chain. Perhaps a suitable solution will come together one day.
2.2 Horizontal model: focusing on realizing a decentralized AI service platform
In this regard, we mainly introduce SAKSHI, a decentralized AI platform jointly proposed by researchers from Princeton, Tsinghua University, the University of Illinois Urbana-Champaign, the Hong Kong University of Science and Technology, Witness Chain, and EigenLayer. Its core goal is to let users obtain AI services in a more decentralized way, making the whole process more trustless and automated.

Image source: SAKSHI
SAKSHI's architecture can be divided into six layers: the service layer, control layer, transaction layer, proof layer, economic layer, and marketplace.
The marketplace is the layer closest to users. Aggregators in the marketplace provide services to users on behalf of different AI providers; users place orders through an aggregator and reach an agreement with it on service quality and price (the agreement is called an SLA, a service-level agreement).
Next, the service layer exposes an API to the client side; the client sends an ML inference request to the aggregator, and the request is routed to the server of the matched AI service provider (the routing is part of the control layer). The service and control layers thus resemble a multi-server web2 service, except that the servers are operated by different entities, each bound to the aggregator through a previously signed SLA.
SLAs are deployed on-chain as smart contracts, all belonging to the transaction layer (in this solution, deployed on Witness Chain). The transaction layer also records the current status of each service order and coordinates users, aggregators, and service providers in handling payment disputes.
So that the transaction layer has evidence to rely on when handling disputes, the proof layer verifies whether the service provider uses the model in accordance with the SLA. SAKSHI chose not to generate zkp for the ML computation; instead it adopts the idea of optimistic proofs, building a network of challenger nodes to spot-check the service, with node incentives borne by Witness Chain.
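The optimistic-proof idea can be sketched as follows. `agreed_model`, `provider_response`, and the dispute check are illustrative stand-ins for SAKSHI's challenger mechanism, not its actual protocol: results are assumed honest unless a challenger re-runs the SLA-specified model and finds a mismatch.

```python
def agreed_model(x):
    # The model the SLA commits the provider to; a toy affine function here.
    return 2 * x + 1

def provider_response(x, honest=True):
    # The provider's claimed output; a dishonest provider deviates.
    return agreed_model(x) if honest else agreed_model(x) + 7

def challenge(x, claimed_output):
    # A challenger node recomputes the agreed model and raises a dispute
    # (returns True) when the claimed output does not match.
    return claimed_output != agreed_model(x)

assert challenge(3, provider_response(3, honest=False))      # dispute raised
assert not challenge(3, provider_response(3, honest=True))   # no dispute
```

The trade-off versus zkp is familiar from rollups: the optimistic scheme avoids proving every inference but relies on at least one honest, incentivized challenger re-running the work.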
Although the SLAs and the challenger network both live on Witness Chain, in SAKSHI's plan Witness Chain does not intend to rely on its own native token incentives for standalone security; instead it borrows Ethereum's security through EigenLayer, so the entire economic layer in effect rests on EigenLayer.
As can be seen, SAKSHI sits between AI service providers and users, organizing different AIs in a decentralized way to serve users, which makes it more of a horizontal solution. Its core is letting AI providers focus on managing their own off-chain model computation, while the matching of user needs to model services, service payment, and service-quality verification are all completed through on-chain protocols, with payment disputes resolved automatically where possible. Of course, SAKSHI is still at the theoretical stage, and many implementation details remain to be worked out.
3. Future prospects
Whether composable AI or a decentralized AI platform, public-chain-based AI ecosystem models seem to share something: AI service providers do not interface with users directly; they only need to provide ML models and compute off-chain, while payments, dispute resolution, and the matching of user needs to services are handled by decentralized protocols. As trustless infrastructure, the public chain reduces friction between service providers and users, and users gain more autonomy.
Although the advantages of using public chains as application platforms are a cliché, they do apply to AI services. What distinguishes AI applications from existing dapps is that AI cannot put all of its computation on-chain, so zk or optimistic proofs are needed to connect AI services to the public chain system in a more trustworthy way.
With the rollout of experience optimizations such as account abstraction, users may no longer even notice mnemonics, chains, or gas. That brings the public chain ecosystem close to web2 in experience, while offering users more freedom and composability than web2 services can; this makes a public-chain-based AI application ecosystem worth looking forward to.
References
Conversation with Nil Foundation founder: ZK technology may be misused, and public traceability is not the original intention of encryption: https://www.techflowpost.com/article/detail_12647.html
IOSG Weekly Brief #187 | Lighting up the spark of blockchain: LLM opens up new possibilities for blockchain interaction: https://mp.weixin.qq.com/s/sVIBF6iPXwhamlKEvjH19Q
Chapter 1: How to Put Your AI On-Chain:https://medium.com/coinmonks/chapter-1-how-to-put-your-ai-on-chain-8af2db013c6b
Chapter 4: Blockchains that Self-Improve:https://medium.com/@ModulusLabs/chapter-4-blockchains-that-self-improve-e9716c041f36
Chapter 6: The World’s 1st On-Chain AI Game:https://medium.com/@ModulusLabs/chapter-6-leela-vs-the-world-the-worlds-1st-on-chain-ai-game-17ea299a06b6
AN INTRODUCTION TO ZERO-KNOWLEDGE MACHINE LEARNING (ZKML):https://worldcoin.org/blog/engineering/intro-to-zkml#zkml-use-cases
Zero-Knowledge Proof: Applications and Use Cases:https://blog.chain.link/zero-knowledge-proof-use-cases/
SAKSHI: Decentralized AI Platforms:https://arxiv.org/pdf/2307.16562.pdf
Honey, I Shrunk the Proof: Enabling on-chain verification for RISC Zero & Bonsai:https://www.risczero.com/news/on-chain-verification