The official launch of the Chainbase AVS mainnet deeply interlinks the different layers of Chainbase, bringing verification and processing capabilities to the Chainbase data network and marking a new opportunity for the full launch of its data ecosystem.
On December 4th, the chain-native Web3 AI data network Chainbase officially launched the Chainbase AVS mainnet, simultaneously publishing the list of the first batch of 20 AVS node operators.
Chainbase AVS is the first application-oriented mainnet AVS in the EigenLayer ecosystem. Chainbase adopts a four-layer network architecture in which Chainbase AVS, at the execution layer, integrates with EigenLayer to ensure the scalability and security of data processing.
The execution layer uses the Chainbase Virtual Machine (CVM) and the Chainbase Manuscripts data framework to execute complex workflows seamlessly, providing strong support for large-scale AI data processing.
In fact, since Chainbase launched the AVS testnet in July, the network has attracted over 2,100 registered node operators and more than 600,000 ETH in staked pledges, and it currently serves over 600 million API calls per day as one of the most heavily used data stacks. Its earlier Chainbase Genesis Odyssey event connected 31,161,249 wallets, a clear sign of market recognition of Chainbase.
The release of the new AVS mainnet not only marks a further step in the buildout of the Chainbase Web3 full-chain data network ecosystem but also lays an important foundation for the launch of the Chainbase mainnet itself.
Meanwhile, Chainbase AVS is the first mainnet AVS in the EigenLayer ecosystem to take data intelligence as its core application scenario. This not only expands the application field of EigenLayer AVS but also lays the foundation for deeper integration with more data-related sectors.

Chainbase: A chain-native distributed full-chain data network designed for Web3 and the AI economy.
AI technology has grown explosively in recent years, enabling a leap in human productivity and efficiency.
Computing power, algorithms, and data are the three driving forces of AI development. Computing power is no longer scarce, thanks to continual hardware upgrades and ever-larger computing clusters coming to market, and algorithms keep improving through innovation; the supply of usable, uniformly formatted data for AI training, however, is shrinking. For example:
A lack of sufficient open data and restrictions on data access make it increasingly difficult for developers and researchers to obtain the datasets they need, directly impeding the development and application of AI models.
Captured data that is missing or contaminated cannot be guaranteed to be accurate, complete, and untampered, and is becoming a significant source of error that misleads AI models into inaccurate or unreliable outcomes.
Even high-quality data often lacks a uniform format and requires appropriate preprocessing; without it, the data is hard to apply directly to AI model training, significantly increasing the workload and cost of training.
In this context, Chainbase is building a Web3-native system to solve the accessibility, integrity, and usability problems that AI development faces, giving AI a supply of high-quality, structured, and trustworthy data. Through its technical architecture and open ecosystem, Chainbase aims to redefine how developers and users acquire, manage, and use data, offering a comprehensive and efficient solution while rewarding the many kinds of contributors who participate.
A transparent, open, and collaborative Web3 data ecosystem
As mentioned above, the data issues faced in the AI field can broadly be summarized as issues of accessibility, integrity, and usability.
An ideal Web3 data system must therefore be open and permissionless while also guaranteeing standardized, high-quality, and comprehensive data. The Chainbase network is built on the principle of decentralization, ensuring that data is never controlled or processed by a single entity, and establishes four core guidelines: open source, collaboration, incentive mechanisms, and AI readiness.
Chainbase has built a blockchain-based distributed data ecosystem for Web3 that lets any data consumer retrieve specific data from the network, giving all developers and researchers open-source access to the data that AI capture requires.
Meanwhile, Chainbase is building a system centered on collaboration and incentives that delivers comprehensive, high-quality, and uniformly standardized datasets, ensuring data integrity and availability while avoiding control or processing by any single entity.
Chainbase allows users to become data providers in the network and standardizes and processes data through Manuscripts, which simplify data formatting and standardization so that AI systems can seamlessly consume the data they need. Providers can keep contributing to the network's data decoding by sharing data, verifying data, or supplying computing power, and these contributions are rewarded in the $C token, incentivizing more users to participate in data provision and sharing.
Four-Layer Architecture Design
To ensure the system runs efficiently, Chainbase has designed a four-layer architecture consisting of the Data Accessibility Layer, the Consensus Layer, the Execution Layer, and the Co-processing Layer. Each layer encapsulates distinct functionality and roles while integrating seamlessly with the others. On this layered architecture, Chainbase can build a decentralized environment offering collaborative knowledge sharing, strong execution capabilities, consensus-driven data verification, and high-quality data accessibility.
Data Accessibility Layer
The Data Accessibility Layer is the data source and foundation of Chainbase, responsible for collecting, verifying, and storing on-chain and off-chain data while serving the data-processing needs of Manuscripts.
This layer encompasses not only on-chain data, including recorded transaction history, staking information, and metadata, but also off-chain data stored in decentralized storage systems, which addresses scalability and privacy concerns and suits large raw datasets, program code, and complex AI models.
This layer obtains information through a decentralized network, ensuring dispersed and diverse data sources and preventing control or manipulation by a single entity. Cryptographic techniques such as zero-knowledge proofs (ZKPs) verify data provenance and protect sensitive information, while consensus mechanisms establish the credibility of data before it is permanently stored, laying an early foundation for data compliance and integrity.
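As a rough illustration of that admission gate, the sketch below checks a record's integrity and requires a quorum of validator attestations before storage. Every name here is hypothetical, the two-thirds quorum is an illustrative assumption, and the actual ZKP-based source verification is elided:

```python
import hashlib
from dataclasses import dataclass

QUORUM = 2 / 3  # hypothetical share of attestations required before storage


@dataclass
class DataRecord:
    source_id: str      # identifier of the data provider
    payload: bytes      # raw on-chain or off-chain data
    claimed_hash: str   # hash the provider claims for the payload


def verify_integrity(record: DataRecord) -> bool:
    """Recompute the payload hash and compare it with the provider's claim."""
    return hashlib.sha256(record.payload).hexdigest() == record.claimed_hash


def admit_to_storage(record: DataRecord, attestations: int, validators: int) -> bool:
    """Store a record only if it is internally consistent and a quorum
    of validators has attested to its source."""
    if not verify_integrity(record):
        return False
    return attestations / validators >= QUORUM


record = DataRecord("provider-42", b"block 123 tx data",
                    hashlib.sha256(b"block 123 tx data").hexdigest())
print(admit_to_storage(record, attestations=8, validators=10))  # True
```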
Consensus Layer
As a distributed on-chain ecosystem, Chainbase needs to ensure that data is complete, secure, and trustworthy, and that all transactions and data states are verified and recognized by network participants.
Chainbase builds its consensus algorithm on CometBFT, which offers strong resistance to network failures and malicious attacks and achieves rapid finality, ensuring instant data updates. At the same time, Chainbase introduces economic incentives into the system through a DPoS consensus mechanism bound to validators.
Validators maintain the integrity of the blockchain, verify data operations, and ensure consistency by staking $C tokens, while delegators grow the network's total stake by delegating $C to validators they support or trust, improving network security and strengthening the system's economic resilience.
Validators are rewarded for their crucial role in ensuring the accuracy and stability of the network. The consensus layer maintains a strong and trustworthy decentralized framework by aligning economic incentives with network security.
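A minimal sketch of the DPoS idea described above, with hypothetical validator names and amounts rather than Chainbase's actual implementation: delegators bond $C to validators, and block proposers are chosen with probability proportional to total stake:

```python
import random

# Hypothetical staking ledger: validator -> self-stake plus delegated $C.
stakes = {"validator-a": 0, "validator-b": 0, "validator-c": 0}


def delegate(validator: str, amount: int) -> None:
    """A delegator bonds $C tokens to a validator it trusts,
    growing that validator's total stake."""
    stakes[validator] += amount


def pick_block_proposer(rng: random.Random) -> str:
    """Choose the next proposer with probability proportional to stake,
    the core selection idea behind delegated proof of stake."""
    validators, weights = zip(*stakes.items())
    return rng.choices(validators, weights=weights, k=1)[0]


delegate("validator-a", 500)   # self-stake
delegate("validator-a", 300)   # a delegator backs validator-a
delegate("validator-b", 600)
delegate("validator-c", 100)

rng = random.Random(7)
print(pick_block_proposer(rng))  # higher-staked validators win more often
```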
Execution Layer
The execution layer is the computational core of Chainbase, responsible for executing Manuscripts and managing large-scale data processing tasks. Its goal is to ensure the efficient, secure, and scalable processing of Manuscripts, enabling developers to execute complex AI tasks while maintaining high performance and reliability.
At the core of the execution layer is the Chainbase Virtual Machine (CVM), specifically optimized for processing Manuscripts and executing data workflows. The CVM uses a parallel architecture: different parts of a dataset are processed simultaneously and multiple tasks run concurrently, optimizing resource utilization through data and task parallelism.
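The sketch below illustrates the data-parallel pattern the CVM is described as using; the chunking scheme and the toy normalization step are illustrative assumptions, not CVM code:

```python
from concurrent.futures import ProcessPoolExecutor


def normalize_chunk(chunk: list[dict]) -> list[dict]:
    """Toy per-chunk transformation: lowercase addresses and drop
    empty records, standing in for one Manuscript step."""
    return [{**row, "address": row["address"].lower()}
            for row in chunk if row.get("address")]


def parallel_process(dataset: list[dict], workers: int = 4) -> list[dict]:
    """Split the dataset and process the parts simultaneously,
    mirroring the data-parallel design described above."""
    size = max(1, len(dataset) // workers)
    chunks = [dataset[i:i + size] for i in range(0, len(dataset), size)]
    out: list[dict] = []
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for part in pool.map(normalize_chunk, chunks):
            out.extend(part)
    return out


if __name__ == "__main__":  # guard required for process pools on some platforms
    data = [{"address": f"0xAbC{i}"} for i in range(10)]
    print(parallel_process(data))
```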
Meanwhile, the CVM also lets node operators contribute computing resources, supporting smoother network operation, and the system rewards contributing nodes according to their workload and performance.
With the launch of the Chainbase AVS mainnet, the execution layer officially goes live on EigenLayer and can work in concert with Chainbase AVS.

Co-processing Layer
The top layer of the Chainbase network is the co-processing layer, where users with data-processing and AI expertise contribute and collaborate. Its core concept, 'Manuscripts', refers to programmable scripts used to define and execute data processing tasks. Developers can use Manuscripts to standardize data formats and processing flows, transforming raw data into a unified format usable by AI.
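As a conceptual sketch only (the real Manuscripts framework is not shown, and every name here is hypothetical), a Manuscript can be pictured as an ordered, reusable pipeline of processing steps that turns raw records into one unified schema:

```python
from typing import Callable, Iterable

Step = Callable[[dict], dict]


class Manuscript:
    """Minimal stand-in for a Manuscript: a named, ordered,
    reusable pipeline of data-processing steps."""

    def __init__(self, name: str, steps: list[Step]):
        self.name = name
        self.steps = steps

    def run(self, records: Iterable[dict]) -> list[dict]:
        out = []
        for record in records:
            for step in self.steps:   # apply each step in order
                record = step(record)
            out.append(record)
        return out


# Two toy steps that normalize raw transfer events into one schema.
def rename_fields(r: dict) -> dict:
    return {"tx": r["hash"], "value_wei": int(r["value"])}


def add_value_eth(r: dict) -> dict:
    return {**r, "value_eth": r["value_wei"] / 10**18}


manuscript = Manuscript("erc20-transfers-v1", [rename_fields, add_value_eth])
raw = [{"hash": "0xabc", "value": "1500000000000000000"}]
print(manuscript.run(raw))  # [{'tx': '0xabc', ..., 'value_eth': 1.5}]
```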
Manuscripts are not merely a data-formatting mechanism; they can also be treated as a tradable resource, supporting the construction of a creator-economy ecosystem.
Contributors can assetize their work by packaging it into Manuscripts and circulating it in the network, building a creator economy on this basis. In this economic system, the $C token plays the key role as the medium of payment, settlement, staking, and governance.
The co-processing layer not only promotes collaborative knowledge sharing but also establishes a transparent economic system that compensates contributors fairly, incentivizes innovation, and enhances the overall utility of the network. It also lays the foundation for Chainbase to build a complete, comprehensive, and usable data system.
Chainbase AVS Mainnet: A new opportunity for the comprehensive launch of the data ecosystem.
The official launch of the Chainbase AVS mainnet deeply interlinks the different layers of Chainbase, bringing verification and processing capabilities to the Chainbase data network.
By providing a range of powerful features, it empowers developers through Chainbase and enables seamless data processing for decentralized applications and AI-driven solutions, marking a new opportunity for the full launch of the data ecosystem.
With the Chainbase AVS mainnet live, developers can use Manuscripts to convert raw blockchain data into standardized, AI-ready formats and, in combination with advanced models such as Google Gemini and OpenAI's GPT series, implement AI-ready workflows. This simplifies the preparation and use of blockchain data, efficiently supporting advanced applications and AI inference.
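The last hop of such a workflow might look like the sketch below, which feeds Manuscript-standardized records to a chat model via the OpenAI Python SDK; the Chainbase-side retrieval is elided, the records are made up, and the model choice is arbitrary:

```python
# pip install openai; requires OPENAI_API_KEY in the environment.
import json

from openai import OpenAI

client = OpenAI()

# Pretend these came out of a Manuscript: uniform, AI-ready records.
standardized = [
    {"tx": "0xabc", "value_eth": 1.5, "token": "USDC"},
    {"tx": "0xdef", "value_eth": 340.0, "token": "USDC"},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would do here
    messages=[
        {"role": "system",
         "content": "You analyze standardized on-chain transfer data."},
        {"role": "user",
         "content": "Flag unusually large transfers:\n"
                    + json.dumps(standardized)},
    ],
)
print(response.choices[0].message.content)
```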
At the same time, Chainbase has officially introduced the $C token as part of its incentive system. Manuscript creators, data providers, network operators, and validators can all earn rewards for their contributions to the network, supporting the build-out of an AGI economy, opening new revenue models, ensuring the generation of high-quality data, and encouraging active participation from all stakeholders.
Additionally, the Chainbase execution layer officially goes live on EigenLayer and can work in concert with Chainbase AVS. By leveraging the decentralized architecture EigenLayer supports, node operators not only strengthen network security but also demonstrate their capacity for complex workloads such as real-time AI data processing.
With the CVM at the core of its data processing workflows, developers gain seamless execution of complex Manuscripts and a reliable, scalable environment for efficiently managing data-intensive tasks, which is crucial for large-scale AI data processing.
From another perspective, Chainbase AVS plays a crucial role in expanding the application scenarios of the EigenLayer ecosystem.
As the first mainnet AVS in the EigenLayer ecosystem focused on data intelligence, Chainbase AVS is dedicated to achieving data standardization and AI readiness through decentralized data processing and AI application optimization. This not only expands the application field of EigenLayer AVS but also establishes the foundation for deeper integration of more data-related industries with the EigenLayer system.
At the same time, Chainbase AVS promotes deeper integration of the EigenLayer system with the AI sector, further extending its applications beyond the crypto industry itself.
Reportedly, in the next phase following the AVS mainnet launch, Chainbase plans to expand its data sources further, incorporating real-time IoT data streams and a variety of other datasets, improving developers' data accessibility, broadening the applicability and practicality of its data infrastructure across industries, and becoming an ever more important cornerstone of technological development.