Original Title: Defining New Primitives in an AI-Native Economy
Introduction
Decentralized Finance (DeFi) ignited a story of exponential growth through a series of simple yet powerful economic primitives, transforming blockchain networks into global, permissionless markets that radically disrupt traditional finance. As DeFi rose, a few key metrics became the universal language of value: Total Value Locked (TVL), Annual Percentage Yield (APY/APR), and liquidity. These concise metrics sparked participation and trust. For instance, DeFi's TVL surged 14-fold in 2020 and then quadrupled again in 2021, peaking above $112 billion. High yields (some platforms claimed APYs as high as 3,000% during the liquidity mining frenzy) attracted capital, while deep liquidity pools signified lower slippage and more efficient markets. In short, TVL answers 'how much capital is involved', APR answers 'how much yield can be earned', and liquidity indicates 'how easily assets can be traded'. Despite their flaws, these metrics built a financial ecosystem worth tens of billions of dollars from scratch. By translating user participation into direct financial opportunities, DeFi created a self-reinforcing adoption flywheel that drove rapid proliferation and widespread participation.
Today, AI stands at a similar crossroads. But unlike DeFi, the current AI narrative is dominated by large general-purpose models trained on massive internet datasets. These models often struggle to deliver effective results in niche domains, specialized tasks, or personalized use cases. Their 'one-size-fits-all' approach is powerful yet fragile, universal yet misaligned. This paradigm needs a shift: the next era of AI should be defined not by the scale or generality of models but by a bottom-up focus on smaller, highly specialized models. Such customized AI requires a new kind of data: high-quality, human-aligned, and domain-specific. But acquiring such data is not as simple as web crawling; it requires proactive, conscious contributions from individuals, domain experts, and communities.
To promote this specialized, human-aligned era of AI, we need to build an incentive flywheel similar to what DeFi designed for finance. This means introducing new AI-native primitives to measure data quality, model performance, agent reliability, and alignment incentives - these metrics should directly reflect the true value of data as an asset (rather than just an input).
This article will explore these new primitives, which could form the pillars of an AI-native economy. We will illustrate how AI can thrive if the right economic infrastructure is in place - one that generates high-quality data, reasonably incentivizes its creation and use, and puts individuals at the center. We will also examine platforms like LazAI that are pioneering these AI-native frameworks, leading a new paradigm of pricing and rewarding data to fuel the next leap in AI innovation.
The Incentive Flywheel of DeFi: TVL, Yield, and Liquidity - A Quick Review
The rise of DeFi is no coincidence; its design makes participation both profitable and transparent. Key metrics like Total Value Locked (TVL), Annual Percentage Yield (APY/APR), and liquidity are not just numbers, but primitives that align user behavior with network growth. These metrics together create a virtuous cycle that attracts users and capital, thus driving further innovation.
Total Value Locked (TVL): TVL measures the total capital deposited in DeFi protocols (such as lending pools and liquidity pools), becoming synonymous with the 'market value' of DeFi projects. Rapid growth of TVL is seen as a sign of user trust and protocol health. For example, during the DeFi boom from 2020 to 2021, TVL surged from under $10 billion to over $100 billion, and by 2023 surpassed $150 billion, demonstrating the scale of value participants are willing to lock into decentralized applications. High TVL creates a gravitational effect: more capital means higher liquidity and stability, attracting more users seeking opportunities. Critics point out that blindly chasing TVL may lead to protocols offering unsustainable incentives (essentially 'buying' TVL), thus obscuring inefficiency issues, but without TVL, early DeFi narratives would lack concrete ways to track adoption.
Annual Percentage Yield (APY/APR): Yield promises to translate participation into tangible opportunities. DeFi protocols began offering astonishing APRs for liquidity or capital providers. For example, Compound launched the COMP token in mid-2020, pioneering the liquidity mining model - rewarding liquidity providers with governance tokens. This innovation triggered a wave of activity. Using the platform became not just a service but an investment. High APYs attracted yield seekers, further driving up TVL. This reward mechanism incentivized early adopters with substantial returns, propelling network growth.
Liquidity: In finance, liquidity refers to the ability to transfer assets without causing significant price volatility - a cornerstone of healthy markets. In DeFi, liquidity is often initiated through liquidity mining schemes (users earn tokens for providing liquidity). The depth of decentralized exchanges and lending pools means users can trade or borrow with low friction, thereby improving user experience. High liquidity brings higher trading volumes and utility, which in turn attracts more liquidity - a classic positive feedback loop. It also supports composability: developers can build new products (derivatives, aggregators, etc.) on top of liquid markets, driving innovation. Thus, liquidity becomes the lifeblood of the network, driving the emergence of adoption and new services.
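As an aside on how the yield figures above relate, APR quotes a simple rate while APY reflects compounding. A minimal sketch of the standard conversion (the function name and the daily-compounding default are our assumptions for illustration):

```python
def apr_to_apy(apr, periods_per_year=365):
    """Convert a simple annual rate (APR) to the effective annual yield (APY)
    realized under periodic compounding."""
    return (1 + apr / periods_per_year) ** periods_per_year - 1

# A 10% APR compounded daily yields roughly 10.52% APY,
# which is why quoted APYs often exceed the underlying APR.
daily = apr_to_apy(0.10)
```

This gap between quoted APR and realized APY is part of why headline yield numbers were such an effective marketing primitive during the liquidity mining era.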
These primitives together form a powerful incentive flywheel. Participants who create value by locking assets or providing liquidity are immediately rewarded (through high yields and token incentives), encouraging further participation. This transforms individual engagement into widespread opportunity - users earn profits and governance influence - which in turn spawns network effects, attracting thousands of users. The results are remarkable: by 2024, the number of DeFi users had exceeded 10 million, with the sector's value growing nearly 30-fold in a few years. Clearly, large-scale incentive alignment - converting users into stakeholders - is the key to DeFi's exponential rise.
The Current Deficiencies in the AI Economy
If DeFi demonstrated how bottom-up participation and incentive alignment could spark a financial revolution, the current AI economy still lacks foundational primitives to support a similar shift. Today's AI is dominated by large general models trained on massive crawled datasets. These foundational models are impressive in scale but designed to address all problems, often failing to serve anyone particularly well. Their 'one-size-fits-all' architecture struggles to adapt to niche domains, cultural differences, or individual preferences, leading to fragile outputs, blind spots, and a growing disconnection from real-world needs.
The definition of the next generation of AI will no longer just be scale, but also contextual understanding ability - the ability of models to understand and serve specific domains, professional communities, and diverse human perspectives. However, this contextual intelligence requires different inputs: high-quality, human-aligned data. And this is precisely what is currently lacking. There is no widely recognized mechanism to measure, identify, value, or prioritize such data, nor is there an open process for individuals, communities, or domain experts to contribute their perspectives and improve the intelligent systems that increasingly impact their lives. Thus, value remains concentrated in the hands of a few infrastructure providers, disconnecting the masses from the upward potential of the AI economy. Only by designing new primitives that can discover, validate, and reward high-value contributions (data, feedback, alignment signals) can we unlock the participatory growth cycle that DeFi relies on for prosperity.
In short, we must ask the same questions of AI: How should we measure the value created? How do we build a self-reinforcing adoption flywheel that drives bottom-up, individual-centered data participation?
To unlock an 'AI-native economy' similar to DeFi, we need to define new primitives that transform participation into opportunities for AI, catalyzing network effects unseen in the field to date.
AI Native Tech Stack: New Primitives for a New Economy
We are no longer just transferring tokens between wallets; we are feeding data into models, turning model outputs into decisions, and letting AI agents act on our behalf. This requires new metrics and primitives that quantify intelligence and alignment, just as DeFi's metrics quantify capital. For instance, LazAI is building a next-generation blockchain network that addresses AI data alignment by introducing new asset standards for AI data, model behavior, and agent interactions.
The following outlines several key primitives defining on-chain AI economic value:
Verifiable Data (the New 'Liquidity'): Data is to AI what liquidity is to DeFi - the lifeblood of the system. In AI (especially large models), having the right data is crucial. However, raw data may be of poor quality or misleading, so the network needs verifiable, high-quality data on-chain. A potential primitive here is 'Proof of Data (PoD)/Proof of Data Value (PoDV)', which would measure the value of data contributions based not just on quantity but on quality and impact on AI performance. It can be seen as a counterpart to liquidity mining: contributors providing useful data (or labels/feedback) are rewarded in proportion to the value their data brings. Early designs of such systems are already taking shape. For example, one blockchain project's Proof of Data (PoD) consensus treats data as the primary resource for validation (analogous to energy in Proof of Work or capital in Proof of Stake); in that system, nodes are rewarded based on the quantity, quality, and relevance of the data they contribute.
Expanding this to a general AI economy, we might see 'Total Data Value Locked (TDVL)' as a metric: an aggregate measure of all valuable data in the network, weighted by verifiability and usefulness. Verified data pools could even be traded like liquidity pools - for example, a verified medical-imaging pool for on-chain diagnostic AI could have quantifiable value and utility. Data provenance (understanding the source and modification history of data) will be a key part of this metric, ensuring that the data fed into AI models is trustworthy and traceable. Essentially, if liquidity is about usable capital, verifiable data is about usable knowledge. Metrics like Proof of Data Value (PoDV) could capture the amount of useful knowledge locked in the network, while on-chain data anchoring via LazAI's Data Anchoring Tokens (DAT) makes data liquidity a measurable and incentivizable economic layer.
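To make the TDVL idea concrete, here is a minimal sketch of how such an aggregate might be computed. The `DataAsset` fields and the multiplicative weighting scheme are illustrative assumptions on our part, not a LazAI specification:

```python
from dataclasses import dataclass

@dataclass
class DataAsset:
    size_gb: float
    unit_price: float     # assumed market price per GB, in tokens
    verifiability: float  # 0..1: share of records with verified provenance
    usefulness: float     # 0..1: measured lift on downstream model performance

def total_data_value_locked(assets):
    """Hypothetical TDVL: raw market value of each pool,
    discounted by its verifiability and usefulness scores."""
    return sum(a.size_gb * a.unit_price * a.verifiability * a.usefulness
               for a in assets)

pool = [
    DataAsset(size_gb=100, unit_price=5.0, verifiability=0.9, usefulness=0.8),
    DataAsset(size_gb=500, unit_price=5.0, verifiability=0.4, usefulness=0.3),
]
# Under this weighting, the small well-verified set contributes more TDVL
# than the dataset five times its size with weak provenance.
tdvl = total_data_value_locked(pool)
```

The design choice worth noting is that the weights multiply rather than add: a large pool of unverifiable data cannot buy its way into the metric, mirroring the article's point that quantity alone should not be rewarded.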
Model Performance (A New Asset Class): In the AI economy, well-trained models (or AI services) themselves become assets - they can even be regarded as a new asset class alongside tokens and NFTs. Well-trained AI models have value due to the intelligence encapsulated in their weights. But how do we represent and measure this value on-chain? We may need on-chain performance benchmarks or model certifications. For example, accuracy on standard datasets, or win rates in competitive tasks, could serve as performance scores recorded on-chain. This could be viewed as an on-chain 'credit rating' or KPI for AI models. Such ratings could be adjusted as models are fine-tuned or data is updated. Projects like Oraichain have explored integrating AI model APIs with reliability scores (by validating AI outputs against expected outcomes through test cases) on-chain. In AI-native DeFi ('AiFi'), staking based on model performance could be envisioned - for example, if developers believe their model performs excellently, they could stake tokens; if independent on-chain audits confirm its performance, they would be rewarded (if the model performs poorly, they would lose their stake). This would incentivize developers to report truthfully and continuously improve their models. Another idea is tokenized model NFTs carrying performance metadata - the 'floor price' of a model NFT might reflect its utility. Such practices are already emerging: certain AI markets allow the buying and selling of model access tokens, and protocols like LayerAI (formerly CryptoGPT) explicitly view data and AI models as emerging asset classes in the global AI economy. In short, DeFi asks 'How much capital is locked?', while AI-DeFi will ask 'How much intelligence is locked?' - referring not only to computing power (though equally important) but also to the efficiency and value of models running in the network. 
New metrics may include 'Proof of Model Quality' or time-series indices of on-chain AI performance improvements.
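The performance-staking idea above can be sketched as a simple settlement rule. The function name, the audit tolerance, and the slash multiplier are hypothetical parameters chosen for illustration, not an existing protocol's mechanics:

```python
def settle_performance_stake(stake, claimed_score, audited_score,
                             reward_rate=0.2, tolerance=0.02, slash_factor=5.0):
    """Hypothetical settlement for performance staking: a claim that holds up
    under independent on-chain audit earns a reward; an overstated claim is
    slashed in proportion to the overstatement."""
    if audited_score + tolerance >= claimed_score:
        # Claim confirmed (within tolerance): return stake plus reward.
        return stake * (1 + reward_rate)
    shortfall = claimed_score - audited_score
    # Slash grows with the size of the overstatement, floored at zero.
    return max(0.0, stake * (1 - shortfall * slash_factor))
```

Under this rule, honest or conservative reporting is the dominant strategy: a developer who claims 0.90 accuracy and audits at 0.91 collects the reward, while one who audits at 0.80 loses half the stake.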
Agent Behavior and Utility (On-chain AI Agents): One of the most exciting and challenging new elements of AI-native blockchains is autonomous AI agents running on-chain. These could be trading bots, data curators, customer-service AIs, or complex DAO governors - essentially software entities capable of perceiving, deciding, and even acting on behalf of users in the network. The DeFi world only had basic 'bots'; in the AI blockchain world, agents may become first-class economic entities. This creates a need for metrics around agent behavior, trustworthiness, and utility. We might see mechanisms similar to 'agent utility scores' or reputation systems. Imagine each AI agent (potentially represented by an NFT or a semi-fungible token (SFT)) accumulating reputation based on its actions (task completion, cooperation, etc.). Such ratings are akin to credit scores or user ratings, but targeted at AI. Other contracts could then decide whether to trust or use the agent's services. In LazAI's proposed iDAO (individual-centered DAO) concept, each agent or user entity has its own on-chain domain and AI assets, and we can envision these iDAOs or agents building measurable track records.
Existing platforms have started tokenizing AI agents and assigning on-chain metrics: for example, Rivalz's 'Rome protocol' creates NFT-based AI agents (rAgents), with their latest reputation metrics recorded on-chain. Users can stake or lend these agents, and their rewards depend on the agents' performance and impact within the collective AI 'cluster'. This is essentially DeFi for AI agents and showcases the importance of agent utility metrics. In the future, we may discuss 'active AI agents' like we discuss active addresses, or 'agent economic impact' like we discuss trading volume.
Attention trajectories may become another primitive - recording what agents focus on during decision-making (which data, signals). This could make black-box agents more transparent and auditable, attributing their successes or failures to specific inputs. In summary, agent behavior metrics will ensure accountability and alignment: if autonomous agents are to be entrusted with managing large sums of money or critical tasks, their reliability must be quantified. High agent utility scores may become prerequisites for on-chain AI agents managing large funds (similar to how high credit scores are thresholds for large loans in traditional finance).
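An agent utility score of the kind described could be maintained as an exponentially weighted moving average of task outcomes, with a threshold gating access to large mandates. This is a sketch under our own assumptions (the update weight, neutral starting score, and 0.8 threshold are all illustrative):

```python
def update_reputation(current, outcome, weight=0.1):
    """EWMA reputation update: outcome is 1.0 for a successful task,
    0.0 for a failure. Recent behavior matters more than old behavior."""
    return (1 - weight) * current + weight * outcome

def can_manage_large_funds(reputation, threshold=0.8):
    """Hypothetical gate: only high-reputation agents qualify for large
    mandates, like credit-score thresholds for large loans."""
    return reputation >= threshold

rep = 0.5  # neutral starting score for a newly deployed agent
for outcome in [1.0, 1.0, 1.0, 0.0, 1.0]:
    rep = update_reputation(rep, outcome)
# After four successes and one failure, reputation has risen but the
# agent has not yet earned access to large funds.
```

The EWMA form has a useful property for this setting: reputation is slow to earn and quick to lose relative to a simple average, which discourages an agent from coasting on old successes.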
Usage Incentives and AI Alignment Metrics: Finally, the AI economy needs to consider how to incentivize beneficial use and alignment. DeFi incentivizes growth through liquidity mining, early-user airdrops, or fee rebates; in AI, mere usage growth is insufficient - we need to incentivize usage that improves AI outcomes. Here, metrics tied to AI alignment become crucial. For instance, human feedback loops (such as user ratings of AI responses, or corrections provided through an iDAO, elaborated below) could be recorded, allowing feedback contributors to earn 'alignment rewards'. Alternatively, envision 'Proof of Attention' or 'Proof of Participation', where users who invest time in improving AI (through preference data, corrections, or new use cases) receive rewards. Metrics could include attention trajectories, capturing high-quality feedback or human attention invested in optimizing AI.
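A minimal sketch of an alignment-reward split, assuming feedback items already carry quality scores from some upstream rating or audit step; the pro-rata rule and all names are illustrative:

```python
def alignment_rewards(feedback_scores, reward_pool):
    """Split a reward pool among contributors pro-rata by their
    quality-weighted feedback, so quality (not raw volume) drives payouts."""
    totals = {c: sum(scores) for c, scores in feedback_scores.items()}
    grand_total = sum(totals.values())
    if grand_total == 0:
        return {c: 0.0 for c in feedback_scores}
    return {c: reward_pool * t / grand_total for c, t in totals.items()}

# Two high-quality corrections outweigh one low-quality one.
rewards = alignment_rewards({"alice": [0.9, 0.8], "bob": [0.3]}, 100.0)
```

Because payouts scale with quality scores rather than item counts, a contributor submitting many low-value items cannot crowd out one submitting a few high-value corrections - the failure mode the article warns about with generic participation rewards.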
Just as DeFi needed blockchain explorers and dashboards (like DeFi Pulse and DefiLlama) to track TVL and yields, the AI economy will need new explorers to track these AI-centric metrics - imagine a dashboard like 'AI-llama' displaying total aligned data, active AI agents, cumulative AI utility gains, and so on. It shares similarities with DeFi, but the content is entirely new.
Towards a DeFi-style AI Flywheel
We need to build an incentive flywheel for AI - treating data as a first-class economic asset and thereby transforming AI development from a closed endeavor into an open, participatory economy, just as DeFi turned finance into an open, user-driven field of liquidity.
Early explorations in this direction have emerged. For instance, projects like Vana have started rewarding users for participating in data sharing. The Vana network allows users to contribute personal or community data to DataDAO (decentralized data pools) and earn dataset-specific tokens (which can be exchanged for the network's native tokens). This is an important step towards the monetization of data contributors.
However, simply rewarding contribution behavior is insufficient to replicate DeFi's explosive flywheel. In DeFi, liquidity providers are not rewarded merely for depositing assets; the assets they provide have transparent market value, and yields reflect actual usage (trading fees and lending interest, plus incentive tokens). Similarly, the AI data economy needs to go beyond generic rewards and price data directly. Without economic pricing based on data quality, scarcity, or improvement to models, we risk falling into shallow incentives. Handing out token rewards merely for participation may encourage quantity over quality, or stall when the tokens lack a link to actual AI utility. To truly unleash innovation, contributors need to see clear, market-driven signals, understand the value of their data, and receive returns when their data is actually used in AI systems.
We need an infrastructure that focuses more on directly valuing and rewarding data to create a data-centered incentive loop: the more high-quality data people contribute, the better the models become, attracting more usage and data demand, thereby increasing returns for contributors. This will transform AI from a closed competition for big data into an open market for trustworthy, high-quality data.
How are these ideas reflected in real projects? Taking LazAI as an example - this project is building the next-generation blockchain network and foundational primitives for a decentralized AI economy.
Introduction to LazAI - Aligning AI with Humanity
LazAI is a next-generation blockchain network and protocol designed to solve AI data alignment issues, building the infrastructure for a decentralized AI economy by introducing new asset standards for AI data, model behavior, and agent interactions.
LazAI offers one of the most forward-looking approaches, solving the AI alignment problem by making data verifiable, incentivizable, and programmable on-chain. The following will illustrate how AI-native blockchains can put the aforementioned principles into practice using LazAI's framework as an example.
Core Issues - Data Misalignment and Lack of Fair Incentives
AI alignment often boils down to the quality of training data, and the future needs new data that is aligned with human perspectives, trustworthy, and well governed. As the AI industry shifts from centralized general models to contextualized, aligned intelligence, the infrastructure must evolve in step. The next AI era will be defined by alignment, precision, and provenance. LazAI directly addresses the data alignment and incentive challenges, proposing a fundamental solution: align data at the source and reward the data itself directly. In other words, ensure that training data verifiably represents human perspectives, is denoised and debiased, and that rewards are based on data quality, scarcity, or improvement to the model. This is a paradigm shift from patching models to curating data.
LazAI not only introduces primitives but also proposes a new paradigm for data acquisition, pricing, and governance. Its core concepts include Data Anchoring Tokens (DAT) and individual-centered DAOs (iDAOs), both of which realize data pricing, provenance, and programmable usage.
Verifiable and Programmable Data - Data Anchoring Tokens (DAT)
To achieve this, LazAI introduces a new on-chain primitive - Data Anchoring Tokens (DAT), a new token standard designed for AI data assetization. Each DAT represents a piece of data anchored on-chain along with its provenance: contributor identity, evolution over time, and usage scenarios. This creates a verifiable history for each piece of data - akin to a version control system for datasets (like Git), but secured by the blockchain. Because DATs exist on-chain, they are programmable: smart contracts can govern their usage rules. For example, a data contributor may specify that their DAT (say, a set of medical images) is accessible only to specific AI models or under certain conditions, enforced in code to preserve privacy or ethical constraints. The incentive mechanism comes from DATs being tradable and stakeable: if data is valuable to a model, the model (or its owner) may pay for access to the DAT. In essence, LazAI builds a market for data tokenization and traceability. This directly echoes the 'verifiable data' metrics discussed earlier: by inspecting a DAT, one can verify whether it has been validated, how many models have used it, and what performance improvements it has produced; such data earns higher valuations. By anchoring data on-chain and tying economic incentives to quality, LazAI ensures that AI is trained on trustworthy, measurable data. The problem is solved through incentive alignment - high-quality data is rewarded and stands out.
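To illustrate the DAT concept, here is a sketch of the state such a token might track: a content hash anchoring the off-chain data, an append-only provenance log, and a programmable access rule. The field names and the whitelist rule are our illustrative assumptions, not the actual DAT standard:

```python
from dataclasses import dataclass, field

@dataclass
class DataAnchoringToken:
    token_id: int
    contributor: str
    content_hash: str  # hash anchoring the off-chain dataset to this token
    history: list = field(default_factory=list)   # append-only provenance log
    allowed_models: set = field(default_factory=set)  # programmable access rule

    def record(self, event):
        """Append a provenance event, e.g. ('2025-01-01', 'validated')."""
        self.history.append(event)

    def authorize(self, model_id):
        """Usage rule enforced in code: only whitelisted models may
        access the underlying data."""
        return model_id in self.allowed_models

dat = DataAnchoringToken(token_id=1, contributor="alice",
                         content_hash="0xabc...",
                         allowed_models={"med-imaging-model"})
dat.record(("2025-01-01", "created"))
```

In an on-chain version, `history` would be the contract's event log and `authorize` a view function checked before data release; the sketch only shows the shape of the state.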
Individual-Centered DAO (iDAO) Framework
The second key component is LazAI's iDAO (individual-centered DAO) concept, which redefines governance in the AI economy by placing individuals (rather than organizations) at the core of decision-making and data ownership. Traditional DAOs often prioritize collective organizational goals, inadvertently weakening individual will. iDAOs invert this logic. They are personalized governance units that allow individuals, communities, or domain-specific entities to directly own, control, and validate the data and models they contribute to AI systems. iDAOs support customized, aligned AI: as governance frameworks, they ensure models always adhere to the values or intentions of their contributors. From an economic perspective, iDAOs also make AI behavior programmable - rules can be set to restrict how models use specific data, who can access the models, and how output revenues are distributed. For example, an iDAO can stipulate that whenever its AI model is called (say, an API request or a completed task), a portion of the revenue flows back to the DAT holders who contributed the relevant data. This establishes a direct feedback loop between agent behavior and contributor rewards - similar to the DeFi mechanism where liquidity providers' yields are linked to platform usage. Additionally, iDAOs can compose with one another through protocols: one AI agent (iDAO) can invoke another iDAO's data or models under negotiated terms.
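The per-call revenue rule described could look like the following sketch; the 30% iDAO cut and the pro-rata split over DAT holdings are hypothetical parameters, not LazAI's actual fee schedule:

```python
def distribute_call_revenue(fee, dat_shares, idao_cut=0.3):
    """Hypothetical iDAO rule: on each model call, a cut stays with the
    iDAO treasury and the remainder flows to DAT holders pro-rata to the
    data shares they contributed."""
    to_holders = fee * (1 - idao_cut)
    total_shares = sum(dat_shares.values())
    payouts = {holder: to_holders * share / total_shares
               for holder, share in dat_shares.items()}
    return fee * idao_cut, payouts

# A 100-token inference fee split between the iDAO and two data contributors.
treasury, payouts = distribute_call_revenue(100.0, {"alice": 2, "bob": 3})
```

This is the structural analogue of liquidity-provider fee sharing in DeFi: contributor income scales with actual model usage, not with a one-time contribution reward.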
By establishing these primitives, LazAI's framework turns the vision of a decentralized AI economy into reality. Data becomes an asset that users can own and profit from; models transition from private islands to collaborative projects, and every participant - from individuals curating unique datasets to developers building small specialized models - can become stakeholders in the AI value chain. This incentive alignment is expected to replicate the explosive growth of DeFi: when people realize that participating in AI (contributing data or expertise) directly translates into opportunities, they will engage more actively. As the number of participants increases, network effects will kick in - more data will give rise to better models, attracting more users, which in turn generates more data and demand, forming a positive cycle.
Building the Trust Foundation for AI: Verified Computing Framework
In this ecosystem, LazAI's Verified Computing Framework serves as the core trust layer. This framework ensures that every generated DAT, every iDAO (individual-centered DAO) decision, and every incentive distribution has a verifiable chain of traceability, making data ownership enforceable, governance processes accountable, and agent behavior auditable. By turning iDAOs and DATs from theoretical concepts into reliable, verifiable systems, the Verified Computing Framework realizes a paradigm shift in trust - from reliance on assumptions to guarantees based on mathematical verification.
With this foundation in place, the vision of a decentralized AI economy becomes truly tangible:
Data Assetization: Users can claim ownership of data assets and receive returns.
Model Collaboration: AI models transition from closed islands to open, collaborative products.
Participation Is Capitalized: From data contributors to vertical model developers, all participants can become stakeholders in the AI value chain.
This incentive-compatible design is expected to replicate the growth momentum of DeFi: when users realize that participating in AI construction (through data or expertise contributions) translates directly into economic opportunities, enthusiasm for participation will be ignited. As the scale of participants grows, network effects will emerge - more high-quality data will spawn better models, attracting more users, which in turn generates more demand for data, forming a self-reinforcing growth flywheel.
Conclusion: Towards an Open AI Economy
The journey of DeFi shows that the right primitives can unleash unprecedented growth. In the upcoming AI-native economy, we stand on the threshold of a similar breakthrough. By defining and implementing new primitives that value data and alignment, we can shift AI development from centralized engineering to a decentralized community-driven endeavor. This journey is not without challenges: we must ensure that economic mechanisms prioritize quality over quantity and avoid ethical pitfalls that might compromise privacy or fairness due to data incentives. However, the direction is clear. Practices like LazAI's DAT and iDAOs are paving the way to transform the abstract idea of 'AI aligned with humanity' into concrete mechanisms for ownership and governance.
Just as early DeFi experimented with optimizing TVL, liquidity mining, and governance, the AI economy will also iterate its new primitives. In the future, debates and innovations surrounding data value measurement, fair reward distribution, AI agent alignment, and benefits will surely emerge. This article only touches on the surface of incentive models that could drive AI democratization, hoping to inspire open discussions and deeper research: How to design more AI-native economic primitives? What unexpected consequences or opportunities might arise? Through broad community participation, we are more likely to build an AI future that is not only technologically advanced but also economically inclusive and aligned with human values.
The exponential growth of DeFi is no magic - it is driven by incentive alignment. Today, we have the opportunity to drive a revival of AI through analogous practices with data and models. By converting participation into opportunities, and opportunities into network effects, we can launch a flywheel that reshapes value creation and distribution in the digital age.
Let's build this future together - starting from a verifiable dataset, an aligned AI agent, and a new primitive.
This article comes from a submission and does not represent BlockBeats' views.