Author: 0xJeff, Crypto KOL

Compiled by: Felix, PANews

Competition is the foundation of AI development.

The goals of participants in the competition include:

  • Training the best model to complete specific tasks

  • Co-training a single model for optimal improvement

  • Providing the best insights

  • Providing the best trading signals

  • Providing the most accurate predictions

  • And many more competitive advantages

Without competition, innovation develops at its own pace, which is often very slow. The Bittensor competition is playing out in real time, with many subnet outputs already exceeding industry benchmarks on their respective tasks.

Subnet owners can design any incentive mechanism: miners compete for $TAO rewards, validators verify miners' work, and stakers delegate their $TAO to the validators best at verification (for maximum incentives). This makes Bittensor a thriving ecosystem that constantly pushes the boundaries of decentralized AI.

Flock has implemented mechanisms similar to Bittensor in its ecosystem to accelerate the development process of initial models, and further fine-tune domain-specific models through Federated Learning to adapt to unique use cases.

What is Federated Learning?

Federated Learning: A way for multiple devices (people) to train a single model without sharing data. This is particularly useful in privacy-centric environments such as healthcare, government, banking, and customer-data handling, where privacy/confidentiality is paramount.

Instead of raw data, federated learning shares 'gradients' (model updates) with a central server. The server aggregates these updates to improve the model and sends the improved model back to the participating devices. This process repeats continuously.

Federated Learning typically uses edge devices (smartphones, computers, IoT) because they:

  • Generate and store sensitive data locally, in line with privacy regulations.

  • Scale well, given the sheer variety and number of devices available.

  • Contain personalized data, making them well suited for training domain-specific models.

Moreover, because only gradients (rather than raw data) are shared, even edge devices with limited compute and connectivity can participate efficiently.
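The loop described above, in which devices compute gradients locally, a server averages them, and the updated model is sent back each round, is known as federated averaging (FedAvg). Here is a minimal sketch using a toy 1-D linear model; all names and numbers are illustrative, not FLock's actual implementation:

```python
# Minimal federated averaging (FedAvg) sketch on a toy linear model.
import random

def local_update(weights, data):
    """One local pass on-device: compute gradients for y = w*x + b
    and return only the gradients, never the raw data."""
    w, b = weights
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y
        grad_w += 2 * err * x / len(data)
        grad_b += 2 * err / len(data)
    return (grad_w, grad_b)  # only gradients leave the device

def fed_avg(weights, device_datasets, lr=0.1, rounds=200):
    """Server loop: collect gradients from every device, average
    them, update the shared model, and repeat."""
    w, b = weights
    for _ in range(rounds):
        grads = [local_update((w, b), d) for d in device_datasets]
        w -= lr * sum(g[0] for g in grads) / len(grads)
        b -= lr * sum(g[1] for g in grads) / len(grads)
    return w, b

# Three "devices", each holding private samples of y = 2x + 1
random.seed(0)
devices = [[(x, 2 * x + 1) for x in (random.uniform(-1, 1) for _ in range(20))]
           for _ in range(3)]
w, b = fed_avg((0.0, 0.0), devices)
print(round(w, 2), round(b, 2))  # converges toward w ≈ 2, b ≈ 1
```

Note that only the gradient tuples cross the network; the raw `(x, y)` samples never leave the device, which is the privacy property federated learning is built around.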

Flock's products

(No obscure technical terms will be used here; the focus will be on how it works)

Flock's product flow is: (i) AI Arena (ii) FL Alliance (iii) Moonbase

  • AI Arena is a competitive event where AI engineers ('trainers') train the models of their choice based on specified tasks (building initial models).

Currently, tasks are manually created by projects/ecosystems, and participants can propose business plans/ideas to Flock via FLock.io and define their desired final use cases.

Flock will create tasks on the platform based on these needs, which trainers can access and start training. Trainers improve the model by submitting data and gradients, thus enhancing model performance/reducing hallucinations (trainers are akin to miners in the Bittensor ecosystem).

Validators score the model based on the gradients submitted by trainers.

  • Trainers and validators must stake $FLOCK into gmFLOCK to participate (lock-up is optional, ranging from 0 to 365 days).

  • Trainers and validators with higher gmFLOCK stakes can receive more tasks and higher reward multipliers (both have their own $FLOCK incentive standards, with gmFLOCK staking being one of them).

  • If trainers and validators engage in malicious behavior (training submissions fail verification or verification is inaccurate), their gmFLOCK may be reduced.

  • Delegators (stakers) can stake $FLOCK into gmFLOCK and delegate it to trainers or validators. Delegators will receive a portion of the $FLOCK rewards (annual yield of 60%-230%).
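The staking mechanics above can be summarized in a short sketch. FLock does not publish exact formulas, so the linear lock-up bonus, the reward-share rule, and the 10% slash rate below are all hypothetical assumptions for illustration:

```python
# Hypothetical sketch of the gmFLOCK incentive logic described above.
# The linear lock-up bonus, slash rate, and all numbers are assumptions.

def gm_flock_minted(flock_staked: float, lock_days: int) -> float:
    """Longer lock-ups (0-365 days) mint more gmFLOCK per FLOCK,
    which raises task allocation and reward multipliers.
    Assumed: linear bonus up to 2x at a full-year lock."""
    assert 0 <= lock_days <= 365
    return flock_staked * (1 + lock_days / 365)

def reward(base_reward: float, gm_stake: float, total_gm: float) -> float:
    """Assumed: rewards scale with a participant's share of gmFLOCK."""
    return base_reward * gm_stake / total_gm

def slash(gm_stake: float, rate: float = 0.10) -> float:
    """Malicious behavior (failed training or inaccurate verification)
    reduces the offender's gmFLOCK; 10% is an assumed rate."""
    return gm_stake * (1 - rate)

print(gm_flock_minted(1000, 365))  # 2000.0 gmFLOCK for a 1-year lock
print(gm_flock_minted(1000, 0))    # 1000.0 gmFLOCK with no lock
print(slash(2000.0))               # 1800.0 after a 10% slash
```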

After the initial model training and validation in AI Arena, FL Alliance adopts the best of these models and fine-tunes them through federated learning, using private datasets on edge devices.

  • FL Alliance is the process of further fine-tuning the initial models from AI Arena on domain-specific datasets held on edge devices (through federated learning).

The main differences between AI Arena and FL Alliance

  1. AI Arena = Competition | Initial model training using traditional machine learning | Public dataset | First step

  2. FL Alliance = Collaboration | Fine-tuning using federated learning | Private datasets on local devices | Advanced fine-tuning for specific domain applications | Second step

Moonbase or AI Model Marketplace

Here, the models trained in AI Arena and those fine-tuned through FL Alliance can be deployed, used, and monetized.

Moonbase is still in the testing phase, but phases two and three will introduce seamless ways for contributors (trainers, validators, delegators) to own these models/agents. Anyone can pay/subscribe to use the models (project owners, researchers, enterprises, etc.), and the models can be deployed and integrated on any launch platform.

You can think of Flock as a complete loop, an end-to-end agent development platform, starting with trainers competing to build the best initial models, fine-tuning for specific domain applications, and deploying models/agents to solve unique problems.

Recent initiatives/ecosystem partners

  • Flock x Qwen: Alibaba Cloud utilizes Flock to train small language models focused on specific domains (such as medicine and finance).

  • Flock's FlockOff SN96 on Bittensor: FlockOff is a research-focused subnet, incubated by Yuma.

Its goal is to help AI models pick out the most meaningful and representative data points from large datasets, achieving more accurate training without processing all available data.

For example, consider training a trading model on order-book data: an API/SDK scans your trading behavior on Binance, but the number of trades is vast, and processing all of them would require too much computation.

A small language model (SLM) selects precise data points from Binance that represent your trading behavior, so the federated learning process on your smartphone doesn't have to look at every trade; it may only need to check 10 out of 10,000 data points that represent the entire dataset.
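The data-selection idea can be illustrated with greedy farthest-point sampling, a standard way to pick a few points that cover a whole dataset. FlockOff's actual method is not public, so treat this purely as an illustrative sketch:

```python
# Illustrative coreset selection: pick a handful of representative
# "trades" so downstream training need not process every data point.
# Greedy farthest-point sampling is a standard technique; it is NOT
# necessarily what FlockOff uses.
import random

def farthest_point_sample(points, k):
    """Greedily pick k points, each time taking the point farthest
    from those already chosen, so the selection spreads out and
    covers the dataset."""
    chosen = [points[0]]
    dist = [abs(p - chosen[0]) for p in points]
    for _ in range(k - 1):
        idx = max(range(len(points)), key=dist.__getitem__)
        chosen.append(points[idx])
        dist = [min(d, abs(p - points[idx])) for d, p in zip(dist, points)]
    return chosen

random.seed(42)
# 10,000 synthetic "trade sizes" drawn from two clusters
trades = [random.gauss(10, 1) for _ in range(5000)] + \
         [random.gauss(100, 5) for _ in range(5000)]
reps = farthest_point_sample(trades, 10)
print(len(reps))  # 10 representatives instead of 10,000 trades
```

Because the sampler always reaches for the point farthest from what it has, the 10 representatives end up spanning both clusters rather than clumping in one.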

Top AI applications of Flock

Before delving into applications built on Flock, it's worth mentioning that models trained on Flock have already outperformed industry-leading models on Web3 tasks.

The model can natively understand complex blockchain logic, interact in real-time with smart contracts and decentralized applications, automate DeFi strategies, manage liquidity pools, and perform multi-chain analysis.

The model is trained and validated through AI Arena and can serve as a foundational model for more in-depth domain-specific use cases.

1. Flock x Animoca Brands

HeyAni - AI for venture capital research

Flock provides a Web3 model fine-tuned on ten years of Animoca's Investment Committee (IC) memoranda. The result is an experienced Web3 venture capital agent capable of parsing white papers, GitHub repos, token contract addresses, and X profiles, and producing a score indicating how likely venture capital firms would be to invest in your project.

The agent will also provide a summary of pros and cons and suggestions on how to improve the project.

Animoca uses Ani to help alleviate due diligence workloads while continuously improving the agents to become better venture capitalists.

Animoca's @AimonicaBrands also utilizes Flock models to help refine its trading models.

2. Flock x Eden (still ongoing)

Eden: SexualFi - integrating AI technology to mimic the behavior of OnlyFans performers, role-playing while they are offline.

In the first phase, they will interact with you through their personalities, starting with voice.

They are pairing AI agents with sex toys (controlled by the agents), allowing users to enjoy the toys while having sexual conversations with the agents.

The ultimate goal is to create an immersive experience through 3D avatars, animations, voice, and more.

Why is FLOCK promising?

$FLOCK has strong demand

Every participant in the economy needs $FLOCK — task creators, trainers, validators, delegators, etc., all have demand.

Once Moonbase starts actively using the models, delegators/stakers will be able to earn real returns (revenue sharing).

Unlike agent-tokenization models (such as Virtuals), Flock retains all the value accrual generated by growing demand for models on the platform.

Network participation continues to increase.

  • High staking participation: In its tokenomics v1 (staking from T+0 to T+20 days), the staking participation rate exceeds 47%.

  • In the v2 gmFLOCK model, about 25% of the circulating supply has been locked for an average of 265 days.

Additionally, Messari reports indicate that all indicators are bullish for the first quarter.

The catalysts for this second half of the year are emerging one after another.

The gates of Moonbase are opening, and access to AI models will become more democratized (similar to how Virtuals opened its AI agent tokenization platform). Network effects are beginning to form, and the flywheel effect of $FLOCK is starting to kick in.

There are already multiple partnerships and collaborations in specific domains behind the scenes, many of which cannot be disclosed yet (but can be speculated based on their past collaborations).

Long lock-up period for early investors

Investors who came in at valuations of $150 million to $300 million (the last round at $300 million) face a 12-month cliff followed by a 24-month vesting period; about 6 months remain until the cliff ends. The community can therefore buy at a valuation comparable to that of venture capital firms whose tokens are still fully locked.

Due to listing on Upbit and Bithumb, liquidity from the Korean market has significantly increased.

Flock has also staked most of the foundation tokens for one year (just before listing on Upbit/Bithumb).

However, there are also some drawbacks to consider.

Incentive design may trigger dynamics similar to Bittensor (i.e., the potential sell-off pressure generated daily by participants).

By the end of the first year, circulating supply should reach 25%, and 50% by the second year. The speed of network growth and actual applications needs to exceed the speed of issuance. (Otherwise... you know what will happen).

The issuance lasts only 5 years and gradually declines each year — it is highly likely that after the network develops to a certain extent, enterprises and projects will need to pay actual revenue to maintain training on Flock, thus filling the issuance gap for trainers, validators, and delegators working on the platform.

In other words, enterprises will find that paying Flock to develop specific domain use cases is cheaper and more efficient than developing them independently.

Flock also utilizes the Bittensor subnet (SN96) to enhance FL Alliance's R&D, using the dTAO subnet issuance instead of the $FLOCK issuance. This reduces the potential sell-off pressure on $FLOCK while improving Flock's products.

How does Flock make a profit?

Very simple. When converting gmFLOCK back to FLOCK, Flock charges about a 5% conversion fee.
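As a quick worked example of the fee (the ~5% rate comes from the article; the function name is ours):

```python
# gmFLOCK -> FLOCK conversion with the ~5% exit fee kept as revenue.
FEE_RATE = 0.05  # ~5% conversion fee, per the article

def convert_gmflock_to_flock(gm_amount: float) -> tuple[float, float]:
    """Returns (FLOCK received by the user, fee kept by Flock)."""
    fee = gm_amount * FEE_RATE
    return gm_amount - fee, fee

received, fee = convert_gmflock_to_flock(1000.0)
print(received, fee)  # 950.0 50.0
```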

Summary

You can think of Flock as a combination of Bittensor + Nous Research + Virtuals:

  • Bittensor: AI Arena — Compete for the best model

  • Nous: FL Alliance — Collaboratively adjust a single model to fit specific domain use cases

  • Virtuals: Moonbase — Model marketplace where anyone can deploy, profit, and subscribe to models/agents

$FLOCK, as the ecosystem token, is essential for all operations, integrating the value of demand side (enterprises/projects) and supply side (trainers/validators/delegators).

This is the only decentralized AI ecosystem that provides an end-to-end model development process for specific domain use cases while having a distribution channel capable of creating real-world economic value.

Meanwhile, the project has gained market attention, and the token's trading price is below the venture capital valuation (with long lock-up and vesting periods).

Related reading: The Next Generation AI Infrastructure Paradigm from Flock and Alibaba's Computing Alliance