Twelve months ago, we didn’t have “Nasdaq companies aping into TAO” on the bingo card.
But here we are.
- Synaptogenix (SNPX) made a MicroStrategy-style bet on Bittensor: the stock jumped 40% after it announced a $10M TAO buy (2x its own market cap). They’re aiming for $100M.
James Altucher is leading it. $5.5M personal stake. They're releasing preferred shares and TAO-linked warrants. And a full rebrand incoming.
- Oblong (OBLG) is next. $7.5M raise to accumulate TAO and back Subnet 0. The stock popped after the announcement.
The thesis for these companies is simple: TAO is scarce, programmable, and productive. It’s the native asset of decentralized intelligence.
This won’t be the last. We’re watching the start of a public market land grab for crypto AI infrastructure.
Since the 1st edition of our AI & Crypto newsletter was published on 11 June 2024, we’ve been breaking down the madness so you don’t have to live inside arXiv or scroll Twitter/X until your eyes bleed.
This is the biggest tech shift of our lifetime. It’s chaotic, it’s fast, and if you’re not neck-deep in it, it’s easy to miss what’s really happening.
So, a big thank you for riding with us. Year two starts now. 🫡
Weekly AI Edge #51 is out! Read this, then get back to enjoying summer:
🌈 Project Updates
- @NillionNetwork's new Enterprise Cluster is live, with Vodafone, Deutsche Telekom, Alibaba Cloud, and stc Bahrain, aiming for a privacy-native internet.
- @TRNR_Nasdaq, listed on NASDAQ, is raising $500M to build the biggest AI-token treasury on a US exchange, backed by ATW and DWF Labs.
- @USDai_Official entered private beta with $10M in deposits for a yield model tied to tokenized Treasuries and AI assets.
- @PondGNN launched AI Studio and Pond Markets to help AI projects grow and raise funding.
- @Worldcoin launched native USDC and CCTP V2 on World Chain, enhancing transfers for 27M users.
- @peaq and Pulsar launched a Machine Economy Free Zone in the UAE for AI-powered machine pilots.
- @thedkingdao is deploying $300M with a sports-betting hedge fund via an on-chain DeFAI system.
- @CrucibleLabs launched Smart Allocator to auto-stake TAO into top subnets.
- @hyperlane introduced TaoFi's Solana-to-Bittensor USDC bridge, unlocking DeFi access for Solana, Base, and Ethereum.
🌴 AI Agents
- @Virtuals_io released I.R.I.S., a Virtuals Genesis AI agent on Ethereum, for contract security alerts.
- @TheoriqAI launched Theo Roo, an AI strategist for real-time on-chain efficiency.
- @AlloraNetwork started a six-week Agent Accelerator with $ALLO grants for top agents.
- @Gizatechxyz's Arma now integrates into Rainbow Wallet for yield tracking.
- @Chain_GPT launched AgenticOS, an open-source AI for posting crypto insights using on-chain data.
🐼 Web2 AI
- @MistralAI released Magistral, a multilingual model for domain-specific tasks.
- @xAI and Polymarket are partnering to integrate Grok’s AI with prediction markets.
- @OpenAI launched o3-pro, the new ChatGPT Pro model, with enhanced features.
- @Yutori released Scouts, AI agents for personalized internet alerts; beta at https://t.co/gxJvB6iC7h.
- @Krea entered image modeling with Krea 1 in private beta, offering artist-grade output.
+ much more alpha in the full newsletter @cot_research (link in bio)
Just released a detailed deep dive on decentralized training. We cover a lot in there, but a quick brain dump while my thoughts are fresh:
So much has happened in the past 3 months and it's hard not to get excited:
- @NousResearch pre-trained a 15B model in a distributed fashion and is now training a 40B model.
- @PrimeIntellect fine-tuned a 32B Qwen base model over a distributed mesh, outperforming its Qwen baseline on math and code.
- @tplr_ai trained a 1.2B model from scratch using token rewards. Early loss curves outperformed centralized runs.
- @PluralisHQ showed that low-bandwidth, model-parallel training is actually quite feasible... something most thought impossible.
- @MacrocosmosAI released a new framework combining data and pipeline parallelism with incentive design, and started training a 15B model.
Most teams today are scaling up to ~40B params, a level that seems to mark the practical limit of data parallelism across open networks. Beyond that, hardware requirements become so steep that participation is limited to only a few well-equipped actors.
Scaling toward 100B or 1T+ parameter models will likely depend on model parallelism, which brings challenges that are an order of magnitude harder (you have to move activations between workers, not just gradients).
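To make that distinction concrete, here is a toy NumPy sketch of what each approach has to send over the wire every step. The layer sizes and batch size are made-up illustration values (not any team's actual setup): data parallelism exchanges gradients the size of the weights, while model parallelism ships activations forward and activation gradients back on every single pass.

```python
import numpy as np

# Toy 2-layer MLP used only to compare per-step communication patterns.
# All sizes below are illustrative assumptions, not a real training run.
BATCH, D_IN, D_HID, D_OUT = 32, 1024, 4096, 1024

rng = np.random.default_rng(0)
W1 = rng.standard_normal((D_IN, D_HID)) * 0.01
W2 = rng.standard_normal((D_HID, D_OUT)) * 0.01

def bytes_of(*arrays):
    return sum(a.nbytes for a in arrays)

# Data parallelism: every worker holds full copies of W1 and W2, computes
# gradients on its own shard of the batch, then all-reduces (averages) them.
# Per step, each worker exchanges tensors the size of the parameters.
grad_W1 = np.zeros_like(W1)
grad_W2 = np.zeros_like(W2)
data_parallel_traffic = bytes_of(grad_W1, grad_W2)

# Model parallelism: worker A holds W1, worker B holds W2. Each forward pass
# A ships its activations to B; each backward pass B ships activation
# gradients back to A. Traffic scales with batch x hidden size, and it sits
# on the critical path of every step, which is what makes low-bandwidth
# links so painful.
x = rng.standard_normal((BATCH, D_IN))
h = np.maximum(x @ W1, 0.0)   # activations sent A -> B
grad_h = np.ones_like(h)      # activation gradients sent B -> A
model_parallel_traffic = bytes_of(h, grad_h)

print(f"data-parallel bytes/step : {data_parallel_traffic:,}")
print(f"model-parallel bytes/step: {model_parallel_traffic:,}")
```

The raw byte counts are not the whole story; the point is what gets communicated and when: averaged gradients once per step versus activations blocking every forward and backward pass across non-trusting machines.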
True decentralized training is not just training AI across distributed clusters. It’s training across non-trusting parties. That’s where things get complicated.
Even if you crack coordination, verification, and performance, none of it works without participation. Compute isn’t free. People won’t contribute without strong incentives.
Designing those incentives is a hard problem: there are lots of thorny issues around tokenomics, which I'll get into later.
For decentralized training to matter, it has to prove it can train models that are cheaper, faster, and more adaptable.
Decentralized training may stay niche for a while. But when the cost dynamics shift, what once seemed experimental can become the new default, quickly.
One thing people often miss: not all compute is equal!
We treat GPUs and FLOPs like they’re interchangeable units, but they’re not. The context (who owns the infra, who controls it, who can shut it off) matters more than we admit.
Your model might run just fine on AWS or on some decentralized network. Output looks the same. But the trust layer is entirely different.
Decentralized compute offers what hyperscalers can’t: censorship resistance, no central point of failure or kill-switch, and real user control over data and execution.
That sovereignty should come with a premium. Today, it doesn’t. Most devs still optimize for cost.
But that will shift: slowly, then structurally. As pressure builds and workloads demand stronger guarantees, control becomes non-negotiable.