Binance Square

0xPrismatic

if you're building with crypto-native users as your primary market, it’s going to be tough to find sustained demand

to me, the most interesting startups have products already geared towards infinite markets (AI, robotics etc)
Twelve months ago, we didn’t have “Nasdaq companies aping into TAO” on the bingo card.

But here we are.

- Synaptogenix (SNPX) made a MicroStrategy-style bet on Bittensor; its stock jumped 40% after it announced a $10M TAO buy (2x its own market cap). They’re aiming for $100M.

James Altucher is leading it. $5.5M personal stake. They're releasing preferred shares and TAO-linked warrants. And a full rebrand incoming.

- Oblong (OBLG) is next. $7.5M raise to accumulate TAO and back Subnet 0. The stock popped after the announcement.

The thesis for these companies is simple: TAO is scarce, programmable, and productive. It’s the native asset of decentralized intelligence.

This won’t be the last. We’re watching the start of a public market land grab for crypto AI infrastructure.
We made it. One full year of @cot_research!

Since the first edition of our AI & Crypto newsletter was published on 11 June 2024, we’ve been breaking down the madness so you don’t have to live inside arXiv or scroll Twitter/X until your eyes bleed.

This is the biggest tech shift of our lifetime. It’s chaotic, it’s fast, and if you’re not neck-deep in it, it’s easy to miss what’s really happening.

So, a big thank you for riding with us. Year two starts now. 🫡
Another @cot_research drop will be landing in your inboxes in a few hours.

It's a thought piece by @ChappieOnChain.

Take a guess: what rabbit hole are we going down this time?

(link in bio)
Weekly AI Edge #51 is out! Read this, then get back to enjoying summer:

🌈 Project Updates
= @NillionNetwork's new Enterprise Cluster is live, with Vodafone, Deutsche Telekom, Alibaba Cloud, and stc Bahrain, aiming for a privacy-native internet.
= @TRNR_Nasdaq, listed on NASDAQ, is raising $500M to build the biggest AI-token treasury on a US exchange, backed by ATW and DWF Labs.
= @USDai_Official entered private beta with $10M in deposits for a yield model tied to tokenized Treasuries and AI assets.
= @PondGNN launched AI Studio and Pond Markets to help AI projects grow and raise funding.
= @Worldcoin launched native USDC and CCTP V2 on World Chain, enhancing transfers for 27M users.
= @peaq and Pulsar launched a Machine Economy Free Zone in the UAE for AI-powered machine pilots.
= @thedkingdao is deploying $300M with a sports-betting hedge fund via an on-chain DeFAI system.
= @CrucibleLabs launched Smart Allocator to auto-stake TAO into top subnets.
= @hyperlane introduced TaoFi's Solana-to-Bittensor USDC bridge, unlocking DeFi access for Solana, Base, and Ethereum.

🌴 AI Agents
= @Virtuals_io released I.R.I.S., a Virtuals Genesis AI agent on Ethereum, for contract security alerts.
= @TheoriqAI launched Theo Roo, an AI strategist for real-time on-chain efficiency.
= @AlloraNetwork started a six-week Agent Accelerator with $ALLO grants for top agents.
= @Gizatechxyz's Arma now integrates into Rainbow Wallet for yield tracking.
= @Chain_GPT launched AgenticOS, an open-source AI for posting crypto insights using on-chain data.

🐼 Web2 AI
= @MistralAI released Magistral, a multilingual model for domain-specific tasks.
= @xAI and Polymarket are partnering to integrate Grok’s AI with prediction markets.
= @OpenAI launched o3-pro, the new ChatGPT Pro model, with enhanced features.
= @Yutori released Scouts, AI agents for personalized internet alerts; beta at https://t.co/gxJvB6iC7h.
= @Krea entered image modeling with Krea 1 in private beta, offering artist-grade output.

+ much more alpha in the full newsletter @cot_research (link in bio)

h/t @issyadelaja
New research essay dropping on Wednesday at @cot_research

This time, we’re not just unpacking how the protocol works. We’re exploring whether it’s investable.

As usual, subscribers get 1st look. (link in bio)
planning a token launch? Pivot to an IPO instead.

you can still rewrite that investor deck tonight before sending.
We believe Bittensor is the most compelling place to witness Crypto x AI unfold.

Here are some lessons we learned from talking to teams and watching subnets in the wild: 🧵
Bittensor now has over 110 subnets—and counting

Here are 12 Bittensor subnets that have stood out to us, and we're keeping on our watchlist👇
Just released a detailed deep dive on decentralized training. We cover a lot in there, but a quick brain dump while my thoughts are fresh:

So much has happened in the past 3 months and it's hard not to get excited:
- @NousResearch pre-trained a 15B model in a distributed fashion and is now training a 40B model.

- @PrimeIntellect fine-tuned a 32B Qwen base model over a distributed mesh, outperforming its Qwen baseline on math and code.

- @tplr_ai trained a 1.2B model from scratch using token rewards. Early loss curves outperformed centralized runs.

- @PluralisHQ showed that low-bandwidth, model-parallel training is actually quite feasible... something most thought impossible

- @MacrocosmosAI released a new framework combining data + pipeline parallelism with incentive design, and started training a 15B model

Most teams today are scaling up to ~40B params, a level that seems to mark the practical limit of data parallelism across open networks. Beyond that, hardware requirements become so steep that participation is limited to only a few well-equipped actors.

Scaling toward 100B or 1T+ parameter models will likely depend on model parallelism, which comes with order-of-magnitude harder challenges (dealing with activations, not just gradients)
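The gradients-vs-activations distinction can be sketched in toy form: data parallelism only exchanges (relatively small) gradients between workers, while model parallelism has to ship activations from worker to worker at every step. Everything below is illustrative — the functions, shapes, and numbers are made up, not from any of the runs mentioned above.

```python
# Toy contrast between data parallelism and model parallelism.
# All names and numbers here are illustrative, not from any real training run.

def data_parallel_step(shards, grad_fn):
    """Each worker computes a gradient on its own data shard;
    only the gradients are averaged across workers."""
    grads = [grad_fn(shard) for shard in shards]   # local compute per worker
    return sum(grads) / len(grads)                 # communication: gradients only

def model_parallel_step(x, layers):
    """Each worker holds one layer of the model; the activations
    must be handed between workers at every forward step."""
    for layer in layers:                           # communication: activations
        x = layer(x)
    return x

# Usage: two "workers" in each scheme.
avg_grad = data_parallel_step([1.0, 3.0], grad_fn=lambda d: 2 * d)   # -> 4.0
out = model_parallel_step(2.0, [lambda x: x + 1, lambda x: x * 3])   # -> 9.0
```

The asymmetry is the point: in the first scheme the network traffic is proportional to model size once per step, while in the second it scales with activation size at every layer boundary, which is why open, low-bandwidth networks find it so much harder.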

True decentralized training is not just training AI across distributed clusters. It’s training across non-trusting parties. That’s where things get complicated.

Even if you crack coordination, verification, and performance, none of it works without participation. Compute isn’t free. People won’t contribute without strong incentives.

Designing those incentives is a hard problem: lots of thorny issues around tokenomics which I will get into later.

For decentralized training to matter, it has to prove it can train models cheaper, faster, and more adaptable.

Decentralized training may stay niche for a while. But when the cost dynamics shift, what once seemed experimental can become the new default, quickly.

I'm watching closely for this.
Waiting on @PluralisHQ's paper to drop.

So we can update our upcoming deep dive report on decentralized training and hit publish. Hopefully within the next 24 hours!
We’ve been tracking the Bittensor metrics that matter (h/t @taoapp_)

The data tells a story most people haven’t caught up to yet... 👇
Decentralized compute networks (DCNs) offer cheap GPUs.

But enterprises aren't touching them (mostly).

WHY? A 🧵
This was a great read and added perspective on valuing Bittensor subnet tokens @UnsupervisedCap @Old_Samster

"With a 2-year hold, the discount approaches ~75%"
From our latest Bittensor report @cot_research :

We estimate >$468M of capital will move into alpha token markets by December 2025, by extrapolating current trends.
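"Extrapolating current trends" can be as simple as a linear projection of observed inflows. The sketch below uses entirely made-up monthly figures — the report's actual inputs and method aren't shown in this post, so this is only an illustration of the technique.

```python
# Hypothetical linear trend extrapolation. The monthly figures below are
# invented for illustration; they are NOT the report's actual data.

def extrapolate_linear(observed, future_periods):
    """Project forward assuming the average per-period change continues."""
    step = (observed[-1] - observed[0]) / (len(observed) - 1)
    return observed[-1] + step * future_periods

# e.g. cumulative inflows ($M) over four observed months, projected 8 more
projection = extrapolate_linear([100, 150, 200, 250], future_periods=8)  # -> 650.0
```

Real estimates would adjust for seasonality, emissions schedules, and market conditions; a straight line is just the simplest baseline.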
The top 10 Bittensor subnets (out of 100+) receive >55% of the total emissions

Only 4 subnets receive >5% of emissions:
- @chutes_ai
- @taohash
- @gradients_ai
- Targon
there are many zombie protocols with enough treasury (from the boom days) to survive for years with zero PMF.

sad part is that there is not enough founder vision and courage to boldly reinvent the project before it fades into irrelevance
One thing people often miss: not all compute is equal!

We treat GPUs and FLOPs like they’re interchangeable units, but they’re not. The context (who owns the infra, who controls it, who can shut it off) matters more than we admit.

Your model might run just fine on AWS or on some decentralized network. Output looks the same. But the trust layer is entirely different.

Decentralized compute offers what hyperscalers can’t: censorship resistance, no central point of failure or kill-switch, and real user control over data and execution.

That sovereignty should come with a premium. Today, it doesn’t. Most devs still optimize for cost.

But that will shift: slowly, then structurally. As pressure builds and workloads demand stronger guarantees, control becomes non-negotiable.

And the repricing follows.
quick excerpt from our latest Bittensor research report at @cot_research

2 words: subnet summer
New research lab where R&D of AI models is completely transparent and open-sourced.

Cool to see but how do you fund the training compute which is v expensive?
(hint: this is where crypto comes in)