0xPrismatic (@0xPrismatic)
Planning a token launch? Pivot to an IPO instead. You can still rewrite that investor deck tonight before sending.
Explore More From Creator
New research essay dropping on Wednesday at @cot_research. This time, we're not just unpacking how the protocol works; we're exploring whether it's investable. As usual, subscribers get first look. (link in bio)
--
We believe Bittensor is the most compelling place to witness Crypto x AI unfold. Here are some lessons we learned from talking to teams and watching subnets in the wild: 🧵
--
Bittensor now has over 110 subnets, and counting. Here are 12 Bittensor subnets that have stood out to us and that we're keeping on our watchlist 👇
--
Just released a detailed deep dive on decentralized training. We cover a lot in there, but a quick brain dump while my thoughts are fresh:

So much has happened in the past 3 months and it's hard not to get excited:
- @NousResearch pre-trained a 15B model in a distributed fashion and is now training a 40B model.
- @PrimeIntellect fine-tuned a 32B Qwen base model over a distributed mesh, outperforming its Qwen baseline on math and code.
- @tplr_ai trained a 1.2B model from scratch using token rewards. Early loss curves outperformed centralized runs.
- @PluralisHQ showed that low-bandwidth, model-parallel training is actually quite feasible, something most thought impossible.
- @MacrocosmosAI released a new framework with data and pipeline parallelism plus incentive design, and has started training a 15B model.

Most teams today are scaling up to ~40B params, a level that seems to mark the practical limit of data parallelism across open networks. Beyond that, hardware requirements become so steep that participation is limited to only a few well-equipped actors. Scaling toward 100B or 1T+ parameter models will likely depend on model parallelism, which brings challenges an order of magnitude harder: you have to move activations over the network, not just gradients. (The sketch after this post contrasts the two communication patterns.)

True decentralized training is not just training AI across distributed clusters. It's training across non-trusting parties. That's where things get complicated. Even if you crack coordination, verification, and performance, none of it works without participation. Compute isn't free, and people won't contribute without strong incentives. Designing those incentives is a hard problem: there are lots of thorny issues around tokenomics which I will get into later.

For decentralized training to matter, it has to prove it can train models cheaper, faster, and more adaptably. Decentralized training may stay niche for a while. But when the cost dynamics shift, what once seemed experimental can become the new default, quickly. I'm watching closely for this.
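To make the data-parallel vs. model-parallel distinction above concrete, here is a toy Python/NumPy sketch of the two communication patterns. It is an illustrative assumption, not any of these teams' actual code: the two-layer linear model, worker count, and tensor shapes are made up. What it shows is that data parallelism only needs one gradient average per step, while model parallelism must ship activations forward and activation gradients backward on every pass.

```python
# Toy contrast of data-parallel vs. model-parallel communication patterns.
# Hypothetical two-layer linear model; all sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Full model: x -> W1 -> h -> W2 -> y
W1 = rng.normal(size=(64, 128))
W2 = rng.normal(size=(128, 10))

def forward_backward(x, y_true):
    """Full forward/backward pass on one worker; returns both layer gradients."""
    h = x @ W1
    y = h @ W2
    err = y - y_true            # gradient of a simple squared-error loss w.r.t. y
    grad_W2 = h.T @ err
    grad_h = err @ W2.T         # activation gradient
    grad_W1 = x.T @ grad_h
    return grad_W1, grad_W2

def data_parallel_step(shards):
    """Data parallelism: every worker holds the whole model and a data shard.
    The only cross-worker traffic per step is one gradient average (all-reduce),
    whose size scales with the parameter count."""
    grads = [forward_backward(x, y) for x, y in shards]
    grad_W1 = np.mean([g[0] for g in grads], axis=0)   # simulated all-reduce
    grad_W2 = np.mean([g[1] for g in grads], axis=0)
    return grad_W1, grad_W2

def model_parallel_step(x, y_true):
    """Model parallelism: the layers live on different workers, so activations
    cross the network on every forward pass and activation gradients on every
    backward pass."""
    h = x @ W1                  # worker A computes layer 1
    # in a real deployment, h would now be sent over the network to worker B
    y = h @ W2                  # worker B computes layer 2
    err = y - y_true
    grad_W2 = h.T @ err
    grad_h = err @ W2.T
    # in a real deployment, grad_h would be sent back over the network to worker A
    grad_W1 = x.T @ grad_h
    return grad_W1, grad_W2

if __name__ == "__main__":
    # 4 simulated workers, each with a small batch of fake data
    shards = [(rng.normal(size=(32, 64)), rng.normal(size=(32, 10)))
              for _ in range(4)]
    dp = data_parallel_step(shards)
    mp = model_parallel_step(*shards[0])
    print("data-parallel grad shapes:", dp[0].shape, dp[1].shape)
    print("model-parallel grad shapes:", mp[0].shape, mp[1].shape)
```

The activation traffic in the second pattern grows with batch size and happens on every microbatch, which is why low-bandwidth, model-parallel training has generally been considered so hard.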
--
Waiting on @PluralisHQ's paper to drop so we can update our upcoming deep dive report on decentralized training and hit publish. Hopefully within the next 24 hours!
--
Latest News
Hong Kong Securities and Futures Commission Expands Virtual Asset Regulation
--
Binance Market Update (2025-06-13)
--
Smarter Web Company Increases Bitcoin Holdings
--
Thailand and U.S. Aim to Conclude Online Tariff Talks by July Deadline
--
NATO Secretary General Emphasizes Importance of Middle East Stability
--