0xPrismatic (@0xPrismatic)
Planning a token launch? Pivot to an IPO instead.
You can still rewrite that investor deck tonight before sending.
Explore More From Creator
We believe Bittensor is the most compelling place to witness Crypto x AI unfold. Here are some lessons we learned from talking to teams and watching subnets in the wild: 🧵
--
Bittensor now has over 110 subnets, and counting. Here are 12 Bittensor subnets that have stood out to us and that we're keeping on our watchlist 👇
--
Just released a detailed deep dive on decentralized training. We cover a lot in there, but a quick brain dump while my thoughts are fresh:

So much has happened in the past 3 months and it's hard not to get excited.

- @NousResearch pre-trained a 15B model in a distributed fashion and is now training a 40B model.
- @PrimeIntellect fine-tuned a 32B Qwen base model over a distributed mesh, outperforming its Qwen baseline on math and code.
- @tplr_ai trained a 1.2B model from scratch using token rewards. Early loss curves outperformed centralized runs.
- @PluralisHQ showed that low-bandwidth, model-parallel training is actually quite feasible... something most thought impossible.
- @MacrocosmosAI released a new framework combining data and pipeline parallelism with incentive design, and started training a 15B model.

Most teams today are scaling up to ~40B params, a level that seems to mark the practical limit of data parallelism across open networks. Beyond that, hardware requirements become so steep that participation is limited to a few well-equipped actors (a rough sketch of the arithmetic follows below). Scaling toward 100B or 1T+ parameter models will likely depend on model parallelism, which brings challenges an order of magnitude harder: you have to move activations between parties, not just gradients.

True decentralized training is not just training AI across distributed clusters. It's training across non-trusting parties. That's where things get complicated.

Even if you crack coordination, verification, and performance, none of it works without participation. Compute isn't free, and people won't contribute without strong incentives. Designing those incentives is a hard problem: lots of thorny issues around tokenomics, which I will get into later.

For decentralized training to matter, it has to prove it can train models that are cheaper, faster, and more adaptable. It may stay niche for a while. But when the cost dynamics shift, what once seemed experimental can become the new default, quickly. I'm watching closely for this.
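To put rough numbers on that data-parallel ceiling, here is a minimal back-of-envelope sketch. The byte counts and the helper name replica_memory_gb are illustrative assumptions for mixed-precision Adam training, not measurements from any of the teams above:

# Back-of-envelope sketch: why data parallelism gets steep past ~40B
# params. Every data-parallel worker holds a FULL model replica plus
# optimizer state, so the per-worker memory bill grows with model size
# even though the training data is split.
# Assumed footprint (illustrative, not measured):
#   2 B bf16 weights + 4 B fp32 master weights
# + 8 B two fp32 Adam moments + 4 B gradients  ~= 18 bytes/param
BYTES_PER_PARAM = 18

def replica_memory_gb(params_billions: float) -> float:
    """Rough per-worker memory footprint, in GB, for one full replica."""
    # 1e9 params * bytes/param / 1e9 bytes-per-GB = billions * bytes/param
    return params_billions * BYTES_PER_PARAM

for size in (7, 40, 100, 1000):  # model sizes in billions of params
    print(f"{size:>5}B params -> ~{replica_memory_gb(size):,.0f} GB per data-parallel worker")

# ~40B already implies ~720 GB per participant (a multi-GPU node),
# which is roughly where open, permissionless participation thins out.
# Past that you shard the model itself (model parallelism): traffic
# shifts from one gradient all-reduce per step to activation exchanges
# at every layer boundary, on the critical path of every step.

The exact byte count varies with precision and optimizer, but the scaling argument holds either way: data parallelism splits the data, not the model, so the per-worker floor rises linearly with parameter count.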
--
Waiting on @PluralisHQ's paper to drop so we can update our upcoming deep dive report on decentralized training and hit publish. Hopefully within the next 24 hours!
--
We’ve been tracking the Bittensor metrics that matter (h/t @taoapp_). The data tells a story most people haven’t caught up to yet... 👇
--
Latest News
Survey Reveals Financial Struggles Among U.S. Consumers
--
Huma Finance After Binance Launchpool: How This PayFi Pioneer Is Reimagining Real-World Assets in DeFi
--
Japanese Firm Remixpoint Acquires Additional Bitcoin Worth $4.7 Million
--
Musk and Trump Clash Impacts Tesla Stock
--
Uber CEO Discusses Bitcoin and Stablecoins
--
Trending Articles
📍 $BTC /USDT LIQUIDATION MAP – PRESSURE BUILDING! Current
Crypto Vantix
Ripple Moves $498 Million in XRP to Unknown Wallet: What’s Going On?
Coinstages
Trump Just Dumped His Tesla — And the Musk Feud Is Getting Expensive
Saba urooj
🚨 ACCOUNT RESTRICTION WARNING! 🚨 Don’t Get Banned — Read T
Ashworld
$ETH / USDT ✅ Entry Point (EP): $2,515 (Price just broke t
Awais1628