0xPrismatic
Bittensor now has over 110 subnets—and counting
Here are 12 Bittensor subnets that have stood out to us and that we're keeping on our watchlist 👇
Explore More From Creator
We believe Bittensor is the most compelling place to witness Crypto x AI unfold. Here are some lessons we learned from talking to teams and watching subnets in the wild: 🧵
--
Just released a detailed deep dive on decentralized training. We cover a lot in there, but here's a quick brain dump while my thoughts are fresh.

So much has happened in the past 3 months that it's hard not to get excited:
- @NousResearch pre-trained a 15B model in a distributed fashion and is now training a 40B model.
- @PrimeIntellect fine-tuned a 32B Qwen base model over a distributed mesh, outperforming its Qwen baseline on math and code.
- @tplr_ai trained a 1.2B model from scratch using token rewards; early loss curves outperformed centralized runs.
- @PluralisHQ showed that low-bandwidth, model-parallel training is actually quite feasible, something most thought impossible.
- @MacrocosmosAI released a new framework combining data and pipeline parallelism with incentive design, and has started training a 15B model.

Most teams today are scaling up to ~40B parameters, a level that seems to mark the practical limit of data parallelism across open networks. Beyond that, hardware requirements become so steep that participation is limited to only a few well-equipped actors. Scaling toward 100B or 1T+ parameter models will likely depend on model parallelism, which brings challenges that are an order of magnitude harder (dealing with activations, not just gradients).

True decentralized training is not just training AI across distributed clusters. It's training across non-trusting parties. That's where things get complicated. Even if you crack coordination, verification, and performance, none of it works without participation. Compute isn't free, and people won't contribute without strong incentives. Designing those incentives is a hard problem, with lots of thorny tokenomics issues I will get into later.

For decentralized training to matter, it has to prove it can train models cheaper and faster, and make them more adaptable. Decentralized training may stay niche for a while. But when the cost dynamics shift, what once seemed experimental can become the new default, quickly. I'm watching closely for this.
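To make the gradients-vs-activations point concrete, here is a minimal single-process NumPy sketch of the two communication patterns. It uses a hypothetical toy two-layer linear model and is not taken from any of the projects above: in data parallelism each worker holds the full model and only gradients cross the network, while in model parallelism the model is split across workers, so activations (and activation gradients) must cross the network on every forward/backward pass.

```python
# Toy sketch contrasting data parallelism vs. model parallelism (NumPy, one process).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 16))          # toy batch: 8 samples, 16 features
Y = rng.normal(size=(8, 4))           # toy targets
W1 = rng.normal(size=(16, 32)) * 0.1  # layer 1 weights
W2 = rng.normal(size=(32, 4)) * 0.1   # layer 2 weights

def grads(x, y, w1, w2):
    """Forward + backward for the toy 2-layer linear model with MSE loss."""
    h = x @ w1                        # layer-1 activations
    out = h @ w2
    d_out = 2 * (out - y) / len(x)    # dLoss/dOut for mean squared error
    g2 = h.T @ d_out                  # gradient for W2
    d_h = d_out @ w2.T                # gradient flowing back through activations
    g1 = x.T @ d_h                    # gradient for W1
    return g1, g2

# Data parallelism: every worker holds the FULL model, sees a slice of the batch,
# and only gradients are communicated (all-reduce / average).
x_shards, y_shards = np.array_split(X, 2), np.array_split(Y, 2)
per_worker = [grads(xs, ys, W1, W2) for xs, ys in zip(x_shards, y_shards)]
g1_avg = np.mean([g[0] for g in per_worker], axis=0)  # simulated "all-reduce"
g2_avg = np.mean([g[1] for g in per_worker], axis=0)
# Communication volume ~ size of the gradients, i.e. size of the model.

# Model (pipeline) parallelism: each worker holds only PART of the model, so the
# forward pass ships ACTIVATIONS and the backward pass ships activation gradients.
h = X @ W1                            # worker A computes layer 1...
# ...and sends `h` (activations) over the network to worker B
out = h @ W2                          # worker B computes layer 2
d_out = 2 * (out - Y) / len(X)
g2 = h.T @ d_out                      # worker B's local gradient
d_h = d_out @ W2.T                    # worker B ships `d_h` back to worker A
g1 = X.T @ d_h                        # worker A's local gradient
# Communication volume ~ size of the activations, paid on every step,
# which is why low-bandwidth links hurt model parallelism far more.

assert np.allclose(g1_avg, g1) and np.allclose(g2_avg, g2)  # same math, different wiring
```

The two paths compute identical gradients; what changes is what has to move between untrusted machines and how often, which is the crux of the bandwidth and verification problem described above.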
--
Waiting on @PluralisHQ's paper to drop so we can update our upcoming deep dive report on decentralized training and hit publish. Hopefully within the next 24 hours!
--
We’ve been tracking the Bittensor metrics that matter (h/t @taoapp_) The data tells a story most people haven’t caught up to yet... 👇
--
Decentralized compute networks (DCNs) offer cheap GPUs, but enterprises aren't touching them (mostly). Why? A 🧵
--