0xPrismatic
We’ve been tracking the Bittensor metrics that matter (h/t @taoapp_)
The data tells a story most people haven’t caught up to yet... 👇
Explore More From Creator
Bittensor now has over 110 subnets, and counting. Here are 12 Bittensor subnets that have stood out to us and that we're keeping on our watchlist 👇
--
Just released a detailed deep dive on decentralized training. We cover a lot in there, but here's a quick brain dump while my thoughts are fresh.

So much has happened in the past 3 months that it's hard not to get excited:
- @NousResearch pre-trained a 15B model in a distributed fashion and is now training a 40B model.
- @PrimeIntellect fine-tuned a 32B Qwen base model over a distributed mesh, outperforming its Qwen baseline on math and code.
- @tplr_ai trained a 1.2B model from scratch using token rewards. Early loss curves outperformed centralized runs.
- @PluralisHQ showed that low-bandwidth, model-parallel training is actually quite feasible... something most thought impossible.
- @MacrocosmosAI released a new framework combining data and pipeline parallelism with incentive design, and started training a 15B model.

Most teams today are scaling up to ~40B params, a level that seems to mark the practical limit of data parallelism across open networks. Beyond that, hardware requirements become so steep that participation is limited to only a few well-equipped actors. Scaling toward 100B or 1T+ parameter models will likely depend on model parallelism, which brings challenges an order of magnitude harder: you have to move activations between parties, not just gradients (see the sketch below).

True decentralized training is not just training AI across distributed clusters. It's training across non-trusting parties. That's where things get complicated.

Even if you crack coordination, verification, and performance, none of it works without participation. Compute isn't free, and people won't contribute without strong incentives. Designing those incentives is a hard problem: lots of thorny issues around tokenomics, which I will get into later.

For decentralized training to matter, it has to prove it can train models cheaper and faster, and make them more adaptable. Decentralized training may stay niche for a while. But when the cost dynamics shift, what once seemed experimental can become the new default, quickly. I'm watching closely for this.
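To make the activations-vs-gradients point concrete, here's a minimal numpy sketch of a toy 2-layer MLP (all names and sizes are hypothetical, not taken from any of the projects above). Both schemes compute identical gradients; what differs is the wire traffic: data parallelism exchanges model-sized gradients once per step, while model parallelism must ship batch-sized activations across the layer boundary in both directions, every step, which is exactly why low bandwidth bites harder there.

```python
import numpy as np

# Hypothetical toy setup: a 2-layer MLP, sizes chosen arbitrarily.
rng = np.random.default_rng(0)
D, H, O, B = 8, 16, 4, 32                 # in dim, hidden dim, out dim, batch
W1, W2 = rng.normal(size=(D, H)), rng.normal(size=(H, O))

def data_parallel_step(x, y, n_workers=4):
    # Each worker holds the FULL model and a shard of the batch. The only
    # cross-network traffic is model-sized gradients, averaged ("all-reduce").
    g1s, g2s = [], []
    for xs, ys in zip(np.array_split(x, n_workers),
                      np.array_split(y, n_workers)):
        h = np.maximum(xs @ W1, 0.0)          # local forward pass (ReLU hidden)
        d_out = (h @ W2 - ys) / len(xs)       # dL/d_out for a squared-error loss
        g2s.append(h.T @ d_out)               # local gradient of W2
        g1s.append(xs.T @ ((d_out @ W2.T) * (h > 0)))  # local gradient of W1
    return np.mean(g1s, axis=0), np.mean(g2s, axis=0)

def model_parallel_step(x, y):
    # Worker A holds W1, worker B holds W2. Batch-sized ACTIVATIONS must
    # cross the network forward (h) and backward (d_h), every single step.
    h = np.maximum(x @ W1, 0.0)               # A computes h, ships it to B
    d_out = (h @ W2 - y) / len(x)
    g2 = h.T @ d_out                          # B's gradient, stays on B
    d_h = (d_out @ W2.T) * (h > 0)            # B ships d_h back to A
    g1 = x.T @ d_h                            # A's gradient, stays on A
    return g1, g2

x, y = rng.normal(size=(B, D)), rng.normal(size=(B, O))
assert all(np.allclose(a, b) for a, b in
           zip(data_parallel_step(x, y), model_parallel_step(x, y)))

# Same gradients either way; only the network traffic differs:
print("data-parallel traffic / step :", W1.size + W2.size, "floats (gradients)")
print("model-parallel traffic / step:", 2 * B * H, "floats (activations)")
```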
--
Waiting on @PluralisHQ's paper to drop so we can update our upcoming deep-dive report on decentralized training and hit publish. Hopefully within the next 24 hours!
--
Decentralized compute networks (DCNs) offer cheap GPUs. But enterprises aren't touching them (mostly). WHY? A 🧵
--
This was a great read from @UnsupervisedCap @Old_Samster that added perspective on valuing Bittensor subnet tokens: "With a 2-year hold, the discount approaches ~75%"
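For a sense of where a number like that can come from: under a simple time-value model (my assumption here, not necessarily how the authors derived their figure), a locked token is worth its spot value discounted at some annual rate, and a ~75% discount over a 2-year hold corresponds to discounting at roughly 100% per year.

```python
# Toy sanity check (assumed model, not necessarily the authors'): value a
# token locked for t_years by discounting its spot value at annual rate r.
def lockup_discount(r: float, t_years: float) -> float:
    """Fraction knocked off spot value for a t_years lockup at rate r."""
    return 1.0 - 1.0 / (1.0 + r) ** t_years

print(f"{lockup_discount(r=1.0, t_years=2):.0%}")   # 75% at r = 100%/yr
```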
--
Latest News
U.S. Ethereum Spot ETF Sees Significant Inflow
--
Sunrise Secures $3 Million Seed Funding to Enhance Interoperability Protocol
--
Cipher Mining Reports May Bitcoin Production and Expansion Plans
--
BitMine Immersion Technologies Completes Public Offering and Plans Bitcoin Purchase
--
Binance Completes Hashflow (HFT) Integration on Solana Network
--
Trending Articles
#BOB ChatGPT opinion regarding capture: (Build On BNB)
Charlie1347
10 Altcoins Under $1 That Could Explode 1000X by 2025
JaneBennet8474
Elon Musk and long-time Trump supporters object to Trump's big beautiful bill
Cryptopolitan
[new campaign: join and get token](https://www.binance.info/e
umar SherazG
Why Ripple (XRP) Price Isn’t Exploding Despite the Hype
CaptainAltcoin