🚨DEEPSEEK R2 LEAK - READY TO EAT GPT-4 FOR BREAKFAST?

Alleged leaks say DeepSeek R2 packs 1.2 trillion parameters, is supposedly 10x bigger than GPT-4, and runs 97% cheaper. Yes, cheaper and smarter.

Forget cat facts and casual convo—this thing was supposedly trained on 5.2 petabytes of legal, financial, and patent-grade data.

It uses a Mixture of Experts (MoE) setup, where only the most relevant parts of the model are activated for each task: just 78 billion of its 1.2 trillion parameters fire at a time, roughly 6.5%, staying efficient without losing brainpower. The toy sketch below shows how that routing trick works.
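Here's a minimal, hedged sketch of MoE routing with top-k gating in Python (NumPy). Every size here is a toy number invented for illustration, not DeepSeek R2's real architecture; the only rumored figures are the ~78B-of-1.2T active parameters.

```python
# Toy Mixture-of-Experts routing with top-k gating.
# All dimensions are made-up toy values, NOT DeepSeek R2's (rumored) config.
import numpy as np

rng = np.random.default_rng(0)

D_MODEL = 8    # hidden size (toy)
N_EXPERTS = 4  # total experts (a real MoE model would have far more)
TOP_K = 2      # experts activated per token

# Each "expert" is just a small feed-forward weight matrix here.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(N_EXPERTS)]
# The router scores every expert for every token.
router_w = rng.standard_normal((D_MODEL, N_EXPERTS))

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route each token to its top-k experts and mix their outputs.

    x: (n_tokens, d_model) activations. Only TOP_K of N_EXPERTS run
    per token -- the efficiency trick the post describes: most of the
    model's parameters sit idle on any given token.
    """
    logits = x @ router_w                          # (n_tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -TOP_K:]  # indices of the best experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        scores = logits[t, top[t]]
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                   # softmax over chosen experts
        for w, e in zip(weights, top[t]):
            out[t] += w * (x[t] @ experts[e])      # run ONLY the chosen experts
    return out

tokens = rng.standard_normal((3, D_MODEL))
print(moe_layer(tokens).shape)  # (3, 8): same output shape, fraction of the compute
```

The point of the design: total capacity scales with the number of experts, but per-token compute scales only with TOP_K, which is how a 1.2T-parameter model could run on a 78B-parameter compute budget.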

No press release, no official confirmation. Just leaks, so season to taste.