#DeepSeek Shakes Global Computing Power

The joke has gone too far. This Hangzhou company has no intention of taking down Nvidia or OpenAI; they know perfectly well they are standing on the shoulders of giants. After all, they had stockpiled 10,000 Nvidia GPUs before the sanctions, and their headline engineering innovation is compressing 32-bit precision computation down to 8-bit (a toy sketch of what that means appears at the end of this post). After all, their "0 to 1" innovation is built on what others have open-sourced, and the biggest contribution came from Meta, the former Facebook!

This company in Hangzhou admits the road ahead is still extremely long. But in the end they really have brought costs down: domestic vendors have already started cutting prices, ushering in a low-price era that ultimately benefits consumers.

What follows is a blanket reply, and I hope the gentlemen who have replied to me possess a certain level of reading ability!

According to the DeepSeek technical paper, the $6 million does not include "costs associated with previous research and ablation experiments related to architecture, algorithms, and data." In other words, only if a lab has already spent hundreds of millions of dollars on preliminary research, and can draw on a much larger cluster, is it possible to train an R1-quality model for an operating cost of $6 million (see the back-of-the-envelope check below). DeepSeek clearly has more than 2,048 H800s; an earlier paper of theirs mentions a cluster of 10,000 A100s. An equally smart team could not simply assemble a 2,000-GPU cluster and train an R1-quality model from scratch for just $6 million.
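For reference, here is where the headline number comes from. The DeepSeek-V3 technical report counts roughly 2.788 million H800 GPU-hours for the final training run, priced at an assumed rental rate of $2 per GPU-hour; both figures are the report's own assumptions, not mine. A quick check:

```python
# Back-of-the-envelope check of the widely quoted "$6 million" figure,
# using the numbers from the DeepSeek-V3 technical report.
gpu_hours = 2_788_000       # reported H800 GPU-hours for the final training run
usd_per_gpu_hour = 2.0      # rental rate assumed in the report

cost = gpu_hours * usd_per_gpu_hour
print(f"rental-equivalent training cost: ${cost / 1e6:.2f}M")  # ~$5.58M

# Sanity check against the cluster size quoted above:
days = gpu_hours / 2048 / 24
print(f"wall-clock time on 2048 H800s: ~{days:.0f} days")      # ~57 days
```

Note this is a rental-equivalent figure for one run: it covers neither the hardware itself nor the prior research and ablations the report explicitly excludes.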
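And for anyone wondering what "compressing 32-bit into 8-bit" refers to: DeepSeek-V3 trains with FP8 mixed precision, keeping many tensors at 8-bit instead of 32-bit precision to save memory and bandwidth. Below is a minimal illustrative sketch using symmetric int8 quantization; it is a generic simulation of the idea, not DeepSeek's actual FP8 training kernels (which use hardware FP8 formats with fine-grained scaling).

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization: map float32 values to int8 codes."""
    scale = max(np.abs(x).max(), 1e-8) / 127.0   # largest magnitude -> 127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values from the int8 codes."""
    return q.astype(np.float32) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(x)
err = np.abs(x - dequantize(q, s)).max()
print(f"max round-trip error: {err:.4f}")  # small but nonzero: precision traded for 4x less memory
```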