⚡Grok-3: AI on steroids?⚡

📹 During the morning broadcast where #grok3 was launched, #xAI also shared new information about its data centers.

🌐 Some of you may remember the news that spread across the Internet in October: in just 122 days, Nvidia, together with xAI, built the world's largest supercomputer, Colossus, with 100,000 GPUs.

🤔 At the time this surprised everyone: building a cluster of that size normally takes 2-3 years.

🔍 Soon after, the number of GPUs in Colossus was doubled, and the expansion took even less time: just 92 days. That means the enlarged cluster was ready at the end of January, while pre-training of Grok-3, according to Musk himself, was finished in the first days of the year.

🧠 So the base Grok-3 was most likely not trained on this full capacity. Training of the reasoning mode, however, is still ongoing (the demo showed an under-trained checkpoint).

✍🏻 But that's not all: Musk promises that Colossus will grow another fivefold, to 1 million GPUs. That expansion would cost roughly $25-30 billion, and work on it has already begun.
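A quick back-of-envelope check on that price tag, as a minimal sketch: the per-GPU price is my assumption (typical H100-class street pricing), not a figure xAI has published, and it covers the accelerators only, not networking, buildings, or power infrastructure.

```python
# Rough cost estimate for scaling Colossus to 1 million GPUs.
# Assumed street price of an H100-class accelerator: ~$25k-$30k per unit.
gpus_target = 1_000_000
price_low, price_high = 25_000, 30_000  # USD per GPU (assumption)

cost_low = gpus_target * price_low    # ~25 billion USD
cost_high = gpus_target * price_high  # ~30 billion USD
print(f"Estimated GPU spend: ${cost_low / 1e9:.0f}-{cost_high / 1e9:.0f}B")
```

At an assumed $25-30k per card, 1 million GPUs lands right on the quoted $25-30 billion.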

🇲🇨 For reference, each #NVIDIA H100 draws up to 700 W, so in terms of electricity consumption Colossus ends up roughly in the same league as the whole of Monaco...
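For those who want to see the arithmetic, a minimal sketch of the power math: the 700 W figure is the H100's rated board power from the post; everything else (running every card at full power, ignoring cooling and networking overhead, and the Monaco reference figure) is my assumption.

```python
# GPU-only power draw of Colossus at different scales, assuming each
# H100 runs at its full 700 W and ignoring cooling/networking overhead.
tdp_watts = 700
for gpus in (100_000, 200_000, 1_000_000):
    megawatts = gpus * tdp_watts / 1e6
    print(f"{gpus:>9,} GPUs -> {megawatts:>5.0f} MW")

# Reference point (assumed, not from the post): Monaco's average
# electrical demand is commonly estimated at roughly 60 MW (~0.5 TWh/year).
```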