Complex text; feed it to your AI and it will explain)
Today I watched Nvidia and Vertiv's presentation from the AI conference in Frankfurt am Main. Here are the numbers worth remembering.
A gigawatt-scale data center for training models is now estimated at 60 billion dollars. A day of downtime for a cluster of four thousand GPUs costs the company $300,000. A minute of downtime for a single rack costs five thousand.
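If you want to sanity-check those figures, here is the arithmetic in a few lines of Python. A minimal sketch: the inputs are the numbers quoted from the talk, the derived rates are my own arithmetic.

```python
# Back-of-the-envelope check on the downtime figures above.

CLUSTER_GPUS = 4_000                    # GPUs in the quoted cluster
CLUSTER_DOWNTIME_USD_PER_DAY = 300_000  # $ per day of cluster downtime
RACK_DOWNTIME_USD_PER_MIN = 5_000       # $ per minute of single-rack downtime

# What a day of cluster downtime works out to per GPU
per_gpu_per_day = CLUSTER_DOWNTIME_USD_PER_DAY / CLUSTER_GPUS
print(f"Cluster downtime: ${per_gpu_per_day:.0f} per GPU per day")

# What a single rack offline for one hour works out to
rack_per_hour = RACK_DOWNTIME_USD_PER_MIN * 60
print(f"One rack offline for an hour: ${rack_per_hour:,}")
```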
The unit of compute is no longer a server. It is now a POD, a 12.5-megawatt module. To reach a gigawatt, the industry scales this module up in steps: 12.5 → 50 → 250 → 1000 megawatts. The footprint of a one-gigawatt factory is roughly 1.3 by 1.2 kilometers. That is already the size of a city block.
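A quick sketch of that ladder, counting how many 12.5 MW PODs sit behind each tier. Again, just my own arithmetic on the quoted numbers.

```python
# The POD ladder from the talk: modules per tier, up to one gigawatt.

POD_MW = 12.5  # one POD, per the presentation

for tier_mw in (12.5, 50, 250, 1000):
    pods = int(tier_mw / POD_MW)
    print(f"{tier_mw:>6} MW = {pods:>2} PODs")

# -> 12.5 MW = 1 POD, 50 MW = 4, 250 MW = 20, 1000 MW (one GW) = 80 PODs
```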
Half a megawatt per rack cannot be delivered with ordinary alternating current, so the industry is moving to 800-volt DC. A service technician now needs a high-voltage electrician's certificate, not just an installer's. The first 50-megawatt facility at 800V DC is scheduled to come online in the Asia-Pacific region in the first or second quarter of 2027.
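A rough feel for why the voltage jump is needed, using the basic relation I = P / V: higher distribution voltage means less current and thinner copper. The 800 V figure and the 0.5 MW rack are from the talk; the 48 V and 400 V reference points are my own illustration.

```python
# Current needed to feed a 0.5 MW rack at different distribution voltages.
# Simple I = P / V, ignoring power factor and conversion losses.

RACK_W = 500_000  # 0.5 MW rack, per the talk

for volts in (48, 400, 800):
    amps = RACK_W / volts
    print(f"{volts:>4} V -> {amps:>7,.0f} A to feed a 0.5 MW rack")

# 48 V -> ~10,417 A; 400 V -> 1,250 A; 800 V -> 625 A
```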
What I see: competition in AI is shifting from code to land and kilowatts. Anyone without 60 billion dollars and access to the power grid will no longer be able to train their own large model. Open source lives on, but without its own GPUs and its own energy, the leadership is no longer with it.
By the end of the month I will show you our own project: 1-megawatt containers at roughly $9 million apiece. My gut tells me we will soon be investing heavily in such projects. The video attached is a cheap example, purely for visualization.
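For scale, here is the raw per-megawatt arithmetic comparing the container price to the $60-billion-per-gigawatt figure above. The two numbers cover very different scopes (a training campus includes GPUs, networking, and land; the container almost certainly does not), so treat this as a rough sketch only.

```python
# Raw per-megawatt comparison of the two price points in this post.
# Not apples to apples: scale-feel only.

CAMPUS_USD_PER_GW = 60e9  # "$60 billion per gigawatt" from the talk
CONTAINER_USD = 9e6       # ~$9M per container, from this post
CONTAINER_MW = 1.0        # 1 MW per container, from this post

campus_usd_per_mw = CAMPUS_USD_PER_GW / 1_000
container_usd_per_mw = CONTAINER_USD / CONTAINER_MW
print(f"Training campus: ${campus_usd_per_mw / 1e6:.0f}M per MW")
print(f"Container:       ${container_usd_per_mw / 1e6:.0f}M per MW")
```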