Elon Musk's xAI has raised $6 billion in its latest funding round to build tools that can compete with artificial intelligence companies such as OpenAI and Anthropic. Computing infrastructure is central to the coming competition among these companies.

In a post on X, Musk said xAI's pre-money valuation was $18 billion. Andreessen Horowitz, Sequoia Capital and others joined xAI's Series B financing as investors.

Musk said more announcements will follow in the coming weeks and that xAI will soon launch "multiple exciting technology updates and products."

The funds raised in this round will reportedly be used to bring xAI's first products to market, build advanced infrastructure, and accelerate research and development of future technologies.

Last November, xAI released an AI chatbot called Grok, which is available only to X users with a paid subscription.

Musk told investors in a recent presentation that xAI is preparing to build a supercomputer for the next version of Grok. He wants the machine operational by the fall of 2025 and is reportedly considering having xAI and Oracle develop it together.

According to reports, the supercomputer will be built by networking Nvidia H100 GPUs together, at a scale at least four times that of the largest GPU clusters in operation today. xAI has previously rented servers with about 16,000 H100 chips from Oracle. By one estimate, if xAI does not build its own computing capacity, it could spend $10 billion on cloud servers over the next few years.

GPUs are also critical infrastructure for technology companies vying for leadership in AI. Musk recently said that Tesla may hold the second-largest stock of Nvidia H100 GPUs, behind only Meta, with xAI in third place. Even so, he said the training and release of the Grok 2 model was delayed by a shortage of advanced chips.

Training Grok 2 reportedly requires about 20,000 Nvidia H100 GPUs, which are based on the Hopper architecture, while Grok 3 and later models will require 100,000 H100 chips.

In January this year, Tesla committed an additional $500 million to purchase about 10,000 H100 GPUs for autonomous-driving training. By some estimates, the number of H100 GPUs Tesla owns may exceed 30,000.
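For scale, the figures reported above imply a rough per-GPU price, which can be extrapolated, very loosely and ignoring networking, power, and facilities, to the 100,000-GPU cluster reported for Grok 3. This is a back-of-envelope sketch using only the article's numbers, not official pricing:

```python
# Rough arithmetic from figures cited in this article; not official pricing.
tesla_spend_usd = 500_000_000   # Tesla's reported January investment
tesla_gpus = 10_000             # approximate H100s purchased with it

implied_price_per_h100 = tesla_spend_usd / tesla_gpus
print(f"Implied cost per H100: ${implied_price_per_h100:,.0f}")  # ~$50,000

# Hypothetical extrapolation to the ~100,000 H100s reported for Grok 3:
grok3_gpus = 100_000
cluster_cost = grok3_gpus * implied_price_per_h100
print(f"Implied GPU hardware cost for a 100k-GPU cluster: ${cluster_cost / 1e9:.0f} billion")
```

On these assumptions the GPUs alone for a 100,000-chip cluster would run on the order of $5 billion, which helps explain the reported $10 billion multi-year cloud estimate.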

Last month, Nvidia delivered its first DGX H200 AI supercomputing system to OpenAI, with Nvidia founder and CEO Jensen Huang hand-delivering it in person. The delivery marks a significant moment for OpenAI, a leader in artificial intelligence research. The H200 offers a substantial performance improvement over the previous-generation H100.

In addition, Nvidia's latest Blackwell AI chips will begin shipping this fiscal quarter, with production expected to ramp up next quarter.

Jensen Huang said that Amazon, Google, Meta, Microsoft, OpenAI, Oracle, Tesla and Musk's xAI will be among the first users of the new generation of chips.

xAI is Musk's platform for building large models "in pursuit of truth." In a recruiting post aimed at AI developers, he encouraged applicants to "believe in the mission of understanding the universe."
