OpenAI has unveiled two new open-source AI models, GPT-OSS-120b and GPT-OSS-20b, marking its first open-weight language models since GPT-2 in late 2019. Many see the move as a direct challenge to China’s growing dominance in open-source models.

Sam Altman, co-founder and CEO, hailed the launch as “a big deal,” writing on the X platform that these are “the best and most usable open models in the world.”

The company said both models deliver performance on par with its smaller o-series reasoning models, o4-mini and o3-mini, and are designed to balance capability with accessibility.

OpenAI’s Chinese rivals step up their game

The larger GPT-OSS-120b requires substantial compute infrastructure, while GPT-OSS-20b is optimized to run on a single high-end laptop. According to OpenAI, offering on-premise options lets enterprises avoid latency and privacy concerns tied to cloud-hosted APIs.
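
For teams weighing that trade-off, running the smaller model on local hardware looks much like loading any other open checkpoint. The sketch below uses the Hugging Face Transformers pipeline; the hub identifier "openai/gpt-oss-20b" and the hardware assumptions (a machine with enough GPU or unified memory to hold the weights) are assumptions for illustration, not confirmed details of OpenAI’s release.

# Minimal local-inference sketch with Hugging Face Transformers.
# Assumes the 20b checkpoint is published under the hub ID "openai/gpt-oss-20b"
# and that the machine has enough memory to load it.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # assumed model identifier
    device_map="auto",           # place weights on whatever hardware is available
)

result = generator(
    "Explain the trade-offs between on-premise and cloud-hosted inference.",
    max_new_tokens=128,
)
print(result[0]["generated_text"])

Because both the prompt and the generated text stay on the local machine, this kind of setup is what sidesteps the latency and data-privacy concerns OpenAI cites for cloud-hosted APIs.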

The announcement also follows reports that OpenAI is in talks over a potential US$500 billion valuation in a stock sale for current and former employees, fueled, in part, by excitement around its next-generation GPT-5.

The timing of OpenAI’s release coincides with a surge of Chinese open-source AI models aiming to erode the West’s lead. Over the past year, Hangzhou-based DeepSeek surprised the market by open-sourcing two high-performance, cost-effective models, DeepSeek-V3 and DeepSeek-R1.

Alibaba Cloud’s Qwen family has expanded to include video-generation tools and Qwen-Image, which excels at rendering and editing text in images.

Ray Wang, research director at consultancy Futurum Group, noted that “in the open-source space overall, China still has an edge over the US in the number of highly competitive models available,” but conceded that OpenAI’s move would “put pressure on Chinese rivals.”

Beijing-based Zhipu AI has released GLM-4.5, which it claims ranks third globally and first among domestic models across 12 key benchmarks. Moonshot AI’s Kimi K2, boasting 1 trillion parameters (more than eight times GPT-OSS-120b’s size), has also drawn acclaim for its frontier performance.

Are tech firms balancing openness with safety concerns?

Open source has long powered software innovation, but AI’s unique risks (disinformation, hate speech and even the potential for biothreats) have tempered the trend. After releasing GPT-2, OpenAI sharply curtailed open releases, citing safety and misuse worries.

However, in recent months a chorus of voices, including OpenAI president Greg Brockman, has argued that sharing models fosters a feedback loop essential for rapid improvement.

“If we provide a model, people use us and give feedback,” Brockman told reporters, “which helps us make further progress.”

Clément Delangue, CEO of Hugging Face, weighed in: “If you lead in open source, you will soon lead in AI. Open ecosystems accelerate innovation.”

Still, national security hawks and safety commentators caution that wider distribution could lower the barrier for malicious actors.

The US government’s recent decision to let Nvidia resume sales of its H20 AI chips in China suggests a growing confidence in managing these dual-use technologies, but the debate is far from settled.

OpenAI’s new open-source models arrive as the World Artificial Intelligence Conference in Shanghai reported 1,509 active AI models, both proprietary and open, highlighting the sheer scale of the competition. Discussions about GPT-OSS on Chinese tech forums such as Zhihu have drawn more than 250,000 views, and companies such as Zhipu have already labeled the new models “game changers.”

Looking ahead, the success of GPT-OSS-120b and GPT-OSS-20b will be measured not only by benchmark scores but by community adoption, third-party integrations and the quality of downstream innovations.

As OpenAI balances commercial API offerings with freely available alternatives, the industry will watch whether open-source truly levels the playing field or simply raises the stakes in a global race for AI supremacy.
