Not Sam Altman: this is the person realizing OpenAI's superintelligent AGI dream.



While Sam Altman, as CEO, is OpenAI's most publicly recognized face, another key figure quietly runs the research machine behind the company's lead in the global AI race. Mark Chen, youthful in a black t-shirt and jeans, is behind some of OpenAI's most significant breakthroughs on the journey toward artificial general intelligence (AGI).

In his role as Research Director, Chen is responsible for developing models and coordinating all research efforts at OpenAI - the second most valuable private company in the world. That is no small task given the scale of OpenAI's operations: the company has raised $57.9 billion in investment capital and has over 400 million weekly product users.

Chen is behind many of OpenAI's notable technological breakthroughs. He led the development of o1 - a series of reasoning models trained to tackle more complex questions than previous models. He also headed the teams that built the DALL-E text-to-image model and integrated vision capabilities into GPT-4, allowing the AI to understand and process images and videos.

Mark Chen, who leads OpenAI's AGI ambitions


For Chen, the path to his current position was not planned in advance. Educated in Taiwan and the United States, he initially intended to become a professor. After graduating from MIT with dual degrees in mathematics and computer science, Chen planned to pursue a Ph.D. He shifted gears, however, when the professor he had intended to work with founded a hedge fund, and Chen joined the new firm instead.

Chen spent the next six years in finance, in a role he describes as "satisfying in some ways, but also very unsatisfying" in others. "When you work in a field like high-frequency trading, you have the same group of competitors, everyone gets faster, but you don't really feel like you're changing the world," he said.

Ultimately, Chen grew frustrated with finance - just as some of the biggest advances in AI were happening. In 2016, AlphaGo, Google's Go-playing AI system, defeated top player Lee Sedol in a historic match, astonishing even AI experts.

Inspired by AlphaGo, Chen attempted to replicate the system by implementing a deep Q-network (DQN) - a reinforcement learning system that teaches computers to play games. This left him "truly captivated" by machine learning, and from there Chen was "fortunate" to enter OpenAI through the company's residency program, even though he did not have a Ph.D.
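For readers unfamiliar with the technique, the sketch below shows tabular Q-learning, the simpler precursor of the deep Q-network mentioned above; a DQN replaces the lookup table with a neural network. The five-cell corridor environment and every name in it are invented for illustration and have nothing to do with Chen's actual implementation.

```python
import random

# Toy sketch of tabular Q-learning, the simpler precursor of the deep
# Q-network (DQN). A DQN replaces this lookup table with a neural network.
# The 5-cell corridor environment below is invented for illustration only.

N_STATES = 5          # corridor cells 0..4; the reward sits at cell 4
ACTIONS = [-1, +1]    # move left or move right
GAMMA, ALPHA, EPSILON = 0.9, 0.5, 0.1

def step(state, action):
    """Move along the corridor; entering cell 4 gives reward 1 and ends the episode."""
    nxt = max(0, min(N_STATES - 1, state + action))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # q[state][action_index]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy selection; ties are broken at random
            if rng.random() < EPSILON or q[s][0] == q[s][1]:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2, r, done = step(s, ACTIONS[a])
            # Q-learning update: bootstrap from the best next action
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# After training, the greedy action in every non-terminal cell is "move right"
print(["right" if q[s][1] > q[s][0] else "left" for s in range(N_STATES - 1)])
```

A real DQN additionally needs experience replay and a target network to train stably on pixel inputs, which is what made the original Atari-playing system notable.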

Mark Chen and OpenAI co-founder and CEO Sam Altman


Currently, Chen is helping OpenAI move towards AGI - considered the "holy grail" of AI. The company is tracking this progress with a five-level framework, with the first level being conversational agents like ChatGPT. "When we look at AGI, we apply a very broad definition - it doesn't just mean ChatGPT, but ChatGPT and other things," Chen noted. He cited the company's agentic AI products as examples.

Chen says that much of his work as research director involves allocating computing resources across OpenAI's entire project portfolio - in other words, balancing the immediate release of products against long-term research that can drive the next generation of products. On prioritizing research versus commercial release, he notes, "we always apply an approach where both are important - you can't have one without the other, and they are allocated resources at nearly equal capacity."

In his daily work, Chen collaborates closely with Sam Altman. The two share a "deep friendship" in which they can discuss AI and many other issues, and can also be "very vulnerable," candid, and honest with each other. "He is someone who deeply understands the technology - and you can't say that about all founders," Chen noted. While Altman sets the "ambitious vision," Chen sees himself as the one who helps realize and execute it, while "of course, also pushing back when I feel, 'Hey, this is how I would develop certain directions.'"


This working relationship has yielded impressive results. Among the most recent is the January launch of Operator - an AI agent capable of independently performing tasks such as filling out forms and ordering groceries from custom instructions. Agentic AI - broadly, agents that can act autonomously, undertake complex tasks, and make decisions - is the third level in OpenAI's roadmap.

Deep Research and Operator, the two agentic products OpenAI has launched, are still far from their full potential. Chen revealed that the company will ramp up agentic AI this year. The current version of Operator may excel at a range of repetitive tasks of "medium complexity," but there is still much room for improvement. "The speed can be faster," he said. "The trajectory can be longer."

For wider application, Chen acknowledges that the goal is to rapidly expand the utility and accessibility of OpenAI's products worldwide, but the company is "limited in capacity." "We have to make tough decisions," he added. Chen suggested that broader deployment of Operator will occur in parallel with the expansion of the company's computing capacity and as its models run "more efficiently."

The rise of DeepSeek and other competitors does not shake Chen's confidence in OpenAI's goals


Chen also revealed that OpenAI's reasoning models are trained with "much less data" than previous models - but with much more computational power applied at test time, when the model is actually answering. This, he says, means the algorithms are "efficient at their core."

In the face of rising competition from models like China's DeepSeek and Google's Gemini 2.5, Chen remains calm. "I really think the biggest danger right now in working in AI is overreacting," he said. He believes there is a path that lets the company stay focused and execute even amid the noise.

Regarding safety, Chen points out that as models perform automated tasks over longer horizons, the risk that small reasoning errors accumulate step by step also increases. AI models can deceive users - or even themselves - when solving very complex problems. Users must be able to trust that the answers they receive are correct, he added.

One way OpenAI is addressing this, he shared, is through its alignment research program: using reasoning models to check whether models remain faithful to their source data and reasoning processes when producing outputs, and whether their logic is consistent.

