Sam Altman revealed the launch timeline of GPT-5, the progress of o3 and Deep Research, and the $500 billion "Stargate" infrastructure project on the official OpenAI podcast.

At midnight today (the 19th), OpenAI launched its first podcast episode on its official YouTube channel, featuring CEO Sam Altman. In a 40-minute conversation, he outlined the company's next steps, revealing that GPT-5 is expected to launch this summer, alongside enhanced reasoning capabilities from the o3 family and the Deep Research tool. This article summarizes the key points.

GPT-5 is about to be released? Sam Altman's new blueprint for model evolution

Regarding the market's primary concern about the next-generation flagship model, Altman gave a clear timeline: "GPT-5 will likely be launched sometime this summer." He added that the naming and iteration of future models may change fundamentally. OpenAI's past approach was to train a large model and then release it; now, systems have become more complex and can keep evolving through ongoing "post-training." This has sparked an internal debate: should updates ship continuously under the same main version number, as with GPT-4, or should a numbering scheme like GPT-5.1, 5.2, 5.3 be adopted so users can clearly track version changes? The question reflects a paradigm shift in AI from "discrete releases" to "continuous evolution."
Altman admitted that the current naming scheme, in which GPT-4o coexists with models like o3, is a product of this transition and is genuinely confusing for users. He hopes to move past it quickly into a clearer era of GPT-5 and GPT-6. Users, he believes, should not have to weigh o4-mini-high against o3; they should simply have one top-tier, reliable model to use.

This capacity for continuous evolution also blurs the definition of "GPT-5." Altman posed a rhetorical question: "Can users really distinguish between a top-tier GPT-4.5 and a completely new GPT-5? Likely not." The implication is that future upgrades will be seamless and incremental, with performance gains mattering more than jumps in version numbers.

Altman also redefined the bar for Artificial General Intelligence (AGI). Measured by the standards of five years ago, he argued, current models have already surpassed earlier definitions of AGI. He therefore proposed a higher goal: "superintelligence." For him, the hallmark of superintelligence is AI that can "autonomously discover new science," or that dramatically amplifies human scientists' ability to discover new knowledge. Scientific progress, he believes, is the greatest driver of improvements in human life, and AI's potential in that area is limitless. AI is already showing tremendous value in assisting programmers and scientists, which gives him growing confidence in the roadmap toward this goal.

Stargate Project: Unlocking the Multi-Billion Dollar Bet on AI's Future

In the current AI race, computing power has become the decisive factor. Altman discussed the U.S. "Stargate" project, a grand initiative to build ultra-large-scale computing infrastructure.
He candidly stated that the world's existing computing power is far from enough: "If people knew what more computing power could do, they would crave much more." Although the rumored $500 billion funding scale has not been confirmed, Altman expressed high confidence in both fundraising and future deployment. He emphasized that the Stargate project involves not only hardware construction but also international politics and energy distribution. He also criticized Elon Musk for using his influence to obstruct cooperation with the UAE, stressing that AI should not be a zero-sum game but should instead create an entirely new industry, as the transistor did.

Energy is the project's core driving force. In the short term it will rely on a mix of natural gas, solar, and nuclear power, with longer-term hopes pinned on nuclear fission and fusion. Altman proposed a key shift in thinking: "Make energy intelligent, and then export that intelligence globally." AI can break down the geographical constraints of energy distribution and thereby reshape global digital infrastructure.

From Privacy Wars to Advertising Doubts: OpenAI's Trust Challenges

As AI becomes deeply woven into users' private lives, trust and privacy have become core issues OpenAI cannot avoid. Altman pushed back firmly against the New York Times' demands in its lawsuit against OpenAI. The newspaper asked the court to compel OpenAI to retain user chat records beyond the usual 30-day limit, which Altman called a "crazy overreach." He said, "We will obviously fight to the end, and I believe we will win." He hopes the incident becomes an opportunity for society to recognize the importance of user privacy in the AI era and to establish a solid legal and ethical framework. "People are now having quite intimate conversations with ChatGPT; it will become a very sensitive source of information," Altman emphasized, adding that privacy protection must be taken seriously.
This raises another sensitive topic: advertising. How will OpenAI handle the commercial potential of its vast user data? Altman's stance is extremely cautious. He admitted that he is not entirely against advertising, and even finds some ad experiences on Instagram quite good, but he believes that introducing advertising into ChatGPT must be done very carefully, and the bar for proof will be "very, very high." Users trust ChatGPT highly, he noted, partly because its experience is not "polluted" by advertising incentives the way traditional social media and search engines are. He therefore drew a red line: "If we start modifying the content returned by large language models (LLMs) based on who pays more, that would feel very bad. For users, that would be the moment trust collapses." He sketched some models that might not undermine trust, such as taking a cut of transactions without affecting model output, or placing ads outside the main conversation flow. In any case, the premise must be that ads are "truly useful to users," and he stated clearly that they will not interfere with the objectivity of the LLM. By contrast, he considers OpenAI's current model of "creating high-quality services that users pay for" very clear and healthy.

The Ultimate Form of Human-Machine Interaction: Partnering with Jony Ive to Create New Hardware

Another major highlight of the interview was Altman's confirmation that OpenAI is working with legendary former Apple designer Jony Ive to develop new AI hardware. He candidly stated, "We are trying to do something of extremely high quality, this...