Based on material from crypto.news

In the same week that Meta poached key researchers from OpenAI, reports emerged of a week-long internal shutdown at the company. What is happening?
On June 30, OpenAI reportedly instituted what several sources described as a week-long, company-wide shutdown to help employees recover from long, irregular workdays and mounting internal pressure.
The company has not officially confirmed the pause, but internal communications indicate that it occurred amid growing concerns about Meta's recruitment of top artificial intelligence specialists.
Days after Meta CEO Mark Zuckerberg hired four senior researchers from OpenAI for his superintelligence lab, OpenAI's Chief Research Officer Mark Chen sent a stern memo to employees.
"Right now, I feel an internal sense as if someone has broken into our home and stolen something," Chen wrote in a Slack message obtained by WIRED.
The memo described how Chen, CEO Sam Altman, and other executives were actively working to prevent ongoing departures.
"We have been more active than ever before," Chen said, describing efforts to reassess compensation and look for new ways to retain top researchers. "We are working around the clock to talk to those with offers."
Calling Meta's approach "deeply disruptive," Chen emphasized that OpenAI's response would be based on internal fairness. "I will fight to keep each of you," he wrote, "but I will not do so at the expense of fairness to others."
Throughout June, Meta's recruiting activity intensified, shifting from standard outreach to direct and coordinated efforts involving CEO Mark Zuckerberg himself.
According to The New York Times on June 28, the approach included emails, WhatsApp messages, and personal dinner invitations to Zuckerberg's homes in Palo Alto and Lake Tahoe.
The efforts were organized through an internal chat group at Meta called the "Recruitment Team" and focused specifically on OpenAI researchers working on cutting-edge models.
In addition to OpenAI, Meta also hired several AI researchers from Anthropic and Google, further expanding its new superintelligence division.
The effect on OpenAI was immediate. Eight researchers left to join Meta's new AI superintelligence division. Trapit Bansal, a key figure in reinforcement learning and the o1 reasoning model, was among the first to leave, as confirmed by TechCrunch on June 26.
Soon after, others followed. Lucas Beyer, Alexander Kolesnikov, and Xiaohua Zhai, who had helped establish OpenAI's Zurich office, also joined Meta, according to a Wall Street Journal report on June 25.
Four more researchers—Shengjia Zhao, Jiahui Yu, Shuchao Bi, and Hongyu Ren—later also left. Yu and Bi contributed to GPT-4o and o4-mini, two of OpenAI's latest models. Their departures were confirmed through deactivated Slack profiles.
The string of departures raised growing concerns among OpenAI leadership. In an interview on the Uncapped podcast on June 17, CEO Sam Altman stated that Meta offers signing bonuses of up to $100 million.
This figure was later challenged by Meta's CTO Andrew Bosworth, who told WIRED on June 30 that the compensation was structured differently and included several components.
Internally, OpenAI warned staff that Meta might use the shutdown week to step up its recruiting.
In a memo shared by Chief Research Officer Mark Chen, employees were advised to exercise caution and not be swayed by what he described as hasty or inflated offers.
In response, OpenAI began revising compensation packages and exploring new strategies to retain key talent as competition for AI researchers continues to intensify.
By July 2025, OpenAI and Meta were heading in completely different directions in their approaches to AI development.
OpenAI continues to build closed, proprietary models intended for controlled deployment and premium pricing. Its current lineup includes GPT-4o, GPT-4.5, o3, and o4-mini, offered through ChatGPT and the API.
Although these models are competitive in benchmark testing, they are available only through the API and can be costly for developers.
GPT-4.5, launched in February, showed performance gains over GPT-4o but faced widespread criticism for its price, reaching $150 per million output tokens.
In comparison, GPT-4o costs $10 and o3 $40 per million output tokens. Despite their strength in reasoning tasks, these costs have made scalable access challenging for developers.
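To put those figures in perspective, here is a minimal Python sketch (an illustration, not an official example) that sends one request with the openai client and estimates what the same response would cost at the output-token prices quoted above; the prices are taken from this article and may change.

    # Estimate output cost per model using the prices quoted in this article
    # (USD per 1M output tokens). Assumes the openai Python package is installed
    # and an OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    PRICES_PER_M_OUTPUT = {"gpt-4o": 10.0, "o3": 40.0, "gpt-4.5": 150.0}

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Summarize the AI talent race in one sentence."}],
    )

    tokens = response.usage.completion_tokens  # output tokens actually generated
    for model, price in PRICES_PER_M_OUTPUT.items():
        print(f"{model}: {tokens} output tokens ~ ${tokens / 1_000_000 * price:.6f}")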
OpenAI has expressed interest in expanding availability, announcing plans to release an open-weight reasoning model by the end of the year. The plan was first outlined in a request for feedback posted on the company's website in April.
Meta, on the other hand, has built its AI ecosystem on open-source foundations. The Llama family of models is central to this approach, led by the flagship Llama 3.1 405B; Llama models have collectively surpassed one billion downloads.
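To illustrate what that open-weight approach means in practice, here is a short, hypothetical Python sketch that loads a Llama checkpoint from Hugging Face with the transformers library and generates text locally; the smaller Llama-3.1-8B-Instruct checkpoint stands in for the 405B flagship (which needs far more hardware), and downloading meta-llama weights requires accepting Meta's license.

    # Run an open-weight Llama model locally via Hugging Face transformers.
    # Requires: pip install transformers torch accelerate, plus license acceptance on huggingface.co.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-3.1-8B-Instruct"  # illustrative stand-in for the 405B model
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

    prompt = "In one sentence, why do open-weight models matter for developers?"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=80)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))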