OpenAI has dismantled its Superalignment team, initially formed to address AI risks, following the resignations of key leaders Ilya Sutskever and Jan Leike.
OpenAI Disbands Superalignment Team Days After Leaders Resign, Sparking Concerns Over AI Safety
According to a Business Insider report, OpenAI formed its Superalignment team in July 2023, co-led by Ilya Sutskever and Jan Leike. The team was focused on reducing AI dangers, such as the possibility of AI "going rogue."
The team was disbanded days after its two leaders announced their resignations earlier this week. Sutskever stated in his post that he was "confident that OpenAI will build AGI that is both safe and beneficial" under the present leadership.
He also stated that he was "excited for what comes next," referring to a project that is "very personally meaningful" to him. The former OpenAI executive has yet to discuss the project in depth but said he would share more information later.
OpenAI Leaders' Departures Raise Questions About Company Values and AI Safety Priorities
Sutskever, a cofounder and former chief scientist at OpenAI, made waves when he announced his departure. In November, he helped orchestrate the brief removal of CEO Sam Altman. Although he later said he regretted his role in Altman's ouster, Sutskever's future at OpenAI had been in question since Altman's reinstatement.
Following Sutskever's announcement, Leike posted on X, formerly Twitter, that he would also leave OpenAI. On May 17, Leike wrote a series of posts explaining his departure, which he said came after disagreements with the company's leadership over its core priorities had been building for "quite some time."
Leike stated that his team had been "sailing against the wind" and had struggled to obtain computing resources for its research. The Superalignment team's objective was to use 20% of OpenAI's computing capacity over four years to "build a roughly human-level automated alignment researcher," according to OpenAI's announcement of the team last July.
Leike added that "OpenAI must become a safety-first AGI company," calling the work the company is pursuing "an inherently dangerous endeavor" and arguing that OpenAI had focused more on releasing "shiny products" than on safety.
Leike did not respond to a request for comment.
The Superalignment team's goal was to "solve the core technical challenges of superintelligence alignment in four years," a target the company acknowledged was "incredibly ambitious," adding that success was not guaranteed.
The team addressed risks such as "misuse, economic disruption, disinformation, bias and discrimination, addiction, and overreliance." According to the company's post, the new team's work was in addition to existing OpenAI work aimed at improving the safety of current models, such as ChatGPT.
Wired reported that some remaining members of the team have been reassigned to other OpenAI teams.
Photo: Jonathan Kemper/Unsplash