OpenAI has dismantled its Superalignment team, initially formed to address AI risks, following the resignations of key leaders Ilya Sutskever and Jan Leike.
OpenAI Disbands Superalignment Team Days After Leaders Resign, Sparking Concerns Over AI Safety
According to a Business Insider report, OpenAI formed its Superalignment team in July 2023, led by Ilya Sutskever and Jan Leike. The team focused on reducing AI dangers, such as the possibility of AI "going rogue."
The team was disbanded days after its leaders, Sutskever and Leike, announced their resignations earlier this week. Sutskever stated in his post that he was "confident that OpenAI will build AGI that is both safe and beneficial" under the present leadership.
He also said he was "excited for what comes next," referring to a project that is "very personally meaningful" to him. The former OpenAI executive has yet to discuss the project in depth but said he would share more information later.
OpenAI Leaders' Departures Raise Questions About Company Values and AI Safety Priorities
Sutskever, a cofounder and former chief scientist at OpenAI, made waves when he announced his departure. In November, he helped remove CEO Sam Altman. Although he later said he regretted his role in Altman's ouster, Sutskever's future at OpenAI had been in question since Altman's reinstatement.
Following Sutskever's announcement, Leike said on X, formerly Twitter, that he would also leave OpenAI. On May 17, the former Superalignment co-lead wrote a series of posts explaining his departure, which he said came after he had disagreed with the company's core priorities for "quite some time."
Leike stated that his team had been "sailing against the wind" and had struggled to obtain computing resources for its research. According to OpenAI's announcement of the team last July, the Superalignment team's objective was to use 20% of OpenAI's computing capacity over the next four years to "build a roughly human-level automated alignment researcher."
Leike added, "OpenAI must become a safety-first AGI company." He said that building generative AI is "an inherently dangerous endeavor" and that OpenAI had focused more on releasing "shiny products" than on safety.
Jan Leike did not return a request for comment.
The Superalignment team's goal was to "solve the core technical challenges of superintelligence alignment in four years," which the company acknowledged was "incredibly ambitious." OpenAI also stated that success was not guaranteed.
The team addressed risks such as "misuse, economic disruption, disinformation, bias and discrimination, addiction, and overreliance." According to the company's post, the new team's work was in addition to existing OpenAI work aimed at improving the safety of current models, such as ChatGPT.
Wired reported that some remaining team members have been transferred to other OpenAI teams.
Photo: Jonathan Kemper/Unsplash

