OpenAI Disbands Team Tackling AI Risks Amid Leadership Changes and GPT-4o Launch

OpenAI disbands its Superalignment team, focused on managing AI risks like it "going rogue."

OpenAI has dismantled its Superalignment team, initially formed to address AI risks, following the resignations of key leaders Ilya Sutskever and Jan Leike.

OpenAI Disbands Superalignment Team Days After Leaders Resign, Sparking Concerns Over AI Safety

According to a Business Insider report, OpenAI formed its Superalignment team in July 2023, led by Ilya Sutskever and Jan Leike. The team was focused on reducing AI dangers, such as the possibility of it "going rogue."

The team was disbanded days after its leaders, Ilya Sutskever and Jan Leike, announced their resignations earlier this week. Sutskever said in his post that he was "confident that OpenAI will build AGI that is both safe and beneficial" under the present leadership.

He also said he was "excited for what comes next," referring to a project he called "very personally meaningful." The former OpenAI executive has yet to discuss the project in depth but said he would share more details later.

OpenAI Leaders' Departures Raise Questions About Company Values and AI Safety Priorities

Sutskever, a cofounder and former chief scientist at OpenAI, made waves when he announced his departure. In November, he took part in the board's removal of CEO Sam Altman. Although he later said he regretted his role in Altman's ouster, Sutskever's future at OpenAI had been in question since Altman's reinstatement.

Following Sutskever's announcement, Leike said on X, formerly Twitter, that he would also leave OpenAI. On May 17, the former Superalignment co-lead wrote a series of posts explaining his departure, which he said came after he had disagreed with the company's core priorities for "quite some time."

Leike said his team had been "sailing against the wind" and had struggled to obtain computing resources for its research. The Superalignment team's objective was to use 20% of OpenAI's computing capacity over the next four years to "build a roughly human-level automated alignment researcher," according to OpenAI's announcement of the team last July.

Leike added, "OpenAI must become a safety-first AGI company." He stated that constructing generative AI is "an inherently dangerous endeavor" and that OpenAI focused more on releasing "shiny products" than safety.

Jan Leike did not respond to a request for comment.

The Superalignment team's goal was to "solve the core technical challenges of superintelligence alignment in four years," which the company conceded was "incredibly ambitious." OpenAI also acknowledged that success was not guaranteed.

The team addressed risks such as "misuse, economic disruption, disinformation, bias and discrimination, addiction, and overreliance." According to the company's post, the new team's work was in addition to existing OpenAI work aimed at improving the safety of current models, such as ChatGPT.

Wired reported that some remaining members have been reassigned to other OpenAI teams.

Photo: Jonathan Kemper/Unsplash
