Former OpenAI executive Jan Leike resigns, criticizing the company for prioritizing "shiny products" over AI safety and calling out Sam Altman's leadership.
Safety vs. Product Focus
Leike, who led OpenAI's superalignment team, announced his resignation on Tuesday. He has since elaborated on his departure, saying that OpenAI does not show sufficient concern for safety, Business Insider reports.
In a lengthy X post on Friday, Leike wrote, "Over the past years, safety culture and processes have taken a backseat to shiny products." He added, "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."
Leike said in his posts that he joined OpenAI because he believed it would be the best place to research how to "steer and control" artificial general intelligence. On the company's priorities, the former executive said OpenAI should focus on "security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics."
Superalignment Team Dissolution
Leike also said that his team at OpenAI had been "sailing against the wind" in its effort to align AI systems with human benefit. Wired has since reported that the superalignment team is no longer active.
The team reportedly dissolved just days after Leike and Ilya Sutskever announced their resignations earlier this week. In his own post, Sutskever said he was "confident that OpenAI will build AGI that is both safe and beneficial" under the company's current leadership.
OpenAI declined to comment on the departures of Sutskever and other superalignment team members, or on the status of its research into long-term AI risks. John Schulman, who leads the team in charge of fine-tuning AI models after training, will now also head research on the risks associated with more powerful models.
Photo: Emiliano Vittoriosi/Unsplash