OpenAI co-founders Sam Altman and Greg Brockman are defending the company's safety measures after senior researchers Ilya Sutskever and Jan Leike resigned, departures that highlight internal disagreements over AI priorities.
Key OpenAI Safety Researchers Resign Amid Disputes Over AI Development Priorities
As reported by Business Insider, Ilya Sutskever, the company's chief scientist and co-founder, announced on X on May 14 that he was leaving. Hours later, his colleague Jan Leike followed suit.
Sutskever and Leike led OpenAI's superalignment team, which worked on keeping advanced AI systems aligned with human interests. That mission sometimes put them at odds with members of the company's leadership who pushed for more aggressive development.
"I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time until we finally reached a breaking point," Leike wrote on X on May 17.
Sutskever was one of the board members who voted to remove Altman as CEO in November, though he later said he regretted the decision.
After the departures, Altman called Sutskever "one of the greatest minds of our generation" and wrote on X that he was "super appreciative" of Leike's contributions. He also acknowledged that Leike had a point: "We have a lot more to do; we are committed to doing it."
However, as public concern grew, Brockman offered more detail on Saturday about how OpenAI plans to manage safety and risk going forward, particularly as it pursues artificial general intelligence and builds AI systems more capable than chatbots.
In a nearly 500-word post on X signed by both him and Altman, Brockman described the efforts OpenAI has already made to ensure the technology's safe development and deployment.
"We've repeatedly demonstrated the incredible possibilities from scaling up deep learning and analyzed their implications; called for international governance of AGI before such calls were popular; and helped pioneer the science of assessing AI systems for catastrophic risks," Brockman wrote.
Altman recently said that the ideal way to regulate AI would be through an international institution that provides appropriate safety testing. Still, he also raised concerns about lawmakers regulating a technology they may not fully understand.
OpenAI Faces Skepticism Despite Efforts to Ensure Safe Deployment of Advanced AI Systems
According to Brockman, OpenAI has also laid the groundwork for safely deploying AI systems with greater capabilities than GPT-4.
"As we build in this direction, we're not sure yet when we'll reach our safety bar for releases, and it's ok if that pushes out release timelines," Brockman stated.
Brockman and Altman concluded in their post that the best way to anticipate threats is through a "very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities," as well as collaboration with "governments and many stakeholders on safety."
But not everyone is convinced that OpenAI is moving forward with its research in a way that ensures human safety, least of all, it appears, the people who until a few days ago oversaw the company's efforts in this area.
"These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there," Leike said.
Photo: Andrew Neel/Unsplash

