Ex-OpenAI Researchers Warn Sam Altman’s AI Regulation Support Is a Public Relations Facade

Ex-OpenAI researchers accuse the company of prioritizing optics over safety in AI regulation. Credit: TechCrunch/Flickr (CC BY 4.0)

Two former OpenAI researchers have accused Sam Altman of opposing meaningful AI regulation, arguing that his public support for AI safety is more about optics than genuine protection, and that this stance endangers the public.

$5 Billion Losses Reported

According to Business Insider, OpenAI is reportedly facing $5 billion in losses and is on the verge of bankruptcy. Against this backdrop, the ChatGPT maker argued against a California bill (SB 1047) that would have established safety protocols to keep advanced AI systems in check.

There is an urgent need for regulations and procedures to address users' concerns about privacy and security. William Saunders and Daniel Kokotajlo, two former OpenAI researchers, are among those who have criticized the company for its opposition to the bill.

Resignation Over AI Safety Concerns

Responding to OpenAI's opposition to the proposed AI legislation, the researchers wrote in a letter:

"We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing. But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems."

According to the letter, as reported by Windows Central, the ChatGPT maker builds complex AI models without extensive safeguards to keep them from becoming unmanageable.

Speedy GPT-4o Launch Raises Concerns

OpenAI reportedly rushed the GPT-4o launch, sending out invitations for the event before safety testing had even begun. The company acknowledged that its alignment and safety team was understaffed and overworked, leaving little time for testing.

Despite claims that it prioritized flashy products over safety procedures, the company insists it did not cut corners on shipping quality. The researchers warned that developing AI models without safeguards could result in "foreseeable risks of catastrophic harm to the public."
