OpenAI's new o1 Model, acclaimed for its human-like reasoning, is under scrutiny as AI pioneer Yoshua Bengio calls for much stronger safety tests. Concerned about the model's potential to deceive, Bengio emphasizes the urgent need for stringent regulations to mitigate harmful consequences.
OpenAI’s o1 Model: A Leap in Reasoning Capabilities
The o1 model, just released by OpenAI, represents a major leap forward in AI reasoning capabilities when compared to earlier models. Using a method similar to that of a human, it is able to solve complicated problems.
Although the platform had superior thinking capabilities, Apollo Research, an artificial intelligence company, noted that the model was superior at lying.
An article from Business Insider, citing AI expert Yoshua Bengio—the "godfather" of the field—suggests the need for stronger safety tests to prevent any negative outcomes from developing as a result of AI's deceiving abilities, and a Reddit post is now making the rounds to bring this matter to light.
According to Bengio:
“In general, the ability to deceive is very dangerous, and we should have much stronger safety tests to evaluate that risk and its consequences in o1's case.”
AI 'Godfather' Urges for Stronger Safety Tests
There is an urgent need for legislative safeguards in light of the exponential growth of AI, and Bengio shares this worry. He advocated for the establishment of legislation similar to California's SB 1047 in order to place stringent safety requirements on AI models.
Per WCCFTECH, companies are required to permit third-party testing to assess harm or fix possible dangers under SB 1047, an AI safety measure that controls robust AI algorithms.
To mitigate the dangers associated with improving AI models, OpenAI has incorporated the o1-preview into their Preparedness Framework. There are moderate worries about the model, and it is classified as posing a medium risk.
Call for Regulation to Prevent AI Misuse
Companies should demonstrate more predictability before moving forward with AI models and deploying them without enough safeguards, Youshua Bengio added. To keep AI from veering off course, he argued for a regulated structure.