Policymakers are facing tough decisions on AI safety as rapidly evolving technology creates cybersecurity risks and oversight gaps. A US official emphasized the need for industry collaboration to address synthetic content manipulation and other challenges.
AI Safeguards Hindered by Evolving Science
Reuters reports that a huge obstacle stands in the way of policymakers who want to propose AI protections: a field that is still developing.
Elizabeth Kelly, head of the U.S. Artificial Intelligence Safety Institute, stated on Tuesday that developers of AI systems are themselves struggling with the question of how to avoid the exploitation of such technologies, and that there is no simple solution that government authorities can adopt.
Kelly raised the issue of cybersecurity while speaking at the Reuters NEXT conference in New York City. According to her, "jailbreaks," or methods of getting around the security and other restrictions put in place by AI labs, are not hard to pull off.
"It is difficult for policymakers to say these are best practices we recommend in terms of safeguards, when we don't actually know which ones work and which ones don't," said Kelly.
Experts in the field of technology are currently arguing about the best way to evaluate and safeguard AI in all its forms.
Cybersecurity Challenges Amplify AI Safety Concerns
Synthetic content is another concern. Digital watermarks, which tell consumers when images are AI-generated, are easy to manipulate, she said, making it difficult for authorities to set industry standards.
Kelly noted that the U.S. AI Safety Institute, which was established during the Biden administration, is working to address these concerns through coalitions of academia, businesses, and nonprofits, which in turn drive the institute's assessments of AI technology.
Per Investing.com, concerning the fate of the organization following Donald Trump's inauguration in January, she stated that AI safety is a "fundamentally bipartisan issue."
Global Cooperation on Interoperable AI Safety Testing
Earlier this month, in San Francisco, Kelly, the institute's first director, presided over the first-ever convening of AI safety institutes from around the globe.
Asked about the outcome of those gatherings, Kelly said the ten member countries were working toward interoperable safety testing, aided by more technical, hoodie-wearing specialists than a typical diplomatic meeting.
"It was very much getting the nerds in the room," according to her.

