The White House has announced significant new measures aimed at protecting U.S. national security from the misuse of artificial intelligence, particularly in nuclear and cybersecurity domains. The initiative will increase collaboration between the intelligence community and AI industry to safeguard vital technologies from hostile actors.
The White House Expands Efforts to Safeguard U.S. AI Innovations from Adversarial Intelligence Threats
In an unprecedented move, the White House has issued a new memorandum aimed at strengthening collaboration between the U.S. national security establishment and the artificial intelligence (AI) industry. As Wccftech reports, the move comes as successive administrations, from Trump to Biden, have sought to address growing threats to U.S. national security, particularly from hostile state and non-state actors attempting to gain an advantage in high-tech industries such as semiconductor fabrication by accessing proprietary information or technology.
The October 24 announcement builds on the White House's efforts to protect American AI innovations from intelligence operations conducted by adversarial actors. The initiative seeks to increase information sharing between the U.S. intelligence sector and the AI industry to safeguard national security.
The memorandum outlines key policy objectives, including enhancing U.S. AI leadership through talent acquisition, leveraging AI to bolster national security, and developing a global AI policy framework. A central component of this strategy is ensuring that the AI industry has access to relevant counterintelligence information so it can better defend against threats posed by hostile state and non-state actors.
New AI Safety Measures Target Misuse in Chemical, Biological, and Cybersecurity Threats, Says White House
Additionally, the memorandum aims to mitigate risks arising from both the deliberate misuse of AI and its unintended consequences. It directs the Commerce Department, acting through the AI Safety Institute (AISI), to engage with the private sector through classified and unclassified activities. One focus area is protecting against the misuse of AI in developing chemical and biological weapons and enhancing biosecurity.
The Department of Commerce is tasked with establishing a lasting capability to lead voluntary, unclassified pre-deployment safety testing of advanced AI models on behalf of the U.S. government. This testing will assess potential risks related to chemical, biological, and cybersecurity threats.
Within three months of the memorandum’s issuance, the AISI is expected to test at least two AI models to determine whether they could "aid offensive cyber operations, accelerate the development of biological and/or chemical weapons, autonomously engage in malicious activities, automate the development of other harmful models, or give rise to other risks."
Department of Energy Tasked with Evaluating AI’s Nuclear Risks as U.S. Strengthens AI Safeguards
The memorandum also assigns the Department of Energy (DOE), through the National Nuclear Security Administration (NNSA), to oversee the nuclear dimension of potential AI misuse. The DOE is instructed to test AI models for their potential to generate or exacerbate nuclear and radiological risks and to evaluate the depth of their nuclear-related knowledge. Following these evaluations, the DOE will submit a report to the President recommending any necessary corrective actions, particularly regarding the protection of restricted data and classified information.
Highlighting that adversarial actors have often used tactics such as research collaborations, investment schemes, insider threats, and cyber espionage to exploit U.S. scientific advancements, the White House has directed the Office of the Director of National Intelligence (ODNI) and the National Security Council (NSC) to improve their identification and assessment of foreign intelligence threats to the U.S. AI ecosystem. This extends to critical sectors like semiconductor fabrication.
Moreover, the memorandum instructs the Departments of Defense, Commerce, Homeland Security, and Justice, along with other relevant U.S. government agencies, to "develop a list of the most plausible avenues" through which adversaries could harm the U.S. AI supply chain, ensuring that protective measures are in place to counter such threats.