In a provocative development, Chinese military-linked researchers have reportedly adapted Meta’s Llama AI model into a tool for intelligence gathering. This AI, named ChatBIT, uses the open-source model to analyze sensitive data, raising global alarm over potential misuse.
China’s Military AI Tool Unveiled in Research
Reuters reviewed three academic papers and found that China had developed its own artificial intelligence tool based on an existing Llama model; the technology might eventually find its way into the Chinese military.
Researchers from two Chinese military-affiliated institutions took part in creating "ChatBIT," which is built on Llama 13B. The model was fine-tuned to gather and process military intelligence data, and the Chinese military might use it for analysis or training in the future.
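To make concrete what adapting an open-weights model like Llama involves, here is a minimal, hypothetical sketch of domain fine-tuning using the Hugging Face transformers, datasets, and peft libraries. The model ID, corpus path, and hyperparameters are illustrative assumptions; none of the published papers describe ChatBIT's actual training setup.

```python
# Illustrative sketch only: how an openly released model can be fine-tuned
# on a domain corpus. The model ID and data path are hypothetical placeholders;
# this does not reflect ChatBIT's actual training code.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "huggyllama/llama-13b"  # assumed open-weights checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Attach small low-rank adapters (LoRA) so only a fraction of the weights train.
model = get_peft_model(
    model,
    LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"),
)

# Domain corpus: one document per line in a plain-text file (placeholder path).
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1, per_device_train_batch_size=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point of the sketch is simply that once model weights are publicly downloadable, this kind of adaptation requires only commodity tooling, which is central to the enforcement problem discussed below.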
A Limited Dataset Raises Questions on ChatBIT’s Capabilities
However, it's worth noting that ChatBIT was trained on a rather small dataset for an AI model: only around 100,000 military documents. Joelle Pineau, a McGill University professor and Meta's VP of AI research, told Reuters that this is a drop in the ocean compared with the trillions of tokens leading models are trained on, suggesting ChatBIT may not be as capable as the research implies.
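For a sense of scale, a quick back-of-envelope calculation helps; the average document length below is an assumption for illustration, not a figure from the papers.

```python
# Rough comparison of ChatBIT's reported fine-tuning corpus with the
# pretraining corpora of modern large language models.
documents = 100_000
tokens_per_document = 2_000                        # assumed: a few pages each
finetune_tokens = documents * tokens_per_document  # ~2e8 tokens

pretrain_tokens = 1_000_000_000_000                # trillions-of-tokens order of magnitude

print(f"Fine-tuning corpus: ~{finetune_tokens:.1e} tokens")
print(f"Pretraining corpus: ~{pretrain_tokens:.1e} tokens")
print(f"Ratio: {finetune_tokens / pretrain_tokens:.4%}")  # a tiny fraction
```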
Meta's rules ban Llama models from being used for espionage, "military, warfare, nuclear industries or applications," and similar purposes, and restrict re-sharing the models outside of the US. Because the models are openly available, however, those rules are difficult to enforce.
Meta’s Policy on Military Use and Unauthorized Access
A statement from Meta reads: "any use of our models by the People's Liberation Army is unauthorized and contrary to our acceptable use policy." The company claims to have taken measures to avoid the misuse of its models.
When Meta released the 13B model in February of last year, it announced that access would be granted solely to researchers.
How Did Chinese Researchers Gain Access?
The exact way the Chinese researchers got their hands on the model—via direct access or some other channel—remains unknown.
Meta also says the Llama model in question is "outdated" and of little relevance given the state of China's AI research, which is already producing significantly better models that could even outstrip the ones made in the US right now.
Global Security Risks of AI Misuse Highlighted
The unfortunate reality is that AI tools and models have been abused before. Political deepfakes are among the most pernicious applications of AI right now: disinformation campaigns have deployed AI-generated images and videos, and AI-generated audio has been used in a campaign aimed at discouraging Americans from casting ballots.
Russia, Israel, and Iran have all used social media bots driven by artificial intelligence to try to alter public opinion on international politics or elections.
China-US Tech Rivalry Drives Further Sanctions
The tech rivalry between the United States and China is another persistent front. From chips to drones, both nations have sanctioned each other's technology. Meanwhile, China is hard at work on its own Neuralink rival and on advanced AI processors.
To keep China from gaining access to the most cutting-edge chips and AI technology, the US has invested billions in its domestic semiconductor manufacturing industry since 2022, according to PC Mag. Chinese institutions and businesses, meanwhile, have for years exploited legal loopholes to acquire cutting-edge AI chips.
Debate Over Open-Source AI for Innovation and Security
While some policy experts worry about the security risks, others maintain that open-source AI is essential for fairness and open innovation. The trade-off is unavoidable, though: once a model is openly released, any country, including China, can ultimately make use of it.