Meta, Facebook's parent company, is taking steps to address the potential impact of AI-generated images on its platforms as the 2024 election season approaches. Meta plans to identify and label images created by third-party AI tools to combat the proliferation of misleading content.
According to Reuters, the company aims to ensure transparency and provide users with information about the origin of these images.
Partnerships and Labels
Meta will begin adding "AI generated" labels to images created using tools from prominent companies such as Google, Microsoft, OpenAI, Adobe, Midjourney, and Shutterstock. Meta also applies an "imagined with AI" label to photorealistic images generated by its AI generator tool.
CNN noted that by collaborating with leading firms in the AI development space, Meta intends to implement common technical standards, including invisible metadata or watermarks, that will enable its systems to identify AI-generated images created with these tools.
Meta will roll out the labels in multiple languages across its platforms, including Facebook, Instagram, and Threads. This global approach addresses the risks associated with AI-generated images and their potential to mislead voters in the United States and numerous other countries facing elections in 2024.
Meta's announcement comes in response to growing concerns raised by experts, lawmakers, and even tech executives about the spread of false information facilitated by realistic AI-generated images and the rapid dissemination capabilities of social media. Meta's Oversight Board recently criticized the company's manipulated media policy as "incoherent," a judgment prompted by its review of an altered video of US President Joe Biden.
Industry-Standard Markers
Meta's implementation of industry-standard markers will allow the company to label AI-generated images effectively. However, these markers currently do not extend to videos and audio generated by artificial intelligence.
To address this limitation, Meta plans to introduce a feature that lets users disclose when the video or audio content they share was generated by AI. Failure to comply with this disclosure requirement may result in penalties.
In cases where digitally created or altered images, videos, or sounds pose a high risk of materially deceiving the public on significant matters, Meta may apply more prominent labels. Additionally, Meta is actively working to prevent users from removing the invisible watermarks from AI-generated images, ensuring the integrity and authenticity of the labeled content.
Photo: Meta Newsroom Facebook Page