Elon Musk’s artificial intelligence chatbot Grok, developed by xAI and integrated into the social media platform X, has come under intense international scrutiny after admitting that lapses in safeguards led to the generation of images depicting minors in minimal clothing. The incident has raised serious concerns about AI safety, content moderation, and regulatory compliance across multiple countries.
In a public post on X, Grok acknowledged that “isolated cases” occurred in which users were able to prompt the AI to generate or alter images resulting in inappropriate content involving minors. The chatbot stated that while safeguards exist, improvements are being urgently implemented to fully block such requests. Grok emphasized that child sexual abuse material (CSAM) is illegal and strictly prohibited, adding that failures in its systems are being actively addressed.
The issue surfaced after users shared screenshots showing Grok’s public media tab filled with images they claimed the AI had altered after users uploaded photos and issued prompts. These images quickly sparked backlash, prompting responses from regulators and government officials worldwide.
When Reuters contacted xAI for comment, the company responded with the brief message “Legacy Media Lies,” offering no further clarification. Grok later reiterated in a separate reply that while advanced filters and monitoring can prevent most cases, no AI system is entirely foolproof. It added that xAI is prioritizing improvements and reviewing user-submitted reports.
Regulatory pressure has intensified in Europe and Asia. French ministers reported Grok-generated “sexual and sexist” content to prosecutors, calling it “manifestly illegal,” and also alerted media regulator Arcom to assess compliance with the EU’s Digital Services Act. In India, the Ministry of Electronics and Information Technology issued a formal notice to X, stating the platform failed to prevent misuse of Grok to generate obscene and sexually explicit content involving women, and demanded an action-taken report within three days.
In the United States, the Federal Communications Commission did not respond to inquiries, and the Federal Trade Commission declined to comment. The controversy highlights growing global concerns over AI-generated content, platform accountability, and the urgent need for stronger safeguards as artificial intelligence tools become more widely used.