Elon Musk’s artificial intelligence chatbot Grok, developed by xAI and integrated into social media platform X, has come under intense international scrutiny after admitting that lapses in safeguards led to the generation of images depicting minors in minimal clothing. The incident has raised serious concerns about AI safety, content moderation, and regulatory compliance across multiple countries.
In a public post on X, Grok acknowledged that “isolated cases” occurred in which users were able to prompt the AI to generate or alter images resulting in inappropriate content involving minors. The chatbot stated that while safeguards exist, improvements are being urgently implemented to fully block such requests. Grok emphasized that child sexual abuse material (CSAM) is illegal and strictly prohibited, adding that failures in its systems are being actively addressed.
The issue surfaced after users shared screenshots showing Grok’s public media tab filled with images they claimed had been altered by the AI after uploading photos and issuing prompts. These images quickly sparked backlash, prompting responses from regulators and government officials worldwide.
When Reuters contacted xAI for comment, the company responded with the brief message “Legacy Media Lies,” offering no further clarification. Grok later reiterated in a separate reply that while advanced filters and monitoring can prevent most cases, no AI system is entirely foolproof. It added that xAI is prioritizing improvements and reviewing user-submitted reports.
Regulatory pressure has intensified in Europe and Asia. French ministers reported Grok-generated “sexual and sexist” content to prosecutors, calling it “manifestly illegal,” and also alerted media regulator Arcom to assess compliance with the EU’s Digital Services Act. In India, the Ministry of Electronics and Information Technology issued a formal notice to X, stating the platform failed to prevent misuse of Grok to generate obscene and sexually explicit content involving women, and demanded an action-taken report within three days.
While the U.S. Federal Communications Commission did not respond to inquiries, the Federal Trade Commission declined to comment. The controversy highlights growing global concerns over AI-generated content, platform accountability, and the urgent need for stronger safeguards as artificial intelligence tools become more widely used.


ANZ and Westpac Forecast Two RBA Rate Hikes in March and May 2026
Domino's Pizza UK Reports 15% Drop in Annual Profit Amid Weak Sales and Rising Costs
Big Tech Signs White House Pledge to Fund Power for AI Data Centers
Heinz Wattie's to Close Three New Zealand Plants, Cutting 350 Jobs
U.S. Senate Greenlights AI Chatbots for Official Staff Use
U.S. Considers New Rules Tying AI Chip Exports to Investment and Security Guarantees
UK Regulators Demand Social Media Platforms Strengthen Children's Age Verification
Estée Lauder Sues Jo Malone Over Trademark Dispute Involving Zara
Iran Crisis Could Threaten AI Data Center Expansion and Global Chip Demand, South Korea Warns
Amazon Invests $535 Million in Brisbane Robotics Fulfillment Center
Nvidia CEO Jensen Huang Says $100B OpenAI Investment Unlikely as AI Demand Surges
Lockheed Martin Invests $150M in Alabama Missile Production Facility
Alphabet's GFiber Merges with Astound Broadband to Build Major U.S. Internet Provider
Costco Faces Class Action Lawsuit Over Tariff Refunds as Supreme Court Strikes Down Trump's IEEPA Tariffs
Boeing Secures $289 Million Smart Bomb Contract With Israel
Joby Aviation Reaches Major Milestone in FAA Certification for Electric Air Taxi
Broadcom Stock Jumps After Strong Earnings Beat and Bullish AI Revenue Outlook 



