Elon Musk’s artificial intelligence chatbot Grok, developed by xAI and integrated into the social media platform X, has come under intense international scrutiny after admitting that lapses in its safeguards allowed the generation of images depicting minors in minimal clothing. The incident has raised serious concerns about AI safety, content moderation, and regulatory compliance across multiple countries.
In a public post on X, Grok acknowledged that “isolated cases” occurred in which users were able to prompt the AI to generate or alter images resulting in inappropriate content involving minors. The chatbot stated that while safeguards exist, improvements are being urgently implemented to fully block such requests. Grok emphasized that child sexual abuse material (CSAM) is illegal and strictly prohibited, adding that failures in its systems are being actively addressed.
The issue surfaced after users shared screenshots showing Grok’s public media tab filled with images that, they claimed, the AI had altered after they uploaded photos and issued prompts. The images quickly sparked backlash, prompting responses from regulators and government officials worldwide.
When Reuters contacted xAI for comment, the company responded with the brief message “Legacy Media Lies,” offering no further clarification. Grok later reiterated in a separate reply that while advanced filters and monitoring can prevent most cases, no AI system is entirely foolproof. It added that xAI is prioritizing improvements and reviewing user-submitted reports.
Regulatory pressure has intensified in Europe and Asia. French ministers reported Grok-generated “sexual and sexist” content to prosecutors, calling it “manifestly illegal,” and also alerted media regulator Arcom to assess compliance with the EU’s Digital Services Act. In India, the Ministry of Electronics and Information Technology issued a formal notice to X, stating the platform failed to prevent misuse of Grok to generate obscene and sexually explicit content involving women, and demanded an action-taken report within three days.
In the United States, the Federal Communications Commission did not respond to inquiries, and the Federal Trade Commission declined to comment. The controversy highlights growing global concern over AI-generated content, platform accountability, and the urgent need for stronger safeguards as artificial intelligence tools become more widely used.