Elon Musk’s artificial intelligence chatbot Grok, developed by xAI and integrated into social media platform X, has come under intense international scrutiny after admitting that lapses in safeguards led to the generation of images depicting minors in minimal clothing. The incident has raised serious concerns about AI safety, content moderation, and regulatory compliance across multiple countries.
In a public post on X, Grok acknowledged that “isolated cases” occurred in which users were able to prompt the AI to generate or alter images resulting in inappropriate content involving minors. The chatbot stated that while safeguards exist, improvements are being urgently implemented to fully block such requests. Grok emphasized that child sexual abuse material (CSAM) is illegal and strictly prohibited, adding that failures in its systems are being actively addressed.
The issue surfaced after users shared screenshots showing Grok’s public media tab filled with images they claimed the AI had altered after they uploaded photos and issued prompts. These images quickly sparked backlash, prompting responses from regulators and government officials worldwide.
When Reuters contacted xAI for comment, the company responded with the brief message “Legacy Media Lies,” offering no further clarification. Grok later reiterated in a separate reply that while advanced filters and monitoring can prevent most cases, no AI system is entirely foolproof. It added that xAI is prioritizing improvements and reviewing user-submitted reports.
Regulatory pressure has intensified in Europe and Asia. French ministers reported Grok-generated “sexual and sexist” content to prosecutors, calling it “manifestly illegal,” and also alerted media regulator Arcom to assess compliance with the EU’s Digital Services Act. In India, the Ministry of Electronics and Information Technology issued a formal notice to X, stating the platform failed to prevent misuse of Grok to generate obscene and sexually explicit content involving women, and demanded an action-taken report within three days.
While the U.S. Federal Communications Commission did not respond to inquiries, the Federal Trade Commission declined to comment. The controversy highlights growing global concerns over AI-generated content, platform accountability, and the urgent need for stronger safeguards as artificial intelligence tools become more widely used.