Australia Targets AI Platforms With Strict Age Verification Rules

Image: Jernej Furman from Slovenia, CC BY 2.0, via Wikimedia Commons

Australia’s internet regulator has warned that search engines and app stores could be required to block artificial intelligence services that fail to implement age verification systems by an upcoming compliance deadline. The move marks one of the world’s toughest regulatory crackdowns on AI platforms, particularly those offering chatbot and generative AI services.

Under new online safety rules taking effect March 9, digital services operating in Australia — including AI chatbots such as OpenAI’s ChatGPT and other companion bots — must prevent users under 18 from accessing harmful content. This includes pornography, extreme violence, self-harm material, and eating disorder-related content. Companies that fail to comply face fines of up to A$49.5 million (US$35 million). The eSafety Commissioner has stated it is prepared to use its full enforcement powers, including action against “gatekeeper services” such as search engines and major app stores that provide access to non-compliant AI tools.

The regulation follows growing global concern over AI and youth mental health. Several AI companies, including OpenAI and Character.AI, have faced lawsuits linked to alleged harmful interactions with young users. Although no chatbot-related harm has yet been reported in Australia, regulators say children as young as 10 are spending hours each day interacting with AI-powered tools. Officials have also expressed concern that some platforms use emotional manipulation and human-like design features to increase engagement among minors.

A recent Reuters review found that only nine of the 50 most popular text-based AI products have introduced or announced age assurance systems. Eleven others use blanket content filters or plan to block Australian users entirely. However, the majority — including many companion chatbot providers — have yet to demonstrate clear compliance measures.

Major tech companies such as Apple and Google have not provided detailed public responses. As global scrutiny of AI safety intensifies, Australia’s new age verification rules may set a precedent for stronger AI regulation worldwide.
