A new online safety bill could allow censorship of anyone who engages with sexual content on the internet

Under new draft laws, the eSafety Commissioner could order your nude selfies, sex education or slash fiction to be taken down from the internet with just 24 hours’ notice.

Officially, the Morrison government’s new bill aims to improve online safety.

But in doing so, it gives broad, discretionary powers to the commissioner, with serious ramifications for anyone who engages with sexual content online.

Broad new powers

After initial consultation in 2019, the federal government released the draft online safety bill last December. Public submissions closed on the weekend.

The bill contains several new initiatives, from cyberbullying protections for children to new ways to remove non-consensual intimate imagery.

Crucially, it gives the eSafety Commissioner — a federal government appointee — a range of new powers.

It contains rapid website-blocking provisions to prevent the circulation of “abhorrent violent material” (such as the live-streaming of terror attacks). It reduces the timeframe for “takedown notices” (where a hosting provider is directed to remove content) from 48 to 24 hours. The bill can also require search engines to delete links and app stores to prevent downloads, with civil penalties of up to $111,000 for non-compliance.

But one concerning element of the bill that has not received wide public attention is its takedown notices for so-called “harmful online content”.

A move towards age verification

Due to the impracticality of classifying the entire internet, regulators are now moving towards systems that require access restrictions for certain content and make use of user complaints to identify harmful material.

In this vein, the proposed bill will require online service providers to use technologies to prevent children gaining access to sexual material.

Controversially, the bill gives the commissioner power to impose their own specific “restricted access system”.

This means the commissioner could decide that, to access sexual content, users must upload their identity documents, scan their fingerprints, undergo facial recognition technology or have their age estimated by artificial intelligence based on behavioural signals.

But there are serious issues with online verification systems. Similar schemes have already been considered and abandoned in comparable jurisdictions: the United Kingdom dropped its plans in 2019, following implementation difficulties and privacy concerns.

The worst-case scenario is that governments collect databases of people’s sexual preferences and browsing histories that can be leaked, hacked, sold or misused.

eSafety Commissioner as ‘chief censor’

The bill also creates an “online content scheme”, which identifies content that users can complain about.

The bill permits any Australian internet user to make complaints about “class 1” and “class 2” content that is not subject to a restricted access system. These categories are extremely broad, ranging from actual, to simulated, to implied sexual activity, as well as explicit nudity.

In practice, people can potentially complain about any material depicting sex that they find on the internet, even on specific adult sites, if there is no mechanism to verify the user’s age.

The draft laws then allow the commissioner to conduct investigations and order removal notices as they “think fit”. There are no criteria for what warrants removal, no requirement to give reasons, and no process for users to be notified of complaints or given an opportunity to respond to them.

There is no requirement to publish transparent enforcement data, so the commissioner could remove content that is neither harmful nor unlawful. The commissioner is also specifically exempt from liability for damages or civil proceedings.

This means users will have little clarity on how to actually comply with the scheme.

Malicious complaints and self-censorship

The potential ramifications of the bill are broad. They are likely to affect sex workers, sex educators, LGBTIQ health organisations, kink communities, online daters, artists and anyone who shares or accesses sexual content online.

While previous legislation was primarily concerned with films, print publications, computer games and broadcast media, this bill applies to social media, instant messaging, online games, websites, apps and a range of electronic and internet service providers.

It means links to sex education and harm reduction material for young people could be deleted by search engines. Hook-up apps such as Grindr or Tinder could be made unavailable for download. Escort advertising platforms could be removed. Online kink communities like Fetlife could be taken down.

The legislation could embolden users - including anti-pornography advocates, disgruntled customers or ex-partners - to make vexatious complaints about sexual content, even where there is nothing harmful about it.

The complaints system is also likely to have a disproportionate impact on sex workers, especially those who turned to online work during the pandemic, and who already face a high level of malicious complaints.

Sex workers consistently report restrictive terms of service as well as shadowbanning and deplatforming, where their content is stealthily or selectively removed from social media.

The requirement for service providers to restrict children’s access to sexual content also provides a financial incentive to take an over-zealous approach. Providers may employ artificial intelligence at scale to screen and detect nudity (which can confuse sex education with pornography), apply inappropriate age verification mechanisms that compromise user privacy, or, where this is too onerous or expensive, take the simpler route of prohibiting sexual content altogether.

In this sense, the bill may operate in a similar way to United States “FOSTA-SESTA” anti-trafficking legislation, which prohibits websites from promoting or facilitating prostitution. This resulted in the pre-emptive closure of essential sites for sex worker safety, education and community building.

New frameworks for sexual content moderation

Platforms have been notoriously poor when it comes to dealing with sexual content. But governments have not been any better.

We need new ways to think about moderating sexual content.

Historically, obscenity legislation has treated all sexual content as lacking in value unless it was redeemed by literary, artistic or scientific merit. Our current classification framework of “offensiveness” is also based on outdated notions of “morality, decency and propriety”.

Research into sex and social media suggests we should not simply conflate sex with risk.

Instead, some have proposed human rights approaches. These draw on a growing body of literature that sees sexual health, pleasure and satisfying sexual experiences as compatible with bodily autonomy, safety and freedom from violence.

Others have pointed to the need for improved sex education, consent skills and media literacy to equip users to navigate online space.

What’s obvious is we need a more nuanced approach to decision-making that imagines sex beyond “harm”, thinks more comprehensively about safer spaces, and recognises the cultural value in sexual content.
