There are risks and harms that come with relying on algorithms to make decisions, and people are already feeling their impact. Whether reinforcing racial biases or spreading misinformation, many technologies labelled as artificial intelligence (AI) help amplify age-old malfunctions of the human condition.
In light of such problems, calls have been made to create a new human right against being subject to automated decision-making (ADM), which the UK Information Commissioner’s Office (ICO) describes as “the process of making a decision by automated means without any human involvement”.
Such systems rely on being exposed to data, whether factual, inferred, or created via profiling. But if effective regulation of ADM is the goal, creating new laws is probably not the way to go.
Our research suggests we should consider a different approach. Legal frameworks for data protection, non-discrimination, and human rights already offer protection to people from the negative impacts of ADM. Rules from these bodies of law can also guide regulation more generally. We could therefore focus on ensuring that the laws we already have are properly implemented.
Current harms and future risks
Automated decision-making is being used in various ways – and there are more applications on the way. Areas subject to automation include the processing of asylum and welfare support applications and the deployment of lethal military technology. But even where ADM is considered to bring benefits, it can also have negative effects.
The criminalisation of children is one possible risk of certain ADM systems: “predictive risk models” used in child protection services can result in vulnerable children being further discriminated against. ADM can also make securing work harder – a hiring algorithm developed by Amazon “scored female applicants more poorly than their equivalently qualified male counterparts.”
In several countries, including the UK, courts also rely on ADM. For example, it’s used to make sentencing recommendations, calculate the probability of a person reoffending, and assess the flight risk of defendants, which determines whether they will be released on bail pending trial.
These applications can result in unfair processes and unjust outcomes for many reasons. This could happen because a judge unwittingly accepts erroneous results produced by ADM, or because no one is able to understand how or why a particular system arrived at its conclusion.
Historically, human prejudices have also been embedded in the design of such software. This is because the algorithms are trained on real-world data, often from the internet. Exposing a system to this information may improve its performance at a task, but the data also reflects people’s biases. This means that members of marginalised groups can end up being punished, as we saw earlier when women were disadvantaged by a hiring algorithm.
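To make this mechanism concrete, here is a minimal sketch using entirely synthetic data (the feature names, the numbers, and the “group” label are invented for illustration; this is not the code behind any system mentioned above). A model trained on historical decisions that already encode prejudice learns to reproduce it:

```python
# A minimal, self-contained sketch: train a classifier on synthetic
# hiring decisions that historically penalised one group, then inspect
# what the model has learned. All values here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

qualification = rng.normal(size=n)      # genuine signal of ability
group = rng.integers(0, 2, size=n)      # 1 = hypothetical marginalised group

# Historical decisions rewarded qualification but penalised the group,
# so the "ground truth" labels already encode human prejudice.
hired = (qualification - 1.5 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, hired)

# The learned weight on `group` is strongly negative: the model has
# absorbed the historical bias, not corrected it.
print(dict(zip(["qualification", "group"], model.coef_[0])))
```

Note that simply deleting the group column would not fix this if other features correlate with it – Amazon’s system reportedly penalised CVs containing the word “women’s”, a proxy for gender, even though gender was never an explicit input.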
Protection and regulation
The urge to adopt new legal rules is perhaps understandable given the stakes and the harm ADM can and does cause. But when it comes to creating a new human right, negotiating new laws takes time, money and resources. And once a new law comes into force, it can take decades for its practical implications to be properly understood.
Given that many relevant laws already exist, it’s unclear whether a new human right would significantly influence how systems for automated decision making are designed and deployed.
Yet without tangible implementation and enforcement, the content of these existing laws can become hollow. Effective governance of ADM by these laws requires impact assessments of automated decisions, human supervision of ADM systems, and complaints processes. These should all be mandated. A thorough impact assessment will be able to identify, for example, unintended harms to individuals and groups, and help shape appropriate mitigation measures.
But these information-gathering measures need to be accompanied by sufficient oversight from a competent, resourced, and – possibly – public body. This would help uphold democratic accountability. Such bodies would also be tasked with ensuring that people negatively affected by ADM could file complaints that are adequately dealt with. These steps would make current laws on data protection, non-discrimination, and human rights more meaningful and effective in protecting individuals and groups from the harms of automated decisions.
The law across many areas is often criticised – sometimes rightly – for struggling to adapt to change. But a merit of the law in general is its ability to provide recourse to people who have experienced wrongdoing. It provides principled teeth to take a bite out of unprincipled conduct.
This capacity is significant for another reason. Corporate spin about digital technologies is often matched by how they are portrayed in public, and commentary frequently tends towards “hyperbole, alarmism, or exaggeration”. This hype complements practices such as ethics-washing, which provide a means of feigning commitment to regulation while ignoring the very laws capable of providing it.
Chatter about the likes of “AI ethics” greases the wheels of these strategies, sometimes turning nuanced and significant philosophical insights into box-ticking exercises. Ethics are an essential component of guiding the design, development, and deployment of automated decision-making. However, the language of “ethics” can also be used by spin doctors to distract us.
If anything here is worth remembering, it’s that ADM is not only a future problem but a present one. The laws that exist now can be used to address pressing issues stemming from this technology.
Whether this happens depends on public and private bodies improving the procedural machinery needed to enforce and oversee legal rules. These rules, many of which have been around for a while, just need a bit more life breathed into them to function effectively.

