Self-driving cars have made impressive progress. They can follow lanes, keep their distance, and navigate familiar routes with ease. However, despite years of development, they still struggle with one critical problem: the rare and dangerous situations that cause the most serious accidents.
These “edge cases” include sharp bends on wet roads, sudden changes in slope, or situations where a vehicle approaches its physical limits of grip and stability. In real-world deployments, which often involve some level of shared control between driver and automation, such moments can arise from human misjudgment or from automated systems failing to anticipate rapidly changing conditions.
They happen infrequently, but when they occur, the consequences can be severe. A car might handle a thousand gentle curves perfectly, but fail on the one sharp bend taken a little too fast.
Current autonomous systems are not trained well enough to handle these moments reliably. From a data perspective, these events form what scientists call a “long tail”: they are statistically rare, but disproportionately important.
Collecting more real-world data does not fully solve the problem, because deliberately seeking out dangerous conditions is costly, slow, and risky. Many of these scenarios are simply too dangerous to practise in real life. We cannot deliberately put vehicles into near-crashes on public roads just to see whether the software can cope. If an AI system rarely sees extreme situations during training, it has little chance to respond well when they occur in real life.
In current fleets of self-driving cars, a human in a control centre is often on hand to intervene if something goes wrong. But to achieve fully driverless cars, researchers need to find ways of effectively training AI systems to handle high-risk situations.
Our research team at Dublin City University, working with colleagues at the University of Birmingham, has been tackling this gap.
We have developed a virtual “proving ground” that uses generative AI to safely create rare, high-risk driving scenarios, allowing vehicles to learn from them without putting anyone in danger. Instead of waiting for rare events to happen naturally, we can teach an AI model to create realistic but challenging driving scenarios on demand, including ones that push vehicles close to their physical limits.
Practising safely
The generative AI used in our system learns from real driving data and then produces new, realistic scenarios. Crucially, it does not just reproduce typical roads and speeds.
It focuses deliberately on the most demanding situations, including sharp curves, steep slopes and high speeds, combined in ways that challenge both human drivers and automated systems. This allows us to expand the range of situations a vehicle can experience during training, without ever leaving the simulator.
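As a rough illustration of the idea, here is a minimal sketch in Python of how scenario generation can be biased towards demanding conditions. The parameter ranges and the simple physics-based screen are illustrative assumptions, not the learned generative model described above.

```python
import random
from dataclasses import dataclass

@dataclass
class Scenario:
    curve_radius_m: float   # smaller radius = sharper bend
    slope_percent: float    # positive = uphill, negative = downhill
    entry_speed_kmh: float  # speed at which the bend is approached

def sample_challenging_scenario() -> Scenario:
    """Sample a driving scenario biased towards demanding conditions.

    Illustrative only: a real generative model learns these distributions
    from driving data rather than using hand-picked ranges.
    """
    curve_radius = random.uniform(30, 120)   # metres: favour tight bends
    slope = random.uniform(-10, 10)          # percent grade
    entry_speed = random.uniform(60, 110)    # km/h: favour fast approaches
    return Scenario(curve_radius, slope, entry_speed)

def is_high_risk(s: Scenario) -> bool:
    """Crude screen: keep only scenarios near the limits of grip."""
    v = s.entry_speed_kmh / 3.6                       # convert to m/s
    lateral_g = (v ** 2) / (s.curve_radius_m * 9.81)  # cornering demand in g
    return lateral_g > 0.6                            # close to typical tyre limits

# Build a batch of rare, demanding scenarios for the simulator.
training_batch = [s for s in (sample_challenging_scenario() for _ in range(1000))
                  if is_high_risk(s)]
```

The point of the sketch is the bias: instead of sampling everyday driving and hoping extreme cases appear, generation is steered towards the long tail on purpose.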
In effect, the car can “practise” dangerous situations safely, repeatedly and systematically. However, the goal of our work is not to replace the human driver entirely. Instead, we focus on human–machine shared driving: a partnership in which the car and the driver support each other.
Humans are very good at intuition, anticipation and adapting to unfamiliar situations. Machines excel at fast reactions and precise control. Shared driving aims to combine these strengths. In our system, control is continuously adjusted depending on risk.
When the road is straight and safe, the driver remains firmly in charge, but when the system detects a high-risk situation, such as a sharp bend that the driver may be approaching too quickly, it smoothly increases the level of automated assistance to help stabilise the vehicle. Importantly, this is not a sudden takeover. The transition is gradual and adaptive, designed to feel natural rather than intrusive.
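As a rough illustration of risk-dependent blending, the Python sketch below shows one way control authority could shift gradually towards the automation as estimated risk rises. The function, its parameters and the example values are illustrative assumptions rather than our actual controller.

```python
def blend_steering(driver_cmd: float, ai_cmd: float, risk: float,
                   prev_authority: float, max_step: float = 0.05) -> tuple[float, float]:
    """Blend driver and automation steering based on estimated risk.

    `risk` is assumed to be a value in [0, 1] from a risk estimator.
    The automation's share of control ("authority") moves towards the risk
    level a little at a time (at most `max_step` per control cycle), so
    assistance ramps up smoothly instead of taking over abruptly.
    """
    target_authority = min(max(risk, 0.0), 1.0)
    # Rate-limit the change in authority for a gradual, non-intrusive transition.
    delta = max(-max_step, min(max_step, target_authority - prev_authority))
    authority = prev_authority + delta
    steering = (1.0 - authority) * driver_cmd + authority * ai_cmd
    return steering, authority

# Example: estimated risk rises as the car approaches a sharp bend.
authority = 0.0
for risk in [0.1, 0.3, 0.6, 0.8]:
    steering, authority = blend_steering(driver_cmd=0.20, ai_cmd=0.35, risk=risk,
                                         prev_authority=authority)
```

The design choice this captures is the one described above: the driver keeps authority when risk is low, and assistance grows in small steps rather than through a sudden handover.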
To evaluate the system, we went beyond pure simulation. We used a driver-in-the-loop platform, where real people sit in a high-fidelity driving simulator and interact with the AI in real time. The results were encouraging. Less experienced drivers benefited most: when they struggled on complex or winding roads, the system provided timely support, reducing the risk of losing control.
At the same time, the system avoided unnecessary intervention during safe driving, helping drivers feel more engaged rather than overridden. Overall, this adaptive approach led to safer, smoother driving compared with fixed or overly conservative control strategies. It also allows both the human driver and the AI to improve their handling of extreme road situations.
Autonomous vehicles are often judged by how well they handle routine driving, but public trust will ultimately depend on how they behave when things go wrong. By using generative AI to train vehicles on rare but critical scenarios, we can expose weaknesses early, improve decision making, and build systems that are better prepared for the real world.
Just as importantly, by keeping humans in the loop, we can design automation that supports drivers rather than replacing them outright. Fully driverless cars may still be some way off, but smarter training systems like this can help bridge the gap by making both human-driven and automated vehicles safer on today’s roads.

Mingming Liu does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Mingming Liu, Assistant Professor, School of Electronic Engineering, Dublin City University


