Beautiful Virgin Islands

Wednesday, May 13, 2026

Robo-cop: EU wants firms to be held liable for harm done by AI

New liability regime would give victims of AI systems their day in court.
The European Commission on Wednesday proposed new rules that would see makers of artificial intelligence-powered software and products forced to compensate people harmed by their creations.

A new AI Liability Directive would make it easier to sue for compensation when a person or organization gets hurt or suffers damages through artificial intelligence-powered drones and robots or because of software such as automated hiring algorithms.

“The new rules will give victims of damage caused by AI systems an equal chance and access to a fair trial and redress,” Justice Commissioner Didier Reynders told reporters ahead of the presentation of the proposals.

The draft law is the latest attempt by European officials to regulate AI and set a global standard to control the flourishing technology. It comes as the EU is in the throes of negotiating the AI Act, the world’s first bill to rein in high-risk uses of AI, including facial recognition, "social scoring" systems and AI-boosted software for immigration and social benefits.

“If we want to have real trust of consumers and users in the AI application, we need to be sure that it's possible to have such an access to compensation and to have access to real decision in justice if it's needed, without too many obstacles, like the opacity of the systems,” said Reynders.

Under the new law, victims would be able to bring claims against a provider, developer or user of AI technology if they suffer damage to their health or property, or suffer discrimination in breach of fundamental rights such as privacy. Until now, building such cases has been difficult and extremely expensive for victims who believe they have been harmed by an AI system, because the technology is complex and opaque.

Courts would get more power to pry open the black boxes of AI companies and ask for detailed information about the data used for the algorithms, the technical specifications and risk-control mechanisms.

With this new access to information, victims could prove that damage came from a tech company that sold an AI system or that the user of the AI — for instance, a university, workplace or government agency — failed to comply with obligations in other European laws like the AI Act or a directive to protect platform workers. Victims would also have to prove the damage is linked to the specific AI applications.

The European Commission also presented a revamped Product Liability Directive. The 1985 law is not adapted to new product categories like connected devices, and the revised rules aim to enable customers to claim compensation when they are harmed by a defective software update, upgrade or service. The proposed product liability rules also bring online marketplaces into the crosshairs: under the rules, a marketplace can be held liable if, upon request, it fails to disclose the name of a trader to a person who suffered harm.

The Commission's proposal will still need approval from national governments in the EU Council and from the European Parliament.

Parliament in particular could object to the European Commission's choice to propose a weaker liability regime than the one Parliament itself suggested earlier.

The chamber in 2020 called on the Commission to adopt rules to ensure victims of harmful AI can obtain compensation, asking specifically that developers, providers and users of high-risk autonomous AI could be held legally responsible even for unintentional harm. But the EU executive decided to go with a “pragmatic” approach that is weaker than this strict liability regime, saying the evidence was “not sufficient to justify” such a regime.

“We chose the lowest level of intervention,” said Reynders. “We need to see whether new developments [will] justify stronger rules for the future.”

The Commission said it will review whether a stricter regime is needed five years after the directive comes into force.