
The New EU AI Liability Directive: What Developers & Operators Of AI Systems Need To Know

The European Commission published its proposal for an AI Liability Directive[1] on 28 September 2022. Part of a package of new rules which will change the legal landscape for AI systems, the AI Liability Directive will sit alongside the new AI Act[2] (the Council's position on which was expected to be approved on 6 December 2022[3]), as well as a revised Product Liability Directive[4].

The AI Liability Directive sets out a new EU-wide liability regime for AI systems and will be relevant to all involved in the procurement, design, deployment and use of AI systems (referred to here as 'operators'). The Directive will introduce two new rules for attributing liability in non-contractual fault-based claims (i.e. civil liability) where an AI system is intrinsically involved. These rules will clarify, on an EU-wide basis, how claims for damages involving AI systems will be handled.

The first rule, set out in Article 3, provides a right to evidence. A person injured by a high-risk AI system has the right to access the documentation, information and records which operators involved in the design, development and deployment of the system are required to maintain under the AI Act. Where an operator refuses to provide access to this information, the injured person can apply to the courts for an order requiring its preservation and disclosure; failure to comply with such an order gives rise to a presumption that the operator did not meet the relevant duty of care (rebuttable if the operator submits evidence to the contrary). To offer operators some protection, certain measures are aimed at safeguarding their confidential information and trade secrets and at limiting disclosure to only what is necessary.

The second rule, set out in Article 4, is the presumption of causation. This rule is designed to stop operators of AI systems from raising the defence that, because of the opaque and complex nature of these systems (i.e. an autonomous black box), the injured person cannot prove that the operator's fault caused the system to produce the output (or fail to produce the output) which did the damage. Broadly speaking (and there are some additional factors set out in the Directive which will need to be carefully considered), where three conditions are met to the satisfaction of the court, the rule creates a rebuttable presumption of a causal link between the operator's fault and the AI system's output (or failure to produce an output):

  1. The operator is shown (or is presumed by the court pursuant to Article 3) to be at fault because they did not meet the applicable duty of care. Further, in the case of a high-risk system, the claimant must also show that the operator failed to meet at least one of the applicable requirements of the AI Act – these include requirements relating to training data, transparency and human oversight, as well as appropriate levels of accuracy, robustness and cybersecurity.
  2. It is reasonably likely, based on the circumstances, that the fault has influenced the output (or failed output) of the AI system.
  3. That output (or failed output) gave rise to the damage suffered.

Whilst the AI Liability Directive is unlikely to come into effect in the immediate future, and when it does will be subject to a two-year transition period, it marks a clear shift in the civil liability regime (away from traditional fault-based systems). The new regime will apply to users and operators of AI systems within the EU, to those marketing or implementing AI systems in the EU, and where a system's output is used in the EU.

As mentioned, the AI Liability Directive is part of a package and, alongside the AI Act, complements the EU’s new Product Liability Directive which will (amongst other things) widen the scope of products covered by the defective products regime to include ‘electricity, digital manufacturing files and software’ and programs ‘enabling the automated control of machinery or tools’.

Operators may find gaining a complete understanding of this package (in particular, the interplay between the new liability regime and some of the more difficult to interpret aspects of the AI Act) a challenge. However, the AI Liability Directive reinforces the need to gain such an understanding: operators will need to operationalise the key requirements of the AI Act in order to manage the risk of liability under the new regime.

Key steps to be considered include:

  • Developing an AI systems compliance strategy
  • Mapping current use of AI systems, especially high-risk systems
  • Ensuring robust documentation is in place, e.g. testing and activity logs
  • Operationalising robust processes and procedures, e.g. incident response
  • Implementing a training programme

If you would like to learn more about the AI Liability Directive, please contact the author.


[1] European Commission, Liability Rules for Artificial Intelligence (europa.eu)

[2] European Commission, Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), COM(2021) 206 (eur-lex.europa.eu)

[3] EURACTIV, AI Act: Czech EU presidency makes final tweaks ahead of ambassadors' approval (euractiv.com)

[4] European Commission, Proposal for a Directive of the European Parliament and of the Council on liability for defective products, COM(2022) 495 (europa.eu)
