EP JURI Committee INL Draft Report stresses: No over-regulation of liability for AI

On 4 May, the Legal Affairs Committee (JURI) published its second Draft Initiative Report on Artificial Intelligence. Rapporteur Axel Voss (EPP, Germany) gave the European Commission his perspective on a future civil liability regime for Artificial Intelligence (2020/2014(INL)). Overall, Voss recognizes that AI brings new legal challenges, but argues that the European Commission should refrain from major changes to the liability framework. If a person suffers harm caused by a defective AI-system, the Product Liability Directive (PLD) should remain the legal means to seek compensation from the producer. If the harm is caused by an interfering third person, the existing fault-based liability systems in the Member States offer a sufficient level of protection.

The draft report makes one important addition to the existing liability regimes: it addresses the liability of deployers of AI-systems via a proposal for a regulation on AI liability. Deployers are those who decide on the use of AI-systems; they primarily exercise control over the associated risks and benefit from the systems' operation. Currently, liability claims against deployers tend to fail because of the difficulty of proving the deployer's fault.

To solve this, the Rapporteur proposes two approaches:

  1. High-risk AI-systems: The deployer of a high-risk AI-system exercises control over a system that can harm people in a random and unpredictable manner. Consequently, the deployer should be subject to a strict liability regime and compensate the victim for any harm to their important, legally protected rights. The Draft Report defines clear criteria for determining which AI-systems qualify as high-risk. Voss notes that, due to the rapid developments in the field, the Commission should be able to amend the list of high-risk AI-systems through delegated acts, supported by a newly formed standing committee involving national experts and stakeholders.
  2. All other AI-systems: A person who suffers harm caused by an AI-system not listed as high-risk should still benefit from a presumption of fault on the part of the deployer. The national law regulating compensation for harm caused by the AI-system should remain applicable.

The Draft Report also argues that all new laws on AI should take the form of regulations. Since the digital sphere is characterized by cross-border dynamics, only a harmonized approach can keep pace with global digital competition. Finally, Voss calls on the Commission to combine this new Regulation on the liability regime for deployers with a review of the Product Liability Directive.
