As artificial intelligence (“AI”) technology advances, AI-based software and products are becoming an ever greater part of our daily lives. With this unprecedented technology in such widespread use, questions about the legal framework governing AI systems also arise.
The potential harm that AI-based systems can cause to human health or to the financial well-being of natural persons, legal entities, and companies is a deterrent for enterprises considering deploying AI systems in their operations. Companies are expected to predict the potential damage that AI, especially high-risk AI systems, can cause; to test the software; to take the measures necessary to prevent such damage; and to compensate for any damage that nevertheless occurs.
Due to the specific features of AI-based technologies, such as complexity of operation, autonomy, and opacity, conventional liability regimes such as fault liability and product liability are now considered incompatible with AI-based systems and need to be renewed or adapted.
With the purpose of ensuring a credible liability regime for AI-related damage and addressing other liability issues arising from digital devices such as drones and smart devices, the European Union (“EU”) has been preparing new legislation for some time. This article provides a brief outline of the EU's latest legislative proposal, the Proposal for an Artificial Intelligence Liability Directive (the “AI Liability Directive”).
With the development of AI and other new technologies into almost human-like, autonomous, self-driven systems, previous sets of liability rules have become insufficient to remedy the damage caused by these technologies, and the existing rules have become incompatible with the real-life problems in which AI is involved.
On 28 September 2022, the European Commission published the Proposal for the AI Liability Directive, which sets out non-contractual liability rules for damage caused by artificial intelligence and other emerging technologies.
As mentioned above, this Directive is not the EU's first piece of work on AI. The EU previously published the study “Liability for Artificial Intelligence and Other Emerging Digital Technologies”, which lays out the shortcomings of the current liability regimes for people affected by AI. It then released the White Paper on AI on 19 February 2020, which provides policy guidance on reducing the risks of AI. In April 2021, the Commission proposed the AI Act, whose main focus is to introduce a common legal framework for AI systems.
The timeline of these initiatives is as follows:
- 2019: Study on AI Liability
- 2020: White Paper on AI
- 2021: AI Act
- 2022: AI Liability Directive
1. THE MAIN PURPOSES OF THE AI LIABILITY DIRECTIVE
The main purpose of the AI Liability Directive is to introduce rules that establish legal liability for AI-related damage that may occur in everyday life. Given the incompatibility of the existing liability regimes and the lack of precise remedies for AI-related damage, it is hard to say that victims currently enjoy the same level of protection against damage arising from AI as against damage caused by conventional technologies.
The explanatory memorandum of the AI Liability Directive outlines its main purpose as follows:
“Current national liability rules, in particular based on fault, are not suited to handling liability claims for damage caused by AI-enabled products and services. Under such rules, victims need to prove a wrongful action or omission by a person who caused the damage. The specific characteristics of AI, including complexity, autonomy, and opacity (the so-called “black box” effect), may make it difficult or prohibitively expensive for victims to identify the liable person and prove the requirements for a successful liability claim. In particular, when claiming compensation, victims could incur very high up-front costs and face significantly longer legal proceedings, compared to cases not involving AI. Victims may therefore be deterred from claiming compensation altogether. These concerns have also been retained by the European Parliament (EP) in its resolution of 3 May 2022 on artificial intelligence in a digital age.”
The Directive aims to ensure a trustworthy liability regime that is applicable to and compatible with AI systems and other emerging technologies, enabling consumers to obtain redress and compensation for potential damage.
2. NEW PROVISIONS INTRODUCED BY THE AI LIABILITY DIRECTIVE
The AI Liability Directive introduces several important concepts. First, the rules treat the loss of data as a harm for which a person can seek civil compensation. Likewise, the proposed rules make clear that software counts as a “product” under the EU's liability laws, so product liability rules apply to damage caused by software. [1] Broadening these two definitions extends the reach of the legal framework to these new technologies.
The new AI Liability Directive introduces two mechanisms that ease the claimant's burden of proof, which are explained below:
- Presumption of Causality
- Right to Access Evidence
2.1. PRESUMPTION OF CAUSALITY
Under a fault-based liability regime, the claimant is expected to prove the wrongful act, the damage, and the causal link between the two. However, proving these elements might be too difficult, if not impossible, when it comes to AI-related damage. Due to the complex and still opaque nature and operation of AI systems, a victim may not even be able to tell whether the damage was caused by AI at all, let alone prove the causal link before a judicial authority: which act, specific fault, or omission of the AI system caused the damage, and how that act affected the outcome. Therefore, a rebuttable presumption of causality has been laid down in Article 4(1).
Under the new AI Liability Directive, a causal link between the fault and the damage is presumed if all of the following conditions are met:
- The claimant can prove the fault of the AI system's provider or user;
- The court considers it reasonably likely that this fault influenced the AI system's act or failure to act;
- The claimant can prove that the AI system's act or failure to act caused the damage.
Nonetheless, a potentially liable person can rebut the presumption by demonstrating that a different cause led the AI system to give rise to the damage. The Directive covers all types of damage that are currently compensated under each Member State's national law, such as physical injury, material damage, or discrimination. [2]
In other words, the Directive introduces a rebuttable presumption of causality to alleviate the burden-of-proof problem attached to complex AI systems, easing the fulfillment of the burden of proof without reversing it. The sketch below illustrates how these conditions fit together.
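To make the cumulative structure of this test easier to follow, here is a minimal illustrative sketch in Python. The class, field names, and functions are our own simplification of the Directive's conditions, not an official codification, and a court's assessment naturally cannot be reduced to booleans.

```python
from dataclasses import dataclass

@dataclass
class Article4Claim:
    """Simplified facts of a claim, loosely following Article 4(1) (illustrative only)."""
    fault_proven: bool                    # claimant proved fault of the AI provider or user
    fault_likely_influenced_output: bool  # court finds this reasonably likely
    output_caused_damage: bool            # claimant proved the AI's act/omission caused the damage

def causality_presumed(claim: Article4Claim) -> bool:
    """All three conditions must hold cumulatively for the presumption to arise."""
    return (claim.fault_proven
            and claim.fault_likely_influenced_output
            and claim.output_caused_damage)

def presumption_stands(claim: Article4Claim, rebutted_by_defendant: bool) -> bool:
    """The presumption is rebuttable: the defendant defeats it by showing that
    a different cause led the AI system to give rise to the damage."""
    return causality_presumed(claim) and not rebutted_by_defendant
```

For example, if the claimant proves the fault and the damage but the court does not consider it reasonably likely that the fault influenced the AI system's output, causality_presumed returns False and the presumption never arises.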
2.1.1. EXCEPTIONS TO THE PRESUMPTION OF CAUSALITY
Though the presumption of causality is the principle, the AI Liability Directive sets forth exceptions to this rule for a number of specific situations, summarized in the sketch after this list:
- For “high-risk AI systems” as defined in the AI Act, the Directive sets forth a different regime: the presumption of causality applies only in situations where AI providers or users have failed to comply with certain obligations listed in Articles 4(2) and 4(3).
- Furthermore, according to Article 4(4) of the Directive, if the defendant demonstrates that sufficient evidence and expertise are reasonably accessible for the claimant to establish the causal link between the damage and the AI system's act or failure to act, the presumption of causality does not apply.
- Another exception concerns situations where the AI system causes damage while being used in the course of a personal, non-professional activity. Under Article 4(6) of the Directive, the presumption of causality applies in such cases only if the defendant materially interfered with the conditions of the operation of the AI system, or if the defendant was required and able to determine the conditions of operation of the AI system and failed to do so.
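Extending the sketch above, the following illustration shows how these exceptions act as gates on the presumption. The boolean flags are again our own simplification of Articles 4(2) to 4(6), not the Directive's text.

```python
def presumption_applies(
    is_high_risk: bool,
    duties_breached: bool,                 # Arts. 4(2)/4(3): obligations not complied with
    evidence_reasonably_accessible: bool,  # Art. 4(4): defendant shows claimant can prove the link
    personal_use: bool,
    interfered_or_failed_to_set_conditions: bool,  # Art. 4(6)
) -> bool:
    """Rough gatekeeping logic for the presumption of causality (illustrative)."""
    # Art. 4(4): no presumption if the defendant shows that sufficient evidence
    # and expertise are reasonably accessible to the claimant.
    if evidence_reasonably_accessible:
        return False
    # Arts. 4(2)/4(3): for high-risk systems, the presumption applies only
    # where certain obligations were breached.
    if is_high_risk and not duties_breached:
        return False
    # Art. 4(6): for personal, non-professional use, the presumption applies
    # only if the defendant interfered with, or was required and able but
    # failed to determine, the conditions of operation.
    if personal_use and not interfered_or_failed_to_set_conditions:
        return False
    return True
```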
2.2. RIGHT TO ACCESS EVIDENCE
Another novelty introduced by the AI Liability Directive is the claimants' expanded right to access evidence, which applies only to damage related to high-risk AI systems. As explained above, proving the wrongful act of an AI system is considered difficult due to the technicality, complexity, and opacity of these systems. Therefore, the Directive empowers courts to order providers or users of high-risk AI systems to preserve and disclose evidence about the operation of those systems. This right enables claimants to prove fault, damage, and the causal link by obtaining more information and evidence from the providers or users of the AI systems.
The limits of the right to access evidence are set out in Article 3(4) of the Directive. The right must be exercised in line with the proportionality principle and without violating the defendant's legitimate interests. A court may order disclosure only where the requested information could constitute critical evidence for the claimant, and only to the extent necessary and proportionate to support the claim. The parties' legitimate interests, such as the protection of trade secrets and confidential information, must also be safeguarded by the court.
The AI Liability Directive makes specific reference to the trade secret protections under EU Directive 2016/943 (the so-called “Trade Secrets Directive”) and national transposing legislation, leaving national courts to make the delicate assessment of whether disclosure and preservation or the protection of secrets should prevail.[3]
The AI Liability Directive ensures that national courts are authorized to order a party to take specific confidentiality measures, or to adopt such measures ex officio. For example, the Trade Secrets Directive provides, among such measures, for the possibility of limiting access to documents, hearings, recordings, and transcripts to a small number of persons, and of redacting the sensitive parts of rulings.[4]
To obtain a disclosure order, the claimant must present sufficient facts and evidence to support the plausibility of the claim and must first have made proportionate attempts to obtain the evidence from the defendant. In addition to disclosure, the Directive also provides that the claimant may request the preservation of evidence, as set forth in Article 3(3).[5]
Article 3(5) introduces a presumption of non-compliance with a duty of care: if the defendant does not comply with the court's disclosure or preservation order, a rebuttable presumption arises that the defendant failed to comply with the relevant duty of care.
In practice, this means that if a company or organization fails to comply with a court-ordered disclosure concerning an AI system that was involved in the decision-making and may have contributed to the harm, the court can presume that the entity using the AI system breached its duty of care, which in turn eases the claimant's path to establishing liability, as illustrated below.
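A minimal sketch of this procedural mechanism, under the same simplifying assumptions as above (boolean stand-ins for the court's assessments):

```python
def disclosure_order_available(claim_plausibility_shown: bool,
                               proportionate_attempts_made: bool) -> bool:
    """Preconditions for a disclosure order (cf. Article 3): the claimant must
    support the plausibility of the claim with sufficient facts and evidence
    and must first have tried, proportionately, to obtain the evidence from
    the defendant."""
    return claim_plausibility_shown and proportionate_attempts_made

def duty_of_care_breach_presumed(order_issued: bool,
                                 defendant_complied: bool) -> bool:
    """Article 3(5): non-compliance with a disclosure or preservation order
    triggers a rebuttable presumption that the relevant duty of care was
    breached, which can in turn satisfy the fault condition of the
    Article 4(1) presumption sketched earlier."""
    return order_issued and not defendant_complied
```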
3. CONCLUSION
In this respect, the Directive eases the burden of proof, which would otherwise, if the existing liability regimes were applied unchanged, place the claimant in an unfair position; it alleviates that burden in a targeted and proportionate manner through the disclosure of evidence and rebuttable presumptions. Yet the EU acts cautiously when it comes to more far-reaching changes, such as a reversal of the burden of proof or an irrebuttable presumption of causality, which could put businesses in a difficult position and disturb the balance of interests between the parties.
Targeted measures to ease the burden of proof in the form of rebuttable presumptions were chosen as a pragmatic and appropriate way to help victims meet their burden of proof in the most targeted and proportionate manner possible.
[1] https://fortune.com/2022/09/29/eu-new-product-liability-rules-for-ai-eye-on-a-i/
[2] https://techcrunch.com/2022/09/28/eu-ai-liability-directive/
[3] https://www.lexology.com/library/detail.aspx?g=a67d241a-d993-4964-8dd6-7ab944c65972
[4] https://www.technologyslegaledge.com/2022/12/the-ai-liability-directive-eu-improves-liability-protections-for-those-impacted-by-ai/#_ftn12
[5] https://www.lexology.com/library/detail.aspx?g=a67d241a-d993-4964-8dd6-7ab944c65972
By: Att. İrem Özbay, Özoğul Yenigün & Partners
Note: The views and comments in the articles belong to the author(s) and do not reflect the views of the Ethics and Reputation Society (Etik ve İtibar Derneği) on the subject.