Artificial Intelligence 101: A Special Mention to Accountability


"Just as electricity transformed almost everything 100 years

ago, today I actually have a hard time thinking of an industry

that I don’t think AI (Artificial Intelligence)

will transform in the next several years".

— Andrew Ng


Foreword


Much is said about the wonderful potential of Artificial Intelligence (AI) and how it can benefit our lives. However, many people are not sufficiently informed and educated in this area to consciously weigh the pros and cons this technology already brings to their lives. Nowadays, users, tired of endless legal notices and privacy policies, hand over their personal data without knowing who is going to use it and how. The result? A lack of trust in AI systems and a fear of the unknown.


This series of Artificial Intelligence 101 articles intends to give the reader a general picture of the current regulatory framework (or lack thereof) around this technology, and to examine its potential both to help society and to jeopardize it, focusing in the latter case on discrimination and privacy-related risks.

Artificial Intelligence 101 is divided into ten chapters.


A robot arm threatening a human worker
Dettmer, O. (2020). Untitled [Illustration]. The Economist. https://www.economist.com/finance-and-economics/2020/07/30/the-fear-of-robots-displacing-workers-has-returned

Artificial Intelligence 101: A Special Mention to Accountability


After analyzing the risks associated with AI systems and some of the possible solutions to tackle them, it is important to discuss accountability. This is what the present article is about.


Identifying AI risks will be of no use unless it is accompanied by civil liability rules that provide sufficient guarantees to individuals who may be harmed by AI systems. The regulatory framework must give individuals those guarantees by duly integrating the principles of responsibility and precaution: under the precautionary principle, any action or policy that may cause harm to people or the environment, and on which there is no scientific consensus, should be abandoned. No society is perfect, but efforts must be made to work towards zero risk for AI systems.


As noted in a previous 101 article, Artificial Intelligence 101: Risks on AI Implementation (Part I), AI systems are becoming more autonomous. This makes it increasingly difficult to apply the traditional liability scheme and to trace the obligations attributable to each party in the life chain of an AI system (developer, producer, distributor, end customer, and user). It therefore becomes highly necessary to establish a trustworthy liability criterion applicable to the intervening parties in case of harm caused by, or involving, an AI system, and to adapt it to various social contexts. It is no secret that everyone stands to suffer the negative consequences of harm caused by AI systems.


Xausa, E. (2019). Surgical robots have thin rods instead of bulky hands, and the rods never tremble. The New Yorker. https://www.newyorker.com/magazine/2019/09/30/paging-dr-robot


Considering the current discussions on accountability for AI systems, the most widely proposed accountability principles are the following:


1) Applying the principle of presumption of guilt. This means placing the burden of proof in a claim for damages on the agent responsible for the AI system: the agent has to prove that the system acted diligently and in compliance with the regulations. Once this is proved, the consumer would have to prove a causal link between the damage and the AI system. Under this principle, the entity responsible for the AI system is assumed to be in a better position to explain and prove that it acted diligently, since it knows the AI system, its algorithms, and its decisions; consumers generally do not have access to this knowledge.


2) Implementing the risk doctrine. Here, the standard of due diligence is raised for agents involved in the development and implementation of AI systems, because these systems pose a high risk to the population. Under this doctrine, the burden of proof is also reversed: the agents must prove that they acted diligently and with sufficient care considering the risks involved. If they cannot prove the required diligence and care, they will be liable for the damages.


3) There are also discussions on vicarious liability, i.e., liability for the acts of others, under which a guardian or employer is liable for the acts of the person under their guardianship or in their service. An example would be an employer's liability for the wrongful acts of their employees. Such liability is established when it can be proved that there was a lack of due supervision in a relationship of subordination or dependence between the person who caused the injury and their supervisor. For AI systems, a similar model can be instituted: the owner of an AI system is responsible for monitoring its actions and is directly liable for any harm it may cause, always on the understanding that AI systems themselves cannot be subjects of rights and obligations towards humans and organizations.


One of the main challenges facing accountability procedures for AI systems is the opacity of some of these systems, i.e., the difficulty of understanding the reasoning and operations of the machines (the black-box effect), which ultimately makes it hard to determine who in the responsibility chain is liable for a potential malfunction. It is also important to highlight that, since AI systems operate across different spheres and sectors, they will be subject to diverse regulations: specific AI regulations, product regulations (liability and safety), banking and financial regulations, medical and healthcare regulations, and more.
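

As a toy illustration of this opacity, consider the following minimal Python sketch, which uses the common scikit-learn machine-learning library. The loan-application framing, data, and model are hypothetical and not drawn from any cited source; the point is only that a routine machine-learning decision offers no human-readable rationale:

```python
# A toy "black box": the decision is an aggregate of many opaque voters.
# The data, model, and loan-application framing are all hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data standing in for, e.g., loan applications.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

applicant = X[:1]
print("decision:", model.predict(applicant)[0])         # 0 or 1, no rationale
print("sub-models consulted:", len(model.estimators_))  # 300 decision trees
# Nothing here records *why* the applicant was accepted or rejected: the
# reasoning is distributed across the whole ensemble, which is precisely
# what complicates attributing liability when such a decision causes harm.
```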


In the Report on liability arising from AI and other emerging technologies of 2020, the EU Group of Experts proposed, among other measures, that the agents involved in the development, implementation, and operation of AI systems should carry mandatory civil liability insurance and be subject to supervision and control. The group also proposed a possible certification system for AI systems, subject to examination and control by public authorities. Finally, it mentioned the possibility of creating a supervisory body similar to the European Data Protection Board (EDPB).


Cristofani, B. (2018). Leave it to the experts: a thriving ecosystem has sprung up to offer A.I. expertise and technical help. The Economist. https://www.behance.net/gallery/68099809/The-Economist-AI-in-business-Special-Report

The above-mentioned proposals seem to have been taken seriously, since they are partly reflected in the new European Artificial Intelligence Act. This regulation will surely bring many debates to the AI responsibility table, but it must be recognized as the first consolidated attempt to unify and establish a general legal framework for businesses, governments, institutions, and citizens regarding the regulation of AI systems. It formally distinguishes AI systems according to their associated risk: minimal risk, limited risk, high risk, and unacceptable risk.
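

To make this tiered structure concrete, here is a minimal Python sketch of the four risk categories as a data structure. The tier names come from the Act itself; the example use cases and their mapping are illustrative assumptions, not the legal text:

```python
# The AI Act's four risk tiers, sketched as an enum (tier names from the
# Act; every example use case below is a hypothetical illustration).
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"            # e.g. spam filters: no extra obligations
    LIMITED = "limited"            # e.g. chatbots: transparency duties
    HIGH = "high"                  # e.g. credit scoring: conformity checks
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring: prohibited outright

# Hypothetical classification of example use cases.
EXAMPLE_TIERS = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,
    "credit_scoring": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
}

def may_be_deployed(use_case: str) -> bool:
    """Unacceptable-risk systems are banned; all other tiers may be
    deployed subject to their respective obligations."""
    return EXAMPLE_TIERS[use_case] is not RiskTier.UNACCEPTABLE

print(may_be_deployed("credit_scoring"))  # True, under strict obligations
print(may_be_deployed("social_scoring"))  # False
```

The design point worth noticing is that the unacceptable tier is not a heavier compliance burden but an outright prohibition, while the obligations on the remaining tiers scale with the risk.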


In an attempt to prevent and mitigate harmful outcomes, the European AI Act already foresees preventive measures such as safety- and security-by-design mechanisms that should allow verification of the AI system at every step (a closer audit control), aligned with the EU certification of suitability to commercialize AI systems that is also included in the Act. Control before, during, and after the implementation of an AI system is not an easy task, especially because, as shown throughout the 101 Artificial Intelligence analyses, AI often produces unexpected, harmful results and impacts on human life that are hard to anticipate and do not surface until the system has been deployed and caused harm.
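

As a sketch of what "verification at every step" could look like in practice, the snippet below implements a minimal append-only, tamper-evident audit trail for the life chain of a single AI decision. The actors, steps, and field names are hypothetical illustrations, not requirements taken from the AI Act:

```python
# A minimal hash-chained audit trail: each entry commits to the previous
# one, so tampering with any recorded step is detectable after the fact.
import hashlib
import json
from datetime import datetime, timezone

audit_log: list[dict] = []

def record_step(actor: str, step: str, payload: dict) -> None:
    """Append one entry whose hash covers the previous entry's hash."""
    prev_hash = audit_log[-1]["hash"] if audit_log else ""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,  # the party accountable for this step
        "step": step,
        "payload": payload,
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    audit_log.append(entry)

# Hypothetical life chain of one automated decision:
record_step("developer", "model_trained", {"model_version": "1.3"})
record_step("producer", "system_certified", {"certificate": "EU-XYZ"})
record_step("user", "decision_made", {"input_id": 42, "output": "rejected"})

for entry in audit_log:
    print(entry["actor"], "->", entry["step"])
```

Hash-chaining is one simple way to make such a log trustworthy: a regulator reviewing harm after the fact can verify that no party quietly rewrote its own step in the chain.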


Considering the above, the European AI Act, together with the relevant sector-specific regulations, should help set up the general framework in which AI can exist and define its limitations, not just for the EU but globally. That said, it must not be forgotten that AI systems can contribute to the greater wellbeing and advancement of society. They should not be seen merely as potential hazards but as useful technologies that can power good inventions and advance the future. The last 101 article of this series will examine projects that use AI systems for positive social impact, proving that AI systems are worth fighting for, with caution and understanding.




References:


Azcárate, M., Ruiz, D. and Amorós, L. (2020). A vueltas con la Inteligencia Artificial y la Responsabilidad Civil: ¿Dónde convergen y qué problemática conllevan? Diario La Ley, nº 42, Sección Ciberderecho, Wolters Kluwer. https://diariolaley.laleynext.es/dll/2020/07/28/a-vueltas-con-la-inteligencia-artificial-y-la-responsabilidad-civil-donde-convergen-y-que-problematica-conllevan.


Cotino, L. (2019). Riesgos e impactos del big data, la inteligencia artificial y la robótica. Enfoques, modelos y principios de la respuesta del derecho. Revista General de Derecho Administrativo, 50. Iustel. https://www.iustel.com/v2/revistas/detalle_revista.asp?id_noticia=421227&d=1


Crawford, K. et al. (2019). AI Now 2019 Report. AI Now Institute, p. 21. https://ainowinstitute.org/AI_Now_2019_Report.pdf.


European Commission, Directorate-General for Justice and Consumers, Karner, E., Koch, B., Geistfeld, M. (2021). Comparative law study on civil liability for artificial intelligence, Publications Office. https://data.europa.eu/doi/10.2838/66412


European Commission, Directorate-General for Justice and Consumers. (2020). Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee: Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics. Register of Commission Documents. https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52020DC0064&from=en


European Commission. (2021). Proposal for a regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206


European Commission. (2020). White Paper on Artificial Intelligence - A European approach to excellence and trust. https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf


European Institute of Public Administration. (2021). The Artificial Intelligence Act Proposal and its Implications for Member States. EIPA. https://www.eipa.eu/publications/briefing/the-artificial-intelligence-act-proposal-and-its-implications-for-member-states/


European Parliament. (2017). Resolution (EU) 2017/0051 of the European Parliament of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics. https://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html


Fernández, C.B. (2020). Estados Unidos presenta diez principios para el desarrollo de la inteligencia artificial. Diario la Ley. https://diariolaley.laleynext.es/dll/2020/01/22/estados-unidos-presenta-diez-principios-para-el-desarrollo-de-la-inteligencia-artificial.


Kop, M. (2021). EU Artificial Intelligence Act: The European Approach to AI. Transatlantic Antitrust and IPR Developments. https://law.stanford.edu/publications/eu-artificial-intelligence-act-the-european-approach-to-ai/


Image references:


Cristofani, B. (2018). Leave it to the experts: a thriving ecosystem has sprung up to offer A.I. expertise and technical help. The Economist. https://www.behance.net/gallery/68099809/The-Economist-AI-in-business-Special-Report


Dettmer, O. (2020). Untitled [Illustration]. The Economist. https://www.economist.com/finance-and-economics/2020/07/30/the-fear-of-robots-displacing-workers-has-returned


Xausa, E. (2019). Surgical robots have thin rods instead of bulky hands, and the rods never tremble. The New Yorker. https://www.newyorker.com/magazine/2019/09/30/paging-dr-robot



Mar Estrach
