Artificial Intelligence 101: What Can Be Done to Mitigate Risks? (Part I)

Foreword


Much is said about the wonderful potential of Artificial Intelligence (AI) and how it can benefit our lives. However, many people are not sufficiently informed and educated in this area to consciously weigh the pros and cons this technology already brings to their lives. Nowadays, we see how users, tired of endless legal notices and privacy policies, hand over their personal data without knowing who is going to use it and how. What is the result? A lack of trust in AI systems and a fear of the unknown.


This series of Artificial Intelligence 101 articles intends to give the reader a general picture of the current regulatory framework (or lack thereof) around this technology, as well as to highlight its potential both for helping society and for jeopardizing it, focusing in the latter case on its discrimination and privacy-related risks.


Artificial Intelligence 101 is mainly divided into ten chapters, including:



Artificial Intelligence 101: What Can Be Done to Mitigate Risks? (Part I)


Davis, A. (n.d.) World rotating with several eyes observing [Gif].


Having outlined a general roadmap of the potential risks of AI systems and shown how they can appear in the different phases of their creation and implementation, it is now urgent to consider protective measures, regulations, and policies that should be applied before, during, and after the creation of an AI system. Some proposals in this regard are:


1) "Testing" before using


This means implementing testing obligations for the creator(s) and developer(s) of an AI system. With compulsory testing, the risks associated with the system must be assessed and weighed against the measures that should be taken to reduce them, and this must happen before the AI product reaches the market. One example is pre-processing the data several times in order to adjust it as accurately as possible in terms of non-discrimination. A related notion is "counterfactual fairness": once the data has been prepared, the system's result should remain unchanged even if a piece of data considered sensitive is modified. The main idea is that once the algorithm has been developed, it must be tested to examine its result(s) and to correct the points where it may be discriminatory.
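As an illustration of what such a pre-market test could look like, the sketch below flips a sensitive attribute in the test data and measures how often the model's prediction changes. This is a simplified attribute-swap check rather than a full causal treatment of counterfactual fairness, and the model, column name, and attribute values are hypothetical.

```python
import pandas as pd

def counterfactual_flip_rate(model, X_test: pd.DataFrame,
                             sensitive_col: str, swap_values: dict) -> float:
    """Fraction of rows whose prediction changes when the sensitive attribute
    is swapped (0.0 means predictions are stable under this simple test)."""
    original_preds = model.predict(X_test)

    # Build a counterfactual copy of the data with the sensitive attribute flipped.
    X_counter = X_test.copy()
    X_counter[sensitive_col] = X_counter[sensitive_col].map(swap_values)

    counter_preds = model.predict(X_counter)
    return float((original_preds != counter_preds).mean())

# Hypothetical usage with a fitted scikit-learn-style model and a binary attribute:
# flip_rate = counterfactual_flip_rate(model, X_test, "group", {"A": "B", "B": "A"})
# A flip rate above zero flags predictions that depend on the sensitive attribute.
```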


Davis, A. (n.d.) Rest of the World [Illustration].


2) Proactive accountability policies


This refers to a set of general principles, grouped under the "accountability" umbrella, that should be met prior to the design of any application. All of them apply to the parameters of the system design, the data and metadata used, as well as to the audits performed. By referencing situations and examples from previous developments, developers should be able to better analyze cases where AI systems have generated discrimination in the past and work on how to reduce similar risks in the future. Some researchers have developed practices along these lines, such as "datasheets for datasets" and "model cards for model reporting", to test for and reduce bias in algorithms. The European Union Agency for Network and Information Security (ENISA) has been working on such policies for years, under the heading of "Privacy Enhancing Technologies".
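To make the idea more concrete, the sketch below captures a minimal "model card" as a small data structure. The field names and example values are illustrative assumptions, loosely inspired by the model-card proposal mentioned above, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal, illustrative model card; field names are assumptions,
    loosely following the 'model cards for model reporting' idea."""
    model_name: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    training_data_summary: str = ""
    evaluation_groups: list = field(default_factory=list)   # groups bias was tested on
    known_limitations: list = field(default_factory=list)

# Hypothetical example of a filled-in card, kept alongside the deployed model:
card = ModelCard(
    model_name="loan-screening-v1",
    intended_use="Pre-screening of consumer loan applications for human review",
    out_of_scope_uses=["Fully automated rejection without human review"],
    training_data_summary="2015-2020 applications; see the accompanying datasheet",
    evaluation_groups=["age band", "gender", "postal region"],
    known_limitations=["Under-represents applicants without credit history"],
)
```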


Both the General Data Protection Regulation (GDPR) of the European Union and the recent European AI Act are attentive to this matter and have already incorporated the need to adopt technical and organizational measures to guarantee the rights of affected individuals, not only at the level of data protection but also at the overall level of operation of AI systems.


3) Periodic audits to verify how the AI system performs in practice


Despite their complexity, algorithms must be audited in order to demonstrate that they are legally compliant and that they do not discriminate among individuals. The data used to run an algorithm is rarely available outside the organization that works on the AI system, and such data goes to the heart of the value of the company that manages it. The proposal, therefore, is to develop auditing procedures where access to the specific data is not necessary or, if access is granted, corporate secrecy is guaranteed. One way would be to give an independent third-party expert access to the software and source code of the algorithm under a confidentiality agreement; another would be to create testing platforms where various aspects of the algorithm's decision-making process are analyzed to see whether any part of it may be discriminatory.
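The sketch below shows the kind of check such a testing platform could run using only the model's outputs, without access to the underlying training data: it compares positive-decision rates across groups and reports their ratio. The group labels and decisions are hypothetical, and the 0.8 threshold mentioned in the comment is the informal "four-fifths rule" used as a screening heuristic, not a legal standard.

```python
from collections import defaultdict

def disparate_impact_ratio(groups, decisions):
    """Compute positive-decision rates per group from black-box model outputs.

    Returns (ratio, rates), where ratio is the lowest rate divided by the
    highest rate. A ratio well below 0.8 is a common signal that the model's
    decisions deserve closer scrutiny."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in zip(groups, decisions):
        totals[group] += 1
        positives[group] += int(decision)

    rates = {g: positives[g] / totals[g] for g in totals if totals[g] > 0}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical usage with decisions collected by querying the deployed model:
# ratio, rates = disparate_impact_ratio(["A", "A", "B", "B"], [1, 1, 1, 0])
```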


The most appropriate format would be to conduct impact assessments and third-party audits to evaluate the level of fairness and justice applicable to the AI model, but this appears to be one of the most difficult processes to accomplish: the degree of expertise and technique needed to understand and assess these systems requires training and experience that often only the creators of these systems themselves have.


Also, bodies similar to data protection authorities could be set up to monitor and inspect AI systems and to establish guidelines for the standardized use of the technology, which are necessary to guarantee the protection of people's rights and freedoms.



Yang, J. (2019). New York Times: How to Find a Watch [Illustration]. James Yang Illustration.


4) Demand for quality and diversity of the data used in AI systems


Researchers and developers need diverse, accurate, high-quality data in order to analyze problems and build systems more accurately. This depends to a large extent on institutional collaboration and on collaboration between different sectors such as the social sciences, ethics, law, and technology. Reviewing the quality of the data and its supporting documentation is essential in order to obtain valuable and lasting results.


Paradoxically, an effective corrective measure against bias and discrimination would be to obtain sensitive data, precisely in order to detect and correct those AI systems that prove to be biased.
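One simple form this review can take is comparing each group's share of the training data with its share of a reference population, as in the sketch below. The group names, counts, and reference shares are invented for illustration.

```python
def representation_gaps(sample_counts: dict, population_shares: dict) -> dict:
    """Compare each group's share of the training sample with its share of the
    reference population and return the gap (negative = under-represented)."""
    total = sum(sample_counts.values())
    gaps = {}
    for group, expected_share in population_shares.items():
        observed_share = sample_counts.get(group, 0) / total if total else 0.0
        gaps[group] = round(observed_share - expected_share, 3)
    return gaps

# Hypothetical example: a group that makes up 20% of the population
# but only 5% of the training data shows up as a -0.15 gap.
# representation_gaps({"group_a": 950, "group_b": 50},
#                     {"group_a": 0.80, "group_b": 0.20})
# -> {'group_a': 0.15, 'group_b': -0.15}
```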


5) “Human-in-the-loop” procedure


There should always be the possibility of human review of the decisions made by AI systems. No matter how much AI systems and algorithms improve, it will be challenging for them to integrate subjective characteristics and the specifics of social contexts into their rationales, or to adjust their outcomes to each situation. Even though some algorithms may end up requiring little or no human intervention, there should ultimately always be review and judgment by a qualified human, especially to determine at what point an intervention or an analysis of the results becomes necessary.
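In practice, a human-in-the-loop setup is often implemented by escalating cases the model is unsure about to a person, as in the minimal sketch below. The confidence threshold and field names are illustrative assumptions, not a prescribed design.

```python
def route_decision(prediction: str, confidence: float, threshold: float = 0.9) -> dict:
    """Return the automated decision only when the model is confident;
    otherwise flag the case for review by a qualified human."""
    if confidence >= threshold:
        return {"decision": prediction, "reviewed_by": "model"}
    return {"decision": None, "reviewed_by": "human", "reason": "low confidence"}

# route_decision("approve", 0.97)  -> handled automatically
# route_decision("reject", 0.61)   -> escalated to a human reviewer
```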


For example, Estonia, one of the countries with the most extensive implementation of AI in the public sector, is developing a project in which an AI system will act as a judge and will be able to issue court rulings on claims under 7,000 Euros. The project is not fully ready yet, but Ott Velsberg, Chief Data Officer of the Estonian government, assures that any decision made by the "robot-judge" can always be reviewed by a human.


Yang, J. (2019). Wired: Teaching the Machines [Illustration]. James Yang Illustration.


6) Require cybersecurity and data protection measures


The security of equipment and data must be ensured, especially nowadays, with the digital explosion and with everyone spending large amounts of time online. Returning to the scenarios in which AI systems are used for the control and surveillance of citizens, a relevant risk is a cyber-attack against government databases that introduces some type of malware or virus capable of changing or altering the stored data. This can have devastating consequences for the affected citizens, such as the system recognizing or attributing characteristics that do not match their profiles.
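As one illustration of a data protection measure against this kind of tampering, the sketch below signs each stored record with a keyed hash at write time and re-checks it at read time. It is a minimal example using Python's standard library; the key handling, record fields, and storage details are assumptions, and such a check is only one small part of a real security program.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"replace-with-a-key-from-a-secure-vault"  # illustrative placeholder

def sign_record(record: dict) -> str:
    """Compute an HMAC over a canonical serialization of the record at write time."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def record_is_intact(record: dict, stored_signature: str) -> bool:
    """Re-compute the HMAC at read time; a mismatch indicates the record was altered."""
    return hmac.compare_digest(sign_record(record), stored_signature)

# citizen = {"id": "12345", "risk_flag": False}   # hypothetical stored record
# signature = sign_record(citizen)                # stored alongside the record
# record_is_intact(citizen, signature)            # False if the record was tampered with
```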


Additionally, it is important to guarantee the security and availability of the sensitive data collected to ensure the inclusion and representation of all sectors and communities in society, and to ensure that this data will not be used or transferred to third parties for further use. Disseminating information about vulnerable groups, such as people with disabilities, can have very serious consequences for access to a job or to health coverage, especially in countries that are strongly driven by the private sector.


The above are just some examples of measures and actions that could be built around AI systems in order to make them safer to use. We will examine more possible measures in the next 101 article. The main idea behind mitigating AI risks is to work towards achieving zero risk; though this is quite difficult to achieve, the need to avoid the potentially discriminatory impact(s) of AI systems makes it worth the attempt.





References:


European Union Agency for Network and Information Security (2017). Privacy Enhancing Technologies: Evolution and State of the Art. https://www.enisa.europa.eu/publications/pets-evolution-and-state-of-the-art.


European Union Agency for Fundamental Rights (2018). #BigData: Discrimination in data-supported decision making, p. 9. https://fra.europa.eu/sites/default/files/fra_uploads/fra-2018-focus-big-data_en.pdf


European Union. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). European Union Official Gazette L 119, 4 May 2016, art 25. https://www.boe.es/doue/2016/119/L00001-00088.pdf


Silberg, J., & Manyika, J. (2019). Tackling bias in artificial intelligence (and in humans). McKinsey Global Institute. https://www.mckinsey.com/~/media/mckinsey/featured%20insights/artificial%20intelligence/tackling%20bias%20in%20artificial%20intelligence%20and%20in%20humans/mgi-tackling-bias-in-ai-june-2019.pdf


The Technolawgist (2019). Entrevista a Ott Velsberg, Chief Data Officer del Gobierno de Estonia [Interview with Ott Velsberg, Chief Data Officer of the Government of Estonia]. The Technolawgist, p. 6. https://www.thetechnolawgist.com/2019/06/18/entrevista-a-ott-velsberg-chief-data-officer-el-gobierno-de-estonia-jueces-robot-inteligencia-artificial-y-el-futuro-de-la-digitalizacion/


Crawford, K. et al. (2019). AI Now 2019 Report. New York: AI Now Institute, p. 46. https://ainowinstitute.org/AI_Now_2019_Report.pdf


Whittaker, M. et al. (2019). Disability, Bias, and AI. New York: AI Now Institute, pp. 19-20. https://ainowinstitute.org/disabilitybiasai-2019.pdf


Image references:


Davis, A. (n.d.). World rotating with several eyes observing [Gif]. Ariel Davis. https://arielrdavis.com/The-New-York-Times-7


Davis, A. (n.d.). Rest of the World [Illustration]. https://arielrdavis.com/Rest-of-World


Yang, J. (2019). Wired: Teaching the Machines [Illustration]. James Yang Illustration. https://www.jamesyang.com/#group-24


Yang, J. (2019). New York Times: How to Find a Watch [Illustration]. James Yang Illustration. https://www.jamesyang.com/#group-29


Mar Estrach
