
Artificial Intelligence 101: What Can Be Done to Mitigate Risks? (Part II)

Foreword


Much is said about the wonderful potential of Artificial Intelligence (AI) and how it can benefit our lives. However, many people are not sufficiently informed and educated in this area to consciously weigh the pros and cons this technology already brings to their lives. Nowadays, we see how users, tired of endless legal notices and privacy policies, hand over their personal data without knowing who is going to use it and how. What is the result? A lack of trust in AI systems and a fear of the unknown.


This series of Artificial Intelligence 101 articles intends to give the reader a general picture of the current regulatory framework (or lack thereof) around this technology, as well as to focus on its potential both to help society and to jeopardize it, in the latter case looking specifically at its discrimination- and privacy-related risks.


Artificial Intelligence 101 is mainly divided into ten chapters, including:



Artificial Intelligence 101: What Can Be Done to Mitigate Risks? (Part II)


Solo, J. (2018). AI Law [Illustration]. The Economist. Behance.



Continuing from the previous article, the present one addresses further measures that can help to mitigate the risks of AI. In this regard, some additional proposals are:


7) Education and information accessible to the population


The lack of public awareness contributes to feelings of mistrust and leaves room for abuse of power by companies using AI systems. In this regard, it becomes necessary to educate and inform people of all ages about the operation, scope, and risks of the use of AI. Public institutions should invest in training and education initiatives to foster global social awareness of the implications of this technology and of its impact on the population and the future.


Further, companies using AI systems must provide information to users and customers that allows them to understand the decision-making process of the AI system and its underlying reasoning, and to identify the responsible agents so that, where needed, those agents can be held accountable for the necessary corrections and adaptations.


The European General Data Protection Regulation (GDPR) already includes a legal obligation to inform individuals whose data is being collected. However, users often find such information tedious and difficult to understand, especially with regard to automated decision-making processes such as profiling.


8) Debates and collaboration at the public and private levels


It is necessary that all agents involved in society, such as States, companies, institutions, and non-governmental organizations, assume a firm commitment to debates and discourses about AI systems, with a special focus on cross-border debates culminating in enforceable regulations. The more these discussions take place, the better the risks of AI systems can be mitigated.


Relevant steps have been taken toward such regulation, especially in Europe with the proposed European AI Act, which is designed to operate alongside the GDPR. Also, at a cross-border level, it is worth mentioning the recent agreement "in principle" reached between the European Union and the United States regarding the international transfer of data.


The new framework was announced on March 25th, 2022, and there is still a long way to go until a final, binding resolution is adopted. Hopefully, it will not repeat the flaws that invalidated the former Privacy Shield framework, which was struck down due to a lack of guarantees and safeguards concerning user data transfers between the U.S. and European countries; see the 2020 Court of Justice of the European Union case Data Protection Commissioner v. Facebook Ireland Ltd. and Maximillian Schrems, also known as Schrems II.


There is an imbalance between those who create and implement AI systems and those who are impacted by them and suffer from their inefficiencies. When these issues do not affect us directly, relevant facts are easily overlooked, either because we are not aware of them or because they do not touch our own lives. That is why it is important to involve different organizations and non-governmental entities that defend the interests of minorities, for example, people living with disabilities. Without feedback from minority groups, important details and considerations that should be taken into account are not reflected in the AI system creation process. As a consequence, accidents and discrimination are more likely to happen. A telling example is the 2018 accident caused by an automated Uber vehicle in Arizona, USA, in which a woman walking with her bicycle was killed because the algorithm failed to identify her as a person. Would the result have been different if, for example, the victim had been in a wheelchair?


According to Timnit Gebru, the relevant actors are often underrepresented in discussions about AI and social impact. The actors in direct contact with the organizations that see the direct impacts of AI discrimination, those in the best position to explain the practical risks of implementing these systems, are not included in the discussions. Unfortunately, the people sitting at the table are not part of these social organizations; instead, they are the ones who can provide funding for projects.


Ejaita, D. (2021). Different people together on top of a table finding balance [Illustration]. The Economist.


9) Use of AI itself to correct bias and discrimination defects


AI systems can be used to detect and reduce the impact of bias in AI itself. In many instances, AI can reduce subjective human interpretation of information, because machine learning can be configured to disregard variables that are not relevant or that have been shown to be detrimental to some social groups. Humans inevitably take into consideration certain cultural knowledge and stereotypes that are inherent in our existence and remain in the subconscious. This would be the human equivalent of the "black box" of AI systems, as discussed in a previous 101 article, Let's Talk About Bias (Part I).
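To make this idea more concrete, below is a minimal sketch in Python, with made-up data and hypothetical function names, of how an automated audit could flag groups that receive favourable decisions at a disproportionately low rate. It illustrates the general technique of algorithmic bias detection; it is not the method used in the studies cited here, and the 0.8 threshold is only a commonly cited rule of thumb.

```python
# Illustrative fairness audit: given a model's binary decisions (e.g., loan
# approvals) and a protected attribute for each person, compare each group's
# favourable-decision rate against a reference group.

def positive_rate(decisions, groups, group):
    """Share of favourable decisions (1s) received by one group."""
    selected = [d for d, g in zip(decisions, groups) if g == group]
    return sum(selected) / len(selected) if selected else 0.0

def audit_disparate_impact(decisions, groups, reference_group, threshold=0.8):
    """Flag groups whose favourable-decision rate falls below the threshold
    relative to the reference group (the 'four-fifths' rule of thumb)."""
    ref_rate = positive_rate(decisions, groups, reference_group)
    report = {}
    for group in set(groups):
        rate = positive_rate(decisions, groups, group)
        ratio = rate / ref_rate if ref_rate else float("nan")
        report[group] = {
            "rate": rate,
            "ratio_vs_reference": ratio,
            "flagged": ratio < threshold,
        }
    return report

# Made-up example data: 1 = favourable decision, 0 = unfavourable.
decisions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(audit_disparate_impact(decisions, groups, reference_group="A"))
```

In this toy example, group B receives favourable decisions at a third of group A's rate and is therefore flagged; in practice, such an audit would run over a model's real outputs and feed into retraining or redesign decisions.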


Thus, several studies, such as one led by computer science professor Jon Kleinberg, have shown that AI itself can be used to increase fairness in automated decision-making. In the words of MIT research scientist Andrew McAfee, "If you want the bias out, get the algorithms in."


10) More diverse and qualified professionals


Multidisciplinary teams and professionals that better represent the diversity that exists in the world must be integrated into the staff that researches and develops AI systems. This would undoubtedly help to bring risks to the table and allow AI systems to be analyzed from as many points of view as possible. Unfortunately, few professionals are currently skilled enough to fully develop and understand AI systems, and even fewer come from culturally diverse backgrounds. To tackle this imbalance in the profiles of AI professionals, it would be worthwhile to consider some courses of action:


a) Rethinking the process of talent acquisition, i.e., investing in talents and professionals that come from different institutions and backgrounds; b) Actively seeking to build an inclusive environment and guaranteeing equal pay, to encourage diversity; c) Implementing courses and mentoring programs for minority groups that do not have access to the same opportunities as students from privileged backgrounds or institutions.


11) Shareholder demand in the private sector


Thinking about the private sector, an important measure to keep up the pressure would be for shareholders and stakeholders to demand social responsibility and compliance with certain guarantees for the AI systems developed by their companies. Although this option may seem somewhat utopian, it is worth considering that, in the race to develop and implement AI systems, we cannot help but demand that such systems be secure and come with specific guarantees. In the long term, a public scandal involving any of the major technology companies, arising from the misuse of or lack of guarantees in their AI systems, can erode confidence in the company and significantly affect its profitability.


12) Specialization of the public sector


Digital implementation in the public sector usually lags a couple of steps behind the private sector. The technical know-how and specialization of relevant figures in the public sector, specifically the judges and bodies in charge of analyzing and resolving disputes related to AI systems, will provide strong guarantees to citizens. Given the technical elements of this type of dispute, those in charge of resolving them must have specialized knowledge in this area.

Cristofani, B. (2018). Eye-shaped table with several people sitting [Illustration]. The Economist.


In line with what was stated in the previous article, the above are just some additional examples of measures and actions that could be integrated into and around AI systems in order to make them safer to use. A special mention must be made of the debate about how to attribute responsibility to the different actors in the AI system creation process and, most importantly, how to ensure that persons who may suffer damages from AI systems have access to real and enforceable guarantees. Thus, the accountability of the actors behind AI systems will be discussed in the next 101 article.



References:


European Union Agency for Fundamental Rights (2018). #BigData: Discrimination in data-supported decision making. https://fra.europa.eu/sites/default/files/fra_uploads/fra-2018-focus-big-data_en.pdf


Whittaker, M. et al. (2019). Disability, Bias, and AI. New York: AI Now Institute, p. 9. https://ainowinstitute.org/disabilitybiasai-2019.pdf


OpenDemocracy (2018). In the era of artificial intelligence: safeguarding human rights. Commissioner for Human Rights. https://bit.ly/2EyydEZ


Smith, C.S. (2020). Dealing With Bias in Artificial Intelligence. The New York Times. https://www.nytimes.com/2019/11/19/technology/artificial-intelligence-bias.html


Silberg, J., Manyika, J. (2019). Tackling bias in artificial intelligence (and in humans). McKinsey Global Institute. https://www.mckinsey.com/~/media/mckinsey/featured%20insights/artificial%20intelligence/tackling%20bias%20in%20artificial%20intelligence%20and%20in%20humans/mgi-tackling-bias-in-ai-june-2019.pdf


Crawford, K. et al. (2019). AI Now 2019 Report. New York: AI Now Institute, p. 46. https://ainowinstitute.org/AI_Now_2019_Report.pdf


Guarascio, F. and Chee, F.Y. (2022). EU-U.S. data transfer deal cheers business, but worries privacy activists. Reuters. https://www.reuters.com/legal/litigation/eu-us-reach-preliminary-deal-avoid-disruption-data-flows-2022-03-25/


Reuters. (2019). Uber in fatal crash had safety flaws say US investigators. BBC. https://www.bbc.com/news/business-50312340


Young, S. (2021). How to Make Sure That Diversity in AI Works. Forbes. https://www.forbes.com/sites/forbestechcouncil/2021/06/14/how-to-make-sure-that-diversity-in-ai-works/


Judgment of 16 July 2020, Schrems II, C‑311/18, ECLI:EU:C:2020:559


Illustration references:


Solo, J. (2018). AI Law [Illustration]. The Economist. Behance. https://www.behance.net/gallery/65196957/AI-Law-The-Economist


Ejaita, D. (2021). Different people together on top of a table finding balance [Illustration]. The Economist. https://www.economist.com/news/2021/01/05/2020-as-told-through-illustration


Cristofani, B. (2018). Eye-shaped table with several people sitting [Illustration]. The Economist. https://www.economist.com/special-report/2018/03/28/the-sunny-and-the-dark-side-of-ai


Mar Estrach

