Artificial Intelligence 101: Let’s Talk About Bias (Part I)

"By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it."

— Eliezer Yudkowsky, artificial intelligence researcher and founder of MIRI.



Foreword


Much is said about the wonderful potential of Artificial Intelligence (AI) and how it can benefit our lives. However, most of the population is not sufficiently informed and educated in this area to consciously weigh the pros and cons this technology already brings to their lives. Nowadays, we see users, tired of endless legal notices and privacy policies, hand over their personal data without knowing who is going to use it and how. The result? A lack of trust in AI systems and a fear of the unknown.


This series of Artificial Intelligence 101 articles intends to give the reader a general picture of the current regulatory framework (or lack thereof) around this technology, as well as to put the focus on its potential both for helping society and for jeopardizing it, concentrating in the latter case on its discrimination and privacy-related risks.


Artificial Intelligence 101 is divided into ten chapters:

  • Introduction and International Legal Framework;

  • Let’s Talk About Bias (Part I) (the present article) and (Part II);

  • Addressing Privacy;

  • Risks on AI Implementation (Part I and II);

  • What Can Be Done To Mitigate Risks? (Part I and II);

  • A Special Mention to Accountability;

  • The Best of AI—Projects With Positive Social Impact.


Artificial Intelligence 101: Let’s Talk About Bias (Part I)

Unbabel (n.d.). [man feeding a computer with gender traits] [Illustration]. Data-Pop Alliance.


Following the first 101 article on Artificial Intelligence (AI), which addressed the general regulatory framework, this first part of Let’s Talk About Bias looks at some of the most common areas where AI has generated potential bias and discrimination.


Risks associated with AI can arise throughout its creation process, so when thinking about where and how to mitigate them, one must consider all the stages, components, and professionals involved.


Discrimination in AI systems is not only a technological problem, but also a reflection of what happens in the streets. In the past, an AI system was considered “neutral” because it appeared to deal with objective data and IT systems. However, time, practice, and implementation results have proven that its application can lead to biased and discriminatory practices, thus breaching non-discrimination rules worldwide, such as Article 21 of the Charter of Fundamental Rights of the European Union:


1. Any discrimination based on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation shall be prohibited.


2. Within the scope of application of the Treaties and without prejudice to any of their specific provisions, any discrimination on grounds of nationality shall be prohibited.


MaRS Discovery District (2020). [5 people of different nationalities with their faces surrounded by an online call framework]. MaRS.


This article addresses a first batch of stages, viewed from a more technical point of view, in which AI systems carry discriminatory potential:


1) The Algorithm Design


1.1.) The determination of the AI system’s goal. At an early stage, computer engineers need to narrow down what the AI system must achieve. What is the AI system’s objective? At this stage, they need to determine the kind of data required and the algorithm parameters that will be used to reach the result. What happens when, for example, a vague concept like “solvency” comes into play? How can it be translated into computer language, and which parameters are going to be used to determine solvency? Age, credit history, gender, birth details? Some of these parameters, if not all, can be considered sensitive information, and they will likely be used to train the AI system to, for example, determine the eligibility of candidates based on such parameters.
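As a purely hypothetical sketch (the features, weights, and age threshold below are invented, not taken from any real system), this is how a vague goal like “solvency” can end up encoded as concrete parameters, some of them sensitive:

    # Hypothetical sketch only: invented features and weights showing how a vague
    # goal like "solvency" becomes concrete parameters, some of them sensitive.
    SENSITIVE_FEATURES = {"age", "gender", "birthplace"}

    def solvency_score(applicant: dict) -> float:
        """Toy scoring rule: every chosen parameter quietly shapes the outcome."""
        score = 0.0
        score += 0.5 * applicant["years_of_credit_history"]
        score -= 0.3 * applicant["missed_payments"]
        if applicant["age"] < 25:  # an assumption about young applicants baked into the goal
            score -= 1.0
        return score

    def audit_features(used_features: set) -> set:
        """Flag any chosen parameters that involve sensitive personal information."""
        return used_features & SENSITIVE_FEATURES

    print(audit_features({"years_of_credit_history", "missed_payments", "age", "gender"}))
    # -> {'age', 'gender'} (order may vary)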


Furthermore, what happens when the AI system’s goal focuses only on results based on economic profit? An objective built solely around generating the highest possible margin for a financial entity can overlook the broader economic and social impact it has. A good example of this is the granting of subprime mortgages or student loans that are easy to take out but hard, if not impossible, to repay.


1.2.) Algorithm opacity and lack of transparency. Algorithms in AI systems are complex and, once they have enough parameters and instructions, tend to produce automatic results that are difficult to track from beginning to end, making it hard to understand how they analyse the data and how they reach their results. This is also called the black box effect.


Additionally, it is often difficult to find specialists who are sufficiently trained and experienced to help spot the risks in these systems, not to mention that AI projects are usually divided into different groups and sections. As a result, the computer engineers themselves end up missing the “big picture” when developing AI systems, which makes risk identification even harder.


1.3.) System errors that lead to a biased result. Errors inevitably occur and are sometimes not spotted until the AI system is at a later stage, which makes it difficult to find and undo the path that has already been built. For example, Amazon stopped using an algorithm in its hiring process after realising it was favouring certain words that frequently appeared in men’s resumes.


Furthermore, in the verification processes implemented to test AI systems, the data used to build the systems and the data used to validate them are often the same. Therefore, if the verification data already carries discriminatory patterns, the bias may pass unnoticed.
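As a minimal, purely illustrative sketch (the records below are invented), keeping a held-out portion of the data separate from what the system is built on is a first step; even then, both halves inherit whatever bias their common source carries, so the make-up of the validation set is worth checking explicitly:

    # Minimal sketch with invented records: reserve a holdout set the system never
    # sees while being built, then check the group make-up of that holdout set.
    import random
    from collections import Counter

    def split_holdout(records, test_fraction=0.2, seed=42):
        """Shuffle once and set aside a portion for validation only."""
        rng = random.Random(seed)
        shuffled = records[:]
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * (1 - test_fraction))
        return shuffled[:cut], shuffled[cut:]  # (training data, held-out validation data)

    records = [{"id": i, "group": "A" if i % 4 else "B"} for i in range(100)]
    train, validation = split_holdout(records)

    # If the source data is skewed, the validation set will be skewed too:
    print(Counter(r["group"] for r in validation))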


2) The Underlying Data and Its Quality


Discrimination can also happen when the data is not sufficiently representative of the population’s reality, covering only a certain part of it or only a majority group. The Amazon algorithm mentioned in the previous point is a good illustration of this phenomenon: when an algorithm is fed mostly with data from a single dominant group (in that case, men’s resumes), it perpetuates pre-existing prejudice. Underlying data is one of the most important problems when it comes to bias, and data that appears objective on its face can carry historical or social discrimination.

Labelling photographs offers another example: an algorithm that provided automatic image descriptions turned out to give biased descriptions of babies. When the babies were white-skinned, the algorithm described the photos using the word “baby”, but when they were dark-skinned, it described them as “colored babies”. If both photos contain babies and the intent is not to add any additional features, the impartial and objective way to tag both pictures is to label each of them simply as “baby”.


In another example, from 2019, Kate Crawford, co-director of NYU’s AI Now Institute, found when searching for “CEO” images that only 11% of the first results showed women, when at the time 27% of CEOs in the United States were women.
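That 11% versus 27% gap is the kind of imbalance a simple representativeness check can surface. The sketch below is purely illustrative, with made-up labels and population shares:

    # Illustrative sketch with made-up numbers: compare the make-up of a dataset
    # against reference population shares to spot under-represented groups.
    from collections import Counter

    def representation_gap(samples, population_shares):
        """Return dataset share minus population share for each group."""
        counts = Counter(samples)
        total = sum(counts.values())
        return {group: round(counts.get(group, 0) / total - share, 3)
                for group, share in population_shares.items()}

    labels = ["group_a"] * 89 + ["group_b"] * 11       # hypothetical dataset labels
    population = {"group_a": 0.73, "group_b": 0.27}    # hypothetical real-world shares

    print(representation_gap(labels, population))
    # -> {'group_a': 0.16, 'group_b': -0.16}: group_b is clearly under-represented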


Another data-quality loophole lies in where and how the data has been collected. An abundance of data in one place can generate misleading information. For example, collecting a lot of data (e.g., crime reports) in one specific neighborhood may lead people to disregard the dangerousness of other neighborhoods simply because less data is available about them (more data/more dangerous; less data/less dangerous).
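As a rough, invented illustration of that fallacy, comparing raw counts rewards whichever area was observed the most, while comparing rates per observation does not:

    # Invented numbers, for illustration only: raw incident counts largely reflect
    # how much data was collected in each area; rates per observation are less misleading.
    data = {
        # area: (incidents recorded, total observations collected)
        "heavily_monitored_area": (120, 10_000),
        "rarely_monitored_area": (15, 500),
    }

    for area, (incidents, observations) in data.items():
        rate = incidents / observations
        print(f"{area}: raw count = {incidents}, rate per observation = {rate:.3f}")
    # The heavily monitored area has far more recorded incidents (120 vs 15),
    # yet the rarely monitored one has the higher rate (0.030 vs 0.012).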


3) The AI System Learning Process


Discrimination may also happen, as mentioned earlier, as a result of biased data and instructions given to the AI system. For example, an investigation carried out by ProPublica, a non-profit independent news agency located in Manhattan, found that Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), an AI system used to predict recidivism in Broward County, Florida, incorrectly labeled African-American defendants as having a “high recidivism potential” at nearly twice the rate of white defendants.
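The kind of disparity ProPublica measured can be sketched as a comparison of false positive rates, that is, how often people who did not reoffend were nonetheless flagged as high risk, broken down by group. The records below are invented purely for illustration:

    # Invented records, illustration only: compare how often non-reoffenders
    # were wrongly flagged as high risk, split by group.
    def false_positive_rate(records, group):
        did_not_reoffend = [r for r in records
                            if r["group"] == group and not r["reoffended"]]
        wrongly_flagged = [r for r in did_not_reoffend if r["predicted_high_risk"]]
        return len(wrongly_flagged) / len(did_not_reoffend) if did_not_reoffend else 0.0

    records = [
        {"group": "A", "predicted_high_risk": True,  "reoffended": False},
        {"group": "A", "predicted_high_risk": True,  "reoffended": False},
        {"group": "A", "predicted_high_risk": False, "reoffended": False},
        {"group": "A", "predicted_high_risk": False, "reoffended": False},
        {"group": "B", "predicted_high_risk": True,  "reoffended": False},
        {"group": "B", "predicted_high_risk": False, "reoffended": False},
        {"group": "B", "predicted_high_risk": False, "reoffended": False},
        {"group": "B", "predicted_high_risk": False, "reoffended": False},
    ]

    print(false_positive_rate(records, "A"))  # 0.5
    print(false_positive_rate(records, "B"))  # 0.25 -> group A is flagged twice as often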

Bosyk, T. (2018). Face Recognition [Illustration]. Dribbble.


The categories above show that an approach and a set of data that seem impartial at first can reveal discriminatory traits once compared with other sets of data and applied to specific results. Consequently, this can worsen and perpetuate discrimination against certain groups and minorities that have historically suffered from it. Along the same lines, the next Artificial Intelligence 101 article, Let’s Talk About Bias (Part II), will keep the focus on bias and discrimination, but based on human and subjective characteristics.




References:


European Union Agency for Fundamental Rights. (2018). #BigData: Discrimination in data-supported decision making. https://fra.europa.eu/sites/default/files/fra_uploads/fra-2018-focus-big-data_en.pdf


Fernández, C.B. (2020). La Comisión va a plantear cinco opciones de regulación de la inteligencia artificial en la Unión. Diario la Ley. https://bit.ly/3hpPFtM


OpenDemocracy. (2018). In the era of artificial intelligence: safeguarding human rights. Commissioner for Human Rights. https://bit.ly/2EyydEZ


Hao, K. (2019). This is how AI bias really happens – and why it’s so hard to fix. MIT Technology Review. https://www.technologyreview.com/2019/02/04/137602/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/


Crawford, K. et al. (2019). AI Now 2019 Report. New York: AI Now Institute, p. 22. https://ainowinstitute.org/AI_Now_2019_Report.pdf


Manyika, J., Silberg, J., Presten, B. (2019). What Do We Do About the Biases in AI? Harvard Business Review. https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai


Silberg, J., Manyika, J. (2019). Tackling bias in artificial intelligence (and in humans). McKinsey Global Institute. https://www.mckinsey.com/~/media/mckinsey/featured%20insights/artificial%20intelligence/tackling%20bias%20in%20artificial%20intelligence%20and%20in%20humans/mgi-tackling-bias-in-ai-june-2019.pdf


Image References:


Unbabel (n.d.). [man feeding a computer with gender traits] [Illustration]. Data-Pop Alliance. https://datapopalliance.org/lwl-25-discrimination-in-data-and-artificial-intelligence/


MaRS Discovery District (2020). [5 people of different nationalities with their faces surrounded by an online call framework]. MaRS. https://marsdd.com/news/can-technology-help-us-be-less-racist/


Bosyk, T. (2018). Face Recognition [Illustration]. Dribbble. https://dribbble.com/shots/4859099-Face-Recognition/attachments/10637050?mode=media



Mar Estrach
