Artificial Intelligence 101: Let’s Talk About Bias (Part II)


"Data will always bear the marks of its history.

That is human history held in those data sets." – Kate Crawford, co-founder of the AI Now Institute at NYU.


Foreword


Much is said about the wonderful potential of Artificial Intelligence (AI) and how it can benefit our lives. However, many people are not sufficiently informed and educated in this area and lack the awareness to consciously weigh the pros and cons this technology already brings to their lives. Nowadays, we see how users, tired of endless legal notices and privacy policies, hand over their personal data without knowing who is going to use them and how. What is the result? A lack of trust in AI systems and a fear of the unknown.

This series of Artificial Intelligence 101 articles intends to give the reader a general picture of the current regulatory framework (or lack thereof) around this technology, as well as to put the focus on its potential both to help society and to jeopardize it, in the latter case looking specifically at its discrimination and privacy-related risks.


Artificial Intelligence 101 is mainly divided into ten chapters, including:


Artificial Intelligence 101: Let’s Talk About Bias (Part II)


Lue, N. (2020). [Illustration of an AI brain with a scale] [Illustration].


This is the second part of the Let’s Talk About Bias topic, part of the Artificial Intelligence 101 series. While the previous article focused on the main reasons why data itself may be biased, this one puts the emphasis on the human and subjective factors that lead to potential bias.


Catching up on some references from the previous 101 article, discrimination in AI systems is also a social problem: a biased result in an AI system will likely reflect how our society works.


How can this happen? Let’s see:


1. Developers' bias


To put it in the words of Kate Crawford, co-founder of the AI Now Institute at New York University: “Data and data sets are not objective; they are creations of human design. We give numbers their voice, draw inferences from them, and define their meaning through our interpretations.”


Bias happens due to human discrimination, in the broad sense. Most developers who currently work on AI come from similar backgrounds: a huge percentage of them are white males from similar cultural and socio-economic spheres. This leads, intentionally or not, to an interpretation of data and results that ends up being applied to everyone, even though many pieces and readings of the same picture are missing.


Olga Russakovsky, co-founder of the AI4ALL foundation, said that in a society that is already biased, an unbiased AI system can hardly be created; and since we can do better than what is done today, we should do better. Alongside Ms Russakovsky’s insight, Timnit Gebru, founder of the Distributed AI Research Institute (DAIR), co-founder of Black in AI and former co-lead of Google’s Ethical AI team, said that perhaps the greatest challenge is changing the cultural attitude of the science community. Indeed, she asserts that there is a tendency to believe that objectivity is always present in research when, often, the result itself is not objective.


Barbara Grosz, computer scientist and Higgins Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), pictures the following question popping up on engineers’ and developers’ screens every time they start a new AI development project: “Have you thought about the ethical implications of what you’re doing?”



Analytics India Magazine (n.d.). Fairness and analytics [Illustration].


2. Lack of diversity/representation


Developers aim to build AI systems that can suit different scenarios. This saves time and resources and, at first glance, does not seem a risk, until we consider that by the time these AI systems are implemented, the social contexts originally taken as reference might have changed dramatically, or that the contexts in which these systems are later applied are too different from the original one.


As Andrew Selbst, assistant professor at UCLA School of Law and former postdoctoral scholar at the Data & Society Research Institute, said: “You can’t have a system designed in Utah and then applied in Kentucky directly because different communities have different versions of fairness. Or you can’t have a system that you apply for ‘fair’ criminal justice results then applied to employment. How we think about fairness in those contexts is just totally different.”
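To make this point concrete, here is a minimal sketch in Python, using entirely synthetic data and a hypothetical make_population helper (nothing here comes from a real deployment): a model fitted on one community can degrade badly when transplanted to another whose relationship between features and outcomes is different.

```python
# A toy illustration of "context shift": a model fitted on one community's
# data is applied to another whose feature/outcome relationship differs.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_population(n, weight):
    """Synthetic population where the outcome depends on one feature
    with a population-specific weight (a stand-in for local context)."""
    X = rng.normal(size=(n, 2))
    logits = weight * X[:, 0] + 0.5 * X[:, 1]
    y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# "Utah": outcome strongly driven by feature 0; "Kentucky": the sign flips.
X_a, y_a = make_population(5000, weight=2.0)
X_b, y_b = make_population(5000, weight=-2.0)

model = LogisticRegression().fit(X_a, y_a)
print("accuracy where the model was designed:", accuracy_score(y_a, model.predict(X_a)))
print("accuracy where it was transplanted:  ", accuracy_score(y_b, model.predict(X_b)))
```

The model is not “broken” in any technical sense; it simply encodes assumptions about the context it was designed in, which is exactly what makes transplanting it to a different community risky.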


Hand in hand with this, there is the fact that little to no feedback is requested from the communities where such AI systems are going to be implemented. This is particularly true of underrepresented and minority communities, who are, in fact, the ones that may give the most valuable feedback about how their communities work and what facts and considerations would be most relevant to integrate into AI systems. In the words of Timnit Gebru: “All these institutions are bringing the wrong people to talk about the social impacts of A.I., or be the faces of these things just because they’re famous and privileged and can bring in more money to benefit the already privileged.”


Towards AI. (n.d.). [Different diverse people chatting] [Illustration]


3. The concept of “fairness”


In the AI field, most principles for responsible and ethical AI include the concept of “fairness”. However, the concept of fairness is ambiguous and accepts many interpretations depending on the context. Arvind Narayanan, associate professor at Princeton University known for his work on data anonymisation, identified some 21 different definitions of “fairness”. According to the Merriam-Webster definition, fairness is the quality or state of being fair, and fair is defined as marked by impartiality and honesty: free from self-interest, prejudice, or favoritism. But still, it is not clear what the absence of bias actually means.


It is complex to define, and to translate into mathematical terms, a concept like fairness that is broad, changes according to the context and is subject to many interpretations. In addition, different definitions sometimes cannot coexist under the same assumptions, especially when cultural factors or information regarding gender, race, or sexual orientation are considered. For example, Kate Crawford used the “CEO images” search referenced in the previous 101 article to highlight the relevance and complexity of this point: what proportion of female “CEO” images would we consider fair; the 50% parity with male “CEO” images that we have not yet reached, or the actual 27%?
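As a minimal sketch (with made-up numbers, not real data), consider two of the most cited formalizations: demographic parity, which compares selection rates across groups, and equal opportunity, which compares true positive rates. The same set of predictions can satisfy one while violating the other, which is why choosing a definition of fairness is itself a value judgment:

```python
# A toy illustration (synthetic numbers) of two common fairness
# definitions disagreeing about the very same predictions.
import numpy as np

# Hypothetical outcomes for two groups: y = true label, p = model's decision.
y_group_a = np.array([1, 1, 1, 0, 0, 0, 0, 0])
p_group_a = np.array([1, 1, 0, 1, 0, 0, 0, 0])
y_group_b = np.array([1, 0, 0, 0, 0, 0, 0, 0])
p_group_b = np.array([1, 1, 1, 0, 0, 0, 0, 0])

def selection_rate(p):
    # Demographic parity compares how often each group is selected.
    return p.mean()

def true_positive_rate(y, p):
    # Equal opportunity compares how often the truly qualified are selected.
    return p[y == 1].mean()

print("selection rate A:", selection_rate(p_group_a))       # 3/8
print("selection rate B:", selection_rate(p_group_b))       # 3/8 -> parity holds
print("TPR A:", true_positive_rate(y_group_a, p_group_a))   # 2/3
print("TPR B:", true_positive_rate(y_group_b, p_group_b))   # 1.0 -> opportunity differs
```

Results like Narayanan’s 21 definitions, and formal results showing that several such criteria cannot all hold at once outside of trivial cases, are precisely why “just make it fair” is not an instruction a developer can simply execute.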


To illustrate this, there is a famous platform called The Moral Machine that perfectly brings to light the challenge of deciding what would be fair, especially in circumstances where no answer is fully right and every choice may lead to potential harm. In fact, this platform was the trigger that led to the present Artificial Intelligence 101 article series.


Feel free to take a look and judge for yourself: https://www.moralmachine.net/


Landrein, S. (n.d.) [self-driving car potentially running over a kid or an elder lady] [Illustration].


There is a tendency to expect computer science to give accurate, definitive results, but concepts like fairness or impartiality cannot be resolved this way. In the words of Timnit Gebru: “The root of these problems is not only technological. It’s social. Using technology with this underlying social foundation often advances the worst possible things that are happening. In order for technology not to do that, you have to work on the underlying foundation as well. You can’t just close your eyes and say: ‘Oh, whatever, the foundation, I’m a scientist. All I’m going to do is math.’”


Tarikvision. (n.d.). [Equality vs. equity vs. justice] [Illustration].


There are outstanding organizations and experts from different fields and nationalities working to raise awareness about the potential for discrimination in AI systems, as well as to provide solutions to fix it (e.g. AI4ALL, OdiseIA, Access Now, DAIR; you can find a list of some of them at AIethicist.org). Their work is necessary and should be brought to our attention.


All of the above adds to the potential data bias referred to in the previous 101 article. Alongside bias and discrimination, violations of privacy are at the top of the list when it comes to addressing potential breaches of fundamental rights. That is where we will put the focus in the next 101 article.



References:


Smith, C. S. (2019). Dealing With Bias in Artificial Intelligence. The New York Times. https://www.nytimes.com/2019/11/19/technology/artificial-intelligence-bias.html


Hao, K. (2019). This is how AI bias really happens – and why it’s so hard to fix. MIT Technology Review. https://www.technologyreview.com/2019/02/04/137602/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/


Harvard John A. Paulson School Of Engineering And Applied Sciences. (n.d.). Barbara J. Grosz. https://grosz.seas.harvard.edu/


Crawford, K. (n.d.). https://www.katecrawford.net/about.html


Selbst, A. (n.d.). https://andrewselbst.com/


Crawford, K. et al. (2019). AI Now 2019 Report. AI Now Institute. https://ainowinstitute.org/AI_Now_2019_Report.pdf


openDemocracy. (2018). In the era of artificial intelligence: safeguarding human rights [online]. Commissioner for Human Rights. https://bit.ly/2EyydEZ


Silberg, J., Manyika, J. (2019). Tackling bias in artificial intelligence (and in humans). McKinsey Global Institute. https://www.mckinsey.com/~/media/mckinsey/featured%20insights/artificial%20intelligence/tackling%20bias%20in%20artificial%20intelligence%20and%20in%20humans/mgi-tackling-bias-in-ai-june-2019.pdf


European Union Agency For Fundamental Rights. (2018). #BigData: Discrimination in data-supported decision making. https://fra.europa.eu/sites/default/files/fra_uploads/fra-2018-focus-big-data_en.pdf


Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (MPG). (n.d.). Moral Machine. https://www.moralmachine.net/hl/es


Image References:

Lue, N. (2020). [Illustration of an AI brain with a scale] [Illustration]. Harvard University. https://sitn.hms.harvard.edu/uncategorized/2020/fairness-machine-learning/


Landrein, S. (n.d.). [self-driving car potentially running over a kid or an elder lady] [Illustration]. MIT Technology Review. https://www.technologyreview.com/2018/10/24/139313/a-global-ethics-study-aims-to-help-ai-solve-the-self-driving-trolley-problem/


Analytics India Magazine (n.d.). Fairness and analytics [Illustration]. https://analyticsindiamag.com/machine-learning-fairness-bias-google-open-ai-gym/


Tarikvision. (n.d.). [Equality vs. equity vs. justice] [Illustration]. Dreamstime. https://www.dreamstime.com/d-isometric-flat-vector-conceptual-illustration-equality-vs-equity-vs-justice-human-rights-equal-opportunities-d-isometric-image237595195


Towards AI. (n.d.). [Different diverse people chatting] [Illustration]. https://towardsai.net/p/machine-learning/bias-matters-whats-fairlearn-and-why-should-i-care


Mar Estrach

