Artificial Intelligence 101: Addressing Privacy


Foreword


Much is said about the wonderful potential of Artificial Intelligence (AI) and how it can benefit our lives. However, many people are not sufficiently informed and educated in this area, and lack the awareness to consciously weigh the pros and cons this technology already brings to their lives. Nowadays, we see users, tired of endless legal notices and privacy policies, hand over their personal data without knowing who is going to use it and how. The result? A lack of trust in AI systems and a fear of the unknown.


This series of Artificial Intelligence 101 articles intends to give the reader a general picture of the current regulatory framework (or lack thereof) around this technology, as well as to highlight its potential both to help society and to jeopardize it, in the latter case focusing specifically on its discrimination- and privacy-related risks.


Artificial Intelligence 101 is mainly divided into ten chapters, including:



Artificial Intelligence 101: Addressing Privacy


Tarasov, S. (n.d.) [brain that has a circuit-like appearance] [Digital image]. Forbes.



Following the previous Artificial Intelligence 101 article, another subject with which AI systems collide head-on is privacy. AI systems have a hard time respecting the right to privacy, a fundamental right which, with the advent of the digital world, has become more relevant than ever (and, likewise, more breached than ever).


When it comes to privacy protection, the digital environment is no different from the physical one. Unfortunately, for a long time people believed that what happens online has no real consequences in the physical world. Hence the lack of interest in protecting their information, monitoring their online behavior or even changing their habits.


According to the Universal Declaration of Human Rights, article 12:


No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honor and reputation. Everyone has the right to the protection of the law against such interference or attacks.


The Yale Review of International Studies. (2021). [a chip with a legal scale in it about to be clicked by a hand] [Digital image].


In the digital environment, applications and social media platforms collect a huge amount of data. As seen in a previous 101 article of this series, most of the time users are not aware of how much data is collected, either because the platforms do not properly report on it or because users do not carefully read the terms and conditions, which are often extremely long, complex and fatiguing to read. This handed-over personal data matters to the platforms because it helps them generate profiles, predict behaviors and give AI systems fuel for decision-making.


Many people say that they do not care about handing over their data because they have nothing to share or hide. However, this framing misses the point. Someone's isolated data may seem irrelevant, but it becomes extremely significant when aggregated with millions of other data points to identify general character traits, behaviors and preferences in a given population, with one final purpose: monetizing these patterns.
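The "nothing to hide" argument also underestimates how identifying aggregated data is. The following minimal Python sketch uses entirely fictional records, but it illustrates a well-documented mechanism: research on so-called quasi-identifiers has repeatedly shown that a handful of innocuous attributes, such as postal code, birth date and gender, is enough to single out most individuals in a population.

```python
# A minimal sketch of how "irrelevant" data points become identifying when
# aggregated. The records below are entirely fictional; in practice such
# quasi-identifiers (postal code, birth year, gender) are scattered across
# the many databases that hold fragments of a person's data.
from collections import Counter

# Hypothetical "anonymized" dataset: no names, supposedly nothing to hide.
records = [
    {"postal_code": "08001", "birth_year": 1985, "gender": "F"},
    {"postal_code": "08001", "birth_year": 1985, "gender": "M"},
    {"postal_code": "08002", "birth_year": 1990, "gender": "F"},
    {"postal_code": "08001", "birth_year": 1990, "gender": "F"},
    {"postal_code": "08002", "birth_year": 1985, "gender": "M"},
]

# Count how many people share each combination of quasi-identifiers.
combos = Counter(
    (r["postal_code"], r["birth_year"], r["gender"]) for r in records
)

# Any combination shared by exactly one person is, in effect, a name.
unique = [combo for combo, count in combos.items() if count == 1]
print(f"{len(unique)} of {len(records)} 'anonymous' records are uniquely "
      f"identifiable from just three innocuous attributes.")
```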


However, the danger of using these large data sets is not just their monetization. The problem is that AI systems fed with these data can end up powering programs and applications whose services harm an individual or their community. Here are some real-life examples, all documented in the references at the end of this article:

  • Amazon scrapped a secret AI recruiting tool after it proved to be biased against women (Dastin, 2018).
  • Credit-scoring algorithms have reproduced racial bias in lending decisions (Douglas Heaven, 2021; Balogh and Johnson, 2021).
  • China's "social credit" system ranks citizens and punishes those the government deems untrustworthy (Canales, 2021).
  • Facebook admitted that its platform was used to incite violence in Myanmar (Stevenson, 2018).
  • A deepfake tool digitally "undressed" thousands of women without their consent (Cook, 2021).

And the list goes on and on.


Brett Wallis, A. (n.d.) [a woman walking next to a wall which has smartphones and a computer with eyes in them] [Photograph].


Personal data can contain direct or indirect information about our health, financial situation, political or religious opinions or sexual orientation, to name only a few. A lot of the data collected by social platforms relates to sensitive information: from the "likes" we give or the accounts we follow, to the comments we may post. When these "likes" are combined with other data, one can easily determine the political or sexual orientation of an individual. If this information is subsequently used in the programming of AI systems and algorithms, the results can easily end up violating a fundamental right, such as the right to non-discrimination. Under current data privacy regulations, direct data (e.g. name, ID number, address, phone number, health records, sexual orientation) enjoys some level of protection, but incidental data (e.g. postal code, where someone attended school, their favorite shops, or their online presence), although used for the same purposes, is often not considered sensitive.
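To make this concrete, here is a minimal sketch, with fabricated users and page names, of how "likes" turn into sensitive inferences. Real systems use far richer models, and published research has shown that Facebook likes alone can predict traits such as political and sexual orientation with high accuracy; the principle below, counting which pages co-occur with which trait, is the same idea reduced to a few lines.

```python
# A minimal sketch, on fabricated data, of inferring a sensitive trait
# from "likes": count how often each liked page co-occurs with a known
# trait, then score new users by their strongest associations.
from collections import defaultdict

# Hypothetical training data: pages liked by users whose political
# leaning is already known (e.g. from a survey or self-declaration).
known_users = [
    ({"page_a", "page_b"}, "party_x"),
    ({"page_a", "page_c"}, "party_x"),
    ({"page_d", "page_e"}, "party_y"),
    ({"page_d", "page_b"}, "party_y"),
]

# How often does each page co-occur with each leaning?
counts = defaultdict(lambda: defaultdict(int))
for likes, leaning in known_users:
    for page in likes:
        counts[page][leaning] += 1

def infer_leaning(likes):
    """Guess a user's leaning from pages they never meant to be political."""
    scores = defaultdict(int)
    for page in likes:
        for leaning, n in counts[page].items():
            scores[leaning] += n
    return max(scores, key=scores.get) if scores else "unknown"

# A new user who only handed over "harmless" likes:
print(infer_leaning({"page_a"}))  # -> 'party_x'
```

Note that nothing sensitive was ever asked of the new user: the inference comes entirely from incidental data that current regulations often leave unprotected.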


Also, when a platform or a business tells a customer that they can register for free, or get a shopping card with discounts for free, this is not true: customers are paying with their data (e.g. gender, postal code, shopping habits in the case of a shopping card, consumer behavior, etc.). Given the amount of revenue companies derive from all their customers' data, they might as well pay them for it.


Spanish journalist Marta Peirano, an expert on internet and privacy matters, clearly illustrates all of the above in her 2015 TED Talk, "The surveillance device you carry around all day". Nine and a half minutes' worth of your attention (the original audio is in Spanish, but subtitles are available in English):



AI systems are fed with the information humans willingly provide, and if the data fed to a system is biased (intentionally or unintentionally), the result will inevitably be biased. It is thus important that internet users stop for a moment before accepting data privacy policies and think about which data are worth handing over.
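A toy example makes the "biased data in, biased results out" point tangible. The history below is invented and deliberately skewed; note that nothing in the code mentions discrimination, yet the model reproduces the skew faithfully.

```python
# A minimal sketch of biased data producing biased decisions, using
# invented numbers. A naive model that simply learns historical approval
# rates per group will replicate whatever skew the history contains.
historical_decisions = [
    # (group, approved) -- an intentionally skewed, fictional history
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# "Training": compute the approval rate the data exhibits for each group.
rates = {}
for group in {g for g, _ in historical_decisions}:
    outcomes = [ok for g, ok in historical_decisions if g == group]
    rates[group] = sum(outcomes) / len(outcomes)

# "Prediction": approve whenever the learned rate exceeds 50%.
for group, rate in sorted(rates.items()):
    print(f"{group}: historical approval {rate:.0%} -> "
          f"model says {'approve' if rate > 0.5 else 'reject'}")
```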


Personal information is much more secure in people's own hands than with some third party taking care of it for whatever purpose. Even though regulations on the handling and treatment of personal data are becoming stricter, in the end, no one but the company or government processing such data knows what is being done with it, and for how long. People should not give their data away for free if they are not sure how it is going to be treated, especially in exchange for a service or product that they do not really need.


The right to privacy has always been and continues to be essential, even though modern platforms manage to make people feel that it is no longer relevant.


No one would allow a random person into their home to study their life or observe their habits. So why allow it online? What is worse, customers and internet users voluntarily hand private information (e.g. personal or family pictures, sometimes including minors, profile information, random surveys...) to businesses in exchange for a handful of likes, comments or discounts.


We will see in the forthcoming articles that AI has the potential to achieve extraordinarily good things. Unfortunately, it can also be used for detrimental and harmful purposes. Surprisingly enough, it all starts with our data. Considering this, being informed and aware of the general picture may give us, the citizens, sufficient tools to regain control over our data and to be more mindful about what we share. After all, it is our personal information.


And last but not least, for those who have not watched it yet and want deeper insights into this topic, the documentary The Social Dilemma is worth your time:





References:


Hao, K. (2019). This is how AI bias really happens – and why it’s so hard to fix. MIT Technology Review. https://www.technologyreview.com/2019/02/04/137602/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/

OpenDemocracy. (2018). In the era of artificial intelligence: safeguarding human rights. Commissioner for Human Rights. https://bit.ly/2EyydEZ


Peirano, M. (2021). El Enemigo Conoce al Sistema (2nd ed.). Madrid: Debate.


European Union Agency for Fundamental Rights. (2018). #BigData: Discrimination in data-supported decision making. https://fra.europa.eu/sites/default/files/fra_uploads/fra-2018-focus-big-data_en.pdf


Crawford, K. et al. (2019). AI Now 2019 Report (pp. 31 ff.). New York: AI Now Institute. https://ainowinstitute.org/AI_Now_2019_Report.pdf


Balogh, S. and Johnson, C. (2021). AI can help reduce inequity in credit access, but banks will have to trade off fairness for accuracy — for now. Business Insider. https://www.businessinsider.com/ai-lending-risks-opportunities-credit-decisioning-data-inequity-2021-6


Thornhill, J. (2021). Beware the known unknowns when finance meets AI. Financial Times. https://www.ft.com/content/01c366db-e1b8-49ff-9952-ef40403991ee


Orlowski, J. (2020). The Social Dilemma [movie]. Netflix. https://www.youtube.com/watch?v=uaaC57tcci0


Douglas Heaven, W. (2021). Bias isn’t the only problem with credit scores — and no, AI can’t help. MIT Technology Review. https://www.technologyreview.com/2021/06/17/1026519/racial-bias-noisy-data-credit-scores-mortgage-loans-fairness-machine-learning/


Livingston, M. (2020). Preventing Racial Bias in Federal AI. Journal of Science Policy & Governance. https://www.sciencepolicyjournal.org/article_1038126_jspg160205.html


Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G


Canales, K. (2021). China's 'social credit' system ranks citizens and punishes them with throttled internet speeds and flight bans if the Communist Party deems them untrustworthy. Business Insider. https://www.businessinsider.com/china-social-credit-system-punishments-and-rewards-explained-2018-4


Stevenson, A. (2018). Facebook Admits It Was Used to Incite Violence in Myanmar. The New York Times. https://www.nytimes.com/2018/11/06/technology/myanmar-facebook.html


Cook, J. (2021). A Powerful New Deepfake Tool Has Digitally Undressed Thousands of Women. The Huffington Post. https://www.huffpost.com/entry/deepfake-tool-nudify-women_n_6112d765e4b005ed49053822



Image References:


Tarasov, S. (n.d.) [brain that has a circuit-like appearance] [Digital image]. Forbes. https://www.forbes.com/sites/davidteich/2020/08/10/artificial-intelligence-and-data-privacy--turning-a-risk-into-a-benefit/


Brett Wallis, A. (n.d.) [a woman walking next to a wall which has smartphones and a computer with eyes in them] [Photograph]. The Guardian. https://www.theguardian.com/world/2016/oct/17/uk-security-agencies-unlawfully-collected-data-for-decade


The Yale Review of International Studies. (2021). [a chip with a legal scale in it about to be clicked by a hand] [Digital image]. http://yris.yira.org/comments/4810




Mar Estrach