Artificial Intelligence 101: Risks on AI Implementation (Part II)


Foreword



Much is said about the wonderful potential of Artificial Intelligence (AI) and how it can benefit our lives. However, many people are not sufficiently informed and educated in this area and lack the awareness to consciously weigh the pros and cons this technology already brings to their lives. Nowadays, we see how users, tired of endless legal notices and privacy policies, hand over their personal data without knowing who is going to use it and how. What is the result? A lack of trust in AI systems and a fear of the unknown.


This series of Artificial Intelligence 101 articles intends to give the reader a general picture of the current regulatory framework (or lack thereof) around this technology, as well as to focus on its potential both to help society and to jeopardize it, in the latter case paying specific attention to its discrimination- and privacy-related risks.

Artificial Intelligence 101 is mainly divided into ten chapters, including:


Risks on AI Implementation (Part II)



Davis, A. (2019). Surveillance eye behind humans [Digital Illustration]. The New York Times.



Following the Artificial Intelligence 101: Risks on AI Implementation (Part I) article, this second part provides further insight into areas where AI systems have proven to generate risks at the implementation stage. Let's continue:



5) Increased control and surveillance


At a domestic level, AI systems implemented in companies tend to favor employer control, putting increasing pressure on employees. For example, Abdi Muse, Executive Director of the Awood Center, an East African community organization in Minnesota, denounced the AI system used by Amazon to determine its workers' compliance "ratio" and, with it, their payroll. Muse explained that, on its face, the AI system helped organize the workforce's workflow, but its downside was the scrutiny of employees' time. If a worker's productivity fell below the ratio set by Amazon's algorithm for that day three times in a row, that worker had a high likelihood of being fired. Muse stressed that some workers had to choose between going to the bathroom and maintaining their work ratio.


Understanding the type of data and metrics that Amazon's AI systems use to determine the workers' "ratios" would require access to the algorithms and the data behind them, but this sort of information is not available outside the organization and would likely surface only through leaks. The same applies to other organizations and even governments.
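To make the rule Muse describes concrete, here is a minimal, purely hypothetical sketch. Since Amazon's actual metrics are not public, the `WorkerRecord` structure, the hourly rates, the `daily_target`, and the three-consecutive-misses threshold are all assumptions used only for illustration.

```python
# Purely hypothetical sketch: Amazon's real metrics, thresholds, and data
# are not public, so every name and number here is an assumption.

from dataclasses import dataclass, field


@dataclass
class WorkerRecord:
    worker_id: str
    hourly_rates: list = field(default_factory=list)  # units handled per hour


def flag_for_review(record: WorkerRecord, daily_target: float,
                    max_consecutive_misses: int = 3) -> bool:
    """Flag a worker who falls below the day's target ratio
    `max_consecutive_misses` times in a row, mirroring the rule
    Muse describes."""
    consecutive = 0
    for rate in record.hourly_rates:
        if rate < daily_target:
            consecutive += 1
            if consecutive >= max_consecutive_misses:
                return True
        else:
            consecutive = 0
    return False


# Three slower hours in a row are enough to trigger the flag.
worker = WorkerRecord("w-001", hourly_rates=[102, 95, 97, 96, 110])
print(flag_for_review(worker, daily_target=100))  # True
```

Even in this toy form, the rule shows how little slack such a system leaves: three slower hours in a row, whatever the reason, are enough to trigger a flag.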


At a more international level, China is one of the countries most criticized for using its citizens' data for control and surveillance, but let's not lose sight of the fact that it is not the only country implementing these practices. Some governments and international organizations, such as Human Rights Watch (HRW), have reported that biometric recognition data has been used in China to identify and harass minorities and dissident communities across Chinese territory. Some examples are the pro-democracy protesters in Hong Kong and the Uyghurs, a predominantly Muslim minority community living in the Xinjiang region.


Here’s an HRW video regarding the Uyghurs and the use of data for surveillance:

Human Rights Watch. (2019). China's Mass Surveillance Phone App. YouTube.



And here is a news piece on the use of biometric data during the Hong Kong protests:

PBS News Hour. (2019). Biometric data becomes new weapon in Hong Kong protests. YouTube.




6) Greater risk of cyber-attacks


Along with the increase in control and surveillance, the concentration and centralization of large amounts of directly collected and inferred data in the hands of governments and corporations exponentially increase the risk of cyber-attacks and security breaches against the servers where this data is stored.


Citing an article by Oliver Wyman advisors Paul Mee and Chaitra Chandrasekhar, published by the World Economic Forum and titled "Cybersecurity is too big a job for governments or business to handle alone":


“Cybersecurity complaints to the US Federal Bureau of Investigation more than tripled during the pandemic last year, while the average payment by victims of ransomware jumped 43% in the first quarter of 2021 from the preceding quarter. Attacks on the software supply chain are growing exponentially, and the burgeoning Internet of Things (IoT) and 5G wireless technology offer more vulnerabilities to exploit.”


To illustrate the above, here's a screenshot of Kaspersky's real-time webpage on cyber-threats around the globe:

Kaspersky. Cybermap [Screenshot]. https://cybermap.kaspersky.com/




7) Non-inclusive rationales and outcomes


We tend to have a society that is highly developed technologically but not very educated ethically and socially. Discussions and proposals for regulating this sector should evolve in parallel with the research, development, and creation of AI systems; in the past, that balance has been lacking.


A lack of communication between the technology sector and other sectors of society, such as the political, legal, or social sectors, has led to a lack of diverse perspectives and of representation of minority or historically discriminated communities, either because AI systems potentially discriminate against them when applied or because their data is simply not included in the equation.


As already mentioned throughout this 101 series, in certain contexts AI systems have not adequately reflected a population's reality, failing to accommodate the system to the context and circumstances of the community where it is implemented. Restating the words of Andrew Selbst mentioned in the previous 101 article, Let’s Talk About Bias (Part II):


“You can’t have a system designed in Utah and then applied in Kentucky directly because different communities have different versions of fairness. Or you can’t have a system that you apply for ‘fair’ criminal justice results then applied to employment. How we think about fairness in those contexts is just totally different.”


One of the major drawbacks of implementing this type of technology is that people with few resources or with little access to technology and the internet are excluded from its development and implementation radius. This exclusion leads to a lack of data about those communities. Hence, the AI system neither considers data from the regions where these people live nor adjusts to being implemented there.
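As a rough illustration of the data gap described above, the sketch below audits whether each deployment region is minimally represented in a training set before the system is rolled out there. The region names and the 1% minimum share are invented for the example, not an established auditing standard.

```python
# Rough illustration only: the region names and the 1% minimum share are
# invented for this example, not an established auditing standard.

from collections import Counter


def region_coverage(records: list[dict], deployment_regions: list[str],
                    min_share: float = 0.01) -> dict[str, bool]:
    """Check whether each deployment region contributes at least
    `min_share` of the training records."""
    counts = Counter(r["region"] for r in records)
    total = len(records)
    return {region: counts.get(region, 0) / total >= min_share
            for region in deployment_regions}


# A dataset dominated by one well-connected region fails the audit
# for a region whose population is largely offline.
data = [{"region": "urban_north"}] * 980 + [{"region": "rural_south"}] * 20
print(region_coverage(data, ["urban_north", "rural_south", "offline_west"]))
# {'urban_north': True, 'rural_south': True, 'offline_west': False}
```

A region that never appears in the data simply cannot pass such a check, which is exactly the blind spot the paragraph above describes.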


On top of the above, the research and development of AI systems are centralized in the hands of large technology companies, giving them virtually all power and control over AI systems, while leaving those who suffer any potential damaging consequences voiceless and powerless to act.


Considering the current state of AI, developers can no longer think of their job as merely creating programs and algorithms. Their inventions have proven to have a huge impact on people’s lives. The AI development chain should incorporate additional steps, such as contextual, ethical, and legal filters; a sketch of what such filters could look like follows below.
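As a purely illustrative sketch, and not an established practice, the gates below show how contextual, ethical, and legal reviews could be encoded as explicit release blockers in a development chain. The gate names and boolean checks are assumptions: real reviews involve domain experts and documentation rather than single flags.

```python
# Illustrative sketch only: the gate names and boolean checks are
# assumptions. Real contextual, ethical, and legal reviews involve
# domain experts and documentation, not single flags.

from typing import Callable

ReleaseGate = tuple[str, Callable[[dict], bool]]

GATES: list[ReleaseGate] = [
    ("contextual", lambda m: m["deployment_context_reviewed"]),
    ("ethical", lambda m: m["ethics_review_passed"]),
    ("legal", lambda m: m["legal_compliance_confirmed"]),
]


def ready_for_release(metadata: dict) -> bool:
    """Block deployment unless every review gate passes."""
    failures = [name for name, check in GATES if not check(metadata)]
    for name in failures:
        print(f"Release blocked: {name} gate failed")
    return not failures


model_metadata = {
    "deployment_context_reviewed": True,
    "ethics_review_passed": True,
    "legal_compliance_confirmed": False,
}
print(ready_for_release(model_metadata))  # prints the legal failure; False
```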



Cristofani, B. (2018). An interchangeable face for a gentleman's suit [Illustration]. The Economist.



Having outlined some of the most flagrant AI-related risks in this and the previous article, the next two articles will focus on available options to mitigate them. There is still much to be done, but once a risk has been identified, it is easier to work towards finding solutions.






References:


Crawford, K. et al. (2019). AI Now 2019 Report. New York: AI Now Institute. https://ainowinstitute.org/AI_Now_2019_Report.pdf.


Fernandez, C.B. (2020). Inteligencia Artificial: muchas propuestas éticas, pero poca regulación que establezca garantías sobre su uso [Artificial intelligence: Many ethical proposals, but little regulation establishing guarantees on its use]. Wolters Kluwer. https://ceadigilaw.org/inteligencia-artificial-muchas-propuestas-eticas-pero-poca-regulacion-que-establezca-garantias-sobre-su-uso/


Chui, M. et al. (2018). Notes From the AI Frontier – Applying AI For Social Good [Online]. McKinsey Global Institute, p. 18. https://www.mckinsey.com/~/media/mckinsey/featured%20insights/artificial%20intelligence/applying%20artificial%20intelligence%20for%20social%20good/mgi-applying-ai-for-social-good-discussion-paper-dec-2018.ashx


Siegel, R. (2019). Amazon Prime Day means protest for workers in Minnesota. The Washington Post. https://www.washingtonpost.com/business/2019/07/15/amazon-workers-minnesota-prime-day-means-protest/


BBC. (2021). Who are the Uyghurs and why is China being accused of genocide? BBC. https://www.bbc.com/news/world-asia-china-22278037


Human Rights Watch. (2019). China’s Algorithms of Repression. HRW. https://www.hrw.org/report/2019/05/01/chinas-algorithms-repression/reverse-engineering-xinjiang-police-mass


Human Rights Watch. (2019). China's Mass Surveillance Phone App. YouTube. https://www.youtube.com/watch?time_continue=81&v=_Hy9eIjkmOM&feature=emb_title


PBS News Hour. (2019). Biometric data becomes new weapon in Hong Kong protests. YouTube. https://www.youtube.com/watch?v=yB5tY2LhRgM&t=48s


Mee, P. and Chandrasekhar, C. (2021). Cybersecurity is too big a job for governments or business to handle alone. World Economic Forum. https://www.weforum.org/agenda/2021/05/cybersecurity-governments-business/



Image References:


Davis, A. (2019). [Surveillance eye behind humans] [Digital Illustration]. The New York Times. www.nytimes.com/2019/12/20/opinion/privacy-surveillance-video.html


Davis, A. (2020). [Lock with an eye in the keyhole] [Illustration]. The New York Times. https://www.nytimes.com/2020/01/20/opinion/facial-recognition-ban-privacy.html


Cristofani, B. (2018). [An interchangeable face for a gentleman's suit] [Illustration]. The Economist. https://www.economist.com/special-report/2018/03/28/managing-human-resources-is-about-to-become-easier



Mar Estrach
