Artificial Intelligence 101: Risks on AI Implementation (Part I)


Foreword


Much is said about the wonderful potential of Artificial Intelligence (AI) and how it can benefit our lives. However, many people are not sufficiently informed about this area to consciously weigh the pros and cons the technology already brings to their lives. Nowadays, we see users, tired of endless legal notices and privacy policies, hand over their personal data without knowing who will use it and how. The result? A lack of trust in AI systems and a fear of the unknown.


This series of Artificial Intelligence 101 articles intends to give the reader a general picture of the current regulatory framework (or lack thereof) around this technology, and to focus on its potential both to help society and to jeopardize it, in the latter case with particular attention to discrimination and privacy-related risks.

Artificial Intelligence 101 is divided into ten chapters, including:


Risks on AI Implementation (Part I)



Munn, M. (n.d.). A face broken in many pieces [Illustration]. The New Yorker.



Beyond the discrimination and privacy-related risks discussed in the previous 101 articles, other inherent risks have emerged as a consequence of the use of AI systems. The present and subsequent 101 articles will focus on the risks that arise from the implementation of AI systems. Let’s see:


1) Lack of attribution of responsibility


Who is responsible when a product or service that runs on an AI system harms a third party: the service provider? The merchant who sold the AI product? The developer of the AI system?


When it comes to attributing responsibility for an AI system, the chain of responsibility is far from clear, and this leads to controversy.


Currently, there is no global legal framework that attributes responsibility to the agents involved in the "life" chain of an AI system. As discussed in the first Artificial Intelligence 101 article, the lack of uniformity in AI regulation produces fragmented rules, each drafted to apply to a concept that is global in scope. In addition, most of these regulations have a long way to go to catch up with modern technologies. Legal fragmentation generates inconsistencies, especially when it comes to attributing responsibility to the agents involved in the development and implementation of AI. The lack of a global regulatory framework also leaves sizeable gaps, specifically when it comes to resolving disputes over harm caused to third parties by AI.


On top of that, there are different approaches to determining accountability, which leads to controversy among legal practitioners (this topic will be addressed in a later 101 article titled “A Special Mention to Accountability”).


For now, what is important to bear in mind is that the lack of a global regulatory framework for the development and implementation of AI systems gives the major AI players discretion to define concepts such as fairness or ethics in the context of AI, a definition that conveniently leaves out the attribution of responsibility. Even though large technology companies promote themselves as compliant and respectful of fundamental rights, they will rarely face sanctions if they breach those duties, since specific, ad hoc regulations are still on the way.


2) Work automation


The advancement of AI will entail the elimination of jobs built around highly repetitive, routine tasks that involve no strategic decision-making and little creative or intellectual content (e.g., basic search tasks, telemarketing, or assembly line work). This will likely lead to a radical change in the labour force, where individuals in low-skilled jobs will have to refocus on sectors that cannot easily be replaced by AI.


According to Kai-Fu Lee, founder of Sinovation Ventures and former President of Google China, the jobs that will be hardest to replace are the least routine ones: those that require creativity, intellectual challenge, and certain emotional qualities, such as empathy and trust.


Researchers seem to agree that the implementation of AI systems will affect many industries. Although not everyone agrees on how, who will be most affected seems clear: vulnerable groups, such as people who have historically been discriminated against, people with low wages and few resources, the elderly, and those with some form of disability.


World Economic Forum. (2018). Kai-Fu Lee: Jobs Will Be Replaced...And That's Okay [Video]. YouTube.


3) New forms of social manipulation and disinformation


Wrongful use of AI technology can enable manipulative practices such as the Cambridge Analytica scandal surrounding the 2016 US presidential election, where information about Facebook users (profile data, interests, and political preferences) was used to serve them content designed to induce them to vote for a certain candidate. Another example is the Myanmar-Rohingya conflict, where Facebook was held responsible in 2018 for its role in inciting hate and discrimination against that community.


Used in this way, AI technology leads to discrimination against certain social groups, especially those that have historically been discriminated against, for example minority groups and groups defined along racial and gender lines.


AI technology can also be used to create "deepfakes": synthetic videos or images that depict real people doing or saying things they never did. They can be used to generate disinformation and fake news, and even fake nudity, as was the case with an application named DeepNude, which has now fortunately been taken off the market.


Fu, Ch. (2021). Man in front of a computer with papers flying around [Illustration]. The New Yorker.


4) Large-scale energy pollution


The technology industry is often criticized for the large amount of energy required by IT infrastructure. Much of this energy comes from non-renewable sources, and the advancement of AI technologies only increases the industry's contribution to pollution.


The energy dependence of the technology sector is growing exponentially, as the sector itself grows at a huge pace. According to the AI Now 2019 Report, by 2020 the sector's carbon footprint was expected to reach the equivalent of 3-4% of global greenhouse gas emissions, more than double what it produced in 2007 (for perspective, that share exceeds the carbon footprint of Japan, the world's sixth-largest polluter). Despite the declarations of the major technology companies about using more renewable energy, the reality is that they remain heavily dependent on fossil fuels. 5G networks, designed to enable the Internet of Things (IoT), have accelerated the use of data processing, and 5G antennas consume far more energy than their 4G predecessors. The massive shift to cloud-based services has had the same effect: data centres require constant cooling and a continuous electricity supply.


Furthermore, up-to-date data on the actual pollution caused by these technologies and services is largely missing, and accurate figures are hard to come by. The organisations that analyse this kind of impact work with scarce, often outdated data and rarely count on the cooperation of the polluting companies that do hold this information.

Getty Images. AI Champions driving new industry solutions for Climate Change [Edited Photography]. Forbes.



The list of AI implementation risks is long and cuts across different sectors, all of which should be considered when policies and regulatory frameworks are drafted. Efforts seem to be moving in the right direction, but there is still a long way to go. The next Artificial Intelligence 101 article, Risks on AI Implementation (Part II), will address further implementation risks worth considering.




References:


European Union Agency for Fundamental Rights. (2018). #BigData: Discrimination in data-supported decision making. https://fra.europa.eu/sites/default/files/fra_uploads/fra-2018-focus-big-data_en.pdf


Open Democracy. (2018). In the era of artificial intelligence: safeguarding human rights. Commissioner for Human Rights. https://bit.ly/2EyydEZ


Marr, B. (2020). Is Artificial Intelligence (AI) A Threat to Humans? Forbes. https://www.forbes.com/sites/bernardmarr/2020/03/02/is-artificial-intelligence-ai-a-threat-to-humans/


Fernández, C.B. (2020). Estados Unidos presenta diez principios para el desarrollo de la inteligencia artificial [The United States presents ten principles for the development of artificial intelligence]. Diario La Ley. https://bit.ly/3ldRWcD; see also Crawford, K. et al. (2019). AI Now 2019 Report. New York: AI Now Institute, p. 21. https://ainowinstitute.org/AI_Now_2019_Report.pdf


Lee, K. (2020). How can AI save our humanity? TED: Ideas worth spreading. https://www.ted.com/talks/kai_fu_lee_how_ai_can_save_our_humanity?language=en#t-7231


Fridman, L. (2019). Kai-Fu Lee: AI Superpowers - China and Silicon Valley [Podcast]. YouTube. https://www.youtube.com/watch?v=cQ48rP_Rs4g


World Economic Forum. (2018). Kai-Fu Lee: Jobs Will Be Replaced...And That's Okay [Video]. YouTube. https://www.youtube.com/watch?v=gX2DrPBQEpk


Crawford, K. et al. (2019). AI Now 2019 Report. New York: AI Now Institute. https://ainowinstitute.org/AI_Now_2019_Report.pdf


Ma, A. and Gilbert, B. (2019). Facebook understood how dangerous the Trump-linked data firm Cambridge Analytica could be much earlier than it previously said. Here's everything that's happened up until now. Business Insider. https://www.businessinsider.com/cambridge-analytica-a-guide-to-the-trump-linked-data-firm-that-harvested-50-million-facebook-profiles-2018-3#was-it-legal-4


Milmo, D. (2021). Rohingya sue Facebook for £150bn over Myanmar genocide. The Guardian. https://www.theguardian.com/technology/2021/dec/06/rohingya-sue-facebook-myanmar-genocide-us-uk-legal-action-social-media-violence


Burgess, M. (2021). The Biggest Deepfake Abuse Site Is Growing in Disturbing Ways. WIRED. https://www.wired.com/story/deepfake-nude-abuse/


Cook, J. (2021). A Powerful New Deepfake Tool Has Digitally Undressed Thousands of Women. The Huffington Post. https://www.huffpost.com/entry/deepfake-tool-nudify-women_n_6112d765e4b005ed49053822


Andrews, E. L. (2020). AI’s Carbon Footprint Problem. Stanford University, Human-Centered Artificial Intelligence. https://hai.stanford.edu/news/ais-carbon-footprint-problem


Image References:


Munn, M. (n.d.). [A face broken in many pieces] [Illustration]. The New Yorker. https://www.newyorker.com/tech/annals-of-technology/the-bot-politic


Fu, Ch. (2021). [Man in front of a computer with papers flying around] [Illustration]. The New Yorker. http://www.charlotte-fu.com/#/the-new-yorker/


Getty Images. [AI Champions driving new industry solutions for Climate Change] [Edited Photography]. Forbes. https://www.forbes.com/sites/markminevich/2021/03/31/ai-champions-driving-new-industry-solutions-for-climate-change/





Mar Estrach
