Much is said about the wonderful potential of Artificial Intelligence (AI) and how it can benefit our lives. However, most of the population is not sufficiently informed and educated in this area to consciously weigh the pros and cons this technology already brings to their lives. Nowadays, we see users, tired of endless legal notices and privacy policies, hand over their personal data without knowing who is going to use it and how. The result? A lack of trust in AI systems and a fear of the unknown.
This series of Artificial Intelligence 101 articles intends to give the reader a general picture of the current regulatory framework (or lack thereof) around this technology, as well as to focus on its potential both to help society and to jeopardize it; in the latter case, the series pays specific attention to its discrimination and privacy-related risks.
Artificial Intelligence 101 is divided into ten chapters:
Introduction and International Legal Framework (the present article);
Let’s Talk About Bias (part I and II);
Risks on AI Implementation (part I and II);
What Can Be Done To Mitigate Risks? (part I and II);
A Special Mention to Accountability;
The Best of AI—Projects With Positive Social Impact.
Since the 1950s, when Alan Turing, considered one of the fathers of computational science, introduced the first preliminary concepts of AI with his Turing Test (a test that analyzes a machine’s ability to exhibit intelligent, human-like behaviour) and John McCarthy, a prominent mathematician and computer scientist, coined the term AI, there have been many definitions of AI. Their variants depend on the approach and territory from which we analyze the concept, but a general definition could be Merriam-Webster’s: AI is “a branch of computer science dealing with the simulation of intelligent behavior in computers” and/or “the capability of a machine to imitate intelligent human behavior”. For more definitions, see the United States (U.S.) National Artificial Intelligence Initiative Act of 2020 or the European Union (EU) Artificial Intelligence Act of 2021.
All definitions share similarities and the same essence, but they are not homogeneous; their differences reveal a lack of consensus and the need to unify concepts when dealing with AI, especially considering that AI does not care about national boundaries and is reaching almost every corner of the earth.
Bareham, J. (2019). [China vs. USA brain connectivity][Illustration]. The Verge. https://www.theverge.com/2019/3/14/18265230/china-is-about-to-overtake-america-in-ai-research
How often do users read the legal notices and privacy policies of the apps, websites, and devices they come across? Do they know what data and information is being gathered from them? The "food" of AI is data: large amounts of data (big data) that are analyzed and interpreted by sophisticated algorithms that form part of a generally more complex structure. This structure involves a processing system made up of software and hardware designed to provide a specific service or outcome, e.g. cancer diagnosis, creditworthiness assessment, or advertising. As we will see in the following 101 articles, the use of these algorithms and AI systems to analyze large amounts of data can and does lead to situations where our fundamental rights, especially non-discrimination and privacy, are put at risk.
Developments in AI have been far ahead of any regulatory framework. AI benchmark countries such as the U.S. and China have focused more on innovation and the development of this technology, postponing the task of policy making and of regulating AI and its risks until “later in the day”.
While these regulations are delayed, some of the world's most relevant technology companies have opted for a kind of "self-regulation": Google, IBM, and Microsoft have published their own ethical principles for the use and development of AI in their products. These principles, however, are not enough, given their discretionary nature, the difficulty of enforcing them, and the lack of legal safeguards.
Lately, however, as AI implementation has increased, some of the inherent risks of this technology have materialized (e.g. an autonomous vehicle that runs over a pedestrian because it does not recognize them as human), no longer leaving room for postponement.
In terms of regulation, as of today, three of the major actors in AI are working towards significant regulations, but much room is still left for interpretation:
In the U.S., the National Artificial Intelligence Initiative Act of 2020 has become law at the federal level. Its purpose is to provide a general framework to (i) promote and maintain U.S. leadership in AI, (ii) lead the world in the development and use of reliable AI, and (iii) prepare the present and future workforce for the integration of AI systems across the U.S. economy and society. Although it does not assign specific responsibilities, it outlines general guidelines and principles that public and private actors must abide by. At the state level, general AI bills or resolutions were introduced in at least 17 states in 2021 and enacted in Alabama, Colorado, Illinois, and Mississippi; in 2020, most such bills failed or were postponed.
In China, the State Council launched the New Generation Artificial Intelligence Development Plan in 2017, which serves as a reference document for China's AI policy and goals. Among other ambitions, China aims to be the global center of AI innovation by 2030 and to make AI "the main driving force for China's industrial upgrading and economic transformation". According to this plan, initial ethical regulations for key areas of AI should be in place by 2025. Other initiatives have followed, such as the 30-point draft guideline published by the Cyberspace Administration of China (CAC), which proposes to forbid companies from using algorithms that “encourage addiction or high consumption” or that put national security at risk, but no specific law has been enacted as such.
In Europe, numerous ethical and moral discussions on the use of AI, such as the White Paper on AI and the recent reports approved by the European Parliament in October 2020, have helped identify AI risks and examine how to regulate AI in a way that boosts innovation, respect for ethical standards, and trust in the technology. The outcome of all those discussions has been a first official proposal to regulate AI: the Artificial Intelligence Act of 2021.
Ideally, the best alternative would be a global AI definition and regulation, so that policy makers and, most importantly, users have clear guidelines regarding their rights and guarantees. A good step forward is the new global agreement on the "Ethics of Artificial Intelligence" presented by UNESCO on 25 November 2021; although not legally binding, these guidelines aim to reach as many countries as possible.
Otherwise, AI is left to the existing regulations of each country, generating regulatory fragmentation and legal uncertainty that leave citizens unprotected, e.g. the lack of guarantees and safeguards when user data is transferred between the U.S. and European countries (see the 2020 Court of Justice of the European Union case “Data Protection Commissioner v. Facebook Ireland Ltd.”, also called Schrems II, which invalidated the then-existing Privacy Shield). This is in part because different regulations are drawn from a single concept that is global in scope, and from outdated and fragmented local rules that cannot accommodate the specific new reality AI brings with it. Sovereignty is paramount to countries, but AI and new technologies have no physical boundaries, making cooperation between countries a compulsory requirement.
Pal D. (2018). Artificial Intelligence - Resembling the Human Brain. Flickr. www.flickr.com/photos/158301585@N08/43267970922/
Historically, the United States has been the strongest country leading AI research and development (R&D), since other countries, especially China, lacked sufficient research talent. However, although R&D still conditions the advance of AI, the field has entered a new phase: implementation. How can this new phase change who leads? Because in terms of implementation, China has talent and an entrepreneurial capacity as strong as, or even stronger than, that of the United States. Furthermore, it has solid and expansive backing from the Chinese government, which has invested heavily in this sector, creating an ecosystem of fiercely competing start-ups and venture capital funds. Lastly, China's big tech companies, such as Alibaba (sometimes referred to as the Chinese Amazon) and Tencent (with its chat application WeChat and its easy payment methods), already hold huge amounts of data from China's population of 1.4 billion people (representing a lot of AI "food").
In the middle of the U.S.-China game, much hope is placed in the European Union in terms of legislation: it is the favourite to create a first regulatory framework that goes hand in hand with other relevant regulations already in force, such as the General Data Protection Regulation, which helps guarantee the protection of fundamental rights such as non-discrimination, equality, and privacy.
Liwaiwai. (2019). [USA vs. China brain connectivity and Europe watching][Illustration]. Liwaiwai. https://liwaiwai.com/2019/08/30/the-race-for-artificial-intelligence-china-vs-america/
Thus, in view of the current situation, a global consensus on AI regulation would be desirable; but given the strong tensions and clashes of interest between the top leading AI countries, the most feasible and balanced alternative would be for the European Union to lead the launch of a first enforceable global regulatory model that balances innovation, development, and implementation in the AI sector with the protection of the rights and freedoms of all citizens, so that we can all benefit from this technology.
Stanford University. Professor John McCarthy [online]. http://jmc.stanford.edu/index.html
European Union Agency For Fundamental Rights (2018). #BigData: Discrimination in data-supported decision making. https://fra.europa.eu/sites/default/files/fra_uploads/fra-2018-focus-big-data_en.pdf.
McCausland, P. (2019). Self-driving Uber car that hit and killed woman did not recognize that pedestrians jaywalk. NBC News. www.nbcnews.com/tech/tech-news/self-driving-uber-car-hit-killed-woman-did-not-recognize-n1079281
Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute.
United States House of Representatives. National Artificial Intelligence Initiative Act, H.R.6216, 116th Cong. (2020). www.congress.gov/bill/116th-congress/house-bill/6216/text
National Artificial Intelligence Initiative Office (2021). National Artificial Intelligence Initiative – About. www.ai.gov/about/
National Conference of State Legislatures (15 Sept. 2021). Legislation Related to Artificial Intelligence. www.ncsl.org/research/telecommunications-and-information-technology/2020-legislation-related-to-artificial-intelligence.aspx
Reuters (29 Sept. 2021). China says to set governance rules for algorithms over next three years. www.reuters.com/world/china/china-says-set-governance-rules-algorithms-over-next-three-years-2021-09-29/
Roberts, H. et al. (2021). The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation. AI & SOCIETY, 36:59-77.
Lee, K. (2020). Superpotencias de la Inteligencia Artificial: China, Silicon Valley y el nuevo orden mundial. Ediciones Deusto, pp. 26-38.
European Union: European Commission. Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts, 21 Apr. 2021, COM(2021) 206 final, available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206
UNESCO (Nov. 2021). UNESCO member states adopt the first ever global agreement on the Ethics of Artificial Intelligence. https://en.unesco.org/news/unesco-member-states-adopt-first-ever-global-agreement-ethics-artificial-intelligence