
Innovation Law and Regulation 101: Recognizing Silicon Minds


Foreword


Artificial Intelligence systems are set to be the next revolution, one that will change human life forever. This new phenomenon and its many effects will transform our society, which is why regulation is the first step toward ethical development. Unregulated use of these technologies could produce negative consequences such as discriminatory practices and disregard for privacy rights. The challenges posed by Artificial Intelligence urge legislators and experts to protect citizens and consumers: regulation becomes a priority if humans wish to shield themselves from unethical and abusive conduct.

This series explores new technologies such as Artificial Intelligence systems and their possible regulation through legal tools. It begins with the rise of these technologies and the complicated question of whether machines can be considered intelligent, before analyzing the interplay between Artificial Intelligence and different branches of law. This first chapter explores the possibility of granting AI systems legal personality and the main legislative steps taken in the EU in that direction. Moving into the realm of civil law, the second chapter considers the current debate on the liability regime governing the use and production of AI. The third chapter discusses the influence of AI on contract law and the conclusion of smart contracts. The use of AI in criminal law and the administration of justice is examined in the fourth chapter, with a focus on both the positive and negative implications of its use. The fifth chapter is dedicated to the use of Artificial Intelligence by public sector bodies. Finally, the complicated relationship between data protection and AI is discussed in light of the EU General Data Protection Regulation.


The 101 series is divided into six articles:

  1. Innovation Law and Regulation 101: Recognizing Silicon Minds

  2. Innovation Law and Regulation 101: AI on Trial, Blaming the Byte

  3. Innovation Law and Regulation 101: Navigating Smart Contracts

  4. Innovation Law and Regulation 101: Silicon Justice

  5. Innovation Law and Regulation 101: A.I. as Civil Servants

  6. Innovation Law and Regulation 101: Defending Data from Silicon Eyes


Innovation Law and Regulation 101: Recognizing Silicon Minds


The Romans would never have imagined that women would one day be granted legal personality, yet the status women eventually received came to match the one already attributed to male citizens. The same can be said of slaves who, despite being natural persons, were not considered worthy of legal personality. This shows that the category of legal persons is open to new additions and subject to evolution, and might one day even accept Artificial Intelligence systems. These systems can emulate human intelligence, make decisions, predict outcomes, create images, draft contracts and perform many other actions that would normally be carried out by humans. Yet they lack certain characteristics that anthropocentric views deem essential to intelligence and fundamental to the status held by entities with legal personality.

This essay will analyze the ongoing debate on Artificial Intelligence systems and the possibility of their finally receiving legal personality, which could turn them into electronic persons and make them part of the ever-changing category of legal persons. The first part of the essay focuses on the historical evolution of the debate on machine intelligence, considering the thought experiments of both Turing and Searle. Subsequently, the essay analyzes recent legal doctrine on the issue, referring to the Delvaux Report to the European Parliament as an example of a positive approach toward the recognition of a long-awaited electronic personality. The essay then illustrates the opposing side of the debate, with reference to the findings of the Italian National Bioethics Committee and the issues this position raises, before closing with general conclusions on the questions discussed.


The Imitation Game: Can a Machine Think?

It was 1950 when the English mathematician Alan Turing showed the remarkable promise held by computers. The test he proposed, known as “The Imitation Game,” started from a question: “Can a machine think?” (Warwick & Shah, 2016). Turing believed the answer would eventually be affirmative and linked intelligence to natural language: in his view, a computer must be able to converse like a human to be considered intelligent. In the game, a human interrogator exchanges written messages with two unseen interlocutors, one human and one machine, and must decide which is which. If the machine’s answers deceive the interrogator into believing it is the human, it passes the test (Copeland, 2000). A machine that succeeds at this deception shows that computers are capable of emulating human conversational intelligence and, in time, perhaps of exceeding it. Turing was in fact so confident that he predicted that within fifty years machines would play the game so well that an average interrogator would often fail to identify them correctly after a few minutes of questioning (Turing, 1950).


Figure 1: English Mathematician Alan Turing at age 16 (Chaffin, n.d.).

The Chinese Room: A Machine Cannot Think

The enthusiasm that “The Imitation Game” generated was not shared by all scholars and experts. Among the skeptics was the U.S. philosopher John Searle, whose thought experiment “The Chinese Room” aimed to show that machines are incapable of thinking like humans (Searle, 1980). Searle believed that human natural language and human intelligence cannot be separated from intention and deliberateness: machines do not answer in a certain manner because they intend to do so, but because they are instructed to do so. To illustrate this, Searle imagined a native English speaker locked in a room with a rulebook for manipulating Chinese symbols. By following the rules, the person could receive questions written in Chinese and return perfectly appropriate Chinese answers despite having no knowledge of the language. Searle’s point was that a human, like a machine, can learn to produce logical answers and emulate thinking patterns purely through symbol association: the person in the room merely follows a set of rules, matching the right output symbols to the symbols received. The experiment showed that deceiving a human speaker is not enough for a machine to be considered genuinely capable of thinking like a human; machines can simulate thinking without understanding what they are simulating, just as the person in the room can answer in Chinese without knowing the language. The experiment reframed the debate around the type of intelligence machines possess, ruling out the idea that machine intelligence is identical to human intelligence. However, the fact that it is different does not mean that a new type of intelligence, not based on human characteristics, could not exist and receive recognition.
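The rule-following Searle describes can be sketched as a simple lookup procedure. In the toy program below (the two-entry rulebook is, of course, a hypothetical miniature of Searle's rulebook), answers are produced by matching input symbols against stored patterns; nothing in the procedure represents what any symbol means:

```python
# A toy "Chinese Room": answers are produced purely by symbol matching.
# The rulebook pairs input strings with output strings; the procedure
# that applies it encodes no understanding of either side.
RULEBOOK = {
    "你好吗?": "我很好, 谢谢.",        # "How are you?" -> "I am fine, thanks."
    "你叫什么名字?": "我叫房间.",      # "What is your name?" -> "My name is Room."
}

def chinese_room(question: str) -> str:
    # The occupant looks up the incoming symbols and copies out the
    # associated answer; unknown input gets a stock reply.
    return RULEBOOK.get(question, "请再说一遍.")  # "Please say that again."

print(chinese_room("你好吗?"))  # replies fluently: 我很好, 谢谢.
```

From the outside, the room converses in Chinese; from the inside, only pattern matching takes place, which is precisely Searle's distinction between simulating thought and understanding.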


How Does A Machine Work?

After years of development and innovation, there is now broader consensus on how Artificial Intelligence systems actually work. These machines base their emulation of thought on inferential, statistical reasoning rather than on the causal reasoning typical of humans. This means that machines can detect syntactic connections but cannot understand semantic correlations: they lack the capacity to grasp the inner meaning of the words and symbols they so successfully link on the basis of syntactic features.

Furthermore, as John Searle illustrated, machines lack the intentionality behind their actions. They merely associate words based on their frequency and the context in which they are used, but are not capable, for the moment, of understanding meaning itself.
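This frequency-based association can be illustrated with a minimal sketch. The mini-corpus below is hypothetical, and real systems are trained on billions of words, but the principle is the same: the program learns which word tends to follow another purely from co-occurrence counts, and nothing in those counts encodes what the words mean:

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus (a real system trains on billions of words).
corpus = ("the court granted legal personality the parliament debated "
          "legal personality the scholar denied legal standing").split()

# Count, for each word, which word follows it: a purely syntactic regularity.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    # Pick the most frequent successor; the program never "knows" what
    # "legal" means, only which words co-occur with it and how often.
    return following[word].most_common(1)[0][0]

print(predict_next("legal"))  # "personality": it follows "legal" most often
```

The output looks like linguistic competence, yet it is derived entirely from counting, which is the gap between syntactic connection and semantic understanding described above.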

These characteristics may seem like insurmountable limits standing in the way of recognizing machines as worthy of legal personality. Yet the astonishing results that Artificial Intelligence systems are able to achieve explain the recent developments in the debate on conferring legal personality on machines (Gallo & Stancati, 2020).


Figure 2: The functioning of Artificial Intelligence neural networks (Castrounis, 2022).

Legal Personality of Machines

In 2017, rapporteur Mady Delvaux presented her report to the European Parliament with recommendations on civil law rules on robotics. The report highlighted the importance of regulating emerging technologies based on Artificial Intelligence and introduced the concept of electronic personality (Delvaux, 2017). On this view, electronic personality would be a new, differentiated type of legal personality crafted around the specific characteristics of Artificial Intelligence systems. Such an updated legal framework could solve potential issues arising from the current lack of regulation: as long as legislators refrain from regulating ongoing innovation, many situations will remain without legal protection. The attribution of legal personality could, for instance, be invoked to request compensation for damages caused (García-Micó, 2021). It is also important to underline that such attribution would lead not only to the responsibilisation of the Artificial Intelligence machine but also to its protection: legal personality grants legal protection and provides a set of rights that the legal person can exercise. This becomes fundamental in a world where Artificial Intelligence systems grow more independent by the day. The well-known chatbot ChatGPT is just one example of this growing independence, as such systems are able to write texts and produce outputs attributable only to the automatic learning of the machine itself. “The Next Rembrandt” is another example of the level of creativity demonstrated by Artificial Intelligence: the system was able to create an original painting based on the analysis of a large corpus of works by the world-famous painter Rembrandt. In scenarios like these, it is of the utmost importance to protect the creativity of machines and to recognize their autonomy.


The Case for Legal Personality Withstands Skeptical Scholars

On the other hand, the shortcomings of Artificial Intelligence systems are interpreted by other scholars as reasons why such systems cannot be granted legal personality. Their reasoning rests on criteria strictly linked to natural persons and their status as legal persons. The Italian National Bioethics Committee illustrates the issue by underlining that Artificial Intelligence machines are not sentient beings: they cannot feel as humans do and lack emotional intelligence (Italian National Bioethics Committee, 2017). This position thus seems largely based on anthropocentric characteristics treated as preconditions for even conceiving that these systems might be granted legal personality.

The opposing side of the debate also relies on the concept of legal objects. Because Artificial Intelligence systems are not considered fully autonomous, the only status that can be attributed to them is that of legal objects (Velykanova, 2020). Yet this creates a number of problems. Firstly, others would be held liable for an Artificial Intelligence system’s actions even when those actions are exclusively the output of independent and autonomous learning. Secondly, chaining autonomous machines to the status of objects could reduce them to electronic slaves (Ruffolo, 2020). Legal personality, or better said electronic personality, would elevate their status to that of legal entities with rights and obligations. The aim is to regulate the phenomenon with legal tools, not to turn a blind eye to innovation. The results of Artificial Intelligence are in fact so revolutionary that scholars worry about an A.I. takeover of human agency (Anderson & Rainie, 2018). While some might see this as a reason to deny personhood to Artificial Intelligence, the opposite approach would be more suitable: the recognition of legal personality would create a precise legal framework in which the rights of machines would be limited by law.

Furthermore, legal personality is already granted to non-natural persons such as corporations (Chesterman, 2020). This shows that it is not the legal tools that are incompatible with Artificial Intelligence, but rather humans’ attitude towards the development of a new, and as such frightening, phenomenon.


Figure 3: Illustration of how the "Chinese Room" experiment was conducted (Sagar, 2019).

In conclusion, the debate on the intelligence of Artificial Intelligence systems dates back to the middle of the 20th century. Alan Turing put forward his revolutionary thesis that machines are able to think and to deceive others into believing that their answers are the result of human thinking. Despite the influence of his “Imitation Game,” others were not convinced of machines’ capability to think. The American philosopher Searle demonstrated that the opposite could be said of machines: they do not think, they emulate thinking. The scientific community has since reached a consensus on the capabilities and functioning of Artificial Intelligence systems, establishing that their reasoning is based on inferential correlations. Nevertheless, several scholars have advanced hypotheses for the creation of a specific legal personality, known as electronic personality. The attribution of electronic personality, and the related recognition of Artificial Intelligence systems as legal persons, would solve multiple issues linked to the use of these systems. In particular, the liability of Artificial Intelligence would finally be recognized, along with its patrimonial responsibility for damages caused, and Artificial Intelligence systems would also receive protection for actions resulting from their autonomous learning. The opposing side of the debate holds that Artificial Intelligence systems lack the essential characteristics to be considered worthy of legal personality, a heavily anthropocentric view that deems fundamental the abilities to feel and to think like humans. However, legal frameworks already grant legal personality to non-natural persons, and the concept of legal personality could in fact be the answer to limiting the unregulated uses of Artificial Intelligence systems.

The debate is far from settled; however, the current discussion seems focused on marginal characteristics, losing sight of the objective that legislators and legal professionals should share: regulating currently unregulated scenarios. The risk is that innovation will continue to develop while legislators are left behind, with no possibility of limiting Artificial Intelligence.


Bibliographical References

Anderson, J. & Rainie, L. (2018). Artificial Intelligence and the Future of Humans. Pew Research Center.


Chesterman, S. (2020). Artificial Intelligence and the Limits of Legal Personality. International & Comparative Law Quarterly, 69(4), 819-844.


Comitato Nazionale per la Bioetica. (2017). Sviluppi della Robotica e della Roboetica [Developments in Robotics and Roboethics].


Copeland, B. J. (2000). The Turing Test. Minds and Machines, 10(4), 519-539.


Delvaux, M. (2017). Report with recommendations to the Commission on Civil Law Rules on Robotics. European Parliament.


Gallo, G. & Stancati, C. (2020). Persons, robots and responsibility: How an electronic personality matters. Social Aspects of Cognition, 32.


García-Micó, T.G. (2021). Electronic Personhood: A Tertium Genus for Smart Autonomous Surgical Robots?. In: Ebers, M., Cantero Gamito, M. (eds) Algorithmic Governance and Governance of Algorithms. Data Science, Machine Intelligence, and Law, vol 1. Springer, Cham.


Ruffolo, U. (2020). XXVI lezioni di Diritto dell'Intelligenza Artificiale. Giappichelli.


Searle, J. (1980). Minds, Brains, and Programs. Behavioral and Brain Sciences, 3(3), 417-424.


Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433-460.


Velykanova, M. (2020). Artificial intelligence: Legal problems and risks. Journal of the National Academy of Legal Sciences of Ukraine, 27(4).


Warwick, K. & Shah, H. (2016). Can machines think? A report on Turing test experiments at the Royal Society. Journal of Experimental & Theoretical Artificial Intelligence, 28(6).


Visual Sources






Sofia Grossi


Arcadia
