Is the Mind a Computer Program?


Is a computer program that can deceive a human being into thinking they are interacting with another person intelligent? If you were absolutely captivated by a piece of artwork, only to find out it was created by a computer program, how would that affect your judgment about creative intelligence? AI has advanced so far that large-scale language models, such as GPT-3, can produce creative works: short stories, poetry, even coherent conversations with humans (Johnson, 2022). These models are so well developed that humans find it increasingly hard to distinguish text written by AI from text composed by human beings. With such a fine line between artificial and human intelligence comes the burning question: can computer programs be genuinely intelligent and produce insights that emerge from a place of profound understanding? This first part of the 101 series on Philosophy of Mind argues that superficially imitating human intelligence is not enough for a computer program to count as genuinely intelligent.


Turing Machines: The basis for every computer today


Alan Turing was a twentieth-century Cambridge mathematician who played a central role in breaking the Nazi Enigma code during World War II. His most famous contribution today is the Universal Turing machine, which he devised in 1936. Every computer today is a limited Turing machine, since no computer has infinite memory and capacity (Mullins, 2012). In 1950, Turing developed a test for artificial intelligence, namely the Turing Test: any machine that passes the test by playing the ‘Imitation Game’ satisfies the criteria to be considered intelligent (Turing, 1950, p. 433–434, 442). The Imitation Game consists of a three-way chat between a computer program in the role of A, a human being in the role of B, and an interrogator, who is also human. Both A and B try to convince the interrogator that they are the human, and it is the interrogator’s task, after a fair amount of interaction, to judge which is the human and which is the computer program. If the interrogator guesses incorrectly and identifies the computer program as the human, the program passes the Turing Test and has exhibited sufficient capacity to be considered ‘intelligent’. If the interrogator guesses correctly, on the other hand, the computer fails the test for intelligence.



Figure 1: A simple Turing machine consists of a tape with marked squares that can be moved back and forth (Acosta, 2012)



A Turing machine is a hypothetical machine that can execute any computer algorithm, no matter how complex (Mullins, 2012). It operates on binary code of 0s and 1s and occupies one of a set of discrete states S1, S2, S3, …, Sn. What the machine does next is entirely determined by the state it is in and the symbol it is currently scanning (0 or 1). For instance, in a very simple Turing machine, if it is in state S1 scanning a 1, the pair (1, S1) triggers a 'write' instruction (write ‘1’, ‘0’, or a blank), a 'move' instruction (shift the tape head one square left or right), and a state transition (enter some state Sn or remain in S1). To generalize, a Turing machine’s next action and next state are determined by its current state and the symbol it scans, according to the instructions programmed into its table.
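To make this concrete, here is a minimal sketch of such an instruction table in Python. The example program, which simply flips every bit on the tape and halts at the first blank square, and all the names in it are illustrative assumptions, not drawn from the cited sources.

```python
# A minimal Turing machine sketch (illustrative, not from the cited sources).
# Each rule maps (state, scanned symbol) -> (symbol to write, head move, next state).
from collections import defaultdict

# Hypothetical example program: flip every bit, halt at the first blank square.
RULES = {
    ("S1", "0"): ("1", +1, "S1"),   # scanning 0 in S1: write 1, move right, stay in S1
    ("S1", "1"): ("0", +1, "S1"),   # scanning 1 in S1: write 0, move right, stay in S1
    ("S1", " "): (" ", 0, "HALT"),  # blank square: stop
}

def run(tape, state="S1", head=0):
    cells = defaultdict(lambda: " ", enumerate(tape))  # unbounded tape of squares
    while state != "HALT":
        write, move, state = RULES[(state, cells[head])]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip()

print(run("1011"))  # -> "0100"
```

Nothing in the table refers to what the bits mean: the machine matches the pair (state, symbol) and acts, a purely formal operation of exactly the kind Searle's argument below targets.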


Are simple Turing Machines intelligent?


According to Block (1995, p. 1–2), Turing’s characterization of intelligence is reminiscent of behaviorism, whereby intelligence is determined by certain behavioral dispositions, such that a computer program conversing with a biological human being can pass as human in a Turing Test. Such a characterization inherits the classic problems of the behaviorist approach to the mind. Block (1995, p. 4–5) constructs a conversation tree in which the judge, who is the interrogator, makes a statement and the program replies with an appropriate stored response, the exchange alternating between judge and program. If the judge opens with a coherent statement A, the program looks up A and outputs the response B stored under it; when the judge replies with C, the program looks up C under the branch already taken and responds with the stored D, and so on down the tree.
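As a toy illustration of the idea (the sample exchanges and the dictionary layout are invented here, not taken from Block's paper), such a tree can be hard-coded as nested lookups:

```python
# A toy sketch of Block's conversation tree (illustrative; the exchanges
# and nesting scheme are invented, not from Block's paper). Every judge
# utterance is a key; its value holds the canned reply plus the subtree
# for whatever the judge might say next.
TREE = {
    "hello": {
        "reply": "Hi there! How are you today?",
        "next": {
            "fine, and you?": {"reply": "Can't complain. Lovely weather!", "next": {}},
            "terrible": {"reply": "Oh no, what happened?", "next": {}},
        },
    },
}

def converse(utterances):
    """Walk the tree, returning the pre-programmed reply at each step."""
    node, replies = {"next": TREE}, []
    for said in utterances:
        node = node["next"].get(said)
        if node is None:            # off-tree input: a full tree would store
            return replies + ["?"]  # a branch for every sensible string
        replies.append(node["reply"])
    return replies

print(converse(["hello", "fine, and you?"]))
```

Every apparently witty reply was typed in ahead of time, so whatever intelligence the exchange displays belongs to the programmers rather than to the machine; this is the thrust of Block's objection below.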



Figure 2: Turing Machine (GeeksforGeeks, 2021)



From a behaviorist perspective, the program is arguably intelligent, since it exhibits the behavioral dispositions that the Turing Test treats as the mark of intelligently carrying out a conversation. For the behaviorist, mental states just are behavioral states: to be in the mental state of pain, for example, is to be disposed to wince, scream in agony, or display some other characteristic pain behavior. There is no further fact about neurons being arranged in a certain way that qualifies as the pain state; the only pain state, for the behaviorist, is the behavioral state. Applied to programs, the abstract computational principles driving and determining the program’s output do not matter, since they concern internal symbol manipulation, while the behaviorist conception of intelligence is defined entirely over outward behavior. The symbol manipulation can be treated like a neural state: something with no mentalistic significance whatsoever, since the behaviorist outright denies that internal states carry any. However, as Block (1995, p. 5) shows with the Aunt Bubbles machine built on the conversation tree depicted above, any machine can follow such an algorithm and produce the appropriate responses that have been programmed into it. The responses are nothing but input from the human programmers, so it is hardly convincing to call such a machine intelligent. Furthermore, Block (1995, p. 2) notes that many machines can perform feats most intelligent humans cannot, such as complex mathematics, while intelligent beings such as chimpanzees or dolphins could never pass the Turing Test, since they plainly cannot carry out a conversation with human beings. Turing recognized this problem and conceded that passing the test is sufficient but not necessary for intelligence. But Block’s arguments above challenge even the sufficiency of the test.


Searle’s Chinese Room Argument



Figure 3: The Chinese Room by John Searle (Medium, 2019)


John Searle, in his seminal work Minds, Brains, and Programs, introduced his famous objection to artificial intelligence through the thought experiment of the Chinese Room. The Chinese Room is a coordinated system that serves as a metaphor for a computer program. A brief elaboration of the thought experiment is as follows (Searle, 1980, p. 417–418):


1. Searle is locked in a room and knows nothing about Chinese. To him, Chinese characters are meaningless symbols.


2. Batches of Chinese writing on paper are passed through the door into the room. Along with each batch of Chinese symbols comes a set of rules written in English, unbeknownst to the Chinese speaker passing the notes, that lets Searle correlate one batch of Chinese symbols with another.


3. Following the set of rules in English, Searle manages to write the appropriate Chinese symbols in response to the Chinese writings he receives through the door.


4. He becomes so adept at correlating Chinese symbols using the rules that the Chinese speaker outside the door thinks they are conversing with a native Chinese speaker, even though Searle still knows nothing of Chinese and is merely following rules to relate one meaningless symbol to another.


According to Searle (1980, p. 418), the set of rules constitutes the ‘program’ or ‘algorithm’ that enables him to run a Chinese program through symbol manipulation. He draws a distinction between Strong Artificial Intelligence and Weak Artificial Intelligence. Weak AI is the claim that the computer is a powerful tool in the study of the mind, enabling us to simulate various kinds of mental processes (Searle, 1980, p. 417). Strong AI, in contrast, is the more radical claim that an appropriately programmed computer literally possesses a mind or has mental states, in particular ‘cognitive states’ such as intelligence. To clarify the distinction, suppose a computer program helps to study the weather by simulating a winter storm. The computer is then a powerful and valuable tool for investigating a real-world phenomenon, but it would be quite absurd to claim that an appropriately programmed computer can literally produce a winter storm. To claim otherwise would conflate physical causal processes with formal, symbolic ones: formal processes are abstract and stipulated, whereas physical causal processes are intrinsic to the phenomena themselves.



Figure 4: The rulebook in the Chinese Room (The Mind Project, 2018)


Searle makes a distinction between syntax and semantics, which lies at the heart of the Chinese Room Argument. Syntax refers to symbols and their manipulation, while semantics is what endows a speaker of a language with an understanding of it (Searle, 1980, p. 422). For instance, a computer can produce an English response ‘f(x)’ when the user inputs an English question ‘x’. The important difference between the human user and the computer is that the computer is presumably performing a purely syntactical procedure, following instructions programmed into its algorithm that say, “If x, then f(x)”. It has no understanding of what “x” actually signifies to the user; it simply returns the response its rules dictate.
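To see the point in miniature, consider a hypothetical sketch (the tokens and rules below are invented for illustration): the very same lookup rule yields a correct-looking Chinese answer even though the program understands not a word of either language.

```python
# Hypothetical sketch of a purely syntactic "If x, then f(x)" rule.
# The machine associates input strings with output strings; the rules
# are invented for illustration and involve no understanding anywhere.
RULES = {
    "what is red?": "Red is a colour.",
    "什么是红色？": "红色是一种颜色。",  # same rule type, different symbols
}

def respond(x):
    # Pure pattern-to-pattern association: syntax without semantics.
    return RULES.get(x, "???")

print(respond("什么是红色？"))  # a fluent-looking reply, zero understanding
```

Whether the strings are English or Chinese makes no difference to the procedure, which is precisely Searle's point about the man in the room.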


Searle’s general argument is as follows:


Axiom 1: Syntax is not sufficient for semantics.


Axiom 2: Minds have mental contents; specifically, they have semantic contents.


Axiom 3: Computer programs are entirely defined by their formal or syntactical structure.


Conclusion: Instantiating a program by itself is never sufficient for having a mind (Searle, 1984, p. 39).


Axiom 1 carries the underlying claim that objects obeying formal rules, such as words in a language or computational entities in AI, are independent of the way in which they come to have meaning (semantics) (Chalmers, 1992, p. 30). More precisely, rule-following behavior alone does not determine meaning.


Philosophical Objections



Figure 5: Weak vs. Strong AI (GavinJensen.com, 2018)


How convincing are Searle's arguments against Strong AI? His philosophical objection rests on the presupposition that "Syntax is not sufficient for semantics". That assumption can itself be challenged (see Chalmers, 1992): neurons firing at different rates are also performing purely symbolic operations triggered by external stimuli, much like a computer program executing an algorithm. If even the brain performs syntactic operations and at some point achieves semantics, the objection cannot simply be that "Syntax is not sufficient for semantics". Rather, it raises the important questions: "At what point do syntactical operations result in semantics?" and "How much complexity is required to achieve semantics from syntactic operations?"


Part of Searle's motivation for endorsing the claim that "Syntax is not sufficient for semantics" comes from his appeal to biology as the fundamental causal basis of the mind. He claims that biological processes in the brain are causally responsible for mental processes such as thinking and intelligence, a view known as biological naturalism, which asserts that only biological substrates are capable of producing mental properties (Searle, 1984, p. 39; Corcoran, 2001, p. 307–310). Thus, for Searle, consciousness is a biological phenomenon that cannot be replicated by non-biological components such as the silicon chips on which a computer program runs. However, such a view of the mind may seem parochial, as Searle offers no justification for why only neurons firing in a brain can produce a conscious state. In fact, there are organisms without a centralized brain, including many invertebrates and aquatic animals, that are arguably conscious. There is no single biological feature of the brain to which one can reduce the features of the mind. Interestingly enough, Searle himself rejects any reduction of mental properties to physical features and insists on an ontological bifurcation between subjective, third-person-inaccessible mental properties and objective, third-person-accessible physical-biological properties. Yet this ontological distinction is at odds with his biological naturalism: the latter entails that non-spatiotemporal mental properties correspond to spatiotemporal features of biological properties, while the former insists that mental properties are ontologically distinct from physical ones, threatening an outright contradiction (Corcoran, 2001, p. 310–314). One way or another, Searle's claim that only biological features can be the causal basis of the mind is hardly convincing.


Conclusion



Figure 6: Are we all machines? (Dreamstime, n.d.)


Searle’s Chinese Room Argument is a classic objection to Strong AI, the claim that computer programs can have minds and be in cognitive states such as intelligence. Turing, taking a behaviorist stance on the mind, argued that his test is sufficient to establish the intelligence of a computer program. Block’s arguments, however, challenge even the sufficiency of such a test: his Aunt Bubbles machine illustrates that a computer can in principle be programmed to pass the Turing Test by lookup alone, and that does not justify the claim that such a program is intelligent. Searle’s Chinese Room Argument contextualizes and bolsters Block’s case, since Searle in the Chinese Room exhibits all the behavioral dispositions supposedly sufficient for understanding Chinese, yet has no clue what the Chinese symbols mean. Furthermore, Searle grounds his rejection of Strong AI in the argument that computer programs are merely syntactical engines engaged in symbol manipulation, and syntax is not sufficient for semantics, which in his thought experiment is the understanding of Chinese. He thus concludes that the mind is not a computer program, and that programs are neither constitutive of nor sufficient for minds.


Searle's general objection to Strong AI, setting aside his biological naturalism, is at best directed at the symbolic school of AI, which modeled the mind at the level of symbols (Chalmers, 1992, p. 2). That does not disprove Strong AI as such: connectionist paradigms place computation below the level of symbols and aim to find the level at which implementation gives rise to semantics. The computational theory of mind thus remains a viable option.


The next part of this series will explore an alternative theory that takes the mind to be an immaterial substance separate from the physical realm. Is an ontological distinction between mind and brain the best answer? Searle gestured toward such a distinction with his biological naturalism, on the basis of the third-person inaccessibility of conscious experience. Does the distinction arise because we cannot directly observe the goings-on in the mind, given that they are not spatiotemporally located anywhere and so elude scientific observation? The next article will explore this question and evaluate how convincing the argument is that the mind is distinct from the body.




Bibliographical References

Block, N. (1995). The mind as the software of the brain. In E. E. Smith & D. N. Osherson (Eds.), Thinking: An invitation to cognitive science (pp. 377–425). The MIT Press.


Chalmers, D. J. (1992). Subsymbolic computation and the Chinese room. In J. Dinsmore (Ed.), The symbolic and connectionist paradigms: Closing the gap (pp. 25–48). Lawrence Erlbaum.


Corcoran, K. (2001). The trouble with Searle's biological naturalism. Erkenntnis, 55(3), 307–324. https://doi.org/10.1023/A:1013386105239


Johnson, S. (2022, April 15). AI is mastering language. Should we trust what it says? New York Times. https://www.nytimes.com/2022/04/15/magazine/ai-language.html


Mullins, R. (2012). What is a Turing Machine? University of Cambridge. https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/turing-machine/one.html


Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–424. https://doi.org/10.1017/S0140525X00005756


Searle, J. R. (1984). Minds, brains, and science. Harvard University Press.


Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460. http://www.jstor.org/stable/2251299


Visual Sources

Figure 1: Acosta, R. (2012, October 21). Turing Machine, reconstructed by Mike Davey as seen at Go Ask ALICE at Harvard University. https://commons.wikimedia.org/wiki/File:Turing_Machine_Model_Davey_2012.jpg


Figure 2: Turing Machine. (n.d.). https://www.geeksforgeeks.org/turing-machine-in-toc/


Figure 3: The Chinese Room by John Searle. (n.d.). https://medium.com/acing-ai/what-is-the-chinese-room-argument-in-artificial-intelligence-d914abd02601


Figure 4: The rulebook in the Chinese Room (n.d.). https://mind.ilstu.edu/curriculum/searle_chinese_room/searle_chinese_room.html


Figure 5: Weak vs. Strong AI (n.d.). https://www.gavinjensen.com/blog/2018/ai-weak-strong


Figure 6: Are we all machines? (n.d.) https://www.dreamstime.com/android-man-disguises-himself-as-human-robot-pop-art-retro-illustration-kitsch-vintage-s-style-image201994244




