By Gabriella Yanes

Psychoanalysis vs AI: A Common Battleground


To think of psychoanalysis and artificial intelligence along the same lines seems like a stretch, and a rather rare proposition. Psychoanalysis looks for what is most human: the body, sexuality, and the drive. Artificial intelligence looks for the least human aspects of humanity: its theoretical thesis rests on the premise that mental life (emotional and cognitive processes) can be reduced to a set of principles shared by humans and machines. The two therefore seem as far apart as possible. On the one hand, psychoanalysis is a clinical practice and a philosophical and scientific battleground; on the other, artificial intelligence is a scientific invention whose cultural and theoretical concepts provoke philosophical debates of their own (Turkle, 1988). However, the creation of an artificial brain, of a "thought machine" meant to surpass human intelligence and to separate thought from the body, calls forth an urgent encounter with the psychoanalytic subject. Psychoanalysis, however far it finds itself from the scientific field, could be a crucial tool for understanding what AI means for humanity, for speaking, sexed subjects.


Due to its inherent conceptual interdisciplinarity, the field of AI and its many discourses seem to blur the limits between science and fiction. Emerging from a rich history of fantasy and popular science, in which literature, cinema, and other cultural forms have woven countless stories around it, it is hard to discern where the science of AI ends and fiction begins. Its theoretical research draws on a variety of fields such as computer science, information theory, mathematics, neurobiology, psychology, linguistics, and philosophy. There is therefore a constant, wide-ranging debate about its potential, its status, and its development, making it a polemical topic that permeates nearly every aspect of contemporary life.


The scientist and engineer Ray Kurzweil (2014) believes that, thanks to rapid advances in neural networks, quantum computing, and biotechnology, we will soon transcend the "limits of nature", synthesizing the boundary between science and nature. Others argue that we are entering a "Fourth Industrial Revolution" marked by the gradual merging of the digital, physical, and biological worlds (Schwab, 2016). Many philosophers and intellectuals thus explore the conceptual terrain of this revolution, in which science is betting on an artificial reality, raising complex questions about the notion of intelligent life and the future of what it means to be "human". Is psychoanalytic theory as irrelevant today as it is often thought to be? What does psychoanalysis have to say about the future ahead of us? This article explores the position psychoanalysis takes on the future of science and artificial/digital life, a future in which many theorists believe AI could actually reinforce the significance of psychoanalytic theory.


Figure 1: Artificial Intelligence and Psychoanalysis (Unknown, n.d.).

By integrating philosophical exploration with the clinical and conceptual advances of Lacanian theory, this article strives to establish a fresh and fruitful connection between psychoanalysis and AI. In approaching artificial intelligence from a psychoanalytic perspective, it takes the concept of sexual non-rapport as its central theoretical core. In the 1970s, Jacques Lacan announced in his seminars that there is no sexual rapport, meaning not that the sexual encounter does not exist, but that something is always missing; for sexed subjects, there is no such thing as completeness. The first aim of this article is to offer a psychoanalytic interpretation and questioning of AI as a discourse concerning "knowledge in reality". The second is to develop a conceptual framework for examining the tangible implications of artificial intelligence for subjectivity, the body, and social bonds.


Until the 1950s, behaviorism was the dominant position in American academic psychology. It emphasized the study of behavior over inner mental processes. Researchers were permitted to study the behavior of "remembering", for example, but were discouraged from discussing the concept of "memory", which was held to lack scientific rigor. Behaviorism sought to understand behavior solely in terms of stimulus and response, leaving a "black box" in place of the unexplored inner processes of the mind. With the changing cultural and political climate of the 1960s, however, the dominance of behaviorism declined, and scientists felt freer to study memory and to ask what inner mental processes meant (Turkle, 1988).


One significant factor in the decline of behaviorist methods was the emergence of computers in the late 1950s. The influence was not technical: the mere existence of computers lent legitimacy to a new perspective on the mind. Computer scientists had developed a language for discussing the internal states of their machines, which led psychologists to realize that humans, too, might have internal processes in need of conceptualization. The psychologist George Miller (1958), who worked at Harvard during the behaviorist era, highlighted how psychologists felt compelled to discuss memory because computers demonstrated the existence of internal processes. In States of Mind (Miller, 1983), he puts it as follows:


The engineers showed us how to build a machine that has memory, a machine that has purpose, a machine that plays chess, a machine that can detect signals in the presence of noise, and so on. If they can do that, then the kind of things they say about the machines, a psychologist should be permitted to say about a human being. (p. 23)

Figure 2: John von Neumann with the IAS Computer (Richards, c. 1951).

The presence of computers relegitimized the exploration of memory and internal mental processes within scientific psychology. Many of the technical ideas that psychologists adopted from the world of computing, such as concepts from cybernetics and automata theory, predated actual computers, but they gained far greater significance once the first machines were built in the mid-1940s. George Miller (1958) observed that engineers had begun using terms related to mental processes (such as "memory") that psychologists had wished to use but that had previously been dismissed as "unscientific" (Turkle, 1988).


It can thus be said that computational ideas, computational language, and the tangible existence of computing machines together fostered an intellectual environment in which it became scientifically acceptable to discuss mental processes that behaviorism had previously placed off limits. The presence of computers played a crucial role in establishing a new psychology centered on inner mental states, which later became known as cognitive science.


Computer programs offer a means to examine how beliefs and rules contribute to behavior. Before computer programs, earlier psychological theories would have rejected a statement like "because the pawn blocked the bishop" as merely describing a chess player's reasons. When the mind is seen as a program, however, those reasons become explanations. Psychologists were drawn to computers because they offered a way into the "black box" of the mind, and once that box was opened, it suggested a way to incorporate computational concepts that align closely with common-sense understandings of the human mind (Turkle, 1988).


A commonly held belief is that the presence of computers pushes psychology toward more precise and quantifiable theories, on the grounds that computers, by their very nature, demand rules, precision, and formal structure. The impact of computers on psychology is more nuanced than this simple narrative suggests. Their initial influence, directed at challenging behaviorism, led to a less restrictive and more flexible science of the mind rather than a stricter one (Apprich, 2018).


Figure 3: Artificial Intelligence (Unknown, n.d.).

Artificial intelligence is the most explicit conduit for the computer's influence on psychology. It promotes a globally materialist perspective and also puts forward specific theories about how the mind operates. Its dual objective is to create "thinking machines" and to use machines to reflect on the processes of thinking itself. The underlying methodological assumption is that if one can construct a machine capable of intelligent behavior, then the way the machine achieves that behavior is relevant to understanding how human intelligence itself works (Millar, 2020).


The project of Artificial Intelligence (AI), which aims to create consciousness within machines, challenges traditional notions of the self in a manner reminiscent of psychoanalysis. Many people take the autonomous self, the idea that the self is more than our conscious awareness, as a straightforward concept because they experience it daily, not least in language. Everyday speech reflects this experience and conveys the idea of free will when we say things like "I act", "I do", or "I desire". Even when individuals encounter theological or philosophical ideas that question free will, they often make only minor adjustments to their understanding of the autonomous self, viewing it as a self with limited decision-making capabilities. In psychoanalysis, however, a more profound skepticism exists. The unconscious does not merely limit the self; it shapes a self without a fixed center. There is no fixed concept of one's self: it does not revolve around a center but fluctuates through unconscious processes. In AI, an even more unsettling challenge emerges: if the mind is a program, where does the self reside? This challenge questions not only the ability to determine one's self but the very existence of a self at all (Turkle, 1988).


Traditional humanism upholds the idea of a conscious, purposeful individual. By challenging this humanistic concept, AI disrupts the established order, aligning itself more closely with psychoanalysis and with radical philosophical schools such as deconstruction, a school of thought developed mainly by Jacques Derrida in the 1960s that questions traditional concepts of certainty and truth. In psychoanalysis, the self is dispersed within the depths of the unconscious; in deconstruction, it is dispersed within the realm of language. In the realm of computation, the self is likewise decentered, and it may even dissolve into the concept of the program (Millar, 2020).

In a society where artificial intelligence plays a significant role in the future of our social construction, studying AI from a psychoanalytic perspective is a provocative endeavor. It prompts us to question the relevance of psychoanalysis when it is applied beyond the traditional "human" clinical context. It also suggests that our philosophical and critical contemplation of AI has thus far overlooked a crucial aspect: enjoyment. This leads to the proposal that the connecting link between these seemingly disparate fields is none other than sex. In psychoanalysis, which deals with the question of suffering, sex is the fundamental issue underlying all others. Yet it is more than a symptomatic issue; it is a philosophical quandary, philosophical in the sense that it inherently lacks a definitive solution (Millar, 2020). In psychoanalysis, sex represents the impossible yet unavoidable collision of epistemological and ontological questions that marks the emergence of subjectivity in all speaking beings. We must therefore ask: what role do sexed subjects play in the context of artificial intelligence?


Figure 4: "Dream Caused by the Flight of a Bee Around a Pomegranate a Second Before Awakening" (Dalí, 1944).

In a thought-provoking New York Times article titled "You Can Have the Blue Pill or the Red Pill, and We're Out of Blue Pills" (2023), three authors, among them the Israeli historian and author Yuval Noah Harari, warn of the significant peril AI poses to humanity. The text's central thesis treats language as the universal foundation of human experience. They observe that a machine with the resources to command language could generate a reality capable of eroding all the cultural achievements and expressions humanity has crafted over thousands of years.


In the beginning was the word. Language is the operating system of human culture. From language emerges myth and law, gods and money, art and science, friendships and nations and computer code. A.I.’s new mastery of language means it can now hack and manipulate the operating system of civilization. By gaining mastery of language, A.I. is seizing the master key to civilization, from bank vaults to holy sepulchers. (Harari, Harris, & Raskin, 2023)

This is where the psychoanalytic viewpoint offers valuable insights. To begin with, we might consider that, in a sense, language itself was the first machine to exert influence over the human body, effectively shaping its identity. Language is a framework of codes, symbols, ideograms, phonemes, mechanisms, characters, and numbers, among other elements, which asserts itself upon individuals from the very outset, even in the womb, through the sounds transmitted via the mother's body. At this stage, in other words, we are recipients of language rather than its masters, and we are subject to the influence of the language that speaks to us (Zabalza, 2023).


A primary misconception, therefore, is that human individuals originated and fully control the realm of symbols and language. Rather, on entering the domain of language, signs, and symbols, the individual is divided into two facets: a subject of the statement and a subject of the enunciation. The individual is no longer shaped by an internal consciousness (the cogito) but by an external agency that continuously writes and speaks through it (an Other). Each time the individual attempts to think or articulate the symbolic order, it becomes ensnared within it. The individual is thus not only subjected to the influence of the symbolic but also formed by it. Conversely, it is only the symbolic that shapes the world of machines; their "self" comes entirely from a "conscious" language programmed as a set of codes and signs.


Perhaps everything comes down to this: depending on the subjective stance one takes toward this aspect of humanity, it can be read either as the foolishness that so often marks human actions or, on the contrary, as the essential vulnerability that distinguishes us from machines. It is a vulnerability through which the individual accesses that rare phenomenon we commonly call love, whose key answers to no reason, logic, or algorithm, and it is here that psychoanalysis comes into play. This is, perhaps, where AI grants psychoanalysis a fundamental place in our human history.



Bibliographical References

Apprich, C. (2018). Secret Agents: A Psychoanalytic Critique of Artificial Intelligence and Machine Learning. Digital Culture & Society: Rethinking AI, 4(1), 29–44. https://doi.org/10.25969/mediarep/13524


Harari, Y. N., Harris, T., & Raskin, A. (2023). You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills. The New York Times. http://archive.today/2023.03.24-223358/https://www.nytimes.com/2023/03/24/opinion/yuval-harari-ai-chatgpt.html


Kurzweil, R. (2014). How to Create a Mind: The Secret of Human Thought Revealed. New York: Penguin.


Lacan, J. (1998). The Seminar of Jacques Lacan, Book XX: Encore – On Feminine Sexuality, the Limits of Love and Knowledge, 1972–1973. New York: W. W. Norton & Company.


Laurent, É. (2016). The Unconscious and the Body Event. The Lacanian Review Hurly-Burly, 1, 178–187.


Laurent, É. (2016). L’Envers de la biopolitique: Une écriture pour la jouissance. Paris: Navarin.


Millar, I. (2020). The Psychoanalysis of Artificial Intelligence. Kingston School of Art.


Miller, J. (1983). States of Mind. New York: Pantheon.


Turkle, S. (1988). Artificial Intelligence and Psychoanalysis: A New Alliance. Daedalus: Journal of the American Academy of Arts and Sciences, 117(1).


Zabalza, S. (2023). La Inteligencia Artificial no se lleva bien con los cuerpos vivos que hablan [Artificial intelligence does not get along with living bodies that speak]. ElSigma.


Žižek, S. (2020). Sex and the Failed Absolute. London: Bloomsbury.


