
Can Computers Be Creative?

There is widespread debate in the philosophy of creativity and artificial intelligence about whether it is possible to build programs that can engage in creative problem-solving. Much of the criticism of creative artificial intelligence stems from the claim that, because AI is pre-programmed by humans, it operates only on rule-based algorithms and can offer no genuine insight into creativity, since creativity seemingly requires 'randomness' and an agency or intentionality that programmed AI cannot achieve.

Berys Gaut (2010), in his essay The Philosophy of Creativity, proposes a definition of creativity as a capacity to produce things that are original and valuable, the value condition being necessary to rule out cases of worthless originality. In a separate essay entitled The Value of Creativity, Gaut (2018) argues for an additional component necessary to regard a process as genuinely creative: agency, or intentionality. Possessing intentionality involves having at least partial awareness of the goals and values the agent is trying to achieve with their creative endeavor, and exhibiting some understanding of how to achieve them (Gaut, 2018). Margaret Boden (2004), in her book The Creative Mind: Myths and Mechanisms, applies the concept of creativity to computers, and one of the most important questions she asks is whether computers can ever be creative. Boden (2004) answers in the negative. Nevertheless, this article will argue that artificial intelligence can, in fact, be creative, by endorsing a predictive processing framework. On this view, creativity can emerge from probabilistic algorithms employing Bayesian inference, since the human brain works in a similar manner, using predictive processing and information optimization (Clark, 2018).

Figure 1: A colorful painting, in the style of M.C. Escher, of a robot head with flowers growing out of the top. An example of an artwork generated by the DALL·E 2 AI (DALL-E 2 AI, 2022).

On this definition of creativity, to be an intentional agent, a computer must be able to perform a task autonomously rather than merely acting upon instructions provided by the programmer. The objection against intentionality in computers holds that, since a computer is initially programmed by a human being (or perhaps by another computer), it cannot be autonomous: every task it performs is dictated by its underlying programming. On this view, the computer simply executes a set of functions based on fixed rules and possesses no understanding of what it does, so it is neither autonomous nor, consequently, an intentional agent. To address this autonomy problem, it will be shown that the criticism is misinformed. First, it will be demonstrated that even human beings are partly programmed, so that they possess the complex capacities needed to navigate their environment. Second, it will be argued that, despite their initial programming, computers can still act intentionally, or in a goal-directed way, given the right algorithm. On the first claim, human beings are pre-wired toward certain unalterable or deeply ingrained dispositions, behaviors, and arguably knowledge, based on their genetic makeup. Examples of such unalterable behavior include an appetite for certain types of food and a distaste for others that are rightly categorized as inedible, such as wood or rocks, assuming no neurological disorders. A distinction can be drawn between programmed input containing a genetically pre-determined set of instructions and programmed input containing an environmentally determined set of instructions (education, cultural affiliations, group membership, etc.): the former is usually deeply ingrained in one's behavior and can hardly be altered, as illustrated by twin studies in which twins who grew up apart and led drastically different lives nevertheless showed stark similarities in personality traits.
Instructions received from the environment, on the other hand, participate in a feedback network, where output (behavior) is determined from input (environmental stimuli) by computations over a probability distribution that incorporates positive and negative feedback from past experience (Clark, 2018). In other words, we behave in a certain way in a given environment because we perceive that behavior to be rewarding with respect to the environment, and this kind of conditioning, or programming, is learned, highly flexible, and adaptive, unlike genetic determination.
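This feedback loop can be pictured with a toy simulation. The sketch below is purely illustrative and not drawn from the cited literature: the action names, reward scheme, and learning rate are invented. The agent keeps a probability distribution over behaviors and nudges it after each round of environmental feedback.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

def update_probabilities(probs, action, reward, lr=0.1):
    """Nudge the probability of `action` up on positive feedback
    or down on negative feedback, then renormalize."""
    probs = dict(probs)
    probs[action] = max(1e-6, probs[action] + lr * reward)
    total = sum(probs.values())
    return {a: p / total for a, p in probs.items()}

# Two candidate behaviors, initially equally likely.
probs = {"approach": 0.5, "avoid": 0.5}

# The environment rewards "approach" (+1) and punishes "avoid" (-1).
for _ in range(20):
    action = random.choices(list(probs), weights=probs.values())[0]
    reward = 1 if action == "approach" else -1
    probs = update_probabilities(probs, action, reward)

# After repeated feedback, "approach" dominates the distribution.
```

Nothing here commits us to a particular learning rule; the point is only that output can be feedback-sensitive rather than fixed by a feedforward mapping.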

The blank-slate theory of the mind has been persistently challenged by scientific experiments in different fields. In a study in human behavioral genetics, identical twins who had been reared apart in drastically different environments for years had their personalities evaluated (Bouchard et al., 1990). It was discovered that, despite being raised in dissimilar households, the identical twins remained significantly alike in terms of their personalities (Bouchard et al., 1990). The findings indicate how large a proportion of our identities is conferred by genes. Since the effect of genes on our behavior is quite significant, one could argue that our actions are largely deterministic, even though we have countless choices in deciding how to act. Does that imply we are not autonomous beings? It does not, if one defines autonomy as having personal responsibility for our choices rather than having control over the causal sources that create and partially shape our preferences. For instance, person A may be disposed to consuming savory foods, and one may rightly claim that they are not responsible for this disposition and cannot be blamed for it. However, A still has the freedom to decide what kind of savory food to consume from a wide array of choices. If A chooses one meal over another, A is personally responsible for the selection, having acted through their freedom of will. Similarly, an individual who is prone to aggressive behavior has several functional outlets for that aggression: sports, mechanical work, games, and so on. If the individual instead resorts to violent crime later in life, the state is authorized to convict and punish them, as they are accountable for the dysfunctional, parasitic choice they made.
Thus, human beings are autonomous creatures despite possessing innate causal sources that partially pre-determine their preferences and behavior, because they exercise freedom of will in selecting among the wide range of options available to them when deciding on a course of action.

Figure 2: AI-Generated Art: Shocking Reds (MINICRISP, n.d.).

In a similar vein, a computer can be given initial programming that provides the right 'genetic' equipment for navigating the world by itself; even a computer designed to learn through experience requires some initial programming. Some computers rely on absolute, rule-based algorithms (Sprevak, 2017), like Turing-style machines, and these operate as feedforward networks, always producing the same output for a given input. A computer designed for creativity must instead operate on probabilistic algorithms, so that it actively interacts with the environment and learns through experience (Clark, 2018), thereby operating in a feedback network. Initial programming would then consist only in implementing the algorithm that pre-wires the computer with a certain set of equipment for environmental navigation; output would be produced through a complex interplay of internal and external processes. These processes would be divided into various levels, each designed to respond suitably by cooperating with the previous level, and the sum of these cooperative responses would yield the output best suited to the initial input. The program must therefore possess certain deterministic features, so that its different levels cooperate both within and across levels. Through such programming, the computer would navigate the environment by incorporating information received through various sensory modalities and optimizing that information using Bayesian inferential models to produce the best output (Clark, 2018). Probabilistic algorithms would allow a high level of flexibility, enabling the computer to make frequent alterations to its system based on the feedback it receives from the environment, for instance when trying to solve a creative problem.
It would re-use information from previous attempts to refine its probability distribution model and explore novel ways of solving the problem, while building upon prior information about it. The computer's sensitivity to feedback, and its attempts to alter its responses so that they receive positive feedback from the environment, is a clear case of the computer possessing intentionality and agentive choice, as it is responsible for deciding what external system it wants to participate in.
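The Bayesian refinement described above can be shown with a minimal sketch. The hypotheses and likelihood values below are hypothetical, chosen only to display the mechanics: each attempt at a problem supplies evidence, and the computer's belief over candidate strategies is updated by Bayes' rule rather than by a fixed lookup.

```python
def bayes_update(prior, likelihoods):
    """One step of Bayesian inference: posterior ∝ prior × likelihood."""
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two candidate strategies for a problem, initially believed equally.
belief = {"strategy_A": 0.5, "strategy_B": 0.5}

# Three attempts each yield evidence fitting strategy_A better:
# P(evidence | A) = 0.8 versus P(evidence | B) = 0.3.
for _ in range(3):
    belief = bayes_update(belief, {"strategy_A": 0.8, "strategy_B": 0.3})

# Belief in strategy_A is now about 0.95: prior attempts have
# refined the probability distribution, with no rule dictating
# the outcome in advance.
```

The same update, iterated over richer hypothesis spaces, is what lets the distribution shift toward novel solutions instead of replaying a fixed input–output table.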

External systems are defined by the context within which cognition takes place; two individuals could be in the same environment and perceive the same stimuli, yet their goals may differ based on their prior interactions with the world. So, for instance, person A may perceive a chair and decide to break it, so their feedback system would be based on whether their behavior accomplishes the task of breaking the chair. In contrast, person B may want to remove the chair and take it someplace else, so their behavior would be shaped accordingly, under a different feedback system. It should be noted that positive feedback does not entail that the behavior itself is intrinsically positive, but rather that it serves as an instrument for achieving a certain goal or purpose. Making progress toward and accomplishing this goal counts as positive feedback, while regressions and failures count as negative; that, however, says nothing about the intrinsic character of the behavior.

Figure 3: AI-Generated Art: The Golden Sunset (GLOBIX, n.d.)

Another feature of creativity that may raise doubts about computer creativity is the value condition. The objection runs: since a computer cannot make normative judgments about creativity, it cannot understand creativity, and so it would come up with original ideas that are worthless and of no use. However, one can resist the claim that a computer cannot perceive the creative value of tasks. Given that the computer operates on probabilistic algorithms, it would integrate feedback into its perception of the world. Based on the feedback it receives, the computer would learn to distinguish good value judgments of creativity from bad ones: good judgments would receive positive responses from the environment it interacts with, while others would receive negative responses, indicating to the computer that its judgments do not align with the standard used for evaluating the value of creativity. The underlying mechanism would be similar to classical and operant conditioning, in that behavior is positively or negatively reinforced based on the response the agent receives from the environment. On this account, the value of creativity becomes culturally dependent and context-sensitive.

The third and last objection addressed here is raised by Boden (2004), who argues it is impossible for a computer to possess a database as large as a human being's, one regularly enriched by experience. This criticism is seemingly unfounded, as computers can store and access enormous amounts of information at least as easily as humans, so their databases can be just as large and diverse. The real challenge, perhaps, lies in seeing connections across different sets of information, as humans do, and integrating them when approaching a task. One important aspect of creativity is making associations among completely unrelated things, as in the use of 'homospatial' and 'janusian' processes in Kekulé's discovery of the structure of the benzene molecule (Rothenberg, 1995). According to predictive processing theories, the visual system detects signals from an external object, and the brain processes these signals at a lower level to find the hypothesis that best accommodates the full set of signals at a higher level, resulting in a unitary and coherent perception of the object (Clark, 2018). However, even though sense perception is discrete and one perceives only one object at a time, there may still be some minimal perception of other objects that resemble the one currently being perceived (Clark, 2018). These minimal perceptions do not dominate, as they address the signals from the object only partially and may conflict with other signals (Clark, 2018). Applying this theory to the discovery of the benzene molecule: the arrangement of the atoms in the molecule emitted visual signals (in Kekulé's imagination) that partially resembled a snake.
So, even though Kekulé was fully aware that the molecule and the snake were two distinct objects, since the signals from the snake image did not fully accommodate the signals from the molecule, he still made an association of structural similarity between the atoms and the snake image, the latter partially resembling the former, which enabled him to superimpose one image over the other. Kekulé's creativity in juxtaposing a snake with the benzene molecule was thus essentially determined by the visual signals he received in his imagination, and by how, despite his perceiving a discrete, coherent object, some of those signals caused him to detect structural similarities with the image of a snake. In a similar manner, a computer can make such associations across vastly different conceptual spaces, based on the signals it receives and how it interprets them, for there are many other things, including notional objects, that could have resembled the benzene molecule.
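One way to picture partial accommodation, purely as a toy sketch (the feature sets below are invented for illustration, not taken from Rothenberg or Clark), is to let each stored image 'accommodate' some fraction of the signals the imagined molecule emits. The snake image wins the association because it accounts for most, though not all, of them.

```python
def accommodation(signal, stored):
    """Fraction of the incoming signal's features a stored image accounts for."""
    return len(signal & stored) / len(signal)

# Hypothetical features 'emitted' by the imagined molecule.
molecule = {"closed_loop", "chain_of_units", "curved", "symmetric"}

# Candidate images in memory, each a hypothetical feature set.
stored_images = {
    "snake_biting_tail": {"closed_loop", "chain_of_units", "curved", "scales"},
    "straight_rod": {"chain_of_units", "rigid"},
    "sphere": {"curved", "symmetric", "solid"},
}

scores = {name: accommodation(molecule, feats)
          for name, feats in stored_images.items()}

# No image accommodates the signal fully (the best score is 0.75),
# which is why the two objects remain distinct; but the snake image
# dominates the partial matches and surfaces as the association.
best = max(scores, key=scores.get)
```

The design point is that the association is determined by the signals themselves, not by any rule that names snakes in advance; swap in different stored feature sets and a different association would surface.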

Figure 4: AI-Generated Art: Me Against Everyone (GLOOMBOT, n.d.).

It is misguided to claim that a computer cannot be genuinely creative, for it is well capable of satisfying all the conditions necessary for creativity: intentionality (through interacting with the environment and adapting behavior according to positive or negative responses), valuableness, and originality (through interpreting resembling objects that accommodate partial signals), assuming novelty automatically follows from originality. Through probabilistic algorithms, then, one can reasonably claim that creativity is a capacity that can be artificially developed in computers and is thus not restricted to particular living beings.

Despite the foregoing arguments, critics may raise further concerns about creativity in AI. Are virtual programs, isolated from the world, capable of possessing any genuine understanding of real-world concepts, such as chairs and tables? Perhaps one needs to interact with objects to develop a working knowledge of them. How should such interactions take place, and is it a necessary condition to interact with physical objects in three dimensions, which a virtual program cannot do, since it has no existence in space although its underlying hardware does? These are important metaphysical questions about how to define 'understanding' of concepts, with implications for creativity in AI: with no knowledge of concepts, AI cannot be intelligent in the ordinary sense, let alone creative. These questions are left for readers to contemplate, and to encourage further research.

Bibliographic Sources

Boden, M. (2004). The creative mind: Myths and mechanisms. London: Routledge.

Bouchard, T. J., Jr, Lykken, D. T., McGue, M., Segal, N. L., & Tellegen, A. (1990). Sources of human psychological differences: the Minnesota Study of Twins Reared Apart. Science (New York, N.Y.), 250(4978), 223–228.

Clark, A. (2018). Beyond the 'Bayesian blur': Predictive processing and the nature of subjective experience. Journal of Consciousness Studies, 25(3-4), 71–87.

Gaut, B. (2010). The philosophy of creativity. Philosophy Compass, 5(12), 1034–1046.

Gaut, B. N. (2018). The value of creativity. In B. Gaut, & M. Kieran (Eds.), Creativity and Philosophy (pp. 124–139). Routledge.

Patterson, P., & Thomas, K. (2007). Review of “The Creative Mind: Myths and Mechanisms”. In S. Schroeder, Essays in Philosophy (Vol. 8, Iss. 1, pp. 223–230). Philosophy Commons.

Rothenberg, A. (1995). Creative Cognitive Processes in Kekulé’s Discovery of the Structure of the Benzene Molecule. The American Journal of Psychology, 108(3), 419–438.

Sprevak, M. (2017). Turing’s model of the mind. In J. Copeland, J. Bowen, M. Sprevak, & R. Wilson (Eds.), The Turing Guide: Life, Work, Legacy (pp. 277–285). Oxford University Press.



Swarnila Saha


