A Critique of Artificial Consciousness as Motivation and Volition

Abstract

The aim of this paper is to critically examine the criteria for AI consciousness proposed by Jacob Su in his recent paper, “Consciousness in Artificial Intelligence: A Philosophical Perspective Through the Lens of Motivation and Volition.” First, I argue that Su’s definition of consciousness is inaccurate, being unable to contend with so-called “marginal cases” (Gruen, 2021). I then contend that for Su’s criteria of consciousness to stand, they require not only a robust defense of libertarian free will but also an explanation of how consciousness without motivation is possible. I examine the suitability of each premise in Su’s criteria as a benchmark for consciousness, as well as a general problem facing any attempt to determine consciousness in AI or any other being purported to have it. After presenting these critiques, I elaborate on the inherent ambiguity of consciousness, as highlighted by the Hard Problem of Consciousness and the Chinese Room Argument, and argue that any attempt to classify an AI as conscious will ultimately prove challenging. Finally, I construct a schema that any successful criteria of consciousness must satisfy.

  1. Introduction

In a world of emerging AI technology, one of the concerns that necessarily arises is the problem of AI consciousness: what it is, what it looks like, and what it would entail. This paper mainly addresses the first of these concerns as a response to Su’s articulation of what conscious AI is. First, he establishes his own definition of consciousness, which is simply self-awareness (2024, p. 2). Given this, he then spends the rest of the paper constructing what I will term the Consciousness Criteria, or CC, two criteria that would presumably allow for the identification of consciousness in AI:

Pse The AI must be able to be aware of having subjective emotional experiences. 

Psm The AI must be able to autonomously and freely choose how it itself is to be motivated.

Both conditions are, as we will see, subject to internal discrepancies, and they are difficult to defend without some prior commitment to controversial and still debated premises.

  2. Redefining Consciousness

A precise definition is crucial when trying to identify consciousness in a being, yet the matter is addressed only briefly in Su’s paper. First, Su rejects a definition of consciousness based on sentience (rightly, I may add) for being overly broad and imprecise. He then proposes the alternative definition that he adopts: a conscious being is “aware of not only their thoughts, feelings, and senses but also that they are aware of their awareness” (2024, p. 2). Succinctly stated, consciousness is defined as self-awareness. Su further justifies this definition by claiming that it offers a “unique relationship to other phenomena of the human experience and… opens up paths to more exciting discussions and implications.” While this may be the case, it does not provide explanatory grounds upon which his definition ought to be adopted. Nevertheless, irrespective of the rationale behind it, the proposed theory of consciousness can be classified as a higher-order theory of consciousness, or more specifically, a Higher-Order Thought theory (HOT). HOT theories can further be categorized as either actualist or dispositionalist, though it is unclear into which of these camps Su’s definition falls (Carruthers & Gennaro, 2023). Either way, as I will proceed to demonstrate, there are troublesome implications for his definition.

One key objection raised in Su’s paper, the “marginal case” objection, is that animals appear intuitively conscious and yet lack self-awareness. In response, Su suggests that animals have self-awareness on a “basic” level, being aware of their own biological condition and mental state; for example, they can recognize that they are hungry or that they are experiencing pain or pleasure. On this basis, he concludes that animals have consciousness, motivation, and volition in spite of lacking the “complex decision-making” faculties possessed by humans (Su, 2024, p. 4). However, in claiming that animals have consciousness as defined by the HOT, he seems to conflate first-order “awareness” with second-order “self-awareness.” Self-awareness is distinct from awareness in that it is awareness of one’s awareness; yet when an animal is hungry, it is not aware of its own hunger; it is simply hungry, with no further second-order mental process granting it awareness of its own predicament (Byrne & Whiten, 1988, 1997; Povinelli, 2000).

In his paper, Su gives the example of a dog signaling its owner for food as evidence of the dog’s supposed self-awareness. But this example merely restates the behavior the objection calls into question. When a dog signals its owner for food, it is not aware of being hungry: the signaling may be the result of instinct developed over time, in which case no awareness is required, or the dog may simply feel its hunger (a first-order state) and, through inductive processes, arrive at the action of signaling its owner for food. At no point in the process does any kind of second-order “self-awareness,” that is, thinking about the state of hunger itself, occur.

Since there is uncertainty regarding animal self-awareness, and since we intuitively take animals to be incapable of it, the burden of proof falls on those who advocate for animal self-awareness. Yet Su does not substantiate the claim that animals are capable of self-awareness; he merely asserts that they have some kind of first-order awareness. It therefore seems misguided to define consciousness as self-awareness. Furthermore, even if one were to “bite the bullet” in this case and accept that animals are not conscious, one must also do the same with other “marginal cases,” perhaps the most difficult of which is the case of infants. Studies measuring self-awareness in infants have found that before 15 months of age infants are largely “unself-aware” (Lawrie, 2018; Lewis & Brooks-Gunn, 1979; Moore et al., 2007, p. 169). To assert that infants younger than 15 months are somehow unconscious would be a highly contentious claim with substantive ethical implications: if one grants that only conscious beings have moral worth, then such a claim would entail that infants (and animals) have no moral worth, possessing the same moral status as trees or rocks.

  3. Challenges to the CC

Now that we have examined the definition of consciousness, we can proceed with explicating the relationship between consciousness and AI. 

The CC’s first premise is Pse, defined as the ability not only to have subjective emotional experiences but to be aware of having them. This forms the basis upon which Psm can operate, where motivations are formed on the basis of the subjective experiences acquired under Pse. However, for any motivation derived from Pse to, as Su describes, “originate wholly from ourselves” (Su, 2024, p. 4), a robust defense of libertarian free will must first be provided; otherwise, true self-motivation is unachievable, and the argument presented is an argument from ignorance (Shatz, 1987).

Consider an AI that is created and trained on data from the entire internet, disregarding internet regulations and censorship. Such an AI would necessarily inherit the bias and partiality of internet users: around 60% of internet users are between 25 and 44 years old, and around half of all users come from only five countries (China, India, the United States, Indonesia, and Pakistan) (Internet Users by Country 2024, n.d.; Petrosyan, 2024a, 2024b). All this is to say that an AI model trained on any data set will be subject to that data set’s inherent prejudices. Under such circumstances, the question arises whether the AI is in fact “choosing” its own course of action when it decides to perform an act, or whether the act was preconditioned by its training data. An analogy can be drawn with humans, whose “training data” could in some sense be conceived of as our genetic material and, to a further extent, the empirical data gathered through our senses.

In both the AI case and the human case, the ways in which the subject responds to stimuli would be heavily, if not completely, shaped by internal and external coercive factors, and would therefore not be the result of the autonomous judgment Psm requires. Even if one accepted that some of our decisions over minor trivialities, such as choosing between a red apple and a green one, were truly free, whatever that may entail, it would be difficult to maintain that so reductive a notion of autonomous motivation could serve as a factor in determining consciousness. A comprehensive argument for determinism, or for any other non-libertarian position on free will, is outside the scope of this paper; however, without a sound defense of free will, it is hard to see why one should accept Psm.
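
To make this worry concrete, the following minimal sketch (in Python, using an entirely hypothetical toy corpus rather than any real system or data set) illustrates the sense in which a trained model’s “choices” are fixed by the statistics of its training data: retrain it on differently skewed data and its “preferences” shift accordingly.

```python
# Minimal sketch, not a real AI system: a toy "model" whose "preferences"
# are nothing more than the frequencies found in its training data.
from collections import Counter
import random

def train(corpus):
    """'Training' here is simply counting how often each answer occurs."""
    return Counter(corpus)

def choose(model):
    """The model's 'choice' is sampled in proportion to the training counts."""
    options, weights = zip(*model.items())
    return random.choices(options, weights=weights, k=1)[0]

# Hypothetical, deliberately skewed corpus standing in for a biased data set.
skewed_corpus = ["opinion_A"] * 90 + ["opinion_B"] * 10

model = train(skewed_corpus)
print(choose(model))  # roughly 90% of the time the "choice" is opinion_A:
                      # the output is preconditioned by the data, not freely chosen.
```

Nothing in this toy example settles the free-will question, of course; it only makes vivid why, absent a defense of libertarian free will, the AI’s behavior looks like an artifact of its data rather than an autonomously chosen motivation.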

The CC’s second premise is Psm, defined as the ability of a being to autonomously decide its own motivations. This claim, however, faces the obstacle of explaining the “marginal case” of conscious beings who are unmotivated or who never come to be motivated.

Although consciousness makes motivation possible, it does not necessitate motivation. Hence, if a conscious being is unmotivated, such as someone who is, as Su describes, “depressed or desireless,” then it seems problematic to claim that an AI must display outwardly motivated behavior in order to be categorized as conscious. Much like my objection in §2, conscious beings may have the potential for self-awareness and motivation yet never realize it. Suppose that an infant dies before it reaches 15 months of age. During the period that the infant was alive, we would almost certainly consider it to be conscious, yet it was unable to autonomously decide its own motivation; on Su’s account this follows, since he lists self-awareness as a prerequisite for motivation (Su, 2024, pp. 3–5). The same can be transposed onto an AI example. Suppose an AI is in the middle of its training process, and we terminate that process just before the instant at which it would achieve the ability to autonomously motivate itself (whenever that may be). It seems arbitrary and ad hoc to deem the resulting AI not conscious merely because it never truly achieved the ability to self-motivate.

  4. The Indefinability of Phenomenal Consciousness

Having discussed local objections to Su’s CC, I will now sketch a more general problem facing the project of defining consciousness, in order to justify my critique of Su’s claims. The most pressing question that any criteria for consciousness must answer is the hard problem of consciousness, first articulated by David Chalmers (Van Gulick, 2022). As Su explains, the hard problem entails “explaining why… brain activity gives rise to a subjective, qualitative experience at all” (2024, p. 2). Some philosophers deny the existence of this problem, among them, in an example given by Su, Daniel Dennett. Dennett subscribes to an illusionist view of consciousness, in essence making the physicalist claim that consciousness is simply a set of material neurological processes. He contends that the hard problem can be solved purely by solving the easy problems of consciousness, i.e., how specific brain functions allow for perception, learning, or cognition (2013, 2017). However, as Chalmers notes, these claims can be disputed using a Moorean argument: the existence of phenomenal consciousness is more intuitively compelling than any merely illusionist explanation of it. In his own words, “[W]hat is needed is an explanation of how having a mind without phenomenal consciousness could be like this, even though it is not at all the way that it seems” (Chalmers, 2020).

This having been established, we can conclude that any appropriate benchmark for determining consciousness must be able to contend with the hard problem of consciousness, i.e., it must give an account that allows for qualia in conscious experience.

Another related problem for any criteria of consciousness is the Chinese Room Argument, originally proposed by John Searle. In concise form, it runs as follows:

 Imagine a native English speaker, let’s say a man, who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols that are correct answers to the questions (the output). (Keil & Wilson, 1999, p. 115)

In this case, while it may appear to external observers that the room outputs Chinese in a manner suggesting that the mechanism within understands Chinese, this is not actually the case. The purpose of the thought experiment is to demonstrate the impossibility of categorizing a being as conscious or unconscious based merely on observed behavior. The room in the argument could be replaced with a supposedly conscious AI program, perhaps one that displays self-awareness, subjective opinions, and emotion, yet such an AI would hardly be considered conscious; all the while, from an external perspective, there would be no good reason to deny the consciousness of such an AI (at least by Su’s criteria). Any criteria that determine consciousness on the basis of such outwardly displayed factors, as Su’s do, must be able to demonstrate, by some other means, the authenticity of any such displayed consciousness.
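
The structure of the argument can be made vivid with a deliberately trivial sketch (in Python; the symbol strings and the rule book below are hypothetical placeholders, not part of Searle’s original): a program that produces apparently appropriate Chinese output purely by rule-governed lookup, with no understanding anywhere in the system.

```python
# Minimal sketch of Searle's room: the "program" is a rule book (a lookup table),
# and the "man" merely applies it. The entries below are hypothetical placeholders.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thank you."
    "今天天气怎么样？": "今天天气很好。",  # "How is the weather?" -> "The weather is fine."
}

def chinese_room(symbols_in: str) -> str:
    """Return the 'correct answer' by blind rule-following; nothing in this
    function understands Chinese, yet an outside observer sees fluent replies."""
    return RULE_BOOK.get(symbols_in, "对不起，我不明白。")  # fallback: "Sorry, I do not understand."

print(chinese_room("你好吗？"))  # a fluent-looking reply, produced with zero comprehension
```

However sophisticated the rule book becomes, observable behavior alone cannot tell us whether anything on the inside understands; this is precisely the gap that behaviorally assessable criteria such as Su’s cannot close.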

Inevitably, these problems weaken the position of any criteria for determining consciousness, a point which Su addresses: “[c]onsidering both the Chinese Room Argument and this, how can we determine if an AI can truly experience motivation or consciousness, or just simulate it? … Ultimately, even if we can only simulate motivated and conscious behavior in AI, it still remains a meaningful accomplishment” (2024, p. 6). However, Su’s point here seems contradictory and counterproductive: if he seeks to construct a framework for understanding and diagnosing consciousness, yet any such diagnosis could be the result of simulated consciousness, with no way of knowing whether it is simulated or not, then the very motivation for constructing such a framework can be questioned.

  5. A Schema of Consciousness Criteria

Now that we have addressed some of the obstacles that must be treated when producing guidelines for consciousness, I will define a schema that any successful criteria of consciousness, specifically AI consciousness, must satisfy. Successful criteria of AI consciousness must be able to:

  1. Provide an adequate and precise definition of consciousness, in alignment with the preexisting intuitive framework of consciousness.

  2. Prove that a given AI fits the definition laid out above, beyond reasonable doubt.

Why these two criteria? First, one of the main implications of consciousness, especially in the field of AI, is its ethical significance: if we were to discover that an AI was conscious, our moral responsibility towards that AI would change dramatically. However, if some constructed criteria for consciousness were to skew our moral duties so as to bring them into conflict with our moral intuitions (in the most general sense of the term), it would be highly improbable for such a theory to impose its new ethical construct on any widely accepted scale. Second, because of the aforementioned problems of the Chinese Room Argument and, more comprehensively, the hard problem of consciousness, any criteria of consciousness ought to be capable of addressing these problems. Although the intuitive framework invoked in criterion (1) may not be able to solve these issues, it is the predominant framework around which our moral (and legal) systems are built; the burden therefore falls on whoever challenges this pre-established intuitive ethical framework to justify that challenge.

As they stand, Su’s criteria of consciousness fail on both counts: they do not provide an adequate and precise definition of consciousness able to contend with marginal cases, and, regardless of whether Su’s description is accurate, they are unable to overcome the challenges described in §4. They therefore cannot prove, beyond reasonable doubt, that a given AI fits any given definition of consciousness, including his own.

  6. Discussion

In this paper, I investigated the argument presented by Jacob Su that, for an AI to be considered conscious, it must both be able to have subjective emotional experiences and have the ability to self-motivate. I then illustrated some inconsistencies and faults in these criteria, first by questioning his definition of consciousness and then by arguing that the criteria rest on undefended premises and contain internal contradictions. Additionally, I outlined some general issues with delineating consciousness. Finally, I proposed a schema for future attempts at creating guidelines for categorizing consciousness and established that Su’s criteria are unable to meet the requirements it specifies.

While I consider Su’s attempt at defining consciousness to be flawed, it nonetheless informs future endeavors in the philosophy of mind, allowing for the construction of more sophisticated theories of consciousness. A question that must be addressed when studying consciousness, then, is the ultimate purpose of such an endeavor. If AI can emulate the outward behavior of a conscious being to an advanced degree, then on a practical level there would seldom be a difference between a “truly” conscious being and a “pseudo-conscious” one. Alarmingly, AI-created content is becoming progressively more difficult to identify as the technology advances (Casal & Kessler, 2023; Fleckenstein et al., 2024; Pocol et al., 2024). This raises the question: what continues to justify the search for the “true nature of consciousness”?




References


Byrne, R. W., & Whiten, A. (1988). Machiavellian intelligence: Social expertise and the evolution of intellect in monkeys, apes, and humans. Clarendon Press.

Byrne, R. W., & Whiten, A. (1997). Machiavellian intelligence II: Extensions and evaluations. Cambridge University Press.

Carruthers, P., & Gennaro, R. (2023). Higher-order theories of consciousness. In E. N. Zalta & U. Nodelman (Eds.), The Stanford Encyclopedia of Philosophy (Fall 2023). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/fall2023/entries/consciousness-higher/

Casal, J. E., & Kessler, M. (2023). Can linguists distinguish between ChatGPT/AI and human writing?: A study of research ethics and academic publishing. Research Methods in Applied Linguistics, 2(3), 100068. https://doi.org/10.1016/j.rmal.2023.100068

Chalmers, D. (2020). Debunking arguments for illusionism about consciousness. Journal of Consciousness Studies, 27(5–6), 258–281. https://philpapers.org/rec/CHADAF-2

Dennett, D. C. (2013). Intuition pumps and other tools for thinking. W. W. Norton & Company.

Dennett, D. C. (2017). Consciousness explained. Little, Brown.

Fleckenstein, J., Meyer, J., Jansen, T., Keller, S. D., Köller, O., & Möller, J. (2024). Do teachers spot AI? Evaluating the detectability of AI-generated texts among student essays. Computers and Education: Artificial Intelligence, 6, 100209. https://doi.org/10.1016/j.caeai.2024.100209

Gruen, L. (2021). The moral status of animals. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2021). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/sum2021/entries/moral-animal/

Internet users by country 2024. (n.d.). Retrieved August 9, 2024, from https://www.datapandas.org/ranking/internet-users-by-country

Keil, F. C., & Wilson, R. A. (1999). The MIT encyclopedia of the cognitive sciences (MITECS). The MIT Press. https://doi.org/10.7551/mitpress/4660.001.0001

Lawrie, M. (2018). Measuring early emergence of self-awareness in infants using eye tracking. Theses, Dissertations and Culminating Projects. https://digitalcommons.montclair.edu/etd/139

Lewis, M., & Brooks-Gunn, J. (1979). Toward a theory of social cognition: The development of self. New Directions for Child and Adolescent Development, 1979(4), 1–20. https://doi.org/10.1002/cd.23219790403

Moore, C., Mealiea, J., Garon, N., & Povinelli, D. J. (2007). The development of body self-awareness. Infancy, 11(2), 157–174. https://doi.org/10.1111/j.1532-7078.2007.tb00220.x

Petrosyan, A. (2024a, June 27). Global internet users age distribution 2024. Statista. https://www.statista.com/statistics/272365/age-distribution-of-internet-users-worldwide/

Petrosyan, A. (2024b, July 30). Internet usage worldwide. Statista. https://www.statista.com/topics/1145/internet-usage-worldwide/

Pocol, A., Istead, L., Siu, S., Mokhtari, S., & Kodeiri, S. (2024). Seeing is no longer believing: A survey on the state of deepfakes, AI-generated humans, and other nonveridical media. In B. Sheng, L. Bi, J. Kim, N. Magnenat-Thalmann, & D. Thalmann (Eds.), Advances in Computer Graphics (pp. 427–440). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-50072-5_34

Povinelli, D. J. (2000). Folk physics for apes: The chimpanzee’s theory of how the world works. Oxford University Press.

Shatz, D. (1987). Free will and the structure of motivation. Midwest Studies in Philosophy, 10(1), 451–482. https://doi.org/10.1111/j.1475-4975.1987.tb00551.x

Su, J. (2024). Consciousness in Artificial Intelligence: A Philosophical Perspective Through the Lens of Motivation and Volition. Critical Debates in Humanities, Science and Global Justice, 3(1). https://criticaldebateshsgj.scholasticahq.com/article/117373-consciousness-in-artificial-intelligence-a-philosophical-perspective-through-the-lens-of-motivation-and-volition

Van Gulick, R. (2022). Consciousness. In E. N. Zalta & U. Nodelman (Eds.), The Stanford Encyclopedia of Philosophy (Winter 2022). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/win2022/entries/consciousness/