How to create a robot that has subjective experiences
The paradoxical machinery of conscious qualia
Creating a robot that experiences consciousness touches a centuries-old riddle with profound philosophical underpinnings. Few people have given it as much thought as David Chalmers, and though he has generally been favourable about the prospect, he has identified certain gaps and mysteries he expects us to encounter on the way there. In his book The Conscious Mind he lays out the key riddle, namely an impassable gulf between two types of mental events:
A phenomenal feature of the mind is characterized by what it is like for a subject to have that feature, while a psychological feature is characterized by an associated role in the causation and/or explanation of behaviour.
The problem of explaining these phenomenal qualities is just the problem of explaining consciousness. This is the really hard part of the mind-body problem.
Why should there be conscious experience at all? I argue that there is good reason to believe that almost everything in the world can be reductively explained; but consciousness may be an exception.
This post will fill in those gaps by first discussing how the two views of the mind described above are intertwined. We’ll demonstrate an example where people consistently make an error about their subjective experiences, something Chalmers believes is impossible. From there we’ll explain how subjective phenomena are created and shaped out of mechanical acts of interpretation and judgment.
(N.B.: In this post, the term “judgment” is used in the Chalmers/Kantian sense, to mean an evaluation, decision, or conclusion about how things are, rather than a moral judgment.)
Could a zombie write a book about qualia?
Chalmers was reacting to a trend in psychology that represented all human thinking as purely physical processes aimed at controlling behaviour, and excluded or devalued the subjective side of conscious experience. He referred to this attitude as the psychological explanation of the mind, which seeks to explain mental events in a causal sense — e.g. how your beliefs cause actions. He distinguished these from phenomenological explanations, which describe your subjective experiences in terms of their first-person qualities. These include the conscious quality of the colour red, the unpleasantness of pain, and the “feel” of a melody, which are generally called “qualia” (singular: quale), and which Chalmers describes in vivid detail:
In my environment now, there is a particularly rich shade of deep purple from a book on my shelf; an almost surreal shade of green in a photograph of ferns on my wall […] any color can be awe-provoking if we attend to it, and reflect upon its nature.
He further illustrates the distinction by describing a hypothetical entity called a zombie. A zombie (more commonly referred to as a philosophical zombie, or p-zombie) is an idea borrowed from Thomas Nagel.
A zombie is just something physically identical to me, but which has no conscious experience — all is dark inside. — Chalmers
It is purely an “automaton”, as we might say, but acts and is in every other way like a person. Most people’s understanding of AI, chat-bots, and robots situates those technologies squarely in this category. At best they give rise to zombies — though it is highly contested how far we have progressed towards truly replicating human psychology.
Our inability to bridge the gap between human consciousness and zombies, to explain our subjective experiences in terms of causal machinery, has come to be known as the “hard problem” of consciousness. A wall of separation seems to exist between the truths of subjectivity and the machinery of the brain. Can this gap ever be bridged? What does it take for a robot to experience subjectivity the way we do?
Chalmers hints at a crack in the wall when he highlights a paradox that arises from distinguishing between psychological and phenomenological explanations. If psychological explanations of mind are defined by their causal role in the physical world, then by implication phenomenological ones have no causal effect on the physical world:
What it means for a state to be phenomenal is for it to feel a certain way, and what it means for a state to be psychological is for it to play an appropriate causal role.
In other words, experiencing qualia should have no causal effect on a person’s behaviour. This is why the zombie, who has no such experiences, would be behaviourally indistinguishable from any other human, including Chalmers himself.
He then raises a question: would a zombie ever write a book about consciousness, as Chalmers himself did? How or why would this creature — who, you’ll remember, has no subjective experience of qualia — passionately defend the existence of qualia, describing subtle qualities and distinctions between various types? If it did write such a book, we would assume it was lying or deluded, and that the psychological causal chains, motives, thoughts, etc. that drove its mental states and actions must be distinct from Chalmers’ own. Chalmers refers to this as the “phenomenological paradox”:
It seems that consciousness is explanatorily irrelevant to our claims and judgments about consciousness. This result I call the paradox of phenomenal judgment.
He has no concrete solution to this paradox except to say that a solution can and must exist, since we know that consciousness really exists. Yet it is difficult to believe that a zombie’s many and varying opinions about qualia would have their cause or explanation outside its subjective experience. Intuitively, only a person with conscious experience would write such a book. Even if Chalmers had never written a book, the thoughts he has about qualia are not thoughts we would expect a zombie to have.
Is it possible for a robot that does not experience qualia to write a book on the topic “mechanically”, or unconsciously, and still believe itself to be completely honest? To do so would imply it has an intermediate form of phenomenal consciousness: just enough to experience qualia and write about them, but not enough to truly count as conscious. From your perspective, such a robot wouldn’t be any different from the other human beings you see around you, either conceptually or in reality. You would be compelled to call it conscious.
The phenomenological paradox isn’t so much a problem as an opportunity. It moves the subjective experience of consciousness out of the inaccessible, mysterious dimension where it has resided for centuries, and situates it as the possible cause of real physical events. This gives us a foot in the door: if we work backwards from the actions and thoughts to the subjective experiences of consciousness which seem to cause them, we may discover something about the nature of the phenomena themselves. And if we can figure out what kinds of mechanisms would lead a robot to process qualia in this way, and to honestly have such thoughts about them, we will have done everything we possibly can to confirm that the robot in question experiences consciousness.
The experience of qualia grows out of judgments
Assuming that neither Chalmers nor our hypothetical robot is lying, there are processes in both their minds that convert the experience of qualia into written words. Chalmers makes many specific claims about qualia, such as that the qualia of red are more similar to those of blue than to the qualia of auditory sounds. These are nuanced statements, which require his mind to make observations, comparisons, and judgments about the experience of qualia.
Such contemplations are not “free actions”. They are proper mental functions. Just as doing a math sum in your mind takes time, processing and interpreting qualia does too, if for no other reason than that converting the experience into explicit verbal facts must be a concrete mental action. Various entities in your mind must collaborate, in real time, to generate these conclusions. These processes will happen only at particular moments in time, right after you turn your “attention” to the subject matter. Once initiated, the processes take some time to complete, a few milliseconds at least, and will likely expend some energy. They may entail an emotional effort, as evidenced by Chalmers’ passion for the topic. You may even get upset when someone claims qualia don’t exist.
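To make that point concrete, here is a minimal sketch in Python of a judgment as a discrete process. Everything in it (the feature names, thresholds, and wording) is a hypothetical illustration of my own, not a model of the brain; the point is only that the verbal conclusion does not exist until an explicit sequence of steps has actually run, and that running them has a real, nonzero cost.

```python
import time

# Hypothetical stand-in for a current sensory state: just a small feature vector.
percept = {"hue": 0.98, "saturation": 0.85, "brightness": 0.60}

def introspect(percept):
    """Toy judgment routine: turn a raw sensory state into an explicit verbal claim.
    The claim does not exist until this function has actually executed."""
    start = time.perf_counter()

    # Step 1: attend -- select which features to examine.
    attended = {k: v for k, v in percept.items() if k in ("hue", "saturation")}

    # Step 2: categorize -- map the attended features onto a learned label.
    label = "red" if attended["hue"] > 0.9 else "not red"

    # Step 3: formulate -- convert the category into a reportable statement.
    report = f"I am experiencing a vivid shade of {label}."

    elapsed = time.perf_counter() - start
    return report, elapsed

report, elapsed = introspect(percept)
print(report)                                              # the explicit verbal fact
print(f"judgment took {elapsed * 1e6:.1f} microseconds")   # a small but real cost
```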
Chalmers considers this explanation for phenomenal consciousness, but still doubts whether “phenomenal judgments”, as he calls them, actually represent real consciousness itself:
When I introspect, I find sensations, experiences of pain and emotion, and all sorts of other accoutrements that, although accompanied by judgments, are not only judgments.
As a reader, you may also be wondering if the act of contemplating qualia has anything to do with the conscious experience itself. Perhaps translating your experiences of qualia into linguistic descriptions is only ancillary to the direct experience; a follow-on action, after the real experience is complete. Or perhaps there are parallel mechanical processes in your brain that echo your phenomenological experience, copying it like a shadow.
The best way to demonstrate that your actual experience of qualia itself is derived from such acts of judgment and interpretation is to show an example where you would make an error in judgment about qualia, and how that error affects your subjective experience. This may at first seem impossible; surely there is no way to make a false judgment about something as immediate and directly known as your conscious experiences. Chalmers echoes your sentiment:
In every case with which we are familiar, conscious beings are generally capable of forming accurate judgments about their experience, in the absence of distraction and irrationality.
But even the simplest, most direct understanding of qualia regularly results in common errors. Take, for example, one of the most abiding beliefs about qualia: that they are consistent over time. When we refer to “the qualia of the colour red” we imply that they exist as a distinct object or property in their own right. This is why Chalmers is able to describe a hypothetical scenario he calls “inverted qualia”:
It seems entirely coherent to imagine two such creatures that are physically identical, but whose experiences of A and B [qualia of colour] are inverted.
His hypothetical only makes sense if the qualia of red in a given person are consistent over time; otherwise, what is it you are inverting? How would you know you had inverted it?
Now consider the following:
How do you know that the qualia of the colour red are the same across different moments in time?
Most people find this question intuitively easy to answer. They merely turn their gaze inwards, look at various experiences of red in their thoughts, and see that the qualia are in fact the same. Of course, since this comparison is between two different moments in time (past and present), you must resort to at least one memory of the colour from the past. You can’t actually make yourself experience red at a past time. You can only recall, or re-experience a past memory of the colour red, in the present, and compare it to, say, a red object you see in front of you.
So even as a recalled memory of the past, it is still an experience of qualia in the present. Therefore your comparison is always between two experiences of qualia in the present moment, not one in the past and one in the present. This makes it impossible to know if the qualia of red are the same at two different times — or if the question even has any meaning.
Chalmers’ “inverted spectrum” argument is therefore incoherent, since for all you know, your own spectrum gets inverted between noon and 1:00 pm every day, and you’d never realize it. Clearly any belief that the qualia of red are the same across time is an error, or at least unjustified. Instead, when comparing them you only judge them as the same. The nervous signals from your eyes may be the same or similar, and this may force you to interpret them as equal, but your judgments about qualia are created during the act — and within the machinery — of interpretation.
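The structure of that argument can be spelled out in a short sketch (Python; every name, number, and data structure here is a hypothetical toy of my own, not a claim about how the brain encodes colour). Notice that the comparison function never receives a past quale as an input. It only ever receives two present-moment encodings, a current percept and a current reconstruction from memory, so “same as before” is a verdict the machinery produces, not a fact it reads off the past.

```python
from dataclasses import dataclass

@dataclass
class Encoding:
    """A present-moment internal representation of a colour (toy feature vector)."""
    hue: float
    saturation: float

def perceive_red() -> Encoding:
    """Encode the red object currently in front of you."""
    return Encoding(hue=0.98, saturation=0.85)

def recall_red() -> Encoding:
    """Reconstruct 'the red I saw before' -- but the reconstruction happens now.
    There is no way to hand the original, past experience to the comparator."""
    return Encoding(hue=0.97, saturation=0.83)

def judge_same(a: Encoding, b: Encoding, tolerance: float = 0.05) -> bool:
    """The 'sameness' judgment: nothing but a similarity test between two
    present-moment encodings."""
    return (abs(a.hue - b.hue) < tolerance
            and abs(a.saturation - b.saturation) < tolerance)

# Both inputs exist now; the past itself is never inspected.
print(judge_same(perceive_red(), recall_red()))  # True -> "red is the same as it was"
```

If the memory system quietly re-encoded red differently between noon and 1:00 pm, judge_same could still return True, and nothing downstream would ever notice; this is exactly the inversion scenario described above.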
And yet we think we can know this, and feel convinced that the qualia of red are consistent over time. Our conviction about their objective consistency may even be one of the strongest reasons to believe that qualia are a real metaphysical property in their own right, outside our subjective judgments. Why are we so convinced of this, when simple reasoning shows it’s impossible to know? Because our mechanisms of judgment concluded as much, and we have no other source of knowledge about qualia than our conclusions about them.
The mechanism in this example is simply faulty, and will continue to give wrong answers long after you have seen through its deception; you are predestined to be wrong about qualia. This example demonstrates how the function of judgment, through which you decide what qualia are, plays a formative role in the nature of subjective experiences themselves.
Direct knowledge of qualia
To say that red “has qualia” that can be compared implies that it always has the same quality. And that is what Chalmers relies on: our subjective judgment that qualia are consistent entities, things that “exist” and can be learned about as “facts”. But qualia are subjective — isolated to a given mind in a given moment. The only way for a person to ever know anything about them is through the function of introspective comparison. There is no other way. The conclusions you reach become the entirety of what you know and believe, indeed of what can be known and believed, about qualia. Even your insistence that qualia can’t be explained mechanically arises from a judgment that produced that “knowledge”, and whose roots and sources should themselves be questioned.
Chalmers recognizes that judgments may be physically determined, but insists that we still somehow know, axiomatically, that consciousness exists as an entity in itself. It is a mantra he repeats perhaps a dozen or so times in his book:
On the face of it, we do not just judge that we have conscious experiences; we know that we have conscious experiences.
I know I am conscious, and the knowledge is based solely on my immediate experience.
Our experience of consciousness enables us to know that we are conscious.
There is nothing we know about more directly than consciousness. [emphases added]
Behind this dogmatic claim lies a hidden assumption: that there is a secondary, free-floating agent that experiences qualia and has unconstrained (free-will) opinions about those experiences. I say “free will”, since nothing in its various acts of belief is predetermined to arrive at a specific conclusion — and it is always right. This agent is able to experience qualia without any organ of perception; it can know about qualia without any cognitive functions of knowledge and memory; it can contemplate them without the machinery of reasoning; and it can judge their existence and properties without the usual motives that drive judgment. It can even introspect without any physical brain to look into.
It has direct, unquestioned access to the truth of qualia, leaping over all the mundane requirements of cognition that psychological explanations are obliged to account for in every other part of our mental lives:
The ultimate explananda [things to explain] are not the judgments but experiences themselves. No mere explanation of dispositions to behave will explain why there is something it is like to be a conscious agent.
Chalmers is confident beyond a doubt that we know our conscious experiences well, outside of any function of judgment. Yet as we saw, the fallibility of our judgment extends even to our innermost experiences. Because it is difficult to spot errors about consciousness from outside the subject, such knowledge has so far escaped critical scrutiny. Only its own inconsistencies can reveal our errors to us.
There was a clue in Chalmers’ book all along. The author borrows Nagel’s definition of subjective phenomenal experience, which is that it is “like” something:
A mental state is conscious if there is something it is like to be in that mental state. To put it another way, we can say that a mental state is conscious if it has a qualitative feel — an associated quality of experience.
Although this expression is usually a colloquial turn of phrase, it has two faces, the first of which tries to cover up and hide the second. On the one hand, it suggests that there is a constant “it” — consciousness — and that the question of what “it” is like has an objective answer, even if that answer is inexpressible in words. This obscures its other, hidden face: “what it is like” is an act of comparison, a function a mind can perform, whose meaning comes from the judgment made. And as we saw, that comparison can be erroneous. If a robot or a person made a mistake in their judgment — say they thought that red “was like” blue — the entirety of their experience of qualia would be altered.
Qualia (as well as consciousness itself) are like beauty, in that there is nothing outside your evaluations of beauty for you to point to and say, “there is beauty”. They are momentary judgments, and have only as much consistency as those judgments compel you to grant them¹. If you make a “bad” judgment about beauty somehow, such as when you are under the influence of psychoactive drugs, then beauty itself is altered, because there is nowhere else for it to be found except in your judgment².
In the same way, qualia have no existence outside your momentary interpretation. Their existence depends on you looking for them; by definition you are never aware of qualia unless you attend to them. The meaning you give them is their meaning. The existence you ascribe to them is their existence. The object of qualia is nowhere to be found. Raw nervous signals from the eye don’t contain qualia — they are electrical impulses. Qualia must therefore be a part of the interpretation, not the signal. They gain all their features as your mind recalls, judges, and compares them. And even if qualia did somehow come packaged with the nervous impulses, there would still have to be some machinery at the receiving end to interpret those “attached” properties and convert them into beliefs about qualia. You must look “behind” the mind’s eye, so to speak, and ask what kind of interpretive process could produce such a set of judgments.
The question of the content of a [phenomenal] judgment is not so clear, precisely because it is not clear what role consciousness plays in constituting the content of a phenomenal belief. — Chalmers
That is what is missing in the quote above. The judgment is not constituted by the phenomena. The judgment is the phenomena.
Chalmers raises many other philosophical questions related to qualia, such as the irreducible nature of subjectivity and the topic of epiphenomenalism. There is even a sense in which his view is compatible with the contents of this post, which does not attempt to address his entire 400-page book’s worth of questions. In the interest of keeping things focused and high-level, I have separated a few of those topics into an appendix here, for those who are interested.
Chalmers, in a sense, foresaw many of these arguments. Ultimately, he rejects them based purely on the sheer fact of our experience. The fact that we cannot deny to ourselves that we have conscious experience leaves all reductionist explanations seeming unsatisfactory. And he is right: they are unsatisfactory. But this points to the inevitable truth that we, as humans, can never step out of our own minds, nor ignore the machinery of beliefs that resides in our skulls. No matter how many times I tell you that you are not hungry, you will still want food. Similarly, your mind cannot simply accept an objective answer to a subjective question. It cannot convert into a “fact” what is really a motive:
Every subjective phenomenon is essentially connected with a single point of view, and it seems inevitable that an objective, physical theory will abandon that point of view. — Nagel, What is it like to be a bat?
Your brain is driven to look for answers to these questions, despite the fact that the questions may be self-contradictory. They will recur with regularity, and you will tie yourself in knots trying to satisfy them. The existence of qualia is not a brute fact, as Chalmers puts it, but the satisfaction of a need, a need to grasp the world, to grasp experience itself, to feel you know. When cracks start to form in your established understanding, you pursue new research and new answers until you feel “satisfied”:
[Eliminativism] is the kind of “solution” that is satisfying only for about half a minute. When we stop to reflect, we realize that all we have done is to explain certain aspects of our behavior.
This view can be satisfying only as a kind of intellectual cut and thrust. At the end of the day, we still need to explain why it is like this to be a conscious agent. An explanation of behavior or of some causal role is simply explaining the wrong thing. This might seem to be mule-headed stubbornness, but it is grounded in a simple principle: our theories must explain what cries out for explanation. [emphases added]
That word, “satisfying”, carries more weight than Chalmers realizes. It points to his hunger for an answer, one that will let him sleep at night. The truth he finds is a reflection of the hunger that searches for it.
Whether you yourself choose to accept these arguments or reject them is entirely up to you; or more accurately it is up to the complex processes driving your individual judgment. If the machinery of your mind has concluded that there is a magic je ne sais quoi to consciousness, I am not inclined to push it in any other direction. We have, however, dealt with every other objection to the possibility that robots can replicate subjective consciousness.
The second part of this post will assume that the reader does not require anything above and beyond these experiences, judgments, and beliefs to conclude that a given robot is conscious. We will break down qualia into their many interpretive actions and causes. The conscious robot should come to believe, of its own accord, that qualia exist, and learn to make the right correlations: consistently connecting the interpretation of red qualia to other red qualia, judging that red and blue are more similar than red and a C-major chord, associating red with passion, and so on. This requires a new language and framework for discussing the processes related to qualia and conscious experience.
To be clear, the bar is quite high here; we are not talking about a robot that imitates human behaviour by simply saying it is conscious, or that has such beliefs hard-coded into its mind. I mean that we understand in detail what it means for humans to interpret qualia subjectively, and then replicate that as a roughly deterministic process in a robot. The point is to take qualia seriously and in explicit detail, without either obscurantism or reductionism.
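As a rough preview of the framework the second part will need, here is a deliberately simplified sketch in Python. Every representation, number, and function name below is a placeholder of my own invention, not a proposal from Chalmers or a description of any existing system; it only illustrates the shape of the requirement, namely that the robot’s judgments about its own states (similarity, association, report) are where its qualia would live.

```python
import math

# Hypothetical internal representations: each experience type is a feature vector
# in the robot's own state space (colour channels, auditory channels, affect).
STATES = {
    "red":           [1.0, 0.1, 0.0, 0.0, 0.7],
    "blue":          [0.1, 1.0, 0.0, 0.0, 0.2],
    "c_major_chord": [0.0, 0.0, 1.0, 0.9, 0.4],
}

# Learned associations between experiences and other concepts.
ASSOCIATIONS = {"red": ["passion", "warning"], "blue": ["calm"]}

def similarity(a, b):
    """Cosine similarity between two internal states: the basis of 'x is like y'."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

def judge_likeness(x, y, z):
    """First-person-style report: is my experience of x more like y or like z?"""
    if similarity(STATES[x], STATES[y]) > similarity(STATES[x], STATES[z]):
        return f"My experience of {x} is more like {y} than like {z}."
    return f"My experience of {x} is more like {z} than like {y}."

print(judge_likeness("red", "blue", "c_major_chord"))
print(f"Red makes me think of {', '.join(ASSOCIATIONS['red'])}.")
```

Hard-coding the vectors like this is, of course, exactly the shortcut ruled out in the previous paragraph; in a serious attempt, these representations and the judgment functions over them would have to emerge from the robot’s own perceptual and learning history.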
¹ It was not so long ago that philosophers thought beauty was part of the physical world, instead of in the eye of the beholder. Perhaps the same mistake is presently being made regarding qualia.
² Chalmers notes:
The phenomenal judgments of my zombie twin, by contrast, are entirely unreliable; his judgments are generally false. […] Despite the fact that his cognitive mechanisms function in the same way as mine, his judgments about consciousness are quite deluded.
I’d argue that since qualia are invented by your judgment, it is not really possible to be wrong about qualia, any more than it is possible to be wrong about a fictional character you have concocted. You may only be inconsistent.