As a noun in the English language, ‘consciousness’ suggests an entity or state, which ought to be classifiable ontologically. It is an ambiguous term, however, with several distinct referents. It can mean, for example, wakefulness as opposed to sleep or coma. It can also refer to the contents of that wakeful state of awareness: actual experience in contrast to physical or mental capabilities. Most importantly, consciousness can be described either first-personally or third-personally—from the inside (as the experience of a given mind) or from the outside (as described by another mind that observes associated behavior). The apparent irreconcilability of these perspectives has vexed philosophers over the ages and now challenges scientists as well. The dilemma is traditionally known as the Mind-Body Problem and, more recently, as the Hard Problem of Consciousness. The older designation is itself ambiguous, since ‘mind’ eludes precise definition and ‘body’ can mean either the organism concerned or physical reality at large. The more recent designation as “hard” reflects the basic frustration involved and stands as a reminder that the mystery remains unsolved.
The problem, of course, is uniquely an intellectual challenge for human beings, who know themselves to be “conscious.” In one formulation, it is the question of how the physical brain (a third-person concept) can produce first-person experience in its owner. This is not a problem for the owner’s brain, of course, which copiously produces experience on a daily basis, but for philosophers trying to understand how it does so, even when the brains concerned are their own. It is an odd perplexity, since the outward-looking human mind habitually tries to understand things in terms of third-person description—that is, as events in the external world (including neural events in a brain) that can be perceived in common by multiple observers and described in language. But then the question is how physical events in the brain produce the subjective field of view known familiarly as the external world, which includes that brain, which somehow produces that field of view… ad infinitum. We are caught in a loop, trying to understand how the brain produces a subjective first-person point of view at all.
Sheer frustration has led some to deny that there is any real problem—usually by insisting that there are not two ontological categories in play: either there is fundamentally only mind or only matter. Indeed, most philosophical discussions seem to imply that ‘mind’ (another ambiguous English noun, difficult to translate) is a sort of thing rather than a process that might be better designated with a verb. (Do you mind? Mind your p’s and q’s.) Mental properties such as qualia (“raw feels” like the colors of a rainbow, the tones of sound, the sensation of pain, the fragrance of a rose) tend to be reified and compared with the physical things and processes with which they are associated: vibrations of light or sound, damage to tissue, the chemistry of the flower’s odor, etc. Qualia are not things, however; they are not nouns but adjectives that reflect inner actions the organism performs for its own reasons, even when it has no reason or occasion to know that it has such reasons. Here is the problem in a nutshell: actions too are viewed third-personally, as events just happening in the world, while reasons (or intentions or goals) are pursued first-personally by agents. The two seem to live in incommensurable domains. Even when the subject is both agent and observer, there seems to be an irreconcilable gulf between the point of view from which things are observed and the things observed from that point of view.
I do not deny the apparent gulf. But it is not a problem of how to place mind and body together in a common framework or ontology—to view them side by side—nor a question of reducing one to the other. It’s rather a problem concerning what we expect of understanding, which seems to require standing apart from (or under, or at arm’s length from) what we hope to grasp; or what we expect of explanation, which seems a matter of making something plain (or plane, as in flattened out to a common level, the wrinkles ironed). The locus from which we view the world cannot itself appear in the panorama visible from that place. And yet we habitually expect it to, because we are still coming to terms with the oddity of our epistemic situation: being both subject and object. I believe that once people get used to the circularity inhering in that situation, the gulf will seem less perplexing.
In the meantime, we must contend with a laxness in language, such that the Oxford English Dictionary lists more than a dozen distinct common meanings or usages for ‘consciousness.’ If the vocabulary is so ambiguous, can thinking about the actual phenomenon be any less confused? Literally hundreds of distinct theories of consciousness have by now been propounded. A significant portion of the disagreement involves talking at cross purposes because of terminology. For example, a current philosophical convention distinguishes ‘phenomenal consciousness’ from ‘access consciousness’. The former refers to actual first-person experience. The latter refers to a capacity to access the contents of the former, which is a third-person behavioral concept even when the same individual is both observer and observed. Language, with its ability to categorize terms and concepts, thus makes it seem that there are kinds of consciousness, raising contrived questions about how they relate.
Another source of disagreement comes from diverging basic philosophical positions—namely some form of materialism versus some form of idealism. The very gulf implied by the Mind-Body Problem inspires fundamental division in how people think about it. This is a second-order effect of the epistemic dilemma facing embodied minds: subjects who are perplexingly also objects. There is no indisputable way to decide between idealism and materialism (to decide whether mind or matter is fundamental and real). These options reflect inclinations upstream from evidence and rational debate—much like political leanings or religious persuasions. Indeed, the Mind-Body Problem lies at the core of the contest between religion and science as competing worldviews.
Is ‘consciousness’ even a coherent concept? Is it useful, or more trouble than it is worth? Psychologists ignored it for much of the 20th century, fed up with the vague excesses of 19th-century armchair “introspection” and heady with the practical results of a behaviorist approach that was in tune with the general scientific emphasis on third-person accounts and spatio-temporal description. Indeed, anecdotal first-person experience is irrelevant to science, except insofar as reports of it in language can be confirmed by others according to prescribed protocols. Since consciousness is personal experience, it could only be approached scientifically as a “natural phenomenon”—a sort of third-personal object of study, held at arm’s length, yet one that does not fit gracefully within the scientific ontology.
Classical behaviorism was a macroscopic project, correlating gross input of stimulus with output of motor behavior in laboratory settings. It could conveniently and productively ignore consciousness. The development of brain science and its refined technologies afforded a microscopic project that is still a form of behaviorism, but one that permits a more detailed and intimate correlation between stimulus and response. In particular, “output” is now considered to include not only neural impulse and motor behavior (both describable third-personally) but also the subjective experience produced by the brain. The Mind-Body Problem then seemed more amenable to scientific study as a mind-brain problem. Within the functionalist program, it could even seem a computational problem. Despite these advances, a gulf persists between the first-person and third-person perspectives. The question remains: why should there be “anything it is like” to be an organism, or a brain, much less a robot or computer?
AI presents the enticing possibility of replicating human capacities artificially, raising the question of whether consciousness is indispensable for those capacities. Indeed, is consciousness functional? Is there some point to the state of “there being something it is like to be you,” beyond the abilities with which it is associated? In view of natural selection, it would seem obviously so. Yet no one has clarified to everyone’s satisfaction exactly what the function(s) of consciousness might be, or what evolutionary advantage it confers. Even so, what most people understand as their consciousness is no doubt treasured by them, for its own sake, as dearly as life itself. This, despite the fact that we spend a third of our time asleep and much of our so-called waking time on automatic, as it were, in some degree of mindless inattention.
The potential of AI raises the question of whether all that we value as humanness—which includes our precious consciousness—could in fact be duplicated artificially and even improved upon. What we know as consciousness is the product of a highly parochial biological brain and the result of an inefficient process of natural selection. Perhaps it is far from ideal, and far from what could potentially be realized artificially. Perhaps our transhuman machine successors would be better off without consciousness—or at least without specific features inhering in biology, such as suffering and aggression. On the other hand, perhaps the abilities we treasure, and their possible extensions and improvements, do require consciousness. Perhaps there is some function our AI or cyborg descendants would necessarily possess that could be called consciousness. But it might be quite different from the nebulous concept we know. Being them might not much resemble being you or me. And perhaps it should not.