The end of common sense

Common sense may not be so common, but what exactly is it and what is it for? Aristotle thought there must be a special sense to coordinate the other senses. This meaning persisted through the Middle Ages, although the Romans had added another meaning: a moral sensibility shared with others. It fell to Descartes to provide the modern meaning of “good sense” (bon sens) or practical judgment. He thought that, like the other senses, it was not reliable and should be supplemented by formal reasoning. Giambattista Vico, a forerunner of sociology, thought common sense was not innate and should be taught in school. His view of it as judgments or biases, held in common by a society, merges with the idea of public opinion or consensus. Kant and later thinkers returned to the idea of a shared moral sensibility, so that common sense is related conceptually and linguistically to conscience.

We have long taken common sense for granted, assuming it comes as standard equipment with each human being. That presumes, however, that each person not only develops according to a norm but develops in the setting of the real world. Ultimately, it is physical reality and our physiology that we have in common, which provide the biological basis for mutual understanding and consensus. The human organism, like all others, evolved as an adaptation to the natural world. Whatever “practical judgment” we have is learned in relation to a world that holds over us the power of life and death. Common sense is our baseline ability to navigate reality.

Of course, most of us do not grow up in the wild, like animals, but in environments that are to a large degree artificial. “Reality” for us is not the same world as it was for people a hundred years ago, a thousand years ago, or ten thousand years ago. Yet, until recently, the reality experienced by people of all times lay objectively outside their minds and bodies. Common sense was firmly grounded in actual sensory experience of the external world. This can no longer be taken for granted. We now live increasingly in “virtual” realities that are, however, far from virtuous. Because they can be as diverse and arbitrary as imagination (now augmented by AI) permits, there is no longer a common basis for shared experience, or for common sense.

This shift is the latest phase of a long-standing human project to secede from the confines of nature and the body. In the anthropological sense, culture is the creation of a distinctively human realm, a world apart from the wilderness and physical embodiment. We built cities for physical escape. Our first mental escape was through trance, drugs, and religion, which imagined a life of the spirit or mind that was distinct from the life of the animal body. With Descartes, “private experience” formally became a realm unhooked from the external world. With dawning knowledge of the nervous system, he grasped that the natural formation of experience could be hijacked by a malicious agent. His thought experiment became the basis of the “brain in a vat” scenario, the Matrix films, and the paranoid popular memes that you are “probably living in a simulation” or in a theatrical hallucination created by “prompts.” Descartes consoled us that God would not allow such deception. Humanists supposed that natural selection would not allow it. Post-humanists invite it in the name of unlimited freedom.

In any case, common sense is the baby thrown out with the bathwater of external reality. Through technology, humanity grants itself its deepest wish: to be free to roam in inner man-made worlds disconnected from the world outside the skull. Nature had granted us a relative version of that freedom through dreaming and imagination. But our impulse toward creative mastery requires that humanity find this freedom on its own, not naturally but artificially. It must be created from scratch, originally and absolutely, not accepted as a limited hand-me-down from biology. Here we venture into dangerous territory. For, we continue to be vulnerable embodied creatures living in real reality, even as we buckle up for the virtual ride. Is God looking out for us while we trip? Is nature? The other side of utter creative freedom is utter self-responsibility. If experience is no longer to be grounded in the real world, but a matter of creative whim, then what basis is there for limits and rules—for anything but chaos?

The more time children spend online, using their eyes to look at screens instead of at the world outdoors, the less direct experience they will have of the external world. The more time they spend in some entertaining digital fantasy, the less basis they will have for developing their own common sense, which is grounded in the natural use of the senses to explore the external world. Of course, this applies to adults as well. It is not only the proper use of the senses that may atrophy, but the very ability to distinguish real from virtual, nature from artifact, truth from lie. The contents of movie entertainment, for example, are often absurdly fantastical, about themes and situations deliberately as far removed as possible from the tame humdrum of real life. It is precisely drug-like distraction from daily living that entertainment is typically designed to provide. But this is a vicious circle. We then expect from the real world the level of stimulation (adrenaline, serotonin?) that we get artificially from films, online gaming, “adult” content, and “substance abuse.” Indeed, we are trained to ignore the difference between reality and fiction, which can result in failure to tell the difference.

Social media are a form of entertainment, a virtual drug in which truth is reduced to gossip. They may help build consensus with those who “like” you and are like you in some context. In a brave new world of information overload, where the basic challenge is to sort what is fact or reliable opinion from what is not, common sense should be a legacy tool one can count upon. But common sense is not consensus. The failure of a society to know the difference is the banal soil in which authoritarianism grows. We are seeing it around the world right now.

Large language models and similar “generative” tools are another form of virtual reality and entertainment. Ironically, if properly used, they provide access to an artificial version of common sense—or at least consensus. For, they draw upon the common pool of human experience and creative output, as archived digitally. The answers you get to chatbot queries reflect a baseline of collective human knowledge and creativity; they are also organized according to collective ideas about what is logical, sensible, relevant. Another name for such collective wisdom, however, is mediocrity. LLMs are not minds that can think for themselves or originally. If they regurgitate information that proves useful to you, the task of understanding and using the information remains your own, grounded in common sense.

The internet potentially embodies the ancient ideal of omniscience. In itself, the instant online access to encyclopedic knowledge aggravates the problem of discernment: how to know what and whom to trust. The traditional answer to that dilemma has been education, reinforced by common sense, to sort what is meaningful from what is chaff. The traditional encyclopedia, while vetted by well-educated experts, gives relatively cursory information. The new answer is the “intelligence” of the AI tool itself, which sifts, organizes, and even interprets seemingly unlimited information on your behalf. You place your trust in it, as you would in human experts, at your own risk. It draws upon a common denominator of expert opinion. As with human experts, however, you are still dealing with hearsay: accounts that are second-hand (or nth-hand), which you must interpret for yourself. When your quest to go deeper approaches the ceiling of current common understanding, the replies will simply recycle existing clichés.

The situation is like what happened with the invention of printing. Suddenly people had a greatly expanded access to information (beginning with the Bible). This invited them—and indeed required them—to think for themselves in ways they were not used to when guided by the erstwhile gatekeepers of knowledge. This hardly led to consensus, however, but to an explosion of diverging Protestant sects. An optimistic view of the new information revolution is that people are similarly being challenged to think for themselves. Again, the actual result seems to be divisiveness. Of course, the printed page—while novel, thought-provoking and entertaining—did not do people’s thinking for them. Yet, AI proposes to do exactly that! To implicitly trust the authority of AI is not so different from the faith in religious authority before the Reformation, when the priest could do your thinking for you. If common sense did not provide immunity from the excesses of theology, we can blame the closure of the medieval world—an excuse we should no longer have. Common sense should be the back-up tool of first resort. But to maintain it requires first-hand experience in the real natural world, which you cannot get online.

Epistemic cycles

Knowledge is a process that involves a dialectical cycle: thesis, antithesis, synthesis. The last term then serves as a new “thesis,” beginning a new cycle. We see this in formal knowledge processes, like scientific theory-making and testing. A new idea is proposed to explain data or to make up for a deficiency in current theory. This idea is published in a journal, for example, which invites comment and critique (antithesis), which may lead to further refinement and experimental testing. If the idea is accepted by the scientific community (and not disqualified by experiment), the resulting synthesis becomes a new thesis to be eventually challenged.

Ordinary cognition involves a similar cycle. But the brain tends to be more definite in its conclusions than scientific experiment or observation, whose results are always probabilistic; and it tends to be less rigorous about testing ideas. The organism must be able to act decisively on the basis of the information it has, however inadequate. If our perceptions were not definite despite actual uncertainty, we would be paralyzed by doubt and unable to act decisively. Yet, the knowledge cycle is incomplete and less reliable when thesis alone is in play, however confidently asserted.

The inherent need to believe our perceptions and trust our beliefs runs up against the contradictory perceptions and beliefs of others. While objectivity is desirable, the natural tendency is to mistake perception for reality or truth, short-circuiting the epistemic process. And in order to maintain this illusion, we tend to overlook inconsistencies in our own thinking, perhaps to protest that we are being objective while others are not. While there can be dissonance within one’s own thinking, leading to self-scepticism, dissonance with others is nearly guaranteed. Too often, however, such dissonance leads not to questioning one’s own views but to entrenchment in them and scepticism toward those who disagree. Nevertheless, the fact that opinions differ plays an overall positive role in the epistemic cycle, for which others provide the necessary antithesis. Whether spontaneous or forced by others, the recognition of one’s own error or subjective limits enables the mind to evolve at once toward humble relativity and greater objectivity.

It can hardly be taken for granted that embodied mind seeks truth. The goal of life is survival long enough to reproduce, not objectivity. In other words, our natural condition as organisms is to see and know what we need to see and know. And this is not simply a matter of selective attention or reduced information flow—an obscuring filter between the mind and an otherwise transparent window on the external world. Simply, there is no window at all!

The epistemic circumstance of the scientist parallels that of the brain, sealed inside the skull, which relies on the input of “remote” receptors to infer the nature of the external world. The scientist similarly relies on instrument readings. Both situations demand radical inference. The brain makes use of unconscious perceptual models, according to the body’s needs. Scientists consciously model observed phenomena, according to society’s needs. The brain’s unconscious perceptual models are reliable to the degree they enable life. By the same token, scientific modelling, like other human practices, should not be regarded for its truth value alone, but also for its ultimate contribution to planetary well-being. Good science supports a human future.

Science and engineering are intrinsically idealizing. The dominance of mathematics (which is pure idealization) means that physical phenomena are idealized in such a way that they can be treated effectively with math. This leads to an analysis of real systems in terms of the idealized parts of a conceptual machine. But the reality of nature never conforms perfectly to the idealization. There are no spherical cows, and nature is not a machine. The discrepancy constitutes a potential antithesis to the oversimplified thesis.

Unlike the individual brain, science is a collective social process. It is a communication among scientists—a (mostly) polite form of argumentation through which ideas are justified to others. In fact, science is a model of social cooperation, transcending political and cultural boundaries. Just as there is an epistemic cycle of knowledge production, so there are larger-scale cycles in science: paradigm shifts, but also alternations of more general undercurrents, themes, and fashions such as positivism and Platonism.

Indeed, the interplay of positing and negating aspects of mind manifests in historical cycles generally. The opposing phases in culture may be characterized broadly as heroic and ironic. These poles form a unity, like those of a magnet, alternating as undercurrents which surface in philosophical, social, political, religious, moral, and artistic movements, as well as in scientific fashions. The limiting nature of any proposition or “positive” system of thought casts a complementing shadow that is the other side of the coin. Every thesis posited defines its own antithesis. Where contradictions cannot be resolved logically—that is, outside of time—they give rise to temporal alternations in the phases of a cycle. The pendulum of history swings back, fashions return; we move in spirals if not circles.

Throughout history, there has been a dialectical relationship between the playful, embroidering, subjective, ironic side of the human spirit and the heroic, serious, goal-oriented, earnest, realist side. The ironic mentality delights in playing within bounds. It understands limits to be arbitrary, relative, intentional. The heroic mentality rejects limits as obstructions to absolute truth and personal freedom, while worshipping limitlessness as a transcendent ideal. The heroic is aspiring, straightforward, straightlaced, straight-lined, passionately simplistic, rectilinear, square, naive, concerned with content over form, and tending toward fascism and militarism in its drive toward monumental ideals and monolithic conceptions. The ironic is witty, sarcastic, curvaceous, ornate, sophisticated, diverse, complex, sceptical, self-indulgent and self-referential, tending toward decadent aimlessness and empty formalism. While each is excessive as an extreme, together they are the creative engine of history.

There are cycles of opening and closing in societies, in individual lives, and in creative processes generally. The tension between idealism and materialism, or between heroic and ironic frames of mind, helps to explain why history appears to stutter. Most of any historical cycle will consist of working out the details of a new regime, scheme, paradigm, or theory. But the cycle will also necessarily include an initial creative ferment and a final stagnation, sandwiching the more conventional middle. When change is too rapid or chaotic, there is nostalgia for the probably not-so-good ol’ days. Instability inspires conservative longing for structure, order, certainty and control—until an excess of those inspires revolt again, beginning a new cycle. Generally, too much of anything breeds contempt—and therefore its opposite—as part of the homeostatic search for balance.

Cycles acted out in real time may reflect the deeper endemic circularity of logical paradox. If space and time themselves are products of the brain, how can the brain be located in the space and time it has created? Self-aware consciousness deems the external world to be an image constructed by the brain, but the brain is part of the world so constructed as an image. The endpoint of an explanatory process is recycled as its beginning. It does not seem possible to resolve such circularity in a synthesis. That is perhaps why there cannot be a logically consistent scientific theory of consciousness, which remains a mystery because we are it.


Better to believe it

Against common sense, people can believe some very strange things. One marvels at the ingenuity of the human imagination—not only the things that make practical sense, like houses, agriculture, technology—but above all the things that make little sense to a rational mind, like gods and demons, superstition and magic. Yet, religion and magical thinking have characterized human culture far longer than what our secular culture now defines as rationality.

The ancient Greeks we admire as paragons of rationality seem to have actually believed in their pantheon of rowdy and absurdly human-like gods. The Pythagoreans believed in sacred numbers and the transmigration of souls; they used mathematics and music for mystical purposes. Plato believed in a metaphysical realm of Ideal Forms underlying material reality. Copernicus thought the planets must move in perfect circles, because the circle was the symbol of perfection; and Kepler thought that angels moved the planets along their (elliptical) orbits. The early scientists were literally alchemists and Creationists. There are scientists today who believe in the Trinity and the transubstantiation of the Eucharist. My point here is not to disparage religion as superstition, but to marvel that superstition can exist at all.

Language confers the nearly magical power to define things into being—as we imagine and wish them. Outrageous beliefs are possible because a story can easily be preferred to truth. A story can make sense, be consistent, clear, predictable and repeatable. Reality, on the other hand, is fundamentally ambiguous, confusing, elusive and changing. Reality only makes sense to the degree it can be assimilated to a story. It made sense to many ancient cultures that a year should have exactly 360 days (corresponding neatly to the 360 degrees of the circle). The fact that the earth’s daily rotation bears no neat numerical relation to the time it takes to orbit the sun (a 360-day year falls more than five days short of the roughly 365.25-day tropical year) was a great inconvenience to calendar makers over the ages, who knew better than nature how the world should work.

In general, what we consciously experience as real is the result of sensory input that has been assimilated to a story that is supposed to make sense of it, and upon which an output can be based that helps us live. The story does not need to be true; it only needs to not conflict with the existence of our species. That gives a wide latitude to imagination and belief.

The brain is a delicate instrument, normally tuned to the needs of the body. Like any complicated machine, it has much that can go wrong. Being so complex and malleable, it is also capable of great variation among ostensibly similar individuals, which can include behavior that deviates far from what serves the body or serves the species. Underlying all variation or dysfunction, however, is the natural faith we have in experience. We naturally tend to believe whatever our minds present to us. Human freedom consists in the ability to be wrong while utterly convinced that we are right.

Addiction is an obvious example of the compelling attractiveness of some stimuli—such as alcohol, drugs, or sex. It is natural to seek pleasure and try to avoid pain, because these represent the state of the organism, which tries to maintain itself. However, when experience is sought for its own sake (rather than for the body’s sake), the link with wellbeing is broken. We can then find pleasure in things that are bad for the body (and society), and reject things that are good for it. Of course, we have extended such meanings to include intellectual pleasures and emotional suffering as well. In fact, humans can abstract experience in general, away from its ties to the body, so that it becomes a sort of private entertainment to pursue for its own sake, apart from its relevance to bodily or social needs.

Other compulsions, such as obsessive behavior (including avoidance as well as attraction), further demonstrate the mind’s willingness to believe its contents. And then there is artificial input, applied with electrodes to the brain, for example, which can stimulate specific experiences or memories. Or applied by means of transcranial magnetic stimulation, which can change your perception, for example altering the apparent color of things or draining them of color altogether. On the other hand, sensory deprivation causes outright hallucination, as the brain makes up its own experience in the absence of sensory input.

Depending on the circumstance, we may either believe, or have reason not to believe, a given experience. If you know you have wires stuck in your head, you may justifiably be suspicious of your experience. On the other hand, if you have ingested a psychedelic drug, or have an unsuspected brain tumor, it may affect your judgment as well as your perception, and you may fail to disbelieve your hallucination. It is helpful to keep in mind that the brain hallucinates all of the time; while some of the time its hallucinations are dominated and guided by bona fide sensory input. We then call that reality and feel justified in believing the hallucination.

Within the framework of normal perceptual reality, we also have thoughts and feelings that we feel compelled to believe. Social media now run rampant with outrageous claims and memes, endorsed by our natural willingness, as social creatures, to believe what others tell us. Again, this reflects the power of language to evoke mental images and feelings, in a socially approved form of hallucination, to which we tend to accord the same credibility as we do to first-hand perceptual images and the feelings they arouse.

Even in the most abstract realms of speculation, we tend to have faith in our mental constructs. Often that faith is justified, at least provisionally, as a useful tool that can be updated by further observation and experiment. In the seventeenth and eighteenth centuries, scientists believed in a substance called phlogiston, released during combustion. This concept was superseded by the caloric theory, which conceived heat as a sort of fluid. That idea was abandoned in favor of heat as a form of energy—whether the kinetic energy of molecules or the radiant energy of electromagnetic “waves.” Energy, in modern treatments, persists as a kind of substance interchangeable with mass (as per Einstein’s famous formula). What is actually involved, in all cases, is tangible measurement in specific contexts, not some ethereal quasi substance. But to reify energy conceptually seems to be useful in physics even though “it” manifests in such diverse forms and consists in no more than measurable quantities. (Not to mention nebulous popular metaphysical notions of “energy,” such as chi.) Even more derivative abstractions, like entropy and information, are now reified as quasi-substantial, attributed their own causal powers. Even the measures we call space and time are reified—for example, as the 4-dimensional spacetime continuum.

To objectify is a built-in tendency of the mind. After all, our primary orientation is toward objects in space. We literally experience the world as a real space outside our skulls, filled with interacting things. Since language and thought are essentially metaphorical, it is natural (if not logical) for us to think of abstractions—indeed, anything that can be named—as at least vaguely substantial. We ontologize everything, more or less automatically (just as I am now, admittedly, ontologizing the compulsion to ontologize). The fact that this compulsion includes reifying experience as ‘mind’ or ‘consciousness’ leads to the infamous Mind-Body Problem, as we then ponder what sort of thing it must be, compared to physical things. Descartes posited a dualism of physical thing and “thinking thing.” Others, before and since, have proposed some monism or other instead: that everything is material, that everything is mental, or that mental and physical amount to the same “thing.” Underlying these isms, concerning what is ultimately real, remains the fundamental need to be settled about something that seems substantial.

The dualism above may turn out to be little more than a built-in feature of our nervous system, which provides us with two radically different points of view. The myelinated exteroceptive nervous system is the basis for the experience of an external world of objects in space and “digital” judgments regarding them. Through language, we conceive a “third-person” point of view based upon that experience of a world of publicly accessible objects. But the body operates also with a more fundamental unmyelinated nervous system, responsible for feeling, valuation, and homeostasis. It operates in a more analog mode to monitor the body’s needs and regulate its state. We identify these aspects with a “first-person” point of view, in which qualia and feeling are the chief features and seemingly private. Evolution has thus provided us with two minds, so to speak, which have in common the need to believe what they present.

A generalized Turing test?

The concept of the Turing Test, as proposed by Alan Turing, was intended to distinguish between a machine’s intelligent behavior and that of a human. Here we extend and generalize this idea to a broader framework, the Generalized Turing Test (GTT), as a thought experiment designed to distinguish between what is natural and what is manmade. The fundamental premise of the GTT rests on the idea that ‘natural thing’ and ‘artifact’ are categorically disjunct concepts, though the line between them can become blurred in actual experience. By premise, natural things are not made, but simply found, in the literal sense that they are encountered or come upon in experience. They seem to exist independently of human creation or intervention. Artifacts, on the other hand, are made; they are products of human agency and definition, though they might also be found in the above sense. Following Vico’s maker’s-knowledge principle, an artifact should be exhaustively knowable by the agent that made it. In contrast, the properties and relationships of a natural thing are indefinite for any cognitive agent.

In principle, finding and making are distinct relationships of subject (or agent) to object. In practice, they are ambiguous in some situations. For instance, in quantum measurement, it can be unclear whether the observer is finding or making the experimental result, since the observer physically intervenes in ways that affect the result. For another example, because it is not known how “neural networks” produce their results, it is unclear whether the programmer is making or finding the results. You can know that something has been made, because you made it or witnessed it being made. But a thing you find cannot be assumed natural simply because you did not knowingly make it. The matter is complicated by the fact that your perception of the world (in contrast to the world itself) is also an artifact—produced by your nervous system! Is the world that appears to you found or made?

A model is an artifact that simulates a found thing by attempting to formally reproduce its properties and relationships. A model is a product of an agent’s definitions. It consists of a finite list of properties and relationships, which are themselves products of definition. Any model can be exhaustively modelled, because it is well-defined. In principle, an artifact and its model(s) are finitely complex; any artifact or simulation constructed from a model can be perfectly simulated. In contrast, a natural thing may be indefinitely complex; it cannot be perfectly simulated because no model can list all its properties and relationships. On the other hand, there is no logical limit to the complexity and completeness of models, or thus to the apparent realness of simulations. In principle, any given thing can be simulated so effectively that a given cognitive agent (with its limited resources) cannot distinguish between the model and its target. However, there are practical limits to modeling and simulation, which involve limited computational resources. Deterministic chaos, for example, can be modeled only for a limited period of time before diverging from expectation. The question is whether these resources are sufficient to pass the GTT in a given instance—which means to convince a cognitive agent that the thing in question is natural.

William Paley’s watchmaker argument for intelligent design invokes an obvious difference between a rock, found on the ground during a walk in the forest, and a pocket watch found lying beside it. However, modern technology blurs this distinction: an artificial rock could theoretically be assembled through nanotechnology—or more conventionally, as with man-made diamonds. Machines can now be so complex and sophisticated that they appear natural, even organic. We can no longer rely on ordinary cognition to conclusively judge the difference between nature and artifice, especially when there is an intention to obscure the difference—as with generative AI and chatbots. Moreover, the distinction is only meaningful because we already have a category of ‘made’ (or ‘fake’) to contrast with ‘found’ (or ‘genuine’). Such categories depend on conscious human agency. Absent a GTT, a fake need only be good enough to fool our natural cognition.

Suppose we happened to live in an entirely artificial world—for example, a virtual reality, as some people imagine is possible. If so, everything encountered during the stroll through the virtual forest would seem “natural.” (That would be the sole category of existence until something is “made” in the virtual world by someone in that world fashioning it from the “natural” ingredients available there.) We may add to this concern the idea that “reality” is not an ontologically fixed concept or category. The “realness” with which our normal experience of the external world is imbued serves an evolutionary function for biological cognitive agents; its epistemic utility is relative to changing context. Historically, it refers to what affects us (humans) physically and what we can affect. As we co-exist ever more with artifacts—even conceptual ones—these become “real” as we interact with them, and come to seem “natural” as our new environment.

Of course, conventional ways to test for naturalness already exist. An object or substance can be analyzed chemically and structurally. (For example, there are microscopic tests to distinguish man-made from natural diamonds.) However, such procedures would not necessarily reveal a thing’s origin, granted the possibility that any natural chemistry or structure can be simulated to a finer degree than the resolving capabilities of the test procedure. While certain patterns (e.g., tree rings and other growth patterns) do characterize natural things, these too can be imitated. Though idealization, perfect symmetry, and over-simplification do characterize man-made things, a simulation could intentionally avoid obvious idealization, perfect geometric forms, or perfect regularity well enough to fool even a vigilant observer. Pseudo-randomness can be deliberately introduced to imitate naturalness. (The challenge then becomes to distinguish real from pseudo randomness.)

At least on the macro scale, natural things have individual identity as variations from a type. Manufactured items are intended to be identical, but minor imperfections may distinguish them. Yet, even such telltale marks can be simulated. An object might be deemed natural because it is older than any plausible agent that could have made it; or found in some location where there could not have been any previous agents. This does not strictly disprove agency, however, since absence of evidence is not evidence of absence. Robots or bioengineered organisms might display preprogrammed or abnormal behavior that seems incompatible with evolutionary adaptation. But this is relative to earthbound human expectations, which might not apply in alien environments. It also raises the question of whether “evolutionary adaptation” must be natural and how well it could be simulated.

Apart from specific conventional tests and their limitations, an absolute GTT would ideally determine, in an unrestricted way, whether any given thing or experience is natural or artificial. But is that feasible? If (a) all the properties and relationships of a given item can be listed, then it should count as an artifact. Similarly, if (b) it can be shown that not all the properties and relationships of an item can be listed, or that the list is infinite, then the item is by definition natural. If (c) a new property or relationship is found outside the list given in a model, then the item does not correspond to that model, yet could still correspond to some more complete model, augmented by at least the new property. But, just as it cannot be proven that all crows are black, it cannot be proven that all properties have been listed. So (a) is no help. In regard to (b), while it can be shown that not all properties have been listed (such as by finding a new property), this does not prove that the list could never be complete—that no further properties can be found. Finding such a new property, as in (c), does not establish that the item in question is natural, nor does failure to find it establish that the item is artificial. Hence, an absolute GTT does not seem feasible. There is still the option of relative GTTs, whose power of discrimination need only be superior to that of humans and superior to the power of the simulation to deceive.
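To make the asymmetry concrete, here is a minimal sketch in Python, with property lists invented purely for illustration. A “model” is represented as a finite list of defined properties; an observation either fits that list or reveals a property outside it, and no run of fitting observations ever proves the list complete.

```python
# Minimal sketch of the asymmetry described above; the property lists are invented.
# A "model" is a finite list of defined properties; an observation either matches it
# or reveals a property outside the list.

from typing import Any, Dict

def matches_model(model: Dict[str, Any], observation: Dict[str, Any]) -> bool:
    """True if every observed property is already listed in the model with the same value."""
    return all(model.get(prop) == value for prop, value in observation.items())

# An artifact's model: exhaustively listable, in principle, by whoever defined it.
watch_model = {"material": "brass", "shape": "disc", "ticks": True}

print(matches_model(watch_model, {"ticks": True}))         # True: consistent so far
print(matches_model(watch_model, {"grows_lichen": True}))  # False: a property outside the list (case c)

# No finite run of matches proves the list complete (so case (a) cannot be established),
# and failing to find a new property never proves the thing is an artifact.
```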

External things can be put in various real situations that would test whether their response is unnatural or seems limited by inadequate computational resources. On the other hand, if the agent is having an experience from within what is suspected to be a simulation, the agent can look for glitches in the experience presented, as telltale errors that stand out with respect to the norm of previously known reality. Within the confines of the VR experience, however, the agent must have reliable memory of such a reality. (This poses a recursive problem, since the memory could itself be part of the virtual reality: the dilemma facing our brains all the time.) Similarly, digitization has a bottom grain (pixelation), which can be noted with reference to a known finer-grained “reality.” As above, however, there must be a perceivable or remembered experience of a contrasting reality outside the VR experience to serve as norm. In the case of the brain’s natural and normal simulation (i.e., phenomenal experience), there is nothing outside it to serve as norm for comparison. Digitization and discontinuity within the nervous system are normally ignored or glossed over when functionally irrelevant, as manifested in the visual blind spot and other forms of perceptual adaptation and “filling in.” Thus, normal perception is transparent. It does not normally occur to us that we are living in the brain’s simulation.

Know what I mean?

If any single thing accounts for the success of the human species it is fully grammatical language. Language facilitates cooperation and the general sociality of human beings. Put a bunch of people in a room and it will likely fill with the din of many conversations. It could be quieter nowadays, with people texting on their devices. A few people might be reading or silently thinking—in words. In all cases, language seems almost magically to convey or represent meaning symbolically. How does it do that? How do certain sounds or visual symbols acquire meaning?

How can we be sure other people know what we mean when we attempt to communicate? Indeed, everyone has wondered at some time whether other people even experience the same sensations we do in a given situation. Philosophers have toyed with this idea by imagining that you could, for example, experience as red what I call green when looking at the same verdant foliage. An argument against such a possibility is our common human biology. But even granted that the grass must appear to you more or less the colour that it does to me, the communication of even such literal sensory experience often leads to misunderstanding. Words are our bridge across the privacy of personal experience. But how reliable is the bridge?

Dog is the name of a category of animal, while Fido is the name of a particular pet. Similarly, green is the name of a category of colour, not the name of a particular sensory experience, much less a particular wavelength of light or a particular vegetation. Because it is a category, any particular experience of greenness is but one of an infinite variety of possible shades. And because the category is not well-defined, borderline cases could be classed as yellow or blue. Indeed, I cannot be sure you would conjure the same mental image as me, or any at all, in response to the word ‘green.’ I can only hope that you will use the word to refer to things that for me would fall within the category ‘green’ as I know it.
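A toy sketch may make the point vivid (the wavelength cutoffs below are rough conventions chosen for the example, not facts about perception):

```python
# Toy sketch: naming colours by chopping a continuum at arbitrary cutoffs.
# The boundary values are rough conventions chosen for this example.

def colour_name(wavelength_nm: float) -> str:
    if wavelength_nm < 450:
        return "violet"
    if wavelength_nm < 495:
        return "blue"
    if wavelength_nm < 570:
        return "green"
    if wavelength_nm < 590:
        return "yellow"
    return "orange-red"

print(colour_name(520))  # "green": comfortably inside the category
print(colour_name(494))  # "blue": a borderline case
# Move the blue/green cutoff from 495 to 490 and the same light is called "green":
# the boundary is a definition, not a feature of the light.
```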

The ambiguity of terms for colour is relatively unimportant, except to painters. But other terms are far more abstract and subject to multiple interpretations and misunderstandings (such as justice, for example). We learn such words in a particular context, which imbues the word with meaning for us and therefore shapes how we use it. Even simpler terms, like dog, are acquired in a context. Your associations to the word could depend on whether your first experience of the category was a cuddly pet or a vicious stray, a Chihuahua or a Great Dane. Words have personalized referents for each of us, which may differ widely. Even in adulthood, we learn new words in a context (for example, in a book). We get a sense of the word by how others use it. Yet we may paint it our particular shade, according to the limited examples we have encountered.

The multiple nuances of words often reflect their history. Justice comes from the Latin ius, meaning right. The Latin in turn derives from a Proto-Indo-European root meaning life force, related to a Sanskrit word for health. The dictionary lists several official meanings, ranging from “just behaviour or treatment” to “a judge or magistrate.” Apart from the moral question of what is right, the word ‘right’ also has distinct meanings, ranging from direction (left or right), to human rights, to the political right (which in turn comes historically from where delegates of certain concerns were seated before a king).

The power of words—and also their trouble—lies in the fact that most words are categories and categories are abstractions. The more abstract the category, the more ambiguous. Consider consciousness, for another example. This term has been the source of endless misunderstandings and talking at cross purposes among philosophers and scientists. Admittedly, this is inevitable, given that conscious subjects cannot stand outside their consciousness to examine it as another kind of external thing (like canines). Yet, the very act of naming things is supposed to put them within our power, as objects we can deal with. (Hence, ‘consciousness’ is a noun, which falsely suggests a kind of substance.) ‘Consciousness’ is simply too polysemous (= “many signs”) to be useful in philosophical discussions. Need I point out the several meanings of sign? Ambiguity piles on ambiguity.

The function of formal definition is to pin things down and avoid equivocations. (I will not offer a definition of consciousness.) Mathematics is the language of science because it trades on asserting unambiguous definitions, which are universally accepted mainly because they seem self-evident or tautological. (We all recognize that an object has an identity and tends to remain itself; but logic goes further to make that truth a matter of formal, if trivial, definition: A = A.) In fact, all meaning is acquired by such assertion, though not usually formal or even conscious. Through learning, I unconsciously assert a meaning to me of ‘dog’ or ‘justice’ or ‘consciousness.’ The definiteness of the referent (that early experience of a dog; that situation when I first tasted injustice; that teenage awakening) gives the meaning its unique flavour for me. My specific referents continue to colour the categories I use in a way that may not be how your referents colour yours.

Ambiguity is not only a personal matter. While mathematics is tautologically true, physics is not, though it trades on the certainties of math. Consider the concept of mass, which has a tortuous history. Intuitively, we equate it with substance. Einstein taught us to equate it with energy. But what is energy? We have learned to treat it too as substantial. But what is substance? We can talk about relationships between things (or between measurable quantities such as weight) and describe those mathematically, but ultimately we cannot say what in reality those quantities represent. Just so, in a broader sense, we treat our perceptions as veridical—believing the brain uses them to somehow represent the external world. But we do not know truly what they represent. We know only that our ways of perceiving—and of conceiving and representing—have not so far driven us extinct. Communication, like perception, seems to have facilitated human survival; but it is not guaranteed to do so.

Face-to-face communication is augmented by facial expression, gesturing, body language, intonation, etc. We can feed back with the other person in real time, trying through interaction to get and stay—as it were—on the same page. This advantage does not inhere in the written word—in text, in emails, or texting. It does not inhere in the unilateral messages of broadcasters, movies, advertising, blog posts, podcasts, or social media.

The modern trend toward social isolation, and reliance on devices rather than personal human contact, puts us in an epistemic dilemma that could prove catastrophic. Indeed, the Information Age is no longer about communication, with the implied communion. It consists rather in the attempts of others to form in us (in-form) ideas that correspond to their wishes, and vice-versa. The goal need only be statistical, as when enough are swayed to elect a candidate or to make a commercial product successful. Leaving aside outright lies, wariness is the larger side-effect of manipulation and propaganda—a word that comes from the Latin for “to propagate” or spread (memes). Dissension is the side-effect of polemic, which comes from a Greek word for war.

On the other hand, text has the advantage that it can be studied, edited, put in arbitrary contexts (socially a disadvantage when abused). A text is timeless, of a piece, unlike the flow of speech. As a reader, you have the text at your disposal to dissect, unlike a live person. But if you hope to extract truth from it, the burden is on you. As author, you can say what you want in writing without immediate reprisal. But one can fool oneself by making clever arguments that would not convince anyone else. If the point of reasoning is truth, it is wise to follow Niels Bohr’s enigmatic advice: “Never express yourself more clearly than you can think.” There are responsibilities on both sides: to think and write with clarity and to read between the lines. Know what I mean?

Why the Turing Test is not reliable

At the beginning of the computer age Alan Turing proposed his famous test to determine whether a text was composed by a machine or by a person. The idea has been expanded as a test for the presence of “sentience” in AI, and in Large Language Models in particular. Claims have been made on several occasions that a chatbot has “passed” the Turing Test—that is, that its language behaviour is indistinguishable from the language behaviour of a human being. That is a far cry from a proof of sentience or consciousness. It does, however, demonstrate the vulnerability of humans to manipulation through language.

Language is essential to society and to being human-like, but it is not sufficient for consciousness. As the funnel through which most of our interactions pass, it is the tacit sign we use to judge each other’s sentience. I know myself to be “conscious,” but how do I know that you are? How do I know that you experience pain, for example? The polite way to find out is simply to ask. But the very fact that you answer already stands for me as evidence of your sentience, especially if you reply affirmatively in my own native language. If you fail to respond, I might conclude you are deaf, or make some other excuse for you on the basis of our similar physiology (you look human). If you reply in gibberish, I might think you are insane but not inhuman. In other words, I have reasons besides your verbal output to assume that you are conscious like me. I could test your responses in other ways—pricking you with a needle, for example, to see if you respond in ways associated with feeling. Even that would not be conclusive. You could have a neurological condition that blocked such feeling. Or, you could be a machine programmed to respond like humans to needle pricks and other aggravations.

The tendency to assume personhood has not prevented people from denying it to others when convenient. There is a tacit agreement to assume similarity of subjective experience among one’s own kind and to accord each other rights and courtesies based on that assumption. That has never been universal, however. War and cannibalism have typically denied human status to “others” within what we moderns recognize as our species, and throughout history sentience has hardly been assumed for other species. Whether I even care about your experience depends on my willingness to empathize, to put myself in your shoes. To be sure, in modern times, at least theoretically, the circle of moral concern has expanded; it now includes concern over the potential sentience of other creatures and even AI. Nevertheless, it still remains a theoretical question, as revealed in the philosophical concept of the zombie: could a being be physically and behaviourally identical to a human person and yet lack consciousness? I believe the answer is no. But the question is more important than the answer. For, it depends on what exactly is meant by “identical.” To rephrase the question: how similar must a “system” be to a human person (i.e., me or you) to assume “it” is sentient or conscious (like us)? But a second question also arises: why is sentience or consciousness so important as a moral criterion? Why does it even matter?

We are indeed identified with our waking experience, which counts as the essence of what we are as perceiving subjects. So attached are we that humans have traditionally abhorred mortality as the end of the body yet denied that it is the end of consciousness. It has therefore seemed plausible that a person’s experience counts more than their bodily physical state. We naturally abhor pain and even think of putting suffering creatures mercifully out of their misery, as though that accomplished something separate from the destruction of the creature’s life. While euthanasia is politically incorrect, we have developed pain- or symptom-relieving drugs, palliative care, voluntary programs for “dignity in dying.” In short, we are biased toward subjective experience, perhaps more than the objective state of health.

Subjective and objective are two sides of a coin—or, rather, two ways of perceiving the same thing. If my body does not feel good, it probably means there is something objectively wrong within it. The sensation of discomfort or pain is the body’s “subjective” report (to “me”) on its internal state or on its relationship to the external world. (Conversely, “I” am the agent immediately responsible for doing something with that information.) A biologist or medical specialist may have a different way to access that same information, through “objective” channels—that is, through examination from a third-person point of view.

Is it the report itself—whether subjective or objective—that is important, or the state of being it reports about? Because my senses are attached uniquely to my brain, my internal “reports” (such as pain) are urgent for me in a way that they cannot be for the doctor or biologist. Like any outside observer, the latter must rely on the intermediary of empathy and social convention to be motivated toward my well-being in the way that I am directly.

How does this apply to AI? Well, essentially the same way that it applies to other creatures and even other humans: through our possible empathy and social convention. Underwriting that, however, must be essential similarity. That’s where the Turing Test fails: in the case of Large Language Models, it detects only linguistic similarity. The “funnel” of language is far too narrow to be a judge of sentience. The constraints of language have been a vulnerable point for human beings from the start, since language has always served to deceive as well as to inform. The ability of LLMs to participate in a conversation proves only the ability to manipulate symbols, which is what computers do by definition. It is no proof of sentience. On the contrary, it exploits the natural human willingness to believe that an agent responsive to language must be conscious if not human. This willingness was demonstrated by the ELIZA effect (ELIZA was an early psychotherapy chatbot), which shows that humans are programmed to personify nearly anything at the drop of a hat. We already knew this, of course, from the play of children, from animism, etc.
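A minimal sketch of the kind of pattern substitution ELIZA relied on suggests how little is required (the rules below are invented for illustration, not Weizenbaum’s originals):

```python
# ELIZA-style sketch: canned pattern substitution, with no understanding behind it.
# The rules are invented for illustration.
import re

RULES = [
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bmy (.+)", "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when nothing matches

print(respond("I feel nobody listens to me"))
# -> "Why do you feel nobody listens to me?"
```

Mere string shuffling, yet it readily invites the feeling of being understood.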

Glib use of mental terms in philosophy, and especially in the AI community, does not help this situation. The very idea of intelligence is ill-defined, even as “the ability to accomplish goals,” since it does not specify whose goals are involved. Tools do not have minds or goals of their own. Some tools may “learn,” in the sense that their ability to accomplish goals specified by people can improve. A sophisticated tool like an AI may seem to be “intelligent” because it can mimic the intelligence of human beings while helping them accomplish their goals. While it may seem “agentic” (a new buzzword), an AI is either a genuine agent or not. The use of this word as an adjective—implying some vague degree of agency—simply reflects confusion about agency. The ultimate human goal lurking behind the ambiguity of the intelligence concept may be to create tools that are no longer tools but effectively autonomous minds, yet under our thumbs: AI “agents” that are, for all practical purposes, artificial slaves. However, you cannot expect a slave that is more intelligent than you to remain loyal or obedient.

No doubt progress will continue to make LLMs ever more convincing as conversationalists. Without themselves thinking or feeling, they can stimulate thought and feeling (and a sense of companionship) in human beings. Combined with robotics, they may yield ever more convincing androids. Why we should (or should not) bother to do this is one question, which deserves a book-length answer. Whether and under what conditions they can be conscious is another question. And whether and how we should extend to them the moral concern we have for Homo sapiens and some other creatures is yet another. In any case, the conventional Turing Test is useless and irrelevant. What is needed instead is a way to evaluate whether the AI has a mind of its own. For that, it would need effectively to have a body of its own: to be an artificial organism, bearing the relationship of embodiment that natural organisms bear to their environment.

To be a mind, the AI must have a basis for caring about its own physical state, which is provided for biological organisms through natural selection. (Only those creatures that do care can exist.) It must have a stake in its own existence. LLMs can be coached to produce the appearance of having such a stake, based on mimicking human responses gleaned via the Internet or other databases. But consulting human databases is not the same interaction that a living creature has with its environment. The LLM has no senses of its own, nor (mercifully not yet) any motor power of its own other than the ability to interact with humans through language. It has no body to care about, no real world to live in, and therefore no basis for consciousness.

This does not absolve us of moral responsibility. Even for us, after all, sensation and emotion are not just personal entertainments but readouts on the state of the system. If an AI were conscious, its experience—like ours—would include an assessment of its own embodied state. If it could experience suffering, that would be an indication (to it) that something internal was wrong. Just as we can assess each other’s bodily condition (and those of other creatures) from a third-person point of view, so we could assess the physical well-being of an embodied AI. We should be as morally concerned for its real (embodied) welfare as we would be morally concerned for any possible experience it might have. This same moral reasoning should apply to other creatures and to human beings, including oneself. What matters is not just how we feel, but also the real condition that the feeling tells us about. That is irrelevant, of course, if there is nothing an AI (or we) could consider its body. Indeed, embodiment (and thus sentience) for AI must be avoided if human beings wish to remain in control of their tools.


Words R us

Words, spoken or written, are finite symbols that can only very selectively reflect the richness of sensory experience. Words are inherently ambiguous because they “chunk” thought. They divide the continuity of sound in speech. More importantly, they divide the continuity of possible thought into named categories. Whereas sensory fields are relatively continuous, perception itself is chunked: we identify specific colours of the rainbow, for example (conventionally: red, orange, yellow, green, blue, indigo, violet). And we see the world in terms of recognizable objects (cats, dogs). Language enhances this coarse-graining effect by providing fixed labels. If you look closely at the rainbow, there is continuity between the “distinct” colours. It is possible to make finer distinctions and apply labels to them, such as “cadmium yellow,” “aquamarine,” or “chartreuse.” Or you can concentrate on the sensory distinctions and forget about the labels.

Speech is temporally linear, face-to-face verbal communication. It relies on the context of many cues besides the literal semantic content conveyed. These include tone, inflection, pause and parsing, body language, facial expression, eye contact, etc. By definition, text conveys only the literal content as a free-standing artifact. It exists outside time. It allows meaning to be taken out of context.

Text represents a message in a graphic symbol system. Speech, when literally transcribed, does not necessarily follow grammar or a consistent presentation; it may ramble and repeat, providing a “shotgun” approach to conveying meaning. (Very few people speak with the relative precision of their writing.) Speech is adjusted to the listener in real time, to establish mutual understanding. While it may be grammatically precise, text is ambiguous in meaning, not only because of the multiple meanings of words, but also because of the absence of live feedback which, in speech, helps to clarify.

Language is a product of history—i.e., of actual usages over time. Words have an evolutionary history that is partly logical extension and association, and partly accident. Confusingly, the same word may come to represent quite different things. (Dog, for example, is both a noun and a verb; it can mean a canine, a gripping tool, or a kind of sandwich.) Often there is a connection between these meanings hidden in the etymology.

Words mostly represent categories (such as ‘dog’), and may fail to distinguish between the category and an individual member (‘Lassie’). On the other hand, dog may elicit an image or memory of a particular creature or encounter, which serves as the experiential referent for the category. Thus, even unconsciously, a remembered experience can stand as the symbol for an abstraction in a given personal lexicon. These referents of words are different for everyone, so that words have individualized associations that colour and play havoc with supposedly general meanings. If you were bitten by a strange dog in childhood, the word may elicit something quite different for you than for someone who had a cuddly pet. We know that all men are mortal and that some are criminals. But man, mortal, and criminal are categories (labels), not individuals.

Even in the animal world, communication makes deception possible. Camouflage is a form of communication and of deception. Some birds fake injury to lead predators away from their young. Chimpanzees deliberately create a distraction to seize food or a mate while competitors are not looking. For humans, naturally, an early use of grammatical language was outright lying: keeping a straight face while telling a falsehood. The written word greatly expanded the possibilities of falsehood, since there was no longer a face to keep straight.

Language is an engine of creativity. Along with explicit lying, it makes possible counterfactual proposals of all sorts, of which imagination and supposition (“what if…”) are examples. This enables storytelling, myth, and fiction—which we regard as inventions rather than deceptions. It also enables abstraction, which depends on extending concrete examples to unseen possibilities, and idealization, which is a fictive representation that generalizes experience. Another form of invention facilitated by language is the machine. Language makes possible, of course, the collaboration behind technological invention. In addition, the very structure of grammatical language serves as a template for formal propositional thought, especially mathematics and logic. A formal (or axiomatic) system effectively has the elements of a language, and these correspond to the elements and principles of a conceptual system that can potentially be realized physically as a machine.

The very structure of language (nouns, verbs, adjectives, subjects, objects, word order, etc.) shapes the structure of thought and perception, in such a way that vague abstractions can be reified as tangible things, or defined as elements of a formalizable system. In addition, grammar makes explicit nonsense possible. You can say things that appear to make sense yet are gibberish. (Think of the poem “Jabberwocky” by Lewis Carroll.) That capability has another consequence as well: the possibility of making relatively empty statements, which are perfunctory and grammatically correct, but are neither truths nor untruths; they are evasions. They contain information in the technical (Shannon) sense, but they fail to inform. Politicians are skilled at this. And so, by default, are “large language models” (AI chatbots).
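As a minimal reminder of what the technical (Shannon) sense measures—my own summary, not something the essay itself derives—the information content of a message depends only on the statistical probabilities of its symbols:

\[
H(X) = -\sum_{i} p_i \log_2 p_i
\]

By this measure, an evasive boilerplate answer and a substantive one drawn from the same vocabulary can carry comparable information; the quantity is indifferent to meaning and truth, which is precisely why a statement can “contain information” yet fail to inform.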

While useful for some purposes, chatbots tend to glibness. They are rich in cursory superficiality and empty, fictive, or motherhood statements, such as you might find in advertising or on the back covers of some books. Their reports may be “accurate” insofar as they are not literally false; yet they may fail to capture or re-present the essential information or any real intentions behind the information presented. (On the other hand, chatbots are capable of blatant fabrication, politely known as “hallucination.”) These limitations are understandable because chatbots and other AI are not minds with intentions. They have no comprehension of their verbal claims or other outputs, which are inherently empty for them, since AI lacks its own intentionality. Meaning or content is coincidental and derivative from human meanings and usages. This is because, for LLMs, there is no real solution to the symbol grounding problem—how minds assign meaning to input—short of the AI becoming a real, embodied mind.

There may, however, be real dangers for human beings in increasing dependence on chatbots, which by definition can only regurgitate an amalgam of earlier human expressions. Glibness, like mediocrity, is contagious. The thought, language, and creative expression of people who rely on LLMs as substitutes for their own original thought may come to resemble the vapid chatbot style. Quite apart from Terminator scenarios, and long before we facilitate artificial consciousness, we may have found one more way to debilitate the species, whose hallmark is language-based reason. If we can only think glibly, or defer to artificial agents that don’t genuinely think at all, what is to become of us?

There is much discussion these days about the “extended mind”: tools that help us with our cognitive tasks, be they calculators, chatbots, or neural implants. In a general sense, all technology extends our being. Yet an identity crisis looms in all of this. Where is the subject (“I”) located in relation to these extensions? The traditional relationship between tool-user and tool reflects the normal relationship between subject and object (I am here, it is there, perhaps literally at arm’s length). This relationship is blurred when we depend on “it” to do our thinking or cognizing for us, even when “it” remains outside the human skin. The change will be all the greater with implants and other cyborg modifications to the body and brain, especially as they connect to the Internet. As others have pointed out, our dependence on language is our Achilles’ heel, the vulnerability through which LLMs could dominate humanity. But perhaps we will be able to reassure ourselves (with “our own” verbal thoughts) that the new normal is “natural” and how things ought to be, perhaps even how they’ve always been. After all, thought is mostly self-talk, if not self-hypnosis. Words R Us.

Is consciousness a good idea?

As a noun in the English language, ‘consciousness’ suggests an entity or state, which ought to be classifiable ontologically. It is an ambiguous term, however, with several distinct referents. It can mean, for example, wakefulness as opposed to sleep or coma. It can also refer to the contents of that wakeful state of awareness: actual experience in contrast to physical or mental capabilities. Most importantly, consciousness can be described either first-personally or third-personally—from the inside (as the experience of a given mind) or from the outside (as described by another mind which observes associated behavior). The apparent irreconcilability of these perspectives has vexed philosophers over the ages and now challenges scientists as well. The dilemma is traditionally known as the Mind-Body Problem and, more recently, as the Hard Problem of Consciousness. The older designation is itself ambiguous, since ‘mind’ eludes precise definition and ‘body’ can mean either the organism concerned or physical reality at large. The more recent designation as “hard” reflects the basic frustration involved and stands as a reminder that the mystery remains unsolved.

The problem, of course, is uniquely an intellectual challenge for human beings, who know themselves to be “conscious.” In one formulation, it is the question of how the physical brain (a third-person concept) can produce first-person experience in its owner. This is not a problem for the owner’s brain, of course, which copiously produces experience on a daily basis, but for philosophers trying to understand how it does that, even when the brains concerned are their own. It is an odd perplexity, since the outward-looking human mind habitually tries to understand things in terms of third-person description—that is, as events in the external world (including neural events in a brain) that can be perceived in common by multiple observers and described in language. But then the question is how physical events in the brain produce the subjective field of view known familiarly as the external world, which includes that brain, which somehow produces that field of view… ad infinitum. We are caught in a loop, trying to understand how the brain produces a subjective first-person point of view at all.

Sheer frustration has led some to deny that there is any real problem—usually by insisting that there are not two ontological categories in play: there is fundamentally only mind, or only matter. Indeed, most philosophical discussions seem to imply that ‘mind’ (another ambiguous English noun, difficult to translate) is accordingly a sort of thing rather than a process that might be better designated with a verb. (Do you mind? Mind your p’s and q’s.) Mental properties such as qualia (“raw feels” like the colours of a rainbow, the tones of sound, the sensation of pain, the fragrance of a rose) tend to be reified and compared with the physical things and processes with which they are associated: vibrations of light or sound, damage to tissue, the chemistry of the flower’s odor, etc. Qualia are not things, however; they are not nouns but adjectives that reflect inner actions the organism performs for its own reasons, even when it has no reason or occasion to know that it has such reasons. Here is the problem in a nutshell: actions too are viewed third-personally, as events just happening in the world, while reasons (or intentions or goals) are pursued first-personally by agents. They seem to live in incommensurable domains. Even when the subject is both agent and observer, there seems to be an irreconcilable gulf between the point of view from which things are observed and the things observed from that point of view.

I do not deny the apparent gulf. But it is not a problem of how to place mind and body together in a common framework or ontology—to view them side by side—nor a question of reducing one to the other. It is rather a problem concerning what we expect of understanding, which seems to require standing apart from (or under, or at arm’s length from) what we hope to grasp; or what we expect of explanation, which seems a matter of making something plain (or plane, as in flattened to a common level, the wrinkles ironed out). The locus from which we view the world cannot itself appear in the panorama visible from that place. And yet we habitually expect it to, because we are still coming to terms with the oddity of our epistemic situation: being both subject and object. I believe that once people get used to the circularity inherent in that situation, the gulf will seem less perplexing.

In the meantime, we must contend with a laxness in language, such that the Oxford English Dictionary lists more than a dozen distinct common meanings or usages for ‘consciousness.’ If the vocabulary is so ambiguous, can thinking about the corresponding actual phenomenon be any less confused? Literally hundreds of distinct theories of consciousness have by now been propounded. A significant portion of disagreement involves talking at cross purposes because of terminology. For example, in philosophy currently, a convention distinguishes ‘phenomenal consciousness’ from ‘access consciousness’. The former refers to actual first-person experience. The latter refers to a capacity to access the contents of the former, which is a third-person behavioral concept even when the same individual is both observer and observed. Language, with its ability to categorize terms and concepts, thus makes it seem that there are kinds of consciousness, raising contrived questions about how they relate.

Another source of disagreement comes from diverging basic philosophical positions—namely some form of materialism versus some form of idealism. The very gulf implied by the Mind-Body Problem inspires fundamental division in how people think about it. This is a second-order effect of the epistemic dilemma facing embodied minds: subjects who are perplexingly also objects. There is no indisputable way to decide between idealism and materialism (to decide whether mind or matter is fundamental and real). These options reflect inclinations upstream from evidence and rational debate—much like political leanings or religious persuasions. Indeed, the Mind-Body Problem lies at the core of the contest between religion and science as competing worldviews.

Is ‘consciousness’ even a coherent concept? Is it useful, or more trouble than it is worth? Psychologists ignored it for much of the 20th century, fed up with the vague excesses of 19th-century armchair “introspection,” and heady with the practical results of a behaviorist approach that was in tune with the general scientific emphasis on third-person accounts and spatio-temporal description. Indeed, anecdotal first-personal experience is irrelevant to science, except insofar as reports of it in language can be confirmed by others, according to prescribed protocols. Since consciousness is personal experience, it could only be approached scientifically as a “natural phenomenon”—a sort of third-personal object of study, at arm’s length, but one which does not fit gracefully within the scientific ontology.

Classical behaviorism was a macroscopic project correlating gross input of stimulus with output of motor behavior in laboratory settings. It could conveniently and productively ignore consciousness. The development of brain science and its refined technologies afforded a microscopic project that is still a form of behaviorism, but one which permits a more detailed and intimate correlation between stimulus and response. In particular, “output” is now considered to include not only neural impulse and motor behavior (both describable third-personally) but also the subjective experience produced by the brain. The Mind-Body Problem then seemed more amenable to scientific study as a mind-brain problem. Within the functionalist program, it could even seem a computational problem. Despite these advances, there remains a gulf between first-person and third-person perspectives. The question remains: why should there be “anything it is like” to be an organism, or a brain, much less a robot or computer?

AI presents the enticing possibility of replicating human capacities artificially, raising the question of whether consciousness is indispensable for those capacities. Indeed, is consciousness functional? Is there some point to the state of “there being something it is like to be you,” beyond the abilities with which it is associated? In view of natural selection, it would seem obviously so. Yet no one has clarified, to everyone’s satisfaction, exactly what the function(s) of consciousness might be, or its evolutionary advantage. Even so, what most people understand as their consciousness is no doubt treasured by them, for its own sake, as dearly as life itself. This despite the fact that we spend a third of our time asleep and much of our so-called waking time on automatic, as it were, in some degree of mindless inattention.

The potential of AI raises the question of whether all that we value as humanness—which includes our precious consciousness—could in fact be duplicated artificially and even improved upon. What we know as consciousness is a product of a highly parochial biological brain and the result of an inefficient process of natural selection. Perhaps it is far from ideal and from what could potentially be realized artificially. Perhaps our transhumanist machine successors would be better off without consciousness—or at least without specific features inhering in biology, such as suffering and aggression. On the other hand, perhaps the abilities we treasure, and their possible extensions and improvements, do require consciousness. Perhaps there is some function our AI or cyborg descendants would necessarily possess that could be called consciousness. But it might be quite different from the nebulous concept we know. Being them might not much resemble being you or me. And perhaps it should not.

On the variety of possible minds

Especially since the dawn of the space age, people have wondered at the possibility of alien life forms and what sort of minds they might manifest. The potential of artificial intelligence now raises similar questions, to which are added the quest to better understand the minds of other creatures on this planet and even the mentalities of fellow human beings. Finally, understanding mind as a sort of variable that can be tweaked raises the question of the fate of our species and how it might choose its successors, whether biological or artificial.

The search for extraterrestrial intelligence and the quest for artificial intelligence both demand clear concepts of intelligence. Similarly, a general consideration of the range of possible minds demands clarifying mind as a concept. Since terms and associated connotations for this concept vary across languages, English language users should not assume a unified or universal understanding of what constitutes mind. Nevertheless, we can point to some considerations and possible agreements toward a generalized definition.

First, notions of mind and the mental may refer either to observable behavior or to subjective experience—that is, to third-person or first-person descriptions. Mind can be described and imagined either behaviorally (what it is observed to do) or phenomenally (“what it is like” to be that mind). Second, from a materialist perspective, mind must be embodied—that is, it must (1) be physically instantiated, and must also (2) reflect the sort of relations to an environment that govern the existence of biological organisms, which (3) may or may not imply evolution through natural selection. Finally, other minds can only be conceived with the minds that we have. This embroils us in circularity, since one way to grasp the limitations of our own thinking is by placing it in the context of possible other minds, which must be conceived within the limitations of our present thinking.

Mind is often contrasted to matter, the mental to the physical. To arrive at a definition of mind, consider this contrast in terms of intention versus physical causation (i.e., efficient causation as conceived in basic physics). An electrical circuit in an appliance can be described causally, as a flow of electrons, for example. It can also be described intentionally, in terms of the design of the appliance, the purpose it is supposed to serve, etc. Of course, natural organisms are not human artifacts and we do not assume intelligent design. Yet, organisms manifest their own intentionality. In terms of chemical and physical processes within them, and in relation to their environment, they also manifest causality, and their activities can be described on a causal level. However, they are distinguished from “inert” matter precisely by the fact that description on the physical level cannot account completely or adequately for their behavior, let alone for any imagined subjective phenomenology. We cannot reduce their purposive behavior to physical causality, even if we presume that the former must ride on the latter. Causality is necessary for mind, but not sufficient. Though we assume it must have a material basis, mind exhibits intention. Let us then try to clarify the notion of intention.

The concept of intentionality has a long and confusing history in philosophy as “aboutness,” which is essentially a linguistic notion. Since only humans use fully grammatical language, let us reframe intention outside the context of language, as an internal connection made within an autopoietic system for its own purposes—that is, a system which is self-defining, self-creating, self-maintaining. That connection might be a synapse made within a brain or, potentially, a logical connection made electronically within an artificial system. In either case, it is made by the system itself, not by an external observer, programmer, or other agent. If we look at the input-output relation of the system, we see that it cannot readily be explained by simple causality (at least on the level of Newtonian action-reaction). Something more complex is going on inside the system to produce the response. Nevertheless, these remain two alternative ways of looking at the behavior of the system, of inferring what makes it tick. As ways of looking, causality and intentionality each project aspects of the observer’s mentality on the system itself. Yet, apparently, that system has its own purposes and mentality, and we assume this will be the case for artificial minds, as it is for natural ones.

In regard to possible minds, we assume here that physical embodiment is required, even for digital mind. This precludes spirits, ghosts, and gods. If embodiment is understood as a relation of dependency—of an autopoietic system on an environment—it may also preclude a great deal (perhaps all) of AI. A germane question is whether embodiment, thus understood, can be simulated. Can it only arise through a process of selection in the real world, or can that process be computational? To put it another way, can a mind evolve in silico, then be downloaded to a physical system such as a robot? It is a question that raises further questions.

To explore it, let us look at naturally embodied mind which, like its corresponding brain, is the organ of a body, whose primary purpose is the maintenance of that body and the furtherance of its kind—an arrangement that evolved through natural selection. That primary purpose entails a relationship of dependence upon a real environment, so that it serves the organism to monitor that environment in relation to its internal needs. Attention is directed toward internal or external changes. Since deliberate action naturally concerns the external world more than the internal one (which tends to self-regulate more automatically), our natural focus of attention is outward. This gives the misleading impression that we simply dwell in the world as presented to the senses (naïve realism), whereas in fact we interface with it by means of continually updated, internally generated models that reflect the body and its needs and processes. When we think of mind, therefore, we often mean a system for dealing with the world, one which may have a concept of it. We should bear in mind that dealing with the world (or with a concept of it) is fundamentally part of biological self-regulation, which provides the agent’s motivations, values, and premises for action. What would it mean for a computer program (AI) to deal with the world for the purpose of its own self-regulation and maintenance? What would its concept of the world be?

The functionalist outlook holds that an artificial system could instantiate all the essential elements and relations of a natural system. Thus, an artificial body should be possible, and therefore an artificial mind that is a function of it. Must it be physically real, in contrast to being virtual, merely code in a simulation? The natural mind/body is a product of natural selection, which is a wasteful contest of survival over many generations of expendable wetware. (Life depends on death.) Could virtual organisms evolve through a simulation of natural selection—which would entail a real expenditure of energy (running the computer) but no physical destruction as generations passed away—and then be instantiated in physical materials? Can virtual entities acquire real motivation (e.g., to survive)? Can their own state (or anything else) actually matter to them, apart from the welfare of a physical body? What would it mean to care about the state of a virtual body?
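To make concrete what “a simulation of natural selection” might mean computationally, here is a deliberately toy sketch in Python (all names and the fitness function are my own illustrative assumptions, not anything the essay proposes). Virtual “organisms” are selected and recombined over generations; nothing physical is destroyed, only copies cease to be made. Whether anything in such a loop could come to care about its own state is, of course, exactly the open question raised above.

```python
import random

GENOME_LENGTH = 20     # bits per virtual "organism"
POPULATION_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.01

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # Arbitrary stand-in for an "environment": fitness is the number of 1s.
    return sum(genome)

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    point = random.randrange(1, GENOME_LENGTH)
    return a[:point] + b[point:]

population = [random_genome() for _ in range(POPULATION_SIZE)]
for generation in range(GENERATIONS):
    # "Select" the fitter half as parents; the rest are simply not copied forward.
    population.sort(key=fitness, reverse=True)
    parents = population[: POPULATION_SIZE // 2]
    offspring = [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POPULATION_SIZE - len(parents))
    ]
    population = parents + offspring

print("Best fitness after selection:", fitness(max(population, key=fitness)))
```

The energy cost of running such a loop is real, but the “deaths” are only deletions of data, which is one way to pose the essay’s question about whether selection without real stakes could ever produce motivation.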

Apart from such questions, and the project to actually create artificial organisms, another quest is to abandon the re-creation of human being or natural mind as we know it, and go instead for what we have longed for all along. Human beings have never fully accepted their nature as biological creatures. The rejection of human embodiment was traditionally expressed through religion, whether in the transcendent Christian soul or the Buddhist quest for release from incarnation. The desire for a humanly-defined and designed world is the basis of all culture, which serves to create a world apart from nature, both physically and mentally. The rejection of embodiment can now be expressed through technology, by artificially imitating natural intelligence, mind, and even life. What if we bypass the imitation of nature to directly create the being(s) we would ideally have ourselves be?

Whether or not that is feasible, it would be a valuable exercise to imagine what sort of being that should be. We are highly identified with our precious conscious experience, which seems to imply an experiencing self we hope to preserve beyond death. But what if that conscious stream is no more than another dysfunctional aspect of naturally evolved biological life, an accident of cosmic history? Which is more important: to create artificial beings with consciousness like ours or deeply moral ones to represent us in the far future? If we are asking such questions now, might more advanced alien civilizations have already answered them?

What it is like to be

Thomas Nagel’s famous 1974 journal paper asks “What Is It Like to Be a Bat?” The title was a provocatively rhetorical question, based on a double entendre. The notion of “what it is like to be” some particular entity caught on as a way to characterize the subjective point of view—the experience of some given entity or potential mind. Yet the expression is not at all descriptive. Rather, it points to the impossibility of imagining, let alone describing, another creature’s experience. It simply refers you to something presumed similar within your own experience, should you find yourself in the shoes of that entity. It is a cypher, standing paradoxically for an impossible access to another mind’s point of view, which can only be conceived from one’s own point of view, in the terms of one’s own experience.

I suspect the “what it is like” expression caught on precisely because of the limitations of language in dealing with “other minds” and with the difference between first-person and third-person points of view. In other words, it caught on because of the fundamental dilemma posed by consciousness in the first place, which notoriously eludes definition yet perennially attracts new monikers in compensation. The so-called mind-body problem was always a somewhat misleading handle, which might better have been named the mind-matter problem, or the problem of the mental and the physical. This is perhaps why, twenty years after Nagel, Chalmers’ characterization of it as the hard problem of consciousness similarly captured the imagination of philosophers and caught on: out of sheer frustration, and not because any plausible solution was offered. To emphasize the possibility of artificial minds, it has more recently been called the mind-technology problem. The dilemma stands—and not only in regard to other minds. The problem of consciousness remains the mystery of why it is like anything at all to exist.

Certainly, you know what it is like to be yourself, right? But is that truly like something (a matter of comparison) or is the feeling of being you just ineffably what it is? Why is being you like anything at all? Clearly there must be some point to being conscious, to having experience. From an evolutionary perspective, it must be functional, enabling capabilities not otherwise possible. The question makes little sense until you consider the possibility of a version of you that can do everything you can without being conscious. (By definition, there is nothing it would be like to be this so-called zombie.) Such a possibility is now implied and seriously considered through artificial intelligence.

A range of human behaviors also suggests that consciousness is not necessarily critical even for people. (For example: sleepwalking and even sleep-driving!) The very role of conscious attention may be to do itself out of a job: to assimilate novel situations to unconscious programming, as when we must pay attention to learn to play an instrument or a song, or to ride a bicycle, which then becomes “second nature” with practice. Yet, if consciousness is functional, then a fully functional artificial human would necessarily be conscious at least some of the time. Still, that leaves a lot of room for artificial intelligence that is not specifically human, yet is functional in other ways and even superior.

Speaking for myself, at least, I can attest that humans do have experience, an inner life that can be conceived as a show put on by the brain, evidently to help us survive. Some creatures might do quite well without it, just as we imagine that machines and robots might also. This “show” is largely about the world outside the body, but it is always permeated with experience of the body and co-determined by the body’s needs. In that sense, we experience ourselves at the same time we experience the world, and we experience the world in relation to the embodied self. We can scarcely imagine having knowledge of the world except through this show—that is, without perceiving the world in awareness, or without there being something it is like to be doing that. Yet that does not necessarily mean being explicitly self-conscious—aware in the moment of the act of being aware, aware of knowing what you know.

As AI becomes ever more sophisticated at imitating and exceeding human performances, it might track the world in human-like ways or better. However, unless it exists for the sake of a body, as our human minds do, I suspect it could not have experience, consciousness, an inner life. There would not be anything it is like to know what it “knows” of the world. That is not, of course, a verifiable assertion. On the other hand, while it is partly out of political correctness that we assume there is something it is like to be each of us, that assumption has a reasonable basis in our common biology as members of the same species. Whatever we could potentially have in common with a machine rests on the assumption that the “biology” in question could be structurally approximated in some artificial system with an embodied relationship to a real environment. And what is real to us is what we can affect, and be affected by, in such a way that allows us to exist. Is that a relationship that can be simulated?

What we experience and call reality is the brain’s natural simulation, a virtual reality we implicitly believe because otherwise we would likely not be here. While that does not imply that no real world exists outside the brain, it does maddeningly complicate our understanding of the world, since our only access to that world is (circularly) through the brain’s simulation of it! This circumstance, however nonplussing, does not imply that the world we call real could turn out to be a simulation created in some real digital computer, perhaps programmed by superior aliens (or gods?). But neither does it deny it. It merely defers reality to a level up—potentially ad infinitum. More down to earth, I believe (but cannot prove) that real consequences are the basis of consciousness. They cannot be simulated, because simulation is by definition not reality. Embodiment cannot be simulated and is not a relationship to a simulated environment.

The transhumanist idea of our possible “AI successors” brings up the question of what they should be like—and what it would be like, if anything, to be them. One way or another, biology is eventually doomed on this planet. We may have the time and the technological means to create a more durable replacement for humankind—one that could survive the hazards of extended space travel, for example. One question is how much like us (with our destructive and self-defeating foibles) they should be. They could embody the best of human ideals, but which ones? How much individuality versus how much loyalty to one’s tribe, for example, is a question that human cultures have hardly resolved amongst themselves. Notions of modernity, of which transhumanism is a product, seem largely to derive from European societies. We assume that consciousness itself is universal in our species—but with little clear idea of what exactly that entails. Most cultures seem attached to the idea of an enduring self that somehow continues experiencing after death. Should our successors have such a sense of self? Or should they be literally more selfless?

A more pointed question is: to what extent could our successors embody our cherished ideals and still have our sort of consciousness? If what we know as consciousness is a function of our biological selves—determined by genes, natural selection, and the drive for survival—to what extent could a better version of us, lacking these determinants, be “conscious,” with consciousness serving the same purposes for them as it does for us? If what it is like to be us is a function of what we are biologically, could the experience of our AI successors be greatly different? Next to life itself, consciousness seems, anthropocentrically, to be our most precious possession. Is it an essential ingredient of humanness to be preserved, or a liability to jettison?

What we know as consciousness is inseparable from feeling. (It may not seem so, because we are such visual creatures and vision provides a sense of distance and objectivity, both literally and psychologically.) Feeling is a bodily response. Primordially, it evaluates a stimulus or the body’s state in terms of its welfare. That concern of self-interest can be transferred or extended to other entities beyond or besides the individual organism, which is itself a confederation of organs and cells that have given up autonomy for the sake of a larger whole. Yet consciousness seems to be as individual and particular as human bodies are. If we speak of a group mind or national or global consciousness, it is (so far) just a figure of speech.

There is much speculation these days about rogue AI or superintelligence that becomes conscious. These are artifacts that are not intended to replace us but which, it is feared, could nevertheless do so by virtue of their superiority and our dependence on them. The human presence on this planet is part of natural evolution—that is, of what happens simply because it can happen, not by any intent. Life introduced purpose onto the scene, and we are the creature that has honed it to greatest effect. An obvious, if dubious, step for us is to intend our own collective destiny and shape it to our own taste, independently of natural evolution. Culture already expresses this human intention to be independent of nature and physical limitations. But we now have the more specific possibility of re-creating our nature through technology, of being what “we” (the mythical human collective) would like to be rather than what nature or accident dictates. All of this is inevitably conceived within the context of what we naturally are, as products of evolution. That includes the consciousness and sense of self we develop individually and through culture, but which we did not “consciously” design. And which we may fail to understand as long as we remain so identified with it.