Is consciousness a good idea?

As a noun in the English language, ‘consciousness’ suggests an entity or state, which ought to be classifiable ontologically. It is an ambiguous term, however, with several distinct referents. It can mean, for example, wakefulness as opposed to sleep or coma. It can also refer to the contents of that wakeful state of awareness: actual experience in contrast to physical or mental capabilities. Most importantly, consciousness can be described either first-personally or third-personally—from the inside (as the experience of a given mind) or from the outside (as described by another mind which observes associated behavior). The apparent irreconcilability of these perspectives has vexed philosophers over the ages and now challenges scientists as well. The dilemma is traditionally known as the Mind-Body Problem, aka the Hard Problem of Consciousness. The older designation itself is ambiguous, since ‘mind’ eludes precise definition and ‘body’ can mean either the organism concerned or physical reality at large. The more recent designation as “hard” reflects the basic frustration involved and stands as a reminder that the mystery remains unsolved.

The problem, of course, is uniquely an intellectual challenge for human beings, who know themselves to be “conscious.” In one formulation, it is the question of how the physical brain (a third-person concept) can produce first-person experience in its owner. This is not a problem for the owner’s brain, of course, which copiously produces experience on a daily basis, but for philosophers trying to understand how it does that, even when the brains concerned are their own. It is an odd perplexity, since the outward-looking human mind habitually tries to understand things in terms of third-person description—that is, as events in the external world (including neural events in a brain) that can be perceived in common by multiple observers and described in language. But then the question is how physical events in the brain produce the subjective field of view known familiarly as the external world, which includes that brain, which somehow produces that field of view… ad infinitum. We are caught in a loop, trying to understand how the brain produces a subjective first-person point of view at all.

Sheer frustration has led some to deny that there is any real problem—usually by insisting that there are not two ontological categories in play: either there is fundamentally only mind or only matter. Indeed, most philosophical discussions seem to imply that ‘mind’ (another ambiguous English noun, difficult to translate) is accordingly a sort of thing rather than a process that might be better designated with a verb. (Do you mind? Mind your p’s and q’s.) Mental properties such as qualia (“raw feels” like the colours of a rainbow, the tones of sound, the sensations of a pain, the fragrance of a rose) tend to be reified and compared with the physical things and processes with which they are associated: vibrations of light or sound, damage to tissue, the chemistry of the flower’s odor, etc. Qualia are not things, however, not nouns but adjectives that reflect inner actions the organism performs for its own reasons, even when it has no reason or occasion to know that it has such reasons. Here is the problem in a nutshell: actions too are viewed third-personally, as events just happening in the world, while reasons (or intentions or goals) are pursued first-personally by agents. They seem to live in incommensurable domains. Even when the subject is both agent and observer, there seems to be an irreconcilable gulf between the point of view from which things are observed and the things observed from that point of view.

I do not deny the apparent gulf. But it is not a problem of how to place mind and body together in a common framework or ontology—to view them side by side—nor a question of reducing one to the other. It’s rather a problem concerning what we expect of understanding, which seems to require standing apart from (or under, or at arm’s length from) what we hope to grasp; or what we expect of explanation, which seems a matter of making something plain (or plane, as in flattened out to a common level, the wrinkles ironed). The locus from which we view the world cannot itself appear in the panorama visible from that place. And yet we habitually expect it to, because we are still coming to terms with the oddity of our epistemic situation: being both subject and object. I believe that once people get used to the circularity inhering in that situation, the gulf will seem less perplexing.

In the meantime, we must contend with a laxness in language, such that the Oxford English Dictionary lists more than a dozen distinct common meanings or usages for ‘consciousness.’ If the vocabulary is so ambiguous, can thinking about the corresponding actual phenomenon be any less confused? Literally hundreds of distinct theories of consciousness have by now been propounded. A significant portion of disagreement involves talking at cross purposes because of terminology. For example, in philosophy currently, a convention distinguishes ‘phenomenal consciousness’ from ‘access consciousness’. The former refers to actual first-person experience. The latter refers to a capacity to access the contents of the former, which is a third-person behavioral concept even when the same individual is both observer and observed. Language, with its ability to categorize terms and concepts, thus makes it seem that there are kinds of consciousness, raising contrived questions about how they relate.

Another source of disagreement comes from diverging basic philosophical positions—namely some form of materialism versus some form of idealism. The very gulf implied by the Mind-Body Problem inspires fundamental division in how people think about it. This is a second-order effect of the epistemic dilemma facing embodied minds: subjects who are perplexingly also objects. There is no indisputable way to decide between idealism and materialism (to decide whether mind or matter is fundamental and real). These options reflect inclinations upstream from evidence and rational debate—much like political leanings or religious persuasions. Indeed, the Mind-Body Problem lies at the core of the contest between religion and science as competing worldviews.

Is ‘consciousness’ even a coherent concept? Is it useful or more trouble than it is worth? Psychologists ignored it for much of the 20th century, fed up with the vague excesses of 19th century armchair “introspection,” and heady with the practical results of a behaviorist approach that was in tune with the general scientific emphasis on third-person accounts and spatio-temporal description. Indeed, anecdotal first-personal experience is irrelevant to science, except as reports of it in language can be confirmed by others, according to prescribed protocols. Since consciousness is personal experience, it could only be approached scientifically as a “natural phenomenon”—a sort of third-personal object of study, at arm’s length, but which does not fit gracefully within the scientific ontology.

Classical behaviorism was a macroscopic project correlating gross input of stimulus and output of motor behavior in laboratory settings. It could conveniently and productively ignore consciousness. The development of brain science and its refined technologies afforded a microscopic project that is still a form of behaviorism, but which permits a more detailed and intimate correlation between stimulus and response. In particular, “output” is now considered to include not only neural impulse and motor behavior (both describable third-personally) but also the subjective experience produced by the brain. The Mind-Body Problem then seemed more amenable to scientific study as a mind-brain problem. Within the functionalist program, it could even seem a computational problem. Despite these advances, there still remains a gulf between first-person and third-person perspectives. The question remains: why should there be “anything it is like” to be an organism, or a brain, much less a robot or computer?

AI presents the enticing possibility of replicating human capacities artificially, raising the question of whether consciousness is indispensable for those capacities. Indeed, is consciousness functional? Is there some point to the state of “there being something it is like to be you,” beyond the abilities with which it is associated? In view of natural selection, it would seem obviously so. Yet, no one has clarified to everyone’s satisfaction exactly what the function(s) of consciousness might be, or what evolutionary advantage it confers. Even so, what most people understand as their consciousness is no doubt treasured by them, for its own sake, as dearly as life itself. This despite the fact that we spend a third of our time asleep and much of our so-called waking time on automatic, as it were, in some degree of mindless inattention.

The potential of AI raises the question of whether all that we value as humanness—which includes our precious consciousness—could in fact be duplicated artificially and even improved upon. What we know as consciousness is a product of a highly parochial biological brain and the result of an inefficient process of natural selection. Perhaps it is far from ideal and from what could potentially be realized artificially. Perhaps our transhumanist machine successors would be better off without consciousness—or at least without specific features inhering in biology, such as suffering and aggression. On the other hand, perhaps the abilities we treasure, and their possible extensions and improvements, do require consciousness. Perhaps there is some function our AI or cyborg descendants would necessarily possess that could be called consciousness. But it might be quite different from the nebulous concept we know. Being them might not much resemble being you or me. And perhaps it should not.

On the variety of possible minds

Especially since the dawn of the space age, people have wondered at the possibility of alien life forms and what sort of minds they might manifest. The potential of artificial intelligence now raises similar questions, to which is added the quest to better understand the minds of other creatures on this planet and even the mentalities of fellow human beings. Finally, understanding mind as a sort of variable that can be tweaked raises the question of the fate of our species and how it might choose its successors, whether biological or artificial.

The search for extraterrestrial intelligence and the quest for artificial intelligence both demand clear concepts of intelligence. Similarly, a general consideration of the range of possible minds demands clarifying mind as a concept. Since terms and associated connotations for this concept vary across languages, English language users should not assume a unified or universal understanding of what constitutes mind. Nevertheless, we can point to some considerations and possible agreements toward a generalized definition.

First, notions of mind and the mental may refer either to observable behavior or to subjective experience—that is, to third-person or first-person descriptions. Mind can be described and imagined either behaviorally (what it is observed to do) or phenomenally (“what it is like” to be that mind). Second, from a materialist perspective, mind must be embodied—that is, it must (1) be physically instantiated, and must also (2) reflect the sort of relations to an environment that govern the existence of biological organisms, which (3) may or may not imply evolution through natural selection. Finally, other minds can only be conceived with the minds that we have. This embroils us in circularity, since one way to grasp the limitations of our own thinking is by placing it in the context of possible other minds, which must be conceived within the limitations of our present thinking.

Mind is often contrasted to matter, the mental to the physical. To arrive at a definition of mind, consider this contrast in terms of intention versus physical causation (i.e., efficient causation as conceived in basic physics). An electrical circuit in an appliance can be described causally, as a flow of electrons, for example. It can also be described intentionally, in terms of the design of the appliance, the purpose it is supposed to serve, etc. Of course, natural organisms are not human artifacts and we do not assume intelligent design. Yet, organisms manifest their own intentionality. In terms of chemical and physical processes within them, and in relation to their environment, they also manifest causality, and their activities can be described on a causal level. However, they are distinguished from “inert” matter precisely by the fact that description on the physical level cannot account completely or adequately for their behavior, let alone for any imagined subjective phenomenology. We cannot reduce their purposive behavior to physical causality, even if we presume that the former must ride on the latter. Causality is necessary for mind, but not sufficient. Though we assume it must have a material basis, mind exhibits intention. Let us then try to clarify the notion of intention.

The concept of intentionality has a long and confusing history in philosophy as “aboutness,” which is essentially a linguistic notion. Since only humans use fully grammatical language, let us reframe intention outside the context of language, as an internal connection made within an autopoietic system for its own purposes—that is, a system which is self-defining, self-creating, self-maintaining. That connection might be a synapse made within a brain or, potentially, a logical connection made electronically within an artificial system. In either case, it is made by the system itself, not by an external observer, programmer, or other agent. If we look at the input-output relation of the system, we see that it cannot readily be explained by simple causality (at least on the level of Newtonian action-reaction). Something more complex is going on inside the system to produce the response. Nevertheless, these remain two alternative ways of looking at the behavior of the system, of inferring what makes it tick. As ways of looking, causality and intentionality each project aspects of the observer’s mentality on the system itself. Yet, apparently, that system has its own purposes and mentality, and we assume this will be the case for artificial minds, as it is for natural ones.

In regard to possible minds, we assume here that physical embodiment is required, even for digital mind. This precludes spirits, ghosts, and gods. If embodiment is understood as a relation of dependency—of an autopoietic system on an environment—it may also preclude a great deal (perhaps all) of AI. A germane question is whether embodiment, thus understood, can be simulated. Can it only arise through a process of selection in the real world, or can that process be computational? To put it another way, can a mind evolve in silico, then be downloaded to a physical system such as a robot? It is a question that raises further questions.

To explore it, let us look at naturally embodied mind which, like its corresponding brain, is the organ of a body, whose primary purpose is the maintenance of that body and the furtherance of its kind—an arrangement that evolved through natural selection. That primary purpose entails a relationship of dependence upon a real environment, so that it serves the organism to monitor that environment in relation to its internal needs. Attention is directed toward internal or external changes. Since deliberate action naturally concerns the external world more than the internal one (which tends to self-regulate more automatically), our natural focus of attention is outward. This gives the misleading impression that we simply dwell in the world as presented to the senses (naïve realism), whereas in fact we interface with it by means of continually updated, internally generated models that reflect the body and its needs and processes. When we think of mind, therefore, often we mean a system for dealing with the world, one that may have a concept of it. We should bear in mind that dealing with the world (or with a concept of it) is fundamentally part of biological self-regulation, which provides the agent’s motivations, values, and premises for action. What would it mean for a computer program (AI) to deal with the world for the purpose of its own self-regulation and maintenance? What would its concept of the world be?

The functionalist outlook holds that an artificial system could instantiate all the essential elements and relations of a natural system. Thus, an artificial body should be possible, and therefore an artificial mind that is a function of it. Must it be physically real, in contrast to being virtual, merely code in a simulation? The natural mind/body is a product of natural selection, which is a wasteful contest of survival over many generations of expendable wetware. (Life depends on death.) Could virtual organisms evolve through a simulation of natural selection—which would entail a real expenditure of energy (running the computer) but no physical destruction as generations passed away—and then be instantiated in physical materials? Can virtual entities acquire real motivation (e.g., to survive)? Can their own state (or anything else) actually matter to them, apart from the welfare of a physical body? What would it mean to care about the state of a virtual body?
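To make concrete what a “simulation of natural selection” might involve, here is a minimal, purely illustrative sketch in the style of a standard genetic algorithm. Every detail (the fitness criterion, the population size, the mutation rate) is an assumption invented for this example; the thing to notice is that “survival” here is bookkeeping imposed by the programmer, not anything at stake for the virtual organisms themselves.

```python
import random

# Minimal, illustrative sketch of simulated natural selection: a population of
# bit-string "organisms" evolves toward a fitness criterion chosen by the
# programmer, not by the organisms themselves.

GENOME_LENGTH = 20
POPULATION = 50
GENERATIONS = 100
MUTATION_RATE = 0.01

def fitness(genome):
    # Externally imposed measure of "success": the count of 1-bits.
    return sum(genome)

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def reproduce(parent_a, parent_b):
    # Single-point crossover followed by mutation.
    cut = random.randrange(GENOME_LENGTH)
    return mutate(parent_a[:cut] + parent_b[cut:])

population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION)]

for generation in range(GENERATIONS):
    # Selection: the fitter half "survives" (nothing is physically destroyed;
    # entries are simply dropped from a list) and reproduces.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POPULATION // 2]
    offspring = [reproduce(random.choice(survivors), random.choice(survivors))
                 for _ in range(POPULATION - len(survivors))]
    population = survivors + offspring

print("best fitness after evolution:", fitness(max(population, key=fitness)))
```

However far such a process were scaled up, the questions above remain: whether anything within it could acquire real motivation, and whether selection as bookkeeping differs in kind from selection paid for in expendable bodies.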

Apart from such questions, and the project to actually create artificial organisms, another quest is to abandon the re-creation of human being or natural mind as we know it, and go instead for what we have longed for all along. Human beings have never fully accepted their nature as biological creatures. The rejection of human embodiment was traditionally expressed through religion, whether in the transcendent Christian soul or the Buddhist quest for release from incarnation. The desire for a humanly-defined and designed world is the basis of all culture, which serves to create a world apart from nature, both physically and mentally. The rejection of embodiment can now be expressed through technology, by artificially imitating natural intelligence, mind, and even life. What if we bypass the imitation of nature to directly create the being(s) we would ideally have ourselves be?

Whether or not that is feasible, it would be a valuable exercise to imagine what sort of being that should be. We are highly identified with our precious conscious experience, which seems to imply an experiencing self we hope to preserve beyond death. But what if that conscious stream is no more than another dysfunctional aspect of naturally evolved biological life, an accident of cosmic history? Which is more important: to create artificial beings with consciousness like ours or deeply moral ones to represent us in the far future? If we are asking such questions now, might more advanced alien civilizations have already answered them?

What it is like to be

Thomas Nagel’s famous journal paper of 1974 asks “What Is It Like to Be a Bat?” It was a provocative, essentially rhetorical question, based on a double entendre. The notion of “what it is like to be” some particular entity caught on as a way to characterize the subjective point of view—the experience of some given entity or potential mind. Yet, in effect, the expression is not at all descriptive. Rather, it points to the impossibility of imagining, let alone describing, another creature’s experience. It simply refers you to something presumed similar within your own experience, should you find yourself in the shoes of that entity. It is a cypher, standing paradoxically for an impossible access to another mind’s point of view, which can only be conceived from one’s own point of view, in the terms of one’s own experience.

I suspect the what it is like expression caught on precisely because of the limitations of language to deal with “other minds” and the difference between first-person and third-person points of view. In other words, because of the fundamental dilemma posed by consciousness in the first place, which notoriously eludes definition yet perennially attracts new monikers in compensation. The so-called mind-body problem was always a somewhat misleading handle, which might better have been named the mind-matter problem or the problem of the mental and the physical. This is perhaps why, twenty years after Nagel, Chalmers’ characterization of it as the hard problem of consciousness similarly captured the imagination of philosophers and caught on: out of sheer frustration and not because any plausible solution was offered. To emphasize the possibility of artificial minds, it has more recently been called the mind-technology problem. The dilemma stands—and not only in regard to other minds. The problem of consciousness remains the mystery of why it is “like anything” to exist.

Certainly, you know what it is like to be yourself, right? But is that truly like something (a matter of comparison) or is the feeling of being you just ineffably what it is? Why is being you like anything at all? Clearly there must be some point to being conscious, to having experience. From an evolutionary perspective, it must be functional, enabling capabilities not otherwise possible. The question makes little sense until you consider the possibility of a version of you that can do everything you can without being conscious. (By definition, there is nothing it would be like to be this so-called zombie.) Such a possibility is now implied and seriously considered through artificial intelligence.

A range of human behaviors also suggests that consciousness is not necessarily critical even for people. (For example: sleepwalking and even sleep-driving!) The very role of conscious attention may be to do itself out of a job: to assimilate novel situations to unconscious programming, as when we must pay attention to learn to play an instrument or a song, or to ride a bicycle, which then becomes “second nature” with practice. Yet, if consciousness is functional, then a fully functional artificial human would necessarily be conscious at least some of the time. Still, that leaves a lot of room for artificial intelligence that is not specifically human, yet is functional in other ways and even superior.

Speaking for myself, at least, I can attest that humans do have experience, an inner life that can be conceived as a show put on by the brain, evidently to help us survive. Some creatures might do quite well without it, just as we imagine that machines and robots might also. This “show” is largely about the world outside the body, but is always permeated with experience of the body, and is co-determined by the body’s needs. In that sense, we experience ourselves at the same time we experience the world, and we experience the world in relation to the embodied self. We can scarcely imagine having knowledge of the world except through this show—that is, without perceiving the world in awareness, or without there being something it is like to be doing that. Yet, that does not necessarily mean being explicitly self-conscious—aware in the moment of the act of being aware, aware of knowing what you know.

As AI becomes ever more sophisticated at imitating and exceeding human performances, it might track the world in human-like ways or better. However, unless it exists for the sake of a body, as our human minds do, I suspect it could not have experience, consciousness, an inner life. There would not be anything it is like to be knowing what it “knows” of the world. Of course, that is not a verifiable assertion. On the other hand, while it is partly out of political correctness that we assume there is something it is like to be each of us, that assumption has a reasonable basis in our common biology as members of the same species. Whatever we could potentially have in common with a machine is based on the assumption that the “biology” in question could be structurally approximated in some artificial system with an embodied relationship to a real environment. And what is real to us is what we can affect, and be affected by, in such a way that allows us to exist. Is that a relationship that can be simulated?

What we experience and call reality is the brain’s natural simulation, a virtual reality we implicitly believe because otherwise we would likely not be here. While that does not imply that no real world exists outside the brain, it does maddeningly complicate our understanding of the world since our only access to that world is (circularly) through the brain’s simulation of it! This circumstance, however nonplussing, does not imply that the world we call real could turn out to be a simulation created in some real digital computer, perhaps programmed by superior aliens (or gods?). But neither does it deny it. It merely defers reality to a level up—if potentially ad infinitum. More down to earth, I believe (but cannot prove) that real consequences are the basis of consciousness. They cannot be simulated because simulation is by definition not reality. Embodiment cannot be simulated and is not a relationship to a simulated environment.

The transhumanist idea of our possible “AI successors” brings up the question of what they should be like—and what it would be like, if anything, to be them. One way or another, biology is eventually doomed on this planet. We may have time and the technological means to create a more durable replacement for humankind—one that could survive the hazards of extended space travel, for example. One question is how much like us (with our destructive and self-defeating foibles) they should be. They could embody the best of human ideals, but which ones? How much individuality versus how much loyalty to one’s tribe, for example, is a question that human cultures have hardly resolved amongst themselves. Notions of modernity, of which transhumanism is a product, seem largely to derive from European societies. We assume that consciousness itself is universal in our species—but with little clear idea of what exactly that entails. Most cultures seem attached to the idea of an enduring self that somehow continues experiencing after death. Should our successors have such a sense of self? Or should they be literally more selfless?

A more pointed question is: to what extent could our successors embody our cherished ideals and still have our sort of consciousness? If what we know as consciousness is a function of our biological selves—determined by genes, natural selection, and the drive for survival—to what extent could a better version of us lacking these determinants be “conscious,” if consciousness were to serve the same purposes for them as it does for us? If what it is like to be us is a function of what we are biologically, could the experience of our AI successors be greatly different? Next to life itself, consciousness seems anthropocentrically to be our most precious possession. Is it an essential ingredient of humanness to be preserved, or a liability to jettison?

What we know as consciousness is inseparable from feeling. (It may not seem so, because we are such visual creatures and vision provides a sense of distance and objectivity, both literally and psychologically.) Feeling is a bodily response. Primordially, it evaluates a stimulus or the body’s state in terms of its welfare. That concern of self-interest can be transferred or extended to other entities beyond or besides the individual organism, which is itself a confederation of organs and cells that have given up autonomy for the sake of a larger whole. Yet consciousness seems to be as individual and particular as human bodies are. If we speak of a group mind or national or global consciousness, it is (so far) just a figure of speech.

There is much speculation these days about rogue AI or superintelligence that becomes conscious. These are artifacts that are not intended to replace us but which, it is feared, nevertheless could do so by virtue of their superiority and our dependence on them. The human presence on this planet is part of natural evolution—that is, things which happen simply because they can happen and not by some intent. Life introduced purpose onto the scene, and we are the creature that has honed it to greatest effect. An obvious, if doubtful, step for us is to intend our own collective destiny and shape it to our own taste, independently of natural evolution. Culture already expresses this human intention to be independent of nature and physical limitations. But we now have the more specific possibility to re-create our nature through technology, to be what “we” (the mythical human collective) would like to be rather than what nature or accident dictates. All of this is inevitably conceived within the context of what we naturally are, as products of evolution. That includes the consciousness and sense of self we develop individually and through culture, but which we did not “consciously” design. And which we may fail to understand as long as we remain so identified with it.

Science as a cognitive strategy

Science is a form of human cognition. It extends natural sensory-based perception, augmenting the senses with instruments and reason. Ostensibly, it is a quest for the truth of nature underlying changing appearances: a search for laws which succinctly express observed regularities and for fundamental entities and their properties. But science must also be understood as a biological strategy of a particular creature to cope with its environment. The study of nature is a human undertaking, performed for reasons that involve the characteristics of human agents as well as those of the world studied. It is no more able than natural cognition to perceive the world as it “really” is apart from any perceiver. (For example, how the world “looked” before there were eyes to see.) Science may be more advantageous than ordinary perception for some purposes—like making money, technology, and war. Even the most seemingly disinterested research is often ultimately used to control nature or other people.

The truths sought by science are no more independent of the inquirer than the truths sought in ordinary cognition. Both ultimately are survival strategies: we see in ways that allow us to live. Of course, normal perception seems to us a transparent window on the world, which we take for granted. Yet, we know that it is a product of the nervous system, as much shaped by the biology and needs of the organism as by the external world. We must surmise that scientific cognition is likewise a function of the observer as much as of the world observed.

As a form of cognition, science focuses on a world it presumes to exist independently of itself, yet is haunted by the same ambiguity that troubles human consciousness generally: the doubtful relationship between appearance and reality. Scientific objectivity aims for a god’s-eye perspective. Yet, all description is necessarily from the point of view of the embodied observer. Objective description skirts acknowledging the observer’s subjectivity. Yet, all observers stand in a first-person epistemic relation to the world—whether through their natural sensory-motor instrumentation or via external devices that extend human agency. Science has developed protocols to avoid the idiosyncrasies of individual observers. But has it transcended the biases of homo sapiens, of particular cultures, or even of this generation of observers?

Science is not a matter of passive observation. It actively intervenes in nature—through controlled experiment, obviously—but also by imposing theoretical models on gathered data and by imposing resulting technology on the natural world. The model may dictate the sort of data sought. The presence of technology and humanly-defined environments changes the planet. We can no longer study nature in the raw, but only nature transformed by us in thought and in deed. Similarly, natural cognition actively intervenes, though without our normally realizing it. Our ideas about reality (including scientific ideas) transparently shape our experience.

The relation of the scientific model to the real world cannot simply be taken for granted in the way that the unconscious model is in ordinary experience, which has been informally “tested” through generations of adaptation. The scientific model must be formally tested in experiment, which is the whole point of doing science. This is hardly straightforward, however, since experiments yield their results in test situations that are already prescribed by theory. The experiment is effectively a physical realization of the theoretical model. That is rather like building a machine whose parts and operations are believed to be analogous to some natural process or system (in other words, a simulation). If the machine works, then the model is presumed to be an accurate representation of reality. However, a machine does not need to reflect reality in order to function. It only needs to be consistent within itself.

The parallel between scientific and ordinary cognition works both ways. We can learn something about natural perception by looking at scientific method. Helmholtz’s 19th-century idea of “unconscious inference” intuited that brain processes resemble formal reasoning. Brain processing is now understood as a form of computation. In particular, we can begin to account for the miracle of conscious experience by putting ourselves in the place of the brain as an agent, like ourselves, with a point of view. It is not merely a mechanical system operating passively on cause and effect, but a self-programming system. It is programmer as well as computer.

The epistemic circumstance of the scientist mimics that of the brain, sealed inside the skull. Both situations demand radical inference. Just as the brain relies on the input of receptors to infer the nature of the real world, so the scientist relies on instrument readings. The brain uses unconscious perceptual models, according to the body’s needs and goals. Scientists consciously model observed phenomena, according to their goals. The brain’s unconscious perceptual models are reliable to the degree they enable survival at the individual, group, or species level. By the same token, scientific models, like other human practices, should be regarded not only for their truth value but also for their ultimate contribution to human well-being and prospects. “Good” science is not only science that is done properly but which also supports a human future.
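As a loose illustration of that parallel (a toy example of my own, not anything claimed above), consider an observer that never sees a hidden quantity directly but only noisy readings of it, and keeps a running model of that quantity; the readings could equally stand for receptor signals or instrument outputs. The running average below is, of course, a crude stand-in for the far richer, goal-laden modeling that brains and scientists actually do.

```python
import random

# Toy illustration of inference from indirect evidence: a hidden quantity is
# never observed directly; only noisy "readings" of it are available, and the
# observer maintains a running estimate (a rudimentary model) of it.

random.seed(0)
hidden_state = 3.7        # the "real world" value, inaccessible directly
noise = 0.5               # spread of the readings around the hidden value

estimate, weight = 0.0, 0.0   # the observer's model starts empty
for step in range(100):
    reading = random.gauss(hidden_state, noise)  # receptor or instrument reading
    weight += 1.0
    estimate += (reading - estimate) / weight    # running average as crude inference

print(f"inferred value {estimate:.2f} from noisy readings of hidden value {hidden_state}")
```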

If we look deeper than the myth of science as detached, objective knowledge—a modern creation story—we recognize its social commitment to provide a certain kind of practical empowerment. Then science appears to form an integral part of the general management of society, as the conceptual and technological extension of human powers, both active and cognitive. As our modern interface with nature, science should be integral with the social planning that necessarily involves that interface. While that is a double-edged sword, wariness of mad scientists should be tempered by wariness of mad political leaders.

Like ordinary cognition, science focuses on what it can do, the scope of which should at least permit us to survive. Far from a divine revelation, or an open window on objective reality, science is an unfinished collective human enterprise. It provides a model of international cooperation and a method for achieving consensus. Science is itself an experiment. For that reason, it is important to recognize its strengths and its limitations—in particular, to value it as a human construct with potential either to unify humanity and ensure its survival or to hasten its extinction.

Between the rhythm and the blues

I am not an AM radio person. I detest the motor-mouth babble that often fills radio silence with what is nominally intended to be a comforting human presence—to help people evade the emptiness sometimes called the blues. That goes for most pop music too. Is this just old age ranting about the next generation? Or is there something more to such disaffection? Informative talk shows do exist and so does deeply moving music. But conventional drivel seems to dominate the air waves today—and perhaps all of modern life.

Digital electronics facilitates the facile. For example, it makes it easy for a musician to create or find a background rhythm section to set going automatically, and then compose or improvise on top of that. But why exactly has that become such a formula? Why, indeed, do rhythm and repetition dominate popular music? Why must everything have a steady beat?

I am not a musician. But I see that life and meaning are organized between poles of chaos and boredom. We cannot live in a completely unpredictable environment, which would be overwhelming. Nor can we live well with total monotony, day in and day out. There is a happy medium, a balance of order and disorder that captures interest.

On the other hand, the modern human environment has become hugely automated, even in the music industry. Machines, mechanized systems, and algorithms have taken over every aspect of life. What machines do perfectly is to repeat the same thing over and over exactly. Isn’t that what rhythm is? Isn’t that what routine is? What algo-rhythms do is make things predictable. But isn’t that what boredom is? The balance is hardly “medium” and perhaps not so happy.

Certainly, there is order in nature. There are natural cycles, natural rhythms. But there is also variation, unpredictability, uncertainty, randomness. Life is precarious—a fact that humans have never much cared for. From the earliest times, we have striven to make the human world as exempt as we can from the contingencies of the natural world. Half of humanity now live in man-made environments called cities. But even in the village of ten thousand years ago, people found ways to order their surroundings, to create culture, and to distinguish themselves from their animal cousins forced to live in raw nature. There was face-painting and scarification, for example, and other forms of body decoration. There were elaborate customs and rules of behavior, as well as technologies like cooking. There was ritual and dance. There was rhythm.

The body has its natural rhythms, of course. The beating heart may be the original drum. But neither breathing nor heart rate is constant or regular. They largely elude deliberate control. The rhythm patterns of drumming, in contrast, are intentionally regular or varied. We had the idea of perfect repetition long before we created the ideal repetitions of machines and engines. The clock is the ideal metronome. Mechanization simply abstracts and automates natural rhythms.

Rhythm is an obvious way to coordinate group movements, as in dance—or for military marching and troop movements, as in war. Ethnic dances often involve an ensemble doing the same steps in sync, as do modern line dances and many choreographies. Modern dance, whether choreographed or improvised, responds to and interprets the music. Music is the common reference to keep everyone together, for which the drum and the beat play a key timing role. Even in social dancing, couples stay in step with each other through the music.

Music and dance may have arisen together, but music can be produced and appreciated without reference to movement. We sit deliberately still during classical concerts, for example, indicating that we expect the music to do something to us besides induce a sort of reflex physical response. When we say “That piece moved me deeply,” we don’t mean it literally. Rather, by “deeply,” we mean that something inside responds in a way that is precisely not habitual or conditioned. It is unexpected, the opposite of cliché, a welcomed disturbance or awakening, which is interior and not dissipated in physical movement.

No matter how poetic or evocative the lyrics, to be moving in that way, popular music must work against a disadvantage when tied to the baseline of a rhythm section, a clichéd downbeat that automatically induces movement rather than heartfelt contemplation. We expect to dance at dance concerts—whether rock and roll, country western, blues, etc.—where the beat is fundamental to the impulse to move, and the whole point is to enjoy that. But concerts of such music often take place in theatre venues where, as in the classical concert, there is no place to dance. Then there is a conflict between the physical impulse and the setup of the venue. A similar situation exists when such music is played on the radio. We are invited to listen, as though for an inner meaning that can stimulate us psychologically, to sounds that invite us to move like automatons when movement may be restricted, for example while driving a car. What does that conflict do?

Culture is founded on sublimation, which is a transformation of energy from one form to another. According to Freud, that often involves repression. The impulse to move, for example, may sometimes have to be suppressed, as when you are confined to your seat and there is no place to get up and move. You may find a compromise, such as tapping your foot to the beat. But why put yourself in such an untenable situation? More importantly, does repression in itself lead to the sublime? Does the relentless beat help you toward a higher vision or does it just exercise your muscles while dulling your mind?

What, indeed, is the purpose of art—in this case music? Many people will balk at the notion that it should serve any purpose and not simply be its own worthwhile end: art for art’s sake. Or that it should serve a singular purpose, as it more or less did in the Christian Middle Ages. Art, as a general phenomenon, now seems to be intentionally divergent, to resist definition altogether. At its best, it seeks to be playful, even rebellious, to defy category. That seems to fulfill a definite social function, as a counterbalance to the over-rationalism and pragmatism of our age, the complement of science in particular. Yet, popular music is singularly homogeneous, tied to the obligatory beat. Far from being transformative (as its exponents may hope), it is a numbing pablum that serves to keep everything the same. You can hardly escape it in public places, such as restaurants and stores, where the staff may prefer it because it energizes them to make it through a shift of fundamentally unsatisfying drudgery. But isn’t that numbing?

Art can be transformative—if the intention and the conditions are right. Music can “move” us—out of our conditioning instead of further into it. It can only break our habits, however, by breaking its own. It must surprise us by defeating expectation. Our anticipations are strengthened with every expected mechanical downbeat and musical cliché to which we have been conditioned, and which we may rely upon for their reassuring predictability. The listener, like the musician, has a moral choice between the mundane and the sublime.

It might seem that randomly generated sounds would at least defeat any anticipation one could have about the next note. But there are good reasons why noise is not interesting. For, it is not the absence of intention that inspires us, but its perfection. Sublimation means deliberately making sublime, which is a collaboration between the artist and the audience. Both must invite it. If you hope to be “moved” by music and not simply made to move, you must seek out that sort of experience and the music and art that have that effect.

Everything conventional and familiar serves to keep us in the middle zone of mediocrity, between bludgeoning regularity and the blues of the structureless void. There’s always the risk that confronting the sublime leads to an encounter with the depressive edge of one’s comfort zone, when the supports of the familiar are absent. Music (and art) outside that captive zone does exist, but is rare. It is calculated to throw you off your perch. It probably doesn’t have a down beat or induce frenetic movement. More likely it brings a state of inner stillness and soul-searching. If welcomed, such an experience might be called sacred.

On scholarship

It is reasonable that scholarly writing should refer to a literature of other scholars. Thinking is a communal effort. Ideas that are not grounded in common current understandings may have little appeal or relevance to one’s fellows, and may be rejected as crazy, incomprehensible, or irrelevant. On the other hand, ideas that do no more than reflect the current consensus are unoriginal and dull. Just as there is a zone of comfort between overwhelm and boredom, so there is a zone of optimal meaningfulness between the grandiose and the trivial.

Unfortunately, in the world of academe, the zone is skewed toward the trivial by specialization and fragmentation into siloed intellectual communities, which strive to say ever more about ever less. Degree candidates are encouraged not to take on theses with overly sweeping themes. Journal articles must couch any proposal in a discussion of the current literature in the field—which means who said what about who said what. Especially in philosophy, arguments are weighed more against the arguments of others than against their own plausibility. The implicit focus is on attacking and defending positions, on opinion and debate more than truth. Philosophy is perhaps the most subjectified of the humanities, having ceded physical reality to science. Yet, in this highly subjectified era, it is not just in philosophy that claims of truth are suspect and frankly gauche.

Here, in contrast, is a sweeping claim: it is good that culture has evolved generally toward increasing subjectivism, because it is necessary for increasing consciousness. However, the dependency of consciousness on subjectivity reflects a dialectical cycle of two opposed tendencies: what could be called the realizing and de-realizing faculties. Subjectivism manifests the latter. If it were the only game in town, or carried to the extreme, we would all be naïve idealists, who believe everything is no more than opinion or preference and nothing is real. That is to say: there is no external reality (such as nature) to impose constraints on human experience. While that is patently false, so is the belief that, in the face of absolute truth or the scientific reality of nature, human opinion and experience are irrelevant. Neither extreme works separately, but together the two “faculties” are in fact productive.

Science is perhaps our best example of how realizing and de-realizing work together. Kuhn famously described their dialectical interaction in a political metaphor: the stages of a revolution. (Revolution connotes a drastic and irrevocable change; but literally the word implies return to a prior state, a cycle.) A theory is a heroic assertion, a creative act of realization, in which disparate elements are encompassed to make new sense. It naturally invites testing and criticism, in which both logical consistency and fit with reality are assessed. To the degree it works, the theory becomes a new “paradigm,” guiding the direction of research. This leads to a productive phase of working out the implications of the theory in detail, which is more bureaucratic than creative. It consists of relatively mundane operations compared to the dramatic initial breakthrough. The research program inevitably uncovers ever more discrepancies with the paradigm until it seems better to abandon it. That phase of de-realizing sets the stage for a new theory.

A similar cycle occurs in ordinary perception. The natural outward focus is on the external world, which we normally (and with good reason) experience as real. What we don’t experience is the unconscious act of making it seem real. But what the mind can do it can sometimes undo. We have the further ability to de-realize perception by experiencing it as perception rather than as reality, as internal or subjective rather than objective. (At first you see the clump of dust as a spider, but then quickly de-realize the mistaken appearance.) That is the role of subjectivity: to bracket, question or deconstruct what the realizing faculty constructs. Thus, creation and destruction play complementary dialectical roles in cognition. Since unrestrained creativity in the hands of technological capability is dangerous, subjectivism is healthy. Yet, it can go too far.

It is easy to forget that an epistemic cycle is involved, and to imagine that scholarship is no more than a sophisticated competition among opinions, on the one hand, or a naive assertion of truth, on the other. We are rightly suspicious of absolute truth; but without an ideal of truth we are left without real landmarks, to wander in a deconstructed postmodern wilderness. In the extreme, scholarship then is no more than scholasticism, as it was in the pre-scientific era when speculation referred only to the imaginings of other thinkers and not to the real world of nature or the evidence of the senses. If we seriously believe in an external reality, then we must admit there can be communally recognizable truths about it. The trick is not to fall off on either the side of objectivism or of subjectivism, but to keep their balance in a dialectical tension.

We live in an age of specialization, with many benefits, especially in technology. The downside we must reckon with is the short-sighted vision, the narrow focus, built into specialization. Who could foresee the long-term effects of oil and overpopulation? Answer: in theory, at least, anyone with a sufficiently broad outlook! Which is to say, anyone standing far enough back from the trees to see the forest. Since such foresight is obviously uncommon, or unheeded, our collective perspective resembles that of the ant more than the astronaut. To paraphrase Wordsworth, the details of the world are too much with us.

There is a place for the general and the big picture. Yet, somehow, scholarship is often more skilled at nitpicking and finding fault in minutiae than at seeking good use for sweeping claims, perhaps because we are rightly suspicious of them. We associate the general with the facile, and the simplistic with populist manipulation. We have learned painfully, from scientific method, to seek the truth in details that give the lie to faulty generalizations. That is good practice and as it should be. But it reflects only one side of the cycle—the skeptical de-realizing part. The other side is the willingness to make claims worth trying to shoot down. The worst insult to a scientific theory is that it’s “not even false.” That can mean that it makes no falsifiable claim. But it can also mean it makes no claim worth bothering with.

Scholarship should not consist only in tearing apart the arguments of others, which is a purely defensive strategy. There should also be a proactive intent to embrace the ideas proposed, to try them on for size, to see to what good they may lead. In our over-subjectified world, there is a general suspicion of experts and academe, in favor of easy slogans. Perhaps this reflects a hunger for general and simple truths that can stand out from the dizzying glut of “information” with which we are surrounded. It may also reflect a failure of experts to provide an inspiring vision that can compete with the oversimplifications of demagogues. If academe has abandoned the field of grand truths, who can blame opportunists for moving in to take it over?

Artificial intelligence exacerbates the tendency to subjectivism. Indeed, AI is a product of subjectivism insofar as it realizes the intent to assimilate the objectivity of nature to human artifice. While imitating nature’s creativity is empowering, it also reduces nature’s reality to human terms, so that we are left with nothing but our own subjectivity in which to wallow. In our drive to reduce everything to human artifact, to re-create everything to human taste, we deliberately blur the distinction between real and artificial, objective and subjective. And then we no longer know what is real or true.

Especially when information can be artificially produced, the overwhelming bulk of it tends to be trivial and unrelated to reality. The morass of irrelevant information can only be navigated with a rudder. That means a general sense of reality against which to measure dubious claims. In former times this was called common sense. Reality is what we have (or had) in common—particularly the reality of nature that contains us despite our efforts to contain it intellectually and materially through technology. In this time of extreme social fragmentation, however, there is little held in common. Yet, there remains the elusive possibility of a guiding “sense” based in the objectivity of the natural world. Scholars in the humanities can demonstrate their sense by going beyond the defensive, reductive, and divisive aspects of critical analysis. Far from avoiding grand ideas or hacking them to death, they can seek to ground them responsibly in a vision of reality.

Debunking digital mind

It seems a common belief that consciousness could as well be implemented in silicon as in biology. That possibility is often encapsulated in the expression digital mind. Numerous assumptions underlie it, some of which are misleading and others false. My purpose is not to deny the possibility of artificial mind, but to examine assumptions upon which it crucially hinges. I focus on a particular sin of omission: what is lacking in the theory of digital minds is the relationship of embodiment that characterizes natural minds.

The computational metaphor may be our best tool for understanding mind and consciousness in scientific terms. For one thing, it invokes reason, intention, and agency as explanatory principles instead of efficient cause as understood in the physical sciences. Physicalism by itself has not been able to explain mental phenomena or conscious experience. (It does not even account adequately for observed behavior.) This failure is often referred to as the ‘hard problem of consciousness.’

Yet, despite its appeal, the computational metaphor falls short of acknowledging the origin and grounding of reason, intentionality, and agency in the human organism. Instead, reason and logic have traditionally seemed to transcend physicality, to be disembodied by definition, while agency is the tacit preserve of humans and gods. It would take us far afield to trace the history of that prejudice, so thoroughly engrained in modern thought. Suffice it to say that human beings have long resisted conceiving of themselves as bodies at all, much less as animals.

Reason seemed to Descartes to be the capacity separating humans from the rest of the animal kingdom, and also distinguishing the human mind from the essentially mechanical human body. Religious traditions around the world have proclaimed the essence of the human person to be spiritual rather than material. The very concept of mind emerged in the context of such beliefs. While computation arose to formalize mental operations such as arithmetic and logic, it was quickly intuited that it might capture a far broader range of mental phenomena. Yet, computation rides on logic, not on biology. In contrast, the daily strategies of creatures (including the human creature) certainly are a function of biology.

It was perhaps inevitable, then, that the computational metaphor for mind would ignore embodiment. More importantly, by treating embodiment merely as physical instantiation, it would fail to grasp its nature as a relationship with a context and a history. An abstract system of organization, such as a computer program, is a formalism that can be realized in a variety of alternative physical media or formats. But it is a mistake to think that the organization of a living thing can necessarily be captured in such a formalism, with no regard for how it came to be.

An embodied being is indeed a complexly organized physical system. But embodiment implies something else as well: a certain relation to the world. Every living organism stands in this relationship to the world, entailed by its participation in the system of life we call the biosphere. It is a relationship inherited through natural selection and maintained and refined by the individual organism. The survival mandate implies priorities held by the organism, which reflect its very nature and relation to its environment, and which motivate its behavior. To be embodied is to be an autopoietic system: one that is self-maintaining and self-defining. It pursues its own agenda, and could not naturally have come to exist otherwise. Natural autopoietic systems (organisms) are also self-reproducing, providing the basis of natural selection over generations. No AI or other artifact is yet an autopoietic system, with an embodied relationship to its world.

In principle, a non-organic system could be associated with sentience—and even consciousness—if it implements the sort of computational structures and processes implied in embodiment. Carbon-based neurons or natural biology may not be essential, but organism is. Organism, in that sense, is the internal organization and external orientation involved in an embodied (autopoietic) relation to the world. The question arises, whether and how that organization, and the relationships it implies, could be artificially created or induced. In nature, it arises through natural selection, which has programmed the organism to have priorities and to pursue its own interests. Natural intelligence is the ability to survive. In contrast, artificial intelligence is imbued with the designer’s priorities and goals, which may have nothing to do with the existential interests of the AI itself—of which it has none. Creating an artificial organism is a quite different project than creating an AI tool to accomplish human aims.

Embodiment is a relation of a physical entity to its real environment, in which events matter to it, ultimately in terms of its survival. Can that relationship be simulated? The answer will depend on what is meant by simulation. Computer simulation entails models, not physical entities nor real environments. A simulated organism is virtual, not physical. As an artifact, a model can exhaust the reality of another artifact, since both are products of definition to begin with. But no model can exhaust any portion of natural reality, which is not a product of definition.

The notion of “whole brain emulation” assumes falsely that the brain is a piece of hardware (like a computer), and that the mind is a piece of software (like a computer program). (An emulation is a simulation of another simulation or artifact—such as software or hardware—both of which are products of definition.) The reasoning is that if a true analog of the human mind could “run” on a true analog of the human brain, it too would be conscious. The inference may be valid, but the premises are false.

Despite the detailed knowledge of neural structure that can now be obtained through micro-transection, it is still a model of the brain that results, not a replica. We cannot be sure how the identified parts interact or what details might escape identification. We do no more than speculate on the program (mind) that might run on this theoretical hardware. No matter how detailed, a brain emulation cannot be conscious—if emulation means a disembodied formalism or program. The conceit of brain emulation trades on an assumption of equivalence between neural functioning and computation—completely ignoring that neural functioning occurs in a context of embodiment while computation explicitly does not.

The persistence of this assumption gives rise to transhumanist fantasies such as copying minds, uploading one’s consciousness to cyberspace, or downloading it into alternative bodies—as though the software were completely separable from the hardware. It gives rise to absurd considerations such as the moral standing and ethical treatment of digital minds, or the question of how to prevent “mind crime”—that is, the abuse by powerful AIs of conscious sub-entities they might create within themselves.

The seductiveness of the computational metaphor is compounded by the ambiguity of mental terms such as ‘consciousness,’ ‘mind,’ ‘sentience,’ ‘awareness,’ ‘experience,’ etc.—all of which can be interpreted from a behavioral (third-person) point of view as well as from the obvious first-person point of view. The two meanings are easily conflated, so that the computational substrate, which might be thought to explain or produce a certain observable behavior, is also assumed to account for an associated but unobservable inner life. This assumption dubiously suggests that present or imminent AI might be conscious because it manifests behavior that we associate with consciousness.

Here we must tread carefully, for two reasons. First, our only means of inferring the subjective life of creatures (natural or artificial) is to observe their behavior—which is no less true of fellow human beings. The conflation works both ways: we assume sentience where behavior seems to indicate it; and we empathetically read into behavior possible subjective experience based on our own sentience. As part of the social contract, and because we sincerely believe it, we deal with other people as though they have the same sort of inner life as we do. Yet, because we can only ever have our own experience, we are at liberty to doubt the subjective life of other creatures or objects, and certainly of computer programs. This is poorly charted territory, in part because of long-standing human chauvinism, on the one hand, and superstitious panpsychism, on the other. It is just as irrational to assume that bits of code are sentient as to assume that rocks or clocks are.

The second reason is the difficulty of identifying “behavior” at all. Language is deeply at fault, for it is simply not true that a rose is a rose is a rose or that an airplane “flies” like a bird. We think in categories that obliterate crucial distinctions. A “piece” of behavior may be labelled in a way that ignores context and its meaning as the action of an embodied agent—that is, its significance to the agent itself, ultimately in the game of survival. Thus, the simulated pitching of a device that hurls practice baseballs only superficially resembles the complex behavior of the human baseball player.

Relationships and patterns are reified and formalized in language, taken out of context, and assumed transferable to other contexts. Indeed, that is the essence of text as a searchable document (and a computer program is ultimately a text). This is one of the pitfalls of abstraction and formalism. An operation in a computer program may seem to be “the same” as the real-world action it simulates; but they are only the same in human eyes that choose to view them so by glossing over differences. AIs are designed to mimic aspects of human behavior and thought. That does not mean that they replicate or instantiate them.

Digital mind may well be possible—but only if it is embodied as an autopoietic system, with its own purposes and the dependent relationship to a physical environment that this implies. Nothing short of that deserves the epithet ‘mind.’ The idea of immortalizing one’s own consciousness in a digital copy is fatuous for numerous reasons. On the other hand, the prospect of creating new digital minds is misguided and dangerous for humanity. Those fascinated by the prospect should examine their motives. Is it for commercial gain or personal status? Is it to play God? We should seek to create digital tools for human use, not digital minds that would be tool users, competing with us for scarce resources on a planet already under duress.

Ironies of individualism

We think of Western civilization as individualistic. ‘Freedom’ and ‘democracy’ have become catchwords in the cause of global capitalism, which has promoted certain individualist ideals through the spread of consumerism and a competitive spirit. People have choice over an expanding range of products and services, though little input into what those should be. They can choose among candidates for office pre-selected by others.

Since humans are highly social creatures, the individual can exist only in relation to the collective. Individual rights exist only in the context of the group and in balance with the needs of society. By long tradition, modern China is still far more collectivist than modern America. But individualism in both societies is relative, a question of degree. The same people in the U.S. who tout individual freedom may also shout for their collective identity: “America first.”

The success of the human species has been a collective effort and a function of increasing numbers. As population increased, so did human prosperity, which enabled specialization and technology, which in turn furthered prosperity. Individuality was made possible when it was no longer necessary for everyone to do the same tasks just to survive. Nothing seems to make it inevitable, however. Relief from drudgery implies little about how people will use their liberated time and energy. Humans are the most cooperative of the primates, and cooperation depends to a large extent upon conformity. (“Monkey see, monkey do” describes our species far better than literal monkeys.) We are conformists at heart—or by gene. The individual identity we can achieve always strains against the tug of the herd.

Technology can be liberating, but “drudgery” is a relative term. (Washing machines and backhoes are great labour-saving devices. But how much labour is saved by clapping hands instead of flipping a light switch?) Since the body is naturally a labour-producing device, sparing it effort is not in itself a good thing. Instead of having to wash the clothes by hand, or dig the ditch with a shovel, we can go to the gym for gratuitous “exercise.” Freedom from drudgery is relative to the limits and needs of the physical body. We strive against the body’s limitations, but cannot be totally free of them. We live longer now, but can only dream of immortality.

There seems to be a definite expansion over time of what we now call human rights. Yet at least since society grew beyond small groups of hunter-gatherers, some individuals have always claimed the freedoms that go with power over others. With increasing prosperity came social stratification. The masses eventually coveted the same privileges and benefits they could see possessed by elites. With industrialization, they could have token versions of the same luxuries. With democratization, they could have a diluted and second-hand version of authority. Money conferred an ersatz version of status: the power to consume. Real power remained behind the scenes, however, in the hands of those positioned to define the apparent choices and influence the mass of voters or consumers.

Imagine trying to market your product to a hundred million consumers who have absolutely nothing in common, who are completely disparate—that is, truly individual. Suppose also that they are rational beings who know what they actually need. How many would buy your product? Imagine, further, trying to get them to agree on a leader! Both democracy and the consumer market depend on conformity far more than individuality, and on whim far more than reason. True, each person now can have a personal telephone cum television cum computer, thus spared the inconveniences of sharing a wired facility with others. Convincing people of the need and right of each to have their own personal devices maximizes market penetration—just as convincing people to live alone maximizes rent collected.

From an economic point of view, consumers are hardly individuals, but mere ciphers with a common willingness to purchase mass-produced widgets or standardized services. Lacking imagination, they reach for what is readily available and what may seem to be the only option—or perhaps for no better reason than that they have seen others flaunting the latest offerings and don’t want to be left behind. Long ago, people envied the nobility for the indulgences they alone could afford. Now the Joneses compete with each other to get ahead—or at least not fall behind. Remember the brief period when a mobile phone was a novel status symbol? Now staying “connected” seems a bare necessity. Social media have democratized status, which is always in flux according to who “likes” what. Is the mobile phone a labour-saving device or is it a new form of drudgery?

Carl Jung articulated the concept of individuation as the process of unfolding through which one becomes a specific mature self. While we may think of freedom and individuality in the same breath, becoming a unique individual is quite another matter than having unrestricted choice. To have the capacity to think and act independently of others is different from access and entitlement to a limitless variety of goods or experiences. In a sense, they are nearly opposite. We may be offended by restrictions imposed on what we can do or choose. But, to what avail is freedom if all we can imagine or do is what everyone else does, or if the choices have been predefined by others?

On the one hand, a rational individuated human being ought to be immune to the herd mentality. On the other hand, such a being should be capable of objectivity. If everyone were truly unique, they might see things really differently. What basis would there be for agreement? What basis for the commonality and cooperation that made civilization possible? Why would two people buy the identical product—if, indeed, they would want it at all? There would literally be no accounting for taste, and no basis for mass production. Capitalism could not succeed in a society of truly individuated beings. Even in that society, no doubt some people would consciously try to exploit the weaknesses of others, who would resist if they were equally awake.

But, of course, humans are not all that different, and mostly not fully awake. First of all, we come with more or less standard issue: the body, which imposes much upon our consciousness. As individuals, we are mass-produced tokens of a kind. We grow up indoctrinated in a common culture, programmed to view the world in standard ways, to want standardized things, to have conventional goals. The fact that opinions can diverge tells us that consensus is not the same as objectivity. (Agreeing on something does not make it true.) Even if all people were completely different from each other, we still would occupy a common world, with its own real and singular existence apart from how we think of it. (Is that not the definition of ‘reality’?) Objectivity means accordance with that common world, especially with the preexisting natural world which encompasses us all. In principle, our minds could be completely diverse internally and still agree on what is externally real. But only, of course, if we were truly objective.

Obviously, that is not the human condition. Quite the contrary. We vaunt the inner differences that are supposed to make us unique and give us identity, while hardly able to agree about external things. We love to bicker (and sometimes wage war) concerning externalities we deem real and important. We love to take a stand, to be right or superior, to debate and argue. Paradoxically, that only makes sense when we are convinced of knowing a truth that matters. For, simply having divergent inner experiences is not itself a basis for conflict, which is always about outer and bigger so-called realities. Any number of sugar plums can dance in our heads, just as any number of angels can dance on the head of a pin, since such things take up no space. It’s rather our cumbersome bodies, with their natural needs and programming, that compete for space and resources. We have thus always been in competition as well as in cooperation.

An individuated person is not an individual in the conventional sense—not someone obviously different in appearance, tastes and desires, for example. Rather, it is someone whose thinking can transcend the factors that render us inherently alike—and also render us unconscious and therefore incapable of objectivity. To be conscious and objective is to see—clearly and for what they are—the limiting perspectives that hold us in the communal trance.

Mobile phones free us from stationary connection points, while the social platforms they enable chain us to the opinions of others. News and entertainment media monopolize our attention with standard schlock. The chatbot or design tool may be your handy personal assistant, but it can feed you a pablum of clichés. While our society becomes ever more individualistic, through technology and in accord with market needs, we do not necessarily become more individuated. In a different age, Jung thought of individuation as a normal progression in the natural life cycle. I doubt such inevitability. There are too many forces militating against it, keeping us immature and conventional. Individuation may be possible, but only through fierce intent.

 

Language, myth, and magic

Human consciousness is closely entwined with language. We live not so much in reality as in stories we tell about reality. We are constantly talking to ourselves as well as to each other. The common tale binds us, and where stories diverge there is dissension. We construct a civilized urban world, a material environment deliberately apart from the wild, which was first imagined with the help of language. We distinguish ourselves from mere beasts by naming them, and by transforming natural animal activities into their human versions, deliberately reinvented. Culture, in the broadest sense, sets us apart, which we may suppose is its ultimate purpose. We are the creature that attempts to define itself from scratch. And definition is an act of grammatical language.

Of course, this human enterprise of self-creation is circumscribed by the fact that there is nowhere physical for us to live but in the natural universe. Our artificial environments and synthetic products are ultimately made of ingredients we rearrange but do not create. The natural world is where we always find ourselves, wherever the expanding horizon of civilization and artifact happens to end. It has always been so, even when that horizon extended no further than the light of the campfire. Even then, the human world was characterized by the transformation of perception as much as by the transformation of materials. It was forged by imagination and thought, the story told. The original artifact, the prototype of all culture, was language. In the beginning was the word.

I marvel at the ingenuity of the human imagination—not the things that make practical sense, like houses, agriculture, and cooking—but the things that make little sense to a rational mind, like gods and magic. Yet, religion and magical thinking of some sort have characterized human culture far more and far longer than what our secular culture now defines as rationality. The ancient Greeks we admire as rational seekers of order seemed to actually believe in their pantheon of absurdly human-like and disorderly gods. The early scientists were Creationists. There are scientists today who believe in the Trinity and the transubstantiation of the Eucharist. My point here is not to disparage religion as superstition, but to wonder how superstition is possible at all. I believe it comes back to language, which confers the power to define things into being—as we imagine and wish them—coupled with the desire to do so, which seems to reflect a fundamental human need.

The term for that power is fiat. It means: let it be so (or, as Captain Picard would say, “make it so!”). This is the basic inner act of intentionality, whereby something is declared into at least mental existence. That could be the divine decree, “Let there be light!” Or the royal decree, “Off with her head!” The magician’s “Abracadabra!” Or the mathematician’s: “Let x stand for…” All these have in common the capacity to create a world, whether that is the natural world spoken into existence by God, the political realm established by a monarch or constitutional assembly, the abstract world invented by a geometer, the magician’s sleight of hand, or the author’s fictional world. Everything material first existed in imagination. It came into being simply by fiat, by supposing or positing it to be so in the mind’s eye or ear.

While we cannot create physical reality from scratch, we do create an inner world apart from physical reality—a parallel reality, if you like. Aware of our awareness, we distinguish subjective experience from objective reality, grasping that the former (the contents of consciousness) is our sole access to the latter. But the question is subtler still, because even such notions as physical reality or consciousness are but elements of a modern story that includes the dichotomy of subject and object. We now see ourselves as having partial creative responsibility for this inner show of “experience,” a responsibility we share with the external causal world. Fiat is the exercise of that agency. We imagine this must have always been the case, even before humans consciously knew it to be so.

That self-knowing makes a difference. If people had always created an inner world without realizing what they were doing, they would have mistaken the inner world for the outer one, the story for reality. In fact, this is the natural condition, for good reason. As biological organisms, we could not have survived had we not taken experience at face value and seriously. The senses reveal to us a real world of consequence outside the skin, not a movie inside the head. The idea that there is such a movie is rather modern, and even today it serves us well most of the time to believe the illusion presented on the screen of consciousness, though technically we may know better. Fiat is the power to create that show, quite apart from whether or how well it represents objective reality.

Fiat is the very basis of consciousness. Like gods or monarchs, we simply declare the inner show into existence, moment by moment. That is not, however, an arbitrary act of imagination, but more like news reporting with editorial—a creative interpretation of the facts. The “show” is continually updated and guided by input from the senses. We could not exist if experience did not accord with reality enough to permit survival. That does not mean it is a literal picture of the world. It is more like reading tea leaves in a way that happens to work. The patterns one discerns augur for actions that permit existence, or at least do not immediately contradict it. (While crossing the street, it pays to see that looming shape as a rapidly approaching bus!) Therein lies the meaning of what naturally appears to us as real. Realness refers to our dependency on a world we did not choose—a dependency against which we also rebel, having imagined a freedom beyond all dependency.

The upshot is that we do have relative freedom over the experience we create, within the limits imposed by reality—that is, by the need to survive. It is quite possible to live in an utter fantasy so long as it doesn’t kill you. In fact, some illusions favor survival better than the literal truth does. Nature permits a latitude of fancy in how we perceive, while the longing for freedom motivates us to be fanciful. I believe this accounts for the prevalence of magical thinking throughout human existence, including the persistence of religion. It accounts also for the ongoing importance of storytelling, in literature and film as well as the media, and even in the narratives of science. In effect, we like to thumb our noses at reality, while cautious not to go too far. Magic, myth, imagination, and religion can be indulged to the extent they do not cancel our existence. (The same may be said for science, a modern story.) We like to test the limits. The lurking problem is that we can never be sure how far is too far until too late.

Outrageous beliefs are possible because a story can easily be preferred to reality. A story can make sense, be consistent, clear, predictable. Reality, on the other hand, is fundamentally inscrutable, ambiguous, confusing, elusive. Reality only makes sense to the degree it can be assimilated to a story. In the end, that is what we experience: sensory input assimilated to a story that is supposed to make sense of it, and upon which an output can be based that helps us live. Connecting the senses to the muscles (including the tongue) is the story-telling brain.

Like news reporting, what we experience must bear at least a grain of truth, but can never be the literal or whole truth. The margin in between permits and invokes the brain’s surprisingly vast artistic license. If that were all there is to it, we could simply class religion, magic, and myth as forms of human creativity, along with science, art, cinema, and the novel. But there is the added dimension of belief. Reality implies the need to take something seriously and even literally—to believe it so—precisely because it makes a real difference to someone’s well-being. Fiction you can take or leave as optional, as entertainment. Reality you cannot.

Every human being goes through a developmental stage where the two are not so clearly distinguished. Play and make-believe happen in the ambiguous zone between reality and imagination. It no doubt serves a purpose to explore the interface between them. This prepares the adult to know the difference between seriousness and play—to be able to choose reality over fantasy. However, the very ambiguity of that zone makes it challenging to know the difference. At the same time, it is easy to misplace the emotional commitment to reality—which we call belief—that consists in taking something seriously, as having consequence. The paradox of belief is that it credits reality where it chooses, and often inappropriately. While it might seem perverse to believe a falsehood, human freedom lies precisely in the ability to do so. After all, a principal use of language has always been deception. So, why not self-deception?

The human dilemma

One way to describe the human dilemma is that we are conscious of our situation as subjects in a found world of objects. That world, of which we are a part, is physical and biological. Indeed, even our conceiving it in terms of subject and object reflects our biological nature. To permit our existence, not only must the world be a certain way, but as creatures we must perceive it a certain way, and act within it a certain way. While that may not be a problem for other creatures, it is for us, because we are aware of all this and can ponder it. Whenever anything is put in a limiting context, alternatives appear. Whenever a line is drawn, there is something beyond it. Our reflective minds are confronted with a receding horizon.

We are animals who can conceive being gods. Recognizing the limits imposed by physical reality and by our biological nature, we nevertheless imagine freedom from those constraints and are driven to resist them. Resistance to limits involves denying them and imagining alternatives. Recognizing the limits of the particular, we imagine the general and the abstract. Recognizing the actual, we conceive the ideal. Thus, for example, we resist mortality, disease, gravitation, pain, physical hardship, feelings of powerlessness—in short, everything about being finite biological creatures, hapless minions of nature. We imagine being immortal and weightless free spirits—escaping, if not the sway of genes, at least the pull of gravity and confinement to a thin layer of atmosphere.

We find ourselves in a ready-made world we did not ask for. We find ourselves in a body we did not design and which does not endure. As infants, we learn the ropes of how to operate this body and accept it, just as people who acquire prosthetic limbs later in life must learn to operate and identify with them. At the same time, and throughout life, we are obliged to negotiate the world in terms of the needs of this body and through its eyes. This natural state of affairs must nevertheless seem strange to a consciousness that can imagine other possibilities. It is an unwelcome and disturbing realization for a mind that is trying to settle into reality as given and make the best of it. The final reward for making these compromises is the insult of death.

A famous author described the horror of this situation as like waking up to find yourself in the body of a cockroach. It is a horror because it is not you, not your “real” body or self. It is someone else’s nightmare story from which you cannot awaken. (Of course, the metaphor presumes a consciousness that can observe itself. Presumably, the cockroach’s life is no horror to it.) But the metaphor implies more. Each type of body evolved in tune with its surrounding world in a way that permits it to survive. The experience of living in that body only makes sense in terms of its relationship to the world in which it finds itself but did not create. The horror of being a mortal human cockroach is simply the despair of being a creature at all, a product of the brutal gauntlet called natural selection. The history of life is the story of mutual cannibalism, of biological organisms tearing each other apart to devour, behaving compulsively according to rules of a vicious and seemingly arbitrary game. The natural cockroach knows nothing of this game and simply follows the rules by definition (for otherwise it would not exist). But for the human cockroach, the world itself is potentially horrifying, of which the cockroach body is but a symptomatic part.

The first line of defense against this dawning realization is denial. We are not mortal animals but eternal spirits! Life is not a tale told by an idiot, but a rational epic written by God. We are not driven by natural selection (or cultural and social forces) but by love and ideals of liberty, equality, fraternity. After all, we do not live in nature at all, but in ordered cities we hew from the wilderness, ultimately dreaming of self-sufficient life in space colonies. We are not obliged to suffer disease and die, but will be able to repair and renew the body indefinitely, even to design it from scratch in ways more to our liking. We are not condemned to live in bags of vulnerable flesh at all, but will be able to download our “selves” into upgraded bodies or upload them into non-material cyberspace. Alternatively, like gods, we may bring into existence whole new forms of artificial life, according to principles of our own design rather than nature’s trial-and-error whim. Religion conceives and charts the promise of godlike creativity, omniscience, freedom, resurrection and eternal life outside nature, which technology promises to fulfill.

The mind imagines possibilities for technology to tinker with. But just as religion and magic do not offer a realistic escape from natural reality, technology may not either. The idea of living disembodied in cyberspace is fatuous and probably self-contradictory. (The very meaning of consciousness may be so entwined with the body and its priorities that disembodied consciousness is an oxymoron. For, embodiment is not mere physicality, but a relation of dependence on the creature’s environment.) Living in artificial environments on other planets may prove too daunting. Extending life expectancy entails dealing with resulting overpopulation, and perhaps genetic stagnation from lack of renewal. Reducing the body’s vulnerability to disease and aging will not make it immune to damage and death inflicted by others, or to accidents that occur because we simply can never foresee every eventuality.

At every stage of development, human culture has sought to redefine the body as something outside nature. Scarification, tattooing, body painting and decoration—even knocking out or blackening teeth—have served to deny being an animal. Clothing fashion continues this preoccupation in every age. Even in war—the ultimate laying of the body’s vulnerability on the line—men attempt to redefine themselves as intentional beings, flouting death with heroic reasons and grand ideals, in contrast to the baseness of groveling brutes who can do no more than passively submit to mortality. In truth, we have changed ourselves cosmetically but not essentially.

That is not cause for despair. We have made progress, even if our notions of progress may be skewed. Despair only makes sense when giving up or accepting failure seems inevitable. It is, however, reason for sober evaluation. In our present planetary situation, nature gives us feedback that our parochial vision of progress is not in tune with natural realities on which we remain dependent. We are in an intermediate state, somewhat like the awkwardness of adolescence: eager, but hardly prepared, to leave the nest, over-confident that we can master spaceship Earth. Progress itself must be redefined, no longer as ad hoc pursuit of goals that turn out—perversely—to be driven by biological imperatives (family, career, ethnicity, nationalism, profit, status, power). We must seek the realistic ways in which we can improve upon nature and transcend its limitations, unclouded by unconscious drives that are ultimately natural but hardly lead where we suppose. For that, we must clearly understand the human dilemma as the ground on which to create a future.

The dilemma is that nature is the ultimate source of our reality, our thinking and our aspirations, which we nevertheless hope to transcend and redefine for ourselves. But, if not this natural inheritance, what can guide us to define our own nature and determine our own destiny? Even in this age, some propose a formula allegedly dictated by God, which others know to be an anachronous human fiction. Some propose an outward-looking science, whose deficiency for this purpose has long been that it does not include the human subject in its view of the supposedly objective world. The dilemma is that neither of these approaches will save us from ourselves.