Thomas Nagel’s famous 1974 paper asks “What Is It Like to Be a Bat?” The question was provocatively rhetorical, based on a double entendre. The notion of “what it is like to be” some particular entity caught on as a way to characterize the subjective point of view—the experience of some given entity or potential mind. Yet the expression is not, by default, descriptive at all. Rather, it points to the impossibility of imagining, let alone describing, another creature’s experience. It simply refers you to something presumed similar within your own experience, should you find yourself in the shoes of that entity. It is a cipher, standing paradoxically for an impossible access to another mind’s point of view, which can only be conceived from one’s own point of view, in the terms of one’s own experience.
I suspect the “what it is like” expression caught on precisely because of the limitations of language in dealing with “other minds” and the difference between first-person and third-person points of view. In other words, it caught on because of the fundamental dilemma posed by consciousness in the first place, which notoriously eludes definition yet perennially attracts new monikers in compensation. The so-called mind-body problem was always a somewhat misleading handle; it might better have been named the mind-matter problem, or the problem of the mental and the physical. This is perhaps why, twenty years after Nagel, Chalmers’ characterization of it as the hard problem of consciousness similarly captured the imagination of philosophers and caught on: out of sheer frustration, not because any plausible solution was offered. To emphasize the possibility of artificial minds, it has more recently been called the mind-technology problem. The dilemma stands—and not only in regard to other minds. The problem of consciousness remains the mystery of why it is like anything at all to exist.
Certainly, you know what it is like to be yourself, right? But is that truly like something (a matter of comparison) or is the feeling of being you just ineffably what it is? Why is being you like anything at all? Clearly there must be some point to being conscious, to having experience. From an evolutionary perspective, it must be functional, enabling capabilities not otherwise possible. The question makes little sense until you consider the possibility of a version of you that can do everything you can without being conscious. (By definition, there is nothing it would be like to be this so-called zombie.) Such a possibility is now implied and seriously considered through artificial intelligence.
A range of human behaviors also suggests that consciousness is not necessarily critical even for people (for example, sleepwalking and even sleep-driving!). The very role of conscious attention may be to do itself out of a job: to assimilate novel situations to unconscious programming, as when we must pay attention to learn to play an instrument or a song, or to ride a bicycle, which then becomes “second nature” with practice. Yet, if consciousness is functional, then a fully functional artificial human would necessarily be conscious at least some of the time. Still, that leaves a lot of room for artificial intelligence that is not specifically human, yet is functional in other ways, or even superior.
Speaking for myself, at least, humans clearly do have experience, an inner life that can be conceived as a show put on by the brain, evidently to help us survive. Some creatures might do quite well without it, just as we imagine that machines and robots might also. This “show” is largely about the world outside the body, but is always permeated with experience of the body, and is co-determined by the body’s needs. In that sense, we experience ourselves at the same time we experience the world, and we experience the world in relation to the embodied self. We can scarcely imagine having knowledge of the world except through this show—that is, without perceiving the world in awareness, or without there being something it is like to be doing that. Yet, that does not necessarily mean being explicitly self-conscious—aware in the moment of the act of being aware, aware of knowing what you know.
As AI becomes ever more sophisticated at imitating and exceeding human performance, it might track the world in human-like ways or better. However, unless it exists for the sake of a body, as our human minds do, I suspect it could not have experience, consciousness, an inner life. There would be nothing it is like for it to know what it “knows” of the world. Of course, that is not a verifiable assertion. On the other hand, while it is partly out of political correctness that we assume there is something it is like to be each of us, that assumption has a reasonable basis in our common biology as members of the same species. Whatever we could potentially have in common with a machine rests on the assumption that the “biology” in question could be structurally approximated in some artificial system with an embodied relationship to a real environment. And what is real to us is what we can affect, and be affected by, in such a way that allows us to exist. Is that a relationship that can be simulated?
What we experience and call reality is the brain’s natural simulation, a virtual reality we implicitly believe because otherwise we would likely not be here. While that does not imply that no real world exists outside the brain, it does maddeningly complicate our understanding of the world, since our only access to that world is (circularly) through the brain’s simulation of it! This circumstance, however nonplussing, does not imply that the world we call real could turn out to be a simulation created in some real digital computer, perhaps programmed by superior aliens (or gods?). But neither does it deny it. It merely defers reality one level up, potentially ad infinitum. More down to earth, I believe (but cannot prove) that real consequences are the basis of consciousness. They cannot be simulated because simulation is by definition not reality. Embodiment cannot be simulated and is not a relationship to a simulated environment.
The transhumanist idea of our possible “AI successors” brings up the question of what they should be like—and what it would be like, if anything, to be them. One way or another, biology is eventually doomed on this planet. We may have time and the technological means to create a more durable replacement for humankind—one that could survive the hazards of extended space travel, for example. One question is how much like us (with our destructive and self-defeating foibles) they should be. They could embody the best of human ideals, but which ones? How much individuality versus how much loyalty to one’s tribe, for example, is a question that human cultures have hardly resolved amongst themselves. Notions of modernity, of which transhumanism is a product, seem largely to derive from European societies. We assume that consciousness itself is universal in our species—but with little clear idea of what exactly that entails. Most cultures seem attached to the idea of an enduring self that somehow continues experiencing after death. Should our successors have such a sense of self? Or should they be literally more selfless?
A more pointed question is: to what extent could our successors embody our cherished ideals and still have our sort of consciousness? If what we know as consciousness is a function of our biological selves—determined by genes, natural selection, and the drive for survival—to what extent could a better version of us, lacking these determinants, be “conscious,” if consciousness is to serve the same purposes for them as it does for us? If what it is like to be us is a function of what we are biologically, could the experience of our AI successors be anything but greatly different? Next to life itself, consciousness seems anthropocentrically to be our most precious possession. Is it an essential ingredient of humanness to be preserved, or a liability to jettison?
What we know as consciousness is inseparable from feeling. (It may not seem so, because we are such visual creatures and vision provides a sense of distance and objectivity, both literally and psychologically.) Feeling is a bodily response. Primordially, it evaluates a stimulus or the body’s state in terms of its welfare. That concern of self-interest can be transferred or extended to other entities beyond or besides the individual organism, which is itself a confederation of organs and cells that have given up autonomy for the sake of a larger whole. Yet consciousness seems to be as individual and particular as human bodies are. If we speak of a group mind or national or global consciousness, it is (so far) just a figure of speech.
There is much speculation these days about rogue AI or superintelligence that becomes conscious. These are artifacts that are not intended to replace us but which, it is feared, nevertheless could do so by virtue of their superiority and our dependence on them. The human presence on this planet is part of natural evolution—that is, among the things that happen simply because they can happen, not by any intent. Life introduced purpose onto the scene, and we are the creature that has honed it to greatest effect. An obvious, if doubtful, step for us is to intend our own collective destiny and shape it to our own taste, independently of natural evolution. Culture already expresses this human intention to be independent of nature and physical limitations. But we now have the more specific possibility to re-create our nature through technology, to be what “we” (the mythical human collective) would like to be rather than what nature or accident dictates. All of this is inevitably conceived within the context of what we naturally are, as products of evolution. That includes the consciousness and sense of self we develop individually and through culture, but which we did not “consciously” design. And which we may fail to understand as long as we remain so identified with it.