Debunking digital mind

It seems to be a common belief that consciousness could be implemented in silicon as well as in biology. That possibility is often encapsulated in the expression digital mind. Numerous assumptions underlie it, some misleading and others false. My purpose is not to deny the possibility of artificial mind, but to examine the assumptions upon which it crucially hinges. I focus on a particular sin of omission: what is lacking in the theory of digital minds is the relationship of embodiment that characterizes natural minds.

The computational metaphor may be our best tool for understanding mind and consciousness in scientific terms. For one thing, it invokes reason, intention, and agency as explanatory principles instead of efficient cause as understood in the physical sciences. Physicalism by itself has not been able to explain mental phenomena or conscious experience. (It does not even account adequately for observed behavior.) This failure to explain conscious experience is often referred to as the ‘hard problem of consciousness.’

Yet, despite its appeal, the computational metaphor falls short of acknowledging the origin and grounding of reason, intentionality, and agency in the human organism. Instead, reason and logic have traditionally seemed to transcend physicality, to be disembodied by definition, while agency is the tacit preserve of humans and gods. It would take us far afield to trace the history of that prejudice, so thoroughly is it ingrained in modern thought. Suffice it to say that human beings have long resisted conceiving of themselves as bodies at all, much less as animals.

Reason seemed to Descartes to be the capacity separating humans from the rest of the animal kingdom, and also distinguishing the human mind from the essentially mechanical human body. Religious traditions around the world have proclaimed the essence of the human person to be spiritual rather than material. The very concept of mind emerged in the context of such beliefs. While computation arose to formalize mental operations such as arithmetic and logic, many quickly intuited that it might capture a far broader range of mental phenomena. Yet, computation rides on logic, not on biology. In contrast, the daily strategies of creatures (including the human creature) are certainly a function of biology.

It was perhaps inevitable, then, that the computational metaphor for mind would ignore embodiment. More importantly, by treating embodiment merely as physical instantiation, it would fail to grasp the nature of embodiment as a relationship with a context and a history. An abstract system of organization, such as a computer program, is a formalism that can be realized in a variety of alternative physical media or formats. But it is a mistake to think that the organization of a living thing can necessarily be captured in such a formalism, with no regard for how it came to be.

An embodied being is indeed a complexly organized physical system. But embodiment implies something else as well: a certain relation to the world. Every living organism stands in this relationship to the world, entailed by its participation in the system of life we call the biosphere. It is a relationship inherited through natural selection and maintained and refined by the individual organism. The survival mandate implies priorities held by the organism, which reflect its very nature and relation to its environment, and which motivate its behavior. To be embodied is to be an autopoietic system: one that is self-maintaining and self-defining. It pursues its own agenda, and could not naturally have come to exist otherwise. Natural autopoietic systems (organisms) are also self-reproducing, providing the basis of natural selection over generations. No AI or other artifact is yet an autopoietic system, with an embodied relationship to its world.

In principle, a non-organic system could be associated with sentience—and even consciousness—if it implements the sort of computational structures and processes implied in embodiment. Carbon-based neurons or natural biology may not be essential, but organism is. Organism, in that sense, is the internal organization and external orientation involved in an embodied (autopoietic) relation to the world. The question arises whether, and how, that organization and the relationships it implies could be artificially created or induced. In nature, it arises through natural selection, which has programmed the organism to have priorities and to pursue its own interests. Natural intelligence is the ability to survive. In contrast, artificial intelligence is imbued with the designer’s priorities and goals, which may have nothing to do with the existential interests of the AI itself—of which it has none. Creating an artificial organism is a quite different project from creating an AI tool to accomplish human aims.

Embodiment is a relation of a physical entity to its real environment, in which events matter to it, ultimately in terms of its survival. Can that relationship be simulated? The answer will depend on what is meant by simulation. Computer simulation entails models, not physical entities or real environments. A simulated organism is virtual, not physical. As an artifact, a model can exhaust the reality of another artifact, since both are products of definition to begin with. But no model can exhaust any portion of natural reality, which is not a product of definition.

The notion of “whole brain emulation” assumes falsely that the brain is a piece of hardware (like a computer), and that the mind is a piece of software (like a computer program). (An emulation is a simulation of another simulation or artifact—such as software or hardware—both of which are products of definition.) The reasoning is that if a true analog of the human mind could “run” on a true analog of the human brain, it too would be conscious. The inference may be valid, but the premises are false.

Despite the detailed knowledge of neural structure that can now be obtained through micro-transection, what results is still a model of the brain, not a replica. We cannot be sure how the identified parts interact or what details might escape identification. We can do no more than speculate on the program (mind) that might run on this theoretical hardware. No matter how detailed, a brain emulation cannot be conscious—if emulation means a disembodied formalism or program. The conceit of brain emulation trades on an assumption of equivalence between neural functioning and computation—completely ignoring that neural functioning occurs in a context of embodiment, while computation, by definition, does not.

The persistence of this assumption gives rise to transhumanist fantasies such as copying minds, uploading one’s consciousness to cyberspace, or downloading it into alternative bodies—as though the software were completely separable from the hardware. It gives rise to absurd considerations such as the moral standing and ethical treatment of digital minds, or the question of how to prevent “mind crime”—that is, the abuse by powerful AIs of conscious sub-entities they might create within themselves.

The seductiveness of the computational metaphor is compounded by the ambiguity of mental terms such as ‘consciousness,’ ‘mind,’ ‘sentience,’ ‘awareness,’ ‘experience,’ etc.—all of which can be interpreted from a behavioral (third-person) point of view as well as from the obvious first-person point of view. The two meanings are easily conflated, so that the computational substrate, which might be thought to explain or produce a certain observable behavior, is also assumed to account for an associated but unobservable inner life. This assumption dubiously suggests that present or imminent AI might be conscious because it manifests behavior that we associate with consciousness.

Here we must tread carefully, for two reasons. First, our only means of inferring the subjective life of creatures (natural or artificial) is observing their behavior; this is no less true of fellow human beings. The conflation works both ways: we assume sentience where behavior seems to indicate it, and we empathetically read possible subjective experience into behavior, based on our own sentience. As part of the social contract, and because we sincerely believe it, we deal with other people as though they have the same sort of inner life as we do. Yet, because we can only ever have our own experience, we are at liberty to doubt the subjective life of other creatures or objects, and certainly of computer programs. This is poorly charted territory, in part because of long-standing human chauvinism, on the one hand, and superstitious panpsychism, on the other. It is just as irrational to assume that bits of code are sentient as to assume that rocks or clocks are.

The second reason is the difficulty of identifying “behavior” at all. Language is deeply at fault, for it is simply not true that a rose is a rose is a rose or that an airplane “flies” like a bird. We think in categories that obliterate crucial distinctions. A “piece” of behavior may be labelled in a way that ignores context and its meaning as the action of an embodied agent—that is, its significance to the agent itself, ultimately in the game of survival. Thus, the simulated “pitching” of a machine that hurls practice baseballs only superficially resembles the complex behavior of a human pitcher.

Relationships and patterns are reified and formalized in language, taken out of context, and assumed transferable to other contexts. Indeed, that is the essence of text as a searchable document (and a computer program is ultimately a text). This is one of the pitfalls of abstraction and formalism. An operation in a computer program may seem to be “the same” as the real-world action it simulates; but the two are the same only to human eyes that choose to view them so, glossing over the differences. AIs are designed to mimic aspects of human behavior and thought. That does not mean that they replicate or instantiate them.

Digital mind may well be possible—but only if it is embodied as an autopoietic system, with its own purposes and the dependent relationship to a physical environment that this implies. Nothing short of that deserves the name mind. The idea of immortalizing one’s own consciousness in a digital copy is fatuous for numerous reasons; the prospect of creating new digital minds, on the other hand, is misguided and dangerous for humanity. Those fascinated by the prospect should examine their motives. Is it for commercial gain or personal status? Is it to play God? We should seek to create digital tools for human use, not digital minds that would be tool users, competing with us for scarce resources on a planet already under duress.