On the variety of possible minds

Especially since the dawn of the space age, people have wondered at the possibility of alien life forms and what sort of minds they might manifest. The potential of artificial intelligence now raises similar questions, alongside the quest to better understand the minds of other creatures on this planet and even the mentalities of fellow human beings. Finally, understanding mind as a sort of variable that can be tweaked raises the question of the fate of our species and how it might choose its successors, whether biological or artificial.

The search for extraterrestrial intelligence and the quest for artificial intelligence both demand clear concepts of intelligence. Similarly, a general consideration of the range of possible minds demands clarifying mind as a concept. Since terms and associated connotations for this concept vary across languages, English-language users should not assume a unified or universal understanding of what constitutes mind. Nevertheless, we can point to some considerations and possible agreements toward a generalized definition.

First, notions of mind and the mental may refer either to observable behavior or to subjective experience—that is, to third-person or first-person descriptions. Mind can be described and imagined either behaviorally (what it is observed to do) or phenomenally (“what it is like” to be that mind). Second, from a materialist perspective, mind must be embodied—that is, it must (1) be physically instantiated, and must also (2) reflect the sort of relations to an environment that govern the existence of biological organisms, which (3) may or may not imply evolution through natural selection. Finally, other minds can only be conceived with the minds that we have. This embroils us in circularity, since one way to grasp the limitations of our own thinking is by placing it in the context of possible other minds, which must be conceived within the limitations of our present thinking.

Mind is often contrasted to matter, the mental to the physical. To arrive at a definition of mind, consider this contrast in terms of intention versus physical causation (i.e., efficient causation as conceived in basic physics). An electrical circuit in an appliance can be described causally, as a flow of electrons, for example. It can also be described intentionally, in terms of the design of the appliance, the purpose it is supposed to serve, etc. Of course, natural organisms are not human artifacts and we do not assume intelligent design. Yet, organisms manifest their own intentionality. In terms of chemical and physical processes within them, and in relation to their environment, they also manifest causality, and their activities can be described on a causal level. However, they are distinguished from “inert” matter precisely by the fact that description on the physical level cannot account completely or adequately for their behavior, let alone for any imagined subjective phenomenology. We cannot reduce their purposive behavior to physical causality, even if we presume that the former must ride on the latter. Causality is necessary for mind, but not sufficient. Though we assume it must have a material basis, mind exhibits intention. Let us then try to clarify the notion of intention.

The concept of intentionality has a long and confusing history in philosophy as “aboutness,” which is essentially a linguistic notion. Since only humans use fully grammatical language, let us reframe intention outside the context of language, as an internal connection made within an autopoietic system for its own purposes—that is, a system which is self-defining, self-creating, self-maintaining. That connection might be a synapse made within a brain or, potentially, a logical connection made electronically within an artificial system. In either case, it is made by the system itself, not by an external observer, programmer, or other agent. If we look at the input-output relation of the system, we see that it cannot readily be explained by simple causality (at least on the level of Newtonian action-reaction). Something more complex is going on inside the system to produce the response. Nevertheless, these remain two alternative ways of looking at the behavior of the system, of inferring what makes it tick. As ways of looking, causality and intentionality each project aspects of the observer’s mentality onto the system itself. Yet, apparently, that system has its own purposes and mentality, and we assume this will be the case for artificial minds, as it is for natural ones.

In regard to possible minds, we assume here that physical embodiment is required, even for digital mind. This precludes spirits, ghosts, and gods. If embodiment is understood as a relation of dependency—of an autopoietic system on an environment—it may also preclude a great deal (perhaps all) of AI. A germane question is whether embodiment, thus understood, can be simulated. Can it only arise through a process of selection in the real world, or can that process be computational? To put it another way, can a mind evolve in silico, then be downloaded to a physical system such as a robot? It is a question that raises further questions.

To explore it, let us look at naturally embodied mind, which, like its corresponding brain, is an organ of a body; its primary purpose is the maintenance of that body and the furtherance of its kind—an arrangement that evolved through natural selection. That primary purpose entails a relationship of dependence upon a real environment, so that it serves the organism to monitor that environment in relation to its internal needs. Attention is directed toward internal or external changes. Since deliberate action naturally concerns the external world more than the internal one (which tends to self-regulate more automatically), our natural focus of attention is outward. This gives the misleading impression that we simply dwell in the world as presented to the senses (naïve realism), whereas in fact we interface with it by means of continually updated, internally generated models that reflect the body and its needs and processes. When we think of mind, therefore, we often mean a system for dealing with the world, one which may have a concept of it. We should bear in mind that dealing with the world (or with a concept of it) is fundamentally part of biological self-regulation, which provides the agent’s motivations, values, and premises for action. What would it mean for a computer program (AI) to deal with the world for the purpose of its own self-regulation and maintenance? What would its concept of the world be?

The functionalist outlook holds that an artificial system could instantiate all the essential elements and relations of a natural system. Thus, an artificial body should be possible, and therefore an artificial mind that is a function of it. Must it be physically real, in contrast to being virtual, merely code in a simulation? The natural mind/body is a product of natural selection, which is a wasteful contest of survival over many generations of expendable wetware. (Life depends on death.) Could virtual organisms evolve through a simulation of natural selection—which would entail a real expenditure of energy (running the computer) but no physical destruction as generations passed away—and then be instantiated in physical materials? Can virtual entities acquire real motivation (e.g., to survive)? Can their own state (or anything else) actually matter to them, apart from the welfare of a physical body? What would it mean to care about the state of a virtual body?

Apart from such questions, and the project to actually create artificial organisms, another quest is to abandon the re-creation of the human being or of natural mind as we know it, and to go instead for what we have longed for all along. Human beings have never fully accepted their nature as biological creatures. The rejection of human embodiment was traditionally expressed through religion, whether in the transcendent Christian soul or the Buddhist quest for release from incarnation. The desire for a humanly defined and designed world is the basis of all culture, which serves to create a world apart from nature, both physically and mentally. The rejection of embodiment can now be expressed through technology, by artificially imitating natural intelligence, mind, and even life. What if we bypass the imitation of nature to directly create the being(s) we would ideally have ourselves be?

Whether or not that is feasible, it would be a valuable exercise to imagine what sort of being that should be. We are highly identified with our precious conscious experience, which seems to imply an experiencing self we hope to preserve beyond death. But what if that conscious stream is no more than another dysfunctional aspect of naturally evolved biological life, an accident of cosmic history? Which is more important: to create artificial beings with consciousness like ours or deeply moral ones to represent us in the far future? If we are asking such questions now, might more advanced alien civilizations have already answered them?