Much of the basic non-technical vocabulary of artificial intelligence is surprisingly ambiguous. Some key terms with vague meanings include intelligence, embodiment, mind, consciousness, perception, value, goal, agent, knowledge, belief, and thinking. Such vocabulary is naively borrowed from human mental life and used to underpin a theoretical, abstract, and general notion of intelligence that could be implemented by computers. Intelligence has been defined many ways—for example, as the ability to deal with complexity. But what does “dealing with” mean exactly? Or intelligence is defined as the ability to predict future or missing information; but what is “information” if it is not relevant to the well-being of some unspecified agent? It is imperative to clarify such ambiguities, if only to identify a crucial threshold between conventional mechanical tools and autonomous artificial agents. While it might be inconsequential what philosophers think about such matters, it could be devastating if AI developers, corporations, and government regulators get it wrong.
However intelligence is formally defined, our notions of it derive originally from experience with living creatures, whose intelligence is ultimately the capacity to survive and breed. Yet, formal definitions often involve solving specific problems set by humans, such as those on IQ tests. This problem-solving version of intelligence is tied to human goals, language use, formal reasoning, and modern cultural values; and trying to match human performance risks testing for humanness more than for intelligence. The concept of general intelligence, as it has developed in AI, does not generalize the actual instances of mind with which we are familiar—that is, organisms on planet Earth—so much as it selects isolated features of human performance to develop into an ideal theoretical framework. This is then supposed to serve as the basis of a universally flexible capacity, just as the computer is understood to be the universal machine. A very parochial understanding of intelligence becomes the basis of an abstract, theoretically possible “mind,” supposedly liberated from bodily constraint and all context. However, the generality sought for AI runs counter to the specific nature and conditions of embodied natural intelligence. It remains unclear to what extent an AI could satisfy the criteria for general intelligence without being effectively an organism. Such abstractions as superintelligence (SI) or artificial general intelligence (AGI) remain problematically incoherent. (See Maciej Cegłowski’s amusing critique: https://idlewords.com/talks/superintelligence.htm)
AI was first modelled on language and reasoning skills, formalized as computation. The limited success of early AI compared unfavorably with the broader capabilities of organisms. The dream then advanced from creating specific tools to creating artificial agents that could themselves be tool users, imitating or replicating organisms. But natural intelligence is embodied, whereas the theoretical concept of “mind in general” that underpins AI is disembodied in principle. The desired corollary is that such a mind could be re-embodied in a variety of ways, as a matter of consumer choice. But whether this corollary truly follows depends on whether embodiment is a condition that can be simulated or artificially implemented, as though it were just a matter of hooking up a so-called mind to an arbitrary choice of sensors and actuators. Can intelligence be decoupled from the motivations of creatures and from the evolutionary conditions that gave rise to natural intelligence? Is the evolution of a simulation really a simulation of natural evolution? A negative answer to such questions would limit the potential of AI.
The value for humans of creating a labor-saving or capacity-enhancing tool is not the same as the value of creating an autonomous tool user. The two goals are at odds. Unless it constitutes a truly autonomous system, an AI manifests only the intentionality and priorities of its programmers, reflecting their values. Talk of an AI’s perceptions, beliefs, goals or knowledge is a convenient metaphorical way of speaking, but is no more than a shorthand for meanings held by programmers. A truly autonomous system will have its own values, needs, and meanings. Mercifully, no such truly autonomous AI yet exists. If it did, programmers would only be able to impress their values on it in the limited ways that adults educate children, governments police their citizenry, or masters impose their will on subordinates. At best, SI would be no more predictable or controllable than an animal, slave, child or employee. At worst, it would control, enslave, and possibly displace us.
A reasonable rationale for AGI requires it to remain under human control, to serve human goals and values, and to act for human benefit. Yet, such a tool can hardly have the desired capabilities without being fully autonomous and thus beyond human control. The notion of “containing” an SI implies isolation from the real world. Yet, denial of physical access to or from the real world would render the SI inaccessible and useless. There would have to be some interface with human users or interlocutors just to utilize its abilities; it could then use this interface for its own purposes. The idea of pre-programming it to be “friendly” is fatuously contradictory. For, by definition, SI would be fully autonomous, charged with its own development, pursuing its own goals, and capable of overriding its programming. The idea of training human values into it with rewards and punishments simply regresses the problem of artificially creating motivation. For, how is it to know what is rewarding? Unless the AI is already an agent competing for survival like an organism, why would it have any motivation at all? If it is such an agent, why would it accept human values in place of its own? And how would its intelligence differ from that of natural organisms, which are composed of cooperating cells, each with its own relative autonomy and needs? The parts of a machine are not like the parts of an organism.
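To make that regress concrete, here is a minimal sketch of reward-based training (the function names, the toy cooperate/defect setting, and the choice of Python are illustrative assumptions, not anything specified above). The point is simple: whatever such an agent comes to “value” is just whatever a designer-supplied reward function happens to score highly; the agent does not choose, or even represent, what counts as rewarding.

```python
import random

# The designer, not the agent, decides what counts as "good":
# this hand-coded reward function is the only source of "value" in the system.
def designer_reward(action: str) -> float:
    return {"cooperate": 1.0, "defect": -1.0}.get(action, 0.0)

ACTIONS = ["cooperate", "defect"]
values = {a: 0.0 for a in ACTIONS}   # running estimate of each action's reward
counts = {a: 0 for a in ACTIONS}

for step in range(1000):
    # Epsilon-greedy choice: mostly exploit the action currently valued highest.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: values[a])

    reward = designer_reward(action)   # the reward is imposed from outside the agent
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

print(values)  # the agent ends up "preferring" whatever the designer rewarded
```

Nothing in this loop answers the question of motivation; it merely relocates it. The motivation lives entirely in the hand-written reward function, which some human still had to author.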
While a self-developing neural net is initially designed by human programmers, like an organism it would constitute a sort of black box. Unlike with a designed artifact, we can only speculate about the structure, functioning, and principles of a self-evolving agent. This is a fundamentally different relationship from the one we have to ordinary artifacts, which in principle do what we want and are no more than what we designed them to be. These extremes bound an ambiguous zone between a fully controllable tool and a fully autonomous agent pursuing its own agenda. If there is a key factor that would lead technology irreversibly beyond human control, it is surely the capacity to self-program, based on learning, combined with the capacity to self-modify physically. There is no guarantee that an AI capable of programming itself can be overridden by a human programmer. Similarly, there is no guarantee that programmable matter (nanites) would remain under control if it can self-modify and physically reproduce. If we wish to retain control over technology, it should consist only of tools in the traditional sense—systems that do not modify or replicate themselves.
Sentience and consciousness are survival strategies of natural replicators. They are based on the very fragility of organic life as well as on the slow pace of natural evolution. If the advantage of artificial replicators is to transcend that fragility from the outset, then their very robustness might also circumvent the evolutionary premise—of natural selection through mortality—that gave rise to sentience in the first place. And the very speed of artificial evolution could drastically outpace the ability of natural ecosystems to adapt. The horrifying possibility could be a world overrun by mechanical self-replicators, an artificial ecology that outcompetes organic life yet fails to evolve the sentience we cherish as a hallmark of living things. (Imagine something like Kurt Vonnegut’s ‘ice-nine’, which could escape the planet and replicate itself indefinitely using the materials of other worlds. As one philosopher put it: a Disneyland without children!) If life happened on this planet simply because it could happen, then possibly (with the aid of human beings) an insentient but robust and invasive artificial nature could also happen to displace the natural one. A self-modifying AI might cross the threshold of containment without our ever knowing or being able to prevent it. Self-improving, self-replicating technology could take over the world and spread beyond: a machine death of the universe. This exotic possibility would not seem to correspond to any human value, motivation or hope—even those of the staunchest posthumanists. Neither superintelligence nor silicon apocalypse seems very desirable.
The irony of AI is that it redefines intelligence as devoid of the human emotions and values that actually motivate its creation. This reflects a sad human failure to “know thyself.” AI is developed and promoted by people with a wide variety of motivations and ideals apart from commercial interest, many of which reflect some questionable values of our civilization. Preserving a world not dominated one way or another by AI might depend on a timely disenchantment with the dubious premises and values on which the goals of AI are founded. These tacitly include: control (power over nature and others), transcendence of embodiment (freedom from death and disease), laziness (slaves to perform all tasks and effortlessly provide abundance), hubris (the sheer thrill of being able to do it, or of being the first), creating artificial life (womb-envy), creating super-beings (god-envy), creating artificial companions (sex-envy), and ubiquitous belief in the mechanist metaphor (computer-envy—the universe is metaphorically or literally digital).
Some authors foresee a life for human consciousness in cyberspace, divorced from the limitations of physical embodiment—an update of an ancient spiritual agenda. (While I think that impossible, it would at least unburden the planet of all those troublesome human bodies!) Some authors use the term cosmic endowment to describe and endorse a post-human destiny of indefinite colonization of other planets, stars, and galaxies. (Endowment is a legal concept of property rights and ownership.) They even imagine the conversion of all matter in the universe into “digital mind,” just as the conquistadors sought to convert the New World to a universal faith while pillaging its resources. At heart, this is the ultimate extension of manifest destiny and lebensraum.
Apart from such exotic scenarios, the world seems to be heading toward a dystopia in which a few people (and their machines) hold all the means of production and no longer need the masses either as workers or as consumers—and certainly not as voters. The entire planet could be their private gated community, with little place for the rest of us. Even if it proves feasible for humanity to retain control of technology, it might only serve the aims of the very few. This could be the real threat of an “AI takeover,” one that is actually a political coup by a human elite. How consoling will it be to have human overlords instead of superintelligent machines?