The quest for superintelligence

Human beings have been top dogs on the planet for a long time. That could change. We are now on the verge of creating artificial intelligence that is smarter than us, at least in specific ways. This raises the question of how to deal with such powerful tools—or, indeed, whether they will remain controllable tools at all or will instead become autonomous agents like ourselves, other animals, and mythical beings such as gods and monsters. An agent, in a sense derived from biology, is an autopoietic system—one that is self-defining, self-producing, and whose goal is its own existence. A tool or machine, on the other hand, is an instrument of an agent’s purposes. It is an allopoietic system—one that produces something other than itself. Unless it happens also to be an autopoietic system, it can have no goals or purposes of its own. An important philosophical issue concerning AI is whether it should remain a tool or should become an agent in its own right (presuming that is even possible). Another issue is how to ensure that powerful AI remains in tune with human goals and values. More broadly: how to make sure we remain in control of the technologies we create.

These questions should interest everyone, first of all because the development of AI will affect everyone. And secondly, because many of the issues confronting AI researchers reflect issues that have confronted human society all along, and will soon come at us on steroids. For example, the question of how to align the values of prospective AI with human interests simply projects into a technological domain the age-old question of how to align the values of human beings among themselves. Computation and AI are the modern metaphor for understanding the operations of brains and the nature of mind—i.e., ourselves. The prospect of creating artificial mind can aid in the quest to understand our own being. It is also part of the larger quest: not only to control nature but to re-create it, whether in a carbon or a silicon base. And that includes re-creating our own nature. Such goals reflect the ancient dream to self-define, like the gods, and to acquire godlike powers.

Ever since Descartes and La Mettrie, the philosophy of mechanism has blurred the distinction between autopoietic and allopoietic systems—between organisms and machines. Indeed, it traditionally regards organisms as machines. However, an obvious difference is that machines, as we ordinarily think of them, are human artifacts, whereas organisms are not. That difference is being eroded from both ends. Genetic and epigenetic manipulation can produce new creatures, just as nucleosynthesis produced new man-made elements. At the other end lies the prospect of initiating a process that results in artificial agents: AI that bootstraps itself into an autopoietic system, perhaps through some process of recursive self-improvement. Is that possible? If so, is it desirable?

It does not help matters that the language of AI research casually imports mentalistic terms from everyday speech. The literature is full of ambiguous notions from the domain of everyday experience, which are glibly transferred to AI—for example, concepts such as agent, general intelligence, value, and even goal. Confusing “as if” language crops up when a machine is said to reason, think or know, to have incentives, desires, goals, or motivations, etc. Even if metaphorical, such wholesale projection of agency into AI begs the question of whether a machine can become an autopoietic system, and obscures the question of what exactly would be required to make it so.

To amalgamate all AI competencies in one general and powerful all-purpose program certainly has commercial appeal. It now seems like a feasible goal, but may be too good to be true. For, the concept of artificial general intelligence (AGI) could turn out not even to be a coherent notion—if, for example, “intelligence” cannot really be divorced from a biological context. To further want AGI to be an agent could be a very bad idea. Either way, the fear is that AI entities, if super-humanly intelligent, could evade human control and come to threaten, dominate, or even supersede humanity. A global takeover by superintelligence has been the theme of much science fiction and is now the topic of serious discussion in think tanks around the world. While some transhumanists might consider it desirable, probably most people would not. The prospect raises many questions, the first being whether AGI is inevitable or even desirable. A further question is whether AGI implies (or unavoidably leads to) agency. If not, what threats are posed by an AGI that is not an agent, and how can they be mitigated?

I cannot provide definite answers to these questions. But I can make some general observations and report on a couple of strategies proposed by others.  An agent is necessarily embodied—which means it is not just physically real but involved in a relationship with the world that matters to it. Specifically, it can interact with the world in ways that serve to maintain itself. (All natural organisms are examples, and here we are considering the possibility of an artificial organism.) One can manipulate tools in a direct way, without having to negotiate with them as we do with agents such as people and other creatures. The concept of program initially meant a unilateral series of commands to a machine to do something. A command to a machine is a different matter than a command to an agent, which has its own will and purposes and may or may not choose to obey the command. But the concept of program has evolved to include systems with which we mutually interact, as in machine learning and self-improving programs. This establishes an ambiguous category between machine and agent. Part of the anxiety surrounding AGI stems from the novelty and uncertainty regarding this zone.

It is problematic, and may be impossible, to control an agent more intelligent than oneself. The so-called value alignment problem is the desperate quest to nevertheless find a way to have our cake (powerful AI to use at our discretion) and be able to eat it too (or perhaps to keep it from eating us). It is the challenge to make sure that AI clearly “understands” the goals we give it and pursues no others. If it has any values at all, these should be compatible with human values and it should value human life. I cannot begin here to unravel the tangled fabric of tacit assumptions and misconceptions involved in this quest. (See instead my article, “The Value Alignment Problem,” posted in the Archive on this website.) Instead, I point to two ways to circumvent the challenge. The first is simply not to aim for AGI, let alone for agents. This strategy is proposed by K. Eric Drexler. Instead of consolidating all skill in one AI entity, it would be just as effective, and far safer, to create ad hoc task-oriented software tools that do what they are programmed to do because their capacity to self-improve is deliberately limited. The second strategy is proposed by Stuart Russell: to build uncertainty into AI systems, which are then obliged to hesitate before acting in ways adverse to human purposes—and thus to consult with us for guidance.
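To give the flavor of the second strategy in more concrete terms, here is a minimal sketch in Python of an agent that is uncertain which objective its human principal actually has, and that declines to act on its own when its candidate objectives disagree sharply about an action. The objectives, payoffs, and threshold are entirely hypothetical, and the sketch is only an illustration of deference under uncertainty, not Russell’s actual proposal, which (roughly) frames the problem as a game in which the machine learns human preferences from human behavior.

```python
# A minimal illustrative sketch (hypothetical names and numbers throughout):
# the agent holds a probability distribution over what the human's objective
# might be, and asks the human before taking any action that could be
# disastrous under some live hypothesis.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    prob: float        # agent's credence that this is the human's real objective
    payoff: dict       # action -> payoff if this hypothesis is true

hypotheses = [
    Hypothesis("tidy the office",      0.6, {"vacuum": 10, "shred old papers": 60,  "wait": 0}),
    Hypothesis("preserve all records", 0.4, {"vacuum": 9,  "shred old papers": -50, "wait": 0}),
]
actions = ["vacuum", "shred old papers", "wait"]

def expected_payoff(action: str) -> float:
    return sum(h.prob * h.payoff[action] for h in hypotheses)

def worst_case(action: str) -> float:
    return min(h.payoff[action] for h in hypotheses)

def decide(ask_threshold: float = -10) -> str:
    # Pick the action with the highest expected payoff, but refuse to act
    # autonomously if that action could be badly wrong under a plausible objective.
    best = max(actions, key=expected_payoff)
    if worst_case(best) < ask_threshold:
        return f"ask the human before doing '{best}'"
    return f"do '{best}'"

print(decide())   # -> ask the human before doing 'shred old papers'
```

Even in this toy version the payoff of the idea is visible: an agent that takes its own uncertainty about our objectives seriously has a built-in reason to check with us before doing anything irreversible.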

The goal of creating superintelligence must be distinguished from the goal of creating artificial agents. Superintelligent tools can exist that are not agents; agents can exist that are not superintelligent. The problems of controlling AI and aligning its values are byproducts of the desire to create meta-tools that are neither conventional tools nor true agents. Furthermore, real-world goals for AI must be distinguished from specific tasks. We understandably seek powerful tools to achieve our real-world goals for us, yet we fear that they may misinterpret our wishes or carry them out in some undesired way. That dilemma is avoided if we only create programs to accomplish specified tasks. That means more work for humans than automating automation itself would, but it keeps technology under human control.

Why seek to eliminate human input and participation? An obvious answer is to “optimize” the accomplishment of desired goals. That is, to increase productivity (which is to say, wealth) through automation and thereby also reduce the burdens of human labor. Perhaps modern human beings are never satisfied with enough? Perhaps at the same time we simply loathe effort of any kind, even mental. Shall we just compulsively substitute automation for human labor whenever possible? Or are we also indulging a faith that AI could accomplish all human purposes better and more ecologically than people? If the goal is ultimately to automate everything, what would people then do with their time when they are no longer obliged to do anything? If the hope behind AI is to free us from drudgery of any sort (in order, say, to “make the best of life’s potential”), what is that potential? How does it relate to present work and human satisfactions? What will machines free us for?

And what are the deeper and unspoken motivations behind the quest for superintelligence? To imitate life, to acquire godlike powers, to transcend nature and embodiment, to create an artificial ecology? Such questions traditionally lie outside the domain of scientific discourse. They become political, social, ethical and even religious issues surrounding technology. But perhaps they should be addressed within science too, before it is too late.