What is intelligence?

Intelligence is an ambiguous and still controversial notion. It has been defined variously as goal-directed adaptive behavior, the ability to learn, to deal with novel situations or insufficient information, to reason and do abstract thinking, etc. It has even been defined as the ability to score well on intelligence tests! Sometimes it refers to observed behavior and sometimes to an inner capacity or potential—even to a pseudo-substance wryly called smartonium.

Just as information is always for someone, so intelligence is someone’s intelligence, measured usually by someone else with their biases, using a particular yardstick for particular purposes. Even within the same individual, the goals of the conscious human person may contradict the goals of the biological human organism. It is probably this psychological fact that allows us to imagine pursuing arbitrary goals at whim, whereas the goals of living things are hardly arbitrary.

Measures of intelligence were developed to evaluate human performance in various areas of interest to those measuring. This gave rise to a notion of general intelligence that could underlie specific abilities. A hierarchical concept of intelligence proposes a “domain independent” general ability (the famous g-factor) that informs and perhaps controls domain-specific skills. “General” can refer to the range of subjects as well as the range of situations. What is general across humans is not the same as what is general across known species or theoretically possible agents or environments. Perhaps the intelligence measured can be no more general than the tests and situations used to measure it. As far as it is relevant to humans, the intelligence of other entities (whether natural or artificial) ultimately reflects their capacity to further or thwart human aims. Whatever does not interact with us in ways of interest to us may not be recognized at all, let alone recognized as intelligent.

It is difficult to compare animal intelligence across species, since wide-ranging sense modalities, cognitive capacities, and adaptations are involved. Tests may be biased by human motivations and sensory-motor capabilities. The tasks and rewards for testing animal intelligence are defined by humans, aligned with their goals. Even in the case of testing people, despite wide acceptance and appeal, the g-factor has been criticized as little more than a reification whose sole evidence consists in the very behaviors and correlations it is supposed to explain. Nevertheless, the comparative notion of intelligence, generalized across humans, was further generalized to include other creatures in the comparison, and then generalized further to include machines and even to apply to “arbitrary systems.” By definition, the measure should not be anthropocentric and should be independent of particular sense modalities, environments, goals, and even hardware.
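The reification critique can be made concrete with a small, purely illustrative sketch (the tests, loadings, and numbers below are invented for the example, not drawn from any study): when a battery of test scores is positively correlated, a dominant first factor necessarily falls out of the correlation matrix. Since that factor is computed from the correlations themselves, it cannot serve as independent evidence of a causal general ability behind them.

```python
import numpy as np

# Illustrative only: simulate five hypothetical test scores that share a common
# latent ability plus independent noise, then extract the first principal
# component of their correlation matrix -- the construct usually labelled "g".
rng = np.random.default_rng(0)
n_people = 1000
latent = rng.normal(size=n_people)                 # assumed shared latent "ability"
loadings = np.array([0.8, 0.7, 0.6, 0.75, 0.65])   # assumed test loadings
noise = rng.normal(size=(n_people, len(loadings)))
scores = latent[:, None] * loadings + noise * np.sqrt(1 - loadings**2)

corr = np.corrcoef(scores, rowvar=False)           # the "positive manifold" of correlations
eigvals = np.linalg.eigvalsh(corr)                 # eigenvalues in ascending order
g_share = eigvals[-1] / eigvals.sum()              # variance attributed to the first factor

print(f"Share of variance attributed to the first factor: {g_share:.2f}")
# The factor merely summarizes the correlations among the tests; it adds no
# evidence beyond them -- which is the gist of the reification critique.
```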

Like the notion of mind-in-general, intelligence-in-general is an abstraction that is grounded in human experience while paradoxically freed in theory from the tangible embodiment that is the basis of that experience. Its origins are understandably anthropocentric, derived historically from comparisons among human beings, and then extended to comparisons of other creatures with each other and with human beings. It was then further abstracted to apply to machines. The goal of artificial intelligence (AI) is to produce machines that can behave “intelligently”—in some sense that is extrapolated from biological and human origins. It remains unclear whether such an abstraction is even coherent. Since concepts of general intelligence are based on human experience and performance, it also remains unclear to what extent an AI could satisfy or exceed the criteria for human-level general intelligence without itself being at least an embodied autonomous agent: effectively an artificial organism, if not an artificial person.

Can diverse skills and behaviors even be conflated into one overall capacity, such as “problem-solving ability” or the g-factor? While the ability to solve one sort of problem carries over to some extent to other sorts of tasks, it does not necessarily transfer equally well to all tasks, let alone to situations that might not best be described as problem solving at all—such as the ability to be happy. Moreover, problem solving is a different skill from finding, setting, or effectively defining the problems worth solving, the tasks worth pursuing. The challenges facing society usually seem foisted upon us by external reality, often as emergencies. Our default responses and strategies are often more defensive than proactive. Another level of intelligence might involve better foresight and planning. Concepts of intelligence may change as our environment becomes more challenging, or as it becomes progressively less natural and more artificial, consisting largely of other humans and their intelligent machines.

Biologically speaking, intelligence is simply the ability to survive. In that sense, all currently living things are by definition successful, therefore intelligent. Though trivial sounding, this is important to note because models of intelligence, however abstract, are grounded in experience with organisms; and because the ideal of artificial general intelligence (AGI) involves attempting to create artificial organisms that are (paradoxically) supposed to be liberated from the constraints of biology. It may turn out, however, that the only way for an AI to have the autonomy and general capability desired is to be an embodied product of some form of selection: in effect, an artificial organism. Another relevant point is that, if an AI does not constitute an artificial organism, then the intelligence it manifests is not actually its own but that of its creators.

Autonomy may appear to be relative, a question of degree; but there is a categorical difference between a true autonomous agent—with its own intelligence dedicated to its own existence—and a mere tool to serve human purposes. A tool manifests only the derived intelligence of the agent designing or using it. An AI tool manifests the intelligence of the programmer. What does it mean, then, for a tool to be more intelligent than its creator or user? What it can mean, straightforwardly, is that a skill valued by humans is automated to more effectively achieve their goals. We are used to this idea, since every tool and machine was motivated by such improvement and usually succeeds until something better comes along. But is general intelligence a skill that can be so augmented, automated, and treated as a tool at the beck and call of its user?

The evolution of specific adaptive skills in organisms must be distinguished from the evolution of a general skill called intelligence. In conditions of relative stability, natural selection would favor automatic domain-specific behavior, reliable and efficient in its context. Any pressure favoring general intelligence would arise rather in unstable conditions. Domain-general cognitive processes would translate less directly into fitness-enhancing behavior and would require large amounts of energetically costly brain tissue. The biological question is how domain-general adaptation could emerge as something distinct from specific adaptive skills, and what would drive its emergence.

In light of the benefits of general intelligence, why have not all species evolved bigger and more powerful brains? Every living species is by definition smart enough for its current niche, for which its intelligence is an economical adaptation. It would seem, as far as life is concerned, that general intelligence is not only expensive and often superfluous, but also implies a general niche, whatever that might mean. Humans, for example, evolved to fit a wide range of changing conditions and environments, which they continue to expand further through technology. Even if we manage to stabilize the natural environment, the human world changes ever more rapidly—requiring more general intelligence to adapt to it.

The possibility of understanding mind as computation, and of viewing the brain metaphorically as a computer, is one of the great achievements of the computer age. (The computer metaphor is underwritten more broadly by the mechanist metaphor, which holds that any behavior of a biological “system” could be reduced to an algorithm.) Computer science and brain science have productively cross-pollinated. Yet the brain is not literally a machine, and mind and intelligence are ambiguous concepts not exclusively related to the brain. “Thinking” suggests reasoning and an algorithmic approach—the ideal of intellectual thought—which is only a small part of the activity by which the brain sustains the organism as a whole. Ironically, abstract concepts produced by the brain are recycled to explain the operations of the brain that give rise to them in the first place.

Ideally, we expect artificial intelligence to do what we want, better than we can, and without supervision. This raises several questions and should raise eyebrows too. Will it do what we want, and how can it be made to do so? How will we trust its (hopefully superior) judgment if it is so much smarter than us that we cannot understand its considerations? How autonomous can AI be, short of being a true self-interested agent? Under what circumstances could machines become such agents, competing with each other and with humans and other life forms for resources and for their very existence? The dangers of superintelligence attend the motive to achieve ever greater autonomy in AI systems, the extreme of which is the genuine autonomy manifested by living things. AI research should instead focus on creating powerful tools that remain under human control. That would be safer, wiser, and—shall we say—more intelligent.