On intentionality

Intentionality is an elusive concept that fundamentally means reference of something to something else. Reference, however, is not a property, state, or relationship inhering in things or symbols, nor between them; it is rather an action performed by an agent, who should be specified. It is an operation of relating or mapping one thing or domain to another. These domains may differ in their character (again, as defined by some agent). A picture, for example, might be a representation of a real landscape, in the domain of painted images. As such it refers to the landscape, and it is the painter who does the referring. Similarly, a word or sentence might represent a person’s thought, perception, or intention. The relevant agents, domains, and the nature of the mappings must be included before intentionality can be properly characterized.

In these terms, the rings of a tree, for example, may seem to track or indicate the age of the tree or periods favorable to growth. Yet, it is the external observer, not the tree, who establishes this connection and who makes the reference. Connections made by the tree itself (if such exist) are of a different sort. In all likelihood, the tree rings involve causal but not intentional connections.

A botanist might note connections she considers salient and may conclude that they are causal. Thus, changing environmental conditions can be deemed a cause of tree ring growth. By contrast, it would stretch the imagination to suppose that the tree intended to put on growth in response to favorable conditions, or that God (or Nature) intended to produce the tree ring pattern in response to weather conditions. These suppositions would project human intentionality where it doesn’t belong. Equally, it would be far-fetched to think that the tree deliberately created the rings in order to store in itself a record of those environmental changes, either for its own future use or for the benefit of human observers. The tree is simply not the kind of system that can do that. The intentionality we are dealing with is rather that of the observer. On the other hand, there are systems besides human beings that can do the kind of things we mean by referring, intending, and representing. In the case of such systems, it is paramount to distinguish clearly the intentionality of the system itself from that of the observer. This issue arises frequently in artificial intelligence, where the intentionality of the programmer is supposed to transfer to the automated system.

The traditional understanding of intentionality generally fails to make this distinction, largely because it is tied to human language usage. “Reference” is taken for granted to mean linguistic reference or something modeled on it. Intentionality is thus often considered inherently propositional even though, as far as we know, only people formulate propositions. If we wish to indulge a more abstract notion of ‘proposition’, we must concede that in some sense the system makes assertions itself, for its own reasons and not those of the observer. If ‘proposition’ is to be liberated from human statements and reasoning, the intention behind it must be conceived in an abstract sense, as a connection or mapping (in the mathematical sense) made by an agent for its own purposes.

Human observers make assertions of causality according to human intentions, whereas intentional systems in general make their own internal (and non-verbal) connections, for their own reasons, regardless of whatever causal processes a human observer happens to note. Accordingly, an ‘intentional system’ is not merely one to which a human observer imputes her own intentionality as an explanatory convenience (as in Dennett’s “intentional stance”). Such a definition excludes systems from having their own intentionality, which reflects the mechanist bias of Western science since its inception: that matter inherently lacks the power of agency we attribute to ourselves, and can only passively suffer the transmission of efficient causes.

An upshot of all this is that the project to explain consciousness scientifically requires careful distinctions that are often glossed over. One must distinguish the observer’s speculations about causal relations—between brain states and environment—from speculations about the brain’s tracking or representational activities, which are intentional in the sense used here. The observer may propose either causal or intentional connections, or both, occurring between a brain (or organism) and the world. But, in both cases, these are assertions made by the observer, rather than by the brain (organism) in question. The observer is at liberty to propose specific connections that she believes the brain (organism) makes, in order to try to understand the latter’s intentionality. That is, she may attempt to model brain processes from the organism’s own point of view, attempting as it were to “walk in the shoes of the brain.” Yet, such speculations are necessarily in the domain of the observer’s consciousness and intentionality. In trying to understand how the brain produces phenomenality (the “hard problem of consciousness”), one must be clear about which agent is involved and which point of view.

In general, one must distinguish phenomenal experience itself from propositions (facts) asserted about it. I am the witness (subject, or experiencer) directly to my own experience, about which I may also have thoughts in the form of propositions I could assert regarding the content of the experience. These could be proposed as facts about the world or as facts about the experiencing itself. Along with other observers, I may speculate that my brain, or some part of it, is the agent that creates and presents my phenomenal experience to “me.” Other people might also have thoughts (assert propositions) about my experience as they imagine it; they may also observe my behavior and propose facts about it they associate with what they imagine my experience to be. All these possibilities involve the intentionality of different agents in differing contexts.

One might think that intentionality necessarily involves propositions or something like them. This is effectively the basis on which an intentional analysis of brain processes inevitably proceeds, since it is a third-person description in the domain of scientific language. This is least problematic when dealing with human cognition, since humans are language users who normally translate their thoughts and perceptions into verbal statements. It is more problematic when dealing with other creatures. However, in all cases such propositions are in fact put forward by the observer rather than by the system observed. (Unless, of course, these happen to be the same individual; but even then, there are two distinct roles.)

The observer can do no better than to theoretically propose operations of the system in question, formulated in ordinary or some symbolic language. The theorist puts herself in the place of the system to try to fathom its strategies—what she would do, given what she conceives as its aims. This hardly implies that the system in question (the brain) “thinks” in human-language sentences (let alone equations) any more than a computer does. But, with these caveats, we can say that it is a reasonable strategy to translate the putative operations of a cognitive system into propositions constructed by the observer.

In the perspective presented here, phenomenality is grounded in intentionality, rather than the other way around. This does not preclude that intentionality can be about representations themselves or phenomenal experience per se (rather than about the world), since the phenomenal content as such can be the object of attention. The point to bear in mind is that two domains of description are involved, which should not be conflated. Speculation about a system’s intentionality is an observer’s third-person description; whereas a direct expression of experience is a first-person description by the subject. This is so, even when subject and observer happen to be the same person. It is nonsense to talk of phenomenality (qualia) as though it were a public domain like the physical world, to which multiple subjects can have access. It is the external world that offers common access. We are free to imagine the experience of agents similar to ourselves. But there is no verifiable common inner world.

All mental activity, conscious or unconscious, is necessarily intentional, insofar as the connections involved are made by the organism for its own purposes. (They may simultaneously be causal, as proposed by an observer.) But not all intentional systems are conscious. Phenomenal states are thus a subset of intentional states. All experience depends on intentional connections (for example, between neurons); but not all intentional connections result in conscious experience.

Sentience and selfhood

‘Consciousness’ is a vague term in the English language. Its counterparts in other languages, where they exist, often carry several meanings. To be conscious can be either transitive or intransitive; it can mean simply to be aware of something—to have an experience—or it can mean a state opposed to sleep, coma, or inattention. While consciousness clearly involves the role of the subjective self, one is not necessarily aware of that role in the moment. That is, one can be conscious though not self-conscious. The latter notion also is ambiguous: in everyday talk, self-consciousness refers to a potentially embarrassing awareness of one’s relationship to others, perhaps social strategizing. Here, it will mean something more technical: simply the momentary awareness of one’s own existence as a conscious subject.

It might be assumed that to be conscious is to be self-conscious, since the two are closely bound up for human beings. I propose rather to make a distinction between sentience (simply having experience) and the awareness of having that experience. The first involves no more than the naïve appearance of an external world as well as internal sensations—what Kant called phenomena and more recent philosophers call “contents of consciousness” or qualia. No concept of self enters into sentience as such. The second involves, additionally, the awareness of self and of the act or fact of experiencing. One should thus be able to imagine, at least, that other creatures can be sentient—even if they do not seem aware of their individual existence in our human way, and regardless of whether one can imagine just what it is like to be them.

Language complicates the issue. For, we can scarcely speak or think of sentience (or awareness, consciousness, experience, etc.) in general without reference to our familiar human sentience. We are thereby reminded of our own existence—indeed, of our presence in the moment of speaking or thinking about it. Nevertheless, it is as possible to be caught up in thought as to be caught up in sensation. (We all daydream, for example, only “awakening” when we realize that is what we have been doing.) Then the object is the focus rather than the subject. This outward focus is, in fact, the default state. Often, we are simply aware of the world around us, or of some thought in regard to it; we are not aware of being aware. Perhaps it is the fluidity of this boundary—between the state of self-awareness and simple awareness of the contents of experience—which gives the impression that sentience necessarily involves self-awareness. After all, as soon as we notice ourselves being sentient, we are self-aware. It is illogical, however, to conclude that creatures without the capability of self-awareness are not sentient. Language plays tricks with labels. At one time, animals were considered mere insensate machines—incapable of feeling, let alone thought, because these properties could belong only to the human soul.

One might even suppose that self-consciousness is a function of language, since the act of speaking to others directly entails and reflects one’s own existence in a way that merely perceiving the world or one’s sensations does not. Yet, it hardly follows that either sentience or self-consciousness is limited to language users. The problem, again, is that we are ill-equipped to imagine any form of experience other than our own, which we are used to expressing in words, both to others and to ourselves.

This raises the question of the nature and function of self-consciousness, if it is not simply a by-product of the highly evolved communication of a social species. The question is complicated by the fact that identifiable tags of self-consciousness (such as recognizing one’s image in a reflection) seem to be restricted to intelligent creatures with large brains—such as chimpanzees, cetaceans, and elephants—all of which are also social creatures. On the other hand, social insects communicate, but we do not thereby suppose that they are conscious as individuals. To attribute a collective consciousness to the hive or colony extends the meaning of the term beyond the subjective sense we are considering here. It becomes a description of emergent behavior, observed, rather than individual experience perceived. In some sense, consciousness emerges in the brain; but few today would claim that individual neurons are “conscious” because the brain (or rather the whole organism) is conscious.

Closely related to the distinction between simple awareness and self-awareness is the distinction between object and subject, and the corresponding use of person in language. We describe events around us in the third person, as though their appearance is simply objective fact, having nothing to do with the perceiver. For the most part, for us the world simply is. Though we are self-conscious in theory, naïve realism is our actual default state of mind. With good (evolutionary) reason, the object dominates our attention. Yet, self-awareness, too, is functional for us as highly social creatures. We get along, in part, through the ability to imagine the subjective experience of others, which means first recognizing our own subjectivity. The very fact that we conceive of sentience at all is only possible because of this realization. The subject (self) emerges in our awareness as an afterthought with profound implications. As in the biblical Fall, our eyes are opened to our existence as perceiving agents, and we are cast from the state of unselfconscious being.

The modern understanding of consciousness (i.e., awareness of the world as distinct from the world itself) is that the object’s appearance is constructed by the subject. Our daily experience is a virtual reality produced in the brain, an internal map constantly updated from external input. This realization entails metaphysical questions, such as the relationship between that virtual inner show and the reality that exists “out there.” But it is also a practical question. We need an internal account of external reality that is adequate for survival, independent of how “true” it might or might not be. Self-consciousness is functional in that way too. It serves us to know that we co-create a model of external reality, and that the map is not the territory itself, but something we create as a useful guide to navigate it. Knowing the map as a symbolic representation rather than objective fact means we are free to revise it according to changing need. The moment or act of self-consciousness awakens us from the realist trance. One is no longer transfixed by experience taken at face value. Suddenly we are no longer looking at the world but at our own looking.

This capacity to “wake up” serves both the individual and society. It enables the person or group to stand back from an entrapping mindset, or viewpoint, to question it, which opens the possibility of a broader perspective. Literally, this means a bigger picture, encompassing more of reality, which is potentially more adequate for survival both individually and collectively. Knowledge is empowering; yet it is also a trap when it seems to form a definitive account. The map is then mistaken for the territory and we fall again into trance. So, there is a dialectical relationship between knowing and questioning, between certainty and uncertainty. The ability to break out of a particular viewpoint or framework establishes a new ground for an expanded framework; but that can only ever be provisional, for the new ground must eventually give way again to a yet larger view—ad infinitum. That, of course, is challenging for a finite creature. We are obliged to trust the knowledge we have at a given time, while aware that it may not be adequate. That double awareness is fraught with anxiety. The psychological tendency is to take refuge in what we take to be certain, ignoring the likelihood that it is illusory.

Sentience arose in organisms as a guide to survival, an internal model of the world. Self-consciousness arose—at least in humans—as a further survival tool, the ability to transcend useful appearances in favor of potentially more useful ones. It comes, however, at the price of ultimate uncertainty. One may prefer the trance to the anxiety. From a species point of view, that may be a luxury that expendable individuals can afford, which the planetary collective cannot. Individuals and even nations can stand or fall by their mere beliefs, through some version of natural selection. But what inter-galactic council will be there to give the Darwin Award to a failed human species?

The equation of experience

I cringe when I hear people speak casually of their reality, since I think what they mean is their personal experience and not the reality we live in together. Speaking about “realities” in the plural is more than an innocent trope. It is often a way to justify belief or opinion, as though private experience is all that matters because there is no objective reality to arbitrate between perspectives, or because the task of approaching it seems hopeless. But clearly there is an objective reality of nature, even if people cannot agree upon it, and what we believe certainly does matter to our survival. So, it seems important to express the relationship between experience and reality in some clear and concise way.

The “equation of experience” is my handy name for the idea that everything a person can possibly experience or do—indeed all mental activity and associated behavior—is a function of self and world conjointly. Nothing is ever purely subjective or purely objective. There is always a contribution to experience, thought, and behavior from within oneself, and likewise a contribution from the world outside. On the analogy of a mathematical function, this principle reads E = f(s,w). The relative influence of these factors may vary, of course. Sensory perception obviously involves a strong contribution from the external world; nevertheless, the organization of the nervous system determines how sensory input is processed and interpreted, resulting in how it is experienced and acted upon. At the other extreme, the internal workings of the nervous system dominate hallucination and imagination; nevertheless, the images and feelings produced most often refer to the sort of experiences one normally has in the external world.

Of course, one should define terms. Experience here means anything that occurs in the consciousness of a cognitive agent (yet the “equation” extends to include behavior that other agents may observe, whether one is conscious of it or not). Self means the cognitive agent to whom such experience occurs—usually a human being or other sentient organism. World means the real external world that causes an input to that agent’s cognitive system.

But the “equation” can be put in a more general form, which simply expresses the input/output relations of a system. Then, O = f(i_s, i_w), where O is the output of the agent or system, i_s is the input from the system itself, and i_w is the input from the world outside the system or agent. This generalization does not distinguish between behavior and experience. Either is an “output” of a bounded system defined by input/output relations. For organisms, the boundary is the skin, which also is a major sensory surface.
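The input/output form of the “equation” can be sketched in code. This is only an illustrative toy, not a model proposed in the text: the function name, the weighted-sum form, and the self_weight parameter are all assumptions introduced here to make the joint dependence of output on internal and external input concrete.

```python
# Toy sketch of O = f(i_s, i_w): output depends jointly on input from
# the system itself (i_s) and input from the world (i_w). The weighted
# sum and the self_weight parameter are illustrative assumptions only.

def experience(i_s: float, i_w: float, self_weight: float = 0.5) -> float:
    """Blend internal and external input; self_weight models their
    relative influence (perception is world-dominated, imagination
    is self-dominated)."""
    return self_weight * i_s + (1.0 - self_weight) * i_w

# Perception: strong contribution from the world, filtered by the system.
perceived = experience(i_s=0.2, i_w=0.9, self_weight=0.2)

# Imagination: internal workings dominate, yet world input never vanishes.
imagined = experience(i_s=0.9, i_w=0.1, self_weight=0.9)
```

The point the sketch makes is structural, not quantitative: neither output is ever a function of one variable alone, which is exactly the claim that nothing is purely subjective or purely objective.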

While it seems eminently a matter of common sense that how we perceive and behave is always shaped both by our own biological nature and by the nature of the environing world, human beings have always found reasons to deny this simple truth, either pretending to an objective view independent of the subject, or else pretending that everything is subjective or “relative” and no more than a matter of personal belief.

The very ideal of objectivity or truth attempts to factor out the subjectivity of the self. Science attempts to hold the “self” variable constant, in order to explore the “world” variable. In principle, it does this by excluding what is idiosyncratic for individual observers and by imposing experimental protocols and a common mathematical language embraced by all standardized observers. Yet, this does not address cognitive biases that are collective, grounded in the common biology of the species. Science is, after all, a human-centric enterprise. To focus on one “variable” backs the other into a corner, but does not eliminate it.

Even within the scientific enterprise, there are conflicting philosophical positions. The perennial nature versus nurture debate, for example, emphasizes one factor over the other—though clearly the “equation” tells us there should be no such debate because nature and nurture together make the person! At the other extreme, politics and the media amount to a free-for-all of conflicting opinions and beliefs. Consensus is rarely attempted—which hardly means that no reality objectively exists. Sadly, “reality” is a wild card played strategically according to the subjective needs of the moment, by pointing disingenuously to select information to support a viewpoint, while an opposing group points to other select information. The goal is to appear clever and right—and to belong, within the terms of one’s group—precisely by opposing some other group, dismissing and mocking their views and motives. Appeal to reality becomes no more than a strategy of rhetoric, rather than a genuine inquiry into what is real, true, or objective.

How does such confusion arise? The basic challenge is to sort out the influence of the internal and external factors, without artificially ignoring one or the other. However, an equation in two variables cannot be solved without a second equation to provide more information—or by deliberately holding one variable constant, as in controlled experiments. The problem is that in life there is no second equation and little control. This renders all experience ambiguous and questionable. But that is a vulnerable psychological state, which we are programmed to resist. On the one hand, pretending that the “self” factor has no effect on how we perceive reality is willful ignorance. On the other hand, so is pretending that there is no objective reality or that it can be taken for granted as known. How one views oneself and how one views the world are closely related. Both are up for grabs, because they are themselves joint products of inner and outer factors together. How, then, to sort out truth?

I think the first step is to recognize the problem, which is the basic epistemic dilemma facing embodied biological beings. We are not gods but human creatures. In terms of knowing reality, this means acknowledging the subjective factor that always plays a part in all perception and thought. It means transcending the naïve realism that is our biological inheritance, which has served us well in many situations, but has its limits. We know that appearances can be deceptive and that communication often serves to deceive others. Our brains are naturally oriented outward and toward survival; we are programmed to take experience at “face value,” which is as much determined by biological or subjective need as by objective truth. We now know something of how our own biases shape how we perceive and communicate. We know something about how brains work to gain advantage rather than truth. Long ago we were advised to “Know Thyself.” There is still no better recipe for knowing others or knowing reality.

The second—and utterly crucial—step is to act in good faith, using that knowledge. That is, to intend truth or reality rather than personal advantage. To aim for objectivity, despite the stacked odds. This means being honest with oneself, trying earnestly to recognize one’s personal bias or interest for the sake of getting to a truth that others who also have that aim, and who practice that sincerity, can recognize. Holding that intention in common allows convergence. Intending to find that common ground presumes that it should be mutually approachable by those who act in good faith. In contrast, the attitude of all against all tacitly denies the common ground of an objective reality.

No doubt convergence is easier said than done, for the very reasons here discussed—namely, our biological nature and the ambiguity inhering in all experience because of the inextricable entanglement of subject and object. With no god’s-eye view, that is the disadvantage of being a finite and limited creature, doomed to see everything through a glass darkly. But there is also an advantage in knowing this condition and the limitations it imposes. To realize the influence of the mind over experience is sobering but also empowering. We are no longer passive victims of experience but active co-creators of it, who can join with others of good will to create a better world.

Compromise is a traditional formula to overcome disagreement; yet, it presumes some grumbling forfeit by all parties for the sake of coming to a begrudged decision. In the wake of the decision, it assumes that people will nevertheless continue to differ and disagree, in the same divergent pattern. There is an alternative. While perceiving differently, we can approach agreement from different angles by earnestly intending to focus on the reality that is common to all. Then, like the blind men trying to describe the elephant in the room, each has something important to contribute to the emerging picture upon which the fate of all depends.

From taking for granted to taking charge

In our 200,000 years as a species, humankind has been able to take for granted a seemingly boundless ready-made world, friendly enough to permit survival. Some of that was luck, since there were relatively benign periods of planetary stability, and some of it involved human resourcefulness in being able to adapt or migrate in response to natural changes of conditions—even changes brought about by people themselves. Either way, our species was able to count on the sheer size of the natural environment, which seemed unlimited in relation to the human presence. (Today we recognize the dimensions of the planet, but for most of that prehistory there was not even a concept of living on a “planet.”) There was no need—and really no possibility—to imagine being responsible for the maintenance of what has turned out to be a finite and fragile closed system. Perhaps there was a local awareness among hunter-gatherers about cause and effect: to browse judiciously and not to poo in your pond. Yet the evidence abounds that early humans typically slaughtered to extinction all the great beasts. Once “civilized,” the ancients cut down great forests—and even bragged about it, as Gilgamesh pillaged the cedars of Lebanon for sport.

Taming animals and plants (and human slaves) for service required a mentality of managing resources. Yet, this too was in the context of a presumably unlimited greater world that could absorb any catastrophic failures in a regional experiment. We can scarcely know what was in the minds of people in transition to agriculture; but it is very doubtful that they could have thought of “civilization” as a grand social experiment. Even for kings, goals were short-term and local; for most people, things mostly changed slowly in ways they tried to adjust to. Actors came and went in the human drama, but the stage remained solid and dependable. Psychologically, we have inherited that assumption: human actions are still relatively local and short-sighted; the majority feel that change is just happening around them and to them. The difference between us and people 10,000 years ago (or even 500 years ago) is that we finally know better. Indeed, only in the past few decades has it dawned on us that the theatre is in shambles.

I grew up in 1950s Los Angeles, when gasoline was 20 cents the gallon, and where you might casually drive 20 miles to go out for dinner. As a child, that environment seemed the whole world, totally “natural,” just how things should be. My job was to learn the ropes of that environment. But, of course, I had little knowledge of the rest of the planet and certainly no notion of a ‘world’ in the cultural sense. Only when I traveled to Europe as a young man did I experience something different: instead of the ephemera of L.A., an environment that was old and made of stone, in which people organized life in delightfully different ways. No doubt that cultural enlightenment would have been more extreme had I traveled in Africa instead of Europe. But it was the beginning of an awareness of alternatives. Still, I could not then imagine that cheap gas was ruining the planet. That awareness only crept upon the majority of my generation in our later years, coincident with the maturing consciousness of the species.

We’ve not had the example of another planet to visit, whose wise inhabitants have learned to manage their own numbers and effects in such a way as to keep the whole thing going. We have only imagination and history on this planet to refer to. Yet, the conclusion is now obvious: we have outgrown the mindset of taking for granted and must embrace the mindset of taking charge if we are to survive.

What happened to finally bring about this species awakening? To sum it up: a global culture. When people were few, they were relatively isolated, the world was big, and the capacity to affect their surroundings was relatively small. Now that we are numerous and our effects highly visible, we are as though crowded together in a tippy lifeboat, where the slightest false move threatens to capsize Spaceship Earth. Through physical and digital proximity, we can no longer help being aware of the consequences of our own existence and attendant responsibility. Yet, a kind of schizophrenia sets in from the fact that our inherited mentality cannot accommodate this sudden awareness of responsibility. It is as though we hope to bring with us into the lifeboat all our bulky possessions and conveniences and all the behaviors we took for granted as presumed rights in a “normally” spacious and stable world.

We are the only species capable of deliberately doing something about its fate. But that fact is not (yet) engrained in our mentality. Of course, there are futurists and transhumanists who do think very deliberately about human destiny, and now there are think tanks like the Future of Humanity Institute. Individual authors, speakers, and activists are deeply concerned about one dire problem or another facing humanity, such as climate change, social inequity, and continuing nuclear threat, along with the brave new worlds of artificial intelligence and genetic engineering. Some of them have been able to influence public policy, even on the global scale. Most of us, however, are not directly involved in those struggles, and are only beginning to be touched directly by the issues. Like most of humanity throughout the ages, we simply live our lives, with the daily concerns that have always monopolized attention.

However, the big question now looming over all of us is: what next for humanity? It is not about predicting the future but about choosing and making it. (Prediction is just bracing ourselves for what could happen, and we are well past that.) We know what will happen if we remain in the naïve mindset of all the creatures that have competed for existence in evolutionary history. Those creatures passively suffered changes they could not conceive, let alone consciously control, even when they had contributed to those changes. Homo sapiens will inevitably go extinct, like the more than 99% of all species that have ever existed; given our accelerating lifestyle, this will likely be sooner rather than later. We are forced to the terrible realization that only our own intervention can rectify the imbalances that threaten us. Let us not underestimate the dilemma: for we also know that “intervention” created many of those problems in the first place!

Though it is the nature of plans to go awry, humanity needs a plan and the will to follow it if we are to survive. That requires a common understanding of the problems and agreement on the solutions. Unfortunately, that has always been a weak point of our species, which has so far been unable to act on a species level, and until very recently has been unable even to conceive of itself as a unified entity with a possible will. We are stuck at the tribal level, even when the tribes are nations. More than ever we need to brainstorm toward a calm consensus and collective plan of action. Ironically, there is now the means for all to be heard. Yet, our tribal nature and selfish individualist leanings result in a cacophony of contradictory voices, in a free-for-all bordering on hysteria. There is riot, mutiny and mayhem on the lifeboat, with no one at the tiller. No captain has the moral (much less political) authority to steer Spaceship Earth. What can we then hope for but doom?

Some form of life will persist on this planet, perhaps for several billion years to come. But the experiment of civilization may well fail. And what is that experiment but the quest to transcend the state of nature given us, which no other creature has been able to do? We were not happy as animals, having imagined the life of gods. With one foot on the shore of nature and one foot in the skiddy raft of imagination, we do the splits. The two extreme scenarios are a retreat into the stone age and a brash charge into a post-humanist era. Clearly, eight billion people cannot go back to hunting and gathering. Nor can they all become genetically perfect immortals, colonize Mars, or upload themselves to some more durable form of embodiment. The lifeboat will empty considerably if it does not sink first.

Whatever the way forward, it must be with conscious intent on a global level. We will not go far bumbling along as usual. Whether or not salvation is possible, we ought to try our best to achieve the best of human ideals. Whether the ship of state (or Spaceship Earth) floats or sinks, we can behave in ways that honor the best of human aspirations. To pursue another metaphor: the board game of life is ever changing, yet at any given moment it has rules and pieces. The point is not just to win but also to play well, even as we attempt to redefine the rules and even the game. That means behaving nobly, as though we were actually living in that unrealized dream. Our experiment all along has been to create an ideal world—using the resources of the real one. Entirely escaping physical embodiment is a pipe dream; but modifying ourselves physically is a real possibility. In a parallel way, a completely man-made world is an oxymoron, for it will always exist in the context of some natural environment with its own rules—even in outer space. Yet coming to a workable arrangement with nature should be possible. After all, that is what life has always done. With no promise of success, our best strategy is a planetary consciousness willing to take charge of the Earth’s future. To get there, we must learn to regulate our own existence.

Yes, but is it art?

Freud observed that human beings have a serious and a playful side. The “Reality Principle” reflects the need to take the external world seriously, driven by survival. Science and technology serve the Reality Principle insofar as they accurately represent the natural world and allow us to predict, control, and use it for our benefit. Yet they leave unfulfilled a deep need for sheer gratuitous activity—play. The “Pleasure Principle” is less focused, for it reflects not only pursuit of what is good for the organism but also the playful side of human nature that sometimes thumbs its nose at “reality.” It reflects the need to freely define ourselves and the world we live in—not to be prisoners of biology, social conditioning, practicality, and reason. I believe this is where art (like music, sport, and some mathematics) comes literally into play.

Plato dismissed art as dealing only with appearances, not with truth. According to him, art is merely a form of play, not to be taken seriously. However, we do take art seriously precisely because it is play. What we find beautiful or interesting about a work of art often involves its formal qualities, which reveal the artist’s playfulness at work. Like science fiction, art may portray an imagined world; but it can also directly establish a world simply by assembling the necessary elements. Just as a board game comes neatly in a box, so the artist’s proposed world comes in a frame, on a plinth, or in a gallery. What it presents may seem pointless, but that is its point. It makes its own kind of sense, if not that of the “real” world. The artwork may be grammatically correct while semantically nonsense. Art objects are hypothetical alternatives to the practical objects of consumer society, of which they are sometimes parodies. Often they are made of similar materials, using similar technology, but expressing a different logic or no apparent logic at all. Artistic invention parallels creativity in science and technology. At the most ambitious levels, large teams of art technicians undertake huge projects, rivaling the monumentality of medieval cathedrals and the modern cinema, but also rivaling space launches and cyclotrons. Extravagance expresses the Pleasure Principle in all domains.

Like technologists, artists are experimentalists. They want to see what happens when you do this or that. They love materials, processes, and tinkering. Some are also theorists who want to follow out certain assumptions or lines of thought to their ultimate conclusions. In this they are aided by zealous curators, art historians, and gallery owners who propose ever-changing commentaries and theories of art, reflecting what artists do but also shaping it. The world of contemporary art seems driven by a restless mandate of “originality” that resembles the dynamics of the fashion industry and the need for constant change that fuels consumerism generally. Like scientists, ambitious artists may be driven to surpass what they have already done or the accomplishments of others. Some seek a place in art history, which is little more than the hindsight of academics and curators or the self-serving promotions of dealers and gallerists.

Science is often distinguished from art and other cultural expressions by its progress, through the accumulation of data and the consequent advance of technology. Its theories seem to build toward a more complete and accurate representation of reality. Yet theories are always subject to revision, and data are subject to refinement and reinterpretation. To predict the future of science is to predict new truths of nature that we cannot know in advance. Art too accumulates, and its social role has evolved in step with changing institutions and practices, its forms with changing technology. There is pattern and direction in art history, but whether that can be called progress in a normative sense is debatable. Art does not seek to reveal reality so much as to reveal the artist and to play. Indeed, it seems bent on freeing itself from the confines of reality.

Art is also an important kind of self-employment. It provides not only alternative objects and visions but also an alternative form of work and of workplace. It’s a way to establish and control one’s own work environment. The studio is the artist’s laboratory. Art defines an alternative form of production and relation to work. Artists can be their own bosses, if at the price of an unstable income. As in society at large, a small elite enjoys the bulk of success and wealth. Some artists are now wealthy entrepreneurs, and some collectors are but speculative investors. The headiness of the contemporary art world mirrors the world of investment, with its easy money and financial abstractions, prompting questions about the very meaning of wealth—and of art. Indeed, art has always served as a visible form of wealth, and therefore as a status symbol. At one time, the value of artworks reflected the labor-intensive nature of the work, and often the use of precious materials. Today, however, the market value of an artwork reflects how badly other people want it—whatever their reasons.

In modern times, art has inherited a mystique that imbues it with social value apart from labor value and even the marketplace. Although art defies easy definition, and now encompasses a limitless diversity of expressions, people continue to recognize and value art as different from consumer items that serve more practical functions. On the one hand, art represents pure creativity—which is another word for play—and also an alternative vision. On the other hand, like everything else, it has succumbed to commercialization. Artists are caught in between. Most must sell their work to have a livelihood. To get “exposure,” they must be represented in galleries and are tempted to aim at least some of their work toward the marketplace. Thus, one aspect of art, and of being an artist, reflects the Pleasure Principle, while the other reflects the Reality Principle. Yet, when the motives surrounding art are not earnest enough—when they appear too mundane, too heady, too trivial, too dominated by money, fame, or ideology—the perennial question arises: is it art? That we can raise the question indicates that we expect more.

What more might be expected? European art originated as a religious expression—which might be said of art in many places and times. Quite apart from any specific theology, human beings have always had a notion of the sacred. That might be no more than a reverence for tradition. But it might also be a quest to go beyond how things have been done and how they have been seen. Religious art has often served as propaganda for an ideology that reinforced the social order of the day. Advertising and news media serve this purpose in our modern world. But even within the strictures of religious art (or commercial art or politically sanctioned art), there is license to interpret, to play, to improvise and surprise. The gratuitous play with esthetics and formal elements can undermine the serious ostensible message. Perhaps that is the eternal appeal of art, its mystique and its mandate: to remind us of our own essential freedom to view the world afresh, uniquely, and playfully.

What is intelligence?

Intelligence is an ambiguous and still controversial notion. It has been defined variously as goal-directed adaptive behavior, the ability to learn, to deal with novel situations or insufficient information, to reason and do abstract thinking, etc. It has even been defined as the ability to score well on intelligence tests! Sometimes it refers to observed behavior and sometimes to an inner capacity or potential—even to a pseudo-substance wryly called smartonium.

Just as information is always for someone, so intelligence is someone’s intelligence, measured usually by someone else with their biases, using a particular yardstick for particular purposes. Even within the same individual, the goals of the conscious human person may contradict the goals of the biological human organism. It is probably this psychological fact that allows us to imagine pursuing arbitrary goals at whim, whereas the goals of living things are hardly arbitrary.

Measures of intelligence were developed to evaluate human performance in various areas of interest to those doing the measuring. This gave rise to a notion of general intelligence that could underlie specific abilities. A hierarchical concept of intelligence proposes a “domain independent” general ability (the famous g-factor) that informs and perhaps controls domain-specific skills. “General” can refer to the range of subjects as well as the range of situations. What is general across humans is not the same as what is general across known species, or across theoretically possible agents or environments. Perhaps the intelligence measured can be no more general than the tests and situations used to measure it. Insofar as it is relevant to humans, the intelligence of other entities (whether natural or artificial) ultimately reflects their capacity to further or thwart human aims. Whatever does not interact with us in ways of interest to us may not be recognized at all, let alone recognized as intelligent.

It is difficult to compare animal intelligence across species, since wide-ranging sense modalities, cognitive capacities, and adaptations are involved. Tests may be biased by human motivations and sensory-motor capabilities. The tasks and rewards for testing animal intelligence are defined by humans, aligned with their goals. Even in the case of testing people, despite wide acceptance and appeal, the g-factor has been criticized as little more than a reification whose sole evidence consists in the very behaviors and correlations it is supposed to explain. Nevertheless, the comparative notion of intelligence, generalized across humans, was further generalized to include other creatures in the comparison, and then generalized further to include machines and even to apply to “arbitrary systems.” By definition, the measure should not be anthropocentric and should be independent of particular sense modalities, environments, goals, and even hardware.
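The reification critique can be made concrete. The g-factor is conventionally extracted as the dominant principal component of a matrix of positive correlations among test scores, so its statistical evidence just is those correlations. The following is a minimal sketch of that extraction; the test names and correlation values are invented for illustration, not drawn from any real dataset:

```python
import numpy as np

# Hypothetical correlation matrix for four cognitive tests
# (verbal, spatial, memory, arithmetic). All correlations are
# positive: the "positive manifold" that motivates g.
R = np.array([
    [1.0, 0.5, 0.4, 0.5],
    [0.5, 1.0, 0.4, 0.4],
    [0.4, 0.4, 1.0, 0.3],
    [0.5, 0.4, 0.3, 1.0],
])

# g is estimated as the first principal component: the single
# axis accounting for the most shared variance across tests.
eigenvalues, eigenvectors = np.linalg.eigh(R)
order = np.argsort(eigenvalues)[::-1]          # sort descending
eigenvalues = eigenvalues[order]
eigenvectors = eigenvectors[:, order]

# Proportion of total variance captured by the first component.
share = eigenvalues[0] / eigenvalues.sum()
print(f"First component explains {share:.0%} of total variance")

# How strongly each test "loads" on the extracted g.
loadings = eigenvectors[:, 0] * np.sqrt(eigenvalues[0])
print("g-loadings (magnitude):", np.round(np.abs(loadings), 2))
```

Because every off-diagonal entry is positive, one component inevitably dominates and every test loads on it, which is exactly the critics’ point: the procedure guarantees a "general factor" from the correlations alone, without establishing any underlying causal capacity.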

Like the notion of mind-in-general, intelligence-in-general is an abstraction that is grounded in human experience while paradoxically freed in theory from the tangible embodiment that is the basis of that experience. Its origins are understandably anthropocentric, derived historically from comparisons among human beings, and then extended to comparisons of other creatures with each other and with human beings. It was then further abstracted to apply to machines. The goal of artificial intelligence (AI) is to produce machines that can behave “intelligently”—in some sense that is extrapolated from biological and human origins. It remains unclear whether such an abstraction is even coherent. Since concepts of general intelligence are based on human experience and performance, it also remains unclear to what extent an AI could satisfy or exceed the criteria for human-level general intelligence without itself being at least an embodied autonomous agent: effectively an artificial organism, if not an artificial person.

Can diverse skills and behaviors even be conflated into one overall capacity, such as “problem-solving ability” or the g-factor? While ability to solve one sort of problem carries over to some extent to other sorts of tasks, it does not necessarily transfer equally well to all tasks, let alone to situations that might not best be described as problem solving at all—such as, for example, the ability to be happy. Moreover, problem solving is a different skill from finding, setting, or effectively defining the problems worth solving, the tasks worth pursuing. The challenges facing society usually seem foisted upon us by external reality, often as emergencies. Our default responses and strategies are often more defensive than proactive. Another level of intelligence might involve better foresight and planning. Concepts of intelligence may change as our environment becomes more challenging. Or as it becomes progressively less natural and more artificial, consisting largely of other humans and their intelligent machines.

Biologically speaking, intelligence is simply the ability to survive. In that sense, all currently living things are by definition successful, and therefore intelligent. Though it sounds trivial, this is important to note, because models of intelligence, however abstract, are grounded in experience with organisms; and because the ideal of artificial general intelligence (AGI) involves attempting to create artificial organisms that are (paradoxically) supposed to be liberated from the constraints of biology. It may turn out, however, that the only way for an AI to have the autonomy and general capability desired is to be an embodied product of some form of selection: in effect, an artificial organism. Another relevant point is that, if an AI does not constitute an artificial organism, then the intelligence it manifests is not actually its own but that of its creators.

Autonomy may appear to be relative, a question of degree; but there is a categorical difference between a truly autonomous agent—with its own intelligence dedicated to its own existence—and a mere tool serving human purposes. A tool manifests only the derived intelligence of the agent designing or using it. An AI tool manifests the intelligence of the programmer. What does it mean, then, for a tool to be more intelligent than its creator or user? What it can mean, straightforwardly, is that a skill valued by humans is automated to achieve their goals more effectively. We are used to this idea, since every tool and machine was motivated by such improvement and usually succeeds until something better comes along. But is general intelligence a skill that can be so augmented, automated, and treated as a tool at the beck and call of its user?

The evolution of specific adaptive skills in organisms must be distinguished from the evolution of a general skill called intelligence. In conditions of relative stability, natural selection would favor automatic domain-specific behavior, reliable and efficient in its context. Any pressure favoring general intelligence would arise rather in unstable conditions. The emergence of domain-general cognitive processes would translate less directly into fitness-enhancing behavior, and would require large amounts of energetically costly brain tissue. The biological question is how domain-general adaptation could emerge distinct from specific adaptive skills and what would drive its emergence.

In light of the benefits of general intelligence, why have not all species evolved bigger and more powerful brains? Every living species is by definition smart enough for its current niche, for which its intelligence is an economical adaptation. It would seem, as far as life is concerned, that general intelligence is not only expensive, and often superfluous, but implies a general niche, whatever that might mean. Humans, for example, evolved to fit a wide range of changing conditions and environments, which they continue to expand further through technology. Even if we manage to stabilize the natural environment, the human world changes ever more rapidly—requiring more general intelligence to adapt to it.

The possibility of understanding mind as computation, and of viewing the brain metaphorically as a computer, is one of the great achievements of the computer age. (The computer metaphor is underwritten more broadly by the mechanist metaphor, which holds that any behavior of a biological “system” can be reduced to an algorithm.) Computer science and brain science have productively cross-pollinated. Yet the brain is not literally a machine, and mind and intelligence are ambiguous concepts not exclusively related to the brain. “Thinking” suggests reasoning and an algorithmic approach—the ideal of intellectual thought—which is only a small part of the activity of a brain responsible for the organism as a whole. Ironically, abstract concepts produced by the brain are recycled to explain the operations of the brain that give rise to them in the first place.

Ideally, we expect artificial intelligence to do what we want, better than we can, and without supervision. This raises several questions and should raise eyebrows too. Will it do what we want, and how can it be made to do so? How will we trust its (hopefully superior) judgment if it is so much smarter than us that we cannot understand its considerations? How autonomous can AI be, short of being a true self-interested agent? Under what circumstances could machines become such agents, competing with each other and with humans and other life forms for resources and for their very existence? The dangers of superintelligence attend the motive to achieve ever greater autonomy in AI systems, the extreme of which is the genuine autonomy manifested by living things. AI should instead focus on creating powerful tools that remain under human control. That would be safer, wiser, and—shall we say—more intelligent.

Origins of the white lie

In the wake of the recently discovered unmarked graves of indigenous children at state-sponsored residential schools run by churches, there has been much discussion of the attitudes and practices of colonialism in Canada. Hardly institutions of learning, these were indoctrination centres serving cultural genocide. It is politically correct now to look back with revulsion, as though we now lived in a different world. Should we be so smug? After all, the last Indian residential school closed only twenty-five years ago.

What is particularly horrifying—and yet perplexing—is the prospect that many of the people running these schools (and the government officials who commissioned them) probably felt they were doing the right thing in “helping” indigenous children assimilate into white society. Apart from cynical land-grabbing and blatant racism, many in government may have thought themselves well-motivated, and the school personnel may have been sincerely devout. Yet, the result was malicious and catastrophic. There were elements of the same mean-spirited practices in English boarding schools and ostensibly charitable institutions. Nineteenth-century novels depict the sadism in the name of character formation, discipline and obedience, which were supposed to prepare young men and women for their place in society. How is it possible to be mean and well-meaning at the same time?

Certainly, “the white man’s burden” was a notion central to colonialism. It is related to the European concept of noblesse oblige, which was an aspect of the reciprocal duties between peasant and aristocrat in medieval society. The very fact that such class relationships (between the lowly and their betters) persist even today is key to the sort of presumption of superiority illustrated by the residential schools. Add to class the element of race, then combine with religious proselytizing, empire and greed, and you have a rationale for conquest. The natives were regarded suspiciously as ignorant savages who made no proper use of their land and “resources.” Their bodies were raw material for slavery and their souls for conversion. All in the name of civilizing “for their own good.” Indeed, slavery was a global institution from time immemorial, practiced in Canada as well as the U.S., and practiced even by indigenous natives themselves.

In view of the Spanish Inquisition in the European homeland, it cannot be too surprising that the conquistadors applied similar methods abroad. The fundamental religious assumption was that the body has little importance compared to the soul. In the medieval Christian context, it was self-evident that the body could be mistreated, tortured, even burnt alive for the sake of the soul’s salvation. According to contemporary accounts, the conquistadors committed atrocities in a manner intended to outwardly honor their religion: natives hanged and burned at the stake—in groups of thirteen as a tribute to Christ and his twelve apostles! The utter irony and perversity of such “logic” has more recent parallels and remains just as possible today.

The Holocaust enacted an intention to keep society pure by eliminating elements deemed undesirable. Eugenics was a theme of widespread interest in the early twentieth century, not only in Nazi Germany. Hannah Arendt argued controversially that the atrocities were committed less by psychopathic monsters than by ordinary people who more or less believed in what they were doing, if they thought about it deeply at all. In the wake of WW2, interest was renewed in understanding how such things can happen in the name of nationalism, racial superiority, or some other captivating agenda. In particular: to understand how unconscionable behavior is internally justified. The psychological experiments of Stanley Milgram on obedience to authority shed light on the banality of evil by showing how easy it is for people to commit acts of torture when an authority figure assures them it is necessary and proper. The underlying question remains: how to account for the disconnect between common sense (or compassion or morality) and behavior that can later (or by others) be judged patently wrong? By what reasoning do people justify their evil deeds so that those deeds appear to them acceptable or even good?

Self-deception seems to be a general human foible, part and parcel of the ability to deceive others. It can be deliberate, even when unconscious. Or it can be incidental, as when we simply do not have conscious access to our motives. Organisms, after all, are cobbled together by natural selection in a way that coheres only enough to ensure survival. The ego or rational mind, too, is a cobbled feature, cut off from access to much of the organism’s workings, with which it would not be adaptive for it to interfere directly. The conscious self is charged by society to produce behavior in accord with social expectations, yet is poorly equipped as an organ of self-control.

Biology is no excuse, of course, especially since our highest ideals aspire to transcend biological limitations. Yet a brief digression may shed some light. The primary aim of every organism is its own existence. Life, by definition, is self-serving; yet our species is characteristically altruistic toward those recognized as its own kind. The human organism discovered reason as a survival strategy. It has surrounded itself with tools, machines, factories, and institutions that serve some purpose other than their own existence. As seemingly rational agents in the world, we try to shape the world in ways that nevertheless fit our needs as organisms. Thus, we purport to act according to some rational program, even for the good of others or of society, but our action often turns out to serve ourselves or our specific group. The disconnect is a product of evolutionary history. We aspire and purport to be rational, but we were not rationally designed.

Hypocrisy amounts to a failure to be (self-)critical enough. The context of that failing is that we believe we are acting in accordance with one agenda and do not see how we are also acting in accordance with a very different one. We think we are pursuing one aim and fail to recognize another aim inconsistent with it. Deaf to the dissonance, the right hand (hemisphere?) knows not what the left is doing. A person, group, or class behaves according to their interests and believes some story that justifies their entitlement, to themselves and to others. The cover story is somehow made to jibe with the other motivations behind it. What is supposedly objective fact is molded to fit subjective desire.

As social creatures, we tend to look to others for clues to how we should behave. But that is a self-fulfilling prophecy when everyone else is doing likewise. There must be some way to weigh action that is not based on social norms. This is the proper function of reason, argument, debate, and social criticism. It is not to convince others of a point of view, but to find what is wrong with a point of view (no matter how good it sounds) and hopefully set it right. In particular, it should reveal how one intention can be inconsistent with another intention that lurks at its core, just as the whole structure of the brain lurks beneath the neocortex. Reason ought to reveal internal inconsistency and the self-deception that permits it.

Yet, self-deception is a concomitant of the ability to deceive others, which is built into our primate heritage and the structure of language. Society can cohere only through cooperation, and there must be ways to tell the cooperators from the defectors in society. Reputation serves this function. But reputation is an image in people’s minds that can be manipulated and faked. As any actor can tell you, the best way to make your performance emotionally convincing is to believe it yourself. If your story is a lie, then you too must believe the lie if you expect to convince others of your sincerity. Furthermore, deception of others dovetails with their willingness to be deceived—namely, their own self-deceptions.

We know that people consciously create works of fiction and fantasy; also, that they sometimes knowingly lie. Self-deception overlaps these categories: fiction that we convince ourselves is fact. Rationally, we know that opinions—when expressed as such—are someone’s thoughts. But the category of fact renounces this understanding in favor of an objective truth that has no author, requires no evidence, and for which no individual is responsible, unless perhaps God. We disown responsibility for our statements by failing to acknowledge them as personal assertions and beliefs, instead proposing them offhand as free-standing truths in the public domain.

Religion, patriotism, and cultural myth are not about reason or factual truth, but about social cohesion and the soothing of existential anxiety through a sense of belonging. We trust those who seem to think and act like us. But this is a double-edged sword. It makes toeing the line a condition of membership in the group. Controlling the behavior of members helps the group cohere, but provides no check on the behavior of the group itself.

Scientific propositions can be pinned down and disproven, but not so cultural myths and biases, nor religious beliefs, which cannot even be unambiguously comprehended, let alone debunked in a definitive way. Like water for the fish, the ethos of a society’s prejudices cannot easily be perceived. As Scott Atran has observed, “…most people in our society accept and use both science and religion without conceiving of them in a zero-sum conflict. Genesis and the Big Bang theory can perfectly well coexist in a human mind.” Perhaps that foible is a modern sign that we have not outgrown the capacity for self-deception, and thus for evil.

Splitting hairs with Occam’s razor

Before the 19th century, science was called natural philosophy or natural history. Since the ancient Greeks, the study of nature had been a branch of philosophy, a gentlemanly discussion of ideas by men who disdained to soil their hands with actual materials. What split science off from medieval philosophy was the use of experiment, careful observation, quantitative measurement with instruments, and what became known as scientific method, which meant testing ideas by hands-on experiment. Science became the application of technology to the study of nature. This in turn gave rise to further technology in a happy cycle involving the mutual interaction of mind and matter.

Philosophy literally means love of wisdom. In modern times it has instead largely come to mean love of sophistry. The secession of science from philosophy left the latter awkwardly bereft and defensive. One of the reasons why science emerged as distinct from philosophy is that medieval scholastic philosophy had been (as modern philosophy largely remains) mere talk about who said what about who said what. Science got down to brass tacks, focusing on the natural world, but at the cost of generality. Philosophy could trade on being more general in focus, if less verifiably factual. It could still deal with areas of thought not yet appropriated by scientific study, such as the nature of mind. And it could deal in a critical way with concepts developed within science—which became known as philosophy of science. Either way, the role of philosophy involved the ability to stand back to examine ideas for their logical consistency, meaning, tacit assumptions, and function within a broader context. The focus was no longer nature itself but thought about it and thought in general. Philosophy assumed the role of “going meta,” to critically examine any proposed idea or system from a viewpoint outside it. This meant examining a bigger picture, outside the terms and borders of the discipline concerned, and examining the relationships between disciplines. (Hence, metaphysics as a study beyond physics.) However, that was not the only response of philosophy to the scientific revolution.

Philosophy had long been closely associated with logic, one of its tools, which is also the basis of mathematics. Both logic and mathematics seemed to stand apart from nature as eternal verities, upstream of science. Galileo even wrote that mathematics is the language of the book of nature. So, even though science appropriated these as tools for the study of nature, and was strongly shaped by them, logic and math were never until recently questioned or considered the subject matter of scientific study. The increasing success of mathematical description in the physical sciences led to a general “physics envy,” whereby other sciences sought to emulate the quantifying example of physics. Sometimes this was effective and appropriate, but sometimes it led to pointless formalism, which was often the case in philosophy. Perhaps more than any other discipline, philosophy suffered from an inferiority complex in the shadow of its fruitful younger sibling. Philosophy could legitimize itself by looking scientific, or at least technical.

Certainly, all areas of human endeavor have become increasingly specialized over time. This is true even in philosophy, whose mandate remains, paradoxically, generalist in principle. Apart from the demand for rigor, the tendency to specialize may reflect the need for academics to remain employed by creating new problems to solve; to make their mark by staking out a unique territory in which to be expert; and to differentiate themselves from other thinkers through argument. Specialization, after all, is the art of knowing more and more about less and less, following the productive division of labor that characterizes civilization. On the other hand, specialization can lead to such fragmentation that thinkers in diverse intellectual realms are isolated from each other’s work. Worse, it can isolate a specialty from society at large. That can imply an enduring role for philosophers as generalists. They are positioned to stand back to integrate disparate concepts and areas of thought into a larger whole—to counterbalance specialization and interpret its products to a larger public. Yet, instead of rising to the occasion provided by specialization, philosophy more often succumbs to its hazards.

Science differs from philosophy in having nature as its focus. The essential principle of scientific method is that disagreement is settled ultimately by experiment, which means by the natural world. That doesn’t mean that questions in science are definitively settled, much less that a final picture can ever be reached. The independent existence of the natural world probably means that nature is inexhaustible by thought, always presenting new surprises. Moreover, scientific experiments are increasingly complex, relying on tenuous evidence at the boundaries of perception. This means that scientific truth is increasingly a matter of statistical data, whose interpretation depends on assumptions that may not be explicit—until philosophers point them out. Nevertheless, there is progress in science. At the very least, theories become more refined, more encompassing, and quantitatively more accurate. That means that science is progressively more empowering for humanity, at least through technology.

Philosophy does not have nature as arbiter for its disputes, and little opportunity to contribute directly to technological empowerment. Quite the contrary, modern philosophers mostly quibble over contrived dilemmas of little interest or consequence to society. These are often scarcely more than make-work projects. The preoccupations and even titles of academic papers in philosophy are often expressed in terms that mock natural language. In the name of creating a precise vocabulary, their jargon establishes a realm of discourse impenetrable to outsiders—certainly to lay people and often enough even to other academics. More than an incidental by-product of useful specialization, abstruseness seems a deliberate ploy to justify existence within a caste and to perpetuate a self-contained scholastic world. If philosophical issues are by definition irresolvable, this at least keeps philosophers employed.

Philosophy began as rhetoric, which is the art of arguing convincingly. (Logic may have arisen as a rule-based means to rein in the extravagances of rhetoric.) Argument remains the hallmark of philosophy. Without nature to rein thought in, as in science, there is only logic and common sense as guides. Naturally, philosophers do attempt to present coherent reasoned arguments. But logic is only as good as the assumptions on which it is based. And these are wide open to disagreement. Philosophical argument does little more than hone disagreement and provide further opportunities to nit-pick. For the most part, philosophical argument promotes divergence, when its better use (“standing back”) is to arrive at convergence by getting to the bottom of things. That, however, would risk putting philosophers out of a job.

Philosophy resembles art more than science. Art, at least, serves a public beyond the coterie of artists themselves. Art too promotes divergence, and literature serves the multiplication of viewpoints we value as creative in our culture of individualism. Like artists, professional philosophers might find an esthetic satisfaction in presenting and examining arguments; they might revel in the opportunity to stand out as clever and original. However, philosophy tends to be less accessible to the general public than art. (Try to imagine a philosophy museum or gallery.) Professional philosophy has defined itself as an ivory-tower activity, and academic papers in philosophy tend to make dull reading, when comprehensible at all. That does not prevent individual philosophers from writing books of general and even topical interest. Sometimes these are eloquent, occasionally even best-sellers. Philosophers may do their best writing, and perhaps their best thinking, when addressing the rest of us instead of their fellows. After all, they were once advisors to rulers, providing a public service.

If philosophy is the art of splitting hairs, the metaphor generously conjures the image of an ideally sharp blade—the cutting edge of logic or incisive criticism. The other metaphor—of “nitpicking”—has less savory connotations but more favorable implications. Picking nits is a grooming activity of social animals, especially primates. It serves immediately to promote cleanliness (next to godliness, after all). More broadly, it serves to bond members of a group. We complain of it as a negative thing, an excessive attention to detail that distracts from the main issue. Yet its social function is actually to facilitate bonding. The metaphor puts the divisive aspect of philosophy in the context of a potentially unifying role.

That role can be fulfilled in felicitous ways, such as the mandate to stand back to see the larger picture, to find hidden connections and limiting assumptions, to “go meta.” It consists less in the skill to find fault with the arguments of others than in the ability to identify faulty thinking in the name of arriving at a truth that is the basis for agreement. Perhaps most philosophers would insist that is what they actually do. Perhaps finding truth can only be done by first scrutinizing arguments in detail and tearing them apart. However, that should never be an end in itself. As naïve as it might seem, reality—not arguments—should remain the focus of study in philosophy, as it is in science. Above all, specialization in philosophy should not distract from larger issues, but aim for them. Analysis should be part of a larger cycle that includes synthesis. Philosophy should be true to its role of seeking the largest perspective and bringing the most important things into clear focus. It should again be a public service, to an informed society if no longer to kings.

The quest for superintelligence

Human beings have been top dogs on the planet for a long time. That could change. We are now on the verge of creating artificial intelligence that is smarter than us, at least in specific ways. This raises the question of how to deal with such powerful tools—or, indeed, whether they will remain controllable tools at all or will instead become autonomous agents like ourselves, other animals, and mythical beings such as gods and monsters. An agent, in a sense derived from biology, is an autopoietic system—one that is self-defining, self-producing, and whose goal is its own existence. A tool or machine, on the other hand, is an instrument of an agent’s purposes. It is an allopoietic system—one that produces something other than itself. Unless it happens to also be an autopoietic system, it could have no goals or purposes of its own. An important philosophical issue concerning AI is whether it should remain a tool or should become an agent in its own right (presuming that is even possible). Another issue is how to ensure that powerful AI remains in tune with human goals and values. More broadly: how to make sure we remain in control of the technologies we create.

These questions should interest everyone, first of all because the development of AI will affect everyone. And secondly, because many of the issues confronting AI researchers reflect issues that have confronted human society all along, and will soon come at us on steroids. For example, the question of how to align the values of prospective AI with human interests simply projects into a technological domain the age-old question of how to align the values of human beings among themselves. Computation and AI are the modern metaphor for understanding the operations of brains and the nature of mind—i.e., ourselves. The prospect of creating artificial mind can aid in the quest to understand our own being. It is also part of the larger quest: not only to control nature but to re-create it, whether in a carbon or a silicon base. And that includes re-creating our own nature. Such goals reflect the ancient dream to self-define, like the gods, and to acquire godlike powers.

Ever since Descartes and La Mettrie, the philosophy of mechanism has blurred the distinction between autopoietic and allopoietic systems—between organisms and machines. Indeed, it traditionally regards organisms as machines. However, an obvious difference is that machines, as we ordinarily think of them, are human artifacts, whereas organisms are not. That difference is being eroded from both ends. Genetic and epigenetic manipulation can produce new creatures, just as nucleosynthesis produced new man-made elements. At the other end lies the prospect of initiating a process that results in artificial agents: AI that bootstraps into an autopoietic system, perhaps through some process of recursive self-improvement. Is that possible? If so, is it desirable?

It does not help matters that the language of AI research casually imports mentalistic terms from everyday speech. The literature is full of ambiguous notions from the domain of everyday experience, which are glibly transferred to AI—for example, concepts such as agent, general intelligence, value, and even goal. Confusing “as if” language crops up when a machine is said to reason, think or know, to have incentives, desires, goals, or motivations, etc. Even if metaphorical, such wholesale projection of agency into AI raises the question of whether a machine can become an autopoietic system, and obscures the question of what exactly would be required to make it so.

To amalgamate all AI competencies in one general and powerful all-purpose program certainly has commercial appeal. It now seems like a feasible goal, but may be too good to be true. For, the concept of artificial general intelligence (AGI) could turn out not even to be a coherent notion—if, for example, “intelligence” cannot really be divorced from a biological context. To further want AGI to be an agent could be a very bad idea. Either way, the fear is that AI entities, if super-humanly intelligent, could evade human control and come to threaten, dominate, or even supersede humanity. A global takeover by superintelligence has been the theme of much science fiction and is now the topic of serious discussion in think tanks around the world. While some transhumanists might consider it desirable, probably most people would not. The prospect raises many questions, the first being whether AGI is inevitable or even desirable. A further question is whether AGI implies (or unavoidably leads to) agency. If not, what threats are posed by an AGI that is not an agent, and how can they be mitigated?

I cannot provide definite answers to these questions. But I can make some general observations and report on a couple of strategies proposed by others. An agent is necessarily embodied—which means it is not just physically real but involved in a relationship with the world that matters to it. Specifically, it can interact with the world in ways that serve to maintain itself. (All natural organisms are examples, and here we are considering the possibility of an artificial organism.) One can manipulate tools in a direct way, without having to negotiate with them as we do with agents such as people and other creatures. The concept of program initially meant a unilateral series of commands to a machine to do something. A command to a machine is a different matter than a command to an agent, which has its own will and purposes and may or may not choose to obey the command. But the concept of program has evolved to include systems with which we mutually interact, as in machine learning and self-improving programs. This establishes an ambiguous category between machine and agent. Part of the anxiety surrounding AGI stems from the novelty and uncertainty regarding this zone.

It is problematic, and may be impossible, to control an agent more intelligent than oneself. The so-called value alignment problem is the desperate quest to nevertheless find a way to have our cake (powerful AI to use at our discretion) and be able to eat it too (or perhaps to keep it from eating us). It is the challenge to make sure that AI clearly “understands” the goals we give it and pursues no others. If it has any values at all, these should be compatible with human values and it should value human life. I cannot begin here to unravel the tangled fabric of tacit assumptions and misconceptions involved in this quest. (See instead my article, “The Value Alignment Problem,” posted in the Archive on this website.) Instead, I point to two ways to circumvent the challenge. The first is simply not to aim for AGI, let alone for agents. This strategy is proposed by K. Eric Drexler. Instead of consolidating all skill in one AI entity, it would be just as effective, and far safer, to create ad hoc task-oriented software tools that do what they are programmed to do because their capacity to self-improve is deliberately limited. The second strategy is proposed by Stuart Russell: to build uncertainty into AI systems, which are then obliged to hesitate before acting in ways adverse to human purposes—and thus to consult with us for guidance.
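The logic of that second strategy can be pictured with a deliberately simplified toy sketch (my illustration, not Russell’s actual formulation): an agent that keeps an uncertainty band around its utility estimates, and defers to the human whenever the worst case inside that band would be harmful. All names, numbers, and thresholds here are hypothetical.

```python
# Toy illustration of building uncertainty into an AI system: the agent
# defers to the human whenever its uncertainty leaves room for the
# chosen action to do net harm.

def choose_action(estimates, uncertainties, harm_threshold=0.0):
    """Pick the action with the highest estimated utility, but return
    'defer_to_human' if the worst case within the uncertainty band
    could fall below harm_threshold."""
    best = max(estimates, key=estimates.get)
    worst_case = estimates[best] - uncertainties[best]
    if worst_case < harm_threshold:
        return "defer_to_human"
    return best

# A mundane, well-understood action is taken without consultation...
print(choose_action({"fetch coffee": 1.0}, {"fetch coffee": 0.2}))
# → fetch coffee

# ...but a high-stakes action with wide uncertainty triggers deferral.
print(choose_action({"reroute power grid": 5.0}, {"reroute power grid": 10.0}))
# → defer_to_human
```

The design point is that uncertainty makes deference rational: an agent fully confident in its own utility estimates would have no reason to consult us at all.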

The goal to create superintelligence must be distinguished from the goal to create artificial agents. Superintelligent tools can exist that are not agents; agents can exist that are not superintelligent. The problems of controlling AI and of aligning its values are byproducts of the desire to create meta-tools that are neither conventional tools nor true agents. Furthermore, real-world goals for AI must be distinguished from specific tasks. We understandably seek powerful tools to achieve our real-world goals for us, yet fear they may misinterpret our wishes or carry them out in some undesired way. That dilemma is avoided if we only create programs to accomplish specified tasks. That means more work for humans than automating automation itself would, but it keeps technology under human control.

Why seek to eliminate human input and participation? An obvious answer is to “optimize” the accomplishment of desired goals. That is, to increase productivity (equals wealth) through automation and thereby also reduce the burdens of human labor. Perhaps modern human beings are never satisfied with enough? Perhaps at the same time we simply loathe effort of any kind, even mental. Shall we just compulsively substitute automation for human labor whenever possible? Or are we indulging as well a faith that AI could accomplish all human purposes better and more ecologically than people? If the goal is ultimately to automate everything, what would people then do with their time when they are no longer obliged to do anything? If the hope behind AI is to free us from drudgery of any sort (in order, say, to “make the best of life’s potential”) what is that potential? How does it relate to present work and human satisfactions? What will machines free us for?

And what are the deeper and unspoken motivations behind the quest for superintelligence? To imitate life, to acquire godlike powers, to transcend nature and embodiment, to create an artificial ecology? Such questions traditionally lie outside the domain of scientific discourse. They become political, social, ethical and even religious issues surrounding technology. But perhaps they should be addressed within science too, before it is too late.

Anthropocene: from climate change to changing human nature

Anthropocene is a neologism meaning “a geological epoch dating from the commencement of significant human impact on Earth’s geology and ecosystems.” However, what is involved is more than an unintended consequence. Having already upset the natural equilibrium, it seems we are now obliged to deliberately intervene—either to restore it or to finish the job of creating a man-made world in place of nature. What is new on the anthropo scene is the prospect of taking deliberate charge of human destiny, indeed the future of the planet. It is the prospect of completely re-creating nature and human nature—blurring or obliterating the very distinction between natural and artificial. The Anthropocene could be short-lived, either because the project is doomed and we do ourselves in or because the form we know as human may be no more than a stepping stone to something else.

In one sense, the Anthropocene dates not from the 20th century (nor even the Industrial Revolution) but from human beginnings. For, the function of culture everywhere has always been to redefine the world in human terms, and our presence has always reshaped the landscape, creating extinctions and deserts along the way. Technology has always had planetary effects, which until recently have been moderate and considered only in hindsight. New technologies now afford possibilities of total control and require total foresight. Bio-engineering, nanotechnology, and artificial intelligence are latter-day means to an ancient dream of acquiring godlike powers. Along with such powers go godlike responsibilities.

Because that dream has been so core to human being all along, and yet so far beyond reach, we’ve been in denial of it over the ages, always projecting god-envy into religious and mythological spheres, which have always cautioned against the hubris of such pretension. Newly emboldened technologically, however, humanity is finally coming out of the closet.

The Anthropocene ideal is to master all aspects of physical reality, redesigning it to human taste. Actually, that will mean to the taste of those who create the technology. This raises the political question: who, if anyone, should control these technologies? Whom will they benefit? More darkly, what are the risks that will be borne by all? When there was an abundance of wild, the natural world was taken for granted as a commons, which did not prevent private interests from fencing, owning, and exploiting ever more of it for their own profit.

From biblical times, the idea of natural resource put nature in a context of use, as the object of human purposes. And that meant the purposes of certain societies or groups, at the cost of others. Now that technologies exist to literally rearrange the building blocks of life and of matter, the concept of resource shifts from specific minerals, plants and animals to a more universal stuff—even “information.” One political question is who will control these new unnatural resources, and how to preserve them as a new sort of commons for the benefit of all? Another is how to proceed safely—if there is such a thing—in the wholesale transformation of nature and ourselves.

The human essence has always been a matter of controversy. More than ever it is now up for grabs. Because we are the self-creating creature, we cannot look to a fixed human nature, nor to a consensus, for the values that should guide our use of technology. A vision of the future—and the fulfillment of human potential—is a matter of opinions and values that differ widely. Some see a glorious technological future that is not pinned to the current human form. Others envision a way of life more integrated with nature and accepting of natural constraints. Still others view the human essence as spiritual, with human destiny unfolding on some divine timetable. The means to change everything are now available, but without a consolidated guiding vision.

Genome information is now readily available and so are technologies for using it to do genetic experiments at home. While some technologies require expensive laboratory equipment, citizen scientists (bio-hackers) can get what they need online and through the mail. Since much of the technology is low-tech and readily available, anyone in their basement can launch us into a brave new unnatural world.

One impetus for such home experimentation is social disparity: biohacking is in part a rebellion against the unfairness of present social and health systems. Like the hacker movement in general, biohackers want knowledge and technology to be fairly and democratically available, which means relatively cheap if not in the public domain. It’s about public access to what they believe should be a commons. They protest the patenting of private intellectual property that drives up the price of technology and medicine and restricts the availability of information. Social disparity promises to be endemic to all new technologies that are affordable (at least initially) only to an elite.

There are personal risks for those who experiment on themselves with unproven drugs and genetic modification. But there are risks to the environment shared by all as well, for example when an engineered mutant is deliberately released into the wild to deal with the spread of ticks that carry Lyme disease or the persistence of malaria-carrying mosquitos. The difference between a genetic solution and a conventional one can be that the new organism reproduces itself, changing the biosphere in potentially unforeseeable and irreversible ways. That applies to interventions in the human genome too. Bio-hacking is but one illustration of the potential benefits and threats of bio-engineering, which is the human quest to change biology deliberately, including human biology. The immediate promise is that genetic defects can be eliminated. But why stop there? Ideal citizens can be designed from scratch. Perhaps mortality can be eliminated. That amounts to hijacking evolution or finally taking charge of it, depending on one’s view. To change human nature might seem a natural right, especially since “human nature” includes an ingrained determination to self-define. But does that include the right to define life in general and nature at large, to tinker freely with other species, to terra-form the planet? And what constitutes a “right”? Nature endows creatures with preferences and instincts but not with rights, which are a human construct, reflecting our very disaffection from nature. Who will determine the desirable traits for a future human or post-human being and on what grounds?

Tinkering with biology is one way to enhance ourselves, but another is through artificial intelligence. Bodies and now minds can be augmented prosthetically, potentially turning us into a new cyborg species (or a number of them). Another dream is to transcend embodiment (and mortality) entirely, by uploading a copy of yourself into an eternally running supercomputer. Some of these aspirations are pipe dreams. But the possibility of an AI takeover is real and already upon us in minor ways: surveillance, data collection, smart appliances, etc. The ultimate potential is to automate automation, to relieve human beings (or at least some of them) of the need to work physically and even mentally. Your robot can do all your housework, your job, even take your vacations for you! As with biotechnology, the surface motivation driving AI development is no doubt commercial and military. Yet, lurking beneath is the unconscious desire to step into divine shoes: to create life and mind from scratch even as we free ourselves from the limitations of natural life and mind.

Like biotechnology, the tools for AI development are commonly available and relatively cheap. All you need is savvy and a laptop. The implicit aim is artificial “general” intelligence, matching or exceeding human mental and physical capability. That could be in the form of superintelligent tools that remain under human control, designed for specific tasks. But it could also mean a robotic version of human slaves. Apart from the ethics involved, slaves have never been easy to control. It comes down to a tradeoff between the advantages of autonomy in artificial agents and the challenge of controlling them. Autonomy may seem desirable because such agents could do literally everything for us and better, with no effort on our part. But if such creations are smarter than we are, and are in effect their own persons, how long could we remain their masters? If they have their own purposes, why would they serve ours? The very idea of automating automation means forfeiting control at the outset, since the goal is to launch AIs that effectively create themselves.

Radical conservationists and transhumanist technophiles may be at cross-purposes, but so are more moderate advocates of environment or business. As biological creatures, we inherit the universe provided by nature, which we try to make into something corresponding to our human preferences. The materials we work with ultimately derive from nature and obey laws we did not make. Scientific understanding has enabled us to reshape that world to an extent, using those very laws. We don’t yet know the ceiling of what is possible, let alone what is wise. How far should we go in transforming ourselves and nature? Why create artificial versions of ourselves at all, let alone artificial versions of gods? What used to be philosophical questions are becoming scientific and political ones. The world is our oyster and we the irritating grit within. Will the result be a pearl?