From taking for granted to taking charge

In our 200,000 years as a species, humankind has been able to take for granted a seemingly boundless ready-made world, friendly enough to permit survival. Some of that was luck, since there were relatively benign periods of planetary stability, and some of it involved human resourcefulness in being able to adapt or migrate in response to natural changes in conditions—even changes brought about by people themselves. Either way, our species was able to count on the sheer size of the natural environment, which seemed unlimited in relation to the human presence. (Today we recognize the dimensions of the planet, but for most of that prehistory there was not even a concept of living on a “planet.”) There was no need—and really no possibility—to imagine being responsible for the maintenance of what has turned out to be a finite and fragile closed system. Perhaps there was a local awareness among hunter-gatherers about cause and effect: to browse judiciously and not to poo in your pond. Yet the evidence abounds that early humans typically slaughtered to extinction many of the great beasts. Once “civilized,” the ancients cut down great forests—and even bragged about it, as Gilgamesh pillaged the cedars of Lebanon for sport.

Taming animals and plants (and human slaves) for service required a mentality of managing resources. Yet, this too was in the context of a presumably unlimited greater world that could absorb any catastrophic failures in a regional experiment. We can scarcely know what was in the minds of people in transition to agriculture; but it is very doubtful that they could have thought of “civilization” as a grand social experiment. Even for kings, goals were short-term and local; for most people, things mostly changed slowly in ways they tried to adjust to. Actors came and went in the human drama, but the stage remained solid and dependable. Psychologically, we have inherited that assumption: human actions are still relatively local and short-sighted; the majority feel that change is just happening around them and to them. The difference between us and people 10,000 years ago (or even 500 years ago) is that we finally know better. Indeed, only in the past few decades has it dawned on us that the theatre is in shambles.

I grew up in 1950s Los Angeles, when gasoline was 20 cents a gallon and you might casually drive 20 miles to go out for dinner. To me as a child, that environment seemed the whole world: totally “natural,” just how things should be. My job was to learn the ropes of that environment. But, of course, I had little knowledge of the rest of the planet and certainly no notion of a “world” in the cultural sense. Only when I traveled to Europe as a young man did I experience something different: instead of the ephemera of L.A., an environment that was old and made of stone, in which people organized life in delightfully different ways. No doubt that cultural enlightenment would have been more extreme had I traveled in Africa instead of Europe. But it was the beginning of an awareness of alternatives. Still, I could not then imagine that cheap gas was ruining the planet. That awareness crept upon the majority of my generation only in our later years, coincident with the maturing consciousness of the species.

We’ve not had the example of another planet to visit, whose wise inhabitants have learned to manage their own numbers and effects in such a way as to keep the whole thing going. We have only imagination and history on this planet to refer to. Yet, the conclusion is now obvious: we have outgrown the mindset of taking for granted and must embrace the mindset of taking charge if we are to survive.

What happened to finally bring about this species awakening? To sum it up: a global culture. When people were few, they were relatively isolated, the world was big, and the capacity to affect their surroundings was relatively small. Now that we are numerous and our effects highly visible, we are as though crowded together in a tippy lifeboat, where the slightest false move threatens to capsize Spaceship Earth. Through physical and digital proximity, we can no longer help being aware of the consequences of our own existence and attendant responsibility. Yet, a kind of schizophrenia sets in from the fact that our inherited mentality cannot accommodate this sudden awareness of responsibility. It is as though we hope to bring with us into the lifeboat all our bulky possessions and conveniences and all the behaviors we took for granted as presumed rights in a “normally” spacious and stable world.

We are the only species capable of deliberately doing something about its fate. But that fact is not (yet) ingrained in our mentality. Of course, there are futurists and transhumanists who do think very deliberately about human destiny, and now there are think tanks like the Future of Humanity Institute. Individual authors, speakers, and activists are deeply concerned about one dire problem or another facing humanity, such as climate change, social inequity, and the continuing nuclear threat, along with the brave new worlds of artificial intelligence and genetic engineering. Some of them have been able to influence public policy, even on the global scale. Most of us, however, are not directly involved in those struggles and are only beginning to be touched by the issues. Like most of humanity throughout the ages, we simply live our lives, with the daily concerns that have always monopolized attention.

However, the big question now looming over all of us is: what next for humanity? It is not about predicting the future but about choosing and making it. (Prediction is merely bracing ourselves for what could happen, and we are well past that.) We know what will happen if we remain in the naïve mindset of all the creatures that have competed for existence in evolutionary history: Homo sapiens will inevitably go extinct, like the more than 99% of all species that have ever existed. Given our accelerating lifestyle, this will likely be sooner rather than later. Those creatures passively suffered changes they could not conceive, let alone consciously control, even when they had contributed to those changes. We are forced to the terrible realization that only our own intervention can rectify the imbalances that threaten us. Let us not underestimate the dilemma: for, we also know that “intervention” created many of those problems in the first place!

Though it is the nature of plans to go awry, humanity needs a plan and the will to follow it if we are to survive. That requires a common understanding of the problems and agreement on the solutions. Unfortunately, that has always been a weak point of our species, which has so far been unable to act on a species level, and until very recently has been unable even to conceive of itself as a unified entity with a possible will. We are stuck at the tribal level, even when the tribes are nations. More than ever we need to brainstorm toward a calm consensus and collective plan of action. Ironically, there is now the means for all to be heard. Yet, our tribal nature and selfish individualist leanings result in a cacophony of contradictory voices, in a free-for-all bordering on hysteria. There is riot, mutiny and mayhem on the lifeboat, with no one at the tiller. No captain has the moral (much less political) authority to steer Spaceship Earth. What can we then hope for but doom?

Some form of life will persist on this planet, perhaps for several billion years to come. But the experiment of civilization may well fail. And what is that experiment but the quest to transcend the state of nature given us, which no other creature has been able to do? We were not happy as animals, having imagined the life of gods. With one foot on the shore of nature and one foot in the skiddy raft of imagination, we do the splits. The two extreme scenarios are a retreat into the stone age and a brash charge into a post-humanist era. Clearly, eight billion people cannot go back to hunting and gathering. Nor can they all become genetically perfect immortals, colonize Mars, or upload to some more durable form of embodiment. The lifeboat will empty considerably if it does not sink first.

Whatever the way forward, it must be with conscious intent on a global level. We will not go far bumbling along as usual. Whether salvation is possible or not, we ought to try our best to achieve the best of human ideals. Whether the ship of state (or Spaceship Earth) floats or sinks, we can behave in ways that honour the best of human aspirations. To pursue another metaphor: in the board game of life, though the game is ever changing, at any given moment there are rules and pieces in play. The point is not just to win but also to play well, even as we attempt to redefine the rules and even the game itself. That means behaving nobly, as though we were actually living in that unrealized dream. Our experiment all along has been to create an ideal world—using the resources of the real one. Entirely escaping physical embodiment is a pipe dream; but modifying ourselves physically is a real possibility. In a parallel way, a completely man-made world is an oxymoron, for it will always exist in the context of some natural environment, with its own rules—even in outer space. Yet coming to a workable arrangement with nature should be possible. After all, that’s what life has always done. With no promise of success, our best strategy is a planetary consciousness willing to take charge of the Earth’s future. To get there, we must learn to regulate our own existence.

Yes, but is it art?

Freud observed that human beings have a serious and a playful side. The “Reality Principle” reflects the need to take the external world seriously, driven by survival. Science and technology serve the Reality Principle insofar as they accurately represent the natural world and allow us to predict, control, and use it for our benefit. Yet they leave unfulfilled a deep need for sheer gratuitous activity—play. The “Pleasure Principle” is less focused, for it reflects not only pursuit of what is good for the organism but also the playful side of human nature that sometimes thumbs its nose at “reality.” It reflects the need to freely define ourselves and the world we live in—not to be prisoners of biology, social conditioning, practicality, and reason. I believe this is where art (like music, sport, and some mathematics) comes literally into play.

Plato dismissed art as dealing only with appearances, not with truth. According to him, art is merely a form of play, not to be taken seriously. However, we do take art seriously precisely because it is play. What we find beautiful or interesting about a work of art often involves its formal qualities, which reveal the artist’s playfulness at work. Like science fiction, art may portray an imagined world; but it can also directly establish a world simply by assembling the necessary elements. Just as a board game comes neatly in a box, so the artist’s proposed world comes in a frame, on a plinth, or in a gallery. What it presents may seem pointless, but that is its point. It makes its own kind of sense, if not that of the “real” world. The artwork may be grammatically correct while semantically nonsensical. Art objects are hypothetical alternatives to the practical objects of consumer society, of which they are sometimes parodies. Often they are made of similar materials, using similar technology, but expressing a different logic or no apparent logic at all. Artistic invention parallels creativity in science and technology. At the most ambitious levels, large teams of art technicians undertake huge projects, rivaling the monumentality of medieval cathedrals and modern cinema, as well as of space launches and cyclotrons. Extravagance expresses the Pleasure Principle in all domains.

Like technologists, artists are experimentalists. They want to see what happens when you do this or that. They love materials, processes and tinkering. Some are also theorists who want to follow out certain assumptions or lines of thought to their ultimate conclusions. In this they are aided by zealous curators, art historians, and gallery owners who propose ever-changing commentaries and theories of art, reflecting what artists do but also shaping it. The world of contemporary art seems driven by some restless mandate of “originality” that resembles the dynamics of the fashion industry and the need for constant change that fuels consumerism generally. Like scientists, ambitious artists may be driven to surpass what they have already done or the accomplishments of others. Some seek a place in art history, which is little more than the hindsight of academics and curators or the self-serving promotions of dealers and gallerists.

Science is often distinguished from art and other cultural expressions by its progress, through the accumulation of data and consequent advance of technology. Its theories seem to build toward a more complete and accurate representation of reality. Yet theories are always subject to revision and data are subject to refinement and reinterpretation. To predict the future of science is to predict new truths of nature that we cannot know in advance. Art too accumulates, and its social role has evolved in step with changing institutions and practices, its forms with changing technology. There is pattern and direction in art history, but whether that can be called progress in a normative sense is debatable. Art does not seek to reveal reality so much as to reveal the artist and to play. Indeed, it seems to be bent on freeing itself from the confines of reality.

Art is also an important kind of self-employment. It provides not only alternative objects and visions, but also an alternative form of work and of work place. It’s a way to establish and control one’s own work environment. The studio is the artist’s laboratory. Art defines an alternative form of production and relation to work. Artists can be their own bosses, if at the price of an unstable income. As in society at large, a small elite enjoy the bulk of success and wealth. Some artists are now wealthy entrepreneurs, and some collectors are but speculative investors. The headiness of the contemporary art world mirrors the world of investment, with its easy money and financial abstractions, prompting questions about the very meaning of wealth—and of art. Indeed, art has always served as a visible form of wealth, and therefore as a status symbol. At one time, the value of artworks reflected the labor-intensive nature of the work, and often the use of precious materials. Today, however, the market value of an artwork reflects how badly other people want it—whatever their reasons.

In modern times, art has inherited a mystique that imbues it with social value apart from labor value and even the marketplace. Despite the fact that art defies easy definition, and now encompasses a limitless diversity of expressions, people continue to recognize and value art as different from consumer items that serve more practical functions. On the one hand, art represents pure creativity—which is another word for play—and also an alternative vision. On the other hand, like everything else it has succumbed to commercialization. Artists are caught in between. Most must sell their work to have a livelihood. To get “exposure,” they must be represented in galleries and are tempted to aim at least some of their work toward the marketplace. Thus, one aspect of art, and of being an artist, reflects the Pleasure Principle while the other represents the Reality Principle. Yet, when the motives surrounding art are not earnest enough—when they appear too mundane, too heady, too trivial, too dominated by money, fame, or ideology—the perennial question arises: is it art? That we can raise the question indicates that we expect more.

What more might be expected? European art originated as a religious expression—which might be said of art in many places and times. Quite apart from any specific theology, human beings have always had a notion of the sacred. That might be no more than a reverence for tradition. But it might also be a quest to go beyond how things have been done and how they have been seen. Religious art has often served as propaganda for an ideology that reinforced the social order of the day. Advertising and news media serve this purpose in our modern world. But even within the strictures of religious art (or commercial art or politically sanctioned art), there is license to interpret, to play, to improvise and surprise. The gratuitous play with esthetics and formal elements can undermine the serious ostensible message. Perhaps that is the eternal appeal of art, its mystique and its mandate: to remind us of our own essential freedom to view the world afresh, uniquely, and playfully.

What is intelligence?

Intelligence is an ambiguous and still controversial notion. It has been defined variously as goal-directed adaptive behavior, the ability to learn, to deal with novel situations or insufficient information, to reason and do abstract thinking, etc. It has even been defined as the ability to score well on intelligence tests! Sometimes it refers to observed behavior and sometimes to an inner capacity or potential—even to a pseudo-substance wryly called smartonium.

Just as information is always for someone, so intelligence is someone’s intelligence, measured usually by someone else with their biases, using a particular yardstick for particular purposes. Even within the same individual, the goals of the conscious human person may contradict the goals of the biological human organism. It is probably this psychological fact that allows us to imagine pursuing arbitrary goals at whim, whereas the goals of living things are hardly arbitrary.

Measures of intelligence were developed to evaluate human performance in various areas of interest to those measuring. This gave rise to a notion of general intelligence that could underlie specific abilities. A hierarchical concept of intelligence proposes a “domain independent” general ability (the famous g-factor) that informs and perhaps controls domain-specific skills. “General” can refer to the range of subjects as well as the range of situations. What is general across humans is not the same as what is general across known species or theoretically possible agents or environments. Perhaps, the intelligence measured can be no more general than the tests and situations used to measure it. As far as it is relevant to humans, the intelligence of other entities (whether natural or artificial) ultimately reflects their capacity to further or thwart human aims. Whatever does not interact with us in ways of interest to us may not be recognized at all, let alone recognized as intelligent.

It is difficult to compare animal intelligence across species, since wide-ranging sense modalities, cognitive capacities, and adaptations are involved. Tests may be biased by human motivations and sensory-motor capabilities. The tasks and rewards for testing animal intelligence are defined by humans, aligned with their goals. Even in the case of testing people, despite wide acceptance and appeal, the g-factor has been criticized as little more than a reification whose sole evidence consists in the very behaviors and correlations it is supposed to explain. Nevertheless, the comparative notion of intelligence, generalized across humans, was further generalized to include other creatures in the comparison, and then generalized further to include machines and even to apply to “arbitrary systems.” By definition, the measure should not be anthropocentric and should be independent of particular sense modalities, environments, goals, and even hardware.
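To make the statistical criticism concrete, here is a minimal sketch (my own illustration, not from the essay) of how a g-like factor is conventionally extracted: as the dominant component of the correlations among a battery of test scores. The correlation matrix below uses invented numbers; the point is that the “factor” is computed from the very correlations it is then invoked to explain.

```python
# Toy illustration of extracting a g-like general factor from test-score
# correlations. All numbers are hypothetical.
import numpy as np

# Invented correlation matrix for four cognitive tests (a "positive manifold").
R = np.array([
    [1.00, 0.55, 0.48, 0.40],
    [0.55, 1.00, 0.50, 0.42],
    [0.48, 0.50, 1.00, 0.45],
    [0.40, 0.42, 0.45, 1.00],
])

# The first principal component of R plays the role of "g".
eigvals, eigvecs = np.linalg.eigh(R)      # eigenvalues in ascending order
g_vector = eigvecs[:, -1]                 # component with the largest eigenvalue
g_loadings = np.abs(g_vector) * np.sqrt(eigvals[-1])

print("g loadings per test:", np.round(g_loadings, 2))
print("share of variance attributed to g:", round(eigvals[-1] / R.shape[0], 2))
```

Nothing in the computation supplies evidence for g beyond the correlations themselves, which is precisely the critics’ point.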

Like the notion of mind-in-general, intelligence-in-general is an abstraction that is grounded in human experience while paradoxically freed in theory from the tangible embodiment that is the basis of that experience. Its origins are understandably anthropocentric, derived historically from comparisons among human beings, and then extended to comparisons of other creatures with each other and with human beings. It was then further abstracted to apply to machines. The goal of artificial intelligence (AI) is to produce machines that can behave “intelligently”—in some sense that is extrapolated from biological and human origins. It remains unclear whether such an abstraction is even coherent. Since concepts of general intelligence are based on human experience and performance, it also remains unclear to what extent an AI could satisfy or exceed the criteria for human-level general intelligence without itself being at least an embodied autonomous agent: effectively an artificial organism, if not an artificial person.

Can diverse skills and behaviors even be conflated into one overall capacity, such as “problem-solving ability” or the g-factor? While the ability to solve one sort of problem carries over to some extent to other sorts of tasks, it does not necessarily transfer equally well to all tasks, let alone to situations that might not best be described as problem solving at all—such as the ability to be happy. Moreover, problem solving is a different skill from finding, setting, or effectively defining the problems worth solving, the tasks worth pursuing. The challenges facing society usually seem foisted upon us by external reality, often as emergencies. Our default responses and strategies are often more defensive than proactive. Another level of intelligence might involve better foresight and planning. Concepts of intelligence may change as our environment becomes more challenging, or as it becomes progressively less natural and more artificial, consisting largely of other humans and their intelligent machines.

Biologically speaking, intelligence is simply the ability to survive. In that sense, all currently living things are by definition successful, therefore intelligent. Though it sounds trivial, this is important to note because models of intelligence, however abstract, are grounded in experience with organisms; and because the ideal of artificial general intelligence (AGI) involves attempting to create artificial organisms that are (paradoxically) supposed to be liberated from the constraints of biology. It may turn out, however, that the only way for an AI to have the autonomy and general capability desired is to be an embodied product of some form of selection: in effect, an artificial organism. Another relevant point is that, if an AI does not constitute an artificial organism, then the intelligence it manifests is not actually its own but that of its creators.

Autonomy may appear to be relative, a question of degree; but there is a categorical difference between a true autonomous agent—with its own intelligence dedicated to its own existence—and a mere tool to serve human purposes. A tool manifests only the derived intelligence of the agent designing or using it. An AI tool manifests the intelligence of the programmer. What does it mean, then, for a tool to be more intelligent than its creator or user? What it can mean, straightforwardly, is that a skill valued by humans is automated to more effectively achieve their goals. We are used to this idea, since every tool and machine was motivated by such improvement and usually succeeds until something better comes along. But is general intelligence a skill that can be so augmented, automated, and treated as a tool at the beck and call of its user?

The evolution of specific adaptive skills in organisms must be distinguished from the evolution of a general skill called intelligence. In conditions of relative stability, natural selection would favor automatic domain-specific behavior, reliable and efficient in its context. Any pressure favoring general intelligence would arise rather in unstable conditions. The emergence of domain-general cognitive processes would translate less directly into fitness-enhancing behavior, and would require large amounts of energetically costly brain tissue. The biological question is how domain-general adaptation could emerge distinct from specific adaptive skills and what would drive its emergence.

In light of the benefits of general intelligence, why have not all species evolved bigger and more powerful brains? Every living species is by definition smart enough for its current niche, for which its intelligence is an economical adaptation. It would seem, as far as life is concerned, that general intelligence is not only expensive, and often superfluous, but implies a general niche, whatever that might mean. Humans, for example, evolved to fit a wide range of changing conditions and environments, which they continue to expand further through technology. Even if we manage to stabilize the natural environment, the human world changes ever more rapidly—requiring more general intelligence to adapt to it.

The possibility of understanding mind as computation, and of viewing the brain metaphorically as a computer, is one of the great achievements of the computer age. (The computer metaphor is underwritten more broadly by the mechanist metaphor, which holds that any behavior of a biological “system” could be reduced to an algorithm.) Computer science and brain science have productively cross-pollinated. Yet, the brain is not literally a machine, and mind and intelligence are ambiguous concepts not exclusively related to the brain. “Thinking” suggests reasoning and an algorithmic approach—the ideal of intellectual thought—which is only a small part of the activity by which the brain manages the organism as a whole. Ironically, abstract concepts produced by the brain are recycled to explain the operations of the brain that give rise to them in the first place.

Ideally, we expect artificial intelligence to do what we want, better than we can, and without supervision. This raises several questions and should raise eyebrows too. Will it do what we want, or how can it be made to do so? How will we trust its (hopefully superior) judgment if it is so much smarter than we are that we cannot understand its considerations? How autonomous can AI be, short of being a true self-interested agent? Under what circumstances could machines become such agents, competing with each other and with humans and other life forms for resources and for their very existence? The dangers of superintelligence attend the motive to achieve ever greater autonomy in AI systems, the extreme of which is the genuine autonomy manifest in living things. AI should instead focus on creating powerful tools that remain under human control. That would be safer, wiser, and—shall we say—more intelligent.

Origins of the white lie

In the wake of the recently discovered unmarked graves of indigenous children at state-sponsored residential schools run by churches, there has been much discussion of the attitudes and practices of colonialism in Canada. Hardly institutions of learning, these were indoctrination centres serving cultural genocide. It is politically correct now to look back with revulsion, as though we now live in a different world. Should we be so smug? After all, the last Indian residential school closed only twenty-five years ago.

What is particularly horrifying—and yet perplexing—is the prospect that many of the people running these schools (and the government officials who commissioned them) probably felt they were doing the right thing in “helping” indigenous children assimilate into white society. Apart from cynical land-grabbing and blatant racism, many in government may have thought themselves well-motivated, and the school personnel may have been sincerely devout. Yet, the result was malicious and catastrophic. There were elements of the same mean-spirited practices in English boarding schools and ostensibly charitable institutions. Nineteenth-century novels depict the sadism in the name of character formation, discipline and obedience, which were supposed to prepare young men and women for their place in society. How is it possible to be mean and well-meaning at the same time?

Certainly, “the white man’s burden” was a notion central to colonialism. It is related to the European concept of noblesse oblige, which was an aspect of the reciprocal duties between peasant and aristocrat in medieval society. The very fact that such class relationships (between the lowly and their betters) persist even today is key to the sort of presumption of superiority illustrated by the residential schools. Add to class the element of race, then combine with religious proselytizing, empire and greed, and you have a rationale for conquest. The natives were regarded suspiciously as ignorant savages who made no proper use of their land and “resources.” Their bodies were raw material for slavery and their souls for conversion. All in the name of civilizing “for their own good.” Indeed, slavery was a global institution from time immemorial, practiced in Canada as well as the U.S., and practiced even by indigenous peoples themselves.

In view of the Spanish Inquisition in the European homeland, it cannot be too surprising that the conquistadors applied similar methods abroad. The fundamental religious assumption was that the body has little importance compared to the soul. In the medieval Christian context, it was self-evident that the body could be mistreated, tortured, even burnt alive for the sake of the soul’s salvation. According to contemporary accounts, the conquistadors committed atrocities in a manner intended to outwardly honor their religion: natives hanged and burned at the stake—in groups of thirteen as a tribute to Christ and his twelve apostles! The utter irony and perversity of such “logic” has more recent parallels and remains just as possible today.

The Holocaust enacted an intention to keep society pure by eliminating elements deemed undesirable. Eugenics was a theme of widespread interest in the early twentieth century, not only in Nazi Germany. Hannah Arendt argued controversially that the atrocities were committed less by psychopathic monsters than by ordinary people who more or less believed in what they were doing, if they thought about it deeply at all. In the wake of WW2, interest was renewed in understanding how such things can happen in the name of nationalism, racial superiority, or some other captivating agenda. In particular: to understand how unconscionable behavior is internally justified. The psychological experiments of Stanley Milgram on obedience to authority shed light on the banality of evil by showing how easy it is for people to commit acts of torture when an authority figure assures them it is necessary and proper. The underlying question remains: how to account for the disconnect between common sense (or compassion or morality) and behavior that can later (or by others) be judged patently wrong? By what reasoning do people justify their evil deeds so that these appear to them acceptable or even good?

Self-deception seems to be a general human foible, part and parcel of the ability to deceive others. It can be deliberate, even when unconscious. Or, it can be incidental, as when we simply do not have conscious access to our motives. Organisms, after all, are cobbled together by natural selection in a way that coheres only enough to ensure survival. The ego or rational mind, too, is a cobbled-together feature, cut off from access to much of the organism’s workings, with which it would not be adaptive for it to interfere directly. The conscious self is charged by society to produce behavior in accord with social expectations, yet it is poorly equipped as an organ of self-control.

Biology is no excuse, of course, especially since our highest ideals aspire to transcend biological limitations. Yet, a brief digression may shed some light. The primary aim of every organism is its own existence. Life, by definition, is self-serving; yet our species is characteristically altruistic toward those it recognizes as its own kind. The human organism discovered reason as a survival strategy. It has surrounded itself with tools, machines, factories and institutions that serve some purpose other than their own existence. As seemingly rational agents in the world, we try to shape the world in ways that nevertheless fit our needs as organisms. Thus, we purport to act according to some rational program, even for the good of others or of society, but this often turns out to be self-serving or to serve our particular group. The disconnect is a product of evolutionary history. We aspire and purport to be rational, but we were not rationally designed.

Hypocrisy suggests, at root, a failure to be (self-)critical enough. The context of that failing is that we believe we are acting in accordance with one agenda and do not see how we are also acting in accordance with a very different one. We think we are pursuing one aim and fail to recognize another aim inconsistent with it. Deaf to the dissonance, the right hand (hemisphere?) knows not what the left is doing. A person, group, or class behaves according to its interests, and believes some story that justifies its entitlement, to itself and to others. The cover story is somehow made to jibe with other motivations behind it. What is supposedly objective fact is molded to fit subjective desire.

As social creatures, we tend to look to others for clues to how we should behave. But that is a self-fulfilling prophecy when everyone else is doing likewise. There must be some way to weigh action that is not based on social norms. This is the proper function of reason, argument, debate, and social criticism: not to convince others of a point of view, but to find what is wrong with a point of view (no matter how good-sounding) and hopefully set it right. In particular, it should reveal how one intention can be inconsistent with another intention that lurks at its core, just as the whole structure of the brain lurks beneath the neocortex. Reason ought to reveal internal inconsistency and the self-deception that permits it.

Yet, self-deception is a concomitant of the ability to deceive others, which is built into our primate heritage and the structure of language. Society can only cohere through cooperation, and there must be ways to tell the cooperators from the defectors in society. Reputation serves this function. But reputation is an image in people’s minds that can be manipulated and faked. As any actor can tell you, the best way to make your performance emotionally convincing is to believe it yourself. If your story is a lie, then you too must believe the lie if you expect to convince others of your sincerity. Furthermore, deception of others dovetails with their willingness to be deceived—namely, with their own self-deceptions.

We know that people consciously create works of fiction and fantasy; we also know that they sometimes knowingly lie. Self-deception overlaps these categories: fiction that we convince ourselves is fact. Rationally, we know that opinions—when expressed as such—are someone’s thoughts. But the category of fact renounces this understanding in favor of an objective truth that has no author, requires no evidence, and for which no individual is responsible, unless it be God. We disown responsibility for our statements by failing to acknowledge them as personal assertions and beliefs, instead proposing them offhand as free-standing truths in the public domain.

Religion, patriotism, and cultural myth are not about reason or factual truth, but about social cohesion and the soothing of existential anxiety through a sense of belonging. We trust those who seem to think and act like us. But this is a double-edged sword. It makes toeing the line a condition of membership in the group. Controlling the behavior of members helps the group cohere, but it provides no check on the behavior of the group itself.

Scientific propositions can be pinned down and disproven, but not so cultural myths and biases, nor religious beliefs, which cannot even be unambiguously comprehended, let alone debunked in a definitive way. Like water for the fish, the ethos of a society’s prejudices cannot easily be perceived. As Scott Atran has observed, “…most people in our society accept and use both science and religion without conceiving of them in a zero-sum conflict. Genesis and the Big Bang theory can perfectly well coexist in a human mind.” Perhaps that foible is a modern sign that we have not outgrown the capacity for self-deception, and thus for evil.

Splitting hairs with Occam’s razor

Before the 19th century, science was called natural philosophy or natural history. Since the ancient Greeks, the study of nature had been a branch of philosophy, a gentlemanly discussion of ideas by men who disdained to soil their hands with actual materials. What split science off from medieval philosophy was careful observation, quantitative measurement with instruments, and what became known as scientific method: testing ideas by hands-on experiment. Science became the application of technology to the study of nature. This in turn gave rise to further technology, in a happy cycle involving the mutual interaction of mind and matter.

Philosophy literally means love of wisdom. In modern times it has instead largely come to mean love of sophistry. The secession of science from philosophy left the latter awkwardly bereft and defensive. One of the reasons why science emerged as distinct from philosophy is that medieval scholastic philosophy had been (as modern philosophy largely remains) mere talk about who said what about who said what. Science got down to brass tacks, focusing on the natural world, but at the cost of generality. Philosophy could trade on being more general in focus, if less verifiably factual. It could still deal with areas of thought not yet appropriated by scientific study, such as the nature of mind. And it could deal in a critical way with concepts developed within science—which became known as philosophy of science. Either way, the role of philosophy involved the ability to stand back to examine ideas for their logical consistency, meaning, tacit assumptions, and function within a broader context. The focus was no longer nature itself but thought about it and thought in general. Philosophy assumed the role of “going meta,” to critically examine any proposed idea or system from a viewpoint outside it. This meant examining a bigger picture, outside the terms and borders of the discipline concerned, and examining the relationships between disciplines. (Hence, metaphysics as a study beyond physics.) However, that was not the only response of philosophy to the scientific revolution.

Philosophy had long been closely associated with logic, one of its tools, which is also the basis of mathematics. Both logic and mathematics seemed to stand apart from nature as eternal verities, upstream of science. Galileo even wrote that mathematics is the language of the book of nature. So, even though science appropriated these as tools for the study of nature, and was strongly shaped by them, logic and math were never until recently questioned or considered the subject matter of scientific study. The increasing success of mathematical description in the physical sciences led to a general “physics envy,” whereby other sciences sought to emulate the quantifying example of physics. Sometimes this was effective and appropriate, but sometimes it led to pointless formalism, which was often the case in philosophy. Perhaps more than any other discipline, philosophy suffered from an inferiority complex in the shadow of its fruitful younger sibling. Philosophy could legitimize itself by looking scientific, or at least technical.

Certainly, all areas of human endeavor have become increasingly specialized over time. This is true even in philosophy, whose mandate remains, paradoxically, generalist in principle. Apart from the demand for rigor, the tendency to specialize may reflect the need for academics to remain employed by creating new problems to solve; to make their mark by staking out a unique territory in which to be expert; and to differentiate themselves from other thinkers through argument. Specialization, after all, is the art of knowing more and more about less and less, following the productive division of labor that characterizes civilization. On the other hand, specialization can lead to such fragmentation that thinkers in diverse intellectual realms are isolated from each other’s work. Worse, it can isolate a specialty from society at large. That can imply an enduring role for philosophers as generalists. They are positioned to stand back to integrate disparate concepts and areas of thought into a larger whole—to counterbalance specialization and interpret its products to a larger public. Yet, instead of rising to the occasion provided by specialization, philosophy more often succumbs to its hazards.

Science differs from philosophy in having nature as its focus. The essential principle of scientific method is that disagreement is settled ultimately by experiment, which means by the natural world. That doesn’t mean that questions in science are definitively settled, much less that a final picture can ever be reached. The independent existence of the natural world probably means that nature is inexhaustible by thought, always presenting new surprises. Moreover, scientific experiments are increasingly complex, relying on tenuous evidence at the boundaries of perception. This means that scientific truth is increasingly a matter of statistical data, whose interpretation depends on assumptions that may not be explicit—until philosophers point them out. Nevertheless, there is progress in science. At the very least, theories become more refined, more encompassing, and quantitatively more accurate. That means that science is progressively more empowering for humanity, at least through technology.

Philosophy does not have nature as arbiter for its disputes, and little opportunity to contribute directly to technological empowerment. Quite the contrary, modern philosophers mostly quibble over contrived dilemmas of little interest or consequence to society. These are often scarcely more than make-work projects. The preoccupations and even titles of academic papers in philosophy are often expressed in terms that mock natural language. In the name of creating a precise vocabulary, their jargon establishes a realm of discourse impenetrable to outsiders—certainly to lay people and often enough even to other academics. More than an incidental by-product of useful specialization, abstruseness seems a ploy to justify existence within a caste and to perpetuate a self-contained scholastic world. If philosophical issues are by definition irresolvable, this at least keeps philosophers employed.

Philosophy began as rhetoric, which is the art of arguing convincingly. (Logic may have arisen as a rule-based means to rein in the extravagances of rhetoric.) Argument remains the hallmark of philosophy. Without nature to rein thought in, as in science, there are only logic and common sense to serve as guides. Naturally, philosophers do attempt to present coherent reasoned arguments. But logic is only as good as the assumptions on which it is based, and these are wide open to disagreement. Philosophical argument does little more than hone disagreement and provide further opportunities to nit-pick. For the most part, philosophical argument promotes divergence, when its better use (“standing back”) is to arrive at convergence by getting to the bottom of things. That, however, would risk putting philosophers out of a job.

Philosophy resembles art more than science. Art, at least, serves a public beyond the coterie of artists themselves. Art too promotes divergence, and literature serves the multiplication of viewpoints we value as creative in our culture of individualism. Like artists, professional philosophers might find an esthetic satisfaction in presenting and examining arguments; they might revel in the opportunity to stand out as clever and original. However, philosophy tends to be less accessible to the general public than art. (Try to imagine a philosophy museum or gallery.) Professional philosophy has defined itself as an ivory-tower activity, and academic papers in philosophy tend to make dull reading, when comprehensible at all. That does not prevent individual philosophers from writing books of general and even topical interest. Sometimes these are eloquent, occasionally even best-sellers. Philosophers may do their best writing, and perhaps their best thinking, when addressing the rest of us instead of their fellows. After all, they were once advisors to rulers, providing a public service.

If philosophy is the art of splitting hairs, the metaphor generously conjures the image of an ideally sharp blade—the cutting edge of logic or incisive criticism. The other metaphor—of “nitpicking”—has less savory connotations but more favorable implications. Picking nits is a grooming activity of social animals, especially primates. It serves immediately to promote cleanliness (next to godliness, after all). More broadly, it serves to bond members of a group. We complain of it as a negative thing, an excessive attention to detail that distracts from the main issue. Yet its social function is actually to facilitate bonding. The metaphor puts the divisive aspect of philosophy in the context of a potentially unifying role.

That role can be fulfilled in felicitous ways, such as the mandate to stand back to see the larger picture, to find hidden connections and limiting assumptions, to “go meta.” It consists less in the skill of finding fault with the arguments of others than in identifying faulty thinking in the name of arriving at a truth that can be the basis for agreement. Perhaps most philosophers would insist that is what they actually do. Perhaps finding truth can only be done by first scrutinizing arguments in detail and tearing them apart. However, that should never be an end in itself. As naïve as it might seem, reality—not arguments—should remain the focus of study in philosophy, as it is in science. Above all, specialization in philosophy should not distract from larger issues, but aim for them. Analysis should be part of a larger cycle that includes synthesis. Philosophy should be true to its role of seeking the largest perspective and bringing the most important things into clear focus. It should again be a public service, to an informed society if no longer to kings.

The quest for superintelligence

Human beings have been top dogs on the planet for a long time. That could change. We are now on the verge of creating artificial intelligence that is smarter than us, at least in specific ways. This raises the question of how to deal with such powerful tools—or, indeed, whether they will remain controllable tools at all or will instead become autonomous agents like ourselves, other animals, and mythical beings such as gods and monsters. An agent, in a sense derived from biology, is an autopoietic system—one that is self-defining, self-producing, and whose goal is its own existence. A tool or machine, on the other hand, is an instrument of an agent’s purposes. It is an allopoietic system—one that produces something other than itself. Unless it happens to also be an autopoietic system, it could have no goals or purposes of its own. An important philosophical issue concerning AI is whether it should remain a tool or should become an agent in its own right (presuming that is even possible). Another issue is how to ensure that powerful AI remains in tune with human goals and values. More broadly: how to make sure we remain in control of the technologies we create.

These questions should interest everyone, first of all because the development of AI will affect everyone. And secondly, because many of the issues confronting AI researchers reflect issues that have confronted human society all along, and will soon come at us on steroids. For example, the question of how to align the values of prospective AI with human interests simply projects into a technological domain the age-old question of how to align the values of human beings among themselves. Computation and AI are the modern metaphor for understanding the operations of brains and the nature of mind—i.e., ourselves. The prospect of creating artificial mind can aid in the quest to understand our own being. It is also part of the larger quest: not only to control nature but to re-create it, whether in a carbon or a silicon base. And that includes re-creating our own nature. Such goals reflect the ancient dream to self-define, like the gods, and to acquire godlike powers.

Ever since Descartes and La Mettrie, the philosophy of mechanism has blurred the distinction between autopoietic and allopoietic systems—between organisms and machines. Indeed, it traditionally regards organisms as machines. However, an obvious difference is that machines, as we ordinarily think of them, are human artifacts, whereas organisms are not. That difference is being eroded from both ends. Genetic and epigenetic manipulation can produce new creatures, just as artificial nucleosynthesis has produced new man-made elements. At the other end lies the prospect of initiating a process that results in artificial agents: AI that bootstraps into an autopoietic system, perhaps through some process of recursive self-improvement. Is that possible? If so, is it desirable?

It does not help matters that the language of AI research casually imports mentalistic terms from everyday speech. The literature is full of ambiguous notions from the domain of everyday experience, which are glibly transferred to AI—for example, concepts such as agent, general intelligence, value, and even goal. Confusing “as if” language crops up when a machine is said to reason, think or know, to have incentives, desires, goals, or motivations. Even if metaphorical, such wholesale projection of agency into AI presupposes an answer to the question of whether a machine can become an autopoietic system, and it obscures the question of what exactly would be required to make it so.

To amalgamate all AI competencies in one general and powerful all-purpose program certainly has commercial appeal. It now seems a feasible goal, but it may be too good to be true. For the concept of artificial general intelligence (AGI) could turn out not even to be a coherent notion—if, for example, “intelligence” cannot really be divorced from a biological context. To further want AGI to be an agent could be a very bad idea. Either way, the fear is that AI entities, if super-humanly intelligent, could evade human control and come to threaten, dominate, or even supersede humanity. A global takeover by superintelligence has been the theme of much science fiction and is now the topic of serious discussion in think tanks around the world. While some transhumanists might consider it desirable, most people probably would not. The prospect raises many questions, the first being whether AGI is inevitable or even desirable. A further question is whether AGI implies (or unavoidably leads to) agency. If not, what threats are posed by an AGI that is not an agent, and how can they be mitigated?

I cannot provide definite answers to these questions. But I can make some general observations and report on a couple of strategies proposed by others.  An agent is necessarily embodied—which means it is not just physically real but involved in a relationship with the world that matters to it. Specifically, it can interact with the world in ways that serve to maintain itself. (All natural organisms are examples, and here we are considering the possibility of an artificial organism.) One can manipulate tools in a direct way, without having to negotiate with them as we do with agents such as people and other creatures. The concept of program initially meant a unilateral series of commands to a machine to do something. A command to a machine is a different matter than a command to an agent, which has its own will and purposes and may or may not choose to obey the command. But the concept of program has evolved to include systems with which we mutually interact, as in machine learning and self-improving programs. This establishes an ambiguous category between machine and agent. Part of the anxiety surrounding AGI stems from the novelty and uncertainty regarding this zone.

It is problematic, and may be impossible, to control an agent more intelligent than oneself. The so-called value alignment problem is the desperate quest to nevertheless find a way to have our cake (powerful AI to use at our discretion) and be able to eat it too (or perhaps to keep it from eating us). It is the challenge of making sure that AI clearly “understands” the goals we give it and pursues no others. If it has any values at all, these should be compatible with human values, and it should value human life. I cannot begin here to unravel the tangled fabric of tacit assumptions and misconceptions involved in this quest. (See instead my article, “The Value Alignment Problem,” posted in the Archive on this website.) Instead, I point to two ways to circumvent the challenge. The first is simply not to aim for AGI, let alone for agents. This strategy is proposed by K. Eric Drexler. Instead of consolidating all skill in one AI entity, it would be just as effective, and far safer, to create ad hoc task-oriented software tools that do what they are programmed to do because their capacity to self-improve is deliberately limited. The second strategy is proposed by Stuart Russell: to build uncertainty into AI systems, which are then obliged to hesitate before acting in ways adverse to human purposes—and thus to consult with us for guidance.
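To make the second strategy a little more concrete, here is a minimal Python sketch (my own toy illustration, not drawn from Russell’s or anyone else’s actual system): an assistant that is explicitly uncertain about the human’s preferences and therefore defers to the human whenever its apparently best action carries a non-negligible estimated risk of being unwanted. All action names, numbers, and thresholds are invented.

```python
# A toy sketch of objective-uncertainty leading to deference (illustrative only).
# The assistant holds a prior over the human's trade-off between two features and
# defers when its best candidate action risks being worse than doing nothing.
import random

# Hypothetical actions and their effects on features the human may care about.
ACTIONS = {
    "act_fast":      {"speed": 1.0, "safety": -0.2},
    "act_carefully": {"speed": 0.2, "safety":  0.1},
    "do_nothing":    {"speed": 0.0, "safety":  0.0},
}

def sample_weights():
    # The human's true weights are unknown; the assistant only has this prior.
    return {"speed": random.uniform(0.0, 1.0), "safety": random.uniform(0.0, 2.0)}

def utility(effects, weights):
    return sum(weights[f] * v for f, v in effects.items())

def decide(n=2000, harm_tolerance=0.05):
    # Pick the action with the highest Monte Carlo estimate of expected utility.
    best = max(ACTIONS, key=lambda a: sum(utility(ACTIONS[a], sample_weights())
                                          for _ in range(n)) / n)
    # Estimate the probability that this action is actually worse than doing nothing.
    harm = sum(utility(ACTIONS[best], sample_weights()) < 0 for _ in range(n)) / n
    if harm > harm_tolerance:
        return f"defer to the human before '{best}' (estimated risk of harm: {harm:.2f})"
    return f"proceed with '{best}' (estimated risk of harm: {harm:.2f})"

print(decide())
```

The prior and the deferral threshold are, of course, doing all the work here; the point is only to show how modeled uncertainty about human preferences can translate into a built-in disposition to consult rather than act.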

The goal of creating superintelligence must be distinguished from the goal of creating artificial agents. Superintelligent tools can exist that are not agents; agents can exist that are not superintelligent. The problems of controlling AI and of aligning its values are byproducts of the desire to create meta-tools that are neither conventional tools nor true agents. Furthermore, real-world goals for AI must be distinguished from specific tasks. We understandably seek powerful tools to achieve our real-world goals for us, yet we fear they may misinterpret our wishes or carry them out in some undesired way. That dilemma is avoided if we only create programs to accomplish specified tasks. That means more work for humans than automating automation itself would, but it keeps technology under human control.

Why seek to eliminate human input and participation? An obvious answer is to “optimize” the accomplishment of desired goals. That is, to increase productivity (equals wealth) through automation and thereby also reduce the burdens of human labor. Perhaps modern human beings are never satisfied with enough? Perhaps at the same time we simply loathe effort of any kind, even mental. Shall we just compulsively substitute automation for human labor whenever possible? Or are we indulging as well a faith that AI could accomplish all human purposes better and more ecologically than people? If the goal is ultimately to automate everything, what would people then do with their time when they are no longer obliged to do anything? If the hope behind AI is to free us from drudgery of any sort (in order, say, to “make the best of life’s potential”) what is that potential? How does it relate to present work and human satisfactions? What will machines free us for?

And what are the deeper and unspoken motivations behind the quest for superintelligence? To imitate life, to acquire godlike powers, to transcend nature and embodiment, to create an artificial ecology? Such questions traditionally lie outside the domain of scientific discourse. They become political, social, ethical and even religious issues surrounding technology. But perhaps they should be addressed within science too, before it is too late.

Anthropocene: from climate change to changing human nature

Anthropocene is a neologism meaning “a geological epoch dating from the commencement of significant human impact on Earth’s geology and ecosystems.” However, what is involved is more than unintended consequence. Having already upset the natural equilibrium, it seems we are now obliged to intervene deliberately—either to restore it or to finish the job of creating a man-made world in place of nature. What is new on the anthropo scene is the prospect of taking deliberate charge of human destiny, indeed of the future of the planet. It is the prospect of completely re-creating nature and human nature—blurring or obliterating the very distinction between natural and artificial. The Anthropocene could be short-lived, either because the project is doomed and we do ourselves in, or because the form we know as human may be no more than a stepping stone to something else.

In one sense, the Anthropocene dates not from the 20th century (nor even the Industrial Revolution) but from human beginnings. For, the function of culture everywhere has always been to redefine the world in human terms, and our presence has always reshaped the landscape, creating extinctions and deserts along the way. Technology has always had planetary effects, which until recently have been moderate and considered only in hindsight. New technologies now afford possibilities of total control and require total foresight. Bio-engineering, nanotechnology, and artificial intelligence are latter-day means to an ancient dream of acquiring godlike powers. Along with such powers go godlike responsibilities.

Because that dream has been so core to human being all along, and yet so far beyond reach, we’ve been in denial of it over the ages, always projecting god-envy into religious and mythological spheres, which have always cautioned against the hubris of such pretension. Newly emboldened technologically, however, humanity is finally coming out of the closet.

The Anthropocene ideal is to master all aspects of physical reality, redesigning it to human taste. Actually, that will mean to the taste of those who create the technology. This raises the political question: who, if anyone, should control these technologies? Whom will they benefit? More darkly, what are the risks that will be borne by all? When there was an abundance of wild, the natural world was taken for granted as a commons, which did not prevent private interests from fencing, owning, and exploiting ever more of it for their own profit.

From biblical times, the idea of natural resource put nature in a context of use, as the object of human purposes. And that meant the purposes of certain societies or groups, at the cost of others. Now that technologies exist to literally rearrange the building blocks of life and of matter, the concept of resource shifts from specific minerals, plants and animals to a more universal stuff—even “information.” One political question is who will control these new unnatural resources, and how to preserve them as a new sort of commons for the benefit of all? Another is how to proceed safely—if there is such a thing—in the wholesale transformation of nature and ourselves.

The human essence has always been a matter of controversy. More than ever it is now up for grabs. Because we are the self-creating creature, we cannot look to a fixed human nature, nor to a consensus, for the values that should guide our use of technology. A vision of the future—and the fulfillment of human potential—is a matter of opinions and values that differ widely. Some see a glorious technological future that is not pinned to the current human form. Others envision a way of life more integrated with nature and accepting of natural constraints. Still others view the human essence as spiritual, with human destiny unfolding on some divine timetable. The means to change everything are now available, but without a consolidated guiding vision.

Genome information is now readily available and so are technologies for using it to do genetic experiments at home. While some technologies require expensive laboratory equipment, citizen scientists (bio-hackers) can get what they need online and through the mail. Since much of the technology is low-tech and readily available, anyone in their basement can launch us into a brave new unnatural world.

One impetus for such home experimentation is social disparity: biohacking is in part a rebellion against the unfairness of present social and health systems. Like the hacker movement in general, biohackers want knowledge and technology to be fairly and democratically available, which means relatively cheap if not in the public domain. It’s about public access to what they consider should be a commons. They protest the patenting of private intellectual property that drives up the price of technology and medicine and restricts the availability of information. Social disparity promises to be endemic to all new technologies that are affordable (at least initially) only to an elite.

There are personal risks for those who experiment on themselves with unproven drugs and genetic modification. But there are risks to the environment shared by all as well, for example when an engineered mutant is deliberately released into the wild to deal with the spread of ticks that carry Lyme disease or the persistence of malaria-carrying mosquitos. The difference between a genetic solution and a conventional one can be that the new organism reproduces itself, changing the biosphere in potentially unforeseeable and irreversible ways. That applies to interventions in the human genome too. Bio-hacking is but one illustration of the potential benefits and threats of bio-engineering, which is the human quest to change biology deliberately, including human biology. The immediate promise is that genetic defects can be eliminated. But why stop there? Ideal citizens can be designed from scratch. Perhaps mortality can be eliminated. That amounts to hijacking evolution or finally taking charge of it, depending on your view. To change human nature might seem a natural right, especially since “human nature” includes an engrained determination to self-define. But does that include the right to define life in general and nature at large, to tinker freely with other species, to terraform the planet? And what constitutes a “right”? Nature endows creatures with preferences and instincts but not with rights, which are a human construct, reflecting our very disaffection from nature. Who will determine the desirable traits for a future human or post-human being and on what grounds?

Tinkering with biology is one way to enhance ourselves, but another is through artificial intelligence. Bodies and now minds can be augmented prosthetically, potentially turning us into a new cyborg species (or a number of them). Another dream is to transcend embodiment (and mortality) entirely, by uploading a copy of yourself into an eternally running supercomputer. Some of these aspirations are pipe dreams. But the possibility of an AI takeover is real and already upon us in minor ways: surveillance, data collection, smart appliances, etc. The ultimate potential is to automate automation, to relieve human beings (or at least some of them) of the need to work physically and even mentally. Your robot can do all your housework, your job, even take your vacations for you! As with biotechnology, the surface motivation driving AI development is no doubt commercial and military. Yet, lurking beneath is the unconscious desire to step into divine shoes: to create life and mind from scratch even as we free ourselves from the limitations of natural life and mind.

Like biotechnology, the tools for AI development are commonly available and relatively cheap. All you need is savvy and a laptop. The implicit aim is artificial “general” intelligence, matching or exceeding human mental and physical capability. That could be in the form of superintelligent tools that remain under human control, designed for specific tasks. But it could also mean a robotic version of human slaves. Apart from the ethics involved, slaves have never been easy to control. It comes down to a tradeoff between the advantages of autonomy in artificial agents and the challenge of controlling them. Autonomy may seem desirable because such agents could do literally everything for us and better, with no effort on our part. But if such creations are smarter than we are, and are in effect their own persons, how long could we remain their masters? If they have their own purposes, why would they serve ours? The very idea of automating automation means forfeiting control at the outset, since the goal is to launch AIs that effectively create themselves.

Radical conservationists and transhumanist technophiles may be at cross-purposes, but so are more moderate advocates for the environment or for business. As biological creatures, we inherit the universe provided by nature, which we try to make into something corresponding to our human preferences. The materials we work with ultimately derive from nature and obey laws we did not make. Scientific understanding has enabled us to reshape that world to an extent, using those very laws. We don’t yet know the ceiling of what is possible, let alone what is wise. How far should we go in transforming ourselves and nature? Why create artificial versions of ourselves at all, let alone artificial versions of gods? What used to be philosophical questions are becoming scientific and political ones. The world is our oyster and we the irritating grit within. Will the result be a pearl?

R U Real?

For millennia, philosophers have debated the nature of perception and its relation to reality. Their speculations have been shaped by the prevailing concerns and metaphors of their age. The ancient Greeks, with slaves to do their work, were less interested in labor-saving inventions than in abstract concepts and principles. Plato’s allegory of the Cave refers to no technology more sophisticated than fire—harking back, perhaps, to times when people literally lived in caves. (It does refer to the notion of prisoner, long familiar from slavery and military conquest in the ancient world.)

In Plato’s low-tech metaphor, the relationship of the perceiving subject to the objects of perception is like that of someone in solitary confinement. The unfortunate prisoner’s head is even restrained in such a way that he or she can see only the shadows cast on the walls of the cave by objects passing behind—never the objects themselves. It was a prescient intuition, anticipating the later discovery that the organ responsible for perception is the brain, confined like a prisoner in the cave of the skull. Plato believed it was possible to escape this imprisonment. In his metaphor, the liberated person could emerge from the cave and see things for what they are in the light of day—which to Plato meant the light of pure reason, freed from dependence on base sensation.

Fast forward about two millennia to Catholic France, where Descartes argued that the perceiving subject could be systematically deceived by some mischievous agent capable of falsifying the sensory input to the brain. Descartes understood that knowledge of the world is crucially dependent on afferent nerves, which could be surgically tampered with. (The modern version of this metaphor is the “brain in a vat,” wired up to a computer that sends all the right signals to the brain to convince it that it is living in a body and engaged in normal perception of the world.) While Descartes was accordingly skeptical about knowledge derived from the senses, he claimed that God would not permit such a deception. In our age, in contrast, we not only know that deception is feasible, but even court it in the form of virtual entertainments. The film The Matrix is a virtual entertainment about virtual entertainments, expounding on the theme of the brain in a vat.

Fast forward again a century and a half to Immanuel Kant. Without recourse to metaphor or anatomy, he clearly articulated for the first time the perceiving subject’s inescapable isolation from objective reality. (In view of the brain’s isolation within the skull, the nature of the subject’s relation to the outside world is clearly not a transparent window through which things are seen as they “truly” are.) Nevertheless, while even God almighty could do nothing about this unfortunate condition, Kant claimed that the very impossibility of direct knowledge of external reality was reason for faith. In an age when science was encroaching on religion, he contended that it was impossible to decide issues about God, free will, and immortality—precisely because they are beyond reach in the inaccessible realm of things-in-themselves. One is free, he insisted, to believe in such things on moral if not epistemological grounds.

Curiously, each of these key figures appeals to morality or religion to resolve the question of reality, in what are essentially early theories of cognition. Plato does not seem to grasp the significance of his own metaphor as a comment on the nature of mind. Rather, it is incidental to his ideas on politics and the moral superiority of the “enlightened.” Descartes—who probably knew better, yet feared the Church—resorts to God to justify the possibility of true knowledge. And Kant, for whom even reason is suspect, had to “deny knowledge in order to make room for faith.” We must fast forward again another century to find a genuinely scientific model of cognition. In Hermann Helmholtz’s notion of unconscious inference, the brain constructs a “theory” of the external world using symbolic representations that are transforms of sensory input. His notion is a precursor of computational theories of cognition. The metaphor works both ways: one could say that perception is modeled on scientific inference; but one can equally say that science is a cognitive process which recapitulates and extends natural perception.
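To make the idea of unconscious inference concrete, here is a minimal toy sketch in Python (my own illustration of perception-as-inference in a Bayesian idiom, not Helmholtz’s formalism; the numbers are invented): a perceiver that never accesses the external quantity directly, only noisy sensory readings, and infers the most probable external state from them.

```python
# Toy sketch of perception as inference: infer a hidden external quantity
# from noisy sensory readings, without ever observing it directly.

import math

def likelihood(reading, hypothesis, noise=1.0):
    """How probable a noisy sensory reading is, given a hypothesis about
    the true external quantity (unnormalized Gaussian)."""
    return math.exp(-((reading - hypothesis) ** 2) / (2 * noise ** 2))

def infer(readings, hypotheses):
    """Combine a flat prior with the likelihood of each reading and return
    a normalized posterior over the hypotheses."""
    posterior = {h: 1.0 for h in hypotheses}   # flat prior
    for r in readings:
        for h in hypotheses:
            posterior[h] *= likelihood(r, h)
    total = sum(posterior.values())
    return {h: round(p / total, 3) for h, p in posterior.items()}

# The "retina" reports 4.2, 3.8, 4.1; the external brightness itself is never seen.
print(infer([4.2, 3.8, 4.1], hypotheses=[1, 2, 3, 4, 5]))
```

The “theory” the perceiver ends up with is just the hypothesis the evidence favors; nothing in the procedure gives it direct contact with the thing itself.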

Given its commitment to an objective view, it is ironic that science shied away from the implications of Kant’s thesis that reality is off-limits to the mind. While computational theories explain cognition as a form of behavior, they fail to address: (1) the brain’s epistemic isolation from the external world; (2) the nature of conscious experience, if it is not a direct revelation of the world; and (3) the insidious circularity involved in accounts of perception.

To put yourself in the brain’s shoes (first point, above), imagine you live permanently underwater in a submarine—with no periscope, portholes, or hatch. You have grown up inside and have never been outside its hull to view the world first-hand. You have only instrument panels and controls to deal with, and initially you have no idea what these are for. Only by lengthy trial and error do you discover correlations between instrument readings and control settings. These correlations give you the idea that you are inside a vessel that can move about under your direction, within an “external” environment that surrounds it. Using sonar, you construct a “picture” of that presumptive world, which you call “seeing.”

This is metaphor, of course, and all metaphors have their limitations. This one does not tell us, for example, exactly what it means to “have a picture” of the external world (second point), beyond the fact that it enables the submariner to “navigate.” This picture (conscious perception) is evidently a sort of real-time map—but of what? And why is it consciously experienced rather than just quietly running as a program that draws on a data bank to guide the behavior of navigating? (In other words, why is there a submariner at all, as opposed to a fully automated underwater machine?) Furthermore, the brain’s mastery of its situation is not a function of one lifetime only. The “trial and error” takes place in evolutionary time, over many generations of failures that result in wrecked machines.

In the attempt to explain seeing, perhaps the greatest failure of the metaphor is the circularity of presuming someone inside the submarine who already has the ability to see: some inner person who already has a concept of reality outside the hull (skull), and who moves about inside the seemingly real space of the submarine’s interior, aware of instrument panels and control levers as really existing things. It is as though a smaller submarine swims about inside the larger one, trying to learn the ropes, and within that submarine an even smaller one… ad infinitum!

The problem with scientific theories of cognition is that they already presume the real world whose appearance in the mind they are trying to explain. The physical brain, with neurons, is presumed to exist in a physical world as it appears to humans—in order to explain that very appearance, which includes such things as brains and neurons and the atoms of which they are composed. The output of the brain is recycled as its input! To my knowledge, Kant did not venture to discuss this circularity. Yet, it clearly affirms that the world-in-itself is epistemically inaccessible, since there is no way out of this recycling. However, rather than be discouraged by this as a defeat of the quest for knowledge of reality, we should take it as an invitation to understand what “knowledge” can actually mean, and what the concept of “reality” can be for prisoners inside the cave of the skull.

Clearly, for any organism, what is real is what can affect its well-being and survival, and what it can affect in turn. (This is congruent with the epistemology of science: what is real is that with which the observer can causally interact.) The submariner’s picture and knowledge of the world outside the hull are “realistic” to the degree they facilitate successful navigation—that is, survival. The question of whether such knowledge is “true” has little meaning outside this context. Except in these limited terms, you cannot know what is outside your skull—or what is inside it, for that matter. The neurosurgeon can open up a skull to reveal a brain—can even stimulate that brain electrically to make it experience something the surgeon takes to be a hallucination. But even if the surgeon opened her own skull to peek inside, and manipulated her own experience, what she would see is but an image created by her own brain—in this case perhaps altered by her surgical interventions. The submariner’s constructed map is projected as external, real, and even accurate. But it is not the territory. What makes experience veridical or false is hardly as straightforward as the scientific worldview suggests. Science, as an extended or supplementary form of cognition, is as dependent on these caveats as natural perception. Whether scientific knowledge of the external world ultimately qualifies as truth will depend on how well it serves the survival of our species. On that the jury is still out.

Are You Fine-tuned? (Or: the story of Goldilocks and the three dimensions)

The fine-tuning problem is the notion that the physical universe appears to be precisely adjusted to allow the existence of life. It is the apparent fact that many fundamental parameters of physics and cosmology could not differ much from their actual values, nor could the basic laws of physics be much different, without resulting in a universe that would not support life. Creationists point to this coincidence as evidence of intelligent design by God. Some thinkers point to it as evidence that our universe was engineered by advanced aliens. And some even propose that physical reality is actually a computer simulation we are living in (created, of course, by advanced aliens). But perhaps fine-tuning is a set-up that simply points to the need for a different way of thinking.

First of all, the problem assumes that the universe could be different than it is—that fundamental parameters of physics could have different values than they actually do in our world. This presumes some context in which basic properties can vary. That context is a mechanistic point of view. The Stanford Encyclopedia of Philosophy defines fine-tuning as the “sensitive dependences of facts or properties on the values of certain parameters.” It points to technological devices (machines) as paradigm examples of systems that have been fine-tuned by engineers to perform in an optimal way, like tuning a car engine. The mechanistic framework of science implicitly suggests an external designer, engineer, mechanic or tinkerer—if not God, then the scientist. In fact, the early scientists were literally Creationists. Whatever the solution, the problem is an historical residue of their mechanistic outlook. The answer may require that we look at the universe in a more organic way.

The religious solution was to suppose that the exact tweaking needed to account for observed values of physical parameters must be intentional and not accidental. The universe could only be fine-tuned by design—as a machine is. However, the scale and degree of precision are far above the capabilities of human engineers. This suggests that the designer must have near-infinite powers, and must live in some other reality or sector of the universe. Only God or vastly superior alien beings would have the know-how to create the universe we know. Alternatively, such precision could imply that the universe is not even physical, but merely a product of definition, a digital simulation or virtual reality. Ergo, there must be another level of reality behind the apparent physical one. But such thinking is ontologically extravagant.

Apart from creationism, super-aliens, or life in a cosmic computer, a more conventional approach to the problem is statistical. One can explain a freak occurrence as a random event in a large run of very many. Given an infinite number of monkeys with typewriters, one of them is bound to type out Shakespeare eventually. If, say, there are enough universes with random properties, it seems plausible that at least one of them would be suitable for the emergence of life. Since we are here, we must be living in that universe. But this line of reasoning is also ontologically costly: one must assume an indefinite number of actual or past “other universes” to explain this single one. The inspiration for such schemes is organic insofar as it suggests some sort of natural selection among many variants. That could be the “anthropic” selection mentioned above or some Darwinian selection among generations of universes (such as Lee Smolin’s black hole theory). Such a “multiverse” scheme could be true, but we should only think so because of real evidence and not in order to make an apparent dilemma go away.

It might be ontologically more economical to assume that our singular universe somehow fine-tunes itself. After all, organisms seem to fine-tune themselves. Their parts cooperate in an extremely complex way that cannot be understood by thinking of the system as a machine designed from the outside. If nature (the one and only universe) is more like an organism than a machine, then the fine-tuning problem should be approached a different way, if indeed it is a problem at all. Instead of looking at life as a special instance of the evolution of inert matter, one could look at the evolution of supposedly inert matter (physics) as a special case involving principles that can also describe the evolution of life.

Systems in physics are simple by definition. Indeed, they are conceived for simplicity. In contrast, organisms (and the entire biosphere) are complex and homeostatic. Apart from the definitions imposed by biologists, organisms are also self-defining. Physical systems are generally analyzed in terms of one causal factor at a time—as in “controlled” experiments. As the name suggests, this way of looking aims to control nature in the way we can control machines, which operate on simple linear causality. Biological systems involve very many mutual and circular causes, hard to disentangle or control. Whereas the physical system (machine) reflects the observer’s intentionality and purposes—to produce something of human benefit—the organism aims to produce and maintain itself. Perhaps it is time to regard the cosmos as a self-organizing entity.

Fine-tuning argues that life could not have existed if the laws of nature were slightly different, if the constants of nature were slightly different, or if the initial conditions at the Big Bang were slightly different—in other words, in most conceivable alternative universes. But is an alternative universe physically possible simply because we can conceive it? The very business of physics is to propose theoretical models that are free creations of mathematical imagination. Such models are conceptual machines. We can imagine worlds with a different physics; but does imagining them make them real? The fact that a mathematical model can generate alternative worlds may falsely suggest that there is some real cosmic generator of universes churning out alternative versions with differing parameters and even different laws. “Fundamental parameters” are knobs on a conceptual machine, which can be tweaked. But they are not knobs on the world itself. They are variables of equations, which describe the behavior of the model. The idea of fine-tuning confuses the model with the reality it models.
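A toy sketch may make the point vivid (my own illustration; the parameters and the “life-permitting” criterion below are entirely invented): the only place where such alternative universes are “generated” is inside a loop over a model’s parameters.

```python
# Toy sketch: the "knobs" turned in fine-tuning arguments are parameters
# of a conceptual model, varied inside a loop -- not dials on the world.

import itertools

def life_permitting(alpha, beta):
    """Invented criterion: this toy universe 'supports life' only if a
    dimensionless combination of its two parameters lands in a narrow band."""
    return 0.95 < alpha / beta < 1.05

# Sweep the model's parameter space -- the only place such "universes" exist.
settings = [0.1 * k for k in range(1, 21)]
viable = [(a, b) for a, b in itertools.product(settings, settings)
          if life_permitting(a, b)]

print(f"{len(viable)} of {len(settings) ** 2} parameter settings pass the toy test")
```

However many settings pass or fail, nothing has been tuned except the variables of a made-up equation; the exercise says nothing about whether reality itself has knobs.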

The notion of alternative values for fundamental parameters extends even to imagining what the world would be like with more or fewer than three spatial dimensions. But the very idea of dimension (like that of parameter) is a convention. Space itself just is. What we mean literally by spatial dimensions are directions at right angles to each other—of which there are but three in Euclidean geometry. The idea that this number could be different derives from an abstract concept of space in contrast to literal space: dimensions of a conceptual system—such as phase space or a non-Euclidean geometry. The resultant “landscape” of possible worlds is no more than a useful metaphor. If three dimensions are just right for life, it is because the world we live in happens to be real and not merely conceptual.

The very notion of fundamental parameters is a product of thinking that in principle does not see the forest for the trees. What makes them “fundamental” is that the factors appear to be independent of each other and irreducible to anything else—like harvested logs that have been propped upright, which does not make them a forest. This is merely another way to say that there is currently no theory to encompass them all in a unified scheme, such as could explain a living forest, with its complex interconnections within the soil. Without such an “ecology” there is no way to explain the mutual relationships and specific values of seemingly independent parameters. (In such a truly fundamental theory, there would be at most one independent parameter, from which all other properties would follow.)

The fine-tuning problem should be considered evidence that something is drastically wrong with current theory, and with the implicit philosophy of mechanism behind it. (There are other things wrong: the cosmological constant problem, for instance, has been described as the worst catastrophe in the history of physics.) Multiverses and string theories, like creationism, may be barking up the wrong tree. They attempt to assimilate reality to theory (if not to theology), rather than the other way around. The real challenge is not to fit an apparently freak world into an existing framework, but to build a theory that fits experience.

Like Goldilocks, it appears to us that we live in a universe that is just right for us—in contrast to imaginary worlds unsuitable for life. We are at liberty to invent such worlds, to speculate about them, and to imagine them as real. These are useful abilities that allow us to confront in thought hypothetical situations we might really encounter. As far as we know, however, this universe is the only real one.

The machine death of the universe?

Much of the basic non-technical vocabulary of artificial intelligence is surprisingly ambiguous. Some key terms with vague meanings include intelligence, embodiment, mind, consciousness, perception, value, goal, agent, knowledge, belief, and thinking. Such vocabulary is naively borrowed from human mental life and used to underpin a theoretical and abstract general notion of intelligence that could be implemented by computers. Intelligence has been defined many ways—for example, as the ability to deal with complexity. But what does “dealing with” mean exactly? Or it is defined as the ability to predict future or missing information; but what is “information” if it is not relevant to the well-being of some unspecified agent? It should be imperative to clarify such ambiguities, if only to identify a crucial threshold between conventional mechanical tools and autonomous artificial agents. While it might be inconsequential what philosophers think about such matters, it could be devastating if AI developers, corporations, and government regulators get it wrong.

However intelligence is formally defined, our notions of it derive originally from experience with living creatures, whose intelligence ultimately is the capacity to survive and breed. Yet, formal definitions often involve solving specific problems set by humans, such as on IQ tests. This problem-solving version of intelligence is tied to human goals, language use, formal reasoning, and modern cultural values; and trying to match human performance risks testing for humanness more than intelligence. The concept of general intelligence, as it has developed in AI, does not generalize the actual instances of mind with which we are familiar—that is, organisms on planet Earth—so much as it selects isolated features of human performance to develop into an ideal theoretical framework. This is then supposed to serve as the basis of a universally flexible capacity, just as the computer is understood to be the universal machine. A very parochial understanding of intelligence becomes the basis of abstract, theoretically possible “mind,” supposedly liberated from bodily constraint and all context. However, the generality sought for AI runs counter to the specific nature and conditions for embodied natural intelligence. It remains unclear to what extent an AI could satisfy the criteria for general intelligence without being effectively an organism. Such abstractions as superintelligence (SI) or artificial general intelligence (AGI) remain problematically incoherent. (See Maciej Cegłowski’s amusing critique: https://idlewords.com/talks/superintelligence.htm)

AI was first modelled on language and reasoning skills, formalized as computation. The limited success of early AI compared unfavorably with the broader capabilities of organisms. The dream then advanced from creating specific tools to creating artificial agents that could be tool users, imitating or replicating organisms. But natural intelligence is embodied, whereas the theoretical concept of “mind in general” that underpins AI is disembodied in principle. The desired corollary is that such a mind could be re-embodied in a variety of ways, as a matter of consumer choice. But whether this corollary truly follows depends on whether embodiment is a condition that can be simulated or artificially implemented, as though it were just a matter of hooking up a so-called mind to an arbitrary choice of sensors and actuators. Can intelligence be decoupled from the motivations of creatures and from the evolutionary conditions that gave rise to natural intelligence? Is the evolution of a simulation really a simulation of natural evolution? A negative answer to such questions would limit the potential of AI.

The value for humans of creating a labor-saving or capacity-enhancing tool is not the same as the value of creating an autonomous tool user. The two goals are at odds. Unless it constitutes a truly autonomous system, an AI manifests only the intentionality and priorities of its programmers, reflecting their values. Talk of an AI’s perceptions, beliefs, goals or knowledge is a convenient metaphorical way of speaking, but is no more than a shorthand for meanings held by programmers. A truly autonomous system will have its own values, needs, and meanings. Mercifully, no such truly autonomous AI yet exists. If it did, programmers would only be able to impress their values on it in the limited ways that adults educate children, governments police their citizenry, or masters impose their will on subordinates. At best, SI would be no more predictable or controllable than an animal, slave, child or employee. At worst, it would control, enslave, and possibly displace us.

A reasonable rationale for AGI requires it to remain under human control, to serve human goals and values and to act for human benefit. Yet, such a tool can hardly have the desired capabilities without being fully autonomous and thus beyond human control. The notion of “containing” an SI implies isolation from the real world. Yet, denial of physical access to or from the real world would mean that the SI would be inaccessible and useless. There would have to be some interface with human users or interlocutors just to utilize its abilities; it could then use this interface for its own purposes. The idea of pre-programming it to be “friendly” is fatuously contradictory. For, by definition, SI would be fully autonomous, charged with its own development, pursuing its own goals, and capable of overriding its programming. The idea of training human values into it with rewards and punishments simply regresses the problem of artificially creating motivation. For, how is it to know what is rewarding? Unless the AI is already an agent competing for survival like an organism, why would it have any motivation at all? If it is such an agent, why would it accept human values in place of its own? And how would its intelligence differ from that of natural organisms, which are composed of cooperating cells, each with its relative autonomy and needs? The parts of a machine are not like the parts of an organism.

While a self-developing neural net is initially designed by human programmers, like an organism it would constitute a sort of black box. Unlike with a designed artifact, we can only speculate about the structure, functioning, and principles of a self-evolving agent. This is a fundamentally different relationship from the one we have to ordinary artifacts, which in principle do what we want and are no more than what we designed them to be. These extremes establish an ambiguous zone between a fully controllable tool and a fully autonomous agent pursuing its own agenda. If there is a key factor that would lead technology irreversibly beyond human control, it is surely the capacity to self-program, based on learning, combined with the capacity to self-modify physically. There is no guarantee that an AI capable of programming itself can be overridden by a human programmer. Similarly, there is no guarantee that programmable matter (nanites) would remain under control if it can self-modify and physically reproduce. If we wish to retain control over technology, it should consist only of tools in the traditional sense—systems that do not modify or replicate themselves.

Sentience and consciousness are survival strategies of natural replicators. They are based on the very fragility of organic life as well as the slow pace of natural evolution. If the advantage of artificial replicators is to transcend that fragility from the outset, then their very robustness might also circumvent the evolutionary premise—of natural selection through mortality—that gave rise to sentience in the first place. And the very speed of artificial evolution could drastically outpace the ability of natural ecosystems to adapt. The horrifying possibility could be a world overrun by mechanical self-replicators, an artificial ecology that outcompetes organic life yet fails to evolve the sentience we cherish as a hallmark of living things. (Imagine something like Kurt Vonnegut’s ‘ice-nine’, which could escape the planet and replicate itself indefinitely using the materials of other worlds. As one philosopher put it: a Disneyland without children!) If life happened on this planet simply because it could happen, then possibly (with the aid of human beings) an insentient but robust and invasive artificial nature could also happen to displace the natural one. A self-modifying AI might cross the threshold of containment without our ever knowing or being able to prevent it. Self-improving, self-replicating technology could take over the world and spread beyond: a machine death of the universe. This exotic possibility would not seem to correspond to any human value, motivation or hope—even those of the staunchest posthumanists. Neither superintelligence nor silicon apocalypse seems very desirable.

The irony of AI is that it redefines intelligence as devoid of the human emotions and values that actually motivate its creation. This reflects a sad human failure to know thyself. AI is developed and promoted by people with a wide variety of motivations and ideals apart from commercial interest, many of which reflect some questionable values of our civilization. Preserving a world not dominated one way or another by AI might depend on a timely disenchantment with the dubious premises and values on which the goals of AI are founded. These tacitly include: control (power over nature and others), transcendence of embodiment (freedom from death and disease), laziness (slaves to perform all tasks and effortlessly provide abundance), greed (the sheer hubris of being able to do it or being the first), creating artificial life (womb-envy), creating super-beings (god-envy), creating artificial companions (sex-envy), and ubiquitous belief in the mechanist metaphor (computer-envy—the universe is metaphorically or literally digital).

Some authors foresee a life for human consciousness in cyberspace, divorced from the limitations of physical embodiment—the update of an ancient spiritual agenda. (While I think that impossible, it would at least unburden the planet of all those troublesome human bodies!) Some authors cite the term cosmic endowment to describe and endorse a post-human destiny of indefinite colonization of other planets, stars, and galaxies. (Endowment is a legal concept of property rights and ownership.) They imagine even the conversion of all matter in the universe into “digital mind,” just as the conquistadors sought to convert the new world to a universal faith while pillaging its resources. At heart, this is the ultimate extension of manifest destiny and lebensraum.

Apart from such exotic scenarios, the world seems to be heading toward a dystopia in which a few people (and their machines) hold all the means of production and no longer need the masses either as workers or as consumers—and certainly not as voters. The entire planet could be their private gated community, with little place for the rest of us. Even if it proves feasible for humanity to retain control of technology, it might only serve the aims of the very few. This could be the real threat of an “AI takeover,” one that is actually a political coup by a human elite. How consoling will it be to have human overlords instead of superintelligent machines?