Origins of the white lie

In the wake of the recent discovery of unmarked graves of indigenous children at state-sponsored residential schools run by churches, there has been much discussion about attitudes and practices of colonialism in Canada. Hardly institutions of learning, these were indoctrination centres serving cultural genocide. It is politically correct now to look back with revulsion, as though we lived in a different world. Should we be so smug? After all, the last Indian residential school closed only twenty-five years ago.

What is particularly horrifying—and yet perplexing—is the prospect that many of the people running these schools (and the government officials who commissioned them) probably felt they were doing the right thing in “helping” indigenous children assimilate into white society. Apart from cynical land-grabbing and blatant racism, many in government may have thought themselves well-motivated, and the school personnel may have been sincerely devout. Yet, the result was malicious and catastrophic. There were elements of the same mean-spirited practices in English boarding schools and ostensibly charitable institutions. Nineteenth-century novels depict the sadism in the name of character formation, discipline and obedience, which were supposed to prepare young men and women for their place in society. How is it possible to be mean and well-meaning at the same time?

Certainly, “the white man’s burden” was a notion central to colonialism. It is related to the European concept of noblesse oblige, which was an aspect of the reciprocal duties between peasant and aristocrat in medieval society. The very fact that such class relationships (between the lowly and their betters) persist even today is key to the sort of presumption of superiority illustrated by the residential schools. Add to class the element of race, then combine with religious proselytizing, empire and greed, and you have a rationale for conquest. The natives were regarded suspiciously as ignorant savages who made no proper use of their land and “resources.” Their bodies were raw material for slavery and their souls for conversion. All in the name of civilizing “for their own good.” Indeed, slavery was a global institution from time immemorial, practiced in Canada as well as the U.S., and practiced even by indigenous peoples themselves.

In view of the Spanish Inquisition in the European homeland, it cannot be too surprising that the conquistadors applied similar methods abroad. The fundamental religious assumption was that the body has little importance compared to the soul. In the medieval Christian context, it was self-evident that the body could be mistreated, tortured, even burnt alive for the sake of the soul’s salvation. According to contemporary accounts, the conquistadors committed atrocities in a manner intended to outwardly honor their religion: natives hanged and burned at the stake—in groups of thirteen as a tribute to Christ and his twelve apostles! The utter irony and perversity of such “logic” has more recent parallels and remains just as possible today.

The Holocaust enacted an intention to keep society pure by eliminating elements deemed undesirable. Eugenics was a theme of widespread interest in the early twentieth century, not only in Nazi Germany. Hannah Arendt argued controversially that the atrocities were committed less by psychopathic monsters than by ordinary people who more or less believed in what they were doing, if they thought about it deeply at all. In the wake of WW2, interest was renewed in understanding how such things can happen in the name of nationalism, racial superiority, or some other captivating agenda. In particular: to understand how unconscionable behavior is internally justified. The psychological experiments of Stanley Milgram, about obedience to authority, shed light on the banality of evil, by showing how easy it is for people to commit acts of torture when an authority figure assures them it is necessary and proper. The underlying question remains: how to account for the disconnect between common sense (or compassion or morality) and behavior that can later (or by others) be judged patently wrong? By what reasoning do people justify their evil deeds so that they appear to themselves acceptable or even good?

Self-deception seems to be a general human foible, part and parcel of the ability to deceive others. It can be deliberate, even when unconscious. Or, it can be incidental, as when we simply do not have conscious access to our motives. Organisms, after all, are cobbled together by natural selection in a way that coheres only enough to ensure survival. The ego or rational mind, too, is a cobbled feature, cut off from access to much of the organism’s workings, with which it would not be adaptive for it to directly interfere. The conscious self is charged by society to produce behavior in accord with social expectations, yet is poorly equipped as an organ of self-control.

Biology is no excuse, of course, especially since our highest ideals aspire to transcend biological limitations. Yet, a brief digression may shed some light. The primary aim of every organism is its own existence. Life, by definition, is self-serving; yet our species is characteristically altruistic toward those recognized as its own kind. The human organism discovered reason as a survival strategy. It has surrounded itself with tools, machines, factories and institutions that serve some purpose other than their own existence. As seemingly rational agents in the world, we try to shape the world in certain ways that nevertheless fit our needs as organisms. Thus, we purport to act according to some rational program, even for the good of others or society, but one that often turns out to be self-serving or to serve our particular group. The disconnect is a product of evolutionary history. We aspire and purport to be rational, but we were not rationally designed.

Hypocrisy is, at bottom, a failure to be (self-)critical enough. The context of that failing is that we believe we are acting in accordance with one agenda and do not see how we are also acting in accordance with a very different one. We think we are pursuing one aim and fail to recognize another aim inconsistent with it. Deaf to the dissonance, the right hand (hemisphere?) knows not what the left is doing. A person, group, or class behaves according to their interests, and believes some story that justifies their entitlement, to themselves and to others. The cover story is somehow made to jibe with other motivations behind it. What is supposedly objective fact is molded to fit subjective desire.

As social creatures, we tend to look to others for clues to how we should behave. But that is a self-fulfilling prophecy when everyone else is doing likewise. There must be some way to weigh action that is not based on social norms. This is the proper function of reason, argument, debate, and social criticism. It is not to convince others of a point of view, but to find what is wrong with a point of view (no matter how good-sounding) and hopefully set it right. In particular, it should reveal how one intention can be inconsistent with another intention that lurks at its core, just as the whole structure of the brain lurks beneath the neo-cortex. Reason ought to reveal internal inconsistency and the self-deception that permits it.

Yet, self-deception is a concomitant of the ability to deceive others, which is built into our primate heritage and the structure of language. Society can only cohere through cooperation, and there must be ways to tell the cooperators from the defectors in society. Reputation serves this function. But reputation is an image in people’s minds that can be manipulated and faked. As any actor can tell you, the best way to make your performance emotionally convincing is to believe it yourself. If your story is a lie, then you too must believe the lie if you expect to convince others of your sincerity. Furthermore, deception of others dovetails with their willingness to be deceived—namely, their own self-deceptions.

We know that people consciously create works of fiction and fantasy; also, that they sometimes knowingly lie. Self-deception overlaps these categories: fiction that we convince ourselves is fact. Rationally, we know that opinions—when expressed as such—are someone’s thoughts. But the category of fact renounces this understanding in favor of an objective truth that has no author, requires no evidence, and for which no individual is responsible, except perhaps God. We disown responsibility for our statements by failing to acknowledge them as personal assertions and beliefs, instead proposing them offhand as free-standing truths in the public domain.

Religion, patriotism, and cultural myth are not about reason or factual truth, but about social cohesion and soothing of existential anxiety through a sense of belonging. We trust those who seem to think and act like us. But this is a double-edged sword. It makes toeing the line a condition of membership in the group. Controlling the behavior of members helps the group cohere, but provides no control over the behavior of the group itself.

Scientific propositions can be pinned down and disproven, but not so cultural myths and biases, nor religious beliefs, which cannot even be unambiguously comprehended, let alone debunked in a definitive way. Like water for the fish, the ethos of a society’s prejudices cannot easily be perceived. As Scott Atran has observed, “…most people in our society accept and use both science and religion without conceiving of them in a zero-sum conflict. Genesis and the Big Bang theory can perfectly well coexist in a human mind.” Perhaps that foible is a modern sign that we have not outgrown the capacity for self-deception, and thus for evil.

Splitting hairs with Occam’s razor

Before the 19th century, science was called natural philosophy or natural history. Since the ancient Greeks, the study of nature had been a branch of philosophy, a gentlemanly discussion of ideas by men who disdained to soil their hands with actual materials. What split science off from medieval philosophy was careful observation, quantitative measurement with instruments, and what became known as scientific method: the testing of ideas by hands-on experiment. Science became the application of technology to the study of nature. This in turn gave rise to further technology in a happy cycle involving the mutual interaction of mind and matter.

Philosophy literally means love of wisdom. In modern times it has instead largely come to mean love of sophistry. The secession of science from philosophy left the latter awkwardly bereft and defensive. One of the reasons why science emerged as distinct from philosophy is that medieval scholastic philosophy had been (as modern philosophy largely remains) mere talk about who said what about who said what. Science got down to brass tacks, focusing on the natural world, but at the cost of generality. Philosophy could trade on being more general in focus, if less verifiably factual. It could still deal with areas of thought not yet appropriated by scientific study, such as the nature of mind. And it could deal in a critical way with concepts developed within science—which became known as philosophy of science. Either way, the role of philosophy involved the ability to stand back to examine ideas for their logical consistency, meaning, tacit assumptions, and function within a broader context. The focus was no longer nature itself but thought about it and thought in general. Philosophy assumed the role of “going meta,” to critically examine any proposed idea or system from a viewpoint outside it. This meant examining a bigger picture, outside the terms and borders of the discipline concerned, and examining the relationships between disciplines. (Hence, metaphysics as a study beyond physics.) However, that was not the only response of philosophy to the scientific revolution.

Philosophy had long been closely associated with logic, one of its tools, which is also the basis of mathematics. Both logic and mathematics seemed to stand apart from nature as eternal verities, upstream of science. Galileo even wrote that mathematics is the language of the book of nature. So, even though science appropriated these as tools for the study of nature, and was strongly shaped by them, logic and math were never until recently questioned or considered the subject matter of scientific study. The increasing success of mathematical description in the physical sciences led to a general “physics envy,” whereby other sciences sought to emulate the quantifying example of physics. Sometimes this was effective and appropriate, but sometimes it led to pointless formalism, which was often the case in philosophy. Perhaps more than any other discipline, philosophy suffered from an inferiority complex in the shadow of its fruitful younger sibling. Philosophy could legitimize itself by looking scientific, or at least technical.

Certainly, all areas of human endeavor have become increasingly specialized over time. This is true even in philosophy, whose mandate remains, paradoxically, generalist in principle. Apart from the demand for rigor, the tendency to specialize may reflect the need for academics to remain employed by creating new problems to solve; to make their mark by staking out a unique territory in which to be expert; and to differentiate themselves from other thinkers through argument. Specialization, after all, is the art of knowing more and more about less and less, following the productive division of labor that characterizes civilization. On the other hand, specialization can lead to such fragmentation that thinkers in diverse intellectual realms are isolated from each other’s work. Worse, it can isolate a specialty from society at large. That can imply an enduring role for philosophers as generalists. They are positioned to stand back to integrate disparate concepts and areas of thought into a larger whole—to counterbalance specialization and interpret its products to a larger public. Yet, instead of rising to the occasion provided by specialization, philosophy more often succumbs to its hazards.

Science differs from philosophy in having nature as its focus. The essential principle of scientific method is that disagreement is settled ultimately by experiment, which means by the natural world. That doesn’t mean that questions in science are definitively settled, much less that a final picture can ever be reached. The independent existence of the natural world probably means that nature is inexhaustible by thought, always presenting new surprises. Moreover, scientific experiments are increasingly complex, relying on tenuous evidence at the boundaries of perception. This means that scientific truth is increasingly a matter of statistical data, whose interpretation depends on assumptions that may not be explicit—until philosophers point them out. Nevertheless, there is progress in science. At the very least, theories become more refined, more encompassing, and quantitatively more accurate. That means that science is progressively more empowering for humanity, at least through technology.

Philosophy does not have nature as arbiter for its disputes, and it has little opportunity to contribute directly to technological empowerment. Quite the contrary, modern philosophers mostly quibble over contrived dilemmas of little interest or consequence to society. These are often scarcely more than make-work projects. The preoccupations and even titles of academic papers in philosophy are often expressed in terms that mock natural language. In the name of creating a precise vocabulary, their jargon establishes a realm of discourse impenetrable to outsiders—certainly to lay people and often enough even to other academics. More than an incidental by-product of useful specialization, abstruseness seems a deliberate ploy to justify existence within a caste and to perpetuate a self-contained scholastic world. If philosophical issues are by definition irresolvable, this at least keeps philosophers employed.

Philosophy began as rhetoric, which is the art of arguing convincingly. (Logic may have arisen as a rule-based means to rein in the extravagances of rhetoric.) Argument remains the hallmark of philosophy. Without nature to rein thought in, as in science, there is only logic and common sense as guides. Naturally, philosophers do attempt to present coherent reasoned arguments. But logic is only as good as the assumptions on which it is based. And these are wide open to disagreement. Philosophical argument does little more than hone disagreement and provide further opportunities to nit-pick. For the most part, philosophical argument promotes divergence, when its better use (“standing back”) is to arrive at convergence by getting to the bottom of things. That, however, would risk putting philosophers out of a job.

Philosophy resembles art more than science. Art, at least, serves a public beyond the coterie of artists themselves. Art too promotes divergence, and literature serves the multiplication of viewpoints we value as creative in our culture of individualism. Like artists, professional philosophers might find an esthetic satisfaction in presenting and examining arguments; they might revel in the opportunity to stand out as clever and original. However, philosophy tends to be less accessible to the general public than art. (Try to imagine a philosophy museum or gallery.) Professional philosophy has defined itself as an ivory-tower activity, and academic papers in philosophy tend to make dull reading, when comprehensible at all. That does not prevent individual philosophers from writing books of general and even topical interest. Sometimes these are eloquent, occasionally even best-sellers. Philosophers may do their best writing, and perhaps their best thinking, when addressing the rest of us instead of their fellows. After all, they were once advisors to rulers, providing a public service.

If philosophy is the art of splitting hairs, the metaphor generously conjures the image of an ideally sharp blade—the cutting edge of logic or incisive criticism. The other metaphor—of “nitpicking”—has less savory connotations but more favorable implications. Picking nits is a grooming activity of social animals, especially primates. It serves immediately to promote cleanliness (next to godliness, after all). More broadly, it serves to bond members of a group. We complain of it as a negative thing, an excessive attention to detail that distracts from the main issue. Yet its social function is actually to facilitate bonding. The metaphor puts the divisive aspect of philosophy in the context of a potentially unifying role.

That role can be fulfilled in felicitous ways, such as the mandate to stand back to see the larger picture, to find hidden connections and limiting assumptions, to “go meta.” It consists less in the skill to find fault with the arguments of others than in the ability to identify faulty thinking in the name of arriving at a truth that is the basis for agreement. Perhaps most philosophers would insist that is what they actually do. Perhaps finding truth can only be done by first scrutinizing arguments in detail and tearing them apart. However, that should never be an end in itself. As naïve as it might seem, reality—not arguments—should remain the focus of study in philosophy, as it is in science. Above all, specialization in philosophy should not distract from larger issues, but aim for them. Analysis should be part of a larger cycle that includes synthesis. Philosophy should be true to its role of seeking the largest perspective and bringing the most important things into clear focus. It should again be a public service, to an informed society if no longer to kings.

The quest for superintelligence

Human beings have been top dogs on the planet for a long time. That could change. We are now on the verge of creating artificial intelligence that is smarter than us, at least in specific ways. This raises the question of how to deal with such powerful tools—or, indeed, whether they will remain controllable tools at all or will instead become autonomous agents like ourselves, other animals, and mythical beings such as gods and monsters. An agent, in a sense derived from biology, is an autopoietic system—one that is self-defining, self-producing, and whose goal is its own existence. A tool or machine, on the other hand, is an instrument of an agent’s purposes. It is an allopoietic system—one that produces something other than itself. Unless it happens to also be an autopoietic system, it could have no goals or purposes of its own. An important philosophical issue concerning AI is whether it should remain a tool or should become an agent in its own right (presuming that is even possible). Another issue is how to ensure that powerful AI remains in tune with human goals and values. More broadly: how to make sure we remain in control of the technologies we create.

These questions should interest everyone, first of all because the development of AI will affect everyone. And secondly, because many of the issues confronting AI researchers reflect issues that have confronted human society all along, and will soon come at us on steroids. For example, the question of how to align the values of prospective AI with human interests simply projects into a technological domain the age-old question of how to align the values of human beings among themselves. Computation and AI are the modern metaphor for understanding the operations of brains and the nature of mind—i.e., ourselves. The prospect of creating artificial mind can aid in the quest to understand our own being. It is also part of the larger quest: not only to control nature but to re-create it, whether in a carbon or a silicon base. And that includes re-creating our own nature. Such goals reflect the ancient dream to self-define, like the gods, and to acquire godlike powers.

Ever since Descartes and La Mettrie, the philosophy of mechanism has blurred the distinction between autopoietic and allopoietic systems—between organisms and machines. Indeed, it traditionally regards organisms as machines. However, an obvious difference is that machines, as we ordinarily think of them, are human artifacts, whereas organisms are not. That difference is being eroded from both ends. Genetic and epigenetic manipulation can produce new creatures, just as artificial nucleosynthesis has produced new, man-made elements. At the other end lies the prospect to initiate a process that results in artificial agents: AI that bootstraps into an autopoietic system, perhaps through some process of recursive self-improvement. Is that possible? If so, is it desirable?

It does not help matters that the language of AI research casually imports mentalistic terms from everyday speech. The literature is full of ambiguous notions from the domain of everyday experience, which are glibly transferred to AI—for example, concepts such as agent, general intelligence, value, and even goal. Confusing “as if” language crops up when a machine is said to reason, think or know, to have incentives, desires, goals, or motivations, etc. Even if metaphorical, such wholesale projection of agency into AI begs the question of whether a machine can become an autopoietic system, and obscures the question of what exactly would be required to make it so.

To amalgamate all AI competencies in one general and powerful all-purpose program certainly has commercial appeal. It now seems like a feasible goal, but may be too good to be true. For, the concept of artificial general intelligence (AGI) could turn out not even to be a coherent notion—if, for example, “intelligence” cannot really be divorced from a biological context. To further want AGI to be an agent could be a very bad idea. Either way, the fear is that AI entities, if super-humanly intelligent, could evade human control and come to threaten, dominate, or even supersede humanity. A global takeover by superintelligence has been the theme of much science fiction and is now the topic of serious discussion in think tanks around the world. While some transhumanists might consider it desirable, probably most people would not. The prospect raises many questions, the first being whether AGI is inevitable or even desirable. A further question is whether AGI implies (or unavoidably leads to) agency. If not, what threats are posed by an AGI that is not an agent, and how can they be mitigated?

I cannot provide definite answers to these questions. But I can make some general observations and report on a couple of strategies proposed by others.  An agent is necessarily embodied—which means it is not just physically real but involved in a relationship with the world that matters to it. Specifically, it can interact with the world in ways that serve to maintain itself. (All natural organisms are examples, and here we are considering the possibility of an artificial organism.) One can manipulate tools in a direct way, without having to negotiate with them as we do with agents such as people and other creatures. The concept of program initially meant a unilateral series of commands to a machine to do something. A command to a machine is a different matter than a command to an agent, which has its own will and purposes and may or may not choose to obey the command. But the concept of program has evolved to include systems with which we mutually interact, as in machine learning and self-improving programs. This establishes an ambiguous category between machine and agent. Part of the anxiety surrounding AGI stems from the novelty and uncertainty regarding this zone.

It is problematic, and may be impossible, to control an agent more intelligent than oneself. The so-called value alignment problem is the desperate quest to nevertheless find a way to have our cake (powerful AI to use at our discretion) and be able to eat it too (or perhaps to keep it from eating us). It is the challenge to make sure that AI clearly “understands” the goals we give it and pursues no others. If it has any values at all, these should be compatible with human values and it should value human life. I cannot begin here to unravel the tangled fabric of tacit assumptions and misconceptions involved in this quest. (See instead my article, “The Value Alignment Problem,” posted in the Archive on this website.) Instead, I point to two ways to circumvent the challenge. The first is simply not to aim for AGI, let alone for agents. This strategy is proposed by K. Eric Drexler. Instead of consolidating all skill in one AI entity, it would be just as effective, and far safer, to create ad hoc task-oriented software tools that do what they are programmed to do because their capacity to self-improve is deliberately limited. The second strategy is proposed by Stuart Russell: to build uncertainty into AI systems, which are then obliged to hesitate before acting in ways adverse to human purposes—and thus to consult with us for guidance.
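To convey the flavor of that second strategy, here is a minimal sketch in Python. It assumes, purely for illustration, that the agent represents its uncertainty about human preferences as a handful of guessed utility values per candidate action; the action names, sample values, and deferral threshold are invented, and the sketch is not Russell's actual formalism.

```python
# Toy illustration (not Stuart Russell's actual formulation): an agent that is
# uncertain about the human's utility for each action, and defers to the human
# whenever that uncertainty leaves a real chance the action is unwanted.
# All names, numbers, and thresholds here are invented for illustration.

from dataclasses import dataclass
from statistics import mean

@dataclass
class Action:
    name: str
    utility_samples: list  # the agent's guesses (samples) of the human's utility for this action

def choose(actions, harm_threshold=0.2):
    """Pick the action with the best estimated utility, unless the chance that the
    human actually dislikes it (utility < 0) is too high, in which case defer and ask."""
    best = max(actions, key=lambda a: mean(a.utility_samples))
    p_harm = sum(1 for u in best.utility_samples if u < 0) / len(best.utility_samples)
    if p_harm > harm_threshold:
        return f"ASK HUMAN before doing '{best.name}' (estimated chance of harm: {p_harm:.0%})"
    return f"DO '{best.name}' (estimated chance of harm: {p_harm:.0%})"

if __name__ == "__main__":
    actions = [
        Action("tidy the desk", utility_samples=[1.0, 0.8, 0.9, 1.1, 0.7]),
        Action("shred all papers", utility_samples=[3.0, 2.5, -2.0, 2.8, -1.0]),
    ]
    print(choose(actions))       # highest mean utility, but real downside risk -> defers
    print(choose(actions[:1]))   # confidently beneficial -> acts without asking
```

The only point of the toy example is structural: uncertainty about what humans actually want gives the machine a reason to stop and ask rather than to act.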

The goal to create superintelligence must be distinguished from the goal to create artificial agents. Superintelligent tools can exist that are not agents; agents can exist that are not superintelligent. The problems of controlling AI and aligning its values are byproducts of the desire to create meta-tools that are neither conventional tools nor true agents. Furthermore, real-world goals for AI must be distinguished from specific tasks. We understandably seek powerful tools to achieve our real-world goals for us, yet fear they may misinterpret our wishes or carry them out in some undesired way. That dilemma is avoided if we only create programs to accomplish specified tasks. That means more work for humans than automating automation itself would, but it keeps technology under human control.

Why seek to eliminate human input and participation? An obvious answer is to “optimize” the accomplishment of desired goals. That is, to increase productivity (equals wealth) through automation and thereby also reduce the burdens of human labor. Perhaps modern human beings are never satisfied with enough? Perhaps at the same time we simply loathe effort of any kind, even mental. Shall we just compulsively substitute automation for human labor whenever possible? Or are we indulging as well a faith that AI could accomplish all human purposes better and more ecologically than people? If the goal is ultimately to automate everything, what would people then do with their time when they are no longer obliged to do anything? If the hope behind AI is to free us from drudgery of any sort (in order, say, to “make the best of life’s potential”) what is that potential? How does it relate to present work and human satisfactions? What will machines free us for?

And what are the deeper and unspoken motivations behind the quest for superintelligence? To imitate life, to acquire godlike powers, to transcend nature and embodiment, to create an artificial ecology? Such questions traditionally lie outside the domain of scientific discourse. They become political, social, ethical and even religious issues surrounding technology. But perhaps they should be addressed within science too, before it is too late.

Anthropocene: from climate change to changing human nature

Anthropocene is a neologism meaning “a geological epoch dating from the commencement of significant human impact on Earth’s geology and ecosystems.” However, what is involved is more than unintended consequence. Having already upset the natural equilibrium, it seems we are now obliged to deliberately intervene—either to restore it or to finish the job of creating a man-made world in place of nature. What is new on the anthropo scene is the prospect of taking deliberate charge of human destiny, indeed the future of the planet. It is the prospect of completely re-creating nature and human nature—blurring or obliterating the very distinction between natural and artificial. The Anthropocene could be short-lived, either because the project is doomed and we do ourselves in or because the form we know as human may be no more than a stepping stone to something else.

In one sense, the Anthropocene dates not from the 20th century (nor even the Industrial Revolution) but from human beginnings. For, the function of culture everywhere has always been to redefine the world in human terms, and our presence has always reshaped the landscape, creating extinctions and deserts along the way. Technology has always had planetary effects, which until recently have been moderate and considered only in hindsight. New technologies now afford possibilities of total control and require total foresight. Bio-engineering, nanotechnology, and artificial intelligence are latter-day means to an ancient dream of acquiring godlike powers. Along with such powers go godlike responsibilities.

Because that dream has been so core to human being all along, and yet so far beyond reach, we’ve been in denial of it over the ages, always projecting god-envy into religious and mythological spheres, which have always cautioned against the hubris of such pretension. Newly emboldened technologically, however, humanity is finally coming out of the closet.

The Anthropocene ideal is to master all aspects of physical reality, redesigning it to human taste. Actually, that will mean to the taste of those who create the technology. This raises the political question: who, if anyone, should control these technologies? Whom will they benefit? More darkly, what are the risks that will be borne by all? When there was an abundance of wild, the natural world was taken for granted as a commons, which did not prevent private interests from fencing, owning, and exploiting ever more of it for their own profit.

From biblical times, the idea of natural resource put nature in a context of use, as the object of human purposes. And that meant the purposes of certain societies or groups, at the cost of others. Now that technologies exist to literally rearrange the building blocks of life and of matter, the concept of resource shifts from specific minerals, plants and animals to a more universal stuff—even “information.” One political question is who will control these new unnatural resources, and how to preserve them as a new sort of commons for the benefit of all? Another is how to proceed safely—if there is such a thing—in the wholesale transformation of nature and ourselves.

The human essence has always been a matter of controversy. More than ever it is now up for grabs. Because we are the self-creating creature, we cannot look to a fixed human nature, nor to a consensus, for the values that should guide our use of technology. A vision of the future—and the fulfillment of human potential—is a matter of opinions and values that differ widely. Some see a glorious technological future that is not pinned to the current human form. Others envision a way of life more integrated with nature and accepting of natural constraints. Still others view the human essence as spiritual, with human destiny unfolding on some divine timetable. The means to change everything are now available, but without a consolidated guiding vision.

Genome information is now readily available and so are technologies for using it to do genetic experiments at home. While some technologies require expensive laboratory equipment, citizen scientists (bio-hackers) can get what they need online and through the mail. Since much of the technology is low-tech and readily available, anyone in their basement can launch us into a brave new unnatural world.

One impetus for such home experimentation is social disparity: biohacking is in part a rebellion against the unfairness of present social and health systems. Like the hacker movement in general, biohackers want knowledge and technology to be fairly and democratically available, which means relatively cheap if not in the public domain. It’s about public access to what they consider should be a commons. They protest the patenting of private intellectual property that drives up the price of technology and medicine and restricts the availability of information. Social disparity promises to be endemic to all new technologies that are affordable (at least initially) only to an elite.

There are personal risks for those who experiment on themselves with unproven drugs and genetic modification. But there are risks to the environment shared by all as well, for example when an engineered mutant is deliberately released into the wild to deal with the spread of ticks that carry Lyme disease or the persistence of malaria-carrying mosquitos. The difference between a genetic solution and a conventional one can be that the new organism reproduces itself, changing the biosphere in potentially unforeseeable and irreversible ways. That applies to interventions in the human genome too. Bio-hacking is but one illustration of the potential benefits and threats of bio-engineering, which is the human quest to change biology deliberately, including human biology. The immediate promise is that genetic defects can be eliminated. But why stop there? Ideal citizens can be designed from scratch. Perhaps mortality can be eliminated. That amounts to hijacking evolution or finally taking charge of it, according to your view. To change human nature might seem a natural right, especially since “human nature” includes an engrained determination to self-define. But does that include the right to define life in general and nature at large, to tinker freely with other species, to terra-form the planet? And what constitutes a “right?” Nature endows creatures with preferences and instincts but not with rights, which are a human construct, reflecting our very disaffection from nature. Who will determine the desirable traits for a future human or post-human being and on what grounds?

Tinkering with biology is one way to enhance ourselves, but another is through artificial intelligence. Bodies and now minds can be augmented prosthetically, potentially turning us into a new cyborg species (or a number of them). Another dream is to transcend embodiment (and mortality) entirely, by uploading a copy of yourself into an eternally-running supercomputer. Some of these aspirations are pipe dreams. But the possibility of an AI takeover is real and already upon us in minor ways: surveillance, data collection, smart appliances, etc. The ultimate potential is to automate automation, to relieve human beings (or at least some of them) of the need to work physically and even mentally. Your robot can do all your housework, your job, even take your vacations for you! As with biotechnology, the surface motivation driving AI development is no doubt commercial and military. Yet, lurking beneath is the unconscious desire to step into divine shoes: to create life and mind from scratch even as we free ourselves from the limitations of natural life and mind.

Like biotechnology, the tools for AI development are commonly available and relatively cheap. All you need is savvy and a laptop. The implicit aim is artificial “general” intelligence, matching or exceeding human mental and physical capability. That could be in the form of superintelligent tools that remain under human control, designed for specific tasks. But it could also mean a robotic version of human slaves. Apart from the ethics involved, slaves have never been easy to control. It comes down to a tradeoff between the advantages of autonomy in artificial agents and the challenge to control them. Autonomy may seem desirable because such agents could do literally everything for us and better, with no effort on our part. But if such creations are smarter than we are, and are in effect their own persons, how long could we remain their masters? If they have their own purposes, why would they serve ours? The very idea of automating automation means forfeiting control at the outset, since the goal is to launch AIs that effectively create themselves.

Radical conservationists and transhumanist technophiles may be at cross-purposes, but so are more moderate advocates of environment or business. As biological creatures, we inherit the universe provided by nature, which we try to make into something corresponding to our human preferences. The materials we work with ultimately derive from nature and obey laws we did not make. Scientific understanding has enabled us to reshape that world to an extent, using those very laws. We don’t yet know the ceiling of what is possible, let alone what is wise. How far should we go in transforming ourselves and nature? Why create artificial versions of ourselves at all, let alone artificial versions of gods? What used to be philosophical questions are becoming scientific and political ones. The world is our oyster and we the irritating grit within. Will the result be a pearl?

R U Real?

For millennia, philosophers have debated the nature of perception and its relation to reality. Their speculations have been shaped by the prevailing concerns and metaphors of their age. The ancient Greeks, with slaves to do their work, were less interested in labor-saving inventions than in abstract concepts and principles. Plato’s allegory of the Cave refers to no technology more sophisticated than fire—harking back, perhaps, to times when people literally lived in caves. (It does refer to the notion of prisoner, long familiar from slavery and military conquest in the ancient world.)

In Plato’s low-tech metaphor, the relationship of the perceiving subject to the objects of perception is like that of someone in solitary confinement. The unfortunate prisoner’s head is even restrained in such a way that he/she is able only to see the shadows cast, on the walls of the cave, by objects passing behind—but never the objects themselves. It was a prescient intuition, anticipating the later discovery that the organ responsible for perception is the brain, confined like a prisoner in the cave of the skull. Plato believed it was possible to escape this imprisonment. In his metaphor, the liberated person could emerge from the cave and see things for what they are in the light of day—which to Plato meant the light of pure reason, freed from dependence on base sensation.

Fast forward about two millennia to Catholic France, where Descartes argues that the perceiving subject could be systematically deceived by some mischievous agent capable of falsifying the sensory input to the brain. Descartes understood that knowledge of the world is crucially dependent on afferent nerves, which could be surgically tampered with. (The modern version of this metaphor is the “brain in a vat,” wired up to a computer that sends all the right signals to the brain to convince it that it is living in a body and engaged in normal perception of the world.) While Descartes was accordingly skeptical about knowledge derived from the senses, he claimed that God would not permit such a deception. In our age, in contrast, we not only know that deception is feasible, but even court it in the form of virtual entertainments. The film The Matrix is a virtual entertainment about virtual entertainments, expounding on the theme of the brain in a vat.

Fast forward again a century and a half to Immanuel Kant. Without recourse to metaphor or anatomy, he clearly articulated for the first time the perceiving subject’s inescapable isolation from objective reality. (In view of the brain’s isolation within the skull, the nature of the subject’s relation to the outside world is clearly not a transparent window through which things are seen as they “truly” are.) Nevertheless, while even God almighty could do nothing about this unfortunate condition, Kant claimed that the very impossibility of direct knowledge of external reality was reason for faith. In an age when science was encroaching on religion, he contended that it was impossible to decide issues about God, free will, and immortality—precisely because they are beyond reach in the inaccessible realm of things-in-themselves. One is free, he insisted, to believe in such things on moral if not epistemological grounds.

Curiously, each of these key figures appeals to morality or religion to resolve the question of reality, in what are essentially early theories of cognition. Plato does not seem to grasp the significance of his own metaphor as a comment on the nature of mind. Rather, it is incidental to his ideas on politics and the moral superiority of the “enlightened.” Descartes—who probably knew better, yet feared the Church—resorts to God to justify the possibility of true knowledge. And Kant, for whom even reason is suspect, had to “deny knowledge in order to make room for faith.” We must fast forward again another century to find a genuinely scientific model of cognition. In Hermann Helmholtz’s notion of unconscious inference, the brain constructs a “theory” of the external world using symbolic representations that are transforms of sensory input. His notion is a precursor of computational theories of cognition. The metaphor works both ways: one could say that perception is modeled on scientific inference; but one can equally say that science is a cognitive process which recapitulates and extends natural perception.

Given its commitment to an objective view, it is ironic that science shied away from the implications of Kant’s thesis that reality is off-limits to the mind. While computational theories explain cognition as a form of behavior, they fail to address: (1) the brain’s epistemic isolation from the external world; (2) the nature of conscious experience, if it is not a direct revelation of the world; and (3) the insidious circularity involved in accounts of perception.

To put yourself in the brain’s shoes (first point, above), imagine you live permanently underwater in a submarine—with no periscope, port holes, or hatch. You have grown up inside and have never been outside its hull to view the world first-hand. You have only instrument panels and controls to deal with, and initially you have no idea what these are for. Only by lengthy trial and error do you discover correlations between instrument readings and control settings. These correlations give you the idea that you are inside a vessel that can move about under your direction, within an “external” environment that surrounds it. Using sonar, you construct a “picture” of that presumptive world, which you call “seeing.”

This is metaphor, of course, and all metaphors have their limitations. This one does not tell us, for example, exactly what it means to be “having a picture” of the external world (second point), beyond the fact that it enables the submariner to “navigate.” This picture (conscious perception) is evidently a sort of real-time map—but of what? And why is it consciously experienced rather than just quietly running as a program that draws on a data bank to guide the behavior of navigating? (In other words, why is there a submariner at all, as opposed to a fully automated underwater machine?) Furthermore, the brain’s mastery of its situation is not a function of one lifetime only. The “trial and error” takes place in evolutionary time, over many generations of failures that result in wrecked machines.

In the attempt to explain seeing, perhaps the greatest failure of the metaphor is the circularity of presuming someone inside the submarine who already has the ability to see: some inner person who already has a concept of reality outside the hull (skull), and who moves about inside the seemingly real space of the submarine’s interior, aware of instrument panels and control levers as really existing things. It is as though a smaller submarine swims about inside the larger one, trying to learn the ropes, and within that submarine an even smaller one… ad infinitum!

The problem with scientific theories of cognition is that they already presume the real world whose appearance in the mind they are trying to explain. The physical brain, with neurons, is presumed to exist in a physical world as it appears to humans—in order to explain that very appearance, which includes such things as brains and neurons and the atoms of which they are composed. The output of the brain is recycled as its input! To my knowledge, Kant did not venture to discuss this circularity. Yet, it clearly affirms that the world-in-itself is epistemically inaccessible, since there is no way out of this recycling. However, rather than be discouraged by this as a defeat of the quest for knowledge or reality, we should take it as an invitation to understand what “knowledge” can actually mean, and what the concept of “reality” can be for prisoners inside the cave of the skull.

Clearly, for any organism, what is real is what can affect its well-being and survival, and what it can affect in turn. (This is congruent with the epistemology of science: what is real is that with which the observer can causally interact.) The submariner’s picture and knowledge of the world outside the hull is “realistic” to the degree it facilitates successful navigation—that is, survival. The question of whether such knowledge is “true” has little meaning outside this context. Except in these limited terms, you cannot know what is outside your skull—or what is inside it, for that matter. The neurosurgeon can open up a skull to reveal a brain—can even stimulate that brain electrically to make it experience something the surgeon takes to be a hallucination. But even if the surgeon opened her own skull to peek inside, and manipulated her own experience, what she would see is but an image created by her own brain—in this case perhaps altered by her surgical interventions. The submariner’s constructed map is projected as external, real, and even accurate. But it is not the territory. What makes experience veridical or false is hardly as straightforward as the scientific worldview suggests. Science, as an extended or supplementary form of cognition, is as dependent on these caveats as natural perception. Whether scientific knowledge of the external world ultimately qualifies as truth will depend on how well it serves the survival of our species. On that the jury is still out.

Are You Fine-tuned? (Or: the story of Goldilocks and the three dimensions)

The fine-tuning problem is the notion that the physical universe appears to be precisely adjusted to allow the existence of life. It is the apparent fact that many fundamental parameters of physics and cosmology could not differ much from their actual values, nor could the basic laws of physics be much different, without resulting in a universe that would not support life. Creationists point to this coincidence as evidence of intelligent design by God. Some thinkers point to it as evidence that our universe was engineered by advanced aliens. And some even propose that physical reality is actually a computer simulation we are living in (created, of course, by advanced aliens). But perhaps fine-tuning is a set-up that simply points to the need for a different way of thinking.

First of all, the problem assumes that the universe could be different than it is—that fundamental parameters of physics could have different values than they actually do in our world. This presumes some context in which basic properties can vary. That context is a mechanistic point of view. The Stanford Encyclopedia of Philosophy defines fine-tuning as the “sensitive dependences of facts or properties on the values of certain parameters.” It points to technological devices (machines) as paradigm examples of systems that have been fine-tuned by engineers to perform in an optimal way, like tuning a car engine. The mechanistic framework of science implicitly suggests an external designer, engineer, mechanic or tinkerer—if not God, then the scientist. In fact, the early scientists were literally Creationists. Whatever the solution, the problem is an historical residue of their mechanistic outlook. The answer may require that we look at the universe in a more organic way.

The religious solution was to suppose that the exact tweaking needed to account for observed values of physical parameters must be intentional and not accidental. The universe could only be fine-tuned by design—as a machine is. However, the scale and degree of precision are far above the capabilities of human engineers. This suggests that the designer must have near-infinite powers, and must live in some other reality or sector of the universe. Only God or vastly superior alien beings would have the know-how to create the universe we know. Alternatively, such precision could imply that the universe is not even physical, but merely a product of definition, a digital simulation or virtual reality. Ergo, there must be another level of reality behind the apparent physical one. But such thinking is ontologically extravagant.

Apart from creationism, super-aliens, or life in a cosmic computer, a more conventional approach to the problem is statistical. One can explain a freak occurrence as a random event in a sufficiently large run of trials. Given an infinite number of monkeys with typewriters, one of them is bound to type out Shakespeare eventually. If, say, there are enough universes with random properties, it seems plausible that at least one of them would be suitable for the emergence of life. Since we are here, we must be living in that universe. But this line of reasoning is also ontologically costly: one must assume an indefinite number of actual or past “other universes” to explain this single one. The inspiration for such schemes is organic insofar as it suggests some sort of natural selection among many variants. That could be the “anthropic” selection mentioned above or some Darwinian selection among generations of universes (such as Lee Smolin’s black hole theory). Such a “multiverse” scheme could be true, but we should only think so because of real evidence and not in order to make an apparent dilemma go away.
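The statistical reasoning can be spelled out with a toy calculation, where the symbols p and N are purely illustrative and not taken from any particular multiverse model: if each of N independent universes has some tiny probability p of being life-permitting, then

\[
P(\text{at least one life-permitting universe}) = 1 - (1 - p)^{N} \longrightarrow 1 \quad \text{as } N \to \infty .
\]

However small p is, a large enough N makes a habitable universe all but inevitable, which is just to restate that the explanatory work is done by the sheer number of posited universes.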

It might be ontologically more economical to assume that our singular universe somehow fine-tunes itself. After all, organisms seem to fine-tune themselves. Their parts cooperate in an extremely complex way that cannot be understood by thinking of the system as a machine designed from the outside. If nature (the one and only universe) is more like an organism than a machine, then the fine-tuning problem should be approached a different way, if indeed it is a problem at all. Instead of looking at life as a special instance of the evolution of inert matter, one could look at the evolution of supposedly inert matter (physics) as a special case involving principles that can also describe the evolution of life.

Systems in physics are simple by definition. Indeed, they are conceived for simplicity. In contrast, organisms (and the entire biosphere) are complex and homeostatic. Apart from the definitions imposed by biologists, organisms are also self-defining. Physical systems are generally analyzed in terms of one causal factor at a time—as in “controlled” experiments. As the name suggests, this way of looking aims to control nature in the way we can control machines, which operate on simple linear causality. Biological systems involve very many mutual and circular causes, hard to disentangle or control. Whereas the physical system (machine) reflects the observer’s intentionality and purposes—to produce something of human benefit—the organism aims to produce and maintain itself. Perhaps it is time to regard the cosmos as a self-organizing entity.

Fine-tuning argues that life could not have existed if the laws of nature were slightly different, if the constants of nature were slightly different, or if the initial conditions at the Big Bang were slightly different—in other words, in most conceivable alternative universes. But is an alternative universe physically possible simply because we can conceive it? The very business of physics is to propose theoretical models that are free creations of mathematical imagination. Such models are conceptual machines. We can imagine worlds with a different physics; but does imagining them make them real? The fact that a mathematical model can generate alternative worlds may falsely suggest that there is some real cosmic generator of universes churning out alternative versions with differing parameters and even different laws. “Fundamental parameters” are knobs on a conceptual machine, which can be tweaked. But they are not knobs on the world itself. They are variables of equations, which describe the behavior of the model. The idea of fine-tuning confuses the model with the reality it models.

The notion of alternative values for fundamental parameters extends even to imagining what the world would be like with more than or less than three spatial dimensions. But the very idea of dimension (like that of parameter) is a convention. Space itself just is. What we mean literally by spatial dimensions are directions at right angles to each other—of which there are but three in Euclidean geometry. The idea that this number could be different derives from an abstract concept of space in contrast to literal space: dimensions of a conceptual system—such as phase space or in non-Euclidean geometry. The resultant “landscape” of possible worlds is no more than a useful metaphor. If three dimensions are just right for life, it is because the world we live in happens to be real and not merely conceptual.

The very notion of fundamental parameters is a product of thinking that in principle does not see the forest for the trees. What makes them “fundamental” is that the factors appear to be independent of each other and irreducible to anything else—like harvested logs that have been propped upright, which does not make them a forest. This is merely another way to say that there is currently no theory to encompass them all in a unified scheme, such as could explain a living forest, with its complex interconnections within the soil. Without such an “ecology” there is no way to explain the mutual relationships and specific values of seemingly independent parameters. (In such a truly fundamental theory, there would be at most one independent parameter, from which all other properties would follow.)

The fine-tuning problem should be considered evidence that something is drastically wrong with current theory, and with the implicit philosophy of mechanism behind it. (There are other things wrong: the cosmological constant problem, for instance, has been described as the worst catastrophe in the history of physics.) Multiverses and string theories, like creationism, may be barking up the wrong tree. They attempt to assimilate reality to theory (if not to theology), rather than the other way around. The real challenge is not to fit an apparently freak world into an existing framework, but to build a theory that fits experience.

Like Goldilocks, we find ourselves in a universe that seems just right for us—in contrast to imaginary worlds unsuitable for life. We are at liberty to invent such worlds, to speculate about them, and to imagine them as real. These are useful abilities that allow us to confront in thought hypothetical situations we might really encounter. As far as we know, however, this universe is the only real one.

The machine death of the universe?

Much of the basic non-technical vocabulary of artificial intelligence is surprisingly ambiguous. Some key terms with vague meanings include intelligence, embodiment, mind, consciousness, perception, value, goal, agent, knowledge, belief, and thinking. Such vocabulary is naively borrowed from human mental life and used to underpin a theoretical and abstract general notion of intelligence that could be implemented by computers. Intelligence has been defined in many ways—for example, as the ability to deal with complexity. But what does “dealing with” mean exactly? Or it is defined as the ability to predict future or missing information; but what is “information” if not something relevant to the well-being of some unspecified agent? It is imperative to clarify such ambiguities, if only to identify a crucial threshold between conventional mechanical tools and autonomous artificial agents. While it might be inconsequential what philosophers think about such matters, it could be devastating if AI developers, corporations, and government regulators get it wrong.

However intelligence is formally defined, our notions of it derive originally from experience with living creatures, whose intelligence ultimately is the capacity to survive and breed. Yet, formal definitions often involve solving specific problems set by humans, such as on IQ tests. This problem-solving version of intelligence is tied to human goals, language use, formal reasoning, and modern cultural values; and trying to match human performance risks testing for humanness more than intelligence. The concept of general intelligence, as it has developed in AI, does not generalize the actual instances of mind with which we are familiar—that is, organisms on planet Earth—so much as it selects isolated features of human performance to develop into an ideal theoretical framework. This is then supposed to serve as the basis of a universally flexible capacity, just as the computer is understood to be the universal machine. A very parochial understanding of intelligence becomes the basis of an abstract, theoretically possible “mind,” supposedly liberated from bodily constraint and all context. However, the generality sought for AI runs counter to the specific nature and conditions for embodied natural intelligence. It remains unclear to what extent an AI could satisfy the criteria for general intelligence without being effectively an organism. Such abstractions as superintelligence (SI) or artificial general intelligence (AGI) remain problematically incoherent. (See Maciej Cegłowski’s amusing critique: https://idlewords.com/talks/superintelligence.htm)

AI was first modelled on language and reasoning skills, formalized as computation. The limited success of early AI compared unfavorably with broader capabilities of organisms. The dream then advanced from creating specific tools to creating artificial agents that could be tool users, imitating or replicating organisms. But natural intelligence is embodied, whereas the theoretical concept of “mind in general” that underpins AI is disembodied in principle. The desired corollary is that such a mind could be re-embodied in a variety of ways, as a matter of consumer choice. But whether this corollary truly follows depends on whether embodiment is a condition that can be simulated or artificially implemented, as though it were just a matter of hooking up a so-called mind to an arbitrary choice of sensors and actuators. Can intelligence be decoupled from the motivations of creatures and from the evolutionary conditions that gave rise to natural intelligence? Is the evolution of a simulation really a simulation of natural evolution? A negative answer to such questions would limit the potential of AI.

The value for humans of creating a labor-saving or capacity-enhancing tool is not the same as the value of creating an autonomous tool user. The two goals are at odds. Unless it constitutes a truly autonomous system, an AI manifests only the intentionality and priorities of its programmers, reflecting their values. Talk of an AI’s perceptions, beliefs, goals or knowledge is a convenient metaphorical way of speaking, but is no more than a shorthand for meanings held by programmers. A truly autonomous system will have its own values, needs, and meanings. Mercifully, no such truly autonomous AI yet exists. If it did, programmers would only be able to impress their values on it in the limited ways that adults educate children, governments police their citizenry, or masters impose their will on subordinates. At best, SI would be no more predictable or controllable than an animal, slave, child or employee. At worst, it would control, enslave, and possibly displace us.

A reasonable rationale for AGI requires it to remain under human control, to serve human goals and values, and to act for human benefit. Yet, such a tool can hardly have the desired capabilities without being fully autonomous and thus beyond human control. The notion of “containing” an SI implies isolation from the real world. Yet, denial of physical access to or from the real world would mean that the SI would be inaccessible and useless. There would have to be some interface with human users or interlocutors just to utilize its abilities; it could then use this interface for its own purposes. The idea of pre-programming it to be “friendly” is fatuously contradictory. For, by definition, SI would be fully autonomous, charged with its own development, pursuing its own goals, and capable of overriding its programming. The idea of training human values into it with rewards and punishments simply pushes the problem of artificially creating motivation back a step. For, how is it to know what is rewarding? Unless the AI is already an agent competing for survival like an organism, why would it have any motivation at all? If it is such an agent, why would it accept human values in place of its own? And how would its intelligence differ from that of natural organisms, which are composed of cooperating cells, each with its relative autonomy and needs? The parts of a machine are not like the parts of an organism.

While a self-developing neural net is initially designed by human programmers, like an organism it would constitute a sort of black box. Unlike the designed artifact, we can only speculate on the structure, functioning, and principles of a self-evolving agent. This is a fundamentally different relationship from the one we have to ordinary artifacts, which in principle do what we want and are no more than what we designed them to be. These extremes establish an ambiguous zone between a fully controllable tool and a fully autonomous agent pursuing its own agenda. If there is a key factor that would lead technology irreversibly beyond human control, it is surely the capacity to self-program, based on learning, combined with the capacity to self-modify physically. There is no guarantee that an AI capable of programming itself can be overridden by a human programmer. Similarly, there is no guarantee that programmable matter (nanites) would remain under control if it can self-modify and physically reproduce. If we wish to retain control over technology, it should consist only of tools in the traditional sense—systems that do not modify or replicate themselves.

Sentience and consciousness are survival strategies of natural replicators. They are based on the very fragility of organic life as well as the slow pace of natural evolution. If the advantage of artificial replicators is to transcend that fragility from the outset, then their very robustness might also circumvent the evolutionary premise—of natural selection through mortality—that gave rise to sentience in the first place. And the very speed of artificial evolution could drastically out-pace the ability of natural ecosystems to adapt. The horrifying possibility could be a world overrun by mechanical self-replicators, an artificial ecology that outcompetes organic life yet fails to evolve the sentience we cherish as a hallmark of living things. (Imagine something like Kurt Vonnegut’s ‘ice nine’, which could escape the planet and replicate itself indefinitely using the materials of other worlds. As one philosopher put it: a Disneyland without children!) If life happened on this planet simply because it could happen, then possibly (with the aid of human beings) an insentient but robust and invasive artificial nature could also happen to displace the natural one. A self-modifying AI might cross the threshold of containment without our ever knowing or being able to prevent it. Self-improving, self-replicating technology could take over the world and spread beyond: a machine death of the universe. This exotic possibility would not seem to correspond to any human value, motivation or hope—even those of the staunchest posthumanists. Neither superintelligence nor silicon apocalypse seems very desirable.

The irony of AI is that it redefines intelligence as devoid of the human emotions and values that actually motivate its creation. This reflects a sad human failure to know thyself. AI is developed and promoted by people with a wide variety of motivations and ideals apart from commercial interest, many of which reflect some questionable values of our civilization. Preserving a world not dominated one way or another by AI might depend on a timely disenchantment with the dubious premises and values on which the goals of AI are founded. These tacitly include: control (power over nature and others), transcendence of embodiment (freedom from death and disease), laziness (slaves to perform all tasks and effortlessly provide abundance), greed (the sheer hubris of being able to do it or being the first), creating artificial life (womb-envy), creating super-beings (god-envy), creating artificial companions (sex-envy), and ubiquitous belief in the mechanist metaphor (computer-envy—the universe is metaphorically or literally digital).

Some authors foresee a life for human consciousness in cyberspace, divorced from the limitations of physical embodiment—the update of an ancient spiritual agenda. (While I think that impossible, it would at least unburden the planet of all those troublesome human bodies!) Some authors cite the term cosmic endowment to describe and endorse a post-human destiny of indefinite colonization of other planets, stars, and galaxies. (Endowment is a legal concept of property rights and ownership.) They imagine even the conversion of all matter in the universe into “digital mind,” just as the conquistadors sought to convert the new world to a universal faith while pillaging its resources. At heart, this is the ultimate extension of manifest destiny and lebensraum.

Apart from such exotic scenarios, the world seems to be heading toward a dystopia in which a few people (and their machines) hold all the means of production and no longer need the masses either as workers or as consumers—and certainly not as voters. The entire planet could be their private gated community, with little place for the rest of us. Even if it proves feasible for humanity to retain control of technology, it might only serve the aims of the very few. This could be the real threat of an “AI takeover,” one that is actually a political coup by a human elite. How consoling will it be to have human overlords instead of superintelligent machines?

A hymn to some body

In the beginning was Body. Once, in human eyes, sacredness or divinity permeated nature as an aura of appropriate reverence. Nature (Body) was then not “matter,” which is Body de-natured by the scientistic mind. But neither was it “spirit,” which is Body dematerialized by the superstitious mind. When deemed sacred, nature was properly respected, if not understood. But projecting human ego as a supernatural person enables one to think that the divine dwells somewhere in particular—in a house or even in a specific body. God holed up in a church or temple and no longer in the world at large. He bore a first-born son with heritable property rights. He could be approached like a powerful king in his palace, to supplicate and manipulate. Most importantly he/she/it no longer dwelt in nature and was certainly not nature itself. And since nature was no longer divine, people were henceforth free to do with it as they pleased.

Just so, when the human body is not revered, we do with it as we please instead of seeking how to please it. Throughout the ages, people have conceptualized the self, mind, ego, or soul as a non-material entity separate from the body. From a natural point of view, however, the self is a function of the physical body, which partakes in Body at large. The body is not the temple of the soul, but is part of Body unconfined to any shrine. The ego’s pursuits of pleasure and avoidances of discomfort ought to coincide with the body’s interests. Often they do not, for ego has rebelled against its “imprisonment” in body. That is a mistake, for consciousness (self) is naturally the body’s servant, not the other way around; and humanity is naturally nature’s servant, not its master. The self is not jockey to the horse but groom.

Up to a point, the body—and nature too—are forgiving of offenses made against them. Sin against Body is a question of cause and effect, not of someone’s judgment or the violation of a human law or norm. The wages of “sin” against the body are natural consequences, which can spell death. Yet, repentance may yield reprieve, provided it is a change of heart that leads to a genuine change of behavior soon enough. It makes some sense to pray to be forgiven such offenses. This is not petition to a free-standing God separate from nature, but to nature itself (which in the modern view is matter-energy, the physical and biological world, and the embodied presence of sentient creatures.) It makes sense even to pray to one’s own body for guidance in matters of health. For, at least the body and nature exist, unlike the fantasies of religion. It makes sense above all because prayer changes the supplicant. Whatever the effect or lack of effect on the object of prayer, the subject is transformed—for those who have ears to hear.

Body is “sacred,” meaning only that it should be revered. Yet, people do have uncanny experiences, which they personify as spirits or gods, sometimes perceived to reside in external things. That is ironic, since the conscious self—perceived to reside “in” the body—is itself a personification that the body has created as an aid to its self-governance. The further projection of this personification onto some abstraction is idolatry. As biological beings living in the real world, we ought to worship God-the-Body—not God the Father, Son or Holy Ghost, nor even God-the-Mother.

Then what of the human project to self-define, to make culture and civilization, to create a human (artificial) world, to transcend the body, to separate from nature? Understanding of nature is part of that project; yet it is also a form of worship, which does not have to be presumptuous or disrespectful. Science is the modern theology of God-the-Body, who did not create the world but is the world. Let us call that human project, in all its mental aspects including science and art, God-the-Mind. Part of the human project is to re-create nature or create artificial nature: God-the-Mind reconstituting God-the-Body, as the butterfly reconstitutes the caterpillar. That might entail creating artificial life, artificial mind, even artificial persons—recapitulating and extending the accomplishments of natural evolution. Fundamentally, the human project is self-creation.

Regardless of how foreign “mind” seems to matter, it is totally of nature if not always about it. Christian theology has its mystery of the dual reality of Jesus, as god and as man. The secular world has its duality of mind and matter. Is there a trinity beyond this duality? God-the-Common Spirit is all the Others unto whom we are to do as we hope they will do to us. It is the holy spirit of fellow-feeling, compassion, mutual respect and cooperation, in which we intend the best for others and their hopes. Certainly, this includes human beings, but other creatures as well. (Do we not all constitute and make the world together?) So, here is a new trinity: God the Body, Mind, and Common Spirit.

Roughly speaking, the Common Spirit is the cohesive force of global life. Common Spirit is the resolve to do one’s best as a part of the emerging whole: to deliberately participate in it as consciously and conscientiously as one can. To invoke the Common Spirit is to affirm that intention within oneself. (That is how I can understand prayer, and what it means to pray fervently “for the salvation of one’s soul.”) We live in the human collectivity, upon which we cannot turn our backs. We thrive only as it thrives. Your individuality is your unique contribution to it, and to pray is to seek how to best do your part for the good of all.

To honor the Common Spirit means to not let your fellows down. One’s calling is to merit their respect, whether or not one receives it. For the sake of the world, strive to do your best to help create and maintain the best in our common world! When you falter, forgive yourself and strive again, whether or not the others forgive you. Of course, it is also a sin to harm your fellows or put them at risk; or to fail to honor them personally; or to fail to honor their efforts, even when misguided. Know that worship is not only a feeling, a thought, or a ritual. Above all it is action: how you conduct yourself through life. It is how you live your resolve throughout the day, alert for situations in which to contribute some good and sensitive to how you might do that.

If this holy trinity makes sense to you, a daily practice can reaffirm commitment to it. This is a matter of remembering whatever motivated you in the first instance. Occasionally, shock is called for to wake someone up from their somnambulism—and that someone is always oneself. “Awakening” means not only seeking more adequate information, but also a more encompassing perspective. It means admitting that one’s perspective, however sophisticated, is limited and subjective. It means remaining humbly open—even vigilant—for new understanding, greater awareness. (Teachers can show up anywhere, most unexpectedly!) “Sleep” is forgetting that one does not live above or beyond Body, Mind, and Common Spirit, but only by their grace. Having the wrong or incomplete information is unavoidable. But the error of sleep is a false sense of identity.

As Dylan said, “You gotta serve somebody.” Better to serve the Body than the puny ego that claims ownership and control over the human organism. Or that claims control over the corpse of the denatured world or over the body politic. Ego may identify itself as mental or spiritual, in opposition to the physical body, which it considers “lower.” But the question at each moment is: What do I serve? God-the-Whatever is not at one’s beck and call to know, to consult, or even to submit to its will (for, it has none). We are rather on our own for guidance, each (if it comforts you to think so) a unique fragment of potential divinity. We can communicate with other fragments, ask their opinions, cooperate or not with their intentions, obey or defy their will or orders. But responsibility lies in each case with oneself. This is not willfulness or egocentricity. Nor is it individualism in the selfish sense, for it is not about entitlement.

One’s body is a distinct entity, yet it is part of the whole of nature, without which it could not live and would never have come into existence. Whatever else it might be, the self is a function of the body and its needs, a survival strategy in the external world of Body. We are embodied naturally as separate organisms. Yet, we are conjoined within nature, mind, and community. Spiritual traditions may bemoan “separation” as a condition to be overcome in an epiphany of oneness. Yet, we are simply separate in the ways that things are separate in space and that cells are within the organism. The part serves the whole, but cannot be it. For, the rebellion of the cell is cancer!

Going forward… into what?

These days I often hear the phrase “going forward” to mean “in the future.” But, going forward into what? Curiously, a temporal expression has been replaced by a spatial metaphor. I can only speculate that this is supposed to convey a reassuring sense of empowerment toward genuine progress. While largely blind to what the future holds, passively weathering the winds of time, as creatures with mobility we can deliberately move forward (or backward), implying free will and some power to set a course.

In this spatial metaphor, the future is a matter of choice, bound to be shaped and measured along several possible axes. For example, there is the vision of limitless technological transformation. But there is also the nearly opposing vision of learning to live in harmony with nature, prioritizing ecological concern for the health of a finite planet. And a third “dimension” is a vision of social justice for humanity: to redistribute wealth and services more equitably and produce a satisfying experience for the greatest number. While any one of these concerns could dominate the future, they are deeply entangled. Whether or not change is intentional, it will inevitably unfold along a path involving them all. To the degree that change will be intentional, a multidimensional perspective facilitates the depth perception needed to move realistically “forward.”

We depend on continuity and a stable environment for a sense of meaning and purpose. The modern ideology of progress seemed to have achieved that stability, at least temporarily and for some. But the pandemic has rudely reminded us that the world is “in it together,” that life is as uncertain and unequal in the 21st century as it always has been, and that progress will have to be redefined. While change may be the only constant, adaptability is the human trademark. Disruption challenges us to find new meanings and purposes.

Homo sapiens is the creature with a foot in each of two worlds—an outer and an inner, as well as a past and a future. The primary focus of attention is naturally outward, toward what goes on out there, how that affects us, what we must accordingly do in a world that holds over us the power of life and death. Understanding reality helps us to survive, and doing is the mode naturally correlated with this outward focus. In many ways, action based on objective thinking—and science in particular—has been the key to human success as the dominant species on the planet. However, human beings are endowed also with a second focus, which is the stream of consciousness itself. Being aware of being aware implies an inner domain of thought, feeling, imagination, and all that we label subjective. This domain includes art and music, esthetic enjoyment and contemplation, meditation and philosophy. Play is the mode correlated with this inner world, as opposed to the seriousness of survival-oriented doing. Subjectivity invites us to look just for the delight of seeing. It also enables us to question our limited perceptions, to look before leaping. Thus, we have at our disposal two modes, with different implications. We can view our personal consciousness as a transparent window on the world, enabling us to act appropriately for our well-being. Alternatively, we can view it as the greatest show on earth.

Long-term social changes may emerge as we scramble to put Humpty together again in the wake of Covid-19. The realization that we live henceforth in the permanent shadow of pandemic has already led to new attitudes and behavior: less travel, more online shopping, social distancing, work from home, more international cooperation, restored faith in science and in government spending on social goals. Grand transformations are possible—not seen since the New Deal—such as a guaranteed income, a truly comprehensive health program, new forms of employment that are less environmentally destructive. Staying at home has suggested a less manic way of life than the usual daily grind. The shut-down has made it clear that consumerism is not the purpose and meaning of life, that the real terrorists are microscopic, and that defense budgets should be transferred to health care and social programs. We’ve known all along that swords should be beaten into plowshares; now survival may depend on it. Such transformation requires a complete rethinking of the economy and the concept of value. Manic production and consumption in the name of growth have led, not to the paradise on earth promised by the ideology of progress, but to ecological collapse, massive debt, increasing social disparity, military conflict, and personal exhaustion. Nature is giving us feedback that the outward focus must give way to something else—both for the health of the planet and for our own good.

Growth must be redefined in less material terms. Poverty can no longer be solved (if it ever was) by a rising tide of ever more material production. In terms of the burden on the planet, we have already reached the “limits to growth” foreseen fifty years ago. We must turn now to inner growth, whatever that can mean. Personal wealth, like military might, has traditionally been about status and power in a hyperactive world enabled by expanding population and material productivity. (Even medicine has been about the heroic power to save lives through technology, perform miracle surgeries, and find profitable drugs, more than to create universal conditions for well-being, including preparedness against pandemics.) What if wealth and power can no longer mean the same things in the post-pandemic world no longer fueled by population growth? What is money if it cannot protect you from disease? And what is defense when the enemy is invisible and inside you?

We cannot ignore external reality, of course, even supposing that we can know what it is. Yet, it is possible to be too focused on it, especially when the reason for such focus is ultimately to have a satisfying inner experience. The outward-looking mentality must not only be effective outwardly but also rewarding inwardly. It is a question of balance, which can shift with a mere change of focus. We are invited to a new phase of social history, in which the quality of personal experience—satisfaction and enjoyment—is at least as important as the usual forms of busy-ness and quantitative measures of progress. This comes at a time when belt-tightening will prevail, on top of suffering from the ecological effects of climate change and the disruptions in society that will follow.

Human beings have always been fundamentally social and cooperative, in spite of the modern turn away from traditional social interactions toward competitive striving, individual consumption, private entertainment, and atomized habitation. Now, sociality everywhere will be re-examined and redefined post-pandemic. Of course, there have always been people more interested in being than in either doing or socializing. Monks and contemplatives withdraw from active participation in the vanities of the larger culture. So do artists in their own way, which is to create for the sheer interest of the process as much as for the product. The sort of non-material activity represented by meditation, musical jamming, the performing arts, sports, and life drawing may become a necessity more than a luxury or hobby. Life-long learning could become a priority for all classes, both reflecting and assisting a reduction of social inequality. The planet simply can no longer afford consumerism and the lack of imagination that underlies commerce as the default human activity and profit as the default motive.

What remains when externals are less in focus? Whatever is going on in the “real” world—whatever your accomplishments or failures, whatever else you have or don’t have—there is the miracle of your own feelings, thoughts, and sensations to enjoy. Your consciousness is your birthright, your constant resource and companion. It is your closest friend through thick and thin while you still live. It is your personal entertainment and creative project, your canvas both to paint and to admire. It only requires a subtle change of focus to bring it to the fore in place of the anxiety-ridden attention we normally direct outside. As Wordsworth observed, the world is too much with us. He was responding to the ecological and social crisis of his day, first posed by the Industrial Revolution. We are still in that crisis, amplified by far greater numbers of people caught up in desperate activity to get their slice of the global pie.

Perhaps historians will look back and see the era of pandemic as a rear-guard skirmish in the relentless war on nature, a last gasp of the ideology of progress. Or perhaps they will see a readjustment in human nature itself. That doesn’t mean we can stop doing, of course. But we could be doing the things that are truly beneficial and insist on actually enjoying them along the way. The changes needed to make life rewarding for everyone will be profound, beginning with a universal guaranteed income in spite of reduced production. We’ve tried capitalism and we’ve tried communism. Both have failed the common good and a human future. To paraphrase Monty Python, it is time for something entirely different.

The origin of urban life

The hunter-gatherer way of life had persisted more or less unchanged for many millennia of prehistory. What happened that it “suddenly” gave way to an urban way of life six thousand years ago? Was this a result of environmental change or some internal transformation? Or both? It is conventional wisdom that cities arose as a consequence of agriculture; yet farming predates cities. While it may presuppose agriculture, urban life could have arisen for other reasons as well.

In any case, larger settlements meant that humans lived increasingly in a humanly defined world—an environment whose rules and elements and players were different from those of the wild or the small village. The presence of other people gradually overshadowed the presence of raw nature. If social and material invention is a function of sharing information, then the growth of culture would follow the exponential growth of population. As a self-amplifying process, this could explain the relatively sudden appearance of cities. While the city separated itself from the wild, it remained dependent on nature for water, food, energy and materials. While this dependency was mitigated through cooperation with other urban centres, ultimately a civilization depends on natural resources. When these are exhausted it cannot survive.

But, what is a city? Some early cities had dense populations, but some were sparsely populated political or religious capitals, while others were trade centers. More than an agglomeration of dwellings, a city is a well-structured locus of culture and administrative power, associated with written records. It was usually part of a network of mutually dependent towns. It had a boundary, which clarified the extent of the human world. If not a literal wall, then a jurisdictional one could be used to control the passage of people in or out. It had a centre, consisting of monumental public buildings, whether religious or secular. (In ancient times, there may have been little distinction.) In many cases, the centre was a fortified stronghold surrounded by a less formal aggregate of houses and shops, in turn surrounded by supporting farms. Modern cities still retain this form: a downtown core, surrounded by suburbs (sometimes shanties), feathering out to fields or countryside—where it still exists.

The most visually striking feature is the monumental core, with engineering feats often laid out with imposing geometry—a thoroughly artificial environment. While providing shelter, company, commercial opportunity, and convenience, the city also functions to create an artificial and specifically manmade world. From a modern perspective, it is a statement of human empowerment, representing the conquest of nature. From the perspective of the earliest urbanites, however, it might have seemed a statement of divine power, reflecting the timeless projection of human aspirations onto a cosmic order. The monumental accomplishments of early civilization might have seemed super-human even to those who built them. To those who didn’t participate directly in construction, either then or in succeeding generations, they might have seemed the acts of giants or gods, evidence of divine creativity behind the world.

Early monuments such as Stonehenge, whatever their religious intent, were not sites of continuous habitation but seasonal meeting places for large gatherings. These drew from far and wide on small settlements involved in early domestication of plants and animals as well as foraging. These ritual events offered exciting opportunities for a scattered population to meet unfamiliar people in great numbers, perhaps instilling a taste for variety and diversity unknown to the humdrum of village life. (Like Woodstock, they would have offered unusual sexual diversity as well.) A few sites, such as Göbekli Tepe, were deliberately buried when completed, only to be reconstructed anew more than once. Could it be that the collaborative experience of building these structures was as significant as their end use? The experience of working together, especially with strangers, under direction and on a vastly larger scale than afforded by individual craft or effort, could have been formative for the larger-scale organization of society. Following the promise of creating a world to human taste, it may have provided the incentive to reproduce the experience of great collective undertakings on an ongoing basis: the city. This would amplify the sense of separateness from the wild already begun in the permanent village.

While stability may be a priority, people also value variety, options, grandeur, the excitement of novelty and scale. Even today, the attractiveness of urban centres lies in the variety of experience they offer, as compared to the restricted range available in rural or small-town life, let alone in the hunter-gatherer existence. Change in the latter would have been driven largely by environment. That could have meant routine breaking camp to follow food sources, but also forced migration because of climate change or over-foraging. If that became too onerous, people would be motivated to organize in ways that could stabilize their way of life. When climate favoured agriculture, control of the food source resulted in greater reliability. However, settlement invited ever larger and more differentiated aggregations, with divisions of labor and social complexity. This brought its own problems, resulting in a greater uncertainty. There could be times of peaceful stability, but also chaotic times of internal conflict or war with other settlements. Specialization breeds more specialization in a cycle of increasing complexity that could be considered either vicious or virtuous, depending on whether one looked backward to the good old days of endless monotony or to a future of runaway change.

The urban ideal is to stabilize environment while maximizing variety of choice and expanding human accomplishment. Easier said than done, since these goals can operate at cross purposes. Civilization shelters and removes us from nature to a large extent; but it also causes environmental degradation and social tensions that threaten the human project. Compared to the norm of prehistory, it increases variety; but that results in inequality, conflict, and instability. Anxiety over the next meal procured through one’s own direct efforts is replaced by anxiety over one’s dependency on others and on forces one cannot control. Social stratification produces a self-conscious awareness of difference, which implies status, envy, social discontent, and competition to improve one’s lot in relation to others. It is no coincidence that a biblical commandment admonishes not to covet thy neighbor’s property. This would have been irrelevant in hunter-gatherer society, where there was no personal property to speak of.

In the absence of timely decisions to make, unchanging circumstances in a simple life permit endless friendly discussion, which is socially cohesive and valued for its own sake. In contrast, times of change or emergency require decisive action by a central command. Hence the emergence—at least on a temporary basis—of the chieftain, king, or military leader as opposed to the village council of elders. The increased complexity of urban life would have created its own proliferating emergencies, requiring an ongoing centralized administration—a new lifestyle of permanent crisis and permanent authority. The organization required to maintain cities, and to administer large-scale agriculture, could be used to achieve and consolidate power, and thereby wealth. And power could be militarized. Hunter-warriors became the armed nobility, positioned to lord it over peasant farmers and capture both the direction of society and its wealth, in a kind of armed extortion racket. (The association of hunting skills with military skills is still seen in the aristocratic institution of the hunt.) Being concentrations of wealth, cities were not only hubs of power; they also became targets, sitting ducks for plunder by other cities.

The nature of settlement is to lay permanent claim to the land. But whose claim? In the divinely created world, the land belonged initially to a god, whose representative was the priest or king, in trust for the people. As such, it was a “commons,” administered by the crown on divine authority. (In the British Commonwealth, public land is still called Crown land, and the Queen still rules by divine right. Moreover, real estate derives from royal estate.) Monarchs gave away parts of this commons to loyal supporters, and eventually sold parts to the highest bidder in order to raise funds for war or to support the royal lifestyle. If property was the king’s prerogative by divine right, its sacred aura could transfer in diluted form to those who received title in turn, thereby securing their status. (Aristocratic title literally meant both ownership of particular lands and official place within the nobility.) Private ownership of land became the first form of capital, underlying the notion of property in general and the entitlements of rents, profits, and interest on loans. Property became the axiom of a capitalist economy and often the legal basis of citizenship.

The institution of monarchy arose about five thousand years ago, concurrent with writing. The absolute power of the king (the chief thug) to decree the social reality was publicly enforced by his power to kill and enslave. Yet, it was underwritten by his semi-divine status and thus by the need of people for order and sanctioned authority, however harsh. Dominators need a way to justify their position. But likewise, the dominated need a way to rationalize and accept their position. The still popular trickle-down theory of prosperity (a rising tide of economic growth lifts all boats) simply continues the feudal claim of the rich to the divinely ordained lion’s share, with scraps thrown to the rest.

The relentless process of urbanization continues, with now more than half the world’s population living in cities. The attractions remain the same: participation in the money economy (consumerism, capitalism, and convenience, as opposed to meager do-it-yourself subsistence), a wide variety of people and experience, life in a humanly-defined world. In our deliberate separation from the wild, urban and suburban life limits and distorts our view of nature, tending to further alienate us from its reality. Misleadingly, nature then appears as tamed in parks and tree-lined avenues; as an abstraction in science textbooks or contained in laboratories; or as a distant and out-of-sight resource for human exploitation. It remains to be seen how or whether the manmade world can strike a viable balance with the natural one.