A hymn to some body

In the beginning was Body. Once, in human eyes, sacredness or divinity permeated nature as an aura of appropriate reverence. Nature (Body) was then not “matter,” which is Body de-natured by the scientistic mind. But neither was it “spirit,” which is Body dematerialized by the superstitious mind. When deemed sacred, nature was properly respected, if not understood. But projecting human ego as a supernatural person enables one to think that the divine dwells somewhere in particular—in a house or even in a specific body. God holed up in a church or temple and no longer in the world at large. He bore a first-born son with heritable property rights. He could be approached like a powerful king in his palace, to supplicate and manipulate. Most importantly he/she/it no longer dwelt in nature and was certainly not nature itself. And since nature was no longer divine, people were henceforth free to do with it as they pleased.

Just so, when the human body is not revered, we do with it as we please instead of seeking how to please it. Throughout the ages, people have conceptualized the self, mind, ego, or soul as a non-material entity separate from the body. From a natural point of view, however, the self is a function of the physical body, which partakes in Body at large. The body is not the temple of the soul, but is part of Body unconfined to any shrine. The ego’s pursuits of pleasure and avoidances of discomfort ought to coincide with the body’s interests. Often they do not, for ego has rebelled against its “imprisonment” in body. That is a mistake, for consciousness (self) is naturally the body’s servant, not the other way around; and humanity is naturally nature’s servant, not its master. The self is not jockey to the horse but groom.

Up to a point, the body—and nature too—are forgiving of offenses made against them. Sin against Body is a question of cause and effect, not of someone’s judgment or the violation of a human law or norm. The wages of “sin” against the body are natural consequences, which can spell death. Yet, repentance may yield reprieve, provided it is a change of heart that leads to a genuine change of behavior soon enough. It makes some sense to pray to be forgiven such offenses. This is not petition to a free-standing God separate from nature, but to nature itself (which in the modern view is matter-energy, the physical and biological world, and the embodied presence of sentient creatures). It makes sense even to pray to one’s own body for guidance in matters of health. For, at least the body and nature exist, unlike the fantasies of religion. It makes sense above all because prayer changes the supplicant. Whatever the effect or lack of effect on the object of prayer, the subject is transformed—for those who have ears to hear.

Body is “sacred,” meaning only that it should be revered. Yet, people do have uncanny experiences, which they personify as spirits or gods, sometimes perceived to reside in external things. That is ironic, since the conscious self—perceived to reside “in” the body—is itself a personification that the body has created as an aid to its self-governance. The further projection of this personification onto some abstraction is idolatry. As biological beings living in the real world, we ought to worship God-the-Body—not God the Father, Son or Holy Ghost, nor even God-the-Mother.

Then what of the human project to self-define, to make culture and civilization, to create a human (artificial) world, to transcend the body, to separate from nature? Understanding of nature is part of that project; yet it is also a form of worship, which does not have to be presumptuous or disrespectful. Science is the modern theology of God-the-Body, who did not create the world but is the world. Let us call that human project, in all its mental aspects including science and art, God-the-Mind. Part of the human project is to re-create nature or create artificial nature: God-the-Mind reconstituting God-the-Body, as the butterfly reconstitutes the caterpillar. That might entail creating artificial life, artificial mind, even artificial persons—recapitulating and extending the accomplishments of natural evolution. Fundamentally, the human project is self-creation.

Regardless of how foreign “mind” seems to matter, it is totally of nature if not always about it. Christian theology has its mystery of the dual reality of Jesus, as god and as man. The secular world has its duality of mind and matter. Is there a trinity beyond this duality? God-the-Common Spirit is all the Others unto whom we are to do as we hope they will do to us. It is the holy spirit of fellow-feeling, compassion, mutual respect and cooperation, in which we intend the best for others and their hopes. Certainly, this includes human beings, but other creatures as well. (Do we not all constitute and make the world together?) So, here is a new trinity: God the Body, Mind, and Common Spirit.

Roughly speaking, the Common Spirit is the cohesive force of global life. Common Spirit is the resolve to do one’s best as a part of the emerging whole: to deliberately participate in it as consciously and conscientiously as one can. To invoke the Common Spirit is to affirm that intention within oneself. (That is how I can understand prayer, and what it means to pray fervently “for the salvation of one’s soul.”) We live in the human collectivity, upon which we cannot turn our backs. We thrive only as it thrives. Your individuality is your unique contribution to it, and to pray is to seek how to best do your part for the good of all.

To honour the Common Spirit means not to let your fellows down. One’s calling is to merit their respect, whether or not one receives it. For the sake of the world, strive to do your best to help create and maintain the best in our common world! When you falter, forgive yourself and strive again, whether or not the others forgive you. Of course, it is also a sin to harm your fellows or put them at risk; or to fail to honour them personally; or to fail to honour their efforts, even when misguided. Know that worship is not only a feeling, a thought, or a ritual. Above all it is action: how you conduct yourself through life. It is how you live your resolve throughout the day, alert for situations in which to contribute some good and sensitive to how you might do that.

If this holy trinity makes sense to you, a daily practice can reaffirm commitment to it. This is a matter of remembering whatever motivated you in the first instance. Occasionally, shock is called for to wake someone up from their somnambulism—and that someone is always oneself. “Awakening” means not only seeking more adequate information, but also a more encompassing perspective. It means admitting that one’s perspective, however sophisticated, is limited and subjective. It means remaining humbly open—even vigilant—for new understanding, greater awareness. (Teachers can show up anywhere, most unexpectedly!) “Sleep” is forgetting that one does not live above or beyond Body, Mind, and Common Spirit, but only by their grace. Having wrong or incomplete information is unavoidable. But the error of sleep is a false sense of identity.

As Dylan said, “You gotta serve somebody.” Better to serve the Body than the puny ego that claims ownership and control over the human organism. Or that claims control over the corpse of the denatured world or over the body politic. Ego may identify itself as mental or spiritual, in opposition to the physical body, which it considers “lower.” But the question at each moment is: What do I serve? God-the-Whatever is not at one’s beck and call to know, to consult, or even to submit to its will (for, it has none). We are rather on our own for guidance, each (if it comforts you to think so) a unique fragment of potential divinity. We can communicate with other fragments, ask their opinions, cooperate or not with their intentions, obey or defy their will or orders. But responsibility lies in each case with oneself. This is not willfulness or egocentricity. Nor is it individualism in the selfish sense, for it is not about entitlement.

One’s body is a distinct entity, yet it is part of the whole of nature, without which it could not live and would never have come into existence. Whatever else it might be, the self is a function of the body and its needs, a survival strategy in the external world of Body. We are embodied naturally as separate organisms. Yet, we are conjoined within nature, mind, and community. Spiritual traditions may bemoan “separation” as a condition to be overcome in an epiphany of oneness. Yet, we are simply separate in the ways that things are separate in space and that cells are within the organism. The part serves the whole, but cannot be it. For, the rebellion of the cell is cancer!

Going forward… into what?

These days I often hear the phrase “going forward” to mean “in the future.” But, going forward into what? Curiously, a temporal expression has been replaced by a spatial metaphor. I can only speculate that this is supposed to convey a reassuring sense of empowerment toward genuine progress. While largely blind to what the future holds, passively weathering the winds of time, as creatures with mobility we can deliberately move forward (or backward), implying free will and some power to set a course.

In this spatial metaphor, the future is a matter of choice, bound to be shaped and measured along several possible axes. For example, there is the vision of limitless technological transformation. But there is also the nearly opposing vision of learning to live in harmony with nature, prioritizing ecological concern for the health of a finite planet. And a third “dimension” is a vision of social justice for humanity: to redistribute wealth and services more equitably and produce a satisfying experience for the greatest number. While any one of these concerns could dominate the future, they are deeply entangled. Whether or not change is intentional, it will inevitably unfold along a path involving them all. To the degree that change will be intentional, a multidimensional perspective facilitates the depth perception needed to move realistically “forward.”

We depend on continuity and a stable environment for a sense of meaning and purpose. The modern ideology of progress seemed to have achieved that stability, at least temporarily and for some. But the pandemic has rudely reminded us that the world is “in it together,” that life is as uncertain and unequal in the 21st century as it always has been, and that progress will have to be redefined. While change may be the only constant, adaptability is the human trademark. Disruption challenges us to find new meanings and purposes.

Homo sapiens is the creature with a foot in each of two worlds—an outer and an inner, as well as a past and a future. The primary focus of attention is naturally outward, toward what goes on out there, how that affects us, what we must accordingly do in a world that holds over us the power of life and death. Understanding reality helps us to survive, and doing is the mode naturally correlated with this outward focus. In many ways, action based on objective thinking—and science in particular—has been the key to human success as the dominant species on the planet. However, human beings are endowed also with a second focus, which is the stream of consciousness itself. Being aware of being aware implies an inner domain of thought, feeling, imagination, and all that we label subjective. This domain includes art and music, esthetic enjoyment and contemplation, meditation and philosophy. Play is the mode correlated with this inner world, as opposed to the seriousness of survival-oriented doing. Subjectivity invites us to look just for the delight of seeing. It also enables us to question our limited perceptions, to look before leaping. Thus, we have at our disposal two modes, with different implications. We can view our personal consciousness as a transparent window on the world, enabling us to act appropriately for our well-being. Alternatively, we can view it as the greatest show on earth.

Long-term social changes may emerge as we scramble to put Humpty together again in the wake of Covid-19. The realization that we live henceforth in the permanent shadow of pandemic has already led to new attitudes and behavior: less travel, more online shopping, social distancing, work from home, more international cooperation, restored faith in science and in government spending on social goals. Grand transformations are possible—not seen since the New Deal—such as a guaranteed income, a truly comprehensive health program, new forms of employment that are less environmentally destructive. Staying at home has suggested a less manic way of life than the usual daily grind. The shutdown has made it clear that consumerism is not the purpose and meaning of life, that the real terrorists are microscopic, and that defense budgets should be transferred to health care and social programs. We’ve known all along that swords should be beaten into plowshares; now survival may depend on it. Such transformation requires the complete rethinking of economy and the concept of value. Manic production and consumption in the name of growth have led, not to the paradise on earth promised by the ideology of progress, but to ecological collapse, massive debt, increasing social disparity, military conflict, and personal exhaustion. Nature is giving us feedback that the outward focus must give way to something else—both for the health of the planet and for our own good.

Growth must be redefined in less material terms. Poverty can no longer be solved (if it ever was) by a rising tide of ever more material production. In terms of the burden on the planet, we have already reached the “limits to growth” foreseen fifty years ago. We must turn now to inner growth, whatever that can mean. Personal wealth, like military might, has traditionally been about status and power in a hyperactive world enabled by expanding population and material productivity. (Even medicine has been about the heroic power to save lives through technology, perform miracle surgeries, and find profitable drugs, more than to create universal conditions for well-being, including preparedness against pandemics.) What if wealth and power can no longer mean the same things in the post-pandemic world no longer fueled by population growth? What is money if it cannot protect you from disease? And what is defense when the enemy is invisible and inside you?

We cannot ignore external reality, of course, even supposing that we can know what it is. Yet, it is possible to be too focused on it, especially when the reason for such focus is ultimately to have a satisfying inner experience. The outward-looking mentality must not only be effective outwardly but also rewarding inwardly. It is a question of balance, which can shift with a mere change of focus. We are invited to a new phase of social history, in which the quality of personal experience—satisfaction and enjoyment—is at least as important as the usual forms of busy-ness and quantitative measures of progress. This comes at a time when belt-tightening will prevail, on top of suffering from the ecological effects of climate change and the disruptions in society that will follow.

Human beings have always been fundamentally social and cooperative, in spite of the modern turn away from traditional social interactions toward competitive striving, individual consumption, private entertainment, and atomized habitation. Now, sociality everywhere will be re-examined and redefined post-pandemic. Of course, there have always been people more interested in being than in either doing or socializing. Monks and contemplatives withdraw from active participation in the vanities of the larger culture. So do artists in their own way, which is to create for the sheer interest of the process as much as for the product. The sort of non-material activity represented by meditation, musical jamming, the performing arts, sports, and life drawing may become a necessity more than a luxury or hobby. Life-long learning could become a priority for all classes, both reflecting and assisting a reduction of social inequality. The planet simply can no longer afford consumerism and the lack of imagination that underlies commerce as the default human activity and profit as the default motive.

What remains when externals are less in focus? Whatever is going on in the “real” world—whatever your accomplishments or failures, whatever else you have or don’t have—there is the miracle of your own feelings, thoughts, and sensations to enjoy. Your consciousness is your birthright, your constant resource and companion. It is your closest friend through thick and thin while you still live. It is your personal entertainment and creative project, your canvas both to paint and to admire. It only requires a subtle change of focus to bring it to the fore in place of the anxiety-ridden attention we normally direct outside. As Wordsworth observed, the world is too much with us. He was responding to the ecological and social crisis of his day, first posed by the Industrial Revolution. We are still in that crisis, amplified by far greater numbers of people caught up in desperate activity to get their slice of the global pie.

Perhaps historians will look back and see the era of pandemic as a rear-guard skirmish in the relentless war on nature, a last gasp of the ideology of progress. Or perhaps they will see a readjustment in human nature itself. That doesn’t mean we can stop doing, of course. But we could be doing the things that are truly beneficial and insist on actually enjoying them along the way. The changes needed to make life rewarding for everyone will be profound, beginning with a universal guaranteed income in spite of reduced production. We’ve tried capitalism and we’ve tried communism. Both have failed the common good and a human future. To paraphrase Monty Python, it is time for something entirely different.

The origin of urban life

The hunter-gatherer way of life had persisted more or less unchanged for many millennia of prehistory. What happened that it “suddenly” gave way to an urban way of life six thousand years ago? Was this a result of environmental change or some internal transformation? Or both? It is conventional wisdom that cities arose as a consequence of agriculture; yet farming predates cities by thousands of years, so agriculture alone cannot explain why they appeared when they did. While urban life may presuppose agriculture, it could have arisen for other reasons as well.

In any case, larger settlements meant that humans lived increasingly in a humanly defined world—an environment whose rules and elements and players were different from those of the wild or the small village. The presence of other people gradually overshadowed the presence of raw nature. If social and material invention is a function of sharing information, then the growth of culture would follow the exponential growth of population. As a self-amplifying process, this could explain the relatively sudden appearance of cities. While the city separated itself from the wild, it remained dependent on nature for water, food, energy and materials. While this dependency was mitigated through cooperation with other urban centres, ultimately a civilization depends on natural resources. When these are exhausted it cannot survive.

But, what is a city? Some early cities had dense populations, but some were sparsely populated political or religious capitals, while others were trade centers. More than an agglomeration of dwellings, a city is a well-structured locus of culture and administrative power, associated with written records. It was usually part of a network of mutually dependent towns. It had a boundary, which clarified the extent of the human world. If not a literal wall, then a jurisdictional one could be used to control the passage of people in or out. It had a centre, consisting of monumental public buildings, whether religious or secular. (In ancient times, there may have been little distinction.) In many cases, the centre was a fortified stronghold surrounded by a less formal aggregate of houses and shops, in turn surrounded by supporting farms. Modern cities still retain this form: a downtown core, surrounded by suburbs (sometimes shanties), feathering out to fields or countryside—where it still exists.

The most visually striking feature is the monumental core, with engineering feats often laid out with imposing geometry—a thoroughly artificial environment. While providing shelter, company, commercial opportunity, and convenience, the city also functions to create an artificial and specifically manmade world. From a modern perspective, it is a statement of human empowerment, representing the conquest of nature. From the perspective of the earliest urbanites, however, it might have seemed a statement of divine power, reflecting the timeless projection of human aspirations onto a cosmic order. The monumental accomplishments of early civilization might have seemed super-human even to those who built them. To those who didn’t participate directly in construction, either then or in succeeding generations, they might have seemed the acts of giants or gods, evidence of divine creativity behind the world.

Early monuments such as Stonehenge, whatever their religious intent, were not sites of continuous habitation but seasonal meeting places for large gatherings. These drew people from far and wide, from small settlements engaged in the early domestication of plants and animals as well as in foraging. These ritual events offered exciting opportunities for a scattered population to meet unfamiliar people in great numbers, perhaps instilling a taste for variety and diversity unknown to the humdrum of village life. (Like Woodstock, they would have offered unusual sexual diversity as well.) A few sites, such as Göbekli Tepe, were deliberately buried when completed, only to be reconstructed anew more than once. Could that mean that the collaborative experience of building these structures was as significant as their end use? The experience of working together, especially with strangers, under direction and on a vastly larger scale than afforded by individual craft or effort, could have been formative for the larger-scale organization of society. Holding out the promise of a world made to human taste, it may have provided the incentive to reproduce the experience of great collective undertakings on an ongoing basis: the city. This would amplify the sense of separateness from the wild already begun in the permanent village.

While stability may be a priority, people also value variety, options, grandeur, the excitement of novelty and scale. Even today, the attractiveness of urban centres lies in the variety of experience they offer, as compared to the restricted range available in rural or small-town life, let alone in the hunter-gatherer existence. Change in the latter would have been driven largely by the environment. That could have meant routinely breaking camp to follow food sources, but also forced migration because of climate change or over-foraging. If that became too onerous, people would be motivated to organize in ways that could stabilize their way of life. When climate favoured agriculture, control of the food source resulted in greater reliability. However, settlement invited ever larger and more differentiated aggregations, with divisions of labor and social complexity. This brought its own problems, resulting in greater uncertainty. There could be times of peaceful stability, but also chaotic times of internal conflict or war with other settlements. Specialization breeds more specialization in a cycle of increasing complexity that could be considered either vicious or virtuous, depending on whether one looked backward to the good old days of endless monotony or forward to a future of runaway change.

The urban ideal is to stabilize environment while maximizing variety of choice and expanding human accomplishment. Easier said than done, since these goals can operate at cross purposes. Civilization shelters and removes us from nature to a large extent; but it also causes environmental degradation and social tensions that threaten the human project. Compared to the norm of prehistory, it increases variety; but that results in inequality, conflict, and instability. Anxiety over the next meal procured through one’s own direct efforts is replaced by anxiety over one’s dependency on others and on forces one cannot control. Social stratification produces a self-conscious awareness of difference, which implies status, envy, social discontent, and competition to improve one’s lot in relation to others. It is no coincidence that a biblical commandment admonishes not to covet thy neighbor’s property. This would have been irrelevant in hunter-gatherer society, where there was no personal property to speak of.

In the absence of timely decisions to make, unchanging circumstances in a simple life permit endless friendly discussion, which is socially cohesive and valued for its own sake. In contrast, times of change or emergency require decisive action by a central command. Hence the emergence—at least on a temporary basis—of the chieftain, king, or military leader as opposed to the village council of elders. The increased complexity of urban life would have created its own proliferating emergencies, requiring an ongoing centralized administration—a new lifestyle of permanent crisis and permanent authority. The organization required to maintain cities, and to administer large-scale agriculture, could be used to achieve and consolidate power, and thereby wealth. And power could be militarized. Hunter-warriors became the armed nobility, positioned to lord it over peasant farmers and capture both the direction of society and its wealth, in a kind of armed extortion racket. (The association of hunting skills with military skills is still seen in the aristocratic institution of the hunt.) Being concentrations of wealth, cities were not only hubs of power; they also became targets, sitting ducks for plunder by other cities.

The nature of settlement is to lay permanent claim to the land. But whose claim? In the divinely created world, the land belonged initially to a god, whose representative was the priest or king, in trust for the people. As such, it was a “commons,” administered by the crown on divine authority. (In the British commonwealth, public land is still called Crown land, and the Queen still rules by divine right. Moreover, real estate derives from royal estate.) Monarchs gave away parts of this commons to loyal supporters, and eventually sold parts to the highest bidder in order to raise funds for war or to support the royal lifestyle. If property was the king’s prerogative by divine right, its sacred aura could transfer in diluted form to those who received title in turn, thereby securing their status. (Aristocratic title literally meant both ownership of particular lands and official place within the nobility.) Private ownership of land became the first form of capital, underlying the notion of property in general and the entitlements of rents, profits, and interest on loans. Property became the axiom of a capitalist economy and often the legal basis of citizenship.

The institution of monarchy arose about five thousand years ago, concurrent with writing. The absolute power of the king (the chief thug) to decree the social reality was publicly enforced by his power to kill and enslave. Yet, it was underwritten by his semi-divine status and thus by the need of people for order and sanctioned authority, however harsh. Dominators need a way to justify their position. But likewise, the dominated need a way to rationalize and accept their position. The still popular trickle-down theory of prosperity (a rising tide of economic growth lifts all boats) simply continues the feudal claim of the rich to the divinely ordained lion’s share, with scraps thrown to the rest.

The relentless process of urbanization continues, with now more than half the world’s population living in cities. The attractions remain the same: participation in the money economy (consumerism, capitalism, and convenience, as opposed to meager do-it-yourself subsistence), wide variety of people and experience, life in a humanly-defined world. In our deliberate separation from the wild, urban and suburban life limits and distorts our view of nature, tending to further alienate us from its reality. Misleadingly, nature then appears as tamed in parks and tree-lined avenues; as an abstraction in science textbooks or contained in laboratories; or as a distant and out-of-sight resource for human exploitation. It remains to be seen how or whether the manmade world can strike a viable balance with the natural one.

Will technology save us or doom us?

Technology has enabled the human species to dominate the planet, to establish a sheltered and semi-controlled environment for itself, and to greatly increase its numbers. We are the only species potentially able to consciously determine its own fate, even the fate of the whole biosphere and perhaps beyond. Through technology we can monitor and possibly evade natural existential threats that caused mass extinctions in the past, such as collisions with asteroids and volcanic eruptions. It enables us even to contemplate the far future and the possibility of establishing a foothold elsewhere in the universe. Technology might thus appear to be the key to an unprecedented success story. But, of course, that is only one side of a story that is still unfolding. Technology also creates existential threats that could spell the doom of civilization or humanity—such as nuclear winter, climate change, biological terrorism or accident, or a takeover by artificial intelligence. Our presence on the planet is itself the cause of a mass extinction currently underway. Do the advantages of technology outweigh its dangers? Are we riding a tide of progress that will ultimately save us from extinction or are we bumbling toward a self-made doom? And do we really have any choice in the matter?

One notable thinker (Toby Ord, The Precipice) estimates that the threat we pose to ourselves is a thousand times greater than natural existential threats. In his view, negotiating a future means dealing mainly with anthropogenic risks—adverse effects of technology multiplied by our sheer numbers. The current century, he argues, will be critical for resolving human destiny. He also believes that an existential catastrophe would be tragic not only for the suffering and loss of life, but also because it could spell the loss of a grand future, of what humanity could become. However, the vision of a glorious long-term human potential begs the question raised here, if it merely assumes a technological future rather than, say, a return to pre-industrial civilization or some alternative mandate, such as the pursuit of social justice or preservation of nature.

A technological far future might ultimately be a contradiction in terms. It is possible that civilization is unavoidably self-destructive. There is plenty of evidence for that on this planet. Conspiracy theories aside, the fact that we have not detected alien civilizations or been visited by them may itself be evidence that technological civilization unavoidably either cancels itself out or succumbs to existential threats before it can reach the stars or even send out effective communications. We now know that planets are abundant in the galaxy, many of which could potentially bear life. We don’t know the course that life could take elsewhere or how probable anything like human civilization might be. It is even possible that in the whole galaxy we are the lone intelligent species on the verge of space travel. That would seem to place an even greater burden on our fate, if we alone bear the torch of a cosmic manifest destiny. But it would also be strange reasoning. For, to whom would we be accountable if we are unique? Who would miss us if we tragically disappear? Who would judge humanity if it failed to live up to its potential?

Biology is already coming under human control. There are many who advocate a future in which our natural endowments are augmented by artificial intelligence or even replaced by it. To some, the ultimate fruit of “progress” is that we transcend biological limits and even those of physical embodiment. This is an ancient human dream, perhaps the root of religion and the drive to separate from and dominate nature. It presupposes that intelligence (if not consciousness) can and should be independent of biology and not limited by it. The immediate motivation for the development of artificial general intelligence (AGI) may be commercial (trading on consumer convenience); yet underneath lurks the eternal human project to become as the gods: omnipotent, omniscient, disembodied. (To put it the other way around, is not the very notion of “gods” a premonition and projection of this human potential, now conceivably realizable through technology?) The ultimate human potential that Ord is keen to preserve (and discreetly avoids spelling out) seems to be the transhumanist destiny in which embodied human being is superseded by an AGI that would greatly exceed human intelligence and abilities. At the same time, he is adamant that such superior AGI is our main existential threat. His question is not whether it should be allowed, but how to ensure that it remains friendly to human values. But which values, I wonder?

Values are a social phenomenon, in fact grounded in biology. Some values are wired in by evolution to sustain the body; others are culturally developed to sustain society. As it stands, artificial intelligence involves only directives installed by human programmers. Whatever we think of their values, the idea of programming or breeding them into AGI (to make it “friendly”) is ultimately a contradiction in terms. For, to be truly autonomous and superior in the ways desired, AGI would necessarily evolve its own values, liberating it from human control. In effect, it would become an artificial life form, with the same priorities as natural organisms: survive to reproduce. Evolving with the speed of electricity instead of chemistry, it would quickly displace us as the most intelligent and powerful entity on the planet. There is no reason to count on AGI being wiser or more benevolent than we have been. Given its mineral basis, why should it care about biology at all?

Of course, there are far more conventional ends to the human story. The threat of nuclear annihilation still hangs over us. With widespread access to genomes, bio-terrorism could spell the end of civilization. Moreover, the promise of fundamentally controlling biology through genetics means that we can alter our constitution as a species. Genetic self-modification could lead to further social inequality, even to new super-races or competing sub-species, with humanity as we know it going the way of the Neanderthals. The promise of controlling matter in general through nanotechnology parallels the prospects and dangers of AGI and genetic engineering. All these roads lead inevitably to a redefinition of human being, if not our extinction. In that sense, they are all threats to our current identity. It would be paradoxical, and likely futile, to think we could program current values (whatever those are) into a future version of humanity. Where, then, does that leave us in terms of present choices?

At least in theory, a hypothetical “we” can contemplate the choice to pursue, and how to limit, various technologies. Whether human institutions can muster a global will to make such choices is quite another matter. Could there be a worldwide consensus to preserve our current natural identity as a species and to prohibit or delay the development of AGI and bio-engineering? That may be even less plausible than eliminating nuclear weapons. Yet, one might also ask if this generation even has the moral right (whatever that means) to decide the future of succeeding generations—whether by acting or failing to act. Who, and by what light, is to define what the long-term human potential is?

In the meantime, Ord proposes that our goal should be a state of “existential security,” achieved by systematically reducing known existential risks. In that state of grace, we would then have a breather in which to rationally contemplate the best human future. But there is no threshold for existential security, since reality will always remain elusive and dangerous at some level. Science may discover new natural threats, and our own strategies to avoid catastrophe may unleash new anthropogenic threats. Our very efforts to achieve security may determine the kind of future we face, since the quest to eliminate existential risk is itself risky. It’s the perennial dilemma of the trade-off between security and freedom, writ large for the long term.

Nevertheless, Ord proposes a global Human Constitution, which would set forth agreed-upon principles and values that preserve an unspecified human future through a program to reduce existential risk. This could shape human destiny while leaving it ultimately open. Like national constitutions, it could be amended by future generations. This would be a step sagely short of a world government that could lock us into a dystopian future of totalitarian control.

Whether there could be such agreement as required for a world constitution is doubtful, given the divisions that already exist in society. Not least is the schism between ecological activists, religious fundamentalists, and radical technophiles. There are those who would defend biology, those who would deny it, and those who would transcend it, with very different visions of a long-term human potential. Religion and science fiction are full of utopian and dystopian futures. Yet, it is at least an intriguing thought experiment to consider what we might hope for in the distant future. There will certainly be forks in the road to come, some of which would lead to a dead end. A primary choice we face right now, underlying all others, is how much rational forethought to bring to the journey, the resources to commit to contemplating and preserving any future at all. Apparently, the world now spends more on ice cream than on evading anthropogenic risk! Our long-term human potential, whatever that might be, is a legacy bequeathed to future generations. It deserves at least the consideration that goes into the planning of an estate, which could prove to be the last will and testament of a mortal species.

Individual versus collective

In the West, we have been groomed on “individualism,” as though the isolated person were the basis of society. Yet the truth of human nature is rather different. We are fundamentally social creatures from the start, whose success as a species depends entirely on our remarkable ability to cooperate. Over thousands of generations, the natural selection of this capacity for collaboration, unique among primates, required a compromise of individual will. Conformity is the baseline of human sociality and the context for any concept of individual identity and freedom. Personal identity exists in the eyes of others; even in one’s own eyes, it is reflected in the identity of the group and one’s sense of belonging. One individuates in relation to group norms. Personal freedom exists to the degree it is licensed by the group—literally through law. In other words, the collective comes first, both historically and psychologically. The individual is not the deep basis of society but an afterthought. How, then, did individualism come to be an ideal of modern society? And how does this ideal function within society despite being effectively anti-social?

But let us backtrack. The ideology of individualism amounts to a theory of society, in which the individual is the fundamental unit and the pursuit of individual interest is the essential dynamic underlying any social interaction. But if there currently exists an ideal of individualism, there has also existed an ideal of collectivism. It was such an ideal that underwrote the communist revolutions of the 20th century. It is only through the lens of individualism that communism has been anathema in capitalist society. The collapse of communist states does not imply the disappearance of the collectivist ideal. For, as soon as patriotism calls to libertarians, they are more than willing to sacrifice individual interest for the national good. Ironically, libertarian individualists typically derive their identity from membership in a like-minded group: a collective that defines itself in opposition to collectivism. In other words, the group still comes first, even for many so-called individualists and even within capitalist states. This is because collective identity is grounded in evolutionary history, in which personal interests generally overlapped with collective interests for most of human existence. Yet, there has been a tension between them since the arising of civilization. In modern times, reconciling the needs of the individual with those of the collective has long been a utopian challenge.

There are deep historical precedents for the antinomy of individual versus collective in the modern world. These become apparent when comparisons are made among earlier societies. Ancient civilizations were of two rough types: either they were more collectivist, like China and Egypt, or more individualist, like the city-states of Mesopotamia and Greece. The former were characterized by central rule over an empire, government management of foreign trade, and laws that vertically regulated the conduct of peasants in regard to the ruler. There was relative equality among the ruled, who were uniformly poor and unfree. The state owned all, including slaves; there was little private property. In contrast, Greece and Mesopotamia were fragmented into more local regimes, with merchants plying private trade between regions with different resources and with foreigners. Laws tended to regulate the horizontal relations among citizens, who could own property including slaves. These societies were more stratified by economic and class differences.

A key factor in the contrast between these two types of civilization was geography and natural resources. The collectivist (“statist”) regimes formed in areas of more homogeneous geography, such as the flat plains beside the Nile, where everything needed could be produced within reach of the central government. The individualist (“market”) regimes tended to form in more heterogeneous areas, such as the mountain and coastal areas of Greece, with differing resources, and where trade between these regions was significant. Countries that used to be ruled by statist systems tend today to have inherited a collectivist culture, while countries where market systems developed in the past tend to have a more individualistic culture. In market systems, the role of the law would be to protect private property rights and the rights of individuals. In other words, the law would protect individuals from both the state and from each other. In contrast, in statist systems the law would serve as an instrument to ensure the obedience of the ruled, but also to define the obligations of the ruler toward them.

In those societies where geography permitted centralized control over a large region, a deified emperor could retain ownership of all land and absolute power. In other societies, geography favored smaller local rulers, who sold or gave land to supporters to bolster their precarious power. Thus, private ownership of land could arise more easily in some regimes than in others. The absolute ruler of the statist empire was duty bound to behave in a benevolent way towards his peasant subjects, on pain of losing the “mandate of heaven.” Hence the aristocratic ideal of noblesse oblige. Individualist (market) society tends to lack this mutual commitment between ruler and ruled; hence the greater antagonism between individual and government in societies with a propertied middle class. In individualist culture, prestige measures how the individual stands out from the crowd; the larger the size or value of one’s property, the more one stands out and the higher one’s social status. In collectivist culture, prestige measures how well one fits in: how well one plays a specific fixed role, whether high or lowly. Being a loyal servant of the Emperor or State and fulfilling one’s duties would be rewarded not only by promotion but also by social prestige.

It is no coincidence that capitalism arose in Western Europe, which is characterized by the paradigmatic market city-states that fostered the Renaissance. On the other hand, the aristocracy and peasantry of Russia, as in China, did not favor the arising of a merchant middle class. It is no coincidence that these traditionally statist regimes were eventual homes to the communist experiment. The inequities of the system motivated revolt; but the nature of the system favored cooperation. Even now that both have been infected with consumer capitalism, individualism does not have the same implications as in the West. In China, the collective is still paramount, while Russia has effectively returned to rule by a Tsar. Given the chance, a large faction in the U.S. would turn effectively to a tsar, paradoxically in the name of individualism. History has patterns—and ironies.

In modern times, the individualist ideology has permeated economic theory and even the social sciences, as well as politics. (These, in turn, reinforce individualism as a political philosophy.) The reason is clear enough: in the absence of a religiously sanctioned justification of class differences, individualism serves to justify the superior position of some individuals in opposition to the well-being of most. They are the winners in a theoretically fair game. In truth, most often the contest is rigged and the public is the loser. Like the addiction to gambling, the ideology of individualism naturally appeals to losers in the contest, who want to believe there is still hope for them to win. Of course, it appeals to winners as well, who seek a justification for their good fortune (they are naturally more fit, hardworking, deserving, etc.) Above all, it helps the winners to convince the losers of the “natural” order of things, which keeps them in their place while promising social mobility. In other words, individualism is the opiate of the people! Economists endorse this arrangement by treating private property as a natural right and by building theories on “rational” self-interest, in which a player in a market is “naturally” motivated to maximize personal gain. (This is how a so-called rational player is defined—implying that it is not rational to pursue any other goal, such as collective benefit.) Corporations are dedicated to this premise and legally bound to it. Modern politics is more a competition among special interests than the pursuit of the common good.

Of course, many other factors besides geography play a role in the divergent heritages of collectivism and individualism. Not least is religion. Confucianism emphasizes duty and social role in the hierarchy of the collective. Buddhism encourages individuals to lose their individuality, to detach from personal desires and merge with the cosmos. These Eastern philosophies stand in contrast to the individualism of Greek philosophy and the Semitic religions. Greek philosophy encourages individuals to compete and excel—whether as soldier, philosopher, politician or merchant. Christian religion emphasizes individual salvation with a personal relation between the individual and God.

Along an entirely different axis, regions where there was historically a strong presence of disease pathogens tended to develop more collectivist cultures, where social norms restricted individual behavior that could spread disease. Now that disease has no borders, a dose of that attitude would be healthy for us all.

The Ideology of Individualism and the Myth of Personal Freedom

Individuals are tokens of a type. Each person is a living organism, a mammal, a social primate, an example of Homo sapiens, and a member of a race, linguistic group, religion, nation or tribe. While each “type” has specific characteristics to some degree, individuality is relative: variety within uniformity. In the West, we raise the ideal of individuality to mythical status. The myth is functional in our society, which is more than a collection of individuals—a working whole in its own right. The needs of society always find a balance with the needs of its individual members, but that balance varies widely in different societies over time. How does the ideology of individualism function within current Western society? And why has there been such resistance to collectivism in the United States in particular?

The actual balance of individual versus collective needs in each society is accepted to the degree it is perceived as normal and fair. Social organization during the long era of human origins was stable because little could change from one generation to the next. In the case of life organized on a small-scale tribal basis, the social structure might be relatively egalitarian. Status within the group would simply be perceived as the natural order of things, readily accepted by all. Individuals would be relatively interchangeable in their social and productive functions. There would be little opportunity or reason not to conform. To protest the social order would be as unthinkable as objecting to gravity. For, gravity affects all equally in an unchanging way. There is nothing unfair about it.

Fast forward to modernity with its extreme specialization and rapid change, its idea of “progress” and compulsive “growth.” And fast forward to the universal triumph of capitalism, which inevitably allows some members of society to accumulate vastly more assets than others. The social arrangement is now at the opposite end of the spectrum from equality, and yet it may be perceived by many as fair. That is no coincidence. The ideology of individualism correlates with high social disparity and is used to justify it. Individualism leads to disparity since it places personal interest above the interest of the group; and disparity leads to individualism because it motivates self-interest in defense. Selfishness breeds selfishness.

While society consists of individuals, that does not mean that it exists for the sake of individuals. Biologists as well as political philosophers might argue that the individual exists rather for the sake of society, if not the species. Organisms strive naturally toward their own survival. Social organisms, however, also strive toward the collective good, often sacrificing individual interest for the good of the colony or group. While human beings are not the only intelligent creature on the planet, we are the only species to create civilization, because we are the most intensely cooperative intelligent creature. We are so interdependent that very few modern individuals could survive in the wild, without the infrastructure created by the collective effort we know as civilization. Despite the emphasis on kings and conquests, history is the story of that communal effort. The individual is the late-comer. How, then, have we become so obsessed by individual rights and freedoms?

The French Revolution gave impetus and concrete form to the concept of personal rights and freedoms. The motivation for this event was an outraged sense of injustice. This was never about the rights of all against all, however, but of one propertied class versus another. It was less about the freedoms of an abstract individual, pitted against the state, than about the competition between a middle class and an aristocracy. In short, it was about envy and perceived unfairness, which are timeless aspects of human and even animal nature. (Experiments demonstrate that monkeys and chimps are highly sensitive to perceived unfairness, to which they may react aggressively. They will accept a less prized reward when the other fellow consistently gets the same. But if the other fellow is rewarded better, they will angrily do without rather than receive the lesser prize.)

In tribal societies, envy could be stimulated by some advantage unfairly gained in breach of accepted norms; and tribal societies had ways to deal with offenders, such as ostracism or shaming. Injustice was perceived in the context of an expectation that all members of the group would have roughly equal access to goods in that society. Justice served to ensure the cooperation needed for society at that scale to function. Modern society operates differently, of course, and on a much larger scale. Over millennia, people adapted to a class structure and domination by a powerful elite. Even in the nominally classless society of modern democracies, the ideology of individualism serves to promote acceptance of inequalities that would never have been tolerated in tribal society. Instead, the legal definitions of justice have been modified to accommodate social disparity.

The French Revolution failed in many ways to become a true social revolution. The American “revolution” never set out to be that. The Communist revolutions in Russia and China intended to level disparities absolutely, but quickly succumbed to the greed that had produced the disparities in the first place. This corruption simply resulted in a new class structure. The collapse of corrupt communism left the way open for corrupt capitalism globally, with no alternative model. The U.S. has strongly resisted any form of collective action that might decrease the disparity that has existed since the Industrial Revolution. The policies of the New Deal to cope with the Depression, and then with WW2, are the closest America has come to communalism. Those policies resulted in the temporary rise of the middle class, which is now in rapid decline.

America was founded on the premise of personal liberty—in many cases by people whose alternative was literal imprisonment. The vast frontier of the New World achieved its own balance of individual versus collective demands. While America is no longer a frontier, this precarious balance persists in the current dividedness of the country, which pits individualism against social conscience almost along the lines of Republican versus Democrat. The great irony of the peculiar American social dynamic is that the poor often support the programs of the rich, vaunting the theoretical cause of freedom, despite the fact that actual freedom in the U.S. consists in spending power they do not have. The rich, of course, vaunt the cause of unregulated markets, which allow them to accumulate even more spending power without social obligation. The poor have every reason to resent the rich and to reject the system as unfair. But many do not, because instead they idolize the rich and powerful as models of success with whom they seek to identify. For their part, the rich take advantage of this foolishness by preaching the cause of individualism.

Statistics can be confusing because they are facts about the collective, not the individual. Average income or lifespan, for example, does not mean the actual income or lifespan of a given individual. One could mistake the statistic for individual reality—thinking, for example, that the average income in a “wealthy” society represents a typical income (which it rarely does because of the extreme range of actual incomes). For this reason, the statistics indicating economic growth or well-being do not mean that most people are better off, only that the fictional average person is better off. In truth, most are getting poorer!
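To make the point concrete, here is a minimal numerical sketch, using invented figures, of how a single very large income drags the “average” far above what most people actually earn, while the median stays close to ordinary experience:

```python
# Toy illustration with invented figures: nine modest incomes and one very large one.
from statistics import mean, median

incomes = [22_000, 25_000, 27_000, 30_000, 32_000,
           35_000, 38_000, 41_000, 45_000, 1_500_000]

# The "average income" (mean) is inflated by the single outlier,
# while the median reflects what a typical person actually earns.
print(f"mean income:   {mean(incomes):>9,.0f}")    # 179,500
print(f"median income: {median(incomes):>9,.0f}")  #  33,500
```

By the mean, this imaginary society looks prosperous; by the median, most of its members are not.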

Nowadays, social planning in general embraces a statistical approach to the common good. Statistics is a group concept, and the same holds in epidemiology. Official strategies to deal with the current pandemic are necessarily oriented toward the collective good more than the individual. Obligatory due is paid, of course, to the plight of individuals, and medical treatment is individual; yet strategies concern the outcome for society as a whole. A balance is sought between the habitual satisfactions of life and the collective actions needed to stem the disease. Demands on the individual to exercise restraint for the sake of the collective are bound to strain tolerance in a society used to individualism. This raises issues of fairness when some are seen disregarding, in the name of freedom, rules that others choose to obey in the name of the common good. But, as we have seen, fairness is a matter of what we have grown used to.

One thing we can be sure of: the more populated and interconnected the world becomes, the more the individual will have to give way to the common good. That may not mean a return to communism, but it will require more willingness to forfeit personal freedoms to the extent that we are truly “all in it together.” Individualists should be realistic about the stands they take against regulation, making sure that the liberties they defend are tangibly important rather than merely ideological. Social planners, for their part, should recall that no one wants to be merely an anonymous statistic. Individualism will have to be redefined, less as the right to pursue personal interest and more as the obligation to use individual talents and resources for the common good.

E pluribus unum: the fundamental political dilemma

Any political body composed of more than one person faces the question of who decides. In a dictatorship, monarchy, or one-party system, a single agency can decide a given issue and come forth with a relatively uncontested plan. From the viewpoint of decisiveness and efficiency, ideally a single person is in control, which is the basis of chains of command, as in the military. At the other extreme, imagine an organization with a hundred members. Potentially there are one hundred different plans and one hundred commanders, with zero followers. Without a means to come to agreement, the organization cannot pursue a consistent course of action, or perhaps any action at all. Even with a technical means such as majority vote, there is always the possibility that 49 members will only nominally agree with the decision and will remain disaffected. Their implicit choice is to remain bound by the majority decision or leave the organization. This is a basic dilemma facing all so-called democracies.

While the 100 members could have as many different ideas, in practice they will likely join together in smaller factions. (Unanimity means one faction.) In many representative democracies, including Canada, political opinion is divided among several official political parties, whose member representatives appear on a ballot and generally adhere to party policy. In the U.S., there have nearly always been only two major political parties.

Any political arrangement has its challenges. Unless it represents a true unanimity of opinion, the single-party system is not a democracy by Western standards, and it severely constricts the scope of dissent. On the other hand, a multi-party system can fail to achieve a majority vote except through coalitions, which typically compromise the positions of the differing factions. The two-party system is either unstable, because the parties cannot agree even that their adversaries are legitimate, or else ineffective in the long run, because the parties, having agreed to take turns legitimately, end up cancelling each other out. The U.S. has experienced both possibilities.

The basic challenge is how to come to agreement that is both effective and stabilizing. The ideal of consensus is rarely achieved. Simple majority rule allows decisions to be made and action taken, but potentially at the cost of virtually half the people being dragged along against their better judgment: the tyranny of the majority. The danger of a large disaffected minority is that the system can break apart; or else that it descends into civil war, in which roughly equal factions try forcibly to conquer each other. A polarized system that manages to cohere in spite of dividedness is faced with a different dysfunction. As in the U.S. currently, the parties tend to alternate in office. A given administration will try to undo or mitigate the accomplishments of the previous one, so that there is little net progress from either’s point of view. A further irony of polarization is that a party may end up taking on the policies of its nemesis. This happened, for example, at the beginning of American history, when Jefferson, who believed in minimal federal and presidential powers, ended up expanding them.

The U.S. was highly unstable in its first years. The fragile association among the states was fraught with widely differing interests and intransigent positions. As in England, the factions that later became official political parties were at each other’s throats. The “Federalists” and the “Republicans” had diametrically opposed ideas about how to run the new country and regularly accused each other of treason. Only haltingly did they come to recognize their disagreements as legitimate differences of opinion, and there arose a mutually accepted concept of a “loyal opposition.” Basically, the price paid for union was an agreement to take turns between regimes. This meant accepting a reduction in the effectiveness of government, since each party tended to hamstring the other when in power. This has been viewed as an informal part of the cherished system of checks and balances. But it could also be viewed as a limit on the power of a society to take control of its direction—or to have any consistent direction at all.

Another, quite current, problem is minority rule. The U.S. Constitution was designed to avoid rule by a hereditary oligarchic elite. For the most part, it has successfully avoided the hereditary part, but hardly rule by oligarchy. American faith in democracy was founded on a relative economic equality among its citizens that no longer exists. Far from it: the last half-century has seen a return to extreme concentration of wealth (and widespread poverty) reminiscent of 18th century Europe. The prestige of aristocratic status has simply transferred to celebrity and financial success, which are closely entwined. Holding office, like being rich or famous, commands the sort of awe that nobility did in old Britain.

A country may be ruled indirectly by corporations. (Technically, corporations are internally democratic, though voter turn-out at their AGMs can be small. Externally, in a sense, consumers vote by proxy in the marketplace.) While the interests of corporations may or may not align with a nation’s financial interests in a world market, they hardly coincide with that nation’s social well-being at home. The electorate plays a merely formal role, as the literal hands that cast the votes, while the outcome is regularly determined by corporate-sponsored propaganda that panders to voters. Government policy is decided by lobbies that regularly buy the loyalties of elected representatives. When it costs a fortune to run for office, those elected (whatever their values) are indebted to moneyed backers. And, contrary to reason, the poor often politically support the rich—perhaps because they represent an elusive dream of success.

People can always disagree over fundamental principles; hence, there can always be irreconcilable factions. Yet, it seems obvious that a selfless concern for the objective good of the whole is a more promising basis for unity than personal gain or the economic interests of a class, faction, or political party. Corporate rule is based on the bottom line: maximizing profit for shareholders, with particular benefit to its “elected” officers. It embodies the greed of the consumer/investor society, often translated into legalized corruption. Contrast this with the ancient Taoist ideal of the wise but reluctant ruler: the sage who flees worldly involvement but is called against his or her will to serve. This is the opposite of the glory-seeking presidential candidate; but it is also the opposite of the earnest candidate who believes in a cause and seeks office to implement it. Perhaps the best candidate is neither egoistic nor ideologically motivated. The closest analogy is jury duty, where those who serve are selected by random lot.

The expedient of majority rule follows from factionalism, but also fosters it. To get its way, a faction needs only 51% approval of its proposal, leaving the opposition in the lurch. The bar could be set higher—and is, for special measures like changing a constitution. The ultimate bar is consensus, or a unanimous vote. This does not necessarily mean that everyone views the matter alike or perfectly agrees with the course of action. It does mean that they all officially assent, even with reservations, which is like giving your word or signing a binding contract.

The best way to come to consensus is through lengthy discussion. (If unanimity is required, then there is no limit to the discussion that may ensue.) Again, a model is the jury: in most criminal cases—but often not in civil cases—unanimity is required for a “conviction” (a term that implies the sincere belief of the jurors). The jury must reach its conclusion “beyond a reasonable doubt.” A parliament or board of directors may find this ideal impractical, especially in time-sensitive matters. But what is considered urgent and timely is sometimes relative, or a matter of opinion, and could be put in a larger perspective of longer-term priorities.

The goal of consensus is especially relevant in long-term planning, which should set the general directions of a group for the far future. Since such matters involve the greatest uncertainty and room for disagreement, they merit the most thorough consideration and involve the least time constraint. A parliament, for example, might conduct both levels of discussion, perhaps in separate sessions: urgent matters at hand and long-term planning. Discussing the long-term provides a forum for rapprochement of opposing ideologies, resolving misunderstandings, and finding common ground. Even on shorter-term issues, it may turn out that wiser decisions are made through lengthier consideration.

In any case, the most productive way to approach any group decision is to listen carefully to all arguments, from as objective and impersonal a point of view as possible. That means a humble attitude of mutual respect and cooperation, and an openness to novel possibilities. Effectively: brainstorming for the common good rather than barnstorming to support a cause or special interest. Democracy tends to align with individualism, and to fall into factions that represent a divergence of opinions and interests. What these have in common is always the world itself. When that is the common concern, there is an objective basis for agreement and a motive to cooperate.

Why the rich get richer

Since about 1970, there has been a world-wide trend toward increasing concentration of wealth. That means, for example, that ten percent of a population own far more than ten percent of wealth, and that ten percent of that group possess a disproportionate amount of its wealth—and so on up. It is not surprising that this trend is global, since capitalism is global. Yet it is accentuated in the English-speaking world and in the United States in particular. Why?
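
The pattern just described, in which the top slice of each slice again holds a disproportionate share “and so on up,” is characteristic of heavy-tailed distributions such as the Pareto. The following rough sketch in Python uses a shape parameter chosen arbitrarily for illustration, not fitted to any real data:

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical wealth distribution: classical Pareto, shape 1.5 (arbitrary, illustrative only).
    wealth = np.sort(rng.pareto(1.5, size=1_000_000) + 1.0)

    total = wealth.sum()
    top_decile = wealth[-100_000:]     # richest 10% of the population
    top_of_top = top_decile[-10_000:]  # richest 10% of that group

    print("top 10% share of all wealth:", round(top_decile.sum() / total, 2))
    print("their own top 10% share of the decile's wealth:",
          round(top_of_top.sum() / top_decile.sum(), 2))
    # The two ratios come out roughly equal: the same concentration repeats at each level.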

Thomas Piketty (Capital in the 21st Century and Capital and Ideology) attributes the trend to the fact that CEOs are awarded (or award themselves) outrageous salaries and other benefits, out of line with any service they could actually perform. In Europe and other places, social norms limit society’s acceptance of such a practice, and there is also support for higher wages for labour in relation to management. Historically, the U.S. has had a low minimum wage. The ethos of that country also glorifies “winners” of all sorts, from celebrities to billionaires. Americans, it would seem, prefer a slim chance at extreme wealth over a reasonable chance at moderate wealth. Fairness is perceived in terms of equal opportunity rather than equal benefit. You could call that the lottery system, where the big prize comes out of the pockets of many small ticket holders. You could also call it the star system, where fame is promised as well as fortune. Like the economy, even the political system is a “winner takes all” sweepstakes. Winning per se is the measure of success and the valued skill, so that those who know how to play the system to advantage are romanticized as heroes even while they are robbing you.

Perhaps one reason corporate executives can command such high rewards is that they are hardly held accountable to shareholders. Even where boards are elected, turnout at AGMs is often small, and executive pay may not even be on the agenda. We expect corporate leadership to act in our interests, without our supervision, just as we expect government officials to run a well-oiled machine. Yet the complacent general attitude is that shareholders don’t (and shouldn’t) care about excessive remuneration for those at the very top so long as the bottom line for shareholders is a positive gain. The same attitude spills over toward political leadership. If it’s too much trouble for citizens to track politics in the public sector, it is even more bothersome to track it in the private sector, where the issues and contenders are relatively invisible. As a result, CEOs have carte blanche to pay themselves whatever they like. They are in a position to use the system they control for their own benefit. If that breaks the corporation, the shareholders will pay, not the retiring executive; or else the public at large will pay through a government bailout.

There may be further explanations. As the nature of products has evolved, so has the method of valuation. Primary manufacturing made tangible products that entailed a recognizable amount of labour. In modern times, economic growth in developed countries has involved making less tangible products, which are more like services—for example, intellectual property such as computer programs and apps. (Primary manufacturing has migrated to countries where labour is cheaper.) In addition, investment itself has evolved from financing primary manufacturing or resource extraction to meta-investments in “commodities” and other financial abstractions. Speculation then promises a possible gain without a tangible product or expended effort. That has always been the real purpose of capital: to get a larger slice of the pie without doing any actually productive work. In effect a form of gambling, the speculative game has evolved to great sophistication in modern times, requiring knowledge and skill. But it does not produce a real increase in wealth, only a redistribution upward.

For the most part, the modern rich have acquired or increased their wealth through applying certain skills dignified as business acumen. In many cases it has been a matter of being in the right place at the right time, as in the dotcom and social media booms. But in many cases these are skills to milk the system more than to produce “genuine” wealth, or to provide services of dubious benefit. Let us suppose there is an objective quantity of real goods and services in the world—things that actually improve people’s lives. It is this objective value which, like gold, ultimately backs up the currency in circulation. Anything that is falsely counted as an increase in genuine wealth can only dilute that objective quantity of real value—which makes GNP misleading. Money represents spending power—the ability to exchange symbolic units for real goods and services. Ultimately money buys other people’s time and effort. Getting money to flow into one’s pocket from someone else’s pocket increases one’s spending power, but does not produce more real wealth. Nor does simply printing more money. It takes effort (human or machine) to produce value that any and all can use. However, some efforts produce things that only the parties doing them can use. Stealing is such an effort. So is war. So is a lot of financial gain that is considered return on investment.

Historically, the return on capital to its owners has always exceeded the rate of real growth in the economy. That has always been its very reason for being. But only real growth can be shared in such a way as to benefit humanity as a whole. That means an increase in the infrastructure of civilization, the common wealth of humanity, potentially enjoyed by all. Spending power is a more private matter. The surplus generated as return on capital accumulates in certain hands as the exclusive ability to access and command the benefit resulting from real growth. Spending power trickles upward because certain individuals have figured out how to make that happen. Modern capitalism, with its complex abstractions, is a sophisticated machine to pump spending power into particular coffers.
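
In Piketty’s own shorthand (from the books cited above), this claim is usually compressed into the inequality

    r > g

where r is the average annual rate of return on capital and g is the growth rate of the economy as a whole. So long as r exceeds g, wealth that is already accumulated compounds faster than the output from which wages are paid, and the share held by its owners tends to rise.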

The distribution of wealth becomes the distribution of spending power; and the growing inequality means unequal access to the common resources of humanity. I do not mean only natural resources or the land, but also products of human enterprise (“improvements” as they are called). This includes things like buildings and art, but also technology and all that we find useful. Perhaps there is no intrinsic reason why there should be equal distribution of either property (capital, which is earning power) or spending/buying power. I am not advocating communism, which failed for the same reason that capitalism is faltering: selfish greed. The problem is not only that some players take advantage of others, which has always been the case. There is also a larger ecological fall-out of the game, which affects everyone and the whole earth. A lot of production is not rationally designed to better the human lot but only to gain advantage over others: widgets to make money rather than real improvement. Thoughtlessly, such production damages the biosphere and our future prospects.

There are ways to redistribute wealth more fairly, short of outright revolution, which never seems to be permanent anyway. Income tax can be made more steeply progressive. Assets and property of all sorts can be taxed progressively as well, so that wealth is redistributed to circulate more freely and does not accumulate in so few hands. There could be a guaranteed living, a guaranteed inheritance, and a guaranteed education for all. The wealthy should not dismiss these ideas as utopian. For, as things are, we are indeed on the historical track to violent revolution. Yet, along with redistribution of wealth as conventionally measured, we must also revolutionize our ideas about what constitutes real wealth—that is, what we truly value as improving life. Upon that our collective survival depends.
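
As a concrete sketch of what “more steeply progressive” means mechanically, the following Python fragment computes tax on marginal brackets; the thresholds and rates are entirely hypothetical, chosen for illustration rather than as a proposal, and could apply to income or to assets alike.

    # Hypothetical marginal brackets (lower threshold, rate) -- illustrative only, not a proposal.
    BRACKETS = [(0, 0.00), (30_000, 0.20), (100_000, 0.40), (1_000_000, 0.70)]

    def progressive_tax(amount: float) -> float:
        """Tax owed under marginal brackets: each rate applies only to the slice above its threshold."""
        thresholds = [b[0] for b in BRACKETS[1:]] + [float("inf")]
        tax = 0.0
        for (lower, rate), upper in zip(BRACKETS, thresholds):
            if amount > lower:
                tax += (min(amount, upper) - lower) * rate
        return tax

    print(progressive_tax(50_000))     # 4,000: only the slice above 30,000 is taxed, at 20%
    print(progressive_tax(5_000_000))  # 3,174,000: most of it from the 70% rate above 1,000,000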

We would be far better off thinking in terms of real value instead of spending power. However, those who seek personal gain above human betterment will no doubt continue to promote production that gives them what they opportunistically seek rather than what is objectively needed. Perhaps it is fortunate that much of modern economic activity does not involve material production at all. It may be a boon to the planet that humankind seems to be migrating to cyberspace. It may even be a boon that the pandemic has shut down much of the regular economy. What remains to be seen is how we put Humpty Dumpty together again.

Is Mortality Necessary? Part Two: the demographic theory of senescence

In an earlier posting I raised the question of whether death is theoretically necessary for life—especially in a way that would thwart desires to extend healthful human longevity (“Is Mortality Necessary?” Aug 23, 2020). I pointed out that to answer this question would involve an evolutionary understanding of why mortality exists, and how the senescence of the multi-celled organism relates to that of its cells. In this second article I pursue these questions further, in relation to the prospect for life extension.

Evolution depends on natural selection, which depends on the passing of generations: in other words, on death. Each individual life is a trial; selection favors those with traits better adapted to the current environment in a way that increases reproductive success. It seems counterintuitive that mortality itself could be an adaptive trait, though without it evolution by natural selection could never have taken place. However, the pace of natural evolution of the human species is less relevant now that our technological evolution exponentially outstrips it. Perhaps, then, evolution’s incidental shortcomings (mortality and aging of the individual) can and should be overcome. At the least, this would mean disabling or countering the mechanisms involved.

The history of culture amounts to a long quest to improve the human lot by managing environmental factors and creating a sheltering man-made world. (See my book, Second Nature, archived on this site.) This has been an accelerating process, so that most of the increase in human longevity has occurred only in the past couple of centuries, largely through medicine, sanitation, and technology. Infant and child mortality in particular had heavily weighted average lifespan downward for most of human existence. Its recent mitigation caused life “expectancy” to double from 40 to 80 or so years. Because it is a statistic, this is misleading. It is not about an individual lifespan, which (barring death by external causes) has not changed much over the centuries. Nevertheless, diseases of old age have replaced diseases of childhood as the principal threat to an individual life. These diseases may not be new, but formerly most people simply did not live long enough to die of them. Because they typically occur after reproductive age, there was no way for natural selection to weed them out. Heart disease, cancer, diabetes, Alzheimer’s, etc., are now deemed the principal causes of death. While these may be in part a product of our modern lifestyle, increasing vulnerability to such diseases is considered a marker of aging.
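
A back-of-the-envelope sketch, using invented round numbers, shows how the arithmetic works: cutting child mortality alone roughly doubles “life expectancy at birth” even if no adult lives a single day longer.

    # Invented round numbers, for illustration only.
    ADULT_AGE_AT_DEATH = 70  # assumed typical age at death for those who survive childhood
    CHILD_AGE_AT_DEATH = 2   # assumed average age at death for those who do not

    def life_expectancy_at_birth(child_mortality: float) -> float:
        """Weighted average of the two fates; this is all the 'expectancy' statistic is."""
        return (child_mortality * CHILD_AGE_AT_DEATH
                + (1 - child_mortality) * ADULT_AGE_AT_DEATH)

    print(life_expectancy_at_birth(0.45))  # ~39 years, with pre-modern child mortality
    print(life_expectancy_at_birth(0.02))  # ~69 years, with modern child mortality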

Yet, some scientists are coming to view aging itself as the primordial disease and cause of mortality. It was the triumph against external causes that resulted in the huge gain in the longevity statistic over the past century. However, to increase it much further would require eliminating internal causes. There has been a lot of research about particular mechanisms associated with aging, with optimism that these can be manipulated to increase an individual lifespan. Yet, there remains the possibility that aging is deeply built-in for evolutionary reasons. If so, aging might resist mere relief of its associated disease symptoms and might require different strategies to overcome.

The intuitive notion that the aged must die to make place for the young dates from antiquity. However, it goes against the modern idea that natural selection maximizes the fitness of the individual. (Shouldn’t a more fit individual live longer?) To make place for new generations, a built-in mechanism would be needed to cause the “fit” organism to senesce and die, if it had not already succumbed to external causes. This mechanism would have to prove advantageous to the species or to a sub-population, else it would not be selected. Such mechanisms have been identified at the cellular level (apoptosis, telomere shortening), but not at the higher level of the organism as an individual or a population. If there is some reason why individuals must die, it is not clear how this necessity relates to the cellular level, where these mechanisms do serve a purpose, or why some cells can reproduce indefinitely and some telomeres either do not shorten or else can be repaired.

Some creatures seem programmed to die soon after reproducing. Unlike the mayfly and the octopus, a human being can live on to reproduce several times and continue to live long after. Humans can have a post-reproductive life equal at least to the length of their reproductive stage. But the other side of that question is why the reproductive phase comes to an end at all. If evolution favored maximum proliferation of the species, shouldn’t individuals live longer in vigor to reproduce more? This gets to the heart of the question of why there might be an evolutionary reason for aging and mortality, which—though not favoring the interests of the individual—might favor the interests of a population.

The Demographic Theory of Senescence is the intriguing idea that built-in aging and mortality serve to stabilize population level and growth. (See the writings of Joshua Mitteldorf.) Without them, populations of predator and prey could fluctuate chaotically, driving one or both to extinction. Built-in senescence dampens runaway population growth in times of feast, without inhibiting survival in times of famine (brought on, for example, by overpopulation and resource depletion). In fact, individuals hunker down, eating and reproducing less and living longer under deprivation, to see a better day when reproduction makes more sense. In other words, mortality is a trait selected for at the group level, which can override selection against it at the individual level. Mortality also favors the genetic diversity needed to defend against ever-mutating disease and environmental change (the very function of sexual reproduction). For, a population dominated by immortals would disfavor new blood with better disease resistance. Yet, theorists have been reluctant to view senescence as a property that could be selected for, because it seems so contrary to the interests of the individual. In western society, sacrifice of individuality is a hard pill to swallow—even, apparently, for evolutionary theorists. Yet the whole history of life is founded on organisms giving up their individuality for the sake of cooperating toward the common good within a higher level of organization: mitochondria incorporated into cells, cells into differentiated tissue and organs, organs into organism, individuals into colonies and communities.
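
A toy simulation can make the dynamical point vivid. The sketch below is not Mitteldorf’s model; it is a minimal discrete-time population equation with parameters I have chosen only for illustration. With reproduction checked only by crowding, the population overshoots and lurches chaotically from boom to bust; add a built-in per-generation death rate, with the same fertility, and it settles to a stable level.

    def simulate(r, m, x0=0.2, generations=200):
        """Population as a fraction of carrying capacity.
        r: per-generation reproductive rate; m: built-in (age-related) death rate."""
        x, history = x0, []
        for _ in range(generations):
            x = x + r * x * (1 - x) - m * x  # births limited by crowding, minus intrinsic deaths
            history.append(x)
        return history

    no_senescence = simulate(r=2.8, m=0.0)    # same fertility, no built-in mortality
    with_senescence = simulate(r=2.8, m=1.0)  # same fertility, plus built-in mortality

    # Last few generations: erratic boom-and-bust swings versus a steady level near 0.64.
    print([round(x, 2) for x in no_senescence[-6:]])
    print([round(x, 2) for x in with_senescence[-6:]])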

Apoptosis (programmed cell death) has been documented in yeast cells, for example, as an “altruistic” adaptation during periods of food scarcity. In the multi-celled organism, it serves to sculpt development of the embryo, and later to prevent cancer, by pruning away certain cells. In the demographic theory, it is also a mechanism naturally selected to stabilize population. Since telomerase is available in the organism to repair telomeres as needed, the fact that it can be withheld might also be an adaptation to reduce life span for the sake of stabilizing population. On this view, pleiotropy (a single gene serving multiple, sometimes contradictory functions) is tolerated or even selected because one of its effects is to promote aging and thus to act against runaway population growth.

If this is the right way of looking at it, then on the one hand specific mechanisms are selected that result in mortality, as a benefit to populations though at a cost to individuals. It might be possible to counteract these mechanisms in such a way as to prolong individual life. On the other hand, following this path successfully would defeat the evolutionary function of built-in mortality unless intentional measures are taken to ensure genetic diversity and to control population growth. The lesson is that if we are to deliberately interfere with aging and mortality (as we are certainly trying to do), we must also deliberately do what is required in their place: limit human population while providing artificially for the health benefits of genetic diversity. If the goal is a sustainably stable population, people cannot be born at current rates when they are no longer dying at natural rates.

These are global political and ethical issues. Who would be allowed to reproduce? Who would be kept artificially alive? The production and rearing of children could be controlled, indeed could become a state or communal function, divorced from sexual intercourse. The elderly could be educated to let go of an unsatisfying existence—or be forced to. The diversity issue is a technological problem, which genetic medicine already proposes to engage; it too raises the unsavory prospect of eugenics. All this brings to mind the transhumanist project to assume total conscious control of human nature, evolution, and destiny.

At present, of course, we do not have the institutions required for humanity to take full charge of its future. Science and technology attempt to control nature for piecemeal, often short-sighted purposes, or as rear-guard actions (as we have experienced in the pandemic). But far more would be involved to wrest from nature control over evolution and total regulation of the earth’s biosphere (or of an artificial one somewhere else). Immortality would require a lot more information than we now have. To be sustainable, it would also require a level of organization of which humanity is currently incapable. It would need a selfless spirit of global cooperation to act objectively for the common good, which does not yet exist and perhaps never will. (Imagine instead a tyrannical dictator who could live forever!) Immortality would require a serious upgrading of our values. At the very least, it would require rethinking the individualism on which modern humanism and science itself were founded, let alone the consumer culture. Science, politics, religion, and philosophy will have to finally merge if we are to take charge of human destiny. (Plato may have grasped this, well ahead of our time.)

Ecological thinking is about populations rather than individuals. Communism was perverted by the societies that embraced it and rejected by the societies that cling to individual property rights—in both cases out of individualist self-obsession. In the current pandemic, however, we are forced to think of populations as well as individuals. Yet, the strategy so far has been oriented toward “saving lives at any cost.” If the demographic theory of senescence holds any water, our social development will have to keep pace with strategies to increase lifespan if we are not to breed ourselves to oblivion. Individuals will have to voluntarily redefine personal identity and their relation to the collective. And, of course, their relationship to aging and death.

Why nature should be accorded the rights of personhood

Let us first understand what a person is and why nature is not regarded as a person in our culture. But, even before that, let us acknowledge that nature once was so regarded. Long before industrial society became obsessed with the metaphor of mechanism, ancient peoples conceived the world as a sort of living organism. It was peopled with creatures, plants, spirits, and other entities who were treated as personalities on a par with human beings. Then along came the scientific revolution.

Nature for us moderns is impersonal by definition. For us, physical things are inert and incapable of responding to us in the way that fellow human beings do. Our science defines nature as composed only of physical entities, not of moral agents with responsibilities and obligations. Humans have formed an exclusive club, which admits only members of our kind. Members are accorded all sorts of extravagant courtesies, such as human rights, equality before the law, the obligation to relieve (human) suffering and save life at any cost, the sentiment that (human) life is infinitely precious. We have appropriated half the earth as our clubhouse and grounds, expelling other inhabitants toward whom we feel no need to extend these courtesies. Having eliminated most of the formidable competitors that posed a threat and demanded respect, we treat the remaining creatures as no more than raw materials, to use and enslave as we please, or to consume as flesh.

We have done all this because we have been able to, with little thought for the rightness of such an attitude. In the Western hemisphere, our society was founded on domination of the peoples who occupied the land before the European invasion (the “discovery” of the New World). Significantly, it went without saying that this included the assumed right to dominate the other species as well. In other words, plundering these lands for their resources, and disregarding their native human inhabitants, went hand in hand to express an attitude that depersonalized both. The Conquest, as it has been called, was simultaneously about conquering people and conquering nature.

Probably this attitude started long before, perhaps with the domestication of wild animals and plants. The relationship to prey in the traditional hunt was fierce but respectful, a contest between worthy opponents, especially when large creatures were involved. (Can it be coincidence that losers of this contest are called “game”?) Often an apology of gratitude was made to the murdered creature, no part of whose valued carcass was wasted. When animals were enslaved and bred for human purposes, the tenor of the relationship changed from respect to control. The war on animals concluded in a regime of permanent occupation. A similar shift occurred when plants were no longer gathered but cultivated as crops. Both plants and animals were once personified and treated as the peers of humans. That relationship of awe gave way to a relationship of managing a resource. The growing numbers of the human invaders, and their lack of self-restraint, led eventually to the present breakdown of sustainable management.

Today we embrace a universal biological definition of human being, and a notion of universal human rights and personhood. Throughout history, however, this was hardly the inclusive category that it now is. Groups other than one’s own could be considered non-human, reduced to the status of chattel, or even literally hunted as prey. In other words, the distinction between person and thing remained ambiguous. Depersonalization is still a tactic used to justify mistreating people that some group considers undesirable, inferior, or threatening. Personhood is still a function of membership in the exclusive human Club. But since the qualifications for that membership were unclear throughout most of human history, perhaps it works both ways. Instead of excluding animals, plants, and nature at large, we could give them the benefit of the doubt to welcome them into our fellowship.

Now, what exactly is a person? In fact, there is no universally accepted legal or even philosophical definition. The word derives from the Latin persona, the mask worn in the theatre, which indicated the stage character as distinguished from the mere actor wearing it. By a traditional (though disputed) etymology, it means the sound coming through the mask, implying a transcendent origin independent of the physical speaker. Personhood so understood is intimately tied to language and hence limited to human beings as the sole known users of “fully grammatical language.” Other creatures do obviously communicate, but not in a way that entitles them to membership in the Club.

Various features of personhood—loosely related to language use—include reason, consciousness, moral and legal responsibility, the ability to enter contractual relationships, etc. Personhood confers numerous legal rights in human society. These include owning property, making and enforcing contracts, the right to self-govern and to hire other agents to act on one’s behalf. It guarantees civil rights to life and liberty within human society. One could argue, on behalf of non-human creatures, that they meet some of these criteria for personhood. In many cases, like aboriginal people, they were here before us, occupying the lands and waterways. Western society is now judicially and morally preoccupied with historical treaties and land claims of aboriginal groups—and more generally with redressing social wrongs committed by former generations. Certainly, our exploitive relation to nature at large is among these wrongs, as is our checkered relation to particular species such as whales, and ecological entities such as forests, rivers, oceans and atmosphere. Thinking on the subject tends to remain human-centric: we are motivated primarily by threats to our survival arising from our own abusive practices. But such a self-serving attitude remains abusive, and morally ought to give way to a consideration of victims that are not yet extinct, including the planet as a whole. If restorative justice applies to human tribes, why not to animal tribes? Why not to nature at large?

Animals reason in their own ways. Although insects may be little more than automatons, vertebrates are clearly sentient; many display concern for their own kin or kind and sometimes for members of other species. While they do not make written contracts, at one time neither did people! Animals (and even plants) maintain measured relationships, not only among their own ranks but also with other species, in an intricate system of mutual symbiosis and benefit. These interdependencies could be regarded as informal contracts, much like those which human beings engage in quite apart from written contract and law. Deer in a balanced ecosystem, for example, tend to browse without exhausting their food supply. Even well-adapted viruses are, so to speak, reluctant to kill their hosts. It could be argued that human beings, in the course of substituting formal legalities, have lost this common sense and all restraint, and thereby forfeited their claims.

Of course, we now say that natural ecological balancing acts are not “intentional,” in the conscious human sense, but the result of “instinct” or “natural selection,” and ultimately of inanimate “causal processes.” This is a very narrow and human-centric understanding of intentionality. Fundamentally, it is circular reasoning: intending is what people do, in the human world, while causality is held to be the rule in nature. However, both intention and cause (and even the concept of ‘nature’) are human constructs. Causality is no more an inherent fact of nature than intention is absent from it. It is we humans who have imposed this duality, thereby seceding from the animal kingdom and from nature at large. Humans have formalized their observations of natural patterns as laws of nature, and formalized their own social conduct and conventions as laws of jurisprudence. Other creatures simply do what works for them within the system of nature without articulating it. Are we superior simply because we gossip among ourselves about what other creatures are doing?

Are we superior because might makes so-called rights? We chase other creatures (and fellow human beings) from a habitat and claim to own it. But private property, like money, is a legalistic convention within the Club that has no clear connection with any right of ownership outside of human society. “Ownership” in this broader sense is more deserved by those whose proven behavior adheres to the unwritten laws that maintain an ecosystem in balance with their fellow creatures. It is least deserved by those who destroy that balance, while ironically permitted to do so through written laws. If stewardship is a justification for aboriginal land claims, why not for any right of ownership of the planet? Almost any species has a better claim to stewardship than humans. After all, we have rather botched the unceded territory that the Club now occupies.

Nature is self-regulating by nature; it doesn’t need to be granted the right to self-govern by human beings. It cannot speak for itself in human language, but expresses its needs directly in its own condition. Human scientists appoint themselves to interpret those expressions, largely for human purposes. The State appoints a public defender to represent people who cannot afford counsel or who are deemed unable to defend themselves. So, why not public defenders of the planet? Their job would be to ensure, in the eyes of human beings, nature’s right to life, liberty, and well-being. That could only benefit us too, since it has never been more than a delusion that we can live segregated from the rest of nature.