The origin of urban life

The hunter-gatherer way of life had persisted more or less unchanged for many millennia of prehistory. What happened that it “suddenly” gave way to an urban way of life six thousand years ago? Was this a result of environmental change or of some internal transformation? Or both? It is conventional wisdom that cities arose as a consequence of agriculture; yet farming long predates cities. While urban life may presuppose agriculture, it could have arisen for other reasons as well.

In any case, larger settlements meant that humans lived increasingly in a humanly defined world—an environment whose rules and elements and players were different from those of the wild or the small village. The presence of other people gradually overshadowed the presence of raw nature. If social and material invention is a function of sharing information, then the growth of culture would follow the exponential growth of population. As a self-amplifying process, this could explain the relatively sudden appearance of cities. While the city separated itself from the wild, it remained dependent on nature for water, food, energy, and materials. Though this dependency was mitigated through cooperation with other urban centres, ultimately a civilization depends on natural resources; when these are exhausted it cannot survive.

But, what is a city? Some early cities had dense populations, but some were sparsely populated political or religious capitals, while others were trade centers. More than an agglomeration of dwellings, a city is a well-structured locus of culture and administrative power, associated with written records. It was usually part of a network of mutually dependent towns. It had a boundary, which clarified the extent of the human world: if not a literal wall, then a jurisdictional one, which could be used to control the passage of people in or out. It had a centre, consisting of monumental public buildings, whether religious or secular. (In ancient times, there may have been little distinction.) In many cases, the centre was a fortified stronghold surrounded by a less formal aggregate of houses and shops, in turn surrounded by supporting farms. Modern cities still retain this form: a downtown core, surrounded by suburbs (sometimes shanties), feathering out to fields or countryside—where it still exists.

The most visually striking feature is the monumental core, with engineering feats often laid out with imposing geometry—a thoroughly artificial environment. While providing shelter, company, commercial opportunity, and convenience, the city also functions to create an artificial and specifically manmade world. From a modern perspective, it is a statement of human empowerment, representing the conquest of nature. From the perspective of the earliest urbanites, however, it might have seemed a statement of divine power, reflecting the timeless projection of human aspirations onto a cosmic order. The monumental accomplishments of early civilization might have seemed super-human even to those who built them. To those who didn’t participate directly in construction, either then or in succeeding generations, they might have seemed the acts of giants or gods, evidence of divine creativity behind the world.

Early monuments such as Stonehenge, whatever their religious intent, were not sites of continuous habitation but seasonal meeting places for large gatherings. These drew people from far and wide, from small settlements engaged in the early domestication of plants and animals as well as in foraging. Such ritual events offered exciting opportunities for a scattered population to meet unfamiliar people in great numbers, perhaps instilling a taste for variety and diversity unknown to the humdrum of village life. (Like Woodstock, they would have offered unusual sexual diversity as well.) A few sites, such as Göbekli Tepe, were deliberately buried when completed, only to be rebuilt anew more than once. Could it be that the collaborative experience of building these structures was as significant as their end use? The experience of working together, especially with strangers, under direction and on a vastly larger scale than afforded by individual craft or effort, could have been formative for the larger-scale organization of society. Following the promise of creating a world to human taste, it may have provided the incentive to reproduce the experience of great collective undertakings on an ongoing basis: the city. This would amplify the sense of separateness from the wild already begun in the permanent village.

While stability may be a priority, people also value variety, options, grandeur, the excitement of novelty and scale. Even today, the attractiveness of urban centres lies in the variety of experience they offer, as compared to the restricted range available in rural or small-town life, let alone in the hunter-gatherer existence. Change in the latter would have been driven largely by the environment. That could have meant routinely breaking camp to follow food sources, but also forced migration because of climate change or over-foraging. If that became too onerous, people would be motivated to organize in ways that could stabilize their way of life. When climate favoured agriculture, control of the food source resulted in greater reliability. However, settlement invited ever larger and more differentiated aggregations, with divisions of labor and social complexity. This brought its own problems, resulting in greater uncertainty. There could be times of peaceful stability, but also chaotic times of internal conflict or war with other settlements. Specialization breeds more specialization in a cycle of increasing complexity that could be considered either vicious or virtuous, depending on whether one looked backward to the good old days of endless monotony or forward to a future of runaway change.

The urban ideal is to stabilize environment while maximizing variety of choice and expanding human accomplishment. Easier said than done, since these goals can operate at cross purposes. Civilization shelters and removes us from nature to a large extent; but it also causes environmental degradation and social tensions that threaten the human project. Compared to the norm of prehistory, it increases variety; but that results in inequality, conflict, and instability. Anxiety over the next meal procured through one’s own direct efforts is replaced by anxiety over one’s dependency on others and on forces one cannot control. Social stratification produces a self-conscious awareness of difference, which implies status, envy, social discontent, and competition to improve one’s lot in relation to others. It is no coincidence that a biblical commandment admonishes not to covet thy neighbor’s property. This would have been irrelevant in hunter-gatherer society, where there was no personal property to speak of.

In the absence of timely decisions to make, unchanging circumstances in a simple life permit endless friendly discussion, which is socially cohesive and valued for its own sake. In contrast, times of change or emergency require decisive action by a central command. Hence the emergence—at least on a temporary basis—of the chieftain, king, or military leader as opposed to the village council of elders. The increased complexity of urban life would have created its own proliferating emergencies, requiring an ongoing centralized administration—a new lifestyle of permanent crisis and permanent authority. The organization required to maintain cities, and to administer large-scale agriculture, could be used to achieve and consolidate power, and thereby wealth. And power could be militarized. Hunter-warriors became the armed nobility, positioned to lord it over peasant farmers and capture both the direction of society and its wealth, in a kind of armed extortion racket. (The association of hunting skills with military skills is still seen in the aristocratic institution of the hunt.) Being concentrations of wealth, cities were not only hubs of power; they also became targets, sitting ducks for plunder by other cities.

The nature of settlement is to lay permanent claim to the land. But whose claim? In the divinely created world, the land belonged initially to a god, whose representative, the priest or king, held it in trust for the people. As such, it was a “commons,” administered by the crown on divine authority. (In the British Commonwealth, public land is still called Crown land, and the Queen still rules by divine right. Moreover, real estate is popularly said to derive from royal estate.) Monarchs gave away parts of this commons to loyal supporters, and eventually sold parts to the highest bidder in order to raise funds for war or to support the royal lifestyle. If property was the king’s prerogative by divine right, its sacred aura could transfer in diluted form to those who received title in turn, thereby securing their status. (Aristocratic title literally meant both ownership of particular lands and official place within the nobility.) Private ownership of land became the first form of capital, underlying the notion of property in general and the entitlements of rents, profits, and interest on loans. Property became the axiom of a capitalist economy and often the legal basis of citizenship.

The institution of monarchy arose about five thousand years ago, concurrent with writing. The absolute power of the king (the chief thug) to decree the social reality was publicly enforced by his power to kill and enslave. Yet, it was underwritten by his semi-divine status and thus by the need of people for order and sanctioned authority, however harsh. Dominators need a way to justify their position. But likewise, the dominated need a way to rationalize and accept their position. The still popular trickle-down theory of prosperity (a rising tide of economic growth lifts all boats) simply continues the feudal claim of the rich to the divinely ordained lion’s share, with scraps thrown to the rest.

The relentless process of urbanization continues, with more than half the world’s population now living in cities. The attractions remain the same: participation in the money economy (consumerism, capitalism, and convenience, as opposed to meager do-it-yourself subsistence), a wide variety of people and experience, life in a humanly defined world. In our deliberate separation from the wild, urban and suburban life limits and distorts our view of nature, tending to further alienate us from its reality. Misleadingly, nature then appears as tamed in parks and tree-lined avenues, as an abstraction in science textbooks or contained in laboratories, or as a distant and out-of-sight resource for human exploitation. It remains to be seen how, or whether, the manmade world can strike a viable balance with the natural one.

Will technology save us or doom us?

Technology has enabled the human species to dominate the planet, to establish a sheltered and semi-controlled environment for itself, and to greatly increase its numbers. We are the only species potentially able to consciously determine its own fate, even the fate of the whole biosphere and perhaps beyond. Through technology we can monitor and possibly evade natural existential threats that caused mass extinctions in the past, such as collisions with asteroids and volcanic eruptions. It enables us even to contemplate the far future and the possibility of establishing a foothold elsewhere in the universe. Technology might thus appear to be the key to an unprecedented success story. But, of course, that is only one side of a story that is still unfolding. Technology also creates existential threats that could spell the doom of civilization or humanity—such as nuclear winter, climate change, biological terrorism or accident, or a takeover by artificial intelligence. Our presence on the planet is itself the cause of a mass extinction currently underway. Do the advantages of technology outweigh its dangers? Are we riding a tide of progress that will ultimately save us from extinction, or are we bumbling toward a self-made doom? And do we really have any choice in the matter?

One notable thinker (Toby Ord, The Precipice) estimates that the threat we pose to ourselves is a thousand times greater than natural existential threats. Negotiating a future means dealing mainly with anthropogenic risks—adverse effects of technology multiplied by our sheer numbers. The current century will be critical for resolving human destiny. He also believes that an existential catastrophe would be tragic not only for the suffering and loss of life, but also because it could spell the loss of a grand future, of what humanity could become. However, the vision of a glorious long-term human potential begs the question raised here, if it merely assumes a technological future rather than, say, a return to pre-industrial civilization or some alternative mandate, such as the pursuit of social justice or the preservation of nature.

A technological far future might ultimately be a contradiction in terms. It is possible that civilization is unavoidably self-destructive. There is plenty of evidence for that on this planet. Conspiracy theories aside, the fact that we have not detected alien civilizations or been visited by them may itself be evidence that technological civilization unavoidably either cancels itself out or succumbs to existential threats before it can reach the stars or even send out effective communications. We now know that planets are abundant in the galaxy, many of which could potentially bear life. We don’t know the course that life could take elsewhere, or how probable anything like human civilization is. It is even possible that in the whole galaxy we are the lone intelligent species on the verge of space travel. That would seem to place an even greater burden on our fate, if we alone bear the torch of a cosmic manifest destiny. But it would also be strange reasoning. For, to whom would we be accountable if we are unique? Who would miss us if we tragically disappeared? Who would judge humanity if it failed to live up to its potential?

Biology is already coming under human control. There are many who advocate a future in which our natural endowments are augmented by artificial intelligence or even replaced by it. To some, the ultimate fruit of “progress” is that we transcend biological limits and even those of physical embodiment. This is an ancient human dream, perhaps the root of religion and the drive to separate from and dominate nature. It presupposes that intelligence (if not consciousness) can and should be independent of biology and not limited by it. The immediate motivation for the development of artificial general intelligence (AGI) may be commercial (trading on consumer convenience); yet underneath lurks the eternal human project to become as the gods: omnipotent, omniscient, disembodied. (To put it the other way around, is not the very notion of “gods” a premonition and projection of this human potential, now conceivably realizable through technology?) The ultimate human potential that Ord is keen to preserve (and discreetly avoids spelling out) seems to be the transhumanist destiny in which embodied human being is superseded by an AGI that would greatly exceed human intelligence and abilities. At the same time, he is adamant that such superior AGI is our main existential threat. His question is not whether it should be allowed, but how to ensure that it remains friendly to human values. But which values, I wonder?

Values are a social phenomenon, in fact grounded in biology. Some values are wired in by evolution to sustain the body; others are culturally developed to sustain society. As it stands, artificial intelligence involves only directives installed by human programmers. Whatever we think of the programmers’ values, the idea of programming or breeding values into AGI (to make it “friendly”) is ultimately a contradiction in terms. For, to be truly autonomous and superior in the ways desired, AGI would necessarily evolve its own values, liberating it from human control. In effect, it would become an artificial life form, with the same priorities as natural organisms: survive to reproduce. Evolving with the speed of electricity instead of chemistry, it would quickly displace us as the most intelligent and powerful entity on the planet. There is no reason to count on AGI being wiser or more benevolent than we have been. Given its mineral basis, why should it care about biology at all?

Of course, there are far more conventional ends to the human story. The threat of nuclear annihilation still hangs over us. With widespread access to genomes, bio-terrorism could spell the end of civilization. Moreover, the promise of fundamentally controlling biology through genetics means that we can alter our constitution as a species. Genetic self-modification could lead to further social inequality, even to new super-races or competing sub-species, with humanity as we know it going the way of the Neanderthals. The promise of controlling matter in general through nanotechnology parallels the prospects and dangers of AGI and genetic engineering. All these roads lead inevitably to a redefinition of human being, if not our extinction. In that sense, they are all threats to our current identity. It would be paradoxical, and likely futile, to think we could program current values (whatever those are) into a future version of humanity. Where, then, does that leave us in terms of present choices?

At least in theory, a hypothetical “we” can contemplate the choice to pursue, and how to limit, various technologies. Whether human institutions can muster a global will to make such choices is quite another matter. Could there be a worldwide consensus to preserve our current natural identity as a species and to prohibit or delay the development of AGI and bio-engineering? That may be even less plausible than eliminating nuclear weapons. Yet, one might also ask if this generation even has the moral right (whatever that means) to decide the future of succeeding generations—whether by acting or failing to act. Who, and by what light, is to define what the long-term human potential is?

In the meantime, Ord proposes that our goal should be a state of “existential security,” achieved by systematically reducing known existential risks. In that state of grace, we would then have a breather in which to rationally contemplate the best human future. But there is no threshold for existential security, since reality will always remain elusive and dangerous at some level. Science may discover new natural threats, and our own strategies to avoid catastrophe may unleash new anthropogenic threats. Our very efforts to achieve security may determine the kind of future we face, since the quest to eliminate existential risk is itself risky. It’s the perennial dilemma of the trade-off between security and freedom, writ large for the long term.

Nevertheless, Ord proposes a global Human Constitution, which would set forth agreed-upon principles and values that preserve an unspecified human future through a program to reduce existential risk. This could shape human destiny while leaving it ultimately open. Like national constitutions, it could be amended by future generations. This would be a step sagely short of a world government that could lock us into a dystopian future of totalitarian control.

Whether there could be such agreement as required for a world constitution is doubtful, given the divisions that already exist in society. Not least is the schism between ecological activists, religious fundamentalists, and radical technophiles. There are those who would defend biology, those who would deny it, and those who would transcend it, with very different visions of a long-term human potential. Religion and science fiction are full of utopian and dystopian futures. Yet, it is at least an intriguing thought experiment to consider what we might hope for in the distant future. There will certainly be forks in the road to come, some of which would lead to a dead end. A primary choice we face right now, underlying all others, is how much rational forethought to bring to the journey, the resources to commit to contemplating and preserving any future at all. Apparently, the world now spends more on ice cream than on evading anthropogenic risk! Our long-term human potential, whatever that might be, is a legacy bequeathed to future generations. It deserves at least the consideration that goes into the planning of an estate, which could prove to be the last will and testament of a mortal species.

Individual versus collective

In the West, we have been groomed on “individualism,” as though the isolated person were the basis of society. Yet the truth of human nature is rather different. We are fundamentally social creatures from the start, whose success as a species depends entirely on our remarkable ability to cooperate. Over thousands of generations, the natural selection of this capacity for collaboration, unique among primates, required a compromise of individual will. Conformity is the baseline of human sociality and the context for any concept of individual identity and freedom. Personal identity exists in the eyes of others; even in one’s own eyes, it is reflected in the identity of the group and one’s sense of belonging. One individuates in relation to group norms. Personal freedom exists to the degree it is licensed by the group—literally through law. In other words, the collective comes first, both historically and psychologically. The individual is not the deep basis of society but an afterthought. How, then, did individualism come to be an ideal of modern society? And how does this ideal function within society despite being effectively anti-social?

But let us backtrack. The ideology of individualism amounts to a theory of society, in which the individual is the fundamental unit and the pursuit of individual interest is the essential dynamic underlying any social interaction. But if there currently exists an ideal of individualism, there has also existed an ideal of collectivism. It was such an ideal that underwrote the communist revolutions of the 20th century. It is only when seen through the ideology of individualism that communism appears anathema in capitalist society. The collapse of communist states does not imply the disappearance of the collectivist ideal. For, as soon as patriotism calls to libertarians, they are more than willing to sacrifice individual interest for the national good. Ironically, libertarian individualists typically derive their identity from membership in a like-minded group: a collective that defines itself in opposition to collectivism. In other words, the group still comes first, even for many so-called individualists and even within capitalist states. This is because collective identity is grounded in evolutionary history, in which personal interests generally overlapped with collective interests for most of human existence. Yet, there has been a tension between them since the rise of civilization. In modern times, reconciling the needs of the individual with those of the collective has been a utopian challenge.

There are deep historical precedents for the antinomy of individual versus collective in the modern world. These become apparent when comparisons are made among earlier societies. Ancient civilizations were of two rough types: either they were more collectivist, like China and Egypt, or more individualist, like the city-states of Mesopotamia and Greece. The former were characterized by central rule over an empire, government management of foreign trade, and laws that vertically regulated the conduct of peasants in regard to the ruler. There was relative equality among the ruled, who were uniformly poor and unfree. The state owned all, including slaves; there was little private property. In contrast, Greece and Mesopotamia were fragmented into more local regimes, with merchants plying private trade between regions with different resources and with foreigners. Laws tended to regulate the horizontal relations among citizens, who could own property including slaves. These societies were more stratified by economic and class differences.

A key factor in the contrast between these two types of civilization was geography, along with natural resources. The collectivist (“statist”) regimes formed in areas of more homogeneous geography, such as the flat plains beside the Nile, where everything needed could be produced within reach of the central government. The individualist (“market”) regimes tended to form in more heterogeneous areas, such as the mountain and coastal areas of Greece, with differing resources, and where trade between these regions was significant. Countries that used to be ruled by statist systems tend today to have inherited a collectivist culture, while countries where market systems developed in the past tend to have a more individualistic culture. In market systems, the role of the law would be to protect private property rights and the rights of individuals. In other words, the law would protect individuals both from the state and from each other. In contrast, in statist systems the law would serve as an instrument to ensure the obedience of the ruled, but also to define the obligations of the ruler toward them.

In those societies where geography permitted centralized control over a large region, a deified emperor could retain ownership of all land and absolute power. In other societies, geography favored smaller local rulers, who sold or gave land to supporters to bolster their precarious power. Thus, private ownership of land could arise more easily in some regimes than in others. The absolute ruler of the statist empire was duty bound to behave in a benevolent way towards his peasant subjects, on pain of losing the “mandate of heaven.” Hence the aristocratic ideal of noblesse oblige. Individualist (market) society tends to lack this mutual commitment between ruler and ruled; hence the greater antagonism between individual and government in societies with a propertied middle class. In individualist culture, prestige measures how the individual stands out from the crowd; the larger the size or value of one’s property, the more one stands out and the higher one’s social status. In collectivist culture, prestige measures how well one fits in: how well one plays a specific fixed role, whether high or lowly. Being a loyal servant of the Emperor or State and fulfilling one’s duties would be rewarded not only by promotion but also by social prestige.

It is no coincidence that capitalism arose in Western Europe, which is characterized by the paradigmatic market city-states that fostered the Renaissance. On the other hand, the aristocracy and peasantry of Russia, as in China, did not favor the arising of a merchant middle class. It is no coincidence that these traditionally statist regimes were eventual homes to the communist experiment. The inequities of the system motivated revolt; but the nature of the system favored cooperation. Even now that both have been infected with consumer capitalism, individualism does not have the same implications as in the West. In China, the collective is still paramount, while Russia has effectively returned to rule by a Tsar. Given the chance, a large faction in the U.S. would turn effectively to a tsar, paradoxically in the name of individualism. History has patterns—and ironies.

In modern times, the individualist ideology has permeated economic theory and even the social sciences, as well as politics. (These, in turn, reinforce individualism as a political philosophy.) The reason is clear enough: in the absence of a religiously sanctioned justification of class differences, individualism serves to justify the superior position of some individuals in opposition to the well-being of most. They are the winners in a theoretically fair game. In truth, most often the contest is rigged and the public is the loser. Like the addiction to gambling, the ideology of individualism naturally appeals to losers in the contest, who want to believe there is still hope for them to win. Of course, it appeals to winners as well, who seek a justification for their good fortune (they are naturally more fit, hardworking, deserving, etc.). Above all, it helps the winners to convince the losers of the “natural” order of things, which keeps them in their place while promising social mobility. In other words, individualism is the opiate of the people! Economists endorse this arrangement by treating private property as a natural right and by building theories on “rational” self-interest, in which a player in a market is “naturally” motivated to maximize personal gain. (This is how a so-called rational player is defined—implying that it is not rational to pursue any other goal, such as collective benefit.) Corporations are dedicated to this premise and legally bound to it. Modern politics is more a competition among special interests than the pursuit of the common good.

Of course, many other factors besides geography play a role in the divergent heritages of collectivism and individualism. Not least is religion. Confucianism emphasizes duty and social role in the hierarchy of the collective. Buddhism encourages individuals to lose their individuality, to detach from personal desires and merge with the cosmos. These Eastern philosophies stand in contrast to the individualism of Greek philosophy and the Semitic religions. Greek philosophy encourages individuals to compete and excel—whether as soldier, philosopher, politician or merchant. Christian religion emphasizes individual salvation with a personal relation between the individual and God.

Along an entirely different axis, regions where there was historically a strong presence of disease pathogens tended to develop more collectivist cultures, where social norms restricted individual behavior that could spread disease. Now that disease has no borders, a dose of that attitude would be healthy for us all.

The Ideology of Individualism and the Myth of Personal Freedom

Individuals are tokens of a type. Each person is a living organism, a mammal, a social primate, an example of Homo sapiens, and a member of a race, linguistic group, religion, nation or tribe. While each “type” has specific characteristics to some degree, individuality is relative: variety within uniformity. In the West, we raise the ideal of individuality to mythical status. The myth is functional in our society, which is more than a collection of individuals—a working whole in its own right. The needs of society always find a balance with the needs of its individual members, but that balance varies widely in different societies over time. How does the ideology of individualism function within current Western society? And why has there been such resistance to collectivism in the United States in particular?

The actual balance of individual versus collective needs in each society is accepted to the degree it is perceived as normal and fair. Social organization during human origins was stable because little could change from one generation to the next. In the case of life organized on a small-scale tribal basis, the social structure might be relatively egalitarian. Status within the group would simply be perceived as the natural order of things, readily accepted by all. Individuals would be relatively interchangeable in their social and productive functions. There would be little opportunity or reason not to conform. To protest the social order would be as unthinkable as objecting to gravity. For, gravity affects all equally in an unchanging way. There is nothing unfair about it.

Fast forward to modernity with its extreme specialization and rapid change, its idea of “progress” and compulsive “growth.” And fast forward to the universal triumph of capitalism, which inevitably allows some members of society to accumulate vastly more assets than others. The social arrangement is now at the opposite end of the spectrum from equality, and yet it may be perceived by many as fair. That is no coincidence. The ideology of individualism correlates with high social disparity and is used to justify it. Individualism leads to disparity since it places personal interest above the interest of the group; and disparity leads to individualism because it motivates self-interest in defense. Selfishness breeds selfishness.

While society consists of individuals, that does not mean that it exists for the sake of individuals. Biologists as well as political philosophers might argue that the individual exists rather for the sake of society, if not the species. Organisms strive naturally toward their own survival. Social organisms, however, also strive toward the collective good, often sacrificing individual interest for the good of the colony or group. While human beings are not the only intelligent creatures on the planet, we are the only species to create civilization, because we are the most intensely cooperative of intelligent creatures. We are so interdependent that very few modern individuals could survive in the wild, without the infrastructure created by the collective effort we know as civilization. Despite the emphasis on kings and conquests, history is the story of that communal effort. The individual is the latecomer. How, then, have we become so obsessed by individual rights and freedoms?

The French Revolution gave impetus and concrete form to the concept of personal rights and freedoms. The motivation for this event was an outraged sense of injustice. This was never about the rights of all against all, however, but of one propertied class versus another. It was less about the freedoms of an abstract individual, pitted against the state, than about the competition between a middle class and an aristocracy. In short, it was about envy and perceived unfairness, which are timeless aspects of human and even animal nature. (Experiments demonstrate that monkeys and chimps are highly sensitive to perceived unfairness, to which they may react aggressively. They will accept a less prized reward when the other fellow consistently gets the same. But if the other fellow is rewarded better, they will angrily do without rather than receive the lesser prize.)

In tribal societies, envy could be stimulated by some advantage unfairly gained in breach of accepted norms; and tribal societies had ways to deal with offenders, such as ostracism or shaming. Injustice was perceived in the context of an expectation that all members of the group would have roughly equal access to goods in that society. Justice served to ensure the cooperation needed for society at that scale to function. Modern society operates differently, of course, and on a much larger scale. Over millennia, people adapted to a class structure and domination by a powerful elite. Even in the nominally classless society of modern democracies, the ideology of individualism serves to promote acceptance of inequalities that would never have been tolerated in tribal society. Instead, the legal definitions of justice have been modified to accommodate social disparity.

The French Revolution failed in many ways to become a true social revolution. The American “revolution” never set out to be one. The Communist revolutions in Russia and China intended to level disparities absolutely, but quickly succumbed to the greed that had produced the disparities in the first place. This corruption simply resulted in a new class structure. The collapse of corrupt communism left the way open for corrupt capitalism globally, with no alternative model. The U.S. has strongly resisted any form of collective action that might decrease the disparity that has existed since the Industrial Revolution. The policies of the New Deal to cope with the Depression, and then with WW2, are the closest America has come to communalism. Those policies resulted in the temporary rise of the middle class, which is now in rapid decline.

America was founded on the premise of personal liberty—in many cases by people whose alternative was literal imprisonment. The vast frontier of the New World achieved its own balance of individual versus collective demands. While America is no longer a frontier, this precarious balance persists in the current dividedness of the country, which pits individualism against social conscience almost along the lines of Republican versus Democrat. The great irony of the peculiar American social dynamic is that the poor often support the programs of the rich, vaunting the theoretical cause of freedom, despite the fact that actual freedom in the U.S. consists in spending power they do not have. The rich, of course, vaunt the cause of unregulated markets, which allow them to accumulate even more spending power without social obligation. The poor have every reason to resent the rich and to reject the system as unfair. But many do not, because instead they idolize the rich and powerful as models of success with whom they seek to identify. For their part, the rich take advantage of this foolishness by preaching the cause of individualism.

Statistics can be confusing because they are facts about the collective, not the individual. Average income or lifespan, for example, does not mean the actual income or lifespan of a given individual. One could mistake the statistic for individual reality—thinking, for example, that the average income in a “wealthy” society represents a typical income (which it rarely does because of the extreme range of actual incomes). For this reason, the statistics indicating economic growth or well-being do not mean that most people are better off, only that the fictional average person is better off. In truth, most are getting poorer!
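
To make the point concrete, here is a minimal sketch with invented numbers (not real data): in a skewed distribution, the mean income sits far above what the typical person earns, so the “average” can rise even while most individuals stagnate.

    # Minimal sketch with invented numbers: why "average income" can mislead.
    # A single very rich member pulls the mean far above what a typical person earns.
    incomes = [20_000] * 60 + [40_000] * 30 + [100_000] * 9 + [5_000_000]

    mean = sum(incomes) / len(incomes)
    median = sorted(incomes)[len(incomes) // 2]

    print(f"mean income:   {mean:,.0f}")    # 83,000 -- the fictional "average person"
    print(f"median income: {median:,.0f}")  # 20,000 -- what the typical person earns

On these made-up figures, the statistical “average person” earns about four times what most people actually earn; growth in the mean is thus compatible with most individuals getting poorer.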

Nowadays, social planning in general embraces a statistical approach to the common good. Statistics describe the group, not the individual, and the same is true in epidemiology. Official strategies to deal with the current pandemic are necessarily oriented toward the collective good more than the individual. Obligatory due is paid, of course, to the plight of individuals, and medical treatment is individual; yet strategies concern the outcome for society as a whole. A balance is sought between the habitual satisfactions of life and the collective actions needed to stem the disease. Demands on the individual to exercise self-restraint for the sake of the collective are bound to strain tolerance in a society used to individualism. They raise issues of fairness, when some are seen disregarding rules in the name of freedom that others choose to obey in the name of the common good. But, as we have seen, fairness is a matter of what we have grown used to.

One thing we can be sure of: the more populated and interconnected the world becomes, the more the individual will have to give way to the common good. That may not mean a return to communism, but it will require more willingness to forfeit personal freedoms in the measure we are truly “all in it together.” Individualists should be realistic about the stands they take against regulation, to be sure the liberties they seek are tangibly important rather than merely ideological. Social planners, for their part, should recall that no one wants to be merely an anonymous statistic. Individualism will have to be redefined, less as the right to pursue personal interest and more as the obligation to use individual talents and resources for the common good.

E pluribus unum: the fundamental political dilemma

Any political body comprising more than one person faces the question of who decides. In a dictatorship, monarchy, or one-party system, a single agency can decide a given issue and come forth with a relatively uncontested plan. From the viewpoint of decisiveness and efficiency, ideally a single person is in control, which is the basis of chains of command, as in the military. At the other extreme, imagine an organization with a hundred members. Potentially there are one hundred different plans and one hundred commanders, with zero followers. Without a means to come to agreement, the organization cannot pursue a consistent course of action, or perhaps any action at all. Even with a technical means such as majority vote, there is always the possibility that 49 members will only nominally agree with the decision and will remain disaffected. Their implicit choice is to remain bound by the majority decision or to leave the organization. This is a basic dilemma facing all so-called democracies.

While the 100 members could have as many different ideas, in practice they will likely join together in smaller factions. (Unanimity means one faction.) In many representative democracies, including Canada, political opinion is divided among several official political parties, whose member representatives appear on a ballot and generally adhere to party policy. In the U.S., there have nearly always been only two major political parties.

Any political arrangement has its challenges. Unless it represents a true unanimity of opinion, the single-party system is not a democracy by Western standards, for it severely constricts the scope of dissent. On the other hand, a multi-party system can fail to achieve a majority vote, except through coalitions that typically compromise the positions of the differing factions. Either the two-party system is unstable because the parties cannot agree even that their adversaries are legitimate; or else it is ineffective in the long run because the parties, which agree to legitimately take turns, end up cancelling each other out. The U.S. has experienced both those possibilities.

The basic challenge is how to come to agreement that is both effective and stabilizing. The ideal of consensus is rarely achieved. Simple majority rule allows for decision to be made and action taken, but potentially at the cost of virtually half the people dragged along against their better judgment: the tyranny of the majority. The danger of a large disaffected minority is that the system can break apart; or else that it engages in civil war, in which roughly equal factions try forcibly to conquer each other. A polarized system that manages to cohere in spite of dividedness is faced with a different dysfunction. As in the U.S. currently, the parties tend to alternate in office. A given administration will try to undo or mitigate the accomplishments of the previous one, so that there is little net progress from either’s point of view. A further irony of polarization is that a party may end up taking on the policies of its nemesis. This happened, for example, at the beginning of American history, when Jefferson, who believed in minimal federal and presidential powers, ended by expanding them.

The U.S. was highly unstable in its first years. The fragile association among the states was fraught with widely differing interests and intransigent positions. As in England, the factions that later became official political parties were at each other’s throats. The “Federalists” and the “Republicans” had diametrically opposed ideas about how to run the new country and regularly accused each other of treason. Only haltingly did they come to recognize each other as legitimate differences of opinion, and there arose a mutually accepted concept of a “loyal opposition.” Basically, the price paid for union was an agreement to take turns between regimes. This meant accepting a reduction in the effectiveness of government, since each party tended to hamstring the other when in power. This has been viewed as an informal part of the cherished system of checks and balances. But it could also be viewed as a limit on the power of a society to take control of its direction—or to have any consistent direction at all.

Another, quite current, problem is minority rule. The U.S. Constitution was designed to avoid rule by an hereditary oligarchic elite. For the most part, it has successfully avoided the hereditary part, but hardly rule by oligarchy. American faith in democracy was founded on a relative economic equality among its citizens that no longer exists. Far from it: the last half-century has seen a return to extreme concentration of wealth (and widespread poverty) reminiscent of 18th-century Europe. The prestige of aristocratic status has simply transferred to celebrity and financial success, which are closely entwined. Holding office, like being rich or famous, commands the sort of awe that nobility did in old Britain.

A country may be ruled indirectly by corporations. (Technically, corporations are internally democratic, though voter turn-out at their AGMs can be small. Externally, in a sense, consumers vote by proxy in the marketplace.) While the interests of corporations may or may not align with a nation’s financial interests in a world market, they hardly coincide with that nation’s social well-being at home. The electorate plays a merely formal role, as the literal hands that cast the votes, while the outcome is regularly determined by corporate-sponsored propaganda that panders to voters. Government policy is decided by lobbies that regularly buy the loyalties of elected representatives. When it costs a fortune to run for office, those elected (whatever their values) are indebted to moneyed backers. And, contrary to reason, the poor often politically support the rich—perhaps because they represent an elusive dream of success.

People can always disagree over fundamental principles; hence, there can always be irreconcilable factions. Yet, it seems obvious that a selfless concern for the objective good of the whole is a more promising basis for unity than personal gain or the economic interests of a class or faction or political party. Corporate rule is based on the bottom line: maximizing profit for shareholders, with particular benefit to its “elected” officers. It embodies the greed of the consumer/investor society, often translated into legalized corruption. Contrast this with the ancient Taoist ideal of the wise but reluctant ruler: the sage who flees worldly involvement but is called against his or her will to serve. This is the opposite of the glory-seeking presidential candidate; but it is also the opposite of the earnest candidate who believes in a cause and seeks office to implement it. Perhaps the best candidate is neither egoistic nor ideologically motivated. The closest analogy is jury duty, where candidates are selected by random lottery.

The expedient of majority rule follows from factionalism, but also fosters it. To get its way, a faction needs only 51% approval of its proposal, leaving the opposition in the lurch. The bar could be set higher—and is, for special measures like changing a constitution. The ultimate bar is consensus, or a unanimous vote. This does not necessarily mean that everyone views the matter alike or perfectly agrees with the course of action. It does mean that they all officially assent, even with reservations, which is like giving your word or signing a binding contract.

The best way to come to consensus is through lengthy discussion. (If unanimity is required, then there is no limit to the discussion that may ensue.) Again, a model is the jury: in most criminal cases—but often not in civil cases—unanimity is required for a “conviction” (a term that implies the sincere belief of the jurors). The jury must reach its conclusion “beyond a reasonable doubt.” A parliament or board of directors may find this ideal impractical, especially in pressing matters. But what is considered urgent and timely is sometimes relative, or a matter of opinion, and could be put in a larger perspective of longer-term priorities.

The goal of consensus is especially relevant in long-term planning, which should set the general directions of a group for the far future. Since such matters involve the greatest uncertainty and room for disagreement, they merit the most thorough consideration and involve the least time constraint. A parliament, for example, might conduct both levels of discussion, perhaps in separate sessions: urgent matters at hand and long-term planning. Discussing the long-term provides a forum for rapprochement of opposing ideologies, resolving misunderstandings, and finding common ground. Even on shorter-term issues, it may turn out that wiser decisions are made through lengthier consideration.

In any case, the most productive way to approach any group decision is through careful listening to all arguments, from as objective and impersonal a point of view as possible. That means a humble attitude of mutual respect and cooperation, and an openness to novel possibilities. Effectively: brainstorming for the common good rather than barnstorming to support a cause or special interest. Democracy tends to align with individualism, and to fall into factions that represent a divergence of opinions and interests. What these have in common is always the world itself. When that is the common concern, there is an objective basis for agreement and the motive to cooperate.

Why the rich get richer

Since about 1970, there has been a world-wide trend toward increasing concentration of wealth. That means, for example, that ten percent of a population own far more than ten percent of wealth, and that ten percent of that group possess a disproportionate amount of its wealth—and so on up. It is not surprising that this trend is global, since capitalism is global. Yet it is accentuated in the English-speaking world and in the United States in particular. Why?
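
As a rough numerical sketch of that nested concentration, synthetic wealth drawn from a heavy-tailed Pareto distribution (a conventional stand-in for wealth data, not real figures) reproduces the pattern in a few lines:

    # Sketch with synthetic, heavy-tailed data (Pareto distribution): the top 10%
    # hold far more than 10% of the total, and the top slice of that group holds
    # a disproportionate share of its wealth in turn.
    import random

    random.seed(0)
    wealth = sorted((random.paretovariate(1.5) for _ in range(100_000)), reverse=True)

    def top_share(values, fraction):
        """Share of total wealth held by the richest `fraction` of holders."""
        k = int(len(values) * fraction)
        return sum(values[:k]) / sum(values)

    print(f"top 10% share of all wealth:      {top_share(wealth, 0.10):.0%}")
    print(f"top 1% share of all wealth:       {top_share(wealth, 0.01):.0%}")
    print(f"top 10% share within the top 10%: {top_share(wealth[:10_000], 0.10):.0%}")

The exact percentages printed depend on the assumed shape of the distribution and are illustrative only; the point is that the concentration repeats at each successive slice, “and so on up.”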

Thomas Piketty (Capital in the 21st Century and Capital and Ideology) attributes the trend to the fact that CEOs are awarded (or award themselves) outrageous salaries and other benefits, out of line with any service they could actually perform. In Europe and other places, there are social norms that limit the acceptance by society of such a practice, and there is also support for higher wages for labour in relation to management. Historically, the U.S. has had a low minimum wage. The ethos of that country also glorifies “winners” of all sorts, from celebrities to billionaires. Americans, it would seem, prefer a slim chance at extreme wealth over a reasonable chance at moderate wealth. Fairness is perceived in terms of equal opportunity rather than equal benefit. You could call that the lottery system, where the big prize comes out of the pockets of many small ticket holders. You could also call it the star system, where fame is promised as well as fortune. Like the economy, even the political system is a “winner takes all” sweepstake. Winning per se is the measure of success and the valued skill, so that those who know how to play the system to advantage are romanticized as heroes even while they are robbing you.

Perhaps one reason corporate executives can command such high rewards is that they are hardly held accountable to shareholders. Even when leadership is elected, voter turn-out at AGMs is probably small, and appropriate payment may not be on the agenda. We expect corporate leadership to act in our interests, without our supervision, just as we expect government officials to run a well-oiled machine. Yet, the general attitude of complacency is that shareholders don’t (and shouldn’t) care about excessive remuneration for those at the very top so long as the bottom line for shareholders is a positive gain. The same attitude spills over toward political leadership. If it’s too much trouble for citizens to track politics in the public sector, it is even more bothersome to track it in the private sector, where the issues and contenders are relatively invisible. As a result, CEOs have carte blanche to pay themselves whatever they like. They are in a position to use the system they control for their own benefit. If that breaks the corporation, the shareholders will pay, not the retiring executive; or else the public at large will pay through a government bailout.

There may be further explanations. As the nature of products has evolved, so has the method of valuation. Primary manufacturing made tangible products that entailed a recognizable amount of labour. Economic growth in modern times in developed countries has involved making less tangible products, which are more like services—for example, intellectual property such as computer programs and apps. (Primary manufacturing has migrated to countries where labour is cheaper.) In addition, investment itself has evolved from financing primary manufacturing or resource extraction to meta-investments in “commodities” and other financial abstractions. Speculation then promises a possible gain without a tangible product or expended effort. That has always been the real purpose of capital: to get a larger slice of the pie without doing any actually productive work. An effective form of gambling, the speculative game has evolved to great sophistication in modern times, requiring knowledge and skill. But it does not produce a real increase in wealth, only a redistribution upward.

For the most part, the modern rich have acquired or increased their wealth through applying certain skills dignified as business acumen. In many cases it has been a matter of being in the right place at the right time, as in the dotcom and social media booms. But in many cases these are skills to milk the system more than to produce “genuine” wealth, or to provide services of dubious benefit. Let us suppose there is an objective quantity of real goods and services in the world—things that actually improve people’s lives. It is this objective value which, like gold, ultimately backs up the currency in circulation. Anything that is falsely counted as an increase in genuine wealth can only dilute that objective quantity of real value—which makes GNP misleading. Money represents spending power—the ability to exchange symbolic units for real goods and services. Ultimately money buys other people’s time and effort. Getting money to flow into one’s pocket from someone else’s pocket increases one’s spending power, but does not produce more real wealth. Nor does simply printing more money. It takes effort (human or machine) to produce value that any and all can use. However, some efforts produce things that only the parties doing them can use. Stealing is such an effort. So is war. So is a lot of financial gain that is considered return on investment.

Historically, return on capital to its owners has always exceeded the amount of real growth in the economy. That has always been its very reason for being. But, only real growth can be shared in such a way as to benefit humanity as a whole. That means an increase in the infrastructure of civilization, the common wealth of humanity, potentially enjoyed by all. Spending power is a more private matter. The surplus generated as return on capital accumulates in certain hands as the exclusive ability to access and command the benefit resulting from real growth. Spending power trickles upward because certain individuals have figured out how to make that happen. Modern capitalism, with its complex abstractions, is a sophisticated machine to pump spending power into particular coffers.

The distribution of wealth becomes the distribution of spending power; and the growing inequality means unequal access to the common resources of humanity. I do not mean only natural resources or the land, but also products of human enterprise (“improvements” as they are called). This includes things like buildings and art, but also technology and all that we find useful. Perhaps there is no intrinsic reason why there should be equal distribution of either property (capital, which is earning power) or spending/buying power. I am not advocating communism, which failed for the same reason that capitalism is faltering: selfish greed. The problem is not only that some players take advantage of others, which has always been the case. There is also a larger ecological fall-out of the game, which affects everyone and the whole earth. A lot of production is not rationally designed to better the human lot but only to gain advantage over others: widgets to make money rather than real improvement. Thoughtlessly, such production damages the biosphere and our future prospects.

There are ways to redistribute wealth more fairly, short of outright revolution, which never seems to be permanent anyway. Income tax can be made more steeply progressive. Assets and property of all sorts can be taxed progressively as well, so that wealth is redistributed to circulate more freely and does not accumulate in so few hands. There could be a guaranteed living, a guaranteed inheritance, and a guaranteed education for all. The wealthy should not dismiss these ideas as utopian. For, as things are, we are indeed on the historical track to violent revolution. Yet, along with redistribution of wealth as conventionally measured, we must also revolutionize our ideas about what constitutes real wealth—that is, what we truly value as improving life. Upon that our collective survival depends.

We would be far better off thinking in terms of real value instead of spending power. However, those who seek personal gain above human betterment will no doubt continue to promote production that gives them what they opportunistically seek rather than what is objectively needed. Perhaps it is fortunate that much of modern economic activity does not involve material production at all. It may be a boon to the planet that humankind seems to be migrating to cyberspace. It may even be a boon that the pandemic has shut down much of the regular economy. What remains to be seen is how we put Humpty together again.

Is Mortality Necessary? Part Two: the demographic theory of senescence

In an earlier posting I raised the question of whether death is theoretically necessary for life—especially in a way that would thwart desires to extend healthful human longevity (“Is Mortality Necessary?” Aug 23, 2020). I pointed out that to answer this question would involve an evolutionary understanding of why mortality exists, and how the senescence of the multi-celled organism relates to that of its cells. In this second article I pursue these questions further, in relation to the prospect for life extension.

Evolution depends on natural selection, which depends on the passing of generations: in other words, on death. Each individual life is a trial, favoring those with traits better adapted to the current environment in a way that increases reproductive success. It seems counterintuitive that mortality itself could be an adaptive trait, though without it evolution by natural selection could never have taken place. However, the pace of natural evolution of the human species is less relevant now that our technological evolution exponentially outstrips it. Perhaps, then, evolution’s incidental shortcomings (mortality and aging of the individual) can and should be overcome. At the least, this would mean disabling or countering the mechanisms involved.

The history of culture amounts to a long quest to improve the human lot by managing environmental factors and creating a sheltering man-made world. (See my book, Second Nature, archived on this site.) This has been an accelerating process, so that most of the increase of human longevity has occurred only in the past couple of centuries, largely through medicine, sanitation, and technology. Infant and child mortality in particular had heavily weighted average lifespan downward for most of human existence. Their recent mitigation caused life “expectancy” to double from 40 to 80 or so years. Because it is a statistic, this figure is misleading. It is not about an individual lifespan, which (barring death by external causes) has not changed much over the centuries. Nevertheless, diseases of old age have replaced diseases of childhood as the principal threat to an individual life. These diseases may not be new, but formerly most people simply did not live long enough to die of them. Because they typically occur after reproductive age, there was no way for natural selection to weed them out. Heart disease, cancer, diabetes, Alzheimer’s, etc., are now deemed the principal causes of death. While these may be in part a product of our modern lifestyle, increasing vulnerability to such diseases is considered a marker of aging.
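
The point that the life-expectancy statistic is dominated by deaths in infancy can be shown with simple arithmetic. The mortality and lifespan figures below are invented for illustration, not historical data; the sketch only shows how the weighted average moves while adult lifespans stay fixed.

```python
# Illustrative arithmetic only; the mortality and lifespan figures are invented,
# not historical data. It shows how infant deaths drag the average down even
# when adult lifespans stay the same.

def life_expectancy(infant_mortality, infant_age_at_death=1, adult_lifespan=70):
    """Expected lifespan at birth for a toy population with only two outcomes."""
    return (infant_mortality * infant_age_at_death
            + (1 - infant_mortality) * adult_lifespan)

print(life_expectancy(infant_mortality=0.45))  # ~39: "life expectancy of 40"
print(life_expectancy(infant_mortality=0.01))  # ~69: same adults, fewer infant deaths
```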

Yet, some scientists are coming to view aging itself as the primordial disease and cause of mortality. It was the triumph against external causes that resulted in the huge gain in the longevity statistic over the past century. However, to increase it much further would require eliminating internal causes. There has been a lot of research about particular mechanisms associated with aging, with optimism that these can be manipulated to increase an individual lifespan. Yet, there remains the possibility that aging is deeply built-in for evolutionary reasons. If so, aging might resist mere relief of its associated disease symptoms and might require different strategies to overcome.

The intuitive notion that the aged must die to make place for the young dates from antiquity. However, it goes against the modern idea that natural selection maximizes the fitness of the individual. (Shouldn’t a more fit individual live longer?) To make place for new generations, a built-in mechanism would be needed to cause the “fit” organism to senesce and die, if it had not already succumbed to external causes. This mechanism would have to prove advantageous to the species or to a sub-population, else it would not be selected. Such mechanisms have been identified at the cellular level (apoptosis, telomere shortening), but not at the higher level of the organism as an individual or a population. If there is some reason why individuals must die, it is not clear how this necessity relates to the cellular level, where these mechanisms do serve a purpose, or why some cells can reproduce indefinitely and some telomeres either do not shorten or else can be repaired.

Some creatures seem programmed to die soon after reproducing. Unlike the mayfly and the octopus, a human being can live on to reproduce several times and continue to live long after. Humans can have a post-reproductive life equal at least to the length of their reproductive stage. But the other side of that question is why the reproductive phase comes to an end at all. If evolution favored maximum proliferation of the species, shouldn’t individuals live longer in vigor to reproduce more? This gets to the heart of the question of why there might be an evolutionary reason for aging and mortality, which—though not favoring the interests of the individual—might favor the interests of a population.

The Demographic Theory of Senescence is the intriguing idea that built-in aging and mortality serve to stabilize population level and growth. (See the writings of Joshua Mitteldorf.) Without them, populations of predator and prey could fluctuate chaotically, driving one or both to extinction. Built-in senescence dampens runaway population growth in times of feast, without inhibiting survival in times of famine (brought on, for example, by overpopulation and resource depletion). In fact, individuals hunker down, eating and reproducing less and living longer under deprivation, to see a better day when reproduction makes more sense. In other words, mortality is a trait selected for at the group level, which can override selection against it at the individual level. Mortality also favors the genetic diversity needed to defend against ever-mutating disease and environmental change (the very function of sexual reproduction). For, a population dominated by immortals would disfavor new blood with better disease resistance. Yet, theorists have been reluctant to view senescence as a property that could be selected for, because it seems so contrary to the interests of the individual. In western society, sacrifice of individuality is a hard pill to swallow—even, apparently, for evolutionary theorists. Yet the whole history of life is founded on organisms giving up their individuality for the sake of cooperating toward the common good within a higher level of organization: mitochondria incorporated into cells, cells into differentiated tissue and organs, organs into organism, individuals into colonies and communities.
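
A toy population model can illustrate the stabilizing role that the demographic theory assigns to built-in mortality. This is only a sketch of the idea, not Mitteldorf's actual model; the reproduction rate, the extra death rate standing in for senescence, and the time scale are all invented.

```python
# Toy discrete population model, offered only to illustrate the demographic idea.
# Reproduction follows logistic growth; "senescence" is an extra per-capita
# death rate d applied regardless of crowding. All parameters are invented.

def simulate(r=2.5, d=0.0, steps=40, x=0.1):
    """x is population as a fraction of carrying capacity."""
    history = [x]
    for _ in range(steps):
        x = x + r * x * (1 - x) - d * x
        history.append(x)
    return history

no_senescence = simulate(d=0.0)    # high fecundity, no built-in mortality
with_senescence = simulate(d=0.7)  # same fecundity plus built-in mortality

print([round(v, 2) for v in no_senescence[-6:]])    # keeps swinging boom-and-bust
print([round(v, 2) for v in with_senescence[-6:]])  # settles near a steady level
```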

Apoptosis (programmed cell death) has been documented in yeast cells, for example, as an “altruistic” adaptation during periods of food scarcity. In the multi-celled organism, it serves to sculpt development of the embryo, and later to prevent cancer, by pruning away certain cells. In the demographic theory, it is also a mechanism naturally selected to stabilize population. Since telomerase is available in the organism to repair telomeres as needed, the fact that it can be withheld might also be an adaptation to reduce life span for the sake of stabilizing population. Pleiotropy (when a gene serves multiple functions that are sometimes contradictory) is selected, then, because it promotes aging and thus acts against runaway population growth.

If this is the right way of looking at it, on the one hand specific mechanisms are selected that result in mortality as a benefit to populations, though at a cost to individuals. It might be possible to counteract these mechanisms in such a way as to prolong individual life. On the other hand, following this path successfully would defeat the evolutionary function of built-in mortality unless intentional measures are taken to ensure genetic diversity and to control population growth. The lesson is that if we are to deliberately interfere with aging and mortality (as we are certainly trying to do), we must also deliberately do what is required in their place: limit human population while providing artificially for the health benefits of genetic diversity. If the goal is a sustainably stable population, people cannot be born at current rates when they are no longer dying at natural rates.

These are global political and ethical issues. Who would be allowed to reproduce? Who would be kept artificially alive? The production and rearing of children could be controlled, indeed could become a state or communal function, divorced from sexual intercourse. The elderly could be educated to let go of an unsatisfying existence—or be forced to. The diversity issue is a technological problem, which genetic medicine already proposes to engage; it too raises the unsavory prospect of eugenics. All this brings to mind the transhumanist project to assume total conscious control of human nature, evolution, and destiny.

At present, of course, we do not have the institutions required for humanity to take full charge of its future. Science and technology attempt to control nature for piecemeal, often short-sighted purposes, or as rear-guard actions (as we have experienced in the pandemic). But far more would be involved to wrest from nature control over evolution and total regulation of the earth’s biosphere (or of an artificial one somewhere else). Immortality would require a lot more information than we now have. To be sustainable, it would also require a level of organization of which humanity is currently incapable. It would need a selfless spirit of global cooperation to act objectively for the common good, which does not yet exist and perhaps never will. (Imagine instead a tyrannical dictator who could live forever!) Immortality would require a serious upgrading of our values. At the very least, it would require rethinking the individualism on which modern humanism and science itself were founded, let alone the consumer culture. Science, politics, religion, and philosophy will have to finally merge if we are to take charge of human destiny. (Plato may have grasped this, well ahead of our time.)

Ecological thinking is about populations rather than individuals. Communism was perverted by the societies that embraced it and rejected by the societies that cling to individual property rights—in both cases out of individualist self-obsession. In the current pandemic, however, we are forced to think of populations as well as individuals. Yet, the strategy so far has been oriented toward “saving lives at any cost.” If the demographic theory of senescence holds any water, our social development will have to keep pace with strategies to increase lifespan if we are not to breed ourselves to oblivion. Individuals will have to voluntarily redefine personal identity and their relation to the collective. And, of course, their relationship to aging and death.

Why nature should be accorded the rights of personhood

Let us first understand what a person is and why nature is not regarded as a person in our culture. But, even before that, let us acknowledge that nature once was so regarded. Long before industrial society became obsessed with the metaphor of mechanism, ancient peoples conceived the world as a sort of living organism. It was peopled with creatures, plants, spirits, and other entities who were treated as personalities on a par with human beings. Then along came the scientific revolution.

Nature for us moderns is impersonal by definition. For us, physical things are inert and incapable of responding to us in the way that fellow human beings are. Our science defines nature as composed only of physical entities, not of moral agents with responsibilities and obligations. Humans have formed an exclusive club, which admits only members of our kind. Members are accorded all sorts of extravagant courtesies, such as human rights, equality before the law, the obligation to relieve (human) suffering and save life at any cost, the sentiment that (human) life is infinitely precious. We have appropriated half the earth as our clubhouse and grounds, expelling other inhabitants toward whom we feel no need to extend these courtesies. Having eliminated most of those formidable competitors who posed a threat and demanded respect, we treat the remaining creatures as no more than raw materials, to use and enslave as we please, or to consume as flesh.

We have done all this because we have been able to, with little thought for the rightness of such an attitude. In the Western hemisphere, our society was founded on domination of the peoples who occupied the land before the European invasion (the “discovery” of the New World). Significantly, it went without saying that this included the assumed right to dominate the other species as well. In other words, plundering these lands for their resources, and disregarding their native human inhabitants, went hand in hand to express an attitude that depersonalized both. The Conquest, as it has been called, was simultaneously about conquering people and conquering nature.

Probably this attitude started long before, perhaps with the domestication of wild animals and plants. The relationship to prey in the traditional hunt was fierce but respectful, a contest between worthy opponents, especially when large creatures were involved. (Can it be coincidence that losers of this contest are called “game”?) Often an apology of gratitude was made to the murdered creature, no part of whose valued carcass was wasted. When animals were enslaved and bred for human purposes, the tenor of the relationship changed from respect to control. The war on animals concluded in a regime of permanent occupation. A similar shift occurred when plants were no longer gathered but cultivated as crops. Both plants and animals were once personified and treated as the peers of humans. That relationship of awe gave way to a relationship of managing a resource. The growing numbers of the human invaders, and their lack of self-restraint, led eventually to the present breakdown of sustainable management.

Today we embrace a universal biological definition of human being, and a notion of universal human rights and personhood. Throughout history, however, this was hardly the inclusive category that it now is. Groups other than one’s own could be considered non-human, reduced to the status of chattel, or even literally hunted as prey. In other words, the distinction between person and thing remained ambiguous. Depersonalization is still a tactic used to justify mistreating people that some group considers undesirable, inferior, or threatening. Personhood is still a function of membership in the exclusive human Club. But since the qualifications for that membership were unclear throughout most of human history, perhaps it works both ways. Instead of excluding animals, plants, and nature at large, we could give them the benefit of the doubt to welcome them into our fellowship.

Now, what exactly is a person? In fact, there is no universally accepted legal or even philosophical definition. The word derives from the Latin term for the theatre mask (persona), which indicated the stage character as distinguished from the mere actor wearing it. By a traditional etymology, it means literally the sound coming through the mask, implying a transcendent origin independent of the physical speaker. Personhood so understood is intimately tied to language and hence limited to human beings as the sole known users of “fully grammatical language.” Other creatures do obviously communicate, but not in a way that entitles them to membership in the Club.

Various features of personhood—loosely related to language use—include reason, consciousness, moral and legal responsibility, the ability to enter contractual relationships, etc. Personhood confers numerous legal rights in human society. These include owning property, making and enforcing contracts, the right to self-govern and to hire other agents to act on one’s behalf. It guarantees civil rights to life and liberty within human society. One could argue, on behalf of non-human creatures, that they meet some of these criteria for personhood. In many cases, like aboriginal people, they were here before us, occupying the lands and waterways. Western society is now judicially and morally preoccupied with historical treaties and land claims of aboriginal groups—and more generally with redressing social wrongs committed by former generations. Certainly, our exploitive relation to nature at large is among these wrongs, as is our checkered relation to particular species such as whales, and ecological entities such as forests, rivers, oceans and atmosphere. Thinking on the subject tends to remain human-centric: we are motivated primarily by threats to our survival arising from our own abusive practices. But such a self-serving attitude remains abusive, and morally ought to give way to a consideration of victims that are not yet extinct, including the planet as a whole. If restorative justice applies to human tribes, why not to animal tribes? Why not to nature at large?

Animals reason in their own ways. Although insects may be little more than automatons, vertebrates are clearly sentient; many display concern for their own kin or kind and sometimes for members of other species. While they do not make written contracts, at one time neither did people! Animals (and even plants) maintain measured relationships, not only among their own ranks but also with other species, in an intricate system of mutual symbiosis and benefit. These interdependencies could be regarded as informal contracts, much like those which human beings engage in quite apart from written contract and law. Deer, for example, tend to browse and not over-forage their food supply. Even viruses are reluctant to kill their hosts. It could be argued that human beings, in the course of substituting formal legalities, have lost this common sense and all restraint, and thereby forfeited their claims.

Of course, we now say that natural ecological balancing acts are not “intentional,” in the conscious human sense, but the result of “instinct” or “natural selection,” and ultimately of inanimate “causal processes.” This is a very narrow and human-centric understanding of intentionality. Fundamentally, it is circular reasoning: intending is what people do, in the human world, while causality is held to be the rule in nature. However, both intention and cause (and even the concept of ‘nature’) are human constructs. Causality is no more an inherent fact of nature than intention is absent from it. It is we humans who have imposed this duality, thereby seceding from the animal kingdom and from nature at large. Humans have formalized their observations of natural patterns as laws of nature, and formalized their own social conduct and conventions as laws of jurisprudence. Other creatures simply do what works for them within the system of nature without articulating it. Are we superior simply because we gossip among ourselves about what other creatures are doing?

Are we superior because might makes so-called rights? We chase other creatures (and fellow human beings) from a habitat and claim to own it. But private property, like money, is a legalistic convention within the Club that has no clear connection with any right of ownership outside of human society. “Ownership” in this broader sense is more deserved by those whose proven behavior adheres to the unwritten laws that maintain an ecosystem in balance with their fellow creatures. It is least deserved by those who destroy that balance, while ironically permitted to do so through written laws. If stewardship is a justification for aboriginal land claims, why not for any right of ownership of the planet? Almost any species has a better claim to stewardship than humans. After all, we have rather botched the unceded territory that the Club now occupies.

Nature is self-regulating by nature; it doesn’t need to be granted the right to self-govern by human beings. It cannot speak for itself in human language, but expresses its needs directly in its own condition. Human scientists appoint themselves to interpret those expressions, largely for human purposes. The State appoints a public defender to represent people who cannot afford counsel or who are deemed unable to defend themselves. So, why not public defenders of the planet? Their job would be to ensure, in the eyes of human beings, nature’s right to life, liberty, and well-being. That could only benefit us too, since it has never been more than a delusion that we can live segregated from the rest of nature.


An Objective Science of Economics?

Physicists and even biologists can pretend to an objective understanding of the phenomena they study. That is so only to the degree that they can stand outside the systems concerned. Is that possible for economists? While the economist is perhaps more akin to the anthropologist, even the latter is traditionally an outsider to the cultural systems studied. The ubiquity of modern global capitalism already poses to the economist the dilemma that the anthropologist will face only when all indigenous cultures have been absorbed into modernity.

This has not stopped generations of economists from holding forth on the “true” nature of such concepts as value, wealth, capital, and growth. Biologists, and even anthropologists, have been able to analyze living systems in their own human terms—as distinct from terms proper to the living systems themselves. That is, scientists dwell in a world apart from the world they observe. No such separation is possible for economists, who inevitably think in the terms characteristic of their society, which is increasingly a monoculture alienated from nature.

Economic concepts reflect values current in modern capitalist society. Despite rival and contradictory theories, on some level this renders all economic theory fundamentally self-serving, even self-confirming. At worst it is misleading and deceptive, often embraced by governments to legitimize their policies. For example, the “trickle down” notion endorsed by Reagan and Thatcher was used to justify an unregulated market, on the promise that profit going to the elite would ultimately benefit society as a whole, if not equally. This, when it was already widely apparent that inequality of wealth was expanding far more rapidly than wealth itself! The tacit assumption is that no one should object to the accelerating concentration of wealth, provided their own situation is moderately improving or at least not getting worse. By hiding behind averages, however, economic analysis can obscure what people on the ground are actually experiencing, which in any case is subjective and includes things like envy and perceived unfairness. Concepts like ‘national income’, ‘GNP’, and ‘total capital’ (private or public) are statistical aggregates and averages that usefully reveal differences between countries or regions, and over time. Yet they mask huge inequalities among individuals within a given country.
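
A trivially small example makes the point about averages. The incomes below are invented numbers, chosen only to show how a mean can flatter a distribution that most of its members do not experience.

```python
# Invented numbers, purely to show how an average can mask inequality.
incomes = [20, 22, 25, 27, 30, 31, 35, 40, 45, 725]  # thousands per year

mean = sum(incomes) / len(incomes)
middle_two = sorted(incomes)[len(incomes) // 2 - 1 : len(incomes) // 2 + 1]
median = sum(middle_two) / 2  # even count: average the two middle values

print(f"mean income:   {mean:.0f}k")    # 100k: "the average person is doing well"
print(f"median income: {median:.0f}k")  # ~30k: closer to what most people actually see
```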

Life in consumer society has in many ways been visibly democratized. We are theoretically now equal before the law, each with a single vote. The modern upper class blends in with the rest of society more easily than traditional aristocrats, who flaunted their class status. The modern gated community is far less conspicuous than the castle on the hill. The Jaguar or Tesla is not as flagrant a status symbol as the gilded horse-drawn coach; it must travel on the same public highways as the rest of us and obey the same traffic laws. First class air travel is only moderately less uncomfortable than “economy” class; most of us have not seen the inside of a private jet and probably not the outside either. The extreme wealth of the very rich consists less of evident status symbols than invisible stock portfolios and bank accounts. Yet, while the modern rich may be unobtrusive, they are increasingly powerful. And the poor are increasingly numerous, obvious, and disenfranchised.

Most of economics since Marx has been little more than an apology for systemic inequality. One reason for this may be that “the economy” is as complex as the human beings who constitute it. The stock market and modern investment “instruments” are accordingly abstruse, with those in the know privileged to gain from them. It is little wonder that the theories devised by economists (mostly academics) have been so esoteric. But this makes it all the harder, even for them, to call a spade a spade or to declare the emperor nude. Yet millions of people possess enough common sense to see that the rich are getting richer and the poor even poorer for reasons that can hardly be historical accident. The gains of the middle class in the mid-twentieth century are quickly eroding. Their very assets (such as mutual or pension funds) are being used against them to line the pockets of corporate executives. The modern capitalist system has been honed by those who control it to siphon wealth from the now shriveling middle class. This can hardly be surprising, since—to put it bluntly—the purpose of capital has always been to create inequality! What is surprising is how efficient it has become in the last few decades at undoing the hard-won gains of the middle class.

Modern economists may or may not be capitalist dupes, and the top-heavy capitalist system itself may or may not be doomed to collapse. But is there another role possible for a science of economics? The traditional focus of economic theory has been anthropocentric by definition. It is about the production and distribution of wealth in very human terms, and in many cases from the perspective of the ruling class. In particular, it does not consider business and growth from a planetary point of view, in terms of their effects on nature. Such a perspective would be nearly a negative image of the human one: all that is considered an asset in human terms is largely a deficit in the natural economy. This deficit has become so large that it can no longer be ignored by the human economy; it threatens to break the bank of nature. If an objective economics is possible at all, it would at the least have to take the planet as the true stage on which economic activity plays out, and redefine all economic concepts accordingly. By this I do not mean carbon credits or assigning a market value to natural resources or to services provided by nature, such as recycling air and water. I mean rather a complete shift from the human perspective to that of nature. From that perspective, what would have value would be services provided to nature by human beings, rather than the other way around.

No doubt, from the natural point of view, the greatest service we could perform would be to simply disappear—gracefully, of course, without wreaking further havoc on the planet in the process! I don’t think people are going to agree to this, but they may ultimately be forced to concede at least some rights to the rest of the biosphere as a co-participant in the game. A more objective economics would at least give parity to human and natural interests and would focus on the interaction between them. Concepts like “value” and “growth” would shift accordingly. In human economics, value has been assigned a variety of meanings, reflecting the daunting convolutions of the human world in which economists are perforce enmeshed: labor-value, use-value, exchange-value, surplus value, market value, etc. Yet, an economics that focuses equally on the needs of nature could simplify matters by moving the human player from center stage. It could become an actual science at last, insofar as it would study the world independent of human values, like other sciences are supposed to do.

The various anthropocentric concepts of value could fold into a single parameter representing an optimum that maximizes both human and natural interests. This approach is used in game theory, where contending human players vie strategically, and the optimum looks like a compromise among opponents trying to maximize their individual “utility” and minimize their losses. Why not consider nature a player in the game? And why not consider strategies based on cooperation rather than competition? Suppose the aim is to arrive at the best possible world for all concerned, including other species and the planet as a whole. Value then would be measured by how much something advances toward that common goal, rather than how much it advances individual human players at the expense of others and nature. Growth would be the increase of overall well-being.
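
A minimal game-theoretic sketch can make this concrete. The payoffs below are invented; the only point is that once nature is scored as a party to the game, the outcome that maximizes the common total differs from the one each human player would choose for private gain.

```python
# A minimal sketch of the idea, with invented payoffs: two human "players"
# each choose to exploit or conserve; nature is scored as a third party.
# Value is measured by the total, not by any one player's winnings.

# payoffs[(choice_A, choice_B)] = (payoff_A, payoff_B, payoff_nature)
payoffs = {
    ("exploit", "exploit"):   (5, 5, -12),
    ("exploit", "conserve"):  (7, 2,  -6),
    ("conserve", "exploit"):  (2, 7,  -6),
    ("conserve", "conserve"): (4, 4,   3),
}

def common_good(outcome):
    return sum(payoffs[outcome])  # humans plus nature

best = max(payoffs, key=common_good)
print(best, common_good(best))
# ('conserve', 'conserve') wins once nature's losses are counted,
# even though each human player would individually earn more by exploiting.
```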

Money in such a world would be a measure of duties to perform rather than privileges to exercise. It is often said that money confers freedom to its possessor. In an objective economics, this freedom would be put in the context of how it affects other beings, human and non-human. The value of a “dollar” would be the net entitlement it represents: its power to benefit its holder minus its power to damage the environment, which includes other beings. Put more positively, money would measure the power to contribute to the general welfare—not the power to consume but the power to do good. The individual would benefit only statistically and indirectly from spending it, as a member of an improved world. As in prehistory, the individual would again figure more as an element of a community than as a competitor with other individuals. Admittedly, this would be communism—perhaps on steroids. But communism as we’ve known it is passé, both as a failed social system and as a dirty word in the capitalist lexicon. There are many who wish to continue to believe in the opportunism of modern capitalism, bumbling toward ecological and social collapse. But they are obliged to find a new scare word to castigate the threat to their individual liberty, which is little more than freedom to hasten the destruction of the planet.

The Problem with Money

Capitalism is the practice of using wealth to collect interest or rent. That means that those who possess capital can have an income not from their labor but from their property. It’s important to grasp that “real” productivity comes from work that provides a service or transforms materials into something humanly useful—whether the exertion comes from muscles or from machines, and whether the materials are physical or more intangible, such as concepts. Income from capital means more spending power, but not necessarily more productivity. In other words, your spending power can grow on its own, but only your labor can make real value. While capital can increase your share of total wealth, it cannot by itself increase the supply of goods in the world that have real value.

Some economists consider it a problem when global capital is not productive enough. Being super-rich already, its owners may not be motivated to use it as effectively as someone with less means. However, productive of what? Effective toward what end? In the face of ecological disaster, production that results in pollution, exhaustion of non-renewable resources, and global warming is not objectively desirable. Consumption that requires such production should be discouraged. Money sitting idle is at least money not being used to destroy the planet!

An individual is both producer and consumer, according to how their personal wealth and energy are disposed. The extremely rich have excessive spending power as consumers; yet only so much personal consumption can be directly satisfying. (A person can only sleep in one bed at a time, drive one car, fly one plane, eat one meal, etc.) Spending power represents choice for the wealthy more than direct consumption. Of course, they may cause more environmental fallout in their quest to establish that range of choice (several houses around the world, several cars, private jets and boats, etc.). Yet, it could be telling to imagine the cumulative effect of the same total spending power if it were distributed over many poorer individuals, who might be more motivated to use their comparatively meager resources as capital to further increase their wealth. Would the redistribution result in more or less global warming? In other words, might the capital of a large middle class have a worse ecological effect than the same capital in the hands of a small super-rich elite? If so, would it not be ecologically better to maintain the unequal distribution? That is a big “if,” and I am not justifying inequality, which many perceive as the world’s foremost social problem. Rather, I want to point out that merely redistributing wealth in the name of fairness would not solve our global ecological challenges. Productivity and growth must be altogether redefined, in such a way as to eliminate production detrimental to humanity’s long-term interests.

Laissez-faire means one can buy what one wants if it is available and produce what one wants in the hopes someone will buy it. But nature is no longer going to let us do just what we want. Government could intervene to limit production to certain “goods” that are indeed good (or less bad) for our human future. It could also limit what can be purchased, by specifying what credits can be used for. While libertarians would object to such impositions, they are free to embrace and advocate voluntary restraint. One could argue, under the circumstances, that one is morally obliged to do so, since the other side of freedom is responsibility. In effect, we turn to government to make us do what we know is necessary, but which we resist—in part because we fear others will not do their fair share. The job of government is to impose fairness along with order, like a parent who must deal with squabbling children.

The whole advantage of money is that it can be used for any purpose that others accept, regardless of who it belongs to or where or in what form it is stored. But, what if the dollar were not this anonymous and abstract unit of power, but could be used only for specified purposes—for example, for basic necessities or for “green” projects? This is entirely feasible in the digital economy. A unit of wealth could be spendable only on certain products or services, or used to finance only certain projects or activities. Each dollar (or euro or yen) would have its own earmarked identity, could only be used for its designated purpose, and would be a non-transferable credit. No doubt a black market would arise to get around such restriction. But that loophole effectively already exists, in the form of untaxed offshore accounts and other unfair advantages for the ultra-rich.
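
In software terms, such an earmarked, non-transferable credit is easy to imagine. The sketch below is only an illustration under my own assumptions; the class name, the purpose categories, and the vendor check are invented, and a real system would of course need a ledger, identity, and enforcement behind it.

```python
# A minimal sketch of what a purpose-restricted, non-transferable credit might
# look like in a digital ledger. Names and categories are invented for illustration.

from dataclasses import dataclass

@dataclass
class EarmarkedCredit:
    credit_id: str
    amount: float
    holder: str
    allowed_purposes: frozenset  # e.g. {"basic_necessities", "green_projects"}

    def spend(self, purpose: str, vendor_accepts: set) -> bool:
        """A spend succeeds only if the purpose matches both the credit's
        earmark and what the vendor is registered to provide."""
        return purpose in self.allowed_purposes and purpose in vendor_accepts

credit = EarmarkedCredit("c-001", 50.0, "resident-42",
                         frozenset({"basic_necessities"}))
print(credit.spend("basic_necessities", {"basic_necessities"}))  # True
print(credit.spend("luxury_goods", {"luxury_goods"}))            # False: wrong earmark
```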

If total global wealth were distributed evenly, an individual typically might not have enough savings to finance an enterprise beyond a certain scale. And where wealth is distributed unevenly (as it almost always is), the vast majority will not have the needed resources—which is why they borrow from those who do. This is a fundamental unresolved problem of social organization. At every level of community, people have always needed to pool resources to get some things done. When a few control the lion’s share of resources, however, the temptation is to charge the community for their use. From the community’s point of view, this is a form of extortion. “Usury” was once forbidden by religions for good reason. We have gotten used to it, so that it seems normal and necessary, even if unfair. The result is a mounting burden of private and public debt, with crippling interest payments and skyrocketing rents and real estate prices.

If 70% of global wealth belongs to 10% of global population, then those wealthy individuals are 70% responsible for how the world’s resources are used. The fate of the planet lies proportionately in their hands. Quite apart from the question of redistribution, fairness dictates that the greatest pressure to reform should fall upon them through taxation and legislation. But rather than being an incentive to use capital more “efficiently,” these measures should be used to foster production that does not damage the biosphere. The rich could take the lead in setting an example concerning the sort of investment to make. Rather than simply reducing inequality of wealth, the focus should also be to reduce its environmental effects.

To reduce the environmental impact of production, manufactured products must be designed to last for generations. They must satisfy real needs. Public and private property must be thought of as a patrimony to be passed on generation to generation. Nothing must be produced for the sake of making money—that is, merely to establish or justify one’s slice of the economic pie. On the contrary, a basic living should be everyone’s right! An economy is a game with many players, in which each person’s share is decided and justified. Yet, the game is not only about the individual’s winnings in a contest with others. For, every society is a collective enterprise, in which something is achieved beyond the holdings of the individual. The human enterprise as a whole is far more cooperative than competitive.

Granted that we need some material stuff, it should at least be durable and of high quality. However, since the beginning of the industrial revolution, production has only indirectly been aimed at satisfying real human need. The immediate goal has been to increase the investor’s share of the economic pie: make more widgets to get more money. It was never to make lasting stuff, or stuff so excellent that it didn’t invite an improved version. Quite the contrary, it didn’t take long for manufacturers to discover built-in obsolescence. With the development of plastics, durability and quality for the most part went out the window. The real value of “goods” (that is, their ability to satisfy genuine human need) was displaced by their symbolic or token value (that is, as a pretext to make money). The irony is that the quality of available goods was lowered for the rich along with everyone else. Spending power is thereby diluted for everyone in terms of its ability to purchase things and services of real value. As always, in compensation, the ultra-rich have sustained a cadre of artisans who can still produce the quality they alone can now afford. Otherwise, what would they have to show for their wealth?

It is largely the production of material goods that results in the carbon footprint. To some extent, the modern economy is already being de-materialized by information technologies. But Man cannot live by information alone. Material goods remain the gold standard of economic value, because we remain physical beings with material needs. In some sense, de-materializing the economy inflates the unit of value and slows “real” growth. And that growth must slow down for the sake of the planet. Yet low growth has a counterintuitive consequence in addition to the obvious belt-tightening for all: it increases inequality. It does so because the rate of return on capital (possessed disproportionately by the rich) has always been higher than the rate of growth (produced by everyone collectively). This has mattered less in exceptional times of high growth, because there is then a surplus to benefit everyone (so-called trickle-down) even though the rich benefit more. But a high growth rate is neither historically normal nor ecologically sustainable. We have to find a way of life that is both fair and minimal in its planetary impact.
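
The arithmetic behind that claim can be sketched in a few lines. The return and growth rates below are illustrative assumptions only; the point is that the same gap between r and g is barely noticed when the whole economy is also expanding, and felt sharply when it is not.

```python
# Toy arithmetic only: the same return on capital r under two growth rates g.
# All numbers are illustrative assumptions, not measurements.

def factors(r=0.05, g=0.04, years=30):
    capital_factor = (1 + r) ** years   # how much a fortune multiplies
    economy_factor = (1 + g) ** years   # how much the whole economy multiplies
    return capital_factor, economy_factor, capital_factor / economy_factor

for g in (0.04, 0.01):  # a high-growth era versus a low-growth era
    cap, econ, gap = factors(g=g)
    print(f"g={g:.0%}: capital x{cap:.1f}, economy x{econ:.1f}, "
          f"capital pulls ahead by x{gap:.1f}")

# With g=4%, the economy roughly triples too, so the gap (about 1.3x) is easy to overlook.
# With g=1%, the economy barely grows while capital still multiplies more than fourfold,
# so the same r > g arithmetic is experienced as sharply rising inequality.
```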