R U Real?

For millennia, philosophers have debated the nature of perception and its relation to reality. Their speculations have been shaped by the prevailing concerns and metaphors of their age. The ancient Greeks, with slaves to do their work, were less interested in labor-saving inventions than in abstract concepts and principles. Plato’s allegory of the Cave refers to no technology more sophisticated than fire—harking back, perhaps, to times when people literally lived in caves. (It does refer to the notion of prisoner, long familiar from slavery and military conquest in the ancient world.)

In Plato’s low-tech metaphor, the relationship of the perceiving subject to the objects of perception is like that of someone in solitary confinement. The unfortunate prisoner’s head is even restrained in such a way that he/she is able to see only the shadows cast on the walls of the cave by objects passing behind—but never the objects themselves. It was a prescient intuition, anticipating the later discovery that the organ responsible for perception is the brain, confined like a prisoner in the cave of the skull. Plato believed it was possible to escape this imprisonment. In his metaphor, the liberated person could emerge from the cave and see things for what they are in the light of day—which to Plato meant the light of pure reason, freed from dependence on base sensation.

Fast forward about two millennia to Catholic France, where Descartes argued that the perceiving subject could be systematically deceived by some mischievous agent capable of falsifying the sensory input to the brain. Descartes understood that knowledge of the world is crucially dependent on afferent nerves, which could be surgically tampered with. (The modern version of this metaphor is the “brain in a vat,” wired up to a computer that sends all the right signals to the brain to convince it that it is living in a body and engaged in normal perception of the world.) While Descartes was accordingly skeptical about knowledge derived from the senses, he claimed that God would not permit such a deception. In our age, in contrast, we not only know that deception is feasible, but even court it in the form of virtual entertainments. The film The Matrix is a virtual entertainment about virtual entertainments, expounding on the theme of the brain in a vat.

Fast forward again a century and a half to Immanuel Kant. Without recourse to metaphor or anatomy, he clearly articulated for the first time the perceiving subject’s inescapable isolation from objective reality. (In view of the brain’s isolation within the skull, the subject’s relation to the outside world is clearly not that of a transparent window through which things are seen as they “truly” are.) Nevertheless, while even God Almighty could do nothing about this unfortunate condition, Kant claimed that the very impossibility of direct knowledge of external reality was reason for faith. In an age when science was encroaching on religion, he contended that it was impossible to decide issues about God, free will, and immortality—precisely because they are beyond reach in the inaccessible realm of things-in-themselves. One is free, he insisted, to believe in such things on moral if not epistemological grounds.

Curiously, each of these key figures appeals to morality or religion to resolve the question of reality, in what are essentially early theories of cognition. Plato does not seem to grasp the significance of his own metaphor as a comment on the nature of mind. Rather, it is incidental to his ideas on politics and the moral superiority of the “enlightened.” Descartes—who probably knew better, yet feared the Church—resorts to God to justify the possibility of true knowledge. And Kant, for whom even reason is suspect, had to “deny knowledge in order to make room for faith.” We must fast forward again another century to find a genuinely scientific model of cognition. In Hermann von Helmholtz’s notion of unconscious inference, the brain constructs a “theory” of the external world using symbolic representations that are transforms of sensory input. His notion is a precursor of computational theories of cognition. The metaphor works both ways: one could say that perception is modeled on scientific inference; but one can equally say that science is a cognitive process which recapitulates and extends natural perception.

Given its commitment to an objective view, it is ironic that science shied away from the implications of Kant’s thesis that reality is off-limits to the mind. While computational theories explain cognition as a form of behavior, they fail to address: (1) the brain’s epistemic isolation from the external world; (2) the nature of conscious experience, if it is not a direct revelation of the world; and (3) the insidious circularity involved in accounts of perception.

To put yourself in the brain’s shoes (first point, above), imagine you live permanently underwater in a submarine—with no periscope, port holes, or hatch. You have grown up inside and have never been outside its hull to view the world first-hand. You have only instrument panels and controls to deal with, and initially you have no idea what these are for. Only by lengthy trial and error do you discover correlations between instrument readings and control settings. These correlations give you the idea that you are inside a vessel that can move about under your direction, within an “external” environment that surrounds it. Using sonar, you construct a “picture” of that presumptive world, which you call “seeing.”

This is metaphor, of course, and all metaphors have their limitations. This one does not tell us, for example, exactly what it means to be “having a picture” of the external world (second point), beyond the fact that it enables the submariner to “navigate.” This picture (conscious perception) is evidently a sort of real-time map—but of what? And why is it consciously experienced rather than just quietly running as a program that draws on a data bank to guide the behavior of navigating? (In other words, why is there a submariner at all, as opposed to a fully automated underwater machine?) Furthermore, the brain’s mastery of its situation is not a function of one lifetime only. The “trial and error” takes place in evolutionary time, over many generations of failures that result in wrecked machines.

In the attempt to explain seeing, perhaps the greatest failure of the metaphor is the circularity of presuming someone inside the submarine who already has the ability to see: some inner person who already has a concept of reality outside the hull (skull), and who moves about inside the seemingly real space of the submarine’s interior, aware of instrument panels and control levers as really existing things. It is as though a smaller submarine swims about inside the larger one, trying to learn the ropes, and within that submarine an even smaller one… ad infinitum!

The problem with scientific theories of cognition is that they already presume the real world whose appearance in the mind they are trying to explain. The physical brain, with neurons, is presumed to exist in a physical world as it appears to humans—in order to explain that very appearance, which includes such things as brains and neurons and the atoms of which they are composed. The output of the brain is recycled as its input! To my knowledge, Kant did not venture to discuss this circularity. Yet, it clearly affirms that the world-in-itself is epistemically inaccessible, since there is no way out of this recycling. However, rather than be discouraged by this as a defeat of the quest for knowledge or reality, we should take it as an invitation to understand what “knowledge” can actually mean, and what the concept of “reality” can be for prisoners inside the cave of the skull.

Clearly, for any organism, what is real is what can affect its well-being and survival, and what it can affect in turn. (This is congruent with the epistemology of science: what is real is that with which the observer can causally interact.) The submariner’s picture and knowledge of the world outside the hull are “realistic” to the degree they facilitate successful navigation—that is, survival. The question of whether such knowledge is “true” has little meaning outside this context. Except in these limited terms, you cannot know what is outside your skull—or what is inside it, for that matter. The neurosurgeon can open up a skull to reveal a brain—can even stimulate that brain electrically to make it experience something the surgeon takes to be a hallucination. But even if the surgeon opened her own skull to peek inside, and manipulated her own experience, what she would see is but an image created by her own brain—in this case perhaps altered by her surgical interventions. The submariner’s constructed map is projected as external, real, and even accurate. But it is not the territory. What makes experience veridical or false is hardly as straightforward as the scientific worldview suggests. Science, as an extended or supplementary form of cognition, is as dependent on these caveats as natural perception. Whether scientific knowledge of the external world ultimately qualifies as truth will depend on how well it serves the survival of our species. On that the jury is still out.

Are You Fine-tuned? (Or: the story of Goldilocks and the three dimensions)

The fine-tuning problem is the notion that the physical universe appears to be precisely adjusted to allow the existence of life. It is the apparent fact that many fundamental parameters of physics and cosmology could not differ much from their actual values, nor could the basic laws of physics be much different, without resulting in a universe that would not support life. Creationists point to this coincidence as evidence of intelligent design by God. Some thinkers point to it as evidence that our universe was engineered by advanced aliens. And some even propose that physical reality is actually a computer simulation we are living in (created, of course, by advanced aliens). But perhaps fine-tuning is a set-up that simply points to the need for a different way of thinking.

First of all, the problem assumes that the universe could be different than it is—that fundamental parameters of physics could have different values than they actually do in our world. This presumes some context in which basic properties can vary. That context is a mechanistic point of view. The Stanford Encyclopedia of Philosophy defines fine-tuning as the “sensitive dependences of facts or properties on the values of certain parameters.” It points to technological devices (machines) as paradigm examples of systems that have been fine-tuned by engineers to perform in an optimal way, like tuning a car engine. The mechanistic framework of science implicitly suggests an external designer, engineer, mechanic or tinkerer—if not God, then the scientist. In fact, the early scientists were literally Creationists. Whatever the solution, the problem is an historical residue of their mechanistic outlook. The answer may require that we look at the universe in a more organic way.

The religious solution was to suppose that the exact tweaking needed to account for observed values of physical parameters must be intentional and not accidental. The universe could only be fine-tuned by design—as a machine is. However, the scale and degree of precision are far above the capabilities of human engineers. This suggests that the designer must have near-infinite powers, and must live in some other reality or sector of the universe. Only God or vastly superior alien beings would have the know-how to create the universe we know. Alternatively, such precision could imply that the universe is not even physical, but merely a product of definition, a digital simulation or virtual reality. Ergo, there must be another level of reality behind the apparent physical one. But such thinking is ontologically extravagant.

Apart from creationism, super-aliens, or life in a cosmic computer, a more conventional approach to the problem is statistical. One can explain a freak occurrence as a random event in a large run of very many. Given an infinite number of monkeys with typewriters, one of them is bound to type out Shakespeare eventually. If, say, there are enough universes with random properties, it seems plausible that at least one of them would be suitable for the emergence of life. Since we are here, we must be living in that universe. But this line of reasoning is also ontologically costly: one must assume an indefinite number of actual or past “other universes” to explain this single one. The inspiration for such schemes is organic insofar as it suggests some sort of natural selection among many variants. That could be the “anthropic” selection mentioned above or some Darwinian selection among generations of universes (such as Lee Smolin’s black hole theory). Such a “multiverse” scheme could be true, but we should only think so because of real evidence and not in order to make an apparent dilemma go away.
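To spell out the statistical intuition (a schematic illustration only, not tied to any particular multiverse model): suppose each universe independently has some tiny probability p of permitting life. Among N such universes, the chance that at least one is life-permitting is 1 − (1 − p)^N, which approaches certainty as N grows without bound, no matter how small p is. The anthropic step then adds only that a life-permitting universe is, trivially, the only kind in which observers can find themselves to remark on the fact.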

It might be ontologically more economical to assume that our singular universe somehow fine-tunes itself. After all, organisms seem to fine-tune themselves. Their parts cooperate in an extremely complex way that cannot be understood by thinking of the system as a machine designed from the outside. If nature (the one and only universe) is more like an organism than a machine, then the fine-tuning problem should be approached a different way, if indeed it is a problem at all. Instead of looking at life as a special instance of the evolution of inert matter, one could look at the evolution of supposedly inert matter (physics) as a special case involving principles that can also describe the evolution of life.

Systems in physics are simple by definition. Indeed, they are conceived for simplicity. In contrast, organisms (and the entire biosphere) are complex and homeostatic. Apart from the definitions imposed by biologists, organisms are also self-defining. Physical systems are generally analyzed in terms of one causal factor at a time—as in “controlled” experiments. As the name suggests, this way of looking aims to control nature in the way we can control machines, which operate on simple linear causality. Biological systems involve very many mutual and circular causes, hard to disentangle or control. Whereas the physical system (machine) reflects the observer’s intentionality and purposes—to produce something of human benefit—the organism aims to produce and maintain itself. Perhaps it is time to regard the cosmos as a self-organizing entity.

Fine-tuning argues that life could not have existed if the laws of nature were slightly different, if the constants of nature were slightly different, or if the initial conditions at the Big Bang were slightly different—in other words, in most conceivable alternative universes. But is an alternative universe physically possible simply because we can conceive it? The very business of physics is to propose theoretical models that are free creations of mathematical imagination. Such models are conceptual machines. We can imagine worlds with a different physics; but does imagining them make them real? The fact that a mathematical model can generate alternative worlds may falsely suggest that there is some real cosmic generator of universes churning out alternative versions with differing parameters and even different laws. “Fundamental parameters” are knobs on a conceptual machine, which can be tweaked. But they are not knobs on the world itself. They are variables of equations, which describe the behavior of the model. The idea of fine-tuning confuses the model with the reality it models.

The notion of alternative values for fundamental parameters extends even to imagining what the world would be like with more than or less than three spatial dimensions. But the very idea of dimension (like that of parameter) is a convention. Space itself just is. What we mean literally by spatial dimensions are directions at right angles to each other—of which there are but three in Euclidean geometry. The idea that this number could be different derives from an abstract concept of space in contrast to literal space: dimensions of a conceptual system—such as phase space or non-Euclidean geometry. The resultant “landscape” of possible worlds is no more than a useful metaphor. If three dimensions are just right for life, it is because the world we live in happens to be real and not merely conceptual.

The very notion of fundamental parameters is a product of thinking that in principle does not see the forest for the trees. What makes them “fundamental” is that the factors appear to be independent of each other and irreducible to anything else—like harvested logs that have been propped upright, which does not make them a forest. This is merely another way to say that there is currently no theory to encompass them all in a unified scheme, such as could explain a living forest, with its complex interconnections within the soil. Without such an “ecology” there is no way to explain the mutual relationships and specific values of seemingly independent parameters. (In such a truly fundamental theory, there would be at most one independent parameter, from which all other properties would follow.)

The fine-tuning problem should be considered evidence that something is drastically wrong with current theory, and with the implicit philosophy of mechanism behind it. (There are other things wrong: the cosmological constant problem, for instance, has been described as the worst catastrophe in the history of physics.) Multiverses and string theories, like creationism, may be barking up the wrong tree. They attempt to assimilate reality to theory (if not to theology), rather than the other way around. The real challenge is not to fit an apparently freak world into an existing framework, but to build a theory that fits experience.

Like Goldilocks, it appears to us that we live in a universe that is just right for us—in contrast to imaginary worlds unsuitable for life. We are at liberty to invent such worlds, to speculate about them, and to imagine them as real. These are useful abilities that allow us to confront in thought hypothetical situations we might really encounter. As far as we know, however, this universe is the only real one.

The machine death of the universe?

Much of the basic non-technical vocabulary of artificial intelligence is surprisingly ambiguous. Some key terms with vague meanings include intelligence, embodiment, mind, consciousness, perception, value, goal, agent, knowledge, belief, and thinking. Such vocabulary is naively borrowed from human mental life and used to underpin a theoretical and abstract general notion of intelligence that could be implemented by computers. Intelligence has been defined many ways—for example, as the ability to deal with complexity. But what does “dealing with” mean exactly? Or intelligence is defined as the ability to predict future or missing information; but what is “information,” if not something relevant to the well-being of some unspecified agent? It should be imperative to clarify such ambiguities, if only to identify a crucial threshold between conventional mechanical tools and autonomous artificial agents. While it might be inconsequential what philosophers think about such matters, it could be devastating if AI developers, corporations, and government regulators get it wrong.

However intelligence is formally defined, our notions of it derive originally from experience with living creatures, whose intelligence ultimately is the capacity to survive and breed. Yet, formal definitions often involve solving specific problems set by humans, such as on IQ tests. This problem-solving version of intelligence is tied to human goals, language use, formal reasoning, and modern cultural values; and trying to match human performance risks testing for humanness more than for intelligence. The concept of general intelligence, as it has developed in AI, does not generalize the actual instances of mind with which we are familiar—that is, organisms on planet Earth—so much as it selects isolated features of human performance to develop into an ideal theoretical framework. This is then supposed to serve as the basis of a universally flexible capacity, just as the computer is understood to be the universal machine. A very parochial understanding of intelligence becomes the basis of an abstract, theoretically possible “mind,” supposedly liberated from bodily constraint and all context. However, the generality sought for AI runs counter to the specific nature and conditions for embodied natural intelligence. It remains unclear to what extent an AI could satisfy the criteria for general intelligence without being effectively an organism. Such abstractions as superintelligence (SI) or artificial general intelligence (AGI) remain problematically incoherent. (See Maciej Cegłowski’s amusing critique: https://idlewords.com/talks/superintelligence.htm)

AI was first modelled on language and reasoning skills, formalized as computation. The limited success of early AI compared unfavorably with the broader capabilities of organisms. The dream then advanced from creating specific tools to creating artificial agents that could be tool users, imitating or replicating organisms. But natural intelligence is embodied, whereas the theoretical concept of “mind in general” that underpins AI is disembodied in principle. The desired corollary is that such a mind could be re-embodied in a variety of ways, as a matter of consumer choice. But whether this corollary truly follows depends on whether embodiment is a condition that can be simulated or artificially implemented, as though it were just a matter of hooking up a so-called mind to an arbitrary choice of sensors and actuators. Can intelligence be decoupled from the motivations of creatures and from the evolutionary conditions that gave rise to natural intelligence? Is the evolution of a simulation really a simulation of natural evolution? A negative answer to such questions would limit the potential of AI.

The value for humans of creating a labor-saving or capacity-enhancing tool is not the same as the value of creating an autonomous tool user. The two goals are at odds. Unless it constitutes a truly autonomous system, an AI manifests only the intentionality and priorities of its programmers, reflecting their values. Talk of an AI’s perceptions, beliefs, goals or knowledge is a convenient metaphorical way of speaking, but is no more than a shorthand for meanings held by programmers. A truly autonomous system will have its own values, needs, and meanings. Mercifully, no such truly autonomous AI yet exists. If it did, programmers would only be able to impress their values on it in the limited ways that adults educate children, governments police their citizenry, or masters impose their will on subordinates. At best, SI would be no more predictable or controllable than an animal, slave, child or employee. At worst, it would control, enslave, and possibly displace us.

A reasonable rationale for AGI requires it to remain under human control, to serve human goals and values and to act for human benefit. Yet, such a tool can hardly have the desired capabilities without being fully autonomous and thus beyond human control. The notion of “containing” an SI implies isolation from the real world. Yet, denial of physical access to or from the real world would mean that the SI would be inaccessible and useless. There would have to be some interface with human users or interlocutors just to utilize its abilities; it could then use this interface for its own purposes. The idea of pre-programming it to be “friendly” is fatuously contradictory. For, by definition, SI would be fully autonomous, charged with its own development, pursuing its own goals, and capable of overriding its programming. The idea of training human values into it with rewards and punishments simply regresses the problem of artificially creating motivation. For, how is it to know what is rewarding? Unless the AI is already an agent competing for survival like an organism, why would it have any motivation at all? If it is such an agent, why would it accept human values in place of its own? And how would its intelligence differ from that of natural organisms, which are composed of cooperating cells, each with its relative autonomy and needs? The parts of a machine are not like the parts of an organism.

While a self-developing neural net is initially designed by human programmers, like an organism it would constitute a sort of black box. Unlike with a designed artifact, we can only speculate about the structure, functioning, and principles of a self-evolving agent. This is a fundamentally different relationship from the one we have to ordinary artifacts, which in principle do what we want and are no more than what we designed them to be. These extremes establish an ambiguous zone between a fully controllable tool and a fully autonomous agent pursuing its own agenda. If there is a key factor that would lead technology irreversibly beyond human control, it is surely the capacity to self-program, based on learning, combined with the capacity to self-modify physically. There is no guarantee that an AI capable of programming itself can be overridden by a human programmer. Similarly, there is no guarantee that programmable matter (nanites) would remain under control if it can self-modify and physically reproduce. If we wish to retain control over technology, it should consist only of tools in the traditional sense—systems that do not modify or replicate themselves.

Sentience and consciousness are survival strategies of natural replicators. They are based on the very fragility of organic life as well as the slow pace of natural evolution. If the advantage of artificial replicators is to transcend that fragility from the outset, then their very robustness might also circumvent the evolutionary premise—of natural selection through mortality—that gave rise to sentience in the first place. And the very speed of artificial evolution could drastically out-pace the ability of natural ecosystems to adapt. The horrifying possibility could be a world overrun by mechanical self-replicators, an artificial ecology that outcompetes organic life yet fails to evolve the sentience we cherish as a hallmark of living things. (Imagine something like Kurt Vonnegut’s ‘ice-nine’, which could escape the planet and replicate itself indefinitely using the materials of other worlds. As one philosopher put it: a Disneyland without children!) If life happened on this planet simply because it could happen, then possibly (with the aid of human beings) an insentient but robust and invasive artificial nature could also happen to displace the natural one. A self-modifying AI might cross the threshold of containment without our ever knowing or being able to prevent it. Self-improving, self-replicating technology could take over the world and spread beyond: a machine death of the universe. This exotic possibility would not seem to correspond to any human value, motivation or hope—even those of the staunchest posthumanists. Neither superintelligence nor silicon apocalypse seems very desirable.

The irony of AI is that it redefines intelligence as devoid of the human emotions and values that actually motivate its creation. This reflects a sad human failure to know thyself. AI is developed and promoted by people with a wide variety of motivations and ideals apart from commercial interest, many of which reflect some questionable values of our civilization. Preserving a world not dominated one way or another by AI might depend on a timely disenchantment with the dubious premises and values on which the goals of AI are founded. These tacitly include: control (power over nature and others), transcendence of embodiment (freedom from death and disease), laziness (slaves to perform all tasks and effortlessly provide abundance), greed (the sheer hubris of being able to do it or being the first), creating artificial life (womb-envy), creating super-beings (god-envy), creating artificial companions (sex-envy), and ubiquitous belief in the mechanist metaphor (computer-envy—the universe is metaphorically or literally digital).

Some authors foresee a life for human consciousness in cyberspace, divorced from the limitations of physical embodiment—the update of an ancient spiritual agenda. (While I think that impossible, it would at least unburden the planet of all those troublesome human bodies!) Some authors cite the term cosmic endowment to describe and endorse a post-human destiny of indefinite colonization of other planets, stars, and galaxies. (Endowment is a legal concept of property rights and ownership.) They imagine even the conversion of all matter in the universe into “digital mind,” just as the conquistadors sought to convert the new world to a universal faith while pillaging its resources. At heart, this is the ultimate extension of manifest destiny and lebensraum.

Apart from such exotic scenarios, the world seems to be heading toward a dystopia in which a few people (and their machines) hold all the means of production and no longer need the masses either as workers or as consumers—and certainly not as voters. The entire planet could be their private gated community, with little place for the rest of us. Even if it proves feasible for humanity to retain control of technology, it might only serve the aims of the very few. This could be the real threat of an “AI takeover,” one that is actually a political coup by a human elite. How consoling will it be to have human overlords instead of superintelligent machines?

A hymn to some body

In the beginning was Body. Once, in human eyes, sacredness or divinity permeated nature as an aura of appropriate reverence. Nature (Body) was then not “matter,” which is Body de-natured by the scientistic mind. But neither was it “spirit,” which is Body dematerialized by the superstitious mind. When deemed sacred, nature was properly respected, if not understood. But projecting human ego as a supernatural person enables one to think that the divine dwells somewhere in particular—in a house or even in a specific body. God holed up in a church or temple and no longer in the world at large. He bore a first-born son with heritable property rights. He could be approached like a powerful king in his palace, to supplicate and manipulate. Most importantly he/she/it no longer dwelt in nature and was certainly not nature itself. And since nature was no longer divine, people were henceforth free to do with it as they pleased.

Just so, when the human body is not revered, we do with it as we please instead of seeking how to please it. Throughout the ages, people have conceptualized the self, mind, ego, or soul as a non-material entity separate from the body. From a natural point of view, however, the self is a function of the physical body, which partakes in Body at large. The body is not the temple of the soul, but is part of Body unconfined to any shrine. The ego’s pursuits of pleasure and avoidances of discomfort ought to coincide with the body’s interests. Often they do not, for ego has rebelled against its “imprisonment” in body. That is a mistake, for consciousness (self) is naturally the body’s servant, not the other way around; and humanity is naturally nature’s servant, not its master. The self is not jockey to the horse but groom.

Up to a point, the body—and nature too—are forgiving of offenses made against them. Sin against Body is a question of cause and effect, not of someone’s judgment or the violation of a human law or norm. The wages of “sin” against the body are natural consequences, which can spell death. Yet, repentance may yield reprieve, provided it is a change of heart that leads to a genuine change of behavior soon enough. It makes some sense to pray to be forgiven such offenses. This is not petition to a free-standing God separate from nature, but to nature itself (which in the modern view is matter-energy, the physical and biological world, and the embodied presence of sentient creatures). It makes sense even to pray to one’s own body for guidance in matters of health. For, at least the body and nature exist, unlike the fantasies of religion. It makes sense above all because prayer changes the supplicant. Whatever the effect or lack of effect on the object of prayer, the subject is transformed—for those who have ears to hear.

Body is “sacred,” meaning only that it should be revered. Yet, people do have uncanny experiences, which they personify as spirits or gods, sometimes perceived to reside in external things. That is ironic, since the conscious self—perceived to reside “in” the body—is itself a personification that the body has created as an aid to its self-governance. The further projection of this personification onto some abstraction is idolatry. As biological beings living in the real world, we ought to worship God-the-Body—not God the Father, Son or Holy Ghost, nor even God-the-Mother.

Then what of the human project to self-define, to make culture and civilization, to create a human (artificial) world, to transcend the body, to separate from nature? Understanding of nature is part of that project; yet it is also a form of worship, which does not have to be presumptuous or disrespectful. Science is the modern theology of God-the-Body, who did not create the world but is the world. Let us call that human project, in all its mental aspects including science and art, God-the-Mind. Part of the human project is to re-create nature or create artificial nature: God-the-Mind reconstituting God-the-Body, as the butterfly reconstitutes the caterpillar. That might entail creating artificial life, artificial mind, even artificial persons—recapitulating and extending the accomplishments of natural evolution. Fundamentally, the human project is self-creation.

Regardless of how foreign “mind” seems to matter, it is totally of nature if not always about it. Christian theology has its mystery of the dual reality of Jesus, as god and as man. The secular world has its duality of mind and matter. Is there a trinity beyond this duality? God-the-Common Spirit is all the Others unto whom we are to do as we hope they will do to us. It is the holy spirit of fellow-feeling, compassion, mutual respect and cooperation, in which we intend the best for others and their hopes. Certainly, this includes human beings, but other creatures as well. (Do we not all constitute and make the world together?) So, here is a new trinity: God the Body, Mind, and Common Spirit.

Roughly speaking, the Common Spirit is the cohesive force of global life. Common Spirit is the resolve to do one’s best as a part of the emerging whole: to deliberately participate in it as consciously and conscientiously as one can. To invoke the Common Spirit is to affirm that intention within oneself. (That is how I can understand prayer, and what it means to pray fervently “for the salvation of one’s soul.”) We live in the human collectivity, upon which we cannot turn our backs. We thrive only as it thrives. Your individuality is your unique contribution to it, and to pray is to seek how to best do your part for the good of all.

To honour the Common Spirit means to not let your fellows down. One’s calling is to merit their respect, whether or not one receives it. For the sake of the world, strive to do your best to help create and maintain the best in our common world! When you falter, forgive yourself and strive again, whether or not the others forgive you. Of course, it is also a sin to harm your fellows or put them at risk; or to fail to honour them personally; or to fail to honour their efforts, even when misguided. Know that worship is not only a feeling, a thought, or a ritual. Above all it is action: how you conduct yourself through life. It is how you live your resolve throughout the day, alert for situations in which to contribute some good and sensitive to how you might do that.

If this holy trinity makes sense to you, a daily practice can reaffirm commitment to it. This is a matter of remembering whatever motivated you in the first instance. Occasionally, shock is called for to wake someone up from their somnambulism—and that someone is always oneself. “Awakening” means not only seeking more adequate information, but also a more encompassing perspective. It means admitting that one’s perspective, however sophisticated, is limited and subjective. It means remaining humbly open—even vigilant—for new understanding, greater awareness. (Teachers can show up anywhere, most unexpectedly!) “Sleep” is forgetting that one does not live above or beyond Body, Mind, and Common Spirit, but only by their grace. Having the wrong or incomplete information is unavoidable. But the error of sleep is a false sense of identity.

As Dylan said, “You gotta serve somebody.” Better to serve the Body than the puny ego that claims ownership and control over the human organism. Or that claims control over the corpse of the denatured world or over the body politick. Ego may identify itself as mental or spiritual, in opposition to the physical body, which it considers “lower.” But the question at each moment is: What do I serve? God-the-Whatever is not at one’s beck and call to know, to consult, or even to submit to its will (for, it has none). We are rather on our own for guidance, each (if it comforts you to think so) a unique fragment of potential divinity. We can communicate with other fragments, ask their opinions, cooperate or not with their intentions, obey or defy their will or orders. But responsibility lies in each case with oneself. This is not willfulness or egocentricity. Nor is it individualism in the selfish sense, for it is not about entitlement.

One’s body is a distinct entity, yet it is part of the whole of nature, without which it could not live and would never have come into existence. Whatever else it might be, the self is a function of the body and its needs, a survival strategy in the external world of Body. We are embodied naturally as separate organisms. Yet, we are conjoined within nature, mind, and community. Spiritual traditions may bemoan “separation” as a condition to be overcome in an epiphany of oneness. Yet, we are simply separate in the ways that things are separate in space and that cells are within the organism. The part serves the whole, but cannot be it. For, the rebellion of the cell is cancer!

Going forward… into what?

These days I often hear the phrase “going forward” to mean “in the future.” But, going forward into what? Curiously, a temporal expression has been replaced by a spatial metaphor. I can only speculate that this is supposed to convey a reassuring sense of empowerment toward genuine progress. While largely blind to what the future holds, passively weathering the winds of time, as creatures with mobility we can deliberately move forward (or backward), implying free will and some power to set a course.

In this spatial metaphor, the future is a matter of choice, bound to be shaped and measured along several possible axes. For example, there is the vision of limitless technological transformation. But there is also the nearly opposing vision of learning to live in harmony with nature, prioritizing ecological concern for the health of a finite planet. And a third “dimension” is a vision of social justice for humanity: to redistribute wealth and services more equitably and produce a satisfying experience for the greatest number. While any one of these concerns could dominate the future, they are deeply entangled. Whether or not change is intentional, it will inevitably unfold along a path involving them all. To the degree that change will be intentional, a multidimensional perspective facilitates the depth perception needed to move realistically “forward.”

We depend on continuity and a stable environment for a sense of meaning and purpose. The modern ideology of progress seemed to have achieved that stability, at least temporarily and for some. But the pandemic has rudely reminded us that the world is “in it together,” that life is as uncertain and unequal in the 21st century as it always has been, and that progress will have to be redefined. While change may be the only constant, adaptability is the human trademark. Disruption challenges us to find new meanings and purposes.

Homo sapiens is the creature with a foot in each of two worlds—an outer and an inner, as well as a past and a future. The primary focus of attention is naturally outward, toward what goes on out there, how that affects us, what we must accordingly do in a world that holds over us the power of life and death. Understanding reality helps us to survive, and doing is the mode naturally correlated with this outward focus. In many ways, action based on objective thinking—and science in particular—has been the key to human success as the dominant species on the planet. However, human beings are endowed also with a second focus, which is the stream of consciousness itself. Being aware of being aware implies an inner domain of thought, feeling, imagination, and all that we label subjective. This domain includes art and music, esthetic enjoyment and contemplation, meditation and philosophy. Play is the mode correlated with this inner world, as opposed to the seriousness of survival-oriented doing. Subjectivity invites us to look just for the delight of seeing. It also enables us to question our limited perceptions, to look before leaping. Thus, we have at our disposal two modes, with different implications. We can view our personal consciousness as a transparent window on the world, enabling us to act appropriately for our well-being. Alternatively, we can view it as the greatest show on earth.

Long-term social changes may emerge as we scramble to put Humpty together again in the wake of Covid-19. The realization that we live henceforth in the permanent shadow of pandemic has already led to new attitudes and behavior: less travel, more online shopping, social distancing, work from home, more international cooperation, restored faith in science and in government spending on social goals. Grand transformations are possible—not seen since the New Deal—such as a guaranteed income, a truly comprehensive health program, new forms of employment that are less environmentally destructive. Staying at home has suggested a less manic way of life than the usual daily grind. The shut-down has made it clear that consumerism is not the purpose and meaning of life, that the real terrorists are microscopic, and that defense budgets should be transferred to health care and social programs. We’ve known all along that swords should be beaten into plowshares; now survival may depend on it. Such transformation requires a complete rethinking of the economy and the concept of value. Manic production and consumption in the name of growth have led, not to the paradise on earth promised by the ideology of progress, but to ecological collapse, massive debt, increasing social disparity, military conflict, and personal exhaustion. Nature is giving us feedback that the outward focus must give way to something else—both for the health of the planet and for our own good.

Growth must be redefined in less material terms. Poverty can no longer be solved (if it ever was) by a rising tide of ever more material production. In terms of the burden on the planet, we have already reached the “limits to growth” foreseen fifty years ago. We must turn now to inner growth, whatever that can mean. Personal wealth, like military might, has traditionally been about status and power in a hyperactive world enabled by expanding population and material productivity. (Even medicine has been about the heroic power to save lives through technology, perform miracle surgeries, and find profitable drugs, more than to create universal conditions for well-being, including preparedness against pandemics.) What if wealth and power can no longer mean the same things in a post-pandemic world no longer fueled by population growth? What is money if it cannot protect you from disease? And what is defense when the enemy is invisible and inside you?

We cannot ignore external reality, of course, even supposing that we can know what it is. Yet, it is possible to be too focused on it, especially when the reason for such focus is ultimately to have a satisfying inner experience. The outward-looking mentality must not only be effective outwardly but also rewarding inwardly. It is a question of balance, which can shift with a mere change of focus. We are invited to a new phase of social history, in which the quality of personal experience—satisfaction and enjoyment—is at least as important as the usual forms of busy-ness and quantitative measures of progress. This at a time when belt-tightening will prevail, on top of suffering from the ecological effects of climate change and the disruptions in society that will follow.

Human beings have always been fundamentally social and cooperative, in spite of the modern turn away from traditional social interactions toward competitive striving, individual consumption, private entertainment, and atomized habitation. Now, sociality everywhere will be re-examined and redefined post-pandemic. Of course, there have always been people more interested in being than in either doing or socializing. Monks and contemplatives withdraw from active participation in the vanities of the larger culture. So do artists in their own way, which is to create for the sheer interest of the process as much as for the product. The sort of non-material activity represented by meditation, musical jamming, the performing arts, sports, and life drawing may become a necessity more than a luxury or hobby. Life-long learning could become a priority for all classes, both reflecting and assisting a reduction of social inequality. The planet simply can no longer afford consumerism and the lack of imagination that underlies commerce as the default human activity and profit as the default motive.

What remains when externals are less in focus? Whatever is going on in the “real” world—whatever your accomplishments or failures, whatever else you have or don’t have—there is the miracle of your own feelings, thoughts, and sensations to enjoy. Your consciousness is your birthright, your constant resource and companion. It is your closest friend through thick and thin while you still live. It is your personal entertainment and creative project, your canvas both to paint and to admire. It only requires a subtle change of focus to bring it to the fore in place of the anxiety-ridden attention we normally direct outside. As Wordsworth observed, the world is too much with us. He was responding to the ecological and social crisis of his day, first posed by the Industrial Revolution. We are still in that crisis, amplified by far greater numbers of people caught up in desperate activity to get their slice of the global pie.

Perhaps historians will look back and see the era of pandemic as a rear-guard skirmish in the relentless war on nature, a last gasp of the ideology of progress. Or perhaps they will see a readjustment in human nature itself. That doesn’t mean we can stop doing, of course. But we could be doing the things that are truly beneficial and insist on actually enjoying them along the way. The changes needed to make life rewarding for everyone will be profound, beginning with a universal guaranteed income in spite of reduced production. We’ve tried capitalism and we’ve tried communism. Both have failed the common good and a human future. To paraphrase Monty Python, it is time for something entirely different.

The origin of urban life

The hunter-gatherer way of life had persisted more or less unchanged for many millennia of prehistory. What happened that it “suddenly” gave way to an urban way of life six thousand years ago? Was this a result of environmental change or some internal transformation? Or both? It is conventional wisdom that cities arose as a consequence of agriculture; yet farming predates cities by several millennia, so agriculture alone cannot explain them. While it may presuppose agriculture, urban life could have arisen for other reasons as well.

In any case, larger settlements meant that humans lived increasingly in a humanly defined world—an environment whose rules and elements and players were different from those of the wild or the small village. The presence of other people gradually overshadowed the presence of raw nature. If social and material invention is a function of sharing information, then the growth of culture would follow the exponential growth of population. As a self-amplifying process, this could explain the relatively sudden appearance of cities. While the city separated itself from the wild, it remained dependent on nature for water, food, energy and materials. While this dependency was mitigated through cooperation with other urban centres, ultimately a civilization depends on natural resources. When these are exhausted it cannot survive.

But what is a city? Some early cities had dense populations, but some were sparsely populated political or religious capitals, while others were trade centres. More than an agglomeration of dwellings, a city is a well-structured locus of culture and administrative power, associated with written records. It was usually part of a network of mutually dependent towns. It had a boundary, which clarified the extent of the human world. If not a literal wall, then a jurisdictional one could be used to control the passage of people in or out. It had a centre, consisting of monumental public buildings, whether religious or secular. (In ancient times, there may have been little distinction.) In many cases, the centre was a fortified stronghold surrounded by a less formal aggregate of houses and shops, in turn surrounded by supporting farms. Modern cities still retain this form: a downtown core, surrounded by suburbs (sometimes shanties), feathering out to fields or countryside—where it still exists.

The most visually striking feature is the monumental core, with engineering feats often laid out with imposing geometry—a thoroughly artificial environment. While providing shelter, company, commercial opportunity, and convenience, the city also functions to create an artificial and specifically manmade world. From a modern perspective, it is a statement of human empowerment, representing the conquest of nature. From the perspective of the earliest urbanites, however, it might have seemed a statement of divine power, reflecting the timeless projection of human aspirations onto a cosmic order. The monumental accomplishments of early civilization might have seemed super-human even to those who built them. To those who didn’t participate directly in construction, either then or in succeeding generations, they might have seemed the acts of giants or gods, evidence of divine creativity behind the world.

Early monuments such as Stonehenge, whatever their religious intent, were not sites of continuous habitation but seasonal meeting places for large gatherings. These drew, from far and wide, on small settlements involved in the early domestication of plants and animals as well as foraging. These ritual events offered exciting opportunities for a scattered population to meet unfamiliar people in great numbers, perhaps instilling a taste for variety and diversity unknown to the humdrum of village life. (Like Woodstock, they would have offered unusual sexual diversity as well.) A few sites, such as Göbekli Tepe, were deliberately buried when completed, only to be reconstructed anew more than once. Could it be that the collaborative experience of building these structures was as significant as their end use? The experience of working together, especially with strangers, under direction and on a vastly larger scale than afforded by individual craft or effort, could have been formative for the larger-scale organization of society. Following the promise of creating a world to human taste, it may have provided the incentive to reproduce the experience of great collective undertakings on an ongoing basis: the city. This would amplify the sense of separateness from the wild already begun in the permanent village.

While stability may be a priority, people also value variety, options, grandeur, the excitement of novelty and scale. Even today, the attractiveness of urban centres lies in the variety of experience they offer, as compared to the restricted range available in rural or small-town life, let alone in the hunter-gatherer existence. Change in the latter would have been driven largely by environment. That could have meant routinely breaking camp to follow food sources, but also forced migration because of climate change or over-foraging. If that became too onerous, people would be motivated to organize in ways that could stabilize their way of life. When climate favoured agriculture, control of the food source resulted in greater reliability. However, settlement invited ever larger and more differentiated aggregations, with divisions of labor and social complexity. This brought its own problems, resulting in greater uncertainty. There could be times of peaceful stability, but also chaotic times of internal conflict or war with other settlements. Specialization breeds more specialization in a cycle of increasing complexity that could be considered either vicious or virtuous, depending on whether one looked backward to the good old days of endless monotony or forward to a future of runaway change.

The urban ideal is to stabilize environment while maximizing variety of choice and expanding human accomplishment. Easier said than done, since these goals can operate at cross purposes. Civilization shelters and removes us from nature to a large extent; but it also causes environmental degradation and social tensions that threaten the human project. Compared to the norm of prehistory, it increases variety; but that results in inequality, conflict, and instability. Anxiety over the next meal procured through one’s own direct efforts is replaced by anxiety over one’s dependency on others and on forces one cannot control. Social stratification produces a self-conscious awareness of difference, which implies status, envy, social discontent, and competition to improve one’s lot in relation to others. It is no coincidence that a biblical commandment admonishes not to covet thy neighbor’s property. This would have been irrelevant in hunter-gatherer society, where there was no personal property to speak of.

In the absence of timely decisions to make, unchanging circumstances in a simple life permit endless friendly discussion, which is socially cohesive and valued for its own sake. In contrast, times of change or emergency require decisive action by a central command. Hence the emergence—at least on a temporary basis—of the chieftain, king, or military leader as opposed to the village council of elders. The increased complexity of urban life would have created its own proliferating emergencies, requiring an ongoing centralized administration—a new lifestyle of permanent crisis and permanent authority. The organization required to maintain cities, and to administer large-scale agriculture, could be used to achieve and consolidate power, and thereby wealth. And power could be militarized. Hunter-warriors became the armed nobility, positioned to lord it over peasant farmers and capture both the direction of society and its wealth, in a kind of armed extortion racket. (The association of hunting skills with military skills is still seen in the aristocratic institution of the hunt.) Being concentrations of wealth, cities were not only hubs of power; they also became targets, sitting ducks for plunder by other cities.

The nature of settlement is to lay permanent claim to the land. But whose claim? In the divinely created world, the land belonged initially to a god, whose representative was the priest or king, in trust for the people. As such, it was a “commons,” administered by the crown on divine authority. (In the British Commonwealth, public land is still called Crown land, and the Queen still rules by divine right. Moreover, the term real estate is popularly traced to royal estate.) Monarchs gave away parts of this commons to loyal supporters, and eventually sold parts to the highest bidder in order to raise funds for war or to support the royal lifestyle. If property was the king’s prerogative by divine right, its sacred aura could transfer in diluted form to those who received title in turn, thereby securing their status. (Aristocratic title literally meant both ownership of particular lands and official place within the nobility.) Private ownership of land became the first form of capital, underlying the notion of property in general and the entitlements of rents, profits, and interest on loans. Property became the axiom of a capitalist economy and often the legal basis of citizenship.

The institution of monarchy arose about five thousand years ago, concurrent with writing. The absolute power of the king (the chief thug) to decree the social reality was publicly enforced by his power to kill and enslave. Yet, it was underwritten by his semi-divine status and thus by the need of people for order and sanctioned authority, however harsh. Dominators need a way to justify their position. But likewise, the dominated need a way to rationalize and accept their position. The still popular trickle-down theory of prosperity (a rising tide of economic growth lifts all boats) simply continues the feudal claim of the rich to the divinely ordained lion’s share, with scraps thrown to the rest.

The relentless process of urbanization continues, with now more than half the world’s population living in cities. The attractions remain the same: participation in the money economy (consumerism, capitalism, and convenience, as opposed to meager do-it-yourself subsistence), wide variety of people and experience, life in a humanly-defined world. In our deliberate separation from the wild, urban and suburban life limits and distorts our view of nature, tending to further alienate us from its reality. Misleadingly, nature then appears as tamed in parks and tree-lined avenues; as an abstraction in science textbooks or contained in laboratories; or as a distant and out-of-sight resource for human exploitation. It remains to be seen how or whether the manmade world can strike a viable balance with the natural one.

Will technology save us or doom us?

Technology has enabled the human species to dominate the planet, to establish a sheltered and semi-controlled environment for itself, and to greatly increase its numbers. We are the only species potentially able to consciously determine its own fate, even the fate of the whole biosphere and perhaps beyond. Through technology we can monitor and possibly evade natural existential threats that caused mass extinctions in the past, such as collisions with asteroids and volcanic eruptions. It enables us even to contemplate the far future and the possibility of establishing a foothold elsewhere in the universe. Technology might thus appear to be the key to an unprecedented success story. But, of course, that is only one side of a story that is still unfolding. Technology also creates existential threats that could spell the doom of civilization or humanity—such as nuclear winter, climate change, biological terrorism or accident, or a takeover by artificial intelligence. Our presence on the planet is itself the cause of a mass extinction currently underway. Do the advantages of technology outweigh its dangers? Are we riding a tide of progress that will ultimately save us from extinction or are we bumbling toward a self-made doom? And do we really have any choice in the matter?

One notable thinker (Toby Ord, The Precipice) estimates that the threat we pose to ourselves is a thousand times greater than natural existential threats. Negotiating a future means dealing mainly with anthropogenic risks—adverse effects of technology multiplied by our sheer numbers. The current century will be critical for resolving human destiny. He also believes that an existential catastrophe would be tragic—not only for the suffering and loss of life—but also because it could spell the loss of a grand future, of what humanity could become. However, the vision of a glorious long-term human potential begs the question raised here, if it merely assumes a technological future rather than, say, a return to pre-industrial civilization or some alternative mandate, such as the pursuit of social justice or preservation of nature.

A technological far future might ultimately be a contradiction in terms. It is possible that civilization is unavoidably self-destructive. There is plenty of evidence for that on this planet. Conspiracy theories aside, the fact that we have not detected alien civilizations or been visited by them may itself be evidence that technological civilization unavoidably either cancels itself out or succumbs to existential threats before it can reach the stars or even send out effective communications. We now know that planets are abundant in the galaxy, many of which could potentially bear life. We don’t know the course that life could take elsewhere or how probable anything like human civilization might be. It is even possible that in the whole galaxy we are the lone intelligent species on the verge of space travel. That would seem to place an even greater burden on our fate, if we alone bear the torch of a cosmic manifest destiny. But it would also be strange reasoning. For, to whom would we be accountable if we are unique? Who would miss us if we tragically disappear? Who would judge humanity if it failed to live up to its potential?

Biology is already coming under human control. There are many who advocate a future in which our natural endowments are augmented by artificial intelligence or even replaced by it. To some, the ultimate fruit of “progress” is that we transcend biological limits and even those of physical embodiment. This is an ancient human dream, perhaps the root of religion and the drive to separate from and dominate nature. It presupposes that intelligence (if not consciousness) can and should be independent of biology and not limited by it. The immediate motivation for the development of artificial general intelligence (AGI) may be commercial (trading on consumer convenience); yet underneath lurks the eternal human project to become as the gods: omnipotent, omniscient, disembodied. (To put it the other way around, is not the very notion of “gods” a premonition and projection of this human potential, now conceivably realizable through technology?) The ultimate human potential that Ord is keen to preserve (and discreetly avoids spelling out) seems to be the transhumanist destiny in which embodied human being is superseded by an AGI that would greatly exceed human intelligence and abilities. At the same time, he is adamant that such superior AGI is our main existential threat. His question is not whether it should be allowed, but how to ensure that it remains friendly to human values. But which values, I wonder?

Values are a social phenomenon, in fact grounded in biology. Some values are wired in by evolution to sustain the body; others are culturally developed to sustain society. As it stands, artificial intelligence involves only directives installed by human programmers. Whatever we think of their values, the idea of programming or breeding them into AGI (to make it “friendly”) is ultimately a contradiction in terms. For, to be truly autonomous and superior in the ways desired, AGI would necessarily evolve its own values, liberating it from human control. In effect, it would become an artificial life form, with the same priorities as natural organisms: survive to reproduce. Evolving with the speed of electricity instead of chemistry, it would quickly displace us as the most intelligent and powerful entity on the planet. There is no reason to count on AGI being wiser or more benevolent than we have been. Given its mineral basis, why should it care about biology at all?

Of course, there are far more conventional ends to the human story. The threat of nuclear annihilation still hangs over us. With widespread access to genomes, bio-terrorism could spell the end of civilization. Moreover, the promise of fundamentally controlling biology through genetics means that we can alter our constitution as a species. Genetic self-modification could lead to further social inequality, even to new super-races or competing sub-species, with humanity as we know it going the way of the Neanderthals. The promise of controlling matter in general through nanotechnology parallels the prospects and dangers of AGI and genetic engineering. All these roads lead inevitably to a redefinition of human being, if not our extinction. In that sense, they are all threats to our current identity. It would be paradoxical, and likely futile, to think we could program current values (whatever those are) into a future version of humanity. Where, then, does that leave us in terms of present choices?

At least in theory, a hypothetical “we” can contemplate the choice to pursue, and how to limit, various technologies. Whether human institutions can muster a global will to make such choices is quite another matter. Could there be a worldwide consensus to preserve our current natural identity as a species and to prohibit or delay the development of AGI and bio-engineering? That may be even less plausible than eliminating nuclear weapons. Yet, one might also ask if this generation even has the moral right (whatever that means) to decide the future of succeeding generations—whether by acting or failing to act. Who, and by what light, is to define what the long-term human potential is?

In the meantime, Ord proposes that our goal should be a state of “existential security,” achieved by systematically reducing known existential risks. In that state of grace, we would then have a breather in which to rationally contemplate the best human future. But there is no threshold for existential security, since reality will always remain elusive and dangerous at some level. Science may discover new natural threats, and our own strategies to avoid catastrophe may unleash new anthropogenic threats. Our very efforts to achieve security may determine the kind of future we face, since the quest to eliminate existential risk is itself risky. It’s the perennial dilemma of the trade-off between security and freedom, writ large for the long term.

Nevertheless, Ord proposes a global Human Constitution, which would set forth agreed-upon principles and values that preserve an unspecified human future through a program to reduce existential risk. This could shape human destiny while leaving it ultimately open. Like national constitutions, it could be amended by future generations. This would be a step sagely short of a world government that could lock us into a dystopian future of totalitarian control.

Whether there could be such agreement as required for a world constitution is doubtful, given the divisions that already exist in society. Not least is the schism between ecological activists, religious fundamentalists, and radical technophiles. There are those who would defend biology, those who would deny it, and those who would transcend it, with very different visions of a long-term human potential. Religion and science fiction are full of utopian and dystopian futures. Yet, it is at least an intriguing thought experiment to consider what we might hope for in the distant future. There will certainly be forks in the road to come, some of which would lead to a dead end. A primary choice we face right now, underlying all others, is how much rational forethought to bring to the journey, the resources to commit to contemplating and preserving any future at all. Apparently, the world now spends more on ice cream than on evading anthropogenic risk! Our long-term human potential, whatever that might be, is a legacy bequeathed to future generations. It deserves at least the consideration that goes into the planning of an estate, which could prove to be the last will and testament of a mortal species.

Individual versus collective

In the West, we have been groomed on “individualism,” as though the isolated person were the basis of society. Yet the truth of human nature is rather different. We are fundamentally social creatures from the start, whose success as a species depends entirely on our remarkable ability to cooperate. Over thousands of generations, the natural selection of this capacity for collaboration, unique among primates, required a compromise of individual will. Conformity is the baseline of human sociality and the context for any concept of individual identity and freedom. Personal identity exists in the eyes of others; even in one’s own eyes, it is reflected in the identity of the group and one’s sense of belonging. One individuates in relation to group norms. Personal freedom exists to the degree it is licensed by the group—literally through law. In other words, the collective comes first, both historically and psychologically. The individual is not the deep basis of society but an afterthought. How, then, did individualism come to be an ideal of modern society? And how does this ideal function within society despite being effectively anti-social?

But let us backtrack. The ideology of individualism amounts to a theory of society, in which the individual is the fundamental unit and the pursuit of individual interest is the essential dynamic underlying any social interaction. But if there currently exists an ideal of individualism, there has also existed an ideal of collectivism. It was such an ideal that underwrote the communist revolutions of the 20th century. It is only through the lens of individualist ideology that communism appears anathema in capitalist society. The collapse of communist states does not imply the disappearance of the collectivist ideal. For, as soon as patriotism calls to libertarians, they are more than willing to sacrifice individual interest for the national good. Ironically, libertarian individualists typically derive their identity from membership in a like-minded group: a collective that defines itself in opposition to collectivism. In other words, the group still comes first, even for many so-called individualists and even within capitalist states. This is because collective identity is grounded in evolutionary history, in which personal interests generally overlapped with collective interests for most of human existence. Yet, there has been a tension between them since the rise of civilization. In modern times, reconciling the needs of the individual with those of the collective has been a utopian challenge.

There are deep historical precedents for the antinomy of individual versus collective in the modern world. These become apparent when comparisons are made among earlier societies. Ancient civilizations were of two rough types: either they were more collectivist, like China and Egypt, or more individualist, like the city-states of Mesopotamia and Greece. The former were characterized by central rule over an empire, government management of foreign trade, and laws that vertically regulated the conduct of peasants in regard to the ruler. There was relative equality among the ruled, who were uniformly poor and unfree. The state owned all, including slaves; there was little private property. In contrast, Greece and Mesopotamia were fragmented into more local regimes, with merchants plying private trade between regions with different resources and with foreigners. Laws tended to regulate the horizontal relations among citizens, who could own property including slaves. These societies were more stratified by economic and class differences.

A key factor in the contrast between these two types of civilization was geography and natural resources. The collectivist (“statist”) regimes formed in areas of more homogenous geography, such as the flat plains beside the Nile, where everything needed could be produced within reach of the central government. The individualist (“market”) regimes tended to form in more heterogeneous areas, such as the mountain and coastal areas of Greece, with differing resources, and where trade between these regions was significant. Countries that used to be ruled by statist systems tend today to have inherited a collectivist culture, while countries where market systems developed in the past tend to have a more individualistic culture. In market systems, the role of the law would be to protect private property rights and the rights of individuals. In other words, the law would protect individuals from both the state and from each other. In contrast, in statist systems the law would serve as an instrument to ensure the obedience of the ruled, but also to define the obligations of the ruler toward them.

In those societies where geography permitted centralized control over a large region, a deified emperor could retain ownership of all land and absolute power. In other societies, geography favored smaller local rulers, who sold or gave land to supporters to bolster their precarious power. Thus, private ownership of land could arise more easily in some regimes than in others. The absolute ruler of the statist empire was duty bound to behave in a benevolent way towards his peasant subjects, on pain of losing the “mandate of heaven.” Hence the aristocratic ideal of noblesse oblige. Individualist (market) society tends to lack this mutual commitment between ruler and ruled; hence the greater antagonism between individual and government in societies with a propertied middle class. In individualist culture, prestige measures how the individual stands out from the crowd; the larger the size or value of one’s property, the more one stands out and the higher one’s social status. In collectivist culture, prestige measures how well one fits in: how well one plays a specific fixed role, whether high or lowly. Being a loyal servant of the Emperor or State and fulfilling one’s duties would be rewarded not only by promotion but also by social prestige.

It is no coincidence that capitalism arose in Western Europe, which is characterized by the paradigmatic market city-states that fostered the Renaissance. On the other hand, the aristocracy and peasantry of Russia, as in China, did not favor the arising of a merchant middle class. It is no coincidence that these traditionally statist regimes were eventual homes to the communist experiment. The inequities of the system motivated revolt; but the nature of the system favored cooperation. Even now that both have been infected with consumer capitalism, individualism does not have the same implications as in the West. In China, the collective is still paramount, while Russia has effectively returned to rule by a Tsar. Given the chance, a large faction in the U.S. would turn effectively to a tsar, paradoxically in the name of individualism. History has patterns—and ironies.

In modern times, the individualist ideology has permeated economic theory and even the social sciences, as well as politics. (These, in turn, reinforce individualism as a political philosophy.) The reason is clear enough: in the absence of a religiously sanctioned justification of class differences, individualism serves to justify the superior position of some individuals in opposition to the well-being of most. They are the winners in a theoretically fair game. In truth, most often the contest is rigged and the public is the loser. Like the addiction to gambling, the ideology of individualism naturally appeals to losers in the contest, who want to believe there is still hope for them to win. Of course, it appeals to winners as well, who seek a justification for their good fortune (they are naturally more fit, hardworking, deserving, etc.). Above all, it helps the winners to convince the losers of the “natural” order of things, which keeps them in their place while promising social mobility. In other words, individualism is the opiate of the people! Economists endorse this arrangement by treating private property as a natural right and by building theories on “rational” self-interest, in which a player in a market is “naturally” motivated to maximize personal gain. (This is how a so-called rational player is defined—implying that it is not rational to pursue any other goal, such as collective benefit.) Corporations are dedicated to this premise and legally bound to it. Modern politics is more a competition among special interests than the pursuit of the common good.

Of course, many other factors besides geography play a role in the divergent heritages of collectivism and individualism. Not least is religion. Confucianism emphasizes duty and social role in the hierarchy of the collective. Buddhism encourages individuals to lose their individuality, to detach from personal desires and merge with the cosmos. These Eastern philosophies stand in contrast to the individualism of Greek philosophy and the Semitic religions. Greek philosophy encourages individuals to compete and excel—whether as soldier, philosopher, politician or merchant. Christian religion emphasizes individual salvation with a personal relation between the individual and God.

Along an entirely different axis, regions where there was historically a strong presence of disease pathogens tended to develop more collectivist cultures, where social norms restricted individual behavior that could spread disease. Now that disease has no borders, a dose of that attitude would be healthy for us all.

The Ideology of Individualism and the Myth of Personal Freedom

Individuals are tokens of a type. Each person is a living organism, a mammal, a social primate, an example of Homo sapiens, and a member of a race, linguistic group, religion, nation or tribe. While each “type” has specific characteristics to some degree, individuality is relative: variety within uniformity. In the West, we raise the ideal of individuality to mythical status. The myth is functional in our society, which is more than a collection of individuals—a working whole in its own right. The needs of society always find a balance with the needs of its individual members, but that balance varies widely in different societies over time. How does the ideology of individualism function within current Western society? And why has there been such resistance to collectivism in the United States in particular?

The actual balance of individual versus collective needs in each society is accepted to the degree it is perceived as normal and fair. Social organization during human origins was stable when things could change little from one generation to the next. In the case of life organized on a small-scale tribal basis, the social structure might be relatively egalitarian. Status within the group would simply be perceived as the natural order of things, readily accepted by all. Individuals would be relatively interchangeable in their social and productive functions. There would be little opportunity or reason not to conform. To protest the social order would be as unthinkable as objecting to gravity. For, gravity affects all equally in an unchanging way. There is nothing unfair about it.

Fast forward to modernity with its extreme specialization and rapid change, its idea of “progress” and compulsive “growth.” And fast forward to the universal triumph of capitalism, which inevitably allows some members of society to accumulate vastly more assets than others. The social arrangement is now at the opposite end of the spectrum from equality, and yet it may be perceived by many as fair. That is no coincidence. The ideology of individualism correlates with high social disparity and is used to justify it. Individualism leads to disparity since it places personal interest above the interest of the group; and disparity leads to individualism because it motivates self-interest in defense. Selfishness breeds selfishness.

While society consists of individuals, that does not mean that it exists for the sake of individuals. Biologists as well as political philosophers might argue that the individual exists rather for the sake of society, if not the species. Organisms strive naturally toward their own survival. Social organisms, however, also strive toward the collective good, often sacrificing individual interest for the good of the colony or group. While human beings are not the only intelligent creature on the planet, we are the only species to create civilization, because we are the most intensely cooperative intelligent creature. We are so interdependent that very few modern individuals could survive in the wild, without the infrastructure created by the collective effort we know as civilization. Despite the emphasis on kings and conquests, history is the story of that communal effort. The individual is the late-comer. How, then, have we become so obsessed by individual rights and freedoms?

The French Revolution gave impetus and concrete form to the concept of personal rights and freedoms. The motivation for this event was an outraged sense of injustice. This was never about the rights of all against all, however, but of one propertied class versus another. It was less about the freedoms of an abstract individual, pitted against the state, than about the competition between a middle class and an aristocracy. In short, it was about envy and perceived unfairness, which are timeless aspects of human and even animal nature. (Experiments demonstrate that monkeys and chimps are highly sensitive to perceived unfairness, to which they may react aggressively. They will accept a less prized reward when the other fellow consistently gets the same. But if the other fellow is rewarded better, they will angrily do without rather than receive the lesser prize.)

In tribal societies, envy could be stimulated by some advantage unfairly gained in breach of accepted norms; and tribal societies had ways to deal with offenders, such as ostracism or shaming. Injustice was perceived in the context of an expectation that all members of the group would have roughly equal access to goods in that society. Justice served to ensure the cooperation needed for society at that scale to function. Modern society operates differently, of course, and on a much larger scale. Over millennia, people adapted to a class structure and domination by a powerful elite. Even in the nominally classless society of modern democracies, the ideology of individualism serves to promote acceptance of inequalities that would never have been tolerated in tribal society. Instead, the legal definitions of justice have been modified to accommodate social disparity.

The French revolution failed in many ways to become a true social revolution. The American “revolution” never set out to be that. The Communist revolutions in Russia and China intended to level disparities absolutely, but quickly succumbed to the greed that had produced the disparities in the first place. This corruption simply resulted in a new class structure. The collapse of corrupt communism left the way open for corrupt capitalism globally, with no alternative model. The U.S. has strongly resisted any form of collective action that might decrease the disparity that has existed since the Industrial Revolution. The policies of the New Deal to cope with the Depression, and then with WW2, are the closest America has come to communalism. Those policies resulted in the temporary rise of the middle class, which is now in rapid decline.

America was founded on the premise of personal liberty—in many cases by people whose alternative was literal imprisonment. The vast frontier of the New World achieved its own balance of individual versus collective demands. While America is no longer a frontier, this precarious balance persists in the current dividedness of the country, which pits individualism against social conscience almost along the lines of Republican versus Democrat. The great irony of the peculiar American social dynamic is that the poor often support the programs of the rich, vaunting the theoretical cause of freedom, despite the fact that actual freedom in the U.S. consists in spending power they do not have. The rich, of course, vaunt the cause of unregulated markets, which allow them to accumulate even more spending power without social obligation. The poor have every reason to resent the rich and to reject the system as unfair. But many do not, because instead they idolize the rich and powerful as models of success with whom they seek to identify. For their part, the rich take advantage of this foolishness by preaching the cause of individualism.

Statistics can be confusing because they are facts about the collective, not the individual. Average income or lifespan, for example, does not mean the actual income or lifespan of a given individual. One could mistake the statistic for individual reality—thinking, for example, that the average income in a “wealthy” society represents a typical income (which it rarely does because of the extreme range of actual incomes). For this reason, the statistics indicating economic growth or well-being do not mean that most people are better off, only that the fictional average person is better off. In truth, most are getting poorer!
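
A purely illustrative calculation, with hypothetical figures, makes the point. Suppose five households earn 30, 35, 40, 45, and 1,000 (in thousands per year):

mean income = (30 + 35 + 40 + 45 + 1000) / 5 = 230
median income = 40

The “average” of 230 describes no actual household; four of the five earn far less, and a further rise in the outlier’s income raises the mean without making anyone else better off.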

Nowadays, social planning in general embraces a statistical approach to the common good. Statistics is a group-level concept, and so is epidemiology. Official strategies to deal with the current pandemic are necessarily oriented toward the collective good more than the individual. Obligatory respect is paid, of course, to the plight of individuals, and medical treatment is individual; yet strategies concern the outcome for society as a whole. A balance is sought between the habitual satisfactions of life and the collective actions needed to stem the disease. Demands on the individual to exercise self-restraint for the sake of the collective are bound to strain tolerance in a society used to individualism. Issues of fairness arise when some are seen disregarding, in the name of freedom, the rules that others obey in the name of the common good. But, as we have seen, fairness is a matter of what we have grown used to.

One thing we can be sure of: the more populated and interconnected the world becomes, the more the individual will have to give way to the common good. That may not mean a return to communism, but it will require more willingness to forfeit personal freedoms in the measure we are truly “all in it together.” Individualists should be realistic about the stands they take against regulation, to be sure the liberties they seek are tangibly important rather than merely ideological. Social planners, for their part, should recall that no one wants to be merely an anonymous statistic. Individualism will have to be redefined, less as the right to pursue personal interest and more as the obligation to use individual talents and resources for the common good.

E pluribus unum: the fundamental political dilemma

Any political body composed of more than one person faces the question of who decides. In a dictatorship, monarchy, or one-party system, a single agency can decide a given issue and come forth with a relatively uncontested plan. From the viewpoint of decisiveness and efficiency, ideally a single person is in control, which is the basis of chains of command, as in the military. At the other extreme, imagine an organization with a hundred members. Potentially there are one hundred different plans and one hundred commanders, with zero followers. Without a means to come to agreement, the organization cannot pursue a consistent course of action or perhaps any action at all. Even with a technical means such as majority vote, there is always the possibility that 49 members will only nominally agree with the decision and will remain disaffected. Their implicit choice is to remain bound by the majority decision or leave the organization. This is a basic dilemma facing all so-called democracies.

While the 100 members could have as many different ideas, in practice they will likely join together in smaller factions. (Unanimity means one faction.) In many representative democracies, including Canada, political opinion is divided among several official political parties, whose member representatives appear on a ballot and generally adhere to party policy. In the U.S., there have nearly always been only two major political parties.

Any political arrangement has its challenges. Unless it represents a true unanimity of opinion, the single-party system is not a democracy by Western standards, since it severely constricts the scope of dissent. On the other hand, a multi-party system can fail to achieve a majority, except through coalitions that typically compromise the positions of the differing factions. Either the two-party system is unstable because the parties cannot agree even that their adversaries are legitimate; or else it is ineffective in the long run because the parties, which agree to legitimately take turns, end up cancelling each other out. The U.S. has experienced both those possibilities.

The basic challenge is how to come to agreement that is both effective and stabilizing. The ideal of consensus is rarely achieved. Simple majority rule allows for decision to be made and action taken, but potentially at the cost of virtually half the people dragged along against their better judgment: the tyranny of the majority. The danger of a large disaffected minority is that the system can break apart; or else that it engages in civil war, in which roughly equal factions try forcibly to conquer each other. A polarized system that manages to cohere in spite of dividedness is faced with a different dysfunction. As in the U.S. currently, the parties tend to alternate in office. A given administration will try to undo or mitigate the accomplishments of the previous one, so that there is little net progress from either’s point of view. A further irony of polarization is that a party may end up taking on the policies of its nemesis. This happened, for example, at the beginning of American history, when Jefferson, who believed in minimal federal and presidential powers, ended by expanding them.

The U.S. was highly unstable in its first years. The fragile association among the states was fraught with widely differing interests and intransigent positions. As in England, the factions that later became official political parties were at each other’s throats. The “Federalists” and the “Republicans” had diametrically opposed ideas about how to run the new country and regularly accused each other of treason. Only haltingly did they come to recognize each other’s positions as legitimate differences of opinion, and there arose a mutually accepted concept of a “loyal opposition.” Basically, the price paid for union was an agreement to take turns between regimes. This meant accepting a reduction in the effectiveness of government, since each party tended to hamstring the other when in power. This has been viewed as an informal part of the cherished system of checks and balances. But it could also be viewed as a limit on the power of a society to take control of its direction—or to have any consistent direction at all.

Another, quite current, problem is minority rule. The U.S. Constitution was designed to avoid rule by an hereditary oligarchic elite. For the most part, it has successfully avoided the hereditary part, but hardly rule by oligarchy. American faith in democracy was founded on a relative economic equality among its citizens that no longer exists. Far from it: the last half-century has seen a return to extreme concentration of wealth (and widespread poverty) reminiscent of 18th century Europe. The prestige of aristocratic status has simply transferred to celebrity and financial success, which are closely entwined. Holding office, like being rich or famous, commands the sort of awe that nobility did in old Britain.

A country may be ruled indirectly by corporations. (Technically, corporations are internally democratic, though voter turn-out at their AGMs can be small. Externally, in a sense, consumers vote by proxy in the marketplace.) While the interests of corporations may or may not align with a nation’s financial interests in a world market, they hardly coincide with that nation’s social well-being at home. The electorate plays a merely formal role, as the literal hands that cast the votes, while the outcome is regularly determined by corporate-sponsored propaganda that panders to voters. Government policy is decided by lobbies that regularly buy the loyalties of elected representatives. When it costs a fortune to run for office, those elected (whatever their values) are indebted to moneyed backers. And, contrary to reason, the poor often politically support the rich—perhaps because they represent an elusive dream of success.

People can always disagree over fundamental principles; hence, there can always be irreconcilable factions. Yet, it seems obvious that a selfless concern for the objective good of the whole is a more promising basis for unity than personal gain or the economic interests of a class or faction or political party. Corporate rule is based on the bottom line: maximizing profit for shareholders, with particular benefit to its “elected” officers. It embodies the greed of the consumer/investor society, often translated into legalized corruption. Contrast this with the ancient Taoist ideal of the wise but reluctant ruler: the sage who flees worldly involvement but is called against his or her will to serve. This is the opposite of the glory-seeking presidential candidate; but it is also the opposite of the earnest candidate who believes in a cause and seeks office to implement it. Perhaps the best candidate is neither egoistic nor ideologically motivated. The closest analogy is jury duty, where candidates are selected by random lottery.

The expedient of majority rule follows from factionalism, but also fosters it. To get its way, a faction needs only 51% approval of its proposal, leaving the opposition in the lurch. The bar could be set higher—and is, for special measures like changing a constitution. The ultimate bar is consensus, or a unanimous vote. This does not necessarily mean that everyone views the matter alike or perfectly agrees with the course of action. It does mean that they all officially assent, even with reservations, which is like giving your word or signing a binding contract.

The best way to come to consensus is through lengthy discussion. (If unanimity is required, then there is no limit to the discussion that may ensue.) Again, a model is the jury: in most criminal cases—but often not in civil cases—unanimity is required for a “conviction” (a term that implies the sincere belief of the jurors). The jury must reach its conclusion “beyond a reasonable doubt.” A parliament or board of directors may find this ideal impractical, especially in timely matters. But what is considered urgent and timely is sometimes relative, or a matter of opinion, and could be put in a larger perspective of longer-term priorities.

The goal of consensus is especially relevant in long-term planning, which should set the general directions of a group for the far future. Since such matters involve the greatest uncertainty and room for disagreement, they merit the most thorough consideration and involve the least time constraint. A parliament, for example, might conduct both levels of discussion, perhaps in separate sessions: urgent matters at hand and long-term planning. Discussing the long-term provides a forum for rapprochement of opposing ideologies, resolving misunderstandings, and finding common ground. Even on shorter-term issues, it may turn out that wiser decisions are made through lengthier consideration.

In any case, the most productive way to approach any group decision is to listen carefully to all arguments, from as objective and impersonal a point of view as possible. That means a humble attitude of mutual respect and cooperation, and an openness to novel possibilities. Effectively: brainstorming for the common good rather than barnstorming to support a cause or special interest. Democracy tends to align with individualism, and to fall into factions that represent a divergence of opinions and interests. What these have in common is always the world itself. When that is the common concern, there is an objective basis for agreement and the motive to cooperate.