Category Archives: Dan

Left to our own devices

Social media are paradoxically anti-social. A common scene goes like this: a group of people around the dinner table ignore each other, having wandered off in separate worlds on their respective “devices.” Even before the pandemic, Facebook had displaced face-to-face contact. Ironically, modern connectivity is disconnecting us. How can this be?

While this phenomenon is an aspect of modern technology, perhaps it all began with theater and the willing suspension of disbelief. On the stage are flesh and blood actors, but the characters they play belong to another realm, another time and place, which draws attention away from the immediate surroundings. The story they tell is not the here-and-now in the theater hall with the others in the audience. Thus, we enjoy a dual focus. Theater morphed into cinema, which morphed into the pixelated screen. But the situation is the same: there is a choice between worlds. One attends either to the proximal here-and-now reality that includes the physical interface (iPhone, TV, computer) or to the distal reality transmitted through it. Even the telephone presents a similar alternative. Is the locus of reality my ear pressed against the receiver or is it the person speaking on the other end? In the case of the real-time conversation, one focuses on the person at the other end as though they were nearby, which is a sort of hallucination. In the case of the digital device, the distal focus involved in texting chronically engulfs the participants in a sort of online role-playing game that has little to do with the here and now.

Our own senses offer a parallel choice. For, our natural senses are the original “personal devices.” Like a laptop or mobile phone, they serve as an interface with the world. Yet, they can be used for other purposes. Our sensations monitor and present a reality beyond the nervous system; yet we have the ability and option to pay attention to the sensations themselves. We can focus on the interface or the reality it is supposed to transmit. We thus have the ability to entertain ourselves with our own sensory input. Even the visual field can be regarded either as a sort of proximal screen one observes or as a transparent window on the real world beyond. Hence, the fascination of painting as a pure visual experience, whether or not it represents a real scene. Hence, the even greater fascination of the cinema. I suggest it is this ambiguity, built into perception, that renders us especially susceptible to the addictiveness of screens.

Modern technology provides the current metaphor to understand how perception works. The brain computes for us a virtual reality to represent the external world. In that metaphor, the world appears to us as a “show” on the “screen” of consciousness. The participatory nature of perception underlined by the metaphor gives us some control and responsibility over what we perceive. This has a social advantage. Subjectivity is a capacity evolved to help us get along in the real world, by realizing we are active co-producers of this “show,” not just passive bystanders looking through a transparent window. Yet, by its very nature this realization permits us to focus on the screen rather than the supposed reality it transmits. This freedom, afforded by self-consciousness, is a defining human ability. But, like all freedoms, it can be misused. We turn away from reality at our own risk.

There is a difference between the senses as information channels and digital devices as information channels. The senses are naturally oriented outward. We survive by paying attention to the external world, and natural selection guarantees that only those organisms survive that get reliable information about their environment. There is little guarantee that electronic channels give us reliable information, however. They can do that, of course, but it depends on how they are used and who actually controls them. One can use a cell phone to contact someone for a specific purpose, just as one can use a computer to seek important information. But a mobile “phone” is not just a cordless telephone whose advantage is convenience. It functions more as pocket entertainment. The computer screen, too, is as likely to be used for video games or other entertainment that has nothing to do with reality. Even “googling” stuff online is sometimes no more than a trivial entertainment.

Worse, digital devices are not part of our natural equipment, and thus not under our control in the way the parts of one’s body are. Back in the 17th century, Descartes was perhaps the first to recognize this dilemma. He realized that the information flowing into the brain could be faked—even if it came from the natural senses, if someone malevolent were in a position to tamper with your nervous system. If your sensory input could be manipulated, then so could your experience and behavior. He reassured himself that God (today we might say nature) would not permit such deception by the natural senses. But there is no such guarantee against deception by those who control the information that comes to us through our artificial senses—our external devices.

In the ancestral environment of the wild, being distracted by your experience as a form of self-entertainment would have been frivolous and lethal, if at that moment you needed to pay close attention to the predator stalking you or the prey you were stalking. (Daydreaming, you could miss your meal or become the meal!) People then had designated times and venues for enjoying their imagination and subjectivity—such as story-telling around the campfire. In the modern age we have blurred those boundaries. We now carry an instant “campfire” around in our pockets. While mobile phones would have been highly useful for coordinating the hunt, as currently used they would have dangerously disrupted it. They could be used by a malicious tribe to steer you away from your quarry or over a cliff.

Of course, we no longer live in the wild but in man-made environments, where traditional threats have mostly been eliminated (no saber-tooth tigers roaming the streets). We can now enjoy our entertainments at the push of a button. One can further argue that the human world has always been primarily social and that social media are merely the modern extension of an ancient connectivity. However, external threats have hardly ceased. (If they had, there would be no basis for interest in the “news”.) The modern world may be an artificial environment, a sort of virtual reality in which we wish to wander freely; but it is still immersed in the natural order, which still holds over us the power of life and death. It may seem to us, at leisure in the well-off parts of the world, that no distinction is really needed anymore between the virtual and the real. However, this will never be the case. Even if hordes of transhumanists or gamers migrate to live in cyber-space, ignoring political realities, the computers and energy required to maintain the illusion will continue to exist in the real world. Someone will have to stay behind to mind the store, and that someone will have enormous power. (See the TV series Upload, or for that matter The Matrix films.)

The new virtual universe no doubt has its own rules, but we don’t yet know what they are. We knew what a stalking tiger looked like. We still know what a rapidly approaching bus looks like. But we are hard pressed to recognize the modern predators seeking our blood and the man-made entities that can run us down if we don’t pay attention. In the Information Age, ironically, we no longer know the difference between what informs us and what deforms us. Instant connectivity and fingertip information have rendered obsolete the traditional institutions for vetting information, such as librarians, peer review, and trusted news reporting. Everything has become a subjective blur, a free-for-all of competing opinions, suspected propaganda. Yet, because we are genetically conditioned to deal with reality, mere opinion must present itself as fact to get our attention and acceptance. This provides a perfect opportunity for those who control information channels to manage what we think.

We are challenged to know what to believe. The downside of the glut of information is the task of sorting it. It is for the convenience of avoiding that task that we rely on trusted sources to do it on our behalf. We’re used to such conveniences in the consumer culture, where information has been appealingly packaged as another consumer product, just as we’ve come to trust what is on the supermarket shelf, in part seduced by its glitzy over-packaging. When the trust is no longer there, however, the burden falls back on the consumer to read the fine print on the labels, so to speak. (More extremely, one may learn to grow one’s own food. But, then too, weeding is necessary!) In a divided world, with no rules of engagement, the challenge of sifting information is so daunting and tedious that it is tempting to throw up one’s hands in despair and regard the clash of viewpoints as ultimately no more than another harmless entertainment. That would be throwing the baby out with the bath water, however, since presumably there is a reality, a truth or fact of the matter that can affect us and which we can affect.

It is convenient, but hardly reasonable, to believe a claim because one has decided to trust its source. By the same token, it is convenient, but not reasonable, to automatically dismiss the claims of suspicious sources because other parties misuse the information. (These fallacies are known in logic as the argumentum ad hominem and its converse, the appeal to authority.) The reasonable approach is to evaluate all claims on their own merit. But that means a lot of time-consuming work cross-checking apparent facts and evaluating arguments. It means taking responsibility for deciding what to believe—or, alternatively, to admit that one doesn’t know yet. It is far easier to simply take a side that has some emotional appeal in a heated controversy, and be done with the rigors of thinking. It’s easier to be a believer than a doubter, but easier still to sit back and be entertained.

 

La Belle Planète

The discovery that planets are actually typical in the universe is one of the great achievements of modern astronomy. From an understanding of how solar systems form, it follows that in our galaxy there could be on average at least one planet for each of more than a hundred billion stars. Most of these would not resemble the Earth and could not foster the development of life. But with such staggering numbers, the odds are that life in some form should actually be abundant in the galaxy. Somewhere there must be civilizations capable of space travel or sending trans-galactic messages. This raises the question of why ours is, so far, the only planet we know of that supports life. Unless you are one of those folks who believe the aliens are already here, walking disguised among us, the question then also arises, where are they? If aliens are so probable, why have they not made their presence known to us? Named after Enrico Fermi, the physicist who famously posed it in 1950, this question is known as the Fermi paradox and has stimulated many creative answers. I will focus on some that seem most relevant to our time.

One sobering explanation is that technological civilization is doomed to self-destruct, or is inherently self-limiting to the degree that it cannot penetrate the great distances between stars. Of course, any such thinking presumes a great deal about the possible manifestations of intelligence. For example, a “technological civilization” would be a form of biological life—and thus a product of ruthless natural selection—or derived from such a form (robots). Like us, aliens would not be angels but craven animals who exploit other life forms. We need only look at slaughterhouses and how ideals of progress are regularly thwarted by human nature.

We owe our civilization to our great numbers and to fossil fuels, which are the remnants of past organisms. We have nearly used up this resource in ways not directly connected to the goal of space travel, in the process polluting and changing our world in ways that may limit future efforts to enter space. The sheer success of our species at dominating this planet has created living conditions which favor the spread of diseases that may ultimately cripple our civilization to such an extent that space travel is not feasible. SpaceX notwithstanding, we may already be experiencing the beginning of this with the coronavirus pandemic. We still live under the shadow of nuclear annihilation, not to mention biological warfare. Climate change may disrupt society with migrations and armed conflicts to such an extent that it cannot afford space exploration. Rampant nationalism and factionalism may preclude the cooperative global effort required. The violent and competitive nature at the base of our existence, reflected in the notion of conquering space, may be the very thing that prevents it. Our counterparts on other planets could face similar obstacles.

Another possible explanation is that aliens might simply not be interested in space travel. They could even lose interest in the external world altogether. This could happen if, like modern humans, they became too self-absorbed in entertainment, or decided to migrate to cyberspace, where consciousness can dwell in a virtual world. We have long had spiritual and meditative traditions that focus inward rather than outward, as well as myths and narratives of non-physical realms. Especially in a circumstance of limited or dwindling resources, and in the face of mortality, an alien civilization might choose some form of mental life over physical participation in the material world. Already our own civilization is moving in this direction, with digital entertainments and communication devices, not to mention the perennial resurgence of religion.

There is also the possibility that we have simply not been searching for extra-terrestrial signals long enough. We have only been able to receive and transmit radio emissions for little more than a century, and optical signals not much longer. Even the telltale heat signature of civilization has existed for but a few millennia. Technological development has been exponential, with most rapid change in the past couple of centuries. We can expect the change in the next couple of centuries to be far greater. Extrapolating, some people predict an imminent “singularity,” a point of no return when automation takes technology beyond human control. Perhaps the forms of AI that will control this world (or others) will have no interest in space travel and may even make the planet inhospitable to biological life. Some transhumanists foresee artificial life as an advancement over humanity. Equally likely seems the prospect that a post-human world could be dominated by artificial versions of viruses, bacteria and insects, rather than high-level artificial intelligence capable of space travel.

The fanciful vision of inter-galactic space travel, which fired our imaginations in Star Trek and Star Wars, projects characteristic human ambitions, values, and social dynamics on a cosmological scale. There is no sound reason to expect aliens to have a humanoid form, however, much less to gather in saloons at galactic outposts. Yet, if alien intelligence is a product of natural selection, as on this planet, there would be every reason to expect it to follow the fundamentally opportunistic patterns we see on Earth. Civilized aliens could only develop on other planets in the context of their biosphere, upon which they would be as dependent as we are here. Just as we humans attempt to set ourselves apart from our biosphere, we might expect space-faring civilizations to have developed ideals and codes of behavior that facilitate cooperation, and a science that facilitates control of nature. And, of course, mathematics.

There is an understanding among scientists and science-fiction writers that the math glorified on this planet would be a lingua franca among galactic civilizations—not the same symbols, of course, but similar concepts deemed universal. However, the biological basis and parochial nature of our mathematics is even less recognized than that of our physics. The basis of our math is the natural numbers, which abstract the integrity of discrete “objects” such as human beings perceive in their physical or mental environment, and which they themselves exemplify as individuals. The finite steps of a proof, calculation, or verification exemplify discrete acts of an agent upon a world of objects it can manipulate. This corresponds to primate experience and action in an environment consisting of countable things, whether tangible or abstract. What if, at some level perceivable to aliens, the world does not consist of discrete objects and actions? Would another concept of mathematics and of computation be more suitable? What about an alien whose body does not amount to a discrete object? Of course, the technology of space travel may require manipulating and assembling what we take to be countable objects. Yet, an amorphous or non-localized creature might have ways to change its environment analogically—for example, through chemical emissions. (Indeed, this is how most self-regulation works within the organisms we know.) Computation for such an alien would not be digital processing, but direct covariance with environmental changes. What need or use could such a being have for natural “laws” as algorithmic compressions of an input? Would such a math, and the biology and mentality behind it, lend itself to space travel?

I had an epiphany while watching the original Star Wars movie. I realized that this planet—our home in space—is an alien world and that we are the bizarre aliens that inhabit it! Whether or not any of those fancied other interstellar watering holes exist, the vision of them was created here, modelled on what is familiar to us. Just as children are attached to their mothers, we naturally find beautiful the place of our origins, without which we could not have come forth into existence. That first Star Wars film was released when the Province of Quebec still issued license plates with the motto: “La Belle Province.” Just to let all that alien traffic know how we feel about home, perhaps one day our missions to the stars will bear license plates that read “The Beautiful Planet,” in whatever language then prevails.

Why the grass is green(er)

Close your eyes and try to imagine how the world looks when no one is looking. You can’t, of course, because the “look” of something necessarily involves someone looking. This conundrum has bedeviled philosophers for centuries. Some people think we create our own reality. Others think our experience is dictated by atoms and genes. But our experience depends on both factors together—the inner and the outer. It’s always an interaction of self and world.

If the grass looks green in the springtime, there must be something about it different from brown grass at the end of summer or the red skin of a ripe cherry. That something is chlorophyll. But why does chlorophyll produce in us the sensation of greenness rather than of the redness associated with the cherry or the brownness of dry grass? This sort of question involves the perceiver’s active role in perception and the evaluative process at the base of consciousness. It calls to mind the adage about the grass on the other side of the fence. Perhaps greenness signifies something to the motivated perceiver, in the way that pain signifies tissue damage and hunger signifies a need for nourishment. But what?

This is a question that falls through the gap between science and philosophy. Science has defined itself as a study of the natural world, not of the perceiver’s subjective experience. Consciousness has long been an embarrassment for the physical sciences. Even psychology often embraces this “objective” approach; for decades it was more about the behavior of rats than the subjective experience of people. Philosophy, on the other hand, is willing to ask apparently silly questions, like: “Could I experience as red what you experience as green?” The answer, I hold, is no: to the degree we are physically similar, you and I can expect to have a similar experience of chlorophyll-bearing leaves. Because we are variations of the same creature, we evaluate perceptual input similarly.

I think of conscious experience as a conversation the brain has with itself. Words have meanings by social convention. These arise through common interaction with the world to which they largely refer. Perhaps sensations get their characteristic quality in a similar way—as “words” in the language of the senses. The quality of greenness is a convention (not social but genetic) arising through evolutionary interaction with the world over thousands of generations. In other words, sensations must have an evolutionary history, just as words have an etymological history. The “meaning” of a perceptual quality such as greenness refers to its role within the brain/body’s internal communications, just as a given word plays a specific role within human communication. We have, of course, a grand ability to play with words. To a limited extent, we are able to play with perception too. Though I cannot will myself to see green as red, imagination, hallucination, drugs, painting, and stained glass all give me access to greenness as a pure experience disengaged from the things usually associated with it. Experiments demonstrate that human perception can indeed adapt to color filters that switch red for green input to the eyes. With time, the grass eventually returns to looking green the way it normally does.

What the world looks like “when no one is looking” can only mean what it looks like to the organism one happens to be. There is no “real” way things look apart from someone looking, just as there is no intrinsic meaning to words apart from language users. Naturally, it is functional for some organisms to distinguish wavelengths of light. However, the behavioral ability to distinguish red from green does not answer the question of why chlorophyll looks green rather than red, let alone why the world looks like anything at all. Yet, it is a clue. For, one’s conscious perceptual experience is organized in a particular and largely consistent way that reflects one’s needs as a living organism and as a member of the human species in particular.

The question may be likened to asking why a particular meaning is denoted in the English language by a particular word, written and pronounced its particular way, rather than by some other symbol. For the native language user, the association seems natural and unquestionable, though of course it is logically arbitrary and a product of historical accident. The subjective experience of qualities—in this case color—arises from sensory input in a way analogous to how meaning arises from the sounds or characters of language. Some symbol must be chosen, and it will inevitably come by convention to seem imbued with the meaning it has been made to convey. So, it is backwards to ask why grass appears green; rather greenness is imbued with the association of grass and other verdure. Greenness is the way we visually experience the totality of associations related primarily to chlorophyll.

Stability and structure in the world are largely matched by the stability and structure of the organism’s perception. Organisms must evaluate stimuli in order to respond in a way that allows them to survive at least long enough to reproduce. Perception is about evaluation, which is expressed in the way things look. For example, pain, fear and hunger evaluate the significance of a stimulus in terms of the body’s needs. The behavioral “meaning” of pain could be compared to the verbal warning: watch out! Whatever the meaning of greenness, it is not as evident as that of bodily sensations, nor does color bear their urgency. Color does have associations and can bear emotional charge. However, like hearing—and unlike touch—vision is a distance sense. Because of distance from the stimulus, there is time to monitor events and evaluate them in relative safety and detachment. Because of their negligible momentum, the physical impact of photons on the retina is virtually nil. In contrast, pain occurs when the body has already been damaged through direct contact.

Phenomenal qualities in general are like the intelligible meanings that emerge through the babble of spoken syllables or the symbols on a written page. The greenness of green, like the hurtfulness of pain, contains a “self-evident” meaning that arises in much the way that meaning in language does. Just as squiggles on a page come alive as a story, “qualia” are how the brain/body represents and interprets to itself the internal connections it has itself made to bear significance for it. I call this assertive process fiat—a command as in “Let there be light!” Or a supposition as in “Let x stand for such-and-such.” This is the fundamental conjuring act that calls forth the world into consciousness. It is the action of an agent rather than the reaction of a passive causal system. In this case, the brain/body decrees with royal authority: “Let there be green, to represent grass.”

This brings to mind another “silly” question of philosophy: could a person behave exactly as they do, in every detail, but without conscious awareness? Again I say no. The fact that we are not “dark” inside is functional; conscious experience is essential to human survival. There must be some way that the world looks and sounds and smells and feels if one is to respond to it in all the sophisticated ways that human creatures do. There must be a story we narrate to ourselves not only in words, but directly in the language of the senses. Given that one is conscious at all, and can distinguish certain wavelengths, the grass looks green first of all because it must look somehow. Secondly, it looks green because that experience is the tag assigned to a particular input, just as one has learned to tag that input with the word ‘green’. The word may seem inseparable from its meaning, just as greenness seems inseparable from a certain wavelength of light, but both are conventions asserted by fiat.

What can we learn from the philosophy of perception? First and foremost: that our experience is not an open window directly revealing the world as it “really” is; but neither is it a private illusion. Instead, perception is an interaction between a living organism, with its needs and drives, and the external world (whatever that turns out to be). Consciousness is produced by the brain/body in response to physical stimuli. It is a co-creation of mind and world. Understanding that enables and requires us to take responsibility for how we interpret our experience as well as how we behave in response. That’s a daunting challenge, to be sure, since two factors operating jointly always result in ambiguity and uncertainty. Yet, one cannot afford to ignore either factor for the sake of certainty or convenience. What appears self-evident or objectively true is always open to interpretation. Appearances cannot be taken at face value. We must always question our assumptions and the motivations that underlie them.

The Riddle of the Cheshire Cat

Are you your body, your consciousness, an immaterial soul, a program running on a protein computer, a person before the law, a product of biological and social evolution? Those are but some of the concepts of self that come to mind in response to the Cheshire Cat’s famous question: Who are you?

A possessive pronoun designates the body: my body, as though it is a thing separate from the presence I feel “myself” to be, and belonging to it. This presence is no object, but a subject. But let us back up, since the thing one takes a body to be is already a learned notion. Even before birth, what we actually begin with is sensation. When the eyes open, we are flooded with visual appearances coordinated with other sensations in the body and limbs. We feel the body and the sense of will through which we learn to move its parts. And we see it as we see other objects in space. We learn that we can move other things by moving this one, from which we can never get away. (According to Piaget, the concept of causality, as an impersonal force acting between distant objects, derives from the early experience of making our own limbs move.) We acquire knowledge about the body as a physical entity among other objects—constituted of organs, cells, molecules, etc. It becomes an object of disinterested study as well as the highly interested source of all information about the world, to which one is connected like no other object. Since it is “me” who gains and uses this information, the body seems like my personal instrument, through which “I” interface with the world.

When you ask people where they feel themselves to be located in the body, most reply that it is behind the eyes or mouth. (Other answers might include in the chest or solar plexus, the groin, the limbs, etc.) While earlier civilizations identified the soul with the breath and lungs, or the vital principle with blood, we moderns identify our psychological life with the brain and now, perhaps, with genes. We have also inherited the mechanist metaphor through which to view the body—and the rest of the world—as a machine. Descartes proposed that the human body (like the animal body) is effectively a machine, while the human person (the locus of consciousness) is immaterial, not part of the machine but its operator.

Though that view strongly reflects the intuitive sense of being separate from the body, it is disputed by modern science, which insists that consciousness and the self are brain functions. However, science has hamstrung itself by its own objective focus, in such a way that it has not succeeded in explaining consciousness, but only in identifying how it correlates with physiological and brain states. (As they say, correlation does not imply causation.) Indeed, describing how objects interact with other objects does not seem appropriate to explain consciousness, which always involves a subject as well as the objects that are interacting. This relation of subject to object is enshrined in many languages as parts of speech.

The sense that “I” am not another object in the world (though my body is) led easily (though not logically) to the notion of the soul or subtle body. Theologically, the soul is a quasi-material thing (think of ectoplasm and ghosts), an awkward mix of subject and object. It explains nothing, while reassuring one of having some substantial basis that could survive death. Smart as he was, even Descartes was caught in that fallacy. (I suspect that he knew better—since Aristotle had known better long before—but had to pay lip service to traditional theology in order to evade the Inquisition.) Nowadays, scientists hold that the self dies with the brain; but they still cannot explain in traditional scientific terms how a living meat machine can produce consciousness and the sense of being a self.

Enter the computer metaphor. Just as the digital computer is the most subtle, complex, and general machine—capable of simulating any other machine—so the brain is the most plastic organ in the body, capable of simulating the external world. A great deal of the mechanics of cognition has been explained by writing programs that can accomplish with computation the perceptual tasks that the brain accomplishes naturally. However, it is cognitive behavior that is simulated and explained, not the subjective experience of being a conscious self. That challenge is now usually known as the hard problem of consciousness, to distinguish it from the “easier” problems of explaining behavior causally. However, I don’t believe that even the behavior of organisms, much less consciousness, can be explained strictly in the sort of terms that apply in physics and chemistry to inanimate matter.

The most helpful contemporary metaphor to explain consciousness, experience, and the nature of the self is virtual reality. This extends the computational metaphor, since VR runs on computers. What we know as consciousness or phenomenal experience is a VR program the brain runs to simulate the external world. What you know as your self is your bodily representative in that VR world, your avatar. How you see your virtual body, like how you see the virtual world, is guided by your real body’s real interactions in the real world. You have evolved to see the world a particular way in order to survive—that is, to represent it a particular way in your VR. That representation—that seeing—is your VR world. Unlike actual VR (where you put on and take off goggles), you cannot quit the VR program run by the brain in order to see the world as it “really” is.

Your virtual self (that is, you!) is as much a body function as the brain that runs the VR program. It exists to serve the needs of the body, although it has a limited autonomy that allows it to do things that are not in the body’s interests. I think of this agent as the CEO of a corporation (corporation literally means body—in this case, a corporation of share-holding cells). In its executive role, the CEO mostly doesn’t interfere with the running of the corporate machine. Consciousness monitors the external and internal environments, and their relationship, troubleshooting in situations when automatic programming can’t adequately do the job. But consciousness isn’t just on or off. As you’ve no doubt noticed, there are degrees and domains of attention. Following a familiar route, one can drive a car more or less “automatically,” only becoming fully attentive when required to make some decision in novel circumstances. Meanwhile, the CEO is free to daydream or play golf.

We are considered agents responsible before the law. Unlike machines, we are considered to have free will—that is, to be the originators of our actions. Causal explanation, appropriate to material systems, does not usually get us off the hook. In principle, the buck stops with a free agent. This runs counter to the modern understanding of the body as a causal system whose actions can be determined by internal and external factors. Perhaps one day machines will have as much free will as we do, will be considered legal persons, and will be accountable to the law. At that point, they will probably no longer be considered machines and will have no more license than human beings to have their behavior excused on causal grounds.

Yet, nature overall is considered a causal system, including the whole biosphere. From that point of view, as products of evolution, we are as determined in our nature as other creatures. The difference is that we are aware of this and have developed some capacity to override our programming, change it, or compensate for it. It may well be that this capacity itself is an evolutionary product that has served our survival. On the one hand, though we still may not like it, we have by now largely outgrown the notion that we stand outside the natural order. On the other, the whole of human culture is a valiant attempt to do so.

Certainly, there are aspects of being natural creatures that remain limiting, when not utterly horrifying. For example, we die; and we live by killing other creatures. We now know that even our integrity as physical organisms is illusory. It is baffling enough to think that this body is a complex configuration of trillions of cells that work together for mutual benefit under the umbrella of a common DNA. Yet, those cells are outnumbered ten to one by a menagerie of parasites and freeloaders that share the same skin but not that DNA. If it was hard before to identify with one’s liver—or, for that matter, one’s brain—it is all the harder now to identify with this bag of “foreign” creatures. (The cells now “officially” comprising the body might even once have been parasites that eventually joined the cause.) Now one must defer to genes that run the show—and not only “my” genes but those of countless parasites. The self barely hangs on as a legal and social entity, and is no longer credible as a spiritual entity. On the other hand, even the atoms of the body are hardly material in the traditional sense, but mostly empty space or some nebulous field. Who you are, then, is not so easy to answer when the self evaporates, leaving only a Cheshire grin. Perhaps the answer depends on who wants to know and why.

 

Feeding the Hand that Bites You

Economists and game theorists define rationality as playing to win. They assume that a political system, like an economy, consists of players who try to maximize their own advantage. Altruism and cooperation are considered strategies in this competition. Capitalism and individualism are assumed. In such a context, it is difficult to grasp why people in a lower economic bracket (the exploited) seem irrationally eager to support politics advocated by those in a higher income bracket who exploit them. I am thinking of the working poor in the United States, though there have been parallels and precedents elsewhere. Has Trump had such support among the nearly dispossessed simply because of public ignorance and media propaganda? Or are there deeper reasons that make gullibility somehow willful?

 

Vaunting individualism, America has long been suspicious of any form of collectivism, including social initiatives that could best be administered by a strong federal government. Hence, the ongoing resistance to “socialized” medicine, for example. Maybe it goes back to colonial experience under British rule. Certainly, the New World offered freedom from the stultifying feudalism of Europe, with its fixed class structure, where violent revolution seemed the only way to improve life. By comparison, America was the land of opportunity—for individuals to make their way without banding into associations that might limit their freedom.

 

Though Americans call their war of independence a revolution, it was in no wise a social revolution like the French or Russian ones. On the contrary, the American revolution was a land-grab, mostly from indigenous peoples, by an already propertied class. It would also have been evident to the colonists that England was about to abolish slavery. The slave economy of the South could carry on only with separation from Britain. Hence, the spirit of independence that fired the Constitutional Convention served very particular interests, which needed the common man to fire the actual bullets. Even then, backwoodsmen and small farmers were co-opted into the sabre-rattling schemes of the propertied class.

 

A century or two later, in the full swing of industrialization, this willingness of the common man to support the elite would come back to bite workers, as they continued to embrace the values of their employers. Collective bargaining was always stronger in Europe than in the U.S., where unionism went against the engrained horror of collectivism. After their peak in the 1940s, unions succumbed to the anti-revolutionary spirit manifested in the Red Scare. In the wake of the Taft-Hartley act, they obligingly purged their memberships of communists and socialists, emasculating their own power. The corporate barons had every right to fear collectivism and social revolution. Amazingly, the mass of poor bought into the hysteria.

 

But then, flag waving is a national sport in America, bordering on hysteria. The national anthem is a song about a flag, and every school day begins with a pledge of allegiance—not to the national government or to the land—but to a symbolic bit of cloth. (Significantly, some religious groups protested this as idolatry.) Huge flags hang inside the stock exchange and within some churches. They are blazoned at car dealerships. They are embroidered onto the uniforms of security guards. In many neighborhoods, every other porch boasts the Stars-and-Stripes. The flag is a symbol, of course—but of what, exactly? Ironically, the most ardent flag wavers are often opposed to central government and collectivist social policies, favoring states’ rights over federal control.

 

There is an ironic bit of history behind the flag too, whose ubiquitous popularity dates from an advertising stunt: the campaign of a youth magazine in the 1890s to increase circulation by selling flags to schools. The Pledge of Allegiance was composed for this campaign by a socialist minister who preached against the evils of capitalism and advocated strong government to administer social justice. He also advocated separation of church and state, so the pledge deliberately excluded any reference to God. (That reference, to “one nation under God,” was added by Congress in 1954, the year after Eisenhower was baptized as a Presbyterian. He wanted American school children to daily proclaim their dedication to the Almighty as well as to the flag, clearly distinguishing the U.S. from its godless Cold-War rivals.) The original pledge was devised for the 400th anniversary of Columbus’ first voyage, to be celebrated in schools as part of the flag-raising ceremony. State after state then adopted it as a daily indoctrination in schools to encourage patriotism, especially among immigrant children. Now it is a universal ritual before public events. Until 1942, it was accompanied by a gesture we recognize as the Nazi salute.

 

Some Americans are opposed to initiatives that would benefit them because these have to be paid for through taxation. Instead, they vote for initiatives that reduce corporate taxes because such policies seem to support a free market economy—as though this cliché promises to benefit them personally. In fact, many people with modest incomes do own stocks and shares, if only through their retirement fund. However, I doubt that working-class conservatism rationally reflects their holdings as investors, so much as a magical belief in “the free enterprise system.” That is the wild-west lottery in which the lucky winner takes all, carrying away the pot collected from an anonymous mass of contributors. It’s the mythic gamble in which all have a theoretically equal chance to strike it rich or become president, whatever their beginnings. Trump and Putin are ironic living proof that some people do in fact get rich… and then become president… and then become even richer!

 

It is no mystery that corporations and interest groups control the American government, especially through lobbies. This reflects more than simple cronyism. It takes a lot of money to get onto a ballot in the first place, which means the candidate is likely to be wealthy and from the business sector, and/or has moneyed supporters. But once elected, there is a long-standing trend in American government to delegate responsibility for governing to private interest groups, especially to the group from which one came or one’s supporters. It would make sense to draw on the expertise of such groups—among which are many think tanks—provided they are dedicated to the well-being of the country as a whole. Being special-interest groups, however, they are often dedicated instead to the well-being of a particular class or sector, and have a highly biased viewpoint. The mandate of government to achieve fairness and balance in decision making is left to the competition of these voices, which tend to cancel each other out. This tendency simply mirrors laissez faire in the marketplace and reflects the engrained suspicion of government.

 

Responsibility to govern is thus abdicated, ceded instead to organizations outside government, or to ones specially brought within the government bureaucracy. The flip side of the suspicion of government is a tacit understanding that elections are not about government at all. If the de facto regime is not the elected one, why bother to find candidates who can do the best job? Those who can win a popularity contest will do just as well. Perhaps the poor are so willing to vote for the rich to represent them because they believe the success of the latter will magically rub off on them. It’s naturally more attractive to identify with winners than with losers, despite the slim odds of the lottery. It seems easier to believe the myth of upward mobility than to recognize the fact of increasing general poverty.

 

Standard of living is the total wealth of a country divided by the number of citizens. It is a figure that hides the extreme inequality of the rich and the poor by the trick of averaging their assets, masking the economic reality for most people. The social mobility Americans prize has morphed into a game in which there are ever-greater winnings for ever-fewer winners. The jackpot grows exponentially along with the number of losers. The overwhelming amount of any rise in GNP these days goes into very few, already deep pockets. (On the other hand, any drop will likely come out of the wage-earner’s ever-emptying pockets.) While trickle-down of the Reagan-Thatcher era is still the mantra, the reality is that wealth in the laissez-faire economy “naturally” trickles up—until, if ever, it is deliberately redistributed by government or charity.
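The averaging trick is easy to see with numbers. Here is a minimal sketch using hypothetical wealth figures (the amounts are invented for illustration, not drawn from any real economy): the mean sits far above what the typical person holds, while the median tells the truer story.

```python
import statistics

# Hypothetical wealth of a ten-person economy, in thousands of dollars:
# nine modest households and one very rich one.
wealth = [20, 25, 30, 30, 35, 40, 45, 50, 60, 10_000]

mean = statistics.mean(wealth)      # the "total divided by citizens" figure
median = statistics.median(wealth)  # what the household in the middle holds

print(f"mean:   {mean}")    # the average looks prosperous
print(f"median: {median}")  # most households hold a small fraction of the mean
```

One outsized fortune drags the mean to roughly 1,000 while the median stays below 40: the "standard of living" figure describes almost no one in the list.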

 

A citizen may take pride in the nation’s economic growth, or take heart when the Dow Jones is up; but the gains celebrated rarely go to the conservative poor. This does not seem to matter when they can bask in the glitzy success stories of celebrities—whether from the entertainment world, the financial world, or the political world (which now considerably overlap). In Britain, where there was traditionally little social mobility, such sentimental adulation was reserved for the monarchy (and the Beatles). Wouldn’t it be ironic if its counterpart in the U.S. turned out to be a transplant from the Old-World class system, where downtrodden domestic servants cheerfully identified with the household of their aristocratic masters? I guess we will see at election time whether that is the American way after all.

The Meaning of Meaning

The things that give meaning to our lives relieve us of the need to ask, “What will I do next?” This is one reason why the loss of a loved one, a job, a personal faculty or skill, one’s health or one’s home, or even the loss of a routine or a possession can be so devastating. It leaves us bereft, at a loss of what to do with life and love and energy. For, these cherished things had heretofore set the tone, the scene, and the agenda for our daily lives. Through them, we knew what life is about and how our days would be passed. Without them, the future may seem uncertain and bleak. One carries on, since life has its own day-to-day momentum. But anxiety follows loss, which is a loss of meaning.

Meaning is about choices we make—of what to value and what to do. Anxiety attends freedom of choice, which thus is burdensome. When we lose that which gives structure and direction to life, we are then faced all over again with fundamental choices that can be riddled with anxiety. The other side of this coin is that we made such choices in the first place partly to be done with the anxiety of choosing. Understanding this can help us better appreciate what we have not lost, what has not been thwarted by external events or losses beyond our control. One prefers, of course, to make choices that will prevail and not have to be made all over again, each year or week or moment. Human society is founded on this desire and has conspired to make that stability possible in many ways. We make contracts and social arrangements toward that end. But human society is not inherently stable in the ways we hope; much less can nature be relied upon.

One lives by habit and routine, since these structure our lives, eliminate the need for constant decisions, and establish a stable environment we can count on. This reliability is the enduring appeal of culture and institutions—of the man-made world of systems, protocols, algorithms, artifacts and machines, which are not subject to the uncertainties that inhere in nature. But even these are unreliable to the extent they depend on human participation and natural materials and forces.

Just as culture is the order we impose on nature, meaning is the narrative that we project onto an unstable natural and social background. It is a structured story we attempt to impose on life. That works, for brief periods in limited ways, in the measure that life seems to conform to our hopes. Yet, the fact that meaning may endure for extended periods is serendipitous. We may be grateful for it, but it is never guaranteed. It is only by chance (some would say by grace) that we can take anything for granted. Loss reminds us that meaning is fragile and choice is always imminent—which is another way to say that conscious attention is often required.

The other side of this coin is not to take things for granted. Rather than count on reality to remain as we wish, according to choices we hoped would be definitive, we can renew the investment we make in the things we hope to find meaningful, moment to moment as we go along. This implies knowing, on the one hand, that we can change our minds; and, on the other hand, that reality may refuse to cooperate with our desires.

One is relatively unhindered in the realms of imagination and thought. However, meaning is naturally found outside us in the real world, which constrains us in myriad ways. There is a trade-off between the freedom inherent in subjectivity and the meaning that inheres biologically in what is given or imposed by the external world. It is as though one can have free will or ready-made meaning, but not both. The ultimate price of freedom may be to live in a world that seems arbitrary, inhuman, and empty of meaning. And that may seem reason enough to choose something other than such freedom.

But meaning is not a quality residing potentially or actually in things or symbols. It is rather a capacity residing in us, the makers of meaning. That is because experience is not simply a direct revelation of the world, but a product of our brains interacting in and with the world. As in language, meaning in life exists only by convention and agreement. Hence, nothing—not even human life—is inherently meaningful or valuable. (By the same token, neither can anything be inherently meaningless.) Rather, it is up to us to give (or take back) meaning and value where we see fit. That is an easy enough statement to make; but it is hardly easy to stomach the full truth of it, or to live with that truth in all its implications. For, it takes the burden off external reality to be meaningful and puts it on our shoulders as the creators and destroyers of meaning. This burden can be very intimidating, especially since we are naturally conditioned to look outward into the world for every satisfaction, and to rely on it as the source of meaning and direction—just as we once relied on our parents.

One thinks of convention as agreement among individuals, for the sake of communicating and getting along. But there is communication within the individual as well. There is a language even of unconscious thought, with its own conventions. The nervous system, after all, is a network of internal connections, mutually communicating. The brain invests experience with meaning, in the way that language users invest their language with meaning, whether one is aware of this activity or not. Of course, like society, the brain can also change its conventions, which (by definition) are neither true nor false. If they are to some extent hard-wired within us, it is for biological reasons outside the conventions themselves. The very idea of truth or reality is a habit we have formed because of our biological nature, which compels us to look at the world as real and necessary rather than as arbitrary, illusory, or a matter of convention. We could not otherwise have survived to be here thinking about it. Yet, the fact that we can think about it allows us to question any given meaning, potentially resulting in its loss.

One is often advised that a “meaningful life” can best be found in service to some cause bigger than oneself. This stratagem works psychologically to the extent that one believes in the cause. However, it trades on our biologically inbuilt awe for a natural reality that is indeed vaster than the individual and even the species. We are in the natural habit of looking outside ourselves for meaning and purpose, since our very existence depends on that external reality. Like other primates, we are also intensely social organisms, who are finely tuned to the needs of others, to the group and its dynamics. The values behind these habits are ultimately a matter of biological and social conditioning, within which one may indeed find satisfaction in pursuing a cause or in serving others. Yet, even this grounding provides no ultimate psychological security or defense against nihilism. For, one is also at liberty to question the conditioning and the act of finding meaning in values that are biologically or socially conditioned. (Indeed, to think of it as “conditioning” already calls it into question!) One might come to look with suspicion upon such meaning as no more than another arbitrary and empty convention—a spell cast by biology, one’s parents, or society. Where one can be enchanted, one can also become disenchanted!

Where does this leave us? Well, with the uncomfortable notion that there is no escape from personal responsibility for how one sees and relates to the world. After the initial shock, this realization can be accepted as simply how things are. Life is as meaningful or meaningless as we take it to be; the things that we cherish are valued because we value them. This does not negate the qualities or properties of those things; it merely insists that valuation is something we do. Significantly, there is no plain verb in the English language that means “to give meaning,” as the verb ‘to value’ means to actively give value. Language does not support taking responsibility for meaning, and we are thus not in the habit. Since different people (not to mention other creatures) give meaning differently, there may be disagreement. What emerges is not a common reality or consensus, but a community of beings capable of perceiving and bestowing meaning on what is perceived. Perhaps that is meaning enough.

Is mortality necessary?

Living things age and die. But there is quite a range of life spans. There is a general correlation between the size of the creature and its longevity, but this is not a strict rule. Human beings find themselves at the larger and longer-lived end of the scale, wondering why—or even whether—death is necessary, how to prolong their lives, and how to avoid the degenerative effects of aging. Whether those are desirable goals is a separate question, both for the individual and the collective. Here, my question is to what extent it is feasible to indefinitely extend human life and healthful functioning. Does anything in nature render it theoretically impossible for the human organism to avoid aging and mortality? (Note that failure to find such an obstacle is not proof against it.) Under what conditions could an immortal conscious form of life evolve naturally or be engineered artificially? Could human intervention in natural biology (e.g. genetically) circumvent aging and death for the human organism—and with what consequences for the individual, the species, and the biosphere? I don’t propose to answer these questions, only to unpack them a little.

The prospect of defeating natural senescence—for example, through gene manipulation—might hinge on a clear understanding of why senescence occurs naturally in the larger evolutionary scheme. Since that includes the whole history of life on earth, this would involve modeling in detail the entire functioning of the biosphere from a “systems” point of view. (Such an understanding would be useful in astrobiology as well, to anticipate possible alien forms of life.) If possible at all, such a project could begin either at the highest or the lowest level. “Synthetic biology” is an approach at the bottom level, attempting to create an artificial analog of DNA that can replicate, self-maintain, and evolve. Even there, success in the simulation does not guarantee success in the real world. To engineer useful biological products is trivial compared to the challenge of a complete theory of all possible biospheres.

Genetic “engineering” usually means specific interventions in natural biology as we know it—not building an organism artificially from scratch, following a “blueprint,” which may not be possible even in principle, let alone in practice. The simple reason is that our ideas about any natural thing can never be perfect or complete. We can only have such exhaustive knowledge about things we make ourselves—that is, about things that follow from given definitions, such as machines and conceptual models. Those have well defined parts that are supposed to be fixed and stable, related to each other in ways specified by their designers. The parts of organisms as proposed by human observers may not correspond to reality. They are ill-defined, squishy rather than hard, and may interrelate in multiple ways that elude human grasp. We cannot properly define what the human organism is, let alone replicate it. (Does it include, for example, the microorganisms that live within it and on its surface, do not share its DNA, yet far outnumber the cells that do? That biome is known to profoundly influence human perception and behavior. It is integral to the health of the body and may be integral to its definition as well.) This fact alone may preclude reverse-engineering the human form, whether to renew it or build it from scratch.

Conversely, a being engineered from an original defining blueprint would be an artifact, not a natural organism, and thus would technically not be human. That is not to say that an artificial person is impossible, whether or not one calls it human. But at the least, an artificial person would first have to be an artificial organism, which is a self-defining system, not merely a product of human definitions and design. To predict whether an artificial organism could be conscious hinges on a thorough understanding of consciousness in natural organisms. Even the extent to which it would behave as a natural human being would depend on details of its structure that may be no more accessible than such details of any natural thing.

Then, what about mortality and senescence as natural phenomena, and the hope of circumventing them through genetic manipulation as currently understood? Single-celled organisms can self-reproduce simply by dividing (mitosis). If this process of self-cloning can go on indefinitely without error, then the original cell is “immortal” in the sense that an identical copy exists any number of generations down the line. But already this is a different notion than the continuing existence of the original cell. It raises the question of how to keep track of an individual cell in a culture of rapidly dividing cells or an individual hydra or fruit fly in a rapidly multiplying population. In any case, one must distinguish between the mortality of the creature as a whole and that of the units composing it. The longevity of a coral colony, for example, should be distinguished from that of an individual polyp. We want to know how the longevity of cells relates to that of the organism they compose.

A cell can divide into two equal cells or into a “mother” and “daughter” cell, with different properties. Stem cells have the capacity to divide into one cell that remains “totipotent” and one that is differentiated to become a specific tissue. (Stem cells of hydra, for example, replicate every three days; those of small rodents every four weeks; of cats, every ten weeks; of human beings, every fifty weeks.) Plants, hydra, and some other simple animals can regenerate a whole new organism from a severed part (in the case of plants, from a single stem cell). Many non-vertebrates can regenerate limbs and other parts, but regeneration is limited in large mammals. Whatever the cause, there are trade-offs between the size and complexity of the organism and the potentials of its various cells.

Even aside from top-down production from design, if a human individual could duplicate itself in the way that a single cell can duplicate itself, there would be serious questions of identity (see the film The 6th Day). Would the original or the copy be the bona fide person? Presumably they would have different experience (as identical twins do), at least by virtue of occupying different places. If the point of cloning oneself in this way is for “me” to continue living healthfully, then the duplicate version would have to be free from the defects of the original that accrue through aging, disease and accident. In other words, it would not be a copy of the original but a fresh example cast from the original mold, so to speak, using fresh materials. (Compare the difference between manufacturing two items from the same design and copying a manufactured item already in use, with its imperfections and wear.) To continue to be “me,” the copy would also have to duplicate the psychological identity and all the stored memories of the original. In any case, nature has not opted to clone a complex multi-celled organism in the way a single cell can clone itself.

The mortality of a multi-celled organism is usually associated with sexual reproduction, which separates cells into two types: the “immortal” gene line and the somatic line which forms most of the organism’s tissues. Senescence appears to be “the inevitable fate of all multicellular organisms with germ-soma separation” [Wikipedia: senescence]. Yet, the question is whether senescence and mortality really are inevitable. To put it another way, why are the somatic cells generally not immortal? Since disease causes cell damage, and cell damage is a marker of senescence, we want to know how “built-in” senescence (the Hayflick limit) differs from the effects of disease.

Aging is not merely the passage of time. It has been defined as “a progressive deterioration of physiological function, an intrinsic age-related process of loss of viability and increase in vulnerability” [Wikipedia]. It might be compared to entropy in physical science: the inevitable increase of disorder. However, life has long been recognized as a process that goes against entropy insofar as it can create local order. The question is whether or how an individual organism could maintain its internal order indefinitely. To a human engineer or designer, it might seem an incongruous design flaw that nature would go to all the trouble to develop a complex multi-celled creature and not provide the ability to permanently maintain itself. However, products of human design often have built-in obsolescence. The designer’s naive goal to produce something perfect and lasting is only part of the bigger picture. The product on the drafting board must serve the corporation’s goals in the marketplace. Just so, the individual specimen must fit in with the system of life as a whole.

Natural selection is a slow and haphazard process. Basically, what exists at a given time is co-determined by the presence of other life forms and their needs, indeed by the entire history of such forms as they have built on each other. The evolutionary tree of life is a game tree of branching and narrowing possibilities. Evolution does not easily go backwards or start entirely new branches. There might not be time for life to begin more than once on a given planet. And once begun, there is no ecological space for a radical alternative to emerge. Rather, evolution cobbles variations onto existing variations, incrementally, with changes only manifest in the succeeding generation. Sometimes they may seem to regress, but always they are new adaptations.

One can imagine (as Lamarck did) an alternative system in which a change in the phenome could result in a change in the organism’s genome. The changes in a body that could alter itself adaptively would pass directly into the next generation. But such an organism wouldn’t need a next generation in order to adapt through natural selection! It would reproduce only to maintain and expand the colony. In any case, nature does not work that way, at least for organisms in which genome and phenome exist separately. Perhaps that separation is one of the irreversible choices that life has made while crawling onto its present limb. We can conceive an alternative system in which changes are induced deliberately by human beings (as in genetic engineering or even traditional breeding). But even such imagined possibilities have to fit in with the existing system of nature. We can hardly presume to redesign the whole biosphere—at least not on this planet, and not without first wiping the slate clean of all existing life, which would of course include us!

Don’t You Believe It!

They say that seeing is believing. But believing can take the place of seeing. One speaks of believing one’s own eyes. But that is inappropriate, for we are hardly called upon to believe what is readily apparent to the senses. Verbal propositions are another matter. Apart from statements of what is immediately given in experience, one must decide whether to give credence to the claims that others make. These may be claims about their own experience, but more often they are about concepts and abstractions. We accept a claim as true when we can corroborate it with our own eyes. That cannot happen, of course, when the claim is about something invisible or abstract. In that case, acceptable evidence is less direct, but could still consist of something visible. We cannot see subatomic particles, for example, but we can see the tracks they leave on a photographic emulsion. Therefore, we “believe” in their existence.

Religious people may claim that their faith has a similar foundation. That is, they have personally experienced what they consider sufficient evidence for the existence of God. As creationists, they may take the manifest complexity of the world as proof of intelligent design. Some may claim to have felt God’s presence. While I discount these claims for varying reasons, at least they involve the testimony of direct experience! It is quite another matter to accept the testimony of a scripture, a text, however revered it may be. To suppose that the written word is reliable because it was dictated by God is absurdly circular reasoning. And yet millions believe in the Bible or the Koran as the literal word of God. How can this be?

Some religious people hold that God is needed to regulate a world that cannot be safely left in human hands. For them, civil law is not wise enough (or free enough from corruption) to provide the absolute authority that can command universal obedience. Humans are like squabbling children who need an all-powerful parental authority to keep them in order. (Note that dictators can perform this function!) In Christian Europe, there once was a universal Church, uniting believers, but it was hardly free from corruption and couldn’t sustain authority. Nothing has ever replaced it to fulfill that role. We have a United Nations that is anything but united. We certainly do not have a world united under God. Quite the contrary, people embracing this rationale for religion are divided into pockets, hoping their brand of invisible Super-hero will set things right in some final reckoning.

I am sure that religion did arise out of historical collective need. But that does not quite explain the ongoing appeal to individuals of its fanciful story lines. I think that appeal lies less in the details of the story than in the need for stories generally. After all, we are creatures of language. However we conceive the world amounts to some narrative or other—even the scientific one. When the story is written down, it acquires an aura of objectivity that goes beyond the claims of accountable individuals. The expression “it is written” means that it is far more than someone’s opinion or fantasy. Since the invention of printing at least, widespread access to the text allows it to assume the place of a power standing over all. In principle, the text is timeless and authoritative: just what we want a god to be. Apart from errors of copying, whoever accesses a text reliably finds the same words each time they look and the same as anyone else would find. In contrast, nature and human affairs are ambiguous, elusive, and ever changing. (Even scientific findings often cannot be repeated.) While the hand of God or of Adam Smith is an invisible abstraction, you can hold the tangible Bible or Koran in your hand. It consists of readily quotable memes or tweets. As a guide for living, it provides a ready map of a territory that may be overwhelmingly complex.

Personally, I think it is about as reasonable to believe in God as in the tooth fairy. But what I want to emphasize is the conundrum of belief itself. A clue may be the Protestant doctrine of salvation by faith alone, according to which one is “saved” simply by believing. Such pathological reasoning contradicts Jesus’ actual teaching of unconditional love. For, what one is to be saved from is the wrath of a punitive and judgmental father god. The doctrine of salvation by faith goes against the very reasons for which religion was socially useful in the first place: to ensure that we get along together. For, it doesn’t matter how wicked you have been, you can still be forgiven at the eleventh hour if only you claim to believe. It is far easier to believe than to be good.

Language itself makes such unreason possible, which renders belief far more dangerous than a mere question of theology. The bottom line is that you can be controlled when you can be led to believe a claim despite the evidence of your senses and common sense. It’s a form of hypnosis, the power of suggestion. Inherent in language is the power to deceive—not only by outright lying but also by creating an ersatz world. That may have served society two or three millennia ago, when people had to be tricked into behaving properly. It has hardly achieved that goal in the modern era. On the contrary, belief has become the stock-in-trade of demagoguery, social media and divisive politics, which are platforms for bad behavior.

Some people argue that there is nothing to lose by being a believer, no matter how outrageous the “truths” one is asked to embrace. It may seem rational to cover all the bases, just in case. I don’t agree. I think one can lose one’s dignity, credibility, and the trust of others, which is a big price to pay for a fairy tale ending. Nature has nightmarish aspects to be sure—disease and pain and horrifying creatures. But at least there is nothing personal in its machinations. Religious believers hold that the world is supernaturally personal and that all the claims and implications of the text are literally true. The believer lives as a subservient child in a dysfunctional family, headed by an abusive and autocratic Super-parent. One wonders not only at the sanity of this vision but also at the vindictive intent that would cast unsaved neighbors, relatives, and loved ones into eternal hellfire because they do not share one’s thoughts. A few centuries ago, it meant literally burning them alive! One is right to mistrust the believer.

Of course, neither belief nor craziness is limited to religion. Conspiracy theories, for example, are akin to religious beliefs. They allow the insider to be special, provocative and prophetic, in the know, even saved. They are based on claims as off-the-wall and paranoid as the absurdities of theology—the wilder the better, since belief serves to provide identity, status, and belonging within a cult. Such belief polarizes and divides society. Political extremism flourishes in times of uncertainty, when there is a glut of indigestible information, of divergent voices clamoring for attention and belief, overwhelming one’s confidence in the ability to make sense of them or know what to believe.

Unlike the testimony of your senses, verbal claims—for instance by political leaders, newscasters, and social media—provoke belief or disbelief, unless one simply ignores them. They direct attention—sometimes away from the real issues. It matters little whether the claims are true or false, for either way one’s energies are channeled to sort them out in the terms in which they are presented. It is easier to believe a claim than to reject it, which requires a certain inner effort that goes against our social instinct to be agreeable. It is yet harder to think in one’s own original terms. These are the liabilities of our human capacity to communicate. Belief is the Achilles heel of our wondrous ability to imagine and abstract, and to express our thoughts in symbols. Belief is how we talk ourselves into things that we ought to know cannot be true or even relevant. Believers do not have to come up with their own vision, their own claims about reality, their own sense of what is important, their own story. They simply sign up to someone else’s.

Ideally, communication is an honest game among equal players, in which one is respectfully invited to consider the claims of the others. But communication is disrespectful when the intent is to manipulate. The bad faith of the gullible believer meshes with that of the cynical communicator—a match made in hell, where neither is obliged to be sensible. The onus is on the buyer to beware of seductive games that distract one from the senses and from common sense—which, being common, should unite rather than divide. Individuality is a matter of sorting information oneself, not for the sake of being different or belonging, but as one’s best effort toward truth. While I hope you agree, I invite you to not believe a word I’ve said!

How do you know?

We live in an age that craves precision and proposes to measure information scientifically. Yet, it is precisely in an environment dominated by junk information that it is most appropriate to ask: how do I know what I think I know?

We’re born with instinct and quickly develop an ingrained trust of our senses and other faculties. We naturally believe that reality is what our eyes show us. For, how can we know how to act in a given situation unless it’s clear what the situation is? Yet, reality is not naturally clear. We see things a definite way because ambiguous perception would be useless, leaving one confused, wavering in doubt. We trust our feelings because they provide an instant assessment and ready-made behavior, especially when delay could be fatal. But precisely because they are hasty and automatic, perception, instinct, and feeling can be wrong. Fortunately we have reason as well. But how reliable is it?

One likes to think that logical reasoning leads to truth. That is because the truths of logic are independent of particular facts, which are always subject to error. However, the sure steps of logical reasoning are merely stepping stones from presumed facts to action. They are only reliable if we know where to step in the first place. To arrive with certainty, you have to begin with certainty. Much of our reasoning is now done for us by computers, which are logic machines. But like us, their output is only as reliable as their input.

Logic forces the issue of certainty, because it is based on language rather than on reality. Logical propositions are statements defined to be either true or false. By formulating propositions, reason misleads us to believe that we can know things to be clearly one way or another. We are even trained in school to answer “true or false” questions on examinations. But only statements are true or false, not reality itself. It may seem evident, in a given time and place, that the statement “it is raining” is either true or false. But when you step outdoors and feel a single, barely discernible tingle of cool moisture on the skin of your face, is it then raining or not? The question may only matter if you are trying to decide whether to take your umbrella or even whether to go out at all. And that is the crux of the matter: the point of certainty is to decide, to know what to do next.

We are all familiar with the maddening limitations of public surveys, which ask us to rate our degree of accord with various propositions. In effect, these are “multiple choice” questions, also familiar from school days. Expanding the number of categories beyond the true/false dichotomy may seem like an improvement, but in fact all categories are arbitrary divisions. In the case of surveys, our replies are used by others to make decisions that matter to them. Perhaps such information is reliable to the extent that people actually behave in ways that correspond to how they answer. But, haven’t you felt that the questions are misleading in the first place, and wished you could give more nuanced answers? In some ways, the public survey is an apt metaphor for our own internal thought processes. We query ourselves in order to decide some issue that could require action. How we reason, the questions we pose, the options we imagine, and the sort of answers we expect are shaped by language—our self-talk. We tend to think in words, which means in propositions and categories.

The digital age reflects this inborn propositional thinking. The essence of digital processing is ‘yes’ or ‘no’, ‘either/or’. In logic this is formally known as the law of excluded middle: there is no ground between true and false. But true and false are artificially sharp categories designed to generate the certainty upon which to act decisively. In many cases this works and serves us well. Even if we cannot predict the weather perfectly, we send spacecraft millions of miles to rendezvous precisely with a location as elusive as the proverbial needle in a haystack. The mathematics based on the law of excluded middle, and digital computers in particular, enable us to do this because the truths of mathematics, as of logic, are certain by definition. Yet, no plan or theory ever corresponds perfectly to reality, which is always more nuanced and may include unforeseeable surprises. Calculations are no more accurate than the data on which they are based. If you start with a false assumption, only by sheer luck can you arrive at a true conclusion.

Probability, statistics, and “fuzzy” logic have been developed to compensate for the limitations of conventional reasoning as it applies to the naturally ambiguous real world. The probability that it will rain in the next minute refers in a fundamental way to similar situations in the past, of which a record has been kept. If it rained in 60 out of 100 past situations where similar conditions prevailed, then it is fair to claim there is a “60% chance” that it is about to rain now. Yet, even statistics deals with definable events, which are presumed either to have happened or not. (Was it indeed raining in each of those 60 cases? On what criterion was that decided?) And any logic, even fuzzy, depends on concepts, operations and conditions that are clearly defined to begin with. Thought aims toward clarity, but also presupposes it.

The truth is that truth is not a property of reality, but of statements or thoughts. Certainty is a state of mind, not a state of the world. We hope to feel certain, especially when we need to act, because being wrong (or failing to act) can have dire consequences for which we are loath to be responsible. Yet, however certain we feel, mistakes with dire consequences are possible. Sometimes (but not always, of course) it is better to do nothing than to act prematurely. In some situations, especially when time allows, it is wise to doubt what the situation actually is, because the reality is never as clear and simple as human accounts of it. There is room for a middle ground between true and false, which are categories that unrealistically presuppose well-defined situations. Yet, navigating the no-man’s-land between true and false is psychologically challenging—perhaps especially for action-oriented men. Remaining in doubt goes against the fundamental instinct to be decisive and ready to act. Not acting in that instance requires a different sort of action: to take the stance of unknowing.

As a septuagenarian male, I like to think I know my way around. While I generally trust my mind and my perceptions, I’ve also learned to know better. Experience has demonstrated that I can be wrong, and I sometimes find myself misjudging situations. I consider myself lucky when these errors are revealed before they are compounded by further action. I do not envy people in positions of responsibility who must make weighty decisions, which are unavoidably based on imperfect information. My own errors of judgment usually involve false assumptions. These may be based on poor information (for example, rumor), but the underlying problem is that I can trust my judgment inappropriately. And sometimes that is because I haven’t sufficiently questioned my own motivations, which underlie how I perceive the situation.

This is where the stance of unknowing comes in. It insists on taking time to self-examine and to question appearances. I have to remind myself that I may be making assumptions that are unjustified or that I am not even aware of making. I need to clearly understand and acknowledge my emotional stake in the situation. I have to ask myself how I know what I think I know. I may still be required to act, to come to a decision. But honoring the place of doubt may result in a better decision. In most cases, little is lost but time and a spurious sense of certainty. Henry Ford observed that time wasted is forever lost. But time lost is not necessarily wasted. Ford was a headstrong man whose self-confidence led to great success in early life. The same certainty led to regret in later life.


Going gentle

Dylan Thomas’ famous “Do not go gentle into that good night” is one of the great poems of the English language. It expresses an oddly ironic sentiment, however. The author was a tortured alcoholic, who never lived to old age, writing about his dying father who was probably just trying to come to peace. It pleads that “old age should burn and rave at close of day.” Like many of my generation, I encountered this poem in high school, where we thought that good romantics die at twenty-nine (still burning and raving), and thirty seemed impossibly old. (Later, in college, we learned to tune in, turn on, and drop out, and never to trust anyone over that age!) I now turn seventy-five, having somehow survived these prescriptions. Earlier in life I wondered why old people dodder. Now I know.

And I know quite well that “my words had forked no lightning,” but I don’t entirely blame myself for this failure. There is a glut of words out there, after all, and a monstrous cultural machine that decides what is worthy of note. I continue to write, to do my tiny part in the face of obscurity and now oblivion. Should I view death as a defeat, against which to rail, simply because I have not yet made a brilliant name for myself? Is recognition the right reason to do anything?

Ageing is a catastrophe if you view it as a disease. And there is certainly a close alliance between ageing and disease, both at the cellular level and as an experience of the elderly. The old are more vulnerable to illness, in more pain, less able. Ageing is a process of degeneration, and dying “of old age” generally means dying of some degenerative disease. Heroic efforts are underway to identify the molecular and genetic causes of ageing and to find the fountain of youth. While I don’t disapprove, I don’t particularly care either. If a pill or procedure is eventually discovered that enables people to halt ageing and even reset their biological clock, it will probably be too late for me.

Let me propose a different point of view: old age is a stage of development. We age as a function of unfolding, which only happens to coincide with the passage of time. From the get-go, the processes of degeneration are inseparable from those of development. The cells of the embryo, multiplying and differentiating in the womb, have already begun a countdown toward their final destiny. The hundred trillion cells that make up the adult human body began as a single cell, dividing again and again to form the specialized types of cells of the various organs and tissues. For complex and not entirely understood reasons, there is a limit to how many times these cells can divide. This covers roughly the minimum required to make an adult body that can reproduce itself—and not much more. The limit sets a lifespan for the body. If it could be extended, so could the lifespan, and that is the focus of much research on longevity.

My point here, however, is to put ageing in the context of development—as part of the disambiguation of the individual that begins in the womb and culminates in death. Experience and the process of psychological development make you you rather than someone else you might have been. The choices we make are part of this development and are largely irreversible. We begin as quite plastic and general beings and end up somewhat rigid and quite specific beings. We crawl out onto a limb from which there is no return. Viewed this way, old age is not a disease or catastrophe but a developmental stage of life, part of the unfolding. If—through drugs or gene therapy—it becomes possible to have a thirty-year-old body again, what would it mean for my brain to be rejuvenated that way? If my brain’s youthful plasticity were restored, would I be the same person? Surely I would want to retain the memories since age thirty, and the development of my personality and understanding that occurred since then. I am not convinced that a brain can function as though young, yet with the outlook and wisdom of the old. Good upbringing seems more promising toward that end than therapeutics late in life. And there is no way to undo the choices I have made along the way, which have defined who I am.

The survival instinct, built into every creature, is a necessary feature without which it could not have come to exist. Those creatures lacking such an instinct would not live long enough to reproduce and would be excluded through the filter of natural selection. In other words, the drive to keep on living is particularly appropriate to young organisms until they have replaced themselves with the next generation. It may be less biologically relevant later on. As conscious beings, of course, we have other reasons to be attached to living than the body’s natural ones. But to me it is helpful to keep in mind my biological context. It makes more sense for thirty-year-olds to “rage against the dying of the light” than for septuagenarians. The metaphor itself is suggestive. The light dies every twenty-four hours, then returns with a new dawn. I might wish the day were longer, like a young child who rebels against bedtime, but I’ve gotten to live my day and others will get to live theirs.

Everyone may have a different idea of the phases of their life. I recognize my childhood as probably the happiest epoch (and certainly the most carefree). Adolescence turned darker, more serious and introverted. Young adulthood was about intellectual and emotional discovery, personal growth, and career. Age fifty seemed a continental divide, all downhill on the far side. But it began a productive period of consolidation and “giving back” more than personal expansion. By age seventy I had begun to think of wrapping up loose ends in a phase of completion. Currently, I think about “making peace” with myself. Whatever that turns out to mean, it seems like a necessary preparation for a graceful finale. Of course, I only need to make peace because I have been at war. In my case, inner conflicts have mostly not had dramatic outer consequences. But I feel them no less.

I think of this program, of coming to inner peace, as an aspect of living deliberately, as though my life were a work of art I am trying to finish before a literal deadline. Every artist knows that a medium has its own properties that resist control. One’s life is no exception. There is nothing I can do to change the past, for example, though I could lie to myself or change it in memory. But art is not only about imposing one’s will on materials according to a preconceived vision; it’s also a kind of experimental research. Artists welcome accidents or unexpected effects. The game then becomes seeing what can be done with them. So it is with the course of one’s life, in many ways a series of inadvertent events, felicitous or no. What can I do now with what I have become? And now?

Of course, artists sometimes get frustrated and want to tear up the work and start over. Alternatively, they may just want to put final touches on something with which they are basically satisfied. Or maybe they are not as satisfied as they would like but understand the risk of over-working the piece, ironically spoiling it. Artworks are a canvas on which artists paint their inner struggles. Perfecting a work is a matter of coming to an inner resolution about it. And that is more than a matter of the objective product or of what someone else thinks of it, as important as those may be. I am now one of those grave men, nearing death, whom Dylan Thomas admonishes to rage. But frankly I do not see the benefit now of such impetuosity. It may provide momentary release, helpful in mid-life. What I seek instead is resolution before the final release.