The subject is no object

The parts of speech recognized in many languages reflect an inescapable duality: the fundamental divide between subject and object. To be a subject is to have a point of view. To be an object is to appear (to a subject) in and from a point of view. One cannot see where one is looking from, and what one does see is not the seer. This difference is more than a matter of location, space or physicality. Rather, it is a categorical difference so fundamental that nothing can undercut it because it underlies every act of cognition including imagination. I say this advisedly, because there have been many attempts to transcend this dualism. In my opinion, they do not succeed.

The subject can never see itself, nor can another subject see it. It is not, in fact, an ‘it’. It cannot even be an object of thought (as opposed to a physical object) except as a kind of abstraction. Any way you slice it, the subject is here—the point of view of the mind in question—and the object is there, some content of a mind’s consciousness. One can look in a mirror, but it is the physical body that one sees, not the subject who is doing the seeing. One might feel a configuration of sensations one is tempted to take to be one’s self, the seeming locus of one’s consciousness; but these are no more than subtle objects of attention. They are not the witnessing subject. One might nevertheless imagine this sort of experience to imply a “presence” underlying consciousness; but that is an act of imagination and not a logical inference. (For this reason, I have no more confidence in the hypothetical enduring Self of Vedanta than in the immortal Christian soul.)

Consciousness requires objects to be aware of. But do objects require subjects? Does the tree fall in the forest whether or not anyone sees or hears it? Naïve realism (or materialism) is the belief that objects can exist without subjects. Naïve idealism is the belief that subjects can exist without a physical basis or “real” objects to be aware of. It includes the notion that there are no physical objects, or that mind creates them. Such beliefs are extreme responses to the conundrum of subject-object dualism, trying to wish away the dualism itself. Materialism denies or ignores the subject. Idealism denies or ignores the independent existence of the object. It is more sensible to realize that subject and object enter conjointly and inseparably into what we know as consciousness.

This mutual involvement means there is no way the object “really” is, apart from how observers see it. Observation is an interaction of subject and object. Observers are necessarily physical agents, subjects embodied as objects. Human beings routinely assume that the way the world appears to them is just how it objectively is. In truth, how it appears to them depends on their state and their needs as living creatures.

Such subtleties do not stand in the way of human affairs. But they do have consequences. The subject/object dualism is at the core of certain problems in science, for example. Observation in classical physics presumes a clear divide between observer and observed: the act of observation ideally has little physical effect on the system observed. Moreover, the observer is supposed to be a fly on the wall, whose considerations or state ideally have no bearing on the world’s appearance. The naiveté of this standard was finally challenged by phenomena that could not be accounted for with classical models. Yet, the subject-object relationship undercuts any possible account, including quantum models. (Entanglement, for example, refers to ties between objects, not the entanglement of object with subject we are discussing.) Science is still implicitly about subjects trying to make sense of objects, even when the subject goes unmentioned.

Self-reference is spoken of as though it were actually possible. (One seems to self-refer in speaking of oneself, for example.) Strictly speaking, however, a subject cannot refer to itself. Logical paradoxes that involve self-reference arise from mishandling the categorical dissimilarity of subject and object. The putative self-reference is actually the reference of a subject to a linguistic or conceptual object. In the famous Liar Paradox, the statement “this sentence is false” appears to involve self-reference. (It contradicts itself because it also involves negation. The statement “this sentence is true” seems comparably self-referential, without being contradictory. In effect, it asserts that “this sentence is true” is true.) However, as the logician Tarski pointed out, “this sentence” is not interchangeable with “this sentence is true,” nor with “{‘this sentence is true’} is true,” and so forth. (Similarly for “this sentence is false.”) Rather, each reference is a separate object of a separate assertion, iterated on a different logical level, ad infinitum. The asserting subject always escapes to a realm logically distinct from what it asserts, just as the classical observer always stands outside the system observed.
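
To spell out that iteration, here is a minimal sketch in Python (my own illustration, not part of the essay’s argument), in which each truth claim wraps the previous sentence at a level above it, and a sentence that tries to evaluate itself never reaches a stable value.

```python
# A toy illustration (not the essay's): a truth claim about a sentence is a new
# sentence one logical level up, so the iteration never ends; and a sentence that
# refers to its own evaluation has no fixed point.

def assert_true(sentence: str) -> str:
    """Wrap a sentence in a truth claim one logical level higher."""
    return f"'{sentence}' is true"

levels = ["this sentence is true"]
for _ in range(3):
    levels.append(assert_true(levels[-1]))

for i, s in enumerate(levels):
    print(f"level {i}: {s}")

def naive_liar() -> bool:
    # "this sentence is false": its value is the negation of its own value
    return not naive_liar()   # unbounded self-reference

try:
    naive_liar()
except RecursionError:
    print("no stable truth value: the self-evaluation never terminates")
```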

While science and logic are not immune to dualism, religion is dualism writ large. The child innocently asks, “if God created the world, who or what created God?” I doubt that any learned theologian has ever given a satisfying answer. A subject can create an object, since every artifact was made by someone. But a subject cannot create itself. The notion that God is self-creating (‘I am that I am’) is an evasion. So is the notion of a Prime Mover, which simply refuses to consider an endless causal chain. This dilemma has been inherited by modern cosmology, which asks how something (the universe) could boot-strap itself from nothing (the unstable vacuum?) through some series of causal transitions. It aspires to show how an object (the universe) can self-create, without invoking a subject.

We are familiar with objects that can self-replicate, and phenomena that can self-organize. There are computer programs that can copy or modify themselves. And, of course, DNA copies itself, and an organism can self-modify and reproduce. It evolved somehow in the first place. A self-replicating device, even if only a string of computer code, must include instructions about how to copy itself in its entirety, which must include the instructions themselves. Von Neumann addressed the problem by separating functions. There must be a description (blueprint), a constructor mechanism, and a further mechanism to copy the description—which the constructor then installs into the new self-replicator. Presumably a dividing cell does something similar. Strictly speaking, however, these entities are not subjects. They are theoretical objects in the purview of biologists or programmers, who are the actual subjects involved.
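
Von Neumann’s separation of functions can be shown in a few lines of Python. This is only a toy sketch, under my own naming: a blueprint held as data, a constructor that builds from it, and a copier that duplicates the description verbatim, so the blueprint never has to describe the act of its own copying.

```python
# A toy model of von Neumann's self-replicator architecture (names and structure
# are illustrative): description (blueprint), constructor, and copier. The
# constructor builds the offspring, then installs a copied blueprint into it.

from dataclasses import dataclass, field

@dataclass
class Replicator:
    blueprint: dict = field(default_factory=dict)   # the description of the machine

    def construct(self, description: dict) -> "Replicator":
        """Constructor: build a new machine according to a description."""
        # (a real constructor would assemble the parts listed in `description`;
        # here the body is just instantiated, and the blueprint is installed later)
        return Replicator()

    def copy_description(self, description: dict) -> dict:
        """Copier: duplicate the description verbatim, without interpreting it."""
        return dict(description)

    def replicate(self) -> "Replicator":
        offspring = self.construct(self.blueprint)                    # build from the blueprint
        offspring.blueprint = self.copy_description(self.blueprint)   # install the copied blueprint
        return offspring

parent = Replicator(blueprint={"parts": ["constructor", "copier", "casing"]})
child = parent.replicate()
print(child.blueprint == parent.blueprint)   # True: the description propagates intact
```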

Causal explanation involves only interactions among inanimate objects, not the actions or interrelations of subjects. This categorical distinction may seem of interest only to philosophers and brain scientists. In everyday life, however, we are continually beset with the challenge to imagine the subjective experiences of others when all that is presented to our senses is their form as objects. We are used to dealing with inanimate things, along with the physical bodies of people and creatures. Sometimes we fail to make an essential distinction among such objects, for it is our biological nature to treat all objects as potential resources to exploit or threats to avoid. As Buber pointed out, from that fundamental position everything is object to the subject, including some objects that might also be sentient beings like oneself. Yet, another sort of relationship is possible and necessary for human beings. This is the relationship between I and thou, which is categorically different from the relationship of a subject to sensible objects.

Underlying the ethical problem of behavior toward other subjects is the physiological fact that one’s nervous system serves one’s own body uniquely. Empathy is a deliberate effort to compensate for this isolation of subjects in separate bodies. It attempts to imagine an experience that animates the other’s body and resembles one’s own experience—to walk in the other’s shoes, as we say. It is difficult to imagine the experience of another body when one has in mind one’s own use for that body as an object. It is particularly difficult to imagine the pain in another body when not in pain oneself, or to imagine someone’s feelings and motivations when one feels quite differently or is unsympathetic. The temptation is to shrink from this challenge and identify only with one’s own point of view as a subject—opposed to the other as mere object, whose experience then is not foremost or does not count.

This retreat is aptly named depersonalization. While deemed morally vicious in extreme cases, fundamentally it amounts to a mundane failure of nerve. Through an unbecoming yet commonplace cognitive trick, one withdraws back into the subject-object relationship that naturally dominates the life of the organism. However, the joke is ironically upon the subject who thus retreats, who is thereby depersonalized along with the victim. For, the depth, the mystery, and the privilege—indeed, the very distinction—of being a conscious subject lies in the relationship to other subjects. Granted, the relationship to objects is necessary and inevitable for biological beings. But human beings have never settled for being merely biological or merely objects.

I, Thou, and It

Martin Buber underlined the deep significance of two complementary ways of relating: I to It and I to Thou. It is no coincidence that these correspond to parts of speech. For, the relationship of subject to subject (me to you) is above all a function of language, of two-way communication. It is fundamentally unlike the relation of subject to object (me to that), which is unilateral and potentially exploitative. One perceives, uses, and physically interacts with objects; but one communicates with other subjects.

A basic survival mode of organisms is to treat everything as an object to consume, manipulate, or avoid. Human beings live in that biological context. So, for us too, the I/it relationship is fundamental and unavoidable in the material world. Yet, we are social animals that have developed fully grammatical language, and have developed ideals and codes of social conduct as well. One of these ideals (which implies certain conduct) is the notion of person. A person is a subject who gives and receives communications. One is a perceiver (a mind) in relation to various objects of perception and thought. But one is a person only in relationship to other persons.

Buber goes further, to claim the I/thou relation as a stance that can apply where one wills. At the least, it abstains from the I/it stance. By default, it should apply at least to human subjects (though their bodies are objects). But it might be applied as well to animals, plants, non-living things, and abstractions like “nature” or “the universe.” It is a way of relating that does not depend on what is related to. For the mystic Buber, the epitome of this stance is the relationship to God—who is always transcendentally a Thou and never an it. I am content to have this stance be essential within human affairs, without bringing God into it. I understand the motive: to write the principle as large as possible. That might work for Buber, who had the mental fortitude to refuse to conceive God as an object (that is, to refuse to conceive “Him” at all!). In general, however, religious believers have demonstrated again and again their willingness to treat God—and other persons—as mere objects to manage.

The I/thou relationship is a dynamic of mutual respect between peers. The dynamic of the I/it relationship is not respect, but control over objects. It is not a symmetrically mutual relationship, but a one-way stance of mastery. For modern man, “objects” include animals, trees, minerals, and nature-at-large—anything to be used as a resource. Even when the object is incomparably vaster and more powerful than the insignificant subject (such as a planet or the cosmos at large), it is cut down to size by the stance of control. That the universe deigns to tolerate our puny existence absurdly gives us courage to mount an offensive attack.

Reason and analysis are frequently brought to bear to prosecute the I/it stance. But the human psyche is much deeper than reason. Underneath the presumption to dominate nature lurks a primordial fear that we are out of our league. It is the fear that we are dealing with a stupendous Subject, who might be angered by our hubris, rather than a mere object under one’s thumb. I suspect this is where the “fear” of God comes from; for God projects the idea of an all-powerful person with whom one is necessarily a subordinate. We address the divine familiarly as “Thou,” but hardly as a peer. The wrath we fear is retribution for the insolence of presuming to trifle with a powerful superior. At the same time, the ideal of power we humanly aspire to is projected as divine control over everything, including nature and us. This is likely a displaced version of nature’s power over us, personified as a being one could attempt to placate, as a child learns to manipulate its parents. In this way, we vicariously trump nature, on which we are utterly dependent. (If God created the world, didn’t we create God?) However, such a relationship with the Creator does not qualify as I/thou in Buber’s sense. Rather, it merely turns the tables on nature.

The idea of legal rights—especially of universal human rights—hinges on the I/thou dynamic. Persons have rights and responsibilities; objects do not. In patriarchal society, women, children, slaves, and animals fell in an intermediate category. While children have a will of their own, adults generally do not consider them peers, but persons in training, who must be controlled for their own good. In class society, only the ruling class had full rights. Equality is not automatic even within the group, let alone accorded to other groups. It used to be routinely denied on the basis of race, gender, or class. The idea of universal “suffrage” is relatively new. In a long painful process, some groups within society demanded that the dominant group grant them equal status, as though it were not to be assumed. This was resisted, because in theory personhood, with rights, also exempts its bearer from being treated as a useful object. Slavery, for example, was justified on the basis of race, by denying to some people status as persons before the law.

Buber admits that the default stance is I/it. He does not invoke biology, but clearly this is the organism’s natural stance in regard to its environment. Otherwise, natural selection would not have allowed the organism to establish itself. He attributes a spiritual side to humanity, which consists essentially in embracing the relation of I to thou. This does not replace the fundamental relation of I to it, but augments it and puts it in perspective. It reclaims the freedom of the subject to choose an alternative stance. It exercises the ability to transcend mental limits, since one must abandon any preconception, plan, desire, scheme, or intention to use the other. In effect, the other becomes literally useless. To truly meet, one must temporarily lose one’s mind.

This may seem to set a nearly unattainable standard for human relationship. It requires us to push beyond appearances (the literal content of experience) toward some unseen essence of the other, which seems to pull the strings while coyly hiding, as it were, behind the physical puppet we recognize as their body. This is metaphor, but it needn’t mystify. It does not propose an animating soul or a mystical quest. It merely reminds us to treat our fellows as we would have them treat us. It asks us to leave our weapons at the door and our tools as well. It invites us to take the stance of unknowing, at least in regard to any thou. And it is this that enables us to be fully persons ourselves.

Left to our own devices

Social media are paradoxically anti-social. A common scene goes like this: a group of people around the dinner table ignore each other, having wandered off in separate worlds on their respective “devices.” Even before the pandemic, Facebook had displaced face-to-face contact. Ironically, modern connectivity is disconnecting us. How can this be?

While this phenomenon is an aspect of modern technology, perhaps it all began with theater and the willing suspension of disbelief. On the stage are flesh and blood actors, but the characters they play belong to another realm, another time and place, which draws attention away from the immediate surroundings. The story they tell is not the here-and-now in the theater hall with the others in the audience. Thus, we enjoy a dual focus. Theater morphed into cinema, which morphed into the pixelated screen. But the situation is the same: there is a choice between worlds. One attends either to the proximal here-and-now reality that includes the physical interface (iPhone, TV, computer) or to the distal reality transmitted through it. Even the telephone presents a similar alternative. Is the locus of reality my ear pressed against the receiver or is it the person speaking on the other end? In the case of the real-time conversation, one focuses on the person at the other end as though they were nearby, which is a sort of hallucination. In the case of the digital device, the distal focus involved in texting chronically engulfs the participants in a sort of online role-playing game that has little to do with the here and now.

Our own senses offer a parallel choice. For, our natural senses are the original “personal devices.” Like a laptop or mobile phone, they serve as an interface with the world. Yet, they can be used for other purposes. Our sensations monitor and present a reality beyond the nervous system; yet we have the ability and option to pay attention to the sensations themselves. We can focus on the interface or the reality it is supposed to transmit. We thus have the ability to entertain ourselves with our own sensory input. Even the visual field can be regarded either as a sort of proximal screen one observes or as a transparent window on the real world beyond. Hence, the fascination of painting as a pure visual experience, whether or not it represents a real scene. Hence, the even greater fascination of the cinema. I suggest it is this ambiguity, built into perception, that renders us especially susceptible to the addictiveness of screens.

Modern technology provides the current metaphor to understand how perception works. The brain computes for us a virtual reality to represent the external world. In that metaphor, the world appears to us as a “show” on the “screen” of consciousness. The participatory nature of perception underlined by the metaphor gives us some control and responsibility over what we perceive. This has a social advantage. Subjectivity is a capacity evolved to help us get along in the real world, by realizing we are active co-producers of this “show,” not just passive bystanders looking through a transparent window. Yet, by its very nature this realization permits us to focus on the screen rather than the supposed reality it transmits. This freedom, afforded by self-consciousness, is a defining human ability. But, like all freedoms, it can be misused. We turn away from reality at our own risk.

There is a difference between the senses as information channels and digital devices as information channels. The senses are naturally oriented outward. We survive by paying attention to the external world, and natural selection guarantees that only those organisms survive that get reliable information about their environment. There is little guarantee that electronic channels give us reliable information, however. They can do that, of course, but it depends on how they are used and who actually controls them. One can use a cell phone to contact someone for a specific purpose, just as one can use a computer to seek important information. But a mobile “phone” is not just a cordless telephone whose advantage is convenience. It functions more as pocket entertainment. The computer screen, too, is as likely to be used for video games or other entertainment that has nothing to do with reality. Even “googling” stuff online is sometimes no more than a trivial entertainment.

Worse, digital devices are not part of our natural equipment, and thus not under our control in the way the parts of our bodies are. Back in the 17th century, Descartes was perhaps the first to recognize this dilemma. He realized that the information flowing into the brain could be faked—even if it came from the natural senses, if someone malevolent were in a position to tamper with your nervous system. If your sensory input could be manipulated, then so could your experience and behavior. He reassured himself that God (today we might say nature) would not permit such deception by the natural senses. But there is no such guarantee against deception by those who control the information that comes to us through our artificial senses—our external devices.

In the ancestral environment of the wild, being distracted by your experience as a form of self-entertainment would have been frivolous and lethal, if at that moment you needed to pay close attention to the predator stalking you or the prey you were stalking. (Daydreaming, you could miss your meal or become the meal!) People then had designated times and venues for enjoying their imagination and subjectivity—such as story-telling around the campfire. In the modern age we have blurred those boundaries. We now carry an instant “campfire” around in our pockets. While mobile phones would have been highly useful for coordinating the hunt, as currently used they would have dangerously disrupted it. They could be used by a malicious tribe to steer you away from your quarry or over a cliff.

Of course, we no longer live in the wild but in man-made environments, where traditional threats have mostly been eliminated (no saber-tooth tigers roaming the streets). We can now enjoy our entertainments at the push of a button. One can further argue that the human world has always been primarily social and that social media are merely the modern extension of an ancient connectivity. However, external threats have hardly ceased. (If they had, there would be no basis for interest in the “news”.) The modern world may be an artificial environment, a sort of virtual reality in which we wish to wander freely; but it is still immersed in the natural order, which still holds over us the power of life and death. It may seem to us, at leisure in the well-off parts of the world, that no distinction is really needed anymore between the virtual and the real. However, this will never be the case. Even if hordes of transhumanists or gamers migrate to live in cyber-space, ignoring political realities, the computers and energy required to maintain the illusion will continue to exist in the real world. Someone will have to stay behind to mind the store, and that someone will have enormous power. (See the TV series Upload, or for that matter The Matrix films.)

The new virtual universe no doubt has its own rules, but we don’t yet know what they are. We knew what a stalking tiger looked like. We still know what a rapidly approaching bus looks like. But we are hard pressed to recognize the modern predators seeking our blood and the man-made entities that can run us down if we don’t pay attention. In the Information Age, ironically, we no longer know the difference between what informs us and what deforms us. Instant connectivity and fingertip information have rendered obsolete the traditional institutions for vetting information, such as librarians, peer review, and trusted news reporting. Everything has become a subjective blur, a free-for-all of competing opinions and suspected propaganda. Yet, because we are genetically conditioned to deal with reality, mere opinion must present itself as fact to get our attention and acceptance. This provides a perfect opportunity for those who control information channels to manage what we think.

We are challenged to know what to believe. The downside of the glut of information is the task of sorting it. It is for the convenience of avoiding that task that we rely on trusted sources to do it on our behalf. We’re used to such conveniences in the consumer culture, where information has been appealingly packaged as another consumer product, just as we’ve come to trust what is on the supermarket shelf, in part seduced by its glitzy over-packaging. When the trust is no longer there, however, the burden falls back on the consumer to read the fine print on the labels, so to speak. (More extremely, one may learn to grow one’s own food. But, then too, weeding is necessary!) In a divided world, with no rules of engagement, the challenge of sifting information is so daunting and tedious that it is tempting to throw up one’s hands in despair and regard the clash of viewpoints as ultimately no more than another harmless entertainment. That would be throwing the baby out with the bath water, however, since presumably there is a reality, a truth or fact of the matter that can affect us and which we can affect.

It is convenient, but hardly reasonable, to believe a claim because one has decided to trust its source. By the same token, it is convenient, but not reasonable, to automatically dismiss the claims of suspicious sources because other parties misuse the information. (Both are versions of the genetic fallacy, which judges a claim by its source rather than its content; the dismissive version is the argumentum ad hominem.) The reasonable approach is to evaluate all claims on their own merit. But that means a lot of time-consuming work cross-checking apparent facts and evaluating arguments. It means taking responsibility for deciding what to believe—or, alternatively, admitting that one doesn’t know yet. It is far easier to simply take a side that has some emotional appeal in a heated controversy, and be done with the labors of thinking. It’s easier to be a believer than a doubter, but easier still to sit back and be entertained.

La Belle Planète

The discovery that planets are actually typical in the universe is one of the great achievements of modern astronomy. From an understanding of how solar systems form, it follows that in our galaxy there could be on average at least one planet for each of more than a hundred billion stars. Most of these would not resemble the Earth and could not foster the development of life. But with such staggering numbers, the odds are that life in some form should actually be abundant in the galaxy. Somewhere there must be civilizations capable of space travel or of sending trans-galactic messages. This raises the question of why ours is, so far, the only planet known to support life. Unless you are one of those folks who believe the aliens are already here, walking disguised among us, the question then also arises: where are they? If aliens are so probable, why have they not made their presence known to us? Named after the physicist Enrico Fermi, who posed it in the 1950s, this question is known as the Fermi paradox and has stimulated many creative answers. I will focus on some that seem most relevant to our time.
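
The “staggering numbers” are easy to sketch. The following back-of-envelope estimate, in the spirit of the Drake equation, uses placeholder fractions of my own choosing (none are figures from the text); the point is only that even stingy guesses leave a large expected count.

```python
# A back-of-envelope estimate in the spirit of the Drake equation.
# Every factor below is an illustrative guess, not a claim from the essay.

stars_in_galaxy  = 2e11    # on the order of a couple hundred billion stars in the Milky Way
planets_per_star = 1.0     # "at least one planet per star," on average
f_earthlike      = 0.01    # fraction of planets even roughly Earth-like (guess)
f_life           = 0.1     # fraction of those on which life arises (guess)
f_civilization   = 0.01    # fraction of those that produce technology (guess)

expected = (stars_in_galaxy * planets_per_star *
            f_earthlike * f_life * f_civilization)
print(f"expected technological civilizations: {expected:,.0f}")   # ~2,000,000 with these guesses
```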

One sobering explanation is that technological civilization is doomed to self-destruct, or is inherently self-limiting to the degree that it cannot penetrate the great distances between stars. Of course, any such thinking presumes a great deal about the possible manifestations of intelligence. For example, a “technological civilization” would be a form of biological life—and thus a product of ruthless natural selection—or would be derived from such a form (robots). Like us, aliens would not be angels but craven animals who exploit other life forms. We need only look at slaughterhouses and how ideals of progress are regularly thwarted by human nature.

We owe our civilization to our great numbers and to fossil fuels, which are the remnants of past organisms. We have nearly used up this resource in ways not directly connected to the goal of space travel, in the process polluting and changing our world in ways that may limit future efforts to enter space. The sheer success of our species at dominating this planet has created living conditions which favor the spread of diseases that may ultimately cripple our civilization to such an extent that space travel is not feasible. SpaceX notwithstanding, we may already be experiencing the beginning of this with the coronavirus epidemic. We still live under the shadow of nuclear annihilation, not to mention biological warfare. Climate change may disrupt society with migrations and armed conflicts to such an extent that it cannot afford space exploration. Rampant nationalism and factionalism may preclude the cooperative global effort required. The violent and competitive nature at the base of our existence, reflected in the notion of conquering space, may be the very thing that prevents it. Our counterparts on other planets could face similar obstacles.

Another possible explanation is that aliens might simply not be interested in space travel. They could even lose interest in the external world altogether. This could happen if, like modern humans, they became too self-absorbed in entertainment, or decided to migrate to cyberspace, where consciousness can dwell in a virtual world. We have long had spiritual and meditative traditions that focus inward rather than outward, as well as myths and narratives of non-physical realms. Especially in a circumstance of limited or dwindling resources, and in the face of mortality, an alien civilization might choose some form of mental life over physical participation in the material world. Already our own civilization is moving in this direction, with digital entertainments and communication devices, not to mention the perennial resurgence of religion.

There is also the possibility that we have simply not been searching for extra-terrestrial signals long enough. We have only been able to receive and transmit radio emissions for little more than a century, and optical signals not much longer. Even the telltale heat signature of civilization has existed for but a few millennia. Technological development has been exponential, with most rapid change in the past couple of centuries. We can expect the change in the next couple of centuries to be far greater. Extrapolating, some people predict an imminent “singularity,” a point of no return when automation takes technology beyond human control. Perhaps the forms of AI that will control this world (or others) will have no interest in space travel and may even make the planet inhospitable to biological life. Some transhumanists foresee artificial life as an advancement over humanity. Equally likely seems the prospect that a post-human world could be dominated by artificial versions of viruses, bacteria and insects, rather than high-level artificial intelligence capable of space travel.

The fanciful vision of inter-galactic space travel, which fired our imaginations in Star Trek and Star Wars, projects characteristic human ambitions, values, and social dynamics on a cosmological scale. There is no sound reason to expect aliens to have a humanoid form, however, much less to gather in saloons at galactic outposts. Yet, if alien intelligence is a product of natural selection, as on this planet, there would be every reason to expect it to follow the fundamentally opportunistic patterns we see on Earth. Civilized aliens could only develop on other planets in the context of their biosphere, upon which they would be as dependent as we are here. Granted that we humans attempt to set ourselves apart from our biosphere, we might expect space-faring civilizations to have developed ideals and codes of behavior that facilitate cooperation, and a science that facilitates control of nature. And, of course, mathematics.

There is an understanding among scientists and science-fiction writers that the math glorified on this planet would be a lingua franca among galactic civilizations—not the same symbols, of course, but similar concepts deemed universal. However, the biological basis and parochial nature of our mathematics is even less recognized than that of our physics. The basis of our math is the natural numbers, which abstract the integrity of discrete “objects” such as human beings perceive in their physical or mental environment, and which they themselves exemplify as individuals. The finite steps of a proof, calculation, or verification exemplify discrete acts of an agent upon a world of objects it can manipulate. This corresponds to primate experience and action in an environment consisting of countable things, whether tangible or abstract. What if, at some level perceivable to aliens, the world does not consist of discrete objects and actions? Would another concept of mathematics and of computation be more suitable? What about an alien whose body does not amount to a discrete object? Of course, the technology of space travel may require manipulating and assembling what we take to be countable objects. Yet, an amorphous or non-localized creature might have ways to change its environment analogically—for example, through chemical emissions. (Indeed, this is how most self-regulation works within the organisms we know.) Computation for such an alien would not be digital processing, but direct covariance with environmental changes. What need or use could such a being have for natural “laws” as algorithmic compressions of an input? Would such a math, and the biology and mentality behind it, lend itself to space travel?
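
The contrast can be made concrete with a small sketch, entirely my own illustration under invented names: a “digital” computation counts discrete objects in finite steps, while an “analog” response simply covaries with a continuous signal, the way chemical self-regulation does, without counting anything at all.

```python
# A toy contrast (illustrative only): counting discrete objects in finite steps
# versus a graded response that covaries with a continuous environmental signal.

def count_prey(sightings):
    """'Digital': a finite sequence of discrete steps over countable objects."""
    return sum(1 for s in sightings if s == "prey")

def secretion_rate(concentration, k=2.0):
    """'Analog': output varies smoothly with a chemical concentration (saturating curve)."""
    return concentration / (k + concentration)

print(count_prey(["prey", "rock", "prey", "shadow"]))   # 2 -- a discrete answer
for c in (0.1, 1.0, 10.0):
    print(round(secretion_rate(c), 3))                  # 0.048, 0.333, 0.833 -- graded, not counted
```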

I had an epiphany while watching the original Star Wars movie. I realized that this planet—our home in space—is an alien world and that we are the bizarre aliens that inhabit it! Whether or not any of those fancied other interstellar watering holes exist, the vision of them was created here, modelled on what is familiar to us. Just as children are attached to their mothers, we naturally find beautiful the place of our origins, without which we could not have come forth into existence. That first Star Wars film was released when the Province of Quebec still issued license plates with the motto: “La Belle Province.” Just to let all that alien traffic know how we feel about home, perhaps one day our missions to the stars will bear license plates that read “The Beautiful Planet,” in whatever language then prevails.

Why the grass is green(er)

Close your eyes and try to imagine how the world looks when no one is looking. You can’t, of course, because the “look” of something necessarily involves someone looking. This conundrum has bedeviled philosophers for centuries. Some people think we create our own reality. Others think our experience is dictated by atoms and genes. But our experience depends on both factors together—the inner and the outer. It’s always an interaction of self and world.

If the grass looks green in the springtime, there must be something about it different from brown grass at the end of summer or the red skin of a ripe cherry. That something is chlorophyll. But why does chlorophyll produce in us the sensation of greenness rather than of the redness associated with the cherry or the brownness of dry grass? This sort of question involves the perceiver’s active role in perception and the evaluative process at the base of consciousness. It calls to mind the adage about the grass on the other side of the fence. Perhaps greenness signifies something to the motivated perceiver, in the way that pain signifies tissue damage and hunger signifies a need for nourishment. But what?

This is a question that falls through the gap between science and philosophy. Science has defined itself as a study of the natural world, not of the perceiver’s subjective experience. Consciousness has long been an embarrassment for the physical sciences. Even psychology often embraces this “objective” approach; for decades it was more about the behavior of rats than the subjective experience of people. Philosophy, on the other hand, is willing to ask apparently silly questions, like: “Could I experience as red what you experience as green?” The answer, I hold, is no: to the degree we are physically similar, you and I can expect to have a similar experience of chlorophyll-bearing leaves. Because we are variations of the same creature, we evaluate perceptual input similarly.

I think of conscious experience as a conversation the brain has with itself. Words have meanings by social convention. These arise through common interaction with the world to which they largely refer. Perhaps sensations get their characteristic quality in a similar way—as “words” in the language of the senses. The quality of greenness is a convention (not social but genetic) arising through evolutionary interaction with the world over thousands of generations. In other words, sensations must have an evolutionary history, just as words have an etymological history. The “meaning” of a perceptual quality such as greenness refers to its role within the brain/body’s internal communications, just as a given word plays a specific role within human communication. We have, of course, a grand ability to play with words. To a limited extent, we are able to play with perception too. Though I cannot will to see green as red, yet imagination, hallucination, drugs, painting and stained glass all give me access to greenness as a pure experience disengaged from the things usually associated with it. Experiments demonstrate that human perception can indeed adapt to color filters that switch red for green input to the eyes. With time, the grass eventually returns to looking green the way it normally does.

What the world looks like “when no one is looking” can only mean what it looks like to the organism one happens to be. There is no “real” way things look apart from someone looking, just as there is no intrinsic meaning to words apart from language users. Naturally, it is functional for some organisms to distinguish wavelengths of light. However, the behavioral ability to distinguish red from green does not answer the question of why chlorophyll looks green rather than red, let alone why the world looks like anything at all. Yet, it is a clue. For, one’s conscious perceptual experience is organized in a particular and largely consistent way that reflects one’s needs as a living organism and as a member of the human species in particular.

The question may be likened to asking why a particular meaning is denoted in the English language by a particular word, written and pronounced its particular way, rather than by some other symbol. For the native language user, the association seems natural and unquestionable, though of course it is logically arbitrary and a product of historical accident. The subjective experience of qualities—in this case color—arises from sensory input in a way analogous to how meaning arises from the sounds or characters of language. Some symbol must be chosen, and it will inevitably come by convention to seem imbued with the meaning it has been made to convey. So, it is backwards to ask why grass appears green; rather greenness is imbued with the association of grass and other verdure. Greenness is the way we visually experience the totality of associations related primarily to chlorophyll.

Stability and structure in the world are largely matched by the stability and structure of the organism’s perception. Organisms must evaluate stimuli in order to respond in a way that allows them to survive at least long enough to reproduce. Perception is about evaluation, which is expressed in the way things look. For example, pain, fear and hunger evaluate the significance of a stimulus in terms of the body’s needs. The behavioral “meaning” of pain could be compared to the verbal warning: watch out! Whatever the meaning of greenness, it is not as evident as that of bodily sensations, nor does color bear their urgency. Color does have associations and can bear emotional charge. However, like hearing—and unlike touch—vision is a distance sense. Because of distance from the stimulus, there is time to monitor events and evaluate them in relative safety and detachment. Because photons carry negligible momentum, their physical impact on the retina is nil. In contrast, pain occurs when the body has already been damaged through direct contact.

Phenomenal qualities in general are like the intelligible meanings that emerge through the babble of spoken syllables or the symbols on a written page. The greenness of green, like the hurtfulness of pain, contains a “self-evident” meaning that arises in much the way that meaning in language does. Just as squiggles on a page come alive as a story, “qualia” are how the brain/body represents and interprets to itself the internal connections it has itself made to bear significance for it. I call this assertive process fiat—a command as in “Let there be light!” Or a supposition as in “Let x stand for such-and-such.” This is the fundamental conjuring act that calls forth the world into consciousness. It is the action of an agent rather than the reaction of a passive causal system. In this case, the brain/body decrees with royal authority: “Let there be green, to represent grass.”

This brings to mind another “silly” question of philosophy: could a person behave exactly as they do, in every detail, but without conscious awareness? Again I say no. The fact that we are not “dark” inside is functional; conscious experience is essential to human survival. There must be some way that the world looks and sounds and smells and feels if one is to respond to it in all the sophisticated ways that human creatures do. There must be a story we narrate to ourselves not only in words, but directly in the language of the senses. Given that one is conscious at all, and can distinguish certain wavelengths, the grass looks green first of all because it must look somehow. Secondly, it looks green because that experience is the tag assigned to a particular input, just as one has learned to tag that input with the word ‘green’. The word may seem inseparable from its meaning, just as greenness seems inseparable from a certain wavelength of light, but both are conventions asserted by fiat.

What can we learn from the philosophy of perception? First and foremost: that our experience is not an open window directly revealing the world as it “really” is; but neither is it a private illusion. Instead, perception is an interaction between a living organism, with its needs and drives, and the external world (whatever that turns out to be). Consciousness is produced by the brain/body in response to physical stimuli. It is a co-creation of mind and world. Understanding that enables and requires us to take responsibility for how we interpret our experience as well as how we behave in response. That’s a daunting challenge, to be sure, since two factors operating jointly always result in ambiguity and uncertainty. Yet, one cannot afford to ignore either factor for the sake of certainty or convenience. What appears self-evident or objectively true is always open to interpretation. Appearances cannot be taken at face value. We must always question our assumptions and the motivations that underlie them.

The Riddle of the Cheshire Cat

Are you your body, your consciousness, an immaterial soul, a program running on a protein computer, a person before the law, a product of biological and social evolution? Those are but some of the concepts of self that come to mind in response to the Cheshire Cat’s famous question: Who are you?

A possessive pronoun designates the body: my body, as though it were a thing separate from the presence I feel “myself” to be, and belonging to it. This presence is no object, but a subject. But let us back up, since the thing one takes a body to be is already a learned notion. Even before birth, what we actually begin with is sensation. When the eyes open, we are flooded with visual appearances coordinated with other sensations in the body and limbs. We feel the body and the sense of will through which we learn to move its parts. And we see it as we see other objects in space. We learn that we can move other things by moving this one, from which we can never get away. (According to Piaget, the concept of causality, as an impersonal force acting between distant objects, derives from the early experience of making our own limbs move.) We acquire knowledge about the body as a physical entity among other objects—constituted of organs, cells, molecules, etc. It becomes an object of disinterested study as well as the highly interested source of all information about the world, to which one is connected like no other object. Since it is “me” who gains and uses this information, the body seems like my personal instrument, through which “I” interface with the world.

When you ask people where they feel themselves to be located in the body, most reply that it is behind the eyes or mouth. (Other answers might include in the chest or solar plexus, the groin, the limbs, etc.) While earlier civilizations identified the soul with the breath and lungs, or the vital principle with blood, we moderns identify our psychological life with the brain and now, perhaps, with genes. We have also inherited the mechanist metaphor through which to view the body—and the rest of the world—as a machine. Descartes proposed that the human body (like the animal body) is effectively a machine, while the human person (the locus of consciousness) is immaterial, not part of the machine but its operator.

Though that view strongly reflects the intuitive sense of being separate from the body, it is disputed by modern science, which insists that consciousness and the self are brain functions. However, science has hamstrung itself by its own objective focus, in such a way that it has not succeeded in explaining consciousness, but only in identifying how it correlates with physiological and brain states. (As they say, correlation does not imply causation.) Indeed, describing how objects interact with other objects does not seem appropriate to explain consciousness, which always involves a subject as well as objects that are interacting. This relation of subject to object is enshrined in many languages as parts of speech.

The sense that “I” am not another object in the world (though my body is) led easily (though not logically) to the notion of the soul or subtle body. Theologically, the soul is a quasi-material thing (think of ectoplasm and ghosts), an awkward mix of subject and object. It explains nothing, while reassuring one of having some substantial basis that could survive death. Smart as he was, even Descartes was caught in that fallacy. (I suspect that he knew better—since Aristotle had known better long before—but had to pay lip service to traditional theology in order to evade the Inquisition.) Nowadays, scientists hold that the self dies with the brain; but they still cannot explain in traditional scientific terms how a living meat machine can produce consciousness and the sense of being a self.

Enter the computer metaphor. Just as the digital computer is the most subtle, complex, and general machine—capable of simulating any other machine—so the brain is the most plastic organ in the body, capable of simulating the external world. A great deal of the mechanics of cognition has been explained by writing programs that can accomplish with computation the perceptual tasks that the brain accomplishes naturally. However, it is cognitive behavior that is simulated and explained, not the subjective experience of being a conscious self. That challenge is now usually known as the hard problem of consciousness, to distinguish it from the “easier” problems of explaining behavior causally. However, I don’t believe that even the behavior of organisms, much less consciousness, can be explained strictly in the sort of terms that apply in physics and chemistry to inanimate matter.

The most helpful contemporary metaphor to explain consciousness, experience, and the nature of the self is virtual reality. This extends the computational metaphor, since VR runs on computers. What we know as consciousness or phenomenal experience is a VR program the brain runs to simulate the external world. What you know as your self is your bodily representative in that VR world, your avatar. How you see your virtual body, like how you see the virtual world, is guided by your real body’s real interactions in the real world. You have evolved to see the world a particular way in order to survive—that is, to represent it a particular way in your VR. That representation—that seeing—is your VR world. Unlike actual VR (where you put on and take off goggles), you cannot quit the VR program run by the brain in order to see the world as it “really” is.

Your virtual self (that is, you!) is as much a body function as the brain that runs the VR program. It exists to serve the needs of the body, although it has a limited autonomy that allows it to do things that are not in the body’s interests. I think of this agent as the CEO of a corporation (corporation literally means body—in this case, a corporation of share-holding cells). In its executive role, the CEO mostly doesn’t interfere with the running of the corporate machine. Consciousness monitors the external and internal environments, and their relationship, troubleshooting in situations when automatic programming can’t adequately do the job. But consciousness isn’t just on or off. As you’ve no doubt noticed, there are degrees and domains of attention. Following a familiar route, one can drive a car more or less “automatically,” only becoming fully attentive when required to make some decision in novel circumstances. Meanwhile, the CEO is free to daydream or play golf.

We are considered agents responsible before the law. Unlike machines, we are considered to have free will—that is, to be the originators of our actions. Causal explanation, appropriate to material systems, does not usually get us off the hook. In principle, the buck stops with a free agent. This runs counter to the modern understanding of the body as a causal system whose actions can be determined by internal and external factors. Perhaps one day machines will have as much free will as we do, will be considered legal persons, and will be accountable to the law. At that point, they will probably no longer be considered machines and will have no more license than human beings to have their behavior excused on causal grounds.

Yet, nature overall is considered a causal system, including the whole biosphere. From that point of view, as products of evolution, we are as determined in our nature as other creatures. The difference is that we are aware of this and have developed some capacity to override our programming, change it, or compensate for it. It may well be that this capacity itself is an evolutionary product that has served our survival. On the one hand, though we still may not like it, we have by now largely outgrown the notion that we stand outside the natural order. On the other, the whole of human culture is a valiant attempt to do so.

Certainly, there are aspects of being natural creatures that remain limiting, when not utterly horrifying. For example, we die; and we live by killing other creatures. We now know that even our integrity as physical organisms is illusory. It is baffling enough to think that this body is a complex configuration of trillions of cells that work together for mutual benefit under the umbrella of a common DNA. Yet, those cells are outnumbered ten to one by a menagerie of parasites and freeloaders that share the same skin but not that DNA. If it was hard before to identify with one’s liver—or, for that matter, one’s brain—it is all the harder now to identify with this bag of “foreign” creatures. (The cells now “officially” comprising the body might even once have been parasites that eventually joined the cause.) Now one must defer to genes that run the show—and not only “my” genes but those of countless parasites. The self barely hangs on as a legal and social entity, and is no longer credible as a spiritual entity. On the other hand, even the atoms of the body are hardly material in the traditional sense, but mostly empty space or some nebulous field. Who you are, then, is not so easy to answer when the self evaporates, leaving only a Cheshire grin. Perhaps the answer depends on who wants to know and why.

Feeding the Hand that Bites You

Economists and game theorists define rationality as playing to win. They assume that a political system, like an economy, consists of players who try to maximize their own advantage. Altruism and cooperation are considered strategies in this competition. Capitalism and individualism are assumed. In such a context, it is difficult to grasp why people in a lower economic bracket (the exploited) seem irrationally eager to support politics advocated by those in a higher income bracket who exploit them. I am thinking of the working poor in the United States, though there have been parallels and precedents elsewhere. Has Trump had such support among the nearly dispossessed simply because of public ignorance and media propaganda? Or are there deeper reasons that make gullibility somehow willful?

Vaunting individualism, America has long been suspicious of any form of collectivism, including social initiatives that could best be administered by a strong federal government. Hence, the ongoing resistance to “socialized” medicine, for example. Maybe it goes back to colonial experience under British rule. Certainly, the New World offered freedom from the stultifying feudalism of Europe, with its fixed class structure, where violent revolution seemed the only way to improve life. By comparison, America was the land of opportunity—for individuals to make their way without banding into associations that might limit their freedom.

Though Americans call their war of independence a revolution, it was in no wise a social revolution like the French or Russian ones. On the contrary, the American revolution was a land-grab, mostly from indigenous peoples, by an already propertied class. It would also have been evident to the colonists that England was about to abolish slavery. The slave economy of the South could carry on only with separation from Britain. Hence, the spirit of independence that fired the Constitutional Convention served very particular interests, which needed the common man to fire the actual bullets. Even then, backwoodsmen and small farmers were co-opted into the sabre-rattling schemes of the propertied class.

A century or two later, in the full swing of industrialization, this willingness of the common man to support the elite would come back to bite workers, as they continued to embrace the values of their employers. Collective bargaining was always stronger in Europe than in the U.S., where unionism went against the engrained horror of collectivism. Though at their peak in the 1940s, unions soon succumbed to the anti-revolutionary spirit manifested in the Red Scare. In the wake of the Taft-Hartley Act, they obligingly purged their memberships of communists and socialists, emasculating their own power. The corporate barons had every right to fear collectivism and social revolution. Amazingly, the mass of the poor bought into the hysteria.

But then, flag waving is a national sport in America, bordering on hysteria. The national anthem is a song about a flag, and every school day begins with a pledge of allegiance—not to the national government or to the land—but to a symbolic bit of cloth. (Significantly, some religious groups protested this as idolatry.) Huge flags hang inside the stock exchange and within some churches. They blazon at car dealerships. They are embroidered onto the uniforms of security guards. In many neighborhoods, every other porch boasts the Stars-and-Stripes. The flag is a symbol, of course—but of what, exactly? Ironically, the most ardent flag wavers are often opposed to central government and collectivist social policies, and favor states’ rights over federal control.

There is an ironic bit of history behind the flag too, whose ubiquitous popularity dates from an advertising stunt: the campaign of a youth magazine in the 1890s to increase circulation by selling flags to schools. The Pledge of Allegiance was composed for this campaign by a socialist minister who preached against the evils of capitalism and advocated strong government to administer social justice. He also advocated separation of church and state, so his pledge deliberately excluded any reference to God. (That reference, “one nation under God,” was added by Congress in 1954, the year after Eisenhower was baptized as a Presbyterian. He wanted American school children to daily proclaim their dedication to the Almighty as well as to the flag, clearly distinguishing the U.S. from its godless Cold-War rivals.) The original pledge was devised for the 400th anniversary of Columbus’ first voyage, to be celebrated in schools as part of the flag-raising ceremony. State after state then adopted it as a daily indoctrination in schools to encourage patriotism, especially among immigrant children. Now it is a universal ritual before public events. Until 1942, it was accompanied by a gesture we recognize as the Nazi salute.

 

Some Americans are opposed to initiatives that would benefit them because these have to be paid for through taxation. Instead, they vote for initiatives that reduce corporate taxes because such policies seem to support a free market economy—as though this cliché promises to benefit them personally. In fact, many people with modest incomes do own stocks and shares, if only through their retirement fund. However, I doubt that working-class conservatism rationally reflects such holdings so much as it reflects a magical belief in "the free enterprise system." That is the wild-west lottery in which the lucky winner takes all, carrying away the pot collected from an anonymous mass of contributors. It's the mythic gamble in which all have a theoretically equal chance to strike it rich or become president, whatever their beginnings. Trump and Putin are ironic living proof that some people do in fact get rich… and then become president… and then become even richer!

 

It is no mystery that corporations and interest groups control the American government, especially through lobbies. This reflects more than simple cronyism. It takes a lot of money to get onto a ballot in the first place, which means the candidate is likely to be wealthy, to come from the business sector, or to have moneyed supporters. Once elected, moreover, officials follow a long-standing trend in American government of delegating responsibility for governing to private interest groups, especially the groups from which they came or which backed them. It would make sense to draw on the expertise of such groups—among which are many think tanks—provided they are dedicated to the well-being of the country as a whole. Being special-interest groups, however, they are often dedicated instead to the well-being of a particular class or sector, and have a highly biased viewpoint. The mandate of government to achieve fairness and balance in decision making is left to the competition of these voices, which tend to cancel each other out. This tendency simply mirrors laissez faire in the marketplace and reflects the engrained suspicion of government.

 

Responsibility to govern is thus abdicated, ceded instead to organizations outside government or to ones specially brought within the government bureaucracy. The flip side of the suspicion of government is a tacit understanding that elections are not about governing at all. If the de facto regime is not the elected one, why bother to find candidates who can do the best job? Those who can win a popularity contest will do just as well. Perhaps the poor are so willing to vote for the rich to represent them because they believe the success of the latter will magically rub off on them. It's naturally more attractive to identify with winners than with losers, despite the slim odds of the lottery. It seems easier to believe the myth of upward mobility than to recognize the fact of increasing general poverty.

 

What is called the standard of living is usually measured per capita: the total income or wealth of a country divided by the number of its citizens. Such a figure hides the extreme inequality between rich and poor by the trick of averaging their assets, masking the economic reality for most people. The social mobility Americans prize has morphed into a game in which there are ever-greater winnings for ever-fewer winners. The jackpot grows exponentially, along with the number of losers. The overwhelming bulk of any rise in GNP these days goes into very few, already deep pockets. (On the other hand, any drop will likely come out of the wage-earner's ever-emptying pockets.) While the trickle-down economics of the Reagan-Thatcher era is still the mantra, the reality is that wealth in the laissez-faire economy "naturally" trickles up—until, if ever, it is deliberately redistributed by government or charity.
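A toy calculation, with numbers invented purely for illustration, shows how the averaging does the hiding. Suppose a town of ten in which nine people earn $10,000 a year and one earns $10,000,000:

\[
\text{average income} \;=\; \frac{9 \times 10{,}000 \;+\; 10{,}000{,}000}{10} \;=\; 1{,}009{,}000 .
\]

The average makes everyone look like a millionaire, while the median, $10,000, is what nine out of ten actually live on.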

 

A citizen may take pride in the nation’s economic growth, or take heart when the Dow Jones is up; but the gains celebrated rarely go to the conservative poor. This does not seem to matter when they can bask in the glitzy success stories of celebrities—whether from the entertainment world, the financial world, or the political world (which now considerably overlap). In Britain, where there was traditionally little social mobility, such sentimental adulation was reserved for the monarchy (and the Beatles). Wouldn’t it be ironic if its counterpart in the U.S. turned out to be a transplant from the Old-World class system, where downtrodden domestic servants cheerfully identified with the household of their aristocratic masters? I guess we will see at election time whether that is the American way after all.

The Meaning of Meaning

The things that give meaning to our lives relieve us of the need to ask, "What will I do next?" This is one reason why the loss of a loved one, a job, a personal faculty or skill, one's health or one's home, or even the loss of a routine or a possession can be so devastating. It leaves us bereft, at a loss for what to do with life and love and energy. For, these cherished things had heretofore set the tone, the scene, and the agenda for our daily lives. Through them, we knew what life was about and how our days would be passed. Without them, the future may seem uncertain and bleak. One carries on, since life has its own day-to-day momentum. But anxiety follows loss, for every such loss is a loss of meaning.

Meaning is about choices we make—of what to value and what to do. Anxiety attends freedom of choice, which thus is burdensome. When we lose that which gives structure and direction to life, we are then faced all over again with fundamental choices that can be riddled with anxiety. The other side of this coin is that we made such choices in the first place partly to be done with the anxiety of choosing. Understanding this can help us better appreciate what we have not lost, what has not been thwarted by external events or losses beyond our control. One prefers, of course, to make choices that will prevail and not have to be made all over again, each year or week or moment. Human society is founded on this desire and has conspired to make that stability possible in many ways. We make contracts and social arrangements toward that end. But human society is not inherently stable in the ways we hope; much less can nature be relied upon.

One lives by habit and routine, since these structure our lives, eliminate the need for constant decisions, and establish a stable environment we can count on. This reliability is the enduring appeal of culture and institutions—of the man-made world of systems, protocols, algorithms, artifacts and machines, which are not subject to the uncertainties that inhere in nature. But even these are unreliable to the extent they depend on human participation and natural materials and forces.

Just as culture is the order we impose on nature, meaning is the narrative that we project onto an unstable natural and social background. It is a structured story we attempt to impose on life. That works, for brief periods in limited ways, in the measure that life seems to conform to our hopes. Yet, the fact that meaning may endure for extended periods is serendipitous. We may be grateful for it, but it is never guaranteed. It is only by chance (some would say by grace) that we can take anything for granted. Loss reminds us that meaning is fragile and choice is always imminent—which is another way to say that conscious attention is often required.

The other side of this coin is not to take things for granted. Rather than count on reality to remain as we wish, according to choices we hoped would be definitive, we can renew the investment we make in the things we hope to find meaningful, moment to moment as we go along. This implies knowing, on the one hand, that we can change our minds; and, on the other hand, that reality may refuse to cooperate with our desires.

One is relatively unhindered in the realms of imagination and thought. However, meaning is naturally found outside us in the real world, which constrains us in myriad ways. There is a trade-off between the freedom inherent in subjectivity and the meaning that inheres biologically in what is given or imposed by the external world. It is as though one can have free will or ready-made meaning, but not both. The ultimate price of freedom may be to live in a world that seems arbitrary, inhuman, and empty of meaning. And that may seem reason enough to choose something other than such freedom.

But meaning is not a quality residing potentially or actually in things or symbols. It is rather a capacity residing in us, the makers of meaning. That is because experience is not simply a direct revelation of the world, but a product of our brains interacting in and with the world. As in language, meaning in life exists only by convention and agreement. Hence, nothing—not even human life—is inherently meaningful or valuable. (By the same token, neither can anything be inherently meaningless.) Rather, it is up to us to give (or take back) meaning and value where we see fit. That is an easy enough statement to make; but it is hardly easy to stomach the full truth of it, or to live with that truth in all its implications. For, it takes the burden off external reality to be meaningful and puts it on our shoulders as the creators and destroyers of meaning. This burden can be very intimidating, especially since we are naturally conditioned to look outward into the world for every satisfaction, and to rely on it as the source of meaning and direction—just as we once relied on our parents.

One thinks of convention as agreement among individuals, for the sake of communicating and getting along. But there is communication within the individual as well. There is a language even of unconscious thought, with its own conventions. The nervous system, after all, is a network of internal connections, mutually communicating. The brain invests experience with meaning, in the way that language users invest their language with meaning, whether one is aware of this activity or not. Of course, like society, the brain can also change its conventions, which (by definition) are neither true nor false. If they are to some extent hard-wired within us, it is for biological reasons outside the conventions themselves. The very idea of truth or reality is a habit we have formed because of our biological nature, which compels us to look at the world as real and necessary rather than as arbitrary, illusory, or a matter of convention. We could not otherwise have survived to be here thinking about it. Yet, the fact that we can think about it allows us to question any given meaning, potentially resulting in its loss.

One is often advised that a “meaningful life” can best be found in service to some cause bigger than oneself. This stratagem works psychologically to the extent that one believes in the cause. However, it trades on our biologically inbuilt awe for a natural reality that is indeed vaster than the individual and even the species. We are in the natural habit of looking outside ourselves for meaning and purpose, since our very existence depends on that external reality. Like other primates, we are also intensely social organisms, who are finely tuned to the needs of others, to the group and its dynamics. The values behind these habits are ultimately a matter of biological and social conditioning, within which one may indeed find satisfaction in pursuing a cause or in serving others. Yet, even this grounding provides no ultimate psychological security or defense against nihilism. For, one is also at liberty to question the conditioning and the act of finding meaning in values that are biologically or socially conditioned. (Indeed, to think of it as “conditioning” already calls it into question!) One might come to look with suspicion upon such meaning as no more than another arbitrary and empty convention—a spell cast by biology, one’s parents, or society. Where one can be enchanted, one can also become disenchanted!

Where does this leave us? Well, with the uncomfortable notion that there is no escape from personal responsibility for how one sees and relates to the world. After the initial shock, this realization can be accepted as simply how things are. Life is as meaningful or meaningless as we take it to be; the things that we cherish are valued because we value them. This does not negate the qualities or properties of those things; it merely insists that valuation is something we do. Significantly, there is no plain verb in the English language that means “to give meaning,” as the verb ‘to value’ means to actively give value. Language does not support taking responsibility for meaning, and we are thus not in the habit. Since different people (not to mention other creatures) give meaning differently, there may be disagreement. What emerges is not a common reality or consensus, but a community of beings capable of perceiving and bestowing meaning on what is perceived. Perhaps that is meaning enough.

Is Mortality Necessary?

Living things age and die. But there is quite a range of life spans. There is a general correlation between the size of the creature and its longevity, but this is not a strict rule. Human beings find themselves at the larger and longer-lived end of the scale, wondering why—or even whether—death is necessary, how to prolong their lives, and how to avoid the degenerative effects of aging. Whether those are desirable goals is a separate question, both for the individual and the collective. Here, my question is to what extent it is feasible to indefinitely extend human life and healthful functioning. Does anything in nature render it theoretically impossible for the human organism to avoid aging and mortality? (Note that failure to find such an obstacle is no proof that none exists.) Under what conditions could an immortal conscious form of life evolve naturally or be engineered artificially? Could human intervention in natural biology (e.g. genetically) circumvent aging and death for the human organism—and with what consequences for the individual, the species, and the biosphere? I don't propose to answer these questions, only to unpack them a little.

The prospect of defeating natural senescence—for example, through gene manipulation—might hinge on a clear understanding of why senescence occurs naturally in the larger evolutionary scheme. Since that scheme includes the whole history of life on earth, such an understanding would involve modeling in detail the entire functioning of the biosphere from a "systems" point of view. (It would be useful in astrobiology as well, to anticipate possible alien forms of life.) If possible at all, such a project could begin either at the highest or the lowest level. "Synthetic biology" is an approach at the bottom level, attempting to create an artificial analog of DNA that can replicate, self-maintain, and evolve. Even there, success in simulation does not guarantee success in the real world. Engineering useful biological products is trivial compared to the challenge of a complete theory of all possible biospheres.

Genetic "engineering" usually means specific interventions in natural biology as we know it—not building an organism artificially from scratch, following a "blueprint," which may not be possible even in principle, let alone in practice. The simple reason is that our ideas about any natural thing can never be perfect or complete. We can only have such exhaustive knowledge about things we make ourselves—that is, about things that follow from given definitions, such as machines and conceptual models. Those have well-defined parts that are supposed to be fixed and stable, related to each other in ways specified by their designers. The parts of organisms, as proposed by human observers, may not correspond to reality. They are ill-defined, squishy rather than hard, and may interrelate in multiple ways that elude human grasp. We cannot properly define what the human organism is, let alone replicate it. (Does it include, for example, the microorganisms that live within it and on its surface, which do not share its DNA yet are at least as numerous as the cells that do? That microbiome is known to influence human perception and behavior profoundly. It is integral to the health of the body and may be integral to its definition as well.) This fact alone may preclude reverse-engineering the human form, whether to renew it or build it from scratch.

Conversely, a being engineered from an original defining blueprint would be an artifact, not a natural organism, and thus would technically not be human. That is not to say that an artificial person is impossible, whether or not one calls it human. But at the least, an artificial person would first have to be an artificial organism, which is a self-defining system, not merely a product of human definitions and design. Predicting whether an artificial organism could be conscious hinges on a thorough understanding of consciousness in natural organisms. Even the extent to which it would behave as a natural human being would depend on details of its structure that may be no more accessible than such details of any natural thing.

Then, what about mortality and senescence as natural phenomena, and the hope of circumventing them through genetic manipulation as currently understood? Single-celled organisms can reproduce simply by dividing (by fission or mitosis). If this process of self-cloning can go on indefinitely without error, then the original cell is "immortal" in the sense that an identical copy exists any number of generations down the line. But already this is a different notion from the continuing existence of the original cell. It raises the question of how to keep track of an individual cell in a culture of rapidly dividing cells, or an individual hydra or fruit fly in a rapidly multiplying population. In any case, one must distinguish between the mortality of the creature as a whole and that of the units composing it. The longevity of a coral colony, for example, should be distinguished from that of an individual polyp. We want to know how the longevity of cells relates to that of the organism they compose.

A cell can divide into two equal cells or into a "mother" and a "daughter" cell with different properties. Stem cells have the capacity to divide into one cell that remains "totipotent" and one that is differentiated to become a specific tissue. (Stem cells of hydra, for example, replicate every three days; those of small rodents, every four weeks; of cats, every ten weeks; of human beings, every fifty weeks.) Plants, hydra, and some other simple animals can regenerate a whole new organism from a severed part (in the case of plants, from a single stem cell). Many invertebrates can regenerate limbs and other parts, but regeneration is limited in large mammals. Whatever the cause, there are trade-offs between the size and complexity of the organism and the potentials of its various cells.

Even aside from top-down production from a design, if a human individual could duplicate itself in the way that a single cell can, there would be serious questions of identity (see the film The 6th Day). Would the original or the copy be the bona fide person? Presumably they would have different experience (as identical twins do), at least by virtue of occupying different places. If the point of cloning oneself in this way is for "me" to continue living healthfully, then the duplicate version would have to be free from the defects of the original that accrue through aging, disease and accident. In other words, it would not be a copy of the original but a fresh example cast, so to speak, from the original mold using fresh materials. (Compare the difference between manufacturing two items from the same design and copying a manufactured item already in use, with its imperfections and wear.) To continue to be "me," the copy would also have to duplicate the psychological identity and all the stored memories of the original. In any case, nature has not opted to clone a complex multi-celled organism in the way a single cell can clone itself.

The mortality of a multi-celled organism is usually associated with sexual reproduction, which separates cells into two types: the "immortal" germ line and the somatic line, which forms most of the organism's tissues. Senescence appears to be "the inevitable fate of all multicellular organisms with germ-soma separation" [Wikipedia: senescence]. Yet, the question is whether senescence and mortality really are inevitable. To put it another way, why are somatic cells generally not immortal? Since disease causes cell damage, and cell damage is a marker of senescence, we want to know how "built-in" senescence (the Hayflick limit) differs from the effects of disease.

Aging is not merely the passage of time. It has been defined as "a progressive deterioration of physiological function, an intrinsic age-related process of loss of viability and increase in vulnerability" [Wikipedia]. It might be compared to entropy in physical science: the inevitable increase of disorder. However, life has long been recognized as a process that goes against entropy insofar as it can create local order. The question is whether or how an individual organism could maintain its internal order indefinitely. To a human engineer or designer, it might seem an incongruous design flaw that nature would go to all the trouble to develop a complex multi-celled creature and not provide it with the ability to maintain itself permanently. However, products of human design often have built-in obsolescence. The designer's naive goal of producing something perfect and lasting is only part of the bigger picture. The product on the drafting board must serve the corporation's goals in the marketplace. Just so, the individual specimen must fit in with the system of life as a whole.

Natural selection is a slow and haphazard process. Basically, what exists at a given time is co-determined by the presence of other life forms and their needs, indeed by the entire history of such forms as they have built on each other. The evolutionary tree of life is a game tree of branching and narrowing possibilities. Evolution does not easily go backwards or start entirely new branches. There might not be time for life to begin more than once on a given planet. And once begun, there is no ecological space for a radical alternative to emerge. Rather, evolution cobbles variations onto existing variations, incrementally, with changes only manifest in the succeeding generation. Sometimes they may seem to regress, but always they are new adaptations.

One can imagine (as Lamarck did) an alternative system in which a change in the phenome could result in a change in the corresponding genome. The adaptive changes a body made to itself would pass directly into the next generation. But such an organism would not need a succession of generations in order to adapt, the way natural selection does; it would reproduce only to maintain and expand the colony. In any case, nature does not work that way, at least for organisms in which genome and phenome exist separately. Perhaps that separation is one of the irreversible choices that life has made while crawling onto its present limb. We can conceive an alternative system in which changes are induced deliberately by human beings (as in genetic engineering or even traditional breeding). But even such imagined possibilities have to fit in with the existing system of nature. We can hardly presume to redesign the whole biosphere—at least not yet, for this planet, without first wiping clean the slate of all existing life, which would of course include us!

Don’t You Believe It!

They say that seeing is believing. But believing can take the place of seeing. One speaks of believing one’s own eyes. But that is inappropriate, for we are hardly called upon to believe what is readily apparent to the senses. Verbal propositions are another matter. Apart from statements of what is immediately given in experience, one must decide whether to give credence to the claims that others make. These may be claims about their own experience, but more often they are about concepts and abstractions. We accept a claim as true when we can corroborate it with our own eyes. That cannot happen, of course, when the claim is about something invisible or abstract. In that case, acceptable evidence is less direct, but could still consist of something visible. We cannot see subatomic particles, for example, but we can see the tracks they leave on a photographic emulsion. Therefore, we “believe” in their existence.

Religious people may claim that their faith has a similar foundation. That is, they have personally experienced what they consider sufficient evidence for the existence of God. As creationists, they may take the manifest complexity of the world as proof of intelligent design. Some may claim to have felt God’s presence. While I discount these claims for varying reasons, at least they involve the testimony of direct experience! It is quite another matter to accept the testimony of a scripture, a text, however revered it may be. To suppose that the written word is reliable because it was dictated by God is absurdly circular reasoning. And yet millions believe in the Bible or the Koran as the literal word of God. How can this be?

Some religious people hold that God is needed to regulate a world that cannot be safely left in human hands. For them, civil law is not wise enough (or free enough from corruption) to provide the absolute authority that can command universal obedience. Humans are like squabbling children who need an all-powerful parental authority to keep them in order. (Note that dictators can perform this function!) In Christian Europe, there once was a universal Church, uniting believers, but it was hardly free from corruption and could not sustain its authority. Nothing has ever replaced it to fulfill that role. We have a United Nations that is anything but united. We certainly do not have a world united under God. Quite the contrary, people embracing this rationale for religion are divided into pockets, each hoping its brand of invisible Super-hero will set things right in some final reckoning.

I am sure that religion did arise out of historical collective need. But that does not quite explain the ongoing appeal to individuals of its fanciful story lines. I think that appeal lies less in the details of the story than in the need for stories generally. After all, we are creatures of language. However we conceive the world amounts to some narrative or other—even the scientific one. When the story is written down, it acquires an aura of objectivity that goes beyond the claims of accountable individuals. The expression "it is written" means that it is far more than someone's opinion or fantasy. Since the invention of printing at least, widespread access to the text allows it to assume the place of a power standing over all. In principle, the text is timeless and authoritative: just what we want a god to be. Apart from errors of copying, whoever consults a text reliably finds the same words each time, and the same words anyone else would find. In contrast, nature and human affairs are ambiguous, elusive, and ever changing. (Even scientific findings often cannot be repeated.) While the hand of God or of Adam Smith is an invisible abstraction, the Bible or the Koran is something tangible you can hold in your hand. It consists of readily quotable memes or tweets. As a guide for living, it provides a ready map of a territory that may be overwhelmingly complex.

Personally, I think it is about as reasonable to believe in God as in the tooth fairy. But what I want to emphasize is the conundrum of belief itself. A clue may be the Protestant doctrine of salvation by faith alone, according to which one is "saved" simply by believing. Such pathological reasoning contradicts Jesus' actual teaching of unconditional love. For, what one is to be saved from is the wrath of a punitive and judgmental father god. The doctrine of salvation by faith goes against the very reasons for which religion was socially useful in the first place: to ensure that we get along together. For, it doesn't matter how wicked you have been; you can still be forgiven at the eleventh hour if only you claim to believe. It is far easier to believe than to be good.

Language itself makes such unreason possible, which renders belief far more dangerous than a mere question of theology. The bottom line is that you can be controlled when you can be led to believe a claim despite the evidence of your senses and common sense. It's a form of hypnosis, the power of suggestion. Inherent in language is the power to deceive—not only by outright lying but also by creating an ersatz world. That may have served society two or three millennia ago, when people had to be tricked into behaving properly. It has hardly achieved that goal in the modern era. On the contrary, belief has become the stock-in-trade of demagoguery, social media and divisive politics, which are platforms for bad behavior.

Some people argue that there is nothing to lose by being a believer, no matter how outrageous the "truths" one is asked to embrace. It may seem rational to cover all the bases, just in case. I don't agree. I think one can lose one's dignity, credibility, and the trust of others, which is a big price to pay for a fairy-tale ending. Nature has nightmarish aspects to be sure—disease and pain and horrifying creatures. But at least there is nothing personal in its machinations. Believers in scripture, by contrast, hold that the world is supernaturally personal and that all the claims and implications of the text are literally true. The believer lives as a subservient child in a dysfunctional family, headed by an abusive and autocratic Super-parent. One wonders not only at the sanity of this vision but also at the vindictive intent that would cast unsaved neighbors, relatives, and loved ones into eternal hellfire because they do not share one's thoughts. A few centuries ago, it meant literally burning them alive! One is right to mistrust the believer.

Of course, neither belief nor craziness is limited to religion. Conspiracy theories, for example, are akin to religious beliefs. They allow the insider to be special, provocative and prophetic, in the know, even saved. They are based on claims as off-the-wall and paranoid as the absurdities of theology—the wilder the better, since belief serves to provide identity, status, and belonging within a cult. Such belief polarizes and divides society. Political extremism flourishes in times of uncertainty, when a glut of indigestible information and of divergent voices clamoring for attention and belief overwhelms one's confidence in the ability to make sense of them or to know what to believe.

Unlike the testimony of one's senses, verbal claims—for instance by political leaders, newscasters, and social media—provoke belief or disbelief, unless one simply ignores them. They direct attention—sometimes away from the real issues. It matters little whether the claims are true or false, for either way one's energies are channeled into sorting them out in the terms in which they are presented. It is easier to believe a claim than to reject it, which requires a certain inner effort that goes against our social instinct to be agreeable. It is yet harder to think in one's own original terms. These are the liabilities of our human capacity to communicate. Belief is the Achilles heel of our wondrous ability to imagine and abstract, and to express our thoughts in symbols. Belief is how we talk ourselves into things that we ought to know cannot be true or even relevant. Believers do not have to come up with their own vision, their own claims about reality, their own sense of what is important, their own story. They simply sign up to someone else's.

Ideally, communication is an honest game among equal players, in which one is respectfully invited to consider the claims of the others. But communication is disrespectful when the intent is to manipulate. The bad faith of the gullible believer meshes with that of the cynical communicator—a match made in hell, where neither is obliged to be sensible. The onus is on the buyer to beware of seductive games that distract one from the senses and from common sense—which, being common, should unite rather than divide. Individuality is a matter of sorting information oneself, not for the sake of being different or belonging, but as one’s best effort toward truth. While I hope you agree, I invite you to not believe a word I’ve said!