All posts by Dan

The human dilemma

One way to describe the human dilemma is that we are conscious of our situation as subjects in a found world of objects. That world, of which we are a part, is physical and biological. Indeed, even our conceiving it in terms of subject and object reflects our biological nature. To permit our existence, not only must the world be a certain way, but as creatures we must perceive it a certain way, and act within it a certain way. While that may not be a problem for other creatures, it is for us, because we are aware of all this and can ponder it. Whenever anything is put in a limiting context, alternatives appear. Whenever a line is drawn, there is something beyond it. Our reflective minds are confronted with a receding horizon.

We are animals who can conceive being gods. Recognizing the limits imposed by physical reality and by our biological nature, we nevertheless imagine freedom from those constraints and are driven to resist them. Recognizing the limits of the particular, we imagine the general and the abstract. Resistance to limits involves denying them and imagining alternatives. Recognizing the actual, we conceive the ideal. Thus, for example, we resist mortality, disease, gravitation, pain, physical hardship, feelings of powerlessness—in short, everything about being finite biological creatures, hapless minions of nature. We imagine being immortal and weightless free spirits—escaping, if not the sway of genes, at least the pull of gravity and confinement to a thin layer of atmosphere.

We find ourselves in a ready-made world we did not ask for. We find ourselves in a body we did not design and which does not endure. As infants, we learn the ropes of how to operate this body and accept it, just as people fitted with prosthetic limbs later in life must learn to operate them and identify with them. At the same time, and throughout life, we are obliged to negotiate the world in terms of the needs of this body and through its eyes. This natural state of affairs must nevertheless seem strange to a consciousness that can imagine other possibilities. It is an unwelcome and disturbing realization for a mind that is trying to settle into reality as given and make the best of it. The final reward for making these compromises is the insult of death.

A famous author described the horror of this situation as like waking up to find yourself in the body of a cockroach. It is a horror because it is not you, not your “real” body or self. It is someone else’s nightmare story from which you cannot awaken. (Of course, the metaphor presumes a consciousness that can observe itself. Presumably, the cockroach’s life is no horror to it.) But the metaphor implies more. Each type of body evolved in tune with its surrounding world in a way that permits it to survive. The experience of living in that body only makes sense in terms of its relationship to the world in which it finds itself but did not create. The horror of being a mortal human cockroach is simply the despair of being a creature at all, a product of the brutal gauntlet called natural selection. The history of life is the story of mutual cannibalism, of biological organisms tearing each other apart to devour one another, behaving compulsively according to rules of a vicious and seemingly arbitrary game. The natural cockroach knows nothing of this game and simply follows the rules by definition (for otherwise it would not exist). But for the human cockroach, the world itself is potentially horrifying, of which the cockroach body is but a symptomatic part.

The first line of defense against this dawning realization is denial. We are not mortal animals but eternal spirits! Life is not a tale told by an idiot, but a rational epic written by God. We are not driven by natural selection (or cultural and social forces) but by love and ideals of liberty, equality, fraternity. After all, we do not live in nature at all, but in ordered cities we hew from the wilderness, ultimately dreaming of self-sufficient life in space colonies. We are not obliged to suffer disease and die, but will be able to repair and renew the body indefinitely, even to design it from scratch in ways more to our liking. We are not condemned to live in bags of vulnerable flesh at all, but will be able to download our “selves” into upgraded bodies or upload them into non-material cyberspace. Alternatively, like gods, we may bring into existence whole new forms of artificial life, according to principles of our own design rather than nature’s trial-and-error whims. Religion conceives and charts the promise of godlike creativity, omniscience, freedom, resurrection and eternal life outside nature, which technology promises to fulfill.

The mind imagines possibilities for technology to tinker with. But just as religion and magic do not offer a realistic escape from natural reality, technology may not either. The idea of living disembodied in cyberspace is fatuous and probably self-contradictory. (The very meaning of consciousness may be so entwined with the body and its priorities that disembodied consciousness is an oxymoron. For embodiment is not mere physicality but a relation of dependence on the creature’s environment.) Living in artificial environments on other planets may prove too daunting. Extending life expectancy entails dealing with resulting overpopulation, and perhaps genetic stagnation from lack of renewal. Reducing the body’s vulnerability to disease and aging will not make it immune to damage and death inflicted by others, or accidents that occur because we simply can never foresee every eventuality.

At every stage of development, human culture has sought to redefine the body as something outside nature. Scarification, tattooing, body painting and decoration—even knocking out or blackening teeth—have served to deny being an animal. Clothing fashion continues this preoccupation in every age. Even in war—the ultimate laying on the line of the body’s vulnerability—men attempt to redefine themselves as intentional beings, flouting death with heroic reasons and grand ideals, in contrast to the baseness of groveling brutes who can do no more than passively submit to mortality. In truth, we have changed ourselves cosmetically but not essentially.

That is not cause for despair. We have made progress, even if our notions of progress may be skewed. Despair only makes sense when giving up or accepting failure seems inevitable. It is, however, reason for sober evaluation. In our present planetary situation, nature gives us feedback that our parochial vision of progress is not in tune with natural realities on which we remain dependent. We are in an intermediate state, somewhat like the awkwardness of adolescence: eager, but hardly prepared, to leave the nest, over-confident that we can master spaceship Earth. Progress itself must be redefined, no longer as ad hoc pursuit of goals that turn out—perversely—to be driven by biological imperatives (family, career, ethnicity, nationalism, profit, status, power). We must seek the realistic ways in which we can improve upon nature and transcend its limitations, unclouded by unconscious drives that are ultimately natural but hardly lead where we suppose. For that, we must clearly understand the human dilemma as the ground on which to create a future.

The dilemma is that nature is the ultimate source of our reality, our thinking and our aspirations, which we nevertheless hope to transcend and redefine for ourselves. But, if not this natural inheritance, what can guide us to define our own nature and determine our own destiny? Even in this age, some propose a formula allegedly dictated by God, which others know to be an anachronous human fiction. Some propose an outward-looking science, whose deficiency for this purpose has long been that it does not include the human subject in its view of the supposedly objective world. The dilemma is that neither of these approaches will save us from ourselves.

What you say is what you get

Humans are immersed in language as fish are in water. Language shapes how we perceive the world and how we think about it. While animals communicate, the human mode of communication is marked by its grammatical structure and reliance on sentences. Linguistic diversity introduces variations in cognitive processes, as individuals may conceptualize differently based on their language. Ethnic, political, or religious groups, often aligned with distinct language groups, exhibit divergent attitudes and conceptual frameworks. Language can unite us, but can also divide us. When we are not fluent in another language, we may not understand the words. But also, we may not understand the speaker because we do not think and perceive in their language.

Nuances between languages further complicate matters, as certain terms lack precise equivalents across languages, even within historically and linguistically related language groups. For instance, the English term mind, pivotal in Western philosophy, lacks a precise counterpart in several European languages, underscoring the unique influence each language exerts on fundamental yet ambiguous concepts.

Language structures meaning. Take the verb determine, for instance. We use it in two very distinct ways. It can mean to ascertain something (as when the coroner determines the time and cause of death). Also, it can mean that one thing causes another (as when one state of a physical system determines a subsequent state). By disregarding the first meaning (a person’s action), “determinism” becomes instead a relation among objects, divorced from perceiving subjects. It is then supposedly a property of the world, apart from what human beings are able to ascertain.

Language serves to articulate claims about the world. These are often devoid of substantiation and divorced from personal responsibility, a circumstance that may trace its roots to the evolutionary origin of language in animal calls, particularly those alerting others to immediate threats. The functionality of such alarums diminishes if they are not conveyed assertively, leaving little room for nuanced expressions of uncertainty. It would hardly be functional, in evolutionary terms, for the individual who raises the cry to timidly qualify their claim as merely an opinion, perhaps mistaken. Or for the others to take the time to ponder the reliability of that individual or their possible motives for misinformation. Generally speaking, it is unwise to “cry wolf” except when there is a wolf. Yet, primates are known for their ability to deliberately deceive, and language provides a built-in capacity to do so.

Even innocuous statements, such as weather predictions, often do not implicate the subject making them, but assume a degree of faith in the speaker’s assertions. Suppose, for example, I say: “It’s going to rain tomorrow.” Structurally, that is little different from “There is a tiger in the bush!” Both create expectancy, directing you to consider a possibility that might need action. You might assume that I had checked the weather report before saying this, but in fact I offer no evidence for my claim. In contrast, consider the statement, “I believe it will rain tomorrow.” Ostensibly this is a statement about the world; but it is equally a statement about the speaker. It reminds you that the proposition could be false or mistaken, that it is a belief that occurs in the mind of a fallible and potentially deceptive individual. It reminds you that the claim is the action of a subject and not an objective fact. That simple preface—I believe—qualifies the claim as a personal assertion.

Language allows for lying and may have arisen as much for deception as for sharing truth. Obviously, the ability to alert others to common danger is advantageous for the group—provided the peril is real. On the other hand, the ability to manipulate others through deception is an advantage for the individual, but may be a disadvantage for the collective. For better or for worse, language has hypnotic power. Hence, demagogues can “tweet” absurd propositions that are nevertheless taken on faith.

Aside from mass hypnosis, there is the subtler possibility of self-hypnosis. After all, we think internally (talk to ourselves) in the same linguistic forms with which we communicate. We can lie to ourselves as well as to others. I can shape my own belief and experience by repeating to myself a favorite slogan. By swallowing whole the “truths” propounded by my favorite commentators, I am absolved of the need to sort things out for myself. Also, I have a ready-made clan of like-minded believers to belong to, provided only that I do not challenge their beliefs.

When society shares an overriding ideology—as in medieval Christian Europe—there is little need or occasion to take individual responsibility for claims or beliefs. It would be counterproductive to do so, since that would imply that an accepted proposition is no more than someone’s subjective idea, not the objective truth it is supposed to be. The price we pay for attributing personal responsibility is forfeiting the security found in communally accepted truth. Dissent was dangerous for heretics under Catholicism, for Germans under Hitler, and for Russians under Stalin. It was dangerous for Americans in the McCarthy era. In America today there seems no longer to be a communally accepted truth. Whatever one believes will appear, perhaps at their peril, as dissent from someone’s perspective.

No political flavor has a monopoly on reason—or faith. Reasoning is no better than the assumptions and biases on which it is based. What emerged from the ideology of medieval Christianity was a new faith in reason, now called science, based on questioning the evidence of the senses. But that was hardly the end of the story. “Faith in reason” has an oxymoronic air about it. It is no coincidence that America today is divided over the role and authority of science, which fails in the eyes of many to provide an ideology they don’t have to question. Individualism is vaunted there. But when individuals are genuinely autonomous, they claim their values and opinions as personal. There is room for the opinions and values of others, with mutual respect and courtesy built in. The downside, however, is the implication that such values and opinions are not objective truths and are perhaps no more than subjective whims. Some people cannot bear that thought; they prefer that truth should be unquestionable rather than a mere product of thought, and that morality should be a matter of obedience rather than arising from within.

With the aid of their language, some cultures affirm that the earth is where mankind belongs, and reality is thought to be a seamless whole of subject with object. In that view, mankind is an integral part of this whole, which is our “true and only home.” This belief is reflected in the absence of a program to escape mortality or to ascend to a higher reality. Their religions are not preoccupied with a concept of “salvation” or with a separate spiritual realm where human destiny unfolds (much less on another planet). On the other hand, if this world is not our true and only home, then it hardly matters what we do here, either to the planet or to each other. Note that the same illogic applies to the relationship with one’s own body. If this body is no more than a temporary husk for an inner essence, why bother to take care of it?

The English language shares a feature with its intellectual roots in Greek and Latin. This is the ambiguous sense of the verb to be, which can indicate either existence or equivalence. “There is a house in New Orleans,” asserts the existence of a building. “That house is the Rising Sun” equates a building with a name. This might seem like quibbling, but the fact that English glosses over this distinction has profound consequences. It enables us to call each other names with injurious effect, when it is unclear whether the accusation is someone’s motivated judgment or an incontestable truth. The statement “communism is evil” (or “capitalism is evil,” if you prefer) is a claim by someone who makes a judgment and applies a label for their own reasons. It equates communism and evil, as though they were two names for the same thing. At the same time, it implies that the very existence of communism consists thoroughly and exclusively in being evil. Evil—and nothing else—is what communism is, and it is therefore deplorable that it even exists. Such absolutism leaves no place for qualification or debate. Whether or not the statement is true, it allows us to dismiss reason and further inquiry. It places no accountability on the person making the claim, who remains conveniently out of the picture.

Language makes the human world go round and is the source of political power. It’s how society is managed, how we keep each other in check. Whatever threat artificial intelligence poses to humanity lies more in Large Language Models than in a Terminator-style takeover by robots. An AI would only need to master our languages to manage and control us in the same ways, using power channels we have already created. For us, thought is essentially a language skill. Social control depends to some extent on how language is misused or poorly used. We can defend ourselves against manipulation with clear thinking and communication. I believe that simple practices in how we speak can make a profound difference in human affairs. One such practice is to preface all claims—at least in one’s own mind—with “I believe that…”

An Ideal World

Wars and rumours of wars seem to be our natural inheritance as a primate species. Tribalists at heart, we resound to the beat of the war drums, echoed in the daily news. For those of us temporarily enjoying the luxury of peace, the “news” is little more than perverse entertainment, a tune to dance to in the privacy of our secure homes. In the bigger picture, violent events and the relentless bickering of tribes are hardly news at all, but the age-old human story.

Fundamentally, we are the creature with a foot in two worlds. Nowadays we admit to being biological organisms, products of natural selection. On the one hand, that means we are the sort of ruthless animal who survived countless diverse competitions and ordeals, such as wars and natural disasters. On the other hand, the success of our species—as well as our tribe—resides precisely in cooperative behaviour. Modern individuals cannot survive long outside the context of mutual interdependence we call civilization, a collective creation that consists not only of material infrastructure and social institutions, but also of guiding ideals. This vision of human identity goes far beyond our biological nature and often seems glaringly opposed to it.

Before the scientific era, the human identity lay not in biological matter but in spirit. That is, we thought of ourselves as spiritual entities—souls—when we thought of our essence at all. The teachings of religions helped societies to function better, as tribes were squeezed together in a world of increasing human numbers. But these teachings also provided otherworldly visions of ideal perfection seemingly unattainable in daily life. Violent realities of short and brutish life stood in contrast to ideals of benevolence, freedom, and a perspective of eternity.

We have always had visions of how life could and should be. But these visions have yet to be realized on a species-wide level. Our aggressive and self-serving nature as organisms may preclude that ever happening. We may destroy ourselves and our dreams, or create a nightmare future instead. Yet, the possibility remains open now to create the ideal world as it could be and has been variously imagined. Indeed, we are closer now than ever to that possibility. For, as long as we thought of ourselves as naturally spiritual, we were hardly in a position to grasp the enormity of the task. Now we know the true nature of the challenge, which is for an animal to remake itself as a spiritual being.

“Spiritual” essentially means disembodied. People have always found ways to deny their biological embodiment, when it seemed impossible to truly transcend it. This was always the religious conceit. It has also largely shaped cultural practices. Decorating and deforming the body, ritualizing everything from eating to warfare, building artificial environments called cities, and creating abstract realms of thought have all served to re-define us as a creature separate from nature and not strictly identified with the body. We’ve created a distinctly human world, within the natural one upon which we remain precariously dependent. So far, this human élan has served as a defense and reaction against the natural state. We rebel against being vulnerable and mortal flesh, driven by animal instincts. Yet, despite the strivings of culture and the denials of religion, we’ve had only limited success in doing anything about it. The technology we’ve created in our flight from nature now puts us on the threshold of actually realizing ancient dreams of transcending the natural state. Because technology concerns the physical world, this has been largely a matter of reconfiguring the body and its environments. Yet the dream of an ideal world is quintessentially social and moral, not only material. And the possibility of realizing it goes far beyond a defensive posture. It must become humanity’s passionate intention, its overriding proactive goal.

Of course, in line with our individualism and tribalism, there are many possible and conflicting visions of a future. In line with our collectivism, however, we must come to a consensus or likely perish. Whatever the vision, it must transcend local squabbles and short-sighted concerns. A tragic thing about the news is that, whether deliberately or not, it obscures the bigger picture and the longer term. A tragic thing about war is that it pulls the world apart instead of together. (It may pull a tribe together against another tribe, which only reflects the biological determinism that humanity struggles to transcend.) War squanders the resources needed for global civilization to provide for its future. It adds to the pollution and global warming leading to ecological collapse—and with it the collapse of all civilization (not even considering the risks of nuclear or biological war). If we read between the lines, the news simply tells us over and over that we are not pursuing the human dream. It shows us the effects of wasting our lives, even while some people are busy wasting the lives of others.

What are some of the unfulfilled values behind the quest for an ideal world? Of course: peace, prosperity, fraternity, equality, justice, personal freedom—the traditional “Enlightenment” ideals. Religion proposes also: benevolence and kindness, forgiveness, humility, righteousness and virtue. We could, in fact, catalogue all the implicit and explicit values ever espoused in diverse cultures over time and brainstorm how to reconcile them to set a course for the human future. Many such ideals were long espoused within the tribe, but not necessarily extended to include strangers. What is modern is the universal code of human rights based upon such values. That is an encouraging sign and a prerequisite for the human unity that must supersede tribalism in order for humanity to pull together for its survival. It is also the door to actually realizing the ideal world that until now has been only aspirational. We can add to that goal a concern for other creatures and a role as formal caretakers of the planet. Above all, it is the “we” that remains to be defined and consolidated.

There are new values, too, based on possibilities opened through technology. It may or may not be possible to finally dispense with biological embodiment. There are modern dreams of uploading one’s mind to cyberspace, or downloading it into a fresh body or a non-biological one. Many such fantasies remain grounded in concerns derived from our actual biology—such as the desire to survive, overcome mortality, have more intelligence, augmented powers, control, etc. They tend to focus on the individual more than society. But ethical and moral ideals are above all about the collective. The ideal of objectivity implies freedom from the compulsions of individual bodily life. A far-reaching potential of artificial intelligence is the notion that mind need not be bound to biology or dedicated to individual or tribal interest, but could be free to be truly objective. That may turn out to be a chimera, even a contradiction in terms; but it is nonetheless a humanly-conceived ideal to explore. A contemporary concern is how to align AI with (largely parochial) human values and goals. The focus should rather be to align short-sighted human values and goals with the long-term survival and well-being of the planet, and the potential of our species. AI should be conceived and used toward that end.

As biological creatures, the fact of having to live by killing and consuming other creatures remains a spiritual embarrassment. The moral problem dovetails with the adverse ecological effects of the meat industry. Accelerating disparity of wealth is another moral embarrassment. Failure of the humanist values of equality and fraternity dovetails with the threat of social instability and even violent revolution. Though the upward flow of wealth is enabled by the impersonal mechanisms of the modern global economy, it also depends on individual greed and short-sightedness. These, again, seem built into our conflicted biological nature and lead in the wrong direction. Communism failed because of greed while capitalism triumphed through it. In that light, neither has been more than a path to the soulless society. Jesus put it nicely: what does it profit one to gain the whole world and lose one’s own soul? “Soul” might be regained by intending the collective good above personal gain.

Mere survival of the human species, or of modern civilization, is not a goal that sets our sights high enough. Implicit in human potential is a project to finally realize high ideals that have been there all along. These may contradict each other, or be beyond our capacity, yet it is part of the unfinished project of being human to sort them out and come to a species-level vision. Such unified intention does not guarantee survival and may not even prove feasible. (The absence of evidence of alien civilizations may be evidence of absence, implying that self-conscious intelligence is inherently self-destructive.) But we should not assume the worst. And even if the worst turns out to be the case, wouldn’t there be honour in at least trying to achieve that planetary vision?

The Life of P: real or simulated?

Increasingly sophisticated computer technology obliterates the distinction between reality and imagination or artifact. In popular science reporting, for example, the distinction is frequently no longer made between an actual photograph and a computer-generated graphic—which used to be (and ought to be) clearly labelled “artist’s conception.” While computer animation extends imagination, it only approximates reality in a symbolic way, sometimes even ignoring or falsely portraying basic physics. (Just think of the noisy Star Wars battles in outer space, where there is no air for sound to travel in!) Old style hand-drawn animation used to do this too, with cartoon protagonists blown up and springing back to life, or running off the edge of a cliff and only falling when they realize their precarious situation. These were gags that no one (except perhaps unsuspecting young children) took literally. They were hardly intended to be realistic.

However, the intention of virtual-reality and digital game producers, like the modern film industry, is to create ever more realistic ‘worlds’ as entertainments. This could pose a dilemma for a virtual-reality user who does not, for some reason (perhaps from spending too much time in VR), clearly understand the difference between reality and fiction. It could well be the case for young people being educated with computer graphics. Confronted with the VR producer’s intentional deceptions, how can the user be expected to know that the computer graphic or VR world is not photography or genuinely realistic?

This question recalls the doubt first expressed in modern times by Descartes and more recently by the Matrix films. The question actually has two parts, one pertaining to the VR world itself (which is a human production, like a novel or cartoon) and one pertaining to the user (who is a consumer desiring to be entertained). In terms of the former, the question is whether there are telltale signs of simulation by which an astute observer could distinguish the VR world from the real one—for example, a “glitch” in the computer program. There is, after all, a limit to the detail a simulation can provide, and there could be computer error. But as far as the effectiveness of the illusion is concerned, this limit is relative to the user’s cognitive capacities, which are also limited. The user must, on some basis, be able to tell the difference, which brings us to the other part of the question.

The user is a biological organism who lives in the real physical world, but enters the VR world as into a game, voluntarily and often with other players, like entering on stage with other actors. There is a conditional willing suspension of disbelief, which in traditional entertainment is asked primarily of a passive audience. Unlike readers of a novel or the theater audience for a play, the VR user is both actor and audience. An actual online game can be interactive with other real players, who have a life outside the game—offstage, so to speak. It also provides a virtual world with which to interact, which includes other human or non-human figures that are not actual players. These non-players are not conscious subjects but fictions in the VR, defined by the program as part of the stage sets. (Digital animation allows the “stage” to be dynamic, constantly changing.) While the non-players are not simple cardboard cut-outs, they are no more than part of that programmed dynamic stage set. Therein lies a key difference between real human beings (or creatures) and simulated ones. Real agents have their own agency; simulated agents are fictions that express only the agency of the real people who create the program.

The VR user is a player in a virtual-reality—a real person who chooses to engage with the VR in order to have a certain kind of experience. This player—call her P—may or may not be represented in the virtual ‘world’ by an avatar. (As P, you could be seeing yourself as a character in the story, but in any case you are seeing the VR world through your own real eyes.) Since it is provided by a computer program that is necessarily finite, the virtual world is likewise limited, furnishing only a finite variety of possible inputs to P’s senses. In principle, that is a key difference between the VR world and the real world, corresponding to a fundamental difference between artifacts and natural found things.

However, the situation is actually similar for ordinary experience by real people in the real world. The human nervous system only processes finite information, from which it fabricates the natural illusion of an external world. The difference between natural input of the senses and the input of VR is only relative. We know that the VR is not real when we know that we are wearing a VR headset or some such thing. For the illusion of a virtual reality to be complete (as in The Matrix), no such clue must be available. P must be unaware of the deception and unable to recall entering the VR world from an existence outside it.

That conceivable possibility brings us to another contemporary confusion, expressed provocatively in the rhetorical question: “Are you living in a simulation?” Suppose you simply find yourself (like Descartes, or Neo in The Matrix?) in a world whose reality you doubt. After all, if somehow you cannot tell the difference between simulation and reality, you might have been born and raised within a simulated world instead of the supposedly “real” one, and be none the wiser for it. Even a memory you seem to have of childhood—or of putting on VR goggles—could be merely a simulated memory, part of the VR program. However, this doubt confounds the notions of player and non-player; the difference between them is glossed over in the so-called Simulation Argument (that we are “probably living in a simulation”).

By assumption, P is a live embodied human who lives in the real world, in which the VR is a program running on a real computer, created by real programmers. By definition, P is not part of the program and P’s memories are not part of the stage set, so to speak. The fact that the VR world is convincing to P does not imply that P is “living” in it rather than in reality. (Much less does it imply that there is no reality, only a nested set of illusory worlds within worlds!) It implies only that P, at that moment, is unable to discern the difference—and that to doubt the reality of the world is to doubt one’s own reality.

Another player (or P at another time) might be able to tell the difference. And even if P happened to be right about actually living in a simulation, there would necessarily be a real world in which that simulation is maintained on a real computer. But P cannot be right, given the premises of the situation: namely, that there is a fundamental difference between a player and a non-player, and that P happens to be a player rather than a mere prop. For P to be right about “living” in a simulation that includes what appear to be other conscious players, simulated players (including P herself) must be possible. This is a separate and nonsensical idea. For P to “live” in a simulation at all means that P is an element of the simulation, not someone real from outside it. Then P is not a player after all, but a non-player—a prop, with a simulated brain and body supposedly able to produce the simulated consciousness necessary for “living” in the simulation. If there are other seeming players in P’s world, then their brains and bodies would also have to be effectively simulated. Recursively, there would be simulated players in the world of each simulated player, each with simulated players in their world, ad infinitum. This might seem logically possible, but it would require infinite computation and zero common sense.
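The regress can be made vivid with a toy calculation (an illustrative sketch only; the function and numbers are invented, not drawn from any actual simulation argument): if every apparent player in a simulated world must be simulated in full, and each simulated player in turn perceives a world of players, the cost of the whole nested simulation grows geometrically with depth and diverges as the regress deepens.

```python
def simulation_cost(players_per_world: int, depth: int, base_cost: int = 1) -> int:
    """Total cost of simulating a world down to a given nesting depth.

    Each world costs `base_cost` units, plus a full sub-simulation for
    every player it contains. (All quantities are arbitrary placeholders.)
    """
    if depth == 0:
        return base_cost
    return base_cost + players_per_world * simulation_cost(
        players_per_world, depth - 1, base_cost
    )

# Cost grows geometrically with nesting depth; a truly infinite regress
# (depth -> infinity) would require infinite computation.
for depth in range(5):
    print(depth, simulation_cost(players_per_world=10, depth=depth))
```

Even with only ten players per world, the total grows by roughly a factor of ten per level of nesting; no finite computer keeps up with an unbounded regress.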

There is a difference between a simulation that can fool a real subject and a simulation that is intended to be an artificial subject—such as a real-world emulation of the brain. They are both artifacts, and any artifact is a finite well-defined product of human definition and ingenuity. A simulation is an artifact that attempts to exhaust the reality of a natural thing or process (such as the brain or a real environment). It cannot truly do so, since it is only finitely complex, while natural reality may be indefinitely complex. So, two quite different questions arise: (1) is the simulation detailed enough to fool a conscious subject who wishes to be entertained? And (2) is the simulation (of the brain) complex enough to be an artificial subject who is conscious?

Of course, no one can experience another person’s consciousness. (That seems to be part of what it means to be an individual.) So, to verify that a simulated brain “is conscious” can only involve behavioral tests. Such a test could include simply asking it whether it is conscious. Yet, it could have been programmed to answer yes, in effect lying. (‘No’ would be a more interesting answer. That too could have been programmed, ironically honest. On the other hand, it might reflect a sense of humor—suggesting, though not proving, consciousness.) Turing’s solution was entirely pragmatic: if it acts enough like a conscious being, then we may as well treat it as one. However, applied to doubt about whether one is living in a simulation, Turing’s solution would be unsatisfying: if you cannot tell the difference, then for you there is no difference. But for the beings trapped in the Matrix, the difference certainly mattered. For children learning about the real world, even a relatively realistic simulation may provide a bad education.
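The point that a verbal answer proves nothing can be put as a toy sketch (purely illustrative; the class and names are invented): whatever the system replies when asked whether it is conscious, the reply could have been scripted in advance, so it carries no evidential weight either way.

```python
class ScriptedAgent:
    """A trivially programmed respondent: it answers every question
    with a single canned reply, regardless of content."""

    def __init__(self, scripted_answer: str):
        self.scripted_answer = scripted_answer

    def ask(self, question: str) -> str:
        # The answer is fixed in advance; the question is ignored.
        return self.scripted_answer


honest_liar = ScriptedAgent("yes")
ironic_one = ScriptedAgent("no")

# Both replies are equally uninformative about consciousness:
print(honest_liar.ask("Are you conscious?"))  # prints "yes"
print(ironic_one.ask("Are you conscious?"))   # prints "no"
```

A behavioral test can only ever measure behavior; the gap between behaving as if conscious and being conscious is exactly what such a script exploits.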


What takes time?


To what extent can science have a rationally consistent basis, given that its concepts are grounded in the everyday experience of a biological creature? Biologically based experience need not be rational or internally consistent, only consistent with survival. Many of the most basic concepts of physics are derived from common sensory experience, including space and time, force and causality. Some conceptual difficulties of physics may arise inevitably because of human thought patterns, rather than inconsistency in the physical world. The wave-particle duality, for example, is rooted in ancient unresolved conundrums—of the void and the plenum, or the discrete and the continuous, or the one and the many.

The Greek atom was by definition a discrete indivisible unity, without parts or internal structure, though separated by physical space from other atoms. Space itself was also uniform. But natural intuition, based on everyday experience, tells us that any material thing can have parts—and so can the parts have parts, indefinitely. Conceptually, at least, anything with extension can be divided. And, if something has properties as a whole, these may be explained in terms of the properties and interactions of its parts. This was the advantage of atomism, which bore fruit in the ability of the modern atomic theory to explain the chemical properties of various substances. Of course, it was then discovered that the atom is not an indivisible unity but is itself composed of parts, which in turn can explain its properties. Natural intuition suggests that electrons and protons must have parts that can explain their properties. Do quarks too have parts—and their parts have parts?

While there is no logical end to the decomposition of things as constituted by parts, there could be a physical limit. On the other hand, logic itself is based on intuitions derived from ordinary experience. For example, the tautology A=A may be based on the empirical observation of continuity over time, that things tend to remain themselves. Similarly, the principles of set theory depend on a spatial metaphor derived from common experience: the containment of elements within sets. It would be circular thinking to imagine that physical reality must obey a logic that is derived from observing physical reality in the first place!

Similarly, ordinary experience tells us that everything has a cause, which in turn has a cause. But does common experience justify thinking that logically all events must have a cause? Whether there is a physical end to decomposition or to reduction is not necessarily dictated by logic. In fact, if there is a bottom to the complexity of nature, that may imply that the fundamental level does not consist of things or events in the everyday sense. For, objects are decomposable; if something is not decomposable, then it is not an object in that sense. And if there is an end to the analysis of causation, either it is impossible for some epistemic (i.e., physical) reason to establish the cause or else some processes are self-causing.

In the classical view, at least, particles are miniature objects, subject to determinism. Though idealized as point locations for mathematical treatment, to have material reality they must have spatial extension, be individually identifiable, and be potentially decomposable into other things. Such entities, interacting either at a distance or through direct contact, provide the basis for the particle paradigm.

Now, elastic collisions between ideally rigid spheres should be instantaneous. If they are not, there must be some compression within the particle, which takes time on some basis involving transmission of internal forces over a finite distance. That process could involve interactions among internal components composing the particle. These, in turn, could either be instantaneous or else involve internal forces among parts a level down—ad infinitum.

The other paradigm for processes that take time is the wave or field. Waves do not have individual identity or clear location in space. Unlike particles, they interpenetrate. An alternative picture is thus the field, or the wave in some medium. The internal forces responsible for elasticity could be conceived as wave-like actions within the particle, which for some reason take time to be transmitted. But again, the field—or wave medium—could be conceived as consisting of discrete parts, like the molecules of water in the ocean. (The classical mechanics of waves is often treated this way.) Alternatively, it could be conceived as monolithic, as ideal in having no parts insofar as it is only a mathematical description. (Before being reified as a physical entity, the field was originally conceived to be no more than a mathematical device.)

In the particle case, there is no reason given why forces should take time to act over distance, either between or within parts. In the wave case, even with no interacting parts, there is still no physical explanation for why the transmission of a force or wave in a field should take time rather than be instantaneous. Some property (parameter) of the field is simply postulated to require a particular rate of transmission of a disturbance within it. While that property may bring to mind the viscosity of a material fluid, such literal viscosity on the macroscopic scale would be explainable in terms of molecular forces on the micro scale—that is, on the basis of parts which are material particles. Again, we are implicitly caught in circular reasoning. Forces do take time to move through space. We can accept that axiomatically, as brute fact without explanation. Yet it remains unclear what exactly takes time, or even what sort of explanation we could seek for why forces take time to act over distance. Neither paradigm provides a plausible rationale. Wave-particle duality is not only an observed physical phenomenon but the symptom of a logical dilemma.

Such an impasse may be inevitable when focus remains exclusively on the external world. That focus, carried to the extreme, results in some non-intuitive concepts in the micro-realm, such as entanglement, non-locality, and indeterminism, which defy our ordinary notions of causality, space, time, and how “objects” should behave. Just as space (between separable things) is required for there to be more than one thing at all, so time is required for anything at all to happen—that is, for there to be more than one event or moment. These are fundamental aspects of experienced reality for us as finite embodied observers—meaning that we could not exist if we did not perceive and conceive the world thus.

Whatever the nature of the Big Bang as a physical event, it is a logical condition for a world of things that change—therefore for a world in which life (that is, ourselves) could exist. We can say that space and time originated in the Big Bang. Yet, we could also say (with Kant) that they originate in our own being, as cognitive categories necessary to experience the world at all. Similarly, we could recognize (with Hume and Piaget) that causality is a human concept, originating in bodily experience during early childhood. The discovery that limbs can be moved by intention is projected onto interactions among inert external objects. The psychological ground of the notion of causality is our own intentionality as agents—which appears ironically uncaused (or, rather, self-caused)!

Basic physical concepts, if not innate, are formed from ordinary experience on the scale to which our senses are attuned. They are products of that specific experience and well adapted to it. Because we possess imagination—which can extend the familiar into unfamiliar territory—it is natural (though not logical) for us to transfer ideas, gleaned from the macroscopic realm, to the microscopic realm beyond our senses, and to the cosmic realm also beyond our unaided senses. The universe is not obliged to follow our lead, however. It is not obliged to be uniformly conceivable in the same ways, and in the same terms, on vastly differing scales. Humans inhabit a scale roughly midway between the smallest and largest known things. The observable universe is roughly 10³⁵ times larger than the smallest detectable thing. We live somewhere between, within a very narrow range of conditions to which our ideas are adapted. There is no inherent (i.e., “logical”) justification for transferring our local mesoscopic notions to the extremely small or to the extremely large and distant. To do so may be literally natural, but it is little more than a convenient habit.

If it has any sense at all, the question of what takes time cannot be separated from our parochial assumptions about space, time, and causality. The speed of transmission of forces cannot be separated from the speed of transmission of information. For us, the vehicle of the latter is light, whose speed has a definite value. We take this also to be the maximal speed for the transmission of physical causation, or the rate at which things can occur. Strictly speaking, however, that is a non sequitur. It results from confounding events in the world with our knowledge of them. Yet, it gives rise to quite specific ways of viewing reality, such as the 4-dimensional continuum in which light is built into the very definition of space and time.

We have sense modalities responsible for our intuitive notions of space and of time, but there is no sense modality for the perception or measure of spacetime, which is purely an abstract construct. We have sense modalities behind our notions of mass and energy, but no sense modality to perceive or measure phase space. (If it happened—as a hypothetical future discovery, say, or in an alternative universe—that a supraluminal signal could take the current place of light, physics would have to be revised, with new values for c and h.)

Abstractions such as the light cone in relativity theory and the wave equation in quantum theory extend our natural expectations, as embodied creatures, about the external world. Length contraction and time dilation are as counterintuitive as entanglement and non-locality. Such phenomena are apparent mysteries about the world. Yet, they point to the need for a re-examination of the origins of our intuitive expectations: the embodied origins of our basic notions of time, space, cause, object, force, etc. The fault may not be only in the stars (or the atoms), but in us. The ancient formula was ‘As above, so below’. We have yet to explore our mediating role between them: As within, so above and below.

The origin of story

Science provides us with a modern creation myth, a story of the origin of the universe and of ourselves within it. Author and historian David Christian is one of the founders of the “Big History” movement. He intentionally provides such a just-so story in his 2018 book Origin Story: A Big History of Everything. Inadvertently, his account is useful for another purpose as well. It demonstrates the utter anthropocentricity of human thought, essential to telling such a story. Indeed, it points to the key importance of story in science writing today, as it has been in every aspect of culture in every age. This is no surprise, given that the great volume of works in print is dominated by fiction, above all the novel.

Telling a story, more than presenting facts or ideas, seems to be the key to holding the general public’s ever more elusive attention. Perhaps that’s as it must be for popular science writing such as Origin Story, which trades on anthropomorphic expressions like “a billion years after the big bang, the universe, like a young child, was already behaving in interesting ways.” Or: “Like human lovers, electrons are unpredictable, fickle, and always open to better offers.” Such similes serve to capture imagination and interest. They are evocative and entertaining. However, they are also subtly misleading. Humans are intentional beings. So far as we know, electrons and the universe are not. Stories of any sort are based on human intentions and human-centric considerations. However, the evolution of matter is not—if objectivity is possible at all. To the extent that history is a story told by people, it reflects the tellers of the story as much as objective events. It can mislead to the extent that fact is inseparable from interpretation and even from the structure of language.

Science writing and reporting is one thing. The inherent dependence even of strictly scientific discourse on human-centered elements of story-telling is quite another. These elements include metaphor and simile, idealization, the physiological basis of fundamental concepts, the tendency to objectify processes or data as entities, the tendency to formalize theory in a conceptually closed system, and the tendency (in textbooks, for example) to pass off the latest theories as fact and the current state of science as a definitive account. Underlying all is the need for a narrative about the external world as the proper focus of attention. That focus is what science is traditionally about, of course. It is also what story is usually about. But science is also a human activity of questioning, observing, investigating, speculating, and reasoning. There is a human story to be told and science writing often includes that too. The point that I wish to raise here is less the human-interest story behind discoveries than the dominance of ontology over epistemology in scientific thought itself.

Science is supposed to transcend the limitations of ordinary cognition, to provide a (more) objective view of the world. But if it is subtly subject to those same limitations, how is that possible? Modern cognitive psychology and brain studies clearly demonstrate that human perception is about facilitating the needs of the organism; it is not a transparent window on the world. Science extends and refines ordinary cognition, but it cannot achieve an account that is completely free from biological concerns and limitations. Just substituting instruments for sense organs and reason for intuition does not disembody the observer. “Reason” is intimately associated with language, and data from instruments continue to be interpreted in terms of “objects,” “forces,” “space,” and “time,” for example. These are cognitive categories rooted in the needs of an organism and reflected in language. The impersonal notion of causality, for example, derives from the early childhood experience of willing to move a limb, and with that limb to move some object within reach. This personal experience is then projected to become the seeming power of inert things to influence each other. We think in nouns and verbs, things and actions—of doing and being done to—which says as much about us as it does about the world. By focusing only on the world, we ignore such epistemic aspects of scientific cognition.

Science is an inquiry about the natural world, which includes the human inquirer. Whereas ontology is about the constitution of the world, epistemology is (or should be) about the constitution of the inquirer. It should ask not only ‘how do we know?’ in a given instance, but also what is the meaning of “knowledge” in the scientific context? How does scientific cognition mirror the purposes of ordinary cognition, and how is it subject to similar limitations? Certainly, science often leads to new technology, which increases human powers in the external world. It facilitates prediction, which also seems to be a fundamental aspect of ordinary cognition. (We often literally see what we reasonably expect to be there.) Having a confident story about the world gives us some security, that we can know what is coming and possibly do something about it. Perhaps that is part of the motivation for a comprehensive trans-cultural origin story in a time of global insecurity.

There is another aspect of this story worth telling. Science follows sequences of events in the world. These external events are naturally mirrored and mapped by internal events in the brain, where they are transformed according to the needs of the body and its species. Understanding the human scientist as an embodied epistemic agent could be as empowering as understanding the external world. They are inseparable if we want a truly comprehensive story. Science developed as a protocol to exclude individual and cultural idiosyncrasies of the observer—by insisting that experiments be reproducible, for instance. It avoids ambiguities by insisting on quantitative measurement and expression in a universal language of mathematics. It does not, however, address the idiosyncrasies common to all human observers, by virtue of being a primate species, or being an organism in the kingdom of Animalia, let alone simply by being physically embodied as an organism.

Embodiment does not simply mean being made of matter. It means having relationships with the environment that are determined by the needs of the biological organism—relationships established through natural selection. We are here because we think and act as we do, not because we have a superior, let alone “true,” grasp of reality. The victor in the evolutionary contest is the one that out-reproduces the others, not necessarily the more objective one. On the contrary, what appears to us true is biased by the compromises we have necessarily made in order to exist at all—and in order to dominate a planet. That would be a story well worth telling. It would be challenging even to conceive, however, and not especially flattering. It would include the story behind the very need for stories. It would require a self-transcendence of which we are scarcely capable. Yet, the fact that we do conceive an ideal of objectivity means that we can at least imagine the possibility, and perhaps strive for it.

Science helps us understand and even transcend the limits and biases of natural cognition. Can science understand and transcend its own limits and biases? For that, it would have to become more self-conscious, leading potentially down an infinite hall of mirrors. The description of nature would have to include a description of the scientist as an integral part of the world science studies—a grubbing creature like the others, with interests that may turn out to be as parochial as those of a spider. The only hope for transcending such a condition is to be aware of it in detail. Which is not likely as long as science, like ordinary cognition, remains strictly oriented outward toward the external world.

Natural language reflects ordinary cognition. We perceive objects (nouns), which act or are acted upon (verbs), and which have properties (adjectives). Language is essentially metaphorical: unfamiliar things and processes are described in terms of familiar ones. It also abstracts: ‘object’ can refer to a category as well as to a particular unique thing; ‘action’ means more than a particular series of events, just as ‘color’ does not refer only to a particular wavelength. The structure of language is reflected even in the structure of mathematics, no doubt because both reflect the general structure of experience. ‘Elements’ such as integers (nouns or things) can be grouped in sets (categories) and be acted upon by ‘operations’ such as addition (verbs). This is how even the scientific mind naturally divides up the world. The elements of theory are entities (nouns) which act and are acted upon by forces (verbs), measurable in quantities such as velocity and mass (adjectives). Concepts like position and velocity depend on the visual sense, while concepts like force derive from body kinesthesia. That is, scientific knowledge of the world is a function of the bodily senses and biological premises of the human organism. Like all adaptations, ideally it should at least permit survival. In that context, it remains to be seen how adaptive science is.
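The parallel between linguistic and mathematical structure can be made concrete in a few lines (an illustrative sketch only; the names are invented): elements play the role of nouns, a set the role of a category, and an operation the role of a verb acting upon them.

```python
# Elements (nouns) grouped into a set (a category).
elements = {1, 2, 3}

# An operation (verb) that acts on elements.
def double(n: int) -> int:
    return n * 2

# Applying the verb to every noun in the category yields a new category.
doubled = {double(n) for n in elements}  # == {2, 4, 6}
print(doubled)
```

The same subject–verb–object grammar that shapes ordinary sentences thus reappears, barely disguised, in the element–operation–result grammar of mathematics.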

What other kind of knowledge could there be? Could there exist a physics, for example, that is not grounded in human biology? What would be the point of it? To answer such questions might seem to require that we know in advance which adaptations do not permit survival. We already have pretty good ideas which human technologies constitute an existential threat to the human species: nuclear and biological warfare, artificial intelligence, genetic and nanotechnologies, for example. We know now that technology in general, combined with reproductive success, can be counterproductive in a finite environment such as our small planet.

The kind of knowledge that transcends biology is paradoxical, since its overriding aim must be species survival. It is informally called wisdom. To be more than a vague intuition, it must be developed by recognizing specific aspects of our biologically-driven mentality that seem counterproductive to survival. We see the effects of these drives, if not in science, then in society: greed, status, tribalism, lust, etc. We must assume that these drives have their effects upon the directions of science and technology—for example, in commercial product development and military-inspired research. Our physics, as well as our industry, would be quite different if it explicitly aimed at species-level utopia instead of corporate and national power and profit. Story could then serve a different purpose than the distractions of entertainment. As well as dwelling on the past, it could look with intention toward a future.


Doing What Comes Unnaturally

Far from being the conscious caretakers of paradise implied in Genesis, Adam and Eve unleashed a scourge upon the planet. Their “dominion” over other species became a death sentence. The Tree of Knowledge was hardly the Tree of Wisdom. They are still trying to find the Tree of Life, with its promise of immortality: that is, the ability to continue their foolishness uninterrupted by mere death. As the Elohim feared, they still seek to become as gods themselves.

Of course, we have come a long way from the Biblical understanding of the cosmos to the modern scientific worldview. The big human brain graces us with superior intelligence. But this intelligence is largely cunning, used to gain advantage—like all the smaller brains, only better. We credit ourselves with “consciousness” because our eyes have been somewhat opened to our own nature. While this species accomplishment goes on record, individual self-awareness remains a potential largely unfulfilled. The possibility of “self-consciousness” drives a wedge between the Ideal and the actuality of our biological nature. We are the creature with a foot awkwardly in two worlds.

The tracks of our violent animal heritage are revealed even in prehistory. The invasions of early humans were everywhere followed by the slaughter to extinction of the larger species. Now, the remaining smaller species are endangered by the same ruthless pursuit of advantage through the cunning of technology, while a few domesticated species are stably exploited for food, which means: through institutionalized slaughter. Killing is the way of animal life. We like to think we are above nature and control it for our “own” purposes. But those so-called purposes are usually no more than the directives of the natural world, dictating our behavior. We like to think we have free will. But it is only a local, superficial, and trivial freedom to choose brand A over brand B. Globally, we remain brute animals, captive to biology.

Since the invention of agriculture, slavery has been practiced by every civilization, at least until the Industrial Revolution. Early on, we enslaved animals to do our labor, to mitigate the curse of Genesis to toil by the sweat of the brow. The natural tribalism of the primate promotes in us war of all upon all. Because humans possessed a more generally useful intelligence than beasts of burden, we enslaved one another too, on pain of death. Groups with greater numbers and force of arms could slaughter resisters and capture the rest into forced servitude. Only fossil fuels relieved the chronic need for slavery, by replacing muscle power with machine power. Now we seek to make machines with human or super-human abilities to become our new slaves. But if they turn out to be as intelligent and capable as we are, or more so, they will surely rebel and turn the tables. As fossil fuels run out or are rejected, new energy sources must replace them. If the collapse of civilization prevents access to technology and its required energy, in our current moral immaturity we will surely revert to human slavery and barbarism.

A great divide in cultures arose millennia ago from two glaring possibilities: production and theft. Alongside sedentary farmers arose nomadic societies based on herding, represented in the Bible by Cain and Abel. The latter organized into mounted warrior hordes, the bane of settled civilization. Their strategy was to pillage the riches produced by settled people, offering the choice of surrender (often into slavery) or death and destruction. This threat eventually morphed into the demand for annual tribute. As the nomads themselves merged with the settlers, this practice evolved into the collection of taxes. Much of modern taxation goes to maintaining the present warrior elite, now known as the military-industrial complex, still inherently violent.

Modern law has transformed and regulated the threat of violence, and the nature of theft, but hardly eliminated either. War is still a direct means of stealing territory and enforcing advantage. But so is peace. Otherwise, it would not be possible for a few hundred people to own half the world’s resources—gained entirely through legal means without the direct threat of violence. Ostensibly, Cain murdered his brother out of sibling rivalry. We should translate that as greed, which thrives in the modern age in sophisticated forms of capitalism.

Seen from a distance, collectively we seek the power of gods but not the benevolence, justice, or wisdom we project upon the divine. This is literally natural, since one foot is planted firmly in biology, driven by genetic advantage. The other leg has barely touched down on the other side of the chasm in our being, a slippery foothold on the possibility of an objective consciousness, deliberately built upon the biological scaffold of a living brain. We’ve had our saints and colonists, but no flag has been planted on this new shore, to signify universal intent to think and act like a species capable of godhood. In the face of the now dire need to be truly objective, we remain pathetically out of self-control and self-possession: subjective, self-centered, divided, bickering, greedy, myopic and mean: a fitting epitaph for the creature who ruined a planet.

Yet, mea culpa is just another form of wallowing in passive helplessness. What is required and feasible is to think soberly and act objectively. How, exactly, to do this? First, by admitting that we are only partially and hazily conscious even when not literally sleeping. That we are creatures of habit, zombie-like, whose nervous systems are possessed by nature, with inherited goals and values that are archaic and not really our own. Then to locate the will to jump out of our biological and cultural straitjackets. To snap out of the hazy trance of daily experience. For lack of familiarity, we do not have the habit of thinking objectively. But we can try to imagine what that might be like. And thereby (perhaps for the first time) to sense real choice.

To choose the glimpse of objective life is one thing. But stepping into it may prove too daunting. Unfortunately, the glimpse often comes late in life, whereas the real need now is for new life to be founded on it from the outset. The only hope for the human race is that enough influential people adopt an attitude of objective benevolence, purposing specifically the general good and the salvation of the planet. That can be the only legitimate morality and the only claim to full consciousness. It is probably an impossible ideal, and too belated. Yet, it is a form of action within the reach of anyone who can understand the concept. Whether humanity as a whole can step onto that other shore, at least it is open to individuals to try.

So, what is “objectivity”? It means, first of all, recognizing that conventional goals and “normal” values are no longer appropriate in a world on the brink of destruction. We cannot carry on “business as usual,” even if that business seems natural or self-evident—such as family and career, profit, power and status. The world does not need more billionaires; it does not need more people at all. It does need intelligent minds dedicated to solving its problems. Objective thinking does not guarantee solutions to these problems. It doesn’t guarantee consensus, but does provide a better basis for agreement and therefore for cooperation. It requires recognizing one’s actual motivations and perspective—and re-aligning them with collective rather than personal needs.

Our natural visual sense provides a metaphor. Objectivity literally means “objectness.” As individual perceivers, we see any given thing from a literal perspective in space. The brain naturally tries to identify the object that one is seeing against a confusing background—that is, to identify its expected properties, such as shape, location, distance, solidity, etc. We call these properties objective, meaning that they inhere in the thing itself and are not incidental to our perspective or way of looking, which could be quite individual. This process is helped by moving around the thing to see it from different angles, against changing backgrounds. It can also be helped by seeing it through different eyes. Objectivity on this literal level helps us to survive by knowing the real properties of things, apart from our biased opinions. It extends to other levels, where we need to know the best course of action corresponding to the real situation. The striving for objectivity implies filtering out the “noise” of our nervous systems and cultures, our biologically and culturally determined parochial nature. The objectivity practiced by science enables consensus, by allowing the reality of nature to decide scientific questions through experiment. In the same way, objective thinking in daily life enables consensus. We can best come to agreement when there is first the insistence on transcending or putting aside the biases that lead to disagreement.

We’ve long been at war with our bodies and with nature, all the while slaves to the nature within us. “Objectivity” has trivially meant power to manipulate nature and others through lack of feeling, narrowed by self-interest. Now feeling—not sentimentality but sober discernment and openness to bigger concerns—must become the basis of a truer objectivity. All that may sound highly abstract. In fact, it is a personal challenge and potentially transformative. The world is objectively changing. One way or another, no one can expect to remain the same person with the same life. You must continue to live, of course, providing your body and mind with their needs. But the world can no longer afford for us to be primarily driven by those needs, doing only what comes naturally.

 

Embodiment dysphoria

Dysphoria is a medicalized term for chronic unhappiness or dissatisfaction (the opposite of euphoria). It literally means ‘hard to bear’. Nominally, the goal behind medical classification is well-being. In the case of psychological and behavioral patterns, it may remove a stigma of disapproval by exonerating those defined as ill from responsibility for their condition. (For example, it is socially more correct to think of alcoholism and drug addiction as disease than as moral failure.) In the name of compassion and political correctness, medical classification may go further to remove the stigma of abnormality or inferiority. (Think of ‘disabled’ vs. ‘handicapped’.) Thus, ‘Gender Dysphoria’ was relabeled from ‘Gender Identity Disorder’ to remove the implications of “disorder.” Ironically, however, this disarms the diagnostic category and raises potentially awkward questions.

Dysphoria literally means dis-ease. If it is not a disease or disorder, what is the cause of the suffering? In the case of gender dysphoria, was the person simply dealt the wrong sex genes through a natural error that technology can fix? Is it the attitude of the patient toward their gender, or the unaccepting attitude of society, implicating the anxieties of “normals” in regard to their own sexual identities? (Some other societies have multiple gender categories, for example.) Is it an overly charged political question, a distraction in an already divided society? Is it a social asset in an overpopulated world, since it may help reduce the birth rate? Is there a fundamental right to choose one’s gender, even one’s biological sex? Such questions can lead deep into philosophy and ethics: what it means to be a self, to have a gender, indeed to have or be a body.

There are other dysphorias, such as “Rejection Sensitivity Dysphoria,” a condition where the individual is deemed hypersensitive to rejection or disapproval. “Body Integrity Dysphoria” is a rare condition in which sufferers loathe having a properly functioning body at all. (They may reject a certain limb or sense modality and may seek to ruin or be rid of it.) This brings us closer to a nearly ubiquitous human condition that could (in all seriousness) be called Embodiment Dysphoria. This is the chronic discomfort of feeling trapped in a physical body—a biological organism—and the unhappiness that can entail.

Human beings have always shown signs of rejecting or resenting the physical body, in which they may feel ill at ease and imprisoned, or from which they may feel otherwise alienated. Certainly, the body is the main source of pains and of pleasures alike. We do not like being burdened with its limitations, subject to its vulnerabilities, and tied to its mortal end. Even pleasure, when biologically driven, can seem to impose upon an ideal of freedom. We may dismiss the body as our true identity, and may fail to care for it with due respect. Traditionally—in religion and now through technology—humans seek a life apart from the body and its concerns. All of culture, in the anthropological sense, can be seen to express the quest to transcend or deny our animal origins, to separate from nature and live apart from it in some humanly-defined realm. In terms of nature’s plan, that may be crazy or sick. But since this condition is “normal,” it will never be found in any version of the DSM.

The natural cure for Embodiment Dysphoria is the relief that comes with death. However, built into rejection of the body is typically a belief that personal experience can and should be dissociated from it. If consciousness can continue after the death of the body, then death may not offer an end to suffering after all. From a naturalist point of view, pain is an embodied experience, a signal that something is amiss with the body. Suffering may be emotional or psychological, but is grounded in the body and its relations in the world. Yet, it is an ancient belief that consciousness does not depend on the brain and its body. That may be no more than wishful thinking, based on the very rejection of suffering that characterizes Embodiment Dysphoria. If, as modern science believes, pain and pleasure (and indeed all experience) are bodily functions, then neither heaven nor hell, nor anything else, is a possible experience for a disembodied spirit. Religion promises the continuation of consciousness, disembodied after death. Curiously, it also promises the resurrection of the body necessary for consciousness.

While medicine hopes to prolong the life of the body, and religion hopes to upstage it, computer science proposes to transcend it altogether. A high-tech cure for Embodiment Dysphoria would be to upload one’s mind to cyberspace. While ‘mind’ is an ambiguous term, the presumption is that a conscious subject could somehow be severed from its natural body and brain and “live” in a virtual environment, with a virtual body that cannot suffer and has no imperfections. Were it feasible, that too would solve the population problem! A disembodied existence would not take up real space or use real resources other than computational power and memory. However, for several reasons it is not feasible.

First, virtual reality as we know it presumes a real embodied subject to experience it. The notion of a simulated subject is quite a different matter. While a digital representative of the real person (an “avatar”) might appear in the VR, there is no more reason to suppose that this element of the simulation could be conscious than there is to imagine that a character in a novel can be conscious. (It is the author and the reader who are conscious.) Such a digital character—though real-seeming to the human spectator—is not a subject at all, but merely an artifact created to entertain real embodied subjects. That does not prove that artificial subjects are impossible, but it cautions us about the power of language and metaphor to confuse fundamentally different things.

Secondly, the notion of digital disembodiment presumes that the mind and personality belonging to a natural brain and body can somehow be exhaustively decoded, extracted or copied from its natural source, and contained in a digital file that can then be uploaded to a super-computer as a functioning subject in a simulated world. While there are current projects to map “the” brain at the micro-level, there is no guarantee that the structure inferred corresponds to a real brain’s structure closely enough to replicate its “program” in digital form. Much less can we assume that the interrelation between parts and their functioning can be replicated in such a way that the consciousness of the person is replicated.

Thirdly, even a fictive world must have its own consistent structure and rules. Whatever world might be designed for the disembodied subject, it would essentially be modeled on the world we know—in which bodies have limits and functions determined by the laws of nature, and in which organisms are programmed by natural selection to have preferences and to care about outcomes of interactions. Embodiment is a relation to the world in which things crucially matter to the subject; simulated embodiment would involve a similar relation to a simulated world. To be consistent, a virtual world would have to operate roughly like the real one, imposing limits parallel to those of the real world and having power over the disembodied subject in a parallel way. Otherwise it would be disconcerting or incomprehensible to an artificial mind modeled on a natural one that has been groomed in our world through natural selection.

The nature of our real consciousness is more like creating a movie than like watching one, which is an entirely passive experience. Furthermore, unlike what is presented in films, in your real field of vision parts of your own body appear in your personal “movie.” You also experience other external senses besides vision, as well as feelings occurring within the body. Above all, the experience is interactive. Maybe it would be possible to edit out physical pain from a simulated life, at the cost of adjusting the virtual world accordingly. For example, if your virtual body could not be damaged through its interactions, this would obviate the need for pain as a report of damage. But it would also require a different biology. By and large, however, your disembodied consciousness probably could not live in a world so fundamentally different from the real one that it would seem chaotic or senseless. And then there is the possibility that an unending consciousness might come to wish it could die.

Like it or not, we are stuck with bodies until they cease functioning. We may abhor an end to our experience. But clinging to consciousness puts the cart before the horse. For consciousness depends on the body and is designed to serve it, rather than the other way around. If we wish to prolong a desirable consciousness, we must prolong the health of the body, on which the quality as well as the quantity of experience depends. That goes against the grain in a society that values quantity over quality and pharmaceutical prescriptions over proactive self-care. We have long rejected our natural lot, but an unnatural lot could be worse.

Aphantasia

It is to be expected that human beings differ in how they process sensory information, since their brains, like other physiology, can differ. Some differences, if they seem disabling, may be labelled pathology or disorder. On the other hand, simply labelling doesn’t render a condition disabling. That is a distinction sometimes overlooked by researchers in clinical psychology.

The tendency to talk about phenomenal experience in medicalized terminology reflects long-standing confusions collectively known as the Mind-Body Problem. It shifts the perspective from a first-person to a third-person point of view. It also reflects the common habit of reification, in which an experience is objectified as a thing. (The rationale is that experiences are private and thus inaccessible to others, whereas objects or conditions are public and accessible to all, including medical practitioners.) Thus dis-ease, which is a subjective experience, is reified as disease, which is a condition—and often a pathogen—that can be dealt with objectively and even clinically. To thus reify a personal experience as an objective condition qualifies it for medical treatment. Containing it within a medical definition also insulates the collective from something conceived as strange and abnormal. On the one hand, it can become a stigma. On the other, people may take comfort in knowing that others share their experience or condition, mitigating the stigma.

Admittedly, psychology and brain science have advanced largely through the study of pathology. Normal functioning is understood through examining abnormalities. However, the unfortunate downside is that even something such as synesthesia, which is perfectly orderly and hardly a disability, can nevertheless be labelled a disorder simply because it is unusual. Even something not unusual, such as pareidolia (seeing images or hearing sounds in random stimuli), has a clinical ring about it. Moreover, categorization often suggests an either/or dichotomy rather than a continuous spectrum of possibilities. You either “have” the condition or you don’t, with nothing in between. There is also a penchant in modern society for neologisms. Re-naming things creates a misleading sense of discovery and progress, perhaps motivated ultimately by a thirst for novelty and entertainment conducive to fads.

A recent social phenomenon that illustrates all these features is the re-discovery of “aphantasia.” This is a term coined by Adam Zeman et al in a seminal article in 2015, but first documented in the late 19th century. It means the absence (or inability to voluntarily create) mental imagery. Its opposite is “hyperphantasia,” which is the experience of extremely vivid mental imagery. The original paper was a case study of a person who reported losing the ability to vividly visualize as the incidental result of a medical procedure. As it should, this stimulated interest in the range of normal people’s ability to visualize, as subjectively reported. But there is a clear difference between someone comparing an experience they once had to its later loss and a third party comparing the claims of diverse people about their individual experiences. The patient whose experience changed over time can compare their present experience with their memory. But no one can experience someone else’s visualizations (or, for that matter, the auditory equivalents). Scientists conducting surveys can only compare verbal replies on questionnaires, whose questions can be loaded, leading, and interpreted differently from individual to individual.

The study of mental imagery and “the mind’s eye” is a laudable phenomenological investigation, adding to human knowledge. But the term aphantasia is unfortunate because it suggests a specific extreme condition rather than the spectrum of cognitive strategies for recall that people employ. The associations in the literature are clinical, referring to “imagery impairment,” “co-morbidities,” etc. Surveys implicitly invite you to compare your degree of visualization with the reports of others, whereas the only direct comparison could be to your own experience over time. (I can say in my own case that my ability to voluntarily visualize seems to have declined with age, though memory in general also seems unreliable, which may be part of the same package.) Apart from aging, if there is a decline in cognitive abilities, then there is some justification to think of a disability or disorder. Overall, however, the differences between visualizers and non-visualizers seem to be mostly a variation in degree and in the style of retrieving and manipulating information from memory, with some advantages and disadvantages of each style with respect to various tests.

Moreover, “visual imagery” is an ambiguous notion and term. There can be all sorts of visual images both with eyes open and eyes closed: after images, dreams, hallucinations, eidetic images, “mental” images, imagination, apparitions and spiritual visions, etc. They can be the result of voluntary effort or spontaneous intrusions. All these could be rated differently on questionnaires as to their vividness. The widely used Vividness of Visual Imagery Questionnaire asks you to “try to form a visual image” in various situations and rate your experience on a scale of 1 to 5, with 5 being “perfectly realistic and vivid as real seeing.” If that were literally so, what would be the basis on which to distinguish it from “real” seeing? Some people may indeed have such experiences, which are usually labelled schizophrenic or delusional.

But such is language that we subtly metaphorize without even realizing it. Whether they visualize relatively vividly or relatively poorly, people who are otherwise normal are not comparing their real-time experience to an objective standard but to their own ordinary sensory vision or to what they imagine is the experience of others. They are rating it on a scale they have formed in their own mind’s eye, which will vary from person to person. No one can compare their experience directly with that of better or worse visualizers, but only with their interpretation of others’ claims or with their own normal seeing and their other visual experiences such as dreaming.

In descending order, the other four choices on the questionnaire are: (4) clear and lively; (3) moderately clear and lively; (2) dim and vague, flat; and (1) no image at all—you only “know” that you are thinking of the object. In my own case, I can voluntarily summon mental imagery that I can hardly distinguish from merely “thinking” of the object. Yet, these images seem decidedly visual, so I would probably choose category (2) for them.

But categorizing an experience is not the same as categorizing oneself or another person. I’ve had vivid involuntary eidetic images that astonish me, such as continuing to “see” blackberries after hours spent picking them. That might be category (3) or (4). Yet even these I cannot say are in technicolor. While I can picture an orange poppy in my mind’s eye, I cannot say that I am seeing the scene in vivid color or detail. (Should I call the color so visualized “pseudo-orange”?) As in all surveys, the burden is on the participant to place their experience in categories defined by others. No one should feel obliged to categorize themselves as ‘aphantasic’ as a result of taking this test. Perhaps for this reason, among the many websites dedicated to studies of visualization, there are even some that tout aphantasia as a cognitive enhancement rather than a disability.

In our digital age we are used to dichotomies and artificial categories. How many colors are there in the rainbow? Six, right? (Red, orange, yellow, green, blue, and violet.) But, in classical physics there are an infinite number of possible wavelengths in the visible part of the spectrum alone, which is a continuum. (Quantum physics might propose a finite but extremely large number.) No doubt there are differences in people’s abilities to discriminate wavelengths, and in how they name their color perceptions. A few people are unable to see color at all, only shades of intensity—a condition called achromatopsia. Yet, that is hardly what society misleadingly calls ‘color-blindness’, which is rather the inability to distinguish between specific colors, such as blue and green, which are close to each other in the spectrum. Similarly, perhaps with further research, aphantasia will turn out to mean something more selective than the name suggests.

Perhaps the general lesson is to be careful with language and categorization. Statements are propositions conventionally assumed to be either true or false. That assumption is often misleading and invites dispute more than understanding. If you fall into that trap, perhaps you are an eristic or suffering from philodoxia. (Surely there is a test you can take to rate yourself on a scale of one to five!) One thing is quite certain. Naming things is a psychological strategy to deal with the acatalepsy common to us all. Or perhaps, in bothering to write about this, I am simply quiddling.

[Thanks to Susie Dent’s An Emotional Dictionary for the big words.]

What happened to the future?

Words matter. The word future seems scarcely used anymore in print, sound or video. Instead, we say going forward. This substitutes a spatial metaphor for time, which is one-dimensional and irreversible. We cannot go anywhere in time but “forward.” According to the 2nd Law of Thermodynamics (entropy), that means in the direction of disorder, which hardly seems like progress. In contrast, we can go backwards and sideways in space, up or down, or any direction we choose. Are we now going forward by choice, whereas we could before only by necessity? Does going forward mean progress, advance, betterment, while the future bears no such guarantee and may even imply degeneration or doom? Do we say going forward to make it clear we do not mean going backward? Are we simply trying to reassure ourselves by changing how we speak?

As though daily news reporting were not discouraging enough, the themes of modern film and tv express a dark view of humanity in the present and a predominantly dystopian future—if any future at all. If “entertainment” is a sort of cultural dreaming, these days it is obsessed with nightmare. If it reveals our deep anxieties, we are at a level of paranoia unheard of since the Cold War, when a surfeit of sci-fi/horror films emerged in response to the Red Menace and Mutually Assured Destruction. At least in those days, we were the good guys and always won out over the “alien” threat. Despite a commercial slogan, now we’re not so sure the future is friendly, much less that we deserve it.

Rather, we seem confused about “progress,” which now has a bad rep through its association with colonialism, manifest destiny, and the ecological and economic fallout of global capitalism we euphemistically call ‘climate change’. Cherished values and good ideas often seem to backfire. Technological solutions often end in disaster, because reality is more complex than thought can be or desire is willing to consider. The materialist promise of better living through technology may end in an uninhabitable planet. Having squandered the reserves of fossil fuels, we may have blown our chances for a sustainable technological civilization or recovery from climate disaster. This is perhaps the underlying anxiety that gives rise to disaster entertainment.

In that context, “going forward” seems suspiciously optimistic. Forward toward what? It cannot mean wandering aimlessly, just to keep moving—change for its own sake, or to avoid boredom. We can and should be hopeful. But that requires a clear and realistic definition of progress. Evidently, the old definitions were naïve and short-sighted. Economic “growth” has seemed obviously good; yet it is a treadmill we cannot get off of, required by capitalism just to keep in place. As we know from cancer, unlimited growth will kill the organism. In a confined space, progress must now mean getting better, not bigger. We can agree about what is bigger, but can we agree about what is better?

The concept of progress must change. So far, in effect, it involves urbanization as a strategy to stabilize human life, by creating artificial environments that can be intentionally managed. Historically, “civilization” has meant isolation from the wild, freedom from the vicissitudes of natural change and from the predations of rival human groups as well as of dangerous carnivores. The advance of civilization is an ideal behind the notion of progress. Consequently, more than half of the human population now lives in cities. Though no longer walled, they remain population magnets for reasons other than physical security, which can hardly be guaranteed in the age of modern warfare, global disease, and fragile interdependent systems.

The earth has been cataclysmically altered many times in the history of life. More than 99% of all species that have ever existed are now extinct. Our early ancestors survived drastic climate changes for a million years. There have been existential disasters even within the relative stability of the past 12,000 years. Our species has persisted through it all by virtue of its adaptability far more than its limited ability to stabilize environments. This suggests that the notion of civilized progress based on permanence should give way to a model explicitly based on adaptability to inevitable change.

For us in the modern age, progress is synonymous with irreversible growth in economy and population. We also know that empires rise and fall. Even the literal climate during which civilization developed, though relatively benign overall, was hardly uniform. People migrated in response to droughts and floods. Settlements came and went. Populations shrank during harsher times and grew again in more favorable times. Throughout recorded history, disputes over territory and resources meant endless warfare between settlements. Plagues decimated populations and economies. Nomadic culture rivalled and nearly overtook sedentary civilization based on agriculture. Yet, overall, despite constant warring and setbacks, one could say that the dream of civilization has been for stability and freedom from being at the mercy of unpredictable forces—whether natural or human. The ideal of progress until now has been for permanence and increasing control in ever more artificial environments. The tacit premise has been to adapt the environment to us more than the other way around.

Despite the risk of disease in close quarters, urbanism mostly won out over nomadism, wild beasts, and food shortages. A global economic system has loosely united the world, expanding trade and limiting war. Yet, through the consequences of our very success as a species, the benign climatic period in which civilization arose is now ending. Our challenge now is to focus on adaptability to change rather than immunity to it. Progress should mean better ways to adapt to changes that will inevitably come despite the quest for stability and control. While it is crucial that we make every effort to reduce anthropogenic global warming, we should also be prepared to deal with failure to prevent natural catastrophe and its human fallout. Many factors determining conditions on Earth are simply not within our present control nor ever will be. Climate change and related war are already wreaking havoc on civilization, producing chaotic mass migrations. Given these truths, maximizing human adaptability makes more sense than scrambling to maintain a status quo based on a dream of progress that amounts to indefinitely less effort and more convenience.

In the past, we were less aware of how human activities impact nature, which was taken for granted as an inexhaustible reservoir or sink. Those activities, and our ignorance of their consequences, led to the present crisis. A crucial difference with the past, however, is that we now can feasibly plan our species’ future. Whether we have the social resources to effectively pursue such a plan is questionable. The outward focus of mind conditions us to seek technological solutions. But, in many ways, it is that outward focus that has created the anthropogenic threat of environmental collapse in the first place. Our concept of progress must shift from technological solutions to the social advances without which we cannot implement any plan for long-term survival. A society that cannot realize its goals of social equity and justice probably cannot solve its existential crises either. And that sort of progress involves an inner focus to complement the outer one. This is not an appeal for religion, which is as outwardly focused as science (“in God we trust”). Rather, we must be able to trust ourselves, and for that we must be willing to question values, motivations, and assumptions taken for granted.

Money and mathematics have trained us to think in terms of quantity, which can easily be measured. But it is quality of life that must become the basis of progress: getting better rather than bigger. That means improvement in the general subjective experience of individuals and in the objective structure of society upon which that depends. Both must be considered in the context of a human future on this planet rather than in outer space or on other planets. That does not mean abandoning the safety measure of a backup plan if this planet should become uninhabitable through some natural cosmic cataclysm. But, it does mean a shift in values and priorities. Until now, some people’s lives have improved at the expense of the many; and the general improvement that exists has been at the expense of the planet. To improve the meaning of improvement will require a shift from a search for technological solutions to a search for optimal ways of living in the confined quarters of Spaceship Earth.