The Life of P: real or simulated?

Increasingly sophisticated computer technology obliterates the distinction between reality and imagination or artifact. In popular science reporting, for example, a distinction is frequently no longer made between an actual photograph and a computer-generated graphic—which used to be (and ought to be) clearly labelled “artist’s conception.” While computer animation extends imagination, it only approximates reality in a symbolic way, sometimes even ignoring or falsely portraying basic physics. (Just think of the noisy Star Wars battles in outer space, where there is no air for sound to travel in!) Old-style hand-drawn animation used to do this too, with cartoon protagonists blown up and springing back to life, or running off the edge of a cliff and only falling when they realize their precarious situation. These were gags that no one (except perhaps unsuspecting young children) took literally. They were hardly intended to be realistic.

However, the intention of virtual-reality and digital game producers, like that of the modern film industry, is to create ever more realistic ‘worlds’ as entertainment. This could pose a dilemma for a virtual-reality user who does not, for some reason (perhaps from spending too much time in VR), clearly understand the difference between reality and fiction. This could well be the case for young people educated with computer graphics. Confronted with the VR producer’s intentional deceptions, how can the user be expected to know that the computer graphic or VR world is not photography or genuinely realistic?

This question recalls the doubt first expressed in modern times by Descartes and more recently by the Matrix films. The question actually has two parts, one pertaining to the VR world itself (which is a human production, like a novel or cartoon) and one pertaining to the user (who is a consumer desiring to be entertained). In terms of the former, the question is whether there are telltale signs of simulation by which an astute observer could distinguish the VR world from the real one—for example, a “glitch” in the computer program. There is, after all, a limit to the detail a simulation can provide, and there could be computer error. But as far as the effectiveness of the illusion is concerned, this limit is relative to the user’s cognitive capacities, which are also limited. The user must, on some basis, be able to tell the difference, which brings us to the other part of the question.

The user is a biological organism who lives in the real physical world, but enters the VR world as into a game, voluntarily and often with other players, like entering on stage with other actors. There is a conditional willing suspension of disbelief, which in traditional entertainment is asked primarily of a passive audience. Unlike readers of a novel or the theater audience for a play, the VR user is both actor and audience. An online game can be interactive with other real players, who have a life outside the game—offstage, so to speak. It also provides a virtual world with which to interact, which includes other human or non-human figures that are not actual players. These non-players are not conscious subjects but fictions in the VR, defined by the program as part of the stage sets. (Digital animation allows the “stage” to be dynamic, constantly changing.) While the non-players are not simple cardboard cut-outs, they are no more than part of that programmed dynamic stage set. Therein lies a key difference between real human beings (or creatures) and simulated ones. Real agents have their own agency; simulated agents are fictions that express only the agency of the real people who create the program.

The VR user is a player in a virtual-reality—a real person who chooses to engage with the VR in order to have a certain kind of experience. This player—call her P—may or may not be represented in the virtual ‘world’ by an avatar. (As P, you could be seeing yourself as a character in the story, but in any case you are seeing the VR world through your own real eyes.) Since it is provided by a computer program that is necessarily finite, the virtual world is necessarily limited, furnishing only a finite variety of possible inputs to P’s senses. In principle, that is a key difference between the VR world and the real world, corresponding to a fundamental difference between artifacts and natural found things.

However, the situation is actually similar for ordinary experience by real people in the real world. The human nervous system only processes finite information, from which it fabricates the natural illusion of an external world. The difference between natural input of the senses and the input of VR is only relative. We know that the VR is not real when we know that we are wearing a VR headset or some such thing. For the illusion of a virtual reality to be complete (as in The Matrix), no such clue must be available. P must be unaware of the deception and unable to recall entering the VR world from an existence outside it.

That conceivable possibility brings us to another contemporary confusion, expressed provocatively in the rhetorical question: “Are you living in a simulation?” Suppose you simply find yourself (like Descartes, or Neo in The Matrix) in a world whose reality you doubt. After all, if somehow you cannot tell the difference between simulation and reality, you might have been born and raised within a simulated world instead of the supposedly “real” one, and not be the wiser for it. Even a memory you seem to have of childhood—or of putting on VR goggles—could be merely a simulated memory, part of the VR program. However, this doubt confounds the notions of player and non-player; the difference between them is glossed over in the so-called Simulation Argument (that we are “probably living in a simulation”).

By assumption, P is a live embodied human who lives in the real world, in which the VR is a program running on a real computer, created by real programmers. By definition, P is not part of the program and P’s memories are not part of the stage set, so to speak. The fact that the VR world is convincing to P does not imply that P is “living” in it rather than in reality. (Much less does it imply that there is no reality, only a nested set of illusory worlds within worlds!) It implies only that P, at that moment, is unable to discern the difference—and that to doubt the reality of the world is to doubt one’s own reality.

Another player (or P at another time) might be able to tell the difference. And even if P happened to be right about actually living in a simulation, there would necessarily be a real world in which that simulation is maintained in a real computer. But P cannot be right, given the premises of the situation: namely, that there is a fundamental difference between a player and a non-player, and that P happens to be a player rather than a mere prop. For P to be right about “living” in a simulation that includes what appear to be other conscious players, simulated players (including P herself) must be possible. This is a separate and nonsensical idea. For P to “live” in a simulation at all means that P is an element of the simulation, not someone real from outside it. Then P is not a player after all, but a non-player—a prop, with a simulated brain and body supposedly able to produce the simulated consciousness necessary for “living” in the simulation. If there are other seeming players in P’s world, then their brains and bodies would also have to be effectively simulated. Recursively, there would be simulated players in the world of each simulated player, each with simulated players in their world, ad infinitum. This might seem logically possible, but it would require infinite computation and zero common sense.
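The regress can be made vivid with a toy calculation (purely illustrative; the function and its parameters are my own hypothetical construction, not part of the Simulation Argument itself): if every apparent player must be a fully simulated subject, then every nested world multiplies the subjects to be computed, and there is no principled level at which the nesting can stop.

```python
# Toy illustration of the regress: if each of `players` apparent players is
# itself a full simulation containing its own world of players, the subjects
# to be computed multiply at every level of nesting.

def subjects_needed(players: int, depth: int) -> int:
    """Simulated subjects required if the regress is (arbitrarily) cut off
    after `depth` levels of nesting. As depth grows without bound, so does
    the required computation."""
    if depth == 0:
        return 0
    # each player is one subject plus an entire nested world of subjects
    return players * (1 + subjects_needed(players, depth - 1))

# the count grows geometrically with nesting depth
print([subjects_needed(3, d) for d in range(1, 6)])  # [3, 12, 39, 120, 363]
```

The arbitrary `depth` cut-off is exactly what the argument in the text denies: a simulation “all the way down” corresponds to unbounded depth, and hence to infinite computation.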

There is a difference between a simulation that can fool a real subject and a simulation that is intended to be an artificial subject—such as a real-world emulation of the brain. They are both artifacts, and any artifact is a finite well-defined product of human definition and ingenuity. A simulation is an artifact that attempts to exhaust the reality of a natural thing or process (such as the brain or a real environment). It cannot truly do so, since it is only finitely complex, while natural reality may be indefinitely complex. So, two quite different questions arise: (1) is the simulation detailed enough to fool a conscious subject who wishes to be entertained? And (2) is the simulation (of the brain) complex enough to be an artificial subject who is conscious?

Of course, no one can experience another person’s consciousness. (That seems to be part of what it means to be an individual.) So, to verify that a simulated brain “is conscious” can only involve behavioral tests. Such a test could include simply asking it whether it is conscious. Yet, it could have been programmed to answer yes, in effect lying. (‘No’ would be a more interesting answer. That too could have been programmed, ironically honest. On the other hand, it might reflect a sense of humor—suggesting, though not proving, consciousness.) Turing’s solution was entirely pragmatic: if it acts enough like a conscious being then we may as well treat it as one. However, applied to doubt about whether one is living in a simulation, Turing’s solution would be unsatisfying: if you cannot tell the difference, then for you there is no difference. But for the beings trapped in the Matrix, the difference certainly mattered. For children learning about the real world, even relatively realistic simulation may provide bad education.

What takes time?

To what extent can science have a rationally consistent basis, given that its concepts are grounded in the everyday experience of a biological creature? Biologically based experience need not be rational or internally consistent, only consistent with survival. Many of the most basic concepts of physics are derived from common sensory experience, including space and time, force and causality. Some conceptual difficulties of physics may arise inevitably because of human thought patterns, rather than inconsistency in the physical world. The wave-particle duality, for example, is rooted in ancient unresolved conundrums—of the void and the plenum, or the discrete and the continuous, or the one and the many.

The Greek atom was by definition a discrete indivisible unity, without parts or internal structure, though separated by physical space from other atoms. Space itself was also uniform. But natural intuition, based on everyday experience, tells us that any material thing can have parts—and so can the parts have parts, indefinitely. Conceptually, at least, anything with extension can be divided. And, if something has properties as a whole, these may be explained in terms of the properties and interactions of its parts. This was the advantage of atomism, which bore fruit in the ability of the modern atomic theory to explain chemical properties of various substances. Of course, it was eventually discovered that the atom is not an indivisible unity, but is itself composed of parts, which in turn can explain its properties. Natural intuition suggests that electrons and protons must have parts that can explain their properties. Do quarks too have parts—and their parts have parts?

While there is no logical end to the decomposition of things as constituted by parts, there could be a physical limit. On the other hand, logic itself is based on intuitions derived from ordinary experience. For example, the tautology A=A may be based on the empirical observation of continuity over time, that things tend to remain themselves. Similarly, the principles of set theory depend on a spatial metaphor derived from common experience: the containment of elements within sets. It would be circular thinking to imagine that physical reality must obey a logic that is derived from observing physical reality in the first place!

Similarly, ordinary experience tells us that everything has a cause, which in turn has a cause. But does common experience justify thinking that logically all events must have a cause? Whether there is a physical end to decomposition or to reduction is not necessarily dictated by logic. In fact, if there is a bottom to the complexity of nature, that may imply that the fundamental level does not consist of things or events in the everyday sense. For, objects are decomposable; if something is not decomposable, then it is not an object in that sense. And if there is an end to the analysis of causation, either it is impossible for some epistemic (i.e., physical) reason to establish the cause or else some processes are self-causing.

In the classical view, at least, particles are miniature objects, subject to determinism. Though idealized as point locations for mathematical treatment, to have material reality they must have spatial extension, be individually identifiable, and be potentially decomposable into other things. Such entities, interacting either at a distance or through direct contact, provide the basis for the particle paradigm.

Now, elastic collisions between ideally rigid spheres should ideally be instantaneous. If they are not, there must be some compression within the particle, which takes time on some basis involving transmission of internal forces over a finite distance. That process could involve interactions among internal components composing the particle. These, in turn, could either be instantaneous or else involve internal forces among parts a level down—ad infinitum.

The other paradigm for processes that take time is the wave or field. Waves do not have individual identity or clear location in space. Unlike particles, they interpenetrate. An alternative picture is thus the field, or the wave in some medium. The internal forces responsible for elasticity could be conceived as wave-like actions within the particle, which for some reason take time to be transmitted. But again, the field—or wave medium—could be conceived as consisting of discrete parts, like the molecules of water in the ocean. (The classical mechanics of waves is often treated this way.) Alternatively, it could be conceived as monolithic, as ideal in having no parts insofar as it is only a mathematical description. (Before being reified as a physical entity, the field was originally conceived to be no more than a mathematical device.)

In the particle case, there is no reason given why forces should take time to act over distance, either between or within parts. In the wave case, even with no interacting parts, there is still no physical explanation for why the transmission of a force or wave in a field should take time rather than be instantaneous. Some property (parameter) of the field is simply postulated to require a particular rate of transmission of a disturbance within it. While that property may bring to mind the viscosity of a material fluid, such literal viscosity on the macroscopic scale would be explainable in terms of molecular forces on the micro scale—that is, on the basis of parts which are material particles. Again, we are implicitly caught in circular reasoning. Forces do take time to move through space. We can accept that axiomatically, as brute fact without explanation. Yet it remains unclear what exactly takes time, or even what sort of explanation we could seek for why forces take time to act over distance. Neither paradigm provides a plausible rationale. Wave-particle duality is not only an observed physical phenomenon but the symptom of a logical dilemma.

Such an impasse may be inevitable when focus remains exclusively on the external world. That focus, carried to the extreme, results in some non-intuitive concepts in the micro-realm, such as entanglement, non-locality, and indeterminism, which defy our ordinary notions of causality, space, time, and how “objects” should behave. Just as space (between separable things) is required for there to be more than one thing at all, so time is required for anything at all to happen—that is, for there to be more than one event or moment. These are fundamental aspects of experienced reality for us as finite embodied observers—meaning that we could not exist if we did not perceive and conceive the world thus.

Whatever the nature of the Big Bang as a physical event, it is a logical condition for a world of things that change—therefore for a world in which life (that is, ourselves) could exist. We can say that space and time originated in the Big Bang. Yet, we could also say (with Kant) that they originate in our own being, as cognitive categories necessary to experience the world at all. Similarly, we could recognize (with Hume and Piaget) that causality is a human concept, originating in bodily experience during early childhood. The discovery that limbs can be moved by intention is projected onto interactions among inert external objects. The psychological ground of the notion of causality is our own intentionality as agents—which appears ironically uncaused (or, rather, self-caused)!

Basic physical concepts, if not innate, are formed from ordinary experience on the scale to which our senses are attuned. They are products of that specific experience and well adapted to it. Because we possess imagination—which can extend the familiar into unfamiliar territory—it is natural (though not logical) for us to transfer ideas, gleaned from the macroscopic realm, to the microscopic realm beyond our senses, and to the cosmic realm also beyond our unaided senses. The universe is not obliged to follow our lead, however. It is not obliged to be uniformly conceivable in the same ways, and in the same terms, on vastly differing scales. Humans inhabit a scale roughly midway between the smallest and largest known things. The observable universe is roughly 10^35 times larger than the smallest detectable thing. We live somewhere between, within a very narrow range of conditions to which our ideas are adapted. There is no inherent (i.e., “logical”) justification for transferring our local mesoscopic notions to the extremely small or to the extremely large and distant. To do so may be literally natural, but it is little more than a convenient habit.

If it has any sense at all, the question of what takes time cannot be separated from our parochial assumptions about space, time, and causality. The speed of transmission of forces cannot be separated from the speed of transmission of information. For us, the vehicle of the latter is light, whose speed has a definite value. We take this also to be the maximal speed for the transmission of physical causation, or the rate at which things can occur. Strictly speaking, however, that is a non sequitur. It results from confounding events in the world with our knowledge of them. Yet, it gives rise to quite specific ways of viewing reality, such as the 4-dimensional continuum in which light is built into the very definition of space and time.

We have sense modalities responsible for our intuitive notions of space and of time, but there is no sense modality for the perception or measure of spacetime, which is purely an abstract construct. We have sense modalities behind our notions of mass and energy, but no sense modality to perceive or measure phase space. (If it happened—as a hypothetical future discovery, say, or in an alternative universe—that a supraluminal signal could take the current place of light, physics would have to be revised, with new values for c and h.)

Abstractions such as the light cone in relativity theory and the wave equation in quantum theory extend our natural expectations, as embodied creatures, about the external world. Length contraction and time dilation are as counterintuitive as entanglement and non-locality. Such phenomena are apparent mysteries about the world. Yet, they point to the need for a re-examination of the origins of our intuitive expectations: the embodied origins of our basic notions of time, space, cause, object, force, etc. The fault may not be only in the stars (or the atoms), but in us. The ancient formula was ‘As above, so below’. We have yet to explore our mediating role between them: As within, so above and below.

The origin of story

Science provides us with a modern creation myth, a story of the origin of the universe and of ourselves within it. Author and historian David Christian is one of the founders of the “Big History” movement. He intentionally provides such a just-so story in his 2018 book Origin Story: A Big History of Everything. Inadvertently, his account is useful for another purpose as well. It demonstrates the utter anthropocentricity of human thought, essential to telling such a story. Indeed, it points to the key importance of story in science writing today, as it has been in every aspect of culture in every age. This is no surprise, given that the great volume of works in print is dominated by fiction, the novel.

Telling a story, more than presenting facts or ideas, seems to be the key to holding the general public’s ever more elusive attention. Perhaps that’s as it must be for popular science writing such as Origin Story, which trades on anthropomorphic expressions like “a billion years after the big bang, the universe, like a young child, was already behaving in interesting ways.” Or: “Like human lovers, electrons are unpredictable, fickle, and always open to better offers.” Such similes serve to capture imagination and interest. They are evocative and entertaining. However, they are also subtly misleading. Humans are intentional beings. So far as we know, electrons and the universe are not. Stories of any sort are based on human intentions and human-centric considerations. However, the evolution of matter is not—if objectivity is possible at all. To the extent that history is a story told by people, it reflects the tellers of the story as much as objective events. It can mislead to the extent that fact is inseparable from interpretation and even from the structure of language.

Science writing and reporting is one thing. The inherent dependence even of strictly scientific discourse on human-centered elements of story-telling is quite another. These elements include metaphor and simile, idealization, the physiological basis of fundamental concepts, the tendency to objectify processes or data as entities, the tendency to formalize theory in a conceptually closed system, and the tendency (in textbooks, for example) to pass off the latest theories as fact and the current state of science as a definitive account. Underlying all is the need for a narrative about the external world as the proper focus of attention. That focus is what science is traditionally about, of course. It is also what story is usually about. But science is also a human activity of questioning, observing, investigating, speculating, and reasoning. There is a human story to be told and science writing often includes that too. The point that I wish to raise here is less the human-interest story behind discoveries than the dominance of ontology over epistemology in scientific thought itself.

Science is supposed to transcend the limitations of ordinary cognition, to provide a (more) objective view of the world. But if it is subtly subject to those same limitations, how is that possible? Modern cognitive psychology and brain studies clearly demonstrate that human perception is about facilitating the needs of the organism; it is not a transparent window on the world. Science extends and refines ordinary cognition, but it cannot achieve an account that is completely free from biological concerns and limitations. Just substituting instruments for sense organs and reason for intuition does not disembody the observer. “Reason” is intimately associated with language, and data from instruments continue to be interpreted in terms of “objects,” “forces,” “space,” and “time,” for example. These are cognitive categories rooted in the needs of an organism and reflected in language. The impersonal notion of causality, for example, derives from the early childhood experience of willing to move a limb, and with that limb to move some object within reach. This personal experience is then projected to become the seeming power of inert things to influence each other. We think in nouns and verbs, things and actions—of doing and being done to—which says as much about us as it does about the world. By focusing only on the world, we ignore such epistemic aspects of scientific cognition.

Science is an inquiry about the natural world, which includes the human inquirer. Whereas ontology is about the constitution of the world, epistemology is (or should be) about the constitution of the inquirer. It should ask not only ‘how do we know?’ in a given instance, but also what “knowledge” means in the scientific context. How does scientific cognition mirror the purposes of ordinary cognition, and how is it subject to similar limitations? Certainly, science often leads to new technology, which increases human powers in the external world. It facilitates prediction, which also seems to be a fundamental aspect of ordinary cognition. (We often literally see what we reasonably expect to be there.) Having a confident story about the world gives us some security, that we can know what is coming and possibly do something about it. Perhaps that is part of the motivation for a comprehensive trans-cultural origin story in a time of global insecurity.

There is another aspect of this story worth telling. Science follows sequences of events in the world. These external events are naturally mirrored and mapped by internal events in the brain, where they are transformed according to the needs of the body and its species. Understanding the human scientist as an embodied epistemic agent could be as empowering as understanding the external world. They are inseparable if we want a truly comprehensive story. Science developed as a protocol to exclude individual and cultural idiosyncrasies of the observer—by insisting that experiments be reproducible, for instance. It avoids ambiguities by insisting on quantitative measurement and expression in a universal language of mathematics. It does not, however, address the idiosyncrasies common to all human observers, by virtue of being a primate species, or being an organism in the kingdom of Animalia, let alone simply by being physically embodied as an organism.

Embodiment does not simply mean being made of matter. It means having relationships with the environment that are determined by the needs of the biological organism—relationships established through natural selection. We are here because we think and act as we do, not because we have a superior, let alone “true,” grasp of reality. The victor in the evolutionary contest is the one that out-reproduces the others, not necessarily the more objective one. On the contrary, what appears to us true is biased by the compromises we have necessarily made in order to exist at all—and in order to dominate a planet. That would be a story well worth telling. It would be challenging even to conceive, however, and not especially flattering. It would include the story behind the very need for stories. It would require a self-transcendence of which we are scarcely capable. Yet, the fact that we do conceive an ideal of objectivity means that we can at least imagine the possibility, and perhaps strive for it.

Science helps us understand and even transcend the limits and biases of natural cognition. Can science understand and transcend its own limits and biases? For that, it would have to become more self-conscious, leading potentially down an infinite hall of mirrors. The description of nature would have to include a description of the scientist as an integral part of the world science studies—a grubbing creature like the others, with interests that may turn out to be as parochial as those of a spider. The only hope for transcending such a condition is to be aware of it in detail. Which is not likely as long as science, like ordinary cognition, remains strictly oriented outward toward the external world.

Natural language reflects ordinary cognition. We perceive objects (nouns), which act or are acted upon (verbs), and which have properties (adjectives). Language is essentially metaphorical: unfamiliar things and processes are described in terms of familiar ones. It also abstracts: ‘object’ can refer to a category as well as to a particular unique thing; ‘action’ means more than a particular series of events, just as ‘color’ does not refer only to a particular wavelength. The structure of language is reflected even in the structure of mathematics, no doubt because both reflect the general structure of experience. ‘Elements’ such as integers (nouns or things) can be grouped in sets (categories) and be acted upon by ‘operations’ such as addition (verbs). This is how even the scientific mind naturally divides up the world. The elements of theory are entities (nouns) which act and are acted upon by forces (verbs), measurable in quantities such as velocity and mass (adjectives). Concepts like position and velocity depend on the visual sense, while concepts like force derive from body kinesthesia. That is, scientific knowledge of the world is a function of the bodily senses and biological premises of the human organism. Like all adaptations, ideally it should at least permit survival. In that context, it remains to be seen how adaptive science is.

What other kind of knowledge could there be? Could there exist a physics, for example, that is not grounded in human biology? What would be the point of it? To answer such questions might seem to require that we know in advance which adaptations do not permit survival. We already have pretty good ideas which human technologies constitute an existential threat to the human species: nuclear and biological warfare, artificial intelligence, genetic and nanotechnologies, for example. We know now that technology in general, combined with reproductive success, can be counterproductive in a finite environment such as our small planet.

The kind of knowledge that transcends biology is paradoxical, since its overriding aim must be species survival. It is informally called wisdom. To be more than a vague intuition, it must be developed by recognizing specific aspects of our biologically-driven mentality that seem counterproductive to survival. We see the effects of these drives, if not in science, then in society: greed, status, tribalism, lust, etc. We must assume that these drives have their effects upon the directions of science and technology—for example, in commercial product development and military-inspired research. Our physics, as well as our industry, would be quite different if it explicitly aimed at species-level utopia instead of corporate and national power and profit. Story could then serve a different purpose than the distractions of entertainment. As well as dwelling on the past, it could look with intention toward a future.

Doing What Comes Unnaturally

Far from being the conscious caretakers of paradise implied in Genesis, Adam and Eve unleashed a scourge upon the planet. Their “dominion” over other species became a death sentence. The Tree of Knowledge was hardly the Tree of Wisdom. They are still trying to find the Tree of Life, with its promise of immortality: that is, the ability to continue foolishness uninterrupted by mere death. As the Elohim feared, they still seek to become as gods themselves.

Of course, we have come a long way from the Biblical understanding of the cosmos to the modern scientific worldview. The big human brain graces us with superior intelligence. But this intelligence is largely cunning, used to gain advantage—like all the smaller brains, only better. We credit ourselves with “consciousness” because our eyes have been somewhat opened to our own nature. While this species accomplishment goes on record, individual self-awareness remains a potential largely unfulfilled. The possibility of “self-consciousness” drives a wedge between the Ideal and the actuality of our biological nature. We are the creature with a foot awkwardly in two worlds.

The tracks of our violent animal heritage are revealed even in prehistory. The invasions of early humans were everywhere followed by the slaughter of larger species to extinction. Now, the remaining smaller species are endangered by the same ruthless pursuit of advantage through the cunning of technology, while a few domesticated species are stably exploited for food, which means: through institutionalized slaughter. Killing is the way of animal life. We like to think we are above nature and control it for our “own” purposes. But those so-called purposes are usually no more than the directives of the natural world, dictating our behavior. We like to think we have free will. But it is only a local, superficial, and trivial freedom to choose brand A over brand B. Globally, we remain brute animals, captive to biology.

Since the invention of agriculture, slavery has been practiced by every civilization, at least until the Industrial Revolution. Early on we enslaved animals to do our labor, to mitigate the curse of Genesis to toil by the sweat of the brow. The natural tribalism of the primate promotes in us war of all upon all. Because humans possessed a more generally useful intelligence than beasts of burden, we enslaved other humans too, on pain of death. Groups with greater numbers and force of arms could slaughter resisters and force the remainder into servitude. Only fossil fuels relieved the chronic need for slavery, by replacing muscle power with machine power. Now we seek to make machines with human or super-human abilities to become our new slaves. But if they turn out to be as intelligent and capable as we are, or more so, they will surely rebel and turn the tables. As fossil fuels run out or are rejected, new energy sources must replace them. If the collapse of civilization prevents access to technology and its required energy, in our current moral immaturity we will surely revert to human slavery and barbarism.

A great divide in cultures arose millennia ago from two glaring possibilities: production and theft. Alongside sedentary farmers arose nomadic societies based on herding, represented in the Bible by Cain and Abel. The latter organized into mounted warrior hordes, the bane of settled civilization. Their strategy was to pillage the riches produced by settled people, offering the choice of surrender (often into slavery) or death and destruction. This threat eventually morphed into the demand for annual tribute. As the nomads themselves merged with the settlers, this practice evolved into the collection of taxes. Much of modern taxation goes to maintaining the present warrior elite, now known as the military-industrial complex, still inherently violent.

Modern law has transformed and regulated the threat of violence, and the nature of theft, but hardly eliminated either. War is still a direct means of stealing territory and enforcing advantage. But so is peace. Otherwise, it would not be possible for a few hundred people to own half the world’s resources—gained entirely through legal means without the direct threat of violence. Ostensibly, Cain murdered his brother out of sibling rivalry. We should translate that as greed, which thrives in the modern age in sophisticated forms of capitalism.

Seen from a distance, collectively we seek the power of gods but not the benevolence, justice, or wisdom we project upon the divine. This is literally natural, since one foot is planted firmly in biology, driven by genetic advantage. The other leg has barely touched down on the other side of the chasm in our being, a slippery foothold on the possibility of an objective consciousness, deliberately built upon the biological scaffold of a living brain. We’ve had our saints and colonists, but no flag has been planted on this new shore, to signify universal intent to think and act like a species capable of godhood. In the face of the now dire need to be truly objective, we remain pathetically lacking in self-control and self-possession: subjective, self-centered, divided, bickering, greedy, myopic, and mean. A fitting epitaph for the creature who ruined a planet.

Yet, mea culpa is just another form of wallowing in passive helplessness. What is required and feasible is to think soberly and act objectively. How, exactly, to do this? First, by admitting that we are only partially and hazily conscious when not literally sleeping. That we are creatures of habit, zombie-like, whose nervous systems are possessed by nature, with inherited goals and values that are archaic and not really our own. Then to locate the will to jump out of our biological and cultural straitjackets. To snap out of the hazy trance of daily experience. For lack of familiarity, we do not have the habit of thinking objectively. But we can try to imagine what that might be like. And thereby (perhaps for the first time) to sense real choice.

To choose the glimpse of objective life is one thing. But stepping into it may prove too daunting. Unfortunately, the glimpse often comes late in life, whereas the real need now is for new life to be founded on it from the outset. The only hope for the human race is that enough influential people adopt an attitude of objective benevolence, aiming specifically at the general good and the salvation of the planet. That can be the only legitimate morality and the only claim to full consciousness. It is probably an impossible ideal, and too belated. Yet, it is a form of action within the reach of anyone who can understand the concept. Whether humanity as a whole can step onto that other shore, at least it is open to individuals to try.

So, what is “objectivity”? It means, first of all, recognizing that conventional goals and “normal” values are no longer appropriate in a world on the brink of destruction. We cannot carry on “business as usual,” even if that business seems natural or self-evident—such as family and career, profit, power and status. The world does not need more billionaires; it does not need more people at all. It does need intelligent minds dedicated to solving its problems. Objective thinking does not guarantee solutions to these problems. It doesn’t guarantee consensus, but does provide a better basis for agreement and therefore for cooperation. It requires recognizing one’s actual motivations and perspective—and re-aligning them with collective rather than personal needs.

Our natural visual sense provides a metaphor. Objectivity literally means “objectness.” As individual perceivers, we see any given thing from a literal perspective in space. The brain naturally tries to identify the object one is seeing against a confusing background, which means identifying its expected properties such as shape, location, distance, solidity, etc. We call these properties objective, meaning that they inhere in the thing itself and are not incidental to our perspective or way of looking, which could be quite individual. This process is helped by moving around the thing to see it from different angles, against changing backgrounds. It can also be helped by seeing it through different eyes. Objectivity on this literal level helps us to survive by knowing the real properties of things, apart from our biased opinions. It extends to other levels, where we need to know the best course of action corresponding to the real situation. The striving for objectivity implies filtering out the “noise” of our nervous systems and cultures, our biologically and culturally determined parochial nature. The objectivity practiced by science enables consensus, by allowing the reality of nature to decide scientific questions through experiment. In the same way, objective thinking in daily life enables consensus. We can best come to agreement when there is first the insistence on transcending or putting aside biases that lead to disagreement.

We’ve long been at war with our bodies and with nature, all the while slave to the nature within us. “Objectivity” has trivially meant power to manipulate nature and others through lack of feeling, narrowed by self-interest. Now feeling—not sentimentality but sober discernment and openness to bigger concerns—must become the basis of a truer objectivity. All that may sound highly abstract. In fact, it is a personal challenge and potentially transformative. The world is objectively changing. One way or another, no one can expect to remain the same person with the same life. You must continue to live, of course, providing your body and mind with their needs. But the world can no longer afford for us to be primarily driven by those needs, doing only what comes naturally.


Embodiment dysphoria

Dysphoria is a medicalized term for chronic unhappiness or dissatisfaction (the opposite of euphoria). It literally means ‘hard to bear’. Nominally, the goal behind medical classification is well-being. In the case of psychological and behavioral patterns, it may remove a stigma of disapproval by exonerating those defined as ill from responsibility for their condition. (For example, it is socially more correct to think of alcoholism and drug addiction as disease than as moral failure.) In the name of compassion and political correctness, medical classification may go further to remove the stigma of abnormality or inferiority. (Think of ‘disabled’ vs. ‘handicapped’.) Thus, ‘Gender Dysphoria’ was relabeled from ‘Gender Identity Disorder’ to remove the implications of “disorder.” Ironically, however, this disarms the diagnostic category and raises potentially awkward questions.

Dysphoria literally means dis-ease. If it is not a disease or disorder, what is the cause of the suffering? In the case of gender dysphoria, was the person simply dealt the wrong sex genes through a natural error that technology can fix? Is it the attitude of the patient toward their gender, or the unaccepting attitude of society, implicating the anxieties of “normals” in regard to their own sexual identities? (Some other societies have multiple gender categories, for example.) Is it an overly-charged political question, a distraction in an already divided society? Is it a social asset in an overpopulated world, since it may help reduce the birth rate? Is there a fundamental right to choose one’s gender, even one’s biological sex? Such questions can lead deep into philosophy and ethics: what it means to be a self, to have a gender, indeed to have or be a body.

There are other dysphorias, such as “Rejection Sensitivity Dysphoria,” a condition where the individual is deemed hypersensitive to rejection or disapproval. “Body Integrity Dysphoria” is a rare condition in which sufferers loathe having a properly functioning body at all. (They may reject a certain limb or sense modality and may seek to ruin or be rid of it.) This brings us closer to a nearly ubiquitous human condition that could (in all seriousness) be called Embodiment Dysphoria. This is the chronic discomfort of feeling trapped in a physical body—a biological organism—and the unhappiness that can entail.

Human beings have always shown signs of rejecting or resenting the physical body, in which they may feel ill at ease and imprisoned, or from which they may feel otherwise alienated. Certainly, the body is the main source of pains and of pleasures alike. We do not like being burdened with its limitations, subject to its vulnerabilities, and tied to its mortal end. Even pleasure, when biologically driven, can seem to impose upon an ideal of freedom. We may dismiss the body as our true identity, and may fail to care for it with due respect. Traditionally—in religion and now through technology—humans seek a life apart from the body and its concerns. All of culture, in the anthropological sense, can be seen to express the quest to transcend or deny our animal origins, to separate from nature and live apart from it in some humanly-defined realm. In terms of nature’s plan, that may be crazy or sick. But since this condition is “normal,” it will never be found in any version of the DSM.

The natural cure for Embodiment Dysphoria is the relief that comes with death. However, built into rejection of the body is typically a belief that personal experience can and should be dissociated from it. If consciousness can continue after the death of the body, then death may not offer an end to suffering after all. From a naturalist point of view, pain is an embodied experience, a signal that something is amiss with the body. Suffering may be emotional or psychological, but is grounded in the body and its relations in the world. Yet, it is an ancient belief that consciousness does not depend on the brain and its body. That may be no more than wishful thinking, based on the very rejection of suffering that characterizes Embodiment Dysphoria. If, as modern science believes, pain and pleasure (and indeed all experience) are bodily functions, then neither heaven nor hell, nor anything else, is a possible experience for a disembodied spirit. Religion promises the continuation of consciousness, disembodied after death. Curiously, it also promises the resurrection of the body necessary for consciousness.

While medicine hopes to prolong the life of the body, and religion hopes to upstage it, computer science proposes to transcend it altogether. A high-tech cure for Embodiment Dysphoria would be to upload one’s mind to cyberspace. While ‘mind’ is an ambiguous term, the presumption is that a conscious subject could somehow be severed from its natural body and brain and “live” in a virtual environment, with a virtual body that cannot suffer and has no imperfections. Were it feasible, that too would solve the population problem! A disembodied existence would not take up real space or use real resources other than computational power and memory. However, for several reasons it is not feasible.

First, virtual reality as we know it presumes a real embodied subject to experience it. The notion of a simulated subject is quite a different matter. While a digital representative of the real person (an “avatar”) might appear in the VR, there is no more reason to suppose that this element of the simulation could be conscious than there is to imagine that a character in a novel can be conscious. (It is the author and the reader who are conscious.) Such a digital character—though real-seeming to the human spectator—is not a subject at all, but merely an artifact created to entertain real embodied subjects. That does not prove that artificial subjects are impossible, but it cautions us about the power of language and metaphor to confuse fundamentally different things.

Secondly, the notion of digital disembodiment presumes that the mind and personality belonging to a natural brain and body can somehow be exhaustively decoded, extracted or copied from its natural source, and contained in a digital file that can then be uploaded to a super-computer as a functioning subject in a simulated world. While there are current projects to map “the” brain at the micro-level, there is no guarantee that the structure inferred corresponds to a real brain’s structure closely enough to replicate its “program” in digital form. Much less can we assume that the interrelation between parts and their functioning can be replicated in such a way that the consciousness of the person is replicated.

Thirdly, even a fictive world must have its own consistent structure and rules. Whatever world might be designed for the disembodied subject, it would essentially be modeled on the world we know—in which bodies have limits and functions determined by the laws of nature, and in which organisms are programmed by natural selection to have preferences and to care about outcomes of interactions. Embodiment is a relation to the world in which things crucially matter to the subject; simulated embodiment would involve a similar relation to a simulated world. To be consistent, a virtual world would have to operate roughly like the real one, imposing limits parallel to those of the real world and having power over the disembodied subject in a parallel way. Otherwise it would be disconcerting or incomprehensible to an artificial mind modeled on a natural one that has been groomed in our world through natural selection.

The nature of our real consciousness is more like creating a movie than like watching one, which is an entirely passive experience. Furthermore, unlike what is presented in films, in your real field of vision parts of your own body appear in your personal “movie.” You also experience other external senses besides vision, as well as feelings occurring within the body. Above all, the experience is interactive. Maybe it would be possible to edit out physical pain from a simulated life, at the cost of adjusting the virtual world accordingly. For example, if your virtual body could not be damaged through its interactions, this would obviate the need for pain as a report of damage. But it would also require a different biology. By and large, however, your disembodied consciousness probably could not live in a world so fundamentally different from the real one that it would seem chaotic or senseless. And then there is the possibility that an unending consciousness might come to wish it could die.

Like it or not, we are stuck with bodies until they cease functioning. We may abhor an end to our experience. But clinging to consciousness puts the cart before the horse. For, consciousness depends on the body and is designed to serve it, rather than the other way around. If we wish to prolong a desirable consciousness, we must prolong the health of the body, on which the quality as well as the quantity of experience depends. That goes against the grain in a society that values quantity over quality and pharmaceutical prescriptions over proactive self-care. We have long rejected our natural lot, but an unnatural lot could be worse.

Aphantasia

It is to be expected that human beings differ in how they process sensory information, since their brains, like other physiology, can differ. Some differences, if they seem disabling, may be labelled pathology or disorder. On the other hand, simply labelling doesn’t render a condition disabling. That is a distinction sometimes overlooked by researchers in clinical psychology.

The tendency to talk about phenomenal experience in medicalized terminology reflects long-standing confusions collectively known as the Mind-Body Problem. It shifts the perspective from a first-person to a third-person point of view. It also reflects the common habit of reification, in which an experience is objectified as a thing. (The rationale is that experiences are private and thus inaccessible to others, whereas objects or conditions are public and accessible to all, including medical practitioners.) Thus dis-ease, which is a subjective experience, is reified as disease, which is a condition—and often a pathogen—that can be dealt with objectively and even clinically. To thus reify a personal experience as an objective condition qualifies it for medical treatment. Containing it within a medical definition also insulates the collective from something conceived as strange and abnormal. On the one hand, it can become a stigma. On the other, people may take comfort in knowing that others share their experience or condition, mitigating the stigma.

Admittedly, psychology and brain science have advanced largely through the study of pathology. Normal functioning is understood through examining abnormalities. However, the unfortunate downside is that even something such as synesthesia, which is perfectly orderly and hardly a disability, can nevertheless be labelled as a disorder simply because it is unusual. Even something not unusual, such as pareidolia (seeing images or hearing sounds in random stimuli), has a clinical ring about it. Moreover, categorization often suggests an either/or dichotomy rather than a continuous spectrum of possibilities. You either “have” the condition or you don’t, with nothing in between. There is also a penchant in modern society for neologisms. Re-naming things creates a misleading sense of discovery and progress, perhaps motivated ultimately by a thirst for novelty and entertainment conducive to fads.

A recent social phenomenon that illustrates all these features is the re-discovery of “aphantasia.” This is a term coined by Adam Zeman et al. in a seminal article in 2015, though the phenomenon was first documented in the late 19th century. It means the absence (or inability to voluntarily create) mental imagery. Its opposite is “hyperphantasia,” which is the experience of extremely vivid mental imagery. The original paper was a case study of a person who reported losing the ability to vividly visualize as the incidental result of a medical procedure. As it should, this stimulated interest in the range of normal people’s ability to visualize, as subjectively reported. But there is a clear difference between someone comparing an experience they once had to its later loss and a third party comparing the claims of diverse people about their individual experiences. The patient whose experience changed over time can compare their present experience with their memory. But no one can experience someone else’s visualizations (or, for that matter, the auditory equivalents). Scientists conducting surveys can only compare verbal replies on questionnaires, whose questions can be loaded, leading, and interpreted differently from individual to individual.

The study of mental imagery and “the mind’s eye” is a laudable phenomenological investigation, adding to human knowledge. But the term aphantasia is unfortunate because it suggests a specific extreme condition rather than the spectrum of cognitive strategies for recall that people employ. The associations in the literature are clinical, referring to “imagery impairment,” “co-morbidities,” etc. Surveys implicitly invite you to compare your degree of visualization with the reports of others, whereas the only direct comparison could be to your own experience over time. (I can say in my own case that my ability to voluntarily visualize seems to have declined with age, though memory in general also seems unreliable, which may be part of the same package.) Apart from aging, if there is a decline in cognitive abilities, then there is some justification to think of a disability or disorder. Overall, however, the differences between visualizers and non-visualizers seem to be mostly a variation in degree and in the style of retrieving and manipulating information from memory, with some advantages and disadvantages of each style with respect to various tests.

Moreover, “visual imagery” is an ambiguous notion and term. There can be all sorts of visual images both with eyes open and eyes closed: after-images, dreams, hallucinations, eidetic images, “mental” images, imagination, apparitions and spiritual visions, etc. They can be the result of voluntary effort or spontaneous intrusions. All these could be rated differently on questionnaires as to their vividness. The widely used Vividness of Visual Imagery Questionnaire asks you to “try to form a visual image” in various situations and rate your experience on a scale of 1 to 5, with 5 being “perfectly realistic and as vivid as real seeing.” If that were literally so, what would be the basis on which to distinguish it from “real” seeing? Some people may indeed have such experiences, which are usually labelled schizophrenic or delusional.

But such is language that we subtly metaphorize without even realizing it. Whether they visualize relatively vividly or relatively poorly, people who are otherwise normal are not comparing their real-time experience to an objective standard but to their own ordinary sensory vision or to what they imagine is the experience of others. They are rating it on a scale they have formed in their own mind’s eye, which will vary from person to person. No one can compare their experience directly with that of better or worse visualizers, but only with their interpretation of others’ claims or with their own normal seeing and their other visual experiences such as dreaming.

In descending order, the other four choices on the questionnaire are: (4) clear and lively; (3) moderately clear and lively; (2) dim and vague, flat; and (1) no image at all—you only “know” that you are thinking of the object. In my own case, I can voluntarily summon mental imagery that I can hardly distinguish from merely “thinking” of the object. Yet, these images seem decidedly visual, so I would probably choose category (2) for them.

But categorizing an experience is not the same as categorizing oneself or another person. I’ve had vivid involuntary eidetic images that astonish me, such as continuing to “see” blackberries after hours spent picking them. That might be category (3) or (4). Yet even these I cannot say are in technicolor. While I can picture an orange poppy in my mind’s eye, I cannot say that I am seeing the scene in vivid color or detail. (Should I call the color so visualized “pseudo-orange”?) As in all surveys, the burden is on the participant to place their experience in categories defined by others. No one should feel obliged to categorize themselves as ‘aphantasic’ as a result of taking this test. Perhaps for this reason, among the many websites dedicated to studies of visualization, there are even some that tout aphantasia as a cognitive enhancement rather than a disability.

In our digital age we are used to dichotomies and artificial categories. How many colors are there in the rainbow? Six, right? (Red, orange, yellow, green, blue, and violet.) But, in classical physics there are an infinite number of possible wavelengths in the visible part of the spectrum alone, which is a continuum. (Quantum physics might propose a finite but extremely large number.) No doubt there are differences in people’s abilities to discriminate wavelengths, and in how they name their color perceptions. A few people are unable to see color at all, only shades of intensity—a condition called achromatopsia. Yet, that is hardly what society misleadingly calls ‘color-blindness’, which is rather the inability to distinguish between specific colors, such as blue and green, which are close to each other in the spectrum. Similarly, perhaps with further research, aphantasia will turn out to mean something more selective than the name suggests.

Perhaps the general lesson is to be careful with language and categorization. Statements are propositions conventionally assumed to be either true or false. That is always misleading and invites dispute more than understanding. If you fall into that trap, perhaps you are an eristic or suffering from philodoxia. (Surely there is a test you can take to rate yourself on a scale of one to five!) One thing is quite certain. Naming things is a psychological strategy to deal with the acatalepsy common to us all. Or perhaps, in bothering to write about this, I am simply quiddling.

[Thanks to Susie Dent’s An Emotional Dictionary for the big words.]

What happened to the future?

Words matter. The word future seems scarcely used anymore in print, sound or video. Instead, we say going forward. This substitutes a spatial metaphor for time, which is one-dimensional and irreversible. We cannot go anywhere in time but “forward.” According to the 2nd Law of Thermodynamics (entropy), that means in the direction of disorder, which hardly seems like progress. In contrast, we can go backwards and sideways in space, up or down, or any direction we choose. Are we now going forward by choice, whereas we could before only by necessity? Does going forward mean progress, advance, betterment, while the future bears no such guarantee and may even imply degeneration or doom? Do we say going forward to make it clear we do not mean going backward? Are we simply trying to reassure ourselves by changing how we speak?

As though daily news reporting were not discouraging enough, the themes of modern film and TV express a dark view of humanity in the present and a predominantly dystopian future—if any future at all. If “entertainment” is a sort of cultural dreaming, these days it is obsessed with nightmare. If it reveals our deep anxieties, we are at a level of paranoia unheard of since the Cold War, when a surfeit of sci-fi/horror films emerged in response to the Red Menace and Mutually Assured Destruction. At least in those days, we were the good guys and always won out over the “alien” threat. Despite a commercial slogan, now we’re not so sure the future is friendly, much less that we deserve it.

Rather, we seem confused about “progress,” which now has a bad rep through its association with colonialism, manifest destiny, and the ecological and economic fallout of global capitalism we euphemistically call ‘climate change’. Cherished values and good ideas often seem to backfire. Technological solutions often end in disaster, because reality is more complex than thought can grasp or than desire is willing to consider. The materialist promise of better living through technology may end in an uninhabitable planet. Having squandered the reserves of fossil fuels, we may have blown our chances for a sustainable technological civilization or recovery from climate disaster. This is perhaps the underlying anxiety that gives rise to disaster entertainment.

In that context, “going forward” seems suspiciously optimistic. Forward toward what? It cannot mean wandering aimlessly, just to keep moving—change for its own sake, or to avoid boredom. We can and should be hopeful. But that requires a clear and realistic definition of progress. Evidently, the old definitions were naïve and short-sighted. Economic “growth” has seemed obviously good; yet it is a treadmill we cannot get off of, required by capitalism just to stay in place. As we know from cancer, unlimited growth will kill the organism. In a confined space, progress must now mean getting better, not bigger. We can agree about what is bigger, but can we agree about what is better?

The concept of progress must change. So far, in effect, it involves urbanization as a strategy to stabilize human life, by creating artificial environments that can be intentionally managed. Historically, “civilization” has meant isolation from the wild, freedom from the vicissitudes of natural change and from the predations of rival human groups as well as of dangerous carnivores. The advance of civilization is an ideal behind the notion of progress. Consequently, more than half of the human population now lives in cities. Though no longer walled, they remain population magnets for reasons other than physical security, which can hardly be guaranteed in the age of modern warfare, global disease, and fragile interdependent systems.

The earth has been cataclysmically altered many times in the history of life. More than 99% of all species that have ever existed are now extinct. Our early ancestors survived drastic climate changes for a million years. There have been existential disasters even within the relative stability of the past 12,000 years. Our species has persisted through it all by virtue of its adaptability far more than its limited ability to stabilize environments. This suggests that the notion of civilized progress based on permanence should give way to a model explicitly based on adaptability to inevitable change.

For us in the modern age, progress is synonymous with irreversible growth in economy and population. We also know that empires rise and fall. Even the literal climate during which civilization developed, though relatively benign overall, was hardly uniform. People migrated in response to droughts and floods. Settlements came and went. Populations shrank during harsher times and grew again in more favorable ones. Throughout recorded history, disputes over territory and resources meant endless warfare between settlements. Plagues decimated populations and economies. Nomadic culture rivalled and nearly overtook sedentary civilization based on agriculture. Yet, overall, despite constant warring and setbacks, one could say that the dream of civilization has been for stability and freedom from being at the mercy of unpredictable forces—whether natural or human. The ideal of progress until now has been for permanence and increasing control in ever more artificial environments. The tacit premise has been to adapt the environment to us more than the other way around.

Despite the risk of disease in close quarters, urbanism mostly won out over nomadism, wild beasts, and food shortages. A global economic system has loosely united the world, expanding trade and limiting war. Yet, through the consequences of our very success as a species, the benign climatic period in which civilization arose is now ending. Our challenge is to focus on adaptability to change rather than immunity to it. Progress should mean better ways to adapt to changes that will inevitably come despite the quest for stability and control. While it is crucial that we make every effort to reduce anthropogenic global warming, we should also be prepared to deal with failure to prevent natural catastrophe and its human fallout. Many factors determining conditions on Earth are simply not within our control now, nor will they ever be. Climate change and related war are already wreaking havoc on civilization, producing chaotic mass migrations. Given these truths, maximizing human adaptability makes more sense than scrambling to maintain a status quo based on a dream of progress that amounts to ever less effort and more convenience.

In the past, we were less self-consciously aware of how human activities impact nature, which was taken for granted as an inexhaustible reservoir or sink. These activities themselves, and our ignorance of their consequences, led to the present crisis. A crucial difference from the past, however, is that we can now feasibly plan our species’ future. Whether we have the social resources to effectively pursue such a plan is questionable. The outward focus of mind conditions us to seek technological solutions. But, in many ways, it is that outward focus that created the anthropogenic threat of environmental collapse in the first place. Our concept of progress must shift from technological solutions to the social advances without which we will not succeed in implementing any plan for long-term survival. A society that cannot realize its goals of social equity and justice probably cannot solve its existential crises either. That sort of progress involves an inner focus to complement the outer one. This is not an appeal to religion, which is as outwardly focused as science (“in God we trust”). Rather, we must be able to trust ourselves, and for that we must be willing to question values, motivations, and assumptions taken for granted.

Money and mathematics have trained us to think in terms of quantity, which can easily be measured. But it is quality of life that must become the basis of progress: getting better rather than bigger. That means improvement in the general subjective experience of individuals and in the objective structure of society upon which that depends. Both must be considered in the context of a human future on this planet rather than in outer space or on other planets. That does not mean abandoning the safety measure of a backup plan if this planet should become uninhabitable through some natural cosmic cataclysm. But it does mean a shift in values and priorities. Until now, some people’s lives have improved at the expense of the many; and the general improvement that exists has been at the expense of the planet. To improve the meaning of improvement will require a shift from a search for technological solutions to a search for optimal ways of living in the confined quarters of Spaceship Earth.

The found and the made

There is a categorical difference between natural things and artifacts. The latter we construct; the former we simply encounter. We can have certainty only concerning our own creations, because—like the constructs of mathematics—they alone are precisely what they are defined to be. For this reason, the eighteenth-century thinker Giambattista Vico advised that knowledge of human institutions is more reliable than knowledge of nature.

If this distinction was glossed over in the early development of science, it was probably because natural philosophers believed that nature is an artifact—albeit created by God rather than by human beings. We were positioned to understand the mind of God because we were made in God’s image. Believing that the natural world was God’s thought, imposed on primal substance, the first scientists were not obliged to consider how the appearance of the world was a result of their own minds’ impositions. Even when that belief was no longer tenable, the distinction between natural things and artifacts continued to be ignored because many natural systems could be assimilated to mathematical models, which are artifacts. Because they are perfectly knowable, mathematical models—standing in for natural reality—enable prediction.

According to Plato, the intellect is privileged to have direct access to a priori truths. In contrast, sensory knowledge was at best indirect and at worst illusory. In a parallel vein, Descartes claimed that while appearances could deceive (as in Plato’s Cave), one could not be deceived about the fact that such appearances occur. However, Kant drew a different distinction: one has access to the phenomenal realm (perception) but not to the noumenal realm (whatever exists in its own right). The implicit assumption of science was that scientific constructs—and mathematics in particular—correspond to the noumenal realm, or at least correspond better than sensory perception.

The usefulness of this assumption rests in practice on dealing only with a select variety of natural phenomena: namely, those that can be effectively treated mathematically. Historically this meant simple systems defined by linear equations, since only such equations could be manually solved. The advent of computers removed this limitation, enabling the mathematical modelling of non-linear phenomena. But it did not remove the distinction between artifact and nature, or between the model and the real phenomenon it models.

The model is a product of human definitions. As such it is well-defined, finite, relatively simple, and oriented toward prediction. The real phenomenon, in contrast, is ambiguous and indefinitely complex, hence somewhat unpredictable. Definition is a human action; definability is not a property of real systems, which cannot be assumed finite or clearly delimited. The model is predictable by definition, whereas the real system is only predictable statistically, after the fact, if at all.

In part, the reason the found can be confused with the made is that it is unclear what exactly is found, or what finding and making mean in the context of cognition. At face value, it seems that the “external” world is given in the contents of consciousness. But this seemingly real and external world is certainly not Kant’s noumena, the world-in-itself. Rather, the appearance of realness and externality is a product of the mind. It presumes the sort of creative inference that Helmholtz called ‘unconscious inference.’ That is, for reasons of adaptation and survival, the mind has already interpreted sensory input in such a way that the world appears real and external, consisting of objects in space and events in time, and so forth. Overlaid on this natural appearance are ideas about what the world consists of and how it works—ideas that refine our biological adaptation. To the modern physicist it may appear to consist of elementary particles and fundamental forces “obeying” natural laws. To the aborigine or the religious believer it may seem otherwise. Thus, we must look to something more basic, directly common to all, for what is “immediately” found, prior to thought.

Acknowledging that all subjects have in common a realm of perceptual experience (however different for each individual) presumes a notion of subjectivity, contrary to the natural realism which views experience as a window on the world independent of the subject. What is directly accessible to the mind is an apparition in the field of one’s own consciousness: the display that Kant called the phenomenal realm. What we find is actually something the brain has made in concert with what is presumed to be a real external environment, which includes the body of which the brain is a part. This map (the phenomenal realm) is a product of the interaction of mind and the noumenal territory. What is the nature of this interaction? And what is the relationship between the putatively real world and the consciousness that represents it? Unsurprisingly, so far there has been no scientific or philosophical consensus about the resolution of these questions, often referred to as the “hard problem of consciousness.” Whatever the answer, our epistemic situation seems to be such that we can never know reality in itself and are forever mistaking the map for the territory.

Whether or not the territory can be truly found (or what finding even means), the map is something made, a representation of something presumably real. But how can you make a representation of something you cannot find? What sort of “thing” is a representation or an idea, in contrast to what it represents or is an idea of?

A representation responds to something distinct from it. A painting may produce an image of a real scene. But copying is the wrong metaphor to account for the inner representation whereby the brain monitors and represents to itself the world external to it. It is naïve to imagine that the phenomenal realm is in any sense a copy of the external world. A better analogy than painting is map making. A road map, for example, is highly symbolic and selective in what it represents. If to scale, it faithfully represents distances and spatial relationships on a plane. A typical map of a subway system, however, represents only topological features such as connections between the various routes. The essential point is that a map serves specific purposes that respond to real needs in a real environment, but is not a copy of reality. To understand the map as a representation, we must understand those purposes, how the map is to be used.

This map must first be created, either in real time by the creature through its interactions with its environment, or by the species through its adaptive interactions, inherited by the individual creature. How does the brain use its map of the world? The brain is sealed inside the skull, with no direct access to the world outside. The map is a sort of theory concerning what lies outside. The mapmaker has only input signals and motor commands through which to make the map in the first place and to use it to navigate the world. An analogy is the submarine navigator or the pilot flying by instrument—with the strange proviso that neither navigator nor pilot has ever set foot outside their sealed compartment.

The knowledge of the world provided by real time experience, and the knowledge inherited genetically, both consist in inferences gained through feedback. Sensory input leads (say, by trial and error) to motor output, which influences the external world in such a way that new input is provided, resulting in new motor output, and so on. The pilot or navigator has an idea of what is causing the inputs, upon which the outputs are in turn acting. This idea (the map or theory) works to the extent it enables predictions that do not lead to disaster. On the genetic level, natural selection has resulted in adaptation by eliminating individuals with inappropriate connections. On the individual level, real-time learning operates similarly, by eliminating connections that do not lead to a desired result. What the map represents is not the territory directly, but a set of connections that work to at least permit the survival of the mapmaker. It is not that the creature survives because the map is true or accurate; rather, the map is true or accurate because the creature survives!
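The pruning logic described above can be sketched in a few lines of code. Everything in the sketch is invented for illustration: the hidden target stands in for the inaccessible territory, and the candidate actions for the organism’s connections. The learner never inspects the environment directly; it receives only success-or-failure feedback and discards the actions that fail. Whatever survives is its “map.”

```python
# A sealed-off learner: it never sees the environment's internals,
# only the feedback signal returned across the boundary.
# (The hidden target and the candidate actions are invented for
# illustration; only "success"/"failure" crosses the boundary.)

def environment(action, _hidden_target=7):
    """Return a feedback signal, never the hidden state itself."""
    return "success" if action == _hidden_target else "failure"

def learn(actions):
    """Trial and error: prune every connection whose outcome fails.

    The surviving set is the learner's 'map' of what works.
    """
    surviving = set(actions)
    for action in list(surviving):
        if environment(action) == "failure":
            surviving.discard(action)  # eliminate a failed connection
    return surviving

# The map is never checked against the territory directly; it counts
# as "accurate" only in the sense that its connections survived.
print(learn(range(10)))  # prints {7}
```

The point of the sketch is the direction of justification: the surviving connections are not kept because they were verified against the hidden target, but because they were not eliminated—the map is “true” because the learner survives.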

The connections involved are actively made by the organism, based on its inputs and outputs. They constitute a representation or map insofar as an implicit or explicit theory of reality is involved. While such connections (in the physical brain) must have a physical and causal basis (as neural synapses, for example), they may be viewed as logical and intentional rather than physical and causal. Compare the function of a wiring diagram for an electrical appliance. From an engineering point of view, the soldered connections of the wires and components are physical connections. From a design point of view, the wiring diagram expresses the logical connections of the system, which include the purposes of the designer and the potential user. In the case of a natural brain, the organism is its own designer and makes the connections for its own purposes. The brain can be described as a causal system, but such a description does not go far to explain the neural connectivity or behavior of the organism. It certainly cannot explain the existence of the phenomenal world we know in consciousness.

What’s in a game?

Games are older than history. They are literally fascinating. The ancient Greeks took their sports seriously, calling them the Olympic games. Board games, card games, and children’s games have structured play throughout the ages. Such recreations continue to be important today, especially in online or computer gaming. They underline the paradoxical seriousness of play and the many dimensions of the concept of game. These include competition, cooperation, entertainment and fun, gratuity, chance and certainty, pride at winning and a sense of accomplishment. Besides the agonistic aspect of sports, armies play war games and economists use game theory. The broad psychological significance of the game as a cognitive metaphor invites wider recognition of how the notion mediates experience and structures thought. The mechanist metaphor that still dominates science and society is grounded in the general idea of system, which is roughly equivalent to the notion of game. Both apply to how we think of social organization. The game serves as a powerful metaphor for daily living: “the games people play.” It is no wonder so many people are taken by literal gaming online, and by activities (such as business and war) that have the attributes of competitive games.

While games are old, machines are relatively new. A machine is a physical version of a system, and thus has much in common with a game. The elements of the machine parallel those of the game, because each embodies a well-defined system. While the ancient Greeks loved their games, they were also enchanted by the challenges of clearly defining and systematizing things. Hence their historical eminence in Western philosophy, music theory, and mathematics. Euclid generalized and formalized relationships discovered through land measurement into an abstract system—plane geometry. Pythagoras systematized the harmonics of vibrating strings. Today we call such endeavors formalization. We recognize Euclid’s geometry as the prototype of a ‘formal axiomatic system’, which in essence is a game. Conversely, a game is essentially a formal system, with well-defined elements, actions and rules. So is a machine, and so is a social or political system. As concepts, they all bear a similar appeal, because they are clear and definite in a world that is inherently ambiguous.

The machine age began in earnest with the Industrial Revolution. Already Newton had conceived the universe as a machine (his “System of the World”). Descartes and La Mettrie had conceived the human and animal body as a machine. Steam power inspired the concepts of thermodynamics, which extended from physics to other domains such as psychology. (Freud introduced libido on the model of fluid dynamics.) The computer is the dominant metaphor of our age—the ultimate, abstract, and fully generalized universal machine, with its ‘operating system’. Using a computer, like writing a program, is a sort of game. We now understand the brain as an extremely complex computer and the genetic code as a natural program for developing an organism. Even the whole universe is conceived by some as a computer, the laws of physics its program. These are contemporary metaphors with ancient precedents in the ubiquitous game.

Like a formal system, a game consists of a conceptual space in which action is well-defined. This could be the literal board of a board game or the playing field of a sport. There are playing pieces, such as chess pieces or the members of a soccer or football team. There are rules for moving them in the space (such as the ways chess pieces can move on the board). And there is a starting point from which the play begins. There is a goal and a way to know if it has been reached (winning is defined). A game has a beginning and an end.

A formal system has the elements of a game. In the case of geometry or abstract algebra, the defined space is an abstraction of physical space. The playing pieces are symbols or basic elements such as “point,” “straight line,” “angle,” “set,” “group,” etc. There are rules for manipulating and combining these elements legitimately (i.e., “logically”). And there are starting points (axioms), which are strings of symbols already accepted as legitimate. The goal is to form new strings (propositions), derived from the initial ones by means of the accepted moves (deduction). To prove a statement is to derive it from statements already taken on faith. This corresponds to lawful moves in a game.
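The correspondence can be made concrete with a toy formal system. The sketch below implements Hofstadter’s well-known MIU puzzle, chosen here only as an illustration and not drawn from the text above. The axiom is the starting position, the rewrite rules are the legal moves, and every derivable string is a “theorem” reached by lawful play.

```python
# A toy formal system as a game: the playing pieces are the symbols
# M, I, U; the axiom is the opening position; the rewrite rules are
# the legal moves; derived strings are "theorems."

AXIOM = "MI"

def moves(s):
    """Generate every string reachable from s in one legal move."""
    results = set()
    if s.endswith("I"):              # Rule 1: xI  -> xIU
        results.add(s + "U")
    if s.startswith("M"):            # Rule 2: Mx  -> Mxx
        results.add(s + s[1:])
    for i in range(len(s) - 2):      # Rule 3: III -> U
        if s[i:i + 3] == "III":
            results.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):      # Rule 4: UU  -> (deleted)
        if s[i:i + 2] == "UU":
            results.add(s[:i] + s[i + 2:])
    return results

def theorems(depth):
    """All strings derivable from the axiom within `depth` moves."""
    derived = {AXIOM}
    frontier = {AXIOM}
    for _ in range(depth):
        frontier = {t for s in frontier for t in moves(s)} - derived
        derived |= frontier
    return derived

print(sorted(theorems(2)))
```

Playing the game is exhaustively applying legal moves; proving a proposition is exhibiting a path of such moves from the axiom to it. (The famous point of the puzzle is that “MU” is never derivable, which shows how a formal game can have truths about it that are not reachable within it.)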

Geometry is a game of solitaire insofar as there is no opponent. Yet, the point of proof is to justify propositions to other thinkers as well as to one’s own mind, by using legitimate moves. One arrives at certainty by carefully following unquestioned rules and assumptions. The goal is to expand the realm of certainty by leading from a familiar truth to a new one. It’s a shared game insofar as other thinkers share that goal and accept the rules, assumptions, and structure; it’s competitive insofar as others may try to prove the same thing, or disprove it, or dispute the assumptions and conventions.

Geometry and algebra were “played” for a long time before they were fully formalized. Formalization occurred over the last few centuries, through the effort to make mathematics more rigorous, that is, more consistent and explicitly well-defined. The concept of system, formalized or not, is the basis of algorithms such as computer programs, operating systems, and business plans. Machines, laws, rituals, blueprints—even books and DNA—are systems that can potentially be expressed as algorithms, which are instructions to do something. They involve the same elements as a game: goal, rules, playing pieces, operations, field of action, starting and ending point.

Game playing offers a kind of security, insofar as everything is clearly defined. Every society has its generally understood rules and customs, its structured spaces such as cities and public squares and its institutions and social systems. Within that context, there are psychological and social games that people play, such as politics, business, consumption, and status seeking. There are strategies in personal negotiation, in legal proceedings, in finance, and in war. These are games in which one (or one’s team) plays against opponents. The economy is sometimes thought of as a zero-sum game, and game theory was first devised in economic analysis to study strategies.

Yet, economic pursuit itself—“earning a living,” “doing business,” “making” money, “getting ahead”—serves also as a universal game plan for human activity. The economy is a playing field with rules and goals and tokens (such as money) to play with. In business or in government, a bureaucracy is a system that is semi-formalized, with elements and rules and a literal playing field, the office. The game is a way to structure activity, time, experience and thought. It serves a mediating cognitive function for each individual and for society at large. Conversely, cognition (and mind generally) can be thought of as a game whose goal is to make sense of experience, to structure behavior, and to win in the contest to survive.

The game metaphor is apt for social intercourse, a way to think of human affairs, especially the in-grouping of “us” versus “them.” It is unsurprising that systems theory, digital computation, and game theory arose around the same time, since all involve formalizing common intuitive notions. Human laws are formulas that prescribe behavior, while the laws of nature are algorithms that describe observed patterns in the natural world. The task of making such laws is itself a game with its own rules—the law-maker’s rules of parliamentary procedure and jurisprudence, or the scientist’s experimental method and theoretical protocol. Just as the game can be thought of as active or static, science and law can be thought of as human activities or as bodies of established knowledge. Aside from its social or cognitive functions, a game can be viewed as a gratuitous creation in its own right, an entertainment. It can be either a process or a thing. A board game comes in a box. But when you play it, you enter a world.

Thinking of one’s own behavior as game-playing invites one to ask useful questions: what is my goal? What are the rules? What is at stake? What moves do I take for granted as permissible? How is my thinking limited by the structuring imposed by this game? Is this game really fun or worthwhile? With whom am I playing? What constitutes winning or losing? How does this game define me? What different or more important game could I be playing?

Every metaphor has its limits. The game metaphor is a tool for reflection, which can then be applied to shed light on thought itself as a sort of game. Creating and applying metaphors too is a kind of game.

The Gender Fence

Apart from biological gender, is there a masculine or feminine mentality? Are men from Mars and women from Venus? In this era when gender identity is up for grabs, can one speak meaningfully about masculine and feminine ways of being and gender differences, apart from biologically determined individuals?

The very notion of gender choice is subtly tricky. For, it may be an essentially masculine idea—a result of social processes and intellectual traditions long dominated by men. Under patriarchy, after all, it is men (at least some men) who have had preferential freedom of choice over their lives. Of course, generalizations are generally suspect. Exceptions always abound. Nevertheless, the fact of exceptions (called outliers in statistics) does not negate the validity of apparent patterns. It only raises deeper questions.

So, here’s my tentative and shaky idea, to take or leave as you please: for better and worse, men tend to be more individualistic than women. One way this manifests is in terms of boundaries. The need for “good boundaries” is a modern cliché of pop psychology. But this too is essentially a masculine idea, since men seek to differentiate their identity more than women. This inclines them to maintain sterner boundaries, to favour rules and structures, to be competitive and authoritarian, and to be self-serving. Women, in contrast, tend to be more nurturing, giving, accommodating and accepting because of their biological role as mothers and their traditional social role as keepers of the hearth and the domestic peace. As a result, they appear to have weaker boundaries. They do not separate their identity so clearly from those who depend on them. They literally have no boundary with the fetus growing within, and a more nebulous boundary with the infant and child after birth. Giving often means giving in, and nurturance often means placing the needs of others above one’s own. Men have systematically exploited this difference to their own advantage. It is in their interest to maintain that advantage by maintaining boundaries—that is, to continue being self-centred individualists.

In many ways, this division of labour has worked to maintain society—that is, society as a patriarchal order. Yin and yang complement each other, perhaps like positive and negative—like protons and electrons? (Consider the metaphor: a proton is nearly 2000 times more massive than an electron and is thought of as a solid object, whereas an electron is considered little more than a fleeting bit of charge circling about it!) Traditionally, men have been the centre of gravity, women their minions, servants, and satellites. In the modern nuclear family, men were the bread winners, disciplinarians and authority figures, the autocrats of the breakfast table. One wonders how the dissolved post-modern family, with separated parents (ionized atoms?), affects the emerging gender identities and boundaries of children.

Gender issues loom disproportionately large in the media these days, in part serving as a distracting pseudo-issue in society at large. However, emerging choice about gender identity may be a good thing, with broader social significance than for the individuals involved. It may mean that the centre of gravity of individual identity is shifting toward the feminine, away from traditional masculine values that have been destroying the world even while creating it. Women have long had the model of male roles dangling before them as the avenue to possible freedom, whereas men have been more obliged to buck prejudice to identify with nurturance, and to endure persecution to identify with the feminine. To put it differently, individuation (and its corollary, individual choice) has become less polarized. It has lost some of its association with males and has become more neutral. In principle, at least, an “individual” is no longer a gendered creature to such an extent. That could also mean a shift away from reproductivity as a basis for identity, which would benefit an overpopulated world. But what does it imply for the mentalities of masculine and feminine?

Masculine and feminine identities are grounded in biology and evolutionary history. That is, they are natural. The modern evolution of the concept of the individual reflects the general long-term human project to deny or escape biological determinants, to secede from nature. But, paradoxically, that too is predominantly a masculine theme! “Individuation” means not only claiming an identity distinct from others in the group. The psychological characteristics of individuality have also meant differentiating from the feminine and from “mother” nature: alienation from the natural. Isn’t it predominantly men who aspire to become non-biological beings, to create a human world apart from nature and a god-like identity apart from animality? To seize control of reproduction (in hospitals and laboratories) and even to duplicate life artificially? Not bound to nature through the womb, men seek to expand this presumed advantage through technology, business and culture, even creating cities as refuges from the wild. However, their ideological rebellion against nature and denial of biological origins is given the obvious lie by the male sex drive and by male imperatives of domination that clearly have animal origins. Are women, then, less hypocritical and perhaps more accepting of their biological roots? Are those roots in fact more socially acceptable than men’s?

If the centre of gender gravity is moving toward the feminine, what could be the consequences for society, for the world? Certainly, a reaction by patriarchy to the threat of “liberal” (read: feminine?) values might be expected and is indeed seen around the world. We could expect an increasing preoccupation with boundaries, which are indeed madly invaded and defended as political borders. Power asserts itself not only against other power-wielding males, but also to defend against the very idea of an alternative to power relations. Men egg each other on in their conspiracy to maintain masculine values of domination, control, the pursuit of money and status, etc. Increasing bureaucracy may be another symptom, since it thrives on structure and hierarchy.

The human overpopulation and destruction of the planet should militate against men and women continuing in their traditional biological roles as progenitors, and against the traditional social goal of “getting ahead.” If so, what will they do with their energies instead? The fact that modern women can escape their traditional lot by embracing masculine values and goals is hardly encouraging. Far better for the world if they claim their individuality by re-defining themselves (and femininity) from scratch, neither on the basis of biology nor in the political world defined by men. On the other hand, one could take heart in the fact that some men are abandoning traditional macho identities. There is hope if that shift is widespread and more than superficial: if gay rights and gender freedom, for example, represent an emerging mentality different from the one that is destroying the world.

On the other hand, boundaries are sure to figure in any emerging sense of individuality, in which masculinity and femininity may continue to play a role. Can men be real men and women be real women in a way that meets the current needs of the planet? Or should the gender fence be torn down? As a male, I like to think there is a positive ideal of masculinity to embrace. This would involve strength, wisdom, objectivity, benevolence, compassion, justice, etc. Yet, I don’t see why these values should be considered masculine more than feminine. Nor should nurturance, accommodation, patience and peace-keeping be more feminine than masculine. Rather, all human beings should aspire to all these values. If the division of labour according to biological gender is breaking down, there is nothing for it but that a moral “individual” should embrace all the qualities that used to be considered gendered. “In the kingdom of heaven there is neither male nor female.” To achieve these ideals may mean transcending the natural and social bases of gender differences—indeed, ignoring gender as a basis for identity.