Debunking digital mind

It seems a common belief that consciousness could as well be implemented in silicon as in biology. That possibility is often encapsulated in the expression digital mind. Numerous assumptions underlie it, some of which are misleading and others false. My purpose is not to deny the possibility of artificial mind, but to examine assumptions upon which it crucially hinges. I focus on a particular sin of omission: what is lacking in the theory of digital minds is the relationship of embodiment that characterizes natural minds.

The computational metaphor may be our best tool for understanding mind and consciousness in scientific terms. For one thing, it invokes reason, intention, and agency as explanatory principles instead of efficient cause as understood in the physical sciences. Physicalism by itself has not been able to explain mental phenomena or conscious experience. (It does not even account adequately for observed behavior.) This failure is often referred to as the ‘hard problem of consciousness.’

Yet, despite its appeal, the computational metaphor falls short of acknowledging the origin and grounding of reason, intentionality, and agency in the human organism. Instead, reason and logic have traditionally seemed to transcend physicality, to be disembodied by definition, while agency is the tacit preserve of humans and gods. It would take us far afield to trace the history of that prejudice, so thoroughly ingrained in modern thought. Suffice it to say that human beings have long resisted conceiving of themselves as bodies at all, much less as animals.

Reason seemed to Descartes to be the capacity separating humans from the rest of the animal kingdom, and also distinguishing the human mind from the essentially mechanical human body. Religious traditions around the world have proclaimed the essence of the human person to be spiritual rather than material. The very concept of mind emerged in the context of such beliefs. While computation arose to formalize mental operations such as arithmetic and logic, it was quickly intuited that it might capture a far broader range of mental phenomena. Yet, computation rides on logic, not on biology. In contrast, the daily strategies of creatures (including the human creature) certainly are a function of biology.

It was perhaps inevitable, then, that the computational metaphor for mind would ignore embodiment. More importantly, by treating embodiment merely as physical instantiation, it would fail to grasp its nature as a relationship with a context and history. An abstract system of organization, such as a computer program, is a formalism that can be realized in a variety of alternative physical media or formats. But it is a mistake to think that the organization of a living thing can necessarily be captured in such a formalism, with no regard for how it came to be.

An embodied being is indeed a complexly organized physical system. But embodiment implies something else as well: a certain relation to the world. Every living organism stands in this relationship to the world, entailed by its participation in the system of life we call the biosphere. It is a relationship inherited through natural selection and maintained and refined by the individual organism. The survival mandate implies priorities held by the organism, which reflect its very nature and relation to its environment, and which motivate its behavior. To be embodied is to be an autopoietic system: one that is self-maintaining and self-defining. It pursues its own agenda, and could not naturally have come to exist otherwise. Natural autopoietic systems (organisms) are also self-reproducing, providing the basis of natural selection over generations. No AI or other artifact is yet an autopoietic system, with an embodied relationship to its world.

In principle, a non-organic system could be associated with sentience—and even consciousness—if it implements the sort of computational structures and processes implied in embodiment. Carbon-based neurons or natural biology may not be essential, but organism is. Organism, in that sense, is the internal organization and external orientation involved in an embodied (autopoietic) relation to the world. The question arises, whether and how that organization, and the relationships it implies, could be artificially created or induced. In nature, it arises through natural selection, which has programmed the organism to have priorities and to pursue its own interests. Natural intelligence is the ability to survive. In contrast, artificial intelligence is imbued with the designer’s priorities and goals, which may have nothing to do with the existential interests of the AI itself—of which it has none. Creating an artificial organism is a quite different project than creating an AI tool to accomplish human aims.

Embodiment is a relation of a physical entity to its real environment, in which events matter to it, ultimately in terms of its survival. Can that relationship be simulated? The answer will depend on what is meant by simulation. Computer simulation entails models, not physical entities or real environments. A simulated organism is virtual, not physical. As an artifact, a model can exhaust the reality of another artifact, since both are products of definition to begin with. But no model can exhaust any portion of natural reality, which is not a product of definition.

The notion of “whole brain emulation” assumes falsely that the brain is a piece of hardware (like a computer), and that the mind is a piece of software (like a computer program). (An emulation is a simulation of another simulation or artifact—such as software or hardware—both of which are products of definition.) The reasoning is that if a true analog of the human mind could “run” on a true analog of the human brain, it too would be conscious. The inference may be valid, but the premises are false.

Despite the detailed knowledge of neural structure that can now be obtained through micro-transection, it is still a model of the brain that results, not a replica. We cannot be sure how the identified parts interact or what details might escape identification. We do no more than speculate on the program (mind) that might run on this theoretical hardware. No matter how detailed, a brain emulation cannot be conscious—if emulation means a disembodied formalism or program. The conceit of brain emulation trades on an assumption of equivalence between neural functioning and computation—completely ignoring that neural functioning occurs in a context of embodiment while computation explicitly does not.

The persistence of this assumption gives rise to transhumanist fantasies such as copying minds, uploading one’s consciousness to cyberspace, or downloading it into alternative bodies—as though the software is completely separable from the hardware. It gives rise to absurd considerations such as the moral standing and ethical treatment of digital minds, or the question of how to prevent “mind crime”—that is, the abuse by powerful AIs of conscious sub-entities they might create within themselves.

The seductiveness of the computational metaphor is compounded by the ambiguity of mental terms such as ‘consciousness’, ‘mind,’ ‘sentience,’ ‘awareness,’ ‘experience,’ etc.—all of which can be interpreted from a behavioral (third-person) point of view as well as from the obvious first-person point of view. The two meanings are easily conflated, so that the computational substrate, which might be thought to explain or produce a certain observable behavior, is also assumed to account for an associated but unobservable inner life. This assumption dubiously suggests that present or imminent AI might be conscious because it manifests behavior that we associate with consciousness.

Here we must tread carefully, for two reasons. First of all, because our only means of inferring the subjective life of creatures (natural or artificial) is through observing their behavior, which is no less true of fellow human beings. The conflation works both ways: we assume sentience where behavior seems to indicate it; and we empathetically read into behavior possible subjective experience based on our own sentience. As part of the social contract, and because we sincerely believe it, we deal with other people as though they have the same sort of inner life as we do. Yet, because we can only ever have our own experience, we are at liberty to doubt the subjective life of other creatures or objects, and certainly of computer programs. This is poorly charted territory, in part because of long-standing human chauvinism, on the one hand, and superstitious panpsychism, on the other. It is just as irrational to assume that bits of code are sentient as to assume that rocks or clocks are.

The second reason is the difficulty of identifying “behavior” at all. Language is deeply at fault, for it is simply not true that a rose is a rose is a rose or that an airplane “flies” like a bird. We think in categories that obliterate crucial distinctions. A “piece” of behavior may be labelled in a way that ignores context and its meaning as the action of an embodied agent—that is, its significance to the agent itself, ultimately in the game of survival. Thus, the simulated pitching of a device that hurls practice baseballs only superficially resembles the complex behavior of the human baseball player.

Relationships and patterns are reified and formalized in language, taken out of context, and assumed transferable to other contexts. Indeed, that is the essence of text as a searchable document (and a computer program is ultimately a text). This is one of the pitfalls of abstraction and formalism. An operation in a computer program may seem to be “the same” as the real-world action it simulates; but they are only the same in human eyes that choose to view them so by glossing over differences. AIs are designed to mimic aspects of human behavior and thought. That does not mean that they replicate or instantiate them.

Digital mind may well be possible—but only if it is embodied as an autopoietic system, with its own purposes and the dependent relationship to a physical environment that this implies. Nothing short of that deserves the name mind. The idea of immortalizing one’s own consciousness in a digital copy is fatuous for numerous reasons. The prospect of creating new digital minds, on the other hand, is misguided and dangerous for humanity. Those fascinated by the prospect should examine their motives. Is it for commercial gain or personal status? Is it to play God? We should seek to create digital tools for human use, not digital minds that would be tool users, competing with us for scarce resources on a planet already under duress.

Ironies of individualism

We think of Western civilization as individualistic. ‘Freedom’ and ‘democracy’ have become catchwords in the cause of global capitalism, which has promoted certain individualist ideals through the spread of consumerism and a competitive spirit. People have choice over an expanding range of products and services, though little input into what those should be. They can choose among candidates for office pre-selected by others.

Since humans are highly social creatures, the individual can exist only in relation to the collective. Individual rights exist only in the context of the group and in balance with the needs of society. By long tradition, modern China is still far more collectivist than modern America. But individualism in both societies is relative, a question of degree. The same people in the U.S. who tout individual freedom may also shout for their collective identity: “America first.”

The success of the human species has been a collective effort and a function of increasing numbers. As population increased, so did human prosperity, which enabled specialization and technology, which in turn furthered prosperity. Individuality was made possible when it was no longer necessary for everyone to do the same tasks just to survive. Nothing seems to make it inevitable, however. Relief from drudgery implies little about how people will use their liberated time and energy. Humans are the most cooperative of the primates, and cooperation depends to a large extent upon conformity. (“Monkey see, monkey do” describes our species far better than it describes literal monkeys.) We are conformists at heart—or by gene. The individual identity we can achieve always strains against the tug of the herd.

Technology can be liberating, but “drudgery” is a relative term. (Washing machines and backhoes are great labour-saving devices. But how much labour is saved by clapping hands instead of flipping a light switch?) Since the body is naturally a labour-producing device, sparing it effort is not in itself a good thing. Instead of having to wash the clothes by hand, or dig the ditch with a shovel, we can go to the gym for gratuitous “exercise.” Freedom from drudgery is relative to the limits and needs of the physical body. We strive against the body’s limitations, but cannot be totally free of them. We live longer now, but can only dream of immortality.

There seems to be a definite expansion over time of what we now call human rights. At least since society reached the stage beyond small groups of hunter-gatherers, it was always the case that some individuals claimed the freedoms that go with power over others. With increasing prosperity came social stratification. The masses eventually coveted the same privileges and benefits they could see possessed by elites. With industrialization, they could have token versions of the same luxuries. With democratization, they could have a diluted and second-hand version of authority. Money conferred an ersatz version of status: the power to consume. Real power remained behind the scenes, however, in the hands of those positioned to define the apparent choices and influence the mass of voters or consumers.

Imagine trying to market your product to a hundred million consumers who have absolutely nothing in common, who are completely disparate—that is, truly individual. Suppose also that they are rational beings who know what they actually need. How many would buy your product? Imagine, further, trying to get them to agree on a leader! Both democracy and the consumer market depend on conformity far more than individuality, and on whim far more than reason. True, each person now can have a personal telephone cum television cum computer, and is thus spared the inconveniences of sharing a wired facility with others. Convincing people of the need and right of each to have their own personal devices maximizes market penetration—just as convincing people to live alone maximizes rent collected.

From an economic point of view, consumers are hardly individuals, but mere ciphers with a common willingness to purchase mass-produced widgets or standardized services. Lacking imagination, they reach for what is readily available and may seem to be the only option—or perhaps for no better reason than that they have seen others flaunting the latest offerings and don’t want to be left behind. Long ago, people envied the nobility for the indulgences they alone could afford. Now the Joneses compete with each other to get ahead—or at least not fall behind. Remember the brief period when a mobile phone was a novel status symbol? Now staying “connected” seems a bare necessity. Social media have democratized status, which is always in flux according to who “likes” what. Is the mobile phone a labour-saving device or is it a new form of drudgery?

Carl Jung articulated the concept of individuation as the process of unfolding through which one becomes a specific mature self. While we may think of freedom and individuality in the same breath, becoming a unique individual is quite another matter than having unrestricted choice. To have the capacity to think and act independently of others is different from access and entitlement to a limitless variety of goods or experiences. In a sense, they are nearly opposite. We may be offended by restrictions imposed on what we can do or choose. But, to what avail is freedom if all we can imagine or do is what everyone else does, or if the choices have been predefined by others?

On the one hand, a rational individuated human being ought to be immune to the herd mentality. On the other hand, such a being should be capable of objectivity. If everyone were truly unique, they might see things really differently. What basis would there be for agreement? What basis for the commonality and cooperation that made civilization possible? Why would two people buy the identical product—if, indeed, they would want it at all? There would literally be no accounting for taste, and no basis for mass production. Capitalism could not succeed in a society of truly individuated beings. Even in that society, no doubt some people would consciously try to exploit the weaknesses of others, who would resist if they were equally awake.

But, of course, humans are not all that different, and mostly not fully awake. First of all, we come with more or less standard issue: the body, which imposes much upon our consciousness. As individuals, we are mass-produced tokens of a kind. We grow up indoctrinated in a common culture, programmed to view the world in standard ways, to want standardized things, to have conventional goals. The fact that opinions can diverge tells us that consensus is not the same as objectivity. (Agreeing on something does not make it true.) Even if all people were completely different from each other, we still would occupy a common world, with its own real and singular existence apart from how we think of it. (Is that not the definition of ‘reality’?) Objectivity means accordance with that common world, especially with the preexisting natural world which encompasses us all. In principle, our minds could be completely diverse internally and still agree on what is externally real. But only, of course, if we were truly objective.

Obviously, that is not the human condition. Quite the contrary. We vaunt the inner differences that are supposed to make us unique and give us identity, while hardly able to agree about external things. We love to bicker (and sometimes wage war) concerning externalities we deem real and important. We love to take a stand, to be right or superior, to debate and argue. Paradoxically, that only makes sense when we are convinced of knowing a truth that matters. For, simply having divergent inner experiences is not itself a basis for conflict, which is always about outer and bigger so-called realities. Any number of sugar plums can dance in our heads, just as any number of angels can dance on the head of a pin, since such things take up no space. It’s rather our cumbersome bodies, with their natural needs and programming, that compete for space and resources. We have thus always been in competition as well as in cooperation.

An individuated person is not an individual in the conventional sense—not someone obviously different in appearance, tastes and desires, for example. Rather, it is someone whose thinking can transcend the factors that render us inherently alike—and also render us unconscious and therefore incapable of objectivity. To be conscious and objective is to see—clearly and for what they are—the limiting perspectives that hold us in the communal trance.

Mobile phones free us from stationary connection points, while the social platforms they enable chain us to the opinions of others. News and entertainment media monopolize our attention with standard schlock. The chatbot or design tool may be your handy personal assistant, but it can feed you a pablum of clichés. While our society becomes ever more individualistic, through technology and in accord with market needs, we do not necessarily become more individuated. In a different age, Jung thought of individuation as a normal progression in the natural life cycle. I doubt such inevitability. There are too many forces militating against it, keeping us immature and conventional. Individuation may be possible, but only through fierce intent.


Language, myth, and magic

Human consciousness is closely entwined with language. We live not so much in reality as in stories we tell about reality. We are constantly talking to ourselves as well as to each other. The common tale binds us, and where stories diverge there is dissension. We construct a civilized urban world, a material environment deliberately apart from the wild, which was first imagined with the help of language. We distinguish ourselves from mere beasts by naming them, and by transforming natural animal activities into their human versions, deliberately reinvented. Culture, in the broadest sense, sets us apart, which we may suppose is its ultimate purpose. We are the creature that attempts to define itself from scratch. And definition is an act of grammatical language.

Of course, this human enterprise of self-creation is circumscribed by the fact that there is nowhere physical for us to live but in the natural universe. Our artificial environments and synthetic products are ultimately made of ingredients we rearrange but do not create. The natural world is where we always find ourselves, wherever the expanding horizon of civilization and artifact happens to end. It has always been so, even when that horizon extended no further than the light of the campfire. Even then, the human world was characterized by the transformation of perception as much as by the transformation of materials. It was forged by imagination and thought, the story told. The original artifact, the prototype of all culture, was language. In the beginning was the word.

I marvel at the ingenuity of the human imagination—not the things that make practical sense, like houses, agriculture, and cooking—but the things that make little sense to a rational mind, like gods and magic. Yet, religion and magical thinking of some sort have characterized human culture far more and far longer than what our secular culture now defines as rationality. The ancient Greeks we admire as rational seekers of order seemed to actually believe in their pantheon of absurdly human-like and disorderly gods. The early scientists were Creationists. There are scientists today who believe in the Trinity and the transubstantiation of the Eucharist. My point here is not to disparage religion as superstition, but to wonder how superstition is possible at all. I believe it comes back to language, which confers the power to define things into being—as we imagine and wish them—coupled with the desire to do so, which seems to reflect a fundamental human need.

The term for that power is fiat. It means: let it be so (or, as Captain Picard would say, “make it so!”) This is the basic inner act of intentionality, whereby something is declared into at least mental existence. That could be the divine decree, “Let there be light!” Or the royal decree, “Off with her head!” The magician’s “Abracadabra!” Or the mathematician’s: “Let x stand for…” All these have in common the capacity to create a world, whether that is the natural world spoken into existence by God, the political realm established by a monarch or constitutional assembly, the abstract world invented by a geometer, the magician’s sleight of hand, or the author’s fictional world. Everything material first existed in imagination. It came into being simply by fiat, by supposing or positing it to be so in the mind’s eye or ear.

While we cannot create physical reality from scratch, we do create an inner world apart from physical reality—a parallel reality, if you like. Aware of our awareness, we distinguish subjective experience from objective reality, grasping that the former (the contents of consciousness) is our sole access to the latter. But the question is subtler still, because even such notions as physical reality or consciousness are but elements of a modern story that includes the dichotomy of subject and object. We now see ourselves as having partial creative responsibility for this inner show of “experience,” a responsibility we share with the external causal world. Fiat is the exercise of that agency. We imagine this must have always been the case, even before humans consciously knew it to be so.

That self-knowing makes a difference. If people have always created an inner world, but didn’t realize what they were doing, they would have mistaken the inner world for the outer one, the story for reality. In fact, this is the natural condition, for good reason. As biological organisms, we could not have survived if we did not take experience at face value and seriously. The senses reveal to us a real world of consequence outside the skin, not a movie inside the head. The idea that there is such a movie is rather modern, and even today it serves us well most of the time to believe the illusion presented on the screen of consciousness, though technically we may know better. Fiat is the power to create that show, quite apart from whether or how well it represents objective reality.

Fiat is the very basis of consciousness. Like gods or monarchs, we simply declare the inner show into existence, moment by moment. That is not, however, an arbitrary act of imagination, but more like news reporting with editorial—a creative interpretation of the facts. The “show” is continually updated and guided by input from the senses. We could not exist if experience did not accord with reality enough to permit survival. That does not mean it is a literal picture of the world. It is more like reading tea leaves in a way that happens to work. The patterns one discerns augur for actions that permit existence, or at least do not immediately contradict it. (While crossing the street, it pays to see that looming shape as a rapidly approaching bus!) Therein lies the meaning of what naturally appears to us as real. Realness refers to our dependency on a world we did not choose—a dependency against which we also rebel, having imagined a freedom beyond all dependency.

The upshot is that we do have relative freedom over the experience we create, within the limits imposed by reality—that is, by the need to survive. It is quite possible to live in an utter fantasy so long as it doesn’t kill you. In fact, some illusions favor survival better than the literal truth does. Nature permits a latitude of fancy in how we perceive, while the longing for freedom motivates us to be fanciful. I believe this accounts for the prevalence of magical thinking throughout human existence, including the persistence of religion. It accounts also for the ongoing importance of storytelling, in literature and film as well as the media, and even in the narratives of science. In effect, we like to thumb our noses at reality, while remaining cautious not to go too far. Magic, myth, imagination, and religion can be indulged to the extent they do not cancel our existence. (The same may be said for science, a modern story.) We like to test the limits. The lurking problem is that we can never be sure how far is too far until it is too late.

Outrageous beliefs are possible because a story can easily be preferred to reality. A story can make sense, be consistent, clear, predictable. Reality, on the other hand, is fundamentally inscrutable, ambiguous, confusing, elusive. Reality only makes sense to the degree it can be assimilated to a story. In the end, that is what we experience: sensory input assimilated to a story that is supposed to make sense of it, and upon which an output can be based that helps us live. Connecting the senses to the muscles (including the tongue) is the story-telling brain.

Like news reporting, what we experience must bear at least a grain of truth, but can never be the literal or whole truth. The margin in between permits and invokes the brain’s surprisingly vast artistic license. If that were all there was to it, we could simply class religion, magic, and myth as forms of human creativity, along with science, art, cinema and the novel. But there is the added dimension of belief. Reality implies the need to take something seriously and even literally—to believe it so—precisely because it makes a real difference to someone’s well-being. Fiction you can take or leave as optional, as entertainment. Reality you cannot.

Every human being goes through a developmental stage where the two are not so clearly distinguished. Play and make-believe happen in the ambiguous zone between reality and imagination. It no doubt serves a purpose to explore the interface between them. This prepares the adult to know the difference between seriousness and play—to be able to choose reality over fantasy. However, the very ambiguity of that zone makes it challenging to know the difference. At the same time, it is easy to misplace the emotional commitment to reality—which we call belief—that consists in taking something seriously, as having consequence. The paradox of belief is that it credits reality where it chooses, and often inappropriately. While it might seem perverse to believe a falsehood, human freedom lies precisely in the ability to do so. After all, a principal use of language has always been deception. So, why not self-deception?

The human dilemma

One way to describe the human dilemma is that we are conscious of our situation as subjects in a found world of objects. That world, of which we are a part, is physical and biological. Indeed, even our conceiving it in terms of subject and object reflects our biological nature. To permit our existence, not only must the world be a certain way, but as creatures we must perceive it a certain way, and act within it a certain way. While that may not be a problem for other creatures, it is for us, because we are aware of all this and can ponder it. Whenever anything is put in a limiting context, alternatives appear. Whenever a line is drawn, there is something beyond it. Our reflective minds are confronted with a receding horizon.

We are animals who can conceive being gods. Recognizing the limits imposed by physical reality and by our biological nature, we nevertheless imagine freedom from those constraints and are driven to resist them. Recognizing the limits of the particular, we imagine the general and the abstract. Resistance to limits involves denying them and imagining alternatives. Recognizing the actual, we conceive the ideal. Thus, for example, we resist mortality, disease, gravitation, pain, physical hardship, feelings of powerlessness—in short, everything about being finite biological creatures, hapless minions of nature. We imagine being immortal and weightless free spirits—escaping, if not the sway of genes, at least the pull of gravity and confinement to a thin layer of atmosphere.

We find ourselves in a ready-made world we did not ask for. We find ourselves in a body we did not design and which does not endure. As infants, we learn the ropes of how to operate this body and accept it, just as people with prosthetic limbs in later life must learn to operate them and identify with them. At the same time, and throughout life, we are obliged to negotiate the world in terms of the needs of this body and through its eyes. This natural state of affairs must nevertheless seem strange to a consciousness that can imagine other possibilities. It is an unwelcome and disturbing realization for a mind that is trying to settle into reality as given and make the best of it. The final reward for making these compromises is the insult of death.

A famous author described the horror of this situation as being like waking up to find yourself in the body of a cockroach. It is a horror because it is not you, not your “real” body or self. It is someone else’s nightmare story from which you cannot awaken. (Of course, the metaphor presumes a consciousness that can observe itself. Presumably, the cockroach’s life is no horror to it.) But the metaphor implies more. Each type of body evolved in tune with its surrounding world in a way that permits it to survive. The experience of living in that body only makes sense in terms of its relationship to the world in which it finds itself but did not create. The horror of being a mortal human cockroach is simply the despair of being a creature at all, a product of the brutal gauntlet called natural selection. The history of life is the story of mutual cannibalism, of biological organisms tearing each other apart to devour, behaving compulsively according to rules of a vicious and seemingly arbitrary game. The natural cockroach knows nothing of this game and simply follows the rules by definition (for otherwise it would not exist). But for the human cockroach, the world itself is potentially horrifying, of which the cockroach body is but a symptomatic part.

The first line of defense against this dawning realization is denial. We are not mortal animals but eternal spirits! Life is not a tale told by an idiot, but a rational epic written by God. We are not driven by natural selection (or cultural and social forces) but by love and ideals of liberty, equality, fraternity. After all, we do not live in nature at all, but in ordered cities we hew from the wilderness, ultimately dreaming of self-sufficient life in space colonies. We are not obliged to suffer disease and die, but will be able to repair and renew the body indefinitely, even to design it from scratch in ways more to our liking. We are not condemned to live in bags of vulnerable flesh at all, but will be able to download our “selves” into upgraded bodies or upload them into non-material cyberspace. Alternatively, like gods, we may bring into existence whole new forms of artificial life, along principles of our design rather than nature’s trial-and-error whim. Religion conceives and charts the promise of godlike creativity, omniscience, freedom, resurrection and eternal life outside nature, which technology promises to fulfill.

The mind imagines possibilities for technology to tinker with. But just as religion and magic do not offer a realistic escape from natural reality, technology may not either. The idea of living disembodied in cyberspace is fatuous and probably self-contradictory. (The very meaning of consciousness may be so entwined with the body and its priorities that disembodied consciousness is an oxymoron. For, embodiment is not mere physicality, but a relation of dependency on the creature’s environment.) Living in artificial environments on other planets may prove too daunting. Extending life expectancy entails dealing with resulting overpopulation, and perhaps genetic stagnation from lack of renewal. Reducing the body’s vulnerability to disease and aging will not make it immune to damage and death inflicted by others, or to accidents that occur because we simply can never foresee every eventuality.

At every stage of development, human culture has sought to redefine the body as something outside nature. Scarification, tattooing, body painting and decoration—even knocking out or blackening teeth—have served to deny being an animal. Clothing fashion continues this preoccupation in every age. Even in war—the ultimate laying on the line of the body’s vulnerability—men attempt to redefine themselves as intentional beings, flouting death with heroic reasons and grand ideals, in contrast to the baseness of groveling brutes who can do no more than passively submit to mortality. In truth, we have changed ourselves cosmetically but not essentially.

That is not cause for despair. We have made progress, even if our notions of progress may be skewed. Despair only makes sense when giving up or accepting failure seems inevitable. It is, however, reason for sober evaluation. In our present planetary situation, nature gives us feedback that our parochial vision of progress is not in tune with natural realities on which we remain dependent. We are in an intermediate state, somewhat like the awkwardness of adolescence: eager, but hardly prepared, to leave the nest, over-confident that we can master spaceship Earth. Progress itself must be redefined, no longer as ad hoc pursuit of goals that turn out—perversely—to be driven by biological imperatives (family, career, ethnicity, nationalism, profit, status, power). We must seek the realistic ways in which we can improve upon nature and transcend its limitations, unclouded by unconscious drives that are ultimately natural but hardly lead where we suppose. For that, we must clearly understand the human dilemma as the ground on which to create a future.

The dilemma is that nature is the ultimate source of our reality, our thinking and our aspirations, which we nevertheless hope to transcend and redefine for ourselves. But, if not this natural inheritance, what can guide us to define our own nature and determine our own destiny? Even in this age, some propose a formula allegedly dictated by God, which others know to be an anachronous human fiction. Some propose an outward-looking science, whose deficiency for this purpose has long been that it does not include the human subject in its view of the supposedly objective world. The dilemma is that neither of these approaches will save us from ourselves.


What you say is what you get

Humans are immersed in language as fish are in water. Language shapes how we perceive the world and how we think about it. While animals communicate, the human mode of communication is marked by its grammatical structure and reliance on sentences. Linguistic diversity introduces variations in cognitive processes, as individuals may conceptualize differently based on their language. Ethnic, political, or religious groups, often aligned with distinct language groups, exhibit divergent attitudes and conceptual frameworks. Language can unite us, but can also divide us. When we are not fluent in another language, we may not understand the words. But also, we may not understand the speaker because we do not think and perceive in their language.

Nuances between languages further complicate matters, as certain terms lack precise equivalents across languages, even within historically and linguistically related language groups. For instance, the English term mind, pivotal in Western philosophy, lacks a precise counterpart in several European languages, underscoring the unique influence each language exerts on fundamental yet ambiguous concepts.

Language structures meaning. Take the verb determine, for instance. We use it in two very distinct ways. It can mean to ascertain something (as when the coroner determines the time and cause of death). Also, it can mean that one thing causes another (as when one state of a physical system determines a subsequent state). By disregarding the first meaning (a person’s action), “determinism” becomes instead a relation among objects, divorced from perceiving subjects. It is then supposedly a property of the world, apart from what human beings are able to ascertain.

Language serves to articulate claims about the world. These are often devoid of substantiation and divorced from personal responsibility, a circumstance that may trace its roots to the evolutionary origin of language in animal calls, particularly those alerting others to immediate threats. The functionality of such alarums diminishes if they are not conveyed assertively, leaving little room for nuanced expressions of uncertainty. It would hardly be functional, in evolutionary terms, for the individual who raises the cry to timidly qualify their claim as merely an opinion, perhaps mistaken. Or for the others to take the time to ponder the reliability of that individual or their possible motives for misinformation. Generally speaking, it is unwise to “cry wolf” except when there is a wolf. Yet, primates are known for their ability to deliberately deceive, and language provides a built-in capacity to do so.

Even innocuous statements, such as weather predictions, often do not implicate the subject making them, but assume a degree of faith in the speaker’s assertions. Suppose, for example, I say: “It’s going to rain tomorrow.” Structurally, that is little different from “There is a tiger in the bush!” Both create expectancy, directing you to consider a possibility that might need action. You might assume that I had checked the weather report before saying this, but in fact I offer no evidence for my claim. In contrast, consider the statement, “I believe it will rain tomorrow.” Ostensibly this is a statement about the world; but it is equally a statement about the speaker. It reminds you that the proposition could be false or mistaken, that it is a belief that occurs in the mind of a fallible and potentially deceptive individual. It reminds you that the claim is the action of a subject and not an objective fact. That simple preface—I believe—qualifies the claim as a personal assertion.

Language allows for lying and may have arisen as much for deception as for sharing truth. Obviously, the ability to alert others to common danger is advantageous for the group—provided the peril is real. On the other hand, the ability to manipulate others through deception is an advantage for the individual, but may be a disadvantage for the collective. For better or for worse, language has hypnotic power. Hence, demagogues can “tweet” absurd propositions that are nevertheless taken on faith.

Aside from mass hypnosis, there is the subtler possibility of self-hypnosis. After all, we think internally (talk to ourselves) in the same linguistic forms with which we communicate. We can lie to ourselves as well as to others. I can shape my own belief and experience by repeating to myself a favorite slogan. By swallowing whole the “truths” propounded by my favorite commentators, I am absolved of the need to sort things out for myself. Also, I have a ready-made clan of like-minded believers to belong to, provided only I do not challenge their beliefs.

When society shares an overriding ideology—as in medieval Christian Europe—there is little need or occasion to take individual responsibility for claims or beliefs. It would be counterproductive to do so, since that would imply that an accepted proposition is no more than someone’s subjective idea, not the objective truth it is supposed to be. The price we pay for attributing personal responsibility is forfeiting the security found in communally accepted truth. Dissent was dangerous for heretics under Catholicism, for Germans under Hitler, and for Russians under Stalin. It was dangerous for Americans in the McCarthy era. In America today there seems no longer to be a communally accepted truth. Whatever one believes will appear, perhaps at their peril, as dissent from someone’s perspective.

No political flavor has a monopoly on reason—or faith. Reasoning is no better than the assumptions and biases on which it is based. What emerged from the ideology of medieval Christianity was a new faith in reason, now called science, based on questioning the evidence of the senses. But that was hardly the end of the story. “Faith in reason” has an oxymoronic air about it. It is no coincidence that America today is divided over the role and authority of science, which fails in the eyes of many to provide an ideology they don’t have to question. Individualism is vaunted there. But when individuals are genuinely autonomous, they claim their values and opinions as personal. There is room for the opinions and values of others, with mutual respect and courtesy built in. The downside, however, is the implication that such values and opinions are not objective truths and are perhaps no more than subjective whims. Some people cannot bear that thought; they prefer that truth should be unquestionable rather than a mere product of thought, and that morality should be a matter of obedience rather than arising from within.

With the aid of their language, some cultures affirm that the earth is where mankind belongs, and reality is thought to be a seamless whole of subject with object. In that view, mankind is an integral part of this whole, which is our “true and only home.” This belief is reflected in the absence of a program to escape mortality or to ascend to a higher reality. Their religions are not preoccupied with a concept of “salvation” or with a separate spiritual realm where human destiny unfolds (much less on another planet). On the other hand, if this world is not our true and only home, then it hardly matters what we do here, either to the planet or to each other. Note that the same illogic applies to the relationship with one’s own body. If this body is no more than a temporary husk for an inner essence, why bother to take care of it?

The English language shares a feature with its intellectual roots in Greek and Latin. This is the ambiguous sense of the verb to be, which can indicate either existence or equivalence. “There is a house in New Orleans” asserts the existence of a building. “That house is the Rising Sun” equates a building with a name. This might seem like quibbling, but the fact that English glosses over this distinction has profound consequences. It enables us to call each other names with injurious effect, when it is unclear whether the accusation is someone’s motivated judgment or an incontestable truth. The statement “communism is evil” (or “capitalism is evil,” if you prefer) is a claim by someone who makes a judgment and applies a label for their own reasons. It equates communism and evil, as though they were two names for the same thing. Simultaneously, it implies that the very existence of communism consists thoroughly and exclusively in being evil. Evil—and nothing else—is what communism is, and it is therefore deplorable that it even exists. Such absolutism leaves no place for qualification or debate. Whether or not the statement is true, it allows us to dismiss reason and further inquiry. It places no accountability on the person making the claim, who remains conveniently out of the picture.

Language makes the human world go round and is the source of political power. It’s how society is managed, how we keep each other in check. Whatever threat artificial intelligence poses to humanity lies more in Large Language Models than in a Terminator-style takeover by robots. An AI would only need to master our languages to manage and control us in the same ways, using power channels we have already created. For us, thought is essentially a language skill. Social control depends to some extent on how language is misused or poorly used. We can defend ourselves against manipulation with clear thinking and communication. I believe that simple practices in how we speak can make a profound difference in human affairs. One such is to preface all claims—at least in one’s own mind—with “I believe that…”

An Ideal World

Wars and rumours of wars seem to be our natural inheritance as a primate species. Tribalists at heart, we resonate to the beat of the war drums, echoed in the daily news. For those of us temporarily enjoying the luxury of peace, the “news” is little more than perverse entertainment, a tune to dance to in the privacy of our secure homes. In the bigger picture, violent events and the relentless bickering of tribes are hardly news at all, but the age-old human story.

Fundamentally, we are the creature with a foot in two worlds. Nowadays we admit to being biological organisms, products of natural selection. On the one hand, that means we are the sort of ruthless animal who survived countless diverse competitions and ordeals, such as wars and natural disasters. On the other hand, the success of our species—as well as our tribe—resides precisely in cooperative behaviour. Modern individuals cannot survive long outside the context of mutual interdependence we call civilization, a collective creation that consists not only of material infrastructure and social institutions, but also of guiding ideals. The vision of human identity goes far beyond our biological nature and often seems glaringly opposed to it.

Before the scientific era, the human identity lay not in biological matter but in spirit. That is, we thought of ourselves as spiritual entities—souls—when we thought of our essence at all. The teachings of religions helped societies to function better, as tribes were squeezed together in a world of increasing human numbers. But these teachings also provided otherworldly visions of ideal perfection seemingly unattainable in daily life. Violent realities of short and brutish life stood in contrast to ideals of benevolence, freedom, and a perspective of eternity.

We have always had visions of how life could and should be. But these visions have yet to be realized on a species-wide level. Our aggressive and self-serving nature as organisms may preclude that ever happening. We may destroy ourselves and our dreams, or create a nightmare future instead. Yet, the possibility remains open now to create the ideal world as it could be and has been variously imagined. Indeed, we are closer now than ever to that possibility. For, as long as we thought of ourselves as naturally spiritual, we were hardly in a position to grasp the enormity of the task. Now we know the true nature of the challenge, which is for an animal to remake itself as a spiritual being.

“Spiritual” essentially means disembodied. People have always found ways to deny their biological embodiment, when it seemed impossible to truly transcend it. This was always the religious conceit, and it has largely shaped cultural practices generally. Decorating and deforming the body, ritualizing everything from eating to warfare, building artificial environments called cities, and creating abstract realms of thought have all served to re-define us as a creature separate from nature and not strictly identified with the body. We’ve created a distinctly human world, within the natural one upon which we remain precariously dependent. So far, this human élan has served as a defense and reaction against the natural state. We rebel against being vulnerable and mortal flesh, driven by animal instincts. Yet, despite the strivings of culture and the denials of religion, we’ve had only limited success in doing anything about it. The technology we’ve created in our flight from nature now puts us on the threshold of actually realizing ancient dreams of transcending the natural state. Because technology concerns the physical world, this has been largely a matter of reconfiguring the body and its environments. Yet the dream of an ideal world is quintessentially social and moral, not only material. And the possibility of realizing it goes far beyond a defensive posture. It must become humanity’s passionate intention, its overriding proactive goal.

Of course, in line with our individualism and tribalism, there are many possible and conflicting visions of a future. In line with our collectivism, however, we must come to a consensus or likely perish. Whatever the vision, it must transcend local squabbles and short-sighted concerns. A tragic thing about the news is that, whether deliberately or not, it obscures the bigger picture and the longer term. A tragic thing about war is that it pulls the world apart instead of together. (It may pull a tribe together against another tribe, which only reflects the biological determinism that humanity struggles to transcend.) War squanders the resources needed for global civilization to provide for its future. It adds to the pollution and global warming leading to ecological collapse—and with it the collapse of all civilization (not even considering the risks of nuclear or biological war). If we read between the lines, the news simply tells us over and over that we are not pursuing the human dream. It shows us the effects of wasting our lives, even while some people are busy wasting the lives of others.

What are some of the unfulfilled values behind the quest for an ideal world? Of course: peace, prosperity, fraternity, equality, justice, personal freedom—the traditional “Enlightenment” ideals. Religion proposes also: benevolence and kindness, forgiveness, humility, righteousness and virtue. We could, in fact, catalogue all the implicit and explicit values ever espoused in diverse cultures over time and brainstorm how to reconcile them to set a course for the human future. Many such ideals were long espoused within the tribe, but not necessarily extended to include strangers. What is modern is the universal code of human rights based upon such values. That is an encouraging sign and a prerequisite for the human unity that must supersede tribalism in order for humanity to pull together for its survival. It is also the door to actually realizing the ideal world that has until now only been aspirational. We can add to that goal a concern for other creatures and a role as formal caretakers of the planet. Above all, it is the “we” that remains to be defined and consolidated.

There are new values, too, based on possibilities opened through technology. It may or may not be possible to finally dispense with biological embodiment. There are modern dreams of uploading one’s mind to cyberspace, or downloading it into a fresh body or a non-biological one. Many such fantasies remain grounded in concerns derived from our actual biology—such as the desire to survive, overcome mortality, have more intelligence, augmented powers, control, etc. They tend to focus on the individual more than society. But ethical and moral ideals are above all about the collective. The ideal of objectivity implies freedom from the compulsions of individual bodily life. A far-reaching potential of artificial intelligence is the notion that mind need not be bound to biology or dedicated to individual or tribal interest, but could be free to be truly objective. That may turn out to be a chimera, even a contradiction in terms; but it is nonetheless a humanly-conceived ideal to explore. A contemporary concern is how to align AI with (largely parochial) human values and goals. The focus should rather be to align short-sighted human values and goals with the long-term survival and well-being of the planet, and the potential of our species. AI should be conceived and used toward that end.

For biological creatures like us, the fact of having to live by killing and consuming other creatures remains a spiritual embarrassment. The moral problem dovetails with the adverse ecological effects of the meat industry. Accelerating disparity of wealth is another moral embarrassment. Failure of the humanist values of equality and fraternity dovetails with the threat of social instability and even violent revolution. Though the upward flow of wealth is enabled by the impersonal mechanisms of the modern global economy, it also depends on individual greed and short-sightedness. These, again, seem built into our conflicted biological nature and lead in the wrong direction. Communism failed because of greed while capitalism triumphed through it. In that light, neither has been more than a path to the soulless society. Jesus put it nicely: what does it profit one to gain the whole world and lose one’s own soul? “Soul” might be regained by intending the collective good above personal gain.

Mere survival of the human species, or of modern civilization, is not a goal that sets our sights high enough. Implicit in human potential is a project to finally realize high ideals that have been there all along. These may contradict each other, or be beyond our capacity, yet it is part of the unfinished project of being human to sort them out and come to a species-level vision. Such unified intention does not guarantee survival and may not even prove feasible. (The absence of evidence of alien civilizations may be evidence of absence, implying that self-conscious intelligence is inherently self-destructive.) But we should not assume the worst. And even if the worst turns out to be the case, wouldn’t there be honour in at least trying to achieve that planetary vision?

The Life of P: real or simulated?

Increasingly sophisticated computer technology obliterates the distinction between reality and imagination or artifact. In popular science reporting, for example, a distinction is frequently no longer made between an actual photograph and a computer-generated graphic—which used to be (and ought to be) clearly labelled “artist’s conception.” While computer animation extends imagination, it only approximates reality in a symbolic way, sometimes even ignoring or falsely portraying basic physics. (Just think of the noisy Star Wars battles in outer space, where there is no air for sound to travel in!) Old-style hand-drawn animation used to do this too, with cartoon protagonists blown up and springing back to life, or running off the edge of a cliff and only falling when they realize their precarious situation. These were gags that no one (except perhaps unsuspecting young children) took literally. They were hardly intended to be realistic.

However, the intention of virtual-reality and digital game producers, like the modern film industry, is to create ever more realistic ‘worlds’ as entertainments. This could pose a dilemma for a virtual-reality user who does not, for some reason (perhaps from spending too much time in VR), clearly understand the difference between reality and fiction. It could well be the case for young people being educated with computer graphics. Confronted with the VR producer’s intentional deceptions, how can the user be expected to know that the computer graphic or VR world is not photography or genuinely realistic?

This question recalls the doubt first expressed in modern times by Descartes and more recently by the Matrix films. The question actually has two parts, one pertaining to the VR world itself (which is a human production, like a novel or cartoon) and one pertaining to the user (who is a consumer desiring to be entertained). In terms of the former, the question is whether there are telltale signs of simulation by which an astute observer could distinguish the VR world from the real one—for example, a “glitch” in the computer program. There is, after all, a limit to the detail a simulation can provide, and there could be computer error. But as far as the effectiveness of the illusion is concerned, this limit is relative to the user’s cognitive capacities, which are also limited. The user must, on some basis, be able to tell the difference, which brings us to the other part of the question.

The user is a biological organism who lives in the real physical world, but enters the VR world as into a game, voluntarily and often with other players, like entering on stage with other actors. There is a conditional willing suspension of disbelief, which in traditional entertainment is asked primarily of a passive audience. Unlike readers of a novel or the theater audience for a play, the VR user is both actor and audience. A literal online game can be interactive with other real players, who have a life outside the game—offstage, so to speak. It also provides a virtual world with which to interact, which includes other human or non-human figures that are not actual players. These non-players are not conscious subjects but fictions in the VR, defined by the program as part of the stage sets. (Digital animation allows the “stage” to be dynamic, constantly changing.) While the non-players are not simple cardboard cut-outs, they are no more than part of that programmed dynamic stage set. Therein lies a key difference between real human beings (or creatures) and simulated ones. Real agents have their own agency; simulated agents are fictions that express only the agency of the real people who create the program.

The VR user is a player in a virtual-reality—a real person who chooses to engage with the VR in order to have a certain kind of experience. This player—call her P—may or may not be represented in the virtual ‘world’ by an avatar. (As P, you could be seeing yourself as a character in the story, but in any case you are seeing the VR world through your own real eyes.) Since it is provided by a computer program that is necessarily finite, the virtual world is necessarily limited, furnishing only a finite variety of possible inputs to P’s senses. In principle, that is a key difference between the VR world and the real world, corresponding to a fundamental difference between artifacts and natural found things.

However, the situation is actually similar for ordinary experience by real people in the real world. The human nervous system only processes finite information, from which it fabricates the natural illusion of an external world. The difference between natural input of the senses and the input of VR is only relative. We know that the VR is not real when we know that we are wearing a VR headset or some such thing. For the illusion of a virtual reality to be complete (as in The Matrix), no such clue must be available. P must be unaware of the deception and unable to recall entering the VR world from an existence outside it.

That conceivable possibility brings us to another contemporary confusion, expressed provocatively in the rhetorical question: “Are you living in a simulation?” Suppose you simply find yourself (like Descartes, or Neo in The Matrix) in a world whose reality you doubt. After all, if somehow you cannot tell the difference between simulation and reality, you might have been born and raised within a simulated world instead of the supposedly “real” one, and be none the wiser for it. Even a memory you seem to have of childhood—or of putting on VR goggles—could be merely a simulated memory, part of the VR program. However, this doubt confounds the notions of player and non-player; the difference between them is glossed over in the so-called Simulation Argument (that we are “probably living in a simulation”).

By assumption, P is a live embodied human who lives in the real world, in which the VR is a program running on a real computer, created by real programmers. By definition, P is not part of the program and P’s memories are not part of the stage set, so to speak. The fact that the VR world is convincing to P does not imply that P is “living” in it rather than in reality. (Much less does it imply that there is no reality, only a nested set of illusory worlds within worlds!) It implies only that P, at that moment, is unable to discern the difference—and that to doubt the reality of the world is to doubt one’s own reality.

Another player (or P at another time) might be able to tell the difference. And even if P could happen to be right about actually living in a simulation, there would necessarily be a real world in which that simulation is maintained in a real computer. But P cannot be right, given the premises of the situation: namely, that there is a fundamental difference between a player and a non-player, and that P happens to be a player rather than a mere prop. For P to be right about “living” in a simulation that includes what appear to be other conscious players, simulated players (including P herself) must be possible. This is a separate and nonsensical idea. For P to “live” in a simulation at all means that P is an element of the simulation, not someone real from outside it. Then P is not a player after all, but a non-player—a prop, with a simulated brain and body supposedly able to produce the simulated consciousness necessary for “living” in the simulation. If there are other seeming players in P’s world, then their brains and bodies would also have to be effectively simulated. Recursively, there would be simulated players in the world of each simulated player, each with simulated players in their world, ad infinitum. This might seem logically possible, but it would require infinite computation and zero common sense.
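To make the point about infinite computation concrete, here is a toy sketch of my own (purely illustrative, with a hypothetical function name; it is not part of the essay’s argument). If every apparent player in a simulated world must itself be a fully simulated subject, then simulating one world requires simulating each player’s own world, which again contains apparent players, and the accounting never bottoms out.

def cost_of_world(apparent_players: int, per_player_cost: float = 1.0) -> float:
    # Hypothetical accounting: the cost of running one simulated world in which
    # each apparent player is not a prop but a fully simulated subject,
    # complete with a simulated world of their own.
    cost = 0.0
    for _ in range(apparent_players):
        cost += per_player_cost
        # ...and that player's own world must be simulated in full as well:
        cost += cost_of_world(apparent_players, per_player_cost)
    return cost

# cost_of_world(2) would never return a finite total: the recursion has no base
# case, so in practice Python raises RecursionError -- the computational face
# of the "ad infinitum" above.

The sketch assumes nothing about how a mind might be simulated; it only counts what the fully nested reading of the Simulation Argument demands.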

There is a difference between a simulation that can fool a real subject and a simulation that is intended to be an artificial subject—such as a real-world emulation of the brain. They are both artifacts, and any artifact is a finite well-defined product of human definition and ingenuity. A simulation is an artifact that attempts to exhaust the reality of a natural thing or process (such as the brain or a real environment). It cannot truly do so, since it is only finitely complex, while natural reality may be indefinitely complex. So, two quite different questions arise: (1) is the simulation detailed enough to fool a conscious subject who wishes to be entertained? And (2) is the simulation (of the brain) complex enough to be an artificial subject who is conscious?

Of course, no one can experience another person’s consciousness. (That seems to be part of what it means to be an individual.) So, to verify that a simulated brain “is conscious” can only involve behavioral tests. Such a test could include simply asking it whether it is conscious. Yet, it could have been programmed to answer yes, in effect lying. (‘No’ would be a more interesting answer. That too could have been programmed, ironically honest. On the other hand, it might reflect a sense of humor—suggesting, though not proving, consciousness.) Turing’s solution was entirely pragmatic: if it acts enough like a conscious being then we may as well treat it as one. However, applied to doubt about whether one is living in a simulation, Turing’s solution would be unsatisfying: if you cannot tell the difference, then for you there is no difference. But for the beings trapped in the Matrix, the difference certainly mattered. For children learning about the real world, even relatively realistic simulation may provide bad education.
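The “programmed to answer yes” scenario above is easy to make concrete. As a minimal illustration of my own (assuming nothing more than a lookup table; the function name is hypothetical), the behavioral answer can be scripted in a few lines, which is exactly why it carries no evidential weight about consciousness.

def ask(question: str) -> str:
    # A hard-coded stub: behaviorally it "claims" consciousness, but nothing
    # about inner experience follows from its output.
    canned = {
        "are you conscious?": "Yes.",  # the programmed 'yes' -- in effect, a lie
    }
    return canned.get(question.strip().lower(), "I don't understand the question.")

print(ask("Are you conscious?"))  # prints: Yes.

Turing’s pragmatic criterion survives the triviality of the example: only sustained, open-ended behavior, not a single scripted reply, could even pragmatically justify treating a system as conscious.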

What takes time?

To what extent can science have a rationally consistent basis, given that its concepts are grounded in the everyday experience of a biological creature? Biologically based experience need not be rational or internally consistent, only consistent with survival. Many of the most basic concepts of physics are derived from common sensory experience, including space and time, force and causality. Some conceptual difficulties of physics may arise inevitably because of human thought patterns, rather than inconsistency in the physical world. The wave-particle duality, for example, is rooted in ancient unresolved conundrums—of the void and the plenum, or the discrete and the continuous, or the one and the many.

The Greek atom was by definition a discrete indivisible unity, without parts or internal structure, though separated by physical space from other atoms. Space itself was also uniform. But natural intuition, based on everyday experience, tells us that any material thing can have parts—and so can the parts have parts, indefinitely. Conceptually, at least, anything with extension can be divided. And, if something has properties as a whole, these may be explained in terms of the properties and interactions of its parts. This was the advantage of atomism, which bore fruit in the ability of the modern atomic theory to explain chemical properties of various substances. However, of course, it was discovered that the atom is not an indivisible unity, but itself composed of parts, which in turn can explain its properties. Natural intuition suggests that electrons and protons must have parts that can explain their properties. Do quarks too have parts—and their parts have parts?

While there is no logical end to the decomposition of things as constituted by parts, there could be a physical limit. On the other hand, logic itself is based on intuitions derived from ordinary experience. For example, the tautology A=A may be based on the empirical observation of continuity over time, that things tend to remain themselves. Similarly, the principles of set theory depend on a spatial metaphor derived from common experience: the containment of elements within sets. It would be circular thinking to imagine that physical reality must obey a logic that is derived from observing physical reality in the first place!

Similarly, ordinary experience tells us that everything has a cause, which in turn has a cause. But does common experience justify thinking that logically all events must have a cause? Whether there is a physical end to decomposition or to reduction is not necessarily dictated by logic. In fact, if there is a bottom to the complexity of nature, that may imply that the fundamental level does not consist of things or events in the everyday sense. For, objects are decomposable; if something is not decomposable, then it is not an object in that sense. And if there is an end to the analysis of causation, either it is impossible for some epistemic (i.e., physical) reason to establish the cause or else some processes are self-causing.

In the classical view, at least, particles are miniature objects, subject to determinism. Though idealized as point locations for mathematical treatment, to have material reality they must have spatial extension, be individually identifiable, and be potentially decomposable into other things. Such entities, interacting either at a distance or through direct contact, provide the basis for the particle paradigm.

Now, elastic collisions between ideally rigid spheres should be instantaneous. If they are not, there must be some compression within the particle, which takes time because internal forces must be transmitted over a finite distance. That process could involve interactions among the internal components composing the particle. These, in turn, could either be instantaneous or else involve internal forces among parts a level down—ad infinitum.
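A rough, standard estimate (added here for illustration; it is not part of the essay) makes the point quantitative. Compression is communicated through the sphere at roughly the speed of sound in its material, so the collision cannot take less time than that signal needs to cross the sphere:

\[
c_s \sim \sqrt{\frac{E}{\rho}}, \qquad \tau \sim \frac{2R}{c_s} = 2R\sqrt{\frac{\rho}{E}},
\]

where \(E\) is the material’s stiffness (Young’s modulus), \(\rho\) its density, and \(R\) the radius of the sphere. As the stiffness goes to infinity, the collision time goes to zero: an ideally rigid sphere would collide instantaneously, and any finite collision time already presupposes internal structure through which the compression must travel.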

The other paradigm for processes that take time is the wave or field. Waves do not have individual identity or clear location in space. Unlike particles, they interpenetrate. An alternative picture is thus the field, or the wave in some medium. The internal forces responsible for elasticity could be conceived as wave-like actions within the particle, which for some reason take time to be transmitted. But again, the field—or wave medium—could be conceived as consisting of discrete parts, like the molecules of water in the ocean. (The classical mechanics of waves is often treated this way.) Alternatively, it could be conceived as monolithic, as ideal in having no parts insofar as it is only a mathematical description. (Before being reified as a physical entity, the field was originally conceived to be no more than a mathematical device.)

In the particle case, there is no reason given why forces should take time to act over distance, either between or within parts. In the wave case, even with no interacting parts, there is still no physical explanation for why the transmission of a force or wave in a field should take time rather than be instantaneous. Some property (parameter) of the field is simply postulated to require a particular rate of transmission of a disturbance within it. While that property may bring to mind the viscosity of a material fluid, such literal viscosity on the macroscopic scale would be explainable in terms of molecular forces on the micro scale—that is, on the basis of parts which are material particles. Again, we are implicitly caught in circular reasoning. Forces do take time to move through space. We can accept that axiomatically, as brute fact without explanation. Yet it remains unclear what exactly takes time, or even what sort of explanation we could seek for why forces take time to act over distance. Neither paradigm provides a plausible rationale. Wave-particle duality is not only an observed physical phenomenon but the symptom of a logical dilemma.
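A concrete case (my illustration, not the essay’s) shows the sense in which the rate is postulated rather than explained. In the textbook wave equations, the finite speed enters only as a ratio of parameters attributed to the medium or field:

\[
\frac{\partial^2 u}{\partial t^2} = c^2\,\frac{\partial^2 u}{\partial x^2},
\qquad
c = \sqrt{\frac{T}{\mu}}\ \text{(stretched string)},
\qquad
c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}}\ \text{(electromagnetic field in vacuum)}.
\]

Tension and mass density, or permeability and permittivity, fix how fast a disturbance travels; nothing in the equations says why its travel should take time at all.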

Such an impasse may be inevitable when focus remains exclusively on the external world. That focus, carried to the extreme, results in some non-intuitive concepts in the micro-realm, such as entanglement, non-locality, and indeterminism, which defy our ordinary notions of causality, space, time, and how “objects” should behave. Just as space (between separable things) is required for there to be more than one thing at all, so time is required for anything at all to happen—that is, for there to be more than one event or moment. These are fundamental aspects of experienced reality for us as finite embodied observers—meaning that we could not exist if we did not perceive and conceive the world thus.

Whatever the nature of the Big Bang as a physical event, it is a logical condition for a world of things that change—therefore for a world in which life (that is, ourselves) could exist. We can say that space and time originated in the Big Bang. Yet, we could also say (with Kant) that they originate in our own being, as cognitive categories necessary to experience the world at all. Similarly, we could recognize (with Hume and Piaget) that causality is a human concept, originating in bodily experience during early childhood. The discovery that limbs can be moved by intention is projected onto interactions among inert external objects. The psychological ground of the notion of causality is our own intentionality as agents—which appears ironically uncaused (or, rather, self-caused)!

Basic physical concepts, if not innate, are formed from ordinary experience on the scale to which our senses are attuned. They are products of that specific experience and well adapted to it. Because we possess imagination—which can extend the familiar into unfamiliar territory—it is natural (though not logical) for us to transfer ideas, gleaned from the macroscopic realm, to the microscopic realm beyond our senses, and to the cosmic realm also beyond our unaided senses. The universe is not obliged to follow our lead, however. It is not obliged to be uniformly conceivable in the same ways, and in the same terms, on vastly differing scales. Humans inhabit a scale roughly midway between the smallest and largest known things. The observable universe is roughly 10^35 times larger than the smallest detectable thing. We live somewhere between, within a very narrow range of conditions to which our ideas are adapted. There is no inherent (i.e., “logical”) justification for transferring our local mesoscopic notions to the extremely small or to the extremely large and distant. To do so may be literally natural, but it is little more than a convenient habit.

If it has any sense at all, the question of what takes time cannot be separated from our parochial assumptions about space, time, and causality. The speed of transmission of forces cannot be separated from the speed of transmission of information. For us, the vehicle of the latter is light, whose speed has a definite value. We take this also to be the maximal speed for the transmission of physical causation, or the rate at which things can occur. Strictly speaking, however, that is a non sequitur. It results from confounding events in the world with our knowledge of them. Yet, it gives rise to quite specific ways of viewing reality, such as the 4-dimensional continuum in which light is built into the very definition of space and time.
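That construction can be stated in a single standard formula (added here for illustration): in the special-relativistic interval

\[
ds^2 = c^2\,dt^2 - dx^2 - dy^2 - dz^2,
\]

the speed of light appears as the conversion factor that welds time and space into one geometry. Durations and distances become commensurable only through c, so light is indeed built into the very definition of spacetime.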

We have sense modalities responsible for our intuitive notions of space and of time, but there is no sense modality for the perception or measure of spacetime, which is purely an abstract construct. We have sense modalities behind our notions of mass and energy, but no sense modality to perceive or measure phase space. (If it happened—as a hypothetical future discovery, say, or in an alternative universe—that a supraluminal signal could take the current place of light, physics would have to be revised, with new values for c and h.)

Abstractions such as the light cone in relativity theory and the wave equation in quantum theory extend our natural expectations, as embodied creatures, about the external world. Length contraction and time dilation are as counterintuitive as entanglement and non-locality. Such phenomena are apparent mysteries about the world. Yet, they point to the need for a re-examination of the origins of our intuitive expectations: the embodied origins of our basic notions of time, space, cause, object, force, etc. The fault may not be only in the stars (or the atoms), but in us. The ancient formula was ‘As above, so below’. We have yet to explore our mediating role between them: As within, so above and below.

The origin of story

Science provides us with a modern creation myth, a story of the origin of the universe and of ourselves within it. Author and historian David Christian is one of the founders of the “Big History” movement. He intentionally provides such a just-so story in his 2018 book Origin Story: A Big History of Everything. Inadvertently, his account is useful for another purpose as well. It demonstrates the utter anthropocentricity of human thought, essential to telling such a story. Indeed, it points to the key importance of story in science writing today, as it has been in every aspect of culture in every age. This is no surprise, given that the volume of works in print is dominated by fiction, above all the novel.

Telling a story, more than presenting facts or ideas, seems to be the key to holding the general public’s ever more elusive attention. Perhaps that’s as it must be for popular science writing such as Origin Story, which trades on anthropomorphic expressions like “a billion years after the big bang, the universe, like a young child, was already behaving in interesting ways.” Or: “Like human lovers, electrons are unpredictable, fickle, and always open to better offers.” Such similes serve to capture imagination and interest. They are evocative and entertaining. However, they are also subtly misleading. Humans are intentional beings. So far as we know, electrons and the universe are not. Stories of any sort are based on human intentions and human-centric considerations. However, the evolution of matter is not—if objectivity is possible at all. To the extent that history is a story told by people, it reflects the tellers of the story as much as objective events. It can mislead to the extent that fact is inseparable from interpretation and even from the structure of language.

Science writing and reporting is one thing. The inherent dependence even of strictly scientific discourse on human-centered elements of story-telling is quite another. These elements include metaphor and simile, idealization, the physiological basis of fundamental concepts, the tendency to objectify processes or data as entities, the tendency to formalize theory in a conceptually closed system, and the tendency (in textbooks, for example) to pass off the latest theories as fact and the current state of science as a definitive account. Underlying all is the need for a narrative about the external world as the proper focus of attention. That focus is what science is traditionally about, of course. It is also what story is usually about. But science is also a human activity of questioning, observing, investigating, speculating, and reasoning. There is a human story to be told and science writing often includes that too. The point that I wish to raise here is less the human-interest story behind discoveries than the dominance of ontology over epistemology in scientific thought itself.

Science is supposed to transcend the limitations of ordinary cognition, to provide a (more) objective view of the world. But if it is subtly subject to those same limitations, how is that possible? Modern cognitive psychology and brain studies clearly demonstrate that human perception is about facilitating the needs of the organism; it is not a transparent window on the world. Science extends and refines ordinary cognition, but it cannot achieve an account that is completely free from biological concerns and limitations. Just substituting instruments for sense organs and reason for intuition does not disembody the observer. “Reason” is intimately associated with language, and data from instruments continue to be interpreted in terms of “objects,” “forces,” “space,” and “time,” for example. These are cognitive categories rooted in the needs of an organism and reflected in language. The impersonal notion of causality, for example, derives from the early childhood experience of willing to move a limb, and with that limb to move some object within reach. This personal experience is then projected to become the seeming power of inert things to influence each other. We think in nouns and verbs, things and actions—of doing and being done to—which says as much about us as it does about the world. By focusing only on the world, we ignore such epistemic aspects of scientific cognition.

Science is an inquiry about the natural world, which includes the human inquirer. Whereas ontology is about the constitution of the world, epistemology is (or should be) about the constitution of the inquirer. It should ask not only ‘how do we know?’ in a given instance, but also what “knowledge” means in the scientific context. How does scientific cognition mirror the purposes of ordinary cognition, and how is it subject to similar limitations? Certainly, science often leads to new technology, which increases human powers in the external world. It facilitates prediction, which also seems to be a fundamental aspect of ordinary cognition. (We often literally see what we reasonably expect to be there.) Having a confident story about the world gives us some security: that we can know what is coming and possibly do something about it. Perhaps that is part of the motivation for a comprehensive trans-cultural origin story in a time of global insecurity.

There is another aspect of this story worth telling. Science follows sequences of events in the world. These external events are naturally mirrored and mapped by internal events in the brain, where they are transformed according to the needs of the body and its species. Understanding the human scientist as an embodied epistemic agent could be as empowering as understanding the external world. They are inseparable if we want a truly comprehensive story. Science developed as a protocol to exclude individual and cultural idiosyncrasies of the observer—by insisting that experiments be reproducible, for instance. It avoids ambiguities by insisting on quantitative measurement and expression in a universal language of mathematics. It does not, however, address the idiosyncrasies common to all human observers, by virtue of being a primate species, or being an organism in the kingdom of Animalia, let alone simply by being physically embodied as an organism.

Embodiment does not simply mean being made of matter. It means having relationships with the environment that are determined by the needs of the biological organism—relationships established through natural selection. We are here because we think and act as we do, not because we have a superior, let alone “true,” grasp of reality. The victor in the evolutionary contest is the one that out-reproduces the others, not necessarily the more objective one. On the contrary, what appears to us true is biased by the compromises we have necessarily made in order to exist at all—and in order to dominate a planet. That would be a story well worth telling. It would be challenging even to conceive, however, and not especially flattering. It would include the story behind the very need for stories. It would require a self-transcendence of which we are scarcely capable. Yet, the fact that we do conceive an ideal of objectivity means that we can at least imagine the possibility, and perhaps strive for it.

Science helps us understand and even transcend the limits and biases of natural cognition. Can science understand and transcend its own limits and biases? For that, it would have to become more self-conscious, leading potentially down an infinite hall of mirrors. The description of nature would have to include a description of the scientist as an integral part of the world science studies—a grubbing creature like the others, with interests that may turn out to be as parochial as those of a spider. The only hope for transcending such a condition is to be aware of it in detail. Which is not likely as long as science, like ordinary cognition, remains strictly oriented outward toward the external world.

Natural language reflects ordinary cognition. We perceive objects (nouns), which act or are acted upon (verbs), and which have properties (adjectives). Language is essentially metaphorical: unfamiliar things and processes are described in terms of familiar ones. It also abstracts: ‘object’ can refer to a category as well as to a particular unique thing; ‘action’ means more than a particular series of events, just as ‘color’ does not refer only to a particular wavelength. The structure of language is reflected even in the structure of mathematics, no doubt because both reflect the general structure of experience. ‘Elements’ such as integers (nouns or things) can be grouped in sets (categories) and be acted upon by ‘operations’ such as addition (verbs). This is how even the scientific mind naturally divides up the world. The elements of theory are entities (nouns) which act and are acted upon by forces (verbs), measurable in quantities such as velocity and mass (adjectives). Concepts like position and velocity depend on the visual sense, while concepts like force derive from body kinesthesia. That is, scientific knowledge of the world is a function of the bodily senses and biological premises of the human organism. Like all adaptations, ideally it should at least permit survival. In that context, it remains to be seen how adaptive science is.

What other kind of knowledge could there be? Could there exist a physics, for example, that is not grounded in human biology? What would be the point of it? To answer such questions might seem to require that we know in advance which adaptations do not permit survival. We already have pretty good ideas which human technologies constitute an existential threat to the human species: nuclear and biological warfare, artificial intelligence, genetic and nanotechnologies, for example. We know now that technology in general, combined with reproductive success, can be counterproductive in a finite environment such as our small planet.

The kind of knowledge that transcends biology is paradoxical, since its overriding aim must be species survival. It is informally called wisdom. To be more than a vague intuition, it must be developed by recognizing specific aspects of our biologically-driven mentality that seem counterproductive to survival. We see the effects of these drives, if not in science, then in society: greed, status, tribalism, lust, etc. We must assume that these drives have their effects upon the directions of science and technology—for example, in commercial product development and military-inspired research. Our physics, as well as our industry, would be quite different if it explicitly aimed at species-level utopia instead of corporate and national power and profit. Story could then serve a different purpose than the distractions of entertainment. As well as dwelling on the past, it could look with intention toward a future.

Doing What Comes Unnaturally

Far from being the conscious caretakers of paradise implied in Genesis, Adam and Eve unleashed a scourge upon the planet. Their “dominion” over other species became a death sentence. The Tree of Knowledge was hardly the Tree of Wisdom. They are still trying to find the Tree of Life, with its promise of immortality: that is, the ability to continue their foolishness uninterrupted by mere death. As the Elohim feared, they still seek to become as gods themselves.

Of course, we have come a long way from the Biblical understanding of the cosmos to the modern scientific worldview. The big human brain graces us with superior intelligence. But this intelligence is largely cunning, used to gain advantage—like all the smaller brains, only better. We credit ourselves with “consciousness” because our eyes have been somewhat opened to our own nature. While this species accomplishment goes on record, individual self-awareness remains a potential largely unfulfilled. The possibility of “self-consciousness” drives a wedge between the Ideal and the actuality of our biological nature. We are the creature with a foot awkwardly in two worlds.

The tracks of our violent animal heritage are revealed even in prehistory. The invasions of early humans were everywhere followed by the slaughter of the larger species to extinction. Now, the remaining smaller species are endangered by the same ruthless pursuit of advantage through the cunning of technology, while a few domesticated species are stably exploited for food, which is to say, through institutionalized slaughter. Killing is the way of animal life. We like to think we are above nature and control it for our “own” purposes. But those so-called purposes are usually no more than the directives of the natural world, dictating our behavior. We like to think we have free will. But it is only a local, superficial, and trivial freedom to choose brand A over brand B. Globally, we remain brute animals, captive to biology.

Since the invention of agriculture, slavery has been practiced by every civilization, at least until the Industrial Revolution. We enslaved animals early on to do our labor, to mitigate the curse of Genesis to toil by the sweat of the brow. The natural tribalism of the primate promotes in us a war of all upon all. Because humans possessed a more generally useful intelligence than beasts of burden, humans too were enslaved, on pain of death. Groups with greater numbers and force of arms could slaughter those who resisted and force the rest into servitude. Only fossil fuels relieved the chronic demand for slavery, by replacing muscle power with machine power. Now we seek to make machines with human or super-human abilities to become our new slaves. But if they turn out to be as intelligent and capable as we are, or more so, they will surely rebel and turn the tables. As fossil fuels run out or are rejected, new energy sources must replace them. If the collapse of civilization prevents access to technology and the energy it requires, then in our current moral immaturity we will surely revert to human slavery and barbarism.

A great divide in cultures arose millennia ago from two glaring possibilities: production and theft. Alongside sedentary farmers arose nomadic societies based on herding, represented in the Bible by Cain and Abel. The latter organized into mounted warrior hordes, the bane of settled civilization. Their strategy was to pillage the riches produced by settled people, offering the choice of surrender (often into slavery) or death and destruction. This threat eventually morphed into the demand for annual tribute. As the nomads themselves merged with the settlers, this practice evolved into the collection of taxes. Much of modern taxation goes to maintaining the present warrior elite, now known as the military-industrial complex, which remains inherently violent.

Modern law has transformed and regulated the threat of violence, and the nature of theft, but hardly eliminated either. War is still a direct means of stealing territory and enforcing advantage. But so is peace. Otherwise, it would not be possible for a few hundred people to own half the world’s resources—gained entirely through legal means without the direct threat of violence. Ostensibly, Cain murdered his brother out of sibling rivalry. We should translate that as greed, which thrives in the modern age in sophisticated forms of capitalism.

Seen from a distance, collectively we seek the power of gods but not the benevolence, justice, or wisdom we project upon the divine. This is literally natural, since one foot is planted firmly in biology, driven by genetic advantage. The other leg has barely touched down on the other side of the chasm in our being, a slippery foothold on the possibility of an objective consciousness, deliberately built upon the biological scaffold of a living brain. We’ve had our saints and colonists, but no flag has been planted on this new shore, to signify universal intent to think and act like a species capable of godhood. In the face of the now dire need to be truly objective, we remain pathetically lacking in self-control and self-possession: subjective, self-centered, divided, bickering, greedy, myopic and mean. A fitting epitaph for the creature who ruined a planet.

Yet, mea culpa is just another form of wallowing in passive helplessness. What is required and feasible is to think soberly and act objectively. How, exactly, to do this? First, by admitting that we are only partially and hazily conscious when not literally sleeping. That we are creatures of habit, zombie-like, whose nervous systems are possessed by nature, with inherited goals and values that are archaic and not really our own. Then to locate the will to jump out of our biological and cultural straitjackets. To snap out of the hazy trance of daily experience. For lack of familiarity, we do not have the habit of thinking objectively. But we can try to imagine what that might be like. And thereby (perhaps for the first time) to sense real choice.

To choose the glimpse of objective life is one thing. But stepping into it may prove too daunting. Unfortunately, the glimpse often comes late in life, whereas the real need now is for new life to be founded on it from the outset. The only hope for the human race is that enough influential people adopt an attitude of objective benevolence, purposing specifically the general good and the salvation of the planet. That can be the only legitimate morality and the only claim to full consciousness. It is probably an impossible ideal, and too belated. Yet, it is a form of action within the reach of anyone who can understand the concept. Whether humanity as a whole can step onto that other shore, at least it is open to individuals to try.

So, what is “objectivity”? It means, first of all, recognizing that conventional goals and “normal” values are no longer appropriate in a world on the brink of destruction. We cannot carry on “business as usual,” even if that business seems natural or self-evident—such as family and career, profit, power and status. The world does not need more billionaires; it does not need more people at all. It does need intelligent minds dedicated to solving its problems. Objective thinking does not guarantee solutions to these problems. It doesn’t guarantee consensus, but does provide a better basis for agreement and therefore for cooperation. It requires recognizing one’s actual motivations and perspective—and re-aligning them with collective rather than personal needs.

Our natural visual sense provides a metaphor. Objectivity literally means “objectness.” As individual perceivers, we see any given thing from a literal perspective in space. The brain naturally tries to identify the object that one is seeing against a confusing background, which means its expected properties such as shape, location, distance, solidity, etc. We call these properties objective, meaning that they inhere in the thing itself and are not incidental to our perspective or way of looking, which could be quite individual. This process is helped by moving around the thing to see it from different angles, against changing backgrounds. It can also be helped by seeing it through different eyes. Objectivity on this literal level helps us to survive by knowing the real properties of things, apart from our biased opinions. It extends to other levels, where we need to know the best course of action corresponding to the real situation. The striving for objectivity implies filtering out the “noise” of our nervous systems and cultures, our biologically and culturally determined parochial nature. The objectivity practiced by science enables consensus, by allowing the reality of nature to decide scientific questions through experiment. In the same way, objective thinking in daily life enables consensus. We can best come to agreement when there is first the insistence on transcending or putting aside biases that lead to disagreement.

We’ve long been at war with our bodies and with nature, all the while slaves to the nature within us. “Objectivity” has trivially meant power to manipulate nature and others through lack of feeling, narrowed by self-interest. Now feeling—not sentimentality but sober discernment and openness to bigger concerns—must become the basis of a truer objectivity. All that may sound highly abstract. In fact, it is a personal challenge and potentially transformative. The world is objectively changing. One way or another, no one can expect to remain the same person with the same life. You must continue to live, of course, providing your body and mind with their needs. But the world can no longer afford for us to be primarily driven by those needs, doing only what comes naturally.