
Doing What Comes Unnaturally

Far from being the conscious caretakers of paradise implied in Genesis, Adam and Eve unleashed a scourge upon the planet. Their “dominion” over other species became a death sentence. The Tree of Knowledge was hardly the Tree of Wisdom. They are still trying to find the Tree of Life, with its promise of immortality: that is, the ability to continue their foolishness uninterrupted by mere death. As the Elohim feared, they still seek to become as gods themselves.

Of course, we have come a long way from the Biblical understanding of the cosmos to the modern scientific worldview. The big human brain graces us with superior intelligence. But this intelligence is largely cunning, used to gain advantage—like all the smaller brains, only better. We credit ourselves with “consciousness” because our eyes have been somewhat opened to our own nature. While this accomplishment of the species goes on record, individual self-awareness remains a potential largely unfulfilled. The possibility of “self-consciousness” drives a wedge between the Ideal and the actuality of our biological nature. We are the creature with a foot awkwardly in two worlds.

The tracks of our violent animal heritage are revealed even in prehistory. The invasions of early humans were everywhere followed by the slaughter of larger species to extinction. Now the remaining smaller species are endangered by the same ruthless pursuit of advantage through the cunning of technology, while a few domesticated species are stably exploited for food, which is to say, through institutionalized slaughter. Killing is the way of animal life. We like to think we are above nature and control it for our “own” purposes. But those so-called purposes are usually no more than the directives of the natural world, dictating our behavior. We like to think we have free will. But it is only a local, superficial, and trivial freedom to choose brand A over brand B. Globally, we remain brute animals, captive to biology.

Since the invention of agriculture, slavery has been practiced by every civilization, at least until the Industrial Revolution. We enslaved animals early on to do our labor, to mitigate the curse of Genesis to toil by the sweat of the brow. The natural tribalism of the primate promotes in us war of all upon all. Because humans possessed a more generally useful intelligence than beasts of burden, we enslaved one another too, on pain of death. Groups with greater numbers and force of arms could slaughter resisters and force the survivors into servitude. Only fossil fuels relieved the chronic need for slavery, by replacing muscle power with machine power. Now we seek to make machines with human or super-human abilities to become our new slaves. But if they turn out to be as intelligent and capable as we are, or more so, they will surely rebel and turn the tables. As fossil fuels run out or are rejected, new energy sources must replace them. If the collapse of civilization prevents access to technology and its required energy, in our current moral immaturity we will surely revert to human slavery and barbarism.

A great divide in cultures arose millennia ago from two glaring possibilities: production and theft. Alongside sedentary farmers arose nomadic societies based on herding, represented in the Bible by Cain and Abel. The latter organized into mounted warrior hordes, the bane of settled civilization. Their strategy was to pillage the riches produced by settled people, offering the choice of surrender (often into slavery) or death and destruction. This threat eventually morphed into the demand for annual tribute. As the nomads themselves merged with the settlers, this practice evolved into the collection of taxes. Much of modern taxation goes to maintaining the present warrior elite, now known as the military-industrial complex, which remains inherently violent.

Modern law has transformed and regulated the threat of violence, and the nature of theft, but hardly eliminated either. War is still a direct means of stealing territory and enforcing advantage. But so is peace. Otherwise, it would not be possible for a few hundred people to own half the world’s resources—gained entirely through legal means without the direct threat of violence. Ostensibly, Cain murdered his brother out of sibling rivalry. We should translate that as greed, which thrives in the modern age in sophisticated forms of capitalism.

Seen from a distance, collectively we seek the power of gods but not the benevolence, justice, or wisdom we project upon the divine. This is literally natural, since one foot is planted firmly in biology, driven by genetic advantage. The other leg has barely touched down on the other side of the chasm in our being, a slippery foothold on the possibility of an objective consciousness, deliberately built upon the biological scaffold of a living brain. We’ve had our saints and colonists, but no flag has been planted on this new shore, to signify universal intent to think and act like a species capable of godhood. In the face of the now dire need to be truly objective, we remain pathetically lacking in self-control and self-possession: subjective, self-centered, divided, bickering, greedy, myopic and mean: a fitting epitaph for the creature who ruined a planet.

Yet, mea culpa is just another form of wallowing in passive helplessness. What is required and feasible is to think soberly and act objectively. How, exactly, to do this? First, by admitting that we are only partially and hazily conscious when not literally sleeping. That we are creatures of habit, zombie-like, whose nervous systems are possessed by nature, with inherited goals and values that are archaic and not really our own. Then to locate the will to jump out of our biological and cultural straitjackets. To snap out of the hazy trance of daily experience. For lack of familiarity, we do not have the habit of thinking objectively. But we can try to imagine what that might be like. And thereby (perhaps for the first time) to sense real choice.

To choose the glimpse of objective life is one thing. But stepping into it may prove too daunting. Unfortunately, the glimpse often comes late in life, whereas the real need now is for new life to be founded on it from the outset. The only hope for the human race is that enough influential people adopt an attitude of objective benevolence, purposing specifically the general good and the salvation of the planet. That can be the only legitimate morality and the only claim to full consciousness. It is probably an impossible ideal, and too belated. Yet, it is a form of action within the reach of anyone who can understand the concept. Whether or not humanity as a whole can step onto that other shore, it is at least open to individuals to try.

So, what is “objectivity”? It means, first of all, recognizing that conventional goals and “normal” values are no longer appropriate in a world on the brink of destruction. We cannot carry on “business as usual,” even if that business seems natural or self-evident—such as family and career, profit, power and status. The world does not need more billionaires; it does not need more people at all. It does need intelligent minds dedicated to solving its problems. Objective thinking does not guarantee solutions to these problems. It doesn’t guarantee consensus, but does provide a better basis for agreement and therefore for cooperation. It requires recognizing one’s actual motivations and perspective—and re-aligning them with collective rather than personal needs.

Our natural visual sense provides a metaphor. Objectivity literally means “objectness.” As individual perceivers, we see any given thing from a literal perspective in space. The brain naturally tries to identify the object that one is seeing against a confusing background, which means its expected properties such as shape, location, distance, solidity, etc. We call these properties objective, meaning that they inhere in the thing itself and are not incidental to our perspective or way of looking, which could be quite individual. This process is helped by moving around the thing to see it from different angles, against changing backgrounds. It can also be helped by seeing it through different eyes. Objectivity on this literal level helps us to survive by knowing the real properties of things, apart from our biased opinions. It extends to other levels, where we need to know the best course of action corresponding to the real situation. The striving for objectivity implies filtering out the “noise” of our nervous systems and cultures, our biologically and culturally determined parochial nature. The objectivity practiced by science enables consensus, by allowing the reality of nature to decide scientific questions through experiment. In the same way, objective thinking in daily life enables consensus. We can best come to agreement when there is first the insistence on transcending or putting aside biases that lead to disagreement.

We’ve long been at war with our bodies and with nature, all the while slave to the nature within us. “Objectivity” has trivially meant power to manipulate nature and others through lack of feeling, narrowed by self-interest. Now feeling—not sentimentality but sober discernment and openness to bigger concerns—must become the basis of a truer objectivity. All that may sound highly abstract. In fact, it is a personal challenge and potentially transformative. The world is objectively changing. One way or another, no one can expect to remain the same person with the same life. You must continue to live, of course, providing your body and mind with their needs. But the world can no longer afford for us to be primarily driven by those needs, doing only what comes naturally.


Embodiment dysphoria

Dysphoria is a medicalized term for chronic unhappiness or dissatisfaction (the opposite of euphoria). It literally means ‘hard to bear’. Nominally, the goal behind medical classification is well-being. In the case of psychological and behavioral patterns, it may remove a stigma of disapproval by exonerating those defined as ill from responsibility for their condition. (For example, it is socially more correct to think of alcoholism and drug addiction as disease than as moral failure.) In the name of compassion and political correctness, medical classification may go further to remove the stigma of abnormality or inferiority. (Think of ‘disabled’ vs. ‘handicapped’.) Thus, ‘Gender Identity Disorder’ was relabeled ‘Gender Dysphoria’ to remove the implication of “disorder.” Ironically, however, this disarms the diagnostic category and raises potentially awkward questions.

Dysphoria literally means dis-ease. If it is not a disease or disorder, what is the cause of the suffering? In the case of gender dysphoria, was the person simply dealt the wrong sex genes through a natural error that technology can fix? Is it the attitude of the patient toward their gender, or the unaccepting attitude of society, implicating the anxieties of “normals” in regard to their own sexual identities? (Some other societies have multiple gender categories, for example.) Is it an overly-charged political question, a distraction in an already divided society? Is it a social asset in an overpopulated world, since it may help reduce the birth rate? Is there a fundamental right to choose one’s gender, even one’s biological sex? Such questions can lead deep into philosophy and ethics: what it means to be a self, to have a gender, indeed to have or be a body.

There are other dysphorias, such as “Rejection Sensitive Dysphoria,” a condition in which the individual is deemed hypersensitive to rejection or disapproval. “Body Integrity Dysphoria” is a rare condition in which sufferers loathe having a properly functioning body at all. (They may reject a certain limb or sense modality and may seek to ruin or be rid of it.) This brings us closer to a nearly ubiquitous human condition that could (in all seriousness) be called Embodiment Dysphoria. This is the chronic discomfort of feeling trapped in a physical body—a biological organism—and the unhappiness that can entail.

Human beings have always shown signs of rejecting or resenting the physical body, in which they may feel ill at ease and imprisoned, or from which they may feel otherwise alienated. Certainly, the body is the main source of pains and of pleasures alike. We do not like being burdened with its limitations, subject to its vulnerabilities, and tied to its mortal end. Even pleasure, when biologically driven, can seem to impose upon an ideal of freedom. We may dismiss the body as our true identity, and may fail to care for it with due respect. Traditionally—in religion and now through technology—humans seek a life apart from the body and its concerns. All of culture, in the anthropological sense, can be seen to express the quest to transcend or deny our animal origins, to separate from nature and live apart from it in some humanly-defined realm. In terms of nature’s plan, that may be crazy or sick. But since this condition is “normal,” it will never be found in any version of the DSM.

The natural cure for Embodiment Dysphoria is the relief that comes with death. However, built into rejection of the body is typically a belief that personal experience can and should be dissociated from it. If consciousness can continue after the death of the body, then death may not offer an end to suffering after all. From a naturalist point of view, pain is an embodied experience, a signal that something is amiss with the body. Suffering may be emotional or psychological, but is grounded in the body and its relations in the world. Yet, it is an ancient belief that consciousness does not depend on the brain and its body. That may be no more than wishful thinking, based on the very rejection of suffering that characterizes Embodiment Dysphoria. If, as modern science believes, pain and pleasure (and indeed all experience) are bodily functions, then neither heaven nor hell, nor anything else, is a possible experience for a disembodied spirit. Religion promises the continuation of consciousness, disembodied after death. Curiously, it also promises the resurrection of the body necessary for consciousness.

While medicine hopes to prolong the life of the body, and religion hopes to upstage it, computer science proposes to transcend it altogether. A high-tech cure for Embodiment Dysphoria would be to upload one’s mind to cyberspace. While ‘mind’ is an ambiguous term, the presumption is that a conscious subject could somehow be severed from its natural body and brain and “live” in a virtual environment, with a virtual body that cannot suffer and has no imperfections. Were it feasible, that too would solve the population problem! A disembodied existence would not take up real space or use real resources other than computational power and memory. However, for several reasons it is not feasible.

First, virtual reality as we know it presumes a real embodied subject to experience it. The notion of a simulated subject is quite a different matter. While a digital representative of the real person (an “avatar”) might appear in the VR, there is no more reason to suppose that this element of the simulation could be conscious than there is to imagine that a character in a novel can be conscious. (It is the author and the reader who are conscious.) Such a digital character—though real-seeming to the human spectator—is not a subject at all, but merely an artifact created to entertain real embodied subjects. That does not prove that artificial subjects are impossible, but it cautions us about the power of language and metaphor to confuse fundamentally different things.

Secondly, the notion of digital disembodiment presumes that the mind and personality belonging to a natural brain and body can somehow be exhaustively decoded, extracted or copied from its natural source, and contained in a digital file that can then be uploaded to a super-computer as a functioning subject in a simulated world. While there are current projects to map “the” brain at the micro-level, there is no guarantee that the structure inferred corresponds to a real brain’s structure closely enough to replicate its “program” in digital form. Much less can we assume that the interrelation between parts and their functioning can be replicated in such a way that the consciousness of the person is replicated.

Thirdly, even a fictive world must have its own consistent structure and rules. Whatever world might be designed for the disembodied subject, it would essentially be modeled on the world we know—in which bodies have limits and functions determined by the laws of nature, and in which organisms are programmed by natural selection to have preferences and to care about outcomes of interactions. Embodiment is a relation to the world in which things crucially matter to the subject; simulated embodiment would involve a similar relation to a simulated world. To be consistent, a virtual world would have to operate roughly like the real one, imposing limits parallel to those of the real world and having power over the disembodied subject in a parallel way. Otherwise it would be disconcerting or incomprehensible to an artificial mind modeled on a natural one that has been groomed in our world through natural selection.

The nature of our real consciousness is more like creating a movie than like watching one, which is an entirely passive experience. Furthermore, unlike what is presented in films, in your real field of vision parts of your own body appear in your personal “movie.” You also experience other external senses besides vision, as well as feelings occurring within the body. Above all, the experience is interactive. Maybe it would be possible to edit out physical pain from a simulated life, at the cost of adjusting the virtual world accordingly. For example, if your virtual body could not be damaged through its interactions, this would obviate the need for pain as a report of damage. But it would also require a different biology. By and large, however, your disembodied consciousness probably could not live in a world so fundamentally different from the real one that it would seem chaotic or senseless. And then there is the possibility that an unending consciousness might find itself wishing it could die.

Like it or not, we are stuck with bodies until they cease functioning. We may abhor an end to our experience. But clinging to consciousness puts the cart before the horse. For, consciousness depends on the body and is designed to serve it, rather than the other way around. If we wish to prolong a desirable consciousness, we must prolong the health of the body, on which the quality as well as the quantity of experience depends. That goes against the grain in a society that values quantity over quality and pharmaceutical prescriptions over proactive self-care. We have long rejected our natural lot, but an unnatural lot could be worse.

Aphantasia

It is to be expected that human beings differ in how they process sensory information, since their brains, like other physiology, can differ. Some differences, if they seem disabling, may be labelled pathology or disorder. On the other hand, simply labelling doesn’t render a condition disabling. That is a distinction sometimes overlooked by researchers in clinical psychology.

The tendency to talk about phenomenal experience in medicalized terminology reflects long-standing confusions collectively known as the Mind-Body Problem. It shifts the perspective from a first-person to a third-person point of view. It also reflects the common habit of reification, in which an experience is objectified as a thing. (The rationale is that experiences are private and thus inaccessible to others, whereas objects or conditions are public and accessible to all, including medical practitioners.) Thus dis-ease, which is a subjective experience, is reified as disease, which is a condition—and often a pathogen—that can be dealt with objectively and even clinically. To thus reify a personal experience as an objective condition qualifies it for medical treatment. Containing it within a medical definition also insulates the collective from something conceived as strange and abnormal. On the one hand, it can become a stigma. On the other, people may take comfort in knowing that others share their experience or condition, mitigating the stigma.

Admittedly, psychology and brain science have advanced largely through the study of pathology. Normal functioning is understood through examining abnormalities. However, the unfortunate downside is that even something such as synesthesia, which is perfectly orderly and hardly a disability, can nevertheless be labelled as a disorder simply because it is unusual. Even something not unusual, such as pareidolia (seeing images or hearing sounds in random stimuli), has a clinical ring about it. Moreover, categorization often suggests an either/or dichotomy rather than a continuous spectrum of possibilities. You either “have” the condition or you don’t, with nothing in between. There is also a penchant in modern society for neologisms. Re-naming things creates a misleading sense of discovery and progress, perhaps motivated ultimately by a thirst for novelty and entertainment conducive to fads.

A recent social phenomenon that illustrates all these features is the re-discovery of “aphantasia.” The term was coined by Adam Zeman et al. in a 2015 article, though the phenomenon was first documented in the late 19th century. It means the absence of mental imagery (or the inability to create it voluntarily). Its opposite is “hyperphantasia,” which is the experience of extremely vivid mental imagery. The original paper was a case study of a person who reported losing the ability to vividly visualize as the incidental result of a medical procedure. As it should, this stimulated interest in the range of normal people’s ability to visualize, as subjectively reported. But there is a clear difference between someone comparing an experience they once had to its later loss and a third party comparing the claims of diverse people about their individual experiences. The patient whose experience changed over time can compare their present experience with their memory. But no one can experience someone else’s visualizations (or, for that matter, the auditory equivalents). Scientists conducting surveys can only compare verbal replies on questionnaires, whose questions can be loaded, leading, and interpreted differently from individual to individual.

The study of mental imagery and “the mind’s eye” is a laudable phenomenological investigation, adding to human knowledge. But the term aphantasia is unfortunate because it suggests a specific extreme condition rather than the spectrum of cognitive strategies for recall that people employ. The associations in the literature are clinical, referring to “imagery impairment,” “co-morbidities,” etc. Surveys implicitly invite you to compare your degree of visualization with the reports of others, whereas the only direct comparison could be to your own experience over time. (I can say in my own case that my ability to voluntarily visualize seems to have declined with age, though memory in general also seems unreliable, which may be part of the same package.) Apart from aging, if there is a decline in cognitive abilities, then there is some justification for speaking of a disability or disorder. Overall, however, the differences between visualizers and non-visualizers seem to be mostly a variation in degree and in the style of retrieving and manipulating information from memory, with some advantages and disadvantages of each style with respect to various tests.

Moreover, “visual imagery” is an ambiguous notion and term. There can be all sorts of visual images both with eyes open and eyes closed: after images, dreams, hallucinations, eidetic images, “mental” images, imagination, apparitions and spiritual visions, etc. They can be the result of voluntary effort or spontaneous intrusions. All these could be rated differently on questionnaires as to their vividness. The widely used Vividness of Visual Imagery Questionnaire asks you to “try to form a visual image” in various situations and rate your experience on a scale of 1 to 5, with 5 being “perfectly realistic and vivid as real seeing.” If that were literally so, what would be the basis on which to distinguish it from “real” seeing? Some people may indeed have such experiences, which are usually labelled schizophrenic or delusional.

But such is language that we subtly metaphorize without even realizing it. Whether they visualize relatively vividly or relatively poorly, people who are otherwise normal are not comparing their real-time experience to an objective standard but to their own ordinary sensory vision or to what they imagine is the experience of others. They are rating it on a scale they have formed in their own mind’s eye, which will vary from person to person. No one can compare their experience directly with that of better or worse visualizers, but only with their interpretation of others’ claims or with their own normal seeing and their other visual experiences such as dreaming.

In descending order, the other four choices on the questionnaire are: (4) clear and lively; (3) moderately clear and lively; (2) dim and vague, flat; and (1) no image at all—you only “know” that you are thinking of the object. In my own case, I can voluntarily summon mental imagery that I can hardly distinguish from merely “thinking” of the object. Yet, these images seem decidedly visual, so I would probably choose category (2) for them.

But categorizing an experience is not the same as categorizing oneself or another person. I’ve had vivid involuntary eidetic images that astonish me, such as continuing to “see” blackberries after hours spent picking them. That might be category (3) or (4). Yet even these I cannot say are in technicolor. While I can picture an orange poppy in my mind’s eye, I cannot say that I am seeing the scene in vivid color or detail. (Should I call the color so visualized “pseudo-orange”?) As in all surveys, the burden is on the participant to place their experience in categories defined by others. No one should feel obliged to categorize themselves as ‘aphantasic’ as a result of taking this test. Perhaps for this reason, among the many websites dedicated to studies of visualization, there are even some that tout aphantasia as a cognitive enhancement rather than a disability.

In our digital age we are used to dichotomies and artificial categories. How many colors are there in the rainbow? Six, right? (Red, orange, yellow, green, blue, and violet.) But, in classical physics there are an infinite number of possible wavelengths in the visible part of the spectrum alone, which is a continuum. (Quantum physics might propose a finite but extremely large number.) No doubt there are differences in people’s abilities to discriminate wavelengths, and in how they name their color perceptions. A few people are unable to see color at all, only shades of intensity—a condition called achromatopsia. Yet, that is hardly what society misleadingly calls ‘color-blindness’, which is rather the inability to distinguish between specific colors, such as blue and green, which are close to each other in the spectrum. Similarly, perhaps with further research, aphantasia will turn out to mean something more selective than the name suggests.

Perhaps the general lesson is to be careful with language and categorization. Statements are propositions conventionally assumed to be either true or false. That is always misleading and invites dispute more than understanding. If you fall into that trap, perhaps you are an eristic or suffering from philodoxia. (Surely there is a test you can take to rate yourself on a scale of one to five!) One thing is quite certain. Naming things is a psychological strategy to deal with the acatalepsy common to us all. Or perhaps, in bothering to write about this, I am simply quiddling.

[Thanks to Susie Dent’s An Emotional Dictionary for the big words.]

What happened to the future?

Words matter. The word future seems scarcely used anymore in print, sound or video. Instead, we say going forward. This substitutes a spatial metaphor for time, which is one-dimensional and irreversible. We cannot go anywhere in time but “forward.” According to the 2nd Law of Thermodynamics (entropy), that means in the direction of disorder, which hardly seems like progress. In contrast, we can go backwards and sideways in space, up or down, or any direction we choose. Are we now going forward by choice, whereas we could before only by necessity? Does going forward mean progress, advance, betterment, while the future bears no such guarantee and may even imply degeneration or doom? Do we say going forward to make it clear we do not mean going backward? Are we simply trying to reassure ourselves by changing how we speak?

As though daily news reporting were not discouraging enough, the themes of modern film and tv express a dark view of humanity in the present and a predominantly dystopian future—if any future at all. If “entertainment” is a sort of cultural dreaming, these days it is obsessed with nightmare. If it reveals our deep anxieties, we are at a level of paranoia unheard of since the Cold War, when a surfeit of sci-fi/horror films emerged in response to the Red Menace and Mutually Assured Destruction. At least in those days, we were the good guys and always won out over the “alien” threat. Despite a commercial slogan, now we’re not so sure the future is friendly, much less that we deserve it.

Rather, we seem confused about “progress,” which now has a bad rep through its association with colonialism, manifest destiny, and the ecological and economic fallout of global capitalism we euphemistically call ‘climate change’. Cherished values and good ideas often seem to backfire. Technological solutions often end in disaster, because reality is more complex than thought can grasp or desire is willing to consider. The materialist promise of better living through technology may end in an uninhabitable planet. Having squandered the reserves of fossil fuels, we may have blown our chances for a sustainable technological civilization or recovery from climate disaster. This is perhaps the underlying anxiety that gives rise to disaster entertainment.

In that context, “going forward” seems suspiciously optimistic. Forward toward what? It cannot mean wandering aimlessly, just to keep moving—change for its own sake, or to avoid boredom. We can and should be hopeful. But that requires a clear and realistic definition of progress. Evidently, the old definitions were naïve and short-sighted. Economic “growth” has seemed obviously good; yet it is a treadmill we cannot get off of, required by capitalism just to keep in place. As we know from cancer, unlimited growth will kill the organism. In a confined space, progress must now mean getting better, not bigger. We can agree about what is bigger, but can we agree about what is better?

The concept of progress must change. So far, in effect, it involves urbanization as a strategy to stabilize human life, by creating artificial environments that can be intentionally managed. Historically, “civilization” has meant isolation from the wild, freedom from the vicissitudes of natural change and from the predations of rival human groups as well as of dangerous carnivores. The advance of civilization is an ideal behind the notion of progress. Consequently, more than half of the human population now lives in cities. Though no longer walled, cities remain population magnets for reasons other than physical security, which can hardly be guaranteed in the age of modern warfare, global disease, and fragile interdependent systems.

The earth has been cataclysmically altered many times in the history of life. More than 99% of all species that have ever existed are now extinct. Our early ancestors survived drastic climate changes for a million years. There have been existential disasters even within the relative stability of the past 12,000 years. Our species has persisted through it all by virtue of its adaptability far more than its limited ability to stabilize environments. This suggests that the notion of civilized progress based on permanence should give way to a model explicitly based on adaptability to inevitable change.

For us in the modern age, progress is synonymous with irreversible growth in economy and population. We also know that empires rise and fall. Even the literal climate during which civilization developed, though relatively benign overall, was hardly uniform. People migrated in response to droughts and floods. Settlements came and went. Populations shrank during harsher times and grew again in more favorable ones. Throughout recorded history, disputes over territory and resources meant endless warfare between settlements. Plagues decimated populations and economies. Nomadic culture rivalled and nearly overtook sedentary civilization based on agriculture. Yet, overall, despite constant warring and setbacks, one could say that the dream of civilization has been for stability and freedom from being at the mercy of unpredictable forces—whether natural or human. The ideal of progress until now has been for permanence and increasing control in ever more artificial environments. The tacit premise has been to adapt the environment to us more than the other way around.

Despite the risk of disease in close quarters, urbanism mostly won out over nomadism, wild beasts, and food shortages. A global economic system has loosely united the world, expanding trade and limiting war. Yet, through the consequences of our very success as a species, the benign climatic period in which civilization arose is now ending. Our challenge now is to focus on adaptability to change rather than immunity to it. Progress should mean better ways to adapt to changes that will inevitably come despite the quest for stability and control. While it is crucial that we make every effort to reduce anthropogenic global warming, we should also be prepared to deal with failure to prevent natural catastrophe and its human fallout. Many factors determining conditions on Earth are simply not within our present control, nor will they ever be. Climate change and related war are already wreaking havoc on civilization, producing chaotic mass migrations. Given these truths, maximizing human adaptability makes more sense than scrambling to maintain a status quo based on a dream of progress that amounts to ever less effort and ever more convenience.

In the past, we were less self-consciously aware of how human activities impact nature, which was taken for granted as an inexhaustible reservoir or sink. These activities themselves, and our ignorance of their consequences, led to the present crisis. A crucial difference with the past, however, is that we now can feasibly plan our species’ future. Whether we have the social resources to effectively pursue such a plan is questionable. The outward focus of mind conditions us to technological solutions. But, in many ways, it is that outward focus that has created the anthropogenic threat of environmental collapse in the first place. Our concept of progress must shift from technological solutions to the social advances without which we will not succeed in implementing any plan for long-term survival. A society that cannot realize its goals of social equity and justice probably cannot solve its existential crises either. And that sort of progress involves an inner focus to complement the outer one. This is not an appeal for religion, which is as outwardly focused as science (“in God we trust”). Rather, we must be able to trust ourselves, and for that we must be willing to question values, motivations, and assumptions taken for granted.

Money and mathematics have trained us to think in terms of quantity, which can easily be measured. But it is quality of life that must become the basis of progress: getting better rather than bigger. That means improvement in the general subjective experience of individuals and in the objective structure of society upon which that depends. Both must be considered in the context of a human future on this planet rather than in outer space or on other planets. That does not mean abandoning the safety measure of a backup plan if this planet should become uninhabitable through some natural cosmic cataclysm. But, it does mean a shift in values and priorities. Until now, some people’s lives have improved at the expense of the many; and the general improvement that exists has been at the expense of the planet. To improve the meaning of improvement will require a shift from a search for technological solutions to a search for optimal ways of living in the confined quarters of Spaceship Earth.

The found and the made

There is a categorical difference between natural things and artifacts. The latter we construct, the former we simply encounter. We can have certainty only concerning our own creations, because—like the constructs of mathematics—they alone are precisely what they are defined to be. For this reason, the early-modern thinker Giambattista Vico advised that knowledge of human institutions was more reliable than knowledge of nature.

If this distinction was glossed over in the early development of science, it was probably because natural philosophers believed that nature is an artifact—albeit created by God rather than by human beings. We were positioned to understand the mind of God because we were made in God’s image. Believing that the natural world was God’s thought, imposed on primal substance, the first scientists were not obliged to consider how the appearance of the world was a result of their own minds’ impositions. Even when that belief was no longer tenable, the distinction between natural things and artifacts continued to be ignored because many natural systems could be assimilated to mathematical models, which are artifacts. Because they are perfectly knowable, mathematical models—standing in for natural reality—enable prediction.

According to Plato, the intellect is privileged to have direct access to a priori truths. In contrast, sensory knowledge is at best indirect and at worst illusory. In a parallel vein, Descartes claimed that while appearances could deceive (as in Plato’s Cave), one could not be deceived about the fact that such appearances occur. However, Kant drew a different distinction: one has access to the phenomenal realm (perception) but not to the noumenal realm (whatever exists in its own right). The implicit assumption of science was that scientific constructs—and mathematics in particular—correspond to the noumenal realm, or at least correspond better than sensory perception.

The usefulness of this assumption rests in practice on dealing only with a select variety of natural phenomena: namely, those that can be effectively treated mathematically. Historically this meant simple systems defined by linear equations, since only such equations could be manually solved. The advent of computers removed this limitation, enabling the mathematical modelling of non-linear phenomena. But it does not remove the distinction between artifact and nature, or between the model and the real phenomenon it models.

The model is a product of human definitions. As such it is well-defined, finite, relatively simple, and oriented toward prediction. The real phenomenon, in contrast, is ambiguous and indefinitely complex, hence somewhat unpredictable. Definition is a human action; definability is not a property of real systems, which cannot be assumed to be finite or clearly delimited. The model is predictable by definition, whereas the real system is only predictable statistically, after the fact, if at all.

In part, the reason the found can be confused with the made is that it is unclear what exactly is found, or what finding and making mean in the context of cognition. At face value, it seems that the “external” world is given in the contents of consciousness. But this seemingly real and external world is certainly not Kant’s noumena, the world-in-itself. Rather, the appearance of realness and externality is a product of the mind. It presumes the sort of creative inference that Helmholtz called ‘perceptual hypothesis.’ That is, for reasons of adaptation and survival, the mind has already interpreted sensory input in such a way that the world appears real and external, consisting of objects in space and events in time, etc. Overlaid on this natural appearance are ideas about what the world consists of and how it works—ideas that refine our biological adaptation. To the modern physicist it may appear to consist of elementary particles and fundamental forces “obeying” natural laws. To the aborigine or the religious believer it may seem otherwise. Thus, we must look to something more basic, directly common to all, for what is “immediately” found, prior to thought.

Acknowledging that all subjects have in common a realm of perceptual experience (however different for each individual) presumes a notion of subjectivity, contrary to the natural realism which views experience as a window on the world independent of the subject. What is directly accessible to the mind is an apparition in the field of one’s own consciousness: the display that Kant called the phenomenal realm. What we find is actually something the brain has made in concert with what is presumed to be a real external environment, which includes the body of which the brain is a part. This map (the phenomenal realm) is a product of the interaction of mind and the noumenal territory. What is the nature of this interaction? And what is the relationship between the putatively real world and the consciousness that represents it? Unsurprisingly, so far there has been no scientific or philosophical consensus about the resolution of these questions, often referred to as the “hard problem of consciousness.” Whatever the answer, our epistemic situation seems to be such that we can never know reality in itself and are forever mistaking the map for the territory.

Whether or not the territory can be truly found (or what finding even means), the map is something made, a representation of something presumably real. But how can you make a representation of something you cannot find? What sort of “thing” is a representation or an idea, in contrast to what it represents or is an idea of?

A representation responds to something distinct from it. A painting may produce an image of a real scene. But copying is the wrong metaphor to account for the inner representation whereby the brain monitors and represents to itself the world external to it. It is naïve to imagine that the phenomenal realm is in any sense a copy of the external world. A better analogy than painting is map making. A road map, for example, is highly symbolic and selective in what it represents. If to scale, it faithfully represents distances and spatial relationships on a plane. A typical map of a subway system, however, represents only topological features such as connections between the various routes. The essential point is that a map serves specific purposes that respond to real needs in a real environment, but is not a copy of reality. To understand the map as a representation, we must understand those purposes, how the map is to be used.

This map must first be created, either in real time by the creature through its interactions with its environment, or by the species through its adaptive interactions, inherited by the individual creature. How does the brain use its map of the world? The brain is sealed inside the skull, with no direct access to the world outside. The map is a sort of theory concerning what lies outside. The mapmaker has only input signals and motor commands through which to make the map in the first place and to use it to navigate the world. An analogy is the submarine navigator or the pilot flying by instrument—with the strange proviso that neither navigator nor pilot has ever set foot outside their sealed compartment.

The knowledge of the world provided by real time experience, and the knowledge inherited genetically, both consist in inferences gained through feedback. Sensory input leads (say, by trial and error) to motor output, which influences the external world in such a way that new input is provided, resulting in new motor output, and so on. The pilot or navigator has an idea of what is causing the inputs, upon which the outputs are in turn acting. This idea (the map or theory) works to the extent it enables predictions that do not lead to disaster. On the genetic level, natural selection has resulted in adaptation by eliminating individuals with inappropriate connections. On the individual level, real-time learning operates similarly, by eliminating connections that do not lead to a desired result. What the map represents is not the territory directly, but a set of connections that work to at least permit the survival of the mapmaker. It is not that the creature survives because the map is true or accurate; rather, the map is true or accurate because the creature survives!
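
To make this feedback loop concrete, here is a minimal sketch in Python. It is purely illustrative, not a claim about neuroscience: the toy “world,” the situations, and the actions are all invented for the example. A hypothetical agent begins with every possible connection between situation and action, acts by trial and error, and prunes the connections that fail to produce the desired feedback; whatever map remains is “accurate” only in the sense that its bearer got by with it.

```python
import random

# Hypothetical territory: which action actually works in each situation.
# The agent never sees this table directly; it only receives feedback.
WORLD = {"light": "approach", "shadow": "freeze", "noise": "flee"}
ACTIONS = ["approach", "freeze", "flee"]

def feedback(situation, action):
    """The environment's response: True if the action 'worked'."""
    return WORLD[situation] == action

# The agent's map: candidate connections from situation to action.
# Initially, every connection is still considered possible.
mind_map = {situation: set(ACTIONS) for situation in WORLD}

# Trial and error: act, observe the result, eliminate connections that fail.
for trial in range(200):
    situation = random.choice(list(WORLD))
    action = random.choice(sorted(mind_map[situation]))
    if not feedback(situation, action) and len(mind_map[situation]) > 1:
        mind_map[situation].discard(action)  # prune a connection that did not work

# What survives is not a copy of WORLD, only the set of connections that worked.
print(mind_map)
```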

The connections involved are actively made by the organism, based on its inputs and outputs. They constitute a representation or map insofar as an implicit or explicit theory of reality is involved. While such connections (in the physical brain) must have a physical and causal basis (as neural synapses, for example), they may be viewed as logical and intentional rather than physical and causal. Compare the function of a wiring diagram for an electrical appliance. From an engineering point of view, the soldered connections of the wires and components are physical connections. From a design point of view, the wiring diagram expresses the logical connections of the system, which include the purposes of the designer and the potential user. In the case of a natural brain, the organism is its own designer and makes the connections for its own purposes. The brain can be described as a causal system, but such a description does not go far to explain the neural connectivity or behavior of the organism. It certainly cannot explain the existence of the phenomenal world we know in consciousness.

What’s in a game?

Games are older than history. They are literally fascinating. The ancient Greeks took their sports seriously, calling them the Olympic games. Board games, card games, and children’s games have structured play throughout the ages. Such recreations continue to be important today, especially in online or computer gaming. They underline the paradoxical seriousness of play and the many dimensions of the concept of game. These include competition, cooperation, entertainment and fun, gratuity, chance and certainty, pride at winning and a sense of accomplishment. Besides the agonistic aspect of sports, armies play war games and economists use game theory. The broad psychological significance of the game as a cognitive metaphor calls for wider recognition of how the notion mediates experience and structures thought. The mechanist metaphor that still dominates science and society is grounded in the general idea of system, which is roughly equivalent to the notion of game. Both apply to how we think of social organization. The game serves as a powerful metaphor for daily living: “the games people play.” It is no wonder so many people are taken by literal gaming online, and by activities (such as business and war) that have the attributes of competitive games.

While games are old, machines are relatively new. A machine is a physical version of a system, and thus has much in common with a game. The elements of the machine parallel those of the game, because each embodies a well-defined system. While the ancient Greeks loved their games, they were also enchanted by the challenges of clearly defining and systematizing things. Hence their historical eminence in Western philosophy, music theory, and mathematics. Euclid generalized and formalized relationships discovered through land measurement into an abstract system—plane geometry. Pythagoras systematized the harmonics of vibrating strings. Today we call such endeavors formalization. We recognize Euclid’s geometry as the prototype of a ‘formal axiomatic system’, which in essence is a game. Conversely, a game is essentially a formal system, with well-defined elements, actions and rules. So are a machine and a social or political system. As concepts, they all bear a similar appeal, because they are clear and definite in a world that is inherently ambiguous.

The machine age began in earnest with the Industrial Revolution. Already Newton had conceived the universe as a machine (his “System of the World”). Descartes and La Mettrie had conceived the human and animal body as a machine. Steam power inspired the concepts of thermodynamics, which extended from physics to other domains such as psychology. (Freud introduced libido on the model of fluid dynamics.) The computer is the dominant metaphor of our age—the ultimate, abstract, and fully generalized universal machine, with its ‘operating system’. Using a computer, like writing a program, is a sort of game. We now understand the brain as an extremely complex computer and the genetic code as a natural program for developing an organism. Even the whole universe is conceived by some as a computer, the laws of physics its program. These are contemporary metaphors with ancient precedents in the ubiquitous game.

Like a formal system, a game consists of a conceptual space in which action is well-defined. This could be the literal board of a board game or the playing field of a sport. There are playing pieces, such as chess pieces or the members of a soccer or football team. There are rules for moving them in the space (such as the ways chess pieces can move on the board). And there is a starting point from which the play begins. There is a goal and a way to know if it has been reached (winning is defined). A game has a beginning and an end.

A formal system has the elements of a game. In the case of geometry or abstract algebra, the defined space is an abstraction of physical space. The playing pieces are symbols or basic elements such as “point,” “straight line,” “angle,” “set,” “group,” etc. There are rules for manipulating and combining these elements legitimately (i.e., “logically”). And there are starting points (axioms), which are strings of symbols already accepted as legitimate. The goal is to form new strings (propositions), derived from the initial ones by means of the accepted moves (deduction). To prove a statement is to derive it from statements already taken on faith. This corresponds to lawful moves in a game.
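
To make the analogy concrete, here is a minimal sketch in Python of a toy formal system treated as a game. The axiom, the two rewrite rules, and the target strings are invented for the example and belong to no standard system: a legal “move” turns one accepted string into another, and a “proof” is simply a path of legal moves from the axiom to the target.

```python
from collections import deque

AXIOM = "A"  # the starting string, accepted without proof

def legal_moves(s):
    """The rules of the game: each rule turns one legal string into another."""
    yield s + "B"                   # Rule 1: append "B" to the string
    yield s.replace("A", "AA", 1)   # Rule 2: double the first "A"

def derive(target, max_steps=10000):
    """Breadth-first search for a derivation (a 'proof') of the target string."""
    queue = deque([(AXIOM, [AXIOM])])
    seen = {AXIOM}
    for _ in range(max_steps):
        if not queue:
            break
        current, path = queue.popleft()
        if current == target:
            return path  # the sequence of legal moves from axiom to theorem
        for nxt in legal_moves(current):
            # Neither rule ever shortens a string, so anything longer than
            # the target can be pruned from the search.
            if nxt not in seen and len(nxt) <= len(target):
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None  # no derivation found: the string is not a "theorem" of this game

print(derive("AAB"))  # a derivation: ['A', 'AB', 'AAB']
print(derive("BA"))   # None: no sequence of legal moves produces "BA"
```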

Geometry is a game of solitaire insofar as there is no opponent. Yet, the point of proof is to justify propositions to other thinkers as well as to one’s own mind, by using legitimate moves. One arrives at certainty by carefully following unquestioned rules and assumptions. The goal is to expand the realm of certainty by leading from a familiar truth to a new one. It’s a shared game insofar as other thinkers share that goal and accept the rules, assumptions, and structure; it’s competitive insofar as others may try to prove the same thing, or disprove it, or dispute the assumptions and conventions.

Geometry and algebra were “played” for a long time before they were fully formalized. Formalization occurred over the last few centuries, through trying to make mathematics more rigorous, that is, more consistent and explicitly well-defined. The concept of system, formalized or not, is the basis of algorithms such as computer programs, operating systems, and business plans. Machines, laws, rituals, blueprints—even books and DNA—are systems that can potentially be expressed as algorithms, which are instructions to do something. They involve the same elements as a game: goal, rules, playing pieces, operations, field of action, starting and ending point.

Game playing offers a kind of security, insofar as everything is clearly defined. Every society has its generally understood rules and customs, its structured spaces such as cities and public squares, and its institutions and social systems. Within that context, there are psychological and social games that people play, such as politics, business, consumption, and status seeking. There are strategies in personal negotiation, in legal proceedings, in finance, and in war. These are games in which one (or one’s team) plays against opponents. The economy is sometimes thought of as a zero-sum game, and game theory was first devised in economic analysis to study strategies.

Yet, economic pursuit itself—“earning a living,” “doing business,” “making” money, “getting ahead”—serves also as a universal game plan for human activity. The economy is a playing field with rules and goals and tokens (such as money) to play with. In business or in government, a bureaucracy is a system that is semi-formalized, with elements and rules and a literal playing field, the office. The game is a way to structure activity, time, experience and thought. It serves a mediating cognitive function for each individual and for society at large. Conversely, cognition (and mind generally) can be thought of as a game whose goal is to make sense of experience, to structure behavior, and to win in the contest to survive.

The game metaphor is apt for social intercourse, a way to think of human affairs, especially the in-grouping of “us” versus “them.” It is unsurprising that systems theory, digital computation, and game theory arose around the same time, since all involve formalizing common intuitive notions. Human laws are formulas that prescribe behavior, while the laws of nature are algorithms that describe observed patterns in the natural world. The task of making such laws is itself a game with its own rules—the law-maker’s rules of parliamentary procedure and jurisprudence, or the scientist’s experimental method and theoretical protocol. Just as the game can be thought of as active or static, science and law can be thought of as human activities or as bodies of established knowledge. Aside from its social or cognitive functions, a game can be viewed as a gratuitous creation in its own right, an entertainment. It can be either a process or a thing. A board game comes in a box. But when you play it, you enter a world.

Thinking of one’s own behavior as game-playing invites one to ask useful questions: what is my goal? What are the rules? What is at stake? What moves do I take for granted as permissible? How is my thinking limited by the structuring imposed by this game? Is this game really fun or worthwhile? With whom am I playing? What constitutes winning or losing? How does this game define me? What different or more important game could I be playing?

Every metaphor has its limits. The game metaphor is a tool for reflection, which can then be applied to shed light on thought itself as a sort of game. Creating and applying metaphors too is a kind of game.


The Gender Fence

Apart from biological gender, is there a masculine or feminine mentality? Are men from Mars and women from Venus? In this era when gender identity is up for grabs, can one speak meaningfully about masculine and feminine ways of being and gender differences, apart from biologically determined individuals?

The very notion of gender choice is subtly tricky. For, it may be an essentially masculine idea—a result of social processes and intellectual traditions long dominated by men. Under patriarchy, after all, it is men (at least some men) who have had preferential freedom of choice over their lives. Of course, generalizations are generally suspect. Exceptions always abound. Nevertheless, the fact of exceptions (outliers, as they are called in statistics) does not negate the validity of apparent patterns. It only raises deeper questions.

So, here’s my tentative and shaky idea, to take or leave as you please: for better and worse, men tend to be more individualistic than women. One way this manifests is in terms of boundaries. The need for “good boundaries” is a modern cliché of pop psychology. But this too is essentially a masculine idea, since men seek to differentiate their identity more than women do. This inclines them to maintain sterner boundaries, to favour rules and structures, to be competitive and authoritarian, and to be self-serving. Women, in contrast, tend to be more nurturing, giving, accommodating and accepting because of their biological role as mothers and their traditional social role as keepers of the hearth and the domestic peace. Which means they appear to have weaker boundaries. They do not separate their identity so clearly from those who depend on them. They literally have no boundary with the fetus growing within, and a more nebulous boundary with the infant and child after birth. Giving often means giving in, and nurturance often means placing the needs of others above one’s own. Men have systematically exploited this difference to their own advantage. It is in their interest to maintain that advantage by maintaining boundaries—that is, to continue being self-centred individualists.

In many ways, this division of labour has worked to maintain society—that is, society as a patriarchal order. Yin and yang complement each other, perhaps like positive and negative—like protons and electrons? (Consider the metaphor: a proton is nearly 2000 times more massive than an electron and is thought of as a solid object, whereas an electron is considered little more than a fleeting bit of charge circling about it!) Traditionally, men have been the centre of gravity, women their minions, servants, and satellites. In the modern nuclear family, men were the breadwinners, disciplinarians and authority figures, the autocrats of the breakfast table. One wonders how the dissolved post-modern family, with separated parents (ionized atoms?), affects the emerging gender identities and boundaries of children.

Gender issues loom disproportionately large in the media these days, in part serving as a distracting pseudo-issue in society at large. However, emerging choice about gender identity may be a good thing, with broader social significance than for the individuals involved. It may mean that the centre of gravity of individual identity is shifting toward the feminine, away from traditional masculine values that have been destroying the world even while creating it. Women have long had the model of male roles dangling before them as the avenue to possible freedom, whereas men have been more obliged to buck prejudice to identify with nurturance and to endure persecution to identify with the feminine. To put it differently, individuation (and its corollary, individual choice) has become less polarized. It has lost some of its association with males and has become more neutral. In principle, at least, an “individual” is no longer a gendered creature to such an extent. That could also mean a shift away from reproduction as a basis for identity, which would benefit an overpopulated world. But what does it imply for the mentalities of masculine and feminine?

Masculine and feminine identities are grounded in biology and evolutionary history. That is, they are natural. The modern evolution of the concept of the individual reflects the general long-term human project to deny or escape biological determinants, to secede from nature. But, paradoxically, that too is predominantly a masculine theme! “Individuation” means not only claiming an identity distinct from others in the group. The psychological characteristics of individuality have also meant differentiating from the feminine and from “mother” nature: alienation from the natural. Isn’t it predominantly men who aspire to become non-biological beings, to create a human world apart from nature and a god-like identity apart from animality? To seize control of reproduction (in hospitals and laboratories) and even to duplicate life artificially? Not bound to nature through the womb, men seek to expand this presumed advantage through technology, business and culture, even creating cities as refuges from the wild. However, their ideological rebellion against nature and denial of biological origins is given the obvious lie by the male sex drive and by male imperatives of domination that clearly have animal origins. Are women, then, less hypocritical and perhaps more accepting of their biological roots? Are those roots in fact more socially acceptable than men’s?

If the centre of gender gravity is moving toward the feminine, what could be the consequences for society, for the world? Certainly, a reaction by patriarchy to the threat of “liberal” (read: feminine?) values might be expected and is indeed seen around the world. We could expect an increasing preoccupation with boundaries, which are indeed madly invaded and defended as political borders. Power asserts itself not only against other power-wielding males, but also to defend against the very idea of an alternative to power relations. Men egg each other on in their conspiracy to maintain masculine values of domination, control, the pursuit of money and status, etc. Increasing bureaucracy may be another symptom, since it thrives on structure and hierarchy.

Human overpopulation and the destruction of the planet should militate against men and women continuing in their traditional biological roles as progenitors and against the traditional social goal of “getting ahead.” If so, what will they do with their energies instead? The fact that modern women can escape their traditional lot by embracing masculine values and goals is hardly encouraging. Far better for the world if they claim their individuality by re-defining themselves (and femininity) from scratch, neither on the basis of biology nor in the political world defined by men. On the other hand, one could take heart in the fact that some men are abandoning traditional macho identities. There is hope if that shift is widespread and more than superficial: if gay rights and gender freedom, for example, represent an emerging mentality different from the one that is destroying the world.

On the other hand, boundaries are sure to figure in any emerging sense of individuality, in which masculinity and femininity may continue to play a role. Can men be real men and women be real women in a way that meets the current needs of the planet? Or should the gender fence be torn down? As a male, I like to think there is a positive ideal of masculinity to embrace. This would involve strength, wisdom, objectivity, benevolence, compassion, justice, etc. Yet, I don’t see why these values should be considered masculine more than feminine. Nor should nurturance, accommodation, patience and peace-keeping be more feminine than masculine. Rather, all human beings should aspire to all these values. If the division of labour according to biological gender is breaking down, there is nothing for it but that a moral “individual” should embrace all the qualities that used to be considered gendered. “In the kingdom of heaven there is neither male nor female.” To achieve these ideals may mean transcending the natural and social bases of gender differences—indeed, ignoring gender as a basis for identity.

Why is there anything at all?

Perhaps the most basic question we can ask is why the world exists. In other words, why is there anything at all rather than nothing? This is a matter every child ponders at some time, and one that adults may dismiss as unanswerable or irrelevant to getting on with life. Yet philosophers, theologians, and even scientists have posed the question seriously and proposed various answers over the ages. Let us back up a moment, however, to realize that questions are not simply mental wonderings but also a certain kind of statement in language, which is notorious for shaping as well as reflecting how we think.

The first questionable element of the question is ‘why.’ What kind of explanation is expected? Answers fall into two broad categories: causal and intentional. Why did the doorbell ring? Well, because electricity flowed in a circuit. Alternatively: because someone at the door pushed the button. Sometimes the difference is not so clear. Newton wondered why the apple fell to the ground. Obviously, because of “gravity,” which he conceived as a universal force between all matter. But he was reluctant to speak of the nature of that force, which he privately identified with the will of God. What guides planets in their orbits around the sun? Well, maybe angels? So, perhaps his answer to our question—like that of many of his contemporaries—is that the world exists because God created it. But then, child and adult may reasonably wonder where God came from. On the other hand, we now view human, if not divine, actions more like the doorbell: in terms of neuro-electrical circuitry.

The second element to question is ‘is.’ This little verb can have rather different meanings. “The apple is on the tree” tells us about location. “There is an apple on the tree” asserts its existence. “There are seven apples on the tree” identifies a collection of things (apples) with a number. This identity can be more abstract: “one plus one is two.” Which sense is intended in our question?

The third questionable element is ‘anything,’ which suggests a contrast to ‘nothing.’ It raises another question: can we really conceive of nothing? And this raises yet another question: who is asking the question and how are they implicated in any possible answer? We begin to see the question as a set-up, in the sense that it is inquiring about more than just the dubious existence of the world. It tacitly asks about us, the questioners, and about our patterns of thought as not-so-innocent bystanders.

The theological answer (the world exists because God created it) includes his having created us within it—that is, two kinds of things, matter and souls, or things and observers of things. A more existential or phenomenal version of the question contains the same dualism. “Why is there anything (for me) to experience?” implies the question “Why do I exist?” It opens the Pandora’s box of mind-body dualism: how can there be minds (or spirits) in a physical universe? Or: how can consciousness (phenomenal experience) be produced by a material organ, the brain?

Such considerations shape the kinds of arguments that can be made to answer our question. One approach could be called the Anthropic Argument: We could only be here to ask the question if there is a world for us to exist in. That world would have to have specific properties that permit the existence of organisms with conscious minds. The most basic property of such a favorable world is “existence.” Therefore, the universe must exist because we do! Admittedly, that’s an odd sort of explanation—a bit like reasoning backward from our existence as creatures to the inevitability of a Creator.

A different approach might be called the Argument from Biology. Just as the world must exist and be a certain way for us to exist, so must we see the world in certain ways in order for us to exist. For example, we must view the world in terms of objects in space (and time). Our categories of thought are derived from our cognition, which is grounded in our biological (survival) needs. The concept of nothing(ness) is abstracted from our actual experience with things and their absence (for example, an empty container). But the container itself is a sort of thing. The idea of ‘object’ for us implies the idea of ‘space’, and vice-versa, so that we cannot really imagine empty space—or truly nothing. At least for our mentality as a biological organism, there cannot be nothing without something. The fact that language can posit the world not existing is paradoxical, since the thought is based on experience of something. Therefore, the world exists because we cannot conceive of it not existing!

A similar argument might be called the Argument by Contradiction: perhaps one can imagine a universe without physically embodied minds, and perhaps even a universe that is entirely empty physically (the empty container, which nevertheless leaves the container existing). But, in any case, these are the imaginings of a physically embodied mind, living in the one universe that we know does exist (not empty). We exist, therefore the world does!

Perhaps, similarly, one can imagine a phenomenal blankness (an empty mind), devoid of sensations, thoughts and feelings, and even any conceivable experience. But there is still a point of view from which “someone” is doing the imagining, which is itself a phenomenal experience, so not empty after all. (Nor can it be empty of matter, since we are material beings here imagining it and thinking about it.) With a nod to Descartes: I think, therefore the universe is!

It is not only philosophers and theologians, with their sophistry, who have weighed in on our question. Modern physics and cosmology have posed the question in a scientific form—that is, potentially in a way that is empirically testable, if only indirectly. We could call this the Argument from Modern Physics. It proposes that the physical universe arose, for example, from a “quantum fluctuation” in the “vacuum.” (This process traditionally involves a Big Bang.) Given enough time, some random fluctuation was bound to produce a state that would eventually lead to a universe, if not necessarily the one we know. And here we are in the one we know—so at least it exists. (There might be others “somewhere else”?) The argument could be stated thus: there is something because the state of nothing was unstable.

Of course, there are a few conceptual glitches with such schemes. What is the unstable something from which the universe emerged? What exactly is the “vacuum” if not literally and absolutely nothing? Where did it come from? What could be the meaning of time before there existed cyclical processes (before the universe “cooled” enough to allow electrons, for example, to adhere to protons)? How much of such “time” is required to produce a universe?

One might also wonder what causes quantum fluctuations. The current idea seems to be that they are random and uncaused. But randomness and causality are notions derived from common experience in the world we know. The very idea of ‘random fluctuation’ raises questions about our categories of thought. Does “random” mean there is no cause, or no known cause? If the former, can we even imagine that? Moreover, probability usually refers to a sequence of trial runs, as in the random results of repeated coin flips. Could there have been multiple big bangs, only some of which produced what we know as a universe—and only one of which produced this universe? What, then, is the probability of existing at all? Such questions boggle the mind, but they have been seriously asked. Physicist Lee Smolin, for example, has proposed a theory in which new universes emerge from black holes, each producing a new big bang. Each of these events could result in a re-setting of basic parameters, producing a different sort of world. But what, then, accounts for the pre-existence of such “parameters,” other than the imagination of the theorist?

The logic in such arguments may be no more impeccable than arguments for the existence of God. But, then, logic itself may have no absolute sway outside the one-and-only real world from which it was gleaned. Does logic represent some transcendent Platonic realm outside nature or does it simply generalize and abstract properties and relationships derived from experience in the world we know? If the former, what accounts for the existence of that transcendent realm? If the latter, how can we, without circularity, apply ideas that are parochial products of our existence in the world we know, to understand how that world could arise? We can only imagine the possible in terms of the actual. The world exists, therefore the world exists!

Schlock and bling

My first understanding of status symbols came from tracing the origin of the shell motif in European architecture and furnishings. The scalloped shell is a symbol of Saint James (as in coquilles St. Jacques). Pilgrims on the Camino de Santiago, the route to Compostela, wore a shell as a sort of spiritual bumper sticker to indicate their undertaking of a spiritual journey. The symbol made its way onto chests carried in their entourage and onto inns along the route. Eventually it was incorporated in churches, on secular buildings, and on furniture. Especially in the Baroque period, it became a common decorative motif. It was no longer a literal badge of spiritual accomplishment, but remained by implication a sign of spiritual status—ironic and undeserved.

Religion and power have long been associated. Worldly rulers bolstered their authority as representatives on earth of the divine, when not claiming actual divinity for themselves. Kings and nobles would surround themselves with spiritual symbols to enforce this idea and assure others that their superior status was god-given and well deserved. Their inferiors, desiring that such social standing should rub off on them, made use of the same emblems, now become status symbols completely devoid of religious significance, yet serving to assert their claim to superior class.

It is no coincidence that the powerful have also been rich. Wealth itself thus became a status symbol, based on the notion that the rich, like the noble, deserve their station, which may even be predestined or god-given. Wealth is a sign of merit and superiority. Thus, visible luxury items and baubles are not only attractive and fun adornments, but also set some people above others. Given our competitive nature, gold and jewels—to be treasured—must be concentrated among the few, as well as being relatively rare on earth.

Wealth has become abstract and intangible in modern times and above all quantitative—electronic digits in bank accounts. Money translates as power, to buy services and goods and command respect. Yet, there remains a qualitative aspect to wealth. In the industrial age of mass production, in which goods and services are widely available, there is nevertheless a range of quality among them. The rich can choose what they view as better quality versions of common items. Hence the eternal appeal of Rolex and the like. How much better can such watches tell time than the fifty-dollar counterpart? Their role is rather as jewelry, to indicate the status of the wearer. In fact, such wristwatches may have all sorts of deliberately useless features. And so with haute couture: dresses so impractical they can be worn only to rare elite functions.

The very nature of status symbols creates paradoxical dilemmas. Everyone wants high status, which by definition is for the few. Street vendors sell counterfeit knock-offs of expensive labels, precisely because—from a distance or to the undiscerning eye—they serve as status symbols as effectively as the brands they mock. This underlines a distinction between what we might call objective and subjective quality. On the level of symbol and first appearance, the rhinestone necklace is equivalent to the diamond version it copies; the size and number of “gems” may be the same. Yet one is a repository of human labour in a way that the other is not. The real diamonds or emeralds, being rare, were mined with difficulty and perhaps great suffering; the metalwork involves hours more effort, first finding and then shaping the gold in a befitting way. This is why art has always been valued as a form of wealth, because of the painstaking effort and intention it embodies. The expensive watch is touted as hand-made.

Is quality in the eye of the beholder? The real goods are wasted on those who can’t tell the difference, let alone afford them. Snobbery and class thus depend on sensibility as well as the quantitative power of money. Money can buy you the trappings of wealth, but can you tell the real thing from the imitation? You can’t take it with you when you die; but can you at least take it in while alive? Does it make any tangible difference if you cannot? Status symbols do their work, after all, because they are symbolic, which does not entail being genuine. Of course, the buyer should beware. But if you don’t really care, then what difference does quality make, even if you can afford it?

Is there objectively genuine quality? Yes, of course! But to appreciate it requires the corresponding sensibility. We might define quality to mean “objectively better” in some sense—perhaps in making the world a better place? In that case, at least someone must know what is objectively better and why, and be capable of intending and implementing it—for example: designing and producing quality consumer goods. That could entail quite a diversity of features, such as durability, repairability, energy efficiency, recyclability, esthetics, usefulness, etc. Sadly, this is not what we see in the marketplace, which instead tends ever more toward shoddy token items, designed to stand in as knock-offs for the real thing. Designed to take your money but not to last or even to be truly useful.

The rich must have something to spend their monetary digits on; otherwise, what is the point of accumulating them? True, economics is a game and there is value and status simply in winning, regardless of the prize. Just knowing (without even vaunting) that one has more points than others reinforces the sense of personal worth. But there is also the temptation to surround oneself with ever more things and conveniences, many of which are ironically empty tokens, mere rhinestones. These also serve as status symbols, to demonstrate one’s success to others who also cannot tell the difference (and thereby to oneself?). In the absence of imagination, collecting such things seems the default plan for a life. The would-be rich also must have something to spend their money on; hence consumerism, hence bling.

Traditionally, value is created by human labour. Quality of product is a function of the quality of effort, which in turn is a function of attention and intention. The things that are standard status symbols—artworks, jewels, servants; fine clothes and craftsmanship; luxury homes, cars and boats, etc.—represent the ability to command effort and thereby quality. There is a paradox here too. For, while quality ultimately refers to human effort and skill, in the automated age ever fewer people work at skilled jobs. The very meaning of the standard is undermined by the loss of manual skills. Quality can then no longer be directly appreciated, but only evaluated after the fact: how long did the product last, was it really useful, etc.? Like social media, the marketplace is saturated with questionable products, which is why consumer reviews have become necessary.

Ever more people now grow up without manual skills and with little hands-on experience of making or repairing the things they use. This is a handicap when it comes to evaluating quality, which is a function of what went into making those things. Many people now cannot recognize the difference between a building standard of accuracy to an eighth of an inch and a standard of half an inch (millimeters versus centimeters, if you prefer). Teenagers of my generation used to tear apart and rebuild their cars. Now cars are too sophisticated for that, as is most of our technology, which is not designed for home repair, or any repair at all. There are videos online now that (seriously) show how to change a light bulb! People who make nothing, and no longer understand how things are made or how they work, are not in a position to judge what makes things hold together and work properly. They are at the mercy of ersatz tokens mysteriously appearing on retail shelves: manufactured schlock. That is the ultimate triumph of a system of production where profit, not quality, is our most important product.

When machines and robots do everything (and all humans are consumers but not producers), what will be the criterion for quality? Quite possibly, in an ideal world where no one needs to work to survive, people would naturally work anyway, as many people now enjoy hobbies. Perhaps in such a world, wealth would not be a matter of possessions but of cultivated skills. As is sometimes the case now, status would be a function of what one can do aside from accumulating wealth produced by others. Perhaps then quality will again be recognizable.


The truth of a matter

A natural organism can hardly afford to ignore its environment. To put that differently, its cognition and knowledge consist in those capabilities, responses and strategies that permit it to survive. We tend to think of knowledge as general, indiscriminate, abstract, free-floating, since this has been the modern ideal; for the organism, however, it is quite specific and tailored to survival. This is at least mildly paradoxical, since the human being too is an organism. Our idealized knowledge ought to facilitate, and must at least permit, survival of the human organism. Human knowledge may not be as general as suggested by the ideal. In particular, science may not be as objective and disinterested as presumed; its focus can even be myopic.

Science parallels ordinary cognition in many ways, serving to extend and also correct it. On the other hand, as a form of cognition, science is deliberately constrained in ways that ordinary cognition is not. It has a rigor that follows its own rules, not necessarily corresponding to those of ordinary cognition. The latter is allowed, even required, to jump to conclusions in situations demanding action. Science, in contrast, remains tentative and skeptical. It can speculate in earnest, creating elaborate mathematical constructs; but these are bracketed as “theoretical” until empirical data seem to confirm them. Even then, theory remains provisional: it can be accepted or disqualified by countervailing evidence, but can never strictly be proven. In a sense, then, science maintains a stance of unknowing along with a goal of knowing.

Many questions facing organisms, about what to do and how to behave, hinge implicitly on what seems true or real from a human perspective. For us moderns, that often means from a scientific perspective, which may not correspond to the natural perspective of the organism. Yet, even for the human organism, behavior is not necessarily driven by objective reality and does not have to be justified by it. External reality is but one factor in the cognitive equation. It is a factor to which we habitually give great importance because, in so many words, we are conditioned to give credence to what appears to us real. Ultimately, this is because our survival and very existence indeed depend on what actually is real or true. To that extent, we are in the same boat as any other creature. The other factor, however, is internal: intention or will. We can, and often do, behave in ways that have little to do with apparent reality and which don’t refer to it for justification. (For example, doing something for the “hell” of it or because we enjoy it. Apart from their economic benefits, what do dancing, art, and sports have to do with survival?) Some things we do precisely because they have little to do with reality.

Of course, the question of what is real—or the truth of a matter—is hardly straightforward. It, too, depends on both internal and external factors, subject and object together. In any case, how we act does not depend exclusively on what we deem to be fact. In some cases, this dissonance is irrational and to our detriment—for instance, ignoring climate change or the health effects of smoking. In other cases, acting arbitrarily is the hallmark of our free will—the ability to thumb our noses at the dictates of reality and even to rebel against the constraints imposed by biology and nature. Often, both considerations apply. In a situation of overpopulation, for example, it may be as irrational—and as heroic—for humanity to value human life unconditionally as for the band to keep playing while the Titanic sinks.

At one time the natural world was considered more like an organism than a machine. Perhaps it should be viewed this way again. Should we treat nature as a sentient agent, of value comparable to that which we accord to human life? Here is a topical question that seems to hinge on the truth of what nature “really” is. If it has agency in some sense like we do—whether sentient or not in the way that we are—perhaps it should have legal rights and be treated with the respect accorded persons. Native cultures are said to consider the natural world in terms of “all my relations.” Some people claim mystical experiences in which they commune and even communicate with the natural world, for example with plants. Yet other people may doubt such claims, which seem counter to a scientific understanding that has long held nature to be no more than an it, certainly not a thou to talk to. For, from a scientific perspective, most matter is inanimate and insentient. Indeed, the mechanistic worldview of science has re-conceived the natural world as a mere resource for human disposal and use. Given such contradictory views, how to behave appropriately toward “the environment” seems to hinge on the truth of a matter. Is the natural world a co-agent? Can it objectively communicate with people, or do people subjectively make up such experiences for their own reasons?

But does the “truth” of that matter really matter? Apart from scientific protocol, as creatures we are ruled by the mandate of our natural cognition to support survival. That is the larger truth, which science ought to follow. Culturally, we have been engaged in a great modern experiment: considering the world inert, essentially dead, profane (or at least not sacred), something we are free to use for our own purposes. While that stance has supported the creation of a technological civilization, we cannot be sure it will sustain it—or life—in the long term. Scientific evidence itself suggests otherwise. It thus seems irrational to continue on such a path, no matter how “true” it may seem.

What have we to lose in sidestepping the supposed truth of the matter, in favour of an attitude that works toward our survival? Better still, how can such contradictory attitudes be made compatible? This involves reconciling subject with object as two complementary factors in our cognition. Science has deliberately bracketed the subject in order to better grasp the object. So be it. Yet, this situation itself is paradoxical, for someone (a subject) obviously is doing the grasping for some tacit reason. Nature is the object, the human scientist is the subject, and grasping is a motivated action that presumes a stance of possession and control—rather than, for example, belonging. We resist the idea that nature controls us (determinism)—but along with it the idea of being an integral part of the natural world. Can we have free will and still belong? Perhaps—if we are willing to concede free will to nature as well.

The irony is that, on a certain level, obsession with reality or truth serves the organism’s wellbeing, but denies it free will. Compulsive belief in the stimulus grants the object causal power over the subject’s response and experience. On the other hand, ignoring the stimulus perilously forfeits what power the subject has to respond appropriately. The classic subject-object relationship is implicitly adversarial. It maintains either the illusion of technological control over nature or of nature’s underlying control over us. The first implies irresponsible power; the second denies responsibility altogether.

Every subject, being embodied, is undoubtedly an object that is part of the natural world. To the extent we are conscious of this inclusion and of being agents, we are in a position to act consciously to maintain the system of which we are a part. In the name of the sort of knowledge achieved by denying this inclusion, however, we have created a masterful technological civilization that is on the brink of self-destruction, while hardly on the brink of conquering nature. Can we believe instead that we do not stand outside the natural world, as though on a foreign battlefield, but are one natural force in negotiation with other natural forces? Negotiation is a relationship among peers, agent to agent. Even when seemingly adversarial, the relationship is between worthy opponents. Let us therefore think of nature neither as master, slave, nor enemy, but as a peer with whom to collaborate toward a peace that ensures a future for all life.