Form and content

That all things have form and content reflects an analysis fundamental to our cognition and a dichotomy fundamental to language. Language is largely about content—semantic meaning. Yet, it must have syntactical form to communicate successfully. The content of statements is their nominal reason for being; but their effectiveness depends on how they are expressed. In poetry and song, syntax and form are as important as semantics and content. They may even dominate in whimsical expressions of nonsense, where truth or meaning is not the point.

The interplay of form and content applies even in mathematics, which we think of as expressing timeless truths. ‘A=A’ is the simplest sort of logical truth—a tautology, a sheer matter of definition. It applies to anything, any time. By virtue of this abstractness and generality, it is pure syntax. As a statement, it bears no news of the world. Yet, mathematics arose to describe the world in its most general features. Its success in science lies in the ability to describe reality precisely, to pinpoint content quantitatively. The laws of nature are such generalities, usually expressed mathematically. They are thus sometimes considered transcendent in the way that mathematics itself appears to be. That is, they appear as formal rules that govern the behavior of matter. You could say that mathematics is the syntax of nature.

The ancient Greeks formalized the relation between syntax and semantics in geometry. Euclid provided the paradigm of a deductive method, by applying formal rules to logically channel thought about the world, much as language does intuitively. Plato considered the world of thought, including geometry, to be the archetypal reality, which the illusory sensory world only crudely copies. This inverted the process we today recognize as idealization, in which the mind abstracts an essence from sensory experience. For him, these intuitions (which he called Forms) were the real timeless reality behind the mundane and ever-changing content of consciousness.

The form/content distinction pertains perhaps especially to all that is called “art.” Plato had dismissed art as dealing only with appearances, not the truth or reality of things. According to him, art should no more be taken seriously than play. However, it is precisely as a variety of play that we do take art seriously. What we find beautiful or interesting about a work of art most often involves its formal qualities, which reveal the artist’s imagination at play. Art may literally depict the world through representation; but it may also simply establish a “world” indirectly, by assembling pertinent elements through creative play. Whatever its serious themes, all art involves play, both for the producer and the consumer.

Meaning is propositional, the content of a message. It is goal-oriented, tied to survival and Freud’s reality principle. But the mind also picks up on formal elements of what may or may not otherwise bear a message or serve a practical function, invoking more the pleasure principle. The experience of beauty is a form of pleasure, and “form” is a form of play with (syntactic) elements that may not in themselves (semantically) signify anything or have any practical use. Art thus often simply entertains. This is no less the case when it is romanticized as a grand revelation of beauty than when it is dismissed as trivially decorative. Of course, art combines seriousness and play in varying ways that can place greater emphasis on either form or content. While these were most often integrated before the 19th century, modern art, relatively speaking, liberated form from content.

For most of European history, artists were expected to do representational work, to convey a socially approved message—usually religious—through images. At least in terms of content, art was not about personal expression. That left form as the vehicle for individual expression, though within limits. Artists could not much choose their themes, but they could play with style. The rise of subjectivity thematically in art mirrors the rise of subjectivity in society as a whole; it recapitulates the general awakening of individuality. Yet, even today, a given art work is a compromise between the artist’s vision and social dynamics that limit its expression and reception.

From the very rise of civilization, art had served as propaganda of one sort or another. For example, Mesopotamian kings had built imposing monuments to their victories in war, giving a clear message to any potentially rebellious vassals. Before the invention of printing, pictures and sculptures in Europe had been an important form of religious teaching. Yet, even in churches, the role of iconic art was from the beginning a divisive issue. On the one hand, there was the biblical proscription against idolatry. On the other hand, the Church needed a form of propaganda that worked for an illiterate populace. Style and decoration were secondary to the message and used to support it. In the more literate Islamic culture, the written message took precedence, but the formal element was expressed in the esthetics of highly stylized decorative calligraphy. In either case, the artist usually did little more than execute themes determined by orthodoxy, giving expression to ideas the artist may or may not have personally endorsed. But the invention of printing changed the role of graphic art, as later would the invention of photography.

Except to serve as political or commercial propaganda (advertising), today representational art holds a diminished place, superseded by photography and computer graphics. Yet, artists continue to paint and sculpt figures and scenes as well as decorative or purely abstract creations. In the age of instant images (provided by cell phones, for instance), what is the ongoing appeal of hand-made images? How and why is a painting based on a photograph received differently than the photo itself, and why do people continue to make and buy such a thing? The answer surely lies in the interplay of form and content. The representational content of the photo is a given which inspires and constrains the play with form.

First, skill is involved in accurately reproducing a scene. We appreciate demonstrations of sheer skill, so that hyper-realist painting and sculpture celebrate technical proficiency at imitation. Then, too, a nostalgia is associated with the long tradition of representational art. Thirdly, status is associated with art as a form of wealth. An artwork is literally a repository of labor-intensive work, which formerly often embodied precious materials as well as skill. Photographic images are mostly cheap, but art is mostly expensive. Lastly, there are conventional ideas about decoration and how human space should be furnished. Walls must have paintings; public space must have sculptures. In general, art serves the purpose of all human enterprise: to establish a specifically human world set apart from nature. This is no less so when nature itself is the medium, as in gardens and parks that redefine the wild as part of the human world.

Nevertheless, it is fair to say that the essence of modern art—as sheer play with materials, images, forms, and ideas—is no longer representational. Art is no longer bound to a message; form reigns over content. Perhaps this feature is liberating in the age of information, when competing political messages overwhelm and information is often threatening. Art that dwells on play with formal elements refrains from imposing a message—unless its iconoclasm is the message. Abstraction does not demand allegiance to an ideology—except when it is the ideology. But in that case, it is no longer purely play. Art can serve ideology; but it can also reassure by the very absence of an editorial program. Playfulness, after all, does not intimidate or discriminate, though it may be contagious. It engages us on a level above personal or cultural differences.

Decoration has always been important to human beings, who desire to embellish and shape both nature and human artifacts. Decoration may incorporate representation or elements from nature, but usually in a stylized way that emphasizes form, while tailoring it to function. Yet, even decorative motifs constitute an esthetic vocabulary that can carry meaning or convey feeling. A motif can symbolize power and military authority, for example. Such are the fasces and the bull of Roman architecture; the “heroic” architecture, sculpture, and poster art of Fascism or Communism; or the Napoleonic “Empire” style of furnishings. It can be geometric and hard-edged, expressing mental austerity. Equally, it can express a more sensuous and intimate spirit, often floral or vegetal—as in the wallpapers of William Morris and the Art Nouveau style of architecture, furniture, and posters. In other words, decoration too reflects intent. It can reinforce or soften an obvious message. But it can also act independently of content, even subversively to convey an opposing ethos.

Even when no message seems intended, there is a meta-message. Whatever is well-conceived and well-executed uplifts and heartens us because it conveys the caring of the artist, artisan, or engineer. On the other hand, the glib cliché and the shoddily made product spread cynicism and discouragement. They reveal the callousness of the producer and inure us to a world in which quantity prevails over quality. Every made thing communicates an intent, for better or worse.

The power and the glory

Human beings are eminently social creatures. Our religions remind us to love one another and our laws require us to consider each other’s needs. One’s self-image depends on the good opinion of others and on status—comparative standing in a pecking order. Like that of other primates, human society is hierarchical. One strives to be better than others—in one’s own eyes and in theirs. Things that serve as symbols and visible trappings of status are a primary form of wealth. On the other hand, we also seek comfort and ease, and wealth consists of things that make our lot better. We are a masterful species not content to live in the abject condition of other creatures, nor content with our natural limitations and dependency on nature. We seek power to define and control our environs—collectively to make a specifically human world, and individually to improve our physical well-being and social standing within it.

The other side of wealth is economic dependency. And the other side of status is psychological dependency. Status and power over others complement each other, since status is essentially power that others have over us. There are those who achieve their relative economic sufficiency by exploiting the dependency of others, just as there are those who rely on the opinions of others for their good opinion of themselves. Independence means not only self-sufficiency (of food production, for example) but also immunity to the opinions of others. There are people for whom material ease and social approval are not paramount. Yet, even they might not be able to defend against others who would compel them with the threat of violence. On your own plot of land, it is possible to subsist and thumb your nose at others trying to buy your services (which provides you no means to control others). But, even if you are food-secure, someone with weapons—or who can pay someone with weapons—can force you to do their bidding or take away your land. When very few own the land required to raise food, most are in an awkward position of dependency.

Control of the physical environment and control over other people dovetail when both can be made to serve one’s purposes. This requires the ability to command or induce others to do one’s bidding. How does this power over others come about? In particular, how does the drive for status mesh with the drive for wealth and the ability to command others? Power must be merited in the eyes of society, and the justification is typically status. How separate can they be? Certainly, we honor some individuals who are not wealthy in material possessions or politically powerful. On the other hand, we may be awed by individuals we despise.

Power can take different forms in different societies. It can be a competition to determine status: who is best able to rule by virtue of their perceived qualities. Leaders are then obeyed out of loyalty to their personal charisma, or because they somehow represent divine authority in the imagination of others. God represents human ideals of omnipotence, omniscience, and benevolence; so the monarch, ruling by divine proxy, symbolically represents these ideals in society. On the other hand, bureaucratic power is rule by impersonal law. Yet, even its ability to require obedience may have originally derived from divine authority, later replaced by institutions such as parliaments and courts of law, enforced by arms. Authority, like values in general, was once considered unquestionable because divinely sanctioned; it has since become secularized. As the individual’s subjectivity grew more significant in society, so did individual responsibility to endorse ruling authority—through voting in elections, for example. As arbitrary and absolute authority gave way to institutions, equality of subjects under God or king gave way to equality under law. To replace the (theoretically absolute) authority of the monarch with the limited authority of elected representatives changes the political game: from common acceptance of a transcendent reality to a spectator sport of factions supporting competing mortal personalities.

A basic problem of social organization is how to get people to defer to a will that transcends the wills of the individuals constituting society. Just as siblings may bicker among themselves but defer to parental authority, so people seek an impartial, fair, and absolute source of authority—a binding arbitration, so to speak. That is a large part of the appeal of God or king, as givers of law who stand above the law and the fray of mere humans. (Psychologically, the very arbitrariness of royal whim points to the transcendent status of the ruler as above the law, therefore the one who can invest the law with absolute authority.) This is the background of modern deference to codified civil law, which was originally the edict of the king or of God. On the other hand, tradition has the authority of generations. Especially when expressed in writing, precedent has an objective existence that anyone can refer to—and thus defer to—though always subject to interpretation. This too explains the willingness to abide by the law even when in disagreement, provided the law has this explicit objective existence preserved in writing. It may also explain the authority of religious texts for believers.

Effective rule depends not only on charisma but also on delegation of authority to others, to tradition, and to institutions such as laws and bureaucracies. The appeal of law and administration over the whim of rulers lies in its equal application to all: fairness. A law or rule that does not apply to everyone is considered unjust. The other side of such uniformity is that one size must fit all: it is also unfair when individual circumstance is not considered. Acceptance of authority can grow out of the success of a triumphant player or out of the rule of law through tradition and bureaucracy. When it fails, it can degenerate into either agonistic populism or bureaucracy run amok—or both. Either way, when authority breaks down, politics degenerates into a popularity contest among personalities mostly preselected from a privileged class. Indeed, that is what ‘democracy’ is, as we have come to know it! A true democratic system would not involve election at all, but selection by lottery—a civic duty like jury duty or military service.

Wealth has the dimensions of status and power. It consists of some form of ownership. In our society, every form of property is convertible to cash and measurable by it. Money has universal value by common agreement, to purchase what is needed for comfort, to purchase status, and to command others by purchasing their services. The rich enjoy the use of capital (property used to gain more wealth), the ability to command a wide variety of services money can buy, and the status symbols it can buy: artworks, jewelry, luxury cars and boats, villas maintained by servants, etc. Yet, most people have little capital and their wealth is little more than the momentary means to survive.

In general, money is now the universal measure of value and success. It also enables the accumulation of capital. Yet, status and power may well have been separate in societies that did not use money as we do. Without money as a medium of exchange, possessions alone cannot serve to command others. There must also be the ability to get others to do one’s bidding by paying them or by coercing them by (paid) force of arms. Without money, as a standard quantized medium of exchange, trade must be a direct exchange of goods and services—i.e., barter. All dollars are created equal (just as all people are, theoretically before the law). But the universal equality of units of money only led to its unequal distribution among people. In that sense, money is the root of economic inequality, if not of all evil. If only barter were possible, it would be difficult (short of outright theft) for one person to accumulate very much more than another. Money promotes plunder, legal and otherwise, by its very intangibility and ease of passing from hand to hand.

We are used to the idea of respecting property ownership and obeying the law, and to hierarchical structures in which one follows orders. Some indigenous societies simply rejected the idea of obeying orders or telling others what to do. Status was important to them, but not power over others. Or, rather, they took measures against the possibility of institutionalized power relations in their society. We tend to project modern power relations and structures back upon the past, so that the quest to understand the origins of power presumes current understandings and arrangements. This can blind us to alternative forms of political process, to real choice we may yet have.

Hardly anyone now could disagree with Plato’s idea that only a certain type of well-motivated and wise individual is truly qualified to lead society. That would mean someone unmotivated by status, wealth or power. But there does not seem to be a modern version of his Academy to train statespersons. (Instead, they graduate from business schools or Hollywood.) There are think tanks, but not wisdom tanks. If the political task is to plan humanity’s future, it might better be done by a technocracy of experts in the many disciplines relevant to that task, including global management of population and resources. They would make and enforce laws designed to ensure a viable future.

Such a governing committee might operate by consensus; but society as a whole (potentially the world) would not be ruled by democratically elected representatives. Instead, staggered appointments would be drawn by lottery among qualified candidates. The term of office would be fixed, non-renewable, and only modestly paid. This arrangement would bypass many of the problems that plague modern democracies, beginning with de facto oligarchy. There would be no occasion to curry favor with the public nor fear its disaffection, since the “will of the people” would be irrelevant. Hence, the nefarious aspects of social media (or corporately controlled official media) wouldn’t touch the political process. There would be no election campaigns, no populist demagoguery, no contested voting results, no need for fake news or disinformation. (Validation of knowledge within scientific communities has its own well-established protocols that remain relatively immune to the toxic by-products and skepticism of the Internet Age.)

Admittedly, members of this governing committee would not be immune to bribery or to using the office for personal benefit (just as juries and judges are sometimes corrupted). Spiritual advice before the modern age was to be in the world and not of it. Taking that seriously today may be the only cure for humanity’s age-old obsession with power and glory. Still, technocracy might be an improvement over the present farce of democracy.

[Acknowledgement: many of the ideas in this post were inspired by The Dawn of Everything by David Graeber and David Wengrow, McClelland and Stewart, 2021—a challenging and rewarding read.]

The mechanist fallacy and the prospect of artificial life

The philosophy of mechanism treats all physical reality as though it were a machine. Is an organism a machine? Under what circumstances could a machine become an organism? Clear answers to such questions are important to evaluate the feasibility and desirability of artificial life.

The answer to the first question is negative: an organism is not a machine, because it is not an artifact. The answer to the second question follows from an understanding of how the philosophy of mechanism leads falsely to the conclusion that natural reality can be formally exhausted in thought and recreated as artifact. A machine can become an organism only by designing itself, from the bottom up, as organisms in effect have done. An artificial organism cannot be both autonomous and fully subject to human control, any more than natural organisms are. This trade-off presents a watershed choice: to create artifacts as tools of human intent or to foster autonomous systems that may elude human control and pose a threat to us and all life.

Much of the optimism of genetic engineering rests on treating organisms as mechanisms, whose genetic program is their blueprint. But no natural thing is literally a machine, because (as far as we know) natural reality is found, not made. The quest to engineer the artificial organism from the top down rests on the theoretical possibility of analyzing the natural one exhaustively, just as simulation relies on formal coding of the thing to be simulated. But, unlike machines and other artifacts, no natural thing can be exhaustively analyzed. Only things that were first encoded can be decoded.

As a way of looking, the philosophy of mechanism produces artifacts at a glance.  While this has been very fruitful for technology, imitating organisms is not an effective strategy for producing them artificially, because it can only produce other artifacts. The implicit idealist faith behind theoretical modelling and the notion of perfect simulation is that each and every property of a thing can be completely represented. A ‘property’, however, is itself an artifact, an assertion that disregards a potential infinity of other assertions. The collection of properties of a natural thing does not constitute it, although it does constitute an artifact.

A machine might be inspired by observing natural systems, but someone designed and built it. It has a finitely delimited structure, a precise set of well-defined parts. It can be dismantled into this same set of parts by reversing the process of construction. The mechanistic view of the cosmos assumes that the universe itself is a machine that can be deconstructed into its “true” parts in the same way that an engine can be assembled and disassembled. However, we are always only guessing at the parts of any natural system and how they relate to each other. The basic problem for those who want to engineer life is that they did not make the original.

We cannot truly understand the functioning of even the simplest creature and its genetic blueprint without grasping its complex interactions with environments that are the source and reference of its intentionality. Just as a computer program draws not only upon logic and the mechanics of the computer but also upon the semantically rich environment of the programmer (which ultimately includes the whole of the real world), so the developing embryo, for instance, does not simply unfold according to a program spelled out in genes, but through complex chemical interactions with the uterine environment and beyond. The genetic “program”, in other words, is not a purely syntactic system, but is rich in references that extend indefinitely beyond itself. The organism is both causally and intentionally connected to the rest of the world. Simply identifying genetic units of information cannot be taken as exhaustive understanding of the genetic “code”, any more than identifying units of a foreign language as words implies understanding their meaning.

Simulation involves the general idea that natural processes and objects can be reverse-engineered. They are taken apart in thought, then reconstructed as an artifact from the inferred design. The essence of the Universal Machine (the digital computer) is that it can simulate any other machine exhaustively. But whether any machine, program, artifact, model, or design can exhaustively simulate an organism—or, for that matter, any aspect of natural reality—is quite another question.

The characteristic of thought and language, whereby a rose is a rose is a rose, makes perfect simulation seem feasible. But there are many varieties of rose and every individual flower is unique. The baseball player and the pitching machine may both be called pitchers, but the device only crudely imitates the man, no matter how accurately it hurls the ball. Only in thought are they the “same” action. When a chunk of behavior (whether performed by a machine or a natural creature) seems to resemble a human action, it is implicitly being compared not to the human action itself but to an abstraction (“pitching”) that is understood as the essence of that behavior. Similarly, the essence or structure of an object (the “pitcher”) is only falsely imagined to be captured in a program or blueprint for its construction. Common sense recognizes the differences between the intricate human action of throwing and the mechanical hurling of the ball. Yet, the concept of simulation rests on obscuring such distinctions by conflating all that can pass under a given rubric. The algorithm, program, formalism, or definition is the semantic bottleneck through which the whole being of the object or behavior must be squeezed.

One thing simulates another when they both embody a common formalism. This can work perfectly well for two machines or artifacts that are alternative realizations of a common design. It is circular reasoning, however, to think that the being of a natural thing is exhausted in a formalism that has been abstracted from it, which is then believed to be its blueprint or essence. The structure, program, or blueprint is imposed after the fact, inferred from an analysis that can never be guaranteed complete. The mechanist fallacy implies that it is possible to replicate a natural object by first formalizing its structure and behavior and then constructing an artifact from that design. The artifact will instantiate the design, but it will not duplicate the natural object, any more than an airplane duplicates a bird.

If an organism is not a machine, can a machine be an organism? Perhaps—but only if, paradoxically, it is not an artifact! What begins as an artifact must bootstrap itself into the autonomy that characterizes organism. An organism is self-defining, self-assembling, self-maintaining, self-reproducing—in a word, autopoietic. In order to become an organism, a machine must acquire its own purposes. That property of organisms has come about through natural selection over many generations—a process that depends on birth and death. While a machine exhibits only the intentionality of its designers, the organism derives its own intentionality from participation in an evolutionary contest, through a long history of interactions that matter to it, in an environment of co-participants.

Technological development as we know it expresses human purposes; natural evolution does not. The key concepts that distinguish organism from machine are the organism’s own intentionality and its embodiment in an evolutionary contest. While a machine may be physical, it is not embodied, because embodiment means the network of relationships developed in an evolutionary context. No machine yet, however complex, is embodied in that sense or has its own purposes. Indeed, this has never been the goal of human engineers.

Quite apart from feasibility, what would be the point of facilitating the evolution of true artificial life, aside from the sheer claim to have done it? The autonomy of organisms limits how they can be controlled. We would have no more control over artificial organisms than we presently have over wild or domesticated ones. We could make use of an artificial ecology only in the ways that we already use the natural one. While it is conceivable that artificial entities could self-create under the right circumstances—after all, life did it—these would not remain within the sort of human control, or even understanding, exerted over conventional machines. We must distinguish clearly between machines that are tools, expressing their designers’ motivations, and machines that are autonomous creatures with their own motivations and survival instincts. The latter, if successful in competing in the biosphere, could displace natural creatures and even all life.

If we wish to retain human hegemony on the planet, there will be necessary limits to the autonomy of our technology. That, in turn, imposes limits on its capabilities and intelligence, especially the sort of general and self-interested intelligence we expect from living beings. We must choose—while we still can—between controllable technology to serve humans and the dubious accomplishment of siring new forms of being that could drive us to extinction. This is a political as well as a design choice. Only clarity of intention can avoid disaster resulting from the naive and confused belief that we can both retain control and create truly autonomous artifacts.

Origins of the sacred

Humanity and religion seem coeval. From the point of view of the religious mind, this hardly requires explanation. But from a modern scientific or secular view, religion appears to be an embarrassing remnant. There must be a reason why religion has played such a central and persistent role in human affairs. If not a matter of genes or evolutionary strategy, it must have a psychological cause deeply rooted in our very nature. Is there a core experience that sheds light on the phenomenon of religion?

The uncanny is one response to unexpected and uncontrolled experience. It is not solely the unpredictable external world that confounds the mind; the mind can produce from within its own depths terrifying, weird, or at least unsettling experiences outside the conscious ego’s comfort zone. One can suffer the troubling realization that the range of possible experience is hardly guaranteed to remain within the bounds of the familiar, and that the conscious mind’s strategies are insufficient to keep it there. The ego’s grasp of this vulnerability, to internal as well as external disturbance, may be the ground from which arises the experience of the numinous, and hence the origin of the notion of the sacred or holy. Essentially it is the realization that there will always be something beyond comprehension, which perhaps underlies the familiar like the hidden bulk of an iceberg.

To actually experience the numinous or “wholly other” seems paradoxical to the modern mind, given that all experience is considered a mediated product of the biological nervous system. For the noumenon is that which, by Kant’s definition, cannot be experienced at all. Its utter inaccessibility has never been adequately rationalized, perhaps because our fundamental epistemic situation precludes knowing the world-in-itself in the way that we know our sensory experience. Kant acknowledged this situation by clearly distinguishing phenomenal experience from the inherent reality of things-in-themselves—a realm off-limits to our cognition by definition. He gave a name to that transcendent realm, choosing to catalogue it as a theoretical construct rather than to worship it. Yet, reason is a latecomer, just as the cortex is an evolutionary addition to older parts of the brain. We feel things before we understand them. Rudolf Otto called this felt inaccessibility of the innate reality of things its ‘absolute unapproachability’. He deemed it the foundation of all religious experience. Given that we are crucially dependent on the natural environment, and are also psychologically at the mercy of our own imaginings, I call it holy terror.

In addition to being a property of things themselves, realness is a quality with which the mind imbues certain experiences. Numinosity may be considered in the same light. The perceived realness of things refers to their existence outside of our minds; but it is also how we experience our natural dependency on them. Real things command a certain stance of respect, for the benefit or the harm they can bring. Perhaps perceived sacredness or holiness instills a similar attitude in regard to the unknown. In both cases, the experienced quality amounts to judgment by the organism. Things are cognitively judged real when they can affect the organism for better or worse, and when it might affect them in turn. Things judged sacred might play a similar role, in regard not to the body but to the self as a presumed spiritual entity.

The quality of sacredness is not merely the judgment that something is to be revered; nor is holiness merely the judgment that something or someone is unconditionally good. These are socially-based assessments secondary to a more fundamental aspect of the numinous as something judged to be uncanny, weird, otherworldly, confounding, entirely outside ordinary human experience. The uncanny is at once real and unreal. The sacred commands awe in the way that the real compels a certain involuntary respect. Yet, numinous experiences do more than elicit awe. They also suggest a realm entirely beyond what one otherwise considers real. Paradoxically, this implies that we do not normally know reality as it really is.

Indeed, as Kant showed, we cannot know the world as it is “in itself,” apart from the limited mediating processes of our own consciousness. All experience is thus potentially uncanny; the very fact that we consciously experience anything at all is an utter mystery! We can never know with certainty what to make of experience or our own presence as experiencers. It is only through the mind’s chronically inadequate efforts to make sense that anything can ever appear ordinary or profane. Mystery does not just present a puzzle that we might hope to resolve with further experience and thought. Sometimes it is a tangible revelation of utter incomprehensibility, which throws us back to a place of abject dependency.

We are self-conscious beings endowed with imagination and the tendency to imbue our imaginings with realness. We have developed the concept of personhood, as a state distinct from the mere existence of objects or impersonal forces. We seem compelled in general to imagine an objective reality underlying experience. A numinous experience is thus reified as a spiritual force or reality, which may be personified as a “god.” When the relationship of dependence—on a reality beyond one’s ken and control—is thus personified, it aligns with the young child’s experience of early dependence on parents, who must seem all powerful and (ideally) benevolent. Hence, the early human experience of nature as the Great Mother—and later, as God the Father. In the modern view, these family figures reveal the human psyche attempting to come to terms with its dependent status.

But nature is hardly benevolent in the consistent way humans would like their parents to be. Psychoanalysis of early childhood reveals that even the mother is perceived as ambivalent, sometimes depriving and threatening as well as nourishing. The patriarchal god projects the male ego’s attempt to trump the intimidating raw power of nature (read: the mother) by defining a “spiritual” (read: masculine) world both apart from it and somehow above it. The Semitic male God becomes the creator of all. He embodies the ideal father, at once severe and benevolent. But he also embodies the heroic quest to self-define and to re-create the world to human taste. In other words, the human aspiration to become as the gods.

On the one hand, this ideal projects onto an invisible realm the aspiration to achieve the moral perfection of a benevolent provider, and reflects how one would wish others (and nature) to behave. It demands self-mastery, power over oneself. The path of submission to a higher power acknowledges one’s abject dependence in the scheme of things, to resist which is “sin” by definition. On the other hand, it represents the quest for power over the other: to turn the tables on nature, uncertainty, and the gods—to be the ultimate authority that determines the scheme of things.

One first worships what one intends to master. Worship is not abject submission, but a strategy to dominate. Religion demonstrates the human ability to idealize, capture, and domesticate the unknown in thought. It feigns submission to the gods, even while its alter ego—science—covets and acquires their powers. Thus, the religious quest to mitigate the inaccessibility and wrath of God, which lurks behind the inscrutability of nature, is taken over by the scientific quest for order and control. The goal is to master the natural world by re-creating it, to become omniscient and omnipotent.

Relations of domination and submission play out obviously in human history. A divinely authorized social relationship is classically embodied in two kinds of players: kings and peasants. Yet, history also mixes these and blurs boundaries. Like some entropic process, the quest for empowerment is dispersed, so that it becomes a universal goal no longer projected upon the gods or reserved to kings. We see this “democratization” in the modern expectation of social progress through science and global management. While enjoying the benefits of technology, deeply religious people may not share this optimism, remaining convinced that power rests forever in the inscrutable hands of God. Those who imagine a judgmental, vindictive, and jealous male god have the most reason to be doubtful of human progress, while those who identify with the transcendent aspect of religion are more likely to feel themselves above specific outcomes in the historical fray.

The ability of mind to self-transcend is a double-edged sword. It is the ability to conceive something beyond any proposed limit or system. This enables a dizzying intimation of the numinous; more importantly, it enables the human being to step beyond mental confines, including ideas and fears about the nature of reality and what lies beyond. On the one hand, we know that we know little for certain. To fully grasp that inspires the goosebumps of holy terror. One defensive response is to pretend that some text, creed, or dogma provides an ultimate assurance; yet we know in our bones that is wishful thinking. The experience of awe may incline one to bow down before the Great Mystery. Yet, we are capable of knowledge such as it can be, for which we (not the gods) are responsible. We are cursed and blessed with at least a measure of choice over how to relate to the unknown.

Uncommon sense

Common sense is a vague notion. Roughly it means what would be acceptable to most people. Yet how can there be such a thing as common sense in a divided world? And how can a common understanding of the world be achieved in the face of information that is doubly overwhelming—too much to process and also unreliable?

In half a century, we have gone from a dearth of information crucial for an informed electorate, to a flood of information that people ironically cannot use, do not trust, and are prone to misuse. We now rely less (and with more circumspection) on important traditional appraisers of information, such as librarians, teachers, academics and peer-reviewed journals, text-book writers, critics, censors, journalists and newscasters, civil and religious authorities, etc. The Internet, of course, is largely responsible for this change. On the one hand, it has democratized access to information; on the other, it has shifted the burden of interpreting information—from those trained for it onto the unprepared public, which now has little more than common sense to rely upon to decide what sources or information to trust.

Which brings us to a Catch-22: how to use common sense to evaluate information when the formation of common sense depends on a flow of reliable information? How does one get common sense? It was formerly the role of education to decide what information was worthy of transmission to the next generation, and to impart the wisdom of how to use it. (Also, at a time when there existed less specialized expertise, people formerly had a wider general experience and competence of their own to draw upon.) Now there is instant access to a plethora of influences besides the voices of official educators and recognized experts. The nature of education itself is up for grabs in a rapidly changing present and unpredictable future. Perhaps education should now aim at preparation for change, if such is not an oxymoron. That sort of education would mean not learning facts or skills that might soon become obsolete, but meta-skills of how to adapt and how to use information resources. In large part, that would mean how to interpret and reconcile diverse claims.

One such skill is “reason,” meaning the ability to think logically. If we cannot trust the information we are supposed to think about, at least we could trust our ability to think. If we cannot verify the facts presented, at least we can verify that the arguments do not contradict themselves. Training in critical thinking, logic, research protocols, data analysis, and philosophical critique is appropriate preparation for citizenship, if not for jobs. This would give people the socially useful skill to evaluate for themselves information that consists inevitably of the claims of others rather than “facts” naively presumed to be objective. Perhaps that is as close as we can come to common sense in these times.

Since everything is potentially connected to everything else, even academic study is about making connections as well as distinctions. The trouble with academia partly concerns the imbalance between analysis (literally taking things apart) and synthesis (putting back together a whole picture). Intellectual pursuit has come to overemphasize analysis, differentiation, and hair-splitting detail, often to the detriment of the bigger picture. Consequently, knowledge and study become ever more specialized and technical, with generalists reduced to another specialty. The result is an ethos of bickering, which serves to differentiate scholars within a niche more than to sift ideas ultimately for the sake of a greater synthesis. This does not serve as a model of common sense for society at large.

Technocratic language makes distinctions in the name of precision, but obstructs a unifying understanding that could be the basis for common sense. Much technical literature is couched in language that is simply inaccessible to lay people. Often it is spiced with gratuitous equations, graphs, and diagrams, as though sheer quantification or graphic summaries of data automatically guarantee clarity or plausibility, let alone truth. Sometimes the arguments are opaque even to experts outside that field. Formalized language and axiomatic method are supposed to structure thought rigorously, to facilitate deriving new knowledge deductively. Counter-productively, a presentation that serves ostensibly to clarify, support, and expand on a premise often seems to obfuscate even the thinking of those presenting it. How can the public assimilate such information, which deliberately misses the forest for the trees? How can we have confidence in complex argumentation that pulls the wool over the eyes even of its proponents?

Academic writing must meet formal requirements proposed by the editors of journals. There are motions to go through which have little to do with truth. Within such a framework, literary merit and even skill at communication are not required. Awkward complex sentences fulfill the minimal requirements of syntax. While this is frustrating for outsiders, such formalism permits insiders to identify themselves as members of an elite club. The danger is inbreeding within a self-contained realm. When talking to their peers, academics may feel little need to address the greater world.

For the preservation of common sense, an important lay skill might be the ability to translate academese, like legal jargon, into plain language. One must also learn to skim through useless or misleading detail to get to the essential points. Much popular non-fiction, like some academic books, ironically has few main ideas (sometimes only one), amply fluffed out to book length with gratuitous anecdotes to appeal to a wider audience. Learning to recognize the essential and sort the wheat from the chaff now seems like a basic survival skill even outside academia.

Perhaps as a civilization we have simply become too smart for our own good. There is now such a profusion of knowledge that not even the smartest individuals, with the time to read, can keep up with it. Somehow the information bureaucracy works to expand technological production. But does it work to produce wisdom that can direct the use of technology? The means exist for global information sharing and coordination, but is there the political will to do the things we know are required for human thriving?

Part of the frustration of modern times is the sense of being overwhelmed yet powerless. We may suffer in having knowledge without the power to act effectively, as though we had heightened sensation despite physical paralysis. Suffering is a corollary of potential action and control. Suffering can occur only in a central nervous system, which serves both to inform the creature of its situation and to provide some way of doing something about it. Sensory input is paired with motor output; perception is paired with response.

Cells do not suffer, though they may respond and adapt (or die). It is the creature as a whole that suffers when it cannot respond effectively. If society is considered an organism, individual “cells” may receive information appropriate at the creature level yet be unable to respond to it at that level. Perhaps that is the tragedy of the democratic world, where citizens are expected to be informed and participate (at least through the vote) in the affairs of society at large—and to share its concerns—but are able to act only at the cellular level. To some extent, citizens have the same information available as their leaders, who are often experts only in the art of staying in power. Citizens may even have a better idea of what to do, but are not positioned to do it.

Listening to the news is a blessing when it informs you concerning something you can plausibly do. Even then, one must be able to recognize what is actual news and distill it from editorial, ideology, agenda, and hype. Otherwise it is just another source of anxiety, building a pressure with no sensible release. To know what to do, one must also know that it is truly one’s own idea and free decision and not a result of manipulation by others. That should be the role of common sense: to enable one to act responsibly in an environment of uncertainty.

Unfortunately, human beings tend to abhor uncertainty—a dangerous predicament in the absence of reliable information and common sense. The temptation is to latch onto false certainties to avoid the sheer discomfort of not knowing. These can serve as pseudo-issues, whose artificial simplicity functions to distract attention from problems of overwhelming complexity. Pseudo-issues tend to polarize opinion and divide people into strongly emotional camps, whose contentiousness further distracts attention from the true urgencies and the cooperative spirit required to deal with them. While common sense may be sadly uncommon, it remains our best hope.

Life and work in the paradise of machines

What would we do if we didn’t have to do anything? What would a world be like where nearly all work is done by machines? If machines did all the production, humans would have to find some other way to occupy their time. They would also have to find some other way to justify the cost to society for their upkeep and their right to exist. In the current reality, one’s income is roughly tied to one’s output—though hardly in an equitable way. Investors and upper management are typically rewarded grossly more than employees for their efforts. Yet their needs as organisms are no greater. In a world where all production and most services would be done by machines, human labour would no longer be the basis for either the production or the distribution of wealth. Society would have to find some other arrangement.

In that situation, a basic income could be an unconditional human right. When automation meets all survival needs, food, housing, education and health care could be guaranteed. All goods and services necessary for living a satisfying life would be a birthright, so that no one would be obliged to work in order to live. Time and effort would be discretionary and uncoupled from survival. What to do with one’s time would not be driven by economic need but by creative vision. Thus, the challenge to achieve freedom from toil cannot be separated from the problem of how to distribute wealth, which we already face. Nor can it be separated from the question of what to do with free time, which in turn cannot be separated from how we view the purpose of life.

As biological creatures, our existence is beholden to natural laws and biological necessities. We need food, shelter and clothing and must act to provide for these needs. A minimal definition of work is what must be done to sustain life. The hand-to-mouth subsistence of pre-industrial societies involved a relatively direct relationship between personal effort and survival. Industrial society organizes production by divisions of labour, providing an alternative concept of work with a less direct relationship. Production involves the cooperation of many people, among whom the resulting wealth must somehow be divided up. Work takes on a different meaning as the justification for one’s slice of the economic pie. It is less about production, per se, than the rationale for consumption: a symbolic dues paid to merit one’s keep and secure one’s place on the planet.

Early predictions that machines would create massive unemployment have not materialized. Nor have predictions that people would work far less because of automation. Instead, new forms of employment have replaced the older ones now automated, with people typically working longer hours. Whether or not these forms of work really add to the general wealth and welfare, they serve to justify the incomes of new types of workers. As society adjusts to automation, wealth is redistributed accordingly, though not equitably. Work is redefined but not reduced. In the present economy, those who own the means of production benefit most and control society, in contrast to those who perform labour. When machines are both the means of production and labour combined, how will ownership be distributed? What would be the relationship between, for example, 99% of people unemployed and the 1% who own the machines?

With advances in AI, newly automated tasks continue to encroach on human employment. In principle, any conceivable activity can be automated; and any role in the economy can be taken over by machines—even war, government, and the management of society. We are talking, of course, about superintelligent machines that are better than humans at most, if not all, tasks. But better how, according to which values? If we entrust machines to implement human goals efficiently, why not entrust them to set the goals as well? Why not let them decide what is best for us and sit back to let them provide it? On the one hand, that seems like a timeless dream come true, freedom from drudgery at last. Because physical labour is tiring and wears on the body, we may at least prefer mental to physical activity. The trend has been to become more sedentary, as machines take over grunt work and as forms of work evolve that are less physical and more mental. White-collar work is preferred to blue-collar or no-collar, and rewarded accordingly. Yet work is still tied to survival.

Humans have always struggled against the limitations of the body, the dictates of biology and physics, the restrictions imposed by nature. In particular, that means freedom from the work required to maintain life. In Christian culture, work was a punishment for original sin: the physical pain attending the sweat of the brow and the labour of childbirth alike. Work has had a redeeming quality, as an expiation or spiritual cleanse. The goal of our rebellion against the natural condition is return to paradise, freedom again from painful labour or any effort deemed unpleasant. Our very idea of progress implies the increase of leisure, if not immediately then in the long term: work now for a better future. This has guided the sort of work that people undertake, resulting in the achievements of technology, including artificial intelligence. Humans first eased their toil by forcing it upon animals and other humans they enslaved. Machines now solve that moral dilemma by performing tasks we find burdensome. So far at least, they do not tire, or suffer, or rebel against their slavery.

On the other hand, humans have also always been creative and playful, pursuing activity outside the mandate of Freud’s reality principle and the logic of delayed gratification. We find direct satisfaction in accomplishment of any sort. We deliberately strain the body in exercise and sport, climb mountains for recreation, push our physical limits. We seek freedom from necessity, not from all activity or effort. We covet the leisure to do as we please, what we freely decide upon. In an ideal world, then, work is redefined as some form of play or gratuitous activity, liberated from economic necessity. There have always existed non-utilitarian forms of production, such as music, art, dance, hobbies, and much of academic study. Though not directly related to survival, these have always managed to find an economic justification. When machines supply our basic needs, everyone could have the time for pursuits that are neither utilitarian nor economic.

Ironically, some people now express their creativity by trying to automate creativity itself: writing programs to do art, compose music, play games, etc. No doubt there are already robots that can dance. While AI tools and “expert” programs assist scientists with data analysis, so far there are no artificial scientists or business magnates. Yet, probably anything that humans do machines will eventually do at least as well. The advance of AI seems inevitable in part because some people are determined to duplicate every natural human function artificially through technology. There is an economic incentive, to be sure, yet there is also a drive to push AI to ever further heights purely for the creative challenge and the accomplishment. Because this drive often goes unrecognized even by those involved, it is especially crucial to harness it to an ideal social vision if humanity is to have a meaningful future. Where is the reasonable limit to what should be automated? If the human goal is not simply relief from drudgery, but that machines should ultimately do everything for us, does that not imply that we consider all activity onerous? What, then, would be the point of our existence? Are we here just to consume experience, or are we not by nature doers as well?

Some visionaries think that machines should displace human beings, who have outlived their role at the top of an evolutionary ladder. They view the human form as a catalyst for machine intelligence. However, that post-humanist dream is quintessentially a humanist ideal, invoking transcendence of biological limits. It is a future envisioned not by machines or cyborgs but by conventional human beings alive today. To fulfill it, AI would have to embody current human nature and values in many ways—not least by being conscious. Essentially, we are looking to AI for perfection of ourselves—to become or give birth to the gods we have idolized. But AI could only be conscious if it is effectively an artificial organism, vulnerable and limited in some of the ways we are, even if not in all. To create insentient superintelligence merely for its own sake (rather than its usefulness to us) makes no human sense. Art for art’s sake may make sense, but not automation for automation’s sake. Nor can the goal be to render us inactive, relieved even of creative effort. We must come to understand clearly what we expect from machines—and what we desire for ourselves.

On intentionality

Intentionality is an elusive concept that fundamentally means reference of something to something else. Reference, however, is not a property, state, or relationship inhering in things or symbols, nor between them; it is rather an action performed by an agent, who should be specified. It is an operation of relating or mapping one thing or domain to another. These domains may differ in their character (again, as defined by some agent). A picture, for example, might be a representation of a real landscape, in the domain of painted images. As such it refers to the landscape, and it is the painter who does the referring. Similarly, a word or sentence might represent a person’s thought, perception, or intention. The relevant agents, domains, and the nature of the mappings must be specified before intentionality can be properly characterized.

In these terms, the rings of a tree, for example, may seem to track or indicate the age of the tree or periods favorable to growth. Yet, it is the external observer, not the tree, who establishes this connection and who makes the reference. Connections made by the tree itself (if such exist) are of a different sort. In all likelihood, the tree rings involve causal but not intentional connections.

A botanist might note connections she considers salient and may conclude that they are causal. Thus, changing environmental conditions can be deemed a cause of tree ring growth. By contrast, it would stretch the imagination to suppose that the tree intended to put on growth in response to favorable conditions. Or that God (or Nature) intended to produce the tree ring pattern in response to weather conditions. These suppositions would project human intentionality where it doesn’t belong. Equally, it would be far-fetched to think that the tree deliberately created the rings in order to store in itself a record of those environmental changes, either for its own future use or for the benefit of human observers. The tree is simply not the kind of system that can do that. The intentionality we are dealing with is rather that of the observer. On the other hand, there are systems besides human beings that can do the kind of things we mean by referring, intending, and representing. In the case of such systems, it is paramount to distinguish clearly the intentionality of the system itself from that of the observer. This issue arises frequently in artificial intelligence, where the intentionality of the programmer is supposed to transfer to the automated system.

The traditional understanding of intentionality generally fails to make this distinction, largely because it is tied to human language usage. “Reference” is taken for granted to mean linguistic reference or something modeled on it. Intentionality is thus often considered inherently propositional even though, as far as we know, only people formulate propositions. If we wish to indulge a more abstract notion of ‘proposition’, we must concede that in some sense the system makes assertions itself, for its own reasons and not those of the observer. If ‘proposition’ is to be liberated from human statements and reasoning, the intention behind it must be conceived in an abstract sense, as a connection or mapping (in the mathematical sense) made by an agent for its own purposes.

Human observers make assertions of causality according to human intentions, whereas intentional systems in general make their own internal (and non-verbal) connections, for their own reasons, regardless of whatever causal processes a human observer happens to note. Accordingly, an ‘intentional system’ is not merely one to which a human observer imputes her own intentionality as an explanatory convenience (as in Dennett’s “intentional stance”). Such a definition excludes systems from having their own intentionality, and it reflects the mechanist bias that western science has carried since its inception: that matter inherently lacks the power of agency we attribute to ourselves, and can only passively suffer the transmission of efficient causes.

An upshot of all this is that the project to explain consciousness scientifically requires careful distinctions that are often glossed over. One must distinguish the observer’s speculations about causal relations—between brain states and environment—from speculations about the brain’s tracking or representational activities, which are intentional in the sense used here. The observer may propose either causal or intentional connections, or both, occurring between a brain (or organism) and the world. But, in both cases, these are assertions made by the observer, rather than by the brain (organism) in question. The observer is at liberty to propose specific connections that she believes the brain (organism) makes, in order to try to understand the latter’s intentionality. That is, she may attempt to model brain processes from the organism’s own point of view, attempting as it were to “walk in the shoes of the brain.” Yet, such speculations are necessarily in the domain of the observer’s consciousness and intentionality. In trying to understand how the brain produces phenomenality (the “hard problem of consciousness”), one must be clear about which agent is involved and which point of view.

In general, one must distinguish phenomenal experience itself from propositions (facts) asserted about it. I am the direct witness (subject, or experiencer) of my own experience, about which I may also have thoughts in the form of propositions I could assert regarding the content of the experience. These could be proposed as facts about the world or as facts about the experiencing itself. Along with other observers, I may speculate that my brain, or some part of it, is the agent that creates and presents my phenomenal experience to “me.” Other people might also have thoughts (assert propositions) about my experience as they imagine it; they may also observe my behavior and propose facts about it that they associate with what they imagine my experience to be. All these possibilities involve the intentionality of different agents in differing contexts.

One might think that intentionality necessarily involves propositions or something like them. This is effectively the basis on which an intentional analysis of brain processes inevitably proceeds, since it is a third-person description in the domain of scientific language. This is least problematic when dealing with human cognition, since humans are language users who normally translate their thoughts and perceptions into verbal statements. It is more problematic when dealing with other creatures. However, in all cases such propositions are in fact put forward by the observer rather than by the system observed. (Unless, of course, these happen to be the same individual; but even then, there are two distinct roles.)

The observer can do no better than to theoretically propose operations of the system in question, formulated in ordinary or some symbolic language. The theorist puts herself in the place of the system to try to fathom its strategies—what she would do, given what she conceives as its aims. This hardly implies that the system in question (the brain) “thinks” in human-language sentences (let alone equations) any more than a computer does. But, with these caveats, we can say that it is a reasonable strategy to translate the putative operations of a cognitive system into propositions constructed by the observer.

In the perspective presented here, phenomenality is grounded in intentionality, rather than the other way around. This does not preclude that intentionality can be about representations themselves or phenomenal experience per se (rather than about the world), since the phenomenal content as such can be the object of attention. The point to bear in mind is that two domains of description are involved, which should not be conflated. Speculation about a system’s intentionality is an observer’s third-person description; whereas a direct expression of experience is a first-person description by the subject. This is so, even when subject and observer happen to be the same person. It is nonsense to talk of phenomenality (qualia) as though it were a public domain like the physical world, to which multiple subjects can have access. It is the external world that offers common access. We are free to imagine the experience of agents similar to ourselves. But there is no verifiable common inner world.

All mental activity, conscious or unconscious, is necessarily intentional, insofar as the connections involved are made by the organism for its own purposes. (They may simultaneously be causal, as proposed by an observer.) But not all intentional systems are conscious. Phenomenal states are thus a subset of intentional states. All experience depends on intentional connections (for example, between neurons); but not all intentional connections result in conscious experience.

Sentience and selfhood

‘Consciousness’ is a vague term in the English language. Where they exist, its counterparts in other languages often carry several meanings as well. To be conscious can be either transitive or intransitive; it can mean simply to be aware of something—to have an experience—or it can mean a state opposed to sleep, coma, or inattention. While consciousness clearly involves the role of the subjective self, one is not necessarily aware of that role in the moment. That is, one can be conscious though not self-conscious. The latter notion also is ambiguous: in everyday talk, self-consciousness refers to a potentially embarrassing awareness of one’s relationship to others, perhaps social strategizing. Here, it will mean something more technical: simply the momentary awareness of one’s own existence as a conscious subject.

It might be assumed that to be conscious is to be self-conscious, since the two are closely bound up for human beings. I propose rather to make a distinction between sentience (simply having experience) and the awareness of having that experience. The first involves no more than the naïve appearance of an external world as well as internal sensations—what Kant called phenomena and more recent philosophers call “contents of consciousness” or qualia. No concept of self enters into sentience as such. The second involves, additionally, the awareness of self and of the act or fact of experiencing. One should thus be able to imagine, at least, that other creatures can be sentient—even if they do not seem aware of their individual existence in our human way, and regardless of whether one can imagine just what it is like to be them.

Language complicates the issue. For, we can scarcely speak or think of sentience (or awareness, consciousness, experience, etc.) in general without reference to our familiar human sentience. We are thereby reminded of our own existence—indeed, of our presence in the moment of speaking or thinking about it. Nevertheless, it is as possible to be caught up in thought as to be caught up in sensation. (We all daydream, for example, only “awakening” when we realize that is what we have been doing.) Then the object is the focus rather than its subject. This outward focus is, in fact, the default state. Often, we are simply aware of the world around us, or of some thought in regard to it; we are not aware of being aware. Perhaps it is the fluidity of this boundary—between the state of self-awareness and simple awareness of the contents of experience—which gives the impression that sentience necessarily involves self-awareness. After all, as soon as we notice ourselves being sentient, we are self-aware. It is illogical, however, to conclude that creatures without the capability of self-awareness are not sentient. Language plays tricks with labels. At one time, animals were considered mere insensate machines—incapable of feeling, let alone thought, because these properties could belong only to the human soul.

One might even suppose that self-consciousness is a function of language, since the act of speaking to others directly entails and reflects one’s own existence in a way that merely perceiving the world or one’s sensations does not. Yet, it hardly follows that either sentience or self-consciousness is limited to language users. The problem, again, is that we are ill-equipped to imagine any form of experience other than our own, which we are used to expressing in words, both to others and to ourselves.

This raises the question of the nature and function of self-consciousness, if it is not simply a by-product of the highly evolved communication of a social species. The question is complicated by the fact that identifiable tags of self-consciousness (such as recognizing one’s image in a reflection) seem to be restricted to intelligent creatures with large brains—such as chimpanzees, cetaceans, and elephants—all of which are also social creatures. On the other hand, social insects communicate, but we do not thereby suppose that they are conscious as individuals. To attribute a collective consciousness to the hive or colony extends the meaning of the term beyond the subjective sense we are considering here. It becomes a description of emergent behavior, observed, rather than individual experience perceived. In some sense, consciousness emerges in the brain; but few today would claim that individual neurons are “conscious” because the brain (or rather the whole organism) is conscious.

Closely related to the distinction between simple awareness and self-awareness is the distinction between object and subject, and the corresponding use of person in language. We describe events around us in the third person, as though their appearance is simply objective fact, having nothing to do with the perceiver. For the most part, for us the world simply is. Though self-conscious in theory, we default in practice to naïve realism. With good (evolutionary) reason, the object dominates our attention. Yet, self-awareness, too, is functional for us as highly social creatures. We get along in part through the ability to imagine the subjective experience of others, which means first recognizing our own subjectivity. That we conceive of sentience at all is possible only because of this realization. The subject (self) emerges in our awareness as an afterthought with profound implications. As in the biblical Fall, our eyes are opened to our existence as perceiving agents, and we are cast from the state of unselfconscious being.

The modern understanding of consciousness (i.e., awareness of the world as distinct from the world itself) is that the object’s appearance is constructed by the subject. Our daily experience is a virtual reality produced in the brain, an internal map constantly updated from external input. This realization entails metaphysical questions, such as the relationship between that virtual inner show and the reality that exists “out there.” But that is also a practical question. We need an internal account of external reality that is adequate for survival, independent of how “true” it might or might not be. Self-consciousness is functional in that way too. It serves us to know that we co-create a model of external reality, and that the map is not the territory itself, but something we create as a useful guide to navigate it. Knowing the map to be a symbolic representation rather than objective fact means we are free to revise it according to changing need. The moment or act of self-consciousness awakens us from the realist trance. One is no longer transfixed by experience taken at face value. Suddenly we are no longer looking at the world but at our own looking.

This capacity to “wake up” serves both the individual and society. It enables the person or group to stand back from an entrapping mindset, or viewpoint, to question it, which opens the possibility of a broader perspective. Literally, this means a bigger picture, encompassing more of reality, which is potentially more adequate for survival both individually and collectively. Knowledge is empowering; yet it is also a trap when it seems to form a definitive account. The map is then mistaken for the territory and we fall again into trance. So, there is a dialectical relationship between knowing and questioning, between certainty and uncertainty. The ability to break out of a particular viewpoint or framework establishes a new ground for an expanded framework; but that can only ever be provisional, for the new ground must eventually give way again to a yet larger view—ad infinitum. That, of course, is challenging for a finite creature. We are obliged to trust the knowledge we have at a given time, while aware that it may not be adequate. That double awareness is fraught with anxiety. The psychological tendency is to take refuge in what we take to be certain, ignoring the likelihood that it is illusory.

Sentience arose in organisms as a guide to survival, an internal model of the world. Self-consciousness arose—at least in humans—as a further survival tool, the ability to transcend useful appearances in favor of potentially more useful ones. It comes, however, at the price of ultimate uncertainty. One may prefer the trance to the anxiety. From a species point of view, that may be a luxury that expendable individuals can afford, which the planetary collective cannot. Individuals and even nations can stand or fall by their mere beliefs, through some version of natural selection. But what inter-galactic council will be there to give the Darwin Award to a failed human species?

The equation of experience

I cringe when I hear people speak casually of their reality, since I think what they mean is their personal experience and not the reality we live in together. Speaking about “realities” in the plural is more than an innocent trope. It is often a way to justify belief or opinion, as though private experience is all that matters because there is no objective reality to arbitrate between perspectives, or because the task of approaching it seems hopeless. But clearly there is an objective reality of nature, even if people cannot agree upon it, and what we believe certainly does matter to our survival. So, it seems important to express the relationship between experience and reality in some clear and concise way.

The “equation of experience” is my handy name for the idea that everything a person can possibly experience or do—indeed all mental activity and associated behavior—is a function of self and world conjointly. Nothing is ever purely subjective or purely objective. There is always a contribution to experience, thought, and behavior from within oneself, and likewise a contribution from the world outside. On the analogy of a mathematical function, this principle reads E = f(s,w). The relative influence of these factors may vary, of course. Sensory perception obviously involves a strong contribution from the external world; nevertheless, the organization of the nervous system determines how sensory input is processed and interpreted, resulting in how it is experienced and acted upon. At the other extreme, the internal workings of the nervous system dominate hallucination and imagination; nevertheless, the images and feelings produced most often refer to the sort of experiences one normally has in the external world.

Of course, one should define terms. Experience here means anything that occurs in the consciousness of a cognitive agent (yet the “equation” extends to include behavior that other agents may observe, whether one is conscious of it or not). Self means the cognitive agent to whom such experience occurs—usually a human being or other sentient organism. World means the real external world that causes an input to that agent’s cognitive system.

But the “equation” can be put in a more general form, which simply expresses the input/output relations of a system. Then, O = f(i_s, i_w), where O is the output of the agent or system, i_s is input from the system itself, and i_w is input from the world outside the system or agent. This generalization does not distinguish between behavior and experience. Either is an “output” of a bounded system defined by input/output relations. For organisms, the boundary is the skin, which also is a major sensory surface.
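
A minimal sketch may help fix the idea. The toy function, weights, and numbers below are hypothetical illustrations of the general form O = f(i_s, i_w), not anything the text itself proposes; the point is only that neither the self factor nor the world factor by itself determines the output.

    # Toy illustration (Python) of O = f(i_s, i_w): output depends jointly on
    # input from the system itself (i_s) and input from the world (i_w).
    # The linear form and the weights are arbitrary, chosen only for clarity.
    def output(i_s: float, i_w: float) -> float:
        return 0.6 * i_s + 0.4 * i_w

    # The same world input yields different outputs for different internal states:
    print(output(i_s=1.0, i_w=2.0))  # 1.4
    print(output(i_s=0.0, i_w=2.0))  # 0.8

Needless to say, no claim is made here that experience is linear or even numerically measurable; the sketch merely restates the joint dependence in operational terms.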

While it seems eminently a matter of common sense that how we perceive and behave is always shaped both by our own biological nature and by the nature of the environing world, human beings have always found reasons to deny this simple truth, either pretending to an objective view independent of the subject, or else pretending that everything is subjective or “relative” and no more than a matter of personal belief.

The very ideal of objectivity or truth attempts to factor out the subjectivity of the self. Science attempts to hold the “self” variable constant, in order to explore the “world” variable. In principle, it does this by excluding what is idiosyncratic for individual observers and by imposing experimental protocols and a common mathematical language embraced by all standardized observers. Yet, this does not address cognitive biases that are collective, grounded in the common biology of the species. Science is, after all, a human-centric enterprise. To focus on one “variable” backs the other into a corner, but does not eliminate it.

Even within the scientific enterprise, there are conflicting philosophical positions. The perennial nature versus nurture debate, for example, emphasizes one factor over the other—though clearly the “equation” tells us there should be no such debate because nature and nurture together make the person! At the other extreme, politics and the media amount to a free-for-all of conflicting opinions and beliefs. Consensus is rarely attempted—which hardly means that no reality objectively exists. Sadly, “reality” is a wild card played strategically according to the subjective needs of the moment, by pointing disingenuously to select information to support a viewpoint, while an opposing group points to other select information. The goal is to appear clever and right—and to belong, within the terms of one’s group—precisely by opposing some other group, dismissing and mocking their views and motives. Appeal to reality becomes no more than a strategy of rhetoric, rather than a genuine inquiry into what is real, true, or objective.

How does such confusion arise? The basic challenge is to sort out the influence of the internal and external factors, without artificially ignoring one or the other. However, an equation in two variables cannot be solved without a second equation to provide more information, or without deliberately holding one variable constant, as in controlled experiments. The problem is that in life there is no second equation and little control. This renders all experience ambiguous and questionable. But that is a vulnerable psychological state, which we are programmed to resist. On the one hand, pretending that the “self” factor has no effect on how we perceive reality is willful ignorance. On the other hand, so is pretending that there is no objective reality or that it can be taken for granted as known. How one views oneself and how one views the world are closely related. Both are up for grabs, because they are themselves joint products of inner and outer factors together. How, then, to sort out truth?
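
The dilemma can be illustrated in the same hypothetical toy terms as the sketch above: one and the same output can arise from many different mixtures of self and world, so the output alone underdetermines both factors unless one of them is deliberately held constant.

    # Same hypothetical toy function as in the earlier sketch.
    def output(i_s: float, i_w: float) -> float:
        return 0.6 * i_s + 0.4 * i_w

    # Different combinations of "self" and "world" produce the identical output:
    print(output(i_s=2.0, i_w=0.0))  # 1.2
    print(output(i_s=0.0, i_w=3.0))  # 1.2
    print(output(i_s=1.0, i_w=1.5))  # 1.2

    # Only by holding one input constant, as a controlled experiment tries to do,
    # can the other be recovered from the output.

In life, as noted, there is no such control; hence the ambiguity.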

I think the first step is to recognize the problem, which is the basic epistemic dilemma facing embodied biological beings. We are not gods but human creatures. In terms of knowing reality, this means acknowledging the subjective factor that always plays a part in all perception and thought. It means transcending the naïve realism that is our biological inheritance, which has served us well in many situations, but has its limits. We know that appearances can be deceptive and that communication often serves to deceive others. Our brains are naturally oriented outward and toward survival; we are programmed to take experience at “face value,” which is as much determined by biological or subjective need as by objective truth. We now know something of how our own biases shape how we perceive and communicate. We know something about how brains work to gain advantage rather than truth. Long ago we were advised to “Know Thyself.” There is still no better recipe for knowing others or knowing reality.

The second—and utterly crucial—step is to act in good faith, using that knowledge. That is, to intend truth or reality rather than personal advantage. To aim for objectivity, despite the stacked odds. This means being honest with oneself, trying earnestly to recognize one’s personal bias or interest for the sake of getting to a truth that others who share that aim and practice that sincerity can also recognize. Holding that intention in common allows convergence. Intending to find that common ground presumes that it is mutually approachable by those who act in good faith. In contrast, the attitude of all against all tacitly denies the common ground of an objective reality.

No doubt convergence is easier said than done, for the very reasons here discussed—namely, our biological nature and the ambiguity inhering in all experience because of the inextricable entanglement of subject and object. With no god’s-eye view, that is the disadvantage of being a finite and limited creature, doomed to see everything through a glass darkly. But there is also an advantage in knowing this condition and the limitations it imposes. To realize the influence of the mind over experience is sobering but also empowering. We are no longer passive victims of experience but active co-creators of it, who can join with others of good will to create a better world.

Compromise is a traditional formula to overcome disagreement; yet, it presumes some grumbling forfeit by all parties for the sake of coming to a begrudged decision. In the wake of the decision, it assumes that people will nevertheless continue to differ and disagree, in the same divergent pattern. There is an alternative. While perceiving differently, we can approach agreement from different angles by earnestly intending to focus on the reality that is common to all. Then, like the blind men trying to describe the elephant in the room, each has something important to contribute to the emerging picture upon which the fate of all depends.

From taking for granted to taking charge

In our 200,000 years as a species, humankind has been able to take for granted a seemingly boundless ready-made world, friendly enough to permit survival. Some of that was luck, since there were relatively benign periods of planetary stability, and some of it involved human resourcefulness in being able to adapt or migrate in response to natural changes of conditions—even changes brought about by people themselves. Either way, our species was able to count on the sheer size of the natural environment, which seemed unlimited in relation to the human presence. (Today we recognize the dimensions of the planet, but for most of that prehistory there was not even a concept of living on a “planet.”) There was no need—and really no possibility—to imagine being responsible for the maintenance of what has turned out to be a finite and fragile closed system. Perhaps there was a local awareness among hunter-gatherers about cause and effect: to browse judiciously and not to poo in your pond. Yet the evidence abounds that early humans typically slaughtered the great beasts to extinction. Once “civilized,” the ancients cut down great forests—and even bragged about it, as Gilgamesh pillaged the cedars of Lebanon for sport.

Taming animals and plants (and human slaves) for service required a mentality of managing resources. Yet, this too was in the context of a presumably unlimited greater world that could absorb any catastrophic failures in a regional experiment. We can scarcely know what was in the minds of people in transition to agriculture; but it is very doubtful that they could have thought of “civilization” as a grand social experiment. Even for kings, goals were short-term and local; for most people, things mostly changed slowly in ways they tried to adjust to. Actors came and went in the human drama, but the stage remained solid and dependable. Psychologically, we have inherited that assumption: human actions are still relatively local and short-sighted; the majority feel that change is just happening around them and to them. The difference between us and people 10,000 years ago (or even 500 years ago) is that we finally know better. Indeed, only in the past few decades has it dawned on us that the theatre is in shambles.

I grew up in 1950s Los Angeles, when gasoline was 20 cents a gallon, and where you might casually drive 20 miles to go out for dinner. As a child, I took that environment for the whole world, totally “natural,” just how things should be. My job was to learn the ropes of that environment. But, of course, I had little knowledge of the rest of the planet and certainly no notion of a ‘world’ in the cultural sense. Only when I traveled to Europe as a young man did I experience something different: instead of the ephemera of L.A., an environment that was old and made of stone, in which people organized life in delightfully different ways. No doubt that cultural enlightenment would have been more extreme had I traveled in Africa instead of Europe. But it was the beginning of an awareness of alternatives. Still, I could not then imagine that cheap gas was ruining the planet. That awareness only crept upon the majority of my generation in our later years, coincident with the maturing consciousness of the species.

We’ve not had the example of another planet to visit, whose wise inhabitants have learned to manage their own numbers and effects in such a way as to keep the whole thing going. We have only imagination and history on this planet to refer to. Yet, the conclusion is now obvious: we have outgrown the mindset of taking for granted and must embrace the mindset of taking charge if we are to survive.

What happened to finally bring about this species awakening? To sum it up: a global culture. When people were few, they were relatively isolated, the world was big, and the capacity to affect their surroundings was relatively small. Now that we are numerous and our effects highly visible, we are as though crowded together in a tippy lifeboat, where the slightest false move threatens to capsize Spaceship Earth. Through physical and digital proximity, we can no longer help being aware of the consequences of our own existence and attendant responsibility. Yet, a kind of schizophrenia sets in from the fact that our inherited mentality cannot accommodate this sudden awareness of responsibility. It is as though we hope to bring with us into the lifeboat all our bulky possessions and conveniences and all the behaviors we took for granted as presumed rights in a “normally” spacious and stable world.

We are the only species capable of deliberately doing something about its fate. But that fact is not (yet) engrained in our mentality. Of course, there are futurists and transhumanists who do think very deliberately about human destiny, and now there are think tanks like the Future of Humanity Institute. Individual authors, speakers, and activists are deeply concerned about one dire problem or another facing humanity, such as climate change, social inequity, and continuing nuclear threat, along with the brave new worlds of artificial intelligence and genetic engineering. Some of them have been able to influence public policy, even on the global scale. Most of us, however, are not directly involved in those struggles, and are only beginning to be touched directly by the issues. Like most of humanity throughout the ages, we simply live our lives, with the daily concerns that have always monopolized attention.

However, the big question now looming over all of us is: what next for humanity? It is not about predicting the future but about choosing and making it. (Prediction is just another way of bracing ourselves for what could happen, and we are well past that.) We know what will happen if we remain in the naïve mindset of all the creatures that have competed for existence in evolutionary history, creatures that passively suffered changes they could not conceive, let alone consciously control, even when they had contributed to those changes. On that path, Homo sapiens will inevitably go extinct, like the more than 99% of all species that have ever existed; given our accelerating lifestyle, likely sooner rather than later. We are forced to the terrible realization that only our own intervention can rectify the imbalances that threaten us. Let us not underestimate the dilemma: for, we also know that “intervention” created many of those problems in the first place!

Though it is the nature of plans to go awry, humanity needs a plan and the will to follow it if we are to survive. That requires a common understanding of the problems and agreement on the solutions. Unfortunately, that has always been a weak point of our species, which has so far been unable to act on a species level, and until very recently has been unable even to conceive of itself as a unified entity with a possible will. We are stuck at the tribal level, even when the tribes are nations. More than ever we need to brainstorm toward a calm consensus and collective plan of action. Ironically, there is now the means for all to be heard. Yet, our tribal nature and selfish individualist leanings result in a cacophony of contradictory voices, in a free-for-all bordering on hysteria. There is riot, mutiny and mayhem on the lifeboat, with no one at the tiller. No captain has the moral (much less political) authority to steer Spaceship Earth. What can we then hope for but doom?

Some form of life will persist on this planet, perhaps for several billion years to come. But the experiment of civilization may well fail. And what is that experiment but the quest to transcend the state of nature given us, which no other creature has been able to do? We were not happy as animals, having imagined the life of gods. With one foot on the shore of nature and one foot in the skiddy raft of imagination, we do the splits. The two extreme scenarios are a retreat into the stone age and a brash charge into a post-humanist era. Clearly, eight billion people cannot go back to hunting and gathering. Nor can they all become genetically perfect immortals, colonize Mars, or upload to some more durable form of embodiment. The lifeboat will empty considerably if it does not sink first.

Whatever the way forward, it must be with conscious intent on a global level. We will not go far bumbling along as usual. Whether salvation is possible or not, we ought to try our best to achieve the best of human ideals. Whether the ship of state (or Spaceship Earth) floats or sinks, we can behave in ways that honour the best of human aspirations. To pursue another metaphor: the board game of life, though ever changing, has at any given moment its rules and other elements. The point is not just to win but also to play well, even as we attempt to re-define the rules and even the game. That means to behave nobly, as though we are actually living in that unrealized dream. Our experiment all along has been to create an ideal world—using the resources of the real one. Entirely escaping physical embodiment is a pipe-dream; but modifying ourselves physically is a real possibility. In a parallel way, a completely man-made world is an oxymoron, for it will always exist in the context of some natural environment, with its own rules—even in outer space. Yet coming to a workable arrangement with nature should be possible. After all, that’s what life has always done. With no promise of success, our best strategy is a planetary consciousness willing to take charge of the Earth’s future. To get there, we must learn to regulate our own existence.