To choose or not to choose

Choice is often fraught with anxiety. We can agonize over decisions and are happy enough when an outcome is decided for us. That’s why we flip coins. Perhaps this says only that human beings loathe responsibility, which means accountability to others for possible error. We are essentially social creatures, after all. The meaning and value of our acts are always in relation to others, whose favour we curry and whose judgment we fear. Even those unconcerned about reputation while they live may hope for the eventual approval of posterity.

Perhaps there is a more fundamental reason why choice can be anxious. We have but one life. To choose one option or path seems to forfeit others. The road taken implies other roads not taken; one cannot have one’s cake and eat it too. Choice implies a loss or narrowing of options, which perhaps explains why it evokes negative feelings: one grieves in advance the loss of possible futures, and fears the possibility of choosing the wrong future. Nature created us as individual organisms, distinct from others. That means we are condemned to the unique experience and history of a particular body, out of all the myriad life histories that others experience. Each of us has to be somebody, which means we must live a particular life, shaped by specific choices. We may regret them, but we can hardly avoid them. A life is defined by choices made, which can seem a heavy burden.

Yet, choice can also be viewed more positively as freedom. Choice is the proactive assertion of self and will, not a passive forfeit of options. It affords the chance to self-limit and self-define through one’s own actions, rather than be victimized by chance or external forces. To choose is to take a stand, to gather solid ground under one’s feet where there was but nebulous possibility. Rather than remaining vaguely potential, one becomes tangibly actual, by voluntarily sacrificing some options to achieve one’s goals. This is how we bring ourselves into definition and become response-able. We may be proud or ashamed of choices made. Yet, whatever the judgment, one gains experience and density through deliberate action.

To do nothing is also a choice—sometimes the wisest. The positive version of timidity or paralysis is deliberate restraint. Sometimes we chomp at the bit to act, perhaps prematurely, while the wiser alternative is to wait. Instinct and emotion prompt us to react impulsively. To be sure, such fast response serves a purpose: it can mean the difference between life and death. Yet, some situations allow, and even require, restraint and more careful thought. When there is not enough information for a proper decision, sometimes the responsible choice is to wait and see, while gathering more information. This too strengthens character.

Life tests us—against our own expectations and those of others. Perhaps the kindest measure of our actions is their intent: the good outcome hoped for. We may not accurately foresee the outcome, but at least we can know the desire. Yet, even that is no simple matter. For, we are complex beings with many levels of intention, some of which are contradictory or even unknown to us. We make mistakes. We can fool ourselves. The basic problem is that reality is complex, whereas mind and thought, feeling and intention, are relatively simplistic. We are like the blind men who each felt a part of the elephant and came to very different conclusions about the unseen beast that could crush them at any time. With all our pretense to objectivity, perhaps we are the elephant in the room!

Choice can be analog as well as digital. Plants interact with the world more or less in place, continuously responsive to changes in soil condition, humidity, temperature and lighting. Animals move, to pursue their food and avoid becoming food. Their choices have a more discrete character: yes or no. Yet, there are levels and nuances of choice, and choice about choice. We can be passive or aggressive, reactive or proactive. We can choose not to act, to be ready to act, or to seek a general policy or course of action instead of a specific deed. We can opt for a more analog approach, to adjust continuously, to keep error in small bounds, to play it by ear rather than be too decisive and perhaps dangerously wrong.

Of course, one may wonder whether choice and will are even possible. Determinism is the idea that one thing follows inexorably from another, like falling dominoes, with no intervening act of choosing. The physical world seems to unfold like that, following causes instead of goals. And perhaps there is even a limit to this unfolding, where nothing further can happen: the ultimate playing out of entropy. Yet these are ideas in the minds of living beings who do seem to have choice, and who seem to defy entropy. Determinism, and not free will, may well be the illusion. For, while concepts may follow one from another logically, there is (as Hume noted) no metaphysical binding between real events in time. The paradox is that we freely invent concepts that are supposed to tie the universe together—and bind us as well.

Where there is no free choice there is no responsibility. Determinism is a tool to foresee the future, but can also serve as a place of refuge from guilt over the past. If my genes, my upbringing, my culture or my diet made me do it, then am I accountable for my deeds, either morally or before the law? On the other hand, where there is no responsibility, there is no dignity. If my actions are merely the output of a programmed machine, then I am no person but a mere thing. Of what account is my felt experience if it does not serve to inform and guide my behavior? I cannot rightfully claim to be a subject at all—to have my inner life be valued by others—unless I also claim responsibility for my outer life as an agent in the world.

Easier said than done, of course. Supposing that one tries to act morally and for the best, one may nevertheless fail. Worse, perhaps, one may wonder whether one’s thoughts and deeds will make any difference at all in the bigger picture. Especially at this crossroads—of human meddling and eleventh-hour concern for the future of all life—it may seem that the course is already set and out of one’s personal hands. Yet, what is unique about this time is precisely that we are called upon to find how to be personally and effectively responsible for the whole of the planet. The proper use of information in the information age is to enable informed choice and action. That no longer concerns only one’s personal—or local or even national—world, but now the world. This is the meta-choice confronting at least those who are in a position to think about it. Whatever our fate and whatever our folly, we at least bring ourselves more fully into being by choosing to think about it and, hopefully, choosing the right course of action.

A credible story about money as the root of evil

The word ‘credit’, like ‘credible’, comes from the Latin credo, to believe. It refers to the trust that must exist between a borrower and a lender. In his monumental work, Debt: The First 5,000 Years, anthropologist and philosopher-activist David Graeber proposes that credit, in one way or another, is the very basis of sociability and of society. He reverses the traditional dictum in economics that barter came first, then coinage, and finally credit. Quite the contrary: barter was only ever practical in exceptional circumstances; the actual basis of trade for most of human existence was some form of credit. Borrowing worked well in communities where everyone was known and reputation was crucial. Say you need something made, a favour done, or a service performed. You are then indebted to whoever helps you, and at some point you will reciprocate. That sort of cooperation and mutual support is the essence of community.

This is not a review of Graeber’s wide-ranging book or thought, but a reflection on the deep and unorthodox perspective he brings to such questions as: what happens to community when money displaces the honor system of credit? Or: how did the introduction of money change the nature of debt and credit, and therefore society?

Let us note at the outset that many of the evils we associate with money and capitalism already existed in ancient societies that relied on credit, most notably usury. The extortion of “interest” on loans is already a different matter than simply repaying a debt (the “principal”). In a small community, or within families, such extortion would be unfriendly and unconscionable. In larger societies, relations are less personal. The psychological need to honour debt, based on trust, holds over, but without the intimate connection between persons. The debtor—who before was a friend, relative or neighbor—becomes a “stranger,” even when known. The person becomes a thing to exploit; the subject becomes an object.

Lending for gain was no longer a favour to someone in your community, which you knew would eventually be reciprocated fairly. It became something to do for calculated and often excessive profit. It thus became increasingly difficult to repay debts. Securities put up for the loan (even family members or one’s own person!) could be confiscated pending repayment. Usury—and debt in general—became such a problem even in ancient times that kings and rulers were obliged to declare debt amnesties periodically to avoid rebellion. And one of the first things rebels would do was destroy the records of debt. The sacred texts of many religions proscribe usury, but usually only regarding their own people. “Strangers” remained fair game as potential enemies.

The concept of interest has a precedent in the growth of natural systems. Large trees grow from tiny seeds; animal bodies grow from small eggs. Populations expand. Such growth is distinct from static self-maintenance or a population’s self-replenishment. People noticed this surplus when they began to grow crops and manage domesticated animals. The increase of the herd or crop served as metaphor for the interest expected on any sort of “investment.” However, the greedy expectations of loan sharks in all ages usually far exceed the rate of natural growth. Even the “normal” modest return on investment (consistently about 5%) exceeds the rate of growth of natural systems, such as forests. Moreover, there are always limits to natural growth. The organism reaches maturity and stops growing. (The refusal of cells to stop multiplying when they are supposed to is cancer.) A spreading forest reaches a shoreline or a treeline imposed by elevation and cold. The numbers of a species are held in check by other species and by limited resources. Nature as a whole operates within these bounds of checks and balances, which humans tend to ignore.
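
A rough arithmetic sketch makes the gap concrete (the figures below are illustrative, not drawn from Graeber). At the “normal” rate r of 5% per year, a principal P compounds as

\[ A(t) = P\,(1 + r)^{t}, \qquad t_{\text{double}} = \frac{\ln 2}{\ln(1 + r)} \approx 14\ \text{years for } r = 0.05. \]

Over a human lifetime of seventy years the same 5% multiplies the principal roughly thirtyfold (since 1.05^{70} ≈ 30), a curve that no maturing organism, herd, or forest follows.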

Money, credit, and debt are ethical issues because they directly involve how people treat one another. Credit in the old sense—doing a favour that will eventually be returned—involves one way of treating others, quite different from usury, which often resulted in debt peonage (frequently outright slavery). For good reason, usury was frowned upon as a practice within the group—i.e., amongst “ourselves.” The group needed an ethics in place that ensured its own coherence. But as societies expanded and intermingled, membership in the group became muddied. Trade and other relations with other groups created larger groupings. New identities required a new ethics.

Amalgamation led to states. War between states exacerbated the ethical crisis. War was about conquest, which reduced the defeated to chattel (war was another source of slaves). People, like domesticated animals, could become property bought and sold. Slaves were people ripped from their own community, the context that had given them identity and rights. Similarly, domestic animals had been removed from their natural life and context and forced into servitude to people. We may speak even of handmade things as being wrested from their context as unique objects, personally made and uniquely valued, when they enter the marketplace. Manufactured things are designed to be identical and impersonal, not only to economize through mass production, but also to standardize their value. Mass production of standard things matched mass production of money.

Enter coinage. Rather than supply armies through expensive supply lines, rulers could pay soldiers in coin to spend locally instead of pillaging the countryside. These coins could then be returned to the central government in the form of taxes. Coinage standardized value by quantifying it precisely. But it did something more as well. It rendered trade completely impersonal. Before, you had a reciprocal relationship of dependency and trust with your trade partner or creditor—an ongoing relationship. In contrast to credit, the transfer of coins completed the transaction, cancelling the relationship; both parties could walk away and not assume any future dealings. Personal trust was not required because the value exchanged was fixed and clear, transferable, and redeemable anywhere. Indeed, money met a need because people were already involved in trade with people they might never see again and whom they did not necessarily trust. But this was a very different sort of transaction from the personal exchange that bound parties together.

Yet, trust was still required, if on a different level. Using money depends on other people accepting it as payment. While money seemed to be a measure of the value of things, it implicitly depended on trust among people—no longer the direct personal trust between individuals but ongoing faith in the system. Coins had a symbolic value, regulated by the state, independent of the general valuation of the metals they were made of. (The symbolic value was usually greater than the value of the gold, silver or copper, since otherwise the coins would be hoarded.) The shift toward symbolic value was made clear with the introduction of paper money. But in fact, promissory notes had long been used before official paper money or coinage. The transition to purely symbolic (virtual) money was complete when the U.S. dollar was taken off the gold standard in 1971.

Unfortunately, some of the laws restricting usury were abandoned soon after. “Credit,” in its commercialized form, returned with a vengeance. Credit cards and loan sharks aggressively offered indiscriminate lending for the sake of the profit to be gained, never mind the consequences for the borrower. Hence the international financial crisis of 2008—and the personal crises of people who lost their homes, of students who spend half their lives repaying student loans, of consumers always on the verge of bankruptcy, and of publics forced to bail out insolvent corporations.

The idea of credit evolved from a respectable mutual relationship of trust to a shady extortion business. The idea of indebtedness has accordingly long been tinged with sin, as a personal and moral failing. A version of the Lord’s Prayer reads, “forgive us our debts as we forgive our debtors.” (Alternatively: “forgive us our trespasses”, referring to the “sacredness” of private property rights.) As Graeber points out, we generally do not forgive debt, but have made it the basis of modern economics. There is no mention of forgiving the sins of creditors. The “ethics” of the marketplace is a policy to exploit one’s “neighbor,” who can now be anyone in the world—the further out of sight the better.

Usury now deals with abstractions that hide the nature of the activity: portfolios, mutual funds, financial “instruments,” stocks and bonds, “derivatives,” etc. The goal is personal gain, not social benefit, mutual relationship, or helping one another. Cash is going out of fashion in favour of plastic, which is no more than ones and zeros stored in a computer. The whole system is vulnerable to cyberattack. Worse, the confidence that underwrites the system runs on little more than inertia. It will eventually break down, if not renewed by a basis for trust more genuine, tangible and personal.

Apart from climate change, the other crisis looming is the unsustainability of our civilization. The global system of usury (let’s call a spade a spade: we’re talking about capitalism) unreasonably exploits not only human beings but the whole of nature. Like population growth, economic growth cannot continue indefinitely. The sort of growth implied by “progress” is a demented fantasy, with collapse lurking around the corner. Moreover, the fruits of present growth are siphoned by a small elite and hardly shared, while the false promise of a better life for all is the only thing keeping the system going. We cannot be any more ethical in regard to nature than we are in regard to fellow human beings. While people may or may not revolt against the greed of other people, we can be sure that nature will.

Relativity theory and the subject-object relationship

Concepts of the external world have evolved in the history of Western thought, from a naïve realism toward an increasing recognition of the role of the subject in all forms of cognition, including science. The two conceptual revolutions of modern physics both acknowledge the role of the observer in descriptions of phenomena observed. That is significant, because science traditionally brackets the role of the observer for the sake of a purely objective description of the world.  The desirability of an objective description is self-evident, whether to facilitate control through technology or to achieve a possibly disinterested understanding. Yet the object cannot be truly separated from the subject, even in science.

Knowledge of the object tacitly refers back to the participation of the observer as a physical organism, motivated by a biologically-based need to monitor the world and regulate experience. On the other hand, knowledge may seem to be a mental property of the subject, disembodied as “information.” However, the subject is necessarily also an object: there are no disembodied observers. Information, too, is necessarily embodied in physical signals.

A characteristic of all physical processes, including the conveyance of signals, seems to be that they take time and involve transfers of energy. These facts could long be conveniently ignored in the case of information conveyed by means of light, which for most of human history seemed instantaneous and of negligible physical effect. Eventually, it was realized through astronomical observation (Rømer), in experiment (Fizeau), and in theory (Maxwell) that the speed of light is finite and definite, though very large. Since that was true all along, it could have posed a conceptual dilemma for physicists long before the late 19th century, given that the foundation of Newtonian physics was instantaneous action at a distance. Even for Einstein and his contemporaries, however, the approach to problems resulting from the finite speed of light was less about incorporating the subject into an objective worldview than about compensating for the subject’s involvement in order to preserve that worldview. Einstein’s initial motivation for relativity theory lay less in the observational consequences of the finite speed of light than in resolving conceptual inconsistencies in Maxwell’s electrodynamics.

Nevertheless, perhaps for heuristic reasons, Einstein began his 1905 paper with an argument about light signals, in which the signal was defined to travel with the same finite speed for all observers. This, of course, violated the classical rule for the addition of velocities. It skirted the issue of the physical nature of the signal (particle or wave?), since some observations seemed to defy either the wave theory or the emission theory of light. Something had to give, and Einstein decided it was the concept of time. What remained implicit was the fact that non-local measurement of events in time or space must be made via intervening light signals.
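
For concreteness, the conflict can be stated with the standard formulas (not spelled out in the original argument): under the classical rule velocities simply add, whereas the relativistic composition law that replaces it leaves the speed of light unchanged:

\[ w_{\text{classical}} = u + v, \qquad w_{\text{relativistic}} = \frac{u + v}{1 + uv/c^{2}}. \]

Setting u = c gives w = c for any v, which is just the constancy of the light signal that Einstein postulated.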

When the distant system being measured is in motion with respect to the observer, the latter’s measurement will differ from the local measurement made by an observer at rest in the distant system. The difference will depend on their relative speed as a fraction of the speed of light. By definition, these are line-of-sight effects. By the relativity postulate, the effects must be reciprocal, so that whether the observers are approaching each other or receding, each would perceive the other’s ruler to have contracted and clock to have slowed! Such a conclusion could not be more contrary to common sense. But that meant simply that common sense is based on assumptions that may hold true only in limited circumstances (namely, when the observation is presumed instantaneous); in other words, in circumstances that are non-physical.
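
The magnitude of these reciprocal effects is given by the standard special-relativistic formulas, added here only for reference (v is the relative speed, c the speed of light):

\[ \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad \Delta t' = \gamma\,\Delta t, \qquad L' = L/\gamma. \]

Each observer assigns the other’s clock the dilated interval \Delta t' and the other’s ruler the contracted length L'; at everyday speeds \gamma is indistinguishable from 1, which is why common sense never registers the effect.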

The challenge embraced by Einstein was to achieve coherence within the framework of physics as a logical system, which is a human construct, a product of definitions. Physics may aim to reflect the structure of the real world, but invokes the freedom of the human agent to define its axioms and elements. Einstein postulated two axioms in his famous paper: the laws of physics are the same for observers in uniform relative motion; and the speed of light does not depend on the motion of its source. From these it follows that simultaneity can have no absolute meaning and that measurements involving time and space depend on the observers’ relative state of motion. In other words, the fact that the subject does not stand outside the system, but is a physical part of it, affects how the object is perceived or measured. Yet, a contrary meta-truth is paradoxically also insinuated: to the degree that the system is conceptual and not physical, the theorist does stand outside the system. Einstein’s freedom to choose the axioms he thought fundamental to a consistent physics implied the four-dimensional space-time continuum (the so-called block universe), which consists of objective events, not acts of observation.
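
For reference, the two postulates lead to the Lorentz transformation between frames in uniform relative motion along the x-axis (standard textbook form, using the factor \gamma defined above):

\[ x' = \gamma\,(x - vt), \qquad t' = \gamma\left(t - \frac{vx}{c^{2}}\right). \]

Because t' depends on x as well as on t, events that are simultaneous in one frame are generally not simultaneous in another; this is the precise sense in which simultaneity loses absolute meaning.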

Could other axioms have been chosen—alternatives to his postulates? Indeed, they had been. The problem was in the air in the late 19th century. In effect, Lorentz and FitzGerald had proposed that movement through the ether somehow causes a change in intermolecular forces, so that apparently rigid bodies in motion literally change shape in such a way that rulers “really” contract in length in the direction of motion. This was an ontological (electrodynamic) explanation of the null result of the crucial Michelson-Morley experiment. (Poincaré was also working on an ontological solution.) That approach made sense, since the space between atoms in solid bodies depends on electrical forces. Though Einstein knew about the Michelson-Morley experiment, his epistemic (kinematic) approach did not focus on that experiment, but originated in a youthful thought experiment about what it would be like to travel along with a light beam, and continued with reflections on apparent contradictions in Maxwell’s electrodynamics. Yet, it returned to focus on the physical nature of light, which bore fruit in the equivalence of matter and energy and in General Relativity as a theory of gravitation.

Despite his early positivism, it was Einstein’s lifelong concern to preserve the objectivity, rationality and consistency of physics, the principal challenges to which were the dilemmas that gave birth to the two great modern revolutions, relativity and quantum theory. His solutions involved taking the observer into account, but with an aim to preserve an essentially observer-independent worldview—the fundamental stance of classical physics. While he chose an epistemic over an ontological analysis, he was deeply committed to realism. There were real, potentially observable, consequences to his theories, which have since been confirmed in many experiments. Yet alternative interpretations are conceivable, formulated on the basis of different axioms, to account for the same—mostly subtle—effects. While relativity theory renders appearances a function of the observer’s state of motion, it is really about preserving the form of physical laws for all observers—reasserting the possibility of objective truth.

One ironic consequence is that space and time are no longer considered from the point of view of the observer but are objectified in a god’s-eye view. The four-dimensional manifold is mathematically convenient; yet it also makes a difference in how we understand reality. As a theory of gravitation, General Relativity asserts the substantial existence of a real entity called spacetime. Space and time are no longer functions of the observer and of the means of observation (light); now they have an existence independent of the observer—ironically, much as Newton had asserted. What was grasped as a relationship returned to being a thing.

Even in the Special theory, there is confusion over the interpretation of time dilation. In SR, time dilation was initially a mutually perceived phenomenon, which makes sense as a line-of-sight effect. In modern expositions, however, mechanical clocks are replaced by “light clocks,” and the explanation of time dilation refers to the lengthened path of light in the moving clock. This is no longer a line-of-sight or mutual effect, since the light path is no longer in the direction of motion relative to the observer. Instead, it substitutes a definition of time that circularly depends on light. While “objective” in the sense that it is not mutual, the explanation for the gravitational time dilation of General Relativity rests on an incoherent interpretation of time dilation in SR.
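
The light-clock argument alluded to above runs as follows (a standard derivation, included only to make the criticism concrete): a clock “ticks” each time light traverses an arm of length L set perpendicular to the motion, so the rest-frame tick is \Delta t = 2L/c. Viewed from a frame in which the clock moves at speed v, the light follows a longer diagonal path and the tick lengthens to

\[ \Delta t' = \frac{2L}{c\sqrt{1 - v^{2}/c^{2}}} = \gamma\,\Delta t. \]

The dilation is here derived from the behavior of light itself, which is the circular dependence of time on light that the paragraph above objects to.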

Einstein derived both the famous matter-energy equivalence and General Relativity using arguments based on Special Relativity. These arguments slide inconsistently from an epistemic to an ontological interpretation. While the predictions of GR and E=mc² may be accurate, their theoretical dependence on SR remains unfounded if the effects are purely epistemic: that is, if they do not invoke a physical interaction of things with an ether when they accelerate with respect to it (the so-called clock hypothesis). Or, to put it the other way around, GR and the mass-energy equivalence actually imply such an interaction.

The Lorentz transformation could as well be interpreted in purely epistemic terms, of observers’ mutually relative state of motion, given the finite intermediary of light. Spacetime need not be treated as an object if the subject’s role is fully taken into account. The invariance of the speed of light could have a different interpretation, not as a cosmic speed limit but as a side-effect of light’s unique role as signal between frames of reference. Time dilation could have a different explanation, as a function of moving things physically interacting with an ether.

Form and content

That all things have form and content reflects an analysis fundamental to our cognition and a dichotomy fundamental to language. Language is largely about content—semantic meaning. Yet, it must have syntactical form to communicate successfully. The content of statements is their nominal reason for being; but their effectiveness depends on how they are expressed. In poetry and song, syntax and form are as important as semantics and content. They may even dominate in whimsical expressions of nonsense, where truth or meaning is not the point.

The interplay of form and content applies even in mathematics, which we think of as expressing timeless truths. ‘A=A’ is the simplest sort of logical truth—a tautology, a sheer matter of definition. It applies to anything, any time. By virtue of this abstractness and generality, it is pure syntax. As a statement, it bears no news of the world. Yet, mathematics arose to describe the world in its most general features. Its success in science lies in the ability to describe reality precisely, to pinpoint content quantitatively. The laws of nature are such generalities, usually expressed mathematically. They are thus sometimes considered transcendent in the way that mathematics itself appears to be. That is, they appear as formal rules that govern the behavior of matter. You could say that mathematics is the syntax of nature.

The ancient Greeks formalized the relation between syntax and semantics in geometry. Euclid provided the paradigm of a deductive method, by applying formal rules to logically channel thought about the world, much as language does intuitively. Plato considered the world of thought, including geometry, to be the archetypal reality, which the illusory sensory world only crudely copies. This inverted the process we today recognize as idealization, in which the mind abstracts an essence from sensory experience. For him, these intuitions (which he called Forms) were the real timeless reality behind the mundane and ever-changing content of consciousness.

The form/content distinction pertains perhaps especially to all that is called “art.” Plato had dismissed art as dealing only with appearances, not the truth or reality of things. According to him, art should no more be taken seriously than play. However, it is precisely as a variety of play that we do take art seriously. What we find beautiful or interesting about a work of art most often involves its formal qualities, which reveal the artist’s imagination at play. Art may literally depict the world through representation; but it may also simply establish a “world” indirectly, by assembling pertinent elements through creative play. Whatever its serious themes, all art involves play, both for the producer and the consumer.

Meaning is propositional, the content of a message. It is goal-oriented, tied to survival and Freud’s reality principle. But the mind also picks up on formal elements of what may or may not otherwise bear a message or serve a practical function, invoking more the pleasure principle. The experience of beauty is a form of pleasure, and “form” is a form of play with (syntactic) elements that may not in themselves (semantically) signify anything or have any practical use. Art thus often simply entertains. This is no less the case when it is romanticized as a grand revelation of beauty than when it is dismissed as trivially decorative. Of course, art combines seriousness and play in varying ways that can place greater emphasis on either form or content. While these were most often integrated before the 19th century, relatively speaking modern art liberated form from content.

For most of European history, artists were expected to do representational work, to convey a socially approved message—usually religious—through images. At least in terms of content, art was not about personal expression. That left form as the vehicle for individual expression, though within limits. Artists could not much choose their themes, but they could play with style. The rise of subjectivity thematically in art mirrors the rise of subjectivity in society as a whole; it recapitulates the general awakening of individuality. Yet, even today, a given art work is a compromise between the artist’s vision and social dynamics that limit its expression and reception.

From the very rise of civilization, art had served as propaganda of one sort or another. For example, Mesopotamian kings had built imposing monuments to their victories in war, giving a clear message to any potentially rebellious vassals. Before the invention of printing, pictures and sculptures in Europe had been an important form of religious teaching. Yet, even in churches, the role of iconic art was from the beginning a divisive issue. On the one hand, there was the biblical proscription against idolatry. On the other hand, the Church needed a form of propaganda that worked for an illiterate populace. Style and decoration were secondary to the message and used to support it. In the more literate Islamic culture, the written message took precedence, but the formal element was expressed in the esthetics of highly stylized decorative calligraphy. In either case, the artist usually did little more than execute themes determined by orthodoxy, giving expression to ideas the artist may or may not have personally endorsed. But the invention of printing changed the role of graphic art, as later would the invention of photography.

Except to serve as political or commercial propaganda (advertising), today representational art holds a diminished place, superseded by photography and computer graphics. Yet, artists continue to paint and sculpt figures and scenes as well as decorative or purely abstract creations. In the age of instant images (provided by cell phones, for instance), what is the ongoing appeal of hand-made images? How and why is a painting based on a photograph received differently than the photo itself, and why do people continue to make and buy such a thing? The answer surely lies in the interplay of form and content. The representational content of the photo is a given which inspires and constrains the play with form.

Skill is involved in accurately reproducing a scene. We appreciate demonstrations of sheer skill, so that hyper-realist painting and sculpture celebrate technical proficiency at imitation. Then, too, a nostalgia is associated with the long tradition of representational art. Thirdly, status is associated with art as a form of wealth. An artwork is literally a repository of labor-intensive work, which formerly often embodied precious materials as well as skill. Photographic images are mostly cheap, but art is mostly expensive. Lastly, there are conventional ideas about decoration and how human space should be furnished. Walls must have paintings; public space must have sculptures. In general, art serves the purpose of all human enterprise: to establish a specifically human world set apart from nature. This is no less so when nature itself is the medium, as in gardens and parks that redefine the wild as part of the human world.

Nevertheless, it is fair to say that the essence of modern art—as sheer play with materials, images, forms, and ideas—is no longer representational. Art is no longer bound to a message; form reigns over content. Perhaps this feature is liberating in the age of information, when competing political messages overwhelm and information is often threatening. Art that dwells on play with formal elements refrains from imposing a message—unless its iconoclasm is the message. Abstraction does not demand allegiance to an ideology—except when it is the ideology. But in that case, it is no longer purely play. Art can serve ideology; but it can also reassure by the very absence of an editorial program. Playfulness, after all, does not intimidate or discriminate, though it may be contagious. It engages us on a level above personal or cultural differences.

Decoration has always been important to human beings, who desire to embellish and shape both nature and human artifacts. Decoration may incorporate representation or elements from nature, but usually in a stylized way that emphasizes form, while tailoring it to function. Yet, even decorative motifs constitute an esthetic vocabulary that can carry meaning or convey feeling. A motif can symbolize power and military authority, for example. Such are the fasces and the bull of Roman architecture; the “heroic” architecture, sculpture, and poster art of Fascism or Communism; or the Napoleonic “Empire” style of furnishings. It can be geometric and hard-edged, expressing mental austerity. Equally, it can express a more sensuous and intimate spirit, often floral or vegetal—as in the wallpapers of William Morris and the Art Nouveau style of architecture, furniture, and posters. In other words, decoration too reflects intent. It can reinforce or soften an obvious message. But it can also act independently of content, even subversively to convey an opposing ethos.

Even when no message seems intended, there is a meta-message. Whatever is well-conceived and well-executed uplifts and heartens us because it conveys the caring of the artist, artisan, or engineer. On the other hand, the glib cliché and the shoddily made product spread cynicism and discouragement. They reveal the callousness of the producer and inure us to a world in which quantity prevails over quality. Every made thing communicates an intent, for better or worse.

The power and the glory

Human beings are eminently social creatures. Our religions remind us to love one another and our laws require us to consider each other’s needs. One’s self-image depends on the good opinion of others and on status—comparative standing in a pecking order. Like that of other primates, human society is hierarchical. One strives to be better than others—in one’s own eyes and in theirs. Things that serve as symbols and visible trappings of status are a primary form of wealth. On the other hand, we also seek comfort and ease, and wealth consists of things that make our lot better. We are a masterful species, not content to live in the abject condition of other creatures, nor content with our natural limitations and dependency on nature. We seek power to define and control our environs—collectively to make a specifically human world, and individually to improve our physical well-being and social standing within it.

The other side of wealth is economic dependency. And the other side of status is psychological dependency. Status and power over others complement each other, since status is essentially power that others have over us. There are those who achieve their relative economic sufficiency by exploiting the dependency of others, just as there are those who rely on the opinions of others for their good opinion of themselves. Independence means not only self-sufficiency (of food production, for example) but also immunity to the opinions of others. There are people for whom material ease and social approval are not paramount. Yet, even they might not be able to defend against others who would compel them with the threat of violence. On your own plot of land, it is possible to subsist and thumb your nose at others trying to buy your services (though that provides you no means to control others in turn). But, even if you are food-secure, someone with weapons—or who can pay someone with weapons—can force you to do their bidding or take away your land. When very few own the land required to raise food, most are in an awkward position of dependency.

Control of the physical environment and control over other people dovetail when both can be made to serve one’s purposes. This requires the ability to command or induce others to do one’s bidding. How does this power over others come about? In particular, how does the drive for status mesh with the drive for wealth and the ability to command others? Power must be merited in the eyes of society, and the justification is typically status. How separate can they be? Certainly, we honor some individuals who are not wealthy in material possessions or politically powerful. On the other hand, we may be awed by individuals we despise.

Power can take different forms in different societies. It can be a competition to determine status: who is best able to rule by virtue of their perceived qualities. Leaders are then obeyed out of loyalty to their personal charisma, or because they somehow represent divine authority in the imagination of others. God represents human ideals of omnipotence, omniscience, and benevolence; so the monarch, ruling by divine proxy, symbolically represents these ideals in society. On the other hand, bureaucratic power is rule by impersonal law. Yet, even its ability to require obedience may have originally derived from divine authority, later replaced by institutions such as parliaments and courts of law, enforced by arms. Like values in general, authority, once considered unquestionable because divinely sanctioned, becomes secularized. As the individual’s subjectivity grew more significant in society, so did individual responsibility to endorse ruling authority—through voting in elections, for example. As arbitrary and absolute authority gave way to institutions, equality of subjects under God or king gave way to equality under law. To replace the (theoretically absolute) authority of the monarch with the limited authority of elected representatives changes the political game: from common acceptance of a transcendent reality to a spectator sport of factions supporting competing mortal personalities.

A basic problem of social organization is how to get people to defer to a will that transcends the wills of the individuals constituting society. Just as siblings may bicker among themselves but defer to parental authority, so people seek an impartial, fair, and absolute source of authority—a binding arbitration, so to speak. That is a large part of the appeal of God or king, as givers of law who stand above the law and the fray of mere humans. (Psychologically, the very arbitrariness of royal whim points to the transcendent status of the ruler as above the law, therefore the one who can invest the law with absolute authority.) This is the background of modern deference to codified civil law, which was originally the edict of the king or of God. On the other hand, tradition has the authority of generations. Especially when expressed in writing, precedent has an objective existence that anyone can refer to—and thus defer to—though always subject to interpretation. This too explains the willingness to abide by the law even when in disagreement, provided the law has this explicit objective existence preserved in writing. It may also explain the authority of religious texts for believers.

Effective rule depends not only on charisma but also on delegation of authority to others, to tradition, and to institutions such as laws and bureaucracies. The appeal of law and administration over the whim of rulers lies in its equal application to all: fairness. A law or rule that does not apply to everyone is considered unjust. The other side of such uniformity is that one size must fit all: it is also unfair when individual circumstance is not considered. Acceptance of authority can grow out of the success of a triumphant player or out of the rule of law through tradition and bureaucracy. When it fails, it can degenerate into either agonistic populism or bureaucracy run amok—or both. Either way, when authority breaks down, politics degenerates into a popularity contest among personalities mostly preselected from a privileged class. Indeed, that is what ‘democracy’ is, as we have come to know it! A true democratic system would not involve election at all, but selection by lottery—a civic duty like jury duty or military service.

Wealth has the dimensions of status and power. It consists of some form of ownership. In our society, every form of property is convertible to cash and measurable by it. Money has universal value by common agreement, to purchase what is needed for comfort, to purchase status, and to command others by purchasing their services. The rich enjoy the use of capital (property used to gain more wealth), the ability to command a wide variety of services money can buy, and the status symbols it can buy: artworks, jewelry, luxury cars and boats, villas maintained by servants, etc. Yet, most people have little capital and their wealth is little more than the momentary means to survive.

In general, money is now the universal measure of value and success. It also enables the accumulation of capital. Yet, status and power may well have been separate in societies that did not use money as we do. Without money as a medium of exchange, possessions alone cannot serve to command others. There must also be the ability to get others to do one’s bidding by paying them or by coercing them by (paid) force of arms. Without money, as a standard quantized medium of exchange, trade must be a direct exchange of goods and services—i.e., barter. All dollars are created equal (just as all people are, theoretically before the law). But the universal equality of units of money only led to its unequal distribution among people. In that sense, money is the root of economic inequality, if not of all evil. If only barter were possible, it would be difficult (short of outright theft) for one person to accumulate very much more than another. Money promotes plunder, legal and otherwise, by its very intangibility and ease of passing from hand to hand.

We are used to the idea of respecting property ownership and obeying the law, and to hierarchical structures in which one follows orders. Some indigenous societies simply rejected the idea of obeying orders or telling others what to do. Status was important to them, but not power over others. Or, rather, they took measures against the possibility of institutionalized power relations in their society. We tend to project modern power relations and structures back upon the past, so that the quest to understand the origins of power presumes current understandings and arrangements. This can blind us to alternative forms of political process, to real choice we may yet have.

Hardly anyone now could disagree with Plato’s idea that only a certain type of well-motivated and wise individual is truly qualified to lead society. That would mean someone unmotivated by status, wealth or power. But there does not seem to be a modern version of his Academy to train statespersons. (Instead, they graduate from business schools or Hollywood.) There are think tanks, but not wisdom tanks. If the political task is to plan humanity’s future, it might better be done by a technocracy of experts in the many disciplines relevant to that task, including global management of population and resources. They would make and enforce laws designed to ensure a viable future.

Such a governing committee might operate by consensus; but society as a whole (potentially the world) would not be ruled by democratically elected representatives. Instead, staggered appointments would be drawn by lottery among qualified candidates. The term of office would be fixed, non-renewable, and only modestly paid. This arrangement would bypass many of the problems that plague modern democracies, beginning with de facto oligarchy. There would be no occasion to curry favor with the public nor fear its disaffection, since the “will of the people” would be irrelevant. Hence, the nefarious aspects of social media (or corporately controlled official media) wouldn’t touch the political process. There would be no election campaigns, no populist demagoguery, no contested voting results, no need for fake news or disinformation. (Validation of knowledge within scientific communities has its own well-established protocols that remain relatively immune to the toxic by-products and skepticism of the Internet Age.)

Admittedly, members of this governing committee would not be immune to bribery or to using the office for personal benefit (just as juries and judges are sometimes corrupted). Spiritual advice before the modern age was to be in the world and not of it. Taking that seriously today may be the only cure for humanity’s age-old obsession with power and glory. Still, technocracy might be an improvement over the present farce of democracy.

[Acknowledgement: many of the ideas in this post were inspired by The Dawn of Everything by David Graeber and David Wengrow, McClelland and Stewart, 2021—a challenging and rewarding read.]

The mechanist fallacy and the prospect of artificial life

The philosophy of mechanism treats all physical reality as though it were a machine. Is an organism a machine? Under what circumstances could a machine become an organism? Clear answers to such questions are important to evaluate the feasibility and desirability of artificial life.

The answer to the first question is negative: an organism is not a machine, because it is not an artifact. The answer to the second question follows from an understanding of how the philosophy of mechanism leads falsely to the conclusion that natural reality can be formally exhausted in thought and recreated as artifact. A machine can become an organism only by designing itself, from the bottom up, as organisms in effect have done. An artificial organism cannot be both autonomous and fully subject to human control, any more than natural organisms are. This trade-off presents a watershed choice: to create artifacts as tools of human intent or to foster autonomous systems that may elude human control and pose a threat to us and all life.

Much of the optimism of genetic engineering rests on treating organisms as mechanisms, whose genetic program is their blueprint. But no natural thing is literally a machine, because (as far as we know) natural reality is found, not made. The quest to engineer the artificial organism from the top down rests on the theoretical possibility to analyze the natural one exhaustively, just as simulation relies on formal coding of the thing to be simulated. But, unlike machines and other artifacts, no natural thing can be exhaustively analyzed. Only things that were first encoded can be decoded.

As a way of looking, the philosophy of mechanism produces artifacts at a glance.  While this has been very fruitful for technology, imitating organisms is not an effective strategy for producing them artificially, because it can only produce other artifacts. The implicit idealist faith behind theoretical modelling and the notion of perfect simulation is that each and every property of a thing can be completely represented. A ‘property’, however, is itself an artifact, an assertion that disregards a potential infinity of other assertions. The collection of properties of a natural thing does not constitute it, although it does constitute an artifact.

A machine might be inspired by observing natural systems, but someone designed and built it. It has a finitely delimited structure, a precise set of well-defined parts. It can be dismantled into this same set of parts by reversing the process of construction. The mechanistic view of the cosmos assumes that the universe itself is a machine that can be deconstructed into its “true” parts in the same way that an engine can be assembled and disassembled. However, we are always only guessing at the parts of any natural system and how they relate to each other. The basic problem for those who want to engineer life is that they did not make the original.

We cannot truly understand the functioning of even the simplest creature and its genetic blueprint without grasping its complex interactions with environments that are the source and reference of its intentionality. Just as a computer program draws not only upon logic and the mechanics of the computer but also upon the semantically rich environment of the programmer (which ultimately includes the whole of the real world), so the developing embryo, for instance, does not simply unfold according to a program spelled out in genes, but through complex chemical interactions with the uterine environment and beyond. The genetic “program”, in other words, is not a purely syntactic system, but is rich in references that extend indefinitely beyond itself. The organism is both causally and intentionally connected to the rest of the world. Simply identifying genetic units of information cannot be taken as exhaustive understanding of the genetic “code”, any more than identifying units of a foreign language as words implies understanding their meaning.

Simulation involves the general idea that natural processes and objects can be reverse-engineered. They are taken apart in thought, then reconstructed as an artifact from the inferred design. The essence of the Universal Machine (the digital computer) is that it can simulate any other machine exhaustively. But whether any machine, program, artifact, model, or design can exhaustively simulate an organism—or, for that matter, any aspect of natural reality—is quite another question.

The characteristic of thought and language, whereby a rose is a rose is a rose, makes perfect simulation seem feasible. But there are many varieties of rose and every individual flower is unique. The baseball player and the pitching machine may both be called pitchers, but the device only crudely imitates the man, no matter how accurately it hurls the ball. Only in thought are they the “same” action. When a chunk of behavior (whether performed by a machine or a natural creature) seems to resemble a human action, it is implicitly being compared not to the human action itself but to an abstraction (“pitching”) that is understood as the essence of that behavior. Similarly, the essence or structure of an object (the “pitcher”) is only falsely imagined to be captured in a program or blueprint for its construction. Common sense recognizes the differences between the intricate human action of throwing and the mechanical hurling of the ball. Yet, the concept of simulation rests on obscuring such distinctions by conflating all that can pass under a given rubric. The algorithm, program, formalism, or definition is the semantic bottleneck through which the whole being of the object or behavior must be squeezed.

One thing simulates another when they both embody a common formalism. This can work perfectly well for two machines or artifacts that are alternative realizations of a common design. It is circular reasoning, however, to think that the being of a natural thing is exhausted in a formalism that has been abstracted from it, which is then believed to be its blueprint or essence. The structure, program, or blueprint is imposed after the fact, inferred from an analysis that can never be guaranteed complete. The mechanist fallacy implies that it is possible to replicate a natural object by first formalizing its structure and behavior and then constructing an artifact from that design. The artifact will instantiate the design, but it will not duplicate the natural object, any more than an airplane duplicates a bird.

If an organism is not a machine, can a machine be an organism? Perhaps—but only if, paradoxically, it is not an artifact! What begins as an artifact must bootstrap itself into the autonomy that characterizes organism. An organism is self-defining, self-assembling, self-maintaining, self-reproducing—in a word, autopoietic. In order to become an organism, a machine must acquire its own purposes. That property of organisms has come about through natural selection over many generations—a process that depends on birth and death. While a machine exhibits only the intentionality of its designers, the organism derives its own intentionality from participation in an evolutionary contest, through a long history of interactions that matter to it, in an environment of co-participants.

Technological development as we know it expresses human purposes; natural evolution does not. The key concepts that distinguish organism from machine are the organism’s own intentionality and its embodiment in an evolutionary contest. While a machine may be physical, it is not embodied, because embodiment means the network of relationships developed in an evolutionary context. No machine yet, however complex, is embodied in that sense or has its own purposes. Indeed, this has never been the goal of human engineers.

Quite apart from feasibility, we must ask what the point would be of facilitating the evolution of true artificial life, aside from the sheer claim to have done it. The autonomy of organisms limits how they can be controlled. We would have no more control over artificial organisms than we presently have over wild or domesticated ones. We could make use of an artificial ecology only in the ways that we already use the natural one. While it is conceivable that artificial entities could self-create under the right circumstances—after all, life did it—these would not remain within the sort of human control, or even understanding, exerted over conventional machines. We must distinguish clearly between machines that are tools, expressing their designers’ motivations, and machines that are autonomous creatures with their own motivations and survival instincts. The latter, if successful in competing in the biosphere, could displace natural creatures and even all life.

If we wish to retain human hegemony on the planet, there will be necessary limits to the autonomy of our technology. That, in turn, imposes limits on its capabilities and intelligence, especially the sort of general and self-interested intelligence we expect from living beings. We must choose—while we still can—between controllable technology to serve humans and the dubious accomplishment of siring new forms of being that could drive us to extinction. This is a political as well as a design choice. Only clarity of intention can avoid disaster resulting from the naive and confused belief that we can both retain control and create truly autonomous artifacts.

 

Origins of the sacred

Humanity and religion seem coeval. From the point of view of the religious mind, this hardly requires explanation. But from a modern scientific or secular view, religion appears to be an embarrassing remnant. There must be a reason why religion has played such a central and persistent role in human affairs. If not a matter of genes or evolutionary strategy, it must have a psychological cause deeply rooted in our very nature. Is there a core experience that sheds light on the phenomenon of religion?

The uncanny is one response to unexpected and uncontrolled experience. It is not solely the unpredictable external world that confounds the mind; the mind itself can produce from its own depths terrifying, weird, or at least unsettling experiences outside the conscious ego’s comfort zone. One can suffer the troubling realization that the range of possible experience is hardly guaranteed to remain within the bounds of the familiar, and that the conscious mind’s strategies are insufficient to keep it there. The ego’s grasp of this vulnerability, to internal as well as external disturbance, may be the ground from which arises the experience of the numinous, and hence the origin of the notion of the sacred or holy. Essentially it is the realization that there will always be something beyond comprehension, which perhaps underlies the familiar like the hidden bulk of an iceberg.

To actually experience the numinous or “wholly other” seems paradoxical to the modern mind, given that all experience is considered a mediated product of the biological nervous system. For, the noumenon is that which, by Kant’s definition, cannot be experienced at all. Its utter inaccessibility has never been adequately rationalized, perhaps because our fundamental epistemic situation precludes knowing the world-in-itself in the way that we know our sensory experience. Kant acknowledged this situation by clearly distinguishing phenomenal experience from the inherent reality of things-in-themselves—a realm off-limits to our cognition by definition. He gave a name to that transcendent realm, choosing to catalogue it as a theoretical construct rather than to worship it. Yet, reason is a latecomer, just as the cortex is an evolutionary addition to older parts of the brain. We feel things before we understand them. Rudolf Otto called this felt inaccessibility of the inherent reality of things its ‘absolute unapproachability’. He deemed it the foundation of all religious experience. Given that we are crucially dependent on the natural environment, and are also psychologically at the mercy of our own imaginings, I call it holy terror.

In addition to being a property of things themselves, realness is a quality with which the mind imbues certain experiences. Numinosity may be considered in the same light. The perceived realness of things refers to their existence outside of our minds; but it is also how we experience our natural dependency on them. Real things command a certain stance of respect, for the benefit or the harm they can bring. Perhaps perceived sacredness or holiness instills a similar attitude in regard to the unknown. In both cases, the experienced quality amounts to judgment by the organism. Those things are cognitively judged real that can affect the organism for better or worse, and which it might affect in turn. Things judged sacred might play a similar role, not in regard to the body but to the self as a presumed spiritual entity.

The quality of sacredness is not merely the judgment that something is to be revered; nor is holiness merely the judgment that something or someone is unconditionally good. These are socially-based assessments secondary to a more fundamental aspect of the numinous as something judged to be uncanny, weird, otherworldly, confounding, entirely outside ordinary human experience. The uncanny is at once real and unreal. The sacred commands awe in the way that the real compels a certain involuntary respect. Yet, numinous experiences do more than elicit awe. They also suggest a realm entirely beyond what one otherwise considers real. Paradoxically, this implies that we do not normally know reality as it really is.

Indeed, as Kant showed, we cannot know the world as it is “in itself,” apart from the limited mediating processes of our own consciousness. All experience is thus potentially uncanny; the very fact that we consciously experience anything at all is an utter mystery! We can never know with certainty what to make of experience or our own presence as experiencers. It is only through the mind’s chronically inadequate efforts to make sense that anything can ever appear ordinary or profane. Mystery does not just present a puzzle that we might hope to resolve with further experience and thought. Sometimes it is a tangible revelation of utter incomprehensibility, which throws us back to a place of abject dependency.

We are self-conscious beings endowed with imagination and the tendency to imbue our imaginings with realness. We have developed the concept of personhood, as a state distinct from the mere existence of objects or impersonal forces. We seem compelled in general to imagine an objective reality underlying experience. A numinous experience is thus reified as a spiritual force or reality, which may be personified as a “god.” When the relationship of dependence—on a reality beyond one’s ken and control—is thus personified, it aligns with the young child’s experience of early dependence on parents, who must seem all powerful and (ideally) benevolent. Hence, the early human experience of nature as the Great Mother—and later, as God the Father. In the modern view, these family figures reveal the human psyche attempting to come to terms with its dependent status.

But nature is hardly benevolent in the consistent way humans would like their parents to be. Psychoanalysis of early childhood reveals that even the mother is perceived as ambivalent, sometimes depriving and threatening as well as nourishing. The patriarchal god projects the male ego’s attempt to trump the intimidating raw power of nature (read: the mother) by defining a “spiritual” (read: masculine) world both apart from it and somehow above it. The Semitic male God becomes the creator of all. He embodies the ideal father, at once severe and benevolent. But he also embodies the heroic quest to self-define and to re-create the world to human taste. In other words, the human aspiration to become as the gods.

On the one hand, this ideal projects onto an invisible realm the aspiration to achieve the moral perfection of a benevolent provider, and reflects how one would wish others (and nature) to behave. It demands self-mastery, power over oneself. The path of submission to a higher power acknowledges one’s abject dependence in the scheme of things, to resist which is “sin” by definition. On the other hand, it represents the quest for power over the other: to turn the tables on nature, uncertainty, and the gods—to be the ultimate authority that determines the scheme of things.

One first worships what one intends to master. Worship is not abject submission, but a strategy to dominate. Religion demonstrates the human ability to idealize, capture, and domesticate the unknown in thought. It feigns submission to the gods, even while its alter ego—science—covets and acquires their powers. Thus, the religious quest to mitigate the inaccessibility and wrath of God, which lurks behind the inscrutability of nature, is taken over by the scientific quest for order and control. The goal is to master the natural world by re-creating it, to become omniscient and omnipotent.

Relations of domination and submission play out obviously in human history. A divinely authorized social relationship is classically embodied in two kinds of players: kings and peasants. Yet, history also mixes these and blurs boundaries. Like some entropic process, the quest for empowerment is dispersed, so that it becomes a universal goal no longer projected upon the gods or reserved to kings. We see this “democratization” in the modern expectation of social progress through science and global management. While enjoying the benefits of technology, deeply religious people may not share this optimism, holding instead that power rests forever in the inscrutable hands of God. Those who imagine a judgmental, vindictive, and jealous male god have the most reason to be doubtful of human progress, while those who identify with the transcendent aspect of religion are more likely to feel themselves above specific outcomes in the historical fray.

The ability of mind to self-transcend is a double-edged sword. It is the ability to conceive something beyond any proposed limit or system. This enables a dizzying intimation of the numinous; more importantly, it enables the human being to step beyond mental confines, including ideas and fears about the nature of reality and what lies beyond. On the one hand, we know that we know little for certain. To fully grasp that inspires the goosebumps of holy terror. One defensive response is to pretend that some text, creed, or dogma provides an ultimate assurance; yet we know in our bones that is wishful thinking. The experience of awe may incline one to bow down before the Great Mystery. Yet, we are capable of knowledge such as it can be, for which we (not the gods) are responsible. We are cursed and blessed with at least a measure of choice over how to relate to the unknown.

Uncommon sense

Common sense is a vague notion. Roughly it means what would be acceptable to most people. Yet how can there be such a thing as common sense in a divided world? And how can a common understanding of the world be achieved in the face of information that is doubly overwhelming—too much to process and also unreliable?

In half a century, we have gone from a dearth of information crucial for an informed electorate, to a flood of information that people ironically cannot use, do not trust, and are prone to misuse. We now rely less (and with more circumspection) on important traditional appraisers of information, such as librarians, teachers, academics and peer-reviewed journals, text-book writers, critics, censors, journalists and newscasters, civil and religious authorities, etc. The Internet, of course, is largely responsible for this change. On the one hand, it has democratized access to information; on the other, it has shifted the burden of interpreting information—from those trained for it onto the unprepared public, which now has little more than common sense to rely upon to decide what sources or information to trust.

Which brings us to a Catch-22: how to use common sense to evaluate information when the formation of common sense depends on a flow of reliable information? How does one get common sense? It was formerly the role of education to decide what information was worthy of transmission to the next generation, and to impart the wisdom of how to use it. (Also, at a time when there was less specialized expertise, people had a wider general experience and competence of their own to draw upon.) Now there is instant access to a plethora of influences besides the voices of official educators and recognized experts. The nature of education itself is up for grabs in a rapidly changing present and unpredictable future. Perhaps education should now aim at preparation for change, if such is not an oxymoron. That sort of education would mean not learning facts or skills that might soon become obsolete, but meta-skills of how to adapt and how to use information resources. In large part, that would mean how to interpret and reconcile diverse claims.

One such skill is “reason,” meaning the ability to think logically. If we cannot trust the information we are supposed to think about, at least we could trust our ability to think. If we cannot verify the facts presented, at least we can verify that the arguments do not contradict themselves. Training in critical thinking, logic, research protocols, data analysis, and philosophical critique is appropriate preparation for citizenship, if not for jobs. This would give people the socially useful skill to evaluate for themselves information that consists inevitably of the claims of others rather than “facts” naively presumed to be objective. Perhaps that is as close as we can come to common sense in these times.

Since everything is potentially connected to everything else, even academic study is about making connections as well as distinctions. The trouble with academia partly concerns the imbalance between analysis (literally taking things apart) and synthesis (putting back together a whole picture). Intellectual pursuit has come to overemphasize analysis, differentiation, and hair-splitting detail, often to the detriment of the bigger picture. Consequently, knowledge and study become ever more specialized and technical, with the generalist reduced to just another specialty. The result is an ethos of bickering, which serves to differentiate scholars within a niche more than to sift ideas for the sake of a greater synthesis. This does not serve as a model of common sense for society at large.

Technocratic language makes distinctions in the name of precision, but obstructs a unifying understanding that could be the basis for common sense. Much technical literature is couched in language that is simply inaccessible to lay people. Often it is spiced with gratuitous equations, graphs, and diagrams, as though sheer quantification or graphic summaries of data automatically guarantee clarity or plausibility, let alone truth. Sometimes the arguments are opaque even to experts outside that field. Formalized language and axiomatic method are supposed to structure thought rigorously, to facilitate deriving new knowledge deductively. Counter-productively, a presentation that serves ostensibly to clarify, support, and expand on a premise often seems to obfuscate even the thinking of those presenting it. How can the public assimilate such information, which deliberately misses the forest for the trees? How can we have confidence in complex argumentation that pulls the wool over the eyes even of its proponents?

Academic writing must meet formal requirements set by the editors of journals. There are motions to go through which have little to do with truth. Within such a framework, literary merit and even skill at communication are not required. Awkward complex sentences fulfill the minimal requirements of syntax. While this is frustrating for outsiders, such formalism permits insiders to identify themselves as members of an elite club. The danger is inbreeding within a self-contained realm. When talking to their peers, academics may feel little need to address the greater world.

For the preservation of common sense, an important lay skill might be the ability to translate academese, like legal jargon, into plain language. One must also learn to skim through useless or misleading detail to get to the essential points. Much popular non-fiction, like some academic books, ironically has few main ideas (and sometimes only one), amply fluffed out to book length with gratuitous anecdotes to appeal to a wider audience. Learning to recognize the essential and sort the wheat from the chaff now seems like a basic survival skill even outside academia.

Perhaps as a civilization we have simply become too smart for our own good. There is now such a profusion of knowledge that not even the smartest individuals, with the time to read, can keep up with it. Somehow the information bureaucracy works to expand technological production. But does it work to produce wisdom that can direct the use of technology? The means exist for global information sharing and coordination, but is there the political will to do the things we know are required for human thriving?

Part of the frustration of modern times is the sense of being overwhelmed yet powerless. We may suffer in having knowledge without the power to act effectively, as though we had heightened sensation despite physical paralysis. Suffering is a corollary of potential action and control. Suffering can occur only in a central nervous system, which serves both to inform the creature of its situation and to provide some way to do something about it. Sensory input is paired with motor output; perception is paired with response.

Cells do not suffer, though they may respond and adapt (or die). It is the creature as a whole that suffers when it cannot respond effectively. If society is considered an organism, individual “cells” may receive information appropriate at the creature level yet be unable to respond to it at that level. Perhaps that is the tragedy of the democratic world, where citizens are expected to be informed and participate (at least through the vote) in the affairs of society at large—and to share its concerns—but are able to act only at the cellular level. To some extent, citizens have the same information available to their leaders, who are often experts only in the art of staying in power. Citizens may even have a better idea of what to do, but they are not positioned to act on it.

Listening to the news is a blessing when it informs you about something you can plausibly do. Even then, one must be able to recognize what is actual news and distill it from editorial, ideology, agenda, and hype. Otherwise it is just another source of anxiety, building a pressure with no sensible release. To know what to do, one must also know that it is truly one’s own idea and free decision and not a result of manipulation by others. That should be the role of common sense: to enable one to act responsibly in an environment of uncertainty.

Unfortunately, human beings tend to abhor uncertainty—a dangerous predicament in the absence of reliable information and common sense. The temptation is to latch onto false certainties to avoid the sheer discomfort of not knowing. These can serve as pseudo-issues, whose artificial simplicity functions to distract attention from problems of overwhelming complexity. Pseudo-issues tend to polarize opinion and divide people into strongly emotional camps, whose contentiousness further distracts attention from the true urgencies and the cooperative spirit required to deal with them. While common sense may be sadly uncommon, it remains our best hope.

Life and work in the paradise of machines

What would we do if we didn’t have to do anything? What would a world be like where nearly all work is done by machines? If machines did all the production, humans would have to find some other way to occupy their time. They would also have to find some other way to justify the cost to society for their upkeep and their right to exist. In the current reality, one’s income is roughly tied to one’s output—though hardly in an equitable way. Investors and upper management are typically rewarded grossly more than employees for their efforts. Yet their needs as organisms are no greater. In a world where all production and most services would be done by machines, human labour would no longer be the basis for either the production or the distribution of wealth. Society would have to find some other arrangement.

In that situation, a basic income could be an unconditional human right. When automation meets all survival needs, food, housing, education and health care could be guaranteed. All goods and services necessary for living a satisfying life would be a birthright, so that no one would be obliged to work in order to live. Time and effort would be discretionary and uncoupled from survival. What to do with one’s time would not be driven by economic need but by creative vision. Thus, the challenge to achieve freedom from toil cannot be separated from the problem of how to distribute wealth, which we already face. Nor can it be separated from the question of what to do with free time, which in turn cannot be separated from how we view the purpose of life.

As biological creatures, our existence is beholden to natural laws and biological necessities. We need food, shelter and clothing and must act to provide for these needs. A minimal definition of work is what must be done to sustain life. The hand-to-mouth subsistence of pre-industrial societies involved a relatively direct relationship between personal effort and survival. Industrial society organizes production by divisions of labour, providing an alternative concept of work with a less direct relationship. Production involves the cooperation of many people, among whom the resulting wealth must somehow be divided up. Work takes on a different meaning as the justification for one’s slice of the economic pie. It is less about production, per se, than the rationale for consumption: a symbolic dues paid to merit one’s keep and secure one’s place on the planet.

Early predictions that machines would create massive unemployment have not materialized. Nor have predictions that people would work far less because of automation. Instead, new forms of employment have replaced the older ones now automated, with people typically working longer hours. Whether or not these forms of work really add to the general wealth and welfare, they serve to justify the incomes of new types of workers. As society adjusts to automation, wealth is redistributed accordingly, though not equitably. Work is redefined but not reduced. In the present economy, those who own the means of production benefit most and control society, in contrast to those who perform labour. When machines are both the means of production and labour combined, how will ownership be distributed? What would be the relationship between, for example, 99% of people unemployed and the 1% who own the machines?

With advances in AI, newly automated tasks continue to encroach on human employment. In principle, any conceivable activity can be automated; and any role in the economy can be taken over by machines—even war, government, and the management of society. We are talking, of course, about superintelligent machines that are better than humans at most, if not all, tasks. But better how, according to which values? If we entrust machines to implement human goals efficiently, why not entrust them to set the goals as well? Why not let them decide what is best for us and sit back to let them provide it? On the one hand, that seems like a timeless dream come true, freedom from drudgery at last. Because physical labour is tiring and wears on the body, we may at least prefer mental to physical activity. The trend has been to become more sedentary, as machines take over grunt work and as forms of work evolve that are less physical and more mental. White-collar work is preferred to blue-collar or no-collar, and rewarded accordingly. Yet work is still tied to survival.

Humans have always struggled against the limitations of the body, the dictates of biology and physics, the restrictions imposed by nature. In particular, that struggle has been for freedom from the work required to maintain life. In Christian culture, work was a punishment for original sin: the physical pain attending the sweat of the brow and the labour of childbirth alike. Work has had a redeeming quality, as an expiation or spiritual cleanse. The goal of our rebellion against the natural condition is return to paradise, freedom again from painful labour or any effort deemed unpleasant. Our very idea of progress implies the increase of leisure, if not immediately then in the long term: work now for a better future. This has guided the sort of work that people undertake, resulting in the achievements of technology, including artificial intelligence. Humans first eased their toil by forcing it upon animals and other humans they enslaved. Machines now solve that moral dilemma by performing tasks we find burdensome. So far at least, they do not tire, or suffer, or rebel against their slavery.

On the other hand, humans have also always been creative and playful, pursuing activity outside the mandate of Freud’s reality principle and the logic of delayed gratification. We find direct satisfaction in accomplishment of any sort. We deliberately strain the body in exercise and sport, climb mountains for recreation, push our physical limits. We seek freedom from necessity, not from all activity or effort. We covet the leisure to do as we please, what we freely decide upon. In an ideal world, then, work is redefined as some form of play or gratuitous activity, liberated from economic necessity. There have always existed non-utilitarian forms of production, such as music, art, dance, hobbies, and much of academic study. Though not directly related to survival, these have always managed to find an economic justification. When machines supply our basic needs, everyone could have the time for pursuits that are neither utilitarian nor economic.

Ironically, some people now express their creativity by trying to automate creativity itself: writing programs to do art, compose music, play games, etc. No doubt there are already robots that can dance. While AI tools and “expert” programs assist scientists with data analysis, so far there are no artificial scientists or business magnates. Yet, probably anything that humans do machines will eventually do at least as well. The advance of AI seems inevitable in part because some people are determined to duplicate every natural human function artificially through technology. There is an economic incentive, to be sure, yet there is also a drive to push AI to ever further heights purely for the creative challenge and the accomplishment. Because this drive often goes unrecognized even by those involved, it is especially crucial to harness it to an ideal social vision if humanity is to have a meaningful future. Where is the reasonable limit to what should be automated? If the human goal is not simply relief from drudgery, but that machines should ultimately do everything for us, does that not imply that we consider all activity onerous? What, then, would be the point of our existence? Are we here just to consume experience, or are we not by nature doers as well?

Some visionaries think that machines should displace human beings, who have outlived their role at the top of an evolutionary ladder. They view the human form as a catalyst for machine intelligence. However, that post-humanist dream is quintessentially a humanist ideal, invoking transcendence of biological limits. It is a future envisioned not by machines or cyborgs but by conventional human beings alive today. To fulfill it, AI would have to embody current human nature and values in many ways—not least by being conscious. Essentially, we are looking to AI for perfection of ourselves—to become or give birth to the gods we have idolized. But AI could only be conscious if it is effectively an artificial organism, vulnerable and limited in some of the ways we are, even if not in all. To create insentient superintelligence merely for its own sake (rather than its usefulness to us) makes no human sense. Art for art’s sake may make sense, but not automation for automation’s sake. Nor can the goal be to render us inactive, relieved even of creative effort. We must come to understand clearly what we expect from machines—and what we desire for ourselves.

On intentionality

Intentionality is an elusive concept that fundamentally means reference of something to something else. Reference, however, is not a property, state, or relationship inhering in things or symbols, nor between them; it is rather an action performed by an agent, who should be specified. It is an operation of relating or mapping one thing or domain to another. These domains may differ in their character (again, as defined by some agent). A picture, for example, might be a representation of a real landscape, in the domain of painted images. As such it refers to the landscape, and it is the painter who does the referring. Similarly, a word or sentence might represent a person’s thought, perception, or intention. The relevant agents, domains, and the nature of the mappings must be included before intentionality can be properly characterized.

In these terms, the rings of a tree, for example, may seem to track or indicate the age of the tree or periods favorable to growth. Yet, it is the external observer, not the tree, who establishes this connection and who makes the reference. Connections made by the tree itself (if such exist) are of a different sort. In all likelihood, the tree rings involve causal but not intentional connections.

A botanist might note connections she considers salient and may conclude that they are causal. Thus, changing environmental conditions can be deemed a cause of tree ring growth. By contrast, it would stretch the imagination to suppose that the tree intended to put on growth in response to favorable conditions. Or that God (or Nature) intended to produce the tree ring pattern in response to weather conditions. These suppositions would project human intentionality where it doesn’t belong. Equally, it would be far-fetched to think that the tree deliberately created the rings in order to store in itself a record of those environmental changes, either for its own future use or for the benefit of human observers. The tree is simply not the kind of system that can do that. The intentionality we are dealing with is rather that of the observer. On the other hand, there are systems besides human beings that can do the kind of things we mean by referring, intending, and representing. In the case of such systems, it is paramount to distinguish clearly the intentionality of the system itself from that of the observer. This issue arises frequently in artificial intelligence, where the intentionality of the programmer is supposed to transfer to the automated system.

The traditional understanding of intentionality generally fails to make this distinction, largely because it is tied to human language usage. “Reference” is taken for granted to mean linguistic reference or something modeled on it. Intentionality is thus often considered inherently propositional even though, as far as we know, only people formulate propositions. If we wish to indulge a more abstract notion of ‘proposition’, we must concede that in some sense the system makes assertions itself, for its own reasons and not those of the observer. If ‘proposition’ is to be liberated from human statements and reasoning, the intention behind it must be conceived in an abstract sense, as a connection or mapping (in the mathematical sense) made by an agent for its own purposes.

Human observers make assertions of causality according to human intentions, whereas intentional systems in general make their own internal (and non-verbal) connections, for their own reasons, regardless of whatever causal processes a human observer happens to note. Accordingly, an ‘intentional system’ is not merely one to which a human observer imputes her own intentionality as an explanatory convenience (as in Dennett’s “intentional stance”). Such a definition excludes systems from having their own intentionality, which reflects the longstanding mechanist bias of western science since its inception: that matter inherently lacks the power of agency we attribute to ourselves, and can only passively suffer the transmission of efficient causes.

An upshot of all this is that the project to explain consciousness scientifically requires careful distinctions that are often glossed over. One must distinguish the observer’s speculations about causal relations—between brain states and environment—from speculations about the brain’s tracking or representational activities, which are intentional in the sense used here. The observer may propose either causal or intentional connections, or both, occurring between a brain (or organism) and the world. But, in both cases, these are assertions made by the observer, rather than by the brain (organism) in question. The observer is at liberty to propose specific connections that she believes the brain (organism) makes, in order to try to understand the latter’s intentionality. That is, she may attempt to model brain processes from the organism’s own point of view, attempting as it were to “walk in the shoes of the brain.” Yet, such speculations are necessarily in the domain of the observer’s consciousness and intentionality. In trying to understand how the brain produces phenomenality (the “hard problem of consciousness”), one must be clear about which agent is involved and which point of view.

In general, one must distinguish phenomenal experience itself from propositions (facts) asserted about it. I am the witness (subject, or experiencer) directly to my own experience, about which I may also have thoughts in the form of propositions I could assert regarding the content of the experience. These could be proposed as facts about the world or as facts about the experiencing itself. Along with other observers, I may speculate that my brain, or some part of it, is the agent that creates and presents my phenomenal experience to “me.” Other people might also have thoughts (assert propositions) about my experience as they imagine it; they may also observe my behavior and propose facts about it they associate with what they imagine my experience to be. All these possibilities involve the intentionality of different agents in differing contexts.

One might think that intentionality necessarily involves propositions or something like them. This is effectively the basis on which an intentional analysis of brain processes inevitably proceeds, since it is a third-person description in the domain of scientific language. This is least problematic when dealing with human cognition, since humans are language users who normally translate their thoughts and perceptions into verbal statements. It is more problematic when dealing with other creatures. However, in all cases such propositions are in fact put forward by the observer rather than by the system observed. (Unless, of course, these happen to be the same individual; but even then, there are two distinct roles.)

The observer can do no better than to theoretically propose operations of the system in question, formulated in ordinary or some symbolic language. The theorist puts herself in the place of the system to try to fathom its strategies—what she would do, given what she conceives as its aims. This hardly implies that the system in question (the brain) “thinks” in human-language sentences (let alone equations) any more than a computer does. But, with these caveats, we can say that it is a reasonable strategy to translate the putative operations of a cognitive system into propositions constructed by the observer.

In the perspective presented here, phenomenality is grounded in intentionality, rather than the other way around. This does not preclude that intentionality can be about representations themselves or phenomenal experience per se (rather than about the world), since the phenomenal content as such can be the object of attention. The point to bear in mind is that two domains of description are involved, which should not be conflated. Speculation about a system’s intentionality is an observer’s third-person description; whereas a direct expression of experience is a first-person description by the subject. This is so, even when subject and observer happen to be the same person. It is nonsense to talk of phenomenality (qualia) as though it were a public domain like the physical world, to which multiple subjects can have access. It is the external world that offers common access. We are free to imagine the experience of agents similar to ourselves. But there is no verifiable common inner world.

All mental activity, conscious or unconscious, is necessarily intentional, insofar as the connections involved are made by the organism for its own purposes. (They may simultaneously be causal, as proposed by an observer.) But not all intentional systems are conscious. Phenomenal states are thus a subset of intentional states. All experience depends on intentional connections (for example, between neurons); but not all intentional connections result in conscious experience.