Schlock and bling

My first understanding of status symbols came from tracing the origin of the shell motif in European architecture and furnishings. The scalloped shell is a symbol of Saint James (as in coquilles St. Jacques). Pilgrims on the Camino de Santiago wore the shell as a sort of spiritual bumper sticker, advertising the journey they had undertaken. The symbol made its way onto chests carried in their entourage and onto inns along the route. Eventually it was incorporated in churches, on secular buildings, and on furniture. Especially in the Baroque period, it became a common decorative motif. It was no longer a literal badge of spiritual accomplishment, but remained by implication a sign of spiritual status—ironic and undeserved.

Religion and power have long been associated. Worldly rulers bolstered their authority as representatives on earth of the divine, when not claiming actual divinity for themselves. Kings and nobles would surround themselves with spiritual symbols to enforce this idea and assure others that their superior status was god-given and well deserved. Their inferiors, desiring that such social standing should rub off on them, made use of the same emblems, now become status symbols completely devoid of religious significance, yet serving to assert their claim to superior class.

It is no coincidence that the powerful have also been rich. Wealth itself thus became a status symbol, based on the notion that the rich, like the noble, deserve their station, which may even be predestined or god-given. Wealth is a sign of merit and superiority. Thus, visible luxury items and baubles are not only attractive and fun adornments, but also set some people above others. Given our competitive nature, gold and jewels, to be treasured, must be relatively rare on earth and reserved for the few.

Wealth has become abstract and intangible in modern times and above all quantitative—electronic digits in bank accounts. Money translates as power, to buy services and goods and command respect. Yet, there remains a qualitative aspect to wealth. In the industrial age of mass production, in which goods and services are widely available, there is nevertheless a range of quality among them. The rich can choose what they view as better quality versions of common items. Hence the eternal appeal of Rolex and its like. How much better do such watches tell time than a fifty-dollar counterpart? Their role is rather as jewelry, to indicate the status of the wearer. In fact, such wristwatches may have all sorts of deliberately useless features. And so with haute couture: dresses so impractical they can be worn only to rare elite functions.

The very nature of status symbols creates paradoxical dilemmas. Everyone wants high status, which by definition is for the few. Street vendors sell counterfeit knock-offs of expensive labels, precisely because—from a distance or to the undiscerning eye—they serve as status symbols as effectively as the brands they mock. This underlines a distinction between what we might call objective and subjective quality. On the level of symbol and first appearance, the rhinestone necklace is equivalent to the diamond version it copies; the size and number of “gems” may be the same. Yet one is a repository of human labour in a way that the other is not. The real diamonds or emeralds, being rare, were mined with difficulty and perhaps great suffering; the metalwork involves hours more effort, first finding and then shaping the gold in a befitting way. This is why art has always been valued as a form of wealth, because of the painstaking effort and intention it embodies. The expensive watch is touted as hand-made.

Is quality in the eye of the beholder? The real goods are wasted on those who can’t tell the difference, let alone afford it. Snobbery and class thus depend on sensibility as well as the quantitative power of money. Money can buy you the trappings of wealth, but can you recognize the real thing from the imitation? You can’t take it with you when you die; but can you at least take it in while alive? Does it make any tangible difference if you cannot? Status symbols do their work, after all, because they are symbolic, which does not entail being genuine. Of course, the buyer should beware. But if you don’t really care, then what difference does quality make, even if you can afford it?

Is there objectively genuine quality? Yes, of course! But to appreciate it requires the corresponding sensibility. We might define quality to mean “objectively better” in some sense—perhaps in making the world a better place? In that case, at least someone must know what is objectively better and why, and be capable of intending and implementing it—for example: designing and producing quality consumer goods. That could entail quite a diversity of features, such as durability, repairability, energy efficiency, recyclability, esthetics, usefulness, etc. Sadly, this is not what we see in the marketplace, which instead tends ever more toward shoddy token items, designed to stand in as knock-offs for the real thing. Designed to take your money but not to last or even to be truly useful.

The rich must have something to spend their monetary digits on; otherwise what is the point of accumulating them? True, economics is a game and there is value and status simply in winning, regardless of the prize. Just knowing (without even vaunting) that one has more points than others reinforces the sense of personal worth. But there is also the temptation to surround oneself with ever more things and conveniences, many of which are ironically empty tokens, mere rhinestones. These also serve as status symbols, to demonstrate one’s success to others who also cannot tell the difference (and thereby to oneself?). In the absence of imagination, collecting such things seems the default plan for a life. The would-be rich also must have something to spend their money on; hence consumerism, hence bling.

Traditionally, value is created by human labour. Quality of product is a function of the quality of effort, which in turn is a function of attention and intention. The things that are standard status symbols—artworks, jewels, servants; fine clothes and craftsmanship; luxury homes, cars and boats, etc.—represent the ability to command effort and thereby quality. There is a paradox here too. For, while quality ultimately refers to human effort and skill, in the automated age ever fewer people work at skilled jobs. The very meaning of the standard is undermined by loss of manual skills. Quality can then no longer be directly appreciated, but only evaluated after the fact: how long did the product last, was it really useful, etc.? Like social media, the marketplace is saturated with questionable products, which is why consumer reviews have become indispensable.

Ever more people now grow up without manual skills and with little hands-on experience of making or repairing the things they use. This is a handicap when it comes to evaluating quality, which is a function of what went into making those things. Many people now cannot recognize the difference between a building standard of accuracy to an eighth of an inch and a standard of half an inch (millimeters versus centimeters, if you prefer). Teenagers of my generation used to tear apart and rebuild their cars. Now cars are too sophisticated for that, as is most of our technology, which is not designed for home repair, or any repair at all. There are videos online now that (seriously) show how to change a light bulb! People who make nothing, and no longer understand how things are made or how they work, are not in a position to judge what makes things hold together and work properly. They are at the mercy of ersatz tokens mysteriously appearing on retail shelves: manufactured schlock. That is the ultimate triumph of a system of production where profit, not quality, is our most important product.

When machines and robots do everything (and all humans are consumers but not producers), what will be the criterion for quality? Quite possibly, in an ideal world where no one needs to work to survive, people would naturally work anyway, as many people now enjoy hobbies. Perhaps in such a world, wealth would not be a matter of possessions but of cultivated skills. As sometimes it is now, status would be a function of what one can do aside from accumulating wealth produced by others. Perhaps then quality will again be recognizable.

 

The truth of a matter

A natural organism can hardly afford to ignore its environment. To put that differently, its cognition and knowledge consist in those capabilities, responses and strategies that permit it to survive. We tend to think of knowledge as general, indiscriminate, abstract, free-floating, since this has been the modern ideal; for the organism, however, it is quite specific and tailored to survival. This is at least mildly paradoxical, since the human being too is an organism. Our idealized knowledge ought to facilitate, and must at least permit, survival of the human organism. Human knowledge may not be as general as suggested by the ideal. In particular, science may not be as objective and disinterested as presumed; its focus can even be myopic.

Science parallels ordinary cognition in many ways, serving to extend and also correct it. On the other hand, as a form of cognition, science is deliberately constrained in ways that ordinary cognition is not. It has a rigor that follows its own rules, not necessarily corresponding to those of ordinary cognition. The latter is allowed, even required, to jump to conclusions in situations demanding action. Science, in contrast, remains tentative and skeptical. It can speculate in earnest, creating elaborate mathematical constructs; but these are bracketed as “theoretical” until empirical data seem to confirm them. Even then, theory remains provisional: it can be accepted or be disqualified by countervailing evidence, but can never strictly be proven. In a sense, then, science maintains a stance of unknowing along with a goal of knowing.

Many questions facing organisms, about what to do and how to behave, hinge implicitly on what seems true or real from a human perspective. For us moderns, that often means from a scientific perspective, which may not correspond to the natural perspective of the organism. Yet, even for the human organism, behavior is not necessarily driven by objective reality and does not have to be justified by it. External reality is but one factor in the cognitive equation. It is a factor to which we habitually give great importance because, in so many words, we are conditioned to give credence to what appears to us real. Ultimately, this is because our survival and very existence indeed depend on what actually is real or true. To that extent, we are in the same boat as any other creature. The other factor, however, is internal: intention or will. We can, and often do, behave in ways that have little to do with apparent reality and which don’t refer to it for justification. (For example, doing something for the “hell” of it or because we enjoy it. Apart from their economic benefits, what do dancing, art, and sports have to do with survival?) Some things we do precisely because they have little to do with reality.

Of course, the question of what is real—or the truth of a matter—is hardly straightforward. It, too, depends on both internal and external factors, subject and object together. In any case, how we act does not depend exclusively on what we deem to be fact. In some cases, this dissonance is irrational and to our detriment—for instance, ignoring climate change or the health effects of smoking. In other cases, acting arbitrarily is the hallmark of our free will—the ability to thumb our noses at the dictates of reality and even to rebel against the constraints imposed by biology and nature. Often, both considerations apply. In a situation of overpopulation, for example, it may be as irrational—and as heroic—for humanity to value human life unconditionally as for the band to keep playing while the Titanic sinks.

At one time the natural world was considered more like an organism than a machine. Perhaps it should be viewed this way again. Should we treat nature as a sentient agent, of value comparable to the preciousness we accord to human life? Here is a topical question that seems to hinge on the truth of what nature “really” is. If it has agency in some sense like we do—whether sentient or not in the way that we are—perhaps it should have legal rights and be treated with the respect accorded persons. Native cultures are said to consider the natural world in terms of “all my relations.” Some people claim mystical experiences in which they commune and even communicate with the natural world, for example with plants. Yet, other people may doubt such claims, which seem counter to a scientific understanding that has long held nature to be no more than an it, certainly not a thou to talk to. For, from a scientific perspective, most matter is inanimate and insentient. Indeed, the mechanistic worldview of science has re-conceived the natural world as a mere resource for human disposal and use. Given such contradictory views, how to behave appropriately toward “the environment” seems to hinge on the truth of a matter. Is the natural world a co-agent? Can it objectively communicate with people, or do people subjectively make up such experiences for their own reasons?

But does the “truth” of that matter really matter? Apart from scientific protocol, as creatures we are ruled by the mandate of our natural cognition to support survival. That is the larger truth, which science ought to follow. Culturally, we have been engaged in a great modern experiment: considering the world inert, essentially dead, profane (or at least not sacred), something we are free to use for our own purposes. While that stance has supported the creation of a technological civilization, we cannot be sure it will sustain it—or life—in the long term. Scientific evidence itself suggests otherwise. It thus seems irrational to continue on such a path, no matter how “true” it may seem.

What have we to lose in sidestepping the supposed truth of the matter, in favour of an attitude that works toward our survival? Better still, how can such contradictory attitudes be made compatible? This involves reconciling subject with object as two complementary factors in our cognition. Science has deliberately bracketed the subject in order to better grasp the object. So be it. Yet, this situation itself is paradoxical, for someone (a subject) obviously is doing the grasping for some tacit reason. Nature is the object, the human scientist is the subject, and grasping is a motivated action that presumes a stance of possession and control—rather than, for example, belonging. We resist the idea that nature controls us (determinism)—but along with it the idea of being an integral part of the natural world. Can we have free will and still belong? Perhaps—if we are willing to concede free will to nature as well.

The irony is that, on a certain level, obsession with reality or truth serves the organism’s wellbeing, but denies it free will. Compulsive belief in the stimulus grants the object causal power over the subject’s response and experience. On the other hand, ignoring the stimulus perilously forfeits what power the subject has to respond appropriately. The classic subject-object relationship is implicitly adversarial. It maintains either the illusion of technological control over nature or of nature’s underlying control over us. The first implies irresponsible power; the second denies responsibility altogether.

Every subject, being embodied, is undoubtedly an object that is part of the natural world. To the extent we are conscious of this inclusion and of being agents, we are in a position to act consciously to maintain the system of which we are a part. In the name of the sort of knowledge achieved by denying this inclusion, however, we have created a masterful technological civilization that is on the brink of self-destruction, while hardly on the brink of conquering nature. Can we believe instead that we do not stand outside the natural world, as though on a foreign battlefield, but are one natural force in negotiation with other natural forces? Negotiation is a relationship among peers, agent to agent. Even when seemingly adversarial, the relationship is between worthy opponents. Let us therefore think of nature neither as master, slave, nor enemy, but as a peer with whom to collaborate toward a peace that ensures a future for all life.

To choose or not to choose

Choice is often fraught with anxiety. We can agonize over decisions and are happy enough when an outcome is decided for us. That’s why we flip coins. Perhaps this says only that human beings loathe responsibility, which means accountability to others for possible error. We are essentially social creatures, after all. The meaning and value of our acts are always in relation to others, whose favor we curry and whose judgments we fear. Even those unconcerned about reputation while they live may hope for the eventual approval of posterity.

Perhaps there is a more fundamental reason why choice can provoke anxiety. We have but one life. To choose one option or path seems to forfeit others. The road taken implies other roads not taken; one cannot have one’s cake and eat it too. Choice implies a loss or narrowing of options, which perhaps explains why it invokes negative feelings: one grieves in advance the loss of possible futures, and fears the possibility of choosing the wrong future. Nature created us as individual organisms, distinct from others. That means we are condemned to the unique experience and history of a particular body, out of all the myriad life histories that others experience. Each of us has to be somebody, which means we must live a particular life, shaped by specific choices. We may regret them, but we can hardly avoid them. A life is defined by choices made, which can seem a heavy burden.

Yet, choice can also be viewed more positively as freedom. Choice is the proactive assertion of self and will, not a passive forfeit of options. It affords the chance to self-limit and self-define through one’s own actions, rather than be victimized by chance or external forces. To choose is to take a stand, to gather solid ground under one’s feet where there was but nebulous possibility. Rather than remaining vaguely potential, one becomes tangibly actual, by voluntarily sacrificing some options to achieve one’s goals. This is how we bring ourselves into definition and become response-able. We may be proud or ashamed of choices made. Yet, whatever the judgment, one gains experience and density through deliberate action.

To do nothing is also a choice—sometimes the wisest. The positive version of timidity or paralysis is deliberate restraint. Sometimes we champ at the bit to act, perhaps prematurely, while the wiser alternative is to wait. Instinct and emotion prompt us to react impulsively. To be sure, such fast response serves a purpose: it can mean the difference between life and death. Yet, some situations allow, and even require, restraint and more careful thought. When there is not enough information for a proper decision, sometimes the responsible choice is to wait and see, while gathering more information. This too strengthens character.

Life tests us—against our own expectations and those of others. Perhaps the kindest measure of our actions is their intent: the good outcome hoped for. We may not accurately foresee the outcome, but at least we can know the desire. Yet, even that is no simple matter. For, we are complex beings with many levels of intention, some of which are contradictory or even unknown to us. We make mistakes. We can fool ourselves. The basic problem is that reality is complex, whereas mind and thought, feeling and intention, are relatively simplistic. We are like the blind men who each felt a part of the elephant and came to very different conclusions about the unseen beast that could crush them at any time. With all our pretense to objectivity, perhaps we are the elephant in the room!

Choice can be analog as well as digital. Plants interact with the world more or less in place, continuously responsive to changes in soil condition, humidity, temperature and lighting. Animals move, to pursue their food and avoid becoming food. Their choices have a more discrete character: yes or no. Yet, there are levels and nuances of choice, and choice about choice. We can be passive or aggressive, reactive or proactive. We can choose not to act, to be ready to act, or to seek a general policy or course of action instead of a specific deed. We can opt for a more analog approach, to adjust continuously, to keep error in small bounds, to play it by ear rather than be too decisive and perhaps dangerously wrong.

Of course, one may wonder whether choice and will are even possible. Determinism is the idea that one thing follows inexorably from another, like falling dominoes, with no intervening act of choosing. The physical world seems to unfold like that, following causes instead of goals. And perhaps there is even a limit to this unfolding, where nothing further can happen: the ultimate playing out of entropy. Yet these are ideas in the minds of living beings who do seem to have choice, and who seem to defy entropy. Determinism, and not free will, may well be the illusion. For, while concepts may follow one from another logically, there is (as Hume noted) no metaphysical binding between real events in time. The paradox is that we freely invent concepts that are supposed to tie the universe together—and bind us as well.

Where there is no free choice there is no responsibility. Determinism is a tool to foresee the future, but can also serve as a place of refuge from guilt over the past. If my genes, my upbringing, my culture or my diet made me do it, then am I accountable for my deeds, either morally or before the law? On the other hand, where there is no responsibility, there is no dignity. If my actions are merely the output of a programmed machine, then I am no person but a mere thing. Of what account is my felt experience if it does not serve to inform and guide my behavior? I cannot rightfully claim to be a subject at all—to have my inner life be valued by others—unless I also claim responsibility for my outer life as an agent in the world.

Easier said than done, of course. Supposing that one tries to act morally and for the best, one may nevertheless fail. Worse, perhaps, one may wonder whether one’s thoughts and deeds will make any difference at all in the bigger picture. Especially at this crossroads—of human meddling and eleventh-hour concern for the future of all life—it may seem that the course is already set and out of one’s personal hands. Yet, what is unique about this time is precisely that we are called upon to find how to be personally and effectively responsible for the whole of the planet. The proper use of information in the information age is to enable informed choice and action. That no longer concerns only one’s personal—or local or even national—world, but now the world. This is the meta-choice confronting at least those who are in a position to think about it. Whatever our fate and whatever our folly, we at least bring ourselves more fully into being by choosing to think about it and, hopefully, choosing the right course of action.

A credible story about money as the root of evil

The word ‘credit’, like ‘credible’, comes from the Latin credo, to believe. It refers to the trust that must exist between a borrower and a lender. In his monumental work, Debt: The First 5,000 Years, anthropologist and philosopher-activist David Graeber proposes that credit, in one way or another, is the very basis of sociability and of society. He reverses the traditional dictum in economics that barter came first, then coinage, and finally credit. Quite the contrary: barter was only ever practical in exceptional circumstances; the actual basis of trade for most of human existence was some form of credit. Borrowing worked well in communities where everyone was known and reputation was crucial. Say you need something made, a favour done, or a service performed. You are then indebted to whoever helps you and at some point you will reciprocate. That sort of cooperation and mutual support is the essence of community.

This is not a review of Graeber’s wide-ranging book or thought, but a reflection on the deep and unorthodox perspective he brings to such questions as: what happens to community when money displaces the honor system of credit? Or: how did the introduction of money change the nature of debt and credit, and therefore society?

Let us note at the outset that many of the evils we associate with money and capitalism already existed in ancient societies that relied on credit, namely usury. The extortion of “interest” on loans is already a different matter than simply repaying a debt (the “principal”). In a small community, or within families, such extortion would be unfriendly and unconscionable. In larger societies, relations are less personal. The psychological need to honour debt, based on trust, holds over, but without the intimate connection between persons. The debtor—who before was a friend, relative or neighbor—becomes a “stranger,” even when known. The person becomes a thing to exploit; the subject becomes an object.

Lending for gain was no longer a favour to someone in your community, which you knew would eventually be reciprocated fairly. It became something to do for calculated and often excessive profit. It thus became increasingly difficult to repay debts. Securities put up for the loan (even family members or one’s own person!) could be confiscated for failure to repay. Usury—and debt in general—became such a problem even in ancient times that kings and rulers were obliged to declare debt amnesties periodically to avoid rebellion. And one of the first things rebellions would do was to destroy records of debt. The sacred texts of many religions proscribe usury, but usually only regarding their own people. “Strangers” remained fair game as potential enemies.

The concept of interest has a precedent in the growth of natural systems. Large trees grow from tiny seeds; animal bodies grow from small eggs. Populations expand. Such growth is distinct from static self-maintenance or a population’s self-replenishment. People noticed this surplus when they began to grow crops and manage domesticated animals. The increase of the herd or crop served as metaphor for the interest expected on any sort of “investment.” However, the greedy expectations of loan sharks in all ages usually far exceed the rate of natural growth. Even the “normal” modest return on investment (consistently about 5%) exceeds the rate of growth of natural systems, such as forests. Moreover, there are always limits to natural growth. The organism reaches maturity and stops growing. (The refusal of cells to stop multiplying when they are supposed to is cancer.) A spreading forest reaches a shoreline or a treeline imposed by elevation and cold. The numbers of a species are held in check by other species and by limited resources. Nature as a whole operates within these bounds of checks and balances, which humans tend to ignore.
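To put the mismatch in rough numbers (an illustrative calculation, not a figure from Graeber): money compounding at a steady 5% doubles in about fourteen years and multiplies roughly thirtyfold over a seventy-year lifetime, a pace of growth that no forest or herd sustains indefinitely.

\[
  t_{\text{double}} = \frac{\ln 2}{\ln(1 + 0.05)} \approx 14.2 \ \text{years},
  \qquad
  (1.05)^{70} \approx 30 .
\]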

Money, credit, and debt are ethical issues because they directly involve how people treat one another. Credit in the old sense—doing a favour that will eventually be returned—involves one way of treating others, which is quite different from usury, which often resulted in debt peonage (often literally slavery). For good reason, usury was frowned upon as a practice within the group—i.e., amongst “ourselves.” The group needed to have an ethics in place that ensures its own coherence. But as societies expanded and intermingled, membership in the group became muddied. Trade and other relations with other groups created larger groupings. New identities required a new ethics.

Amalgamation led to states. War between states exacerbated the ethical crisis. War was about conquest, which reduced the defeated to chattel (war was another source of slaves). People, like domesticated animals, could become property bought and sold. Slaves were people ripped from their own community, the context that had given them identity and rights. Similarly, domestic animals had been removed from their natural life and context and forced into servitude to people. We may speak even of handmade things as being wrested from their context as unique objects, personally made and uniquely valued, when they enter the marketplace. Manufactured things are designed to be identical and impersonal, not only to economize through mass production, but also to standardize their value. Mass production of standard things matched mass production of money.

Enter coinage. Rather than provision armies through expensive supply lines, rulers could pay soldiers in coin to spend locally instead of pillaging the countryside. These coins could then be returned to the central government in the form of taxes. Coinage standardized value by quantifying it precisely. But it did something more as well. It rendered trade completely impersonal. Before, you had a reciprocal relationship of dependency and trust with your trade partner or creditor—an ongoing relationship. In contrast to credit, the transfer of coins completed the transaction, cancelling the relationship; both parties could walk away and not assume any future dealings. Personal trust was not required because the value exchanged was fixed and clear, transferable, and redeemable anywhere. Indeed, money met a need because people were already involved in trade with people they might never see again and whom they did not necessarily trust. But this was a very different sort of transaction than the personal sort of exchange that bound parties together.

Yet, trust was still required, if on a different level. Using money depends on other people accepting it as payment. While money seemed to be a measure of the value of things, it implicitly depended on trust among people—no longer the direct personal trust between individuals but ongoing faith in the system. Coins had a symbolic value, regulated by the state, independent of the general valuation of the metals they were made of. (The symbolic value was usually greater than the value of the gold, silver or copper, since otherwise the coins would be hoarded.) The shift toward symbolic value was made clear with the introduction of paper money. But in fact, promissory notes had long been used before official paper money or coinage. The transition to purely symbolic (virtual) money was complete when the U.S. dollar was taken off the gold standard in 1971.

Unfortunately, some of the laws restricting usury were abandoned soon after. “Credit,” in its commercialized form, returned with a vengeance. Credit cards and loan sharks aggressively offered indiscriminate lending for the sake of the profit to be gained, never mind the consequences for the borrower. Hence, the international crisis of 2008—and the personal crises of people who lost their homes, of students who spend half their lives repaying student loans, of consumers always on the verge of bankruptcy, and of publics forced to bail out insolvent corporations.

The idea of credit evolved from a respectable mutual relationship of trust to a shady extortion business. The idea of indebtedness has accordingly long been tinged with sin, as a personal and moral failing. A version of the Lord’s Prayer reads, “forgive us our debts as we forgive our debtors.” (Alternatively: “forgive us our trespasses”, referring to the “sacredness” of private property rights.) As Graeber points out, we generally do not forgive debt, but have made it the basis of modern economics. There is no mention of forgiving the sins of creditors. The “ethics” of the marketplace is a policy to exploit one’s “neighbor,” who can now be anyone in the world—the further out of sight the better.

Usury now deals with abstractions that hide the nature of the activity: portfolios, mutual funds, financial “instruments,” stocks and bonds, “derivatives,” etc. The goal is personal gain, not social benefit, mutual relationship, or helping one another. Cash is going out of fashion in favour of plastic, which is no more than ones and zeros stored in a computer. The whole system is vulnerable to cyberattack. Worse, the confidence that underwrites the system runs on little more than inertia. It will eventually break down, if not renewed by a basis for trust more genuine, tangible and personal.

Apart from climate change, the other crisis looming is the unsustainability of our civilization. The global system of usury (let’s call a spade a spade: we’re talking about capitalism) unreasonably exploits not only human beings but the whole of nature. Like population growth, economic growth cannot continue indefinitely. The sort of growth implied by “progress” is a demented fantasy, with collapse lurking around the corner. Moreover, the fruits of present growth are siphoned by a small elite and hardly shared, while the false promise of a better life for all is the only thing keeping the system going. We cannot be any more ethical in regard to nature than we are in regard to fellow human beings. While people may or may not revolt against the greed of other people, we can be sure that nature will.

Relativity theory and the subject-object relationship

Concepts of the external world have evolved in the history of Western thought, from a naïve realism toward an increasing recognition of the role of the subject in all forms of cognition, including science. The two conceptual revolutions of modern physics both acknowledge the role of the observer in descriptions of phenomena observed. That is significant, because science traditionally brackets the role of the observer for the sake of a purely objective description of the world.  The desirability of an objective description is self-evident, whether to facilitate control through technology or to achieve a possibly disinterested understanding. Yet the object cannot be truly separated from the subject, even in science.

Knowledge of the object tacitly refers back to the participation of the observer as a physical organism, motivated by a biologically-based need to monitor the world and regulate experience. On the other hand, knowledge may seem to be a mental property of the subject, disembodied as “information.” However, the subject is necessarily also an object: there are no disembodied observers. Information, too, is necessarily embodied in physical signals.

A characteristic of all physical processes, including the conveyance of signals, seems to be that they take time and involve transfers of energy. These facts could long be conveniently ignored in the case of information conveyed by means of light, which for most of human history seemed instantaneous and of negligible physical effect. Eventually, it was realized through astronomical observation (Rømer), in experiment (Fizeau), and in theory (Maxwell) that the speed of light is finite and definite, though very large. Since that was true all along, it could have posed a conceptual dilemma for physicists long before the late 19th century, since the foundation of Newtonian physics was instantaneous action-at-a-distance. Even for Einstein and his contemporaries, however, the approach to problems resulting from the finite speed of light was less about incorporating the subject into an objective worldview than about compensating for the subject’s involvement in order to preserve that worldview. Einstein’s initial motivation for relativity theory lay less in the observational consequences of the finite speed of light signals than in resolving conceptual inconsistencies in Maxwell’s electrodynamics.

Nevertheless, perhaps for heuristic reasons, Einstein began his 1905 paper with an argument about light signals, in which the signal was defined to travel with the same finite speed for all observers. This, of course, violated the foundational principle of the addition of velocities. It skirted the issue of the physical nature of the signal (particle or wave?), since some observations seemed to defy either the wave theory or the emission theory of light. Something had to give, and Einstein decided it was the concept of time. What remained implicit was the fact that non-local measurement of events in time or space must be made via intervening light signals.

When the distant system being measured is in motion with respect to the observer, the latter’s measurement will differ from the local measurement by an observer at rest in the distant system. The difference will be proportional to their relative speed compared to the speed of light. By definition, these are line-of-sight effects. By the relativity postulate, the effects must be reciprocal, so that whether the observers are approaching each other or receding, each would perceive the other’s ruler to have contracted and clock to have slowed! Such a conclusion could not be more contrary to common sense. But that meant simply that common sense is based on assumptions that may hold true only in limited circumstances (namely, when the observation is presumed instantaneous). In other words, circumstances that are non-physical.
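For reference, the standard special-relativistic formulas quantify these reciprocal effects (a textbook summary, not part of the essay’s argument): to either observer, a clock moving at speed v appears to run slow, and a ruler appears shortened, by the same factor:

\[
  \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},
  \qquad
  \Delta t_{\text{observed}} = \gamma\,\Delta t_{\text{proper}},
  \qquad
  L_{\text{observed}} = L_{\text{proper}}/\gamma ,
\]

where c is the speed of light; the effects are negligible unless v approaches c, which is why common sense never registers them.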

The challenge embraced by Einstein was to achieve coherence within the framework of physics as a logical system, which is a human construct, a product of definitions. Physics may aim to reflect the structure of the real world, but invokes the freedom of the human agent to define its axioms and elements. Einstein postulated two axioms in his famous paper: the laws of physics are the same for observers in uniform relative motion; and the speed of light does not depend on the motion of its source. From these it follows that simultaneity can have no absolute meaning and that measurements involving time and space depend on the observers’ relative state of motion. In other words, the fact that the subject does not stand outside the system, but is a physical part of it, affects how the object is perceived or measured. Yet, a contrary meta-truth is paradoxically also insinuated: to the degree that the system is conceptual and not physical, the theorist does stand outside the system. Einstein’s freedom to choose the axioms he thought fundamental to a consistent physics implied the four-dimensional space-time continuum (the so-called block universe), which consists of objective events, not acts of observation.
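The joint content of those two postulates is compactly expressed by the Lorentz transformation, quoted here in its standard form for reference: it relates the coordinates one inertial observer assigns to an event to those assigned by a second observer moving at speed v along their common x-axis,

\[
  x' = \gamma\,(x - v t),
  \qquad
  t' = \gamma\left(t - \frac{v x}{c^{2}}\right),
\]

with \(\gamma\) as above. The mixing of space and time coordinates in the second equation is what abolishes absolute simultaneity.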

Could other axioms have been chosen—alternatives to his postulates? Indeed, they had been. The problem was in the air in the late 19th century. In effect, Lorentz and FitzGerald had proposed that movement through the ether somehow causes a change in intermolecular forces, so that apparently rigid bodies in motion literally change shape in such a way that rulers “really” contract in length in the direction of motion. This was an ontological (electrodynamic) explanation of the null result of the crucial Michelson-Morley experiment. (Poincaré was also working on an ontological solution.) That approach made sense, since the space between atoms in solid bodies depends on electrical forces. Though Einstein knew about the Michelson-Morley experiment, his epistemic (kinematic) approach did not focus on that experiment, but originated with his reflections in a youthful thought experiment concerning what it would be like to travel along with a light beam. It continued with reflections on apparent contradictions in Maxwell’s electrodynamics. Yet, it returned to focus on the physical nature of light, which bore fruit in the equivalence of matter and energy and in General Relativity as a theory of gravitation.

Despite his early positivism, it was Einstein’s lifelong concern to preserve the objectivity, rationality and consistency of physics, the principal challenges to which were the dilemmas that gave birth to the two great modern revolutions, relativity and quantum theory. His solutions involved taking the observer into account, but with an aim to preserve an essentially observer-independent worldview—the fundamental stance of classical physics. While he chose an epistemic over an ontological analysis, he was deeply committed to realism. There were real, potentially observable, consequences to his theories, which have since been confirmed in many experiments. Yet alternative interpretations are conceivable, formulated on the basis of different axioms, to account for the same—mostly subtle—effects. While relativity theory renders appearances a function of the observer’s state of motion, it is really about preserving the form of physical laws for all observers—reasserting the possibility of objective truth.

One ironic consequence is that space and time are no longer considered from the point of view of the observer but are objectified in a god’s-eye view. The four-dimensional manifold is mathematically convenient; yet it also makes a difference in how we understand reality. As a theory of gravitation, General Relativity asserts the substantial existence of a real entity called spacetime. Space and time are no longer functions of the observer and of the means of observation (light); now they have an existence independent of the observer—ironically, much as Newton had asserted. What was grasped as a relationship returned to being a thing.

Even in the Special theory, there is confusion over the interpretation of time dilation. In SR, time dilation was initially a mutually perceived phenomenon, which makes sense as a line-of-sight effect. In modern expositions, however, mechanical clocks are replaced by “light clocks,” and the explanation of time dilation refers to the lengthened path of light in the moving clock. This is no longer a line-of-sight or mutual effect, since the light path is no longer in the direction of motion relative to the observer. Instead, it substitutes a definition of time that circularly depends on light. While “objective” in the sense that it is not mutual, the explanation for the gravitational time dilation of General Relativity rests on an incoherent interpretation of time dilation in SR.
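A sketch of the light-clock argument that modern expositions rely on (a textbook reconstruction, not the essay’s own derivation): a light pulse bounces between two mirrors a distance d apart, perpendicular to the direction of motion. At rest, one tick takes 2d/c; for an observer who sees the clock move at speed v, the pulse traces a longer diagonal path, so the tick lengthens:

\[
  \Delta t_{\text{rest}} = \frac{2d}{c},
  \qquad
  \Delta t_{\text{moving}} = \frac{2d}{c\sqrt{1 - v^{2}/c^{2}}} = \gamma\,\Delta t_{\text{rest}} .
\]

Note that the tick is here defined by the light path itself rather than by any mechanical process, which is precisely the circularity the essay points to.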

Einstein derived both the famous matter-energy equivalence and General Relativity using arguments based on Special Relativity. These arguments slide inconsistently from an epistemic to an ontological interpretation. While the predictions of GR and E=mc² may be accurate, their theoretical dependence on SR remains unfounded if the effects are purely epistemic: that is, if they do not invoke a physical interaction of things with an ether when they accelerate with respect to it (the so-called clock hypothesis). Or, to put it the other way around, GR and the mass-energy equivalence actually imply such an interaction.

The Lorentz transformation could as well be interpreted in purely epistemic terms, of observers’ mutually relative state of motion, given the finite intermediary of light. Spacetime need not be treated as an object if the subject’s role is fully taken into account. The invariance of the speed of light could have a different interpretation, not as a cosmic speed limit but as a side-effect of light’s unique role as signal between frames of reference. Time dilation could have a different explanation, as a function of moving things physically interacting with an ether.

Form and content

That all things have form and content reflects an analysis fundamental to our cognition and a dichotomy fundamental to language. Language is largely about content—semantic meaning. Yet, it must have syntactical form to communicate successfully. The content of statements is their nominal reason for being; but their effectiveness depends on how they are expressed. In poetry and song, syntax and form are as important as semantics and content. They may even dominate in whimsical expressions of nonsense, where truth or meaning is not the point.

The interplay of form and content applies even in mathematics, which we think of as expressing timeless truths. ‘A=A’ is the simplest sort of logical truth—a tautology, a sheer matter of definition. It applies to anything, any time. By virtue of this abstractness and generality, it is pure syntax. As a statement, it bears no news of the world. Yet, mathematics arose to describe the world in its most general features. Its success in science lies in the ability to describe reality precisely, to pinpoint content quantitatively. The laws of nature are such generalities, usually expressed mathematically. They are thus sometimes considered transcendent in the way that mathematics itself appears to be. That is, they appear as formal rules that govern the behavior of matter. You could say that mathematics is the syntax of nature.

The ancient Greeks formalized the relation between syntax and semantics in geometry. Euclid provided the paradigm of a deductive method, by applying formal rules to logically channel thought about the world, much as language does intuitively. Plato considered the world of thought, including geometry, to be the archetypal reality, which the illusory sensory world only crudely copies. This inverted the process we today recognize as idealization, in which the mind abstracts an essence from sensory experience. For him, these intuitions (which he called Forms) were the real timeless reality behind the mundane and ever-changing content of consciousness.

The form/content distinction pertains perhaps especially to all that is called “art.” Plato had dismissed art as dealing only with appearances, not the truth or reality of things. According to him art should no more be taken seriously than play. However, it is precisely as a variety of play that we do take art seriously. What we find beautiful or interesting about a work of art most often involves its formal qualities, which reveal the artist’s imagination at play. Art may literally depict the world through representation; but it may also simply establish a “world” indirectly, by assembling pertinent elements through creative play. Whatever its serious themes, all art involves play, both for the producer and the consumer.

Meaning is propositional, the content of a message. It is goal-oriented, tied to survival and Freud’s reality principle. But the mind also picks up on formal elements of what may or may not otherwise bear a message or serve a practical function, invoking more the pleasure principle. The experience of beauty is a form of pleasure, and “form” is a form of play with (syntactic) elements that may not in themselves (semantically) signify anything or have any practical use. Art thus often simply entertains. This is no less the case when it is romanticized as a grand revelation of beauty than when it is dismissed as trivially decorative. Of course, art combines seriousness and play in varying ways that can place greater emphasis on either form or content. While these were most often integrated before the 19th century, relatively speaking modern art liberated form from content.

For most of European history, artists were expected to do representational work, to convey a socially approved message—usually religious—through images. At least in terms of content, art was not about personal expression. That left form as the vehicle for individual expression, though within limits. Artists could not much choose their themes, but they could play with style. The rise of subjectivity thematically in art mirrors the rise of subjectivity in society as a whole; it recapitulates the general awakening of individuality. Yet, even today, a given art work is a compromise between the artist’s vision and social dynamics that limit its expression and reception.

From the very rise of civilization, art had served as propaganda of one sort or another. For example, Mesopotamian kings had built imposing monuments to their victories in war, giving a clear message to any potentially rebellious vassals. Before the invention of printing, pictures and sculptures in Europe had been an important form of religious teaching. Yet, even in churches, the role of iconic art was from the beginning a divisive issue. On the one hand, there was the biblical proscription against idolatry. On the other hand, the Church needed a form of propaganda that worked for an illiterate populace. Style and decoration were secondary to the message and used to support it. In the more literate Islamic culture, the written message took precedence, but the formal element was expressed in the esthetics of highly stylized decorative calligraphy. In either case, the artist usually did little more than execute themes determined by orthodoxy, giving expression to ideas the artist may or may not have personally endorsed. But the invention of printing changed the role of graphic art, as later would the invention of photography.

Except to serve as political or commercial propaganda (advertising), today representational art holds a diminished place, superseded by photography and computer graphics. Yet, artists continue to paint and sculpt figures and scenes as well as decorative or purely abstract creations. In the age of instant images (provided by cell phones, for instance), what is the ongoing appeal of hand-made images? How and why is a painting based on a photograph received differently than the photo itself, and why do people continue to make and buy such a thing? The answer surely lies in the interplay of form and content. The representational content of the photo is a given which inspires and constrains the play with form.

Skill is involved in accurately reproducing a scene. We appreciate demonstrations of sheer skill, so that hyper-realist painting and sculpture celebrate technical proficiency at imitation. Then, too, a nostalgia is associated with the long tradition of representational art. Thirdly, status is associated with art as a form of wealth. An artwork is literally a repository of labor-intensive work, which formerly often embodied precious materials as well as skill. Photographic images are mostly cheap, but art is mostly expensive. Lastly, there are conventional ideas about decoration and how human space should be furnished. Walls must have paintings; public space must have sculptures. In general, art serves the purpose of all human enterprise: to establish a specifically human world set apart from nature. This is no less so when nature itself is the medium, as in gardens and parks that redefine the wild as part of the human world.

Nevertheless, it is fair to say that the essence of modern art—as sheer play with materials, images, forms, and ideas—is no longer representational. Art is no longer bound to a message; form reigns over content. Perhaps this feature is liberating in the age of information, when competing political messages overwhelm and information is often threatening. Art that dwells on play with formal elements refrains from imposing a message—unless its iconoclasm is the message. Abstraction does not demand allegiance to an ideology—except when it is the ideology. But in that case, it is no longer purely play. Art can serve ideology; but it can also reassure by the very absence of an editorial program. Playfulness, after all, does not intimidate or discriminate, though it may be contagious. It engages us on a level above personal or cultural differences.

Decoration has always been important to human beings, who desire to embellish and shape both nature and human artifacts. Decoration may incorporate representation or elements from nature, but usually in a stylized way that emphasizes form, while tailoring it to function. Yet, even decorative motifs constitute an esthetic vocabulary that can carry meaning or convey feeling. A motif can symbolize power and military authority, for example. Such are the fasces and the bull of Roman architecture; the “heroic” architecture, sculpture, and poster art of Fascism or Communism; or the Napoleonic “Empire” style of furnishings. It can be geometric and hard-edged, expressing mental austerity. Equally, it can express a more sensuous and intimate spirit, often floral or vegetal—as in the wallpapers of William Morris and the Art Nouveau style of architecture, furniture, and posters. In other words, decoration too reflects intent. It can reinforce or soften an obvious message. But it can also act independently of content, even subversively to convey an opposing ethos.

Even when no message seems intended, there is a meta-message. Whatever is well-conceived and well-executed uplifts and heartens us because it conveys the caring of the artist, artisan, or engineer. On the other hand, the glib cliché and the shoddily made product spread cynicism and discouragement. They reveal the callousness of the producer and inure us to a world in which quantity prevails over quality. Every made thing communicates an intent, for better or worse.

 

 

 

The power and the glory

Human beings are eminently social creatures. Our religions remind us to love one another and our laws require us to consider each other’s needs. One’s self-image depends on the good opinion of others and on status—comparative standing in a pecking order. Like that of other primates, human society is hierarchical. One strives to be better than others—in one’s own eyes and in theirs. Things that serve as symbols and visible trappings of status are a primary form of wealth. On the other hand, we also seek comfort and ease, and wealth consists of things that make our lot better. We are a masterful species not content to live in the abject condition of other creatures, nor content with our natural limitations and dependency on nature. We seek power to define and control our environs—collectively to make a specifically human world, and individually to improve our physical well-being and social standing within it.

The other side of wealth is economic dependency. And the other side of status is psychological dependency. Status and power over others complement each other, since status is essentially power that others have over us. There are those who achieve their relative economic sufficiency by exploiting the dependency of others, just as there are those who rely on the opinions of others for their good opinion of themselves. Independence means not only self-sufficiency (of food production, for example) but also immunity to the opinions of others. There are people for whom material ease and social approval are not paramount. Yet, even they might not be able to defend against others who would compel them with the threat of violence. On your own plot of land, it is possible to subsist and thumb your nose at others trying to buy your services (though that gives you no means to control others in turn). But, even if you are food-secure, someone with weapons—or who can pay someone with weapons—can force you to do their bidding or take away your land. When very few own the land required to raise food, most are in an awkward position of dependency.

Control of the physical environment and control over other people dovetail when both can be made to serve one’s purposes. This requires the ability to command or induce others to do one’s bidding. How does this power over others come about? In particular, how does the drive for status mesh with the drive for wealth and the ability to command others? Power must be merited in the eyes of society, and the justification is typically status. How separate can they be? Certainly, we honor some individuals who are not wealthy in material possessions or politically powerful. On the other hand, we may be awed by individuals we despise.

Power can take different forms in different societies. It can be a competition to determine status: who is best able to rule by virtue of their perceived qualities. Leaders are then obeyed out of loyalty to their personal charisma, or because they somehow represent divine authority in the imagination of others. God represents human ideals of omnipotence, omniscience, and benevolence; so the monarch, ruling by divine proxy, symbolically represents these ideals in society. On the other hand, bureaucratic power is rule by impersonal law. Yet, even its ability to require obedience may have originally derived from divine authority, later replaced by institutions such as parliaments and courts of law, enforced by arms. Like values in general, once considered unquestionable because divinely sanctioned, authority becomes secularized. As the individual’s subjectivity grew more significant in society, so did individual responsibility to endorse ruling authority—through voting in elections, for example. As arbitrary and absolute authority gave way to institutions, equality of subjects under God or king gave way to equality under law. To replace the (theoretically absolute) authority of the monarch with the limited authority of elected representatives changes the political game: from common acceptance of a transcendent reality to a spectator sport of factions supporting competing mortal personalities.

A basic problem of social organization is how to get people to defer to a will that transcends the wills of the individuals constituting society. Just as siblings may bicker among themselves but defer to parental authority, so people seek an impartial, fair, and absolute source of authority—a binding arbitration, so to speak. That is a large part of the appeal of God or king, as givers of law who stand above the law and the fray of mere humans. (Psychologically, the very arbitrariness of royal whim points to the transcendent status of the ruler as above the law, therefore the one who can invest the law with absolute authority.) This is the background of modern deference to codified civil law, which was originally the edict of the king or of God. On the other hand, tradition has the authority of generations. Especially when expressed in writing, precedent has an objective existence that anyone can refer to—and thus defer to—though always subject to interpretation. This too explains the willingness to abide by the law even when in disagreement, provided the law has this explicit objective existence preserved in writing. It may also explain the authority of religious texts for believers.

Effective rule depends not only on charisma but also on delegation of authority to others, to tradition, and to institutions such as laws and bureaucracies. The appeal of law and administration over the whim of rulers lies in its equal application to all: fairness. A law or rule that does not apply to everyone is considered unjust. The other side of such uniformity is that one size must fit all: it is also unfair when individual circumstance is not considered. Acceptance of authority can grow out of the success of a triumphant player or out of the rule of law through tradition and bureaucracy. When it fails, it can degenerate into either agonistic populism or bureaucracy run amok—or both. Either way, when authority breaks down, politics degenerates into a popularity contest among personalities mostly preselected from a privileged class. Indeed, that is what ‘democracy’ is, as we have come to know it! A true democratic system would not involve election at all, but selection by lottery—a civic duty like jury duty or military service.

Wealth has the dimensions of status and power. It consists of some form of ownership. In our society, every form of property is convertible to cash and measurable by it. Money has universal value by common agreement, to purchase what is needed for comfort, to purchase status, and to command others by purchasing their services. The rich enjoy the use of capital (property used to gain more wealth), the ability to command a wide variety of services money can buy, and the status symbols it can buy: artworks, jewelry, luxury cars and boats, villas maintained by servants, etc. Yet, most people have little capital and their wealth is little more than the momentary means to survive.

In general, money is now the universal measure of value and success. It also enables the accumulation of capital. Yet, status and power may well have been separate in societies that did not use money as we do. Without money as a medium of exchange, possessions alone cannot serve to command others. There must also be the ability to get others to do one’s bidding by paying them or by coercing them by (paid) force of arms. Without money, as a standard quantized medium of exchange, trade must be a direct exchange of goods and services—i.e., barter. All dollars are created equal (just as all people are, theoretically before the law). But the universal equality of units of money only led to its unequal distribution among people. In that sense, money is the root of economic inequality, if not of all evil. If only barter were possible, it would be difficult (short of outright theft) for one person to accumulate very much more than another. Money promotes plunder, legal and otherwise, by its very intangibility and ease of passing from hand to hand.

We are used to the idea of respecting property ownership and obeying the law, and to hierarchical structures in which one follows orders. Some indigenous societies simply rejected the idea of obeying orders or telling others what to do. Status was important to them, but not power over others. Or, rather, they took measures against the possibility of institutionalized power relations in their society. We tend to project modern power relations and structures back upon the past, so that the quest to understand the origins of power presumes current understandings and arrangements. This can blind us to alternative forms of political process, to real choice we may yet have.

Hardly anyone now could disagree with Plato’s idea that only a certain type of well-motivated and wise individual is truly qualified to lead society. That would mean someone unmotivated by status, wealth or power. But there does not seem to be a modern version of his Academy to train statespersons. (Instead, they graduate from business schools or Hollywood.) There are think tanks, but not wisdom tanks. If the political task is to plan humanity’s future, it might better be done by a technocracy of experts in the many disciplines relevant to that task, including global management of population and resources. They would make and enforce laws designed to ensure a viable future.

Such a governing committee might operate by consensus; but society as a whole (potentially the world) would not be ruled by democratically elected representatives. Instead, staggered appointments would be drawn by lottery among qualified candidates. The term of office would be fixed, non-renewable, and only modestly paid. This arrangement would bypass many of the problems that plague modern democracies, beginning with de facto oligarchy. There would be no occasion to curry favor with the public nor fear its disaffection, since the “will of the people” would be irrelevant. Hence, the nefarious aspects of social media (or corporately controlled official media) wouldn’t touch the political process. There would be no election campaigns, no populist demagoguery, no contested voting results, no need for fake news or disinformation. (Validation of knowledge within scientific communities has its own well-established protocols that remain relatively immune to the toxic by-products and skepticism of the Internet Age.)

Admittedly, members of this governing committee would not be immune to bribery or to using the office for personal benefit (just as juries and judges are sometimes corrupted). Spiritual advice before the modern age was to be in the world and not of it. Taking that seriously today may be the only cure for humanity’s age-old obsession with power and glory. Still, technocracy might be an improvement over the present farce of democracy.

[Acknowledgement: many of the ideas in this post were inspired by The Dawn of Everything by David Graeber and David Wengrow, McClelland and Stewart, 2021—a challenging and rewarding read.]

The mechanist fallacy and the prospect of artificial life

The philosophy of mechanism treats all physical reality as though it were a machine. Is an organism a machine? Under what circumstances could a machine become an organism? Clear answers to such questions are important to evaluate the feasibility and desirability of artificial life.

The answer to the first question is negative: an organism is not a machine, because it is not an artifact. The answer to the second question follows from an understanding of how the philosophy of mechanism leads falsely to the conclusion that natural reality can be formally exhausted in thought and recreated as artifact. A machine can become an organism only by designing itself, from the bottom up, as organisms in effect have done. An artificial organism cannot be both autonomous and fully subject to human control, any more than natural organisms are. This trade-off presents a watershed choice: to create artifacts as tools of human intent or to foster autonomous systems that may elude human control and pose a threat to us and all life.

Much of the optimism of genetic engineering rests on treating organisms as mechanisms, whose genetic program is their blueprint. But no natural thing is literally a machine, because (as far as we know) natural reality is found, not made. The quest to engineer the artificial organism from the top down rests on the theoretical possibility of analyzing the natural one exhaustively, just as simulation relies on formal coding of the thing to be simulated. But, unlike machines and other artifacts, no natural thing can be exhaustively analyzed. Only things that were first encoded can be decoded.

As a way of looking, the philosophy of mechanism produces artifacts at a glance.  While this has been very fruitful for technology, imitating organisms is not an effective strategy for producing them artificially, because it can only produce other artifacts. The implicit idealist faith behind theoretical modelling and the notion of perfect simulation is that each and every property of a thing can be completely represented. A ‘property’, however, is itself an artifact, an assertion that disregards a potential infinity of other assertions. The collection of properties of a natural thing does not constitute it, although it does constitute an artifact.

A machine might be inspired by observing natural systems, but someone designed and built it. It has a finitely delimited structure, a precise set of well-defined parts. It can be dismantled into this same set of parts by reversing the process of construction. The mechanistic view of the cosmos assumes that the universe itself is a machine that can be deconstructed into its “true” parts in the same way that an engine can be assembled and disassembled. However, we are always only guessing at the parts of any natural system and how they relate to each other. The basic problem for those who want to engineer life is that they did not make the original.

We cannot truly understand the functioning of even the simplest creature and its genetic blueprint without grasping its complex interactions with environments that are the source and reference of its intentionality. Just as a computer program draws not only upon logic and the mechanics of the computer but also upon the semantically rich environment of the programmer (which ultimately includes the whole of the real world), so the developing embryo, for instance, does not simply unfold according to a program spelled out in genes, but through complex chemical interactions with the uterine environment and beyond. The genetic “program”, in other words, is not a purely syntactic system, but is rich in references that extend indefinitely beyond itself. The organism is both causally and intentionally connected to the rest of the world. Simply identifying genetic units of information cannot be taken as exhaustive understanding of the genetic “code”, any more than identifying units of a foreign language as words implies understanding their meaning.

Simulation involves the general idea that natural processes and objects can be reverse-engineered. They are taken apart in thought, then reconstructed as an artifact from the inferred design. The essence of the Universal Machine (the digital computer) is that it can simulate any other machine exhaustively. But whether any machine, program, artifact, model, or design can exhaustively simulate an organism—or, for that matter, any aspect of natural reality—is quite another question.

The characteristic of thought and language, whereby a rose is a rose is a rose, makes perfect simulation seem feasible. But there are many varieties of rose and every individual flower is unique. The baseball player and the pitching machine may both be called pitchers, but the device only crudely imitates the man, no matter how accurately it hurls the ball. Only in thought are they the “same” action. When a chunk of behavior (whether performed by a machine or a natural creature) seems to resemble a human action, it is implicitly being compared not to the human action itself but to an abstraction (“pitching”) that is understood as the essence of that behavior. Similarly, the essence or structure of an object (the “pitcher”) is only falsely imagined to be captured in a program or blueprint for its construction. Common sense recognizes the differences between the intricate human action of throwing and the mechanical hurling of the ball. Yet, the concept of simulation rests on obscuring such distinctions by conflating all that can pass under a given rubric. The algorithm, program, formalism, or definition is the semantic bottleneck through which the whole being of the object or behavior must be squeezed.

One thing simulates another when they both embody a common formalism. This can work perfectly well for two machines or artifacts that are alternative realizations of a common design. It is circular reasoning, however, to think that the being of a natural thing is exhausted in a formalism that has been abstracted from it, which is then believed to be its blueprint or essence. The structure, program, or blueprint is imposed after the fact, inferred from an analysis that can never be guaranteed complete. The mechanist fallacy implies that it is possible to replicate a natural object by first formalizing its structure and behavior and then constructing an artifact from that design. The artifact will instantiate the design, but it will not duplicate the natural object, any more than an airplane duplicates a bird.

If an organism is not a machine, can a machine be an organism? Perhaps—but only if, paradoxically, it is not an artifact! What begins as an artifact must bootstrap itself into the autonomy that characterizes organism. An organism is self-defining, self-assembling, self-maintaining, self-reproducing—in a word, autopoietic. In order to become an organism, a machine must acquire its own purposes. That property of organisms has come about through natural selection over many generations—a process that depends on birth and death. While a machine exhibits only the intentionality of its designers, the organism derives its own intentionality from participation in an evolutionary contest, through a long history of interactions that matter to it, in an environment of co-participants.

Technological development as we know it expresses human purposes; natural evolution does not. The key concepts that distinguish organism from machine are the organism’s own intentionality and its embodiment in an evolutionary contest. While a machine may be physical, it is not embodied, because embodiment means the network of relationships developed in an evolutionary context. No machine yet, however complex, is embodied in that sense or has its own purposes. Indeed, this has never been the goal of human engineers.

Quite apart from feasibility, we must ask what would be the point of facilitating the evolution of true artificial life, aside from the sheer claim to have done it? The autonomy of organisms limits how they can be controlled. We would have no more control over artificial organisms than we presently have over wild or domesticated ones. We could make use of an artificial ecology only in the ways that we already use the natural one. While it is conceivable that artificial entities could self-create under the right circumstances—after all, life did it—these would not remain within the sort of human control, or even understanding, exerted over conventional machines. We must distinguish clearly between machines that are tools, expressing their designers’ motivations, and machines that are autonomous creatures with their own motivations and survival instincts. The latter, if successful in competing in the biosphere, could displace natural creatures and even all life.

If we wish to retain human hegemony on the planet, there will be necessary limits to the autonomy of our technology. That, in turn, imposes limits on its capabilities and intelligence, especially the sort of general and self-interested intelligence we expect from living beings. We must choose—while we still can—between controllable technology to serve humans and the dubious accomplishment of siring new forms of being that could drive us to extinction. This is a political as well as a design choice. Only clarity of intention can avoid disaster resulting from the naive and confused belief that we can both retain control and create truly autonomous artifacts.


Origins of the sacred

Humanity and religion seem coeval. From the point of view of the religious mind, this hardly requires explanation. But from a modern scientific or secular view, religion appears to be an embarrassing remnant. There must be a reason why religion has played such a central and persistent role in human affairs. If not a matter of genes or evolutionary strategy, it must have a psychological cause deeply rooted in our very nature. Is there a core experience that sheds light on the phenomenon of religion?

The uncanny is one response to unexpected and uncontrolled experience. It is not solely the unpredictable external world that confounds the mind; the mind can produce from within its own depths terrifying, weird, or at least unsettling experiences outside the conscious ego’s comfort zone. One can suffer the troubling realization that the range of possible experience is hardly guaranteed to remain within the bounds of the familiar, and that the conscious mind’s strategies are insufficient to keep it there. The ego’s grasp of this vulnerability, to internal as well as external disturbance, may be the ground from which arises the experience of the numinous, and hence the origin of the notion of the sacred or holy. Essentially it is the realization that there will always be something beyond comprehension, which perhaps underlies the familiar like the hidden bulk of an iceberg.

To actually experience the numinous or “wholly other” seems paradoxical to the modern mind, given that all experience is considered a mediated product of the biological nervous system. For the noumenon is that which, by Kant’s definition, cannot be experienced at all. Its utter inaccessibility has never been adequately rationalized, perhaps because our fundamental epistemic situation precludes knowing the world-in-itself in the way that we know our sensory experience. Kant acknowledged this situation by clearly distinguishing phenomenal experience from the inherent reality of things-in-themselves—a realm off-limits to our cognition by definition. He gave a name to that transcendent realm, choosing to catalogue it as a theoretical construct rather than to worship it. Yet, reason is a latecomer, just as the cortex is an evolutionary addition to older parts of the brain. We feel things before we understand them. Rudolf Otto called this felt inaccessibility of the innate reality of things its ‘absolute unapproachability’. He deemed it the foundation of all religious experience. Given that we are crucially dependent on the natural environment, and are also psychologically at the mercy of our own imaginings, I call it holy terror.

In addition to being a property of things themselves, realness is a quality with which the mind imbues certain experiences. Numinosity may be considered in the same light. The perceived realness of things refers to their existence outside of our minds; but it is also how we experience our natural dependency on them. Real things command a certain stance of respect, for the benefit or the harm they can bring. Perhaps perceived sacredness or holiness instills a similar attitude in regard to the unknown. In both cases, the experienced quality amounts to judgment by the organism. Those things are cognitively judged real that can affect the organism for better or worse, and which it might affect in turn. Things judged sacred might play a similar role, in regard not to the body but to the self as a presumed spiritual entity.

The quality of sacredness is not merely the judgment that something is to be revered; nor is holiness merely the judgment that something or someone is unconditionally good. These are socially-based assessments secondary to a more fundamental aspect of the numinous as something judged to be uncanny, weird, otherworldly, confounding, entirely outside ordinary human experience. The uncanny is at once real and unreal. The sacred commands awe in the way that the real compels a certain involuntary respect. Yet, numinous experiences do more than elicit awe. They also suggest a realm entirely beyond what one otherwise considers real. Paradoxically, this implies that we do not normally know reality as it really is.

Indeed, as Kant showed, we cannot know the world as it is “in itself,” apart from the limited mediating processes of our own consciousness. All experience is thus potentially uncanny; the very fact that we consciously experience anything at all is an utter mystery! We can never know with certainty what to make of experience or our own presence as experiencers. It is only through the mind’s chronically inadequate efforts to make sense that anything can ever appear ordinary or profane. Mystery does not just present a puzzle that we might hope to resolve with further experience and thought. Sometimes it is a tangible revelation of utter incomprehensibility, which throws us back to a place of abject dependency.

We are self-conscious beings endowed with imagination and the tendency to imbue our imaginings with realness. We have developed the concept of personhood, as a state distinct from the mere existence of objects or impersonal forces. We seem compelled in general to imagine an objective reality underlying experience. A numinous experience is thus reified as a spiritual force or reality, which may be personified as a “god.” When the relationship of dependence—on a reality beyond one’s ken and control—is thus personified, it aligns with the young child’s experience of early dependence on parents, who must seem all powerful and (ideally) benevolent. Hence, the early human experience of nature as the Great Mother—and later, as God the Father. In the modern view, these family figures reveal the human psyche attempting to come to terms with its dependent status.

But nature is hardly benevolent in the consistent way humans would like their parents to be. Psychoanalysis of early childhood reveals that even the mother is perceived as ambivalent, sometimes depriving and threatening as well as nourishing. The patriarchal god projects the male ego’s attempt to trump the intimidating raw power of nature (read: the mother) by defining a “spiritual” (read: masculine) world both apart from it and somehow above it. The Semitic male God becomes the creator of all. He embodies the ideal father, at once severe and benevolent. But he also embodies the heroic quest to self-define and to re-create the world to human taste. In other words, the human aspiration to become as the gods.

On the one hand, this ideal projects onto an invisible realm the aspiration to achieve the moral perfection of a benevolent provider, and reflects how one would wish others (and nature) to behave. It demands self-mastery, power over oneself. The path of submission to a higher power acknowledges one’s abject dependence in the scheme of things, to resist which is “sin” by definition. On the other hand, it represents the quest for power over the other: to turn the tables on nature, uncertainty, and the gods—to be the ultimate authority that determines the scheme of things.

One first worships what one intends to master. Worship is not abject submission, but a strategy to dominate. Religion demonstrates the human ability to idealize, capture, and domesticate the unknown in thought. It feigns submission to the gods, even while its alter ego—science—covets and acquires their powers. Thus, the religious quest to mitigate the inaccessibility and wrath of God, which lurks behind the inscrutability of nature, is taken over by the scientific quest for order and control. The goal is to master the natural world by re-creating it, to become omniscient and omnipotent.

Relations of domination and submission play out obviously in human history. A divinely authorized social relationship is classically embodied in two kinds of players: kings and peasants. Yet, history also mixes these and blurs boundaries. Like some entropic process, the quest for empowerment is dispersed, so that it becomes a universal goal no longer projected upon the gods or reserved to kings. We see this “democratization” in the modern expectation of social progress through science and global management. While enjoying the benefits of technology, deeply religious people may not share this optimism, believing instead that power rests forever in the inscrutable hands of God. Those who imagine a judgmental, vindictive, and jealous male god have the most reason to be doubtful of human progress, while those who identify with the transcendent aspect of religion are more likely to feel themselves above specific outcomes in the historical fray.

The ability of mind to self-transcend is a double-edged sword. It is the ability to conceive something beyond any proposed limit or system. This enables a dizzying intimation of the numinous; more importantly, it enables the human being to step beyond mental confines, including ideas and fears about the nature of reality and what lies beyond. On the one hand, we know that we know little for certain. To fully grasp that inspires the goosebumps of holy terror. One defensive response is to pretend that some text, creed, or dogma provides an ultimate assurance; yet we know in our bones that is wishful thinking. The experience of awe may incline one to bow down before the Great Mystery. Yet, we are capable of knowledge such as it can be, for which we (not the gods) are responsible. We are cursed and blessed with at least a measure of choice over how to relate to the unknown.

Uncommon sense

Common sense is a vague notion. Roughly it means what would be acceptable to most people. Yet how can there be such a thing as common sense in a divided world? And how can a common understanding of the world be achieved in the face of information that is doubly overwhelming—too much to process and also unreliable?

In half a century, we have gone from a dearth of information crucial for an informed electorate, to a flood of information that people ironically cannot use, do not trust, and are prone to misuse. We now rely less (and with more circumspection) on important traditional appraisers of information, such as librarians, teachers, academics and peer-reviewed journals, text-book writers, critics, censors, journalists and newscasters, civil and religious authorities, etc. The Internet, of course, is largely responsible for this change. On the one hand, it has democratized access to information; on the other, it has shifted the burden of interpreting information—from those trained for it onto the unprepared public, which now has little more than common sense to rely upon to decide what sources or information to trust.

Which brings us to a Catch-22: how to use common sense to evaluate information when the formation of common sense depends on a flow of reliable information? How does one get common sense? It was formerly the role of education to decide what information was worthy of transmission to the next generation, and to impart the wisdom of how to use it. (Also, at a time when there was less specialized expertise, people had a wider general experience and competence of their own to draw upon.) Now there is instant access to a plethora of influences besides the voices of official educators and recognized experts. The nature of education itself is up for grabs in a rapidly changing present and unpredictable future. Perhaps education should now aim at preparation for change, if such is not an oxymoron. That sort of education would mean not learning facts or skills that might soon become obsolete, but meta-skills of how to adapt and how to use information resources. In large part, that would mean how to interpret and reconcile diverse claims.

One such skill is “reason,” meaning the ability to think logically. If we cannot trust the information we are supposed to think about, at least we could trust our ability to think. If we cannot verify the facts presented, at least we can verify that the arguments do not contradict themselves. Training in critical thinking, logic, research protocols, data analysis, and philosophical critique is appropriate preparation for citizenship, if not for jobs. This would give people the socially useful skill to evaluate for themselves information that consists inevitably of the claims of others rather than “facts” naively presumed to be objective. Perhaps that is as close as we can come to common sense in these times.

Since everything is potentially connected to everything else, even academic study is about making connections as well as distinctions. The trouble with academia partly concerns the imbalance between analysis (literally taking things apart) and synthesis (putting back together a whole picture). Intellectual pursuit has come to overemphasize analysis, differentiation, and hair-splitting detail, often to the detriment of the bigger picture. Consequently, knowledge and study become ever more specialized and technical, with generalists reduced to another specialty. The result is an ethos of bickering, which serves to differentiate scholars within a niche more than to sift ideas ultimately for the sake of a greater synthesis. This does not serve as a model of common sense for society at large.

Technocratic language makes distinctions in the name of precision, but obstructs a unifying understanding that could be the basis for common sense. Much technical literature is couched in language that is simply inaccessible to lay people. Often it is spiced with gratuitous equations, graphs, and diagrams, as though sheer quantification or graphic summaries of data automatically guarantee clarity or plausibility, let alone truth. Sometimes the arguments are opaque even to experts outside that field. Formalized language and axiomatic method are supposed to structure thought rigorously, to facilitate deriving new knowledge deductively. Counter-productively, a presentation that serves ostensibly to clarify, support, and expand on a premise often seems to obfuscate even the thinking of those presenting it. How can the public assimilate such information, which deliberately misses the forest for the trees? How can we have confidence in complex argumentation that pulls the wool over the eyes even of its proponents?

Academic writing must meet formal requirements imposed by the editors of journals. There are motions to go through which have little to do with truth. Within such a framework, literary merit and even skill at communication are not required. Awkward complex sentences fulfill the minimal requirements of syntax. While this is frustrating for outsiders, such formalism permits insiders to identify themselves as members of an elite club. The danger is inbreeding within a self-contained realm. When talking to their peers, academics may feel little need to address the greater world.

For the preservation of common sense, an important lay skill might be the ability to translate academese, like legal jargon, into plain language. One must also learn to skim through useless or misleading detail to get to the essential points. Much popular non-fiction, like some academic books, ironically has few main ideas (and sometimes but one), amply fluffed out to book length with gratuitous anecdotes to make it appeal to a wider audience. Learning to recognize the essential and sort the wheat from the chaff now seems like a basic survival skill even outside academia.

Perhaps as a civilization we have simply become too smart for our own good. There is now such a profusion of knowledge that not even the smartest individuals, with the time to read, can keep up with it. Somehow the information bureaucracy works to expand technological production. But does it work to produce wisdom that can direct the use of technology? The means exist for global information sharing and coordination, but is there the political will to do the things we know are required for human thriving?

Part of the frustration of modern times is the sense of being overwhelmed yet powerless. We may suffer in having knowledge without the power to act effectively, as though we had heightened sensation despite physical paralysis. Suffering is a corollary of potential action and control. Suffering can occur only in a central nervous system, which serves both to inform the creature of its situation and to provide some way to do something about it. Sensory input is paired with motor output; perception is paired with response.

Cells do not suffer, though they may respond and adapt (or die). It is the creature as a whole that suffers when it cannot respond effectively. If society is considered an organism, individual “cells” may receive information appropriate at the creature level yet be unable to respond to it at that level. Perhaps that is the tragedy of the democratic world, where citizens are expected to be informed and participate (at least through the vote) in the affairs of society at large—and to share its concerns—but are able to act only at the cellular level. To some extent, citizens have the same information available to their leaders, who are often experts only in the art of staying in power. Citizens may have a better idea of what to do, but are not positioned to do it.

Listening to the news is a blessing when it informs you concerning something you can plausibly do. Even then, one must be able to recognize what is actual news and distill it from editorial, ideology, agenda, and hype. Otherwise it is just another source of anxiety, building a pressure with no sensible release. To know what to do, one must also know that it is truly one’s own idea and free decision and not a result of manipulation by others. That should be the role of common sense: to enable one to act responsibly in an environment of uncertainty.

Unfortunately, human beings tend to abhor uncertainty—a dangerous predicament in the absence of reliable information and common sense. The temptation is to latch onto false certainties to avoid the sheer discomfort of not knowing. These can serve as pseudo-issues, whose artificial simplicity functions to distract attention from problems of overwhelming complexity. Pseudo-issues tend to polarize opinion and divide people into strongly emotional camps, whose contentiousness further distracts attention from the true urgencies and the cooperative spirit required to deal with them. While common sense may be sadly uncommon, it remains our best hope.