The found and the made

There is a categorical difference between natural things and artifacts. The latter we construct, the former we simply encounter. We can have certainty only concerning our own creations, because—like the constructs of mathematics—they alone are precisely what they are defined to be. For this reason, the eighteenth-century Neapolitan thinker Giambattista Vico argued that knowledge of human institutions was more reliable than knowledge of nature.

If this distinction was glossed over in the early development of science, it was probably because natural philosophers believed that nature is an artifact—albeit created by God rather than by human beings. We were positioned to understand the mind of God because we were made in God’s image. Believing that the natural world was God’s thought, imposed on primal substance, the first scientists were not obliged to consider how the appearance of the world was a result of their own minds’ impositions. Even when that belief was no longer tenable, the distinction between natural things and artifacts continued to be ignored because many natural systems could be assimilated to mathematical models, which are artifacts. Because they are perfectly knowable, mathematical models—standing in for natural reality—enable prediction.

According to Plato, the intellect is privileged to have direct access to a priori truths, whereas sensory knowledge is at best indirect and at worst illusory. In a parallel vein, Descartes claimed that while appearances could deceive (as in Plato’s Cave), one could not be deceived about the fact that such appearances occur. Kant, however, drew a different distinction: one has access to the phenomenal realm (perception) but not to the noumenal realm (whatever exists in its own right). The implicit assumption of science was that scientific constructs—and mathematics in particular—correspond to the noumenal realm, or at least correspond better than sensory perception does.

The usefulness of this assumption rests in practice on dealing only with a select variety of natural phenomena: namely, those that can be effectively treated mathematically. Historically this meant simple systems defined by linear equations, since only such equations could be manually solved. The advent of computers removed this limitation, enabling the mathematical modelling of non-linear phenomena. But it does not remove the distinction between artifact and nature, or between the model and the real phenomenon it models.

The model is a product of human definitions. As such it is well-defined, finite, relatively simple, and oriented toward prediction. The real phenomenon, in contrast, is ambiguous and indefinitely complex, hence somewhat unpredictable. Definition is a human action; definability is not a property of real systems, which cannot be assumed finite or clearly delimited. The model is predictable by definition, whereas the real system is predictable only statistically, after the fact, if at all.

In part, the reason the found can be confused with the made is that it is unclear what exactly is found, or what finding and making mean in the context of cognition. At face value, it seems that the “external” world is given in the contents of consciousness. But this seemingly real and external world is certainly not Kant’s noumena, the world-in-itself. Rather, the appearance of realness and externality is a product of the mind. It presumes the sort of creative interpretation that Helmholtz called ‘unconscious inference.’ That is, for reasons of adaptation and survival, the mind has already interpreted sensory input in such a way that the world appears real and external, consisting of objects in space and events in time, and so forth. Overlaid on this natural appearance are ideas about what the world consists of and how it works—ideas that refine our biological adaptation. To the modern physicist it may appear to consist of elementary particles and fundamental forces “obeying” natural laws. To the aborigine or the religious believer it may seem otherwise. Thus, we must look to something more basic, directly common to all, for what is “immediately” found, prior to thought.

Acknowledging that all subjects have in common a realm of perceptual experience (however different for each individual) presumes a notion of subjectivity, contrary to the natural realism which views experience as a window on the world independent of the subject. What is directly accessible to the mind is an apparition in the field of one’s own consciousness: the display that Kant called the phenomenal realm. What we find is actually something the brain has made in concert with what is presumed to be a real external environment, which includes the body of which the brain is a part. This map (the phenomenal realm) is a product of the interaction of mind and the noumenal territory. What is the nature of this interaction? And what is the relationship between the putatively real world and the consciousness that represents it? Unsurprisingly, so far there has been no scientific or philosophical consensus about the resolution of these questions, often referred to as the “hard problem of consciousness.” Whatever the answer, our epistemic situation seems to be such that we can never know reality in itself and are forever mistaking the map for the territory.

Whether or not the territory can be truly found (or what finding even means), the map is something made, a representation of something presumably real. But how can you make a representation of something you cannot find? What sort of “thing” is a representation or an idea, in contrast to what it represents or is an idea of?

A representation responds to something distinct from it. A painting may produce an image of a real scene. But copying is the wrong metaphor to account for the inner representation whereby the brain monitors and represents to itself the world external to it. It is naïve to imagine that the phenomenal realm is in any sense a copy of the external world. A better analogy than painting is map making. A road map, for example, is highly symbolic and selective in what it represents. If drawn to scale, it faithfully represents distances and spatial relationships on a plane. A typical map of a subway system, however, represents only topological features such as connections between the various routes. The essential point is that a map serves specific purposes that respond to real needs in a real environment, but is not a copy of reality. To understand the map as a representation, we must understand those purposes, how the map is to be used.

This map must first be created, either in real time by the creature through its interactions with its environment, or by the species through its adaptive interactions, inherited by the individual creature. How does the brain use its map of the world? The brain is sealed inside the skull, with no direct access to the world outside. The map is a sort of theory concerning what lies outside. The mapmaker has only input signals and motor commands through which to make the map in the first place and to use it to navigate the world. An analogy is the submarine navigator or the pilot flying by instrument—with the strange proviso that neither navigator nor pilot has ever set foot outside their sealed compartment.

The knowledge of the world provided by real-time experience, and the knowledge inherited genetically, both consist in inferences gained through feedback. Sensory input leads (say, by trial and error) to motor output, which influences the external world in such a way that new input is provided, resulting in new motor output, and so on. The pilot or navigator has an idea of what is causing the inputs, upon which the outputs are in turn acting. This idea (the map or theory) works to the extent it enables predictions that do not lead to disaster. On the genetic level, natural selection has resulted in adaptation by eliminating individuals with inappropriate connections. On the individual level, real-time learning operates similarly, by eliminating connections that do not lead to a desired result. What the map represents is not the territory directly, but a set of connections that work to at least permit the survival of the mapmaker. It is not that the creature survives because the map is true or accurate; rather, the map is true or accurate because the creature survives!
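
To make that feedback logic concrete, here is a minimal sketch (my illustration, not anything specified in the essay): a toy mapmaker that never inspects its environment directly, only its inputs and the consequences of its outputs. The environment table and the stimulus names are invented for the example; what survives the pruning is not a copy of the territory but the set of connections that were never punished.

```python
import random

# Toy mapmaker: the hidden "territory" pairs each stimulus with one safe
# response. The agent never reads this table; it only receives feedback.
ENV = {"heat": "withdraw", "food": "approach", "shadow": "freeze"}  # hypothetical
ACTIONS = ["withdraw", "approach", "freeze"]

# The "map" starts by permitting every stimulus-action connection;
# feedback eliminates the connections that lead to disaster.
candidates = {stimulus: set(ACTIONS) for stimulus in ENV}

for trial in range(200):
    stimulus = random.choice(list(ENV))                   # sensory input
    action = random.choice(sorted(candidates[stimulus]))  # motor output
    if action != ENV[stimulus]:                           # feedback: disaster
        candidates[stimulus].discard(action)              # prune the connection

# The surviving map is "accurate" only in the sense that its bearer survived.
print({s: sorted(a) for s, a in candidates.items()})
```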

The connections involved are actively made by the organism, based on its inputs and outputs. They constitute a representation or map insofar as an implicit or explicit theory of reality is involved. While such connections (in the physical brain) must have a physical and causal basis (as neural synapses, for example), they may be viewed as logical and intentional rather than physical and causal. Compare the function of a wiring diagram for an electrical appliance. From an engineering point of view, the soldered connections of the wires and components are physical connections. From a design point of view, the wiring diagram expresses the logical connections of the system, which include the purposes of the designer and the potential user. In the case of a natural brain, the organism is its own designer and makes the connections for its own purposes. The brain can be described as a causal system, but such a description does not go far to explain the neural connectivity or behavior of the organism. It certainly cannot explain the existence of the phenomenal world we know in consciousness.

What’s in a game?

Games are older than history. They are literally fascinating. The ancient Greeks took their sports seriously, calling them the Olympic games. Board games, card games, and children’s games have structured play throughout the ages. Such recreations continue to be important today, especially in online or computer gaming. They underline the paradoxical seriousness of play and the many dimensions of the concept of game. These include competition, cooperation, entertainment and fun, gratuitousness, chance and certainty, pride at winning and a sense of accomplishment. Besides the agonistic aspect of sports, armies play war games and economists use game theory. The broad psychological significance of the game as a cognitive metaphor calls for wider recognition of how the notion mediates experience and structures thought. The mechanist metaphor that still dominates science and society is grounded in the general idea of system, which is roughly equivalent to the notion of game. Both apply to how we think of social organization. The game serves as a powerful metaphor for daily living: “the games people play.” It is no wonder so many people are taken by literal gaming online, and by activities (such as business and war) that have the attributes of competitive games.

While games are old, machines are relatively new. A machine is a physical version of a system, and thus has much in common with a game. The elements of the machine parallel those of the game, because each embodies a well-defined system. While the ancient Greeks loved their games, they were also enchanted by the challenges of clearly defining and systematizing things. Hence their historical eminence in Western philosophy, music theory, and mathematics. Euclid generalized and formalized relationships discovered through land measurement into an abstract system—plane geometry. Pythagoras systematized the harmonics of vibrating strings. Today we call such endeavors formalization. We recognize Euclid’s geometry as the prototype of a ‘formal axiomatic system’, which in essence is a game. Conversely, a game is essentially a formal system, with well-defined elements, actions and rules. So are a machine and a social or political system. As concepts, they all bear a similar appeal, because they are clear and definite in a world that is inherently ambiguous.

The machine age began in earnest with the Industrial Revolution. Already Newton had conceived the universe as a machine (his “System of the World”). Descartes and La Mettrie had conceived the human and animal body as a machine. Steam power inspired the concepts of thermodynamics, which extended from physics to other domains such as psychology. (Freud introduced libido on the model of fluid dynamics.) The computer is the dominant metaphor of our age—the ultimate, abstract, and fully generalized universal machine, with its ‘operating system’. Using a computer, like writing a program, is a sort of game. We now understand the brain as an extremely complex computer and the genetic code as a natural program for developing an organism. Even the whole universe is conceived by some as a computer, the laws of physics its program. These are contemporary metaphors with ancient precedents in the ubiquitous game.

Like a formal system, a game consists of a conceptual space in which action is well-defined. This could be the literal board of a board game or the playing field of a sport. There are playing pieces, such as chess pieces or the members of a soccer or football team. There are rules for moving them in the space (such as the ways chess pieces can move on the board). And there is a starting point from which the play begins. There is a goal and a way to know if it has been reached (winning is defined). A game has a beginning and an end.

A formal system has the elements of a game. In the case of geometry or abstract algebra, the defined space is an abstraction of physical space. The playing pieces are symbols or basic elements such as “point,” “straight line,” “angle,” “set”, “group,” etc. There are rules for manipulating and combining these elements legitimately (i.e., “logically”). And there are starting points (axioms), which are strings of symbols already accepted as legitimate. The goal is to form new strings (propositions), derived from the initial ones by means of the accepted moves (deduction). To prove a statement is to derive it from statements already taken on faith. This corresponds to lawful moves in a game.
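
As a concrete illustration (mine, not the essay’s), Douglas Hofstadter’s well-known MIU puzzle shows how literally a formal system is a game: the symbols M, I, and U are the playing pieces, the string MI is the axiom (the starting position), four rewriting rules are the legal moves, and every string reachable from the axiom counts as a theorem.

```python
# Hofstadter's MIU system: a formal axiomatic system played as a game.
def moves(s: str) -> set[str]:
    """Return all strings reachable from s by one legal move."""
    out = set()
    if s.endswith("I"):              # Rule 1: xI  -> xIU
        out.add(s + "U")
    if s.startswith("M"):            # Rule 2: Mx  -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):      # Rule 3: III -> U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):      # Rule 4: UU  -> (deleted)
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

# Proof is play: derive new theorems from the axiom by legal moves only.
theorems, frontier = {"MI"}, {"MI"}
for _ in range(4):                   # four rounds of play
    frontier = {t for s in frontier for t in moves(s)} - theorems
    theorems |= frontier
print(sorted(theorems, key=len))
```

(The famous question of whether MU is a theorem can never be settled by more rounds of play alone; it happens to be underivable, which itself illustrates the difference between playing a game and reasoning about it.)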

Geometry is a game of solitaire insofar as there is no opponent. Yet, the point of proof is to justify propositions to other thinkers as well as to one’s own mind, by using legitimate moves. One arrives at certainty by carefully following unquestioned rules and assumptions. The goal is to expand the realm of certainty by leading from a familiar truth to a new one. It’s a shared game insofar as other thinkers share that goal and accept the rules, assumptions, and structure; it’s competitive insofar as others may try to prove the same thing, or disprove it, or dispute the assumptions and conventions.

Geometry and algebra were “played” for a long time before they were fully formalized. Formalization occurred over the last few centuries, through efforts to make mathematics more rigorous, that is, more consistent and explicitly well-defined. The concept of system, formalized or not, is the basis of algorithms such as computer programs, operating systems, and business plans. Machines, laws, rituals, blueprints—even books and DNA—are systems that can potentially be expressed as algorithms, which are instructions to do something. They involve the same elements as a game: goal, rules, playing pieces, operations, field of action, starting and ending point.

Game playing offers a kind of security, insofar as everything is clearly defined. Every society has its generally understood rules and customs, its structured spaces such as cities and public squares, and its institutions and social systems. Within that context, there are psychological and social games that people play, such as politics, business, consumption, and status seeking. There are strategies in personal negotiation, in legal proceedings, in finance, and in war. These are games in which one (or one’s team) plays against opponents. The economy is sometimes thought of as a zero-sum game, and game theory was first devised in economic analysis to study strategies.

Yet, economic pursuit itself—“earning a living,” “doing business,” “making” money, “getting ahead”—serves also as a universal game plan for human activity. The economy is a playing field with rules and goals and tokens (such as money) to play with. In business or in government, a bureaucracy is a system that is semi-formalized, with elements and rules and a literal playing field, the office. The game is a way to structure activity, time, experience and thought. It serves a mediating cognitive function for each individual and for society at large. Conversely, cognition (and mind generally) can be thought of as a game whose goal is to make sense of experience, to structure behavior, and to win in the contest to survive.

The game metaphor is apt for social intercourse, a way to think of human affairs, especially the in-grouping of “us” versus “them.” It is unsurprising that systems theory, digital computation, and game theory arose around the same time, since all involve formalizing common intuitive notions. Human laws are formulas that prescribe behavior, while the laws of nature are algorithms that describe observed patterns in the natural world. The task of making such laws is itself a game with its own rules—the law-maker’s rules of parliamentary procedure and jurisprudence, or the scientist’s experimental method and theoretical protocol. Just as the game can be thought of as active or static, science and law can be thought of as human activities or as bodies of established knowledge. Aside from its social or cognitive functions, a game can be viewed as a gratuitous creation in its own right, an entertainment. It can be either a process or a thing. A board game comes in a box. But when you play it, you enter a world.

Thinking of one’s own behavior as game-playing invites one to ask useful questions: what is my goal? What are the rules? What is at stake? What moves do I take for granted as permissible? How is my thinking limited by the structuring imposed by this game? Is this game really fun or worthwhile? With whom am I playing? What constitutes winning or losing? How does this game define me? What different or more important game could I be playing?

Every metaphor has its limits. The game metaphor is a tool for reflection, which can then be applied to shed light on thought itself as a sort of game. Creating and applying metaphors too is a kind of game.


The Gender Fence

Apart from biological gender, is there a masculine or feminine mentality? Are men from Mars and women from Venus? In this era when gender identity is up for grabs, can one speak meaningfully about masculine and feminine ways of being and gender differences, apart from biologically determined individuals?

The very notion of gender choice is subtly tricky. For, it may be an essentially masculine idea—a result of social processes and intellectual traditions long dominated by men. Under patriarchy, after all, it is men (at least some men) who have had preferential freedom of choice over their lives. Of course, generalizations are generally suspect. Exceptions always abound. Nevertheless, the fact of exceptions (called outliers in statistics) does not negate the validity of apparent patterns. It only raises deeper questions.

So, here’s my tentative and shaky idea, to take or leave as you please: for better and worse, men tend to be more individualistic than women. One way this manifests is in terms of boundaries. The need for “good boundaries” is a modern cliché of pop psychology. But this too is essentially a masculine idea, since men seek to differentiate their identity more than women. This inclines them to maintain sterner boundaries, to favour rules and structures, to be competitive and authoritarian, to be self-serving. Women, in contrast, tend to be more nurturing, giving, accommodating and accepting because of their biological role as mothers and their traditional social role as keepers of the hearth and the domestic peace. Which means they appear to have weaker boundaries. They do not separate their identity so clearly from those who depend on them. They literally have no boundary with the fetus growing within, and a more nebulous boundary with the infant and child after birth. Giving often means giving in, and nurturance often means placing the needs of others above one’s own. Men have systematically exploited this difference to their own advantage. It is in their interest to maintain that advantage by maintaining boundaries—that is, to continue being self-centred individualists.

In many ways, this division of labour has worked to maintain society—that is, society as a patriarchal order. Yin and yang complement each other, perhaps like positive and negative—like protons and electrons? (Consider the metaphor: a proton is nearly 2000 times more massive than an electron and is thought of as a solid object, whereas an electron is considered little more than a fleeting bit of charge circling about it!) Traditionally, men have been the centre of gravity, women their minions, servants, and satellites. In the modern nuclear family, men were the bread winners, disciplinarians and authority figures, the autocrats of the breakfast table. One wonders how the dissolved post-modern family, with separated parents (ionized atoms?), affects the emerging gender identities and boundaries of children.

Gender issues loom disproportionately large in the media these days, in part serving as a distracting pseudo-issue in society at large. However, emerging choice about gender identity may be a good thing, with social significance extending beyond the individuals involved. It may mean that the centre of gravity of individual identity is shifting toward the feminine, away from traditional masculine values that have been destroying the world even while creating it. Women have long had the model of male roles dangling before them as the avenue to possible freedom, whereas men have been more obliged to buck prejudice to identify with nurturance and endure persecution to identify with the feminine. To put it differently, individuation (and its corollary, individual choice) has become less polarized. It has lost some of its association with males and has become more neutral. In principle, at least, an “individual” is no longer a gendered creature to such an extent. That could also mean a shift away from reproductivity as a basis for identity, which would benefit an overpopulated world. But what does it imply for the mentalities of masculine and feminine?

Masculine and feminine identities are grounded in biology and evolutionary history. That is, they are natural. The modern evolution of the concept of the individual reflects the general long-term human project to deny or escape biological determinants, to secede from nature. But, paradoxically, that too is predominantly a masculine theme! “Individuation” means not only claiming an identity distinct from others in the group. The psychological characteristics of individuality have also meant differentiating from the feminine and from “mother” nature: alienation from the natural. Isn’t it predominantly men who aspire to become non-biological beings, to create a human world apart from nature and a god-like identity apart from animality? To seize control of reproduction (in hospitals and laboratories) and even to duplicate life artificially? Not bound to nature through the womb, men seek to expand this presumed advantage through technology, business and culture, even creating cities as refuges from the wild. However, their ideological rebellion against nature and denial of biological origins is given the obvious lie by the male sex drive and by male imperatives of domination that clearly have animal origins. Are women, then, less hypocritical and perhaps more accepting of their biological roots? Are those roots in fact more socially acceptable than men’s?

If the centre of gender gravity is moving toward the feminine, what could be the consequences for society, for the world? Certainly, a reaction by patriarchy to the threat of “liberal” (read: feminine?) values might be expected and is indeed seen around the world. We could expect an increasing preoccupation with boundaries, which are indeed madly invaded and defended as political borders. Power asserts itself not only against other power-wielding males, but also to defend against the very idea of an alternative to power relations. Men egg each other on in their conspiracy to maintain masculine values of domination, control, the pursuit of money and status, etc. Increasing bureaucracy may be another symptom, since it thrives on structure and hierarchy.

The human overpopulation and destruction of the planet should militate against men and women continuing in their traditional biological roles as progenitors and the traditional social goal of “getting ahead.” If so, what will they do with their energies instead? The fact that modern women can escape their traditional lot by embracing masculine values and goals is hardly encouraging. Far better for the world if they claim their individuality by re-defining themselves (and femininity) from scratch, neither on the basis of biology nor in the political world defined by men. On the other hand, one could take heart in the fact that some men are abandoning traditional macho identities. There is hope if that shift is widespread and more than superficial: if gay rights and gender freedom, for example, represent an emerging mentality different from the one that is destroying the world.

On the other hand, boundaries are sure to figure in any emerging sense of individuality, in which masculinity and femininity may continue to play a role. Can men be real men and women be real women in a way that meets the current needs of the planet? Or should the gender fence be torn down? As a male, I like to think there is a positive ideal of masculinity to embrace. This would involve strength, wisdom, objectivity, benevolence, compassion, justice, etc. Yet, I don’t see why these values should be considered masculine more than feminine. Nor should nurturance, accommodation, patience and peace-keeping be more feminine than masculine. Rather, all human beings should aspire to all these values. If the division of labour according to biological gender is breaking down, there is nothing for it but that a moral “individual” should embrace all the qualities that used to be considered gendered. “In the kingdom of heaven there is neither male nor female.” To achieve these ideals may mean transcending the natural and social bases of gender differences—indeed, to ignore gender as a basis for identity.

Why is there anything at all?

Perhaps the most basic question we can ask is: why does the world exist? In other words, why is there anything at all rather than nothing? This is a matter every child ponders at some time, and one that adults may dismiss as unanswerable or irrelevant to getting on with life. Yet, philosophers, theologians, and even scientists have posed the question seriously and proposed various answers over the ages. Let us back up a moment, however, to realize that questions are not simply mental wonderings but also a certain kind of statement in language, which is notorious for shaping as well as reflecting how we think.

The first questionable element of the question is ‘why.’ What kind of explanation is expected? Answers fall into two broad categories: causal and intentional. Why did the doorbell ring? Well, because electricity flowed in a circuit. Alternatively: because someone at the door pushed the button. Sometimes the difference is not so clear. Newton wondered why the apple fell to the ground. Obviously, because of “gravity,” which he conceived as a universal force between all matter. But he was reluctant to speak of the nature of that force, which he privately identified with the will of God. What guides planets in their orbits around the sun? Well, maybe angels? So, perhaps his answer to our question—like that of many of his contemporaries—is that the world exists because God created it. But then, child and adult may reasonably wonder where God came from. On the other hand, we now view human, if not divine, actions more like the doorbell: in terms of neuro-electrical circuitry.

The second element to question is ‘is.’ This little verb can have rather different meanings. “The apple is on the tree” tells us about location. “There is an apple on the tree” asserts its existence. “There are seven apples on the tree” identifies a collection of things (apples) with a number. This identity can be more abstract: “one plus one is two.” Which sense is intended in our question?

The third questionable element is ‘anything,’ which suggests a contrast to ‘nothing.’ It raises another question: can we really conceive of nothing? And this raises yet another question: who is asking the question and how are they implicated in any possible answer? We begin to see the question as a set-up, in the sense that it is inquiring about more than just the dubious existence of the world. It tacitly asks about us, the questioners, and about our patterns of thought as not-so-innocent bystanders.

The theological answer (the world exists because God created it) includes his having created us within it—that is, two kinds of things, matter and souls, or things and observers of things. A more existential or phenomenal version of the question contains the same dualism. “Why is there anything (for me) to experience?” implies the question “Why do I exist?” It opens the Pandora’s box of mind-body dualism: how can there be minds (or spirits) in a physical universe? Or: how can consciousness (phenomenal experience) be produced by a material organ, the brain?

Such considerations shape the kinds of arguments that can be made to answer our question. One approach could be called the Anthropic Argument: we could only be here to ask the question if there is a world for us to exist in. That world would have to have specific properties that permit the existence of organisms with conscious minds. The most basic property of such a favorable world is “existence.” Therefore, the universe must exist because we do! Admittedly, that’s an odd sort of explanation—a bit like reasoning backward from our existence as creatures to the inevitability of a Creator.

A different approach might be called the Argument from Biology. Just as the world must exist and be a certain way for us to exist, so must we see the world in certain ways in order for us to exist. For example, we must view the world in terms of objects in space (and time). Our categories of thought are derived from our cognition, which is grounded in our biological (survival) needs. The concept of nothing(ness) abstracts our actual experience with things and their absence (for example, an empty container). But the container itself is a sort of thing. The idea of ‘object’ for us implies the idea of ‘space’, and vice-versa, so that we cannot really imagine empty space—or truly nothing. At least for our mentality as a biological organism, there cannot be nothing without something. The fact that language can posit the world not existing is paradoxical, since the thought is based on experience of something. Therefore, the world exists because we cannot conceive it not existing!

A similar argument might be called the Argument by Contradiction: perhaps one can imagine a universe without physically embodied minds, and perhaps even a universe that is entirely empty physically (the empty container, which nevertheless leaves the container existing). But, in any case, these are the imaginings of a physically embodied mind, living in the one universe that we know does exist (not empty). We exist, therefore the world does!

Perhaps, similarly, one can imagine a phenomenal blankness (an empty mind), devoid of sensations, thoughts and feelings, and even any conceivable experience. But there is still a point of view from which “someone” is doing the imagining, which is itself a phenomenal experience, so not empty after all. (Nor can it be empty of matter, since we are material beings here imagining it and thinking about it.) With a nod to Descartes: I think, therefore the universe is!

It is not only philosophers and theologians, with their sophistry, who have weighed in on our question. Modern physics and cosmology have posed the question in a scientific form—that is, potentially in a way that is empirically testable, if only indirectly. We could call this the Argument from Modern Physics. It proposes that the physical universe arose, for example, from a “quantum fluctuation” in the “vacuum.” (This process traditionally involves a Big Bang.) Given enough time, some random fluctuation was bound to produce a state that would eventually lead to a universe, if not necessarily the one we know. And here we are in the one we know—so at least it exists. (There might be others “somewhere else”?) The argument could be stated thus: there is something because the state of nothing was unstable.

Of course, there are a few conceptual glitches with such schemes. What is the unstable something from which the universe emerged? What exactly is the “vacuum” if not literally and absolutely nothing? Where did it come from? What could be the meaning of time before there existed cyclical processes (before the universe “cooled” enough to allow electrons, for example, to adhere to protons)? How much of such “time” is required to produce a universe?

One might also wonder what causes quantum fluctuations. The current idea seems to be that they are random and uncaused. But randomness and causality are notions derived from common experience in the world we know. The very idea of ‘random fluctuation’ raises questions about our categories of thought. Does “random” mean there is no cause, or no known cause? If the former, can we even imagine that? Moreover, probability usually refers to a sequence of trial runs, as in the random results of repeated coin flips. Could there have been multiple big bangs, only some of which produced what we know as a universe—and only one of which produced this universe? What, then, is the probability of existing at all? Such questions boggle the mind, but have been seriously asked. Physicist Lee Smolin, for example, has proposed a theory in which new universes emerge from black holes, each producing a new big bang. Each of these events could result in a re-setting of basic parameters, producing a different sort of world. But what, then, accounts for the pre-existence of such “parameters,” other than the imagination of the theorist?

The logic in such arguments may be no sounder than that of arguments for the existence of God. But, then, logic itself may have no absolute sway outside the one-and-only real world from which it was gleaned. Does logic represent some transcendent Platonic realm outside nature, or does it simply generalize and abstract properties and relationships derived from experience in the world we know? If the former, what accounts for the existence of that transcendent realm? If the latter, how can we, without circularity, apply ideas that are parochial products of our existence in the world we know to understand how that world could arise? We can only imagine the possible in terms of the actual. The world exists, therefore the world exists!

Schlock and bling

My first understanding of status symbols came from tracing the origin of the shell motif in European architecture and furnishings. The scalloped shell is a symbol of Saint James (as in coquilles St. Jacques). Pilgrims on the Camino de Santiago de Compostela wore a shell as a sort of spiritual bumper sticker to indicate their undertaking of a spiritual journey. The symbol made its way onto chests carried in their entourage and onto inns along the route. Eventually it was incorporated in churches, on secular buildings, and on furniture. Especially in the Baroque period, it became a common decorative motif. It was no longer a literal badge of spiritual accomplishment, but remained by implication a sign of spiritual status—ironic and undeserved.

Religion and power have long been associated. Worldly rulers bolstered their authority as representatives on earth of the divine, when not claiming actual divinity for themselves. Kings and nobles would surround themselves with spiritual symbols to enforce this idea and assure others that their superior status was god-given and well deserved. Their inferiors, desiring that such social standing should rub off on them, made use of the same emblems, now become status symbols completely devoid of religious significance, yet serving to assert their claim to superior class.

It is no coincidence that the powerful have also been rich. Wealth itself thus became a status symbol, based on the notion that the rich, like the noble, deserve their station, which may even be predestined or god-given. Wealth is a sign of merit and superiority. Thus, visible luxury items and baubles are not only attractive and fun adornments, but also set some people above others. Given human competitive nature, gold and jewels—to be treasured—must be relatively rare on earth and concentrated in the hands of the few.

Wealth has become abstract and intangible in modern times, and above all quantitative—electronic digits in bank accounts. Money translates as power, to buy services and goods and command respect. Yet, there remains a qualitative aspect to wealth. In the industrial age of mass production, in which goods and services are widely available, there is nevertheless a range of quality among them. The rich can choose what they view as better quality versions of common items. Hence the eternal appeal of Rolex and the like. How much better can such watches tell time than the fifty-dollar counterpart? Their role is rather as jewelry, to indicate the status of the wearer. In fact, such wristwatches may have all sorts of deliberately useless features. And so with haute couture: dresses so impractical they can be worn only to rare elite functions.

The very nature of status symbols creates paradoxical dilemmas. Everyone wants high status, which by definition is for the few. Street vendors sell counterfeit knock-offs of expensive labels precisely because—from a distance or to the undiscerning eye—they serve as status symbols as effectively as the brands they mock. This underlines a distinction between what we might call objective and subjective quality. On the level of symbol and first appearance, the rhinestone necklace is equivalent to the diamond version it copies; the size and number of “gems” may be the same. Yet one is a repository of human labour in a way that the other is not. The real diamonds or emeralds, being rare, were mined with difficulty and perhaps great suffering; the metalwork involves hours more effort, first finding and then shaping the gold in a befitting way. This is why art has always been valued as a form of wealth: because of the painstaking effort and intention it embodies. The expensive watch is touted as hand-made.

Is quality in the eye of the beholder? The real goods are wasted on those who can’t tell the difference, let alone afford them. Snobbery and class thus depend on sensibility as well as the quantitative power of money. Money can buy you the trappings of wealth, but can you tell the real thing from the imitation? You can’t take it with you when you die; but can you at least take it in while alive? Does it make any tangible difference if you cannot? Status symbols do their work, after all, because they are symbolic, which does not entail being genuine. Of course, the buyer should beware. But if you don’t really care, then what difference does quality make, even if you can afford it?

Is there objectively genuine quality? Yes, of course! But to appreciate it requires the corresponding sensibility. We might define quality to mean “objectively better” in some sense—perhaps in making the world a better place? In that case, at least someone must know what is objectively better and why, and be capable of intending and implementing it—for example: designing and producing quality consumer goods. That could entail quite a diversity of features, such as durability, repairability, energy efficiency, recyclability, esthetics, usefulness, etc. Sadly, this is not what we see in the marketplace, which instead tends ever more toward shoddy token items, designed to stand in as knock-offs for the real thing. Designed to take your money but not to last or even to be truly useful.

The rich must have something to spend their monetary digits on; otherwise what is the point of accumulating them? True, economics is a game, and there is value and status simply in winning, regardless of the prize. Just knowing (without even vaunting) that one has more points than others reinforces the sense of personal worth. But there is also the temptation to surround oneself with ever more things and conveniences, many of which are ironically empty tokens, mere rhinestones. These also serve as status symbols, to demonstrate one’s success to others who also cannot tell the difference (and thereby to oneself?). In the absence of imagination, collecting such things seems the default plan for a life. The would-be rich also must have something to spend their money on; hence consumerism, hence bling.

Traditionally, value is created by human labour. Quality of product is a function of the quality of effort, which in turn is a function of attention and intention. The things that are standard status symbols—artworks, jewels, servants; fine clothes and craftsmanship; luxury homes, cars and boats, etc.—represent the ability to command effort and thereby quality. There is a paradox here too. For, while quality ultimately refers to human effort and skill, in the automated age ever fewer people work at skilled jobs. The very meaning of the standard is undermined by the loss of manual skills. Quality can then no longer be directly appreciated, but only evaluated after the fact: how long did the product last, was it really useful? Like social media, the marketplace is saturated with questionable products, which is why consumer reviews have become indispensable.

Ever more people now grow up without manual skills and with little hands-on experience of making or repairing the things they use. This is a handicap when it comes to evaluating quality, which is a function of what went into making those things. Many people now cannot recognize the difference between a building standard of accuracy to an eighth of an inch and a standard of a half of an inch (millimeters versus centimeters, if you prefer). Teenagers of my generation used to tear apart and rebuild their cars. Now cars are too sophisticated for that, as is most of our technology, which is not designed for home repair, or any repair at all. There are videos online now that (seriously) show how to change a light bulb! People who make nothing, and no longer understand how things are made or how they work, are not in a position to judge what makes things hold together and work properly. They are at the mercy of ersatz tokens mysteriously appearing on retail shelves: manufactured schlock. That is the ultimate triumph of a system of production where profit, not quality, is our most important product.

When machines and robots do everything (and all humans are consumers but not producers), what will be the criterion for quality? Quite possibly, in an ideal world where no one needs to work to survive, people would work anyway, as many people now enjoy hobbies. Perhaps in such a world, wealth would be a matter not of possessions but of cultivated skills. As is sometimes the case now, status would be a function of what one can do, aside from accumulating wealth produced by others. Perhaps then quality will again be recognizable.


The truth of a matter

A natural organism can hardly afford to ignore its environment. To put that differently, its cognition and knowledge consist in those capabilities, responses and strategies that permit it to survive. We tend to think of knowledge as general, indiscriminate, abstract, free-floating, since this has been the modern ideal; for the organism, however, it is quite specific and tailored to survival. This is at least mildly paradoxical, since the human being too is an organism. Our idealized knowledge ought to facilitate, and must at least permit, survival of the human organism. Human knowledge may not be as general as suggested by the ideal. In particular, science may not be as objective and disinterested as presumed; its focus can even be myopic.

Science parallels ordinary cognition in many ways, serving to extend and also correct it. On the other hand, as a form of cognition, science is deliberately constrained in ways that ordinary cognition is not. It has a rigor that follows its own rules, not necessarily corresponding to those of ordinary cognition. The latter is allowed, even required, to jump to conclusions in situations demanding action. Science, in contrast, remains tentative and skeptical. It can speculate in earnest, creating elaborate mathematical constructs; but these are bracketed as “theoretical” until empirical data seem to confirm them. Even then, theory remains provisional: it can be accepted or be disqualified by countervailing evidence, but can never strictly be proven. In a sense, then, science maintains a stance of unknowing along with a goal of knowing.

Many questions facing organisms, about what to do and how to behave, hinge implicitly on what seems true or real from a human perspective. For us moderns, that often means from a scientific perspective, which may not correspond to the natural perspective of the organism. Yet, even for the human organism, behavior is not necessarily driven by objective reality and does not have to be justified by it. External reality is but one factor in the cognitive equation. It is a factor to which we habitually give great importance because, in so many words, we are conditioned to give credence to what appears to us real. Ultimately, this is because our survival and very existence indeed depend on what actually is real or true. To that extent, we are in the same boat as any other creature. The other factor, however, is internal: intention or will. We can, and often do, behave in ways that have little to do with apparent reality and which don’t refer to it for justification. (For example, doing something for the “hell” of it or because we enjoy it. Apart from their economic benefits, what do dancing, art, and sports have to do with survival?) Some things we do precisely because they have little to do with reality.

Of course, the question of what is real—or the truth of a matter—is hardly straightforward. It, too, depends on both internal and external factors, subject and object together. In any case, how we act does not depend exclusively on what we deem to be fact. In some cases, this dissonance is irrational and to our detriment—for instance, ignoring climate change or the health effects of smoking. In other cases, acting arbitrarily is the hallmark of our free will—the ability to thumb our noses at the dictates of reality and even to rebel against the constraints imposed by biology and nature. Often, both considerations apply. In a situation of overpopulation, for example, it may be as irrational—and as heroic—for humanity to value human life unconditionally as for the band to keep playing while the Titanic sinks.

At one time the natural world was considered more like an organism than a machine. Perhaps it should be viewed this way again. Should we treat nature as a sentient agent, of a value comparable to that we accord to human life? Here is a topical question that seems to hinge on the truth of what nature “really” is. If it has agency in some sense like we do—whether sentient or not in the way that we are—perhaps it should have legal rights and be treated with the respect accorded persons. Native cultures are said to consider the natural world in terms of “all my relations.” Some people claim mystical experiences in which they commune and even communicate with the natural world, for example with plants. Yet, other people may doubt such claims, which seem counter to a scientific understanding that has long held nature to be no more than an it, certainly not a thou to talk to. For, from a scientific perspective, most matter is inanimate and insentient. Indeed, the mechanistic worldview of science has re-conceived the natural world as a mere resource for human disposal and use. Given such contradictory views, how to behave appropriately toward “the environment” seems to hinge on the truth of a matter. Is the natural world a co-agent? Can it objectively communicate with people, or do people subjectively make up such experiences for their own reasons?

But does the “truth” of that matter really matter? Apart from scientific protocol, as creatures we are ruled by the mandate of our natural cognition to support survival. That is the larger truth, which science ought to follow. Culturally, we have been engaged in a great modern experiment: considering the world inert, essentially dead, profane (or at least not sacred), something we are free to use for our own purposes. While that stance has supported the creation of a technological civilization, we cannot be sure it will sustain it—or life—in the long term. Scientific evidence itself suggests otherwise. It thus seems irrational to continue on such a path, no matter how “true” it may seem.

What have we to lose in sidestepping the supposed truth of the matter, in favour of an attitude that works toward our survival? Better still, how can such contradictory attitudes be made compatible? This involves reconciling subject with object as two complementary factors in our cognition. Science has deliberately bracketed the subject in order to better grasp the object. So be it. Yet, this situation itself is paradoxical, for someone (a subject) obviously is doing the grasping for some tacit reason. Nature is the object, the human scientist is the subject, and grasping is a motivated action that presumes a stance of possession and control—rather than, for example, belonging. We resist the idea that nature controls us (determinism)—but along with it the idea of being an integral part of the natural world. Can we have free will and still belong? Perhaps—if we are willing to concede free will to nature as well.

The irony is that, on a certain level, obsession with reality or truth serves the organism’s wellbeing, but denies it free will. Compulsive belief in the stimulus grants the object causal power over the subject’s response and experience. On the other hand, ignoring the stimulus perilously forfeits what power the subject has to respond appropriately. The classic subject-object relationship is implicitly adversarial. It maintains either the illusion of technological control over nature or of nature’s underlying control over us. The first implies irresponsible power; the second denies responsibility altogether.

Every subject, being embodied, is undoubtedly an object that is part of the natural world. To the extent we are conscious of this inclusion and of being agents, we are in a position to act consciously to maintain the system of which we are a part. In the name of the sort of knowledge achieved by denying this inclusion, however, we have created a masterful technological civilization that is on the brink of self-destruction, while hardly on the brink of conquering nature. Can we believe instead that we do not stand outside the natural world, as though on a foreign battlefield, but are one natural force in negotiation with other natural forces? Negotiation is a relationship among peers, agent to agent. Even when seemingly adversarial, the relationship is between worthy opponents. Let us therefore think of nature neither as master, slave, nor enemy, but as a peer with whom to collaborate toward a peace that ensures a future for all life.

To choose or not to choose

Choice is often fraught with anxiety. We can agonize over decisions and are happy enough when an outcome is decided for us. That’s why we flip coins. Perhaps this says only that human beings loathe responsibility, which means accountability to others for possible error. We are essentially social creatures, after all. The meaning and value of our acts is always in relation to others, whose favour we curry and whose opinions we fear. Even those unconcerned about reputation while they live may hope for the eventual approval of posterity.

Perhaps there is a more fundamental reason why choice provokes anxiety. We have but one life. To choose one option or path seems to forfeit others. The road taken implies other roads not taken; one cannot have the cake and eat it. Choice implies a loss or narrowing of options, which perhaps explains why it evokes negative feelings: one grieves in advance the loss of possible futures, and fears the possibility of choosing the wrong future. Nature created us as individual organisms, distinct from others. That means we are condemned to the unique experience and history of a particular body, out of all the myriad life histories that others experience. Each of us has to be somebody, which means we must live a particular life, shaped by specific choices. We may regret them, but we can hardly avoid them. A life is defined by choices made, which can seem a heavy burden.

Yet, choice can also be viewed more positively as freedom. Choice is the proactive assertion of self and will, not a passive forfeit of options. It affords the chance to self-limit and self-define through one’s own actions, rather than be victimized by chance or external forces. To choose is to take a stand, to gather solid ground under one’s feet where there was but nebulous possibility. Rather than remaining vaguely potential, one becomes tangibly actual, by voluntarily sacrificing some options to achieve one’s goals. This is how we bring ourselves into definition and become response-able. We may be proud or ashamed of choices made. Yet, whatever the judgment, one gains experience and density through deliberate action.

To do nothing is also a choice—sometimes the wisest. The positive version of timidity or paralysis is deliberate restraint. Sometimes we chomp at the bit to act, perhaps prematurely, while the wiser alternative is to wait. Instinct and emotion prompt us to react impulsively. To be sure, such fast response serves a purpose: it can mean the difference between life and death. Yet, some situations allow, and even require, restraint and more careful thought. When there is not enough information for a proper decision, sometimes the responsible choice is to wait and see, while gathering more information. This too strengthens character.

Life tests us—against our own expectations and those of others. Perhaps the kindest measure of our actions is their intent: the good outcome hoped for. We may not accurately foresee the outcome, but at least we can know the desire. Yet, even that is no simple matter. For, we are complex beings with many levels of intention, some of which are contradictory or even unknown to us. We make mistakes. We can fool ourselves. The basic problem is that reality is complex, whereas mind and thought, feeling and intention, are relatively simplistic. We are like the blind men who each felt a part of the elephant and came to very different conclusions about the unseen beast that could crush them at any time. With all our pretense to objectivity, perhaps we are the elephant in the room!

Choice can be analog as well as digital. Plants interact with the world more or less in place, continuously responsive to changes in soil condition, humidity, temperature and lighting. Animals move, to pursue their food and avoid becoming food. Their choices have a more discrete character: yes or no. Yet, there are levels and nuances of choice, and choice about choice. We can be passive or aggressive, reactive or proactive. We can choose not to act, to be ready to act, or to seek a general policy or course of action instead of a specific deed. We can opt for a more analog approach, to adjust continuously, to keep error in small bounds, to play it by ear rather than be too decisive and perhaps dangerously wrong.

Of course, one may wonder whether choice and will are even possible. Determinism is the idea that one thing follows inexorably from another, like falling dominoes, with no intervening act of choosing. The physical world seems to unfold like that, following causes instead of goals. And perhaps there is even a limit to this unfolding, where nothing further can happen: the ultimate playing out of entropy. Yet these are ideas in the minds of living beings who do seem to have choice, and who seem to defy entropy. Determinism, and not free will, may well be the illusion. For, while concepts may follow one from another logically, there is (as Hume noted) no metaphysical binding between real events in time. The paradox is that we freely invent concepts that are supposed to tie the universe together—and bind us as well.

Where there is no free choice there is no responsibility. Determinism is a tool to foresee the future, but it can also serve as a place of refuge from guilt over the past. If my genes, my upbringing, my culture or my diet made me do it, then am I accountable for my deeds, either morally or before the law? On the other hand, where there is no responsibility, there is no dignity. If my actions are merely the output of a programmed machine, then I am no person but a mere thing. Of what account is my felt experience if it does not serve to inform and guide my behavior? I cannot rightfully claim to be a subject at all—to have my inner life be valued by others—unless I also claim responsibility for my outer life as an agent in the world.

Easier said than done, of course. Supposing that one tries to act morally and for the best, one may nevertheless fail. Worse, perhaps, one may wonder whether one’s thoughts and deeds will make any difference at all in the bigger picture. Especially at this crossroads—of human meddling and eleventh-hour concern for the future of all life—it may seem that the course is already set and out of one’s personal hands. Yet, what is unique about this time is precisely that we are called upon to find how to be personally and effectively responsible for the whole of the planet. The proper use of information in the information age is to enable informed choice and action. That no longer concerns only one’s personal—or local or even national—world, but now the world. This is the meta-choice confronting at least those who are in a position to think about it. Whatever our fate and whatever our folly, we at least bring ourselves more fully into being by choosing to think about it and, hopefully, choosing the right course of action.

A credible story about money as the root of evil

The word ‘credit’, like ‘credible’, comes from the Latin credo, to believe. It refers to the trust that must exist between a borrower and a lender. In his monumental work, Debt: The First 5,000 Years, anthropologist and philosopher-activist David Graeber proposes that credit, in one way or another, is the very basis of sociability and of society. He reverses the traditional dictum in economics that barter came first, then coinage, and finally credit. Quite the contrary: barter was only ever practical in exceptional circumstances; the actual basis of trade for most of human existence was some form of credit. Borrowing worked well in communities where everyone was known and reputation was crucial. Say you need something made, a favour done, or a service performed. You are then indebted to whoever helps you, and at some point you will reciprocate. That sort of cooperation and mutual support is the essence of community.

This is not a review of Graeber’s wide-ranging book or thought, but a reflection on the deep and unorthodox perspective he brings to such questions as: what happens to community when money displaces the honor system of credit? Or: how did the introduction of money change the nature of debt and credit, and therefore society?

Let us note at the outset that many of the evils we associate with money and capitalism already existed in ancient societies that relied on credit, chief among them usury. The extortion of “interest” on loans is already a different matter than simply repaying a debt (the “principal”). In a small community, or within families, such extortion would be unfriendly and unconscionable. In larger societies, relations are less personal. The psychological need to honour debt, based on trust, carries over, but without the intimate connection between persons. The debtor—who before was a friend, relative or neighbor—becomes a “stranger,” even when known. The person becomes a thing to exploit; the subject becomes an object.

Lending for gain was no longer a favour to someone in your community, which you knew would eventually be reciprocated fairly. It became something done for calculated and often excessive profit. It thus became increasingly difficult to repay debts. Securities put up for the loan (even family members or one’s own person!) could be confiscated pending repayment. Usury—and debt in general—became such a problem even in ancient times that kings and rulers were obliged to declare periodic debt amnesties to avoid rebellion. And one of the first things rebels would do was destroy the records of debt. The sacred texts of many religions proscribe usury, but usually only regarding their own people. “Strangers” remained fair game as potential enemies.

The concept of interest has a precedent in the growth of natural systems. Large trees grow from tiny seeds; animal bodies grow from small eggs. Populations expand. Such growth is distinct from static self-maintenance or a population’s self-replenishment. People noticed this surplus when they began to grow crops and manage domesticated animals. The increase of the herd or crop served as a metaphor for the interest expected on any sort of “investment.” However, the greedy expectations of loan sharks in all ages usually far exceed the rate of natural growth. Even the “normal” modest return on investment (consistently about 5%) exceeds the rate of growth of natural systems, such as forests. Moreover, there are always limits to natural growth. The organism reaches maturity and stops growing. (The refusal of cells to stop multiplying when they are supposed to is cancer.) A spreading forest reaches a shoreline or a treeline imposed by elevation and cold. The numbers of a species are held in check by other species and by limited resources. Nature as a whole operates within these bounds of checks and balances, which humans tend to ignore.
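To make the arithmetic concrete (a rough illustration, not Graeber’s own figures): a principal $P$ lent at compound interest rate $r$ grows after $t$ years to

$$A = P\,(1 + r)^t.$$

At the “normal” 5% return this doubles roughly every fourteen years (since $1.05^{14} \approx 2$) and multiplies more than a hundredfold in a century, a rate of growth that no forest or herd sustains indefinitely.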

Money, credit, and debt are ethical issues because they directly involve how people treat one another. Credit in the old sense—doing a favour that will eventually be returned—involves one way of treating others, quite different from usury, which often ended in debt peonage or outright slavery. For good reason, usury was frowned upon as a practice within the group—i.e., amongst “ourselves.” The group needed to have an ethics in place that ensured its own coherence. But as societies expanded and intermingled, membership in the group became muddied. Trade and other relations with other groups created larger groupings. New identities required a new ethics.

Amalgamation led to states. War between states exacerbated the ethical crisis. War was about conquest, which reduced the defeated to chattel (war was another source of slaves). People, like domesticated animals, could become property bought and sold. Slaves were people ripped from their own community, the context that had given them identity and rights. Similarly, domestic animals had been removed from their natural life and context and forced into servitude to people. We may speak even of handmade things as being wrested from their context as unique objects, personally made and uniquely valued, when they enter the marketplace. Manufactured things are designed to be identical and impersonal, not only to economize through mass production, but also to standardize their value. Mass production of standard things matched mass production of money.

Enter coinage. Rather than provision armies through expensive supply lines, rulers could pay soldiers in coin to spend locally rather than pillage the countryside. These coins could then be returned to the central government in the form of taxes. Coinage standardized value by quantifying it precisely. But it did something more as well. It rendered trade completely impersonal. Before, you had a reciprocal relationship of dependency and trust with your trade partner or creditor—an ongoing relationship. In contrast to credit, the transfer of coins completed the transaction, cancelling the relationship; both parties could walk away without assuming any future dealings. Personal trust was not required because the value exchanged was fixed and clear, transferable, and redeemable anywhere. Indeed, money met a need because people were already involved in trade with people they might never see again and whom they did not necessarily trust. But this was a very different sort of transaction than the personal sort of exchange that bound parties together.

Yet, trust was still required, if on a different level. Using money depends on other people accepting it as payment. While money seemed to be a measure of the value of things, it implicitly depended on trust among people—no longer the direct personal trust between individuals but ongoing faith in the system. Coins had a symbolic value, regulated by the state, independent of the general valuation of the metals they were made of. (The symbolic value was usually greater than the value of the gold, silver or copper, since otherwise the coins would be hoarded.) The shift toward symbolic value was made clear with the introduction of paper money. But in fact, promissory notes had long been used before official paper money or coinage. The transition to purely symbolic (virtual) money was complete when the U.S. dollar was taken off the gold standard in 1971.

Unfortunately, some of the laws restricting usury were abandoned soon after. “Credit,” in its commercialized form, returned with a vengeance. Credit-card companies and loan sharks aggressively offered indiscriminate lending for the sake of the profit to be gained, never mind the consequences for the borrower. Hence the international financial crisis of 2008—and the personal crises of people who lost their homes, of students who spend half their lives repaying student loans, of consumers always on the verge of bankruptcy, and of publics forced to bail out insolvent corporations.

The idea of credit evolved from a respectable mutual relationship of trust to a shady extortion business. The idea of indebtedness has accordingly long been tinged with sin, as a personal and moral failing. A version of the Lord’s Prayer reads, “forgive us our debts as we forgive our debtors.” (Alternatively: “forgive us our trespasses”, referring to the “sacredness” of private property rights.) As Graeber points out, we generally do not forgive debt, but have made it the basis of modern economics. There is no mention of forgiving the sins of creditors. The “ethics” of the marketplace is a policy to exploit one’s “neighbor,” who can now be anyone in the world—the further out of sight the better.

Usury now deals with abstractions that hide the nature of the activity: portfolios, mutual funds, financial “instruments,” stocks and bonds, “derivatives,” etc. The goal is personal gain, not social benefit, mutual relationship, or helping one another. Cash is going out of fashion in favour of plastic, which is no more than ones and zeros stored in a computer. The whole system is vulnerable to cyberattack. Worse, the confidence that underwrites the system runs on little more than inertia. It will eventually break down, if not renewed by a basis for trust more genuine, tangible and personal.

Apart from climate change, the other looming crisis is the unsustainability of our civilization. The global system of usury (let’s call a spade a spade: we’re talking about capitalism) unreasonably exploits not only human beings but the whole of nature. Like population growth, economic growth cannot continue indefinitely. The sort of growth implied by “progress” is a demented fantasy, with collapse lurking around the corner. Moreover, the fruits of present growth are siphoned off by a small elite and hardly shared, while the false promise of a better life for all is the only thing keeping the system going. We cannot be any more ethical in regard to nature than we are in regard to fellow human beings. While people may or may not revolt against the greed of other people, we can be sure that nature will.

Relativity theory and the subject-object relationship

Concepts of the external world have evolved in the history of Western thought, from a naïve realism toward an increasing recognition of the role of the subject in all forms of cognition, including science. The two conceptual revolutions of modern physics both acknowledge the role of the observer in descriptions of phenomena observed. That is significant, because science traditionally brackets the role of the observer for the sake of a purely objective description of the world.  The desirability of an objective description is self-evident, whether to facilitate control through technology or to achieve a possibly disinterested understanding. Yet the object cannot be truly separated from the subject, even in science.

Knowledge of the object tacitly refers back to the participation of the observer as a physical organism, motivated by a biologically-based need to monitor the world and regulate experience. On the other hand, knowledge may seem to be a mental property of the subject, disembodied as “information.” However, the subject is necessarily also an object: there are no disembodied observers. Information, too, is necessarily embodied in physical signals.

A characteristic of all physical processes, including the conveyance of signals, seems to be that they take time and involve transfers of energy. These facts could long be conveniently ignored in the case of information conveyed by means of light, which for most of human history seemed instantaneous and of negligible physical effect. Eventually, it was realized through observation (Rømer), in experiment (Fizeau), and in theory (Maxwell) that the speed of light is finite and definite, though very large. Since that was true all along, it could have posed a conceptual dilemma for physicists long before the late 19th century, since the foundation of Newtonian physics was instantaneous action at a distance. Even for Einstein and his contemporaries, however, the approach to problems resulting from the finite speed of light was less about incorporating the subject into an objective worldview than about compensating for the subject’s involvement in order to preserve that worldview. Einstein’s initial motivation for relativity theory lay less in the observational consequences of the finite speed of light signals than in resolving conceptual inconsistencies in Maxwell’s electrodynamics.

Nevertheless, perhaps for heuristic reasons, Einstein began his 1905 paper with an argument about light signals, in which the signal was defined to travel with the same finite speed for all observers. This, of course, violated the foundational principle of the addition of velocities. It skirted the issue of the physical nature of the signal (particle or wave?), since some observations seemed to defy either the wave theory or the emission theory of light. Something had to give, and Einstein decided it was the concept of time. What remained implicit was the fact that non-local measurement of events in time or space must be made via intervening light signals.

When the distant system being measured is in motion with respect to the observer, the latter’s measurement will differ from the local measurement by an observer at rest in the distant system. The difference depends on the ratio of their relative speed to the speed of light. By definition, these are line-of-sight effects. By the relativity postulate, the effects must be reciprocal, so that whether the observers are approaching each other or receding, each would perceive the other’s ruler to have contracted and clock to have slowed! Such a conclusion could not be more contrary to common sense. But that means simply that common sense is based on assumptions that may hold true only in limited circumstances (namely, when the observation is presumed instantaneous); in other words, circumstances that are non-physical.
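For reference (this is standard textbook relativity, not anything peculiar to the present argument), the discrepancy is governed by the Lorentz factor

$$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}},$$

so that a moving ruler is measured as contracted to $L = L_0/\gamma$ and a moving clock as slowed to $\Delta t = \gamma\,\Delta t_0$. The factor differs appreciably from 1 only when $v$ approaches $c$, which is why common sense never had to confront it.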

The challenge embraced by Einstein was to achieve coherence within the framework of physics as a logical system, which is a human construct, a product of definitions. Physics may aim to reflect the structure of the real world, but invokes the freedom of the human agent to define its axioms and elements. Einstein postulated two axioms in his famous paper: the laws of physics are the same for observers in uniform relative motion; and the speed of light does not depend on the motion of its source. From these it follows that simultaneity can have no absolute meaning and that measurements involving time and space depend on the observers’ relative state of motion. In other words, the fact that the subject does not stand outside the system, but is a physical part of it, affects how the object is perceived or measured. Yet, a contrary meta-truth is paradoxically also insinuated: to the degree that the system is conceptual and not physical, the theorist does stand outside the system. Einstein’s freedom to choose the axioms he thought fundamental to a consistent physics implied the four-dimensional space-time continuum (the so-called block universe), which consists of objective events, not acts of observation.
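For the record, the coordinate transformation that follows from those two postulates (the standard Lorentz transformation, for relative motion along the x-axis) is

$$x' = \gamma\,(x - vt), \qquad t' = \gamma\left(t - \frac{vx}{c^2}\right),$$

where $\gamma$ is the Lorentz factor given above. The term $vx/c^2$ in the time equation is exactly what abolishes absolute simultaneity: two events with the same $t$ but different $x$ are assigned different times $t'$ in the moving frame.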

Could other axioms have been chosen—alternatives to his postulates? Indeed, they had been. The problem was in the air in the late 19th century. In effect, Lorentz and FitzGerald had proposed that movement through the ether somehow alters intermolecular forces, so that apparently rigid bodies in motion literally change shape, their rulers “really” contracting in the direction of motion. This was an ontological (electrodynamic) explanation of the null result of the crucial Michelson-Morley experiment. (Poincaré was also working on an ontological solution.) That approach made sense, since the spacing between atoms in solid bodies depends on electrical forces. Though Einstein knew of the Michelson-Morley experiment, his epistemic (kinematic) approach did not focus on it, but originated in a youthful thought experiment concerning what it would be like to travel along with a light beam. It continued with reflections on apparent contradictions in Maxwell’s electrodynamics. Yet, it returned to focus on the physical nature of light, which bore fruit in the equivalence of matter and energy and in General Relativity as a theory of gravitation.

Despite his early positivism, it was Einstein’s lifelong concern to preserve the objectivity, rationality and consistency of physics, the principal challenges to which were the dilemmas that gave birth to the two great modern revolutions, relativity and quantum theory. His solutions involved taking the observer into account, but with an aim to preserve an essentially observer-independent worldview—the fundamental stance of classical physics. While he chose an epistemic over an ontological analysis, he was deeply committed to realism. There were real, potentially observable, consequences to his theories, which have since been confirmed in many experiments. Yet alternative interpretations are conceivable, formulated on the basis of different axioms, to account for the same—mostly subtle—effects. While relativity theory renders appearances a function of the observer’s state of motion, it is really about preserving the form of physical laws for all observers—reasserting the possibility of objective truth.

One ironic consequence is that space and time are no longer considered from the point of view of the observer but are objectified in a god’s-eye view. The four-dimensional manifold is mathematically convenient; yet it also makes a difference in how we understand reality. As a theory of gravitation, General Relativity asserts the substantial existence of a real entity called spacetime. Space and time are no longer functions of the observer and of the means of observation (light); now they have an existence independent of the observer—ironically, much as Newton had asserted. What was grasped as a relationship returned to being a thing.

Even in the Special theory, there is confusion over the interpretation of time dilation. In SR, time dilation was initially a mutually perceived phenomenon, which makes sense as a line-of-sight effect. In modern expositions, however, mechanical clocks are replaced by “light clocks,” and the explanation of time dilation refers to the lengthened path of light in the moving clock. This is no longer a line-of-sight or mutual effect, since the light path is no longer in the direction of motion relative to the observer. Instead, it substitutes a definition of time that circularly depends on light. While “objective” in the sense that it is not mutual, the explanation for the gravitational time dilation of General Relativity rests on an incoherent interpretation of time dilation in SR.
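The light-clock argument in question is simple enough to state (a textbook sketch, for reference). Let light bounce between two mirrors a distance $L$ apart, perpendicular to the clock’s motion; at rest, one tick takes $t_0 = 2L/c$. Seen from a frame in which the clock moves at speed $v$, the light traverses two diagonals, so $ct = 2\sqrt{L^2 + (vt/2)^2}$, which gives

$$t = \frac{2L/c}{\sqrt{1 - v^2/c^2}} = \gamma\, t_0.$$

Note that the derivation assumes the light travels at $c$ in both frames (the second postulate over again), which is just the circular dependence on light that the paragraph above points out.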

Einstein derived both the famous matter-energy equivalence and General Relativity using arguments based on Special Relativity. These arguments slide inconsistently from an epistemic to an ontological interpretation. While the predictions of GR and E = mc² may be accurate, their theoretical dependence on SR remains unfounded if the effects are purely epistemic: that is, if they do not invoke a physical interaction of things with an ether when they accelerate with respect to it (the so-called clock hypothesis). Or, to put it the other way around, GR and the mass-energy equivalence actually imply such an interaction.

The Lorentz transformation could as well be interpreted in purely epistemic terms, of observers’ mutually relative state of motion, given the finite intermediary of light. Spacetime need not be treated as an object if the subject’s role is fully taken into account. The invariance of the speed of light could have a different interpretation, not as a cosmic speed limit but as a side-effect of light’s unique role as signal between frames of reference. Time dilation could have a different explanation, as a function of moving things physically interacting with an ether.

Form and content

That all things have form and content reflects an analysis fundamental to our cognition and a dichotomy fundamental to language. Language is largely about content—semantic meaning. Yet, it must have syntactical form to communicate successfully. The content of statements is their nominal reason for being; but their effectiveness depends on how they are expressed. In poetry and song, syntax and form are as important as semantics and content. They may even dominate in whimsical expressions of nonsense, where truth or meaning is not the point.

The interplay of form and content applies even in mathematics, which we think of as expressing timeless truths. ‘A=A’ is the simplest sort of logical truth—a tautology, a sheer matter of definition. It applies to anything, any time. By virtue of this abstractness and generality, it is pure syntax. As a statement, it bears no news of the world. Yet, mathematics arose to describe the world in its most general features. Its success in science lies in the ability to describe reality precisely, to pinpoint content quantitatively. The laws of nature are such generalities, usually expressed mathematically. They are thus sometimes considered transcendent in the way that mathematics itself appears to be. That is, they appear as formal rules that govern the behavior of matter. You could say that mathematics is the syntax of nature.

The ancient Greeks formalized the relation between syntax and semantics in geometry. Euclid provided the paradigm of a deductive method, applying formal rules to logically channel thought about the world, much as language does intuitively. Plato considered the world of thought, including geometry, to be the archetypal reality, which the illusory sensory world only crudely copies. This inverted the process we today recognize as idealization, in which the mind abstracts an essence from sensory experience. For him, these intuitions (which he called Forms) were the timeless reality behind the mundane and ever-changing content of consciousness.

The form/content distinction applies perhaps especially to all that is called “art.” Plato had dismissed art as dealing only with appearances, not the truth or reality of things. According to him, art should no more be taken seriously than play. However, it is precisely as a variety of play that we do take art seriously. What we find beautiful or interesting about a work of art most often involves its formal qualities, which reveal the artist’s imagination at play. Art may literally depict the world through representation; but it may also simply establish a “world” indirectly, by assembling pertinent elements through creative play. Whatever its serious themes, all art involves play, both for the producer and the consumer.

Meaning is propositional, the content of a message. It is goal-oriented, tied to survival and Freud’s reality principle. But the mind also picks up on formal elements of what may or may not otherwise bear a message or serve a practical function, invoking more the pleasure principle. The experience of beauty is a form of pleasure, and “form” is a form of play with (syntactic) elements that may not in themselves (semantically) signify anything or have any practical use. Art thus often simply entertains. This is no less the case when it is romanticized as a grand revelation of beauty than when it is dismissed as trivially decorative. Of course, art combines seriousness and play in varying ways that can place greater emphasis on either form or content. While the two were most often integrated before the 19th century, modern art, relatively speaking, liberated form from content.

For most of European history, artists were expected to do representational work, to convey a socially approved message—usually religious—through images. At least in terms of content, art was not about personal expression. That left form as the vehicle for individual expression, though within limits. Artists could not much choose their themes, but they could play with style. The rise of subjectivity thematically in art mirrors the rise of subjectivity in society as a whole; it recapitulates the general awakening of individuality. Yet, even today, a given art work is a compromise between the artist’s vision and social dynamics that limit its expression and reception.

From the very rise of civilization, art served as propaganda of one sort or another. For example, Mesopotamian kings built imposing monuments to their victories in war, giving a clear message to any potentially rebellious vassals. Before the invention of printing, pictures and sculptures in Europe were an important form of religious teaching. Yet, even in churches, the role of iconic art was from the beginning a divisive issue. On the one hand, there was the biblical proscription against idolatry. On the other hand, the Church needed a form of propaganda that worked for an illiterate populace. Style and decoration were secondary to the message and used to support it. In the more literate Islamic culture, the written message took precedence, but the formal element was expressed in the esthetics of highly stylized decorative calligraphy. In either case, the artist usually did little more than execute themes determined by orthodoxy, giving expression to ideas the artist may or may not have personally endorsed. But the invention of printing changed the role of graphic art, as later would the invention of photography.

Except to serve as political or commercial propaganda (advertising), representational art today holds a diminished place, superseded by photography and computer graphics. Yet, artists continue to paint and sculpt figures and scenes as well as decorative or purely abstract creations. In the age of instant images (provided by cell phones, for instance), what is the ongoing appeal of hand-made images? How and why is a painting based on a photograph received differently than the photo itself, and why do people continue to make and buy such things? The answer surely lies in the interplay of form and content. The representational content of the photo is a given that inspires and constrains the play with form.

First, skill is involved in accurately reproducing a scene. We appreciate demonstrations of sheer skill, so that hyper-realist painting and sculpture celebrate technical proficiency at imitation. Second, a nostalgia is associated with the long tradition of representational art. Third, status is associated with art as a form of wealth. An artwork is literally a repository of labor-intensive work, which formerly often embodied precious materials as well as skill. Photographic images are mostly cheap, but art is mostly expensive. Lastly, there are conventional ideas about decoration and how human space should be furnished. Walls must have paintings; public space must have sculptures. In general, art serves the purpose of all human enterprise: to establish a specifically human world set apart from nature. This is no less so when nature itself is the medium, as in gardens and parks that redefine the wild as part of the human world.

Nevertheless, it is fair to say that the essence of modern art—as sheer play with materials, images, forms, and ideas—is no longer representational. Art is no longer bound to a message; form reigns over content. Perhaps this feature is liberating in the age of information, when competing political messages overwhelm and information is often threatening. Art that dwells on play with formal elements refrains from imposing a message—unless its iconoclasm is the message. Abstraction does not demand allegiance to an ideology—except when it is the ideology. But in that case, it is no longer purely play. Art can serve ideology; but it can also reassure by the very absence of an editorial program. Playfulness, after all, does not intimidate or discriminate, though it may be contagious. It engages us on a level above personal or cultural differences.

Decoration has always been important to human beings, who desire to embellish and shape both nature and human artifacts. Decoration may incorporate representation or elements from nature, but usually in a stylized way that emphasizes form, while tailoring it to function. Yet, even decorative motifs constitute an esthetic vocabulary that can carry meaning or convey feeling. A motif can symbolize power and military authority, for example. Such are the fasces and the bull of Roman architecture; the “heroic” architecture, sculpture, and poster art of Fascism or Communism; or the Napoleonic “Empire” style of furnishings. It can be geometric and hard-edged, expressing mental austerity. Equally, it can express a more sensuous and intimate spirit, often floral or vegetal—as in the wallpapers of William Morris and the Art Nouveau style of architecture, furniture, and posters. In other words, decoration too reflects intent. It can reinforce or soften an obvious message. But it can also act independently of content, even subversively to convey an opposing ethos.

Even when no message seems intended, there is a meta-message. Whatever is well-conceived and well-executed uplifts and heartens us because it conveys the caring of the artist, artisan, or engineer. On the other hand, the glib cliché and the shoddily made product spread cynicism and discouragement. They reveal the callousness of the producer and inure us to a world in which quantity prevails over quality. Every made thing communicates an intent, for better or worse.