Old Wine in New Bottles

Like everyday experience, science relies on metaphor to extend its conceptual grasp. An airplane or rocket “flies,” but not in the literal way that an insect or bird does. The idea of flight is abstracted to include artificial things that fly without flapping wings.

Metaphor may also involve reification. A literal field is an expanse where grass grows. The concept of a field in physics began as a metaphor: at first a mathematical device for mapping the measured strength of the electric, magnetic, or gravitational vector at each point in space; later the field was considered ontologically as real as traditional matter. The concept of energy was similarly reified, though at root it refers to measurements of mass (gravitational or inertial) and change of position, not to a substance. Even in common parlance, information is an abstraction based on real acts of communication. It has been further abstracted and reified in physics as an alleged fundamental entity, even with causal powers.

Platonism illustrates all of these tendencies: metaphor, abstraction, and reification. Like Aristotle, Plato had imagined an essence of a physical thing, abstracted from it; but he went further to suppose that this essence serves as a prototype for its material counterpart. It was a form—in the metaphorical sense of a mould into which the material thing could be cast, or a blueprint from which it could be constructed. It was also thought to pre-exist material things in an eternal realm independent of them. This is a view long favored by mathematicians since Pythagoras, because mathematical “objects” seem to exist apart from matter, in some logically prior domain. Max Tegmark’s “Mathematical Universe” is a modern example: the physical universe is not merely described by mathematics, but is itself a “mathematical structure.”

The temptation of Platonism is understandable for mathematicians and theoretical physicists, who work in a mental realm of abstractions. It is something of a surprise that an experimental biologist would embrace such metaphysical thinking. I have great respect for the research of Michael Levin, which challenges and broadens our understanding of what constitutes an organism and what constitutes intelligence or mind. In particular, it further reveals the limitations of the role of DNA in morphogenesis, and of natural selection in determining biological forms. It points to a huge gap in our understanding of how organisms develop and how life evolves.

Something—vaguely called epigenetics—mysteriously guides developmental processes in ways unaccounted for by DNA, genetics, and natural selection. Levin’s research shows it has something to do with bioelectricity. His discoveries should stand as an invitation to the scientific community to explore more deeply how cells collectively know how to organize into larger beings. No doubt this is what Levin himself will continue to investigate. However, he seems inclined to take his investigations in a metaphysically suspect direction—that is, away from biochemistry and into the realm of mathematical Platonism.

According to Levin, the Platonic realm consists not only of forms corresponding to mathematical ideas (endowed with what he calls “low-level agency”), but contains far more that literally in-forms the diversity of forms and processes of life. According to him, the mysterious something that explains epigenetics is this metaphysical realm. Forms in this realm “ingress” to the material level where a receptive “interface” is presented. This idea reminds one uncomfortably of souls incarnating in bodies. This is not a scientific explanation, in physical terms. I don’t doubt that whatever Levin pursues will be interesting. The risk, however, is that it could turn out to be metaphysics more than biology.

Levin’s research shows that small organisms can be “prompted” with electricity to reconfigure themselves in a controllable way. While that’s an amazing empirical finding, it’s framed in the language of computation: a living system can be (re)programmed at a high level without addressing its mechanics at lower levels. This is because it already embodies a certain intelligence—as do even some simple chemical networks capable of learning. He notes that we are used to thinking of intelligence in terms of problem-solving in physical space—largely an issue for organisms as whole entities, especially those creatures we are familiar with on our human scale. He points out that problem-solving can occur in other kinds of “space,” such as morphogenetic space, including a space of possible body forms and a space of possible minds.

These are metaphors, like phase space in physics, which abstracts the visual space we naturally experience, redefined as a mathematical continuum with arbitrary “dimensions.” (It is merely a convention that even ordinary space has three orthogonal dimensions.) However useful the concept of morphogenetic space, the bottom line is that organisms must be able to problem-solve in real space and time in order to survive. For abstract spaces to have the same reality, something parallel to natural selection in real time must be shown.

A similar creative use of metaphor is Joscha Bach’s “cyber-animism.” He likens the non-material nature of software to the age-old notion of a spirit, a “self-organizing agentic pattern.” For him—as for many others—software is software, whether it runs in a digital computer or in an organism (indeed, “organism” then means the organization, not the substrate). However, we merely guess at the organization of creatures, based on patterns we notice. One helpful tool to do that guessing is digital simulation—to see what patterns result from commands we give to a human artifact made to resemble the natural thing. But resemblance can be superficial; software (digital programming) is literally a human construct, not a natural occurrence.

Only metaphorically does an organism run on its software, just as the universe only metaphorically runs on the laws of physics—as though either could be a digital computer. We observe that there are patterns in the behavior of inert matter, which we formulate as mathematical laws of physics; similarly, we observe patterns in the behavior of living matter, and try to formulate an underlying program. But neither the universe nor the organism is literally running on a computer program. Rather, digital computation has become the modern metaphor to relate patterns found in nature to patterns intentionally created by us. It’s an empowering and productive metaphor. But it’s also potentially misleading, because it rests on the assumption that the whole of the natural thing can be captured in the program. In truth, what can be fully captured is always itself an artifact, something we create.

The purpose of the mechanist metaphor is to view the physical and biological worlds in terms of our own intentionality (machines we create and control), thereby extending human power. The goal of viewing the organization of a living thing as software is to be able to duplicate and control that organization. As a cypher, the “software” of an organism is a handy concept because it provides a course of action. The notion of “spirit” was long similarly handy. The benefit of believing in nature spirits and gods is to be able to control such entities through the primitive “technology” of magic or supplication—that is, through prompts like those that have become familiar through the magic of chatbots and Large Language Models, which have metaphorically become minds with which we converse.

Another Platonic realm is implied in the “receiver” concept of consciousness: the perennially revived idea that the brain does not produce consciousness but only tunes in to it. This theory is as old as radio, at least, and was suggested by William James. Apparently, it informs the latest novel by Dan Brown. Like panpsychism, such an approach does not explain consciousness, but circumvents the need to explain it, since it is held to be fundamental or axiomatic. Like naïve realism, it also spares us responsibility for what we experience, since we are not its active creators. Perhaps the general lesson is not to be victims of our metaphors.

Buridan’s Razor

Truth is the enemy of choice. If something is absolutely true (or right or good), there is no valid alternative to it. There is no rational choice involved. The choice between good and evil, for example, is no more than rhetorical: the appearance of an alternative simply reinforces the rightness of the correct choice. There is meaningful choice only when the possibilities are more equivalent or ambiguous, like a choice between household detergents or how to dress for the occasion. Yet, in the opposite extreme, the alternatives can be so apparently equal that there is no objective reason to prefer one over the other. That was the dilemma facing Buridan’s famous Ass. With two perfectly identical bales of hay to choose between, and no way to make up its mind, the poor creature starved to death!

There are several morals in the story. For the sake of argument, the creature is presumed to rely crucially on a perceptible external difference to make a decision. Further, it should decide. A real creature’s hunger would naturally override any indecision. Human beings too are nominally smart enough not to be immobilized by indecision. Yet, we do rely heavily on alleged realities to determine our choices. Should one vote liberal or conservative if the parties’ platforms are scarcely distinguishable? Or should that redundancy discourage voting at all? Is there something corresponding to the donkey’s hunger that would drive citizens to vote despite a lack of meaningful choice? Though choice in the marketplace of consumer goods is similarly limited—and often meaningless—that hardly stops people from spending their hard-earned money.

At the extreme represented by absolute truth or value there is no real choice. Absolutes compel compliance. But at the extreme of no evident truth or value there is simply no basis for choice. If “free will” is a matter of choice, then it must lie somewhere between these extremes. But, of course, truth and value are not entirely external matters. To some extent (but which, exactly?), truth is in the eye of the beholder, and value is value to someone in particular. (The hay is of value to the donkey; one bale may loom larger in the moment of peak hunger.) Whether or not differences are objectively real, it is up to the subject to act (or not) upon differences perceived. Yet, the fundamental dilemma of a trade-off remains: freedom is conditional to the extent we rely upon externals (perceivable differences), while a rationale for choice is undermined to the extent that we ignore them. Determinism and free will are opposing human constructs—extreme idealizations. Between these extremes, where does instinct, intuition, or common sense lie in choosing, whether for the donkey or the human?

Here’s a thought experiment. Imagine a tube down which ball bearings roll in a vacuum. This tube is perfectly ideal, as are the perfectly fitting metal spheres, with perfectly identical dimensions. The tube is perfectly vertical. Below it is situated a sharp wedge, perfectly centered underneath the tube. This wedge diverts the falling balls either to the left or right. We should expect a random distribution of balls going either way, analogous to the outcomes of random trials like the tossing of a coin. Any preference for one side or the other would indicate that the wedge is not perfectly centered (corresponding to a “loaded” coin, whose heads and tails do not come up equally often). However, it is physically possible for a coin to land on its edge. Only an infinitely thin coin would totally exclude this unlikely possibility. Similarly, we could suppose the wedge is honed to infinite sharpness and perfectly centered on the tube. As with the coin tosses, two infinitely ideal perfections are pitted against each other—like the irresistible force versus the immovable object. Perfect undecidability is pitted against the perfect means to decide. In this counterintuitive situation, instead of veering left or right, the perfectly elastic falling ball could hit the wedge square on and bounce right back up the tube! In the absence of disturbing forces, even the infinitely thin coin could balance on its razor edge. No decision is made. This is the sort of logical stalemate that idealization can produce.
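As a playful aside, the statistical claim can be checked with a toy simulation. Below is a minimal Python sketch (the wedge offset and the balls’ lateral jitter are made-up parameters for illustration; nothing in the thought experiment specifies them): a perfectly centered wedge yields roughly equal left and right counts, any offset “loads” the outcome, and setting the jitter to zero reproduces the stalemate described above, where every ball meets the apex square on.

```python
import random

def drop_balls(n_balls, wedge_offset=0.0, jitter=1e-3):
    """Toy model of the tube-and-wedge thought experiment.

    wedge_offset -- how far the wedge apex sits off the tube's centerline
                    (0.0 means perfectly centered).
    jitter       -- tiny random lateral spread of each falling ball
                    (0.0 means the perfectly ideal tube of the text).
    """
    left = right = stalemate = 0
    for _ in range(n_balls):
        # Ball's landing point relative to the wedge apex.
        x = random.gauss(0.0, jitter) - wedge_offset
        if x < 0:
            left += 1
        elif x > 0:
            right += 1
        else:
            stalemate += 1  # square-on hit: the ball bounces straight back up
    return left, right, stalemate

print(drop_balls(100_000))                      # centered wedge: roughly even split
print(drop_balls(100_000, wedge_offset=5e-4))   # off-center wedge: a "loaded" coin
print(drop_balls(100_000, jitter=0.0))          # fully idealized: pure stalemate
```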

Occam’s Razor is the principle that the simplest explanation should be preferred. It presumes a well-defined criterion of simplicity and a well-defined situation. To coin a term, Buridan’s Razor is the principle that a distinction can always be made on which to base a choice, if only one’s powers of discrimination are honed enough. That doesn’t mean that choosing is always necessary. Sometimes it’s handy to keep alternative explanations or options on hand. Nor is choosing always meaningful or desirable. Bales of hay are irrelevant to a satiated donkey—or to one that is currently falling to its death alongside them from a high cliff. On the other hand, in this cartoon situation, that momentary state of weightlessness wouldn’t preclude choosing, purely as an act of free will. The fact that we seem to have solid ground under our feet gives apparent weight to our choices. But are we not all falling through time?

 

The Fermi paradox and para-luminal signalling

Life could be common in the universe, given the abundance of stars with planets and the fact that life arose on this planet within a relatively short cosmic time. One could conclude from this that there should be an abundance of civilizations capable of contacting or visiting Earth. This does not seem to have happened. UFOs notwithstanding, there is no scientific evidence of alien visitations or of efforts to communicate, such as might be revealed through programs like SETI. This discrepancy was first pointed out by physicist Enrico Fermi, after whom it was named.
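The underlying reasoning can be made explicit with the familiar Drake-style estimate (not invoked in the text above; quoted here only as a sketch of the logic). The expected number of communicating civilizations in the galaxy is a product of factors:

$$N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L$$

where $R_{*}$ is the rate of star formation, $f_{p}$ the fraction of stars with planets, $n_{e}$ the number of habitable planets per such system, $f_{l}$, $f_{i}$ and $f_{c}$ the fractions of those on which life, intelligence and communicating technology arise, and $L$ the average lifetime of a communicating civilization. Generous guesses for the factors yield many civilizations; the paradox is that we detect none.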

Our search for extraterrestrial intelligence (SETI), and attempts to message it (METI), involve detecting or sending electromagnetic signals—especially radio waves. Indeed, our primary means of knowing about the universe is through electromagnetic radiation—light—one of the four known fundamental forces. A modern revelation, however, holds that less than 5% of the “bulk” of the universe is visible to us by means of that force. The mysterious remainder of the universe does not seem to interact with electromagnetism. It does interact gravitationally, however, which is how we can even know about its existence. Let us put aside the possibility that so-called dark matter and dark energy may be no more than artifacts of a defective understanding of gravitation. Perhaps our search for alien life has been like the man who was looking at night for his lost keys under the lamppost—because that’s where the light was. Could we be using the wrong medium to seek out extraterrestrial intelligence?

Faster-than-light communication or travel is commonly thought to be prohibited by Special Relativity. However—as argued in my paper “Why Is c a Cosmic Speed Limit?” (archived on this site)—the speed of light imposes an absolute limit only if light itself remains the unique standard for measuring speeds. The apparent speed limit represented by c results from the circularity of using light itself to investigate even its own properties. That does not, of itself, preclude the existence of a superluminal force that could serve as a signal. To be clear, there is no evidence for the existence of such a thing. If it did exist, however, the paradoxical effects of faster-than-light travel (such as going backward in time) would only appear if light continued to serve as the standard by which to measure and investigate the new medium. If this new signal itself became the standard, the speed of light would take a modest place like the speed of sound, perhaps well below the new cosmic speed limit imposed by the new medium.
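For readers who want to see exactly where the familiar paradox enters, the standard special-relativistic bookkeeping (the textbook relation, quoted here only to show the role of c as measurement standard) gives the time interval between the emission and reception of a signal of speed $u$, as judged from a frame moving at speed $v$:

$$\Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^{2}}\right) = \gamma\,\Delta t\left(1 - \frac{u v}{c^{2}}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}$$

Reception can precede emission ($\Delta t' < 0$) only when $uv > c^{2}$, that is, only for a signal judged superluminal by the light-based metric. The reversal of time order is thus a feature of that metric, not necessarily of a hypothetical new medium measured on its own terms.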

This suggests a possible explanation of the Fermi paradox: advanced civilizations simply don’t use electromagnetism for communication, but some unknown medium to which we are not (yet) sensible. By analogy, our present situation is like that of blind creatures who know of the world only through touch and sound, and who build huge sound detectors and emitters in the vain hope of making contact with extraterrestrials. On the other hand, the para-luminal signalling medium of advanced civilizations might enable aliens to “see” dark matter if it involves a causal interaction resembling that of electromagnetism with ordinary matter.

This brings us to an aside about black holes. In present theory, by definition a black hole is a region of gravitational force so strong that even electromagnetic radiation cannot escape from it (except by “leaking” through Hawking radiation). But gravity itself does readily escape from it. Curiously, gravity is supposed to travel at the speed of light, in waves resembling very long and subtle electromagnetic waves. But why should gravity resemble electromagnetism in having the same characteristic speed and wave-like nature, yet be unlike it in escaping black holes? (Indeed, what does it mean for gravity to escape itself?) Gravitational waves may be thought of as disturbances propagating in the structure of spacetime itself. But that spacetime structure is defined in terms of the speed of light, c, which makes the whole business suspiciously circular.

Of course, many other explanations for the Fermi paradox have been proposed, which is not really a paradox so much as a mystery. A reasonable explanation (while bearing ominous implications for us) is that technological civilizations inevitably destroy themselves before they can accomplish serious space travel. Carl Sagan was perhaps the first to write about the Fermi paradox. In his relevant novel, Contact, the main character is asked what question she would pose to the aliens who have (in the story) initiated contact with Earth in a message that has been received by a radio telescope and deciphered. Her answer: “How did you do it?” Meaning: how did you survive your hazardous technological adolescence?

Technological civilizations might actually be rare. Natural evolution may not favour our kind of tool-using intelligence. On the other hand, our brand of technology and intelligence may lead us to conceive aliens in too-human terms. They could operate on other time scales, using novel means of communication. Their form of advancement might be post-technological: they might not be interested in space travel or contact with distant civilizations. They might be cautious, if not paranoid, about contact—deliberately hiding. They might be ignoring us because we are too primitive—or too small or too big.

Humans exist on a scale intermediate between the largest and smallest we can conceive. There may be as many possibilities for existence in the realm of the very small as in the very large. Aliens might not occupy our physical scale, with our expansionist ideals of “conquering space.” They might have opted for miniaturization, maximizing possibilities in the micro realm of computation and information flow rather than energy flow. Such a mentality would have no interest in expansion in outer space, terra-forming, Dyson spheres, massive engineering projects on a galactic scale, or communicating with us.

Perhaps we should not expect to encounter biological aliens at all, or their signals. For, in order to persist over cosmological periods, biological life forms would logically give place to more durable artificial successors. Since such beings, unlike us, could have the ability to re-configure their minds and bodies voluntarily, their motivations and thinking could be incomprehensible to us and unimaginable. We cannot anticipate what they might become. Indeed, with some feasible means to overcome the limitations of physical embodiment, they might migrate to cyberspace, no longer interested in what we call physical reality. Yet, even living in a virtual world, their digital reality would necessarily exist as some form of computation in the physical world. According to our present limited thinking, that would involve massive use of energy, which might itself constitute an identifiable signature of alien civilization.

Similar reasoning applies to the human future. In order to persist, we too would be succeeded by artificial beings. Will they have the same interests in space exploration and contact with alien intelligence? How much will they have in common with present humanity at all? If supra-luminal space travel is possible (through wormholes, for example), then the distances involved would be less of a deterrent to contact than they are under the limit that the speed of light currently imposes. But our successors might not have the motivation or mentality to reach out, even if freed from that limit.

The problem of nihilism

The problem of nihilism arises when conventional sources of meaning and motivation wither or are overcome by doubt. By nature, we look to externals to justify what we value. That is, we look to a worth inhering in those things themselves, or some reason in the world why they should be valued. This habit stems from the natural outward orientation of the brain, which focusses on the environment as the source of the organism’s wellbeing. This orientation is a basic fact of being a creature with a nervous system, dependent on an environment for its life. There is security in a dependent relationship, as the child finds security in the mother. The loss of this dependency itself is threatening. One can feel “orphaned,” on one’s own to face the Void (or, more cheerfully, the Great Mystery). One can no longer count on inherited patterns of belief we call meaning.

The same outward focus is the source of our natural interest in causality. We notice that events lead to other events. Turning that observation upon ourselves is a different matter. We are loath to think that the world determines our every act—including our thoughts and feelings and our very perceptions. We admit that physical causes operate upon and within the brain. But we prefer to think that there are reasons as well as causes for what we do, think, feel and perceive. We prefer to believe that we choose our actions, and that our perceptions are justified insofar as they correspond to reality.

The senses seem to reveal the external world as it truly is. Similarly, there is more to choosing behavior than whim. Choices have real consequences and we naturally seek to justify them (to ourselves and to others) in real terms—which means in terms of causal processes in the world. On the one hand, we recognize that the world has power over us; on the other, we want to determine our own actions and thoughts. Meaning is naturally imposed on us by the external world, upon which we depend as natural organisms. But as beings who wish to claim free will, we are ambivalent toward that dependent condition. Through our biological dependency, meaning seems to abide ready-made in the world, just as the world seems transparently revealed to the visual sense. However, to know that meaning, like perception, is a function of biological need—and not intrinsic in the world—renders the world unreliable as the source of meaning and throws us awkwardly back upon ourselves.

Nietzsche warned that such nihilism could lead to personal despair, apathy, and a passive or destructive culture. There can be anger at the loss of meaning, as of any resource. Disillusionment or disenchantment is a loss of faith in something once deemed real or true, and therefore a loss of certainty. The normal outward-facing mind can no longer count on finding justification “out there” for its beliefs and actions. Doubting particular beliefs or assumptions can be functional, because it can lead to a better understanding of reality and more self-confidence. But doubting the reliability of the mind or the reality of one’s experience can be undermining and overwhelming.

Descartes’ skepticism concerned the input of the senses, which could be falsified through interference in the nervous system; his solution was that God would not permit systematic deception. If we substitute nature for God, we could suppose that natural selection would not permit deception that prevents our existence at least long enough to reproduce. (Deception that promotes our existence is allowed!) In any case, Descartes took comfort in his cogito ergo sum, concluding falsely that—while one could doubt the existence of the world—one could hardly doubt one’s own existence as an experiencing subject.

Nietzsche’s skepticism concerned something else—not the veracity of experience but its meaning. Even granted reliable sensory input, there is no absolute basis on which to interpret it, no absolutely trustworthy source of meaning. There is no absolute reference frame: “God” is dead. His solution was to re-evaluate valuation itself—to relinquish reliance on the external world to determine one’s beliefs and actions, whether by cause or by reason. He took comfort in amor fati—the practice of intentionally embracing all experience without the habitual evaluation. If meaning cannot be counted on from outside, one must create it oneself. Even negative experience should be welcomed as an opportunity to be intentional and self-determining.

Meaning is the framework for evaluating experience. Its loss is itself naturally evaluated negatively. That judgment simply reflects our continuing habitual reliance on externals, part of our natural conditioning. Despair or depression is a normal response to the deprivation of meaning. One can defend against it by joining a group or cause, embracing an ideology or reaffirming a faith, returning to traditional values, adhering to routines, etc. One can also simply lose oneself in distraction, entertainment, or drugs. Nietzsche calls such responses passive nihilism. While these responses may acknowledge the potential loss of meaning, one reacts as though to an external threat, reflecting the continuing belief in reality as a causal factor. While individuals might choose to actively confront the Void, society as a whole cannot be expected to. Yet, if it does not, Nietzsche warned, it may fall into religious fundamentalism, rigid nationalism, populist movements, or totalitarian systems, as substitutes for lost meaning—all of which we have seen.

An alternative is to voluntarily relinquish the meaning of which we otherwise feel deprived. But that requires claiming utter responsibility for all one thinks and feels. Ironically, that is the position of the Creator, as opposed to the finite creature whose lot is to respond as best as it can to the vicissitudes of the Creation. Nietzsche’s Übermensch is a godlike being, who asserts free will against the determinism of the natural world. That is a strenuous and heroic ideal, whose demands may have been too much even for Nietzsche, who collapsed at age 44 and spent the rest of his short life being cared for by family or in an institution. One could read this as a concrete illustration of the human confrontation with the Void. To live continuously in tension with it, creating new values ex nihilo, requires enormous courage and vitality. Nietzsche’s life shows both the possibility of a life-affirming philosophy and the cost of pushing a finite organism to its limits.

In other words, existentialism may be bad for your health. Moreover, to have faith in it—much less to seek refuge in it (like in the Buddha or in Christ)—would be paradoxical. That would assert an externality to justify a course of action; but that externality is precisely the truth that no externalities can absolutely justify action or belief. To “love life as it is” (embracing all experience as Nietzsche prescribed) must include the biologically-grounded judgments of pleasure and pain, which are given in experience and are hardly matters of conscious choice. (Except for masochists, loving pain seems a contradiction in terms.) To confront the paradox is to confront the Void itself. For, we are conditioned to seek reasons (justifications) for our values, beliefs and actions. If there is no reason underwriting any choice, there is no reason to choose existentialism either. This does not prevent one from choosing one’s values or behavior, only from expecting a justification. There one is on one’s own.

Choice is ultimately arbitrary insofar as it cannot be justified by externals. On the other hand, justification can be found internally: in how the organism senses and evaluates its own states, providing its own reasons. In that context, to choose nihilism, existentialism, or anything else, mirrors the organism’s fundamental self-defining nature—its autopoiesis. Looking inward affirms the self’s responsibility for itself and its actions, reflecting the organism’s autonomy and relative freedom. From an existential point of view, values are not found but created. Nietzsche viewed life as a work of art, rather than a science. Artistic choice is simply up to the artist. On the other hand, value can be a consciously shared collective creation. Then it would no longer be what we fight over, but what we create together.

 

The tragedy of the Web?

Increasingly, people are getting their information directly from chatbots, which do the trawling for us, instead of using conventional search engines to find relevant sites and then visiting those sites. According to The Economist (July 19, 2025), the consequent loss of revenue from advertising that appears on those sites poses a looming problem. The internet is in trouble because AI-powered search engines are reducing human traffic on the Web. After all, humans are needed in the loop—to make purchases, to spend money, to be good consumers!

From the point of view of those who have felt all along that the Web should be a non-profit resource, commerce is not the victim but the perpetrator. From that point of view, the Web has been in trouble from its inception.

Certainly, there are costs involved to maintain the internet as a “commons.” (Accelerating electricity consumption for giant servers could one day bankrupt the planet—but that’s another story.) These infrastructure costs must somehow be passed on to users. The latter pay fees to their internet provider for access, for example, and for various apps and services. Yet, the bulk of the cost of running the Web is apparently paid by advertisers—and, thus, indirectly by consumers who view the advertising. They pay by buying the products advertised—despite (and indeed because of) having paid for internet access. The business model is like that of a conventional newspaper, which charges a subscription fee but also derives revenue from printed advertising. The proportion between these sources of revenue can vary.

Presumably, advertising works, otherwise it wouldn’t pay. What that actually means is that it causes people to buy products and services they would not spontaneously seek out, and perhaps do not actually need. The word literally means “turn (attention) to.” In other words: to advertise is to distract attention from where it would otherwise be directed. This is a complaint of many users, who find the distraction of online ads annoying. Some are willing to pay for the “service” of not being exposed to them. In effect, they would rather pay up front for internet access than pay indirectly through ads. This changes the balance between direct and indirect revenues; but the fees for ad-blocking do not go (directly) to pay the costs of infrastructure.

Why advertising works is a mystery with deep implications for society and human psychology. For people who know their own needs and wants, it is helpful to get information about products and services they seek, and how to find them. Information about what they do not need and are not seeking is noise—an annoyance. Personal assistant chatbots can provide a very useful service by searching, upon request, for those products and services a client actually wants. In other words: shopping for the client. However, advertising is not based on this client-driven model, but on manipulating people to consume.

If consumers were rational, advertising would not pay. Users tolerate online ads because they seem to get free access to the sites that incorporate them. But if advertising works, the access is not free. We do pay for it—indirectly and collectively, if not personally. If you don’t happen to respond in that moment to the ads on that site, someone else nevertheless does. The effectiveness of advertising is statistical.
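A back-of-the-envelope sketch makes the statistical point concrete. The figures below are purely hypothetical (response rate, margin and ad rates invented for illustration, not industry data); the point is only that a campaign can pay even though it “works” on almost no one in particular.

```python
# All figures hypothetical, for illustration only.
impressions   = 1_000_000   # times the ad is shown
response_rate = 0.002       # 0.2% of viewers end up buying
margin        = 15.00       # advertiser's profit per resulting sale (dollars)
cpm           = 10.00       # advertiser's cost per thousand impressions (dollars)

expected_profit = impressions * response_rate * margin   # 30,000
ad_cost         = (impressions / 1000) * cpm              # 10,000
print(expected_profit - ad_cost)                          # 20,000: the ads pay
```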

Big data is a new commodity, ultimately dependent on advertising. Aside from identity theft, your personal “data” are about your online attention patterns, so that advertising can target you specifically. If advertising never works on you, your data are worthless commercially, although they could be of interest to the state. You get the advertising anyway, because statistically it works on enough people. It would cost the advertiser to remove you individually as a target—which, ironically, has become a service you can pay for, with ad blockers or with the effort it takes to “unsubscribe” from a mailing list.

For those who seek fingertip information online—for whatever purpose—chatbots provide a valuable service, for which we should be willing to pay. For those who seek to make money from their websites indirectly through advertising on their sites, chatbots may represent a threat to their traffic-based income. This points to a divide in motivations. In the early days of the internet, there was great hope for a universal non-commercial show and tell, where people could share information freely—that is, without restriction and without cost beyond maintaining infrastructure. It did not take long for commercial interests to take over—for the purpose of profit unrelated to that cost. The Web is now primarily a marketplace that incidentally allows show and tell.

Those motivated to share content altruistically, or for fun, need not feel threatened by chatbots stealing traffic from their sites. If the information is the important thing, then what does it matter where it comes from or how it gets delivered? If the information is offered gratis, crediting sources is important for validation—but not because of copyright. The problem is rather for those whose livelihood depends on the sharing—authors, artists, musicians—who need and deserve compensation for their creative efforts. AI threatens to take over their production as well as to affect their distribution. On the other hand, the threat to those whose “productive” contribution is no more than manipulating the attention of others—or profiting indirectly from their labors—is a different matter. The divide is roughly along the lines of the traditional divide between labor and capital.

Will chatbots diminish the incentive to create online content? Well, that depends on what that incentive is. If you are making content primarily for gain, then you may deservedly be at risk. If your motive is to share your work for its own merit, or for its potential benefit to others, you should not necessarily be discouraged by a decrease in human visits to your site. Your work will be forwarded, in some digested form, by chatbots that may actually frequent your site more often than was occurring before.

The more disturbing question is how chatbots will transform that content. Volunteers may be discouraged from contributing to Wikipedia, for example, if they know the information will be distorted or that Wikipedia will not be credited. This is an epistemic issue of citing sources, not a commercial one. It reflects broader issues of the dissemination of information in modern society. In contrast to the non-profit Wikipedia, Reddit is a show-and-tell forum that is also a corporation with a listed share price. Those who share on the platform may or may not be shareholders in the corporation. Even for those who are both, losing human readers should be a different concern than falling profit. AI now competes with human content providers, who have been competing with each other all along. Getting attention remains the same needle-in-a-haystack problem it has always been. If AI can provide better content faster, this is indeed a challenge to humans, but one we have created ourselves. Let us rise to it!

Many academic publishers of journals and books now encourage an “open access” model, where authors or their institutions pay the cost of publication, which is then available online to the reader for free. This has been very profitable for the publisher and very costly to the supporting institutions, such as universities. It may be one cause of rising tuition fees. Why should the costs of reproducing content online be so high, when the intellectual content itself has already been produced for free, or already paid for through academic employment? The cost of formatting has been greatly reduced by AI tools, and pre-publication editorial reviewers are usually unpaid volunteers. A simple answer is greed, euphemistically known as profit margin. Academic publishing is big business, dominated by a few players. In the context of the present information distribution system, the publisher offers what seems to be a valuable service, given the absence of alternatives—though hardly out of the goodness of their heart. AI has the potential to revolutionize the information distribution system, so that access would be truly open. But predatory commercial interests will try to hijack it for their own benefit.

Motivation is the dilemma that underlies the emerging crisis for the Web—and for the world at large. The issue was there from the start, when the internet was envisioned as a “commons” whose use was up for grabs. It was there at the origins of capitalism, when lending transformed from a reciprocated neighborly gesture to ruthless usury. Should the Web be for sharing information, moderately compensated, or for rampant commercial exploitation? As it stands, sharing of information is still possible—in the context of commercialism. According to The Economist, “everyone has an interest in making content-creation pay.” Really, everyone? The tragedy of the Web is the tragedy of modern society. We are in a crisis of motivation.

 

Inter-species ethics

As part of the system of nature, humans must eat other beings, whether animal or plant. Beyond satisfying hunger, culturally we’ve made dining on our fellow creatures an enjoyable experience. Yet, culture itself reflects a rejection of the animal way of living. Cooking makes food more digestible chemically, but also esthetically and morally. We do not, like other carnivores, tear into raw bloody carcasses. While not equipped with the teeth to do so, we also do not want to act like brutes. The whole of civilization represents a flight from our animal nature, as well as from biological and physical limitations. But, as such, this project of transcendence can only be a compromise. We may choose to eat with cutlery, manners, spices—and not like the beasts. Yet the fact remains that we must eat.

With good reason, there is increasing concern over animal welfare, as well as concern for future food supplies. The circle of moral concern has expanded from sheer tribalism to a universal definition of humanness as membership in a species, regardless of race or ethnicity. Now it rightfully extends to other creatures in the “web of life,” and potentially to artificial creatures as well. Yet, the basis for such concern remains fundamentally anthropocentric. The notion of sentience is valued as a criterion, based on conscious human experience. We empathize with other creatures in the way that we do with other people: by imagining their experience. While that extends a natural capacity that enables human sociality and coexistence, sentience is an inadequate moral criterion for the treatment of animals. The fundamental problem is that we cannot experience, or really even imagine, the consciousness of other forms of life. Indeed, we can only experience our own personal consciousness. We humans have agreed, by convention established through language, to acknowledge and mutually conceive each other’s experience as similar. We do not have that advantage with other creatures and, for the most part, do not consider them persons. We are confined to observing their behavior to infer what they might experience, using our own experience as a reference: how I would feel in that situation, with that injury, etc. It long remained easy enough to deny that non-human creatures even have experience.

Using behavior as a guide, it now seems clear that many creatures show signs of reacting to injury or threat in the ways that we associate with pain or fear. Yet, it is a mistake to judge their moral worth by an imagined sentience, as though the creature’s experience trumps its objectively observable condition. The cliché of putting a suffering animal or insect “out of its misery” suggests that the pain itself is what counts, not the injury that results in the pain. (In that way, we can be spared the effort to repair damage, in contrast to the heroic efforts made to repair human injuries.) Similarly, “humane” slaughter of animals for meat overlooks the worth of the animal as a remarkable entity, by focusing on simply minimizing its suffering. But its pain, or fear or suffering, is its own evaluation of its threatened or damaged state. To give it a mercifully quick or anesthetized demise simply curtails this self-evaluation of its own destruction—as though fooling the creature substitutes for preserving its life. In contrast, we anesthetize people for surgery, with the aim to preserve them. A more useful criterion than sentience is the well-being (the objective state) of the creature, as determined by observation. This may not coincide with the creature’s own self-assessment, but at least it refers to something potentially observable. Doctors make their assessment of damage to your tissues independently of your assessment through felt pain. Certainly, they strive to relieve your pain and distress; but, above all, they want your body to heal for your sake. We may care in that way for pets, but hardly for animals destined to become our dinner.

So-called utilitarianism is based on the assumption that what matters is subjective experience rather than an objective state of affairs. The idea is to maximize pleasure and minimize suffering, even worldwide. (The parallel in economic theory is maximizing “utility.”) This seems morally admirable, especially if it could extend to all life. However, there is no way to quantify pleasure or suffering, which are falsely reified as measurable commodities. The idea leads to absurdities, as in trying to calculate how much pleasure for some individuals justifies a given amount of suffering for others. Moreover, what counts or “matters” can only mean what matters to someone or some group in particular. In the case of livestock, what matters to the human owner is hardly what matters to the animals themselves, who do not volunteer to sacrifice themselves for human nourishment. Similarly, moral arguments based on “intrinsic” worth of life are spurious, because worth means worth to someone, and may vary accordingly. Ideas of inherent dignity or moral equality among species (the interests of all creatures are equally valuable) similarly have no absolute sense; they can only mean that someone values them equally or treats them with uniform dignity. If humans count for more, it is likely because they are doing the counting.

One way to measure the worth to us of various creatures—morally and even economically—would be through the effort required to create them artificially. With artificial intelligence and genetic engineering, we are on the threshold of the science and technology to do so. Imitating, recreating, displacing, and improving upon nature are ancient goals slowly being realized through technology. They partially satisfy our drive to altogether transcend natural limitations. But “nature” is not only the physical world surrounding us; it is also the nature at the core of our being. We’ve sought to morally transcend our own animal nature, at first through religion and law, and now through science. Yet, a most embarrassing remnant of our animality is the need to feed off of other creatures. While other life forms are rightly considered worthy of our moral concern, our primary concern should be for our own status as moral agents. Are we essentially the gods we aspire to be or are we the beasts that nature made us? Given what we eat, the verdict should be clear.

Artificial intelligence not only can imitate life, but can also advise us how to properly relate to it. I asked a chatbot to outline solutions to the moral dilemma presented by food dependency. I began with the premise that dead human bodies represent a wasted food source—in contrast to the billions of animals whose lives are deliberately made miserable and cut short for human consumption (more than 70 billion land animals alone, per year). Though I wasn’t asking how to murder people for food, I wondered if the prompt might trigger a censorship rule. Quite the contrary, the chatbot enthusiastically made some interesting suggestions, even proposing a “Transhumanist Food Manifesto.” According to this vision, as so aptly put by the AI: “No child starves. No cow screams. No forest is razed for a hamburger. No body is wasted in flames or boxes.”

Certainly, there would be problems with recycling human corpses, let alone with eating human flesh. Aside from cultural taboos and biases, human brains contain dangerous prions that are not destroyed by cooking. Like the remains of other long-lived animals, human remains contain accumulated heavy metals and other toxins. Yet, it is theoretically possible to hydrolyze proteins derived from human sources into purified amino acids. Instead of eating human flesh, bodies could be rendered into fertilizer, animal feed, or the nutrients for growing lab meat synthetically. The program would be voluntary, of course, like donating your body for medical research (with a little tattoo, saying “Eat Me”?). Human cells could also be used to culture synthetic meat. Protein-rich biomass could be engineered from bacteria, yeast, and algae.

From a moral perspective, and also for efficiency, food would no longer be a product of suffering, slaughter, or waste. Food production could be optimized with AI and genetically tailored to individual nutritional needs. While it could be made to taste like the real thing, it would not even necessarily consist of solids. Nutrition could be delivered via synthetic blood, nanobot-mediated infusions, or even photosynthetic skin implants. All in all, factory farming would be eliminated and famine potentially overcome. Emissions and land use would be minimized. All biological matter, including humans, would be respectfully recycled. The human moral dilemma of carnivorism would be solved.

The treatment of animals is intimately linked to the treatment of human beings. At one time, members of other tribes were literally fair game: the category ‘human’ was reserved for one’s own kind. We’ve since automated the slaughter of people and animals alike, though we give lip service to the humane treatment of both human and animal captives. It can be argued that improved ethics in regard to animals has followed upon improvements in our ethical principles in regard to human beings. Perhaps the leverage can work the other way as well. We might treat each other better if respect for life were unconditional. But that seems possible only when our own existence doesn’t depend on taking the life of ostensibly sentient beings.

The end of common sense

Common sense may not be so common, but what exactly is it and what is it for? Aristotle thought there must be a special sense to coordinate the other senses. This meaning persisted through the Middle Ages, although the Romans had added another meaning: a moral sensibility shared with others. It fell to Descartes to provide the modern meaning of “good sense” (bon sens) or practical judgment. He thought that, like the other senses, it was not reliable and should be supplemented by formal reasoning. Giambattista Vico, a forerunner of sociology, thought common sense was not innate and should be taught in school. His view of it as judgments or biases, held in common by a society, merges with the idea of public opinion or consensus. Kant and later thinkers returned to the idea of a shared moral sensibility, so that common sense is related conceptually and linguistically to conscience.

We have long taken it for granted that common sense comes as standard equipment with each human being. That presumes, however, that each person not only develops according to a norm but develops in the setting of the real world. Ultimately, it is physical reality and our physiology that we have in common, which provide the biological basis for mutual understanding and consensus. The human organism, like all others, evolved as an adaptation to the natural world. Whatever “practical judgment” we have is learned in relation to a world that holds over us the power of life and death. Common sense is our baseline ability to navigate reality.

Of course, most of us do not grow up in the wild, like animals, but in environments that are to a large degree artificial. “Reality” for us is not the same world as it was for people a hundred years ago, a thousand years ago, or ten thousand years ago. Yet, until recently, the reality experienced by people of all times lay objectively outside their minds and bodies. Common sense was firmly grounded in actual sensory experience of the external world. This can no longer be taken for granted. We now live increasingly in “virtual” realities that are, however, far from virtuous. Because they can be as diverse and arbitrary as imagination (now augmented by AI) permits, there is no longer a common basis for shared experience, or for common sense.

This shift is the latest phase of a long-standing human project to secede from the confines of nature and the body. In the anthropological sense, culture is the creation of a distinctively human realm, a world apart from the wilderness and physical embodiment. We built cities for physical escape. Our first mental escape was through trance, drugs, and religion, which imagined a life of the spirit or mind that was distinct from the life of the animal body. With Descartes, “private experience” formally became a realm unhooked from the external world. With dawning knowledge of the nervous system, he grasped that the natural formation of experience could be hijacked by a malicious agent. His thought experiment became the basis of the “brain in a vat” scenario, the Matrix films, and the paranoid popular memes that you are “probably living in a simulation” or in a theatrical hallucination created by “prompts.” Descartes consoled us that God would not allow such deception. Humanists supposed that natural selection would not allow it. Post-humanists invite it in the name of unlimited freedom.

In any case, common sense is the baby thrown out with the bathwater of external reality. Through technology, humanity grants itself its deepest wish: to be free to roam in inner man-made worlds disconnected from the world outside the skull. Nature had granted us a relative version of that freedom through dreaming and imagination. But our impulse toward creative mastery requires that humanity find this freedom on its own, not naturally but artificially. It must be created from scratch, originally and absolutely, not accepted as a limited hand-me-down from biology. Here we venture into dangerous territory. For, we continue to be vulnerable embodied creatures living in real reality, even as we buckle up for the virtual ride. Is God looking out for us while we trip? Is nature? The other side of utter creative freedom is utter self-responsibility. If experience is no longer to be grounded in the real world, but a matter of creative whim, then what basis is there for limits and rules—for anything but chaos?

The more time children spend online, using their eyes to look at screens instead of at the world outdoors, the less direct experience they will have of the external world. The more time they spend in some entertaining digital fantasy, the less basis they will have for developing their own common sense, which is grounded in the natural use of the senses to explore the external world. Of course, this applies to adults as well. It is not only the proper use of the senses that may atrophy, but the very ability to distinguish real from virtual, nature from artifact, truth from lie. The contents of movie entertainment, for example, are often absurdly fantastical, about themes and situations deliberately as far removed as possible from the tame humdrum of real life. It is precisely drug-like distraction from daily living that entertainment is typically designed to provide. But this is a vicious circle. We then expect from the real world the level of stimulation (adrenaline, serotonin?) that we get artificially from films, online gaming, “adult” content, and “substance abuse.” Indeed, we are trained to ignore the difference between reality and fiction, which can result in failure to tell the difference.

Social media are a form of entertainment, a virtual drug in which truth is reduced to gossip. They may help build consensus with those who “like” you and are like you in some context. In a brave new world of information overload, where the basic challenge is to sort what is fact or reliable opinion from what is not, common sense should be a legacy tool one can count upon. But common sense is not consensus. The failure of a society to know the difference is the banal soil in which authoritarianism grows. We are seeing it around the world right now.

Large language models and similar “generative” tools are another form of virtual reality and entertainment. Ironically, if properly used, they provide access to an artificial version of common sense—or at least consensus. For, they draw upon the common pool of human experience and creative output, as archived digitally. The answers you get to chatbot queries reflect a baseline of collective human knowledge and creativity; they are also organized according to collective ideas about what is logical, sensible, relevant. Another name for such collective wisdom, however, is mediocrity. LLMs are not minds that can think for themselves or originally. If they regurgitate information that proves useful to you, the task of understanding and using the information remains your own, grounded in common sense.

The internet potentially embodies the ancient ideal of omniscience. In itself, the instant online access to encyclopedic knowledge aggravates the problem of discernment: how to know what and whom to trust. The traditional answer to that dilemma has been education, reinforced by common sense, to sort what is meaningful from what is chaff. The traditional encyclopedia, while vetted by well-educated experts, gives relatively cursory information. The new answer is the “intelligence” of the AI tool itself, which sifts, organizes, and even interprets seemingly unlimited information on your behalf. You place your trust in it, as you would in human experts, at your own risk. It draws upon a common denominator of expert opinion. As with human experts, however, you are still dealing with hearsay: accounts that are second-hand (or nth-hand), which you must interpret for yourself. When your quest to go deeper approaches the ceiling of current common understanding, the replies will simply recycle existing clichés.

The situation is like what happened with the invention of printing. Suddenly people had a greatly expanded access to information (beginning with the Bible). This invited them—and indeed required them—to think for themselves in ways they were not used to when guided by the erstwhile gatekeepers of knowledge. This hardly led to consensus, however, but to an explosion of diverging Protestant sects. An optimistic view of the new information revolution is that people are similarly being challenged to think for themselves. Again, the actual result seems to be divisiveness. Of course, the printed page—while novel, thought provoking and entertaining—did not do people’s thinking for them. Yet, AI proposes to do exactly that! To implicitly trust the authority of AI is not so different from the faith in religious authority before the Reformation, when the priest could do your thinking for you. If common sense did not provide immunity from the excesses of theology, we can blame the closure of the medieval world—an excuse we should no longer have. Common sense should be the back-up tool of first resort. But to maintain it requires first-hand experience in the real natural world, which you cannot get online.

Epistemic cycles

Knowledge is a process that involves a dialectical cycle: thesis, antithesis, synthesis. The last term then serves as a new “thesis,” beginning a new cycle. We see this in formal knowledge processes, like scientific theory-making and testing. A new idea is proposed to explain data or to make up for a deficiency in current theory. This idea is published in a journal, for example, which invites comment and critique (antithesis), which may lead to further refinement and experimental testing. If the idea is accepted by the scientific community (and not disqualified by experiment), the resulting synthesis becomes a new thesis to be eventually challenged.

Ordinary cognition involves a similar cycle. But the brain tends to be more definite in its conclusions than scientific experiment or observation, whose results are always probabilistic; and it tends to be less rigorous about testing ideas. The organism must be able to act decisively on the basis of the information it has, however inadequate. If our perceptions were not definite despite actual uncertainty, we would be paralyzed by doubt. Yet, the knowledge cycle is incomplete and less reliable when thesis alone is in play, however confidently asserted.

The inherent need to believe our perceptions and trust our beliefs runs up against the contradictory perceptions and beliefs of others. While objectivity is desirable, the natural tendency is to mistake perception for reality or truth, short-circuiting the epistemic process. And in order to maintain this illusion, we tend to overlook inconsistencies in our own thinking, perhaps to protest that we are being objective while others are not. While there can be dissonance within one’s own thinking, leading to self-scepticism, dissonance with others is nearly guaranteed. Too often, internal dissonance leads not to questioning one’s own views, however, but to retrenchment of them and scepticism in regard to those who disagree. Nevertheless, the fact that opinions differ plays an overall positive role in the epistemic cycle, for which others provide the necessary antithesis. Whether spontaneous or forced by others, the recognition of one’s own error or subjective limits enables the mind to evolve at once toward humble relativity and greater objectivity.

It can hardly be taken for granted that embodied mind seeks truth. The goal of life is survival long enough to reproduce, not objectivity. In other words, our natural condition as organisms is to see and know what we need to see and know. And this is not simply a matter of selective attention or reduced information flow—an obscuring filter between the mind and an otherwise transparent window on the external world. Simply, there is no window at all!

The epistemic circumstance of the scientist parallels that of the brain, sealed inside the skull, which relies on the input of “remote” receptors to infer the nature of the external world. The scientist similarly relies on instrument readings. Both situations demand radical inference. The brain makes use of unconscious perceptual models, according to the body’s needs. Scientists consciously model observed phenomena, according to society’s needs. The brain’s unconscious perceptual models are reliable to the degree they enable life. By the same token, scientific modelling, like other human practices, should not be regarded for its truth value alone, but also for its ultimate contribution to planetary well-being. Good science supports a human future.

Science and engineering are intrinsically idealizing. The dominance of mathematics (which is pure idealization) means that physical phenomena are idealized in such a way that they can be treated effectively with math. This leads to an analysis of real systems in terms of the idealized parts of a conceptual machine. But the reality of nature never conforms perfectly to the idealization. There are no spherical cows, and nature is not a machine. The discrepancy constitutes a potential antithesis to the oversimplified thesis.

Unlike the individual brain, science is a collective social process. It is a communication among scientists—a (mostly) polite form of argumentation through which ideas are justified to others. In fact, science is a model of social cooperation, transcending political and cultural boundaries. Just as there is an epistemic cycle of knowledge production, so there are larger-scale cycles in science: paradigm shifts, but also alternations of more general undercurrents, themes, and fashions such as positivism and Platonism.

Indeed, the interplay of positing and negating aspects of mind manifests in historical cycles generally. The opposing phases in culture may be characterized broadly as heroic and ironic. These poles form a unity, like those of a magnet, alternating as undercurrents which surface in philosophical, social, political, religious, moral, and artistic movements, as well as in scientific fashions. The limiting nature of any proposition or “positive” system of thought casts a complementary shadow that is the other side of the coin. Every thesis posited defines its own antithesis. Where contradictions cannot be resolved logically—that is, outside of time—they give rise to temporal alternations in the phases of a cycle. The pendulum of history swings back, fashions return; we move in spirals if not circles.

Throughout history, there has been a dialectical relationship between the playful, embroidering, subjective, ironic side of the human spirit and the heroic, serious, goal-oriented, earnest, realist side. The ironic mentality delights in playing within bounds. It understands limits to be arbitrary, relative, intentional. The heroic mentality rejects limits as obstructions to absolute truth and personal freedom, while worshipping limitlessness as a transcendent ideal. The heroic is aspiring, straightforward, straightlaced, straight-lined, passionately simplistic, rectilinear, square, naive, concerned with content over form, and tending toward fascism and militarism in its drive toward monumental ideals and monolithic conceptions. The ironic is witty, sarcastic, curvaceous, ornate, sophisticated, diverse, complex, sceptical, self-indulgent and self-referential, tending toward decadent aimlessness and empty formalism. While each is excessive as an extreme, together they are the creative engine of history.

There are cycles of opening and closing in societies, in individual lives, and in creative processes generally. The tension between idealism and materialism, or between heroic and ironic frames of mind, helps to explain why history appears to stutter. Most of any historical cycle will consist of working out the details of a new regime, scheme, paradigm, or theory. But the cycle will also necessarily include an initial creative ferment and a final stagnation, sandwiching the more conventional middle. When change is too rapid or chaotic, there is nostalgia for the probably not-so-good ol’ days. Instability inspires conservative longing for structure, order, certainty and control—until an excess of those inspires revolt again, beginning a new cycle. Generally, too much of anything breeds contempt—and therefore its opposite—as part of the homeostatic search for balance.

Cycles acted out in real time may reflect the deeper endemic circularity of logical paradox. If space and time themselves are products of the brain, how can the brain be located in the space and time it has created? Self-aware consciousness deems the external world to be an image constructed by the brain, but the brain is part of the world so constructed as an image. The endpoint of an explanatory process is recycled as its beginning. It does not seem possible to resolve such circularity in a synthesis. That is perhaps why there cannot be a logically consistent scientific theory of consciousness, which remains a mystery because we are it.

Better to believe it

Against common sense, people can believe some very strange things. One marvels at the ingenuity of the human imagination—not only the things that make practical sense, like houses, agriculture, technology—but above all the things that make little sense to a rational mind, like gods and demons, superstition and magic. Yet, religion and magical thinking have characterized human culture far longer than what our secular culture now defines as rationality.

The ancient Greeks we admire as paragons of rationality seem to have actually believed in their pantheon of rowdy and absurdly human-like gods. The Pythagoreans believed in sacred numbers and the transmigration of souls; they used mathematics and music for mystical purposes. Plato believed in a metaphysical realm of Ideal Forms underlying material reality. Copernicus thought the planets must move in perfect circles, because the circle was the symbol of perfection; and Kepler thought that angels moved the planets along their (elliptical) orbits. The early scientists were literally alchemists and Creationists. There are scientists today who believe in the Trinity and the transubstantiation of the Eucharist. My point here is not to disparage religion as superstition, but to marvel that superstition can exist at all.

Language confers the nearly magical power to define things into being—as we imagine and wish them. Outrageous beliefs are possible because a story can easily be preferred to truth. A story can make sense, be consistent, clear, predictable and repeatable. Reality, on the other hand, is fundamentally ambiguous, confusing, elusive and changing. Reality only makes sense to the degree it can be assimilated to a story. It made sense to many ancient cultures that a year should have exactly 360 days (corresponding neatly to the 360 degrees of the circle). The fact that the daily rotation of the earth on its axis has no physical relation to the time it takes to move around the sun (the tropical year comes to roughly 365.24 days) was a great inconvenience to calendar makers over the ages, who knew better than nature how the world should work.

In general, what we consciously experience as real is the result of sensory input that has been assimilated to a story that is supposed to make sense of it, and upon which an output can be based that helps us live. The story does not need to be true; it only needs to not conflict with the existence of our species. That gives a wide latitude to imagination and belief.

The brain is a delicate instrument, normally tuned to the needs of the body. Like any complicated machine, it has much that can go wrong with it. Being so complex and malleable, it is also capable of great variation among ostensibly similar individuals, which can include behavior that deviates far from what serves the body or serves the species. Underlying all variation or dysfunction, however, is the natural faith we have in experience. We naturally tend to believe whatever our minds present to us. Human freedom consists in the ability to be wrong while utterly convinced that we are right.

Addiction is an obvious example of the compelling attractiveness of some stimuli—such as alcohol, drugs, or sex. It is natural to seek pleasure and try to avoid pain, because these represent the state of the organism, which tries to maintain itself. However, when experience is sought for its own sake (rather than for the body’s sake), the link with wellbeing is broken. We can then find pleasure in things that are bad for the body (and society), and reject things that are good for it. Of course, we have extended such meanings to include intellectual pleasures and emotional suffering as well. In fact, humans can abstract experience in general, away from its ties to the body, so that it becomes a sort of private entertainment to pursue for its own sake, apart from its relevance to bodily or social needs.

Other compulsions, such as obsessive behavior (including avoidance as well as attraction), further demonstrate the mind’s willingness to believe its contents. And then there is artificial input: electrodes applied to the brain, for example, can stimulate specific experiences or memories, and transcranial magnetic stimulation can alter perception, reportedly changing the apparent color of things or draining them of color altogether. Sensory deprivation, on the other hand, can cause outright hallucination, as the brain makes up its own experience in the absence of sensory input.

Depending on the circumstance, we may either believe, or have reason not to believe, a given experience. If you know you have wires stuck in your head, you may justifiably be suspicious of your experience. On the other hand, if you have ingested a psychedelic drug, or have an unsuspected brain tumor, it may affect your judgment as well as your perception, and you may fail to disbelieve your hallucination. It is helpful to keep in mind that the brain hallucinates all of the time; some of the time its hallucinations are dominated and guided by bona fide sensory input. We then call that reality and feel justified in believing the hallucination.

Within the framework of normal perceptual reality, we also have thoughts and feelings that we feel compelled to believe. Social media now run rampant with outrageous claims and memes, endorsed by our natural willingness, as social creatures, to believe what others tell us. Again, this reflects the power of language to evoke mental images and feelings, in a socially approved form of hallucination, to which we tend to accord the same credibility as we do to first-hand perceptual images and the feelings they arouse.

Even in the most abstract realms of speculation, we tend to have faith in our mental constructs. Often that faith is justified, at least provisionally, as a useful tool that can be updated by further observation and experiment. In the seventeenth and eighteenth centuries, scientists believed in a substance called phlogiston, supposedly released during combustion. That concept was superseded by the caloric theory, which conceived of heat as a sort of fluid. That idea was abandoned in turn in favor of heat as a form of energy—whether the kinetic energy of molecules or the radiant energy of electromagnetic “waves.” Energy, in modern treatments, persists as a kind of substance interchangeable with mass (as per Einstein’s famous E = mc²). What is actually involved, in all cases, is tangible measurement in specific contexts, not some ethereal quasi-substance. But to reify energy conceptually seems to be useful in physics even though “it” manifests in such diverse forms and consists in no more than measurable quantities. (Not to mention nebulous popular metaphysical notions of “energy,” such as chi.) Even more derivative abstractions, like entropy and information, are now reified as quasi-substantial and attributed their own causal powers. Even the measures we call space and time are reified—for example, as the 4-dimensional spacetime continuum.

To objectify is a built-in tendency of the mind. After all, our primary orientation is toward objects in space. We literally experience the world as a real space outside our skulls, filled with interacting things. Since language and thought are essentially metaphorical, it is natural (if not logical) for us to think of abstractions—indeed, anything that can be named—as at least vaguely substantial. We ontologize everything, more or less automatically (just as I am now, admittedly, ontologizing the compulsion to ontologize). The fact that this compulsion includes reifying experience as ‘mind’ or ‘consciousness’ leads to the infamous Mind-Body Problem, as we then ponder what sort of thing it must be, compared to physical things. Descartes posited a dualism of physical thing and “thinking thing.” Others, before and since, have proposed some monism or other instead: that everything is material, that everything is mental, or that mental and physical amount to the same “thing.” Underlying these isms, concerning what is ultimately real, remains the fundamental need to be settled about something that seems substantial.

The dualism above may turn out to be little more than a built-in feature of our nervous system, which provides us with two radically different points of view. The myelinated exteroceptive nervous system is the basis for the experience of an external world of objects in space and “digital” judgments regarding them. Through language, we conceive a “third-person” point of view based upon that experience of a world of publicly accessible objects. But the body operates also with a more fundamental unmyelinated nervous system, responsible for feeling, valuation, and homeostasis. It operates in a more analog mode to monitor the body’s needs and regulate its state. We identify these aspects with a “first-person” point of view, in which qualia and feeling are the chief features and seemingly private. Evolution has thus provided us with two minds, so to speak, which have in common the need to believe what they present.

A generalized Turing test?

The Turing Test, as proposed by Alan Turing, asks whether a machine’s conversational behavior can be distinguished from that of a human. Here we extend and generalize this idea to a broader framework, the Generalized Turing Test (GTT), as a thought experiment designed to distinguish between what is natural and what is manmade. The fundamental premise of the GTT rests on the idea that ‘natural thing’ and ‘artifact’ are categorically disjunct concepts, though the line between them can become blurred in actual experience. By premise, natural things are not made, but simply found, in the literal sense that they are encountered or come upon in experience. They seem to exist independently of human creation or intervention. Artifacts, on the other hand, are made; they are products of human agency and definition, though they might also be found in the above sense. Following Vico’s maker’s-knowledge principle, an artifact should be exhaustively knowable by the agent that made it. In contrast, the properties and relationships of a natural thing are indefinite for any cognitive agent.

In principle, finding and making are distinct relationships of subject (or agent) to object. In practice, they are ambiguous in some situations. For instance, in quantum measurement, it can be unclear whether the observer is finding or making the experimental result, since the observer physically intervenes in ways that affect the result. For another example, because it is not known how “neural networks” produce their results, it is unclear whether the programmer is making or finding the results. You can know that something has been made, because you made it or witnessed it being made. But a thing you find cannot be assumed natural simply because you did not knowingly make it. The matter is complicated by the fact that your perception of the world (in contrast to the world itself) is also an artifact—produced by your nervous system! Is the world that appears to you found or made?

A model is an artifact that simulates a found thing by attempting to formally reproduce its properties and relationships. A model is a product of an agent’s definitions. It consists of a finite list of properties and relationships, which are themselves products of definition. Any model can be exhaustively modelled, because it is well-defined. In principle, an artifact and its model(s) are finitely complex; any artifact or simulation constructed from a model can be perfectly simulated. In contrast, a natural thing may be indefinitely complex; it cannot be perfectly simulated because no model can list all its properties and relationships. On the other hand, there is no logical limit to the complexity and completeness of models, or thus to the apparent realness of simulations. In principle, any given thing can be simulated so effectively that a given cognitive agent (with its limited resources) cannot distinguish between the model and its target. However, there are practical limits to modeling and simulation, which involve limited computational resources. Deterministic chaos, for example, can be modeled only for a limited period of time before its predictions diverge from the actual behavior of the system modeled. The question is whether these resources are sufficient to pass the GTT in a given instance—which means to convince a cognitive agent that the thing in question is natural.
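
To make the point about chaos concrete, here is a minimal sketch in Python (my illustration, not part of the original argument; the numbers are arbitrary). The logistic map is a fully specified deterministic rule, yet a model that starts with an error of one part in a billion soon fails to track its target at all.

```python
# Illustrative sketch: deterministic chaos limits how long a model can track its target.
# The logistic map x -> r*x*(1 - x) at r = 4 is chaotic: two trajectories differing
# by one part in a billion diverge until prediction becomes worthless.

r = 4.0
x_target = 0.3            # the "real" system
x_model = 0.3 + 1e-9      # the model, with a tiny error in its initial data

for step in range(1, 61):
    x_target = r * x_target * (1 - x_target)
    x_model = r * x_model * (1 - x_model)
    if step % 10 == 0:
        print(f"step {step:2d}: target={x_target:.6f}  model={x_model:.6f}  "
              f"error={abs(x_target - x_model):.2e}")
```

After roughly thirty iterations the two trajectories bear no resemblance to one another, even though the governing rule is known exactly.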

William Paley’s watchmaker argument for intelligent design invokes an obvious difference between a rock, found on the ground during a walk in the forest, and a pocket watch found lying beside it. However, modern technology blurs this distinction: an artificial rock could theoretically be assembled through nanotechnology—or more conventionally, as with man-made diamonds. Machines can now be so complex and sophisticated that they appear natural, even organic. We can no longer rely on ordinary cognition to conclusively judge the difference between nature and artifice, especially when there is an intention to obscure the difference—as with generative AI and chatbots. Moreover, the distinction is only meaningful because we already have a category of ‘made’ (or ‘fake’) to contrast with ‘found’ (or ‘genuine’). Such categories depend on conscious human agency. Absent a GTT, a fake need only be good enough to fool our natural cognition.

Suppose we happened to live in an entirely artificial world—for example, a virtual reality, as some people imagine is possible. If so, everything encountered during the stroll through the virtual forest would seem “natural.” (That would be the sole category of existence until something is “made” in the virtual world by someone in that world fashioning it from the “natural” ingredients available there.) We may add to this concern the idea that “reality” is not an ontologically fixed concept or category. The “realness” with which our normal experience of the external world is imbued serves an evolutionary function for biological cognitive agents; its epistemic utility is relative to changing context. Historically, it refers to what affects us (humans) physically and what we can affect. As we co-exist ever more with artifacts—even conceptual ones—these become “real” as we interact with them, and come to seem “natural” as our new environment.

Of course, conventional ways to test for naturalness already exist. An object or substance can be analyzed chemically and structurally. (For example, gemological laboratories use spectroscopic and microscopic tests to distinguish man-made from natural diamonds.) However, such procedures would not necessarily reveal a thing’s origin, granted the possibility that any natural chemistry or structure can be simulated to a finer degree than the resolving capabilities of the test procedure. While certain patterns (e.g., tree rings and other growth patterns) do characterize natural things, these too can be imitated. Though idealization, perfect symmetry, and over-simplification do characterize man-made things, a simulation could intentionally avoid such telltale regularities well enough to fool even a vigilant observer. Pseudo-randomness can be deliberately introduced to imitate naturalness. (The challenge then becomes to distinguish real from pseudo-randomness.)
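
To illustrate why that last challenge is hard (a toy sketch in Python, not drawn from the original; the sequences and thresholds are my own), a simple Wald–Wolfowitz runs test readily flags a crudely regular sequence as artificial, yet a decent pseudo-random generator passes it comfortably. Tests of this kind can expose bad fakes, but passing them does not certify genuine randomness.

```python
import math
import random

def runs_z(bits):
    """Wald-Wolfowitz runs test: z-score for the number of runs in a 0/1 sequence.
    A |z| far beyond ~3 suggests the sequence is too regular (or too clumpy) to be random."""
    n = len(bits)
    n1 = sum(bits)
    n0 = n - n1
    runs = 1 + sum(1 for a, b in zip(bits, bits[1:]) if a != b)
    mu = 2 * n1 * n0 / n + 1
    var = 2 * n1 * n0 * (2 * n1 * n0 - n) / (n * n * (n - 1))
    return (runs - mu) / math.sqrt(var)

n = 100_000
pseudo = [random.getrandbits(1) for _ in range(n)]   # a decent pseudo-random generator
patterned = [(i // 3) % 2 for i in range(n)]         # a crude, obviously "made" pattern

print("pseudo-random z:", round(runs_z(pseudo), 1))     # near zero: looks random
print("patterned     z:", round(runs_z(patterned), 1))  # huge magnitude: clearly artificial
```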

At least on the macro scale, natural things have individual identity as variations from a type. Manufactured items are intended to be identical, but minor imperfections may distinguish them. Yet, even such telltale marks can be simulated. An object might be deemed natural because it is older than any plausible agent that could have made it; or found in some location where there could not have been any previous agents. This does not strictly disprove agency, however, since absence of evidence is not evidence of absence. Robots or bioengineered organisms might display preprogrammed or abnormal behavior that seems incompatible with evolutionary adaptation. But this is relative to earthbound human expectations, which might not apply in alien environments. It also raises the question of whether “evolutionary adaptation” must be natural and how well it could be simulated.

Apart from specific conventional tests and their limitations, an absolute GTT would ideally determine, in an unrestricted way, whether any given thing or experience is natural or artificial. But is that feasible? If (a) all the properties and relationships of a given item can be listed, then it should count as an artifact. Similarly, if (b) it can be shown that not all the properties and relationships of an item can be listed, or that the list is infinite, then the item is by definition natural. If (c) a new property or relationship is found outside the list given in a model, then the item does not correspond to that model, yet could still correspond to some more complete model, augmented by at least the new property. But, just as it cannot be proven that all crows are black, it cannot be proven that all properties have been listed. So (a) is no help. In regard to (b), while it can be shown that not all properties have been listed (such as by finding a new property), this does not prove that the list could never be complete—that no further properties can be found. Finding such a new property, as in (c), does not establish that the item in question is natural, nor does failure to find it establish that the item is artificial. Hence, an absolute GTT does not seem feasible. There is still the option of relative GTTs, whose power of discrimination need only be superior to that of humans and superior to the power of the simulation to deceive.
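
Stated schematically (the notation here is mine, not the author’s), the difficulty is the familiar asymmetry between universal and existential claims:

\[
\text{Artifact}(x) \iff \exists L\,\big[\,\mathrm{finite}(L) \,\wedge\, \forall p\,\big(P(x,p) \rightarrow p \in L\big)\big],
\qquad
\text{Natural}(x) \iff \neg\,\text{Artifact}(x),
\]

where \(P(x,p)\) means that \(p\) is a property or relationship of \(x\). Observation can refute a particular list \(L\) by exhibiting some \(p \notin L\), but no finite run of observations can verify the universal clause for any \(L\), nor rule out every finite \(L\) at once; hence neither criterion can be settled empirically, which is the conclusion drawn above.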

External things can be put in various real situations that would test whether their response is unnatural or seems limited by inadequate computational resources. On the other hand, if the agent is having an experience from within what is suspected to be a simulation, the agent can look for glitches in the experience presented, as telltale errors that stand out with respect to the norm of previously known reality. Within the confines of the VR experience, however, the agent must have reliable memory of such a reality. (This poses a recursive problem, since the memory could itself be part of the virtual reality: the dilemma facing our brains all the time.) Similarly, digitization has a bottom grain (pixelation), which can be noted with reference to a known finer-grained “reality.” As above, however, there must be a perceivable or remembered experience of a contrasting reality outside the VR experience to serve as norm. In the case of the brain’s natural and normal simulation (i.e., phenomenal experience), there is nothing outside it to serve as a norm for comparison. Digitization and discontinuity within the nervous system are normally ignored or glossed over when functionally irrelevant, as manifested in the visual blind spot and other forms of perceptual adaptation and “filling in.” Thus, normal perception is transparent. It does not normally occur to us that we are living in the brain’s simulation.