Origins of the sacred

Humanity and religion seem coeval. From the point of view of the religious mind, this hardly requires explanation. But from a modern scientific or secular view, religion appears to be an embarrassing remnant. There must be a reason why religion has played such a central and persistent role in human affairs. If not a matter of genes or evolutionary strategy, it must have a psychological cause deeply rooted in our very nature. Is there a core experience that sheds light on the phenomenon of religion?

The uncanny is one response to unexpected and uncontrolled experience. It is not solely the unpredictable external world that confounds the mind; the mind can also produce from within its own depths terrifying, weird, or at least unsettling experiences outside the conscious ego’s comfort zone. One can suffer the troubling realization that the range of possible experience is hardly guaranteed to remain within the bounds of the familiar, and that the conscious mind’s strategies are insufficient to keep it there. The ego’s grasp of this vulnerability, to internal as well as external disturbance, may be the ground from which arises the experience of the numinous, and hence the origin of the notion of the sacred or holy. Essentially it is the realization that there will always be something beyond comprehension, which perhaps underlies the familiar like the hidden bulk of an iceberg.

To actually experience the numinous or “wholly other” seems paradoxical to the modern mind, given that all experience is considered a mediated product of the biological nervous system. For, the noumenon is that which, by Kant’s definition, cannot be experienced at all. Its utter inaccessibility has never been adequately rationalized, perhaps because our fundamental epistemic situation precludes knowing the world-in-itself in the way that we know our sensory experience. Kant acknowledged this situation by clearly distinguishing phenomenal experience from the inherent reality of things-in-themselves—a realm off-limits to our cognition by definition. He gave a name to that transcendent realm, choosing to catalogue it as a theoretical construct rather than to worship it. Yet, reason is a latecomer, just as the cortex is an evolutionary addition to older parts of the brain. We feel things before we understand them. Rudolf Otto called this felt inaccessibility of the inherent reality of things its ‘absolute unapproachability’. He deemed it the foundation of all religious experience. Given that we are crucially dependent on the natural environment, and are also psychologically at the mercy of our own imaginings, I call it holy terror.

In addition to being a property of things themselves, realness is a quality with which the mind imbues certain experiences. Numinosity may be considered in the same light. The perceived realness of things refers to their existence outside of our minds; but it is also how we experience our natural dependency on them. Real things command a certain stance of respect, for the benefit or the harm they can bring. Perhaps perceived sacredness or holiness instills a similar attitude in regard to the unknown. In both cases, the experienced quality amounts to judgment by the organism. Those things are cognitively judged real that can affect the organism for better or worse, and which it might affect in turn. Things judged sacred might play a similar role, in regard not to the body but to the self as a presumed spiritual entity.

The quality of sacredness is not merely the judgment that something is to be revered; nor is holiness merely the judgment that something or someone is unconditionally good. These are socially-based assessments secondary to a more fundamental aspect of the numinous as something judged to be uncanny, weird, otherworldly, confounding, entirely outside ordinary human experience. The uncanny is at once real and unreal. The sacred commands awe in the way that the real compels a certain involuntary respect. Yet, numinous experiences do more than elicit awe. They also suggest a realm entirely beyond what one otherwise considers real. Paradoxically, this implies that we do not normally know reality as it really is.

Indeed, as Kant showed, we cannot know the world as it is “in itself,” apart from the limited mediating processes of our own consciousness. All experience is thus potentially uncanny; the very fact that we consciously experience anything at all is an utter mystery! We can never know with certainty what to make of experience or our own presence as experiencers. It is only through the mind’s chronically inadequate efforts to make sense that anything can ever appear ordinary or profane. Mystery does not just present a puzzle that we might hope to resolve with further experience and thought. Sometimes it is a tangible revelation of utter incomprehensibility, which throws us back to a place of abject dependency.

We are self-conscious beings endowed with imagination and the tendency to imbue our imaginings with realness. We have developed the concept of personhood, as a state distinct from the mere existence of objects or impersonal forces. We seem compelled in general to imagine an objective reality underlying experience. A numinous experience is thus reified as a spiritual force or reality, which may be personified as a “god.” When the relationship of dependence—on a reality beyond one’s ken and control—is thus personified, it aligns with the young child’s experience of early dependence on parents, who must seem all powerful and (ideally) benevolent. Hence, the early human experience of nature as the Great Mother—and later, as God the Father. In the modern view, these family figures reveal the human psyche attempting to come to terms with its dependent status.

But nature is hardly benevolent in the consistent way humans would like their parents to be. Psychoanalysis of early childhood reveals that even the mother is perceived as ambivalent, sometimes depriving and threatening as well as nourishing. The patriarchal god projects the male ego’s attempt to trump the intimidating raw power of nature (read: the mother) by defining a “spiritual” (read: masculine) world both apart from it and somehow above it. The Semitic male God becomes the creator of all. He embodies the ideal father, at once severe and benevolent. But he also embodies the heroic quest to self-define and to re-create the world to human taste. In other words, the human aspiration to become as the gods.

On the one hand, this ideal projects onto an invisible realm the aspiration to achieve the moral perfection of a benevolent provider, and reflects how one would wish others (and nature) to behave. It demands self-mastery, power over oneself. The path of submission to a higher power acknowledges one’s abject dependence in the scheme of things, to resist which is “sin” by definition. On the other hand, it represents the quest for power over the other: to turn the tables on nature, uncertainty, and the gods—to be the ultimate authority that determines the scheme of things.

One first worships what one intends to master. Worship is not abject submission, but a strategy to dominate. Religion demonstrates the human ability to idealize, capture, and domesticate the unknown in thought. It feigns submission to the gods, even while its alter ego—science—covets and acquires their powers. Thus, the religious quest to mitigate the inaccessibility and wrath of God, which lurks behind the inscrutability of nature, is taken over by the scientific quest for order and control. The goal is to master the natural world by re-creating it, to become omniscient and omnipotent.

Relations of domination and submission play out obviously in human history. A divinely authorized social relationship is classically embodied in two kinds of players: kings and peasants. Yet, history also mixes these and blurs boundaries. Like some entropic process, the quest for empowerment is dispersed, so that it becomes a universal goal no longer projected upon the gods or reserved to kings. We see this “democratization” in the modern expectation of social progress through science and global management. While enjoying the benefits of technology, deeply religious people may not share this optimism, convinced instead that power rests forever in the inscrutable hands of God. Those who imagine a judgmental, vindictive, and jealous male god have the most reason to be doubtful of human progress, while those who identify with the transcendent aspect of religion are more likely to feel themselves above specific outcomes in the historical fray.

The ability of mind to self-transcend is a double-edged sword. It is the ability to conceive something beyond any proposed limit or system. This enables a dizzying intimation of the numinous; more importantly, it enables the human being to step beyond mental confines, including ideas and fears about the nature of reality and what lies beyond. On the one hand, we know that we know little for certain. To fully grasp that inspires the goosebumps of holy terror. One defensive response is to pretend that some text, creed, or dogma provides an ultimate assurance; yet we know in our bones that this is wishful thinking. The experience of awe may incline one to bow down before the Great Mystery. On the other hand, we are capable of knowledge such as it can be, for which we (not the gods) are responsible. We are cursed and blessed with at least a measure of choice over how to relate to the unknown.

Uncommon sense

Common sense is a vague notion. Roughly it means what would be acceptable to most people. Yet how can there be such a thing as common sense in a divided world? And how can a common understanding of the world be achieved in the face of information that is doubly overwhelming—too much to process and also unreliable?

In half a century, we have gone from a dearth of information crucial for an informed electorate, to a flood of information that people ironically cannot use, do not trust, and are prone to misuse. We now rely less (and with more circumspection) on important traditional appraisers of information, such as librarians, teachers, academics and peer-reviewed journals, textbook writers, critics, censors, journalists and newscasters, civil and religious authorities, etc. The Internet, of course, is largely responsible for this change. On the one hand, it has democratized access to information; on the other, it has shifted the burden of interpreting information—from those trained for it onto the unprepared public, which now has little more than common sense to rely upon to decide what sources or information to trust.

Which brings us to a Catch-22: how to use common sense to evaluate information when the formation of common sense depends on a flow of reliable information? How does one get common sense? It was formerly the role of education to decide what information was worthy of transmission to the next generation, and to impart the wisdom of how to use it. (Also, at a time when less specialized expertise existed, people had a wider general experience and competence of their own to draw upon.) Now there is instant access to a plethora of influences besides the voices of official educators and recognized experts. The nature of education itself is up for grabs in a rapidly changing present and unpredictable future. Perhaps education should now aim at preparation for change, if such is not an oxymoron. That sort of education would mean learning not facts or skills that might soon become obsolete, but meta-skills: how to adapt and how to use information resources. In large part, that would mean learning how to interpret and reconcile diverse claims.

One such skill is “reason,” meaning the ability to think logically. If we cannot trust the information we are supposed to think about, at least we could trust our ability to think. If we cannot verify the facts presented, at least we can verify that the arguments do not contradict themselves. Training in critical thinking, logic, research protocols, data analysis, and philosophical critique is appropriate preparation for citizenship, if not for jobs. This would give people the socially useful skill to evaluate for themselves information that consists inevitably of the claims of others rather than “facts” naively presumed to be objective. Perhaps that is as close as we can come to common sense in these times.

Since everything is potentially connected to everything else, even academic study is about making connections as well as distinctions. The trouble with academia partly concerns the imbalance between analysis (literally taking things apart) and synthesis (putting back together a whole picture). Intellectual pursuit has come to overemphasize analysis, differentiation, and hair-splitting detail, often to the detriment of the bigger picture. Consequently, knowledge and study become ever more specialized and technical, with generalists reduced to another specialty. The result is an ethos of bickering, which serves to differentiate scholars within a niche more than to sift ideas ultimately for the sake of a greater synthesis. This does not serve as a model of common sense for society at large.

Technocratic language makes distinctions in the name of precision, but obstructs a unifying understanding that could be the basis for common sense. Much technical literature is couched in language that is simply inaccessible to lay people. Often it is spiced with gratuitous equations, graphs, and diagrams, as though sheer quantification or graphic summaries of data automatically guarantee clarity or plausibility, let alone truth. Sometimes the arguments are opaque even to experts outside that field. Formalized language and axiomatic method are supposed to structure thought rigorously, to facilitate deriving new knowledge deductively. Counter-productively, a presentation that serves ostensibly to clarify, support, and expand on a premise often seems to obfuscate even the thinking of those presenting it. How can the public assimilate such information, which deliberately misses the forest for the trees? How can we have confidence in complex argumentation that pulls the wool over the eyes even of its proponents?

Academic writing must meet formal requirements set by the editors of journals. There are motions to go through which have little to do with truth. Within such a framework, literary merit and even skill at communication are not required. Awkward complex sentences fulfill the minimal requirements of syntax. While this is frustrating for outsiders, such formalism permits insiders to identify themselves as members of an elite club. The danger is inbreeding within a self-contained realm. When talking to their peers, academics may feel little need to address the greater world.

For the preservation of common sense, an important lay skill might be the ability to translate academese, like legal jargon, into plain language. One must also learn to skim through useless or misleading detail to get to the essential points. Much popular non-fiction, like some academic books, ironically has few main ideas (and sometimes only one), amply fluffed out to book length with gratuitous anecdotes to appeal to a wider audience. Learning to recognize the essential and sort the wheat from the chaff now seems like a basic survival skill even outside academia.

Perhaps as a civilization we have simply become too smart for our own good. There is now such a profusion of knowledge that not even the smartest individuals, with the time to read, can keep up with it. Somehow the information bureaucracy works to expand technological production. But does it work to produce wisdom that can direct the use of technology? The means exist for global information sharing and coordination, but is there the political will to do the things we know are required for human thriving?

Part of the frustration of modern times is the sense of being overwhelmed yet powerless. We may suffer in having knowledge without the power to act effectively, as though we had heightened sensation despite physical paralysis. Suffering is a corollary of potential action and control. Suffering can occur only in a central nervous system, which serves both to inform the creature of its situation and to provide some way to do something about it. Sensory input is paired with motor output; perception is paired with response.

Cells do not suffer, though they may respond and adapt (or die). It is the creature as a whole that suffers when it cannot respond effectively. If society is considered an organism, individual “cells” may receive information appropriate at the creature level yet be unable to respond to it at that level. Perhaps that is the tragedy of the democratic world, where citizens are expected to be informed and participate (at least through the vote) in the affairs of society at large—and to share its concerns—but are able to act only at the cellular level. To some extent, citizens have available the same information as their leaders, who are often experts only in the art of staying in power. Citizens may even have a better idea of what to do, but are not positioned to do it.

Listening to the news is a blessing when it informs you concerning something you can plausibly do. Even then, one must be able to recognize what is actual news and distill it from editorial, ideology, agenda, and hype. Otherwise it is just another source of anxiety, building a pressure with no sensible release. To know what to do, one must also know that it is truly one’s own idea and free decision and not a result of manipulation by others. That should be the role of common sense: to enable one to act responsibly in an environment of uncertainty.

Unfortunately, human beings tend to abhor uncertainty—a dangerous predicament in the absence of reliable information and common sense. The temptation is to latch onto false certainties to avoid the sheer discomfort of not knowing. These can serve as pseudo-issues, whose artificial simplicity functions to distract attention from problems of overwhelming complexity. Pseudo-issues tend to polarize opinion and divide people into strongly emotional camps, whose contentiousness further distracts attention from the true urgencies and the cooperative spirit required to deal with them. While common sense may be sadly uncommon, it remains our best hope.

Life and work in the paradise of machines

What would we do if we didn’t have to do anything? What would a world be like where nearly all work is done by machines? If machines did all the production, humans would have to find some other way to occupy their time. They would also have to find some other way to justify the cost to society for their upkeep and their right to exist. In the current reality, one’s income is roughly tied to one’s output—though hardly in an equitable way. Investors and upper management are typically rewarded grossly more than employees for their efforts. Yet their needs as organisms are no greater. In a world where all production and most services would be done by machines, human labour would no longer be the basis for either the production or the distribution of wealth. Society would have to find some other arrangement.

In that situation, a basic income could be an unconditional human right. When automation meets all survival needs, food, housing, education and health care could be guaranteed. All goods and services necessary for living a satisfying life would be a birthright, so that no one would be obliged to work in order to live. Time and effort would be discretionary and uncoupled from survival. What to do with one’s time would not be driven by economic need but by creative vision. Thus, the challenge to achieve freedom from toil cannot be separated from the problem of how to distribute wealth, which we already face. Nor can it be separated from the question of what to do with free time, which in turn cannot be separated from how we view the purpose of life.

As biological creatures, we are beholden to natural laws and biological necessities. We need food, shelter and clothing and must act to provide for these needs. A minimal definition of work is what must be done to sustain life. The hand-to-mouth subsistence of pre-industrial societies involved a relatively direct relationship between personal effort and survival. Industrial society organizes production by divisions of labour, providing an alternative concept of work with a less direct relationship. Production involves the cooperation of many people, among whom the resulting wealth must somehow be divided up. Work takes on a different meaning as the justification for one’s slice of the economic pie. It is less about production, per se, than the rationale for consumption: symbolic dues paid to merit one’s keep and secure one’s place on the planet.

Early predictions that machines would create massive unemployment have not materialized. Nor have predictions that people would work far less because of automation. Instead, new forms of employment have replaced the older ones now automated, with people typically working longer hours. Whether or not these forms of work really add to the general wealth and welfare, they serve to justify the incomes of new types of workers. As society adjusts to automation, wealth is redistributed accordingly, though not equitably. Work is redefined but not reduced. In the present economy, those who own the means of production benefit most and control society, in contrast to those who perform labour. When machines are at once the means of production and the labour, how will ownership be distributed? What would be the relationship between, for example, the 99% of people unemployed and the 1% who own the machines?

With advances in AI, newly automated tasks continue to encroach on human employment. In principle, any conceivable activity can be automated; and any role in the economy can be taken over by machines—even war, government, and the management of society. We are talking, of course, about superintelligent machines that are better than humans at most, if not all, tasks. But better how, according to which values? If we entrust machines to implement human goals efficiently, why not entrust them to set the goals as well? Why not let them decide what is best for us and sit back to let them provide it? On the one hand, that seems like a timeless dream come true, freedom from drudgery at last. Because physical labour is tiring and wears on the body, we may at least prefer mental to physical activity. The trend has been to become more sedentary, as machines take over grunt work and as forms of work evolve that are less physical and more mental. White-collar work is preferred to blue-collar or no-collar, and rewarded accordingly. Yet work is still tied to survival.

Humans have always struggled against the limitations of the body, the dictates of biology and physics, the restrictions imposed by nature. In particular, that has meant seeking freedom from the work required to maintain life. In Christian culture, work was a punishment for original sin: the physical pain attending the sweat of the brow and the labour of childbirth alike. Work has had a redeeming quality, as an expiation or spiritual cleansing. The goal of our rebellion against the natural condition is a return to paradise, freedom again from painful labour or any effort deemed unpleasant. Our very idea of progress implies the increase of leisure, if not immediately then in the long term: work now for a better future. This has guided the sort of work that people undertake, resulting in the achievements of technology, including artificial intelligence. Humans first eased their toil by forcing it upon animals and other humans they enslaved. Machines now solve that moral dilemma by performing tasks we find burdensome. So far at least, they do not tire, or suffer, or rebel against their slavery.

On the other hand, humans have also always been creative and playful, pursuing activity outside the mandate of Freud’s reality principle and the logic of delayed gratification. We find direct satisfaction in accomplishment of any sort. We deliberately strain the body in exercise and sport, climb mountains for recreation, push our physical limits. We seek freedom from necessity, not from all activity or effort. We covet the leisure to do as we please, what we freely decide upon. In an ideal world, then, work is redefined as some form of play or gratuitous activity, liberated from economic necessity. There have always existed non-utilitarian forms of production, such as music, art, dance, hobbies, and much of academic study. Though not directly related to survival, these have always managed to find an economic justification. When machines supply our basic needs, everyone could have the time for pursuits that are neither utilitarian nor economic.

Ironically, some people now express their creativity by trying to automate creativity itself: writing programs to do art, compose music, play games, etc. No doubt there are already robots that can dance. While AI tools and “expert” programs assist scientists with data analysis, so far there are no artificial scientists or business magnates. Yet, probably anything that humans do machines will eventually do at least as well. The advance of AI seems inevitable in part because some people are determined to duplicate every natural human function artificially through technology. There is an economic incentive, to be sure, yet there is also a drive to push AI to ever further heights purely for the creative challenge and the accomplishment. Because this drive often goes unrecognized even by those involved, it is especially crucial to harness it to an ideal social vision if humanity is to have a meaningful future. Where is the reasonable limit to what should be automated? If the human goal is not simply relief from drudgery, but that machines should ultimately do everything for us, does that not imply that we consider all activity onerous? What, then, would be the point of our existence? Are we here just to consume experience, or are we not by nature doers as well?

Some visionaries think that machines should displace human beings, who have outlived their role at the top of an evolutionary ladder. They view the human form as a catalyst for machine intelligence. However, that post-humanist dream is quintessentially a humanist ideal, invoking transcendence of biological limits. It is a future envisioned not by machines or cyborgs but by conventional human beings alive today. To fulfill it, AI would have to embody current human nature and values in many ways—not least by being conscious. Essentially, we are looking to AI for perfection of ourselves—to become or give birth to the gods we have idolized. But AI could only be conscious if it is effectively an artificial organism, vulnerable and limited in some of the ways we are, even if not in all. To create insentient superintelligence merely for its own sake (rather than its usefulness to us) makes no human sense. Art for art’s sake may make sense, but not automation for automation’s sake. Nor can the goal be to render us inactive, relieved even of creative effort. We must come to understand clearly what we expect from machines—and what we desire for ourselves.

On intentionality

Intentionality is an elusive concept that fundamentally means reference of something to something else. Reference, however, is not a property, state, or relationship inhering in things or symbols, nor between them; it is rather an action performed by an agent, who should be specified. It is an operation of relating or mapping one thing or domain to another. These domains may differ in their character (again, as defined by some agent). A picture, for example, might be a representation of a real landscape, in the domain of painted images. As such it refers to the landscape, and it is the painter who does the referring. Similarly, a word or sentence might represent a person’s thought, perception, or intention. The relevant agents, the domains, and the nature of the mappings must all be specified before intentionality can be properly characterized.

In these terms, the rings of a tree, for example, may seem to track or indicate the age of the tree or periods favorable to growth. Yet, it is the external observer, not the tree, who establishes this connection and who makes the reference. Connections made by the tree itself (if such exist) are of a different sort. In all likelihood, the tree rings involve causal but not intentional connections.

A botanist might note connections she considers salient and may conclude that they are causal. Thus, changing environmental conditions can be deemed a cause of tree ring growth. In contrast, it would stretch the imagination to suppose that the tree intended to put on growth in response to favorable conditions. Or that God (or Nature) intended to produce the tree ring pattern in response to weather conditions. These suppositions would project human intentionality where it doesn’t belong. Equally, it would be far-fetched to think that the tree deliberately created the rings in order to store in itself a record of those environmental changes, either for its own future use or for the benefit of human observers. The tree is simply not the kind of system that can do that. The intentionality we are dealing with is rather that of the observer. On the other hand, there are systems besides human beings that can do the kind of things we mean by referring, intending, and representing. In the case of such systems, it is paramount to distinguish clearly the intentionality of the system itself from that of the observer. This issue arises frequently in artificial intelligence, where the intentionality of the programmer is supposed to transfer to the automated system.

The traditional understanding of intentionality generally fails to make this distinction, largely because it is tied to human language usage. “Reference” is taken for granted to mean linguistic reference or something modeled on it. Intentionality is thus often considered inherently propositional even though, as far as we know, only people formulate propositions. If we wish to indulge a more abstract notion of ‘proposition’, we must concede that in some sense the system makes assertions itself, for its own reasons and not those of the observer. If ‘proposition’ is to be liberated from human statements and reasoning, the intention behind it must be conceived in an abstract sense, as a connection or mapping (in the mathematical sense) made by an agent for its own purposes.

Human observers make assertions of causality according to human intentions, whereas intentional systems in general make their own internal (and non-verbal) connections, for their own reasons, regardless of whatever causal processes a human observer happens to note. Accordingly, an ‘intentional system’ is not merely one to which a human observer imputes her own intentionality as an explanatory convenience (as in Dennett’s “intentional stance”). Such a definition excludes systems from having their own intentionality, which reflects the longstanding mechanist bias of western science since its inception: that matter inherently lacks the power of agency we attribute to ourselves, and can only passively suffer the transmission of efficient causes.

An upshot of all this is that the project to explain consciousness scientifically requires careful distinctions that are often glossed over. One must distinguish the observer’s speculations about causal relations—between brain states and environment—from speculations about the brain’s tracking or representational activities, which are intentional in the sense used here. The observer may propose either causal or intentional connections, or both, occurring between a brain (or organism) and the world. But, in both cases, these are assertions made by the observer, rather than by the brain (organism) in question. The observer is at liberty to propose specific connections that she believes the brain (organism) makes, in order to try to understand the latter’s intentionality. That is, she may attempt to model brain processes from the organism’s own point of view, attempting as it were to “walk in the shoes of the brain.” Yet, such speculations are necessarily in the domain of the observer’s consciousness and intentionality. In trying to understand how the brain produces phenomenality (the “hard problem of consciousness”), one must be clear about which agent is involved and which point of view.

In general, one must distinguish phenomenal experience itself from propositions (facts) asserted about it. I am the witness (subject, or experiencer) directly to my own experience, about which I may also have thoughts in the form of propositions I could assert regarding the content of the experience. These could be proposed as facts about the world or as facts about the experiencing itself. Along with other observers, I may speculate that my brain, or some part of it, is the agent that creates and presents my phenomenal experience to “me.” Other people might also have thoughts (assert propositions) about my experience as they imagine it; they may also observe my behavior and propose facts about it they associate with what they imagine my experience to be. All these possibilities involve the intentionality of different agents in differing contexts.

One might think that intentionality necessarily involves propositions or something like them. This is effectively the basis on which an intentional analysis of brain processes inevitably proceeds, since it is a third-person description in the domain of scientific language. This is least problematic when dealing with human cognition, since humans are language users who normally translate their thoughts and perceptions into verbal statements. It is more problematic when dealing with other creatures. However, in all cases such propositions are in fact put forward by the observer rather than by the system observed. (Unless, of course, these happen to be the same individual; but even then, there are two distinct roles.)

The observer can do no better than to theoretically propose operations of the system in question, formulated in ordinary or some symbolic language. The theorist puts herself in the place of the system to try to fathom its strategies—what she would do, given what she conceives as its aims. This hardly implies that the system in question (the brain) “thinks” in human-language sentences (let alone equations) any more than a computer does. But, with these caveats, we can say that it is a reasonable strategy to translate the putative operations of a cognitive system into propositions constructed by the observer.

In the perspective presented here, phenomenality is grounded in intentionality, rather than the other way around. This does not preclude that intentionality can be about representations themselves or phenomenal experience per se (rather than about the world), since the phenomenal content as such can be the object of attention. The point to bear in mind is that two domains of description are involved, which should not be conflated. Speculation about a system’s intentionality is an observer’s third-person description; whereas a direct expression of experience is a first-person description by the subject. This is so, even when subject and observer happen to be the same person. It is nonsense to talk of phenomenality (qualia) as though it were a public domain like the physical world, to which multiple subjects can have access. It is the external world that offers common access. We are free to imagine the experience of agents similar to ourselves. But there is no verifiable common inner world.

All mental activity, conscious or unconscious, is necessarily intentional, insofar as the connections involved are made by the organism for its own purposes. (They may simultaneously be causal, as proposed by an observer.) But not all intentional systems are conscious. Phenomenal states are thus a subset of intentional states. All experience depends on intentional connections (for example, between neurons); but not all intentional connections result in conscious experience.

Sentience and selfhood

‘Consciousness’ is a vague term in the English language. Its counterparts in other languages, where they exist, often carry several meanings. To be conscious can be either transitive or intransitive; it can mean simply to be aware of something—to have an experience—or it can mean a state opposed to sleep, coma, or inattention. While consciousness clearly involves the role of the subjective self, one is not necessarily aware of that role in the moment. That is, one can be conscious though not self-conscious. The latter notion also is ambiguous: in everyday talk, self-consciousness refers to a potentially embarrassing awareness of one’s relationship to others, perhaps social strategizing. Here, it will mean something more technical: simply the momentary awareness of one’s own existence as a conscious subject.

It might be assumed that to be conscious is to be self-conscious, since the two are closely bound up for human beings. I propose rather to make a distinction between sentience (simply having experience) and the awareness of having that experience. The first involves no more than the naïve appearance of an external world as well as internal sensations—what Kant called phenomena and more recent philosophers call “contents of consciousness” or qualia. No concept of self enters into sentience as such. The second involves, additionally, the awareness of self and of the act or fact of experiencing. One should thus be able to imagine, at least, that other creatures can be sentient—even if they do not seem aware of their individual existence in our human way, and regardless of whether one can imagine just what it is like to be them.

Language complicates the issue. For, we can scarcely speak or think of sentience (or awareness, consciousness, experience, etc.) in general without reference to our familiar human sentience. We are thereby reminded of our own existence—indeed, of our presence in the moment of speaking or thinking about it. Nevertheless, it is as possible to be caught up in thought as to be caught up in sensation. (We all daydream, for example, only “awakening” when we realize that is what we have been doing.) Then the object is the focus rather than its subject. This outward focus is, in fact, the default state. Often, we are simply aware of the world around us, or of some thought in regard to it; we are not aware of being aware. Perhaps it is the fluidity of this boundary—between the state of self-awareness and simple awareness of the contents of experience—which gives the impression that sentience necessarily involves self-awareness. After all, as soon as we notice ourselves being sentient, we are self-aware. It is illogical, however, to conclude that creatures without the capability of self-awareness are not sentient. Language plays tricks with labels. At one time, animals were considered mere insensate machines—incapable of feeling, let alone thought, because these properties could belong only to the human soul.

One might even suppose that self-consciousness is a function of language, since the act of speaking to others directly entails and reflects one’s own existence in a way that merely perceiving the world or one’s sensations does not. Yet, it hardly follows that either sentience or self-consciousness is limited to language users. The problem, again, is that we are ill-equipped to imagine any form of experience other than our own, which we are used to expressing in words, both to others and to ourselves.

This raises the question of the nature and function of self-consciousness, if it is not simply a by-product of the highly evolved communication of a social species. The question is complicated by the fact that identifiable tags of self-consciousness (such as recognizing one’s image in a reflection) seem to be restricted to intelligent creatures with large brains—such as chimpanzees, cetaceans, and elephants—all of which are also social creatures. On the other hand, social insects communicate, but we do not thereby suppose that they are conscious as individuals. To attribute a collective consciousness to the hive or colony extends the meaning of the term beyond the subjective sense we are considering here. It becomes a description of emergent behavior, observed, rather than individual experience perceived. In some sense, consciousness emerges in the brain; but few today would claim that individual neurons are “conscious” because the brain (or rather the whole organism) is conscious.

Closely related to the distinction between simple awareness and self-awareness is the distinction between object and subject, and the corresponding use of person in language. We describe events around us in the third person, as though their appearance is simply objective fact, having nothing to do with the perceiver. For the most part, for us the world simply is. Though self-conscious in theory, in practice we default to naïve realism. With good (evolutionary) reason, the object dominates our attention. Yet, self-awareness, too, is functional for us as highly social creatures. We get along in part through the ability to imagine the subjective experience of others, which means first recognizing our own subjectivity. The very fact that we conceive of sentience at all is only possible because of this realization. The subject (self) emerges in our awareness as an afterthought with profound implications. As in the biblical Fall, our eyes are opened to our existence as perceiving agents, and we are cast from the state of unselfconscious being.

The modern understanding of consciousness (i.e., awareness of the world as distinct from the world itself) is that the object’s appearance is constructed by the subject. Our daily experience is a virtual reality produced in the brain, an internal map constantly updated from external input. This realization entails metaphysical questions, such as the relationship between that virtual inner show and the reality that exists “out there.” But that is also a practical question. We need an internal account of external reality that is adequate for survival, independent of how “true” it might or might not be. Self-consciousness is functional in that way too. It serves us to know that we co-create a model of external reality, and that the map is not the territory itself, but something we create as a useful guide to navigate it. Knowing the map as a symbolic representation rather than objective fact means we are free to revise it according to changing need. The moment or act of self-consciousness awakens us from the realist trance. One is no longer transfixed by experience taken at face value. Suddenly we are no longer looking at the world but at our own looking.

This capacity to “wake up” serves both the individual and society. It enables the person or group to stand back from an entrapping mindset, or viewpoint, to question it, which opens the possibility of a broader perspective. Literally, this means a bigger picture, encompassing more of reality, which is potentially more adequate for survival both individually and collectively. Knowledge is empowering; yet it is also a trap when it seems to form a definitive account. The map is then mistaken for the territory and we fall again into trance. So, there is a dialectical relationship between knowing and questioning, between certainty and uncertainty. The ability to break out of a particular viewpoint or framework establishes a new ground for an expanded framework; but that can only ever be provisional, for the new ground must eventually give way again to a yet larger view—ad infinitum. That, of course, is challenging for a finite creature. We are obliged to trust the knowledge we have at a given time, while aware that it may not be adequate. That double awareness is fraught with anxiety. The psychological tendency is to take refuge in what we take to be certain, ignoring the likelihood that it is illusory.

Sentience arose in organisms as a guide to survival, an internal model of the world. Self-consciousness arose—at least in humans—as a further survival tool, the ability to transcend useful appearances in favor of potentially more useful ones. It comes, however, at the price of ultimate uncertainty. One may prefer the trance to the anxiety. From a species point of view, that may be a luxury that expendable individuals can afford, which the planetary collective cannot. Individuals and even nations can stand or fall by their mere beliefs, through some version of natural selection. But what inter-galactic council will be there to give the Darwin Award to a failed human species?

The equation of experience

I cringe when I hear people speak casually of their reality, since I think what they mean is their personal experience and not the reality we live in together. Speaking about “realities” in the plural is more than an innocent trope. It is often a way to justify belief or opinion, as though private experience is all that matters because there is no objective reality to arbitrate between perspectives, or because the task of approaching it seems hopeless. But clearly there is an objective reality of nature, even if people cannot agree upon it, and what we believe certainly does matter to our survival. So, it seems important to express the relationship between experience and reality in some clear and concise way.

The “equation of experience” is my handy name for the idea that everything a person can possibly experience or do—indeed all mental activity and associated behavior—is a function of self and world conjointly. Nothing is ever purely subjective or purely objective. There is always a contribution to experience, thought, and behavior from within oneself, and likewise a contribution from the world outside. On the analogy of a mathematical function, this principle reads E = f(s,w). The relative influence of these factors may vary, of course. Sensory perception obviously involves a strong contribution from the external world; nevertheless, the organization of the nervous system determines how sensory input is processed and interpreted, resulting in how it is experienced and acted upon. At the other extreme, the internal workings of the nervous system dominate hallucination and imagination; nevertheless, the images and feelings produced most often refer to the sort of experiences one normally has in the external world.

Of course, one should define terms. Experience here means anything that occurs in the consciousness of a cognitive agent (yet the “equation” extends to include behavior that other agents may observe, whether one is conscious of it or not). Self means the cognitive agent to whom such experience occurs—usually a human being or other sentient organism. World means the real external world that causes an input to that agent’s cognitive system.

But the “equation” can be put in a more general form, which simply expresses the input/output relations of a system. Then, O = f(i_s, i_w), where O is the output of the agent or system, i_s is the input from the system itself, and i_w is the input from the world outside the system or agent. This generalization does not distinguish between behavior and experience. Either is an “output” of a bounded system defined by input/output relations. For organisms, the boundary is the skin, which also is a major sensory surface.
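
To make the functional notation concrete, here is a minimal toy sketch in Python. It is only an illustration, assuming a simple linear weighting of the two inputs that nothing in the “equation” itself specifies; the names self_input, world_input, and world_weight are hypothetical.

    # Toy sketch of O = f(i_s, i_w): output as a weighted blend of two inputs.
    # The linear form is an illustrative assumption, not a claim about cognition.

    def output(self_input: float, world_input: float, world_weight: float) -> float:
        """Return an 'output' (experience or behavior) determined by both inputs."""
        # world_weight near 1.0 ~ sensory perception (world-dominated);
        # world_weight near 0.0 ~ imagination or hallucination (self-dominated).
        return world_weight * world_input + (1.0 - world_weight) * self_input

    perceiving = output(self_input=0.2, world_input=0.9, world_weight=0.9)
    imagining = output(self_input=0.2, world_input=0.9, world_weight=0.1)
    print(perceiving, imagining)  # both results depend on both inputs

Whatever form f actually takes, the structural point stands: no output of the system is a function of the world alone or of the self alone.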

While it seems eminently a matter of common sense that how we perceive and behave is always shaped both by our own biological nature and by the nature of the environing world, human beings have always found reasons to deny this simple truth, either pretending to an objective view independent of the subject, or else pretending that everything is subjective or “relative” and no more than a matter of personal belief.

The very ideal of objectivity or truth attempts to factor out the subjectivity of the self. Science attempts to hold the “self” variable constant, in order to explore the “world” variable. In principle, it does this by excluding what is idiosyncratic for individual observers and by imposing experimental protocols and a common mathematical language embraced by all standardized observers. Yet, this does not address cognitive biases that are collective, grounded in the common biology of the species. Science is, after all, a human-centric enterprise. To focus on one “variable” backs the other into a corner, but does not eliminate it.

Even within the scientific enterprise, there are conflicting philosophical positions. The perennial nature versus nurture debate, for example, emphasizes one factor over the other—though clearly the “equation” tells us there should be no such debate because nature and nurture together make the person! At the other extreme, politics and the media amount to a free-for-all of conflicting opinions and beliefs. Consensus is rarely attempted—which hardly means that no reality objectively exists. Sadly, “reality” is a wild card played strategically according to the subjective needs of the moment, by pointing disingenuously to select information to support a viewpoint, while an opposing group points to other select information. The goal is to appear clever and right—and to belong, within the terms of one’s group—precisely by opposing some other group, dismissing and mocking their views and motives. Appeal to reality becomes no more than a strategy of rhetoric, rather than a genuine inquiry into what is real, true, or objective.

How does such confusion arise? The basic challenge is to sort out the influence of the internal and external factors, without artificially ignoring one or the other. However, an equation in two variables cannot be solved without a second equation to provide more information—or without deliberately holding one variable constant, as in controlled experiments. The problem is that in life there is no second equation and little control. This renders all experience ambiguous and questionable. But that is a vulnerable psychological state, which we are programmed to resist. On the one hand, pretending that the “self” factor has no effect on how we perceive reality is willful ignorance. On the other hand, so is pretending that there is no objective reality or that it can be taken for granted as known. How one views oneself and how one views the world are closely related. Both are up for grabs, because they are themselves joint products of inner and outer factors together. How, then, to sort out truth?

I think the first step is to recognize the problem, which is the basic epistemic dilemma facing embodied biological beings. We are not gods but human creatures. In terms of knowing reality, this means acknowledging the subjective factor that always plays a part in all perception and thought. It means transcending the naïve realism that is our biological inheritance, which has served us well in many situations, but has its limits. We know that appearances can be deceptive and that communication often serves to deceive others. Our brains are naturally oriented outward and toward survival; we are programmed to take experience at “face value,” which is as much determined by biological or subjective need as by objective truth. We now know something of how our own biases shape how we perceive and communicate. We know something about how brains work to gain advantage rather than truth. Long ago we were advised to “Know Thyself.” There is still no better recipe for knowing others or knowing reality.

The second—and utterly crucial—step is to act in good faith, using that knowledge. That is, to intend truth or reality rather than personal advantage. To aim for objectivity, despite the stacked odds. This means being honest with oneself, trying earnestly to recognize one’s personal bias or interest for the sake of getting to a truth that others who share that aim and practice that sincerity can also recognize. Holding that intention in common allows convergence. Intending to find that common ground presumes that it should be mutually approachable by those who act in good faith. In contrast, the attitude of all against all tacitly denies the common ground of an objective reality.

No doubt convergence is easier said than done, for the very reasons here discussed—namely, our biological nature and the ambiguity inhering in all experience because of the inextricable entanglement of subject and object. With no god’s-eye view, that is the disadvantage of being a finite and limited creature, doomed to see everything through a glass darkly. But there is also an advantage in knowing this condition and the limitations it imposes. To realize the influence of the mind over experience is sobering but also empowering. We are no longer passive victims of experience but active co-creators of it, who can join with others of good will to create a better world.

Compromise is a traditional formula to overcome disagreement; yet, it presumes some grumbling forfeit by all parties for the sake of coming to a begrudged decision. In the wake of the decision, it assumes that people will nevertheless continue to differ and disagree, in the same divergent pattern. There is an alternative. While perceiving differently, we can approach agreement from different angles by earnestly intending to focus on the reality that is common to all. Then, like the blind men trying to describe the elephant in the room, each has something important to contribute to the emerging picture upon which the fate of all depends.

From taking for granted to taking charge

In our 200,000 years as a species, humankind has been able to take for granted a seemingly boundless ready-made world, friendly enough to permit survival. Some of that was luck, since there were relatively benign periods of planetary stability, and some of it involved human resourcefulness in being able to adapt or migrate in response to natural changes of conditions—even changes brought about by people themselves. Either way, our species was able to count on the sheer size of the natural environment, which seemed unlimited in relation to the human presence. (Today we recognize the dimensions of the planet, but for most of that prehistory there was not even a concept of living on a “planet.”) There was no need—and really no possibility—to imagine being responsible for the maintenance of what has turned out to be a finite and fragile closed system. Perhaps there was a local awareness among hunter-gatherers about cause and effect: to browse judiciously and not to poo in your pond. Yet the evidence abounds that early humans typically slaughtered to extinction all the great beasts. Once “civilized,” the ancients cut down great forests—and even bragged about it, as Gilgamesh pillaged the cedars of Lebanon for sport.

Taming animals and plants (and human slaves) for service required a mentality of managing resources. Yet, this too was in the context of a presumably unlimited greater world that could absorb any catastrophic failures in a regional experiment. We can scarcely know what was in the minds of people in transition to agriculture; but it is very doubtful that they could have thought of “civilization” as a grand social experiment. Even for kings, goals were short-term and local; for most people, things mostly changed slowly in ways they tried to adjust to. Actors came and went in the human drama, but the stage remained solid and dependable. Psychologically, we have inherited that assumption: human actions are still relatively local and short-sighted; the majority feel that change is just happening around them and to them. The difference between us and people 10,000 years ago (or even 500 years ago) is that we finally know better. Indeed, only in the past few decades has it dawned on us that the theatre is in shambles.

I grew up in 1950s Los Angeles, when gasoline was 20 cents the gallon, and where you might casually drive 20 miles to go out for dinner. As a child, I took that environment to be the whole world, totally “natural,” just how things should be. My job was to learn the ropes of that environment. But, of course, I had little knowledge of the rest of the planet and certainly no notion of a ‘world’ in the cultural sense. Only when I traveled to Europe as a young man did I experience something different: instead of the ephemera of L.A., an environment that was old and made of stone, in which people organized life in delightfully different ways. No doubt that cultural enlightenment would have been more extreme had I traveled in Africa instead of Europe. But it was the beginning of an awareness of alternatives. Still, I could not then imagine that cheap gas was ruining the planet. That awareness only crept upon the majority of my generation in our later years, coincident with the maturing consciousness of the species.

We’ve not had the example of another planet to visit, whose wise inhabitants have learned to manage their own numbers and effects in such a way as to keep the whole thing going. We have only imagination and the history of this planet to refer to. Yet, the conclusion is now obvious: we have outgrown the mindset of taking for granted and must embrace the mindset of taking charge if we are to survive.

What happened to finally bring about this species awakening? To sum it up: a global culture. When people were few, they were relatively isolated, the world was big, and the capacity to affect their surroundings was relatively small. Now that we are numerous and our effects highly visible, we are as though crowded together in a tippy lifeboat, where the slightest false move threatens to capsize Spaceship Earth. Through physical and digital proximity, we can no longer help being aware of the consequences of our own existence and attendant responsibility. Yet, a kind of schizophrenia sets in from the fact that our inherited mentality cannot accommodate this sudden awareness of responsibility. It is as though we hope to bring with us into the lifeboat all our bulky possessions and conveniences and all the behaviors we took for granted as presumed rights in a “normally” spacious and stable world.

We are the only species capable of deliberately doing something about its fate. But that fact is not (yet) engrained in our mentality. Of course, there are futurists and transhumanists who do think very deliberately about human destiny, and now there are think tanks like the Future of Humanity Institute. Individual authors, speakers, and activists are deeply concerned about one dire problem or another facing humanity, such as climate change, social inequity, and continuing nuclear threat, along with the brave new worlds of artificial intelligence and genetic engineering. Some of them have been able to influence public policy, even on the global scale. Most of us, however, are not directly involved in those struggles, and are only beginning to be touched directly by the issues. Like most of humanity throughout the ages, we simply live our lives, with the daily concerns that have always monopolized attention.

However, the big question now looming over all of us is: what next for humanity? It is not about predicting the future but about choosing and making it. (Prediction is just another way of bracing ourselves for what could happen, and we are well past that.) We know what will happen if we remain in the naïve mindset of all the creatures that have competed for existence in evolutionary history: Homo sapiens will inevitably go extinct, like the more than 99% of all species that have ever existed; given our accelerating lifestyle, likely sooner rather than later. Those creatures passively suffered changes they could not conceive, let alone consciously control, even when they had contributed to those changes. We are forced to the terrible realization that only our own intervention can rectify the imbalances that threaten us. Let us not underestimate the dilemma: for, we also know that “intervention” created many of those problems in the first place!

Though it is the nature of plans to go awry, humanity needs a plan and the will to follow it if we are to survive. That requires a common understanding of the problems and agreement on the solutions. Unfortunately, that has always been a weak point of our species, which has so far been unable to act on a species level, and until very recently has been unable even to conceive of itself as a unified entity with a possible will. We are stuck at the tribal level, even when the tribes are nations. More than ever we need to brainstorm toward a calm consensus and collective plan of action. Ironically, there is now the means for all to be heard. Yet, our tribal nature and selfish individualist leanings result in a cacophony of contradictory voices, in a free-for-all bordering on hysteria. There is riot, mutiny and mayhem on the lifeboat, with no one at the tiller. No captain has the moral (much less political) authority to steer Spaceship Earth. What can we then hope for but doom?

Some form of life will persist on this planet, perhaps for several billion years to come. But the experiment of civilization may well fail. And what is that experiment but the quest to transcend the state of nature given us, which no other creature has been able to do? We were not happy as animals, having imagined the life of gods. With one foot on the shore of nature and one foot in the skiddy raft of imagination, we do the splits. The two extreme scenarios are a retreat into the stone age and a brash charge into a post-humanist era. Clearly, eight billion people cannot go back to hunting and gathering. Nor can they all become genetically perfect immortals, colonize Mars, or upload to some more durable form of embodiment. The lifeboat will empty considerably if it does not sink first.

Whatever the way forward, it must be taken with conscious intent on a global level. We will not get far bumbling along as usual. Whether or not salvation is possible, we ought to try our best to realize human ideals; whether the ship of state (or Spaceship Earth) floats or sinks, we can behave in ways that honour the best of human aspirations. To pursue another metaphor: the board game of life, though ever changing, has at any given moment its rules and pieces. The point is not just to win but to play well, even as we attempt to redefine the rules and even the game itself. That means behaving nobly, as though we were actually living in that unrealized dream. Our experiment all along has been to create an ideal world—using the resources of the real one. Entirely escaping physical embodiment is a pipe dream; but modifying ourselves physically is a real possibility. In a parallel way, a completely man-made world is an oxymoron, for it will always exist in the context of some natural environment, with its own rules—even in outer space. Yet coming to a workable arrangement with nature should be possible. After all, that’s what life has always done. With no promise of success, our best strategy is a planetary consciousness willing to take charge of the Earth’s future. To get there, we must learn to regulate our own existence.

Yes, but is it art?

Freud observed that human beings have a serious and a playful side. The “Reality Principle” reflects the need to take the external world seriously, driven by survival. Science and technology serve the Reality Principle insofar as they accurately represent the natural world and allow us to predict, control, and use it for our benefit. Yet they leave unfulfilled a deep need for sheer gratuitous activity—play. The “Pleasure Principle” is less focused, for it reflects not only pursuit of what is good for the organism but also the playful side of human nature that sometimes thumbs its nose at “reality.” It reflects the need to freely define ourselves and the world we live in—not to be prisoners of biology, social conditioning, practicality, and reason. I believe this is where art (like music, sport, and some mathematics) comes literally into play.

Plato dismissed art as dealing only with appearances, not with truth. For him, art was merely a form of play, not to be taken seriously. However, we do take art seriously precisely because it is play. What we find beautiful or interesting about a work of art often involves its formal qualities, which reveal the artist’s playfulness at work. Like science fiction, art may portray an imagined world; but it can also directly establish a world simply by assembling the necessary elements. Just as a board game comes neatly in a box, so the artist’s proposed world comes in a frame, on a plinth, or in a gallery. What it presents may seem pointless, but that is its point. It makes its own kind of sense, if not that of the “real” world. The artwork may be grammatically correct while semantically nonsensical. Art objects are hypothetical alternatives to the practical objects of consumer society, of which they are sometimes parodies. Often they are made of similar materials, using similar technology, but expressing a different logic or no apparent logic at all. Artistic invention parallels creativity in science and technology. At the most ambitious levels, large teams of art technicians undertake huge projects, rivalling the monumentality of medieval cathedrals and modern cinema, but also rivalling space launches and cyclotrons. Extravagance expresses the Pleasure Principle in all domains.

Like technologists, artists are experimentalists. They want to see what happens when you do this or that. They love materials, processes and tinkering. Some are also theorists who want to follow out certain assumptions or lines of thought to their ultimate conclusions. In this they are aided by zealous curators, art historians, and gallery owners who propose ever-changing commentaries and theories of art, reflecting what artists do but also shaping it. The world of contemporary art seems driven by some restless mandate of “originality” that resembles the dynamics of the fashion industry and the need for constant change that fuels consumerism generally. Like scientists, ambitious artists may be driven to surpass what they have already done, or the accomplishments of others. Some seek a place in art history, which is little more than the hindsight of academics and curators or the self-serving promotions of dealers and gallerists.

Science is often distinguished from art and other cultural expressions by its progress, through the accumulation of data and consequent advance of technology. Its theories seem to build toward a more complete and accurate representation of reality. Yet theories are always subject to revision and data are subject to refinement and reinterpretation. To predict the future of science is to predict new truths of nature that we cannot know in advance. Art too accumulates, and its social role has evolved in step with changing institutions and practices, its forms with changing technology. There is pattern and direction in art history, but whether that can be called progress in a normative sense is debatable. Art does not seek to reveal reality, so much as to reveal the artist and to play. Indeed, it seems to be bent on freeing itself from the confines of reality.

Art is also an important kind of self-employment. It provides not only alternative objects and visions, but also an alternative form of work and of work place. It’s a way to establish and control one’s own work environment. The studio is the artist’s laboratory. Art defines an alternative form of production and relation to work. Artists can be their own bosses, if at the price of an unstable income. As in society at large, a small elite enjoy the bulk of success and wealth. Some artists are now wealthy entrepreneurs, and some collectors are but speculative investors. The headiness of the contemporary art world mirrors the world of investment, with its easy money and financial abstractions, prompting questions about the very meaning of wealth—and of art. Indeed, art has always served as a visible form of wealth, and therefore as a status symbol. At one time, the value of artworks reflected the labor-intensive nature of the work, and often the use of precious materials. Today, however, the market value of an artwork reflects how badly other people want it—whatever their reasons.

In modern times, art has inherited a mystique that imbues it with social value apart from labor value and even the marketplace. Although art defies easy definition and now encompasses a limitless diversity of expressions, people continue to recognize and value art as different from consumer items that serve more practical functions. On the one hand, art represents pure creativity—which is another word for play—and also an alternative vision. On the other hand, like everything else it has succumbed to commercialization. Artists are caught in between. Most must sell their work to have a livelihood. To get “exposure,” they must be represented in galleries and are tempted to aim at least some of their work toward the marketplace. Thus, one aspect of art, and of being an artist, reflects the Pleasure Principle while the other represents the Reality Principle. Yet, when the motives surrounding art are not earnest enough—when they appear too mundane, too heady, too trivial, too dominated by money, fame, or ideology—the perennial question arises: is it art? That we can raise the question indicates that we expect more.

What more might be expected? European art originated as a religious expression—which might be said of art in many places and times. Quite apart from any specific theology, human beings have always had a notion of the sacred. That might be no more than a reverence for tradition. But it might also be a quest to go beyond how things have been done and how they have been seen. Religious art has often served as propaganda for an ideology that reinforced the social order of the day. Advertising and news media serve this purpose in our modern world. But even within the strictures of religious art (or commercial art or politically sanctioned art), there is license to interpret, to play, to improvise and surprise. The gratuitous play with esthetics and formal elements can undermine the serious ostensible message. Perhaps that is the eternal appeal of art, its mystique and its mandate: to remind us of our own essential freedom to view the world afresh, uniquely, and playfully.

What is intelligence?

Intelligence is an ambiguous and still controversial notion. It has been defined variously as goal-directed adaptive behavior, the ability to learn, to deal with novel situations or insufficient information, to reason and do abstract thinking, etc. It has even been defined as the ability to score well on intelligence tests! Sometimes it refers to observed behavior and sometimes to an inner capacity or potential—even to a pseudo-substance wryly called smartonium.

Just as information is always for someone, so intelligence is someone’s intelligence, measured usually by someone else with their biases, using a particular yardstick for particular purposes. Even within the same individual, the goals of the conscious human person may contradict the goals of the biological human organism. It is probably this psychological fact that allows us to imagine pursuing arbitrary goals at whim, whereas the goals of living things are hardly arbitrary.

Measures of intelligence were developed to evaluate human performance in various areas of interest to those measuring. This gave rise to a notion of general intelligence that could underlie specific abilities. A hierarchical concept of intelligence proposes a “domain independent” general ability (the famous g-factor) that informs and perhaps controls domain-specific skills. “General” can refer to the range of subjects as well as the range of situations. What is general across humans is not the same as what is general across known species or theoretically possible agents or environments. Perhaps the intelligence measured can be no more general than the tests and situations used to measure it. As far as it is relevant to humans, the intelligence of other entities (whether natural or artificial) ultimately reflects their capacity to further or thwart human aims. Whatever does not interact with us in ways of interest to us may not be recognized at all, let alone recognized as intelligent.

It is difficult to compare animal intelligence across species, since wide-ranging sense modalities, cognitive capacities, and adaptations are involved. Tests may be biased by human motivations and sensory-motor capabilities. The tasks and rewards for testing animal intelligence are defined by humans, aligned with their goals. Even in the case of testing people, despite wide acceptance and appeal, the g-factor has been criticized as little more than a reification whose sole evidence consists in the very behaviors and correlations it is supposed to explain. Nevertheless, the comparative notion of intelligence, generalized across humans, was further generalized to include other creatures in the comparison, and then generalized further to include machines and even to apply to “arbitrary systems.” By definition, the measure should not be anthropocentric and should be independent of particular sense modalities, environments, goals, and even hardware.
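To make the reification charge concrete, consider a minimal sketch, purely illustrative and with fabricated data. The statistical basis of g is the “positive manifold”: scores on different tests tend to correlate positively, and a single summary factor can always be extracted from such a table of correlations. The toy Python example below (using a crude principal-component stand-in rather than the factor-analytic methods psychometricians actually use) shows how readily a “general factor” emerges from correlated scores, which is precisely why critics call it a summary of the correlations rather than an explanation of them.

```python
# Toy illustration only: simulate correlated test scores and extract one
# summary factor. The data and loadings are fabricated for demonstration.
import numpy as np

rng = np.random.default_rng(0)

# 500 hypothetical test-takers on 6 hypothetical subtests that share a
# common latent ability plus independent noise.
n_people, n_tests = 500, 6
latent = rng.normal(size=(n_people, 1))                  # shared "ability"
true_loadings = rng.uniform(0.5, 0.9, size=(1, n_tests)) # assumed weights
scores = latent @ true_loadings + rng.normal(scale=0.7, size=(n_people, n_tests))

# The "positive manifold": correlations among subtests.
corr = np.corrcoef(scores, rowvar=False)

# First principal component of the correlation matrix as a crude stand-in for g.
eigvals, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
g_loadings = np.abs(eigvecs[:, -1])       # component for the largest eigenvalue
explained = eigvals[-1] / eigvals.sum()

print("loadings on the first component:", np.round(g_loadings, 2))
print(f"share of variance summarized by one factor: {explained:.0%}")
```

Of course, the fact that a single factor can be extracted says nothing by itself about whether g names a real causal capacity; that is exactly what remains in dispute.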

Like the notion of mind-in-general, intelligence-in-general is an abstraction that is grounded in human experience while paradoxically freed in theory from the tangible embodiment that is the basis of that experience. Its origins are understandably anthropocentric, derived historically from comparisons among human beings, and then extended to comparisons of other creatures with each other and with human beings. It was then further abstracted to apply to machines. The goal of artificial intelligence (AI) is to produce machines that can behave “intelligently”—in some sense that is extrapolated from biological and human origins. It remains unclear whether such an abstraction is even coherent. Since concepts of general intelligence are based on human experience and performance, it also remains unclear to what extent an AI could satisfy or exceed the criteria for human-level general intelligence without itself being at least an embodied autonomous agent: effectively an artificial organism, if not an artificial person.

Can diverse skills and behaviors even be conflated into one overall capacity, such as “problem-solving ability” or the g-factor? While ability to solve one sort of problem carries over to some extent to other sorts of tasks, it does not necessarily transfer equally well to all tasks, let alone to situations that might not best be described as problem solving at all—such as the ability to be happy. Moreover, problem solving is a different skill from finding, setting, or effectively defining the problems worth solving, the tasks worth pursuing. The challenges facing society usually seem foisted upon us by external reality, often as emergencies. Our default responses and strategies are often more defensive than proactive. Another level of intelligence might involve better foresight and planning. Concepts of intelligence may change as our environment becomes more challenging, or as it becomes progressively less natural and more artificial, consisting largely of other humans and their intelligent machines.

Biologically speaking, intelligence is simply the ability to survive. In that sense, all currently living things are by definition successful, therefore intelligent. Though trivial sounding, this is important to note because models of intelligence, however abstract, are grounded in experience with organisms; and because the ideal of artificial general intelligence (AGI) involves attempting to create artificial organisms that are (paradoxically) supposed to be liberated from the constraints of biology. It may turn out, however, that the only way for an AI to have the autonomy and general capability desired is to be an embodied product of some form of selection: in effect, an artificial organism. Another relevant point is that, if an AI does not constitute an artificial organism, then the intelligence it manifests is not actually its own but that of its creators.

Autonomy may appear to be relative, a question of degree; but there is a categorical difference between a true autonomous agent—with its own intelligence dedicated to its own existence—and a mere tool to serve human purposes. A tool manifests only the derived intelligence of the agent designing or using it. An AI tool manifests the intelligence of the programmer. What does it mean, then, for a tool to be more intelligent than its creator or user? What it can mean, straightforwardly, is that a skill valued by humans is automated to achieve their goals more effectively. We are used to this idea, since every tool and machine was motivated by such improvement and usually succeeds until something better comes along. But is general intelligence a skill that can be so augmented, automated, and treated as a tool at the beck and call of its user?

The evolution of specific adaptive skills in organisms must be distinguished from the evolution of a general skill called intelligence. In conditions of relative stability, natural selection would favor automatic domain-specific behavior, reliable and efficient in its context. Any pressure favoring general intelligence would arise rather in unstable conditions. The emergence of domain-general cognitive processes would translate less directly into fitness-enhancing behavior, and would require large amounts of energetically costly brain tissue. The biological question is how domain-general adaptation could emerge distinct from specific adaptive skills and what would drive its emergence.

In light of the benefits of general intelligence, why have not all species evolved bigger and more powerful brains? Every living species is by definition smart enough for its current niche, for which its intelligence is an economical adaptation. It would seem, as far as life is concerned, that general intelligence is not only expensive, and often superfluous, but implies a general niche, whatever that might mean. Humans, for example, evolved to fit a wide range of changing conditions and environments, which they continue to further expand through technology. Even if we manage to stabilize the natural environment, the human world changes ever more rapidly—requiring more general intelligence to adapt to it.

The possibility of understanding mind as computation, and of viewing the brain metaphorically as a computer, is one of the great achievements of the computer age. (The computer metaphor is underwritten more broadly by the mechanist metaphor, which holds that any behavior of a biological “system” could be reduced to an algorithm.) Computer science and brain science have productively cross-pollinated. Yet, the brain is not literally a machine, and mind and intelligence are ambiguous concepts not exclusively related to the brain. “Thinking” suggests reasoning and an algorithmic approach—the ideal of intellectual thought—which is only a small part of the brain activity responsible for the organism as a whole. Ironically, abstract concepts produced by the brain are recycled to explain the operations of the brain that give rise to them in the first place.

Ideally, we expect artificial intelligence to do what we want, better than we can, and without supervision. This raises several questions and should raise eyebrows too. Will it do what we want, or how can it be made to do so? How will we trust its (hopefully superior) judgment if it is so much smarter than us that we cannot understand its considerations? How autonomous can AI be, short of being a true self-interested agent? Under what circumstances could machines become such agents, competing with each other and with humans and other life forms for resources and for their very existence? The dangers of superintelligence attend the motive to achieve ever greater autonomy in AI systems, the extreme of which is the genuine autonomy manifested by living things. AI research should instead focus on creating powerful tools that remain under human control. That would be safer, wiser, and—shall we say—more intelligent.

Origins of the white lie

In the wake of the recent discovery of unmarked graves of indigenous children at state-sponsored residential schools run by churches, there has been much discussion of the attitudes and practices of colonialism in Canada. Hardly institutions of learning, these were indoctrination centres serving cultural genocide. It is politically correct to look back with revulsion, as though we now live in a different world. Should we be so smug? After all, the last Indian residential school closed only twenty-five years ago.

What is particularly horrifying—and yet perplexing—is the prospect that many of the people running these schools (and the government officials who commissioned them) probably felt they were doing the right thing in “helping” indigenous children assimilate into white society. Apart from cynical land-grabbing and blatant racism, many in government may have thought themselves well-motivated, and the school personnel may have been sincerely devout. Yet, the result was malicious and catastrophic. There were elements of the same mean-spirited practices in English boarding schools and ostensibly charitable institutions. Nineteenth-century novels depict the sadism in the name of character formation, discipline and obedience, which were supposed to prepare young men and women for their place in society. How is it possible to be mean and well-meaning at the same time?

Certainly, “the white man’s burden” was a notion central to colonialism. It is related to the European concept of noblesse oblige, which was an aspect of the reciprocal duties between peasant and aristocrat in medieval society. The very fact that such class relationships (between the lowly and their betters) persist even today is key to the sort of presumption of superiority illustrated by the residential schools. Add to class the element of race, then combine with religious proselytizing, empire and greed, and you have a rationale for conquest. The natives were regarded suspiciously as ignorant savages who made no proper use of their land and “resources.” Their bodies were raw material for slavery and their souls for conversion. All in the name of civilizing them “for their own good.” Indeed, slavery was a global institution from time immemorial, practiced in Canada as well as the U.S., and even by indigenous peoples themselves.

In view of the Spanish Inquisition in the European homeland, it cannot be too surprising that the conquistadors applied similar methods abroad. The fundamental religious assumption was that the body has little importance compared to the soul. In the medieval Christian context, it was self-evident that the body could be mistreated, tortured, even burnt alive for the sake of the soul’s salvation. According to contemporary accounts, the conquistadors committed atrocities in a manner intended to outwardly honor their religion: natives hanged and burned at the stake—in groups of thirteen as a tribute to Christ and his twelve apostles! The utter irony and perversity of such “logic” has more recent parallels and remains just as possible today.

The Holocaust enacted the intention of keeping society pure by eliminating elements deemed undesirable. Eugenics was a theme of widespread interest in the early twentieth century, not only in Nazi Germany. Hannah Arendt argued controversially that the atrocities were committed less by psychopathic monsters than by ordinary people who more or less believed in what they were doing, if they thought about it deeply at all. In the wake of WW2, interest was renewed in understanding how such things can happen in the name of nationalism, racial superiority, or some other captivating agenda. In particular: to understand how unconscionable behavior is internally justified. The psychological experiments of Stanley Milgram on obedience to authority shed light on the banality of evil by showing how easy it is for people to commit acts of torture when an authority figure assures them it is necessary and proper. The underlying question remains: how to account for the disconnect between common sense (or compassion or morality) and behavior that can later (or by others) be judged patently wrong? By what reasoning do people justify their evil deeds so that they appear to them acceptable or even good?

Self-deception seems to be a general human foible, part and parcel of the ability to deceive others. It can be deliberate, even when unconscious. Or it can be incidental, as when we simply do not have conscious access to our motives. Organisms, after all, are cobbled together by natural selection in a way that coheres only enough to ensure survival. The ego or rational mind, too, is a cobbled feature, cut off from access to much of the organism’s workings, with which it would not be adaptive for it to directly interfere. The conscious self is charged by society to produce behavior in accord with social expectations, yet is poorly equipped as an organ of self-control.

Biology is no excuse, of course, especially since our highest ideals aspire to transcend biological limitations. Yet, a brief digression may shed some light. The primary aim of every organism is its own existence. Life, by definition, is self-serving; yet our species is characteristically altruistic toward those recognized as its own kind. The human organism discovered reason as a survival strategy. It has surrounded itself with tools, machines, factories and institutions that serve some purpose other than their own existence. As seemingly rational agents in the world, we try to shape the world in certain ways that nevertheless fit our needs as organisms. Thus, we purport to act according to some rational program, even for the good of others or of society, yet one that often turns out to serve ourselves or our particular group. The disconnect is a product of evolutionary history. We aspire and purport to be rational, but we were not rationally designed.

Hypocrisy might be parsed as hypo-criticism: a failure to be (self-)critical enough. The context of that failing is that we believe we are acting in accordance with one agenda and do not see how we are also acting in accordance with a very different one. We think we are pursuing one aim and fail to recognize another aim inconsistent with it. Deaf to the dissonance, the right hand (hemisphere?) knows not what the left is doing. A person, group, or class behaves according to its interests, and believes some story that justifies its entitlement, to itself and to others. The cover story is somehow made to jibe with the other motivations behind it. What is supposedly objective fact is molded to fit subjective desire.

As social creatures, we tend to look to others for clues to how we should behave. But that is a self-fulfilling prophecy when everyone else is doing likewise. There must be some way to weigh action that is not based on social norms. This is the proper function of reason, argument, debate, and social criticism: not to convince others of a point of view, but to find what is wrong with a point of view (no matter how good-sounding) and hopefully set it right. In particular, it should reveal how one intention can be inconsistent with another intention that lurks at its core, just as the whole structure of the brain lurks beneath the neocortex. Reason ought to reveal internal inconsistency and the self-deception that permits it.

Yet, self-deception is a concomitant of the ability to deceive others, which is built into our primate heritage and the structure of language. Society can only cohere through cooperation, and there must be ways to tell the cooperators from the defectors in society. Reputation serves this function. But reputation is an image in people’s minds that can be manipulated and faked. As any actor can tell you, the best way to make your performance emotionally convincing is to believe it yourself. If your story is a lie, then you too must believe the lie if you expect to convince others of your sincerity. Furthermore, deception of others dovetails with their willingness to be deceived—namely, with their own self-deceptions.

We know that people consciously create works of fiction and fantasy; also, that they sometimes knowingly lie. Self-deception overlaps these categories: fiction that we convince ourselves is fact. Rationally, we know that opinions—when expressed as such—are someone’s thoughts. But the category of fact renounces this understanding in favor of an objective truth that has no author, requires no evidence, and for which no individual is responsible, unless it be God. We disown responsibility for our statements by failing to acknowledge them as personal assertions and beliefs, instead proposing them offhand as free-standing truths in the public domain.

Religion, patriotism, and cultural myth are not about reason or factual truth, but about social cohesion and the soothing of existential anxiety through a sense of belonging. We trust those who seem to think and act like us. But this is a double-edged sword. It makes toeing the line a condition of membership in the group. Controlling the behavior of members helps the group cohere, but it provides no check on the behavior of the group itself.

Scientific propositions can be pinned down and disproven, but not so cultural myths and biases, nor religious beliefs, which cannot even be unambiguously comprehended, let alone debunked in a definitive way. Like water for the fish, the ethos of a society’s prejudices cannot easily be perceived. As Scott Atran has observed, “…most people in our society accept and use both science and religion without conceiving of them in a zero-sum conflict. Genesis and the Big Bang theory can perfectly well coexist in a human mind.” Perhaps that foible is a modern sign that we have not outgrown the capacity for self-deception, and thus for evil.