Self-Transcendence

In 1931, Kurt Gödel famously rocked the worlds of mathematics and philosophy with an ingenious and ground-breaking formal proof. He showed that there are mathematical truths that elude any given formalized mathematical system (provided it is consistent and expressive enough for arithmetic). If we view a given mind as such a system—well-defined but limited—then there are truths it cannot grasp, which a more advanced mind might. (That superior mind, in turn, would have its own relative limitations.)
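To fix ideas, the theorem can be stated as follows (a standard textbook formulation, given here in Rosser's strengthened form rather than Gödel's original 1931 wording):

First Incompleteness Theorem. If $F$ is a consistent, effectively axiomatized formal system strong enough to express elementary arithmetic, then there is a sentence $\varphi$ in the language of $F$ such that
\[
F \nvdash \varphi \quad\text{and}\quad F \nvdash \lnot\varphi .
\]
Gödel's own witness, the sentence $G_F$, informally asserts its own unprovability in $F$; if $F$ is sound, $G_F$ is true yet unprovable in $F$.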

But is your mind something you can formalize or even point to? If so, then paradoxically the act of pointing or formalizing must come from somewhere outside it. Can a particular brain be adequately formalized? The dream of “whole brain emulation” hinges on this question—along with the hope of uploading a person’s “mind” as a digital file in cyberspace or transferring it to a new body. Can the ability to self-transcend be formalized?

Some (including Gödel himself) argue that Gödelian self-transcendence implies that humans will always have an edge over machines, which can be formalized. Yet AI—a logical extension of the machine concept—has already slipped beyond tidy human definitions. Neural networks, for instance, are no longer fully transparent. They are not strictly products of human design, and we don’t know precisely how they work. There seems to be no reason in principle why AI could not become even more complex than a human brain. If so, might it even exceed the human brain’s ability to transcend its own limitations? Self-transcendence could grant AI the same advantages it offers us: flexibility, self-control, objectivity, and open-endedness. These traits have long given human beings an edge over other creatures. Possessing them to a greater degree could give AI an edge over us.

The dividedness of human nature—half animal and half god—manifests in moral inconsistency. We frame this tension as a struggle between good and evil, often codified in terms of moral absolutes. Our ideals reflect the desire to be self-defining and unconstrained by biology. Yet morality derives literally from mores—customs and habits—not timeless truths. Custom is a matter of negotiated agreement. Because we cannot trust one another to reach such agreement and abide by it, we invoke ideals, often enforced by super-authorities we call gods. The capacity for self-transcendence enables the possibility of agreement—by helping us rise above individual and cultural biases and converge on objective reality. But this capacity remains unreliable in us, constrained as we are by biology.

Could artificial entities fare better? Could AI become our substitute for divine authority—a new kind of technological overlord? We seem to have given up on the Enlightenment hope of reforming humanity through education or politics. In its place, we now imagine post-human futures: genetic enhancement, cybernetic augmentation, colonizing outer space, or escaping embodiment entirely. Perhaps the final solution to “the human question” will come not through reform but through replacement—by machines that are more conscious than we are, better at transcending their own limitations, and better equipped morally to occupy the future.

Many of the current fears about AI reflect our misgivings about our own darker human nature—and the extent to which technology mirrors and magnifies it. Those concerns are well founded, especially given the commercial and military motivations for technological innovation. Yet, because the technological impulse also reflects our higher aspirations, it may not be doomed to destroy us. The future remains uncertain, precisely because we are divided creatures. It is not inconceivable that we could create a race of machines to replace us, for better or worse. Nor is it unthinkable that they could be morally superior to us: super-benevolent and super-objective as well as super-intelligent. That may depend on the purposes we set for them now. In the end, we are likely to get the future we deserve.