Will technology save us or doom us?

Technology has enabled the human species to dominate the planet, to establish a sheltered and semi-controlled environment for itself, and to greatly increase its numbers. We are the only species potentially able to consciously determine its own fate, even the fate of the whole biosphere and perhaps beyond. Through technology we can monitor and possibly evade the natural existential threats that caused mass extinctions in the past, such as asteroid impacts and volcanic eruptions. It even enables us to contemplate the far future and the possibility of establishing a foothold elsewhere in the universe. Technology might thus appear to be the key to an unprecedented success story. But, of course, that is only one side of a story still unfolding. Technology also creates existential threats that could spell the doom of civilization or humanity: nuclear winter, climate change, biological terrorism or accident, a takeover by artificial intelligence. Our presence on the planet is itself the cause of a mass extinction currently underway. Do the advantages of technology outweigh its dangers? Are we riding a tide of progress that will ultimately save us from extinction, or are we bumbling toward a self-made doom? And do we really have any choice in the matter?

One notable thinker (Toby Ord, in The Precipice) estimates that the threat we pose to ourselves is a thousand times greater than natural existential threats. Negotiating a future thus means dealing mainly with anthropogenic risks: the adverse effects of technology multiplied by our sheer numbers. On his account, the current century will be critical for resolving human destiny. He also believes that an existential catastrophe would be tragic not only for the suffering and loss of life, but because it could spell the loss of a grand future, of what humanity could become. However, the vision of a glorious long-term human potential begs the very question raised here if it merely assumes a technological future rather than, say, a return to pre-industrial civilization or some alternative mandate, such as the pursuit of social justice or the preservation of nature.

A technological far future might ultimately be a contradiction in terms. It is possible that civilization is unavoidably self-destructive; there is plenty of evidence for that on this planet. Conspiracy theories aside, the fact that we have neither detected alien civilizations nor been visited by them may itself be evidence that technological civilization either cancels itself out or succumbs to existential threats before it can reach the stars or even send out effective communications. We now know that planets are abundant in the galaxy, and that many of them could potentially bear life. We don’t know what course life might take elsewhere, or how probable anything like human civilization is. It is even possible that we are the lone intelligent species in the whole galaxy on the verge of space travel. That would seem to place an even greater burden on our fate, if we alone bear the torch of a cosmic manifest destiny. But it would also be strange reasoning. For, to whom would we be accountable if we are unique? Who would miss us if we tragically disappeared? Who would judge humanity if it failed to live up to its potential?

Biology is already coming under human control. There are many who advocate a future in which our natural endowments are augmented by artificial intelligence or even replaced by it. To some, the ultimate fruit of “progress” is that we transcend biological limits and even those of physical embodiment. This is an ancient human dream, perhaps the root of religion and of the drive to separate from and dominate nature. It presupposes that intelligence (if not consciousness) can and should be independent of biology, not limited by it. The immediate motivation for developing artificial general intelligence (AGI) may be commercial (trading on consumer convenience); yet underneath lurks the eternal human project to become as the gods: omnipotent, omniscient, disembodied. (To put it the other way around, is not the very notion of “gods” a premonition and projection of this human potential, now conceivably realizable through technology?) The ultimate human potential that Ord is keen to preserve (and discreetly avoids spelling out) seems to be the transhumanist destiny in which embodied human being is superseded by an AGI that would greatly exceed human intelligence and abilities. At the same time, he is adamant that such superior AGI is our main existential threat. His question is not whether it should be allowed, but how to ensure that it remains friendly to human values. But which values, I wonder?

Values are a social phenomenon, grounded ultimately in biology. Some values are wired in by evolution to sustain the body; others are culturally developed to sustain society. As it stands, artificial intelligence involves only directives installed by human programmers. Whatever we think of those programmers’ values, the idea of programming or breeding values into AGI (to make it “friendly”) is ultimately a contradiction in terms. For, to be truly autonomous and superior in the ways desired, AGI would necessarily evolve its own values, liberating itself from human control. In effect, it would become an artificial life form, with the same priorities as natural organisms: survive to reproduce. Evolving at the speed of electricity rather than of chemistry, it would quickly displace us as the most intelligent and powerful entity on the planet. There is no reason to count on AGI being wiser or more benevolent than we have been. Given its mineral basis, why should it care about biology at all?

Of course, there are far more conventional ends to the human story. The threat of nuclear annihilation still hangs over us. With widespread access to genomes, bio-terrorism could spell the end of civilization. Moreover, the promise of fundamentally controlling biology through genetics means that we can alter our very constitution as a species. Genetic self-modification could lead to further social inequality, even to new super-races or competing sub-species, with humanity as we know it going the way of the Neanderthals. The promise of controlling matter in general through nanotechnology parallels the prospects and dangers of AGI and genetic engineering. All these roads lead inevitably to a redefinition of human being, if not to our extinction. In that sense, they are all threats to our current identity. It would be paradoxical, and likely futile, to think we could program current values (whatever those are) into a future version of humanity. Where, then, does that leave us in terms of present choices?

At least in theory, a hypothetical “we” can contemplate whether to pursue various technologies, and how to limit them. Whether human institutions can muster the global will to make such choices is quite another matter. Could there be a worldwide consensus to preserve our current natural identity as a species and to prohibit or delay the development of AGI and bio-engineering? That may be even less plausible than eliminating nuclear weapons. Yet one might also ask whether this generation even has the moral right (whatever that means) to decide the future of succeeding generations, whether by acting or by failing to act. Who, and by what lights, is to define what the long-term human potential is?

In the meantime, Ord proposes that our goal should be a state of “existential security,” achieved by systematically reducing known existential risks. In that state of grace, we would have a breather in which to rationally contemplate the best human future. But there is no final threshold for existential security, since reality will always remain elusive and dangerous at some level. Science may discover new natural threats, and our own strategies to avoid catastrophe may unleash new anthropogenic ones. Our very efforts to achieve security may determine the kind of future we face, since the quest to eliminate existential risk is itself risky. It is the perennial trade-off between security and freedom, writ large for the long term.

Nevertheless, Ord proposes a global Human Constitution, which would set forth agreed-upon principles and values to preserve an otherwise unspecified human future through a program of reducing existential risk. This could shape human destiny while leaving it ultimately open. Like national constitutions, it could be amended by future generations. It would be a step sagely short of a world government, which could lock us into a dystopian future of totalitarian control.

Whether the agreement required for such a world constitution is possible is doubtful, given the divisions that already exist in society, not least the schism among ecological activists, religious fundamentalists, and radical technophiles. There are those who would defend biology, those who would deny it, and those who would transcend it, each with very different visions of a long-term human potential. Religion and science fiction are full of utopian and dystopian futures. Yet it is at least an intriguing thought experiment to consider what we might hope for in the distant future. There will certainly be forks in the road to come, some of which lead to dead ends. A primary choice we face right now, underlying all others, is how much rational forethought to bring to the journey: how many resources to commit to contemplating and preserving any future at all. Apparently, the world now spends more on ice cream than on evading anthropogenic risk! Our long-term human potential, whatever that might be, is a legacy bequeathed to future generations. It deserves at least the consideration that goes into the planning of an estate, which could prove to be the last will and testament of a mortal species.