UN Report: How to Stop Risking Human Extinction

Since 1990, the United Nations Development Program has been tasked with publishing reports every few years on the state of the world. The 2021/2022 report, released earlier this month and the first since the Covid-19 pandemic began, is titled “Uncertain Times, Unstable Lives.” And, unsurprisingly, it’s a stressful read.

“The war in Ukraine resonates around the world,” the report opens, “causing immense human suffering, including a cost-of-living crisis. Climate and ecological disasters threaten the world on a daily basis. It is seductively easy to dismiss crises as unique, natural to expect a return to normality. But putting out the last fire or driving out the last demagogue will be an unwinnable game unless we accept the fact that the world is fundamentally changing. There is no way back.”

Those words ring true. Just a few years ago, we lived in a world where experts had long warned that a pandemic was coming and could be devastating; now, we live in a world that a pandemic has clearly devastated. Just a year ago, there hadn’t been a major land war in Europe since World War II, and some pundits optimistically assumed that two countries with McDonald’s would never go to war.

Not only is Russia now occupying swaths of Ukraine, but the degradation of Russia’s military in the fighting there has contributed to other regional instability, most notably Azerbaijan’s attack on Armenia earlier this month. Fears about the wartime use of nuclear weapons, dormant since the Cold War, are back as people worry that Putin might resort to tactical nukes if he faces total defeat in Ukraine.

Of course, all of these situations can, and probably will, be resolved without catastrophe. The worst rarely happens. But it’s hard to avoid the feeling that we’re just rolling the dice, hoping that somehow we don’t come up with an unlucky number. Every pandemic, every minor war between nuclear-armed powers, every new and uncontrolled technology may present only a small chance of escalating into a catastrophic-scale event. But if we take that risk every year without taking precautions, humanity’s lifespan may be limited.

Why “existential security” is the opposite of “existential risk”

Toby Ord, Senior Research Fellow at the Future of Humanity Institute in Oxford and author of the book The Precipice: Existential Risk and the Future of Humanity, explores this issue in an essay in the latest UNDP report. He calls it the problem of “existential security”: the challenge of not just preventing every possible individual catastrophe, but of building a world that stops rolling the dice on possible extinction.

“To survive,” he writes in the report, “we need to accomplish two things. We must first reduce the current level of existential risk by putting out the fires we already face due to the threats of nuclear war and climate change. But we can’t always be fighting fires. A defining characteristic of existential risk is that there are no second chances: a single existential catastrophe would be our permanent undoing. Therefore, we must also create the equivalent of fire brigades and fire safety codes, making institutional changes to ensure that existential risk (including that of new technologies and developments) is kept low forever.”

He illustrates the point with this rather scary graph:

Toby Ord, UN Human Development Report 2021-2022

The idea is this: Suppose we go through a situation where a dictator threatens nuclear war, or where tensions between two nuclear powers seem to be reaching a breaking point. Most of the time, perhaps, the situation settles down, as it did during the many, many close calls of the Cold War. But if that situation repeats every few decades, then the probability that we dodge nuclear war every single time gradually declines. The odds that humanity will still be around 200 years from now eventually get pretty low, just as the odds of winning at dice decrease with every roll.
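The compounding at work here is simple arithmetic. As a rough sketch (my own illustration, not a calculation from Ord’s essay, and the 1 percent annual-risk figure is purely hypothetical): if each year independently carries some small chance of catastrophe, the chance of getting through many years in a row shrinks exponentially.

```python
# Illustrative sketch: how a small annual risk compounds over time.
# The 1% figure is a hypothetical assumption, not an estimate from the report.

def survival_probability(annual_risk: float, years: int) -> float:
    """Chance of avoiding catastrophe every single year for `years` years,
    assuming each year is an independent roll of the dice."""
    return (1 - annual_risk) ** years

# Even a "small" 1% annual risk compounds brutally over two centuries:
print(survival_probability(0.01, 200))  # ≈ 0.134, i.e. roughly a 13% chance
```

Under those toy assumptions, a seemingly modest 1 percent yearly risk leaves only about a one-in-seven chance of making it 200 years, which is the intuition behind Ord’s graph.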

“Existential security” is the state in which, most years, and ideally most decades or even centuries, we face no risk with a substantial chance of annihilating civilization. For existential security from nuclear risk, for example, perhaps we would reduce nuclear arsenals to the point where even a full nuclear exchange would not pose a risk of civilizational collapse, something the world made significant progress on when arsenals were cut after the Cold War. For existential security from pandemics, we could develop PPE that is comfortable to wear and provides near-total protection against disease, plus a global system for early disease detection, ensuring that any potentially catastrophic pandemic can be nipped in the bud.

Ideally, though, we would have existential security against everything: not only known risks but unknown ones. For example, a major concern among experts, including Ord, is that once we build highly capable artificial intelligences, AI will drastically accelerate the development of new technologies that endanger the world, while, because of how modern AI systems are designed, it will be incredibly difficult to tell what they are doing or why.

So an ideal approach to managing existential risk not only combats current threats, but also creates policies that will prevent threats from emerging in the future.

That sounds great. As longtermists have recently argued, existential risks pose a particularly devastating threat because they could destroy not only the present, but also a future in which hundreds of billions more people may one day live. But how do we bring that about?

Ord proposes “an institution aimed at existential security.” He points out that preventing the end of the world is exactly the sort of thing that is supposed to be within the purview of the United Nations; after all, “the risks that could destroy us transcend national borders,” he writes. The problem, Ord observes, is that to prevent existential risk, an institution would need broad capacity to intervene in the world. No country wants another country to be allowed to run an incredibly dangerous research program, but at the same time, no country wants to give other countries jurisdiction over its own research programs. Only a supranational authority, something like the International Atomic Energy Agency but with a much broader mandate, could potentially overcome those narrower national concerns.

Often the hard part of securing humanity’s future is not figuring out what to do, but actually doing it. With climate change, the problem and the risks were well understood long before the world took steps to move away from greenhouse gases. Experts warned about the risks of pandemics before Covid-19 hit, but they largely went unheeded, and institutions the US thought were ready, like the CDC, fell flat on their faces during a real crisis. Today, some experts warn about artificial intelligence, while others assure us there will be no problem and that nothing needs fixing.

Writing reports only helps if people read them; building an international institute for existential security only works if there is a way to transform the study of existential risks into serious, coordinated action to make sure we don’t confront them. “There is not enough acceptance right now,” Ord acknowledges, but “this may change over the years or decades as people slowly come to grips with the severity of the threats facing humanity.”

Ord doesn’t speculate on what might cause that change, but personally, I’m pessimistic. Anything that changed the international order enough to support international institutions with real authority over existential risk would probably have to be a devastating catastrophe in its own right. It seems unlikely that we’ll get down the road to “existential security” without taking some serious risks, which we will hopefully survive to learn from.
