
End of the World


How often do you think about the end of the world?

The recent election and inauguration of Donald Trump have gotten me thinking about the end of the world quite a lot.

Some people think about it quite a bit. Within the Effective Altruism community, of which I am a member, many people share a concern for the future of humanity. Effective Altruists attempt to combine good intentions with science and reasoning to find the best ways to do good, whether for humans or non-human animals. Mitigating so-called “existential risks” is a huge priority for some of its more risk-focused members.

An existential risk, put simply, is a class of possible events that presents a risk of extinction to humanity. Nick Bostrom, Oxford philosopher and existential risk extraordinaire, defines it this way: “One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.”

Major classes of existential risk include catastrophic climate change, malicious artificial superintelligence, destructive nanotechnology, nuclear war, and dangerous bio-tech, among others.

When considering the threats posed by so-called “x-risks,” there are at least three factors to keep in mind.

First, bear in mind that if humanity continues into the foreseeable future, the number of potential people yet to live will be vastly larger than the number who exist today or have existed in the past. Additionally, the expected disutility of extinction-level events is massive, meaning that even a small reduction in their probability yields enormous expected value. Per one interpretation of the evidence, “even if we use the most conservative of these estimates… we find that the expected loss of an existential catastrophe is greater than the value of 10^16 human lives. This implies that the expected value of reducing existential risk by a mere one millionth of one percentage point is at least a hundred times the value of a million human lives.” If this holds even remotely true, then surely we should keep listening.
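To see why the arithmetic in that quote works out, here is a minimal back-of-the-envelope sketch, assuming only the conservative figure of 10^16 future lives given above:

```python
# Back-of-the-envelope check of the expected-value claim quoted above.
# Assumes the conservative estimate of 10^16 future human lives at stake.
lives_at_stake = 10**16

# "One millionth of one percentage point" = (1/1,000,000) * (1/100) = 10^-8
risk_reduction = 1e-6 * 1e-2

expected_lives_saved = lives_at_stake * risk_reduction
print(expected_lives_saved)        # 100000000.0 -- i.e. 10^8 lives
print(expected_lives_saved / 1e6)  # 100.0 -- a hundred times a million lives
```

In other words, under that estimate, shaving even a hundred-millionth off the probability of extinction is worth, in expectation, a hundred million lives.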

Second, consider that some experts believe the probability of extinction-level events is disturbingly high. In a report released by Oxford’s Future of Humanity Institute, a survey of experts put the likelihood of human extinction by the year 2100 at a whopping 19 percent. While this number should be taken with a grain of salt, it is unsettling that people in the know are so pessimistic about our odds.

Third, bear in mind that there are very, very few people dedicated to mitigating these existential risks. Some limited efforts exist, but they are understaffed and underfunded. As Nick Bostrom has noted, even “a million dollars could currently make a vast difference to the amount of research done on existential risks; the same amount spent on furthering world peace would be like a drop in the ocean.” If you’re looking for a cause with a funding gap, this might be just the ticket.

Looking back through history, we can find plenty of near-misses with nuclear war; the Future of Life Institute has compiled a nice list of the most notable incidents.

What this might show us is that our planet has come close to near-extinction-level events in the past. One reason we are all still here is that people worked to craft systems that would avoid careless mistakes and oversights. In other words, we built systems that attempted to mitigate these risks. If those systems had not been in place, and the fail-safes had given way, what would have happened? Perhaps not outright extinction, but disaster indeed.

During the Cold War, the notion of “mutually assured destruction” was not some abstraction; it was a working possibility, one that humanity had to take seriously. Today, in a world of ever-advancing technology and geopolitical uncertainty, we have no compelling reason not to take these sorts of risks just as seriously.

The need to mitigate existential risk stands or falls with free will: if free will does not exist, then there is little or no case to be made. But if it does exist, even to some extent, then we have every reason to at least listen to the experts.

So perhaps my thesis is this: insofar as a person believes humans have free will (i.e., a degree of autonomy over their destinies), she will likely have reason to support causes that mitigate the risks posed by disaster scenarios.

This is not meant to take a stand on cause prioritization. It might still be more worthwhile to donate to groups that fight global health problems or that empower people economically. Setting aside opportunity costs, however, donating time or money to mitigating these threats is likely net positive, depending on the efficacy of the organization or project.

Given that we have never observed an existential threat play out, we might be biased toward believing that one never will emerge. Accordingly, this is an area where clear, rational thinking is absolutely essential.

In my view, whether to support or donate to these causes is an open question. But if the whole of humanity is at stake, it is at least a conversation worth having.

Note: This article is adapted from a piece I wrote for my Chronicle column on February 8, 2017.