
I think this is a nuanced overview and I mostly agree, but to give a possible counterargument:

"Ignorance suffices for establishing an epistemic possibility, but not for taking it seriously" is a good guideline when uncertainty leads to an exponential increase in considerations that paralyzes decision making. This generally applies, but not necessarily here, since the bar for existential risks is very very high.

There are not many things that can permanently derail all that is valuable. Arguably, the number of existential risks that are epistemic possibilities and estimated to be at least somewhat likely* can be counted on one or two hands. Given the stakes, it's warranted to take this small number of threats very seriously, even if they're only epistemic possibilities (especially since a lack of positive evidence isn't evidence of absence: we would expect to see no concrete evidence of AGI being an existential risk before we build AGI, regardless of whether we live in a world where AGI is an existential risk or not).

*Say, at least ~1 in 1000 for this century. For example, even asteroids/comets, while commonly seen as a plausible extinction event, are only estimated to pose about a 1 in 1,000,000 per century existential risk.

At the end of the day, I keep coming back to a basic risk management argument: risk priority score = estimated probability * estimated impact * estimated risk mitigation per unit of effort. If something scores unusually high on such a calculation, it ought to be a priority for humanity. I would be curious to know whether there are any arguments that undermine this reasoning. What "estimated" means here will of course be a subject of disagreement, but that only means the meta-project of aggregating different perspectives and estimates also ought to be a top priority for humanity. (It would be awesome if more xrisk-mitigation-skeptical people were to engage with this question, but even that seems hard to achieve.)
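To make the calculation concrete, here is a minimal sketch in Python; every number in it is a placeholder I made up to show the shape of the calculation, not an actual estimate:

```python
# Back-of-the-envelope risk priority score, as described above:
# score = estimated probability * estimated impact * estimated mitigation per unit of effort.
# All numbers below are placeholders, not actual estimates.

def risk_priority_score(probability, impact, mitigation_per_effort):
    """Higher score = stronger case for prioritizing this risk."""
    return probability * impact * mitigation_per_effort

# Hypothetical inputs: probability of the risk this century, impact on a 0-1
# "fraction of future value lost" scale, and how much the risk could be
# reduced per unit of effort spent (all invented for illustration).
candidate_risks = {
    "risk A": risk_priority_score(0.01, 1.0, 0.10),
    "risk B": risk_priority_score(0.000001, 1.0, 0.10),
}

for name, score in sorted(candidate_risks.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2e}")
```

Of course, the real disagreement is about the inputs, not the multiplication, which is exactly why the aggregation meta-project matters.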

The best such argument I can think of is Pascal's mugging, though that mostly applies to very small probabilities of very large impact, which can easily be rooted out with a probability threshold like the ~1 in 1000 per century I used.
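To illustrate how such a threshold roots those cases out (again just a sketch with invented probabilities; only the asteroid figure echoes the 1 in 1,000,000 number above):

```python
# Pascal's mugging guard: ignore scenarios below a probability floor
# before doing any expected-value math at all. The floor (~1 in 1000
# per century) and the scenario probabilities are illustrative only.

PROBABILITY_FLOOR = 1e-3  # ~1 in 1000 this century

def passes_threshold(probability, floor=PROBABILITY_FLOOR):
    """Only scenarios above the floor are taken seriously at all."""
    return probability >= floor

scenarios = {
    "mugger's scenario": 1e-12,     # astronomically unlikely, arbitrarily large claimed payoff
    "asteroid/comet impact": 1e-6,  # roughly the cited 1 in 1,000,000 per century
    "hypothetical risk X": 5e-2,    # invented number above the floor
}

taken_seriously = {name: p for name, p in scenarios.items() if passes_threshold(p)}
print(taken_seriously)  # only "hypothetical risk X" survives the filter
```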
