1
Introduction
Nick Bostrom and Milan M. Ćirković
1.1 Why?
The term 'global catastrophic risk' lacks a sharp definition. We use it to refer,
loosely, to a risk that might have the potential to inflict serious damage to
human well-being on a global scale. On this definition, an immensely diverse
collection of events could constitute global catastrophes: potential candidates
range from volcanic eruptions to pandemic infections, nuclear accidents to
worldwide tyrannies, out-of-control scientific experiments to climatic changes,
and cosmic hazards to economic collapse. With this in mind, one might
well ask, what use is a book on global catastrophic risk? The risks under
consideration seem to have little in common, so does 'global catastrophic risk'
even make sense as a topic? Or is the book that you hold in your hands as
ill-conceived and unfocused a project as a volume on 'Gardening, Matrix Algebra,
and the History of Byzantium'?
We are confident that a comprehensive treatment of global catastrophic risk
will be at least somewhat more useful and coherent than the above-mentioned
imaginary title. We also believe that studying this topic is highly important.
Although the risks are of various kinds, they are tied together by many links
and commonalities. For example, for many types of destructive events, much
of the damage results from second-order impacts on social order; thus the risks
of social disruption and collapse are not unrelated to the risks of events such as
nuclear terrorism or pandemic disease. Or to take another example, apparently
dissimilar events such as large asteroid impacts, volcanic super-eruptions, and
nuclear war would all eject massive amounts of soot and aerosols into the
atmosphere, with significant effects on global climate. The existence of such
causal linkages is one reason why it can be sensible to study multiple risks
together.
Another commonality is that many methodological, conceptual, and cultural
issues crop up across the range of global catastrophic risks. If our interest lies
in such issues, it is often illuminating to study how they play out in different
contexts. Conversely, some general insights - for example, into the biases of
human risk cognition - can be applied to many different risks and used to
improve our assessments across the board.
Global catastrophic risks
Beyond these theoretical commonalities, there are also pragmatic reasons
for addressing global catastrophic risks as a single field. Attention is scarce.
Mitigation is costly. To decide how to allocate effort and resources, we must
make comparative judgements. If we treat risks singly, and never as part of
an overall threat profile, we may become unduly fixated on the one or two
dangers that happen to have captured the public or expert imagination of the
day, while neglecting other risks that are more severe or more amenable to
mitigation. Alternatively, we may fail to see that some precautionary policy,
while effective in reducing the particular risk we are focusing on, would at the
same time create new hazards and result in an increase in the overall level of
risk. A broader view allows us to gain perspective and can thereby help us to
set wiser priorities.
The immediate aim of this book is to offer an introduction to the range
of global catastrophic risks facing humanity now or expected in the future,
suitable for an educated interdisciplinary readership. There are several
constituencies for the knowledge presented. Academics specializing in one of
these risk areas will benefit from learning about the other risks. Professionals
in insurance, finance, and business - although usually preoccupied with
more limited and imminent challenges - will benefit from a wider view.
Policy analysts, activists, and laypeople concerned with promoting responsible
policies likewise stand to gain from learning about the state of the art in global
risk studies. Finally, anyone who is worried or simply curious about what could
go wrong in the modern world might find many of the following chapters
intriguing. We hope that this volume will serve as a useful introduction to
all of these audiences. Each of the chapters ends with some pointers to the
literature for those who wish to delve deeper into a particular set of issues.
This volume also has a wider goal: to stimulate increased research,
awareness, and informed public discussion about big risks and mitigation
strategies. The existence of an interdisciplinary community of experts and
laypeople knowledgeable about global catastrophic risks will, we believe,
improve the odds that good solutions will be found and implemented to the
great challenges of the twenty-first century.
1.2 Taxonomy and organization
Let us look more closely at what would, and would not, count as a global
catastrophic risk. Recall that the damage must be serious, and the scale global.
Given this, a catastrophe that caused 10,000 fatalities or 10 billion dollars'
worth of economic damage (e.g., a major earthquake) would not qualify as a
global catastrophe. A catastrophe that caused 10 million fatalities or 10 trillion
dollars' worth of economic loss (e.g., an influenza pandemic) would count as a
global catastrophe, even if some region of the world escaped unscathed. As for
disasters falling between these points, the definition is vague. The stipulation
of a precise cut-off does not appear needful at this stage.
Global catastrophes have occurred many times in history, even if we only
count disasters causing more than 10 million deaths. A very partial list
of examples might include the An Shi Rebellion (756-763), the Taiping
Rebellion (1851-1864), and the famine of the Great Leap Forward in China,
the Black Death in Europe, the Spanish flu pandemic, the two world wars,
the Nazi genocides, the famines in British India, Stalinist totalitarianism, the
decimation of the native American population through smallpox and other
diseases following the arrival of European colonizers, probably the Mongol
conquests, perhaps Belgian Congo - innumerable others could be added to
the list depending on how various misfortunes and chronic conditions are
individuated and classified.
We can roughly characterize the severity of a risk by three variables: its scope
(how many people - and other morally relevant beings - would be affected), its
intensity (how badly these would be affected), and its probability (how likely the
disaster is to occur, according to our best judgement, given currently available
evidence). Using the first two of these variables, we can construct a qualitative
diagram of different types of risk (Fig. 1.1). (The probability dimension could
be displayed along a z-axis were this diagram three-dimensional.)
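The three variables lend themselves to a minimal computational rendering. The sketch below is illustrative only: the category names on each axis and the classification rules are our assumptions about how the qualitative scheme of Fig. 1.1 might be formalized, not definitions taken from the text.

```python
from dataclasses import dataclass

# Hypothetical formalization of the scope/intensity/probability taxonomy.
# The axis categories below are assumptions for illustration.
SCOPES = ["personal", "local", "global", "trans-generational"]
INTENSITIES = ["imperceptible", "endurable", "terminal"]

@dataclass
class Risk:
    name: str
    scope: str          # one of SCOPES
    intensity: str      # one of INTENSITIES
    probability: float  # subjective probability (the z-axis of the diagram)

def is_global_catastrophic(risk: Risk) -> bool:
    """Serious damage on a global scale: at least global scope and
    at least endurable intensity."""
    return (SCOPES.index(risk.scope) >= SCOPES.index("global")
            and INTENSITIES.index(risk.intensity) >= INTENSITIES.index("endurable"))

def is_existential(risk: Risk) -> bool:
    """Trans-generational scope combined with terminal intensity."""
    return risk.scope == "trans-generational" and risk.intensity == "terminal"

pandemic = Risk("severe influenza pandemic", "global", "endurable", 0.1)
impact = Risk("10-km asteroid impact", "trans-generational", "terminal", 1e-6)
print(is_global_catastrophic(pandemic), is_existential(pandemic))  # True False
print(is_global_catastrophic(impact), is_existential(impact))      # True True
```

On this rendering, an existential risk is simply the extreme corner of the diagram: every existential risk is a global catastrophic risk, but not conversely.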
The scope of a risk can be personal (affecting only one person), local, global
(affecting a large part of the human population), or trans-generational (affecting
humanity across future generations).
[Fig. 1.1 Qualitative categories of risk. The scope axis runs from personal through local and global to trans-generational (and, speculatively, cosmic). Examples plotted include the loss of one species of beetle and a drastic loss of biodiversity; the most severe risks of global and trans-generational scope are labelled existential risks.]
1.4 Part II: Risks from nature
Volcanic eruptions in recent historical times have had measurable effects on
global climate, causing global cooling by a few tenths of one degree, the effect
lasting perhaps a year. But as Michael Rampino explains in Chapter 10, these
eruptions pale in comparison to the largest recorded eruptions. Approximately
75,000 years ago, a volcano erupted in Toba, Indonesia, spewing vast volumes
of fine ash and aerosols into the atmosphere, with effects comparable to
nuclear-winter scenarios. Land temperatures globally dropped by 5-15°C, and
ocean-surface cooling of ~2-6°C might have extended over several years. The
persistence of significant soot in the atmosphere for one to three years might
have led to a cooling of the climate lasting for decades (because of climate
feedbacks such as increased snow cover and sea ice causing more of the sun's
radiation to be reflected back into space). The human population appears to
have gone through a bottleneck at this time, according to some estimates
dropping as low as approximately five hundred reproducing females in a world
population of approximately 4000 individuals. On the Toba catastrophe theory,
the population decline was caused by the super-eruption,
and the human
species was teetering on the brink of extinction. This is perhaps the worst
disaster that has ever befallen the human species, at least if severity is measured
by how close to terminal was the outcome.
More than twenty super-eruption sites for the last two million years have
been identified. This would suggest that, on average, a super-eruption occurs
at least once every 50,000 years. However, there may well have been additional
super-eruptions that have not yet been identified in the geological record.
7 This heuristic is only meant to be a first stab at the problem. It is obviously not generally valid.
For example, if one million dollars is sufficient to take all the possible precautions, there is no
reason to spend more on the risk even if we think that its probability is much greater than 1/1000 .
A more careful analysis would consider the marginal returns on investment in risk reduction.
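The footnote's heuristic can be made concrete with a toy allocation model. Everything in the sketch below is invented for illustration: the two risks, their probabilities and damages, and the exponential risk-reduction curves. The point is only that a greedy allocation by marginal returns stops funding a risk once its precautions saturate, regardless of its probability.

```python
import math

# Toy model of the footnote's point: allocate a fixed mitigation budget
# by marginal returns. The risks and their parameters are assumptions.
risks = {
    # name: (probability, damage if it occurs, effectiveness per dollar)
    "A": (1e-3, 1e12, 1e-5),  # higher-probability risk, slow to saturate
    "B": (1e-6, 1e12, 1e-4),  # rarer risk whose precautions are cheap
}

def marginal_return(p, damage, k, x):
    # Derivative of the expected-loss reduction p * damage * (1 - exp(-k*x))
    # with respect to spending x: returns shrink as precautions saturate.
    return p * damage * k * math.exp(-k * x)

budget, step = 2_000_000, 1_000
spent = {name: 0.0 for name in risks}
for _ in range(budget // step):
    # Greedy rule: each increment goes where the marginal return is highest.
    best = max(risks, key=lambda n: marginal_return(*risks[n], spent[n]))
    spent[best] += step

print(spent)  # risk A absorbs most of the budget, but B is not ignored
```

Under these made-up curves the higher-probability risk dominates spending, yet the rarer risk still receives funds once the first risk's marginal returns fall low enough, which is exactly the comparison a raw probability ranking would miss.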
The global damage from super-volcanism
would come chiefly from its
climatic effects. The volcanic winter that would follow such an eruption would
cause a drop in agricultural productivity which could lead to mass starvation
and consequent social upheavals. Rampino's analysis of the impacts of super-volcanism is also relevant to the risks of nuclear war and asteroid or meteor
impacts. Each of these would involve soot and aerosols being injected into the
atmosphere, cooling the Earth's climate.
Although we have no way of preventing a super-eruption,
there are
precautions that we could take to mitigate its impacts. At present, a global
stockpile equivalent to a two-month supply of grain exists. In a super-volcanic
catastrophe, growing seasons might be curtailed for several years. A larger
stockpile of grain and other foodstuffs, while expensive to maintain, would
provide a buffer for a range of catastrophe scenarios involving temporary
reductions in world agricultural productivity.
The hazard from comets and meteors is perhaps the best understood of all
global catastrophic risks (which is not to deny that significant uncertainties
remain). Chapter 11, by William Napier, explains some of the science behind
the impact hazards: where comets and asteroids come from, how frequently
impacts occur, and what the effects of an impact would be. To produce a
civilization-disrupting
event, an impactor would need a diameter of at least
one or two kilometres. A ten-kilometre impactor would, it appears, have a good
chance of causing the extinction of the human species. But even sub-kilometre
impactors could produce damage reaching the level of global catastrophe,
depending on their composition, velocity, angle, and impact site.
Napier estimates that 'the per capita impact hazard is at the level associated
with the hazards of air travel and the like'. However, funding for mitigation is
meager compared to funding for air safety. The main effort currently underway
to address the impact hazard is the Spaceguard project, which receives about
four million dollars per annum from NASA, besides in-kind and voluntary
contributions from others. Spaceguard aims to find 90% of near-Earth asteroids
larger than one kilometre by the end of 2008. Asteroids constitute the largest
portion of the threat from near-Earth objects (and are easier to detect than
comets) so when the project is completed, the subjective probability of a
large impact will have been reduced considerably - unless, of course, it were
discovered that some asteroid has a date with our planet in the near future, in
which case the probability would soar.
Some preliminary study has been done of how a potential impactor could
be deflected. Given sufficient advance warning, it appears that the space
technology needed to divert an asteroid could be developed. The cost of
producing an effective asteroid defence would be much greater than the cost
of searching for potential impactors. However, if a civilization-destroying
wrecking ball were found to be swinging towards the Earth, virtually any
expense would be justified to avert it before it struck.
Asteroids and comets are not the only potential global catastrophic threats
from space. Other cosmic hazards include global climatic change from
fluctuations in solar activity, and very large fluxes from radiation and cosmic
rays from supernova explosions or gamma ray bursts. These risks are examined
in Chapter 12 by Arnon Dar. The findings on these risks are favourable: the
risks appear to be very small. No particular response seems indicated at the
present time beyond continuation of basic research.8
1.5 Part III: Risks from unintended consequences
We have already encountered climate change - in the form of sudden global
cooling - as a destructive modality of super-eruptions
and large impacts (as
well as a possible consequence of large-scale nuclear war, to be discussed later).
Yet it is the risk of gradual global warming brought about by greenhouse
gas emissions that has most strongly captured the public imagination in
recent years. Anthropogenic climate change has become the poster child of
global threats. Global warming commandeers a disproportionate
fraction of
the attention given to global risks.
Carbon dioxide and other greenhouse gases are accumulating
in the
atmosphere, where they are expected to cause a warming of Earth's climate
and a concomitant rise in seawater levels. The most recent report by the
United Nations' Intergovernmental Panel on Climate Change (IPCC), which
represents the most authoritative assessment of current scientific opinion,
attempts to estimate the increase in global mean temperature that would be
expected by the end of this century under the assumption that no efforts at
mitigation are made. The final estimate is fraught with uncertainty because of
uncertainty about what the default rate of emissions of greenhouse gases will
be over the century, uncertainty about the climate sensitivity parameter, and
uncertainty about other factors. The IPCC, therefore, expresses its assessment
in terms of six different climate scenarios based on different models and
different assumptions. The 'low' model predicts a mean global warming of
+1.8°C (uncertainty range 1.1-2.9°C); the 'high' model predicts warming by
+4.0°C (2.4-6.4°C). Estimated sea level rise predicted by the two most extreme
scenarios of the six considered is 18-38 cm and 26-59 cm, respectively.
Chapter 13, by David Frame and Myles Allen, summarizes some of the
basic science behind climate modelling, with particular attention to the low-probability high-impact scenarios that are most relevant to the focus of this
book. It is, arguably, this range of extreme scenarios that gives the greatest
8 A comprehensive review of space hazards would also consider scenarios involving contact with intelligent extraterrestrial species or contamination from hypothetical extraterrestrial microorganisms; however, these risks are outside the scope of Chapter 12.
cause for concern. Although their likelihood seems very low, considerable
uncertainty still pervades our understanding of various possible feedbacks that
might be triggered by the expected climate forcing (recalling Peter Taylor's
point, referred to earlier, about the importance of taking parameter and model
uncertainty into account). David Frame and Myles Allen also discuss mitigation
policy, highlighting the difficulties of setting appropriate mitigation goals given
the uncertainties about what levels of cumulative emissions would constitute
'dangerous anthropogenic interference' in the climate system.
Edwin Kilbourne reviews some historically important pandemics in Chapter 14,
including the distinctive characteristics of their associated pathogens, and
discusses the factors that will determine the extent and consequences of future
outbreaks.
Infectious disease has exacted an enormous toll of suffering and death on
the human species throughout history and continues to do so today. Deaths
from infectious disease currently account for approximately 25% of all deaths
worldwide. This amounts to approximately 15 million deaths per year. About
75% of these deaths occur in Southeast Asia and sub-Saharan Africa. The top
five causes of death due to infectious disease are upper respiratory infection
(3.9 million deaths), HIV/AIDS (2.9 million), diarrhoeal disease (1.8 million),
tuberculosis (1.7 million), and malaria (1.3 million).
Pandemic disease is indisputably one of the biggest global catastrophic risks
facing the world today, but it is not always accorded its due recognition.
For example, in most people's mental representation of the world, the
influenza pandemic of 1918-1919 is almost completely overshadowed by the
concomitant World War I. Yet although WWI is estimated to have directly
caused about 10 million military and 9 million civilian fatalities, the Spanish
flu is believed to have killed at least 20-50 million people. The relatively low
'dread factor' associated with this pandemic might be partly due to the fact that
only approximately 2-3% of those who got sick died from the disease. (The
total death count is vast because a large percentage of the world population
was infected.)
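A back-of-the-envelope check shows how these figures hang together. The 1918 world-population estimate used below (roughly 1.8 billion) is an outside assumption, not a figure from the text.

```python
# Back-of-envelope check on the Spanish-flu figures quoted above.
deaths_low, deaths_high = 20e6, 50e6   # "at least 20-50 million" deaths
cfr_low, cfr_high = 0.02, 0.03         # "approximately 2-3% ... died"
world_pop_1918 = 1.8e9                 # outside assumption, not from the text

# Infections implied by (death toll) / (case-fatality rate):
implied_min = deaths_low / cfr_high    # most optimistic combination
implied_max = deaths_high / cfr_low    # most pessimistic combination

print(f"{implied_min/1e9:.2f}-{implied_max/1e9:.2f} billion infections implied")
print(implied_max > world_pop_1918)  # True: upper estimates strain the 2-3% CFR
```

Dividing the quoted death tolls by the quoted fatality rates implies between roughly 0.7 and 2.5 billion infections; since the upper figure exceeds the world population of the time, the higher death estimates require a fatality rate somewhat above 3%. Either way, the arithmetic confirms the text's point: a modest fatality rate applied to a very large infected population yields an enormous toll.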
In addition to fighting the major infectious diseases currently plaguing the
world, it is vital to remain alert to emerging new diseases with pandemic
potential, such as SARS, bird flu, and drug-resistant tuberculosis. As the
World Health Organization and its network of collaborating laboratories and
local governments have demonstrated repeatedly, decisive early action can
sometimes nip an emerging pandemic in the bud, possibly saving the lives of
millions.
We have chosen to label pandemics a 'risk from unintended consequences'
even though most infectious diseases (exempting the potential of genetically
engineered bioweapons) in some sense arise from nature. Our rationale is that
the evolution as well as the spread of pathogens is highly dependent on human
civilization. The worldwide spread of germs became possible only after all the
.onsiderable
edbacks that
-ter Taylor's
r and model
s mitigation
l goals given
d constitute
inhabited continents were connected by travel routes. By now, globalization in
the form of travel and trade has reached such an extent that a highly contagious
disease could spread to virtually all parts of the world within a matter of days
or weeks. Kilbourne also draws attention to another aspect of globalization
as a factor increasing pandemic risk: homogenization
of peoples, practices,
and cultures. The more the human population comes to resemble a single
homogeneous niche, the greater the potential for a single pathogen to saturate
it quickly. Kilbourne mentions the 'one rotten apple syndrome', resulting from
the mass production of food and behavioural fads:
If one contaminated item, apple, egg or, most recently, spinach leaf carries a billion
bacteria - not an unreasonable estimate - and it enters a pool of cake mix constituents
then packaged and sent to millions of customers nationwide, a bewildering epidemic
may ensue.
Conversely, cultural as well as genetic diversity reduces the likelihood that
any single pattern will be adopted universally before it is discovered to be
dangerous - whether the pattern be virus RNA, a dangerous new chemical or
material, or a stifling ideology.
By contrast to pandemics, artificial intelligence (AI) is not an ongoing or
imminent global catastrophic risk. Nor is it as uncontroversially a serious
cause for concern. However, from a long-term perspective, the development
of general artificial intelligence exceeding that of the human brain can be seen
as one of the main challenges to the future of humanity (arguably, even as
the main challenge). At the same time, the successful deployment of friendly
superintelligence could obviate many of the other risks facing humanity. The
title of Chapter 15, 'Artificial Intelligence as a positive and negative factor in
global risk', reflects this ambivalent potential.
As Eliezer Yudkowsky notes, the prospect of superintelligent machines is
a difficult topic to analyse and discuss. Appropriately, therefore, he devotes a
substantial part of his chapter to clearing common misconceptions and barriers
to understanding. Having done so, he proceeds to give an argument for giving
serious consideration to the possibility that radical superintelligence
could
erupt very suddenly - a scenario that is sometimes referred to as the 'Singularity
hypothesis'. Claims about the steepness of the transition must be distinguished
from claims about the timing of its onset. One could believe, for example, that
it will be a long time before computers are able to match the general reasoning
abilities of an average human being, but that once that happens, it will only
take a short time for computers to attain radically superhuman levels.
Yudkowsky proposes that we conceive of a superintelligence
as an
enormously powerful optimization
process: 'a system which hits small
targets in large search spaces to produce coherent real-world effects'. The
superintelligence
will be able to manipulate the world (including human
beings) in such a way as to achieve its goals, whatever those goals might be.
To avert disaster, it would be necessary to ensure that the superintelligence
is endowed with a 'Friendly' goal system: that is, one that aligns the system's
goals with genuine human values.
Given this set-up, Yudkowsky identifies two different ways in which we could
fail to build Friendliness into our AI: philosophical failure and technical failure.
The warning against philosophical failure is basically that we should be careful
what we wish for because we might get it. We might designate a target for the
AI which at first sight seems like a nice outcome but which in fact is radically
misguided or morally worthless. The warning against technical failure is that
we might fail to get what we wish for, because of faulty implementation
of the
goal system or unintended consequences of the way the target representation
was specified. Yudkowsky regards both of these possible failure modes as very
serious existential risks and concludes that it is imperative that we figure out
how to build Friendliness into a superintelligence before we figure out how to
build a superintelligence.
Chapter 16 discusses the possibility that the experiments that physicists carry
out in particle accelerators might pose an existential risk. Concerns about such
risks prompted the director of the Brookhaven Relativistic Heavy Ion Collider
to commission an official report in 2000. Concerns have since resurfaced with
the construction of more powerful accelerators such as CERN's Large Hadron
Collider. Following the Brookhaven report, Frank Wilczek distinguishes three
catastrophe scenarios:
1. Formation of tiny black holes that could start accreting surrounding
matter, eventually swallowing up the entire planet.
2. Formation of negatively charged stable strangelets which could catalyse
the conversion of all the ordinary matter on our planet into strange matter.
3. Initiation of a phase transition of the vacuum state, which would
propagate outward in all directions at near light speed and destroy not
only our planet but the entire accessible part of the universe.
Wilczek argues that these scenarios are exceedingly unlikely on various
theoretical grounds. In addition, there is a more general argument that these
scenarios are extremely improbable which depends less on arcane theory.
Cosmic rays often have energies far greater than those that will be attained in
any of the planned accelerators. Such rays have been bombarding the Earth's
atmosphere (and the moon and other astronomical objects) for billions of
years without a single catastrophic effect having been observed. Assuming
that collisions in particle accelerators do not differ in any unknown relevant
respect from those that occur in the wild, we can be very confident in the safety
of our accelerators.
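The structure of this argument can be sketched quantitatively. The collision counts below are placeholders, not estimates from the text; the point is only the form of the inference: zero catastrophes in a vast number of natural trials bounds the per-collision probability, which in turn bounds the expected number of catastrophes from our own experiments.

```python
# Sketch of the cosmic-ray safety argument. N and M are made-up
# illustrative magnitudes, not figures from the text.
N = 1e22   # hypothetical count of comparable natural collisions to date
M = 1e17   # hypothetical count of planned accelerator collisions

# "Rule of three": observing 0 events in N independent trials rules out,
# at roughly 95% confidence, any per-trial probability above 3/N.
p_upper = 3.0 / N
print(f"per-collision catastrophe probability < {p_upper:.0e}")

# Expected catastrophes from our own collisions, assuming they do not
# differ in any relevant respect from the natural ones:
print(f"expected catastrophes < {p_upper * M:.0e}")
```

The whole weight of the argument rests on the assumption flagged in the last comment, which is precisely the caveat the text raises: the bound evaporates if accelerator collisions differ in some unknown relevant respect from those occurring in the wild.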
By everyone's reckoning, it is highly improbable that particle accelerator
experiments will cause an existential disaster. The question is how improbable?
And what would constitute an 'acceptable' probability of an existential disaster?
In assessing the probability, we must consider not only how unlikely the
outcome seems given our best current models but also the possibility that our
best models and calculations might be flawed in some as-yet unrealized way.
In doing so we must guard against overconfidence bias (compare Chapter 5
on biases). Unless we ourselves are technically expert, we must also take into
account the possibility that the experts on whose judgements we rely might be
consciously or unconsciously biased.9 For example, the physicists who possess
the expertise needed to assess the risks from particle physics experiments are
part of a professional community that has a direct stake in the experiments
going forward. A layperson might worry that the incentives faced by the experts
could lead them to err on the side of downplaying the risks.10 Alternatively,
some experts might be tempted by the media attention they could get by playing
up the risks. The issue of how much and in which circumstances to trust risk
estimates by experts is an important one, and it arises quite generally with
regard to many of the risks covered in this book.
Chapter 17 (by Robin Hanson) from Part III on Risks from unintended
consequences focuses on social collapse as a devastation multiplier of other
catastrophes. Hanson writes as follows:
The main reason to be careful when you walk up a flight of stairs is not that you might
slip and have to retrace one step, but rather that the first slip might cause a second slip,
and so on until you fall dozens of steps and break your neck. Similarly we are concerned
about the sorts of catastrophes explored in this book not only because of their terrible
direct effects, but also because they may induce an even more damaging collapse of our
economic and social systems.
This argument does not apply to some of the risks discussed so far, such as
those from particle accelerators or the risks from superintelligence as envisaged
by Yudkowsky. In those cases, we may be either completely safe or altogether
doomed, with little probability of intermediary outcomes. But for many other
types of risk - such as windstorms, tornados, earthquakes, floods, forest fires,
terrorist attacks, plagues, and wars - a wide range of outcomes are possible,
and the potential for social disruption or even social collapse constitutes a
major part of the overall hazard. Hanson notes that many of these risks appear
to follow a power law distribution. Depending on the characteristic exponent
of such a power law distribution, most of the damage expected from a given
9 Even if we ourselves are expert, we must still be alert to unconscious biases that may influence our judgment (e.g., anthropic biases, see Chapter 6).
10 If experts anticipate that the public will not quite trust their reassurances, they might be led to try to sound even more reassuring than they would have if they had believed that the public would accept their claims at face value. The public, in turn, might respond by discounting the experts' verdicts even more, leading the experts to be even more wary of fuelling alarmist overreactions. In the end, experts might be reluctant to acknowledge any risk at all for fear of triggering a hysterical public overreaction. Effective risk communication is a tricky business, and the trust that it requires can be hard to gain and easy to lose.
type of risk may consist either of frequent small disturbances or of rare large
catastrophes. Car accidents, for example, have a large exponent, reflecting the
fact that most traffic deaths occur in numerous small accidents involving one
or two vehicles. Wars and plagues, by contrast, appear to have small exponents,
meaning that most of the expected damage occurs in very rare but very large
conflicts and pandemics.
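Hanson's point about exponents can be illustrated with a small simulation. The Pareto form and the exponents 3.0 and 1.1 are illustrative assumptions, chosen only to contrast a car-accident-like distribution with a war-like one.

```python
import random
random.seed(0)

# Illustrative simulation: the power-law exponent determines whether
# total damage comes mainly from frequent small events or rare huge ones.
# The exponents below are made-up examples, not fitted values.

def pareto_sample(alpha, n, xm=1.0):
    # Inverse-CDF sampling from P(X > x) = (xm / x)**alpha for x >= xm.
    return [xm / random.random() ** (1.0 / alpha) for _ in range(n)]

def top_1pct_share(samples):
    # Fraction of total damage contributed by the largest 1% of events.
    s = sorted(samples, reverse=True)
    k = max(1, len(s) // 100)
    return sum(s[:k]) / sum(s)

n = 100_000
cars = pareto_sample(alpha=3.0, n=n)  # large exponent: car-accident-like
wars = pareto_sample(alpha=1.1, n=n)  # small exponent: war/plague-like

print(f"top 1% share, alpha=3.0: {top_1pct_share(cars):.2f}")
print(f"top 1% share, alpha=1.1: {top_1pct_share(wars):.2f}")
```

With the large exponent, the top 1% of events accounts for only a few percent of total damage; with the small exponent, it accounts for a large share of it, which is why the rare-but-huge tail dominates planning for war- and plague-like risks.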
After giving a thumbnail sketch of economic growth theory, Hanson
considers an extreme opposite of economic growth: sudden reduction in
productivity brought about by escalating destruction of social capital and
coordination. For example, 'a judge who would not normally consider taking a
bribe may do so when his life is at stake, allowing others to expect to get away
with theft more easily, which leads still others to avoid making investments that
might be stolen, and so on. Also, people may be reluctant to trust bank accounts
or even paper money, preventing those institutions from functioning.' The
productivity of the world economy depends both on scale and on many different
forms of capital which must be delicately coordinated. We should be concerned
that a relatively small disturbance (or combination of disturbances) to some
vulnerable part of this system could cause a far-reaching unraveling of the
institutions and expectations upon which the global economy depends.
Hanson also offers a suggestion for how we might convert some existential
risks into non-existential risks. He proposes that we consider the construction
of one or more continuously inhabited refuges - located, perhaps, in a deep
mineshaft, and well-stocked with supplies - which could preserve a small but
sufficient group of people to repopulate a post-apocalyptic world. It would
obviously be preferable to prevent altogether catastrophes of a severity that
would make humanity's survival dependent on such modern-day 'Noah's arks';
nevertheless, it might be worth exploring whether some variation of this
proposal might be a cost-effective way of somewhat decreasing the probability
of human extinction from a range of potential causes.11