Decades or centuries? Timeframe of the risks of human extinction
DESCRIPTION
One of the main open questions in the discussion of x-risks is the time-scale of a possible global catastrophe: the time from now within which it will either happen or be permanently prevented. Two main opinions exist: decades or centuries. If we take into account the many predictions of continuing exponential or even hyperbolic development of new technologies, we should conclude that superhuman AI and the ability to create super-deadly biological viruses will arrive between 2030 (Vinge) and 2045 (Kurzweil). Writing in 2014, that is just 15-30 years from now. Likewise, predictions about runaway global warming, limits to growth, peak oil, and some versions of the Doomsday argument all cluster around the year 2030. Such a prediction can easily be falsified, because 2030 is rather soon. It also leaves us hopeless, because in such a short timeframe it is unlikely that we could do much to prevent x-risks, especially given how small previous efforts have been. But if we take a one-hundred-year timeframe, we as authors gain several advantages. We signal that we are more respectable and conservative. We will almost never be proved wrong during our lifetime. We have roughly ten times more chance of being right simply because the timeframe is larger. We have plenty of time to implement defense measures, or rather to believe that such measures will be implemented (they will not). We may also tell ourselves that we are correcting for an overoptimistic bias: it is well known that predictions about AI have tended to be overoptimistic.
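The "ten times" figure can be unpacked with a back-of-the-envelope calculation. The sketch below is illustrative only: it assumes a constant annual probability of catastrophe, which the text does not state as a model, and the window lengths (roughly 30 years for "decades", roughly 300 years for "centuries") are hypothetical midpoints, not the author's figures.

% Back-of-the-envelope sketch of the "ten times" claim.
% Assumption (not from the text): a constant annual hazard p,
% with illustrative windows of 30 and 300 years.
\[
  P(\text{event within } T \text{ years}) \approx p\,T
  \qquad \text{for } p\,T \ll 1,
\]
\[
  \frac{P_{300}}{P_{30}} \approx \frac{p \cdot 300}{p \cdot 30} = 10 .
\]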
TRANSCRIPT
Decades or centuries? Timeframe of the risks of human extinction
Alexey Turchin, Longevity party
Timeframe
Open question: when?
Timeframe: the period within which x-risk either happens or is permanently prevented
Two theories about x-risk timeframe:
Decades (15-30 years)
Centuries (now-500 years)
“Decades” scenario
X-risk: 2030-2050
Probability is rising exponentially
Chaotic and complex processes near the event horizon (Technological singularity)
AI is main factor
Decades: 10-30 years
Timing of creation of superhuman AI and other super-technologies: 2030 (Vinge), 2045 (Kurzweil)
Superhuman AI will:
either destroy humanity
or prevent x-risks
The period of vulnerability to x-risks will end after the creation of superhuman AI
Arguments for decades scenario
Nano-Bio-Info-Cogno convergence
Everything appears simultaneously
Arguments for decades scenario
Exponential growth of technologies
Exponential growth of x-risks
Deadly viruses – cheaper
AI – simpler
Arguments for decades scenario
Possible triggers in the near future:
World war
New arms race
Peak oil
Runaway global warming
A smaller catastrophe could trigger a bigger one
Centuries scenario
50-500 years from now
Rare events
Accidental
Mutually independent
Linear distribution of probability over time (see the sketch after this slide)
Prevention by space settlement
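The contrast between the two scenarios can be made concrete with a toy hazard model. This is a hypothetical sketch, not a model given in the talk: h_0 (the initial annual hazard) and r (the technology growth rate) are assumed parameters.

% Toy model contrasting the two scenarios (an assumption, not the author's model).
% Let h(t) be the annual hazard of global catastrophe at time t.
\[
  \text{Decades: } h(t) = h_0 e^{r t}
  \;\Rightarrow\;
  P(t) = 1 - \exp\!\Bigl(-\tfrac{h_0}{r}\bigl(e^{r t}-1\bigr)\Bigr),
\]
\[
  \text{Centuries: } h(t) = h_0
  \;\Rightarrow\;
  P(t) = 1 - e^{-h_0 t} \approx h_0 t \quad (h_0 t \ll 1).
\]

Under an exponentially growing hazard, cumulative risk concentrates within decades; under a constant hazard it accumulates roughly linearly over centuries, which is the "linear distribution of probability" above.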
Arguments for centuries scenario:
Most predictions about AI: false
Most predictions about near-future global catastrophe: false
Arguments for centuries scenario:
Exponential growth levels off
Moore's law stops
Future growth is linear
Arguments for centuries scenario:
X-risks:
Independent
Accidental
Unknown origin
No chain reaction
Public bias toward the centuries scenario:
Long-term predictions:
seem more scientific
have less chance of being proved false
improve the author's reputation
seem to help prevent x-risks
John Leslie – 500 years (1996)
Nick Bostrom – 200 years (2001)
Martin Rees – 100 years (2003)
The decades scenario is worse:
Sooner
Less time to prepare
More complex
Military AI – Unfriendly
In our lifetime
Conclusion
Open question: timeframe
It depends on whether future technologies develop exponentially or linearly
Different risks interact in complex and unpredictable ways near Technological Singularity
It could happen as soon as the next 15 years
We need to search for effective modes of action to prevent x-risks
Create social demand for preventing existential risks
Example: the fight against nuclear war in the 1980s
Political movement for x-risk prevention and life extension.
Near-term risk is more motivating for action