
Decades or centuries? Timeframe of the risks of human extinction

Alexey Turchin, Longevity party

Timeframe

Open question: when?

Timeframe: the period in which an x-risk either happens or is prevented

Two theories about the x-risk timeframe:

Decades (15-30 years from now)

Centuries (now to 500 years from now)

“Decades” scenario

X-risk: 2030-2050

Probability is rising exponentially (see the sketch after this slide)

Chaotic and complex processes near the event horizon (Technological Singularity)

AI is the main factor
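A minimal sketch of what “probability rising exponentially” means, assuming an illustrative hazard rate h(t) that starts at a small value h_0 and doubles on a fixed timescale (the parameter values below are hypothetical, not from the talk):

\[
h(t) = h_0 e^{kt}, \qquad
P(t) = 1 - \exp\!\left(-\int_0^t h(s)\,ds\right)
     = 1 - \exp\!\left(-\frac{h_0}{k}\bigl(e^{kt} - 1\bigr)\right)
\]

For example, with h_0 = 0.1% per year and a ten-year doubling time (k = ln 2 / 10), the cumulative probability P(t) passes 50% after about 56 years, even though the initial annual risk is tiny.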

Decades: 10-30 years

Timing of the creation of superhuman AI and other super-technologies: 2030 (Vinge), 2045 (Kurzweil)

Superhuman AI will either:

destroy humanity, or

prevent x-risks

The period of vulnerability to x-risks ends after the creation of superhuman AI

Arguments for decades scenario

Nano-Bio-Info-Cogno convergence

Everything appears simultaneously

Arguments for decades scenario

Exponential growth of technologies

Exponential growth of x-risks

Deadly viruses – cheaper to create

AI – simpler to create

Arguments for decades scenario

Possible triggers in the near future:

World war

New arms race

Peak oil

Runaway global warming

A smaller catastrophe triggers a bigger one

“Centuries” scenario

50-500 years from now

Rare events

Accidental

Mutually independent

Linear distribution of probability (see the sketch after this slide)

Prevention by space colonization
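For contrast, a minimal sketch of “linear distribution of probability”, assuming a constant hazard rate (reusing the illustrative h_0 = 0.1% per year from the earlier sketch):

\[
h(t) = h_0, \qquad
P(t) = 1 - e^{-h_0 t} \approx h_0 t \quad \text{for } h_0 t \ll 1
\]

Under this model the cumulative risk grows almost linearly at first, reaching roughly 10% after a century, and passes 50% only after ln 2 / h_0 ≈ 690 years.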

Arguments for centuries scenario:

Most predictions about AI: false

Most predictions about near-future global catastrophes: false

Arguments for centuries scenario:

Exponential growth levels off (see the sketch after this slide)

Moore’s law stops

Linear future growth
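A sketch of the “leveling off” argument, assuming technological capability follows an illustrative logistic (S-shaped) curve rather than a pure exponential; the ceiling L, rate r, and midpoint t_0 are hypothetical parameters:

\[
x(t) = \frac{L}{1 + e^{-r(t - t_0)}}
\]

For t well below t_0 this is nearly indistinguishable from exponential growth at rate r; near t_0 growth is approximately linear with slope rL/4; beyond that it saturates at the ceiling L. On this reading, Moore’s law is the early exponential segment of an S-curve.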

Arguments for centuries scenario:

X-risks:

Independent

Accidental

Unknown origin

No chain reaction

Public bias for centuries scenario:

Long-term predictions:

Seem more scientific

Have less chance of being proven false

Improve the predictor's reputation

Help to prevent x-risks

John Leslie – 500 years (1996)

Nick Bostrom – 200 years (2001)

Martin Rees – 100 years (2003)

Decades scenario is worse

Sooner

Less time to prepare

More complex

Military AI – Unfriendly

In our lifetime

Conclusion

Open question: timeframe

It depends on whether future technologies develop exponentially or linearly

Different risks interact in complex and unpredictable ways near the Technological Singularity

It could happen as soon as the next 15 years

We need to search for effective modes of action to prevent x-risks

Create social demand for preventing existential risks

Example: the fight against nuclear war in the 1980s

A political movement for x-risk prevention and life extension

Near-term risk is more motivating for action