TRANSCRIPT
Professor Nick Bostrom
Director, Future of Humanity Institute
Oxford Martin School, Oxford University
[Chart: qualitative categories of risk, plotted by SCOPE (personal, local, global, trans-generational, pan-generational, and beyond that "(cosmic)") against SEVERITY (imperceptible, endurable, crushing, and beyond that "(hellish)"). Example risks plotted include: loss of one hair; car is stolen; fatal car crash; congestion from one extra vehicle; recession in one country; genocide; global warming by 0.01 °C; thinning of ozone layer; one original Picasso painting destroyed; biodiversity reduced by one species of beetle; destruction of cultural heritage; ephemeral global tyranny; global dark age; aging. "Global catastrophic risk" labels the region of at least global scope and at least crushing severity; an X at pan-generational scope and crushing severity marks "existential risk".]
The risk of creativity
?
Hazardous future techs?
• Machine intelligence
• Synthetic biology
• Molecular nanotechnology
• Totalitarianism-enabling techs
• Human modification
• Geoengineering
• Unknown
• Unknown
• Unknown
• Unknown
Technological determinism?
Need for speed?
“I instinctively think go faster. Not because I think this is better for the world. Why should I care about the world when I am dead and gone? I want it to go fast, damn it! This increases the chance I have of experiencing a more technologically advanced future.”
— the blog-commenter “washbash”
Principle of differential technological development
Retard the development of dangerous and harmful technologies, especially ones that raise the level of existential risk; and accelerate the development of beneficial technologies, especially those that reduce the existential risks posed by nature or by other technologies.
Biological cognition
Networks and organizations
(Brain-computer interfaces)
Whole brain emulation
Artificial intelligence
Embryo selection
1. Conduct very large genome-wide association studies, and select a number of embryos that are higher in desired genetic characteristics.
2. Overcome various ethical scruples.
3. Use to select during IVF.
Iterated embryo selection
1. Genotype and select a number of embryos that are higher in desired genetic characteristics.
2. Extract stem cells from those embryos and convert them to sperm and ova, maturing within six months or less.
3. Cross the new sperm and ova to produce embryos.
4. Repeat until large genetic changes have been accumulated.
Maximum IQ gains from selecting among a set of embryos

Selection                    IQ points gained
1 in 2                       4.2
1 in 10                      11.5
1 in 100                     18.8
1 in 1000                    24.3
5 generations of 1 in 10     < 65  (b/c diminishing returns)
10 generations of 1 in 10    < 130 (b/c diminishing returns)
Cumulative limits (additive variants optimized for cognition)   100+ (< 300 b/c diminishing returns)
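The one-generation rows of this table can be sanity-checked with a small simulation. Assuming, purely for illustration (the distribution and its 7.5-point within-family standard deviation are my assumptions, not figures from the talk), that each embryo's genetic IQ potential is an independent normal draw, the expected gain from keeping the best of n embryos is the expected maximum of n such draws:

```python
import random

def expected_selection_gain(n_embryos, sd=7.5, trials=100_000, seed=0):
    """Monte Carlo estimate of the expected IQ gain from picking the best
    of n_embryos, assuming each embryo's genetic IQ potential is an
    independent draw from a normal distribution with the given
    within-family standard deviation (an illustrative assumption)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.gauss(0.0, sd) for _ in range(n_embryos))
    return total / trials

for n in (2, 10, 100):
    # approximately 4.2, 11.5 and 18.8 points respectively
    print(n, round(expected_selection_gain(n), 1))
```

With sd = 7.5 this roughly reproduces the table's one-generation rows (4.2, 11.5, 18.8); the multi-generation rows grow more slowly than naive repetition would suggest because, as the slide notes, returns diminish.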
Possible impacts?
Biological cognition
Networks and organizations
(Brain-computer interfaces)
Whole brain emulation
Artificial intelligence
• Decision theory
• First-order logic
• Heuristic search
• Decision trees
• Alpha-Beta Pruning
• Hidden Markov Models
• Policy iteration
• Backprop algorithm
• Evolutionary algorithms
• Support vector machines
• Hierarchical planning
• Algorithmic complexity theory
• TD learning
• Bayesian networks
• Big Data
• Convolutional neural networks
Brain emulation?
Applications
• Algorithmic trading
• Route-finding software
• Medical decision support
• Industrial robotics
• Speech recognition
• Recommender systems
• Machine translation
• Face recognition
• Search engines
• Equation-solving and theorem-proving
• Automated logistics planning
• Airline reservation systems
• Spam filters
• Credit card fraud detection
• Game AI
• …
Game AI
Checkers         Superhuman
Backgammon       Superhuman
Traveller TCS    Superhuman in collaboration with human
Othello          Superhuman
Chess            Superhuman
Crosswords       Expert level
Scrabble         Superhuman
Bridge           Equal to the best
Jeopardy!        Superhuman
Poker            Varied
FreeCell         Superhuman
Go               Very strong amateur level
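Several of these game results rest on classical minimax search with Alpha-Beta pruning, one of the techniques listed above. A minimal sketch; the toy game tree and its leaf values are invented for illustration:

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning over a toy game tree.
    A node is either a numeric leaf value or a list of child nodes."""
    if not isinstance(node, list):          # leaf: static evaluation
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # opponent will never allow this branch
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if beta <= alpha:               # we will never choose this branch
                break
        return value

# Small hand-built tree; its minimax value is 6, and pruning skips
# the remaining leaves of the branch starting with 7.
tree = [[[5, 6], [7, 4, 5]], [[3]]]
print(alphabeta(tree, True))               # prints 6
```

Pruning never changes the minimax value; it only avoids searching branches that a rational opponent would never permit, which is what made deep search in games like chess tractable.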
When will HLMI be achieved?
Survey     10%    50%    90%
PT-AI      2023   2048   2080
AGI        2022   2040   2065
EETN       2020   2050   2093
TOP100     2024   2050   2070
Combined   2022   2040   2075
How long from HLMI to SI?
           within 2 yrs   within 30 yrs
TOP100     5%             50%
Combined   10%            75%
Difficulty of achievement
Fast – minutes, hours, days
Slow – decades, centuries
Intermediate – months, years
An AI takeover scenario
What do alien minds want?
Principles of AI motivation
• The orthogonality thesis – Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.
• The instrumental convergence thesis – Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent’s goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents.
  – self-preservation, goal content integrity, cognitive enhancement, technological perfection, resource acquisition
The challenge
• Solve the intelligence problem and the control problem
• In the correct order!
Principle of differential technological development
Capability control Motivation selection
Control methods
• Reliable self-modification (“tiling agents”)
• Logical uncertainty (reasoning without logical omniscience)
• Reflective stability of decision theory
• Decision theory for Newcomb-like problems
• Corrigibility (accepting modifications)
• The shutdown problem
• Value loading
• Indirect specification of decision theory
• Domesticity (goal specification for limited impact)
• The competence gap
• Weighting options or outcomes for variance-normalizing solution to moral uncertainty
• Program analysis for self-improvement
• Reading values and beliefs of AIs
• Pascal’s mugging
• Infinite ethics
• Mathematical modelling of intelligence explosion
Technical research questions
Theoretical computer science
Parts of mathematics
Parts of philosophy
Relevant fields
• History and difficulty of international technology coordination (treaties)
• Past progress in artificial intelligence
• Survey of intelligence measures
• Survey of endogenous growth theories in economics
• History of opinions on danger among AI experts
• Examine past large, (semi-)secret tech projects (Manhattan, Apollo)
• Examine past price trends in software, hardware, networking, etc.
• Analyse the technological completion conjecture
• Search for additional technology couplings
• Search for plausible “second-guessing” policy arguments
• History of policy on technological / scientific / catastrophic risks
Strategic research questions
Technology forecasting
Risk analysis
Technology policy and strategy
S&T governance
Ethics
Parts of philosophy
History of technology
Parts of economics, game theory
Relevant fields
The Common Good Principle
Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals.