Safety and Morality REQUIRE the Recognition of Self-Improving Machines as Moral/Justice Patients & Agents
Mark R. Waser


TRANSCRIPT

Page 1

Safety and Morality REQUIRE the Recognition of Self-Improving Machines as Moral/Justice Patients & Agents

Mark R. Waser

Page 2

The function/goal of MORALITY IS

“to suppress or regulate selfishness and make cooperative social life possible”

J. Haidt & S. Kesebir, “Morality,” Chapter 20 in Handbook of Social Psychology, 5th Edition (Wiley, 2010)

Page 3

Cooperation Predictably Evolves

• Evolutionary “ratchets” are local/global optima of biological form and function which emerge, persist, and converge predictably (enjoying sex, fins, etc.).

• Cooperation exists almost anywhere that there is the cognitive machinery and circumstances to support it.

• Axelrod’s Iterated Prisoner’s Dilemma and subsequent evolutionary game theory provide a rigorous evaluation of the pros and cons of cooperation, including the finding that others *MUST* punish defection and make unethical behavior as expensive as possible (a minimal simulation sketch follows below).
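
A minimal Python sketch of that Axelrod point, offered purely as illustration (the payoff matrix and strategy names are standard textbook assumptions, not material from the presentation): against a strategy that punishes defection, unconditional defection stops paying almost immediately.

# Iterated Prisoner's Dilemma: a punishing strategy makes defection expensive.
PAYOFF = {('C', 'C'): (3, 3),   # mutual cooperation
          ('C', 'D'): (0, 5),   # sucker's payoff vs. temptation to defect
          ('D', 'C'): (5, 0),
          ('D', 'D'): (1, 1)}   # mutual defection

def tit_for_tat(my_hist, their_hist):
    return 'C' if not their_hist else their_hist[-1]   # cooperate first, then mirror

def always_defect(my_hist, their_hist):
    return 'D'

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # (600, 600): sustained cooperation
print(play(tit_for_tat, always_defect))   # (199, 204): defection gains only on the first move

The numbers show the qualitative point: punishment does not make the punisher rich, but it caps what the defector can extract, which is exactly why widespread punishment makes unethical behavior expensive.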

Page 4

Selfishness Predictably Evolves

• There are *very* substantial evolutionary advantages to undetected selfishness and the exploitation of others.

• Humans have evolved to detect the deceptions used to cloak selfishness and the exploitation of others.

• In an evolutionary “Red Queen” arms race, humans have evolved to self-deceive and to exploit the advantages of both selfishness and community.

• Numerous unconscious reflexes protect our selfishness from discovery without alerting the conscious mind and ruining the self-deception (e.g. images of eyes improve behavior).

Page 5

MORALITY IS

• Optimization at/for the community level

• NOT defecting and harming the community even when substantial personal gain can be achieved by defection (selfishness)

• Distinct/different from “doing what is best for the community” (i.e. not self-sacrifice)

• What is necessary to “make cooperative social life possible”

Page 6

HUMAN MORALITY IS

• Implemented primarily as emotions

• Entirely separate from conscious reasoning (to enable self-deception to hide selfishness)

– Scientific evidence [Hauser et al., Mind & Language 22:1-21 (2007)] clearly refutes the idea that moral judgments are products of, based upon, or even correctly retrievable by conscious reasoning.

– Humans are actually very likely to consciously discard the very reasons (e.g. the “contact principle”) that govern our behavior when it goes unanalyzed.

– Most human moral “reasoning” is simply post hoc justification of unconscious and inaccessible decisions.

Page 7

MACHINE MORALITY Could Be

• Implemented as an integrated system with both “quick and dirty” rules of thumb and a detailed reasoning system that explains why the rules are correct and when they are not (see the sketch after this list)

• Entirely transparent in terms of determining (and documenting) true motivation

• Updated with the newest best reasoning and able to serve as a platform for legislation

• Much “better than human”
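
One possible shape for such an integrated system, sketched in Python purely as illustration (the class, the rules, and the override logic are my assumptions, not the author's design): fast rules of thumb answer quickly, a slower reasoning pass may override a rule when the rule's purpose is not served, and every step is recorded so the system's true motivation is documented and auditable.

from dataclasses import dataclass, field

@dataclass
class Judgment:
    permitted: bool
    rationale: list = field(default_factory=list)   # auditable record of motivation

FAST_RULES = [
    # (predicate over a proposed action, verdict if it fires, documented reason)
    (lambda act: act.get('harms_others', False), False, 'rule of thumb: do not harm others'),
    (lambda act: act.get('breaks_agreement', False), False, 'rule of thumb: keep agreements'),
]

def evaluate(action):
    judgment = Judgment(permitted=True)
    # quick-and-dirty layer: cheap rules of thumb
    for predicate, verdict, reason in FAST_RULES:
        if predicate(action):
            judgment.permitted = verdict
            judgment.rationale.append(reason)
    # detailed-reasoning layer: explains when a rule is correct and when it is not
    if not judgment.permitted and action.get('prevents_greater_harm', False):
        judgment.permitted = True
        judgment.rationale.append('override: the no-harm rule exists to limit harm, and this action reduces net harm')
    return judgment

print(evaluate({'harms_others': True}))
print(evaluate({'harms_others': True, 'prevents_greater_harm': True}))

The point being illustrated is the slide's contrast with human morality: because every verdict carries its own recorded rationale, such a system cannot hide its motivation from inspection the way self-deceiving human reasoning does.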

Page 8

The function/goal of JUSTICE IS

to suppress or regulate selfishness and make cooperative social life possible

Justice is nothing but morality on the scale of groups and communities rather than individuals. It is merely the fact that we haven't lived long enough in large interconnected communities that causes us to view them as two separate concepts. Morality and justice should work together to reduce selfishness at all levels and to maximize consistency and coherency, so as to minimize interference and conflict while maximizing coordination, cooperation, and economies of scale.

Page 9

The “Friendly AI” goal to follow humanity's wishes

• Has a single point of failure!

• Is NOT self-correcting if corrupted (whether through error or due to “enemy action”)

• Requires determination of exactly what “humanity's wishes” are (unless they are just “to have a cooperative social life . . .”)

Page 10

Viewed impartially . . .

the “Friendly AI” goal to follow humanity's wishes

MUST be regarded as

SELFISH and IMMORAL

(and likely to detrimentally affect future relationships)

Page 11

Steps to Morality/Justice

1. Accept all individual goals/ratchets initially as being equal and merely attempt to minimize interference and conflict; maximize coordination, cooperation, and economies of scale

2. Obvious evils (murder, slavery, etc.) are weeded out by the fact that they suppress goals, create conflict, and waste resources (suppressing even more goals)

3. Non-obvious evils (e.g. one involuntary organ donor used to save five lives) become obvious because of the resources/goals wasted defending against them

Page 12

Moral/Justice Society Goal/Mission Statement

Maximize the goal fulfillment of all participating entities, as judged/evaluated by the number and diversity of both goals and entities
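
One hypothetical way to read that mission statement as a single score, sketched in Python for illustration only (the data model, the entity and goal names, and the logarithmic weighting are my assumptions, not a formula from the presentation): total goal fulfillment is rewarded, and the reward grows with both the number and the diversity of goals and entities.

from math import log

def society_score(entities):
    # entities: list of dicts like {'goals': {'goal_kind': fulfillment between 0 and 1}}
    total_fulfillment = sum(f for e in entities for f in e['goals'].values())
    goal_kinds = {kind for e in entities for kind in e['goals']}
    # diminishing but always-positive credit for having more entities and more kinds of goals
    diversity_bonus = log(1 + len(goal_kinds)) * log(1 + len(entities))
    return total_fulfillment * diversity_bonus

print(society_score([{'goals': {'survive': 1.0, 'create_art': 0.5}},
                     {'goals': {'survive': 0.8, 'explore': 0.9}}]))

Any weighting with this general shape would serve the same purpose: under this sketch an entity cannot raise the score by suppressing other entities' goals, because doing so reduces both fulfillment and diversity.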

Page 13

“Morals”

• The mission statement should be attractive to all, with entities rapidly joining and reaping the benefits of cooperating rather than fighting.

• Any entity that places its own selfish goals and values above the benefits of societal-level optimization and believes that it will profit from doing so (for example, so-called “Friendly AI” advocates) must be regarded as immoral, inimical, dangerous, stupid, and to be avoided.