
1

Ethics of Artificial Intelligence

09005022 - Saurabh Bhola
09005027 - Ashish Tajane
09005028 - Lalit Swami

2

Outline

• What is AI ethics?
• The Laws of Robotics
• The need for AI ethics
• Moral expectations from AI
• Machines and Moral Status
• Superintelligence
• Conclusion

3

WHAT IS AI ETHICS?

4

Ethics

• Ethics is a branch of philosophy that involves systematizing, defending, and recommending concepts of right and wrong behaviour.

• Its subfield 'ethics of technology' asks whether it is always, never, or only contextually right or wrong to invent and implement a technological innovation. Examples: computer viruses, nuclear weapons, environmental sustainability.

5

Ethics of Artificial Intelligence

• It is the part of the 'ethics of technology' specific to robots and other artificially intelligent beings.

• It is divided into two branches:

1) Roboethics: deals with the moral behaviour of human beings as they design, construct, use, and treat artificially intelligent beings.

2) Machine ethics: deals with the moral behaviour of Artificial Moral Agents (AMAs).

6

Roboethics

• Coined by Gianmarco Veruggio in 2002, the term covers how artificially intelligent beings may be used to benefit or harm humans.

• It deals with so-called robot rights: the moral obligations of society towards its machines, analogous to human rights or animal rights.

• An example touching on robot rights: the 2003 Loebner Prize competition.

7

Roboethics (contd.)

• The rules for the 2003 Loebner Prize competition explicitly addressed the question of robot rights:

If, in any given year, a publicly available open source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry. If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the USA or in the venue of the contest, the Cash Award and Gold Medal in its own right.

8

Machine Ethics

• It focuses on how a machine learns new things: the algorithms involved (discussed later), whether it can learn on its own, whether it can become superior to humans, and so on.

• Isaac Asimov considered such issues in his book I, Robot. He devised three laws of robotics meant to define the actions and morality of any artificially intelligent being.

9

THE LAWS OF ROBOTICS

10

The Three Laws of Robotics

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
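
A minimal sketch (our illustration, not Asimov's own formalism) of the strict priority among the laws: each candidate action is scored as a tuple of (First-Law cost, Second-Law cost, Third-Law cost), and tuples compare lexicographically, so any First-Law violation outweighs everything below it. The action names and hand-assigned scores are hypothetical; judging real "harm" is, as later slides argue, the genuinely hard part.

```python
# Toy sketch: encode the Laws' strict priority as a lexicographic cost
# over candidate actions. Scores are hand-assigned for illustration.

CANDIDATES = {
    # action: (harm_to_humans, disobedience_to_orders, harm_to_self)
    "obey_harmful_order": (1, 0, 0),  # violates the First Law
    "refuse_the_order":   (0, 1, 0),  # violates only the Second Law
    "obey_risky_order":   (0, 0, 1),  # costs only the robot itself
}

def law_cost(action):
    """Lower is better; Python compares tuples lexicographically, so
    the First-Law term dominates the Second, which dominates the
    Third -- exactly the laws' ordering."""
    return CANDIDATES[action]

print(min(CANDIDATES, key=law_cost))  # -> obey_risky_order
```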

11

I, Robot

• A book by Asimov containing nine short stories: Robbie, Runaround, Reason, Catch That Rabbit, Liar!, Little Lost Robot, Escape!, Evidence, and The Evitable Conflict.

• Recurring characters: Dr. Susan Calvin, and Powell and Donovan, members of a field-testing team.

• Summaries of two of the stories follow on the next slides.

12

Runaround

• It features the first explicit appearance of the Three Laws of Robotics as devised by Isaac Asimov.

• It is the story of Powell, Donovan, and Robot SPD-13 (known as 'Speedy'), sent to Mercury to restart a mining operation abandoned ten years before.

• It focuses on the problems that occur when one law contradicts another, and shows how the laws supersede one another.

13

Liar!

• In this story, the robot RB-34 has telepathic abilities.

• It becomes the robot's duty to tell people what others think of them. But since the First Law still applies, the robot deliberately lies in order to avoid hurting their feelings and to make people happy.

• However, by lying, it hurts them anyway.

• An irreversible logical conflict arises!

14

[Slide shows illustrations from the stories Runaround and Liar!]

15

The Zeroth Law

• The Zeroth Law gets its name because lower-numbered laws supersede higher-numbered ones, and this law precedes even the First.

• It states: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.

• But how does a robot decide in practice? Human beings are concrete objects, whereas humanity is an abstraction; injury to it can hardly be judged or estimated.

• Under this law, a robot may not harm a human being, unless it can show that the harm done would ultimately benefit humanity in general.

16

The Fourth Law

• What happens if a human-like robot is built in the future, and another robot wreaks destruction in order to save this one human-like 'life'?

• Hence the Fourth Law states that a robot must identify itself as a robot in all cases – in other words, a robot must know that it is a robot.

17

Ambiguities and Loopholes

• A robot may unknowingly breach any of the laws stated before. Restating the First Law: a robot may do nothing that, to its knowledge, will harm a human being; nor, through inaction, knowingly allow a human being to come to harm.

• For instance, a robot could be ordered to add something to a person's food without knowing that it is poison.

• A criminal could even divide a task among multiple robots so that no individual robot recognizes that its actions would lead to harming a human being.

18

Ambiguities and Loopholes (contd.)

• Another criticism of Asimov's robot laws is that the installation of unaltered laws into a sentient consciousness would be a limitation of free will and therefore unethical.

• How does a robot distinguish a 'human' from a 'robot'?

• Moreover, how does it identify 'harm'? Is a scolding by parents harm to their children?

• A harm to one may be a benefit to another, and vice versa. Then there are long-term and short-term harms.

19

Ambiguities and Loopholes (contd.)

• As discussed in medical robotics, a robot would have to be programmed to accept the necessity of inflicting damage on a human during surgery in order to prevent the greater harm that would occur if the surgery were not carried out.

20

THE NEED FOR AI ETHICS

21

The need for AI ethics

• If AIs had no effective intrusion into our lives, whether they could be ethical would interest only those who engage in thought experiments.

• But AIs do intrude in our lives!

• The kinds of intrusion are broadly of two sorts:

1) Intrusions with which we have no choice but to interact.

2) Intrusions which cause, and are a result of, significant changes to culture and to human interaction in particular.

22

The need for AI ethics (contd)

• An example of the first kind: the autopilot in passenger flights.

• ATMs fall under the second category, as they influence us by forcing us to take currency in specified denominations (see the sketch below).

• A consequence of this relinquishing of control is that we take a less and less active part in decisions that have moral import.
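
To make the ATM point concrete, here is a minimal sketch (the denominations and the greedy strategy are illustrative assumptions, not any real machine's logic) of how a dispenser limited to fixed notes decides which amounts a user can have:

```python
# Illustrative only: a greedy dispenser limited to fixed notes.
DENOMINATIONS = [100, 50, 20]  # assumed notes; no smaller change held

def dispense(amount):
    """Return a list of notes for `amount`, or None if this greedy
    strategy cannot compose it from the available denominations."""
    notes = []
    for d in DENOMINATIONS:
        while amount >= d:
            amount -= d
            notes.append(d)
    return notes if amount == 0 else None

print(dispense(170))  # [100, 50, 20]
print(dispense(30))   # None -- the machine's design, not the user's
                      # wish, decides which amounts are possible
```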

23

The need for AI ethics (contd)

• What does this relinquishing of control to AIs have to do with whether they are ethical?

• The answer lies in the assumptions we make when we relinquish control in the same way to human beings, i.e., the assumptions we operate with when we hand over control of parts of our lives to other humans.

• We do two things:

1) We establish that these other humans can and will carry out our wishes for that part of our lives.

2) We hold them responsible for the actions they carry out as part of that control.

24

The need for AI ethics (contd)

• An example of relinquishing control is the 2002 Überlingen air disaster, in which a Russian plane carrying children on holiday and a cargo plane crashed into each other near the Swiss border.

• The voice recorders show that the Swiss air traffic controller's order for the Russian pilot to descend contradicted the cockpit collision-warning system's (TCAS) command for the plane to climb.

• The automatic cockpit warning systems had issued simultaneous instructions for the Russian passenger jet to climb and for the cargo jet to descend about 45 seconds before they collided, killing all 71 people aboard the two aircraft.

25

The need for AI ethics (contd)

• Moral consideration usually implies that the agent carrying out our wishes takes our moral values into account.

• We might accord them some independence in moral values, trusting that those values are similar enough to ours to produce the desired outcome.

• In all circumstances where moral delegation occurs, we assume that some sort of (imposed or inherent) moral capability is present in the being delegated to.

26

The need for AI ethics (contd)

• The issue is how far an AI's morality can go beyond being merely an extension of its programmer's morality.

• To what extent would (or do) AIs have the autonomy needed for moral agency, and how extensive would their features or characteristics have to be?

• This morality, considered fully, would include, besides moral consideration, notions of moral worth, moral constraint, moral personhood, and being morally praiseworthy.

27

Turing Says...

• Take Turing’s now famous quote:

“The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. I believe further that no useful purpose is served by concealing these beliefs.”

28

Turing Says... (contd.)

• The common interpretation of this statement is that machines will think. What is more likely is that people will believe that machines can think. This belief is, to some, all that matters.

• The thinking goes: “If it is believed that AIs can think, then why not believe that they can be ethical?”

• If this attitude of acceptance is extended to the (moral) decision-making and action-taking of AIs, their autonomy will grow and they will become more independent of us. The possible impacts of this independence ought to give us cause for pause and reflection.

29

The need for AI ethics (contd)

• Another reason to care whether AIs can be ethical is the effect they might have in changing society if they were able to be ethical.

• One effect might be that the incorporation of machine agents into human practices will accelerate and deepen as artefacts simulate basic social capacities: dependence upon them will grow.

• The attribution of human like agency to artefacts will change the image of both machines and of human beings.

• Given the destructiveness of contemporary society, an examination of the additional influence that an ethical AI would have in the technologising of human social relations is timely.

30

Artificial General Intelligence

• Artificial intelligence falls short of human capabilities in some critical sense.

• It lacks generality!

• Even though AI algorithms have beaten humans in specific domains (for example, Deep Blue beat Garry Kasparov at chess), researchers agree that something important is missing from AI.

• The missing ingredient is generality.

• Artificial General Intelligence (AGI) is the emerging term used to denote “real” AI.

31

Artificial General Intelligence

• Current AI algorithms with human equivalent or superior performance are characterized by a deliberately programmed competence in a single, restricted domain.

• Human intelligence however, is significantly more generally applicable.

• Many safety issues can emerge from AI that operates only in specific domains.

• For example, if we place a cloth inside a toaster, it may catch fire, as the design executes in an unenvisioned context with an unenvisioned side effect.

32

MORAL EXPECTATIONS FROM AI

33

Qualities in AI

• Qualities that we usually consider while building machines:
– Power (speed, accuracy, etc.)
– Scalability: how well an algorithm scales up on larger parallel systems

• Qualities that we also need to consider:
– Transparency
– Auditability
– Incorruptibility
– Responsibility
– Predictability

34

Transparency and Auditability

• A bank uses a machine-learning algorithm to recommend mortgage applications for approval.

• Statistics show that the bank's approval rate for black applicants has been steadily dropping.

• The algorithm is deliberately blinded to the race of applicants.

• The algorithm could be based on:
– a complicated neural network
– a genetic algorithm produced by directed evolution
– decision trees
– Bayesian networks

• Of these, the last two are more transparent and hence more easily auditable (see the sketch below).
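
A small sketch of why the last two are easier to audit, using scikit-learn and synthetic toy data (the features, labels, and resulting thresholds are invented for illustration, not the bank's real model):

```python
# Sketch with invented data: a decision tree's learned rules can be
# printed and read directly, unlike the weights of a neural network.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[30, 1], [55, 0], [40, 1], [25, 0]]  # [income_k, prior_default]
y = [0, 1, 0, 1]                          # 1 = approve (toy labels)

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["income_k", "prior_default"]))
# Each printed split is a human-readable rule an auditor can inspect,
# e.g. to spot a feature (such as address) acting as a proxy for race.
```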

35

Transparency and Auditability

• The algorithm actually uses the address information of applicants.

• Most black applicants were born in, or had previously resided in, predominantly poverty-stricken areas.

• When AI algorithms take on cognitive work with social dimensions – cognitive tasks previously performed by humans – the AI algorithm inherits the social requirements

• So transparency to inspection becomes a desirable feature of AI algorithms

36

Incorruptibility

• Consider a machine-vision system that scans airline luggage for bombs.

• The system should be sturdy against human and other adversaries deliberately searching for exploitable flaws in the algorithm.

• A shape placed next to a pistol in one's luggage should not neutralize recognition of the pistol.

• So it becomes increasingly important that AI algorithms be robust against manipulation; a toy illustration follows.
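
A toy, one-dimensional illustration of the pistol example (entirely invented; real scanners do not work like this): an exact-pattern detector is trivially defeated by an adversary who places an overlapping object that perturbs a single reading.

```python
import numpy as np

# Toy 1-D "luggage scan": a pistol shows up as the exact pattern [5, 9, 5].
PISTOL = np.array([5, 9, 5])

def naive_detector(scan):
    """Flag the scan if the exact pistol pattern appears anywhere."""
    for i in range(len(scan) - len(PISTOL) + 1):
        if np.array_equal(scan[i:i + len(PISTOL)], PISTOL):
            return True
    return False

clean  = np.array([0, 0, 5, 9, 5, 0])
masked = np.array([0, 0, 5, 9, 6, 0])  # adjacent object perturbs one reading

print(naive_detector(clean))   # True  -- pistol found
print(naive_detector(masked))  # False -- exact matching is defeated
```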

37

Predictability

• AI algorithms taking over social functions should be predictable to those they govern.

• There is an analogy with stare decisis, the legal principle that binds judges to follow past precedent whenever possible.

• This may seem incomprehensible: why bind the future to the past?

• Because the job of the legal system is not necessarily to optimize society, but to provide a predictable environment within which citizens can optimize their lives.

38

Responsibility

• When an AI system fails, who is to blame?
– The programmers?
– The end-users?

• Even with systems designed to allow user override, people prefer to blame the AI for any difficult decision that leads to a negative outcome.

39

MACHINES AND MORAL STATUS

40

Moral Status

• According to Francis Kamm:

α has moral status = because α counts morally in its own right, it is permissible/impermissible to do things to it for its own sake.

• A rock has no moral status.

• A human person has moral status.

41

Criteria for Moral Status

• Two criteria are commonly proposed as being importantly linked to moral status

1. Sentience – the capacity for phenomenal experience or qualia, such as the capacity to feel pain and suffer

2. Sapience – a set of capacities associated with higher intelligence, such as self-awareness and being a reason-responsive agent

42

Moral Status in AI

• It is widely agreed that current AI systems have no moral status.

• We may change, copy, delete, or use computer programs as we please, at least as far as the programs themselves are concerned; any constraints we face derive from our duties to other humans or to beings that do have moral status.

43

Principle of Substrate Non-Discrimination

• If two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementations, they have the same moral status

• Rejecting this principle would be akin to racism.

• Holding sentience and functionality constant, it makes no moral difference whether a being is made of silicon or of carbon, or whether its brain uses semiconductors or neurotransmitters.

44

Principle of Ontogeny Non-Discrimination

• If two beings have the same functionality and the same conscious experience, and differ only in how they came into existence, then they have the same moral status.

• Humans have the same moral status regardless of how they came into being: assisted delivery, in vitro fertilization, gamete selection, etc.

• This principle extends that reasoning to entirely artificial cognitive systems.

45

EXOTIC PROPERTIES OF ARTIFICIAL MINDS

46

Exotic Properties

• Non Sentient Sapience

• Variable Subjective Rate of Time

• Reproduction

47

Subjective Rate of Time

• One property that is certainly metaphysically and physically possible for an AI is for its subjective rate of time to deviate drastically from the rate characteristic of a biological human brain.

• This concept is best explained using the idea of whole brain emulation, or uploading.

48

Uploading

• Uploading is the transfer of an intellect from a brain to a digital computer:

1. A high-resolution scan of the brain is made, possibly destroying the original.

2. This 3-D map of the brain is combined with a library of advanced neuroscientific theory.

3. The resulting computational structure and its associated algorithmic behaviour are implemented on a powerful computer.

• On successful uploading, the program replicates the essential functional characteristics of the original brain.

49

Subjective Rate of Time

• Run the upload program on a faster computer.

• If the computer runs a thousand times faster and the upload has an input device such as a camera, the world will appear to the upload to be slowed down by a factor of a thousand.

• 1 second of objective time ≈ 17 minutes of subjective time (arithmetic checked below).

• In cases where the duration of an experience is ethically relevant, which time should be counted?
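
A quick check of the slide's arithmetic (the thousandfold speed-up is the slide's own assumption):

```python
speedup = 1000                    # emulation runs 1000x faster
subjective_seconds = 1 * speedup  # per 1 second of objective time
print(subjective_seconds / 60)    # 16.67 minutes, i.e. roughly 17 min
```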

50

Reproduction

• An AI could copy itself very quickly, so the population of AIs could grow exponentially (a quick sketch follows below).

• Some mid-level ethical principles that suit contemporary societies might need to be modified if those societies included persons (AIs) with the exotic property of being able to reproduce this rapidly.

• We must not mistake mid-level ethical principles for foundational normative truths.
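
The sketch referred to above (the one-copy-per-step rate is an arbitrary assumption): even this modest rate yields over a thousand AIs after ten steps.

```python
# Toy model: assume each existing AI makes one copy of itself per step.
population = 1
for step in range(10):
    population *= 2    # every existing copy makes one more copy
print(population)      # 1024 after only 10 steps -- exponential growth
```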

51

SUPERINTELLIGENCE

52

Intelligence Explosion

• An AI sufficiently intelligent to understand its own design could redesign itself or create a more intelligent successor system, which could redesign itself yet again to become even more intelligent, and so on in a positive feedback cycle (a toy model follows below).

• Superintelligence may also be achieved by increasing processing speed.

• Minds that think like humans but much faster are also called weak superintelligence.

• Smarter minds pose great potential benefits as well as great risks.
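
A toy numerical illustration of the feedback cycle (the update rule and the constant are invented, not a real model): each redesign improves the system by an amount that grows with its current capability, so growth accelerates over time.

```python
# Invented toy model of recursive self-improvement: the gain from each
# redesign grows with the square of current capability, so the curve
# bends upward -- the "positive feedback cycle" of the slide.
intelligence = 1.0
k = 0.05  # assumed redesign efficiency
for cycle in range(1, 11):
    intelligence += k * intelligence ** 2
    print(f"cycle {cycle:2d}: {intelligence:.2f}")
```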

53

Intelligence Explosion (cont.)

• Intelligence is inherently impossible to control.

• Most self-modifying minds will naturally have stable utility functions.

• Ethical perspectives differ across civilizations.

• How do we avoid ethical stagnation? By avoiding building minds that are stable along the ethical dimensions in which human civilizations seem to exhibit directional change.

• How do you build an AI which, when it executes, becomes more ethical than you?

• We need to comprehend the structure of ethical questions in the way we have already comprehended the structure of chess.

54

CONCLUSION

55

The Flip Side by Aaron Sloman

Should we be afraid of what intelligent machines might do to us?

It is very unlikely that intelligent machines could possibly produce more dreadful behaviour towards humans than humans already produce towards each other, all round the world even in the supposedly most civilised and advanced countries, both at individual levels and at social or national levels.

56

The Blind Side

• Here is ASIMO, described by Honda as the world's most advanced humanoid robot.

57

The Blind Side (contd.)

• Despite wonderful advances, the humanoids that we create fall short of those that we imagine.

• We need to shift away from the idea of humanizing a robot and focus instead on robotising the things we already have: improving existing objects rather than creating a single device, exterior to all the others, that interacts with the regular household.

58

The Blind Side (contd.)

• An example: instead of having a robot park my car, it would be better if my car parked itself.

• Another issue: should these devices be involved in social activities too?

• Imagine your children growing up with a pet robot, playing games and sharing feelings with it, and missing out on what you actually experienced: a childhood of fun and frolic.

59

References

• Wikipedia, the free encyclopedia:
http://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence
http://en.wikipedia.org/wiki/Three_Laws_of_Robotics

• Richard Lucas, "An Outline for Determining the Ethics of Artificial Intelligence"

• Nick Bostrom and Eliezer Yudkowsky, "The Ethics of Artificial Intelligence"

• Nick Bostrom, "Ethical Issues in Advanced Artificial Intelligence"

• Links:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/asimov-three-laws.html
http://www.wired.com/gadgetlab/2011/03/robots-humans/