
Page 1: Artificial intelligence and ethics

Artificial Intelligence and Ethics: An Exercise in the Moral Imagination

AUTHOR: MICHAEL R. LACHAT

Page 2: Artificial intelligence and ethics

INTRO

• The possibility of constructing a personal AI raises ethical and religious questions that have been dealt with seriously only in imaginative works of fiction, such as Shelley's Frankenstein.

• Most people are skeptical that such a construction is possible.

• The common religious warning is against the danger of "playing God"; God is a moral concept, as Kant has pointed out.

• This warning can, however, be "demythologized" into a word of caution toward all human undertakings, the effects of which might be irreversible or potentially harmful to us and to others.

•Words of Warning--

1. Above all, do no harm.

2. Harm to a few may be justified to benefit the many (utilitarianism).

3. All rational beings capable of moral evaluation must be considered as ends in themselves rather than as means to another's ends (Kant).

Page 3: Artificial intelligence and ethics

Is Artificial Intelligence Possible in Principle?

We need to cautiously remind ourselves that there is a distinction between intelligent behavior and personally intelligent behavior.

a) A machine can learn to run a maze as well as a rat can, and at the level of rat intelligence it could be said to be fairly "smart."

b) The intelligence of the human brain, however, is of a different magnitude entirely (e.g., multipurpose activity, flexible reaction, empathy).

Page 4: Artificial intelligence and ethics

The Obstacle of Dualism

1. Artificial Intelligence is possible in principle.

2. There is no evidence of thought taking place without the brain.

◦ Dualism holds that a human has a soul that is not dependent upon the brain for its existence, or that humans are possessed of some almost magical "substance" which could not be duplicated artificially.

3. Intelligent thought is dependent for its existence on the neural “machinery” of the brain, on the flow of electricity through that “hardware.”

4. Electrical patterns in the brain can be compared to the software of the computer, and the brain's neurons to the computer's hardware, the "neural" networks of the chip.
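To make the hardware/software analogy concrete, here is a minimal sketch of a single artificial neuron in Python. It is purely illustrative and not from LaChat's text; the inputs, weights, and threshold are arbitrary assumptions.

```python
# A toy artificial neuron: the weights stand in for the brain's learned
# "software" (patterns of electrical activity), while the fixed
# sum-and-threshold rule stands in for the neural "hardware".
# Illustrative only; the weights and threshold are arbitrary choices.

def neuron(inputs, weights, threshold=0.5):
    """Fire (return 1) if the weighted sum of the inputs reaches the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Two input signals with unequal influence on the output.
print(neuron([1, 0], weights=[0.7, 0.2]))  # 0.7 >= 0.5, so the neuron fires (1)
print(neuron([0, 1], weights=[0.7, 0.2]))  # 0.2 < 0.5, so it stays silent (0)
```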

Page 5: Artificial intelligence and ethics

Two approaches in the history of AI research

Cybernetic Model - relies on the "physiological" similarity between neurons and hardware. We would need to duplicate the hardware by adequately "mapping" the brain, and we would need extremely sophisticated "tools" and "materials" with which to duplicate it.

◦ Problematic to duplicate

Information-Processing (Engineering) Model – pays less attention to the level of hardware similarity. All it asserts is that a machine demonstrates intelligence when its behavior satisfies behavioral definitions of intelligent action. It doesn't have to look like a human, just exhibit the same sort of behavior.

• It is not out of the realm of possibility that pain consciousness might be electrochemically duplicable.

• To what extent are pain and emotion necessary to personal intelligence?

• Perhaps, then, emotions and body consciousness are indispensable requisites of the personally intelligent.

• These are difficult issues that probably can only be resolved through the trial and error of experiment.

Page 6: Artificial intelligence and ethics

Adjudicating the Achievement of a Personal AI

1. How can we reach agreement that we have achieved a personal AI? (p. 73)

◦ One approach is to define intelligence behaviorally and then test the machine to see if it exhibits it, using the Turing test (a toy sketch of such a test appears at the end of this page).

2. Problems with the Turing test (pp. 73-74) -- the capacity of the interrogator:

a) Should the interrogator know something about computers as well?

b) Would the interrogator be able to “trick” the computer in ways a layperson could not?

c) Would one of the tricks be to shame or insult the subject in order to elicit an emotional response?

d) Perhaps the best interrogator would be another computer.

e) It might need the capacity to anticipate tricks and perhaps even to evince a capacity for self-doubt. In short, it might have to possess a self-reflexive consciousness (the ability to make itself the object of its own thought), a characteristic argued to be a hallmark of the personal self. Such a machine might even have to be able to lie. (p.74)

3. Some philosophers and theologians have argued that an inability to respond to stimuli, particularly to linguistic communication, means the subject, although perhaps morphologically human, is no longer a person.

◦ Definition of a person: an entity that shows the ability to participate meaningfully in a language system.

4. Overall, we have to agree on a definition of a personal AI and set forth means and methods for deciding whether a machine counts as a person.
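As a way of picturing the behavioral approach on this page, here is a minimal, hypothetical Turing-test harness in Python. The respondent functions and the naive judge are invented placeholders, not anything proposed by LaChat; a real interrogation would probe for emotion, self-doubt, and even the capacity to lie, as noted above.

```python
import random

# A minimal, hypothetical Turing-test harness: the judge sees only text
# replies from two unlabeled respondents and must guess which is the machine.
# The canned replies below are placeholders for whatever AI is being adjudicated.

def machine_reply(question):
    return "I would rather not answer that."        # stand-in for the AI under test

def human_reply(question):
    return "That question makes me uncomfortable."  # stand-in for the human control

def interrogate(questions):
    respondents = {"A": machine_reply, "B": human_reply}
    return {label: [(q, reply(q)) for q in questions]
            for label, reply in respondents.items()}

questions = ["What does shame feel like?", "Would you ever lie to me?"]
transcript = interrogate(questions)

# A deliberately naive judge; a serious interrogator would analyze the
# transcript for emotional response, self-reflection, and deception.
guess = random.choice(list(transcript))
print(f"The judge guesses that respondent {guess} is the machine.")
```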

Page 7: Artificial intelligence and ethics

Is the Construction of a Personal AI an Immoral Experiment?

Norbert Wiener (1964) believed that entertaining opinions is one thing, but here we are talking about an experiment: an experiment in the construction of something so intelligent that it might come to be considered a person.

◦ It is one thing to experiment on a piece of machinery, such as a car, and quite another, as the Nuremberg Medical Trials indicated, to experiment with, or in this case toward, persons.

◦ Kant’s imperative to treat persons as ends in themselves rather than as means to an end is at the heart of the matter.

Page 8: Artificial intelligence and ethics

Human Experimentation and the AI Experiment

1. Comparing human experimentation and AI experimentation --

◦ In human experimentation, concern has been weighted in favor of the subject and against the interests and benefits of society.

◦ A strong burden has been placed on the experimenter to prove that benefits overwhelmingly outweigh risks to the health and welfare of the subject.

◦ Is AI therapeutic in any way? Is it of benefit to the subject?

◦ The subject doesn't exist yet for anything to be therapeutic, but we can consider the experiment to be therapeutic only if we maintain that the potential "gift" of conscious life outweighs no life at all.

◦ In Judeo-Christian tradition, life itself has been viewed, for the most part, as a gift from God, and the sanctity of human life has been weighted very strongly against any quality of life ethic (Dyck, 1977; Noonan, 1970).

Page 9: Artificial intelligence and ethics

Human Experimentation and the AI Experiment

2. Comparing human birthing with the birthing of a personal AI --

◦ Human birthing, since Roe v. Wade, is seen as more of a "roulette" mode of experimentation, while AI birthing is more of a deliberate experiment.

◦ With human reproduction, we are not in control of the results or the product, whereas an AI is birthed by deliberate design.

◦ Human reproduction is a necessity for the survival of the species; AI is not.

◦ AI is more of a luxury than a necessity, and so it should fall under the experimental guidelines.

◦ The risk-to-benefit ratio is much higher in AI than in human reproduction; the only benefit of the AI experiment is the gift of conscious life.

◦ There are no compelling reasons to believe that we could ensure even the slightest favorable risk-benefit ratio. Is the necessary conclusion, then, not to do it? (pp. 75-76)

Page 10: Artificial intelligence and ethics

Does a Personal AI Have Rights?

1. Could we guarantee that the AI would have the same rights as actual persons do now?

2. No one shall be held in slavery or servitude. Isn't this the very purpose of robotics?

◦ The right to freedom of movement: must the AI be mobile? Does the AI have the right to arms and legs, and to all of the other human senses as well?

◦ The right to marry and found a family? The Frankenstein monster demanded this right from his creator but was denied.

◦ The right to reproduce -- a frightening prospect (visions of the world being overrun).

◦ The right to slow growth (that is, a maturation process) or the right to a specific gender

3. Is there a way to give pain capacity to an AI without causing it pain, at least no more pain than an ordinary human birth might entail?

LaChat's view: Although I have argued that the bottom line of the morality of the experiment, for the subject, is whether a consciousness of any quality is better than none at all, and although I have also argued that an unnecessary experiment which carries with it the potential for a severely limited existence and possibly unnecessary pain is unethical, I have indicated that if all of the conditions of a sound birth were met, the experiment might then be considered legitimate.

Page 11: Artificial intelligence and ethics

Can An Artificial Intelligence Be Moral?

Free will: the attribution of an intervening variable between stimulus and response that renders a prediction of the response impossible, even given adequate knowledge of all input variables (a toy contrast is sketched at the end of this page).

1. A common criticism of the AI project is that a computer only does what it is programmed to do, that it is without the mysterious property called free will and, therefore, can never become “moral.”

2. Some philosophers might even say that a machine, however intelligent, which is without the capacity to value and to make moral choices cannot be considered an end in itself; therefore, it could not be the possessor of certain human rights (following Kant).

3. Others would say that people make so-called choices to act in certain ways, whether they are factually free or factually determined (or programmed).
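The definition of free will given at the top of this page (an intervening variable that blocks prediction of the response from the inputs alone) can be caricatured in a short Python sketch. This is only an illustration under that stipulated definition, not an argument that either agent is genuinely free.

```python
import random

# Caricature of the "intervening variable" definition of free will:
# the determined agent's response is a pure function of the stimulus,
# while the "free" agent consults hidden internal state, so knowing the
# stimulus alone does not fix the response. Illustration only.

def determined_agent(stimulus):
    # Response fully predictable from the input.
    return "comply" if stimulus == "order" else "ignore"

def free_agent(stimulus, _hidden_state=random.Random()):
    # The hidden state is the intervening variable; the stimulus alone
    # does not determine the output.
    inclination = _hidden_state.random()
    return "comply" if inclination > 0.5 else "refuse"

for _ in range(3):
    print(determined_agent("order"), free_agent("order"))
```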

Page 12: Artificial intelligence and ethics

Emotion

1. Is morality primarily a matter of reason and logic, or is it a matter of emotion and sympathy?

◦ A person must have emotions and reason to have the capacity for ethical decision making.

◦ An AI would probably have to have feelings and emotions, as well as intelligent reason, in order to replicate personal decision making.

◦ Could a personal AI be considered a moral judge? The conditions required in order to make a moral judgment are:

a) Omniscience (knowledge of all relevant facts)

b) Omnipercipience (the ability to vividly imagine the feelings and circumstances of the parties involved, that is, something like empathy)

c) Disinterestedness (freedom from bias)

d) Dispassionateness (freedom from disturbing passion)
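The four conditions just listed can be written down as a simple checklist. The sketch below is a hypothetical illustration of auditing a candidate judge against them; the example assessment of an AI judge is invented and settles nothing about whether such a judge is achievable.

```python
# The four conditions for a moral judge, expressed as a checklist.
# The example assessment is invented for illustration only.

CONDITIONS = {
    "omniscience": "knows all relevant facts",
    "omnipercipience": "can vividly imagine the feelings of those involved",
    "disinterestedness": "has no stake in the outcome",
    "dispassionateness": "is free from disturbing passion",
}

def qualifies(assessment):
    """A candidate counts as a moral judge only if every condition is met."""
    missing = [name for name in CONDITIONS if not assessment.get(name, False)]
    return len(missing) == 0, missing

# Hypothetical reading of the slides: an AI might meet everything except
# omnipercipience (empathy), which LaChat thinks is also duplicable.
ai_judge = {"omniscience": True, "omnipercipience": False,
            "disinterestedness": True, "dispassionateness": True}
print(qualifies(ai_judge))  # (False, ['omnipercipience'])
```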

Page 13: Artificial intelligence and ethics

Emotion

2. An unemotional AI could be considered a better moral judge than a human person.

◦ It might be able to store and retrieve more factual data and not be disturbed by violent passions and interests.

◦ It could be said to be capable of "cool" and "detached" choices.

◦ Omnipercipience (imagining being in someone else's shoes) is the ability a personal AI would seem to be missing.

◦ Although others have said that an AI would lack this trait, LaChat thinks it, too, is duplicable.

Page 14: Artificial intelligence and ethics

The Problem of Casuistry

1. Casuistry deals with the application of general rules to specific, concrete situations.

2. The question of whether all thinking can be formalized in some sort of rule structure is a crucial one for AI in general.

3. A list of moral rules that constitute the elemental building blocks of the moral life (a toy encoding follows the list):

a. Promise keeping

b. Truth telling

c. Reparations

d. Justice

e. Gratitude

f. Beneficence

g. Nonmaleficence

h. Self-improvement
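Whether a list like this can be "formalized in some sort of rule structure" (point 2 above) can at least be posed in code. The sketch below is a naive, assumed encoding, not a serious moral calculus: it merely tallies which rules a described action honors or violates, and deliberately offers no way to weigh one rule against another.

```python
# A naive, assumed encoding of the moral rules listed above. It only
# tallies which rules an action honors or violates; it has no machinery
# for weighing one rule against another, which is exactly where the
# problem of casuistry begins.

RULES = ["promise keeping", "truth telling", "reparations", "justice",
         "gratitude", "beneficence", "nonmaleficence", "self-improvement"]

def evaluate(action_effects):
    """action_effects maps rule -> +1 (honored), -1 (violated), 0 (untouched)."""
    honored = [r for r in RULES if action_effects.get(r, 0) > 0]
    violated = [r for r in RULES if action_effects.get(r, 0) < 0]
    return honored, violated

# Invented example: a white lie told to spare someone's feelings honors
# nonmaleficence while violating truth telling -- the rules collide.
white_lie = {"truth telling": -1, "nonmaleficence": +1}
print(evaluate(white_lie))  # (['nonmaleficence'], ['truth telling'])
```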

Page 15: Artificial intelligence and ethics

The Problem of Casuistry

4. Asimov's rules for robotics:

◦ A robot may not injure a human being or, through inaction, allow a human being to come to harm.

◦ A robot must obey orders given to it by humans except when such orders conflict with the first law.

◦ A robot must protect its own existence as long as such protection does not conflict with the first or second laws.

5. What if the robot has to decide between injuring a human being and not acting at all, thus allowing another human being to come to harm through an act of omission? This is where we run into the problem of conflicting actions. Which is MORE moral? Such a conflict would expose a flaw in the programming of the personal AI.

6. It seems that most of the "rules" proposed to govern an AI conflict in some way. An AI would have a difficult time interpreting the same law as it applies in multiple different scenarios (see the sketch below).
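The conflict described in points 5 and 6 can be made vivid with a toy encoding of the three laws. This is an assumption-laden illustration, not a workable robot ethic: when every available option either injures a human or allows one to come to harm through inaction, the ordering of the laws by itself selects nothing.

```python
# Toy encoding of Asimov's laws as an ordered filter. The options and
# their consequences are invented. In the dilemma from point 5, every
# option violates the first law, so the rule structure alone is silent.

def violates_first_law(option):
    return option["injures_human"] or option["allows_harm_by_inaction"]

def choose(options):
    permitted = [o for o in options if not violates_first_law(o)]
    # (The second and third laws would filter `permitted` further.)
    if not permitted:
        return None  # the laws give no answer; this is the casuistry problem
    return permitted[0]["name"]

dilemma = [
    {"name": "intervene",  "injures_human": True,  "allows_harm_by_inaction": False},
    {"name": "do nothing", "injures_human": False, "allows_harm_by_inaction": True},
]
print(choose(dilemma))  # None: both options breach the first law
```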

Page 16: Artificial intelligence and ethics

Conclusion

What if we never make an AI that can pass the Turing test? Little of the effort will have been wasted.

On one side of the moral spectrum lies the fear of something going wrong; on the other side is the exuberant “yes” to all possibilities and benefits.

Though the first word of ethics is “do no harm,” we can perhaps look forward to innovation with a thoughtful caution.

Page 17: Artificial intelligence and ethics

Another Explanation -- Frankenstein's Children

1. Artificial Minds Are Ultimately Feasible

2. Consciousness Is the Foundation of Intrinsic Value

• Prosthetic brains are conceivable, and they would be "artificial minds" possessing whatever consciousness real minds have; by virtue of being conscious, such minds acquire the deeply rooted intrinsic value that human minds have.

3. We Are Obligated Toward Artificial Minds

◦ Artificial consciousness possesses the rights of natural consciousness, and our obligations toward it are what they are toward our fellows.

◦ On what grounds could one morally distinguish the two sorts of mind?

◦ There are strong equivalences between the functional structures of the two brains.

◦ Being differently embodied is irrelevant.

◦ If a human being were born with a radically different structure from the ordinary, we would not reject his or her claim to rights. We correctly seek to be morally blind toward many details of an organism's makeup.

• The artificial mind has rights; the creator is specifically obligated to preserve and enrich the life of his or her creature.

Page 18: Artificial intelligence and ethics

Another Explanation-- Frankenstein’s Children

4. Differently Embodied Minds Have Different Subjective Values

◦ We will both be equally valuable, and yet we will have different values and morals.

5. In a Collision of Values, Human Values Must Be Chosen

Conclusion-- We Should Not Pursue Artificial Minds

We will both be equally valuable yet different. However, humans will rule over and exploit the other, seeing it as "not human" and therefore as not deserving to be treated as such.