

Page 1: CMSC 671, Fall 2010

Class #14 – Wednesday, October 20

Page 2: Today's class

• History of AI
– Key people
– Significant events
• Future of AI
– Where are we going?
• Philosophy of AI
– Can we build intelligent machines?
• If we do, how will we know they're intelligent?
– Should we build intelligent machines?
• If we do, how should we treat them…
• …and how will they treat us?
• Class debate
– "Robot be Good"; Asimov's Three Laws; Google cars

Page 3: History of AI

Chronology of AI; Russell & Norvig Ch. 26

Page 4: Key people (AI prehistory)

• George Boole invented propositional logic (1847)

• Karel Capek coined the term “robot” (1921)

• Isaac Asimov wrote many science fiction books and essays (I, Robot (1950) introduced the Laws of Robotics – if you haven't read it, you should!)

• John von Neumann: minimax (1928), computer architecture (1945)

• Alan Turing: universal machine (1937), Turing test (1950)

• Norbert Wiener founded the field of cybernetics (1940s)

• Marvin Minsky: neural nets (1951), AI founder, blocks world, Society of Mind

• John McCarthy invented Lisp (1958) and coined the term AI (1955)

• Allen Newell, Herbert Simon: GPS (1957), AI founders

• Noam Chomsky: analytical approach to language (1950s)

Page 5: Key people (early AI history)

• Hubert and Stuart Dreyfus: anti-AI specialists

• Ed Feigenbaum: DENDRAL (first expert system, 1960s)

• Terry Winograd: SHRDLU (blocks world, 1960s)

• Roger Schank: conceptual dependency graphs, scripts (1970s)

• Shakey: mobile robot (SRI, 1969)

• Doug Lenat: AM, EURISKO (math discovery, 1970s)

• Ed Shortliffe, Bruce Buchanan: MYCIN (certainty factors, 1970s)

Page 6: Key events: Genesis of AI

• Turing test, proposed in 1950 and debated ever since

• Neural networks, 1940s and 1950s, among the earliest theories of how we might reproduce intelligence

• Logic Theorist and GPS, 1950s, early symbolic AI

• Dartmouth College summer conference, 1956, established AI as a discipline

• Early years: focus on search, learning, knowledge representation

• Development of Lisp, late 1950s

Page 7: Key events: Adolescence of AI

• The movie 2001: A Space Odyssey (1968) brought AI to the public's attention
• Early expert systems: DENDRAL, Meta-DENDRAL, MYCIN
• Arthur Samuel's checkers player, Doug Lenat's AM and EURISKO systems, and Werbos's and Rumelhart's backpropagation algorithm held out hope for the ability of AI systems to learn
• Hype surrounding expert systems led to an inevitable decline in interest in the mid to late 1980s, when it was realized they couldn't do everything
• Hype surrounding neural networks in the late 1980s led to similar disappointment in the 1990s
• Roger Schank's conceptual dependency theory and Doug Lenat's Cyc started to address problems of common-sense reasoning and representation
• Hans Berliner's heuristic search player defeated the world backgammon champion in 1979

Page 8: Key events: AI adulthood (barely)

• Many commercial expert systems introduced, especially in the 1970s and 1980s

• Fuzzy logic and neural networks used in controllers, especially in Japan and Europe

• Recent developments and areas of great interest include:
– Bayesian reasoning and Bayes nets
– Ontologies, knowledge reuse, and knowledge acquisition
– Mixed-initiative systems that combine the best of human and computer reasoning
– Multi-agent systems, Internet economies, intelligent agents
– Autonomous systems for space exploration, search and rescue, and hazardous environments

Page 9: What do AI researchers do?

• Subject headings from AAAI-10 conference proceedings:
– Constraints, Satisfiability, and Search: 35 papers
– Knowledge-Based Information Systems: 3 papers
– Knowledge Representation and Reasoning: 23 papers
– Machine Learning: 49 papers
– Multiagent Systems: 42 papers
– Multidisciplinary Topics: 8 papers
– Natural Language Processing: 7 papers
– Reasoning about Plans, Processes, and Actions: 17 papers
– Reasoning under Uncertainty: 10 papers
– Robotics: 4 papers
– Short Papers (miscellaneous): 3 papers
– AI and Bioinformatics Special Track: 4 papers
– AI and the Web Special Track: 31 papers
– Challenges in AI Special Track (position papers): 4 papers
– Integrated Intelligence Special Track: 10 papers
– Physically Grounded AI Special Track: 11 papers
– New Scientific and Technical Advances in Research: 12 papers
– Senior Member Papers: 3 papers
– Student Abstracts: 23 papers
– Doctoral Consortium: 15 papers

Page 10: Are we there yet?

• Great strides have been made in knowledge representation and decision making

• Many successful applications have been deployed to (help) solve specific problems

• Key open areas remain:
– Incorporating uncertain reasoning
– Real-time deliberation and action
– Perception (including language) and action (including speech)
– Lifelong learning / knowledge acquisition
– Common-sense knowledge
– Methodologies for evaluating intelligent systems

Page 11: Philosophy of AI

Alan M. Turing, “Computing Machinery and Intelligence”

John R. Searle, “Minds, Brains, and Programs”

Page 12: Philosophical debates

• What is AI, really?
– What does an intelligent system look like?
– Does an AI need (and can it have) emotions, consciousness, empathy, love?

• Can we ever achieve AI, even in principle?

• How will we know if we’ve done it?

• If we can do it, should we?

Page 13: Turing test

• Basic test:
– Interrogator in one room, human in another, system in a third

– Interrogator asks questions; human and system answer

– Interrogator tries to guess which is which

– If the system wins, it’s passed the Turing Test

• The system doesn’t have to tell the truth (obviously…)

Page 14: Turing test objections

• Objections are basically of two forms:
– "No computer will ever be able to pass this test"

– “Even if a computer passed this test, it wouldn’t be intelligent”

• Chinese Room argument (Searle, 1980), responses, and counter-responses
– Robot reply

– Systems reply

Page 15: "Machines can't think"

• Theological objections
• "It's simply not possible, that's all"
• Arguments from incompleteness theorems

– But people aren’t complete, are they?

• Machines can’t be conscious or feel emotions– Reductionism doesn’t really answer the question: why can’t

machines be conscious or feel emotions??

• Machines don’t have Human Quality X• Machines just do what we tell them to do

– Maybe people just do what their neurons tell them to do…

• Machines are digital; people are analog

Page 16: "The Turing test isn't meaningful"

• Maybe so, but…

If we don’t use the Turing test, what measure should we use?

• Very much an open question…

Page 17: Ethical concerns: Robot behavior

• How do we want our intelligent systems to behave?

• How can we ensure they do so?

• Asimov’s Three Laws of Robotics:1. A robot may not injure a human being or, through inaction, allow a

human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
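
Since the Three Laws form a strict priority ordering, the toy sketch below shows one way to make that ordering concrete as a filter over candidate actions. It is purely illustrative: the world object and its predicates (harms_human, allows_harm_by_inaction, obeys_order, endangers_self) are hypothetical stand-ins for prediction problems that are themselves far beyond current AI, not part of any real robotics API.

```python
# Toy illustration of the Three Laws as a lexicographic preference over actions.
# The "world" object and all of its predicates are hypothetical placeholders.

def first_law_safe(action, world):
    # First Law: never injure a human, or allow harm through inaction.
    return not (world.harms_human(action) or world.allows_harm_by_inaction(action))

def choose_action(candidate_actions, world):
    legal = [a for a in candidate_actions if first_law_safe(a, world)]
    # Second Law: among First-Law-safe actions, prefer those that obey human orders.
    # Third Law: break any remaining ties in favor of self-preservation.
    return max(legal,
               key=lambda a: (world.obeys_order(a), not world.endangers_self(a)),
               default=None)   # None: no First-Law-safe action exists
```

The lexicographic key mirrors the "except where such orders would conflict" clauses: a lower-priority law is consulted only to break ties among actions already permitted by the higher-priority ones.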

Page 18: Ethical concerns: Human behavior

• Is it morally justified to create intelligent systems with these constraints?
– As a secondary question, would it be possible to do so?

• Should intelligent systems have free will? Can we prevent them from having free will??

• Will intelligent systems have consciousness? (Strong AI)
– If they do, will it drive them insane to be constrained by artificial ethics placed on them by humans?

• If intelligent systems develop their own ethics and morality, will we like what they come up with?