Page 1: Lecture 26 of 42

CIS 530 / 730: Artificial Intelligence
Computing & Information Sciences, Kansas State University

Lecture 26 of 42

Wednesday, 24 October 2007

William H. Hsu

Department of Computing and Information Sciences, KSU

KSOL course page: http://snipurl.com/v9v3

Course web site: http://www.kddresearch.org/Courses/Fall-2007/CIS730

Instructor home page: http://www.cis.ksu.edu/~bhsu

Reading for Next Class:

Sections 12.5 – 12.8, Russell & Norvig, 2nd edition

Conditional, Continuous, and Multi-Agent Planning
Discussion: Probability Refresher

Page 2: Lecture 26 of 42


Lecture Outline

Today’s Reading: Sections 12.1 – 12.4, R&N 2e

Friday’s Reading: Sections 12.5 – 12.8, R&N 2e

Today: Practical Planning, concluded
Conditional Planning

Replanning

Monitoring and Execution

Continual Planning

Hierarchical Planning Revisited
Examples: Korf

Real-World Example

Friday and Next Week: Reasoning under Uncertainty
Basics of reasoning under uncertainty

Probability review

BNJ interface (http://bnj.sourceforge.net)

Page 3: Lecture 26 of 42


Planning and Learning Roadmap

Bounded Indeterminacy (12.3)

Four Techniques for Dealing with Nondeterministic Domains

1. Sensorless / Conformant Planning: “Be Prepared” (12.3)
Idea: be able to respond to any situation (universal planning)

Coercion

2. Conditional / Contingency Planning: “Plan B” (12.4)
Idea: be able to respond to many typical alternative situations (see the sketch after this list)

Actions for sensing (“reviewing the situation”)

3. Execution Monitoring / Replanning: “Show Must Go On” (12.5)
Idea: be able to resume momentarily failed plans

Plan revision

4. Continuous Planning: “Always in Motion, The Future Is” (12.6)
Lifetime planning (and learning!)

Formulate new goals
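To make technique 2 concrete, here is a minimal Python sketch (not from the lecture): a conditional plan is a sequence whose branch points name a percept to test at run time, so a sensing action selects which precomputed contingency (the “Plan B”) gets executed. The plan encoding, the flat-tire-style action names, and the sense/act callbacks are illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the lecture): executing a conditional
# plan whose branches are chosen by sensing at run time. Plan steps are either
# primitive action names or ("if", percept, then_plan, else_plan) tuples.

def execute_conditional_plan(plan, sense, act):
    """Walk a conditional plan, calling sense(percept) at branch points
    and act(action) for primitive steps."""
    for step in plan:
        if isinstance(step, tuple) and step[0] == "if":
            _, percept, then_plan, else_plan = step
            branch = then_plan if sense(percept) else else_plan
            execute_conditional_plan(branch, sense, act)
        else:
            act(step)

# Hypothetical flat-tire-style example: check the spare before committing.
plan = [
    "Open(Trunk)",
    ("if", "Intact(Spare)",
        ["Remove(Flat)", "PutOn(Spare)"],   # Plan A
        ["Call(RoadService)"]),             # Plan B
    "Close(Trunk)",
]

if __name__ == "__main__":
    execute_conditional_plan(plan,
                             sense=lambda p: True,        # stubbed percept
                             act=lambda a: print("do", a))
```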

Page 4: Lecture 26 of 42
Page 5: Lecture 26 of 42
Page 6: Lecture 26 of 42
Page 7: Lecture 26 of 42
Page 8: Lecture 26 of 42
Page 9: Lecture 26 of 42

Page 10: Lecture 26 of 42

Hierarchical Abstraction Planning: Review

Adapted from Russell and Norvig

Need for Abstraction
Question: What is wrong with uniform granularity?

Answers (among many)
Representational problems

Inferential problems: inefficient plan synthesis

Family of Solutions: Abstract Planning
But what to abstract in “problem environment”, “representation”?

Objects, obstacles (quantification: later)

Assumptions (closed world)

Other entities

Operators

Situations

Hierarchical abstraction
See: Sections 12.2 – 12.3, R&N, pp. 371 – 380

Figure 12.1, 12.6 (examples), 12.2 (algorithm), 12.3-5 (properties)
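As a rough illustration of hierarchical abstraction (a sketch, not the algorithm in the figures cited above): abstract tasks are refined by stored methods into subtasks until only primitive operators remain, so the planner can commit to a coarse skeleton before filling in detail. The task names and the method table below are invented blocks-world examples.

```python
# Minimal sketch (illustrative only): hierarchical decomposition by refining
# abstract tasks into subtasks until only primitive actions remain.
# The task names and the METHODS table are assumptions, not textbook content.

METHODS = {
    # abstract task -> one possible decomposition (preconditions ignored here)
    "BuildTower":  ["ClearBlocks", "StackBlocks"],
    "ClearBlocks": ["Unstack(C, A)", "PutDown(C)"],
    "StackBlocks": ["PickUp(B)", "Stack(B, C)", "PickUp(A)", "Stack(A, B)"],
}

def refine(task):
    """Recursively expand abstract tasks; primitive actions are returned as-is."""
    if task not in METHODS:
        return [task]
    plan = []
    for subtask in METHODS[task]:
        plan.extend(refine(subtask))
    return plan

if __name__ == "__main__":
    print(refine("BuildTower"))
    # ['Unstack(C, A)', 'PutDown(C)', 'PickUp(B)', 'Stack(B, C)', 'PickUp(A)', 'Stack(A, B)']
```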

Page 11: Lecture 26 of 42
Page 12: Lecture 26 of 42
Page 13: Lecture 26 of 42
Page 14: Lecture 26 of 42
Page 15: Lecture 26 of 42
Page 16: Lecture 26 of 42
Page 17: Lecture 26 of 42
Page 18: Lecture 26 of 42


Universal Quantifiers in Planning

Quantification within Operators (p. 383, R&N 2e)

Examples
Shakey’s World

Blocks World

Grocery shopping

Others (from projects?)

Exercise for Next Tuesday: Blocks World
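For the blocks-world exercise, one way to see quantification inside an operator is a precondition such as Clear(b) ≡ ∀x ¬On(x, b). The Python sketch below is illustrative only; the set-of-literals state representation and the PickUp operator are assumptions, not the textbook’s notation.

```python
# Minimal sketch (illustrative, not from p. 383): a blocks-world operator whose
# precondition contains a universal quantifier, Clear(b) = "for all x, not On(x, b)".
# The tuple-based state representation is an assumption for the example.

state = {("On", "C", "A"), ("OnTable", "A"), ("OnTable", "B")}
blocks = {"A", "B", "C"}

def clear(b, state):
    """Universally quantified precondition: no block sits on b."""
    return all(("On", x, b) not in state for x in blocks)

def pick_up(b, state):
    """Apply PickUp(b) if its (quantified) precondition holds."""
    if ("OnTable", b) in state and clear(b, state):
        return (state - {("OnTable", b)}) | {("Holding", b)}
    return None  # precondition failed

if __name__ == "__main__":
    print(clear("A", state))    # False: C is on A
    print(pick_up("B", state))  # succeeds: B is clear and on the table
```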

Page 19: Lecture 26 of 42


Practical Planning

The Real World
What can go wrong with classical planning?

What are possible solution approaches?

Conditional Planning

Monitoring and Replanning (Next Time)

Adapted from Russell and Norvig

Page 20: Lecture 26 of 42


Review: How Things Go Wrong in Planning

Adapted from slides by S. Russell, UC Berkeley

Page 21: Lecture 26 of 42


Review: Practical Planning Solutions

Adapted from slides by S. Russell, UC Berkeley

Page 22: Lecture 26 of 42


Adapted from slides by S. Russell, UC Berkeley

Conditional Planning

Page 23: Lecture 26 of 42


Monitoring and Replanning

Page 24: Lecture 26 of 42


Adapted from slides by S. Russell, UC Berkeley

Preconditions for Remaining Plan
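A minimal sketch of the idea named in this title (not the original slide content): before acting, project the remaining plan forward from the observed state, check that each step’s preconditions will hold when its turn comes, and replan from the current state if they will not. All callback names (preconditions, effects_add, effects_del, execute, observe, replan) are assumptions supplied by the planning domain.

```python
# Minimal sketch (illustrative): execution monitoring with replanning.
# We check the preconditions of the *remaining* plan by symbolic projection
# from the observed state; on a violation we replan from the current state.

def remaining_plan_ok(plan, state, preconditions, effects_add, effects_del):
    """True if every action's preconditions hold at its turn, assuming the
    earlier remaining actions succeed (projection from the current state)."""
    projected = set(state)
    for action in plan:
        if not preconditions(action) <= projected:
            return False
        projected = (projected - effects_del(action)) | effects_add(action)
    return True

def monitor_and_execute(plan, state, preconditions, effects_add, effects_del,
                        execute, observe, replan):
    """Execute `plan`, replanning whenever monitoring detects that the
    preconditions of the remaining plan no longer hold."""
    while plan:
        if not remaining_plan_ok(plan, state, preconditions,
                                 effects_add, effects_del):
            plan = replan(state)             # recover by planning from here
            continue
        state = observe(execute(plan[0], state))  # act, then sense the outcome
        plan = plan[1:]
    return state
```

Checking the whole remaining plan, rather than only the next action, is what lets the agent notice early that a condition needed by a later step has been clobbered.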

Page 25: Lecture 26 of 42


Adapted from slides by S. Russell, UC Berkeley

Replanning

Page 26: Lecture 26 of 42


Making Decisions under Uncertainty

Adapted from slides by S. Russell, UC Berkeley

Page 27: Lecture 26 of 42


Probability: Basic Definitions and Axioms

Sample Space (Ω): Range of a Random Variable X

Probability Measure Pr(Ω)
Ω denotes a range of “events”; X takes values in Ω
Probability Pr, or P, is a measure over the power set 2^Ω

In a general sense, Pr(X = x) is a measure of belief in X = x
P(X = x) = 0 or P(X = x) = 1: plain (aka categorical) beliefs (can’t be revised)
All other beliefs are subject to revision

Kolmogorov Axioms
1. ∀ x ∈ Ω . 0 ≤ P(X = x) ≤ 1
2. P(Ω) ≡ Σ_{x ∈ Ω} P(X = x) = 1
3. Countable additivity: for pairwise disjoint events X1, X2, … (i ≠ j ⇒ Xi ∩ Xj = ∅),
   P(∪_{i=1}^{∞} Xi) = Σ_{i=1}^{∞} P(Xi)

Joint Probability: P(X1 ∧ X2) ≡ Probability of the Joint Event X1 ∧ X2

Independence: P(X1 ∧ X2) = P(X1) · P(X2)
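A small numeric illustration (not from the lecture) of the definitions above: the toy joint distribution below satisfies axioms 1 and 2, and its marginals factor exactly, so the independence test P(X1 ∧ X2) = P(X1) P(X2) passes. The numbers are made up for the example.

```python
# Minimal sketch (illustrative): checking the Kolmogorov axioms, a joint
# probability, and independence on a made-up two-variable distribution.

from itertools import product

# Joint distribution over X1 in {T, F} and X2 in {T, F}
joint = {
    ("T", "T"): 0.12, ("T", "F"): 0.28,
    ("F", "T"): 0.18, ("F", "F"): 0.42,
}

# Axiom 1: every probability lies in [0, 1].  Axiom 2: they sum to 1.
assert all(0.0 <= p <= 1.0 for p in joint.values())
assert abs(sum(joint.values()) - 1.0) < 1e-9

# Marginals by summing out the other variable
p_x1 = {v: sum(p for (a, _), p in joint.items() if a == v) for v in ("T", "F")}
p_x2 = {v: sum(p for (_, b), p in joint.items() if b == v) for v in ("T", "F")}

# Independence check: P(X1 ∧ X2) = P(X1) P(X2) for every joint event
independent = all(abs(joint[(a, b)] - p_x1[a] * p_x2[b]) < 1e-9
                  for a, b in product("TF", "TF"))
print("P(X1=T) =", p_x1["T"], " P(X2=T) =", p_x2["T"], " independent:", independent)
```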

Page 28: Lecture 26 of 42


Basic Formulas for Probabilities

Product Rule (Alternative Statement of Bayes’s Theorem)
P(A | B) = P(A ∧ B) / P(B)
Proof: requires axiomatic set theory, as does Bayes’s Theorem

Sum Rule
P(A ∨ B) = P(A) + P(B) − P(A ∧ B)
Sketch of proof (immediate from axiomatic set theory)
Draw a Venn diagram of two sets denoting events A and B
Let A ∨ B denote the event corresponding to A ∪ B …

Theorem of Total Probability
Suppose events A1, A2, …, An are mutually exclusive and exhaustive
Mutually exclusive: i ≠ j ⇒ Ai ∩ Aj = ∅
Exhaustive: Σ_i P(Ai) = 1
Then P(B) = Σ_{i=1}^{n} P(B | Ai) P(Ai)
Proof: follows from product rule and 3rd Kolmogorov axiom
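A small numeric check (not from the lecture) of the product rule and the theorem of total probability: with a made-up prior P(A) over three mutually exclusive, exhaustive values and a likelihood P(B | A), the code computes P(B) by total probability and then the posterior P(A | B) by the rearranged product rule (Bayes’s theorem).

```python
# Minimal sketch (illustrative): verifying the theorem of total probability and
# the product rule on a made-up prior/likelihood pair.

p_a = {"a1": 0.5, "a2": 0.3, "a3": 0.2}            # prior P(A), exhaustive
p_b_given_a = {"a1": 0.9, "a2": 0.4, "a3": 0.1}    # likelihood P(B | A)

# Theorem of total probability: P(B) = sum_i P(B | Ai) P(Ai)
p_b = sum(p_b_given_a[a] * p_a[a] for a in p_a)
print("P(B) =", p_b)                                # 0.59

# Product rule rearranged (Bayes): P(Ai | B) = P(B | Ai) P(Ai) / P(B)
posterior = {a: p_b_given_a[a] * p_a[a] / p_b for a in p_a}
print("P(A | B) =", posterior)
assert abs(sum(posterior.values()) - 1.0) < 1e-9    # posteriors sum to 1
```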