
Midterm Review

Upload: joanna-alexander

Post on 12-Jan-2016


TRANSCRIPT



Topic 1 - Agents

• Agents and environments
• Rationality
• PEAS (Performance measure, Environment, Actuators, Sensors)
• Environment types
• Agent types


PEAS: Specifying an automated taxi driver

Performance measure: • safe, fast, legal, comfortable, maximize profits

Environment: • roads, other traffic, pedestrians, customers

Actuators: • steering, accelerator, brake, signal, horn

Sensors: • cameras, sonar, speedometer, GPS


Environment types: Definitions I

Fully observable (vs. partially observable): An agent's sensors give it access to the complete state of the environment at each point in time.

Deterministic (vs. stochastic): The next state of the environment is completely determined by the current state and the action executed by the agent.

• (If the environment is deterministic except for the actions of other agents, then the environment is strategic)

Static (vs. dynamic): The environment is unchanged while an agent is deliberating.

• The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does.

Discrete (vs. continuous): A limited number of distinct, clearly defined percepts and actions.

Single agent (vs. multiagent): An agent operating by itself in an environment.


Agent types

Four basic types, in order of increasing generality:
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents

Learning agents incorporate one of the above


Topic 2 - Python

• Reference semantics, names & assignment
• Containers: lists, tuples, dictionaries
• Mutability
• List comprehensions
• Flow of control: for loops
• Importing & modules
• Defining new classes & methods; inheritance
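The Python topics above in miniature, as a quick illustrative refresher (all names and values here are invented for the example, not from the course code):

```python
# Reference semantics: assignment binds a name to an object, it does not copy.
a = [1, 2, 3]
b = a                       # b and a now name the same list object
b.append(4)
assert a == [1, 2, 3, 4]    # mutation through b is visible through a

# Mutability: lists and dicts are mutable; tuples and strings are not.
t = (1, 2, 3)
try:
    t[0] = 99
except TypeError:
    pass                    # tuples reject item assignment

# List comprehensions and for loops.
squares = [x * x for x in range(5)]
assert squares == [0, 1, 4, 9, 16]

# Defining new classes and methods; inheritance.
class Agent:
    def act(self, percept):
        return None

class ReflexAgent(Agent):
    def act(self, percept):
        return "forward" if percept == "clear" else "turn"

assert ReflexAgent().act("clear") == "forward"
```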


Topic 3 – Uninformed Search

Problem formulation

Basic uninformed search algorithms:
• Breadth-first search
• Depth-first search
• Iterative-deepening search


Problem formulation

A problem is defined by:

• An initial state, e.g. Arad
• Successor function S(X) = set of action-state pairs
—e.g. S(Arad) = {<Arad → Zerind, Zerind>, …}
—initial state + successor function* = state space
• Goal test, can be
—Explicit, e.g. x = 'at Bucharest'
—Implicit, e.g. checkmate(x)
• Path cost (additive)
—e.g. sum of distances, number of actions executed, …
—c(x,a,y) is the step cost, assumed to be ≥ 0

A solution is a sequence of actions from initial to goal state.

Optimal solution has the lowest path cost.
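The four components above can be packaged as a small Python container. This is a hedged sketch: the `Problem` class, its attribute names, and the toy Romania fragment are our own illustration, not code from the course.

```python
# Minimal search-problem container mirroring the four components above.
class Problem:
    def __init__(self, initial, successors, goal_test, step_cost):
        self.initial = initial
        self.successors = successors   # state -> list of (action, next_state)
        self.goal_test = goal_test     # state -> bool
        self.step_cost = step_cost     # (state, action, next_state) -> cost >= 0

# Tiny fragment of the Romania map from the example (roads are illustrative).
roads = {
    "Arad": [("go(Zerind)", "Zerind"), ("go(Sibiu)", "Sibiu")],
    "Zerind": [("go(Arad)", "Arad")],
    "Sibiu": [("go(Bucharest)", "Bucharest")],
    "Bucharest": [],
}

p = Problem(
    initial="Arad",
    successors=lambda s: roads[s],
    goal_test=lambda s: s == "Bucharest",
    step_cost=lambda s, a, t: 1,        # all steps cost 1 in this sketch
)
assert p.successors("Arad")[0] == ("go(Zerind)", "Zerind")
```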


Formulating a Search Problem

Decide:

Which properties matter & how to represent
• Initial State, Goal State, Possible Intermediate States

Which actions are possible & how to represent
• Operator Set

Which action is next
• Path Cost Function


Basic search algorithms: Tree Search

How do we find the solutions of previous problems?

• Search the state space through explicit tree generation
— ROOT = initial state.
— Nodes and leaves generated through the successor function.

function TREE-SEARCH(problem, fringe) returns a solution, or failure
  fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
  loop do
    if EMPTY?(fringe) then return failure
    node ← PICK-NEXT-NODE(fringe, STRATEGY(problem))
    if GOAL-TEST[problem] applied to STATE[node] succeeds
      then return SOLUTION(node)
    fringe ← INSERT-ALL(EXPAND(node, problem), fringe)

A strategy is defined by picking the order of node expansion


Evaluating Search Strategies

Strategy = order of expansion

Dimensions for evaluation:
• Completeness - always find the solution?
• Time complexity - # of nodes generated
• Space complexity - # of nodes in memory
• Optimality - find least-cost solution?

Time/space complexity measurements:
• b, maximum branching factor of the search tree
• d, depth of the least-cost solution
• m, maximum depth of the state space (may be ∞)


Breadth-first search

Idea: • Expand shallowest unexpanded node

Implementation:
• Fringe is a FIFO queue:
— Put successors at the end of the fringe.
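A concrete sketch of the FIFO discipline, on a hypothetical toy graph (the graph and names are illustrative only):

```python
from collections import deque

# Breadth-first search: successors go on the *end* of the queue,
# so shallower nodes are always expanded first.
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}

def bfs(start, goal):
    fringe = deque([[start]])            # the fringe holds whole paths
    while fringe:
        path = fringe.popleft()          # FIFO: take the shallowest path
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            fringe.append(path + [nxt])  # successors appended at the end

assert bfs("S", "G") == ["S", "A", "G"]
```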


Properties of breadth-first search

Complete? Yes (if b is finite)
Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1))
Space? O(b^(d+1)) (keeps every node in memory)
Optimal? Yes (if cost = 1 per step); not optimal in general

b: maximum branching factor of the search tree
d: depth of the least-cost solution
m: maximum depth of the state space (may be ∞)


Depth-first search

Idea: • Expand deepest unexpanded node

Implementation:
• Fringe is a LIFO queue (a stack):
— Put successors at the front of the fringe.


Properties of depth-first search

Complete? No: fails in infinite-depth spaces and spaces with loops
• Modify to avoid repeated states along the path → complete in finite spaces

Time? O(b^m): terrible if m is much larger than d
• but if solutions are dense, may be much faster than breadth-first

Space? O(bm), i.e., linear space!

Optimal? No

b: maximum branching factor of the search tree
d: depth of the least-cost solution
m: maximum depth of the state space (may be ∞)


Depth-first vs Breadth-first

Use depth-first if

• Space is restricted

• There are many possible solutions with long paths and wrong paths can be detected quickly

• Search can be fine-tuned quickly

Use breadth-first if
• Possible infinite paths

• Some solutions have short paths

• Can quickly discard unlikely paths


Iterative deepening search

A general strategy to find the best depth limit l.
• Complete: the goal is always found at depth d, the depth of the shallowest goal node.

• Often used in combination with DF-search

Combines benefits of DF-search & BF-search
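The combination can be sketched as repeated depth-limited DFS with limit l = 0, 1, 2, …, keeping DFS-style linear space while regaining BFS-style completeness. The graph below is a hypothetical toy example:

```python
graph = {"S": ["A", "B"], "A": ["C"], "B": [], "C": ["G"], "G": []}

def depth_limited(node, goal, limit):
    """DFS that refuses to descend below `limit` edges."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in graph[node]:
        found = depth_limited(child, goal, limit - 1)
        if found:
            return [node] + found
    return None

def iterative_deepening(start, goal, max_depth=10):
    # Re-run the depth-limited search with a growing limit; shallow work
    # is redone, but the cost is dominated by the deepest iteration.
    for limit in range(max_depth + 1):
        found = depth_limited(start, goal, limit)
        if found:
            return found

assert iterative_deepening("S", "G") == ["S", "A", "C", "G"]
```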


ID search, Evaluation III

Complete: YES (no infinite paths)

Time complexity: O(b^d)

Space complexity: O(bd)

Optimal: YES if step cost is 1.


Calculating path cost, g(N)

All paths are often not equal.

Simplest cost: all moves equal in cost
• Cost = # of nodes in path − 1
• g(N) = depth(N)

Expand lowest-cost node first = breadth-first

Assign a unique cost to each step
• N0, N1, N2, N3 = nodes visited
• C(i,j): cost of going from Ni to Nj
• g(N3) = C(0,1) + C(1,2) + C(2,3)


Uniform-cost search (UCS)

Extension of BF-search:

• Expand node with lowest path cost

Really (generalized) breadth-first
• AIMA calls this (strangely) uniform cost search

— And so will we….

Implementation: fringe = queue ordered by path cost.

Again: UC-search is the same as BF-search when all step-costs are equal.
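The cost-ordered fringe can be sketched with a priority queue. The weighted toy graph is illustrative; the point is that the lowest-g frontier node is always popped first:

```python
import heapq

graph = {"S": [("A", 1), ("B", 5)], "A": [("B", 1)], "B": [("G", 1)], "G": []}

def ucs(start, goal):
    fringe = [(0, start, [start])]              # (g, state, path), ordered by g
    explored = set()
    while fringe:
        g, state, path = heapq.heappop(fringe)  # lowest path cost first
        if state == goal:
            return g, path
        if state in explored:
            continue
        explored.add(state)
        for nxt, cost in graph[state]:
            heapq.heappush(fringe, (g + cost, nxt, path + [nxt]))

assert ucs("S", "G") == (3, ["S", "A", "B", "G"])
```

With all step costs equal the priority queue degenerates to a FIFO queue, which is exactly the "UCS = BFS when all steps cost the same" remark above.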


Summary of algorithms (for notes)

Criterion    Breadth-First  Uniform-cost  Depth-First  Depth-limited  Iterative deepening  Bidirectional
Complete?    YES*           YES*          NO           YES, if l ≥ d  YES                  YES*
Time         O(b^(d+1))     O(b^(C*/ε))   O(b^m)       O(b^l)         O(b^d)               O(b^(d/2))
Space        O(b^(d+1))     O(b^(C*/ε))   O(b^m)       O(b^l)         O(b^d)               O(b^(d/2))
Optimal?     YES*           YES*          NO           NO             YES                  YES


Topic 4 - Informed Search

PART I
• Informed = use problem-specific knowledge
• Best-first search and its variants
• A* - optimal search using knowledge
• Proof of optimality of A*
• Heuristic functions - how to invent them

PART II
• Local search and optimization: hill climbing, local beam search, genetic algorithms, …
• Local search in continuous spaces
• Online search agents


Heuristic functions & Greedy best-first search

Heuristic Functions

Let evaluation function f(n) = h(n) (heuristic)
• h(n) = estimated cost of the cheapest path from node n to the goal node.
• If n is a goal then h(n) = 0.

Greedy Best-First Search

Expands the node that appears to be closest to the goal
• Expands nodes based on f(n) = h(n), some estimate of the cost from n to the goal
• For example, h(n) = hSLD(n) = straight-line distance from n to the goal


Properties of greedy best-first search

Complete? • No - can get stuck in loops

Time? • O(b^m) - worst case (like depth-first search)
• But a good heuristic can give dramatic improvement

Space? • O(b^m) - keeps all nodes in memory

Optimal? No


A* search

Best-known form of best-first search.

Idea: avoid expanding paths that are already expensive.

Evaluation function f(n) = g(n) + h(n)

• g(n) the cost (so far) to reach the node

• h(n) estimated cost to get from the node to the goal

• f(n) estimated total cost of path through n to goal

Implementation: Expand the node n with minimum f(n)
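A sketch of f-ordered expansion with a priority queue. The weighted toy graph and the hand-made heuristic table `h` are illustrative assumptions (chosen so that h never overestimates the true remaining cost):

```python
import heapq

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
h = {"S": 4, "A": 5, "B": 1, "G": 0}          # admissible for this graph

def astar(start, goal):
    fringe = [(h[start], 0, start, [start])]   # (f, g, state, path)
    while fringe:
        f, g, state, path = heapq.heappop(fringe)  # minimum f(n) first
        if state == goal:
            return g, path                     # first goal popped is optimal
        for nxt, cost in graph[state]:
            g2 = g + cost
            heapq.heappush(fringe, (g2 + h[nxt], g2, nxt, path + [nxt]))

assert astar("S", "G") == (5, ["S", "B", "G"])
```

Note the cheap-looking first step S→A (g = 1) loses: its f = 1 + 5 = 6 exceeds the f = 5 of the route through B, so A* never commits to it.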


Admissible heuristics

For A* to be optimal, h(n) must be an admissible heuristic.
• A heuristic is admissible if it never overestimates the cost to reach the goal; i.e. it is optimistic.
• Formally, ∀n, where n is a node:

1. h(n) ≤ h*(n), where h*(n) is the true cost from n

2. h(n) ≥ 0, so h(G) = 0 for any goal G.

• Example:

hSLD(n) never overestimates the actual road distance

Theorem: If h(n) is admissible, A* using Tree Search is optimal


A* search, evaluation

Completeness: YES
Time complexity: exponential with path length
Space complexity: exponential (all nodes are stored)
Optimality: YES, if h(n) is admissible

• Cannot expand f_{i+1} until f_i is finished.

• A* expands all nodes with f(n)< f(G)

• A* expands one node with f(n)=f(G)

• A* expands no nodes with f(n)>f(G)

Also optimally efficient (not including ties)


Optimality of A* using Tree-Search I (proof)

Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.

1. g(G2) > g(G) since G2 is suboptimal

2. f(G2) = g(G2) since h(G2) = 0
3. f(G) = g(G) since h(G) = 0

4. f(G2) > f(G) from 1,2,3


Optimality of A* using Tree-Search II (proof)

Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.

4. f(G2) > f(G)  (repeated)
5. h*(n) ≥ h(n)  since h is admissible, and therefore optimistic
6. g(n) + h*(n) ≥ g(n) + h(n)  from 5
7. g(n) + h*(n) ≥ f(n)  substituting the definition of f
8. f(G) ≥ f(n)  from the definitions of f and h* (think about it!)

9. f(G2) > f(n)  from 4 & 8

So A* will never select G2 for expansion


Defining Heuristics & Relaxed Problems

Cost of an exact solution to a relaxed problem (fewer restrictions on the operators).

Original rule: a tile can move from square A to square B if A is adjacent to B and B is blank.

Relaxations:
• A tile can move from square A to square B if A is adjacent to B. (h_MD)
• A tile can move from square A to square B if B is blank.
• A tile can move from square A to square B. (h_OOP)


Hill-climbing search

I. While (∃ uphill points):
• Move in the direction of increasing value, lessening distance to goal

Properties:
• Terminates when a peak is reached.
• Does not look ahead of the immediate neighbors of the current state.
• Chooses randomly among the set of best successors, if there is more than one.
• Doesn't backtrack, since it doesn't remember where it's been.

a.k.a. greedy local search: "Like climbing Everest in thick fog with amnesia"

Problems:
• Local maxima (foothills)
• Plateaus
• Ridges

Remedies:
• Random restart
• Problem reformulation
• In the end: some problem spaces are great for hill climbing and others are terrible.
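The climb-then-restart loop can be sketched on a 1-D toy landscape. The objective, neighborhood, and restart points below are all invented for the example; real problems supply their own (and restarts are usually chosen at random):

```python
def value(x):
    # Two peaks: a local maximum at x=2 (value 4), the global one at x=8 (value 9).
    return -(x - 2) ** 2 + 4 if x < 5 else -(x - 8) ** 2 + 9

def hill_climb(x):
    while True:
        best = max([x - 1, x + 1], key=value)  # look only at immediate neighbors
        if value(best) <= value(x):
            return x                           # no uphill move: a peak (maybe local)
        x = best

def best_of_restarts(starts=range(11)):
    # Remedy for local maxima: climb from several starts, keep the best peak.
    return max((hill_climb(s) for s in starts), key=value)

assert hill_climb(0) == 2                  # stuck on the foothill
assert value(best_of_restarts()) == 9      # restarts reach the global peak
```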


Topic 5: Adversarial Search & Games

Part I
• Motivation
• Game Trees
• Evaluation Functions

Part II
• The Minimax Rule
• Alpha-Beta Pruning
• Game-playing AI successes


Game setup

Two players: A and B. A moves first and they take turns until the game is over. Winner gets an award, loser gets a penalty.

Games as search:
• Initial state: e.g. board configuration of chess
• Successor function: list of (move, state) pairs specifying legal moves
• Terminal test: Is the game finished?
• Utility function: gives the numerical value of terminal states, e.g. win (+1), lose (-1) and draw (0) in tic-tac-toe

A uses the search tree to determine its next move.


How to Play a Game by Searching

General Scheme
• Consider all legal moves, each of which will lead to some new state of the environment ('board position')

• Evaluate each possible resulting board position

• Pick the move which leads to the best board position.

• Wait for your opponent’s move, then repeat.

Key problems
• Representing the 'board'

• Representing legal next boards

• Evaluating positions

• Looking ahead


Game Trees

Represent the problem space for a game by a tree.
Nodes represent 'board positions'; edges represent legal moves.
The root node is the position in which a decision must be made.
An evaluation function f assigns real-number scores to 'board positions'.
Terminal nodes represent ways the game could end, labeled with the desirability of that ending (e.g. win/lose/draw or a numerical score).


MAX & MIN Nodes

When I move, I attempt to MAXimize my performance. When my opponent moves, he or she attempts to MINimize my performance.

TO REPRESENT THIS:

If we move first, label the root MAX; if our opponent does, label it MIN.

Alternate labels for each successive tree level.
• If the root (level 0) is our turn (MAX), all even levels will represent turns for us (MAX), and all odd ones turns for our opponent (MIN).


Evaluation functions

Evaluates how good a 'board position' is
• Based on static features of that board alone

The zero-sum assumption lets us use one function to describe goodness for both players.
• f(n) > 0 if we are winning in position n
• f(n) = 0 if position n is tied
• f(n) < 0 if our opponent is winning in position n

Built using expert knowledge.
• Tic-tac-toe: f(n) = (# of 3-lengths open for me) − (# open for you)


The Minimax Procedure

1. Start with the current position as a MAX node.

2. Expand the game tree a fixed number of ply (half-moves).

3. Apply the evaluation function to the leaf positions.

4. Calculate backed-up values bottom-up.

5. Pick the move which was chosen to give the MAX value at the root.
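The procedure above, sketched on a tiny hand-built game tree (a dict whose leaves are static evaluation scores; the tree and its values are invented for the example):

```python
tree = {
    "root": ["a", "b"],
    "a": [3, 5],        # MIN to move here: will pick 3
    "b": [2, 9],        # MIN to move here: will pick 2
}

def minimax(node, maximizing=True):
    if isinstance(node, int):      # leaf: return the static evaluation
        return node
    values = [minimax(c, not maximizing) for c in tree[node]]
    return max(values) if maximizing else min(values)

# MAX prefers move "a" (backed-up value 3) over "b" (backed-up value 2).
assert minimax("root") == 3
```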


Alpha-Beta Pruning

Traverse the search tree in depth-first order.

For each MAX node n, α(n) = maximum child value found so far
• Starts at −∞
• Increases if a child returns a value greater than the current α(n)
• A lower bound on the final value

For each MIN node n, β(n) = minimum child value found so far
• Starts at +∞
• Decreases if a child returns a value less than the current β(n)
• An upper bound on the final value

MAX cutoff rule: at a MAX node n, cut off search if α(n) ≥ β(n)
MIN cutoff rule: at a MIN node n, cut off search if β(n) ≤ α(n)

Carry α and β values down in the search.
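The two cutoff rules in code, on the same style of hand-built toy tree (tree and values are illustrative):

```python
tree = {
    "root": ["a", "b"],
    "a": [3, 5],
    "b": [2, 9],   # the 9 is never examined: after seeing 2, beta=2 <= alpha=3
}

def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, int):
        return node
    if maximizing:
        v = float("-inf")
        for c in tree[node]:
            v = max(v, alphabeta(c, alpha, beta, False))
            alpha = max(alpha, v)
            if alpha >= beta:       # MAX cutoff rule
                break
        return v
    v = float("inf")
    for c in tree[node]:
        v = min(v, alphabeta(c, alpha, beta, True))
        beta = min(beta, v)
        if beta <= alpha:           # MIN cutoff rule
            break
    return v

assert alphabeta("root") == 3       # same root value as plain Minimax
```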


Effectiveness of Alpha-Beta Pruning

Guaranteed to compute the same root value as Minimax.

Worst case: no pruning, same as Minimax (O(b^d)).

Best case: when each player's best move is the first option examined, you examine only O(b^(d/2)) nodes, allowing you to search twice as deep!

For Deep Blue, alpha-beta pruning reduced the average branching factor from 35-40 to 6.

CIS 391 - Intro to AI

Topic 6: Interpreting Line Drawings (An Introduction to Constraint Satisfaction)


Convexity Labeling Conventions

Each edge in an image can be interpreted to be either a convex edge, a concave edge or an occluding edge:

+ labels a convex edge (angled toward the viewer);
− labels a concave edge (angled away from the viewer);
→ labels an occluding edge. To its right is the body for which the arrow line provides an edge; on its left is space.



“Arrow” Junctions have 3 Interpretations “L” Junctions have 6


“T” Junctions Have 4 Interpretations, “Y” Junctions Have 5



The Edge Consistency Constraint

Any consistent assignment of labels to the junctions in a picture must assign the same line label to any given line.



Search Trees for Line Labelling

Implementation by search tree:
1. Select some junction as the root.
2. Label children at the 1st level with all possible interpretations of that junction.
3. Label their children with possible consistent interpretations of some junction adjacent to that junction.
4. Each level of the tree adds one more labeled node to the growing interpretation.
5. Leaves represent either futile interpretations that cannot be continued or full interpretations of the line drawing.


The Top Three Levels of a Search Tree



Constraint Propagation

Waltz's insight:
Pairs of adjacent junctions (junctions connected by a line) constrain each other's interpretations! These constraints can propagate along the connected edges of the graph.

Waltz Filtering:
Suppose junctions i and j are connected by an edge. Remove any labeling from i that is inconsistent with every labeling in the set assigned to j.

By iterating over the graph, the constraints will propagate.


The Waltz/Mackworth Constraint Propagation Algorithm

1. Associate with each junction in the picture a set of all Huffman/Clowes junction labels appropriate for that junction type;

2. Repeat until there is no change in the set of labels associated with any junction:

3. For each junction i in the picture:

4. For each neighboring junction j in the picture:

5. Remove any junction label from i for which there is no edge-consistent junction label on j.


An Example of Constraint Propagation

[Figure: constraint propagation on a drawing. The label sets attached to the junctions shrink step by step, e.g. {A1,A2,A3} → {A1,A3} and {L1,L2,L3,L4,L5,L6} → {L1,L5,L6}, as labels inconsistent with the given neighbors are checked and removed.]


Arcs

Given this notion of (i,j) junction pairs, the crucial notion is an arc: a directed pair of neighboring junctions. For a neighboring pair of junctions i and j, there are two arcs that connect them: (i,j) and (j,i).



Arc Consistency

We will define the algorithm for line labeling with respect to arc consistency, as opposed to edge consistency:

An arc (i,j) is arc consistent if and only if every label on junction i is consistent with some label on junction j.

To make an arc (i,j) arc consistent:
  for each label l on junction i,
    if there is no label on j consistent with l then
      remove l from i

(Any consistent assignment of labels to the junctions in a picture must assign the same line label to an arc (i,j) as to the inverse arc (j,i).)


The AC-3 Algorithm (for Line Labeling)

1. Associate with each junction in the picture a set of all Huffman/Clowes junction labels for that junction type;

2. Make a queue of all the arcs (i,j) in the picture;

3. Repeat until the queue is empty:
   a. Remove an arc (i,j) from the queue;
   b. For each label l on junction i, if there is no label on j consistent with l, then remove l from i;
   c. If any label is removed from i, then for all arcs (k,i) in the picture except (j,i), put (k,i) on the queue if it isn't already there.


AC-3: Worst Case Complexity Analysis

All junctions can be connected to every other junction:
• so each of n junctions must be compared against n−1 other junctions,
• so the total # of arcs is n(n−1), i.e. O(n^2).

If there are d labels, checking arc (i,j) takes O(d^2) time.
Each arc (i,j) can only be inserted into the queue d times.
Worst-case complexity: O(n^2 d^3)

For planar pictures, where lines can never cross, the number of arcs is only linear in n, so for our pictures the time complexity is only O(n d^3).


Topic 7: Constraint Satisfaction Problems


Constraint satisfaction problems

What is a CSP?
• Finite set of variables X1, X2, …, Xn
• Nonempty domain of possible values for each variable: D1, D2, …, Dn
• Finite set of constraints C1, C2, …, Cm
• Each constraint Ci limits the values that variables can take, e.g., X1 ≠ X2

A state is defined as an assignment of values to some or all variables.

Consistent assignment: assignment does not violate the constraints.


Constraint satisfaction problems

An assignment is complete when every variable is assigned a value.

A solution to a CSP is a complete assignment that satisfies all constraints.

Some CSPs require a solution that maximizes an objective function.

Applications:
• Scheduling problems
— Job shop scheduling
— Scheduling the Hubble Space Telescope
• Floor planning for VLSI
• Map coloring


Constraint graph

Binary CSP: each constraint relates two variables.

Constraint graph:
• nodes are variables
• arcs are constraints

CSP benefits:
• Standard representation pattern: variables with values
• Generic goal and successor functions
• Generic heuristics (no domain-specific expertise)

The graph can be used to simplify search.
— e.g. Tasmania is an independent subproblem.



CSP as a standard search problem

A CSP can easily be expressed as a standard search problem, as we have seen:
• Initial state: the empty assignment {}.
• Successor function: assign a value to an unassigned variable, provided that there is no conflict.
• Goal test: the current assignment is complete.
• Path cost: a constant cost for every step.

A solution is found at depth n, for n variables.
• Hence depth-first search can be used.


Backtracking search

Note that variable assignments are commutative
• E.g. [ step 1: WA = red; step 2: NT = green ] is equivalent to [ step 1: NT = green; step 2: WA = red ]

Only need to consider assignments to a single variable at each node: b = d and there are d^n leaves.

Depth-first search for CSPs with single-variable assignments is called backtracking search

Backtracking search is the basic uninformed algorithm for CSPs

Can solve n-queens for n ≈ 25
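Backtracking search in miniature, shown on 3-coloring a toy map fragment (WA, NT, SA mutually adjacent; the instance and variable order are invented for the example):

```python
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
colors = ["red", "green", "blue"]

def backtrack(assignment):
    if len(assignment) == len(neighbors):
        return assignment                     # complete, consistent: a solution
    var = next(v for v in neighbors if v not in assignment)  # one var per level
    for value in colors:
        # Check consistency against already-assigned neighbors before descending.
        if all(assignment.get(n) != value for n in neighbors[var]):
            result = backtrack({**assignment, var: value})
            if result:
                return result
    return None                               # every value conflicts: back up

solution = backtrack({})
assert all(solution[a] != solution[b] for a in neighbors for b in neighbors[a])
```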


Improving backtracking efficiency

General-purpose methods can give huge gains in speed:
• Which variable should be assigned next?
• In what order should its values be tried?
• Can we detect inevitable failure early?

Heuristics:
1. Most constrained variable

2. Most constraining variable

3. Least constraining value

4. Forward checking


Constraint propagation

Forward checking propagates information from assigned to unassigned variables, but doesn't provide early detection for all failures:

NT and SA cannot both be blue!

Constraint propagation repeatedly enforces constraints locally.


Arc consistency: The General Case

The simplest form of propagation makes each arc consistent.

X → Y is consistent iff for every value x of X there is some allowed y.


function AC-3(csp) returns the CSP, possibly with reduced domains
  inputs: csp, a binary CSP with variables {X1, X2, …, Xn}
  local variables: queue, a queue of arcs, initially all the arcs in csp
  while queue is not empty do
    (Xi, Xj) ← REMOVE-FIRST(queue)
    if REMOVE-INCONSISTENT-VALUES(Xi, Xj) then
      for each Xk in NEIGHBORS[Xi] do
        add (Xk, Xi) to queue

function REMOVE-INCONSISTENT-VALUES(Xi, Xj) returns true iff we remove a value
  removed ← false
  for each x in DOMAIN[Xi] do
    if no value y in DOMAIN[Xj] allows (x,y) to satisfy the constraints between Xi and Xj
      then delete x from DOMAIN[Xi]; removed ← true
  return removed

General version of AC-3

Time complexity: O(n^2 d^3)

Typo on your version fixed
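A runnable sketch of general AC-3, following the pseudocode above (with the standard fixes: the inner test looks in Xj's domain, and arcs (Xk, Xi) are requeued). The two-variable "<" CSP and the constraint encoding are illustrative assumptions:

```python
from collections import deque

def revise(domains, constraints, xi, xj):
    """Remove values of xi with no supporting value in xj; return True if any removed."""
    removed = False
    for x in list(domains[xi]):
        if not any(constraints[(xi, xj)](x, y) for y in domains[xj]):
            domains[xi].remove(x)
            removed = True
    return removed

def ac3(domains, constraints):
    # constraints: dict mapping each arc (i, j) to a predicate on (x, y)
    queue = deque(constraints)
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, constraints, xi, xj):
            for (xk, xl) in constraints:         # requeue arcs pointing at xi
                if xl == xi and xk != xj:
                    queue.append((xk, xi))
    return domains

doms = {"X": {1, 2, 3}, "Y": {1, 2, 3}}
cons = {("X", "Y"): lambda x, y: x < y, ("Y", "X"): lambda y, x: y > x}
ac3(doms, cons)
assert doms == {"X": {1, 2}, "Y": {2, 3}}   # 3 can't be < anything; 1 can't be > anything
```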


Local search for CSPs

Hill-climbing and simulated annealing typically work with "complete" states, i.e., all variables assigned.

To apply to CSPs:
• allow states with unsatisfied constraints
• operators reassign variable values

Variable selection: randomly select any conflicted variable

Value selection by the min-conflicts heuristic:
• choose the value that violates the fewest constraints
• i.e., hill-climb with h(n) = total number of violated constraints


Example: n-queens

States: 4 queens in 4 columns (4^4 = 256 states)
Actions: move a queen within its column
Goal test: no attacks
Evaluation: h(n) = number of attacks

Given a random initial state, min-conflicts can solve n-queens in almost constant time for arbitrary n with high probability (e.g., n = 10,000,000).
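A min-conflicts sketch for n-queens: start from a random complete assignment (one queen per column), then repeatedly pick a conflicted column and move its queen to a minimum-conflict row, breaking ties at random. The board encoding and step budget are our own choices:

```python
import random

def conflicts(board, col, row):
    # Queens attacking square (col, row), not counting column `col` itself.
    return sum(1 for c, r in enumerate(board)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts(n, max_steps=100000, seed=1):
    rng = random.Random(seed)
    board = [rng.randrange(n) for _ in range(n)]        # random complete state
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(board, c, board[c]) > 0]
        if not conflicted:
            return board                                # no attacks: solved
        col = rng.choice(conflicted)                    # random conflicted variable
        scores = [conflicts(board, col, r) for r in range(n)]
        best = min(scores)                              # min-conflicts value choice
        board[col] = rng.choice([r for r in range(n) if scores[r] == best])
    return None

solution = min_conflicts(8)
assert all(conflicts(solution, c, solution[c]) == 0 for c in range(8))
```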