Informed search algorithms Chapter 4


Page 1:

Informed search algorithms

Chapter 4

Page 2:

Outline

• Best-first search

• Greedy best-first search

• A* search

• Heuristics

• Local search algorithms

• Hill-climbing search

Page 3:

Best-first search

• Idea: use an evaluation function f(n) for each node, an estimate of "desirability"; expand the most desirable unexpanded node

• Implementation: order the nodes in the fringe in decreasing order of desirability

• Special cases:
 – greedy best-first search
 – A* search
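The scheme above can be sketched in a few lines of Python (a minimal illustration, not from the slides; the toy graph and the choice of f are placeholders):

```python
import heapq

def best_first_search(start, successors, f, is_goal):
    """Generic best-first search: always expand the unexpanded node
    whose path has the best (lowest) evaluation f."""
    fringe = [(f([start]), [start])]          # priority queue ordered by f
    while fringe:
        _, path = heapq.heappop(fringe)       # most desirable node first
        node = path[-1]
        if is_goal(node):
            return path
        for child in successors(node):
            if child not in path:             # avoid trivial loops
                new_path = path + [child]
                heapq.heappush(fringe, (f(new_path), new_path))
    return None

# Hypothetical toy graph; with f = path length this behaves like
# breadth-first search on a unit-cost graph.
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
path = best_first_search('S', lambda n: graph[n], len, lambda n: n == 'G')
```

The special cases are obtained simply by plugging in a different f: f = h gives greedy best-first search, f = g + h gives A*.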

Page 4:

Romania with step costs in km (map figure not reproduced)

Page 5:

Greedy best-first search

• Evaluation function f(n) = h(n) (heuristic)

= estimate of cost from n to goal

• e.g., hSLD(n) = straight-line distance from n to Bucharest

• Greedy best-first search expands the node that appears to be closest to goal
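As a concrete sketch, greedy best-first search on a fragment of the Romania map (road distances and hSLD values are the standard textbook numbers; only a subset of cities is included, and 'Rimnicu' abbreviates Rimnicu Vilcea):

```python
import heapq

# Road distances in km (symmetric), fragment of the Romania map.
roads = {
    'Arad': {'Zerind': 75, 'Timisoara': 118, 'Sibiu': 140},
    'Zerind': {'Arad': 75, 'Oradea': 71},
    'Oradea': {'Zerind': 71, 'Sibiu': 151},
    'Timisoara': {'Arad': 118},
    'Sibiu': {'Arad': 140, 'Oradea': 151, 'Fagaras': 99, 'Rimnicu': 80},
    'Fagaras': {'Sibiu': 99, 'Bucharest': 211},
    'Rimnicu': {'Sibiu': 80, 'Pitesti': 97},
    'Pitesti': {'Rimnicu': 97, 'Bucharest': 101},
    'Bucharest': {'Fagaras': 211, 'Pitesti': 101},
}
# hSLD: straight-line distance to Bucharest.
h_sld = {'Arad': 366, 'Zerind': 374, 'Oradea': 380, 'Timisoara': 329,
         'Sibiu': 253, 'Fagaras': 176, 'Rimnicu': 193, 'Pitesti': 100,
         'Bucharest': 0}

def greedy_best_first(start, goal):
    """Always expand the node that appears closest to the goal: f(n) = h(n)."""
    fringe = [(h_sld[start], [start], 0)]
    while fringe:
        _, path, cost = heapq.heappop(fringe)
        node = path[-1]
        if node == goal:
            return path, cost
        for child, step in roads[node].items():
            if child not in path:
                heapq.heappush(fringe, (h_sld[child], path + [child], cost + step))
    return None

path, cost = greedy_best_first('Arad', 'Bucharest')
# Greedy goes Arad -> Sibiu -> Fagaras -> Bucharest (450 km), missing the
# cheaper 418 km route through Rimnicu and Pitesti.
```

This illustrates why greedy best-first search is not optimal: it commits to the node that looks closest, ignoring the cost already paid.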

Pages 6-9: Greedy best-first search example (step-by-step expansion figures on the Romania map, not reproduced)

Page 10:

Properties of greedy best-first search

• Complete? No – can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → …

• Time? O(b^m), where m is the maximum depth of the search space, but a good heuristic can give dramatic improvement

• Space? O(b^m) -- keeps all nodes in memory

• Optimal? No

Page 11:

A* search

• Idea: avoid expanding paths that are already expensive

• Evaluation function f(n) = g(n) + h(n)

• g(n) = cost so far to reach n

• h(n) = estimated cost from n to goal

• f(n) = estimated total cost of path through n to goal
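A minimal A* sketch on the same fragment of the Romania map (road distances and hSLD values are the standard textbook numbers; 'Rimnicu' abbreviates Rimnicu Vilcea):

```python
import heapq

# Road distances in km (symmetric), fragment of the Romania map.
roads = {
    'Arad': {'Zerind': 75, 'Timisoara': 118, 'Sibiu': 140},
    'Zerind': {'Arad': 75, 'Oradea': 71},
    'Oradea': {'Zerind': 71, 'Sibiu': 151},
    'Timisoara': {'Arad': 118},
    'Sibiu': {'Arad': 140, 'Oradea': 151, 'Fagaras': 99, 'Rimnicu': 80},
    'Fagaras': {'Sibiu': 99, 'Bucharest': 211},
    'Rimnicu': {'Sibiu': 80, 'Pitesti': 97},
    'Pitesti': {'Rimnicu': 97, 'Bucharest': 101},
    'Bucharest': {'Fagaras': 211, 'Pitesti': 101},
}
# hSLD: straight-line distance to Bucharest.
h_sld = {'Arad': 366, 'Zerind': 374, 'Oradea': 380, 'Timisoara': 329,
         'Sibiu': 253, 'Fagaras': 176, 'Rimnicu': 193, 'Pitesti': 100,
         'Bucharest': 0}

def a_star(start, goal):
    """Expand the fringe node with the lowest f(n) = g(n) + h(n)."""
    fringe = [(h_sld[start], 0, [start])]     # (f, g, path)
    while fringe:
        f, g, path = heapq.heappop(fringe)
        node = path[-1]
        if node == goal:
            return path, g
        for child, step in roads[node].items():
            if child not in path:
                g2 = g + step
                heapq.heappush(fringe, (g2 + h_sld[child], g2, path + [child]))
    return None

path, cost = a_star('Arad', 'Bucharest')
# Unlike greedy search, A* finds the optimal 418 km route:
# Arad -> Sibiu -> Rimnicu -> Pitesti -> Bucharest.
```

Because g(n) counts the cost already paid, A* is steered away from the superficially attractive but expensive Fagaras route.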

Pages 12-17: A* search example (step-by-step expansion figures on the Romania map, not reproduced)

Page 18:

Admissible heuristics

• A heuristic h(n) is admissible if for every node n,

h(n) ≤ h*(n),

where h*(n) is the true cost to reach the goal state from n.

• An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic

• Example: hSLD(n) (never overestimates the actual road distance)

• Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal

Page 19:

Optimality of A* (proof)

• Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.

f(G2) = g(G2) since h(G2) = 0 (G2 is a goal)
 > g(G) since G2 is suboptimal
 = g(n) + d(n,G) since there is only one path from the root to G (tree) and n lies on it
 ≥ g(n) + h(n) since h is admissible
 = f(n) by definition of f

Hence f(G2) > f(n), and A* will never select G2 for expansion. Recall that a node is tested for the goal condition only when it is selected for expansion.

Page 20:

Properties of A*

• Complete? Yes

• Optimal? Yes

• Let C* be the cost of the optimal solution.
 – A* expands all nodes with f(n) < C*
 – A* expands some nodes with f(n) = C*
 – A* expands no nodes with f(n) > C*

Page 21:

A* using Graph-Search

• A* using Graph-Search can produce sub-optimal solutions, even if the heuristic function is admissible.

• Sub-optimal solutions can be returned because Graph-Search can discard the optimal path to a repeated state if it is not the first one generated.

Page 22:

Consistent heuristics

• A heuristic is consistent if for every node n and every successor n' of n generated by any action a,

h(n) ≤ c(n,a,n') + h(n')

• If h is consistent, we have f(n') = g(n') + h(n')

= g(n) + c(n,a,n') + h(n') ≥ g(n) + h(n) = f(n)

i.e., f(n) is non-decreasing along any path.

• Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal
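Consistency can also be checked numerically over a graph's edges; a small sketch on the Romania fragment (road distances and hSLD values are the standard textbook numbers):

```python
# Edges of a fragment of the Romania map: (n, n', road cost in km).
edges = [('Arad', 'Zerind', 75), ('Arad', 'Timisoara', 118),
         ('Arad', 'Sibiu', 140), ('Zerind', 'Oradea', 71),
         ('Oradea', 'Sibiu', 151), ('Sibiu', 'Fagaras', 99),
         ('Sibiu', 'Rimnicu', 80), ('Fagaras', 'Bucharest', 211),
         ('Rimnicu', 'Pitesti', 97), ('Pitesti', 'Bucharest', 101)]

# hSLD: straight-line distance to Bucharest.
h_sld = {'Arad': 366, 'Zerind': 374, 'Oradea': 380, 'Timisoara': 329,
         'Sibiu': 253, 'Fagaras': 176, 'Rimnicu': 193, 'Pitesti': 100,
         'Bucharest': 0}

def is_consistent(h, edges):
    """h is consistent iff h(n) <= c(n,a,n') + h(n') for every edge,
    checked in both directions since roads are two-way."""
    return all(h[n] <= c + h[m] and h[m] <= c + h[n]
               for n, m, c in edges)

consistent = is_consistent(h_sld, edges)   # True for these values
```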

Page 23:

Optimality under consistency

• Recall: if h is consistent, then A* expands nodes in order of non-decreasing f value.

• Hence the first goal node selected for expansion must be optimal.
 – Recall that a node is tested for the goal condition only when it is selected for expansion.

• Why? If a goal node G2 is selected for expansion later than G1, this means that:
 – f(G1) ≤ f(G2)
 – g(G1) + h(G1) ≤ g(G2) + h(G2)
 – h(G1) = h(G2) = 0 since G1 and G2 are goals, hence
 – g(G1) ≤ g(G2)

• which means that G2 costs at least as much as G1.

Page 24:

Straight-Line Distance Heuristic is Consistent

• Why? The general triangle inequality is satisfied when each side is measured by the straight-line distance:
 – d(n.state, goalstate) ≤ d(n.state, n'.state) + d(n'.state, goalstate)

• And c(n,a,n') is greater than or equal to the straight-line distance between n and n', since a road between two cities is at least as long as the straight line between them.

• Also recall that
 – d(n.state, goalstate) = h(n) and
 – d(n'.state, goalstate) = h(n'), and
 – d(n.state, n'.state) ≤ c(n,a,n') for any action a. Hence,
 – h(n) ≤ h(n') + d(n.state, n'.state) ≤ h(n') + c(n,a,n')

• I.e., h is consistent.

Page 25:

Admissible heuristics

E.g., for the 8-puzzle:

• h1(n) = number of misplaced tiles

• h2(n) = total Manhattan distance |x1-x2| + |y1-y2| (i.e., the number of squares each tile is from its desired location)

• h1(S) = ?

• h2(S) = ?

Page 26:

Admissible heuristics

E.g., for the 8-puzzle:

• h1(n) = number of misplaced tiles

• h2(n) = total Manhattan distance |x1-x2| + |y1-y2| (i.e., the number of squares each tile is from its desired location)

• h1(S) = ? 8

• h2(S) = ? 3+1+2+2+2+3+3+2 = 18
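These values can be computed directly; a sketch (the start state S below is assumed to be the usual textbook example with the goal laid out 0-8 in reading order, since the slide figure is not reproduced):

```python
# States are 9-tuples in reading order; 0 is the blank.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)
S = (7, 2, 4, 5, 0, 6, 8, 3, 1)       # assumed example start state

def h1(state):
    """Number of misplaced tiles (the blank does not count)."""
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def h2(state):
    """Total Manhattan distance of each tile from its goal square."""
    total = 0
    for i, t in enumerate(state):
        if t == 0:
            continue
        x1, y1 = divmod(i, 3)                  # current row, column
        x2, y2 = divmod(GOAL.index(t), 3)      # goal row, column
        total += abs(x1 - x2) + abs(y1 - y2)
    return total

# For this assumed state: h1(S) -> 8 and h2(S) -> 18.
```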

Page 27:

Why are they admissible?

• Misplaced tiles: no move can get more than one misplaced tile into place, so this measure is a guaranteed underestimate and hence admissible.

• Manhattan: each move can decrease the rectilinear distance of a tile from its goal by at most one, so h2 is likewise a guaranteed underestimate.

Page 28:

Are they consistent?

• c(n,a,n') = 1 for any action a

• Claim: h1(n) ≤ h1(n') + c(n,a,n') = h1(n') + 1

– Now, no move (action) can get more than one misplaced tile into place.

– Also, no move can create more than one new misplaced tile.

– Hence, the above follows. I.e. h1 is consistent.

• Similar reasoning for h2 as well.

Page 29:

A* Graph-Search (without consistent heuristic)

• The problem is that sub-optimal solutions can be returned because Graph-Search can discard the optimal path to a repeated state if it is not the first one generated.

• When the heuristic function h is consistent, then we proved that the optimal path to any repeated state is the first one to be followed.

• When h is not consistent, we can still use Graph-Search if we discard the costlier path to a repeated state. We store pairs (statekey, pathcost) in the hash table:

Graph-Search(problem, fringe)
  node = MakeNode(problem.initialstate)
  Insert(fringe, node)
  do
    if (Empty(fringe)) return null  // failure
    node = Remove(fringe)
    if (problem.GoalTest(node.state)) return node
    key_cost_pair = Find(closed, node.state.getKey())
    if (key_cost_pair == null || key_cost_pair.pathcost > node.pathcost)
      Insert(closed, (node.state.getKey(), node.pathcost))
      InsertAll(fringe, Expand(node, problem))
  while (true)
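A runnable sketch of this variant (the tiny graph and the admissible-but-inconsistent heuristic below are contrived for illustration; they are not from the slides):

```python
import heapq

# Contrived graph: the optimal path S-B-C-G costs 4, but the inconsistent
# heuristic (h(B) = 3 > c(B,C) + h(C) = 1) makes A* reach C first along
# the dearer path S-A-C, so a naive closed list locks in cost 5.
graph = {'S': [('A', 2), ('B', 1)], 'A': [('C', 1)],
         'B': [('C', 1)], 'C': [('G', 2)], 'G': []}
h = {'S': 0, 'A': 0, 'B': 3, 'C': 0, 'G': 0}   # admissible, not consistent

def a_star_graph_search(reopen):
    """A* Graph-Search. With reopen=True the closed table stores
    (state, pathcost) and a state is re-expanded whenever a cheaper
    path to it is found, as in the pseudocode above."""
    closed = {}
    fringe = [(h['S'], 0, 'S', ['S'])]
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if state == 'G':
            return g, path
        if state not in closed or (reopen and closed[state] > g):
            closed[state] = g
            for child, cost in graph[state]:
                g2 = g + cost
                heapq.heappush(fringe, (g2 + h[child], g2, child, path + [child]))
    return None

naive_cost, naive_path = a_star_graph_search(reopen=False)   # suboptimal: 5
best_cost, best_path = a_star_graph_search(reopen=True)      # optimal: 4
```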

Page 30:

A* using Graph-Search

• Let's try it here (worked example figure not reproduced)

Page 31:

Dominance

• If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1 and h2 is better for search.
 – Let C* be the cost of the optimal solution.
 – A* expands all nodes with f(n) < C*
 – A* expands some nodes with f(n) = C*
 – A* expands no nodes with f(n) > C*
 – Hence, we want f(n) (= g(n) + h(n)) to be as big as possible. Since we can't do anything about g(n), we are interested in having h(n) as big as possible.

• Typical search costs (average number of nodes expanded):
 – d=12: IDS = 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes
 – d=24: IDS = too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes
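A small experiment sketch on the 8-puzzle (the scrambled start state and the solver are illustrative, not from the slides; the goal layout is assumed 0-8 in reading order):

```python
import heapq

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def neighbors(state):
    """Slide an adjacent tile into the blank; yields successor states."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def h1(state):
    """Number of misplaced tiles (the blank does not count)."""
    return sum(t != 0 and t != GOAL[i] for i, t in enumerate(state))

def h2(state):
    """Total Manhattan distance (tile t's goal square is index t)."""
    return sum(abs(i // 3 - t // 3) + abs(i % 3 - t % 3)
               for i, t in enumerate(state) if t != 0)

def a_star(start, h):
    """A* graph search; returns (solution cost, nodes expanded)."""
    fringe, closed = [(h(start), 0, start)], set()
    while fringe:
        f, g, s = heapq.heappop(fringe)
        if s == GOAL:
            return g, len(closed)
        if s in closed:
            continue
        closed.add(s)
        for n in neighbors(s):
            if n not in closed:
                heapq.heappush(fringe, (g + 1 + h(n), g + 1, n))
    return None

start = (2, 4, 5, 1, 6, 8, 3, 7, 0)   # illustrative state, ~12 moves deep
cost1, exp1 = a_star(start, h1)
cost2, exp2 = a_star(start, h2)
# Both heuristics are admissible, so both find an optimal solution of the
# same cost; the dominating heuristic h2 typically expands far fewer nodes.
```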

Page 32:

The max of heuristics

• If a collection of admissible heuristics h1, …, hm is available for a problem, and none of them dominates any of the others, we create a compound heuristic:
 – h(n) = max {h1(n), …, hm(n)}

• Is it admissible? Yes, because each component is admissible, so h won't overestimate the distance to the goal.

• If h1, …, hm are consistent, is h consistent as well? Yes. For any action (move) a:

c(n,a,n') + h(n')
 = c(n,a,n') + max {h1(n'), …, hm(n')}
 = max {c(n,a,n') + h1(n'), …, c(n,a,n') + hm(n')}
 ≥ max {h1(n), …, hm(n)} (from consistency of each hi)
 = h(n)
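The compound heuristic itself is a one-liner; a sketch (h1 and h2 here stand for any admissible component heuristics, and the example values are arbitrary, chosen so that neither dominates the other):

```python
def h_max(state, heuristics):
    """Compound heuristic: the max of admissible heuristics is still
    admissible (it never overestimates) and preserves consistency."""
    return max(h(state) for h in heuristics)

# Arbitrary illustrative components: h1 is tighter at X, h2 at Y.
h1 = {'X': 5, 'Y': 2}.get
h2 = {'X': 3, 'Y': 4}.get

best_x = h_max('X', [h1, h2])   # 5: picks the tighter lower bound at X
best_y = h_max('Y', [h1, h2])   # 4: picks the tighter lower bound at Y
```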

Page 33:

Relaxed problems

• A problem with fewer restrictions on the actions is called a relaxed problem

• The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem

• If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution

• If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution

Page 34:

Local search algorithms

• In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution

• State space = set of "complete" configurations

• Find a configuration satisfying the constraints, e.g., n-queens

• In such cases, we can use local search algorithms, which keep a single "current" state, and try to improve it.

Page 35:

Example: n-queens

• Put n queens on an n × n board with no two queens on the same row, column, or diagonal

• How many successors from each state?

• (n-1)*n (each of the n queens can be moved to any of the n-1 other squares in its column)
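In the complete-state formulation (one queen per column, a state is a tuple of row positions), the successor count can be checked directly; a sketch:

```python
def successors(state):
    """All states obtained by moving one queen within its column."""
    n = len(state)
    result = []
    for col in range(n):
        for row in range(n):
            if row != state[col]:
                result.append(state[:col] + (row,) + state[col + 1:])
    return result

state = (0, 1, 2, 3, 4, 5, 6, 7)      # 8 queens, one per column
# Each of the 8 queens can move to 7 other rows: 8 * 7 = 56 successors.
n_succ = len(successors(state))
```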

Page 36:

8-queens problem

• h = number of pairs of queens that are attacking each other, either directly or indirectly

• h = 17 for the state shown in the slide figure (not reproduced here)
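The heuristic can be computed by counting pairs on the same row or diagonal; a sketch (the boards used below are hypothetical, not the slide's):

```python
from itertools import combinations

def attacking_pairs(state):
    """h = number of queen pairs attacking each other, directly or
    indirectly. state[col] = row of the queen in that column, so no
    two queens ever share a column."""
    h = 0
    for (c1, r1), (c2, r2) in combinations(enumerate(state), 2):
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):
            h += 1
    return h

# Hypothetical worst case: every queen on the same row, so all
# C(8,2) = 28 pairs attack each other.
worst = attacking_pairs((0,) * 8)
```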

Page 37:

Hill-climbing search

• "Like climbing Everest in thick fog with amnesia"

• Hill-Climbing chooses randomly among the set of best successors, if there is more than one.
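A sketch of steepest-descent hill climbing for 8-queens (the slides give no code; this is a minimal illustration that, as described above, chooses randomly among the best successors):

```python
import random

def attacking_pairs(state):
    """h = number of queen pairs attacking each other (state[col] = row)."""
    n = len(state)
    return sum(1 for a in range(n) for b in range(a + 1, n)
               if state[a] == state[b] or abs(state[a] - state[b]) == b - a)

def hill_climb(state, rng):
    """Move to a best successor until no successor improves h;
    may stop at a local minimum where h > 0."""
    while True:
        h = attacking_pairs(state)
        succs = [state[:c] + (r,) + state[c + 1:]
                 for c in range(len(state)) for r in range(len(state))
                 if r != state[c]]
        best_h = min(map(attacking_pairs, succs))
        if best_h >= h:                    # no successor improves: stop
            return state, h
        best = [s for s in succs if attacking_pairs(s) == best_h]
        state = rng.choice(best)           # random tie-break

rng = random.Random(0)
start = tuple(rng.randrange(8) for _ in range(8))
final, h = hill_climb(start, rng)
# h never exceeds its value at the start; it is often stuck above 0.
```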

Page 38:

Hill-climbing search

• Problem: depending on initial state, can get stuck in local maxima

Page 39:

Hill-climbing search: 8-queens problem

• A local minimum with h = 1

Page 40:

Local Minima

• Starting from a randomly generated state of the 8-queens, hill-climbing gets stuck 86% of the time, solving only 14% of problems.

• It works quickly, taking just 4 steps on average when it succeeds, and 3 when it gets stuck – not bad for a state space with 8^8 ≈ 17 million states.

• Memory is constant, since we keep only one state.

• Since the algorithm is otherwise so attractive, what can we do to avoid getting stuck?

Page 41:

Random-restart hill climbing

• Well known adage: “If at first you don’t succeed, try, try again.”

• It conducts a series of hill-climbing searches from randomly generated initial states, stopping when a goal is found.

• If each hill-climbing search has a probability p of success, then the expected number of restarts is 1/p.

• For 8-queens, p ≈ 0.14, so we need roughly 1/p ≈ 7 iterations to find a goal, i.e., about 22 steps on average (6 failures × 3 steps each, plus 4 steps for the success)
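Random-restart hill climbing can be sketched as a loop around the basic climb (a self-contained minimal illustration, not the slides' code):

```python
import random

def attacking_pairs(state):
    """h = number of queen pairs attacking each other (state[col] = row)."""
    n = len(state)
    return sum(1 for a in range(n) for b in range(a + 1, n)
               if state[a] == state[b] or abs(state[a] - state[b]) == b - a)

def hill_climb(state, rng):
    """Steepest descent on h; stops at a local minimum."""
    while True:
        h = attacking_pairs(state)
        succs = [state[:c] + (r,) + state[c + 1:]
                 for c in range(len(state)) for r in range(len(state))
                 if r != state[c]]
        best_h = min(map(attacking_pairs, succs))
        if best_h >= h:
            return state, h
        state = rng.choice([s for s in succs if attacking_pairs(s) == best_h])

def random_restart(n=8, seed=0, max_restarts=1000):
    """Restart from random states until a goal (h = 0) is found."""
    rng = random.Random(seed)
    for _ in range(max_restarts):
        start = tuple(rng.randrange(n) for _ in range(n))
        state, h = hill_climb(start, rng)
        if h == 0:
            return state
    return None

solution = random_restart()
```

With success probability p ≈ 0.14 per climb, a solution is found after a handful of restarts on average, matching the expected 1/p restarts above.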