2. Searching

Uploaded by annia-khan, 17-Jan-2016

Page 1: 2. Searching


Problems & Problem Spaces in AI

Page 2: 2. Searching


Problems

There are 3 general categories of problems in AI:

• Single-agent path finding problems.

• Two-player games.

• Constraint satisfaction problems.

Page 3: 2. Searching

Single-Agent Path-Finding Problems

• In these problems we have a single problem-solver making the decisions, and the task is to find a sequence of primitive steps that takes us from the initial location to the goal location.

• Famous examples:
– Rubik's Cube (Erno Rubik, 1975).
– Sliding-Tile Puzzle.
– Navigation, e.g., the Traveling Salesman Problem.

Page 4: 2. Searching

Two-Player Games

• In a two-player game, one must consider the moves of an opponent, and the ultimate goal is a strategy that will guarantee a win whenever possible.

• Two-player games have received the most attention from researchers so far, but researchers are now starting to consider more complex games, many of which involve an element of chance.

• The best Chess, Checkers, and Othello players in the world are computer programs!

Page 5: 2. Searching

Constraint-Satisfaction Problems

• In these problems we also have a single agent making all the decisions, but here we are not concerned with the sequence of steps required to reach the solution, only with the solution itself.

• The task is to identify a state of the problem such that all the constraints of the problem are satisfied.

• Famous examples:
– Eight Queens Problem.
– Missionaries and Cannibals.
– Sudoku.

Page 6: 2. Searching

The Problem Spaces

• A problem space consists of a set of states of a problem and a set of operators that change the state.

– State: a symbolic structure that represents a single configuration of the problem in sufficient detail to allow problem solving to proceed.

– Operator: a function that takes a state and maps it to another state.

Page 7: 2. Searching

A problem (or, more precisely, a problem instance) consists of:

– A problem space
– An initial state
– A set of goal states, explicitly or implicitly characterized
– A solution, i.e., a sequence of operator applications leading from the initial state to a goal state; or, equivalently, a path through the state space from the initial to a final state.

Page 8: 2. Searching

Representing States

• At any moment, the relevant world is represented as a state.
– Initial (start) state: S.
– An action (or an operation) changes the current state to another state (if it is applied): a state transition.
– An action can be taken (is applicable) only if its precondition is met by the current state.
– For a given state, there may be more than one applicable action.
– Goal state: a state that satisfies the goal description or passes the goal test.
– Dead-end state: a non-goal state to which no action is applicable.

Page 9: 2. Searching

Representing States

• The state-space representation of a problem is a triplet (I, O, G), where I is the initial state, O is a set of operators on states, and G is the set of goal states.
• A state space can be organized as a graph:
– nodes: states in the space
– arcs: actions/operations
• A solution is a path from the initial state to a goal state.
• The size of a problem is usually described in terms of the number of states (the size of the state space) that are possible.

Page 10: 2. Searching

Some Example Problems

• Toy problems and micro-worlds:
– Missionaries and Cannibals
– 8-Puzzle
– Cryptarithmetic
– Remove 5 Sticks
– Traveling Salesman Problem (TSP)

• Real-world problems

Page 11: 2. Searching

Missionaries and Cannibals

Three cannibals and three missionaries come to a crocodile-infested river. There is a boat on their side that can be used by either one or two people. If the cannibals ever outnumber the missionaries on either of the river's banks, the missionaries will get eaten.

How can the boat be used to safely carry all the missionaries and cannibals across the river?

Page 12: 2. Searching

Missionaries and Cannibals

There are 3 missionaries, 3 cannibals, and 1 boat that can carry up to two people on one side of a river.

Goal: Move all the missionaries and cannibals across the river.

Constraint: Missionaries can never be outnumbered by cannibals on either side of the river, or else the missionaries are killed.

State: the configuration of missionaries, cannibals, and the boat on each side of the river.

Operators: Move the boat, containing some set of occupants, across the river (in either direction) to the other side.

Page 13: 2. Searching

Missionaries and Cannibals Solution

Near side Far side

0- Initial setup: MMMCCC B -

1- Two cannibals cross over: MMMC B CC

2- One comes back: MMMCC B C

3- Two cannibals go over again: MMM B CCC

4- One comes back: MMMC B CC

5- Two missionaries cross: MC B MMCC

6- A missionary & cannibal return: MMCC B MC

7- Two missionaries cross again: CC B MMMC

8- A cannibal returns: CCC B MMM

9- Two cannibals cross: C B MMMCC

10- One returns: CC B MMMC

11- And brings over the third: - B MMMCCC

Page 14: 2. Searching

State Space Example

• 3 missionaries, 3 cannibals, 1 boat.

• The boat can hold at most two people.

• Cannibals may never outnumber missionaries (on either side).

• The initial state is (3, 3, 1), representing the number of missionaries, cannibals, and boats on the initial side.

• The goal state is (0, 0, 0).

• Operators are addition or subtraction of the vectors (1 0 1), (2 0 1), (0 1 1), (0 2 1), (1 1 1).

• An operator applies only if the result lies between (0 0 0) and (3 3 1), and, implicitly, only if the resulting state satisfies the safety constraint.

The solution can also be written as the state sequence: (331, 220, 321, 300, 311, 110, 221, 020, 031, 010, 021, 000).
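This formulation translates almost directly into code. The sketch below (a Python illustration assuming the (m, c, b) vector encoding from this slide; the function names are our own) generates the legal successors of a state and uses a breadth-first search to find the 11-crossing solution:

```python
from collections import deque

# State (m, c, b): missionaries, cannibals, and boats on the start bank.
MOVES = [(1, 0, 1), (2, 0, 1), (0, 1, 1), (0, 2, 1), (1, 1, 1)]

def safe(m, c):
    # Missionaries are never outnumbered on either bank;
    # the far bank holds 3 - m missionaries and 3 - c cannibals.
    return (m == 0 or m >= c) and (m == 3 or 3 - m >= 3 - c)

def successors(state):
    m, c, b = state
    sign = -1 if b == 1 else 1            # the boat leaves the bank it is on
    result = []
    for dm, dc, db in MOVES:
        nm, nc, nb = m + sign * dm, c + sign * dc, b + sign * db
        if 0 <= nm <= 3 and 0 <= nc <= 3 and 0 <= nb <= 1 and safe(nm, nc):
            result.append((nm, nc, nb))
    return result

def solve():
    # Breadth-first search from (3, 3, 1) to (0, 0, 0).
    start, goal = (3, 3, 1), (0, 0, 0)
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for s in successors(path[-1]):
            if s not in seen:
                seen.add(s)
                frontier.append(path + [s])
```

Because BFS explores states level by level, the first solution it finds uses the minimum number of crossings (11 moves, 12 states), matching the state sequence above.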

Page 15: 2. Searching

Searching Strategies

Page 16: 2. Searching

Search Directions

• The objective of a search procedure is to discover a path through a problem space from an initial configuration to a goal state. There are two directions in which a search can proceed:

1. DATA-DIRECTED
– Forward, from the start states.

2. GOAL-DIRECTED
– Backward, from the goal states.

Page 17: 2. Searching

Forward Search (Data-Directed / Data-Driven Reasoning / Forward Chaining)

This search starts from the available information and tries to draw conclusions regarding the situation or the goal attainment. The process continues until (hopefully) it generates a path that satisfies the goal condition.

This search tries to encode knowledge about how to respond to certain inputs.

Page 18: 2. Searching

Backward Search (Goal directed/driven / Backward Chaining)

This search starts from expectations of what the goal is or what is to happen (a hypothesis), then seeks evidence that supports (or contradicts) those expectations. The problem solver begins with the goal to be solved, then finds rules or moves that could be used to generate this goal, and determines what conditions must be true to use them.

These conditions become the new goals (subgoals) for the search. This process continues, working backward through successive subgoals, until (hopefully) a path is generated that leads back to the facts of the problem. In short, it aims to encode knowledge about how to achieve particular goals.

Page 19: 2. Searching

• Goal driven search is particularly useful in situations in which the goal can be clearly specified (for example, a theorem that is to be proved or finding an exit from a maze).

• It is also clearly the best choice in situations such as medical diagnosis where the goal (the condition to be diagnosed) is known, but the rest of the data (in this case, the causes of the condition) needs to be found.

Page 20: 2. Searching

Goal-driven search uses knowledge of the goal to guide the search.

Use goal-driven search if:

• A goal or hypothesis is given in the problem or can easily be formulated.

• There are a large number of rules that match the facts of the problem and thus produce an increasing number of conclusions or goals (which would make forward search inefficient).

• Problem data are not given but must be acquired by the problem solver (which would make forward search impossible).

Page 21: 2. Searching

• Data-driven search is most useful when the initial data is provided, and it is not clear what the goal is.

• For example, a system that analyzes astronomical data and thus makes deductions about the nature of stars and planets would receive a great deal of data, but it would not necessarily be given any direct goals.

• Rather, it would be expected to analyze the data and determine conclusions of its own. This kind of system has a huge number of possible goals that it might locate.

• In this case, data-driven search is most appropriate.

Page 22: 2. Searching

Data-driven search uses knowledge and constraints found in the given data to search along lines known to be true.

Use data-driven search if:

• All or most of the data are given in the initial problem statement.

• There are a large number of potential goals, but there are only a few ways to use the facts and the given information of a particular problem.

• It is difficult to form a goal or hypothesis.

Page 23: 2. Searching

Uninformed vs. Informed Search

• Uninformed Search Strategies

(Blind Search)

– Breadth-First search

– Depth-First search

– Uniform-Cost search

– Depth-First Iterative Deepening search


Page 24: 2. Searching

Uninformed vs. Informed Search

Informed Search Strategies (Heuristic Search)

• Hill climbing

• Best-first search

• Greedy Search

• Beam search

• Algorithm A

• Algorithm A*

Page 25: 2. Searching

[Figure: an example search graph — S connects to A (cost 1), B (5), and C (8), with deeper nodes D, E, F and edge costs 3, 7, 6.]

Page 26: 2. Searching

Formalizing Search in a State Space

• A state space is a graph (V, E), where V is a set of nodes and E is a set of arcs, each arc directed from one node to another.
• node: corresponds to a state
– the state description
– plus, optionally, other information related to the parent of the node, the operation used to generate the node from that parent, and other bookkeeping data
• arc: corresponds to an applicable action/operation
– the source and destination nodes are called the parent (immediate predecessor) and child (immediate successor) nodes with respect to each other
– similarly, ancestors (predecessors) and descendants (successors)
– each arc has a fixed, non-negative cost associated with it, corresponding to the cost of the action

Page 27: 2. Searching

• node generation: making explicit a node by applying an action to another node which has been made explicit

• node expansion: generate all children of an explicit node by applying all applicable operations to that node

• One or more nodes are designated as start nodes

• A goal test predicate is applied to a node to determine if its associated state is a goal state

• A solution is a sequence of operations that is associated with a path in a state space from a start node to a goal node

• The cost of a solution is the sum of the arc costs on the solution path

Page 28: 2. Searching

• State-space search is the process of searching through a state space for a solution by making explicit a sufficient portion of an implicit state-space graph to include a goal node. – Hence, initially V={S}, where S is the start node; when S is expanded,

its successors are generated and those nodes are added to V and the associated arcs are added to E. This process continues until a goal node is generated (included in V) and identified (by goal test)

• During search, a node can be in one of the three categories:– Not generated yet (has not been made explicit yet)– OPEN: generated but not expanded– CLOSED: expanded– Search strategies differ mainly on how to select an OPEN node for

expansion at each step of search
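The OPEN/CLOSED bookkeeping can be sketched as one generic search loop. This is an illustrative Python sketch (the tiny graph and the function names are assumptions, not from the slides); popping OPEN from the front gives breadth-first behaviour, popping from the back gives depth-first behaviour:

```python
from collections import deque

def state_space_search(start, successors, is_goal, fifo=True):
    """Expand OPEN nodes until a node selected for expansion is a goal."""
    open_list = deque([(start, (start,))])   # OPEN: generated, not yet expanded
    closed = set()                           # CLOSED: already expanded
    while open_list:
        node, path = open_list.popleft() if fifo else open_list.pop()
        if is_goal(node):                    # goal test at selection time
            return list(path)
        if node in closed:
            continue
        closed.add(node)                     # node expansion
        for child in successors(node):       # node generation
            if child not in closed:
                open_list.append((child, path + (child,)))
    return None

# A tiny hypothetical graph for illustration.
GRAPH = {'S': ['A', 'B'], 'A': ['C'], 'B': [], 'C': []}
```

The CLOSED set is what turns tree search into graph search: a state that has already been expanded is never expanded again, so the loop terminates even when the graph contains cycles.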

Page 29: 2. Searching

Breadth-First Search (BFS)

• In breadth-first search, when a state is examined, all of its siblings are examined before any of its children.
• The space is searched level by level, proceeding all the way across one level before going down to the next.
• BFS is suitable for problems with shallow solutions.

Page 30: 2. Searching
Page 31: 2. Searching

Breadth-First Search (BFS)

Algorithm outline:
– Always select from OPEN the node with the smallest depth for expansion, and put all newly generated nodes into OPEN.
– OPEN is organized as a FIFO (first-in, first-out) list, i.e., a queue.
– Terminate if a node selected for expansion is a goal.
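As a sketch of this outline in Python, the fragment below runs BFS on the example graph used later in this chapter (the adjacency list is reconstructed from the worked trace: S → A, B, C; A → D, E, G; B → G'; C → G"):

```python
from collections import deque

# Adjacency list reconstructed from the worked example.
GRAPH = {
    'S': ['A', 'B', 'C'],
    'A': ['D', 'E', 'G'],
    'B': ["G'"],
    'C': ['G"'],
    'D': [], 'E': [], 'G': [], "G'": [], 'G"': [],
}

def bfs(start, goals):
    open_list = deque([[start]])       # OPEN as a FIFO queue of paths
    while open_list:
        path = open_list.popleft()     # shallowest node first
        node = path[-1]
        if node in goals:              # goal test when selected for expansion
            return path
        for child in GRAPH[node]:
            open_list.append(path + [child])
    return None
```

On this graph the queue discipline expands S, A, B, C, D, E and then selects the goal G, returning the path S–A–G.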

Page 32: 2. Searching

Evaluating Search Strategies

Completeness
– Is the algorithm guaranteed to find a solution when there is one?
– A complete search strategy guarantees finding a solution whenever one exists.

Time Complexity
– How long (worst or average case) does it take to find a solution? Usually measured in terms of the number of nodes expanded.

Page 33: 2. Searching

Evaluating Search Strategies

Space Complexity
• How much memory is needed to perform the search?
• Usually measured in terms of the maximum size that the OPEN list reaches during the search.

Optimality/Admissibility
• If a solution is found, is it guaranteed to be an optimal or highest-quality solution among the different solutions? For example, is it the one with minimum cost?

Page 34: 2. Searching

Breadth-First Search (BFS)

Properties:
– Complete.
– Optimal (i.e., admissible) if all operators have the same cost; in that case the search finds the shallowest solution.
– Exponential time and space complexity: O(b^d) nodes will be generated, where d is the depth of the solution and b is the branching factor (i.e., the number of children) at each node.

Page 35: 2. Searching

Example Illustrating Uninformed Search Strategies

[Figure: example graph — S connects to A (cost 1), B (5), C (8); deeper nodes D, E and goal G, with edge costs 3, 7, 9, 4, 5.]
Page 36: 2. Searching

Example Illustrating Uninformed Search Strategies

[Figure: the same example graph with goal nodes G, G', G" — S to A (1), B (5), C (8); A to D (3), E (7), G (9); B to G' (4); C to G" (5).]
Page 37: 2. Searching

Breadth-First Search

Exp. node   OPEN list          CLOSED list
            { S }              {}
S           { A B C }          { S }
A           { B C D E G }      { S A }
B           { C D E G G' }     { S A B }
C           { D E G G' G" }    { S A B C }
D           { E G G' G" }      { S A B C D }
E           { G G' G" }        { S A B C D E }
G           { G' G" }          { S A B C D E }

Solution path found is S A G  <-- this G also has cost 10
Number of nodes expanded (including goal node) = 7

Page 38: 2. Searching

[Figure: the example graph redrawn with goal nodes G, G', G" and edge costs 1, 5, 8, 3, 7, 9, 4, 5.]

Page 39: 2. Searching

Depth-First Search (DFS)

• In depth-first search, when a state is examined, all of its children and their descendants are examined before any of its siblings.

Page 40: 2. Searching

• Depth first search always expands the deepest node in the current fringe of the search tree.

• The search proceeds immediately to the deepest level of the search tree, where the nodes have no successors.

• As those nodes are expanded, they are dropped from the fringe, so then the search "backs up " to the next shallowest node that still has unexplored successors.

Page 41: 2. Searching
Page 42: 2. Searching

Depth-First Search (DFS)

• Algorithm outline:
– Always select from OPEN the node with the greatest depth for expansion, and put all newly generated nodes into OPEN.
– OPEN is organized as a LIFO (last-in, first-out) list, i.e., a stack.
– Terminate if a node selected for expansion is a goal.
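Mirroring the BFS sketch, only the OPEN discipline changes: a stack instead of a queue. This Python sketch uses the same reconstructed example graph; children are pushed in reverse so the leftmost child is expanded first, matching the worked trace:

```python
# Adjacency list reconstructed from the worked example.
GRAPH = {
    'S': ['A', 'B', 'C'],
    'A': ['D', 'E', 'G'],
    'B': ["G'"],
    'C': ['G"'],
    'D': [], 'E': [], 'G': [], "G'": [], 'G"': [],
}

def dfs(start, goals):
    open_list = [[start]]              # OPEN as a LIFO stack of paths
    while open_list:
        path = open_list.pop()         # deepest node first
        node = path[-1]
        if node in goals:              # goal test when selected for expansion
            return path
        for child in reversed(GRAPH[node]):
            open_list.append(path + [child])
    return None
```

On this graph DFS expands S, A, D, E and then selects G: five nodes including the goal, as in the trace later in the chapter.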

Page 43: 2. Searching

Depth-First Search (DFS)

• May not terminate without a "depth bound," i.e., cutting off search below a fixed depth D. (How do we determine the depth bound?)
• Not complete (with or without cycle detection, and with or without a cutoff depth).
• Exponential time, O(b^d), but only linear space, O(bd), is required.
• Can find deep solutions quickly if lucky.
• When the search hits a dead end, it can only back up one level at a time, even if the "problem" occurred because of a bad operator choice near the top of the tree. Hence, it only does "chronological backtracking".

Page 44: 2. Searching

Example Illustrating Depth-First Search

[Figure: the example graph with goal nodes G, G', G" and edge costs 1, 5, 8, 3, 7, 9, 4, 5.]

Page 45: 2. Searching

Depth-First Search (DFS)

Exp. node   OPEN list
            { S }
S           { A B C }
A           { D E G B C }
D           { E G B C }
E           { G B C }
G           { B C }

Solution path found is S A G  <-- this G has cost 10
Number of nodes expanded (including goal node) = 5
Total search cost = 20

Page 46: 2. Searching

Depth-First Iterative Deepening (DFID)

• BFS and DFS both have exponential time complexity, O(b^d); BFS is complete but has exponential space complexity; DFS is incomplete.
• Space is often a harder resource constraint than time.
• Can we have an algorithm that:
– is complete,
– has linear space complexity, and
– has time complexity of O(b^d)?

DFID, introduced by Korf in 1985, is such an algorithm.

Page 47: 2. Searching

Depth-First Iterative Deepening (DFID)

DFID Algorithm

It involves repeatedly carrying out DFS on the tree, starting with a DFS limited to a depth of one, then a DFS of depth two, and so on, until a goal is found.
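A minimal sketch of this idea in Python: a recursive depth-limited DFS wrapped in a loop over increasing limits (the adjacency list is a hypothetical example, and `max_depth` is just a safety cap):

```python
def depth_limited(graph, node, goal, limit):
    # DFS that is cut off below the given depth; returns a path or None.
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in graph.get(node, []):
        sub = depth_limited(graph, child, goal, limit - 1)
        if sub is not None:
            return [node] + sub
    return None

def dfid(graph, start, goal, max_depth=50):
    # Repeat depth-limited DFS with limits 0, 1, 2, ... until a goal is found.
    for limit in range(max_depth + 1):
        path = depth_limited(graph, start, goal, limit)
        if path is not None:
            return path
    return None

# Hypothetical example graph.
GRAPH = {'S': ['A', 'B', 'C'], 'A': ['D', 'E', 'G']}
```

Because the limit grows one level at a time, the first solution found is a shallowest one (like BFS), while each individual pass uses only the linear space of DFS.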

Page 48: 2. Searching
Page 49: 2. Searching
Page 50: 2. Searching

Depth-First Iterative Deepening (DFID)

• Complete (iteratively generates all nodes up to depth d).
• Optimal/admissible if all operators have the same cost. Otherwise, not optimal, but it does guarantee finding a solution of shortest length (like BFS).

Page 51: 2. Searching

Depth-First Iterative Deepening (DFID)

Linear space complexity (like DFS).

Time complexity is a little worse than BFS or DFS, because nodes near the top of the search tree are generated multiple times; but because almost all of the nodes are near the bottom of a tree, the worst-case time complexity is still exponential, O(b^d).

Page 52: 2. Searching

Branching factor = the maximum number of children of a node (here, b = 2).

Page 53: 2. Searching
Page 54: 2. Searching
Page 55: 2. Searching

Uniform-Cost Search (UCS)

• Let g(n) = the cost of the path from the start node to an open node n.
• Algorithm outline:
– Always select from OPEN the node with the least g(.) value for expansion, and put all newly generated nodes into OPEN.
– Nodes in OPEN are sorted by their g(.) values (in ascending order).
– Terminate if a node selected for expansion is a goal.
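An OPEN list sorted by g(.) is naturally a priority queue. This Python sketch runs on the weighted example graph, with edge costs reconstructed from the worked trace later in the chapter:

```python
import heapq

# Weighted adjacency list reconstructed from the worked example.
GRAPH = {
    'S': [('A', 1), ('B', 5), ('C', 8)],
    'A': [('D', 3), ('E', 7), ('G', 9)],
    'B': [("G'", 4)],
    'C': [('G"', 5)],
    'D': [], 'E': [], 'G': [], "G'": [], 'G"': [],
}

def ucs(start, goals):
    open_list = [(0, [start])]             # OPEN kept sorted by g via a heap
    while open_list:
        g, path = heapq.heappop(open_list) # least-cost node first
        node = path[-1]
        if node in goals:                  # delayed goal test, at expansion
            return path, g
        for child, cost in GRAPH[node]:
            heapq.heappush(open_list, (g + cost, path + [child]))
    return None
```

On this graph UCS returns the path S–B–G' with cost 9, cheaper than the cost-10 path S–A–G that BFS and DFS find.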

Page 56: 2. Searching

[Figure: the weighted example graph — S to A (1), B (5), C (8); A to D (3), E (7), G (9); B to G' (4); C to G" (5).]

Page 57: 2. Searching

Uniform-Cost Search (UCS)

Exp. node   OPEN list
            { S(0) }
S           { A(1) B(5) C(8) }
A           { D(4) B(5) C(8) E(8) G(10) }
D           { B(5) C(8) E(8) G(10) }
B           { C(8) E(8) G'(9) G(10) }
C           { E(8) G'(9) G(10) G"(13) }
E           { G'(9) G(10) G"(13) }
G'          { G(10) G"(13) }

Solution path found is S B G'  <-- this goal has cost 9, not 10
Number of nodes expanded (including goal node) = 7

Page 58: 2. Searching

Finding a Path to Node E using UCS

Exp. node   OPEN list
            { A(0) }
A           { C(2) B(10) }
C           { E'(5) B(10) }
E'          { B(10) }

Solution path found: A–C–E
Solution cost: 5
Nodes expanded (including goal node): 3

[Figure: search graph with nodes A, B, C, C', D, E, E' and edge costs 10, 2, 3, 3, 3, 4.]

Page 59: 2. Searching

Exercise: use the following search tree to find a path from node A to node Z using BFS, DFS, and UCS.

[Figure: search tree with root A, nodes B, C, D, E, F, M, S, Y, Z, and edge costs 10, 2, 3, 3, 3, 4, 6, 1, 13.]

Page 60: 2. Searching

Uniform-Cost Search (UCS)

• Complete (if the cost of each action is not infinitesimal).
– The goal node will eventually be generated (put in OPEN) and selected for expansion (and pass the goal test).
• Exponential time and space complexity.
– Worst case: it becomes BFS when all arcs cost the same.

Page 61: 2. Searching

Uniform-Cost Search (UCS)

Optimal/Admissible

• Admissibility depends on the goal test being applied when a node is removed from the OPEN list, not when its parent node is expanded and the node is first generated (delayed goal testing).
• There may be multiple solution paths (following different back pointers).
• Each solution path that can be generated from an open node n will have a path cost >= g(n).
• When the first goal node is selected for expansion (and passes the goal test), its path cost is less than or equal to g(n) for every OPEN node n (and for the solutions entailed by n).

Page 62: 2. Searching

When to Use What

• Depth-First Search:
– Many solutions exist.
– We know (or have a good estimate of) the depth of the solution.
• Breadth-First Search:
– Some solutions are known to be shallow.
• Uniform-Cost Search:
– Actions have varying costs.
– A least-cost solution is required. This is the only uninformed search that worries about costs.

Page 63: 2. Searching
Page 64: 2. Searching

Heuristic

Origin: from the Greek word 'heuriskein', meaning "to discover". Webster's New World Dictionary defines heuristic as "helping to discover or learn".

A heuristic is a method, commonly informal, that helps solve a problem. The term is particularly used for a method that often rapidly leads to a solution that is usually reasonably close to the best possible answer.

Heuristics are "rules of thumb": educated guesses, intuitive judgments, or simply common sense.

A heuristic contributes to the reduction of search in a problem-solving activity.

Page 65: 2. Searching

Heuristic Search

Uses domain-dependent (heuristic) information in order to search the space more efficiently.

Ways of using heuristic information:

• Deciding which node to expand next, instead of doing the expansion in a strictly fixed order;
• In the course of expanding a node, deciding which successor or successors to generate, instead of blindly generating all possible successors at one time;
• Deciding that certain nodes should be discarded, or pruned, from the search space.

Page 66: 2. Searching

Heuristic Function

• A heuristic function maps a problem-state description to a measure of desirability, usually represented as a number.
• Which aspects of the problem state are considered, how those aspects are evaluated, and the weights given to individual aspects are chosen in such a way that the value of the heuristic function at a given node in the search process gives as good an estimate as possible of whether that node is on the desired path.

Page 67: 2. Searching

Major Benefits of Heuristics

1. Heuristic approaches have an inherent flexibility, which allows them to be used on ill-structured and complex problems.

2. These methods, by design, may be simpler for the decision maker to understand, especially when composed of qualitative analysis; the chances of implementing the proposed solution are therefore much higher.

3. These methods may be used as part of an iterative procedure that guarantees finding an optimal solution.

Page 68: 2. Searching

The A* Search Heuristic

• A* is a search algorithm that finds the shortest path through a search space to a goal state using a heuristic.
• f = g + h
– g: the cost of getting from the initial state to the current state.
– h: the estimated cost of getting from the current state to a goal state.

Page 69: 2. Searching

A* (8-Puzzle Example)

The evaluation of a state: f = g + h
– g: the cost of getting from the initial state to the current state (how many moves have been made).
– h: the estimated cost of getting from the current state to a goal state (count the number of tiles that are misplaced).
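An illustrative Python sketch of A* on the 8-puzzle with the misplaced-tiles heuristic described above. States are 9-tuples read row by row, with 0 for the blank; this encoding (and the function names) are our own assumptions, not from the slides:

```python
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)        # 0 marks the blank

def h(state):
    # Misplaced-tiles heuristic: count tiles (not the blank) out of place.
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def successors(state):
    result = []
    b = state.index(0)                     # position of the blank
    row, col = divmod(b, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = r * 3 + c
            s = list(state)
            s[b], s[j] = s[j], s[b]        # slide a tile into the blank
            result.append(tuple(s))
    return result

def astar(start):
    open_list = [(h(start), 0, [start])]   # entries are (f, g, path)
    closed = set()
    while open_list:
        f, g, path = heapq.heappop(open_list)   # smallest f first
        state = path[-1]
        if state == GOAL:
            return path
        if state in closed:
            continue
        closed.add(state)
        for child in successors(state):
            if child not in closed:
                heapq.heappush(open_list, (g + 1 + h(child), g + 1, path + [child]))
    return None
```

The misplaced-tiles heuristic never overestimates (each misplaced tile needs at least one move), so it is admissible and A* returns a shortest solution.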

Page 70: 2. Searching
Page 71: 2. Searching

Same Search Without Using Heuristics

Page 72: 2. Searching

A* Search Heuristic (TSP Example)

Travel from City A to City C:
– Distance traveled (g) = 2 miles.
– Distance still to go (h) = 6 miles.
– Value of the current state: f = g + h = 8.

[Figure: table of values for h.]

Page 73: 2. Searching

(TSP Example) (cont..)

Page 74: 2. Searching

Hill Climbing

• In this technique we begin by testing nodes in order, just as we would in a blind search. Each time we discover that the node we are testing does not satisfy the goal state, we calculate the difference between the node and the goal state; by comparing these differences we can determine whether we are moving closer to the goal state or further away from it.

• If we are moving away from the goal, then we can backtrack and select a new path.

Page 75: 2. Searching

Hill Climbing

In hill climbing, the basic idea is to always head towards a state which is better than the current one.

So, if you are at town A and you can get to town B and town C (and your target is town D), then you should make a move IF town B or C appears nearer to town D than town A does.

Page 76: 2. Searching

• Hill climbing terminates when there are no successors of the current state which are better than the current state itself.

Page 77: 2. Searching

Algorithm:
• Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
• Loop until a solution is found or until there are no new operators left to be applied in the current state:
– Select an operator that has not yet been applied to the current state and apply it to produce a new state.
– Evaluate the new state:
  • If it is a goal state, then return it and quit.
  • If it is not a goal state but is better than the current state, then make it the current state.
  • If it is not better than the current state, then continue in the loop.
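A compact Python sketch of this loop — written here as the steepest-ascent variant, which examines all successors and moves to the best one (the optimization problem below is a made-up example, not from the slides):

```python
def hill_climb(start, successors, value):
    # Move to the best successor; stop at a (possibly local) maximum.
    current = start
    while True:
        neighbours = successors(current)
        if not neighbours:
            return current
        best = max(neighbours, key=value)
        if value(best) <= value(current):
            return current            # no successor is better: terminate
        current = best

# Example: maximize value(x) = -(x - 3)^2 over the integers,
# where each state x has successors x - 1 and x + 1.
peak = hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2)
```

Note the failure mode implied by the termination rule: the climb can stop on a local maximum that is not the global one, since it never accepts a move to a worse state.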

Page 78: 2. Searching

Game Playing

• Games hold an inexplicable fascination for many people, and the notion that computers might play games has existed at least as long as computers themselves. Game playing is one of the oldest areas of endeavor in Artificial Intelligence.

• There were two reasons that games appeared to be a good domain in which to explore machine intelligence:

– They provide a structured task in which it is very easy to measure success or failure.

– They did not obviously require large amounts of knowledge; they were thought to be solvable by straightforward search from the starting state to a winning position.