Search Problems vs. Games - York University (tsotsos/CSE6340-2013/games-jkt.pdf · 2012-02-06)



Page 1:

Games vs. search problems

“Unpredictable” opponent ⇒ solution is a strategy specifying a move for every possible opponent reply

Time limits ⇒ unlikely to find goal, must approximate

Plan of attack:

• Computer considers possible lines of play (Babbage, 1846)

• Algorithm for perfect play (Zermelo, 1912; Von Neumann, 1944)

• Finite horizon, approximate evaluation (Zuse, 1945; Wiener, 1948; Shannon, 1950)

• First chess program (Turing, 1951)

• Machine learning to improve evaluation accuracy (Samuel, 1952–57)

• Pruning to allow deeper search (McCarthy, 1956)

Page 2:

Types of games

                          deterministic                chance
  perfect information     chess, checkers,             backgammon,
                          go, othello                  monopoly
  imperfect information   battleships,                 bridge, poker, scrabble,
                          blind tic-tac-toe            nuclear war

Page 3:

Game tree (2-player, deterministic, turns)

[Figure: the tic-tac-toe game tree. MAX (X) moves at the root, MIN (O) replies, and the two alternate level by level down to the TERMINAL states, which are labelled with utilities −1, 0, +1.]

Page 4:

Minimax

Perfect play for deterministic, perfect-information games

Idea: choose move to position with highest minimax value = best achievable payoff against best play

E.g., 2-ply game:

[Figure: MAX root with moves A1, A2, A3 leading to MIN nodes whose leaves have values (3, 12, 8), (2, 4, 6), (14, 5, 2); the MIN nodes back up 3, 2, 2, and the root's minimax value is 3.]

Page 5:

Minimax algorithm

function Minimax-Decision(state) returns an action
   inputs: state, current state in game
   return the a in Actions(state) maximizing Min-Value(Result(a, state))

function Max-Value(state) returns a utility value
   if Terminal-Test(state) then return Utility(state)
   v ← −∞
   for a, s in Successors(state) do v ← Max(v, Min-Value(s))
   return v

function Min-Value(state) returns a utility value
   if Terminal-Test(state) then return Utility(state)
   v ← ∞
   for a, s in Successors(state) do v ← Min(v, Max-Value(s))
   return v
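The pseudocode above can be sketched in Python. This is a minimal illustration, not code from the notes: the `DictGame` class and its method names are assumptions standing in for Actions, Result, Terminal-Test, and Utility, and the example tree is the 2-ply game from the earlier slide.

```python
# Minimal sketch of Minimax-Decision (illustrative; the DictGame
# interface stands in for Actions/Result/Terminal-Test/Utility).
class DictGame:
    """Game tree stored as nested dicts; leaves are utility numbers."""
    def __init__(self, tree):
        self.tree = tree
    def actions(self, state):
        return list(self.tree[state])
    def result(self, state, action):
        return self.tree[state][action]
    def terminal(self, state):
        return state not in self.tree
    def utility(self, state):
        return state            # a leaf is stored as its utility value

def max_value(state, game):
    if game.terminal(state):
        return game.utility(state)
    return max(min_value(game.result(state, a), game)
               for a in game.actions(state))

def min_value(state, game):
    if game.terminal(state):
        return game.utility(state)
    return min(max_value(game.result(state, a), game)
               for a in game.actions(state))

def minimax_decision(state, game):
    """The a in Actions(state) maximizing Min-Value(Result(a, state))."""
    return max(game.actions(state),
               key=lambda a: min_value(game.result(state, a), game))

# The 2-ply example: MIN nodes over leaves (3,12,8), (2,4,6), (14,5,2)
game = DictGame({"root": {"A1": "B1", "A2": "B2", "A3": "B3"},
                 "B1": {"A11": 3, "A12": 12, "A13": 8},
                 "B2": {"A21": 2, "A22": 4, "A23": 6},
                 "B3": {"A31": 14, "A32": 5, "A33": 2}})
```

On this tree, `minimax_decision("root", game)` picks A1, whose backed-up value is 3, matching the figure.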

Page 6:

Properties of minimax

Complete??

Page 7:

Properties of minimax

Complete?? Only if tree is finite (chess has specific rules for this). NB a finite strategy can exist even in an infinite tree!

Optimal??

Page 8:

Properties of minimax

Complete?? Yes, if tree is finite (chess has specific rules for this)

Optimal?? Yes, against an optimal opponent. Otherwise??

Time complexity??

Page 9:

Properties of minimax

Complete?? Yes, if tree is finite (chess has specific rules for this)

Optimal?? Yes, against an optimal opponent. Otherwise??

Time complexity?? O(b^m)

Space complexity??

Page 10:

Properties of minimax

Complete?? Yes, if tree is finite (chess has specific rules for this)

Optimal?? Yes, against an optimal opponent. Otherwise??

Time complexity?? O(b^m)

Space complexity?? O(bm) (depth-first exploration)

For chess, b ≈ 35, m ≈ 100 for “reasonable” games ⇒ exact solution completely infeasible

But do we need to explore every path?

Page 11:

Imperfect Decisions

In complex games, there is not enough time to generate the entire search tree.

Can modify the minimax strategy by changing the utility function to an evaluation function (a heuristic) and cutting off the search before the terminal states are reached using a Cutoff-Test.

One kind of evaluation function is a weighted linear function

w1·f1 + w2·f2 + w3·f3 + … + wn·fn

where the w's are weights and the f's are features of the particular position.

Non-linear functions can also be used, but they are harder to develop (may be learned?).

Cut-offs may be simple (such as providing a depth limit) or use iterative deepening to go as far down the search tree as time allows.

In general, simple strategies for cut-off have problems.
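The weighted linear form can be computed directly. A tiny sketch; the weight and feature values below are invented purely for illustration:

```python
# w1*f1 + w2*f2 + ... + wn*fn, the weighted linear evaluation form
def linear_eval(weights, features):
    """Score a position from its feature values (hypothetical numbers)."""
    return sum(w * f for w, f in zip(weights, features))

# e.g. weights for (material, mobility, king safety), all invented:
score = linear_eval([9.0, 0.5, 2.0], [1, 6, -1])
```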

 

Page 12:

Using an evaluation function in Tic-Tac-Toe:

[Figure: example tic-tac-toe boards with their evaluation-function scores; the diagram did not survive the transcript.]
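One common textbook evaluation for tic-tac-toe is the number of open lines for X minus the number of open lines for O. This is a guess at the kind of function the slide illustrates, not recovered from it:

```python
# Open-lines heuristic (a common textbook choice; assumed, not from the slide):
# evaluate(board) = open lines for X minus open lines for O.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def open_lines(board, player):
    """Count lines not blocked by the opponent's marks."""
    opponent = 'O' if player == 'X' else 'X'
    return sum(1 for line in LINES
               if all(board[i] != opponent for i in line))

def evaluate(board):
    return open_lines(board, 'X') - open_lines(board, 'O')
```

On an empty board both players have 8 open lines, so the evaluation is 0; after X takes the centre, O is blocked on the 4 lines through the centre, giving X a +4 advantage.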

Page 13:

   

Depth-First Version of MINIMAX: search proceeds recursively from left to right in a depth-first fashion.

To determine the minimax value V(J):
1. If J is terminal, return V(J) = e(J); otherwise
2. Generate J's successors J1, J2, J3, ..., Jb.
3. Evaluate V(J1), V(J2), V(J3), ..., V(Jb) from left to right.
4. If J is a MAX node, return V(J) = max[V(J1), V(J2), V(J3), ..., V(Jb)].
5. If J is a MIN node, return V(J) = min[V(J1), V(J2), V(J3), ..., V(Jb)].

There is no need to generate all successors at once and keep them in storage until all are evaluated. Can do this in a backtracking style too and avoid the storage costs.

Backtracking Version of MINIMAX

To determine the minimax value V(J):
1. If J is terminal, return V(J) = e(J); otherwise
2. For k = 1, 2, ..., b do:
   a. Generate Jk, the kth successor of J.
   b. Evaluate V(Jk).
   c. If k = 1, set CV(J) to V(J1); otherwise, for k >= 2, set CV(J) to max[CV(J), V(Jk)] if J is MAX, or set CV(J) to min[CV(J), V(Jk)] if J is MIN.
3. Return V(J) = CV(J).

CV(J) represents the current value of the node J and is updated each time a child node is evaluated. In both versions, the evaluation of a node is not complete until all of its successors have been evaluated.
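The backtracking version can be sketched with successors generated one at a time. The lazy-successor interface here is an assumption for illustration, and a node with no successors is treated as terminal:

```python
def minimax_value(J, is_max, successors, e):
    """Backtracking MINIMAX sketch: CV(J) is updated as each child is
    evaluated, so children never need to be stored all at once.
    `successors(J)` lazily yields J's children; `e(J)` scores terminals.
    (Interface names are illustrative, not from the notes.)"""
    cv = None                       # CV(J), the current value of J
    for Jk in successors(J):        # generate Jk, the kth successor
        v = minimax_value(Jk, not is_max, successors, e)
        if cv is None:
            cv = v                  # k = 1: CV(J) := V(J1)
        else:                       # k >= 2: fold in max or min
            cv = max(cv, v) if is_max else min(cv, v)
    return e(J) if cv is None else cv   # no successors: terminal, V(J) = e(J)

# Illustrative tree: MAX root over MIN nodes with the earlier leaf values
tree = {"root": ["B1", "B2", "B3"],
        "B1": [3, 12, 8], "B2": [2, 4, 6], "B3": [14, 5, 2]}
succ = lambda J: tree.get(J, [])
```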

Page 14:

Alpha-Beta Pruning

Alpha-beta pruning modifies a minimax search so that not all branches need be examined (intuition: as soon as a branch is found to lead to disaster, it is no longer explored).

Let α be the value of the best choice (largest) found so far along the path for MAX, and β the best choice (smallest) found so far along the path for MIN. A sub-tree is pruned as soon as it is determined that it is worse than the current α or β value.

Note that the current value of a MAX node can never decrease (because we always seek the maximum of its successors) and that of a MIN node can never increase (because we always seek the minimum of its successors).

Branches are cut off according to dynamically adjusted bounds:

1. The α-bound: the cutoff for a MIN node J is a lower bound called α, equal to the highest current value of all MAX ancestors of J. The exploration of J can be terminated as soon as its current value CV equals or falls below α.

2. The β-bound: the cutoff for a MAX node J is an upper bound called β, equal to the lowest current value of all MIN ancestors of J. The exploration of J can be terminated as soon as its current value CV equals or rises above β.

Page 15:

The recursive algorithm for pruning and bound-updating is a procedure V(J; α, β). α and β are two parameters, α < β; they are set to the highest current value of all MAX ancestors of J and the lowest current value of all MIN ancestors of J, respectively.

The procedure returns V(J), the minimax value of J, if it lies between α and β; otherwise it returns α (if V(J) ≤ α) or β (if V(J) ≥ β). If J is the root of a game tree, its minimax value is obtained by calling V(J; −∞, +∞).

V(J; α, β):
1. If J is terminal, return V(J) = e(J). Otherwise, let J1, J2, J3, ..., Jb be the successors of J, set k to 1, and if J is a MAX node go to step 2, else go to step 2'.

If J is a MAX node:
2. Set α to max[α, V(Jk; α, β)].
3. If α ≥ β, return β; else continue.
4. If k = b, return α; else set k to k+1 and go to step 2.

If J is a MIN node:
2'. Set β to min[β, V(Jk; α, β)].
3'. If β ≤ α, return α; else continue.
4'. If k = b, return β; else set k to k+1 and go to step 2'.

Performance depends on the ordering of the successor nodes (if the largest-value node is checked first, for example, then you are done; of course, you never really know unless the expansion actually includes a method for achieving the optimal ordering). Pruning occurs at steps 3 and 3' by abandoning some set of successors of J.
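The procedure V(J; α, β) can be sketched in Python over a dict-encoded tree (the encoding is illustrative, not from the notes). A cutoff returns the opposing bound, exactly as in steps 3 and 3':

```python
import math

def V(J, alpha, beta, is_max, tree):
    """Fail-hard alpha-beta sketch of V(J; alpha, beta).
    Leaves are bare numbers; interior nodes are keys in `tree`
    (an illustrative encoding, not the notes' own)."""
    if J not in tree:                      # step 1: terminal, return e(J)
        return J
    for Jk in tree[J]:
        if is_max:                         # steps 2-4
            alpha = max(alpha, V(Jk, alpha, beta, False, tree))
            if alpha >= beta:
                return beta                # step 3: cutoff, prune remaining Jk
        else:                              # steps 2'-4'
            beta = min(beta, V(Jk, alpha, beta, True, tree))
            if beta <= alpha:
                return alpha               # step 3': cutoff
    return alpha if is_max else beta       # steps 4 / 4'

# The 2-ply example again; alpha-beta returns the same value as minimax, 3
tree = {"root": ["B1", "B2", "B3"],
        "B1": [3, 12, 8], "B2": [2, 4, 6], "B3": [14, 5, 2]}
```

Calling `V("root", -math.inf, math.inf, True, tree)` reproduces the minimax value 3 while skipping leaves once a MIN node's current value falls to α or below.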

Page 16:

Properties of α–β

Pruning does not affect final result

Good move ordering improves effectiveness of pruning

With “perfect ordering,” time complexity = O(b^(m/2)) ⇒ doubles solvable depth

A simple example of the value of reasoning about which computations are relevant (a form of metareasoning)

Unfortunately, 35^50 is still impossible!

Page 17:

Expected Values in Games of Chance Suppose that dice are involved in a game; how can we deal with this?

A roll of the dice determines the set of legal moves possible for a given turn. Can extend the game tree already developed by including chance nodes in addition to MAX and MIN nodes

Page 18:

A chance node represents a possible roll of the dice. For a normal set of dice, there are 21 unique rolls: each of the 6 doubles has probability 1/36, and each of the other 15 rolls has probability 1/18 (in backgammon, 5-6 is the same as 6-5). Each chance node has branches leading from it that represent the possible moves for that particular roll. For example, if White rolls 3 and 5, there are two reasonable moves (and a few unreasonable ones too).
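The roll counts can be checked with a few lines (this is just arithmetic, not anything from the notes):

```python
from fractions import Fraction
from itertools import combinations_with_replacement

# 21 unordered rolls of two dice; each double occurs 1 way out of 36,
# each of the 15 mixed rolls 2 ways (5-6 is the same as 6-5)
rolls = list(combinations_with_replacement(range(1, 7), 2))
probs = {r: Fraction(1 if r[0] == r[1] else 2, 36) for r in rolls}
```

The 21 probabilities (six at 1/36, fifteen at 1/18) sum to exactly 1.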

Page 19:

Utility values are set by the following formula (representing the score for a full backgammon game):

stake × [straight loss (1), gammon (2), or backgammon (3)] × [doubling cube value (1, 2, 4, 8, 16, 32, or 64)] = total score

Stake and doubling are ignored for the rest of this discussion, so the utility value is one of −3, −2, −1, 1, 2, 3. Recall that for a minimax strategy the game was deterministic; the minimax value of a particular node is fully determined by the utility values of the leaf nodes of the game tree. Here, one can only compute an expected value over all possible rolls. To compute this expected value, terminal nodes still need a utility function to assign a value to a board position.

Page 20:

Say a particular chance node C, whose successors are MAX nodes, is to be evaluated. The d_i are the possible rolls of the dice, and P(d_i) is the probability of obtaining a particular roll d_i.

Let S(C, d_i) be the set of positions generated by applying the legal moves for dice roll d_i at position C. Then

expectimax(C) = ∑_i P(d_i) · max_{s ∈ S(C, d_i)} utility(s)

What is this really doing? The expected value of a position is the weighted sum of the utilities, where the weight is the probability of that utility. If the chance node to be evaluated has MIN successors, the corresponding formula is

expectimin(C) = ∑_i P(d_i) · min_{s ∈ S(C, d_i)} utility(s)

Using the usual minimax algorithm as a foundation, the expected-value version of minimax includes modifications. The expectimax formula is not applied at each level: starting from the terminal nodes and moving upwards, the appropriate formula (max, min, or expected value) is applied at each level.
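Concretely, max is applied at MAX levels, min at MIN levels, and the probability-weighted expectation at chance nodes. A sketch, where the tuple encoding of the tree is an assumption for illustration:

```python
def expectiminimax(node):
    """node is ('max', children), ('min', children),
    ('chance', [(prob, child), ...]), or a bare number (terminal utility).
    The encoding is illustrative, not from the notes."""
    if isinstance(node, (int, float)):
        return node
    kind, children = node
    if kind == 'max':
        return max(expectiminimax(c) for c in children)
    if kind == 'min':
        return min(expectiminimax(c) for c in children)
    # chance node: probability-weighted sum, i.e. the expectimax/expectimin formula
    return sum(p * expectiminimax(c) for p, c in children)

# MAX choosing between two dice-roll lotteries (made-up probabilities)
root = ('max', [('chance', [(0.5, 2.0), (0.5, 4.0)]),
                ('chance', [(0.9, 1.0), (0.1, 11.0)])])
```

Here the first lottery is worth 0.5·2 + 0.5·4 = 3.0 and the second 0.9·1 + 0.1·11 = 2.0, so MAX prefers the first.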

Page 21:

This assumes that one can generate the entire game tree, and for backgammon this is expensive: the number of possible states is estimated at over 10^20.

There is also a high branching factor due to the dice rolls. At each roll there are 21 dice combinations possible, with an average of about 20 legal moves per roll, so there are over 400 branches! This is much larger than in checkers and chess (typical branching ratios quoted for these games are 8-10 for checkers and 30-40 for chess), and too large to reach significant depth. So consider ways of approximating the utility values.

Page 22:

Hans Berliner, Scientific American, 243(1), pp. 64-73, 1980. Berliner's program BKG 9.8 was the first computer program to defeat a world champion at any board or card game. It used evaluation functions; his experience was the following.

1. Started with a 'standard' function: each term represented a particular feature of a position, with constant coefficients indicating the importance of each feature. The problem is that a constant coefficient represents only the average importance of a feature.

2. Divided backgammon positions into classes, each with a different function (motivated by alpha-beta pruning: whole classes could be ignored). Problems arose at the borders between classes: evaluation functions should yield close values there, but didn't always.

3. Made transitions between classes smooth rather than abrupt. This led to the SNAC approach (smooth non-linear application coefficients), which included application coefficients: special, slowly changing variables that controlled the transition.

Page 23:

Checkers

http://www.cs.ualberta.ca/~chinook/
One Jump Ahead: Challenging Human Supremacy in Checkers, J. Schaeffer, 1997, Springer-Verlag

2 players, 12 pieces each
Goal: Avoid being the player who can no longer move (usually when a player has no pieces left)
Rules:
• Move forward on dark diagonals, 1 square at a time
• An opponent's piece is captured when jumped to the empty square diagonally behind it
• A piece moved to the opponent's last row becomes a "king," which can move both backward and forward

Page 24:

Is checkers complex? Here are the total numbers of checkers positions, sorted according to the number of pieces on the board.

# PIECES    # POSITIONS
 1                              120
 2                            6,972
 3                          261,224
 4                        7,092,774
 5                      148,688,232
 6                    2,503,611,964
 7                   34,779,531,480
 8                  406,309,208,481
 9                4,048,627,642,976
10               34,778,882,769,216
11              259,669,578,902,016
12            1,695,618,078,654,976
13            9,726,900,031,328,256
14           49,134,911,067,979,776
15          218,511,510,918,189,056
16          852,888,183,557,922,816
17        2,905,162,728,973,680,640
18        8,568,043,414,939,516,928
19       21,661,954,506,100,113,408
20       46,352,957,062,510,379,008
21       82,459,728,874,435,248,128
22      118,435,747,136,817,856,512
23      129,406,908,049,181,900,800
24       90,072,726,844,888,186,880

Total: 500,995,484,682,338,672,639

Page 25:

Of particular interest are those positions where either the material is even when there is an even number of pieces on the board, or the difference is no more than one when there is an odd number of pieces present (for example, 4 vs 3 and 3 vs 4 for 7 pieces).

 1 vs  0:                     60 (x 2)
 1 vs  1:                  3,488
 2 vs  1:                 98,016 (x 2)
 2 vs  2:              2,662,932
 3 vs  2:             46,520,744 (x 2)
 3 vs  3:            783,806,128
 4 vs  3:          9,527,629,380 (x 2)
 4 vs  4:        111,378,534,401
 5 vs  4:        998,874,699,888 (x 2)
 5 vs  5:      8,586,481,972,128
 6 vs  5:     58,769,595,279,296 (x 2)
 6 vs  6:    384,033,878,250,176
 7 vs  6:  2,046,244,120,757,760 (x 2)
 7 vs  7: 10,359,927,057,187,840
 8 vs  7: 43,428,742,062,013,440 (x 2)
 8 vs  8: 171,975,762,422,069,760
 9 vs  8: 569,058,493,921,640,448 (x 2)
 9 vs  9: 1,765,698,358,650,175,488
10 vs  9: 4,596,454,069,579,874,304 (x 2)
10 vs 10: 11,113,460,838,901,284,864
11 vs 10: 22,520,313,165,772,750,848 (x 2)
11 vs 11: 41,842,926,176,229,654,528
12 vs 11: 64,703,454,024,590,950,400 (x 2)
12 vs 12: 90,072,726,844,888,186,880

Total number of positions: 329,847,169,676,858,217,781

Brute force is hopeless.

Page 26:

Some History

The first checkers program was started in the late 40's by Arthur Samuel at IBM (running several 'test' computers at a time overnight!). It used 'checker books'. It used:

• alpha-beta search
• convergence forward pruning (prune when the α-β values become sufficiently close, so it is unlikely that much of an advantage would be found by pursuing the sub-tree)
• tapered marginal forward pruning (α-β pruning where a constant is added/subtracted to the backed-up values; tapered, in that the value of the constant changes as the level increases)
• shallow search for tapered n-best forward pruning (only the n best successors are pursued; n decreases as the depth of search increases) and plausibility move ordering
• termination criteria: game over, minimum depth, maximum depth, forward pruning, dead position

It had 27 checkers features with a linear evaluation function. Samuel defined new features, which he called signatures, in terms of the 27 original features. The signatures were no longer linear combinations of features; non-linearities in terms of feature interactions were possible. Book moves were stored on magnetic tape (remember what that is?), and the programmer would control a 'sense switch' on the computer to tell the program to run a book move.

Page 27:

Chinook is the World Man-Machine Champion, the first computer program to win a human world championship. This feat is recognized by the Guinness Book of World Records (on-line publications as well as complete championship games are available at their web site). Chinook was developed by a team of researchers led by Dr. Jonathan Schaeffer of the Department of Computing Science at the University of Alberta.

Chinook's strength comes from deep search, a good evaluation function, and a database of all endgames with 6 pieces or fewer. A typical checkers position has 8 legal moves without captures (chess: 35-40), and a capture position about 1.25. Chinook uses alpha-beta search (minimum depth of 19 ply) with iterative deepening, 2 ply at a time.

Chinook divides the game into 5 phases: opening, middlegame, early endgame, late endgame, and database. Each of the first 4 phases uses a linear evaluation function of 22 variables, with the weights set manually. The last phase needs no evaluation function because it has perfect information (2.5 × 10^9 6-piece positions).

There are a few positions where things are subtle; Chinook is unable to search deep enough to uncover these subtleties. Chinook uses an anti-book, a database of positions to avoid (about 2000 of them), to help with these.

Page 28:

Othello (Reversi)

2 players, black-and-white disks
Goal: Have the most disks on the board at the end of the game
Rules:
• Players alternate placing disks on unoccupied board spaces
• If opponent's disks are trapped between the other player's disks, the trapped disks are flipped to the other player's color

Page 29:

LOGISTELLO, written by Michael Buro, beat the world champion Othello player, Takeshi Murakami of Japan, in a match held in August 1997, winning 6 out of 6 games. It used a neural network to learn from previous games and improve its knowledge of the game over time.

Evaluation features: game-stage-dependent tables for each of the following patterns: horizontals/verticals of length 8, diagonals of length 4-8, 3x3 corner, 2x5 corner, edge+2X
Feature combination: linear
Search: NegaScout with corner quiescence search and multi-ProbCut; iterative deepening
Move sorting: hash table containing moves and value bounds (2^21 entries), response killer lists, shallow searches
Search speed (on a Pentium-Pro 200): middle-game ~160,000 nodes/sec; endgame ~480,000 nodes/sec
Search depth (in a 2x30-minute game): middle-game 18-23 selective, including 10-15 brute-force ply; endgame win/loss/draw determination at 26-22 empty squares, exact score 1-2 ply later
Opening book:

Page 30:

The opening book consists at the moment of about 23,000 games and evaluations of "best" move alternatives, and is automatically updated; currently several machines are working all day long on book improvement.

Page 31:

Othello Programs that Learn

Genetic algorithms seem very productive; Darwersi is one such program. Genetic algorithms have six basic steps in common. One needs a representation for members of the population and a way to measure 'fitness'.

1. First, a set of potential solutions must be initialized to form the starting population.
2. Second, each solution is evaluated according to its fitness.
3. Third, new solutions are created using mutation and crossover on the current population, typically with more crossover than mutation, say a 3:1 ratio. Conserving and combining features is generally more helpful than varying them. A mutated descendent differs from its parent in only a single bit. Crossover requires two parents and produces two offspring: select a random point along the binary vector and split each parent at this point. Selecting the individuals to breed requires some element of chance. In nature some animals are lucky and some unlucky, but those with better genes reproduce more; genetic algorithms allow more offspring from high-scoring individuals than from low-scoring ones.
4. Fourth, if there is no space for the new offspring, room must be made within the current population for the new individuals.
5. Fifth, the new solutions are evaluated using the scoring system and inserted into the population.
6. Sixth, if there is no more time then stop; otherwise go back to step three and make some more individuals.
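The six steps can be sketched on bit vectors. Everything here (population size, the way the 3:1 crossover-to-mutation ratio is realized, and the fitness function passed in) is an illustrative choice, not Darwersi's actual design:

```python
import random

# Sketch of the six GA steps on bit-vector individuals
# (all parameters and the breeding scheme are illustrative).
def evolve(fitness, length=16, pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    # 1. initialize a starting population of random bit vectors
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # 2. evaluate each solution's fitness (fitter individuals first)
        scored = sorted(pop, key=fitness, reverse=True)
        # 3. breed from the fitter half, with roughly 3:1 crossover:mutation
        parents = scored[:pop_size // 2]
        children = []
        while len(children) < pop_size // 2:
            if rng.random() < 0.75:            # crossover: split two parents
                a, b = rng.sample(parents, 2)
                cut = rng.randrange(1, length)
                children += [a[:cut] + b[cut:], b[:cut] + a[cut:]]
            else:                              # mutation: flip a single bit
                child = rng.choice(parents)[:]
                child[rng.randrange(length)] ^= 1
                children.append(child)
        # 4-5. make room for the offspring and insert them into the population
        pop = scored[:pop_size - len(children)] + children
    # 6. out of time: return the best individual found
    return max(pop, key=fitness)
```

With `fitness=sum` this runs "one-max" (maximize the number of 1 bits); keeping the scored top half each generation makes the best fitness non-decreasing.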

Page 32:

Page 33:

CHESS

2 players; 16 pieces each (1 king, 1 queen, 2 rooks, 2 bishops, 2 knights, 8 pawns)
Goal: Capture opponent's king (checkmate)
Rules:
• Pieces are captured when landed on by an opponent's piece
• The type of piece dictates its movement options

Page 34:

Early Chess Programs Alan Turing designed the first chess program for a computer in 1951; it was hand-simulated and never actually programmed. He used a depth-first minimax procedure. The Los Alamos program written by Kister et al. in 1957 used a depth-first minimax for a 6x6 chessboard. Allen Newell was the first to apply alpha-beta to chess in 1959. The evaluation functions were based on experiments on several world champions. John McCarthy later also used alpha-beta search with a linear evaluation function, competed with a program developed at the Moscow Institute of Theoretical and Experimental Physics by G.M. Adelson-Velskiy, and in 1968 the Moscow program beat the Stanford program 2-1.

Richard Greenblatt, Donald Eastlake and Stephen Crocker at MIT wrote an early chess program using alpha-beta search in 1967 called Mac Hack. Their program evaluates the moves from a position and not from the successor positions (for efficiency) - so search is shallow (1 level). It uses those results for plausibility ordering of moves and for a tapered n-best forward pruning of moves. The program also has book openings and detects duplicate positions in the game tree in order to both avoid duplicate searches and detect draws by repetition of positions. It makes its move in about a minute - and most good players beat it easily.

Page 35:

Deep Blue

Deep Blue's evaluation function looks at four basic chess values: material, position, king safety and tempo. Material is based on the "worth" of particular chess pieces. For example, if a pawn is valued at 1, then the rook is worth 5 and the queen is valued at 9. The king, of course, is beyond value because his loss means the loss of the game. The simplest way to understand position is by looking at your pieces and counting the number of safe squares they can attack. King safety is a defensive aspect of position. It is determined by assigning a value to the safety of the king's position in order to know how to make a purely defensive move. Tempo is related to position but focuses on the race to develop control of the board. A player is said to "lose a tempo" if he dillydallies while the opponent is making more productive advances. Deep Blue is not only the finest chess-playing computer in the world, it is also the fastest. This makes perfect sense, because history has proven that the fastest computers conduct the most extensive searches into possible positions. More searches give the computer a wider array of moves to choose from and therefore a greater chance of choosing the optimum move. Deep Blue employs a system called selective extensions to examine chessboard positions.

Page 36:

Selective extensions allow the computer to more efficiently search deeply into critical board arrangements. Instead of attempting to conduct an exhaustive "brute force" search into every possible position, Deep Blue selectively chooses distinct paths to follow, eliminating irrelevant searches in the process. Deep Blue uses "live" software that can actually generate up to 200,000,000 positions per second when searching for the optimum move. The software begins this process by taking a strategic look at the board. It then computes everything it knows about the current position, integrates the chess information pre-programmed by the development team, and then generates a multitude of new possible arrangements. From these, it then chooses its best possible next move. Deep Blue's extensive searches make full use of the computer's massively parallel design. "At the search level, you're saying 'OK, here's the position. I need to search all the moves," says Joe Hoane, the Deep Blue development team member in charge of software. "And you go search all the moves, all at the same time, preferably on a bunch of different computers." The software inside of Deep Blue is one all-inclusive program written in C, running under the AIX operating system. Deep Blue utilizes the IBM SP Parallel System called MPI. "It's a message-passing system," says Hoane. "So the search is just all control logic. You're passing control messages back and forth that say, well, what am I doing? Did you finish this? OK, here's your next job. That kind of thing at the SP level." The latest iteration of the Deep Blue computer is a 32-node IBM RS/6000 SP high-performance computer, which utilizes the new Power Two Super Chip processors (P2SC). Each node of the SP employs a single microchannel card containing 8 dedicated VLSI chess processors, for a total of 256 processors working in tandem. 
The net result is a scalable, highly parallel system capable of calculating 60 billion moves within three minutes, which is the time allotted to each player's move in classical chess.

Page 37:

Deep Blue vs Garry Kasparov

1. Deep Blue can examine and evaluate up to 200,000,000 chess positions per second. Garry Kasparov can examine and evaluate up to three chess positions per second.
2. Deep Blue has a small amount of chess knowledge and an enormous amount of calculation ability. Garry Kasparov has a large amount of chess knowledge and a somewhat smaller amount of calculation ability.
3. Garry Kasparov uses his tremendous sense of feeling and intuition to play world champion-calibre chess. Deep Blue is a machine that is incapable of feeling or intuition.
4. Deep Blue has benefitted from the guidance of five IBM research scientists and one international grandmaster. Garry Kasparov is guided by his coach Yuri Dokhoian and by his own driving passion to play the finest chess in the world.
5. Garry Kasparov is able to learn and adapt very quickly from his own successes and mistakes. Deep Blue, as it stands today, is not a "learning system." It is therefore not capable of utilizing artificial intelligence to either learn from its opponent or "think" about the current position of the chessboard.
6. Deep Blue can never forget, be distracted or feel intimidated by external forces (such as Kasparov's infamous "stare"). Garry Kasparov is an intense competitor, but he is still susceptible to human frailties such as fatigue, boredom and loss of concentration.
7. Deep Blue is stunningly effective at solving chess problems, but it is less "intelligent" than even the stupidest human. Garry Kasparov is highly intelligent. He has authored three books, speaks a variety of languages, is active politically and is a regular guest speaker at international conferences.

Page 38:

8. Any changes in the way Deep Blue plays chess must be performed by the members of the development team between games. Garry Kasparov can alter the way he plays at any time before, during, and/or after each game.
9. Garry Kasparov is skilled at evaluating his opponents, sensing their weaknesses, then taking advantage of those weaknesses. While Deep Blue is quite adept at evaluating chess positions, it cannot evaluate its opponent's weaknesses.
10. Garry Kasparov is able to determine his next move by selectively searching through the possible positions. Deep Blue must conduct a very thorough search into the possible positions to determine the most optimal move (which isn't so bad when you can search up to 200 million positions per second).