The Easy-Factors Heuristic
Liron Cohen
Submitted in partial fulfilment of the requirements
of the degree of Master of Science
Under the supervision of Prof. Jeffrey S. Rosenschein
July, 2013
Rachel and Selim Benin
School of Computer Science and Engineering
The Hebrew University of Jerusalem
Israel
Acknowledgments
First, I want to express my gratitude for the ongoing support from my adviser Jeffrey
Rosenschein. Jeff, I feel very lucky to be advised by you, both academically and
personally. I very much appreciate your willingness to guide me in whatever I found
interesting, even though it was not completely aligned with your core work. I enjoyed
every minute spent at your office and always came out encouraged!
I would like to thank Carmel Domshlak for extremely helpful discussions, sharp-
ening my ideas and putting them in the right context. I am also thankful to Rina
Dechter for her insights and interesting thoughts on graphical models as well as
encouraging me to pursue further studies. I also want to thank my lab mate
and friend Nir Pochter, who introduced me to the field of automated planning and
answered any question that crossed my mind.
Finally, I want to thank my parents, Avi and Orna, for their endless financial
backing, encouragement and loving care. Without them, none of this work would
have been possible. I also want to thank my siblings, Shiran and Gilran, for making
me smile, lifting my spirits, and reminding me what is really important in life. Lastly, I want
to mention my dog, Baz, who was a great companion for much of this journey. From
the bottom of my heart, I love you all!
Abstract
State-space search with implicit abstraction heuristics, such as Fork-Decomposition,
is a state-of-the-art technique for optimal planning. However, the appropriate in-
stances where this approach can be used are somewhat limited by the fixed structure
required (i.e., a fork), which bounds the informativeness of the abstracted sub-
problems. Addressing this issue, we introduce a new implicit abstraction heuristic
that does not a priori adhere to any specific structure, but rather examines
the specific problem's structure and abstracts it accordingly. The abstraction is
guided by theoretical insights from locally-optimal factored planning; the abstracted
sub-problems have bounded parameterized complexity. Empirical evaluation, which
measured the impact of these theoretical bounds on several IPC domains, supports
the claim that this heuristic balances the trade-off between accuracy and runtime.
Contents
1 Introduction 3
2 Background 5
2.1 Classical Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1.1 Knowledge Compilation . . . . . . . . . . . . . . . . . . . . . 6
2.1.2 IPC Domains . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Heuristic Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.1 Abstraction Heuristics . . . . . . . . . . . . . . . . . . . . . . 13
2.3 Motivation and Related Work . . . . . . . . . . . . . . . . . . . . . . 14
3 Factored Planning 15
3.1 Tree-Width . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.2 Factored Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.3 Empirical Analysis of Factored Planning . . . . . . . . . . . . . . . . 20
4 The Easy-Factors Heuristic 23
4.1 w-Cutset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.2 Node-Contraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.3 Empirical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
5 Conclusions and Future Work 33
1 Introduction
State-space search is a popular approach to optimal planning; the efficiency of a
search for an optimal plan is dominated by the quality of the admissible heuristic
being used. There are two desirable properties for an effective heuristic: it should
be both fast to compute and accurate.
A common method for generating informative heuristics is to use abstraction
heuristics, in which a planning task is projected to a subset of its variables. Different
schemes have been proposed to automatically create admissible domain-independent
abstraction heuristics, such as pattern-databases [Edelkamp, 2002] and merge-and-
shrink [Helmert et al., 2007]. Since these kinds of heuristics capture the abstracted
planning task states explicitly, the information that they may provide about the
problem is limited by their fixed size.
A different approach that has overcome this limitation is called implicit abstrac-
tion heuristics [Katz and Domshlak, 2010]. Heuristic functions of this class, such
as Fork-Decomposition, abstract the problem along its causal graph into tractable
fragments for optimal planning, each of which may have a large search-space, and
thus might be too large to be represented by an explicit representation. However,
Fork-Decomposition is not without shortcomings. It abstracts the problem into a
fixed simple structure that may not always be informative.
Recently, a factored approach to optimal planning [Brafman and Domshlak, 2006]
provided important insight into its parameterized complexity. The factored repre-
sentation exploits inherent independencies in a planning problem so as to decompose
a large domain into smaller factors. The analysis showed that the worst-case num-
ber of nodes needed to be expanded, when optimally solving the planning problem,
is exponential in two parameters: the problem causal graph’s tree-width, and the
maximal local-length of the solution.
We present a novel approach for the design of heuristic functions, which is
specifically tailored to the problem at hand’s structure. Taking advantage of the
task’s underlying graphical model, we employ tools from graph theory, such as
w-cutset [Bidyuk and Dechter, 2004] and node-contraction, to provide an efficient way
of bounding the two parameters that dominate the problem’s complexity. This al-
lows us to push the envelope of implicit heuristics beyond fixed-structures, such
as forks, towards less restricted structures with bounded parameterized-complexity.
Such finer-grained abstractions will better reflect the original problem, which in turn
may yield more accurate approximations and thus better heuristics.
The next section formally introduces planning problems and the heuristic search
approach for solving them. We then overview factored-planning and evaluate its
accuracy in predicting the number of expanded nodes, as a function of the problem’s
parameters. Finally, we present the easy-factors heuristic and provide a preliminary
analysis of its informativeness given varying parameter values.
[Figure: a “mind-map” connecting Graph Theory (Tree Decomposition, w-Cutset), Constraint Reasoning (Constraint Graph), Factored Planning, Knowledge Compilation (Causal Graph, Domain Transition Graph), Representation (SAS, PDDL), and the Easy Factors Heuristic]
Figure 1.1: Thesis “Mind-Map”. In Chapter 2, Representation and Knowledge Compilation are discussed – PDDL is introduced as well as its conversion to SAS, from which the Causal Graph and the Domain Transition Graph can be extracted. In Chapter 3, Factored Planning is discussed – it relies on ideas from Constraint Reasoning and tools from Graph Theory (Tree-Decomposition). In Chapter 4, the Easy Factors Heuristic is discussed – it is developed from the ideas introduced in previous chapters as well as other tools from Graph Theory (w-cutset).
2 Background
One of the pillars of artificial intelligence (AI) is general problem solving. Perhaps
the best example is domain-independent planning – given a description
of the current situation, a description of possible actions and a description of the
goals, the planning task is to identify a valid sequence of actions, called a plan, that
transforms the current situation into one that satisfies some goal description.
In the following section, we define a specific representation for planning called
SAS. We then discuss the different kinds of knowledge one can extract from this
language. Later in this chapter, we introduce the state-space heuristic search approach
for planning and a specific family of domain-independent heuristics based on
abstractions. Finally, we conclude this chapter with strong motivation for our work.
2.1 Classical Planning
Various languages have been defined in order to describe domain-independent planning
tasks formally. The Planning Domain Definition Language (PDDL) [McDermott et al., 1998]
is currently the de facto standard language. PDDL is used to represent planning
domains and problems in all the recent International Planning Competitions (IPC).
In this paper, we will consider a specific representation called Simplified Action
Structures (SAS), which we will define shortly. We note that a SAS description can be
generated automatically from a PDDL description [Helmert, 2006]. The advantage
of the SAS representation will become clear when we discuss knowledge compilation.
Intuitively, planning tasks specified in SAS+ are represented by state variables with
finite domains and a set of actions, each with an associated cost. More formally,
Definition 1. SAS+
A planning task in SAS+ language is Π = 〈V ,A, I,G, cost〉:
• V = {v1, ..., vn} is a set of state variables, each with a finite domain Dv.
• A is a finite set of actions, where each action a is a pair 〈pre(a); eff(a)〉 of
partial assignments to V .
• cost : A → R+ is the action cost function.
• I is a complete assignment to V representing the initial state, and G is a partial
assignment to V representing a set of states that are a goal.
Throughout this paper, we will use the following conventions for “applying”
actions – we say that an action a ∈ A is applicable in state s ∈ DV iff s[v] = pre(a)[v]
whenever pre(a)[v] is specified. Applying a changes the value of v to eff(a)[v] for any
v specified in eff(a). We will specify an action a by 〈 pre(a) ; eff(a) 〉.
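These conventions translate directly into code. The following is a minimal illustrative Python sketch (our own naming, not part of any planner discussed in this thesis), with states as dictionaries from variables to values and actions as pairs of partial assignments:

```python
# States are dicts from variables to values; an action is a pair
# (pre, eff) of partial assignments, following the conventions above.
# All names here are illustrative.

def applicable(state, action):
    # a is applicable in s iff s[v] = pre(a)[v] whenever pre(a)[v] is specified
    pre, _ = action
    return all(state[v] == d for v, d in pre.items())

def apply_action(state, action):
    # applying a changes v to eff(a)[v] for every v specified in eff(a)
    _, eff = action
    new_state = dict(state)
    new_state.update(eff)
    return new_state

# Tiny example: a single variable flipped by one action
s = {"v1": 0}
a = ({"v1": 0}, {"v1": 1})
s2 = apply_action(s, a)  # a is applicable in s; s2 maps v1 to 1
```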
2.1.1 Knowledge Compilation
Two data structures play an important role during search – the Causal Graph (CG)
and the Domain Transition Graph (DTG). They are commonly referred to as knowl-
edge compilation because they compile important information about both the plan-
ning domain and the planning problem. Later, we will examine their usage and
importance in establishing the computational complexity of optimally solving the
planning task (specified in SAS). The CG and DTGs data structures can be gener-
ated as part of preprocessing phase. Thus, it is reasonable to assume we compute
them only once before we start the search for an optimal solution. We also note that
creating the CG and DTGs from the problem specification is far less time consuming
than the search process itself.
Definition 2. Causal Graph (CG)
The causal graph [Knoblock, 1994] encodes hierarchical dependencies between dif-
ferent state variables. That is, if changes in the value of variable v′ can depend on
the value of variable v, then the CG contains an arc from v to v′. This means that,
if we intend to change variable v (by applying an action a), then the set of relevant
variables (that is, those in pre(a)) is exactly the set of ancestors of v in the causal graph,
denoted by anc(v). In the preprocessing stage, we compute the CG once for the
planning problem Π. Formally, given a planning problem Π, the causal graph, CGΠ , is
a mixed graph:1
• Each variable v ∈ V is represented by a node
• Each action a ∈ A such that v ∈ pre(a) and v′ ∈ eff(a) is represented by a
directed arc v → v′
1 A mixed graph is a graph with both directed and undirected edges.
• Each action a ∈ A such that v ∈ eff(a) and v′ ∈ eff(a) is represented by an
undirected arc v − v′
Once again, the set of ancestors of v in CGΠ is the set of relevant variables
if our goal is to change v’s value. Note that our formulation is a little different
from the common one (this will make it easier to formulate our planning problem
as a CSP). For example, if our problem has two variables v1, v2 and the action
〈v1 = d1, v2 = d2 ; v1 = d′1, v2 = d′2〉, then the CG contains arcs between v1 and v2
in both directions: v1 ⇄ v2.
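Definition 2 admits a small construction routine. The sketch below (illustrative Python, not the thesis’s implementation) builds the directed and undirected arc sets of the mixed causal graph from a list of (pre, eff) actions:

```python
# Build the mixed causal graph of Definition 2: a directed arc v -> w for
# v in pre(a) and w in eff(a), and an undirected arc v - w for two distinct
# variables both in eff(a). Self-loops are omitted.

def causal_graph(actions):
    directed, undirected = set(), set()
    for pre, eff in actions:
        for v in pre:
            for w in eff:
                if v != w:
                    directed.add((v, w))
        for v in eff:
            for w in eff:
                if v < w:  # record each undirected arc once
                    undirected.add((v, w))
    return directed, undirected

# The two-variable example above: <v1=d1, v2=d2 ; v1=d1', v2=d2'>
a = ({"v1": "d1", "v2": "d2"}, {"v1": "d1p", "v2": "d2p"})
directed, undirected = causal_graph([a])
# directed arcs in both directions between v1 and v2, plus one undirected arc
```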
Definition 3. Domain Transition Graph (DTG)
The domain transition graph [Johnson and Backstrom, 1998] encodes dependencies
between different values of a given state variable. That is, DTG(v) encodes the
necessary conditions for a value change of the variable v. In the preprocessing stage,
we compute the DTG for each state variable. Formally, given a state variable v, the
domain transition graph, DTG(v), is a directed graph:
• Each value d ∈ Dv is represented by a node
• Each action a ∈ A such that d = pre(a)[v] and d′ = eff(a)[v] is represented
by a directed arc d→ d′ labeled with pre(a)[V \ v] and cost(a)
In our previous example with variables v1, v2 and the action
〈v1 = d1, v2 = d2 ; v1 = d′1, v2 = d′2〉, labeled a with cost 1, the DTGs are:

DTG(v1): d1 → d′1, labeled v2 = d2, cost 1
DTG(v2): d2 → d′2, labeled v1 = d1, cost 1
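Definition 3 can be realized just as directly. The sketch below (again our own illustrative Python, reusing the (pre, eff) action convention) extracts DTG(v) from a list of actions with costs:

```python
# Extract DTG(v) per Definition 3: each action with d = pre(a)[v] and
# d' = eff(a)[v] contributes an arc d -> d' labeled with the remaining
# precondition pre(a)[V \ {v}] and the action's cost.

def dtg(v, actions, costs):
    arcs = []
    for (pre, eff), cost in zip(actions, costs):
        if v in eff:
            src = pre.get(v)  # None if pre(a)[v] is unspecified
            label = {u: d for u, d in pre.items() if u != v}
            arcs.append((src, eff[v], label, cost))
    return arcs

# The running example: <v1=d1, v2=d2 ; v1=d1', v2=d2'> with cost 1
actions = [({"v1": "d1", "v2": "d2"}, {"v1": "d1p", "v2": "d2p"})]
arcs_v1 = dtg("v1", actions, [1])  # one arc d1 -> d1p labeled v2=d2, cost 1
```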
2.1.2 IPC Domains
The International Planning Competition (IPC) is a biennial event that provides a
forum for empirical comparison of planning systems. The following is a short
description of several domains that we examine further later:
• Gripper: a robot moves a set of balls from one room to another; it can grip
two balls at a time, one in each gripper.
• Logistics: There are several cities, each containing several locations, some of
which are airports. There are also trucks, which can drive within a single
city, and airplanes, which can fly between airports. The goal is to get some
packages from various locations to various new locations.
• Visit All: a robot starts at the middle of a square n× n grid and must visit all
the cells in the grid.
(a) 2× 2 Visitall Illustration
(define (domain grid-visit-all)
(:requirements :typing)
(:types place - object)
(:predicates (connected ?x ?y - place)
(at-robot ?x - place)
(visited ?x - place) )
(:action move
:parameters (?curpos ?nextpos - place)
:precondition (and (at-robot ?curpos)
(connected ?curpos ?nextpos) )
:effect (and (at-robot ?nextpos)
(not (at-robot ?curpos))
(visited ?nextpos) ) )
)
(b) Visitall’s PDDL Domain Description
(define (problem grid-2)
(:domain grid-visit-all)
(:objects loc-x0-y0 loc-x0-y1 loc-x1-y0 loc-x1-y1 - place )
(:init
(at-robot loc-x1-y1)
(visited loc-x1-y1)
(connected loc-x0-y0 loc-x1-y0)
(connected loc-x0-y0 loc-x0-y1)
(connected loc-x0-y1 loc-x1-y1)
(connected loc-x0-y1 loc-x0-y0)
(connected loc-x1-y0 loc-x0-y0)
(connected loc-x1-y0 loc-x1-y1)
(connected loc-x1-y1 loc-x0-y1)
(connected loc-x1-y1 loc-x1-y0) )
(:goal (and (visited loc-x0-y0)
(visited loc-x0-y1)
(visited loc-x1-y0)
(visited loc-x1-y1) ) )
)
(c) 2× 2 Visitall’s PDDL Problem Description
Figure 2.1: 2× 2 Visit-all Example
variables:
at-robot { at[0,0];at[0,1];at[1,0];at[1,1] }
visited[0,0] { T;F }
visited[0,1] { T;F }
visited[1,0] { T;F }
visited[1,1] { T;F }
init:
at-robot = at[1,1]
visited[1,1] = T
goal:
visited[0,0] = T
visited[0,1] = T
visited[1,0] = T
visited[1,1] = T
operators:
move-at[0,0]-at[0,1]:
PRE: at-robot = at[0,0]
EFF: at-robot = at[0,1] ; visited[0,1] = T
...
...
(a) 2× 2 Visitall’s SAS Representation
[Figure: atRobot with arcs to visited[0,1], visited[1,0], visited[0,0]]
(b) 2× 2 Visitall’s CG
[Figure: the grid nodes at[0,0], at[0,1], at[1,0], at[1,1]]
(c) DTG(atRobot)
[Figure: the values T and F]
(d) DTG(visited[i, j])
Figure 2.2: 2× 2 Visit-all Example
(a) 4-balls Gripper Illustration
(define (domain gripper-strips)
(:predicates (room ?r) (ball ?b) (gripper ?g) (at-robby ?r)
(at ?b ?r) (free ?g) (carry ?o ?g))
(:action move
:parameters (?from ?to)
:precondition ( and (room ?from) (room ?to) (at-robby ?from) )
:effect (and (at-robby ?to) (not (at-robby ?from) ) ) )
(:action pick
:parameters (?obj ?room ?gripper)
:precondition (and (ball ?obj) (room ?room) (gripper ?gripper)
(at ?obj ?room) (at-robby ?room) (free ?gripper) )
:effect (and (carry ?obj ?gripper) (not (at ?obj ?room))
(not (free ?gripper) ) ) )
(:action drop
:parameters (?obj ?room ?gripper)
:precondition (and (ball ?obj) (room ?room) (gripper ?gripper)
(carry ?obj ?gripper) (at-robby ?room))
:effect (and (at ?obj ?room) (free ?gripper)
(not (carry ?obj ?gripper) ) ) )
)
(b) Gripper’s PDDL Domain Description
(define (problem gripper-1)
(:domain gripper-strips)
(:objects rooma roomb ball4 ball3 ball2 ball1 left right)
(:init (room rooma) (room roomb)
(ball ball4) (ball ball3) (ball ball2) (ball ball1)
(at-robby rooma)
(free left) (free right)
(at ball1 rooma) (at ball2 rooma)
(at ball3 rooma) (at ball4 rooma)
(gripper left) (gripper right))
(:goal (and (at ball1 roomb) (at ball2 roomb)
(at ball3 roomb) (at ball4 roomb) ) )
)
(c) 4-ball Gripper’s PDDL Problem Description
Figure 2.3: 4-ball Gripper Example
variables:
at-robby { rooma;roomb }
left,right { ball1;ball2;ball3;ball4;free }
b1,b2,b3,b4 { rooma;roomb;gripperLeft;gripperRight }
init:
at-robby = rooma
left,right = free
b1,b2,b3,b4 = rooma
goal:
b1,b2,b3,b4 = roomb
operators:
move-rooma-roomb:
PRE: at-robby = rooma
EFF: at-robby = roomb
...
...
pickup-rooma-left-b1:
PRE: at-robby = rooma ; left = free ; b1 = rooma
EFF: left = ball1 ; b1 = gripperLeft
...
putdown-roomb-right-b4:
PRE: at-robby = roomb ; right = ball4 ; b4 = gripperRight
EFF: right = free ; b4 = roomb
...
...
(a) 4-ball Gripper’s SAS Representation
[Figure: atRobby, left, right, and ball1–ball4]
(b) 4-ball Gripper’s CG
Figure 2.4: 4-ball Gripper Example
[Figure: the values free, ball1, ball2, ball3, ball4]
(a) DTG(left), DTG(right)
[Figure: the values rooma and roomb]
(b) DTG(at-robby)
Figure 2.5: 4-ball Gripper Example
2.2 Heuristic Search
Planning is computationally hard: even in the restricted case of propositional
planning, the problem is PSPACE-complete [Bylander, 1991]. Nevertheless, much
progress has been achieved using state-space search, and the most dominant approach
is heuristic search. We focus our attention on the A∗ heuristic search
algorithm [Hart et al., 1968].
The search component of A∗ refers to the process of finding a shortest path
in a graph. This graph represents some exploration of a problem’s state space.
Typical planning problems are enormous, and thus our search is performed over
an implicit graph.2 The heuristic component of A∗ refers to an estimate function
f(s) = g(s) + h(s) that guides node expansion during the search process.3
Clearly, the ability of the heuristic function to direct A∗’s node expansion to-
wards the goal has crucial effect on the efficiency of the search process. Furthermore,
in domain-independent planning, such as planning in SAS, the planner must come
up with good heuristic functions automatically. A plethora of ideas for (automatic)
heuristic generation were published throughout the last decade, such as delete relax-
ation based heuristics [Hoffmann and Nebel, 2001], pattern databases based heuris-
tics [Edelkamp, 2002], merge and shrink based heuristics [Helmert et al., 2007] and
many others.
2 Searching an implicit graph means we do not have access to “unexplored” parts of it. We explore such graphs by iteratively expanding nodes.
3 g(s) is the weight of the path from the start node to s, and h(s) is an estimate of the remaining cost from s to the goal.
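The search scheme described above can be sketched compactly. The following illustrative Python sketch is our own code (not any planner’s implementation): `successors` and `h` are assumed inputs, and refinements such as a closed list are omitted. It expands nodes in order of f(s) = g(s) + h(s):

```python
import heapq

def astar(start, is_goal, successors, h):
    # open list ordered by f(s) = g(s) + h(s); entries carry the path so far
    open_list = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_list:
        f, g, s, path = heapq.heappop(open_list)
        if is_goal(s):
            return g, path
        for s2, cost in successors(s):
            g2 = g + cost
            if g2 < best_g.get(s2, float("inf")):
                best_g[s2] = g2
                heapq.heappush(open_list, (g2 + h(s2), g2, s2, path + [s2]))
    return None

# Example: an implicit line graph 0 -> 1 -> 2 -> 3 with unit costs and the
# admissible heuristic h(s) = 3 - s
cost, path = astar(0, lambda s: s == 3,
                   lambda s: [(s + 1, 1)] if s < 3 else [],
                   lambda s: 3 - s)
```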
2.2.1 Abstraction Heuristics
A common method for generating informative heuristics for domain independent
planning is abstraction heuristics. Deriving heuristic values is done by projections
of the problem to subsets of its state-variables. More formally, an abstraction of
a planning task is a mapping φ. Each state s is mapped to an abstract state
φ(s). The abstraction heuristic value, hφ(s), is the distance from φ(s) to the
nearest abstract goal state in the transition system induced by φ. If in addition
cost(φ(s), φ(s′)) ≤ cost(s, s′) for any two states s, s′ then hφ(s) is admissible.
A well-known example of an abstraction heuristic is the pattern database (PDB)
heuristic, which is based on projecting the planning task onto a subset of its state
variables and performing explicit search for an optimal plan in the abstract state space
[Edelkamp, 2001]. PDBs are not limited to the use of a single abstraction. Computing
several abstractions and taking the maximal solution cost is always an admissible
heuristic, and usually also more informative. Clearly, using summation instead of
maximization will yield higher estimates, but by doing so we can no longer guarantee
an underestimated solution cost. Additive abstraction heuristics are a group of
abstractions such that their combined solution costs do not overestimate the cost
of solving the original problem. Thus they can be used to find optimal solutions.
Definition 4. additive abstractions
Given a planning problem Π over state-variables V , V = {V1, ..., Vm} is a set of
additive abstractions of Π if
• ∀i, Vi ⊆ V
• ∀i, Πi = 〈Vi, Ai, Ii, Gi〉
– Ii = I[Vi], Gi = G[Vi]
– Ai = {a[Vi] | a ∈ A, eff(a)[Vi] ≠ ∅}
• ∀a ∈ A, C(a) ≥ Σ_{i=1}^{m} C_i(a[Vi])
A sufficient condition for additivity, offered by [Edelkamp, 2001], is that no operator
affects variables in two different abstractions. A newer approach that subsumes
PDB additive abstractions is called merge-and-shrink [Helmert et al., 2007]. It
allows variables to be imperfectly reflected. There is, however, an inherent limitation
in both of these approaches – since explicit search is performed, the abstract state
space has to be small enough that it can: (1) be solved by blind search; and (2)
be cached in memory. In the following section we discuss how we might overcome
this limitation with implicit abstraction heuristics.
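Edelkamp’s sufficient condition quoted above (no operator affects variables in two different abstractions) is easy to check mechanically. Below is a small illustrative Python sketch of that check, reusing the (pre, eff) action convention from earlier; the Gripper-style actions are toy assumptions of ours:

```python
# Check Edelkamp's sufficient condition for additive patterns: no single
# action may have effects on variables of two different patterns.

def additive(patterns, actions):
    for _, eff in actions:
        touched = [i for i, p in enumerate(patterns)
                   if any(v in p for v in eff)]
        if len(touched) > 1:
            return False
    return True

# Toy Gripper-like actions (illustrative, not the IPC encoding)
move = ({"at-robby": "rooma"}, {"at-robby": "roomb"})
pick = ({"b1": "rooma"}, {"b1": "gripperLeft", "left": "ball1"})

ok = additive([{"at-robby"}, {"b1"}], [move])   # move touches one pattern
bad = additive([{"b1"}, {"left"}], [pick])      # pick touches both patterns
```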
2.3 Motivation and Related Work
The following is an excerpt from chapter 13 of Helmert’s book [Helmert, 2008]: “ ... The
purely hierarchical decompositions we have pursued in this work are not the only
kinds of task decompositions one could envisage. While the hierarchical approach
does have the benefit of admitting very efficient heuristic computations, it does show
its weaknesses for tasks with very highly connected causal graphs, as in
the Depots domain. In principle, it would thus be very appealing to consider
local subtasks, defined by small subgraphs of the causal graph, without
previously eliminating causal cycles ... perhaps better heuristic accuracy can
outweigh worse per-state performance in many cases. This is a very interesting area
for further investigation ... Another question of interest is whether it is possible
to improve performance significantly by considering other subtasks than those
induced by star-shaped subgraphs of the causal graph ... ”. These questions
best summarize the motivation for our research.
The Causal Graph Heuristic [Helmert, 2004] estimates the cost of reaching the
goal from a given search state by solving a number of subtasks of the planning task
which are derived by looking at small “windows” of a pruned causal graph. “... The
more connected the causal graph of a task is, the more conditions will be ignored by
the causal graph heuristic, and one would expect that this would have a detrimental
effect on heuristic accuracy... ”. This raises the question: do we really have to ignore
many conditions when the causal graph is highly connected?
Fork Decomposition [Katz and Domshlak, 2010] is yet another kind of problem
abstraction based on the causal graph. In the previous section we argued that, due to
time and space limitations, explicit heuristics have inherent limitations. To
bypass them, [Katz and Domshlak, 2010] proposed a new framework called implicit
abstractions. The key idea is to rely on abstract tasks, Πi, that have a specific
structure which is easy to solve because it belongs to a tractable fragment of planning.
Confined by the small array of such known fragments, the abstraction they offered
was fairly simple (structure-wise) — Fork-decomposition is a heuristic that abstracts
a problem into a family of tractable problems having fork structure (either forward
or inverted). In this work, we will argue that parameterized complexity has the
potential to enhance abstraction heuristics by adding richer structures that preserve
more of the original problem’s representation. In the next chapter we present the
foundation for such a heuristic.
3 Factored Planning
In many subfields of AI, such as probabilistic reasoning or constraint satisfaction
problems, algorithms that take advantage of the problem’s structure play a crucial
role. Nevertheless, identifying and exploiting the problem’s structure in automated
planning remains a key challenge. Perhaps the most promising approach to
automatically exploit structure is that of Factored Planning. A factored planner tries to
decompose the problem at hand into subproblems with minimal interaction between
them, solve each subproblem independently, and then use the solutions to construct
a valid plan for the (original) problem.
In the next section we provide background on an important concept from Graph
Theory called tree-width. We then move on to a thorough discussion on the factored
planning approach by [Brafman and Domshlak, 2006]. Finally, we test the claims
of [Brafman and Domshlak, 2006] empirically on various IPC domains.
3.1 Tree-Width
[Robertson and Seymour, 1986] introduced a graph parameter called tree-width. Intuitively,
the tree-width of a graph measures how much it resembles a “fat tree”.
The concept of tree-width is important because many combinatorial problems, such
as Constraint Satisfaction Problems (CSPs), can be solved in time exponential only
in the tree-width of their associated variable-interaction graphs. In a CSP, the
variable-interaction graph is referred to as its constraint graph.1 One can think of
the tree-width of a constraint graph as a measure of the size of the largest subprob-
lem that needs to be solved while solving the original problem using the paradigm
of dynamic programming. In the next section, when we consider planning problems
described in SAS, the concept of variable-interaction graph is captured by the CG.
1 This is an undirected graph that has a node for each variable, and an edge between two nodes whenever the corresponding variables participate in the same constraint. For a proper introduction to CSPs we refer the reader to [Dechter, 2003].
Definition 5. Tree Decomposition
A tree-decomposition (T,X) of a graph G = (V,E) consists of a tree T = (I, F ) and
bags {Xi ⊆ V, i ∈ I} such that the following three conditions are satisfied:
• ⋃i∈I Xi = V
(the nodes in the bags cover all nodes in the graph.)
• For every edge (v, w) ∈ E there exists i ∈ I with v, w ∈ Xi
(every edge of the graph appears in at least one bag.)
• For every node v ∈ V , Tv = {i ∈ I|v ∈ Xi} is a subtree of T
(the running-intersection property is satisfied.)
The width of a tree-decomposition is max_{i∈I} |Xi| − 1. The tree-width of G is the
minimum width of any tree-decomposition of G. Although finding an optimal tree-decomposition
(that is, one of minimum width) is NP-hard [Arnborg et al., 1987],
there are many fast approximations for this problem. Figure 3.1 shows an example
of tree-decomposition of a graph with tree-width 3.
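The three conditions of Definition 5 can be verified mechanically. Below is an illustrative Python sketch (our own code, not from the thesis) that checks a candidate decomposition and computes its width, applied to the example of Figure 3.1:

```python
from itertools import combinations

def is_tree_decomposition(graph_nodes, graph_edges, bags, tree_adj):
    # condition 1: the bags cover all nodes of the graph
    if set().union(*bags.values()) != set(graph_nodes):
        return False
    # condition 2: every graph edge appears in at least one bag
    for v, w in graph_edges:
        if not any(v in b and w in b for b in bags.values()):
            return False
    # condition 3 (running intersection): for each node, the bags that
    # contain it induce a connected subtree of the decomposition tree
    for v in graph_nodes:
        t = {i for i, b in bags.items() if v in b}
        seen, stack = set(), [next(iter(t))]
        while stack:
            i = stack.pop()
            if i not in seen:
                seen.add(i)
                stack.extend(j for j in tree_adj[i] if j in t)
        if seen != t:
            return False
    return True

def width(bags):
    # the width of a tree-decomposition is max_i |X_i| - 1
    return max(len(b) for b in bags.values()) - 1

# The decomposition of Figure 3.1: four bags joined through the bag {C, F, E}
bags = {1: {"A", "B", "C", "D"}, 2: {"C", "F", "E"},
        3: {"G", "F", "J"}, 4: {"E", "H", "I"}}
tree = {1: [2], 2: [1, 3, 4], 3: [2], 4: [2]}
nodes = set("ABCDEFGHIJ")
# per Figure 3.1(b), each bag corresponds to a fully connected component of G
edges = {e for b in bags.values() for e in combinations(sorted(b), 2)}
```

Under these inputs the check succeeds and the width is 3, matching Figure 3.1(c).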
3.2 Factored Planning
Finding an optimal solution to an unrestricted SAS problem is known to be PSPACE-
complete [Backstrom and Nebel, 1996]. Even so, we would like to employ the idea of
divide and conquer through domain decomposition. Factored (single-agent) planning
tries to exploit independencies in the problem, and decompose it into factors with
limited interaction between them. We examine locally optimal factored planning
[Brafman and Domshlak, 2006]. To guide the problem factorization, the graphical
model used is the problem’s CG. This approach has a clear worst-case complexity
analysis, which we empirically evaluate in the next section.
First, we illustrate how to encode a given planning task Π described in SAS as a
CSP.2 Denote by Ai = {a ∈ A : eff(a)[vi] ≠ ∅} the set of all actions affecting vi. Given
Π, construct CSPΠ(d) = 〈X,D,C〉 as follows:
• Variables – X = {v_i^(j) : 1 ≤ i ≤ n, 1 ≤ j ≤ d}
• Domains – Dom(v_i^(j)) = {(a, t) : a ∈ Ai ∪ {NOOP}, 1 ≤ t ≤ n · d}
Denote by α_i^j and τ_i^j the action a and time-step t of v_i^(j) = (a, t), respectively. Also
denote by s(t) the state at time-step t. The CSP has the following constraints:
2 Note that in their paper, Brafman & Domshlak proposed a slightly different formulation of operators, represented as 〈preconditions, prevail-conditions, effects〉.
[Figure: a graph G on the nodes A–J]
(a) Example Graph G.
[Figure: the same graph with its fully connected components highlighted]
(b) Fully connected components of G.
[Figure: a tree of bags {A, B, C, D}, {C, F, E}, {G, F, J}, {E, H, I}, with separators C, F, E on the tree edges]
(c) Tree-Decomposition of G with tree-width 3.
Figure 3.1: Tree-Decomposition Example
(a) Sokoban’s Causal Graph
(b) Tree-Decomposition of (a)
Figure 3.2: Illustration of the complexity of a causal graph. Figure (a) shows the causal graph of a Sokoban instance (problem-02) from the IPC repository. Each number represents a different variable. Figure (b) shows the tree-decomposition of the graph in (a). Observe that it gives us a better understanding of the mutual dependencies between the variables. (These figures were generated automatically using DOT.)
1. Initial State Constraints: guarantee that the first change to a variable (that
is, at time-step 1) is caused by an action whose precondition is satisfied in the
initial state. Formally,
for all v_i^(1) ∈ X, α_i^1 is an action such that pre(α_i^1) ⊆ I
2. Goal State Constraints: guarantee that the last change to a variable (that is,
at time-step d) is caused by an action whose effects are specified in the goal
state. Formally,
for all v_i^(d) ∈ X, α_i^d is an action such that eff(α_i^d) ⊆ G
3. Local Precondition Constraints: guarantee that a change to a variable is
caused by applying an action whose precondition and effect are locally satisfied
(locally in the sense of this variable). Formally,
for all vi ∈ V and k ∈ {2, ..., d}, we define a constraint on {v_i^(k−1), v_i^(k)}:
pre(α_i^k)[i] = eff(α_i^(k−1))[i] and τ_i^k > τ_i^(k−1)
4. Directed-Arcs Constraints: guarantee that a change to a variable is caused
by applying an action whose precondition and effect are satisfied according
to the causal dependencies in the problem (as specified by the CG), or it is
temporally independent. Formally,
for any directed edge vj → vi in CGΠ and for all k, l ∈ {1, ..., d}
we define a constraint on {v_i^(k), v_j^(l), v_j^(l+1)}:
pre(α_i^k)[j] = ∅
or
τ_i^k < τ_j^l or τ_i^k > τ_j^(l+1)
or
τ_j^l < τ_i^k < τ_j^(l+1) and pre(α_i^k)[j] = eff(α_j^l)[j]
5. Undirected-Arcs Constraints: guarantee that a change to two variables is
caused by applying the same action at some time-step, or that they are independent
(that is, no action can affect both of them). Formally,
for any edge vj − vi in CGΠ and for all k ∈ {1, ..., d}
we define a constraint on {v_i^(k), v_j^(1), ..., v_j^(d)}:
α_i^k ∉ Ai ∩ Aj
or
∨_{1≤l≤d} ( α_i^k = α_j^l and τ_i^k = τ_j^l )
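To make the shape of CSPΠ(d) concrete, here is an illustrative Python sketch (our own code, with hypothetical names) that constructs only the variable set X and the domains; the five constraint families above would then be posted over these variables:

```python
# Build the skeleton of CSP_Pi(d): a variable v_i^(j) for each state variable
# i (1 <= i <= n) and change index j (1 <= j <= d), with domain
# {(a, t) : a in A_i + {NOOP}, 1 <= t <= n*d}.

def csp_skeleton(affecting_actions, d):
    n = len(affecting_actions)  # affecting_actions[i] plays the role of A_i
    variables, domains = [], {}
    for i in range(1, n + 1):
        for j in range(1, d + 1):
            x = (i, j)  # stands for v_i^(j)
            variables.append(x)
            domains[x] = [(a, t)
                          for a in affecting_actions[i - 1] + ["NOOP"]
                          for t in range(1, n * d + 1)]
    return variables, domains

# Two state variables, one affecting action each, local bound d = 2:
# 4 CSP variables, each with (1 + 1) * (2 * 2) = 8 domain values
vs, doms = csp_skeleton([["a1"], ["a2"]], 2)
```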
[Brafman and Domshlak, 2006] showed a clear relation between the tree-width of the
causal graph and the tree-width of the constraint problem’s primal graph. Specifically,
tw(CSPΠ(d)) = tw(CGΠ) · d
Let Plan(Π) be the set of all plans for Π. For each plan ρ ∈ Plan(Π), denote by ρi
the subset of all actions in ρ affecting the state variable vi. Then the local depth of Π is

δ = min_{ρ ∈ Plan(Π)} max_{1 ≤ i ≤ n} |ρi|
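For a single given plan ρ, the inner quantity max_i |ρi| is straightforward to compute; δ is then its minimum over all plans. Below is an illustrative Python sketch of that inner computation (our own code, using the (pre, eff) action convention from Chapter 2; the VisitAll-style plan is a toy assumption):

```python
# For one plan (a list of (pre, eff) actions), compute max_i |rho_i|:
# the largest number of actions in the plan that affect a single variable.

def max_local_length(plan):
    counts = {}
    for _, eff in plan:
        for v in eff:
            counts[v] = counts.get(v, 0) + 1
    return max(counts.values())

# A 2x2 VisitAll-style toy plan: each move changes at-robot and sets one
# visited flag, so at-robot is affected three times
plan = [({}, {"at-robot": "01", "visited01": True}),
        ({}, {"at-robot": "00", "visited00": True}),
        ({}, {"at-robot": "10", "visited10": True})]
```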
With these notations, it can be shown that the worst-case time complexity of solving
a planning problem is

O(n · (nδa)^(wδ+δ))    (3.1)
This factored method, at least in theory, should work extremely well when our
problem can be broken into many “loosely coupled” subproblems, each of which has
a local solution that is computable in a short time. Moreover, for fixed w and δ the
problem’s complexity reduces to a polynomial. This is the intuition behind the
construction of the easy-factors heuristic that we present in the next chapter.
3.3 Empirical Analysis of Factored Planning
Our ultimate goal is to devise a new heuristic based on abstractions chosen according
to some notion of underlying structure. In order to justify our reliance on factored
planning for capturing this structure, we examine whether the theoretical result in
equation (3.1) indeed align with empirical findings. Specifically, we would like to
verify that, at least in some planning domains, problems with bounded w and δ
can be solved quickly. Furthermore, we want to find out if w and δ can be used to
predict the running time of a state-of-the-art planner.
In order to do so, we incorporated a tree-decomposition library3 into the Fast-
Downward planning system [Helmert, 2006]. We used the LM-cut heuristic – one of
the leading heuristics for optimal planning [Helmert and Domshlak, 2009]. To the
best of our knowledge, this is the first attempt to validate these theoretical results
on standard IPC domains. Out of the 32 domains and 585 problems examined, we
summarize some of the results in Figure 3.3.4
First, we note that for all the domains in Figure 3.3 the relation between w · δ
3 TreeD – http://www.itu.dk/~sathi/treed/
4 All experiments were conducted on an Intel Xeon CPU 3.06GHz with 4GB RAM.
[Plots omitted: six scatter plots of expanded nodes vs. tw · δ, one per domain –
VisitAll, Gripper, Woodworking, Logistics00, Elevators, and Blocks.]
VisitAll
inst.   vars   tw   δ    expanded
p02-f   4      1    3    4
p04-f   16     1    15   590
p06-h   16     1    23   723
p08-h   33     1    43   1944990

Gripper
inst.   vars   tw   δ    expanded
p01     7      3    4    99
p03     11     3    8    10529
p05     15     3    12   372607
p07     19     3    16   10082493

Woodworking
inst.   vars   tw   δ    expanded
p01     31     11   2    15
p03     36     15   3    5245
p06     43     20   4    167358
p10     46     23   2    27

Logistics00
inst.   vars   tw   δ    expanded
p4-0    7      3    6    76
p5-2    8      3    2    9
p8-1    12     4    6    43665
p12-0   17     5    6    116555

Elevators
inst.   vars   tw   δ    expanded
p02     11     4    6    2124
p07     15     6    6    28983
p14     14     7    12   355323
p20     14     5    8    45147

Blocks
inst.   vars   tw   δ    expanded
p4-0    9      5    6    9
p9-0    19     10   30   13922
p12-0   25     12   34   64209
p14-1   29     14   36   234506
Figure 3.3: Factored Planning Analysis
and the expanded nodes is approximately exponential, regardless of the domain.
Second, w · δ provides better estimates than the number of state variables in the
problem. For example, the Elevators instances p14 and p20 have the same number
of state variables (14), but p14 required almost 8 times as many expanded nodes
(355323 for p14 vs. 45147 for p20). However, p14 has w · δ = 84, which is roughly
twice that of p20. Moreover, observe that p07 from the same domain has 15 state
variables but requires only 28983 node expansions to find an optimal solution. This
can be explained by its relatively “easy factors” (w · δ = 36). The same argument
holds for p06 and p10 of Woodworking and p4-0 and p5-2 of Logistics.
4 The Easy-Factors Heuristic
An important family of heuristics for domain independent planning is that of ab-
stractions. We already mentioned explicit abstractions such as PDBs and merge-
and-shrink, and pointed out their shortcomings. We then discussed how time
and space limitations might be overcome with implicit abstractions, such as fork-
decomposition. However, fork-decomposition is not without drawbacks – regardless
of the problem at hand, all abstractions have a simple fork-shaped CG. Thus, not
only does the CG have tree-width at most 1, it is also limited to depth 1.
Further, in the previous chapter we introduced factored planning and convinced
ourselves that the CG's tree-width and local plan length (w and δ, respectively) are
closely related to the empirical run-time of a state-of-the-art planner. With this
insight in mind, we design a new heuristic that abstracts a given problem along
these two parameters. That is, our heuristic does not a priori adhere to any specific
structure for the abstracted sub-problems, but rather first examines the problem at
hand and only then decides how to abstract it. Our heuristic guarantees abstracted
sub-problems with w ≤ w̄ and δ ≤ δ̄ for some integers w̄ and δ̄. Thus, by virtue
of their construction, the abstracted sub-problems have bounded parametrized
complexity. Such an abstraction also has the potential to be more informative than
other “fixed”-structure implicit heuristics. As we will discuss later, this is because
it keeps as much structure as possible in the abstracted domain. We call our new
heuristic the easy-factors heuristic.
In the next section we explain how to intelligently abstract a problem such that
it has a CG with bounded tree-width, while in some sense retaining as much of
its structure as possible. We then move to treat the second parameter, local plan
length, and discuss how to bound it. Finally, the easy-factors heuristic is based on
the premise that planning problems with bounded w and δ can be solved quickly. In
the last section we evaluate qualitatively the performance of this heuristic on IPC
domains as well as the aforementioned premise.
4.1 w-Cutset
This section revolves around the CG's tree-width, w. We suggest an intelligent1 way
to break the CG down into sub-graphs with bounded tree-width. First, observe that
if the CG has independent components, then there are subsets of state-variables
with no interaction. This means that we can solve each component independently.
Clearly, that is an important case, but it can be recognized and exploited quite
easily.2 Thus, we consider only CGs that are connected.
A cycle-cutset of a graph is a set of nodes that, once removed from the graph,
results in a cycle-free graph. More generally, a w-cutset of a graph is a set of nodes
that, once removed, results in a graph with tree-width at most w. Finding a minimal
w-cutset is an NP-complete problem. Nevertheless, the following greedy algorithm
guarantees an O(1 + ln m) approximation, where m is the maximal number of
clusters of size greater than w + 1 [Bidyuk and Dechter, 2004].
Greedy w-Cutset Algorithm
Input: Vars {v_1, ..., v_n}, Bags {X_1, ..., X_m}, w
Output: A set C ⊆ V s.t. max_i |X_i \ C| ≤ w

C ← ∅
while ∃ X_i s.t. |X_i| > w do
    for all 1 ≤ i ≤ n: f_i ← Σ_{j=1}^{m} 1[v_i ∈ X_j ∧ |X_j| > w]
    v ← argmax_{1 ≤ i ≤ n} f_i
    for all 1 ≤ j ≤ m: X_j ← X_j \ {v}
    V ← V \ {v}
    C ← C ∪ {v}
end
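The greedy procedure can be sketched compactly in Python; representing the tree-decomposition bags as plain sets of variable names is an assumption of this sketch:

```python
from collections import Counter

def greedy_w_cutset(bags, w):
    """Greedy w-cutset sketch: repeatedly remove the variable occurring in the
    most oversized bags, until every bag of the tree decomposition has at most
    w variables (the output condition max_i |X_i \\ C| <= w above)."""
    bags = [set(b) for b in bags]
    cutset = set()
    while any(len(b) > w for b in bags):
        # f_i: count occurrences of each variable in bags larger than w.
        freq = Counter(v for b in bags if len(b) > w for v in b)
        v, _ = freq.most_common(1)[0]   # argmax f_i
        for b in bags:
            b.discard(v)
        cutset.add(v)
    return cutset

# Hypothetical decomposition: 'c' lies in all three oversized bags.
print(greedy_w_cutset([{'a', 'b', 'c'}, {'b', 'c', 'd'}, {'c', 'd', 'e'}], w=2))  # → {'c'}
```

Ties in the frequency count are broken arbitrarily here, as in the pseudocode.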
Although cutsets are usually used for conditioning, we employ them solely as a
means to manipulate the causal graph. We argue that taking out the set of state-
variables returned by the minimal w-cutset procedure is the sensible thing to do
because: (1) it removes the minimal number of state-variables from the problem to
achieve a sub-problem with the desired tree-width; and (2) it maintains the highest
tree-width possible (that is, it returns a graph with tree-width equal to w).
Maintaining high tree-width is important because the higher the tree-width, the
more complex the causal dependencies represented in the CG. One can think of the
1 Intelligent in the sense that the CG's sub-graphs are chosen such that they maintain as much structure as possible.
2 Simply divide the original problem into independent sub-problems and solve each independently.
“hardness”3 associated with a problem as partially derived from the level of causal
dependencies between its state-variables. Thus, ideally we would like to maintain as
much of this “hardness” in the abstracted sub-problem as possible because it results
in more accurate and informative heuristics.
Now, given a planning problem Π = 〈V,A, I,G〉, assume we want to generate an
abstracted sub-problem whose causal graph has tree-width at most some fixed
integer w. In order to do so, we can use the greedy w-Cutset algorithm as follows.
Denote by C the list of variables returned by the greedy w-Cutset algorithm.
We can build the abstracted planning problem Π′ = 〈V′, A′, I′, G′〉 with:
• The set of state-variables that comprise the abstract sub-problem is V′ = V \ C.
• The set of abstract actions A′ ⊆ A contains every action with some v′ ∈ V′ in
its effects list. We remove any variable not in V′ from the specification
of the abstracted action.
• The initial and goal abstract states, I′ and G′, are I and G projected onto V′.
Clearly, by its construction, the abstracted planning problem Π′ has a causal graph
with tree-width w.
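The projection building Π′ from Π and the cutset C can be sketched as follows; the tuple-and-dict encoding of a SAS+ task below is an illustrative assumption, not Fast-Downward's actual representation:

```python
def project(V, A, I, G, cutset):
    """Build the abstracted task <V', A', I', G'> by dropping the cutset
    variables. Each action is a (pre, eff) pair of variable -> value dicts."""
    keep = set(V) - set(cutset)
    V_p = [v for v in V if v in keep]
    A_p = []
    for pre, eff in A:
        eff_p = {v: d for v, d in eff.items() if v in keep}
        if eff_p:  # keep only actions that still affect some abstract variable
            A_p.append(({v: d for v, d in pre.items() if v in keep}, eff_p))
    I_p = {v: d for v, d in I.items() if v in keep}
    G_p = {v: d for v, d in G.items() if v in keep}
    return V_p, A_p, I_p, G_p

# Toy task: removing 'x' drops the second action and the 'x' precondition.
V = ['x', 'y']
A = [({'x': 0}, {'y': 1}), ({}, {'x': 1})]
V_p, A_p, I_p, G_p = project(V, A, {'x': 0, 'y': 0}, {'y': 1}, cutset={'x'})
print(V_p, A_p, G_p)  # → ['y'] [({}, {'y': 1})] {'y': 1}
```

Note that actions whose effects fall entirely outside V′ disappear, matching the definition of A′ above.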
Now, given a planning problem Π = 〈V,A, I,G〉, we can call the greedy w-Cutset
algorithm recursively and extract subsets of state-variables. The scheme described in
Figure 4.1 results in subsets {V_1, ..., V_m} obtained from V. We can build abstracted
sub-problems {Π_1, ..., Π_m}, as described above, such that for all i the tree-width of
CG_{Π_i} is at most w.
3 “Hardness” in the sense of the length (or cost) of the plan needed to achieve the goal.
Find Subsets
Input: w, SAS+ planning problem Π = 〈V,A, I,G〉
Output: Sets V_1, ..., V_m ⊆ V s.t. ∀i, tw(CG(Π[V_i])) ≤ w

i ← 1
while tw(CG(Π)) > w do
    X ← compute a tree-decomposition of CG(Π)
    C ← Greedy w-Cutset Algorithm(V, X, w)
    V_i ← V \ C
    V ← V \ V_i
    i ← i + 1
end
Figure 4.1: Tree-Width Relaxation Scheme
[Graph drawings omitted: panels (a) 4-cutset, (b) 3-cutset, (c) 2-cutset and
(d) 1-cutset show an example graph on nodes A–J shrinking as each cutset is removed.]
Figure 4.2: Removing a w-cutset from a graph
4.2 Node-Contraction
We now turn our attention to the local plan length, δ. Along with w, δ dominates
the run-time complexity of factored planning. Unfortunately, and unlike w, there is
no straightforward way to determine δ from the problem specification. Moreover,
for general planning problems we cannot even bound δ. Nevertheless, we present a
simple idea for generating abstracted problems that are likely to have smaller δ's.
Our method is based on node contraction in the DTG.
Consider, for example, the Visitall domain. Every instance in this domain has a
tree-width of 1. We have already seen in Section 3.3 that the number of expanded
nodes is exponential in δ, hence the importance of bounding it. The longest local
plan in the Visitall domain is that of the atRobot state-variable; thus δ is determined
by the size of the grid.4 Intuitively, one way to simplify this planning problem would
be to merge locations into one “meta-location”. This in turn is equivalent to
contracting nodes in the DTG. Moreover, since every instance of this domain has a
tree-width of 1, the only way to derive a tractable abstraction is to decrease δ.5
Figure 4.3 illustrates this basic idea on a small 2 × 2 grid.
In order to refine the aforementioned idea, let us examine another domain –
Logistics. Recall that in this domain we want to move parcels between different
locations in different cities. We can use trucks within a city and aircraft between
cities. It is very reasonable to assume that driving a truck is cheaper than flying
an airplane. However, adding either a driving or a flying action to a plan would add
one to the local plan length of some variable, regardless of the action chosen. Since
we use solution costs of abstracted sub-problems as heuristic values to guide the
search in the original problem, we wish to maintain the original costs as much as
possible. Thus, given the choice of contracting two locations within the same city or
two airports in different cities, we choose to contract the locations inside the city.
Intuitively, this approach contracts closer locations, and as a result the abstracted
sub-problem represents only the major traffic arteries.
The algorithm in Figure 4.4 captures exactly this intuition. It iterates over every
variable's DTG. For each DTG, the algorithm contracts nodes until there are no
more than δ̄ vertices, for some integer δ̄. At each step, two nodes are contracted if
the edge between them has the lowest cost. Recall that an edge between two nodes
in the DTG represents an action that changes the DTG's state-variable from the
4 This is because atRobot has to change at least the size of the grid times, while all other variables, those that denote whether a grid cell has been visited, change only once.
5 We note that the CG in this case looks like a fork. Nevertheless, the fork-decomposition method failed to scale well and could not solve instances with grid sizes bigger than 10 × 10.
[DTG drawings omitted: (a) original DTG(atRobot) with nodes at[0,0], at[0,1], at[1,0],
at[1,1]; (b) contracted DTG(atRobot) with 2 nodes.]
Figure 4.3: Illustration of Visitall DTG Abstraction. Figure (a) represents the DTG of the state-variable atRobot of a 2 × 2 Visitall problem. Since the grid size is 2 × 2 and the robot can be in any of the 4 different locations, atRobot's DTG has 4 nodes. In figure (b), we illustrate a possible contraction of location [0,0] with [0,1] and of location [1,0] with [1,1]. Thus, the contracted DTG of the variable atRobot has only 2 nodes, representing 2 “meta”-grids.
DTG Node Contraction
Input: δ̄, SAS+ planning problem Π = 〈V,A, I,G〉
Output: A SAS+ planning problem s.t. every DTG(v) has at most δ̄ nodes

for i ← 1 to |V| do
    (V, E) ← DTG(v_i)
    let a priority queue Q contain all the edges in E^a
    while |V| > δ̄ do
        e ← Q.removeMin()
        contract the two nodes of e
    end
end

a Priorities are based on the action cost attributed to the edge.

Figure 4.4: DTG Contraction
value of one node to the value of the other. Thus, contracting the lowest-cost edge
corresponds to abstracting away the cheapest possible change in that variable (i.e.,
contracting these two different values of the variable into one “meta”-value).
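The contraction loop of Figure 4.4 can be sketched with a priority queue plus a union-find structure to track merged values; the node names and edge costs below are hypothetical:

```python
import heapq

class UnionFind:
    """Minimal union-find to track which DTG values have been merged."""
    def __init__(self, nodes):
        self.parent = {n: n for n in nodes}

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.parent[ra] = rb
        return True

def contract_dtg(nodes, edges, max_nodes):
    """Contract cheapest DTG edges first until at most max_nodes
    'meta'-values remain; returns a map value -> meta-value."""
    uf = UnionFind(nodes)
    heap = [(cost, u, v) for u, v, cost in edges]
    heapq.heapify(heap)
    remaining = len(nodes)
    while remaining > max_nodes and heap:
        _, u, v = heapq.heappop(heap)
        if uf.union(u, v):          # skip edges already inside one meta-value
            remaining -= 1
    return {n: uf.find(n) for n in nodes}

# 2x2 Visitall grid (cf. Figure 4.3); costs are made up for illustration.
grid = ['(0,0)', '(0,1)', '(1,0)', '(1,1)']
edges = [('(0,0)', '(0,1)', 1), ('(1,0)', '(1,1)', 1),
         ('(0,0)', '(1,0)', 2), ('(0,1)', '(1,1)', 2)]
meta = contract_dtg(grid, edges, max_nodes=2)
print(len(set(meta.values())))  # → 2
```

With these costs the two cost-1 edges are contracted first, reproducing the pairing of Figure 4.3(b).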
We note that our scheme is somewhat similar to the well-known delete relaxation
[Hoffmann and Nebel, 2001]. Consider delete relaxation in the Gripper domain –
essentially, deleting negative effects results in augmenting locations to the robot
(that is, at timestep t the robot is present in all rooms it visited by timestep t). On
the other hand, contraction merges (closer) locations into one meta-location, rather
than augmenting locations as time goes by. Thus, contractions are less prone to
under-estimating in these situations.
Finally, although contractions can account for costs that delete relaxation does
not, we note that decreasing δ does not guarantee tractable abstractions. Such a
negative example can be found in the Gripper domain. Consider a Gripper problem
with two rooms and several balls. Here, atRobot's DTG has only two nodes –
one for each room. If we contract the atRobot state-variable (that is, the robot is
present in both rooms all the time), we degrade the solution accuracy to something
comparable to that of delete relaxation. If we choose not to do so, the problem is
computationally harder to solve and thus less suitable to be used as a heuristic.
Clearly, there is no reason to fix δ̄ to be less than 2. One possible solution to this
issue is adaptive delete relaxation – instead of abstracting DTGs as a preprocessing
step, we monitor δ online (that is, during search). If at some point the local plan
length of some state-variable becomes greater than δ̄, that variable begins to augment
values (like delete relaxation). Augmenting values to a state-variable bounds its local
plan length by the number of different values it can take, hence bounding the
parametrized complexity.
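The thesis leaves this adaptive mechanism at the level of an idea; one way it might look is sketched below, where the set-valued state, the per-variable change counters, and the threshold `delta_bar` are all assumptions of this sketch:

```python
def apply_adaptive(state, effects, counts, delta_bar, relaxed):
    """Adaptive delete relaxation (sketch): a variable changes normally until it
    has changed more than delta_bar times; from then on it accumulates values,
    delete-relaxation style, bounding its local plan length by its domain size."""
    new_state = {v: set(vals) for v, vals in state.items()}
    for var, value in effects.items():
        counts[var] = counts.get(var, 0) + 1
        if counts[var] > delta_bar:
            relaxed.add(var)        # switch this variable to relaxed mode
        if var in relaxed:
            new_state[var].add(value)   # accumulate (relaxed)
        else:
            new_state[var] = {value}    # replace (exact)
    return new_state

# With delta_bar = 1 the second change to 'loc' triggers relaxation.
counts, relaxed = {}, set()
s = {'loc': {'A'}}
s = apply_adaptive(s, {'loc': 'B'}, counts, 1, relaxed)   # exact: {'B'}
s = apply_adaptive(s, {'loc': 'C'}, counts, 1, relaxed)   # relaxed: {'B', 'C'}
print(sorted(s['loc']))  # → ['B', 'C']
```

A proper implementation would track counts per search path rather than globally; this sketch only illustrates the mode switch.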
4.3 Empirical Results
Planners are usually evaluated empirically by measuring their run-time on IPC
domains, and the planning community is very focused on raw benchmark performance.
Unfortunately, after a lot of effort on our side,6 we could not fully incorporate the
easy-factors heuristic into the Fast-Downward planner. Nevertheless, we give a
limited qualitative analysis.
We consider the Gripper domain. Figure 4.5 shows the number of expanded
nodes of Fast-Downward's A* search using our easy-factors heuristic. Note that
any abstract sub-problem that does not contain at least one state-variable in the
goal specification is simply ignored. Thus, in the Gripper domain, abstracting a
problem along its causal graph yields only one subproblem. The causal graph of the
abstract sub-problem for the 2-, 3- and 4-cutset can be seen in Figure 4.6.7
As expected, the case of tw = 1 is not informative and is essentially equivalent
to blind search. Surprisingly, the case of tw = 2 (which is equivalent to fork-
decomposition) does not perform any better. Note that in those cases, blind search
will perform much better, since it does not “waste” time solving abstracted sub-
problems. Even more interesting is the observation that the abstraction with tw = 3
adds only a little information over the abstraction with tw = 2. Examining Figure
4.6(b), we see that the only state-variable that was abstracted away is the location
of the robot. This provides the reason for the poor performance of the heuristic
– abstracting away the robot's location results in a problem with solution length
(cost) that is far less than the original one.
One possible explanation for the poor performance on the Gripper domain can
be found in [Helmert and Roger, 2008]. There, the authors showed that even with a
heuristic with small constant error, about half of all reachable states need to be
considered, because they all lie on optimal paths to the goal. Finally, we note that
in the case of tw = 4 the abstraction is actually the original problem and thus
functions as a perfect heuristic. Clearly, solving the original problem to provide
guidance to the search process is unacceptable.
6 We implemented a new heuristic in Fast-Downward that creates abstracted sub-problems with bounded tree-width. We could not implement the second part of contracting nodes in DTGs. Another issue that we could not work around was that, at each step, we had to reset FD's search engine for the abstracted sub-problems' search. The inability to reuse information from previous searches means we have to start from scratch for every expanded node, which is very inefficient.
7 In the case of tw = 1, the abstractions are single nodes only. In our example, we have 4 naive subproblems, each being a ball's state-variable.
Gripper
inst.   tw = 1   tw = 2   tw = 3   tw = 4
p01     229      229      173      12
p02     1803     1803     1611     18
p03     11689    11689    11225    24
p04     N.A      68479    67559    30

Figure 4.5: Gripper domain – expanded nodes as a function of abstraction tree-width
[Causal-graph drawings omitted: (a) tw = 4 – atRobby, left, right, ball1–ball4;
(b) tw = 3 – left, right, ball1–ball4; (c) tw = 2 – left, ball1–ball4.]
Figure 4.6: Abstracted Gripper Problems
5 Conclusions and Future Work
In this paper, we presented a new implicit heuristic that has bounded parametrized
complexity. We called it the Easy-Factors heuristic, since it breaks down the original
problem into abstracted subproblems (factors), each of which can be solved in time
polynomial in the input size and exponential in bounded parameters (therefore
“easily” solvable). The novelty of our heuristic also lies in the fact that the abstracted
sub-problems are generated after examining the original problem's structure, rather
than a priori adhering to any specific (fixed) structure. These abstractions are
guided by theoretical and empirical insights from locally-optimal factored planning
that we empirically evaluated. We provided an initial empirical evaluation of the
easy-factors heuristic, which suggests that it may balance the trade-off between the
accuracy of its values and the time spent computing them.
As future work, the first step would be to fully incorporate this heuristic into
the Fast-Downward planning system. Preferably, this implementation should include
incremental search [Koenig and Likhachev, 2002], so it can leverage knowledge
gained in past iterations (that is, solutions to the abstracted problems from previous
iterations). Another improvement could be gained by using the exact same ideas
hierarchically – if solving an abstracted problem proves to be time-consuming, solve
it with search guided by the Easy-Factors heuristic, but now consider the abstracted
problem as the “original” one. Our approach also lends itself naturally to parallel
computation, because each abstracted subproblem can be solved independently
by (possibly different) planners. Finally, we would like to explore other ideas for
bounding δ and w, such as variable renaming [Ramírez and Geffner, 2007].
Bibliography
[Arnborg et al., 1987] Arnborg, S., Corneil, D. G. and Proskurowski, A. (1987). Complexity of Finding Embeddings in a k-Tree. SIAM Journal on Algebraic Discrete Methods 8, 277–284.
[Backstrom and Nebel, 1996] Backstrom, C. and Nebel, B. (1996). Complexity Results for SAS+ Planning. Technical report.
[Bidyuk and Dechter, 2004] Bidyuk, B. and Dechter, R. (2004). On Finding Minimal w-cutset. In UAI '04, Proceedings of the 20th Conference in Uncertainty in Artificial Intelligence, Banff, Canada, July 7-11, 2004, (Chickering, D. M. and Halpern, J. Y., eds), pp. 43–50, AUAI Press.
[Brafman and Domshlak, 2006] Brafman, R. I. and Domshlak, C. (2006). Factored Planning: How, When, and When Not. In The Twenty-First National Conference on Artificial Intelligence (AAAI 2006), pp. 809–814.
[Bylander, 1991] Bylander, T. (1991). Complexity Results for Planning. In Proceedings of the 12th International Joint Conference on Artificial Intelligence, (Mylopoulos, J. and Reiter, R., eds), pp. 274–279, Morgan Kaufmann, Sydney, Australia.
[Dechter, 2003] Dechter, R. (2003). Constraint Processing. The Morgan Kaufmann Series in Artificial Intelligence, Elsevier Science.
[Edelkamp, 2001] Edelkamp, S. (2001). Planning with Pattern Databases. In Proceedings of the 6th European Conference on Planning (ECP-01), pp. 13–24.
[Edelkamp, 2002] Edelkamp, S. (2002). Symbolic Pattern Databases in Heuristic Search Planning. In Proceedings of the Sixth International Conference on Artificial Intelligence Planning Systems, AIPS 2002, April 23-27, 2002, Toulouse, France, (Ghallab, M., Hertzberg, J. and Traverso, P., eds), pp. 274–283, AAAI.
[Helmert, 2004] Helmert, M. (2004). A Planning Heuristic Based on Causal Graph Analysis. In ICAPS, pp. 161–170.
[Helmert, 2006] Helmert, M. (2006). The Fast Downward Planning System. J. Artif. Intell. Res. (JAIR) 26, 191–246.
[Helmert, 2008] Helmert, M. (2008). Understanding Planning Tasks: Domain Complexity and Heuristic Decomposition, vol. 4929 of Lecture Notes in Computer Science. Springer.
[Helmert and Domshlak, 2009] Helmert, M. and Domshlak, C. (2009). Landmarks, Critical Paths and Abstractions: What's the Difference Anyway? In ICAPS, (Gerevini, A., Howe, A. E., Cesta, A. and Refanidis, I., eds), AAAI.
[Helmert et al., 2007] Helmert, M., Haslum, P. and Hoffmann, J. (2007). Flexible Abstraction Heuristics for Optimal Sequential Planning. In Proceedings of the Seventeenth International Conference on Automated Planning and Scheduling, ICAPS 2007, Providence, Rhode Island, USA, September 22-26, 2007, (Boddy, M. S., Fox, M. and Thiebaux, S., eds), pp. 176–183, AAAI.
[Helmert and Roger, 2008] Helmert, M. and Roger, G. (2008). How Good is Almost Perfect? In AAAI, pp. 944–949.
[Hoffmann and Nebel, 2001] Hoffmann, J. and Nebel, B. (2001). The FF Planning System: Fast Plan Generation Through Heuristic Search. J. Artif. Int. Res. 14, 253–302.
[Jonsson and Backstrom, 1998] Jonsson, P. and Backstrom, C. (1998). State-Variable Planning under Structural Restrictions: Algorithms and Complexity. Artificial Intelligence 100, 125–176.
[Katz and Domshlak, 2010] Katz, M. and Domshlak, C. (2010). Implicit Abstraction Heuristics. J. Artif. Intell. Res. (JAIR) 39, 51–126.
[Knoblock, 1994] Knoblock, C. A. (1994). Automatically Generating Abstractions for Planning. Artificial Intelligence 68, 243–302.
[Koenig and Likhachev, 2002] Koenig, S. and Likhachev, M. (2002). D* Lite. In Eighteenth National Conference on Artificial Intelligence, pp. 476–483, American Association for Artificial Intelligence, Menlo Park, CA, USA.
[McDermott et al., 1998] McDermott, D., Ghallab, M., Howe, A., Knoblock, C., Ram, A., Veloso, M., Weld, D. and Wilkins, D. (1998). PDDL – The Planning Domain Definition Language. Technical Report TR-98-003, Yale Center for Computational Vision and Control.
[Hart et al., 1968] Hart, P. E., Nilsson, N. J. and Raphael, B. (1968). A Formal Basis for the Heuristic Determination of Minimum Cost Paths. IEEE Transactions on Systems Science and Cybernetics SSC-4, 100–107.
[Ramírez and Geffner, 2007] Ramírez, M. and Geffner, H. (2007). Structural Relaxations by Variable Renaming and Their Compilation for Solving MinCostSAT. In Principles and Practice of Constraint Programming – CP 2007, 13th International Conference, CP 2007, Providence, RI, USA, September 23-27, 2007, Proceedings, (Bessière, C., ed.), vol. 4741 of Lecture Notes in Computer Science, pp. 605–619, Springer.
[Robertson and Seymour, 1986] Robertson, N. and Seymour, P. D. (1986). Graph Minors. II. Algorithmic Aspects of Tree-Width. Journal of Algorithms 7.