arXiv:2108.04769v1 [cs.AI] 10 Aug 2021



ON THE FOUNDATIONS OF GROUNDING IN ANSWER SET PROGRAMMING

ROLAND KAMINSKI

University of Potsdam

TORSTEN SCHAUB

University of Potsdam

Abstract. We provide a comprehensive elaboration of the theoretical foundations of variable instantiation, or grounding, in Answer Set Programming (ASP). Building on the semantics of ASP’s modeling language, we introduce a formal characterization of grounding algorithms in terms of (fixed point) operators. A major role is played by dedicated well-founded operators whose associated models provide semantic guidance for delineating the result of grounding along with on-the-fly simplifications. We address an expressive class of logic programs that incorporates recursive aggregates and thus amounts to the scope of existing ASP modeling languages. This is accompanied with a plain algorithmic framework detailing the grounding of recursive aggregates. The given algorithms correspond essentially to the ones used in the ASP grounder gringo.

1. Introduction

Answer Set Programming (ASP; [46]) allows us to address knowledge-intensive search and optimization problems in a highly declarative way due to its integrated modeling, grounding, and solving workflow [31, 41]. Problems are modeled in a rule-based logical language featuring object variables, function symbols, recursion, and aggregates, among others. Moreover, the underlying nonmonotonic semantics allows us to express defaults and reachability in an easy way. A corresponding logic program is then turned into a propositional format by systematically replacing all object variables by variable-free terms. This process is called grounding. Finally, the actual ASP solver takes the resulting propositional version of the original program and computes its answer sets.

Given that both grounding and solving constitute the computational cornerstones of ASP, it is surprising that the importance of grounding

E-mail addresses: [email protected], [email protected].




has somehow been eclipsed by that of solving. This is nicely reflected by the unbalanced number of implementations. With lparse [58], (the grounder in) dlv [17], and gringo [27], three grounder implementations face dozens of solver implementations, among them smodels [56], (the solver in) dlv [44], assat [49], cmodels [34], clasp [30], and wasp [2], just to name the major ones. What caused this imbalance? One reason may consist in the high expressiveness of ASP’s modeling language that allows modern grounders to mimic universal Turing machines (cf. [27]). Another may lie in the popular viewpoint that grounding amounts to database materialization, and thus that most fundamental research questions have been settled. And finally, the semantic foundations of full-featured ASP modeling languages have been established only recently [24, 37], revealing the semantic gap to the just mentioned idealized understanding of grounding. In view of this, research on grounding focused on algorithm and system design [17, 27] and the characterization of language fragments guaranteeing finite propositional representations [9, 32, 45, 58].

As a consequence, the theoretical foundations of grounding are much less explored than those of solving. While there are several alternative ways to characterize the answer sets of a logic program [47], and thus the behavior of a solver, we still lack in-depth formal characterizations of the input-output behavior of ASP grounders. Although we can describe the resulting propositional program up to semantic equivalence, we have no formal means to delineate the actual set of rules.

To this end, grounding involves some challenging intricacies. First of all, the entire set of systematically instantiated rules is infinite in the worst, yet not uncommon, case. For a simple example, consider the program:

p(a)

p(X)← p(f(X))

This program induces an infinite set of variable-free terms, viz. a, f(a), f(f(a)), . . . , that leads to an infinite propositional program by systematically replacing variable X by all these terms in the second rule. On the other hand, modern grounders only produce the fact p(a) and no instances of the second rule, which is semantically equivalent to the infinite program. As well, ASP’s modeling language comprises (possibly recursive) aggregates, whose systematic grounding may be infinite in itself. To illustrate this, let us extend the above program with the rule

q ← #count{X : p(X)} = 1    (1)

deriving q when the number of satisfied instances of p is one. Analogous to above, the systematic instantiation of the aggregate’s element results in an infinite set. Again, a grounder is able to infer a fact. That is, it detects that the set amounts to a singleton that satisfies the aggregate. After removing the rule’s (satisfied) antecedent, it produces the fact q. In fact, a solver expects a finite set of propositional rules including aggregates over finitely many objects only. Hence, in practice, the characterization of the grounding result boils down to identifying a finite yet semantically equivalent set of rules (whenever possible). Finally, in practice, grounding involves simplifications whose application depends on the ordering of rules in the input. In fact, shuffling a list of propositional rules only affects the order in which a solver enumerates answer sets, whereas shuffling a logic program before grounding may lead to different though semantically equivalent sets of rules. To see this, consider the program:

p(X)← ¬q(X) ∧ u(X) u(1) u(2)

q(X)← ¬p(X) ∧ v(X) v(2) v(3)

This program has two answer sets; both contain p(1) and q(3), while one contains q(2) and the other p(2). Systematically grounding the program yields the obvious four rules. However, depending upon the order in which the rules are passed to a grounder, it already produces either the fact p(1) or q(3) via simplification. Clearly, all three programs are distinct but semantically equivalent in sharing the above two answer sets.

Building on the semantics of ASP modeling languages [24, 37], we elaborate upon the foundations of ASP grounding and introduce a formal characterization of grounding algorithms in terms of (fixed point) operators. A major role is played by dedicated well-founded operators whose associated models provide semantic guidance for delineating the result of grounding along with on-the-fly simplifications. We address an expressive class of logic programs that incorporates recursive aggregates and thus amounts to the scope of existing ASP modeling languages [25]. This is accompanied with an algorithmic framework detailing the grounding of recursive aggregates. The given grounding algorithms correspond essentially to the ones used in the ASP grounder gringo [27]. In this way, our framework provides a formal characterization of one of the most widespread grounding systems.

Modern grounders like (the one in) dlv [17] or gringo [27] are based on database evaluation techniques [1, 63]. Grounding is seen as an iterative bottom-up process guided by the successive expansion of a program’s Herbrand base, that is, the set of variable-free atoms constructible from the signature of the rules at hand. This process is repeated until a fixed point is reached where no further atoms can be added. During this process, a ground rule is produced if its positive body atoms belong to the current Herbrand base, in which case its head atom is added to the current Herbrand base.
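The bottom-up process just described can be sketched in a few lines of Python for positive programs (a didactic simplification, not gringo's semi-naive algorithm; terms, variables, and atoms are encoded with plain tuples and strings):

```python
# A naive bottom-up grounder for positive rules: repeatedly match the
# positive body atoms of each rule against the atoms derived so far,
# emitting a ground instance (and its head atom) for every match.
# Constants are lowercase strings, variables uppercase strings, and
# function terms and atoms are tuples such as ('p', ('f', 'a')).

def is_var(t):
    return isinstance(t, str) and t.isupper()

def match(pattern, term, subst):
    """Extend subst so that pattern matches the ground term, or return None."""
    if is_var(pattern):
        if pattern in subst:
            return subst if subst[pattern] == term else None
        return dict(subst, **{pattern: term})
    if isinstance(pattern, tuple):
        if not (isinstance(term, tuple) and len(term) == len(pattern)
                and pattern[0] == term[0]):
            return None
        for p, t in zip(pattern[1:], term[1:]):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

def substitute(t, subst):
    if is_var(t):
        return subst[t]
    if isinstance(t, tuple):
        return (t[0],) + tuple(substitute(x, subst) for x in t[1:])
    return t

def ground(rules):
    """Iterate until no new head atoms can be derived."""
    atoms, instances = set(), []
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            substs = [{}]
            for b in body:  # match each body atom against derived atoms
                substs = [s2 for s in substs for a in atoms
                          for s2 in (match(b, a, s),) if s2 is not None]
            for s in substs:
                h = substitute(head, s)
                inst = (h, tuple(substitute(b, s) for b in body))
                if inst not in instances:
                    instances.append(inst)
                if h not in atoms:
                    atoms.add(h)
                    changed = True
    return atoms, instances

rules = [(('p', 'a'), ()),                    # p(a).
         (('p', 'X'), (('p', ('f', 'X')),))]  # p(X) <- p(f(X)).
atoms, instances = ground(rules)
assert atoms == {('p', 'a')} and instances == [(('p', 'a'), ())]
```

Run on the introductory example p(a), p(X) ← p(f(X)), the loop derives only the fact p(a): no derived atom ever matches p(f(X)), so the second rule yields no instances and the fixed point is reached immediately.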

From an algorithmic perspective, we show how a grounding framework (relying upon semi-naive database evaluation techniques) can be extended to incorporate recursive aggregates. An example of such an aggregate is shown in Table 1, giving an encoding of the Company Controls Problem [52]: A company X controls a company Y, if X directly or indirectly controls more than 50% of the shares of Y. The aggregate #sum+ implements summation over positive integers. Notably, it takes part in the recursive definition of predicate controls in Table 1. A corresponding problem instance is given in Table 2.

controls(X,Y) ← #sum+{S : owns(X,Y,S);
                      S,Z : controls(X,Z) ∧ owns(Z,Y,S)} > 50
              ∧ company(X) ∧ company(Y) ∧ X ≠ Y

Table 1. Company Controls Encoding

company(c1)       company(c2)       company(c3)       company(c4)
owns(c1,c2,60)    owns(c1,c3,20)    owns(c2,c3,35)    owns(c3,c4,51)

Table 2. Company Controls Instance

Note that a systematic instantiation of the four variables in Table 1 with the eight constants in Table 2 results in 64 ground rules. However, taken together, the encoding and the instance are equivalent to the program in Table 3, which consists of four ground rules only.

controls(c1,c2) ← #sum+{60 : owns(c1,c2,60)} > 50
                ∧ company(c1) ∧ company(c2) ∧ c1 ≠ c2
controls(c3,c4) ← #sum+{51 : owns(c3,c4,51)} > 50
                ∧ company(c3) ∧ company(c4) ∧ c3 ≠ c4
controls(c1,c3) ← #sum+{20 : owns(c1,c3,20);
                        35,c2 : controls(c1,c2) ∧ owns(c2,c3,35)} > 50
                ∧ company(c1) ∧ company(c3) ∧ c1 ≠ c3
controls(c1,c4) ← #sum+{51,c3 : controls(c1,c3) ∧ owns(c3,c4,51)} > 50
                ∧ company(c1) ∧ company(c4) ∧ c1 ≠ c4

Table 3. Relevant Grounding of Company Controls

In fact, all literals in Table 3 can be completely evaluated in view of the problem instance, which moreover allows us to evaluate the aggregate atoms, so that the grounding of the above company controls instance boils down to the four facts controls(c1,c2), controls(c3,c4), controls(c1,c3), and controls(c1,c4).
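The relevant grounding above can be cross-checked with a small fixed point computation. The following hand-rolled Python sketch (not gringo's algorithm) evaluates the company controls definition over the instance of Table 2: X controls Y once the directly owned shares plus those owned by companies X already controls exceed 50.

```python
# Fixed point evaluation of the company controls example. All shares in
# the instance are positive, matching the #sum+ aggregate over positive
# integers used in the encoding.
companies = {'c1', 'c2', 'c3', 'c4'}
owns = {('c1', 'c2', 60), ('c1', 'c3', 20),
        ('c2', 'c3', 35), ('c3', 'c4', 51)}

def controls_fixpoint(companies, owns):
    controls = set()
    while True:
        new = set(controls)
        for x in companies:
            for y in companies - {x}:
                direct = sum(s for (o, t, s) in owns if (o, t) == (x, y))
                indirect = sum(s for (z, t, s) in owns
                               if t == y and (x, z) in controls)
                if direct + indirect > 50:
                    new.add((x, y))
        if new == controls:
            return controls
        controls = new

# The four facts derived in the text: controls(c1,c2), controls(c3,c4),
# controls(c1,c3), and controls(c1,c4).
assert controls_fixpoint(companies, owns) == \
    {('c1', 'c2'), ('c3', 'c4'), ('c1', 'c3'), ('c1', 'c4')}
```

The iteration needs three rounds: the two direct control relations come first, then controls(c1,c3) via the shares of c2, and finally controls(c1,c4) via the shares of c3.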

These four facts are also obtained as output from the grounder gringo.

Our paper is organized as follows. Section 2 lays the basic foundations of our approach. We start in Section 2.1 by recalling definitions of (monotonic) operators on lattices; they constitute the basic building blocks of our characterization of grounding algorithms. We then review infinitary formulas along with their stable and well-founded semantics in Sections 2.2, 2.3 and 2.4, respectively. In this context, we explore several operators and define a class of infinitary logic programs that allows us to capture full-featured ASP languages with (recursive) aggregates. Interestingly, we have to resort to concepts borrowed from ID-logic [8, 60] to obtain monotonic operators that are indispensable for capturing iterative algorithms. Finally, we define in Section 2.5 our concept of program simplification and elaborate upon its semantic properties.

Section 3 is dedicated to the formal foundations of component-wise grounding. As mentioned, each rule is instantiated in the context of the current Herbrand base. In addition, grounding has to take subsequent atom definitions into account. To this end, we extend well-known operators and resulting semantic concepts with contextual information, usually captured by two- and four-valued interpretations, respectively, and elaborate upon their formal properties that are relevant to grounding. In turn, we generalize the contextual operators and semantic concepts to sequences of programs in order to reflect component-wise grounding. The major emerging concept is essentially a well-founded model for program sequences that takes backward and forward contextual information into account. This model-theoretic concept can be used for governing an ideal grounding process.

Section 4 turns to logic programs with variables and aggregates. We align the semantics of such aggregate programs with that of Ferraris [19] but consider infinitary formulas [37]. In view of grounding aggregates, however, we introduce an alternative translation of aggregates that is strongly equivalent to that of Ferraris but permits stronger propagation. As a result, we obtain a translation of finite, non-ground logic programs with aggregates into the class of infinitary, ground logic programs defined in Section 2. Such infinitary programs can be turned into finitary ones by means of the program simplification introduced in Section 2.5 under certain conditions.

Section 5 further refines our semantic approach to reflect actual grounding processes. To this end, we define the concept of an instantiation sequence based on rule dependencies. We then use the contextual operators of Section 3 to define approximate models of instantiation sequences. While approximate models are in general less precise than well-founded ones, they are better suited for on-the-fly grounding along an instantiation sequence. Nonetheless they are strong enough to allow for completely evaluating stratified programs.

Section 6 lays out the basic algorithms for grounding rules, components, and entire programs and characterizes their output in terms of the semantic concepts developed in the previous sections. Of particular interest is the treatment of aggregates, which are decomposed into dedicated normal rules before grounding, and re-assembled afterwards. This allows us to ground rules with aggregates by means of grounding algorithms for normal rules.

The previous sections focus on the theoretical and algorithmic cornerstones of grounding. Section 7 refines these concepts by further detailing aggregate propagation, algorithm specifics, and the treatment of language constructs from gringo’s input language.


Finally, we relate our contributions to the state of the art in Section 8 and summarize them in Section 9.

The developed approach is implemented in gringo series 4 and 5. However, to ease comprehensibility, we have moreover implemented the presented approach in µ-gringo¹ and equipped it with means for retracing the developed concepts during grounding. This system may also enable some readers to construct and to experiment with their own grounder extensions.

This paper draws on material presented during an invited talk at the third workshop on grounding, transforming, and modularizing theories with variables [29].

2. Foundations

2.1. Operators on lattices

This section recalls basic concepts on operators on complete lattices.

A complete lattice is a partially ordered set (L, ≤) in which every subset S ⊆ L has a greatest lower bound and a least upper bound in (L, ≤).

An operator O on lattice (L, ≤) is a function from L to L. It is monotone if x ≤ y implies O(x) ≤ O(y) for each x, y ∈ L; and it is antimonotone if x ≤ y implies O(y) ≤ O(x) for each x, y ∈ L.

Let O be an operator on lattice (L, ≤). A prefixed point of O is an x ∈ L such that O(x) ≤ x. A postfixed point of O is an x ∈ L such that x ≤ O(x). A fixed point of O is an x ∈ L such that x = O(x), i.e., it is both a prefixed and a postfixed point.

Theorem 1 (Knaster-Tarski; [59]). Let O be a monotone operator on complete lattice (L, ≤). Then, we have the following properties:

(a) Operator O has a least fixed and prefixed point, which are identical.
(b) Operator O has a greatest fixed and postfixed point, which are identical.
(c) The fixed points of O form a complete lattice.
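On a finite lattice, the least fixed point promised by (a) can be reached by simply iterating the operator from the least element (Kleene iteration; infinite lattices may require transfinite iteration). A small sketch over a powerset lattice, with a hypothetical operator chosen for illustration:

```python
# Kleene-style iteration: on a finite lattice, repeatedly applying a
# monotone operator to the least element reaches the least fixed point
# of Theorem 1 (a).
def lfp(op, bottom=frozenset()):
    x = bottom
    while op(x) != x:
        x = op(x)
    return x

# Example: a monotone operator on the powerset of {'a', 'b', 'c'}.
def op(x):
    y = {'a'}        # 'a' is always produced
    if 'a' in x:
        y.add('b')   # 'b' needs 'a'
    return frozenset(y)

assert lfp(op) == {'a', 'b'}
```

Two iterations suffice here: ∅ is mapped to {a}, then to the fixed point {a, b}; the atom c is never produced.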

2.2. Formulas and Interpretations

We begin with a propositional signature Σ consisting of a set of atoms. Following [60], we define the sets F0, F1, . . . of formulas as follows:

• F0 is the set of all propositional atoms in Σ,
• Fi+1 is the set of all elements of Fi, all expressions H∧ and H∨ with H ⊆ Fi, and all expressions F → G with F, G ∈ Fi.

The set F = ⋃∞i=0 Fi contains all (infinitary propositional) formulas over Σ.

In the following, we use these shortcuts:

• ⊤ = ∅∧ and ⊥ = ∅∨,
• ¬F = F → ⊥ where F is a formula, and
• F ∧ G = {F, G}∧ and F ∨ G = {F, G}∨ where F and G are formulas.

1The µ-gringo system is available at https://github.com/potassco/mu-gringo.


An occurrence of a subformula in a formula is called positive, if the number of implications containing that occurrence in the antecedent is even, and strictly positive if that number is zero; if that number is odd, the occurrence is negative. The sets F+ and F− gather all atoms occurring positively or negatively in formula F, respectively; if applied to a set of formulas, both expressions stand for the union of the respective atoms in the formulas. Also, we define F± = F+ ∪ F− as the set of all atoms occurring in F.

A two-valued interpretation over signature Σ is a set I of propositional atoms such that I ⊆ Σ. Atoms in an interpretation I are considered true and atoms in Σ \ I as false. The set of all interpretations together with the ⊆ relation forms a complete lattice.

The satisfaction relation between interpretations and formulas is defined as follows:

• I |= a for atoms a if a ∈ I,
• I |= H∧ if I |= F for all F ∈ H,
• I |= H∨ if I |= F for some F ∈ H, and
• I |= F → G if I ⊭ F or I |= G.

An interpretation I is a model of a set H of formulas, written I |= H, if it satisfies each formula in the set.
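For finite formulas, the satisfaction relation translates directly into a recursive check. A minimal sketch (the tuple encoding of formulas is our own convention, not the paper's): atoms are strings, ('and', fs) and ('or', fs) carry a list of subformulas, and ('imp', f, g) is the implication f → g.

```python
# Recursive satisfaction check for (finite) formulas over a two-valued
# interpretation i, given as a set of atoms.
TRUE, FALSE = ('and', []), ('or', [])  # the shortcuts: empty conjunction/disjunction

def neg(f):
    return ('imp', f, FALSE)           # the shortcut for negation

def sat(i, f):
    """Does interpretation i satisfy formula f?"""
    if isinstance(f, str):
        return f in i
    if f[0] == 'and':
        return all(sat(i, g) for g in f[1])
    if f[0] == 'or':
        return any(sat(i, g) for g in f[1])
    return not sat(i, f[1]) or sat(i, f[2])  # implication

assert sat(set(), TRUE) and not sat(set(), FALSE)
assert sat({'a'}, ('or', ['a', 'b'])) and not sat(set(), neg(neg('a')))
```

Note that ⊤ and ⊥ need no dedicated cases: the empty conjunction is vacuously satisfied and the empty disjunction never is.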

In the following, all atoms, formulas, and interpretations operate on the same (implicit) signature, unless mentioned otherwise.

2.3. Stable models

Our terminology in this section keeps following the one in [60]. The reduct F^I of a formula F w.r.t. an interpretation I is defined as:

• ⊥ if I ⊭ F,
• a if I |= F and F = a ∈ F0,
• {G^I | G ∈ H}∧ if I |= F and F = H∧,
• {G^I | G ∈ H}∨ if I |= F and F = H∨, and
• G^I → H^I if I |= F and F = G → H.

An interpretation I is a stable model of a formula F if it is among the (set-inclusion) minimal models of F^I.

Note that the reduct removes (among other unsatisfied subformulas) all occurrences of atoms that are false in I. Thus, the satisfiability of the reduct does not depend on such atoms, and all minimal models of F^I are subsets of I. Hence, if I is a stable model of F, then it is the only minimal model of F^I.
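For a finite signature, the reduct and the stable-model condition can be checked by brute force. A sketch, with formulas encoded as nested tuples (our own convention: atoms are strings, ('and', fs), ('or', fs), ('imp', f, g)):

```python
from itertools import chain, combinations

# Reduct and brute-force stable model check over a finite signature.
TRUE, FALSE = ('and', []), ('or', [])

def sat(i, f):
    if isinstance(f, str):
        return f in i
    if f[0] == 'and':
        return all(sat(i, g) for g in f[1])
    if f[0] == 'or':
        return any(sat(i, g) for g in f[1])
    return not sat(i, f[1]) or sat(i, f[2])

def reduct(f, i):
    """The reduct F^I: unsatisfied subformulas become bottom."""
    if not sat(i, f):
        return FALSE
    if isinstance(f, str):
        return f
    if f[0] == 'imp':
        return ('imp', reduct(f[1], i), reduct(f[2], i))
    return (f[0], [reduct(g, i) for g in f[1]])

def stable(i, f, sig):
    """I is stable iff it is a (set-inclusion) minimal model of F^I."""
    r = reduct(f, i)
    subsets = chain.from_iterable(combinations(sorted(sig), k)
                                  for k in range(len(sig) + 1))
    models = [set(j) for j in subsets if sat(set(j), r)]
    return set(i) in models and not any(m < set(i) for m in models)

# p <- not not p, i.e., the formula (not not p) -> p:
rule = ('imp', ('imp', ('imp', 'p', FALSE), FALSE), 'p')
assert stable(set(), rule, {'p'}) and stable({'p'}, rule, {'p'})
```

The final assertion reproduces the situation of Examples 1 and 2 below: under this reduct, both ∅ and {p} are stable models of p ← ¬¬p.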

Sets H1 and H2 of infinitary formulas are equivalent if they have the same stable models and classically equivalent if they have the same models; they are strongly equivalent if, for any set H of infinitary formulas, H1 ∪ H and H2 ∪ H are equivalent. As shown in [36], this also allows for replacing a part of any formula with a strongly equivalent formula without changing the set of stable models. We use the following reduct-based characterization of strong equivalence [62].

Proposition 2. Sets H1 and H2 of infinitary formulas are strongly equivalent iff H1^I and H2^I are classically equivalent for all two-valued interpretations I.

See proof on page 65.

In the following, we do not consider stable models of arbitrary formulas but of rules with atoms as heads and formulas as bodies. Accordingly, an F-program is a set of rules of form h ← F where h ∈ F0 and F ∈ F. We use H(h ← F) = h to refer to rule heads and B(h ← F) = F to refer to rule bodies. We extend this to programs by letting H(P) = {H(r) | r ∈ P} and B(P) = {B(r) | r ∈ P}.

An interpretation I is a model of P, written I |= P, if I |= B(r) → H(r) for all r ∈ P. The latter is also written as I |= r. We define the reduct of an F-program P w.r.t. an interpretation I as P^I = {r^I | r ∈ P} where r^I = H(r) ← B(r)^I. Note that r^I leaves the head of r intact and only reduces its body. As above, an interpretation I is a stable model of P if I is among the minimal models of P^I.

This program-oriented reduct yields the same stable models as obtained by applying the full reduct to the corresponding infinitary formula.

Proposition 3. The F-program P has the same stable models as the formula {B(r) → H(r) | r ∈ P}∧.

See proof on page 65.

For programs, Truszczynski introduces in [60] an alternative reduct, replacing each negatively occurring atom with ⊥, if it is falsified, and with ⊤, otherwise. More precisely, the so-called id-reduct F_I of a formula F w.r.t. an interpretation I is defined via two mutually recursive transformations, F_I for positive and F̄_I for negative occurrences:

a_I = a                          ā_I = ⊤ if a ∈ I; ā_I = ⊥ if a ∉ I
(H∧)_I = {F_I | F ∈ H}∧          (H∧)̄_I = {F̄_I | F ∈ H}∧
(H∨)_I = {F_I | F ∈ H}∨          (H∨)̄_I = {F̄_I | F ∈ H}∨
(F → G)_I = F̄_I → G_I            (F → G)̄_I = F_I → Ḡ_I

where a is an atom, H a set of formulas, and F and G are formulas.

The id-reduct of an F-program P w.r.t. an interpretation I is P_I = {r_I | r ∈ P} where r_I = H(r) ← B(r)_I. As with r^I, the transformation of r into r_I leaves the head of r unaffected.

Example 1. Consider the program containing the single rule

p← ¬¬p.


We get the following reduced programs w.r.t. interpretations ∅ and {p}:

{p ← ¬¬p}^∅ = {p ← ⊥}        {p ← ¬¬p}^{p} = {p ← ¬⊥}
{p ← ¬¬p}_∅ = {p ← ¬¬p}      {p ← ¬¬p}_{p} = {p ← ¬¬p}

Note that both reducts leave the rule’s head intact.

Extending the definition of positive occurrences, we define a formula as (strictly) positive if all its atoms occur (strictly) positively in the formula. We define an F-program as (strictly) positive if all its rule bodies are (strictly) positive.

Proposition 4. Let F be a formula, and I and J be interpretations. If F is positive and I ⊆ J, then I |= F implies J |= F.

See proof on page 65.

The following two propositions shed some light on the two types of reducts.

Proposition 5. Let F be a formula, and I and J be interpretations. Then,

(a) if F is positive, then F^I is positive,
(b) I |= F iff I |= F^I, and
(c) if F is strictly positive and I ⊆ J, then I |= F iff I |= F^J.

See proof on page 65.

Proposition 6. Let F be a formula, and I, J, and X be interpretations. Then,

(a) F_I is positive,
(b) I |= F iff I |= F_I,
(c) if F is positive, then F = F_I, and
(d) if I ⊆ J, then X |= F_J implies X |= F_I.

See proof on page 66.

As put forward in [64], we may associate with each program P its one-step provability operator TP, defined for any interpretation X as

TP(X) = {H(r) | r ∈ P, X |= B(r)}.

Proposition 7 ([60]). Let P be a positive F-program. Then, the operator TP is monotone.

Fixed points of TP are models of P guaranteeing that each contained atom is supported by some rule in P; prefixed points of TP correspond to the models of P. According to Theorem 1 (a), the operator TP has a least fixed point for positive F-programs. We refer to this fixed point as the least model of P, and write it as LM(P).
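The operator TP and the least model LM(P) are easy to sketch for the special case where rule bodies are plain sets of atoms (a restriction of positive F-programs, but enough to illustrate the iteration):

```python
# One-step provability for positive programs whose bodies are sets of
# atoms: a rule fires once all its body atoms have been derived.
def t_p(p, x):
    return {h for (h, body) in p if body <= x}

def least_model(p):
    """Iterate TP from the empty set up to its least fixed point LM(P)."""
    x = set()
    while t_p(p, x) != x:
        x = t_p(p, x)
    return x

p = [('a', frozenset()),        # a.
     ('b', frozenset({'a'})),   # b <- a.
     ('c', frozenset({'d'}))]   # c <- d.
assert least_model(p) == {'a', 'b'}
```

Since the program is positive, the iteration from ∅ is increasing and, on finite programs, terminates in the least fixed point; the unsupported atom c is never derived.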

Now, in view of Proposition 6 (a), any id-reduct P_I of a program w.r.t. an interpretation I possesses a least model LM(P_I). This gives rise to the following definition of a stable operator [60]: Given an F-program P, its id-stable operator is defined for any interpretation I as

SP(I) = LM(P_I).

The fixed points of SP are the id-stable models of P.

Note that neither the program reduct P^I nor the formula reduct F^I guarantees (least) models. Also, stable models and id-stable models do not coincide in general.

Example 2. Reconsider the program from Example 1, comprising the rule

p ← ¬¬p.

This program has the two stable models ∅ and {p}, but the empty model is the only id-stable model.
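The id-reduct and the id-stable operator can be sketched by tracking polarity during recursion; negatively occurring atoms are replaced by their truth value w.r.t. I, and the least model of the resulting positive bodies is computed by iteration. Formulas are encoded as nested tuples (our own convention: atoms are strings, ('and', fs), ('or', fs), ('imp', f, g)):

```python
# Id-reduct tracked by polarity: positively occurring atoms stay,
# negatively occurring ones become their truth value w.r.t. i.
TRUE, FALSE = ('and', []), ('or', [])

def sat(i, f):
    if isinstance(f, str):
        return f in i
    if f[0] == 'and':
        return all(sat(i, g) for g in f[1])
    if f[0] == 'or':
        return any(sat(i, g) for g in f[1])
    return not sat(i, f[1]) or sat(i, f[2])

def id_reduct(f, i, positive=True):
    if isinstance(f, str):
        return f if positive else (TRUE if f in i else FALSE)
    if f[0] == 'imp':  # the antecedent flips polarity
        return ('imp', id_reduct(f[1], i, not positive),
                       id_reduct(f[2], i, positive))
    return (f[0], [id_reduct(g, i, positive) for g in f[1]])

def s_p(p, i):
    """Id-stable operator: least model of the id-reduct P_I. Since the
    reduced bodies are positive, the iteration below is monotone."""
    pi = [(h, id_reduct(b, i)) for (h, b) in p]
    x = set()
    while True:
        y = {h for (h, b) in pi if sat(x, b)}
        if y == x:
            return x
        x = y

# p <- not not p: the body not not p is a positive formula, so the
# id-reduct leaves it intact and SP always yields the empty set.
p = [('p', ('imp', ('imp', 'p', FALSE), FALSE))]
assert s_p(p, set()) == set() and s_p(p, {'p'}) == set()
```

The assertions mirror Example 2: ∅ is the only fixed point of SP, hence the only id-stable model, even though {p} is a stable model under the reduct of Section 2.3.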

Proposition 8 ([60]). Let P be an F-program. Then, the id-stable operator SP is antimonotone.

No analogous antimonotone operator is obtainable for F-programs by using the program reduct P^I (and for general theories with the formula reduct F^I). To see this, reconsider Example 2 along with its two stable models ∅ and {p}. Given that both had to be fixed points of such an operator, it would behave monotonically on ∅ and {p}.

Truszczynski identifies in [60] a class of programs for which stable models and id-stable models coincide. The set N consists of all formulas F such that any implication in F has ⊥ as consequent and no occurrences of implication in its antecedent. An N-program consists of rules of form h ← F where h ∈ F0 and F ∈ N.

Proposition 9 ([60]). Let P be an N-program. Then, the stable and id-stable models of P coincide.

Note that a positive N -program is also strictly positive.

2.4. Well-founded models

In the following, we deal with pairs of sets and extend the basic set relations and operations accordingly. Given sets I′, I, J′, J, and X, we define:

• (I′, J′) ⊏ (I, J) if I′ ⊂ I and J′ ⊂ J, and (I′, J′) ⊑ (I, J) if I′ ⊆ I and J′ ⊆ J,
• (I′, J′) ⊔ (I, J) = (I′ ∪ I, J′ ∪ J), (I′, J′) ⊓ (I, J) = (I′ ∩ I, J′ ∩ J), and (I′, J′) ∖ (I, J) = (I′ \ I, J′ \ J), and
• (I, J) ◦ X = (I, J) ◦ (X, X) for ◦ ∈ {⊔, ⊓, ∖}.

Our terminology in this section follows the one in [61].

A four-valued interpretation over signature Σ is represented by a pair (I, J) ⊑ (Σ, Σ), where I stands for certain and J for possible atoms. Intuitively, an atom that is

• certain and possible is true,
• certain but not possible is inconsistent,
• not certain but possible is unknown, and
• not certain and not possible is false.

A four-valued interpretation (I′, J′) is more precise than a four-valued interpretation (I, J), written (I, J) ≤p (I′, J′), if I ⊆ I′ and J′ ⊆ J. The precision ordering also has an intuitive reading: the more atoms are certain or the fewer atoms are possible, the more precise is an interpretation. The least precise four-valued interpretation over Σ is (∅, Σ). As with two-valued interpretations, the set of all four-valued interpretations over a signature Σ together with the relation ≤p forms a complete lattice. A four-valued interpretation is called inconsistent if it contains an inconsistent atom; otherwise, it is called consistent. It is total whenever it makes all atoms either true or false.

In analogy to [61], we define the id-well-founded operator of an F-program P for any four-valued interpretation (I, J) as

WP(I, J) = (SP(J), SP(I)).

This operator is monotone w.r.t. the precision ordering ≤p. Hence, by Theorem 1 (a), WP has a least fixed point, which defines the id-well-founded model of P, also written as WM(P). In what follows, we drop the prefix ‘id’ and simply refer to the id-well-founded model of a program as its well-founded model. (We keep the distinction between stable and id-stable models.)

Any well-founded model (I, J) of an F-program P satisfies I ⊆ J .

Lemma 10. Let P be an F-program. Then, the well-founded model WM(P) of P is consistent.

See proof on page 67.
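For the restricted case of normal rules, given as triples (head, pos, neg) of a head atom and sets of positive and negative body atoms, the well-founded iteration can be sketched as follows; the program of Example 3 below serves as a test case:

```python
# Well-founded iteration for normal rules (head, pos, neg): the id-stable
# operator drops rules with a certain negative body atom and computes the
# least model of what remains; WP pairs two such calls.
def s_p(p, i):
    """Id-stable operator: least model of the reduct w.r.t. i."""
    pi = [(h, pos) for (h, pos, neg) in p if not (neg & i)]
    x = set()
    while True:
        y = {h for (h, pos) in pi if pos <= x}
        if y == x:
            return x
        x = y

def well_founded(p, sig):
    i, j = set(), set(sig)  # start from the least precise pair (empty, Sigma)
    while True:
        ni, nj = s_p(p, j), s_p(p, i)
        if (ni, nj) == (i, j):
            return i, j
        i, j = ni, nj

p = [('a', set(), set()),   # a.
     ('b', {'a'}, set()),   # b <- a.
     ('c', set(), {'b'}),   # c <- not b.
     ('d', {'c'}, set()),   # d <- c.
     ('e', set(), {'d'})]   # e <- not d.
assert well_founded(p, {'a', 'b', 'c', 'd', 'e'}) == \
    ({'a', 'b', 'e'}, {'a', 'b', 'e'})
```

The result is total and consistent here: a, b, and e are certain, while c and d are false.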

Example 3. Consider program P3 consisting of the following rules:

a
b ← a
c ← ¬b
d ← c
e ← ¬d

We compute the well-founded model of P3 starting from (∅, Σ):

SP(Σ) = {a, b}                    SP(∅) = {a, b, c, d, e}
SP({a, b, c, d, e}) = {a, b}      SP({a, b}) = {a, b, e}
SP({a, b, e}) = {a, b, e}         SP({a, b}) = {a, b, e}
SP({a, b, e}) = {a, b, e}         SP({a, b, e}) = {a, b, e}

The well-founded model of P3 is ({a, b, e}, {a, b, e}).

Unlike general F-programs, the class of N-programs warrants the same stable and id-stable models for each of its programs. Unfortunately, N-programs are too restricted for our purpose (for instance, for capturing aggregates in rule bodies²). To this end, we define a more general class of programs and refer to them as R-programs. Although id-stable models of R-programs may differ from their stable models (see below), their well-founded models encompass both stable and id-stable models. Thus, well-founded models can be used for characterizing stable model-preserving program transformations. In fact, we see in Section 2.5 that the restriction of F- to R-programs allows us to provide tighter semantic characterizations of program simplifications.

We define R to be the set of all formulas F such that implications in F have no further occurrences of implications in their antecedents. Then, an R-program consists of rules of form h ← F where h ∈ F0 and F ∈ R. As with N-programs, a positive R-program is also strictly positive.

Our next result shows that id-well-founded models can be used for approximating regular stable models of R-programs.

Theorem 11. Let P be an R-program and (I, J) be the well-founded model of P. If X is a stable model of P, then I ⊆ X ⊆ J.

See proof on page 69.

Example 4. Consider the R-program P4:³

c ← (b → a)        a ← b
a ← c              b ← a

Observe that {a, b, c} is the only stable model of P4, the program does not have any id-stable models, and the well-founded model of P4 is (∅, {a, b, c}). In accordance with Theorem 11, the stable model of P4 is enclosed in the well-founded model.

Note that the id-reduct handles b → a the same way as ¬b ∨ a. In fact, the program obtained by replacing

c ← (b → a)

with

c ← ¬b ∨ a

is an N-program and has neither stable nor id-stable models.

Further, note that the program in Example 2 is not an R-program, whereasthe one in Example 3 is an R-program.

²Ferraris’ semantics [19] of aggregates introduces implications, which results in rules beyond the class of N-programs.

³The choice of the body b → a is not arbitrary since it can be seen as representing the aggregate #sum{1 : a, −1 : b} ≥ 0.


2.5. Program simplification

In this section, we define a concept of program simplification and show how its result can be characterized by the semantic means from above. In particular, we delineate its preservation of well-founded and stable models.

Definition 1. Let P be an F-program, and (I, J) be a four-valued interpretation. We define the simplification of P w.r.t. (I, J) as

P^(I,J) = {r ∈ P | J |= B(r)_I}.

For simplicity, we drop parentheses and write P^{I,J} instead of P^{(I,J)} whenever clear from context.

The program simplification P^{I,J} acts as a filter eliminating inapplicable rules that fail to satisfy the condition J |= B(r)_I. That is, first, all negatively occurring atoms in B(r) are evaluated w.r.t. the certain atoms in I and replaced accordingly by ⊥ and ⊤, respectively. Then, it is checked whether the reduced body B(r)_I is satisfiable by the possible atoms in J. Only in this case is the rule kept in P^{I,J}. No simplifications are applied to the remaining rules. This is illustrated in Example 5 below.
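For normal rules, given as triples (head, pos, neg) of a head atom and sets of positive and negative body atoms, the condition J |= B(r)_I specializes to two set checks: no negative body atom may be certain, and all positive body atoms must be possible. A sketch, using the program and well-founded model of Example 3:

```python
# Program simplification for normal rules (head, pos, neg): keep a rule
# iff none of its negative body atoms is certain (in i) and all of its
# positive body atoms are possible (in j).
def simplify(p, i, j):
    return [(h, pos, neg) for (h, pos, neg) in p
            if not (neg & i) and pos <= j]

p = [('a', set(), set()),   # a.
     ('b', {'a'}, set()),   # b <- a.
     ('c', set(), {'b'}),   # c <- not b.
     ('d', {'c'}, set()),   # d <- c.
     ('e', set(), {'d'})]   # e <- not d.
wm = {'a', 'b', 'e'}        # the well-founded model computed in Example 3
assert simplify(p, wm, wm) == [('a', set(), set()),
                               ('b', {'a'}, set()),
                               ('e', set(), {'d'})]
```

The two dropped rules, c ← ¬b and d ← c, are exactly the ones eliminated in Example 5 below.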

Note that for an F-program P, the head atoms in P^{I,J} correspond to the result of applying the provability operator of the program P^I to the possible atoms in J:

Lemma 12. Let P be an F-program and (I, J) be a four-valued interpretation.

Then, we have H(P^{I,J}) = T_{P^I}(J).

See proof on page 70.

Our next result shows that programs simplified with their well-founded model maintain this model; it is based on the following proposition.

Proposition 13. Let P be an F-program and (I, J) be the well-founded model of P.

Then, we have

(a) S_{P^{I,J}}(I′) = J for all I′ ⊆ I, and
(b) S_{P^{I,J}}(J′) = S_P(J′) for all J ⊆ J′.

See proof on page 70.

Theorem 14. Let P be an F-program and (I, J) be the well-founded model of P.

Then, P and P^{I,J} have the same well-founded model.

See proof on page 71.


Example 5. In Example 3, we computed the well-founded model ({a, b, e}, {a, b, e}) of P3. With this, we obtain the simplified program P′3 = P3^{{a,b,e},{a,b,e}}:

a
b ← a
e ← ¬d

after dropping c ← ¬b and d ← c.

Next, we check that the well-founded model of P′3 corresponds to the well-founded model of P3:

S_{P′3}(Σ) = {a, b}              S_{P′3}(∅) = {a, b, e}
S_{P′3}({a, b, e}) = {a, b, e}   S_{P′3}({a, b}) = {a, b, e}
S_{P′3}({a, b, e}) = {a, b, e}   S_{P′3}({a, b, e}) = {a, b, e}

We observe that it takes two applications of the well-founded operator to obtain the well-founded model. This could be reduced to one step if atoms false in the well-founded model were removed from the negative bodies by the program simplification. Keeping them is a design decision with the goal of simplifying notation in the following.
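The alternating computation of the well-founded model can be sketched operationally for normal programs. The code below is our illustration (not the paper's algorithm): it computes S_P(J) as the least model of the reduct P^J and iterates the well-founded operator from the least precise pair (∅, Σ), using the hypothetical (head, pos, neg) rule format:

```python
def least_model(rules):
    # least model of a positive program given as (head, pos_body) pairs
    M, changed = set(), True
    while changed:
        changed = False
        for h, pos in rules:
            if h not in M and pos <= M:
                M.add(h)
                changed = True
    return M

def stable_op(prog, J):
    # S_P(J): least model of the reduct P^J for normal rules (head, pos, neg)
    return least_model([(h, pos) for h, pos, neg in prog if not (neg & J)])

def well_founded(prog):
    # iterate W_P(I, J) = (S_P(J), S_P(I)) starting from (emptyset, Sigma)
    atoms = set()
    for h, pos, neg in prog:
        atoms |= {h} | pos | neg
    I, J = set(), atoms
    while True:
        I2, J2 = stable_op(prog, J), stable_op(prog, I)
        if (I2, J2) == (I, J):
            return I, J
        I, J = I2, J2

# P3 from Example 3: a. b <- a. c <- not b. d <- c. e <- not d.
P3 = [("a", set(), set()), ("b", {"a"}, set()), ("c", set(), {"b"}),
      ("d", {"c"}, set()), ("e", set(), {"d"})]
```

On P3 this reproduces the iteration shown above and converges to ({a, b, e}, {a, b, e}).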

The next series of results further elaborates on semantic invariants guaranteed by our concept of program simplification. The first result shows that it preserves all stable models between the sets used for simplification.

Theorem 15. Let P be an F-program, and I, J, and X be two-valued interpretations.

If I ⊆ X ⊆ J, then X is a stable model of P iff X is a stable model of P^{I,J}.

See proof on page 72.

As a consequence, we obtain that R-programs simplified with their well-founded model also maintain stable models.

Corollary 16. Let P be an R-program and (I, J) be the well-founded model of P.

Then, P and P^{I,J} have the same stable models.

See proof on page 72.

For instance, the R-program in Example 3 and its simplification in Example 5 have the same stable model. Unlike this, the rule p ← ¬¬p from Example 2 induces two stable models, while its simplification w.r.t. its well-founded model (∅, ∅) yields an empty program admitting the empty stable model only.

The next two results show that any program between the original and its simplification relative to its well-founded model preserves the well-founded model, and that this extends to all stable models for R-programs.


Theorem 17. Let P and Q be F-programs, and (I, J) be the well-founded model of P.

If P^{I,J} ⊆ Q ⊆ P, then P and Q have the same well-founded model.

See proof on page 72.

Corollary 18. Let P and Q be R-programs, and (I, J) be the well-founded model of P.

If P^{I,J} ⊆ Q ⊆ P, then P and Q are equivalent.

See proof on page 73.

3. Splitting

One of the first steps during grounding is to group rules into components suitable for successive instantiation. This amounts to splitting a logic program into a sequence of subprograms. The rules in each such component are then instantiated with respect to the Herbrand base of all previous components, starting with some component consisting of facts only. In other words, grounding is always performed relative to a set of context atoms. Moreover, atoms found to be true or false can be used to apply on-the-fly simplifications.

Accordingly, this section parallels the above presentation by extending the respective formal concepts with contextual information provided by context atoms in a two- and four-valued setting. We then assemble the resulting concepts to enable their consecutive application to sequences of subprograms. Interestingly, the resulting notion of splitting is more general than the traditional concept [48] since it allows us to partition rules in an arbitrary way. In the following, we use the superscript c to indicate contextual interpretations.

To begin with, we extend the one-step provability operator accordingly.

Definition 2. Let P be an F-program and Ic be a two-valued interpretation.

For any two-valued interpretation I, we define the one-step provability operator of P relative to Ic as

T^{Ic}_P(I) = T_P(Ic ∪ I).

A prefixed point of T^{Ic}_P is also a prefixed point of T_P. Thus, each prefixed point of T^{Ic}_P is a model of P but not necessarily the other way round. To see this, consider the program P = {a ← b}. We have T_P(∅) = ∅ and T^{{b}}_P(∅) = {a}. Hence, ∅ is a (pre)fixed point of T_P but not of T^{{b}}_P since {a} ⊈ ∅. The set {a} is a prefixed point of both operators.

Alternatively, the incorporation of context atoms can also be seen as a form of partial evaluation applied to the underlying program.
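For normal rules, the relative operator and the {a ← b} example above can be made concrete. This snippet is ours (with the hypothetical (head, pos, neg) rule format) and only illustrates the prefixed-point discussion:

```python
def T(prog, I):
    # one-step provability operator T_P for normal rules (head, pos, neg)
    return {h for h, pos, neg in prog if pos <= I and not (neg & I)}

def T_rel(prog, Ic, I):
    # relative operator T^{Ic}_P(I) = T_P(Ic union I)
    return T(prog, Ic | I)

P = [("a", {"b"}, set())]  # the program {a <- b}
```

Here T(P, ∅) = ∅, so ∅ is a prefixed point of T_P, while T_rel(P, {b}, ∅) = {a} ⊈ ∅, so ∅ is not a prefixed point of T^{{b}}_P; the set {a} is a prefixed point of both.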

Definition 3. Let Ic be a two-valued interpretation.

We define the partial evaluation of an F-formula w.r.t. Ic as follows:

pe_{Ic}(a) = ⊤ if a ∈ Ic                          p̄e_{Ic}(a) = a
pe_{Ic}(a) = a if a ∉ Ic
pe_{Ic}(H∧) = {pe_{Ic}(F) | F ∈ H}∧               p̄e_{Ic}(H∧) = {p̄e_{Ic}(F) | F ∈ H}∧
pe_{Ic}(H∨) = {pe_{Ic}(F) | F ∈ H}∨               p̄e_{Ic}(H∨) = {p̄e_{Ic}(F) | F ∈ H}∨
pe_{Ic}(F → G) = p̄e_{Ic}(F) → pe_{Ic}(G)          p̄e_{Ic}(F → G) = pe_{Ic}(F) → p̄e_{Ic}(G)

where a is an atom, H is a set of formulas, and F and G are formulas.

The partial evaluation of an F-program P w.r.t. a two-valued interpretation Ic is pe_{Ic}(P) = {pe_{Ic}(r) | r ∈ P} where pe_{Ic}(r) = H(r) ← pe_{Ic}(B(r)). Accordingly, the partial evaluation of rules boils down to replacing satisfied positive occurrences of atoms in rule bodies by ⊤.

We observe the following relationship between the relative one-step operators and partial evaluations.

Observation 19. Let P be a positive F-program and Ic be a two-valued interpretation.

Then, we have for any two-valued interpretation I that

T^{Ic}_P(I) = T_{pe_{Ic}(P)}(I).

Clearly, pe_{Ic}(P) is positive whenever P is positive. In this case, we obtain that T^{Ic}_P is monotone and has a least fixed point corresponding to the least model of pe_{Ic}(P).

We use this correspondence to define a contextual stable operator.

Definition 4. Let P be an F-program and Ic be a two-valued interpretation.

For any two-valued interpretation J, we define the id-stable operator relative to Ic as

S^{Ic}_P(J) = LM(pe_{Ic}(P)^J).

While the operator is antimonotone w.r.t. its argument J, it is monotone regarding its parameter Ic. Note that pe_{Ic}(P)^J = pe_{Ic}(P^J).

Proposition 20. Let P be an F-program, and Ic and J be two-valued interpretations.

We get the following properties:

(a) J′ ⊆ J implies S^{Ic}_P(J) ⊆ S^{Ic}_P(J′), and
(b) Ic′ ⊆ Ic implies S^{Ic′}_P(J) ⊆ S^{Ic}_P(J).

See proof on page 73.

Moreover, we observe the following properties.

Observation 21. Let P be an F-program, and Ic and J be two-valued interpretations.

We get the following properties:

(a) S^{∅}_P(J) = S_P(J),
(b) S^{Ic}_P(J) ⊆ H(P), and
(c) S^{Ic}_P(J) = S^{Ic ∩ B(P)+}_P(J ∩ B(P)−).

By building on the relative stable operator, we next define its well-founded counterpart. Unlike above, the context is now captured by a four-valued interpretation.

Definition 5. Let P be an F-program and (Ic, Jc) be a four-valued interpretation.

For any four-valued interpretation (I, J), we define the well-founded operator relative to (Ic, Jc) as

W^{(Ic,Jc)}_P(I, J) = (S^{Ic}_P(J ∪ Jc), S^{Jc}_P(I ∪ Ic)).

As above, we drop parentheses and simply write W^{I,J}_P instead of W^{(I,J)}_P. Also, we keep refraining from prepending the prefix 'id' to the well-founded operator along with all concepts derived from it below.

Unlike the stable operator, the relative well-founded one is monotone in both its argument and its parameter.

Proposition 22. Let P be an F-program, and (I, J) and (Ic, Jc) be four-valued interpretations.

We get the following properties:

(a) (I′, J′) ≤p (I, J) implies W^{Ic,Jc}_P(I′, J′) ≤p W^{Ic,Jc}_P(I, J), and
(b) (Ic′, Jc′) ≤p (Ic, Jc) implies W^{Ic′,Jc′}_P(I, J) ≤p W^{Ic,Jc}_P(I, J).

See proof on page 74.

From Proposition 22 (a) and Theorem 1 (a), we get that the relative well-founded operator has a least fixed point.

Definition 6. Let P be an F-program and (Ic, Jc) be a four-valued interpretation.

We define the well-founded model of P relative to (Ic, Jc), written WM^{(Ic,Jc)}(P), as the least fixed point of W^{Ic,Jc}_P.

Whenever clear from context, we keep dropping parentheses and simply write WM^{I,J}(P) instead of WM^{(I,J)}(P).

In what follows, we use the relativized concepts defined above to delineate the semantics and resulting simplifications of the sequence of subprograms resulting from a grounder's decomposition of the original program. For simplicity, we first present two propositions capturing the composition under stable and well-founded operations, before we give the general case involving a sequence of programs.

Just like the superscript c, we use the superscript e (and similarly the letter E further below) to indicate atoms whose defining rules are yet to come.

As in traditional splitting, we begin by differentiating a bottom and a top program. In addition to the input atoms J and context atoms in Ic, we moreover distinguish a set of external atoms, Ie, which occur in the bottom program but are defined in the top program. Accordingly, the bottom program has to be evaluated relative to Ic ∪ Ie (and not just Ic as above) to consider what could be derived by the top program.

Proposition 23. Let Pb and Pt be F-programs, Ic and J be two-valued interpretations, I = S^{Ic}_{Pb∪Pt}(J), Ie = I ∩ (B(Pb)+ ∩ H(Pt)), Ib = S^{Ic∪Ie}_{Pb}(J), and It = S^{Ic∪Ib}_{Pt}(J).

Then, we have I = Ib ∪ It.

See proof on page 74.

For characterizing the relative well-founded models of split programs, we use four-valued interpretations, (Ic, Jc) and (Ie, Je), to capture context and external atoms, respectively.

Proposition 24. Let Pb and Pt be F-programs, (Ic, Jc) be a four-valued interpretation, (I, J) = WM^{Ic,Jc}(Pb ∪ Pt), (Ie, Je) = (I, J) ⊓ (B(Pb)± ∩ H(Pt)), (Ib, Jb) = WM^{(Ic,Jc)⊔(Ie,Je)}(Pb), and (It, Jt) = WM^{(Ic,Jc)⊔(Ib,Jb)}(Pt).

Then, we have (I, J) = (Ib, Jb) ⊔ (It, Jt).

See proof on page 75.

Partially expanding the statements of the two previous results nicely reflects the decomposition of the application of the id-stable operator and of the well-founded model of a program:

S^{Ic}_{Pb∪Pt}(J) = S^{Ic∪Ie}_{Pb}(J) ∪ S^{Ic∪Ib}_{Pt}(J) and

WM^{Ic,Jc}(Pb ∪ Pt) = WM^{(Ic,Jc)⊔(Ie,Je)}(Pb) ⊔ WM^{(Ic,Jc)⊔(Ib,Jb)}(Pt).

Note that the formulation of both propositions forms the external interpretations, Ie and (Ie, Je), by selecting atoms from the overarching interpretation I or the well-founded model (I, J), respectively. This warrants the correspondence of the overall interpretations to the union of the bottom and top interpretations. This global approach is dropped below (after the next example) and leads to less precise composed models.

Example 6. Let us illustrate the above approach via the following program.

a            (Pb)
b            (Pb)
c ← a        (Pt)
d ← ¬b       (Pt)

The well-founded model of this program relative to (Ic, Jc) = (∅, ∅) is (I, J) = ({a, b, c}, {a, b, c}).

First, we partition the four rules of the program into Pb and Pt as given above. We get (Ie, Je) = (∅, ∅) since B(Pb)± ∩ H(Pt) = ∅. Let us evaluate Pb before Pt. The well-founded model of Pb relative to (Ic, Jc) ⊔ (Ie, Je) is

(Ib, Jb) = ({a, b}, {a, b}).

With this, we calculate the well-founded model of Pt relative to (Ic, Jc) ⊔ (Ib, Jb):

(It, Jt) = ({c}, {c}).

We see that the union (Ib, Jb) ⊔ (It, Jt) is the same as the well-founded model of Pb ∪ Pt relative to (Ic, Jc).

This corresponds to standard splitting in the sense that {a, b} is a splitting set for Pb ∪ Pt with Pb as the "bottom" and Pt as the "top" (cf. [48]).

For a complement, let us reverse the roles of Pb and Pt. Unlike above, body atoms in Pb now occur in rule heads of Pt, i.e., B(Pb)± ∩ H(Pt) = {a, b}. We thus get (Ie, Je) = ({a, b}, {a, b}). The well-founded model of Pb relative to (Ic, Jc) ⊔ (Ie, Je) is

(Ib, Jb) = ({c}, {c}).

And the well-founded model of Pt relative to (Ic, Jc) ⊔ (Ib, Jb) is

(It, Jt) = ({a, b}, {a, b}).

Again, we see that the union of both models is identical to (I, J).

This decomposition has no direct correspondence to standard splitting.

Next, we generalize the previous results from two programs to sequences of programs. For this, we let I be a well-ordered index set and direct our attention to sequences (Pi)i∈I of F-programs.

Definition 7. Let (Pi)i∈I be a sequence of F-programs.

We define the well-founded model of (Pi)i∈I as

WM((Pi)i∈I) = ⊔_{i∈I} (Ii, Ji)    (2)

where

Ei = B(Pi)± ∩ ⋃_{i<j} H(Pj),    (3)
(Ic_i, Jc_i) = ⊔_{j<i} (Ij, Jj), and    (4)
(Ii, Ji) = WM^{(Ic_i,Jc_i)⊔(∅,Ei)}(Pi).    (5)

The well-founded model of a program sequence is itself assembled in (2) from a sequence of well-founded models of the individual subprograms in (5). This provides us with semantic guidance for successive program simplification, as shown below. In fact, proceeding along the sequence of subprograms reflects the iterative approach of a grounding algorithm: one component is grounded at a time. At each stage i ∈ I, this takes into account the truth values of atoms instantiated in previous iterations, viz. (Ic_i, Jc_i), as well as dependencies on upcoming components in Ei. Note that unlike in Proposition 24, the external atoms in Ei are identified purely syntactically, and the interpretation (∅, Ei) treats them as unknown. Grounding is thus performed under incomplete information, and each well-founded model in (5) can be regarded as an over-approximation of the actual one. This is enabled by the monotonicity of the well-founded operator in Proposition 22 (b), which only leads to a less precise result when overestimating its parameter, as made precise in the following theorem.

Theorem 25. Let (Pi)i∈I be a sequence of F-programs.

Then, WM((Pi)i∈I) ≤p WM(⋃_{i∈I} Pi).

See proof on page 79.

In fact, no loss is encountered when head literals never occur in the bodies of previous programs, as shown next.

Corollary 26. Let (Pi)i∈I be a sequence of F-programs and Ei be defined as in (3).

If Ei = ∅ for all i ∈ I, then WM((Pi)i∈I) = WM(⋃_{i∈I} Pi).

See proof on page 79.

Whenever head atoms do not interfere with negative body literals, the relative well-founded model of a program can be calculated with just two applications of the relative id-stable operator.

Lemma 27. Let P be an F-program such that B(P)− ∩ H(P) = ∅ and (Ic, Jc) be a four-valued interpretation.

Then, WM^{Ic,Jc}(P) = (S^{Ic}_P(Jc), S^{Jc}_P(Ic)).

See proof on page 80.

Any sequence as in Corollary 26 in which each Pi additionally satisfies the precondition of Lemma 27 has a total well-founded model. Furthermore, the well-founded model of such a sequence can be calculated with just two (independent) applications of the relative id-stable operator per program Pi in the sequence.

The next two results transfer Theorem 25 and Corollary 26 to program simplification by successively simplifying programs with the respective well-founded models of the previous programs.

Theorem 28. Let (Pi)i∈I be a sequence of F-programs, (I, J) = WM((Pi)i∈I), and Ei, (Ic_i, Jc_i), and (Ii, Ji) be defined as in (3) to (5).

Then, Pk^{I,J} ⊆ Pk^{(Ic_k,Jc_k)⊔(Ik,Jk)⊔(∅,Ek)} ⊆ Pk for all k ∈ I.

See proof on page 80.

Corollary 29. Let (Pi)i∈I be a sequence of F-programs, (I, J) = WM((Pi)i∈I), and Ei, (Ic_i, Jc_i), and (Ii, Ji) be defined as in (3) to (5).

If Ei = ∅ for all i ∈ I, then Pk^{I,J} = Pk^{(Ic_k,Jc_k)⊔(Ik,Jk)} for all k ∈ I.

See proof on page 80.

Clearly, the best simplifications are obtained when knowing the actual well-founded model of the overall program. This can be achieved whenever Ei is empty, that is, if there is no need to approximate the impact of upcoming atoms; otherwise, we can only guarantee the bounds in Theorem 28.


Corollary 30. Let (Pi)i∈I be a sequence of R-programs, and (I, J) be the well-founded model of ⋃_{i∈I} Pi.

Then, ⋃_{i∈I} Pi and ⋃_{i∈I} Qi with Pi^{I,J} ⊆ Qi ⊆ Pi have the same well-founded and stable models.

See proof on page 81.

Corollary 31. Let (Pi)i∈I be a sequence of R-programs, and Ei, (Ic_i, Jc_i), and (Ii, Ji) be defined as in (3) to (5).

Then, ⋃_{i∈I} Pi and ⋃_{i∈I} Pi^{(Ic_i,Jc_i)⊔(Ii,Ji)⊔(∅,Ei)} have the same well-founded and stable models.

See proof on page 81.

Let us mention that the two previous results extend to sequences of F-programs and their well-founded models but not their stable models.

Let us illustrate the above results with the following two examples.

Example 7. To illustrate Theorem 25, let us consider the following programs, P1 and P2:

a ← ¬c    (P1)
b ← ¬d    (P1)
c ← ¬b    (P2)
d ← e     (P2)

The well-founded model of P1 ∪ P2 is

(I, J) = ({a, b}, {a, b}).

Let us evaluate P1 before P2. While no head literals of P2 occur positively in P1, the head literals c and d of P2 occur negatively in rule bodies of P1. Hence, we get E1 = {c, d} and treat both atoms as unknown while calculating the well-founded model of P1 relative to (∅, {c, d}):

(I1, J1) = (∅, {a, b}).

We obtain that both a and b are unknown. With this and E2 = ∅, we can calculate the well-founded model of P2 relative to (I1, J1):

(I2, J2) = (∅, {c}).

We see that because b is unknown, we have to derive c as unknown, too. And because there is no rule defining e, we cannot derive d. Hence, (I1, J1) ⊔ (I2, J2) is less precise than (I, J) because, when evaluating P1, it is not yet known that c and d are false.

Next, we illustrate the simplified programs according to Theorem 28:

a ← ¬c        a ← ¬c    (P1)
b ← ¬d        b ← ¬d    (P1)
              c ← ¬b    (P2)

The left column contains the simplification of P1 ∪ P2 w.r.t. (I, J), and the right column the simplification of P1 w.r.t. (I1, J1) and of P2 w.r.t. (I1, J1) ⊔ (I2, J2). Note that d ← e has been removed in both columns because e is false in both (I, J) and (I1, J1) ⊔ (I2, J2). But we can only remove c ← ¬b from the left column because, while b is true in (I, J), it is unknown in (I1, J1) ⊔ (I2, J2).

Example 8. Next, let us illustrate Corollary 26 on an example. We take the same rules as in Example 7 but use a different sequence:

d ← e     (P1)
b ← ¬d    (P1)
c ← ¬b    (P2)
a ← ¬c    (P2)

Observe that the head literals of P2 do not occur in the bodies of P1, i.e., E1 = B(P1)± ∩ H(P2) = ∅. The well-founded model of P1 is

(I1, J1) = ({b}, {b}).

And the well-founded model of P2 relative to ({b}, {b}) is

(I2, J2) = ({a}, {a}).

Hence, the union of both models is identical to the well-founded model of P1 ∪ P2.

Next, we investigate the simplified program according to Corollary 29:

b ← ¬d    (P1)
a ← ¬c    (P2)

As in Example 7, we delete rule d ← e because e is false in (I1, J1). But this time, we can also remove rule c ← ¬b because b is true in (I1, J1) ⊔ (I2, J2).
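For normal programs, the component-by-component evaluation of Examples 7 and 8 can be sketched in code. This is our illustration, not the paper's algorithm: stable_rel realizes S^{Ic}_P(J) = LM(pe_{Ic}(P)^J) for rules in the hypothetical (head, pos, neg) format, and wfm_rel iterates the relative well-founded operator:

```python
def least_model(rules, Ic=frozenset()):
    # least model of a positive program, with context atoms Ic taken as true
    M, changed = set(), True
    while changed:
        changed = False
        for h, pos in rules:
            if h not in M and pos <= (M | set(Ic)):
                M.add(h)
                changed = True
    return M

def stable_rel(prog, J, Ic=frozenset()):
    # S^{Ic}_P(J): least model of pe_{Ic}(P)^J for normal rules
    return least_model([(h, pos) for h, pos, neg in prog if not (neg & J)], Ic)

def wfm_rel(prog, Ic=frozenset(), Jc=frozenset()):
    # least fixed point of W^{(Ic,Jc)}_P, iterated from the least precise pair
    atoms = set(Ic) | set(Jc)
    for h, pos, neg in prog:
        atoms |= {h} | pos | neg
    I, J = set(), atoms
    while True:
        I2 = stable_rel(prog, J | set(Jc), Ic)  # S^{Ic}_P(J union Jc)
        J2 = stable_rel(prog, I | set(Ic), Jc)  # S^{Jc}_P(I union Ic)
        if (I2, J2) == (I, J):
            return I, J
        I, J = I2, J2

# the sequence of Example 8: P1 = {d <- e, b <- not d}, P2 = {c <- not b, a <- not c}
P1 = [("d", {"e"}, set()), ("b", set(), {"d"})]
P2 = [("c", set(), {"b"}), ("a", set(), {"c"})]
```

Evaluating P1 first yields ({b}, {b}); evaluating P2 relative to that context yields ({a}, {a}); and their union coincides with the well-founded model of P1 ∪ P2, as Corollary 26 predicts for E1 = ∅.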

4. Aggregate Programs

We now turn to programs with aggregates and, at the same time, to programs with variables. That is, we now deal with finite nonground programs that may be turned by instantiation into infinite ground programs. Our concepts follow the ones in [24]; the semantics of aggregates is aligned with [19] yet lifted to infinitary formulas (cf. [37, 60]).

We consider a signature Σ = (F, P, V) consisting of sets of function, predicate, and variable symbols. The sets of variable and function symbols are disjoint. Function and predicate symbols are associated with non-negative arities. In the following, we use lower case strings for function and predicate symbols, and upper case strings for variable symbols. Also, we often drop the term 'symbol' and simply speak of functions, predicates, and variables.

As usual, terms over Σ are defined inductively as follows:

• v ∈ V is a term and
• f(t1, . . . , tn) is a term if f ∈ F is a function symbol of arity n and each ti is a term over Σ.

Parentheses for terms over function symbols of arity 0 are omitted.

Unless stated otherwise, we assume that the set of (zero-ary) functions includes a set of numeral symbols being in one-to-one correspondence to the integers. For simplicity, we drop this distinction and identify numerals with the respective integers.

An atom over signature Σ has form p(t1, . . . , tn) where p ∈ P is a predicate symbol of arity n and each ti is a term over Σ. As above, parentheses for atoms over predicate symbols of arity 0 are omitted. Given an atom a over Σ, a literal over Σ is either the atom itself or its negation ¬a. A literal without negation is called positive, and negative otherwise.

A comparison over Σ has form

t1 ≺ t2    (6)

where t1 and t2 are terms over Σ and ≺ is a relation symbol among <, ≤, >, ≥, =, and ≠.

An aggregate element over Σ has form

t1, . . . , tm : a1 ∧ · · · ∧ an    (7)

where ti is a term and aj is an atom, both over Σ, for 0 ≤ i ≤ m and 0 ≤ j ≤ n. The terms t1, . . . , tm are seen as a tuple, which is empty for m = 0. For an aggregate element e of form (7), we use H(e) = (t1, . . . , tm) and B(e) = {a1, . . . , an}. We extend both to sets of aggregate elements in the straightforward way, that is, H(E) = {H(e) | e ∈ E} and B(E) = {B(e) | e ∈ E}.

An aggregate atom over Σ has form

f{e1, . . . , en} ≺ s    (8)

where n ≥ 0, f is an aggregate name among #count, #sum, #sum+, and #sum−, each ei is an aggregate element, ≺ is a relation symbol among <, ≤, >, ≥, =, and ≠ (as above), and s is a term.

Without loss of generality, we refrain from introducing negated aggregate atoms.⁴ We often refer to aggregate atoms simply as aggregates.

An aggregate program over Σ is a finite set of aggregate rules of form

h ← b1 ∧ · · · ∧ bn

where n ≥ 0, h is an atom over Σ, and each bi is either a literal, a comparison, or an aggregate over Σ. We refer to b1, . . . , bn as body literals, and extend the functions H(r) and B(r) to any aggregate rule r.

Examples of aggregate rules are given in (1) and Table 1.

We say that an aggregate rule r is normal if its body does not contain aggregates. An aggregate program is normal if all its rules are normal.

⁴Grounders like lparse and gringo replace aggregates systematically by auxiliary atoms and place them in the body of new rules implying the respective auxiliary atom. This results in programs without occurrences of negated aggregates.


A term, literal, aggregate element, aggregate, rule, or program is ground whenever it does not contain any variables.

We assume that all ground terms are totally ordered by a relation ≤, which is used to define the relations <, >, ≥, =, and ≠ in the standard way. For ground terms t1, t2 and a corresponding relation symbol ≺, we say that ≺ holds between t1 and t2 whenever the corresponding relation holds between t1 and t2. Furthermore, >, ≥, and ≠ hold between ∞ and any other term, and <, ≤, and ≠ hold between −∞ and any other term. Finally, we require that integers are ordered as usual, and that all terms involving function symbols are somehow totally ordered and larger than integers.

For defining sum-based aggregates, we define for a tuple t = t1, . . . , tm of ground terms the following weight functions:

w(t) = t1 if m > 0 and t1 is an integer, and w(t) = 0 otherwise,
w+(t) = max{w(t), 0}, and
w−(t) = min{w(t), 0}.

With this at hand, we now define how to apply aggregate functions to sets of tuples of ground terms in analogy to [24].

of tuples of ground terms in analogy to [24].

Definition 8. Let T be a set of tuples of ground terms.

We define

#count(T) = |T| if T is finite, and ∞ otherwise,⁵
#sum(T) = Σ_{t∈T} w(t) if {t ∈ T | w(t) ≠ 0} is finite, and 0 otherwise,
#sum+(T) = Σ_{t∈T} w+(t) if {t ∈ T | w(t) > 0} is finite, and ∞ otherwise, and
#sum−(T) = Σ_{t∈T} w−(t) if {t ∈ T | w(t) < 0} is finite, and −∞ otherwise.

Note that in our setting the application of aggregate functions to infinite sets of ground terms is of theoretical relevance only, since we aim at reducing them to their finite equivalents so that they can be evaluated by a grounder.
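For finite sets of tuples, the weight functions and aggregate functions of Definition 8 translate directly into code. In this sketch (ours), a tuple of ground terms is a Python tuple whose elements are integers or strings:

```python
def w(t):
    # weight of a tuple of ground terms: its first element, if that is an integer
    return t[0] if len(t) > 0 and isinstance(t[0], int) else 0

def w_plus(t):
    return max(w(t), 0)

def w_minus(t):
    return min(w(t), 0)

# the four aggregate functions, restricted to finite T
def count(T):     return len(T)
def agg_sum(T):   return sum(w(t) for t in T)
def sum_plus(T):  return sum(w_plus(t) for t in T)
def sum_minus(T): return sum(w_minus(t) for t in T)

T = {(2, "a"), (-3,), ("f",), ()}  # weights 2, -3, 0, 0
```

Tuples whose first element is not an integer (or that are empty) contribute weight 0 to the sums, exactly as in the definition of w.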

A variable is global in

• a literal if it occurs in the literal,
• a comparison if it occurs in the comparison,
• an aggregate if it occurs in its bound, and
• a rule if it is global in its head atom or in one of its body literals.

⁵We present two cases in analogy to the subsequent definitions.

For example, in Table 1 variables X and Y are global in the rule, while Z and S are global neither in the rule nor in the aggregate.

Definition 9. Let r be an aggregate rule.

We define r to be safe

• if all its global variables occur in some positive literal in the body of r, and
• if all its non-global variables occurring in an aggregate element e of an aggregate in the body of r also occur in some positive literal in the condition of e.

For instance, the rule in Table 1 is safe.

Note that comparisons are disregarded in the definition of safety. That is, variables in comparisons have to occur in positive body literals.

An aggregate program is safe if all its rules are safe.

An instance of an aggregate rule r is obtained by substituting ground terms for all its global variables. We use Inst(r) to denote the set of all instances of r and Inst(P) to denote the set of all ground instances of rules in an aggregate program P. An instance of an aggregate element e is obtained by substituting ground terms for all its variables. We let Inst(E) stand for all instances of aggregate elements in a set E. Note that Inst(E) consists of ground expressions, which is not necessarily the case for Inst(r).

A literal, aggregate element, aggregate, or rule is closed if it does not contain any global variables.

For example, the following rule is an instance of the one in Table 1.

controls(c1, c2) ← #sum+{S : owns(c1, c2, S);
                          S, Z : controls(c1, Z), owns(Z, c2, S)} > 50
                   ∧ company(c1) ∧ company(c2) ∧ c1 ≠ c2

Note that both the rule and its aggregate are closed. It is also noteworthy to realize that the two elements of the aggregate induce an infinite set of instances, among them

20 : owns(c1, c2, 20) and
35, c2 : controls(c1, c2), owns(c2, c3, 35).

We now turn to the semantics of aggregates as introduced in [19], following its adaptation to closed aggregates in [24]: Let a be a closed aggregate of form (8) and E be its set of aggregate elements. We say that a set D ⊆ Inst(E) of its elements' instances justifies a, written D ▷ a, if f(H(D)) ≺ s holds.

An aggregate a is monotone whenever D1 ▷ a implies D2 ▷ a for all D1 ⊆ D2 ⊆ Inst(E), and accordingly a is antimonotone if D2 ▷ a implies D1 ▷ a for all D1 ⊆ D2 ⊆ Inst(E).

We observe the following monotonicity properties.

Proposition 32 ([37]).

• Aggregates over functions #sum+ and #count together with aggregate relations > and ≥ are monotone.
• Aggregates over functions #sum+ and #count together with aggregate relations < and ≤ are antimonotone.
• Aggregates over function #sum− have the same monotonicity properties as #sum+ aggregates with the complementary relation.

Next, we give the translation τ from aggregate programs to R-programs, derived from [19, 37]:

For a closed literal l, we have

τ(l) = l,

for a closed comparison l of form (6), we have

τ(l) = ⊤ if ≺ holds between t1 and t2, and τ(l) = ⊥ otherwise,

and for a set L of closed literals, comparisons, and aggregates, we have

τ(L) = {τ(l) | l ∈ L}.

For a closed aggregate a of form (8) and its set E of aggregate elements, we have

τ(a) = {τ(D)∧ → τ_a(D)∨ | D ⊆ Inst(E), D ⋫ a}∧    (9)

where

τ_a(D) = τ(Inst(E) \ D) for D ⊆ Inst(E),
τ(D) = {τ(e) | e ∈ D} for D ⊆ Inst(E), and
τ(e) = τ(B(e))∧ for e ∈ Inst(E).

For a closed aggregate rule r, we have

τ(r) = τ(H(r)) ← τ(B(r))∧.

For an aggregate program P, we have

τ(P) = {τ(r) | r ∈ Inst(P)}.    (10)

Observe that τ(P) is indeed an R-program. In fact, only the translation of aggregates introduces R-formulas; rules without aggregates form N-programs.

Example 9. To illustrate Ferraris' approach to the semantics of aggregates, consider a count aggregate a of form

#count{X : p(X)} ≥ n.

Since the aggregate is non-ground, the set G of its elements' instances consists of all t : p(t) for each ground term t.


The count aggregate cannot be justified by any subset D of G satisfying |{t | t : p(t) ∈ D}| < n, or D ⋫ a for short. Accordingly, we have that τ(a) is the conjunction of all formulas

{p(t) | t : p(t) ∈ D}∧ → {p(t) | t : p(t) ∈ (G \ D)}∨    (11)

such that D ⊆ G and D ⋫ a. Restricting the set of ground terms to the numerals 1, 2, 3 and letting n = 2 results in the formulas

⊤ → p(1) ∨ p(2) ∨ p(3),
p(1) → p(2) ∨ p(3),
p(2) → p(1) ∨ p(3), and
p(3) → p(1) ∨ p(2).

Note that a smaller number of ground terms than n yields an unsatisfiable set of formulas.
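The conjuncts of τ(a) for such a count aggregate can be enumerated mechanically. The following sketch is ours and represents each conjunct of form (11) as a pair (antecedent atoms, consequent atoms), identifying the instance t : p(t) with the term t:

```python
from itertools import combinations

def tau_count_geq(terms, n):
    # Conjuncts of tau(#count{X : p(X)} >= n) over the instances t : p(t)
    # for t in terms: for every D with |D| < n (i.e., D does not justify
    # the aggregate), one implication with antecedent D and consequent G \ D.
    G = list(terms)
    conjuncts = []
    for k in range(min(n, len(G) + 1)):
        for D in combinations(G, k):
            conjuncts.append((set(D), set(G) - set(D)))
    return conjuncts
```

For terms 1, 2, 3 and n = 2 this yields exactly the four implications listed above; with fewer than n terms it produces a conjunct with an empty (unsatisfiable) consequent, matching the final remark of Example 9.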

However, it turns out that a Ferraris-style [19, 37] translation of aggregates is too weak for propagating monotone aggregates in our id-based setting. That is, when propagating possible atoms (i.e., the second component of the well-founded model), an id-reduct may become satisfiable although the original formula is not. So, we might end up with too many possible atoms and a well-founded model that is not as precise as it could be. To see this, consider the following example.

Example 10. For some m, n ≥ 0, the program Pm,n consists of the following rules:

p(i) ← ¬q(i)    for 1 ≤ i ≤ m
q(i) ← ¬p(i)    for 1 ≤ i ≤ m
r ← #count{X : p(X)} ≥ n

Given the ground instances G of the aggregate's elements and some two-valued interpretation I, observe that

τ(#count{X : p(X)} ≥ n)^I

is classically equivalent to

τ(#count{X : p(X)} ≥ n)^I ∨ {p(t) ∈ B(G) | p(t) ∉ I}∨.    (12)

Next, observe that for 1 ≤ m < n, the four-valued interpretation (I, J) = (∅, H(τ(Pm,n))) is the well-founded model of Pm,n:

S_{τ(Pm,n)}(J) = I and
S_{τ(Pm,n)}(I) = J.

Ideally, atom r should not be among the possible atoms because it can never be in a stable model. Nonetheless, it is, due to the second disjunct in (12).


Note that not just monotone aggregates exhibit this problem. In general, we get for a closed aggregate a with elements E and an interpretation I that

τ(a)^I is classically equivalent to τ(a)^I ∨ {c ∈ B(Inst(E)) | I ⊭ c}∨.

The second disjunct is undesirable when propagating possible atoms.

To address this shortcoming, we augment the aggregate translation so that it provides stronger propagation. The result of the augmented translation is strongly equivalent to that of the original translation (cf. Proposition 33). Thus, even though we get more precise well-founded models, the stable models are still contained in them.

Definition 10. We define τ∗ as the translation obtained from τ by replacing the case of closed aggregates in (9) by the following:

For a closed aggregate a of form (8) and its set E of aggregate elements, we have

τ∗(a) = {τ(D)∧ → τ∗_a(D)∨ | D ⊆ Inst(E), D ⋫ a}∧

where

τ∗_a(D) = {τ(C)∧ | C ⊆ Inst(E) \ D, C ∪ D ▷ a} for D ⊆ Inst(E).

Note that, just like τ, the translation τ∗ is recursively applied to the whole program.

Let us illustrate the modified translation by reconsidering the aggregate from Example 9.

Example 11. Let us reconsider the count aggregate a:

#count{X : p(X)} ≥ n.

As with τ(a) in Example 9, τ*(a) yields a conjunction of formulas, one conjunct for each set D ⊆ Inst(E) satisfying D ⋫ a, of the form

{B(e) | e ∈ D}^∧ → {{B(e) | e ∈ C \ D}^∧ | C ▷ a, D ⊆ C ⊆ Inst(E)}^∨.   (13)

Restricting again the set of ground terms to the numerals 1, 2, 3 and letting n = 2 now results in the formulas

⊤ → (p(1) ∧ p(2)) ∨ (p(1) ∧ p(3)) ∨ (p(2) ∧ p(3)) ∨ (p(1) ∧ p(2) ∧ p(3)),
p(1) → p(2) ∨ p(3) ∨ (p(2) ∧ p(3)),
p(2) → p(1) ∨ p(3) ∨ (p(1) ∧ p(3)), and
p(3) → p(1) ∨ p(2) ∨ (p(1) ∧ p(2)).

Note that the last disjunct can be dropped from each rule's consequent. And, as above, fewer ground terms than n yields an unsatisfiable set of formulas.
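To make Definition 10 concrete for this example, the following sketch enumerates the conjuncts of τ*(a) for the count aggregate under the same assumptions as above (term universe {1, 2, 3}, bound n = 2); encoding atoms as strings is purely illustrative.

```python
from itertools import combinations

terms = [1, 2, 3]                   # ground terms, as in Example 11
n = 2                               # bound of #count{X : p(X)} ≥ n

elems = [f"p({t})" for t in terms]  # bodies of the instantiated elements

# One implication per D ⊆ Inst(E) with D ⋫ a, i.e. |D| < n; its consequent
# holds one disjunct per C ⊆ Inst(E) \ D with C ∪ D ▷ a, i.e. |C ∪ D| ≥ n.
formulas = []
for k in range(n):
    for D in combinations(elems, k):
        rest = [e for e in elems if e not in D]
        disjuncts = [c[0] if len(c) == 1 else "(" + " ∧ ".join(c) + ")"
                     for size in range(n - k, len(rest) + 1)
                     for c in combinations(rest, size)]
        antecedent = " ∧ ".join(D) if D else "⊤"
        formulas.append(f"{antecedent} → {' ∨ '.join(disjuncts)}")

for line in formulas:
    print(line)
```

Running this prints the four implications listed above, redundant last disjuncts included.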

The next result ensures that τ(P) and τ*(P) have the same stable models for any aggregate program P.


Proposition 33. Let a be a closed aggregate.
Then, τ(a) and τ*(a) are strongly equivalent.

See proof on page 81.

Example 12. For illustration, reconsider program P_{m,n} from Example 10. As above, we apply the well-founded operator to program P_{m,n} for m < n and the four-valued interpretation (I, J) = (∅, H(τ*(P_{m,n}))):

S_{τ*(P_{m,n})}(J) = I and
S_{τ*(P_{m,n})}(I) = J \ {r}.

Unlike before, r is now found to be false since it does not belong to S_{τ*(P_{m,n})}(∅).

To see this, let us anticipate Proposition 34 and observe that τ*(a)^I = τ*(a) for a = #count{X : p(X)} ≥ n and any interpretation I. Hence, our refined translation τ* avoids the problematic disjunct on the right-hand side of (12). By Proposition 33, we can use τ*(P_{m,n}) instead of τ(P_{m,n}); both formulas have the same stable models.

Proposition 34. Let a be a closed aggregate.
If a is monotone, then τ*(a)^I is classically equivalent to τ*(a) for any two-valued interpretation I.

See proof on page 82.

Note that τ*(a) is a negative formula whenever a is antimonotone; cf. Proposition 51.

Using this proposition, we augment the translation τ* to replace monotone aggregates a by the strictly positive formula τ*(a)^∅. That is, we only keep the implication with the trivially true antecedent in the aggregate translation.

While τ* improves on propagation, it may still produce infinitary R-formulas when applied to aggregates. This issue is addressed by restricting the translation to a set of (possible) atoms.

Definition 11. Let J be a two-valued interpretation. We define the translation τ*_J as the one obtained from τ by replacing the case of closed aggregates in (9) by the following:

For a closed aggregate a of form (8) and its set E of aggregate elements, we have

τ*_J(a) = {τ(D)^∧ → τ*_{a,J}(D)^∨ | D ⊆ Inst(E)|_J, D ⋫ a}^∧

where

τ*_{a,J}(D) = {τ(C)^∧ | C ⊆ Inst(E)|_J \ D, C ∪ D ▷ a} and

Inst(E)|_J = {e ∈ Inst(E) | B(e) ⊆ J}.

Note that τ*_J(a) is a finite formula whenever J is finite.
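For illustration, the restriction to Inst(E)|_J can be carried out directly on the count aggregate from the running example. This is a sketch only: the string encoding of atoms and the helper name tau_star_J are assumptions for illustration, not the paper's notation.

```python
from itertools import combinations

def tau_star_J(n, J):
    """Conjuncts of the restricted translation of #count{X : p(X)} ≥ n."""
    elems = sorted(a for a in J if a.startswith("p("))   # Inst(E)|_J
    formulas = []
    for k in range(n):                                   # D with D ⋫ a
        for D in combinations(elems, k):
            rest = [e for e in elems if e not in D]
            disj = [" ∧ ".join(C)
                    for size in range(n - k, len(rest) + 1)
                    for C in combinations(rest, size)]
            ante = " ∧ ".join(D) if D else "⊤"
            formulas.append(f"{ante} → {' ∨ '.join(disj) if disj else '⊥'}")
    return formulas

# Only elements whose bodies lie inside the possible atoms J are
# instantiated, so the formula stays finite even over infinite signatures.
print(tau_star_J(2, {"p(1)", "p(2)", "q(1)"}))
```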


Clearly, τ*_J also conforms to τ* except for the restricted translation for aggregates defined above. The next proposition elaborates on this by showing that τ*_J and τ* behave alike whenever J limits the set of possible atoms.

Proposition 35. Let a be a closed aggregate, and I ⊆ J and X ⊆ J be two-valued interpretations.

Then,

(a) X ⊨ τ*(a) iff X ⊨ τ*_J(a),
(b) X ⊨ τ*(a)^I iff X ⊨ τ*_J(a)^I, and
(c) X ⊨ τ*(a)_I iff X ⊨ τ*_J(a)_I.

See proof on page 82.

We are now in a position to outline how safe (non-ground) aggregate programs can be turned into equivalent finite ground programs consisting of finite formulas only.

To this end, consider a safe aggregate program P along with the well-founded model (I, J) of τ*(P). We have already seen in Section 2.5 that τ*(P) and τ*(P)^{(I,J)} have the same stable models, just like τ*(P)^{(I,J)} and τ*_J(P)^{(I,J)} in view of Proposition 35.

Now, if (I, J) is finite, then τ*(P)^{(I,J)} is finite, too. The safety of all rules in P implies that all global variables appear in positive body literals. Thus, the number of ground instances of each rule in τ*(P)^{(I,J)} is determined by the number of possible substitutions for its global variables. Clearly, there are only finitely many possible substitutions such that all positive body literals are satisfied by a finite interpretation J (cf. Definition 1). Furthermore, if J is finite, aggregate translations in τ*_J(P)^{(I,J)} introduce finite subformulas only. Thus, in this case, we obtain a finite propositional formula that has the same stable models as τ*(P) (as well as τ(P), the traditional Ferraris-style semantics of P [19, 37]).

An example of a class of aggregate programs inducing finite well-founded models as above consists of programs over a signature with nullary function symbols only. Any such program can be turned into an equivalent finite set of finite propositional rules.

Example 13. Let P_{m,n} be the program from Example 10. The well-founded model (I, J) of τ*(P_{m,n}) is (∅, H(τ*(P_{m,n}))) if n ≤ m.

The translation τ*_J(P_{3,2}) consists of the rules

p(1) ← ¬q(1), q(1) ← ¬p(1),
p(2) ← ¬q(2), q(2) ← ¬p(2),
p(3) ← ¬q(3), q(3) ← ¬p(3), and
r ← τ*_J(#count{X : p(X)} ≥ 2)

where the aggregate translation corresponds to the conjunction of the formulas in Example 11. Note that the translation τ*_J(P_{3,2}) is independent of the signature of P_{3,2}; any compatible signature including all numerals can be chosen.


5. Dependency Analysis

We now further refine our semantic approach to reflect actual grounding processes. In fact, modern grounders process programs on-the-fly by grounding one rule after another without storing any rules. At the same time, they try to determine certain, possible, and false atoms. Unfortunately, well-founded models cannot be computed on-the-fly, which is why we define the concept of an approximate model. More precisely, we start by defining instantiation sequences of (non-ground) aggregate programs based on their rule dependencies. We show that approximate models of instantiation sequences are underapproximations of the well-founded model of the corresponding sequence of (ground) R-programs, as defined in Section 3. The precision of both types of models coincides on stratified programs. We illustrate our concepts comprehensively at the end of this section in Examples 14 and 15.

To begin with, we extend the notion of positive and negative literals to aggregate programs. For atoms a, we define a^+ = (¬a)^- = {a} and a^- = (¬a)^+ = ∅. For comparisons a, we define a^+ = a^- = ∅. For aggregates a with elements E, we define positive and negative atom occurrences, using Proposition 34 to refine the case for monotone aggregates:

• a^+ = ⋃_{e∈E} B(e),
• a^- = ∅ if a is monotone, and
• a^- = ⋃_{e∈E} B(e) if a is not monotone.

For a set of body literals B, we define B^+ = ⋃_{b∈B} b^+ and B^- = ⋃_{b∈B} b^-.

We see in the following that a special treatment of monotone aggregates yields better approximations of well-founded models. A similar case could be made for antimonotone aggregates but would have led to a more involved algorithmic treatment.

Inter-rule dependencies are determined via the predicates appearing in their heads and bodies. We define pred(a) to be the predicate symbol associated with atom a and pred(A) = {pred(a) | a ∈ A} for a set A of atoms. An aggregate rule r1 depends on another aggregate rule r2 if pred(H(r2)) ∈ pred(B(r1)^±). Rule r1 depends positively or negatively on r2 if pred(H(r2)) ∈ pred(B(r1)^+) or pred(H(r2)) ∈ pred(B(r1)^-), respectively.

The strongly connected components of an aggregate program P are the equivalence classes under the transitive closure of the dependency relation between all rules in P. A strongly connected component P1 depends on another strongly connected component P2 if there is a rule in P1 that depends on some rule in P2. The transitive closure of this relation is antisymmetric.

A strongly connected component of an aggregate program is unstratified if it depends negatively on itself or if it depends on an unstratified component. A component is stratified if it is not unstratified.
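The component construction just described can be sketched with Tarjan's algorithm for strongly connected components. The three-rule dependency graph below is a made-up example (p and q depend negatively on each other, r depends positively on p), not one taken from the paper.

```python
from collections import defaultdict

# Edges (r1, r2, sign) state that rule r1 depends on rule r2 with sign.
edges = [
    ("p", "q", "-"), ("q", "p", "-"),   # p and q negatively recursive
    ("r", "p", "+"),                    # r depends positively on p
]
nodes = {"p", "q", "r"}

succ = defaultdict(list)
for a, b, _ in edges:
    succ[a].append(b)

# Tarjan's algorithm; components are emitted dependencies-first.
index, low, on_stack, stack, sccs = {}, {}, set(), [], []
def tarjan(v, counter=[0]):
    index[v] = low[v] = counter[0]
    counter[0] += 1
    stack.append(v)
    on_stack.add(v)
    for w in succ[v]:
        if w not in index:
            tarjan(w)
            low[v] = min(low[v], low[w])
        elif w in on_stack:
            low[v] = min(low[v], index[w])
    if low[v] == index[v]:
        comp = set()
        while True:
            w = stack.pop()
            on_stack.discard(w)
            comp.add(w)
            if w == v:
                break
        sccs.append(frozenset(comp))

for v in sorted(nodes):
    if v not in index:
        tarjan(v)

comp_of = {v: c for c in sccs for v in c}
# A component is unstratified if it has an internal negative edge or if it
# depends on an unstratified component (components arrive dependencies-first).
unstratified = set()
for c in sccs:
    neg_inside = any(comp_of[a] == c == comp_of[b] and s == "-"
                     for a, b, s in edges)
    dep_unstrat = any(comp_of[a] == c and comp_of[b] in unstratified
                      for a, b, s in edges)
    if neg_inside or dep_unstrat:
        unstratified.add(c)

print([sorted(c) for c in sccs])
```

Here both components come out unstratified: {p, q} because of its internal negative cycle, and {r} because it depends on {p, q}.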

Just for the record, we summarize next how dependencies transfer from non-ground aggregate programs to the corresponding ground R-programs.


Observation 36. Let P1 and P2 be aggregate programs, G1 = τ*(P1), and G2 = τ*(P2).

Then,

(a) if P1 does not depend on P2, then B(G1)^± ∩ H(G2) = ∅,
(b) if P1 does not positively depend on P2, then B(G1)^+ ∩ H(G2) = ∅, and
(c) if P1 does not negatively depend on P2, then B(G1)^- ∩ H(G2) = ∅.

A topological ordering of the strongly connected components is then used to guide grounding.

Definition 12. We define an instantiation sequence for P as a sequence (P_i)_{i∈I} of its strongly connected components such that i < j if P_j depends on P_i.

Note that the components can always be well-ordered because we only consider finite aggregate programs. Examples of (refined) instantiation sequences are given in Figures 1 and 2.

Before further refining instantiation sequences below, we pin down two properties of interest. First of all, there are no external atoms in the components of instantiation sequences.

Lemma 37. Let P be an aggregate program and (P_i)_{i∈I} be an instantiation sequence for P.

Then, for the sequence (G_i)_{i∈I} with G_i = τ*(P_i), we have E_i = ∅ for each i ∈ I, where E_i is defined as in (3).

See proof on page 83.

Together with Corollary 26, this shows that the consecutive construction of the well-founded model along an instantiation sequence results in the well-founded model of the entire program.

Moreover, for each stratified component in an instantiation sequence, we obtain a total well-founded model.

Lemma 38. Let P be an aggregate program and (P_i)_{i∈I} be an instantiation sequence for P.

Then, for the sequence (G_i)_{i∈I} with G_i = τ*(P_i), we have I_i = J_i = S^{I^c_i}_{G_i}(I^c_i) for each stratified component P_i, where (I^c_i, J^c_i) and (I_i, J_i) are defined as in (4) and (5) in the construction of the well-founded model of (G_i)_{i∈I} in Definition 7.

See proof on page 83.

Note that the total well-founded model of each stratified component can be computed with one application of the id-stable operator.

Trivial examples of such stratified components are P1 to P4 in Example 14 and P1 to P8 in Example 15, all of which consist of facts only and thus yield total well-founded models. This also applies to P9 in Example 15 because it only involves positive dependencies. Since this includes all components of the Company Controls instance from Section 1, it gets completely evaluated during grounding.


We further refine instantiation sequences by partitioning each component along its positive dependencies.

Definition 13. Let P be an aggregate program and (P_i)_{i∈I} be an instantiation sequence for P. Furthermore, for each i ∈ I, let (P_{i,j})_{j∈I_i} be an instantiation sequence of P_i considering positive dependencies only.

A refined instantiation sequence for P is a sequence (P_{i,j})_{(i,j)∈J} where the index set J = {(i, j) | i ∈ I, j ∈ I_i} is ordered lexicographically. We call (P_{i,j})_{(i,j)∈J} a refinement of (P_i)_{i∈I}.

We define a component P_{i,j} to be stratified or unstratified if the encompassing component P_i is stratified or unstratified, respectively.

Examples of refined instantiation sequences are given in Figures 1 and 2. The advantage of such refinements is that they yield better or equal approximations (cf. Theorem 41 and Example 14). On the downside, they admit once more external atoms (cf. Lemma 37), although their scope is limited to negative body literals:

Lemma 39. Let P be an aggregate program and (P_{i,j})_{(i,j)∈J} be a refined instantiation sequence for P.

Then, for the sequence (G_{i,j})_{(i,j)∈J} with G_{i,j} = τ*(P_{i,j}), we have E_{i,j} ∩ B(G_{i,j})^+ = ∅ for each (i, j) ∈ J, where E_{i,j} is defined as in (3).

See proof on page 83.

We have already seen in Section 3 that external atoms may lead to less precise semantic characterizations. This is just the same in the non-ground case, whenever a component comprises predicates that are defined in a following component of a refined instantiation sequence. This leads us to the concept of an approximate model obtained by overapproximating the extension of such externally defined predicates.

Definition 14. Let P be an aggregate program, (I^c, J^c) be a four-valued interpretation, E be a set of predicates, and P′ be the program obtained from P by removing all rules r with pred(B(r)^-) ∩ E ≠ ∅.

We define the approximate model of P relative to (I^c, J^c) as

AM^{(I^c,J^c)}_E(P) = (I, J)

where

I = S^{I^c}_{τ*(P′)}(J^c) and
J = S^{J^c}_{τ*(P)}(I^c ∪ I).

We keep dropping parentheses and simply write AM^{I^c,J^c}_E(P) instead of AM^{(I^c,J^c)}_E(P).

The approximate model amounts to an immediate consequence operator, similar to the relative well-founded operator in Definition 5; it refrains from any iterative applications, as used for defining a well-founded model.


More precisely, the relative id-stable operator is applied twice to obtain the approximate model. This is similar to Van Gelder's alternating transformation [65]. The certain atoms in I are determined by applying the operator to the ground program obtained after removing all rules whose negative body literals comprise externally defined predicates, while the possible atoms J are computed from the entire program by taking the already computed certain atoms in I into account. In this way, the approximate model may result in fewer unknown atoms than the relative well-founded operator when applied to the least precise interpretation (as an easy point of reference). How well we can approximate the certain atoms with the approximate operator depends on the set of external predicates E. When approximating the model of a program P in a sequence, the set E comprises all negative predicates occurring in P for which possible atoms have not yet been fully computed. This leads to fewer certain atoms obtained from the reduced program P′ = {r ∈ P | pred(B(r)^-) ∩ E = ∅}, stripped of all rules from P that have negative body literals whose predicates occur in E.

The next lemma identifies an essential prerequisite for an approximate model of a non-ground program to be an underapproximation of the well-founded model of the corresponding ground program.

Lemma 40. Let P be an aggregate program, E be a set of predicates, and (I^c, J^c) be a four-valued interpretation.

If pred(H(P)) ∩ pred(B(P)^-) ⊆ E, then

AM^{I^c,J^c}_E(P) ≤_p WM^{I^c,J^c∪E^c}(τ*(P))

where E^c is the set of all ground atoms over predicates in E.

See proof on page 84.

In general, a grounder cannot calculate a well-founded model on-the-fly. Implementing this task efficiently requires an algorithm storing the grounded program, as, for example, implemented in an ASP solver. With Lemma 38, we see that a stratified component can be evaluated by applying the stable model operator once. In fact, modern grounders are able to calculate this operator on-the-fly. For unstratified components, an approximation of the well-founded model is calculated. This is where we use the approximate model, which might be less precise than the well-founded model but can be computed more easily.

With the condition of Lemma 40 in mind, we define the approximate model for an instantiation sequence. We proceed similarly to Definition 7, but in (14) we treat all atoms over negative predicates that have not been completely defined as external.

Definition 15. Let (P_i)_{i∈I} be a (refined) instantiation sequence for P. Then, the approximate model of (P_i)_{i∈I} is

AM((P_i)_{i∈I}) = ⊔_{i∈I} (I_i, J_i)


where

E_i = pred(B(P_i)^-) ∩ pred(⋃_{i≤j} H(P_j)),   (14)
(I^c_i, J^c_i) = ⊔_{j<i} (I_j, J_j), and   (15)
(I_i, J_i) = AM^{I^c_i,J^c_i}_{E_i}(P_i).   (16)

Note that the underlying approximate model removes rules containing negative literals over predicates in E_i when calculating certain atoms. This amounts to assuming all ground instances of atoms over E_i to be possible.⁶ Compared to (3), however, this additionally includes recursive predicates in (14). The set E_i is empty for stratified components.

The next result extends Lemma 40 and shows that an approximate model of an instantiation sequence constitutes an underapproximation of the well-founded model of the whole ground program.

Theorem 41. Let (P_i)_{i∈I} be an instantiation sequence for an aggregate program P and (P_j)_{j∈J} be a refinement of (P_i)_{i∈I}.

Then, AM((P_i)_{i∈I}) ≤_p AM((P_j)_{j∈J}) ≤_p WM(τ*(P)).

See proof on page 84.

The finer granularity of refined instantiation sequences leads to more precise models. Intuitively, this is because a refinement of a component may result in a series of approximate models, which yield a more precise result than the approximate model of the entire component, because in some cases fewer predicates are considered external in (14).

We remark that all instantiation sequences of a program have the same approximate model. However, this does not carry over to refined instantiation sequences because their evaluation is order dependent.

Both of the former issues are illustrated in Example 14.

Whenever an aggregate program is stratified, the approximate model of its instantiation sequence is total (and coincides with the well-founded model of the entire ground program).

Theorem 42. Let (P_i)_{i∈I} be an instantiation sequence of an aggregate program P such that E_i = ∅ for each i ∈ I, as defined in (14).

Then, AM((P_i)_{i∈I}) is total.

See proof on page 85.

The actual value of approximate models for grounding lies in their underlying series of consecutive interpretations delineating each ground program in a (refined) instantiation sequence. In fact, as outlined after Proposition 35,

⁶ To be precise, rules involving aggregates that could in principle derive certain atoms might be removed, too. Here, we are interested in a syntactic criterion that allows us to underapproximate the set of certain atoms.


whenever all interpretations (I_i, J_i) in (16) are finite, so are the R-programs τ*_{J^c_i ∪ J_i}(P_i)^{(I^c_i,J^c_i) ⊔ (I_i,J_i)} obtained from each P_i in the instantiation sequence.

Theorem 43. Let (P_i)_{i∈I} be a (refined) instantiation sequence of an aggregate program P, and let (I^c_i, J^c_i) and (I_i, J_i) be defined as in (15) and (16).

Then, ⋃_{i∈I} τ*_{J^c_i ∪ J_i}(P_i)^{(I^c_i,J^c_i) ⊔ (I_i,J_i)} and τ*(P) have the same well-founded and stable models.

See proof on page 85.

This union of R-programs is exactly the one obtained by the grounding algorithm proposed in the next section (cf. Theorem 49).

Example 14. The following example shows different ways to split an aggregate program into sequences and gives the well-founded and approximate models for them. Let P be the following (aggregate) program, extending the one from the introductory section:

u(1)   u(2)
v(2)   v(3)
p(X) ← ¬q(X) ∧ u(X)   q(X) ← ¬p(X) ∧ v(X)
x ← ¬p(1)   y ← ¬q(3)

[Figure 1 shows the rule dependency graph: the four facts u(1), u(2), v(2), and v(3) form components 1 to 4; the rules p(X) ← ¬q(X) ∧ u(X) and q(X) ← ¬p(X) ∧ v(X) form component 5, refined into subcomponents (5, 1) and (5, 2); and the rules x ← ¬p(1) and y ← ¬q(3) form components 6 and 7.]

Figure 1. Rule dependencies for Example 14.

The (refined) instantiation sequence for program P is given in Figure 1. Rules are depicted in solid boxes. Solid and dotted edges between such boxes depict positive and negative dependencies between the corresponding rules, respectively. A dashed box represents a component in an instantiation sequence and a dotted box a component in a refined instantiation sequence.


If two components coincide, then they are depicted with a dashed/dotted box. The number (or pair) in the corner of a component box indicates the index in the corresponding (refined) instantiation sequence.

For F = {u(1), u(2), v(2), v(3)}, the well-founded model of τ*(P) is

WM(τ*(P)) = ({p(1), q(3)}, {p(1), p(2), q(2), q(3)}) ⊔ F.

Note that the set F comprises the facts derived from stratified components. For example, we can use Lemma 38 to obtain I_1 = J_1 = {u(1)} for component P_1, which also corresponds to the approximate model because the set E_1 in (14) is empty for stratified components.

By Corollary 26, the ground sequence (τ(P_i))_{i∈I} has the same well-founded model as τ*(P):

WM((P_i)_{i∈I}) = ({p(1), q(3)}, {p(1), p(2), q(2), q(3)}) ⊔ F.

However, the approximate model of the instantiation sequence (P_i)_{i∈I}, as defined in Definition 15, is less precise, viz.

AM((P_i)_{i∈I}) = (∅, {p(1), p(2), q(2), q(3), x, y}) ⊔ F.

This is because we have to use AM^{F,F}_E(P_5) to approximate the well-founded model of component P_5. Here, the set E = {p/1, q/1} determined by (14) forces us to unconditionally assume instances of ¬q(X) and ¬p(X) to be true. Thus, we get (I_5, J_5) = (∅, {p(1), p(2), q(2), q(3)}) for the intermediate interpretation in (16). This is also reflected in Definition 14, which makes us drop all rules containing negative literals over predicates in E when calculating true atoms.

Unlike above, the well-founded model of the refined sequence of ground programs (τ(P_{i,j}))_{(i,j)∈J} is

WM((τ(P_{i,j}))_{(i,j)∈J}) = ({q(3)}, {p(1), p(2), q(2), q(3), x}) ⊔ F,

which is actually less precise than the well-founded model of P. This is because literals over ¬q(X) are unconditionally assumed to be true because their instantiation is not yet available when P_{5,1} is considered. Thus, we get (I_{5,1}, J_{5,1}) = (∅, {p(1), p(2)}) for the intermediate interpretation in (5). Unlike this, the atom p(3) is known to be false when component P_{5,2} is considered, and q(3) becomes true. In fact, we get (I_{5,2}, J_{5,2}) = ({q(3)}, {q(2), q(3)}). Observe that (I_5, J_5) from above is less precise than (I_{5,1}, J_{5,1}) ⊔ (I_{5,2}, J_{5,2}).

In accord with Theorem 41, we approximate the well-founded model w.r.t. the refined instantiation sequence (P_{i,j})_{(i,j)∈J} and obtain

AM((P_{i,j})_{(i,j)∈J}) = ({q(3)}, {p(1), p(2), q(2), q(3), x}) ⊔ F,

which, for this example, is equivalent to the well-founded model of the corresponding ground refined instantiation sequence and more precise than the approximate model of the instantiation sequence.

Remark 1. The reason why we use the refined grounding is that we cannot expect a grounding algorithm to calculate the well-founded model for a


component without further processing. But at least some consequences should be considered. Gringo is designed to ground on-the-fly without storing any rules, so it cannot be expected to compute all possible consequences, but it should at least take all consequences from preceding interpretations into account. With the help of a solver, we could calculate the exact well-founded model of a component after it has been grounded.

Example 15. The dependency graph of the company controls encoding is given in Figure 2 and follows the conventions of Example 14. Because the encoding only uses positive literals and monotone aggregates, grounding sequences cannot be refined further. Since the program is positive, we can apply Theorem 42. Thus, the approximate model of the grounding sequence is total and corresponds to the well-founded model of the program. We use the same abbreviations for predicates as in Figure 2. The well-founded model is (F ∪ I, F ∪ I) where

F = {c(c1), c(c2), c(c3), c(c4),

o(c1, c2, 60), o(c1, c3, 20), o(c2, c3, 35), o(c3, c4, 51)} and

I = {s(c1, c2), s(c3, c4), s(c1, c3), s(c1, c4)}.

[Figure 2 shows the rule dependency graph: the four company facts c(c1) to c(c4) and the four owns facts o(c1, c2, 60), o(c1, c3, 20), o(c2, c3, 35), and o(c3, c4, 51) form components 1 to 8, and the rule

s(X, Y) ← #sum+{S : o(X, Y, S); S, Z : s(X, Z) ∧ o(Z, Y, S)} > 50 ∧ c(X) ∧ c(Y) ∧ X ≠ Y

forms component 9.]

Figure 2. Rule dependencies for the company controls encoding and instance in Tables 1 and 2, where c = company, o = owns, and s = controls.

6. Algorithms

This section lays out the basic algorithms for grounding rules, components, and entire programs, and characterizes their output in terms of the semantic concepts developed in the previous sections. Of particular interest is the


treatment of aggregates, which are decomposed into dedicated normal rules before grounding and re-assembled afterwards. This allows us to ground rules with aggregates by means of grounding algorithms for normal rules.

In the following, we refer to terms, atoms, comparisons, literals, aggregate elements, aggregates, and rules as expressions. As in the preceding sections, all expressions, interpretations, and concepts introduced below operate on the same (implicit) signature Σ unless mentioned otherwise.

A substitution is a mapping from the variables in Σ to terms over Σ. We use ι to denote the identity substitution mapping each variable to itself. A ground substitution maps all variables to ground terms or to themselves. The result of applying a substitution σ to an expression e, written eσ, is the expression obtained by replacing each variable v in e by σ(v). This directly extends to sets E of expressions, that is, Eσ = {eσ | e ∈ E}.

The composition of substitutions σ and θ is the substitution σ ◦ θ where (σ ◦ θ)(v) = θ(σ(v)) for each variable v.
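As a quick sketch, the composition can be written out for substitutions represented as Python dicts; the fixed variable universe and the dict encoding are assumptions for illustration (variables absent from a dict map to themselves).

```python
variables = ["X", "Y", "Z"]        # illustrative variable universe

def apply(sub, term):
    # Apply a substitution to a (flat) term; unbound names stay as-is.
    return sub.get(term, term)

def compose(sigma, theta):
    # (σ ∘ θ)(v) = θ(σ(v)) for each variable v.
    return {v: apply(theta, apply(sigma, v)) for v in variables}

sigma = {"X": "Y"}                 # X ↦ Y
theta = {"Y": 42}                  # Y ↦ 42
print(compose(sigma, theta))       # X and Y both end up mapped to 42
```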

A substitution σ is a unifier of a set E of expressions if e1σ = e2σ for all e1, e2 ∈ E. In what follows, we are interested in one-sided unification, also called matching. A substitution σ matches a non-ground expression e to a ground expression g if eσ = g and σ maps all variables not occurring in e to themselves. We call such a substitution the matcher of e to g. Note that a matcher is a unique ground substitution unifying e and g, if it exists. This motivates the following definition.

For a (non-ground) expression e and a ground expression g, we define:

match(e, g) = {σ}   if there is a matcher σ of e to g, and
match(e, g) = ∅    otherwise.

When grounding rules, we look for matches of non-ground body literals in the Herbrand base accumulated so far. The latter is captured by a four-valued interpretation to distinguish certain atoms among the possible ones. This is made precise in the next definition.

Definition 16. Let σ be a substitution, l be a literal or comparison, and (I, J) be a four-valued interpretation.

We define the set of matches for l in (I, J) w.r.t. σ

for an atom l = a as

Matches^{I,J}_a(σ) = {σ ◦ σ′ | a′ ∈ J, σ′ ∈ match(aσ, a′)},

for a ground literal l = ¬a as

Matches^{I,J}_{¬a}(σ) = {σ | aσ ∉ I}, and

for a ground comparison l = t1 ≺ t2 as in (6) as

Matches^{I,J}_{t1≺t2}(σ) = {σ | ≺ holds between t1σ and t2σ}.


In this way, positive body literals yield a (possibly empty) set of substitutions, refining the one at hand, while negative and comparison literals are only considered when ground, and then act as a test on the given substitution.

1  function GroundRule^{I,J}_{r,f,J′}(σ, L)
2    if L ≠ ∅ then                           // match next
3      (G, l) ← (∅, Select_σ(L));
4      foreach σ′ ∈ Matches^{I,J}_l(σ) do
5        G ← G ∪ GroundRule^{I,J}_{r,f,J′}(σ′, L \ {l});
6      return G;
7    else if f = t or B(rσ)^+ ⊈ J′ then      // rule instance
8      return {rσ};
9    else                                    // rule seen
10     return ∅;

Algorithm 1: Grounding Rules

Our function for rule instantiation is given in Algorithm 1. It takes a substitution σ and a set L of literals and yields a set of ground instances of a safe normal rule r, passed as a parameter; it is called from Algorithm 2 with the identity substitution and the body literals B(r) of r. The other parameters consist of a four-valued interpretation (I, J) comprising the current Herbrand base along with its certain atoms, a two-valued interpretation J′ reflecting the previous value of J, and a Boolean flag f used to avoid duplicate ground rules in consecutive calls to Algorithm 1. The idea is to extend the current substitution in Lines 4 to 5 until we obtain a ground substitution σ that induces a ground instance rσ of rule r. To this end, Select_σ(L) picks for each call some literal l ∈ L such that l ∈ L^+ or lσ is ground. That is, it yields either a positive body literal or a ground negative or ground comparison literal, as needed for computing Matches^{I,J}_l(σ). Whenever an application of Matches for the selected literal in B(r) results in a non-empty set of substitutions, the function is called recursively for each such substitution. The recursion terminates if at least one match is found for each body literal and an instance rσ of r is obtained in Line 8. The set of all such ground instances is returned in Line 6. (Note that we refrain from applying any simplifications to the ground rules and rather leave them intact to obtain more direct formal characterizations of the results of our grounding algorithms.) The test B(rσ)^+ ⊈ J′ in Line 7 makes sure that no ground rules are generated that were already obtained by previous invocations of Algorithm 1. This is relevant for recursive rules and reflects the approach of semi-naive database evaluation [1].

We can characterize the result of Algorithm 1 as follows.

Proposition 44. Let r be a safe normal rule, (I, J) be a finite four-valued interpretation, f ∈ {t, f}, and J′ be a finite two-valued interpretation.


Then, a call to GroundRule^{I,J}_{r,f,J′}(ι, B(r)) returns the finite set of instances g of r satisfying

J ⊨ (τ(B(g))^∧)^I and (f = t or B(g)^+ ⊈ J′).   (17)

See proof on page 86.

In terms of the program simplification in Definition 1, the first condition amounts to checking whether H(g) ← τ(B(g))^∧ is in τ(P)^{(I,J)}, which is the simplification of the (ground) R-program τ(P) preserving all stable models between I and J. The last two conditions are meant to avoid duplicates from a previous invocation. Since r is a normal rule, translation τ is sufficient.

For characterizing the result of Algorithm 1 in terms of aggregate programs, we need the following definition.

Definition 17. Let P be an aggregate program and (I, J) be a four-valued interpretation.

We define Inst^{I,J}(P) as the set of all instances g of rules in P satisfying J ⊨ (τ*(B(g))^∧)^I.

Similar to above, an instance g belongs to Inst^{I,J}(P) iff H(g) ← τ*(B(g))^∧ ∈ τ*(r)^{(I,J)}. Note that the members of Inst^{I,J}(P) are not necessarily ground, since non-global variables may remain within aggregates; they are ground for normal rules, though.

Algorithm 1 is called consecutively within a loop in Algorithm 2. The purpose of the Boolean flag f is to ensure that initially all rules are grounded. In subsequent iterations, duplicates are omitted by setting the flag to false and filtering out rules whose positive bodies are a subset of the atoms J′ used in previous iterations.

In fact, when grounding from scratch with J′ = ∅ and f = t, the right-hand side of (17), just like the test in Line 7, is satisfied:

Corollary 45. Let r be a safe normal rule and (I, J) be a finite four-valued interpretation.

Then, Inst^{I,J}({r}) = GroundRule^{I,J}_{r,t,∅}(ι, B(r)).

See proof on page 88.

This relation can be seen as the base case of an iterative construction. In view of this, the next result shows how subsequent iterations extend previous ones.

Lemma 46. Let r be a safe normal rule, (I, J) be a finite four-valued interpretation, and J′ ⊆ J be a two-valued interpretation.

Then, we have

Inst^{I,J}({r}) = Inst^{I,J′}({r}) ∪ GroundRule^{I,J}_{r,f,J′}(ι, B(r)).

See proof on page 89.

Now, let us turn to the treatment of aggregates. To this end, we define the following translation of aggregate programs to normal programs.


Definition 18. Let P be a safe aggregate program over signature Σ. Let Σ′ be the signature obtained by extending Σ with fresh predicates

α_{a,r}/n and   (18)
ε_{a,r}/n   (19)

for each aggregate a occurring in a rule r ∈ P, where n is the number of global variables in a, and fresh predicates

η_{e,a,r}/(m + n)   (20)

for each aggregate element e occurring in aggregate a in rule r, where m is the size of the tuple H(e).

We define P^α, P^ε, and P^η as normal programs over Σ′ as follows.

• Program P^α is obtained from P by replacing each aggregate occurrence a in P with

α_{a,r}(X1, . . . , Xn)   (21)

where α_{a,r}/n is defined as in (18) and X1, . . . , Xn are the global variables in a.

• Program P^ε consists of rules

ε_{a,r}(X1, . . . , Xn) ← t ≺ b ∧ b1 ∧ · · · ∧ bl   (22)

for each predicate ε_{a,r}/n as in (19), where X1, . . . , Xn are the global variables in a, a is an aggregate of form f{E} ≺ b occurring in r, t = f(∅) is the value of the aggregate function applied to the empty set, and b1, . . . , bl are the body literals of r excluding aggregates.

• Program P^η consists of rules

η_{e,a,r}(t1, . . . , tm, X1, . . . , Xn) ← c1 ∧ · · · ∧ ck ∧ b1 ∧ · · · ∧ bl   (23)

for each predicate η_{e,a,r}/(m + n) as in (20), where t1, . . . , tm = H(e), X1, . . . , Xn are the global variables in a, c1, . . . , ck = B(e), and b1, . . . , bl are the body literals of r excluding aggregates.

Summarizing the above, we translate an aggregate program P over Σ into a normal program Pα along with auxiliary normal rules in P ε and P η, all over a signature extending Σ by the special-purpose predicates in (18) to (20). In fact, there is a one-to-one correspondence between the rules in P and Pα, so that we get P = Pα and P ε = P η = ∅ whenever P is already normal.
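To make the shape of this translation concrete, here is a small Python sketch of how the α/ε/η split might be mechanized; the dict-and-tuple encoding of rules and all names (rewrite, f_empty, cmp, and so on) are our own illustration, not gringo's internals.

```python
# Illustrative sketch (our own encoding, not gringo's): split a rule with one
# aggregate into its P_alpha, P_eps, and P_eta parts, following Definition 18.
def rewrite(rule_id, head, aggregate, body):
    g = tuple(aggregate["globals"])
    alpha = ("alpha", rule_id, g)                     # placeholder atom, cf. (21)
    # P_alpha: the original rule with the aggregate replaced by the placeholder
    p_alpha = [(head, (alpha,) + tuple(body))]
    # P_eps: rule (22), testing f applied to the empty set against the bound
    test = ("cmp", aggregate["f_empty"], aggregate["rel"], aggregate["bound"])
    p_eps = [(("eps", rule_id, g), (test,) + tuple(body))]
    # P_eta: one rule (23) per aggregate element, recording its variable bindings
    p_eta = [(("eta", i, rule_id, tuple(terms) + g), tuple(cond) + tuple(body))
             for i, (terms, cond) in enumerate(aggregate["elements"])]
    return p_alpha, p_eps, p_eta

# the company controls rule of Example 16: #sum+{...} > 50 with globals X, Y
agg = {"globals": ("X", "Y"), "f_empty": 0, "rel": ">", "bound": 50,
       "elements": [(("S",), (("owns", "X", "Y", "S"),)),
                    (("S", "Z"), (("controls", "X", "Z"), ("owns", "Z", "Y", "S")))]}
pa, pe, ph = rewrite("r", ("controls", "X", "Y"), agg,
                     (("company", "X"), ("company", "Y"), ("neq", "X", "Y")))
```

Note how the body literals of the original rule are appended to every generated rule, exactly as in (22) and (23).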


Example 16. We illustrate the translation of aggregate programs on the company controls example in Table 1. We rewrite the rule r =

controls(X,Y) ← #sum+{S : owns(X,Y,S); S,Z : controls(X,Z) ∧ owns(Z,Y,S)} > 50
              ∧ company(X) ∧ company(Y) ∧ X ≠ Y

containing an aggregate a, whose first and second aggregate elements we refer to as e1 and e2, into rules r1 to r4:

controls(X,Y) ← αa,r(X,Y) ∧ company(X) ∧ company(Y) ∧ X ≠ Y, (r1)

εa,r(X,Y) ← 0 > 50 ∧ company(X) ∧ company(Y) ∧ X ≠ Y, (r2)

ηe1,a,r(S,X,Y) ← owns(X,Y,S) ∧ company(X) ∧ company(Y) ∧ X ≠ Y, and (r3)

ηe2,a,r(S,Z,X,Y) ← controls(X,Z) ∧ owns(Z,Y,S) ∧ company(X) ∧ company(Y) ∧ X ≠ Y. (r4)

We have Pα = {r1}, P ε = {r2}, and P η = {r3, r4}.

This example nicely illustrates how possible instantiations of aggregate elements are gathered via the rules in P η. Similarly, the rules in P ε collect instantiations warranting that the result of applying aggregate functions to the empty set is in accord with the respective bound. In both cases, the relevant variable bindings are captured by the special head atoms of the rules. In turn, groups of corresponding instances of aggregate elements are used in Definition 21 to sanction the derivation of ground atoms of form (21). These atoms are ultimately replaced in Pα with the original aggregate contents.

We next define two functions gathering information from instances of rules in P ε and P η. In particular, we make precise how groups of aggregate element instances are obtained from ground rules in P η.

Definition 19. Let P be an aggregate program, and Gε and Gη be subsets of ground instances of rules in P ε and P η, respectively. Furthermore, let a be an aggregate occurring in some rule r ∈ P and σ be a substitution mapping the global variables in a to ground terms.

We define

εr,a(Gε, σ) = ⋃g∈Gε match(raσ, g)

where ra is a rule of form (22) for aggregate occurrence a, and

ηr,a(Gη, σ) = {eσθ | g ∈ Gη, e ∈ E, θ ∈ match(reσ, g)}

where E are the aggregate elements of a and re is a rule of form (23) for aggregate element occurrence e in a.

Given that σ maps the global variables in a to ground terms, raσ is ground whereas reσ may still contain local variables from a. The set εr,a(Gε, σ) has an indicative nature: for an aggregate aσ, it contains the identity substitution when the result of applying its aggregate function to the empty set is in accord with its bound, and it is empty otherwise. The construction of ηr,a(Gη, σ) goes one step further and reconstitutes all ground aggregate elements of aσ from variable bindings obtained by rules in Gη. Both functions play a central role below in defining the function Propagate for deriving ground aggregate placeholders of form (21) from ground rules in Gε and Gη.

Example 17. We show how to extract aggregate elements from ground instances of rules (r3) and (r4) in Example 16.

Let Gε be empty and Gη be the program consisting of the following rules:

ηe1,a,r(60, c1, c2) ← owns(c1, c2, 60) ∧ company(c1) ∧ company(c2) ∧ c1 ≠ c2,
ηe1,a,r(20, c1, c3) ← owns(c1, c3, 20) ∧ company(c1) ∧ company(c3) ∧ c1 ≠ c3,
ηe1,a,r(35, c2, c3) ← owns(c2, c3, 35) ∧ company(c2) ∧ company(c3) ∧ c2 ≠ c3,
ηe1,a,r(51, c3, c4) ← owns(c3, c4, 51) ∧ company(c3) ∧ company(c4) ∧ c3 ≠ c4, and
ηe2,a,r(35, c2, c1, c3) ← controls(c1, c2) ∧ owns(c2, c3, 35) ∧ company(c1) ∧ company(c3) ∧ c1 ≠ c3.

Clearly, we have εr,a(Gε, σ) = ∅ for any substitution σ because Gε = ∅. This means that aggregate a can only be satisfied if at least one of its elements is satisfiable. In fact, we obtain non-empty sets ηr,a(Gη, σ) of ground aggregate elements for four substitutions σ:

ηr,a(Gη, σ1) = {60 : owns(c1, c2, 60)} for σ1 : X ↦ c1, Y ↦ c2,
ηr,a(Gη, σ2) = {51 : owns(c3, c4, 51)} for σ2 : X ↦ c3, Y ↦ c4,
ηr,a(Gη, σ3) = {35, c2 : controls(c1, c2) ∧ owns(c2, c3, 35); 20 : owns(c1, c3, 20)} for σ3 : X ↦ c1, Y ↦ c3, and
ηr,a(Gη, σ4) = {35 : owns(c2, c3, 35)} for σ4 : X ↦ c2, Y ↦ c3.
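The grouping performed by ηr,a can be pictured as bucketing ground η-atoms by the trailing bindings of the aggregate's global variables. A minimal sketch, with data mirroring the atoms above (the encoding and names are ours):

```python
from collections import defaultdict

# Group ground eta-atoms by the bindings of the aggregate's global variables,
# mimicking eta_{r,a}(G^eta, sigma); each atom carries the element's tuple
# first and the global bindings last, as in rules (r3) and (r4).
def group_elements(eta_atoms, n_globals):
    groups = defaultdict(set)
    for elem_id, args in eta_atoms:
        sigma = args[-n_globals:]          # trailing arguments = bindings of X, Y
        groups[sigma].add((elem_id, args[:-n_globals]))
    return dict(groups)

atoms = [("e1", (60, "c1", "c2")), ("e1", (20, "c1", "c3")),
         ("e1", (35, "c2", "c3")), ("e1", (51, "c3", "c4")),
         ("e2", (35, "c2", "c1", "c3"))]
groups = group_elements(atoms, 2)
# four substitutions, as in the example; the key ("c1","c3") collects
# element instances stemming from both e1 and e2
```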

For capturing the result of grounding aggregates relative to groups of aggregate elements gathered via P η, we restrict their original translation to subsets of their ground elements. That is, while τ∗(a) and τ∗a(·) draw in Definition 10 on all instances of aggregate elements in a, their counterparts τ∗G(a) and τ∗a,G(·) are restricted to a subset of such aggregate element instances:7

Definition 20. Let a be a closed aggregate of form (8) and let E be the set of its aggregate elements. Let G ⊆ Inst(E) be a set of aggregate element instances.

We define the translation τ∗G(a) of a w.r.t. G as follows:

τ∗G(a) = {τ(D)∧ → τ∗a,G(D)∨ | D ⊆ G, D ⋫ a}∧

where

τ∗a,G(D) = {τ(C)∧ | C ⊆ G \ D, C ∪ D ▷ a}.

As before, this translation maps aggregates, possibly including non-global variables, to a conjunction of (ground) R-rules. The resulting R-formula is used below in the definition of functions Assemble and Propagate.

Example 18. We consider the four substitutions σ1 to σ4 together with the sets G1 = ηr,a(Gη, σ1) to G4 = ηr,a(Gη, σ4) from Example 17 for aggregate a. Following the discussion after Proposition 34, we get the formulas:

τ∗G1(aσ1) = owns(c1, c2, 60),
τ∗G2(aσ2) = owns(c3, c4, 51),
τ∗G3(aσ3) = controls(c1, c2) ∧ owns(c2, c3, 35) ∧ owns(c1, c3, 20), and
τ∗G4(aσ4) = ⊥.

Observe that the first three formulas capture the first three aggregates in Table 3.

The function Propagate yields a set of ground atoms of form (21) that are used in Algorithm 2 to ground rules having such placeholders among their body literals. Each such special atom is supported by a group of ground instances of its aggregate elements.

Definition 21. Let P be an aggregate program, (I, J) be a four-valued interpretation, and Gε and Gη be subsets of ground instances of rules in P ε and P η, respectively.

We define PropagateI,JP(Gε, Gη) as the set of all atoms of form ασ such that εr,a(Gε, σ) ∪ G ≠ ∅ and J |= τ∗G(aσ)I with G = ηr,a(Gη, σ), where α is an atom of form (21) for aggregate a in rule r and σ is a ground substitution for r mapping all global variables in a to ground terms.

An atom ασ is only considered if σ warrants ground rules in Gε or Gη, signaling that the application of a's aggregate function to the empty set is feasible when applying σ, or that there is a non-empty set of ground aggregate elements of aσ obtained after applying σ, respectively. If this is the case, it is checked whether the set G of aggregate element instances warrants that τ∗G(aσ) admits stable models between I and J.

7 Note that the restriction to sets of ground aggregate elements is similar to the one to two-valued interpretations in Definition 11.

Example 19. We show how to propagate aggregates using the sets G1 to G4 and their associated formulas from Example 18. Suppose that I = J = F ∪ {controls(c1, c2)} with F from Example 15.

Observe that J |= τ∗G1(aσ1)I, J |= τ∗G2(aσ2)I, J |= τ∗G3(aσ3)I, and J ⊭ τ∗G4(aσ4)I. Thus, we get PropagateI,JP(Gε, Gη) = {αa,r(c1, c2), αa,r(c1, c3), αa,r(c3, c4)}.

The function Assemble yields an R-program in which aggregate placeholder atoms of form (21) have been replaced by their corresponding R-formula.

Definition 22. Let P be an aggregate program, and Gα and Gη be subsets of ground instances of rules in Pα and P η, respectively.

We define Assemble(Gα, Gη) as the R-program obtained from Gα by replacing

• all comparisons by ⊤, and
• all atoms of form ασ by the corresponding formulas τ∗G(aσ) with G = ηr,a(Gη, σ), where α is an atom of form (21) for aggregate a in rule r and σ is a ground substitution for r mapping all global variables in a to ground terms.
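A rough sketch of the substitution performed by Assemble, on a toy string encoding of atoms (the encoding and names are ours, and we assume comparisons have already been evaluated away):

```python
# Toy sketch of Assemble: substitute each ground placeholder atom occurring in
# a rule body by the R-formula collected for it; other literals stay as-is.
def assemble(g_alpha, formulas):
    return [(head, tuple(formulas.get(lit, lit) for lit in body))
            for head, body in g_alpha]

# one ground rule from P_alpha and the formula for its placeholder atom
g_alpha = [("controls(c1,c2)",
            ("alpha(c1,c2)", "company(c1)", "company(c2)"))]
formulas = {"alpha(c1,c2)": "owns(c1,c2,60)"}
result = assemble(g_alpha, formulas)
```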

Example 20. We show how to assemble aggregates using the sets G1 to G3 for aggregate atoms that have been propagated in Example 19. To this end, let Gα be the program consisting of the following rules:

controls(c1, c2) ← αa,r(c1, c2) ∧ company(c1) ∧ company(c2) ∧ c1 ≠ c2,
controls(c3, c4) ← αa,r(c3, c4) ∧ company(c3) ∧ company(c4) ∧ c3 ≠ c4, and
controls(c1, c3) ← αa,r(c1, c3) ∧ company(c1) ∧ company(c3) ∧ c1 ≠ c3.

Then, program Assemble(Gα, Gη) consists of the following rules:

controls(c1, c2) ← τ∗G1(aσ1) ∧ company(c1) ∧ company(c2) ∧ ⊤,
controls(c3, c4) ← τ∗G2(aσ2) ∧ company(c3) ∧ company(c4) ∧ ⊤, and
controls(c1, c3) ← τ∗G3(aσ3) ∧ company(c1) ∧ company(c3) ∧ ⊤.

The next result shows how a (non-ground) aggregate program P is transformed into a (ground) R-program τ∗J(P)I,J in the context of certain and possible atoms (I, J) via the interplay of grounding P ε and P η, deriving aggregate placeholders from their ground instances Gε and Gη, and finally replacing them in Gα by the original aggregates' contents.

Proposition 47. Let P be an aggregate program, (I, J) be a finite four-valued interpretation, Gε = InstI,J(P ε), Gη = InstI,J(P η), Jα = PropagateI,JP(Gε, Gη), and Gα = InstI,J∪Jα(Pα).

Then,

(a) Assemble(Gα, Gη) = τ∗J(P)I,J and
(b) H(Gα) = Tτ∗(P)I(J).

See proof on page 89.

Property (b) highlights the relation of the possible atoms contributed by Gα to a corresponding application of the immediate consequence operator. In fact, this is the first of three such relationships between grounding algorithms and consequence operators.

1  function GroundComponent(P, I, J)
2    (Gα, Gε, Gη, f, Jα, Jα′, J′) ← (∅, ∅, ∅, t, ∅, ∅, ∅);
3    repeat
       // ground aggregate elements
4      Gε ← Gε ∪ ⋃r∈P ε GroundRuleI,Jr,f,J′(ι, B(r));
5      Gη ← Gη ∪ ⋃r∈P η GroundRuleI,Jr,f,J′(ι, B(r));
       // propagate aggregates
6      Jα ← PropagateI,JP(Gε, Gη);
       // ground remaining rules
7      Gα ← Gα ∪ ⋃r∈Pα GroundRuleI,J∪Jαr,f,J′∪Jα′(ι, B(r));
8      (f, Jα′, J′, J) ← (f, Jα, J, J ∪ H(Gα));  // the flag f is set to the constant f (false)
9    until J′ = J;
10   return Assemble(Gα, Gη);

Algorithm 2: Grounding Components

Let us now turn to grounding components of instantiation sequences in Algorithm 2. The function GroundComponent takes an aggregate program P along with two sets I and J of ground atoms. Intuitively, P is a component in a (refined) instantiation sequence and I and J form a four-valued interpretation (I, J) comprising the certain and possible atoms gathered while grounding previous components (although their roles get reversed in Algorithm 3). After variable initialization, GroundComponent loops over consecutive rule instantiations in Pα, P ε, and P η until no more possible atoms are obtained. In this case, it returns in Line 10 the R-program obtained from Gα by replacing all ground aggregate placeholders of form (21) with the R-formula corresponding to the respective ground aggregate. The body of the loop can be divided into two parts: Lines 4 to 6 deal with aggregates, and Lines 7 and 8 care about grounding the actual program. In more detail, Lines 4 and 5 instantiate programs P ε and P η, whose ground instances, Gε and Gη, are then used in Line 6 to derive ground instances of aggregate placeholders of form (21). The grounded placeholders are then added via variable Jα to the possible atoms J when grounding the actual program Pα in Line 7, where J′ and Jα′ hold the previous values of J and Jα, respectively. For the next iteration, J is augmented in Line 8 with all rule heads in Gα, and the flag f is set to false. Recall that the purpose of f is to ensure that initially all rules are grounded. In subsequent iterations, duplicates are omitted by setting the flag to false and filtering rules whose positive bodies are a subset of the atoms J′ ∪ Jα′ used in previous iterations.
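Stripped of aggregates, the control flow of this loop can be sketched as follows; ground_rule is a stand-in for GroundRule, and the whole fragment is a simplification for normal programs, not the algorithm's full definition:

```python
# Simplified sketch of Algorithm 2's loop for normal programs: instantiate all
# rules against the possible atoms J until no new head atoms appear.
def ground_component(rules, ground_rule, I, J):
    G, J_prev, first = set(), set(), True
    while True:
        for r in rules:
            G |= ground_rule(r, I, J, first, J_prev)
        first, J_prev = False, set(J)
        J = J | {head for head, body in G}
        if J == J_prev:                 # no new possible atoms: fixpoint reached
            return G

# toy stand-in for GroundRule: one rule p(X) <- q(X), instantiated only over
# q-atoms that are new with respect to the previous iteration
def gr(r, I, J, first, J_prev):
    return {(("p", a), (("q", a),))
            for name, a in J if name == "q" and (first or (name, a) not in J_prev)}

G = ground_component(["p_from_q"], gr, set(), {("q", 1), ("q", 2)})
```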

While the inner workings of Algorithm 2 follow the blueprint given by Proposition 47, its outer functionality boils down to applying the id-stable operator of the corresponding ground program in the context of the certain and possible atoms gathered so far.

Proposition 48. Let P be an aggregate program, (Ic, Jc) be a finite four-valued interpretation, and J = SJcτ∗(P)(Ic).

Then,

(a) GroundComponent(P, Ic, Jc) terminates iff J is finite.

If J is finite, then

(b) GroundComponent(P, Ic, Jc) = τ∗Jc∪J(P)Ic,Jc∪J and
(c) H(GroundComponent(P, Ic, Jc)) = J.

See proof on page 90.

1  function Ground(P)
2    let (Pi)i∈I be a refined instantiation sequence for P;
3    (F, G) ← (∅, ∅);
4    foreach i ∈ I do
5      let P′i be the program obtained from Pi as in Definition 14;
6      F ← F ∪ GroundComponent(P′i, H(G), H(F));
7      G ← G ∪ GroundComponent(Pi, H(F), H(G));
8    return G;

Algorithm 3: Grounding Programs

Finally, Algorithm 3 grounds an aggregate program by iterating over the components of one of its refined instantiation sequences. Just as Algorithm 2 reflects the application of a stable operator, function Ground follows the definition of an approximate model when grounding a component (cf. Definition 14). At first, facts are computed in Line 6 by using the program stripped of rules involved in a negative cycle overlapping with the present or subsequent components. The obtained head atoms are then used in Line 7 as certain context atoms when computing the ground version of the component at hand. The possible atoms are provided by the head atoms of the ground program built so far, with roles reversed in Line 6. Accordingly, the whole iteration aligns with the approximated model of the chosen refined instantiation sequence (cf. Definition 15), as made precise next.

Theorem 49. Let P be an aggregate program, (Pi)i∈I be a refined instantiation sequence for P, and Ei, (Ici, Jci), and (Ii, Ji) be defined as in (14) to (16).

If (Pi)i∈I is selected by Algorithm 3 in Line 2, then we have that

(a) the call Ground(P) terminates iff AM((Pi)i∈I) is finite, and
(b) if AM((Pi)i∈I) is finite, then Ground(P) = ⋃i∈I τ∗Jci∪Ji(Pi)(Ici,Jci)⊔(Ii,Ji).

See proof on page 91.

As already indicated by Theorem 43, grounding is governed by the series of consecutive approximate models (Ii, Ji) in (16) delineating the stable models of each ground program in a (refined) instantiation sequence. Whenever each of them is finite, we also obtain a finite grounding of the original program. Note that the entire ground program is composed of the ground programs of each component in the chosen instantiation sequence. Hence, different sequences may result in different overall ground programs.

Most importantly, our grounding machinery guarantees that an obtained finite ground program has the same stable models as the original non-ground program.

Corollary 50. Let P be an aggregate program.

If Ground(P) terminates, then P and Ground(P) have the same well-founded and stable models.

See proof on page 91.

(a) I5,1 = I4 ∪ H(GC(P′5,1, J4, I4)):

    P′5,1 = ∅

(b) J5,1 = J4 ∪ H(GC(P5,1, I5,1, J4)):

        u(X)   ¬q(X)   p(X)
    1.1 u(1)   ¬q(1)   p(1)
        u(2)   ¬q(2)   p(2)

(c) I5,2 = I5,1 ∪ H(GC(P′5,2, J5,1, I5,1)):

        v(X)   ¬p(X)   q(X)
    1.1 v(2)   ×
        v(3)   ¬p(3)   q(3)

(d) J5,2 = J5,1 ∪ H(GC(P5,2, I5,2, J5,1)):

        v(X)   ¬p(X)   q(X)
    1.1 v(2)   ¬p(2)   q(2)
        v(3)   ¬p(3)   q(3)

(e) I6 = I5,2 ∪ H(GC(P′6, J5,2, I5,2)):

        ¬p(1)   x
    1.1 ×

(f) J6 = J5,2 ∪ H(GC(P6, I6, J5,2)):

        ¬p(1)   x
    1.1 ¬p(1)   x

(g) I7 = I6 ∪ H(GC(P′7, J6, I6)):

        ¬q(3)   y
    1.1 ×

(h) J7 = J6 ∪ H(GC(P7, I7, J6)):

        ¬q(3)   y
    1.1 ×

Table 4. Grounding of components P5,1, P5,2, P6, and P7 from Example 14, where GC = GroundComponent.

Example 21. The execution of the grounding algorithms on Example 14 is illustrated in Table 4. Each table depicts a call to GroundComponent, where the header above the double line contains the (literals of the) rules to be grounded and the rows below trace how nested calls to GroundRule proceed. The rules in the header contain the body literals in the order in which they are selected by GroundRule, with the rule head as the last literal. Calls to GroundRule are depicted with vertical lines and horizontal arrows. A vertical line represents the iteration of the loop in Lines 4 to 5. A horizontal arrow represents a recursive call to GroundRule in Line 5. Each row in the table not marked with × corresponds to a ground instance as returned by GroundRule. Furthermore, because all components are stratified, we only show the first iteration of the loop in Lines 3 to 9 of Algorithm 2, as the second iteration does not produce any new ground instances.

Grounding components P1 to P4 results in the theories F = G = {u(1) ← ⊤, u(2) ← ⊤, v(2) ← ⊤, v(3) ← ⊤}. Since grounding is driven by the sets of true and possible atoms, we focus on the interpretations Ii and Ji where i is a component index in the refined instantiation sequence. We start tracing the grounding with I4 = J4 = {u(1), u(2), v(2), v(3)}.

The grounding of P5,1 is depicted in Tables 4a and 4b. We have E = {q/1} because predicate q/1 is used in the head of the rule in P5,2. Thus, P′5,1 = ∅ and GroundComponent(∅, J4, I4) in Line 6 returns the empty set, and we get I5,1 = I4. In the next line, the algorithm calls GroundComponent(P5,1, I5,1, J4) and we get J5,1 = {p(1), p(2)}. Note that at this point, it is not known that q(1) is not derivable, and so the algorithm does not derive p(1) as a fact.

The grounding of P5,2 is given in Tables 4c and 4d. This time, we have E = ∅ and P5,2 = P′5,2. Thus, the first call to GroundRule determines q(3) to be true while the second call additionally determines the possible atom q(2).

The grounding of P6 is illustrated in Tables 4e and 4f. Note that we obtain that x is possible because p(1) was not determined to be true.

The grounding of P7 is depicted in Tables 4g and 4h. Note that, unlike before, we obtain that y is false because q(3) was determined to be true.

Furthermore, observe that the choice of the refined instantiation sequence determines the output of the algorithm. In fact, swapping P5,1 and P5,2 in the sequence would result in x being false and y being possible.

Example 22. We illustrate the grounding of aggregates on the company controls example in Tables 1 and 2 using the grounding sequence (Pi)1≤i≤9 and the set of facts F from Example 15. Observe that the grounding of components P1 to P8 produces the program {a ← ⊤ | a ∈ F}. We focus on how component P9 is grounded. Because there are no negative dependencies, the components P9 and P′9 in Line 5 of Algorithm 3 are equal. To ground component P9, we use the rewriting from Example 16.

The grounding of component P9 is illustrated in Table 5, which follows the same conventions as in Example 21. Because the program is positive, the calls in Lines 6 and 7 in Algorithm 3 proceed in the same way, and we depict only one of them. Furthermore, because this example involves a recursive rule with an aggregate, the header consists of five rows processed by Algorithm 2. The first row corresponds to P ε9 grounded in Line 4, the second and third to P η9 grounded in Line 5, the fourth to aggregate propagation in Line 6, and the fifth to Pα9 grounded in Line 7. After the header follow the iterations of the loop in Lines 3 to 9. Because the component is recursive, processing the component requires four iterations, separated by blank lines in the table. The leftmost column of the table contains the iteration number and a number indicating which row in the header is processed. The row for aggregate propagation lists the aggregate atoms that have been propagated.

   1  0 > 50      c(X)   c(Y)   X ≠ Y   εa,r(X,Y)
   2  o(X,Y,S)    c(X)   c(Y)   X ≠ Y   ηe1,a,r(S,X,Y)
   3  s(X,Z)      o(Z,Y,S)   c(X)   c(Y)   X ≠ Y   ηe2,a,r(S,Z,X,Y)
   4  Propagate
   5  αa,r(X,Y)   c(X)   c(Y)   X ≠ Y   s(X,Y)

 1.1  ×
 1.2  o(c1,c2,60)   c(c1)   c(c2)   c1 ≠ c2   ηe1,a,r(60,c1,c2)
      o(c1,c3,20)   c(c1)   c(c3)   c1 ≠ c3   ηe1,a,r(20,c1,c3)
      o(c2,c3,35)   c(c2)   c(c3)   c2 ≠ c3   ηe1,a,r(35,c2,c3)
      o(c3,c4,51)   c(c3)   c(c4)   c3 ≠ c4   ηe1,a,r(51,c3,c4)
 1.3  ×
 1.4  {αa,r(c1,c2), αa,r(c3,c4)}
 1.5  αa,r(c1,c2)   c(c1)   c(c2)   c1 ≠ c2   s(c1,c2)
      αa,r(c3,c4)   c(c3)   c(c4)   c3 ≠ c4   s(c3,c4)

 2.1  ×
 2.2  ×
 2.3  s(c1,c2)   o(c2,c3,35)   c(c1)   c(c3)   c1 ≠ c3   ηe2,a,r(35,c2,c1,c3)
      s(c3,c4)   ×
 2.4  {αa,r(c1,c3)}
 2.5  αa,r(c1,c3)   c(c1)   c(c3)   c1 ≠ c3   s(c1,c3)

 3.1  ×
 3.2  ×
 3.3  s(c1,c3)   o(c3,c4,51)   c(c1)   c(c4)   c1 ≠ c4   ηe2,a,r(51,c3,c1,c4)
 3.4  {αa,r(c1,c4)}
 3.5  αa,r(c1,c4)   c(c1)   c(c4)   c1 ≠ c4   s(c1,c4)

 4.1  ×
 4.2  ×
 4.3  s(c1,c4)   ×
 4.4  ∅
 4.5  ×

Table 5. Tracing the grounding of component P9, where c = company, o = owns, and s = controls.


The grounding of rule r2 in row 1.1 does not produce any rule instances in any iteration because the comparison 0 > 50 is false. By first selecting this literal when grounding the rule, the remaining rule body can be completely ignored. Actual systems implement heuristics to prioritize such literals. Next, in the grounding of rule r3 in row 1.2, direct shares given by facts over owns/3 are accumulated. Because the rule does not contain any recursive predicates, it only produces ground instances in the first iteration. Unlike this, rule r4 contains the recursive predicate controls/2. It does not produce instances in the first iteration in row 1.3 because there are no corresponding atoms yet. Next, aggregate propagation is triggered in row 1.4, resulting in aggregate atoms αa,r(c1, c2) and αa,r(c3, c4), for which enough shares have been accumulated in row 1.2. Note that this corresponds to the propagation of the sets G1 and G2 in Example 19. With these atoms, rule r1 is instantiated in row 1.5, leading to new atoms over controls/2. Observe that, by selecting atom αa,r(X,Y) first, GroundRule can instantiate the rule without backtracking.

In the second iteration, the newly obtained atoms over controls/2 yield atom ηe2,a,r(35, c2, c1, c3) in row 2.3, which in turn leads to the aggregate atom αa,r(c1, c3), resulting in further instances of r4. Note that this corresponds to the propagation of the set G3 in Example 19.

The following iterations proceed in a similar fashion until no new atoms are accumulated and the grounding loop terminates. Note that the utilized selection strategy affects the amount of backtracking in rule instantiation. One particular strategy used in gringo is to prefer atoms over recursive predicates. If there is only one such atom, GroundRule can select this atom first and only has to consider newly derived atoms for instantiation. The table is made more compact by applying this strategy. Further techniques are available in the database literature [63] that also work in the case of multiple atoms over recursive predicates.

7. Refinements

Up to now, we were primarily concerned with characterizing the theoretical and algorithmic cornerstones of grounding. This section refines these concepts by further detailing aggregate propagation, algorithm specifics, and the treatment of language constructs from gringo's input language.

7.1. Aggregate Propagation

In Section 6, we used the relative translation of aggregates for propagation, namely formula τ∗G(aσ) in Definition 21, to check whether an aggregate is satisfiable. In this section, we identify several aggregate-specific properties that allow us to implement more efficient algorithms to perform this check.

To begin with, we establish some properties that greatly simplify the treatment of (arbitrary) monotone or antimonotone aggregates.


We have already seen in Proposition 34 that τ∗(a)I is classically equivalent to τ∗(a) for any monotone closed aggregate a and two-valued interpretation I. Here is its counterpart for antimonotone aggregates.

Proposition 51. Let a be a closed aggregate.

If a is antimonotone, then for any two-valued interpretation I, τ∗(a)I is classically equivalent to ⊤ if I |= τ∗(a) and to ⊥ otherwise.

See proof on page 92.

Example 23. In Example 19, we check whether the interpretation J satisfies the formulas τ∗G1(aσ1)I to τ∗G4(aσ4)I.

Using Proposition 34, this boils down to checking

∑e∈Gi,J|=B(e) H(e) > 50

for each 1 ≤ i ≤ 4. We get 60 > 50, 51 > 50, 55 > 50, and 35 ≯ 50 for each Gi, which agrees with checking J |= τ∗Gi(aσi)I.

An actual implementation can maintain a counter for the current value of the sum for each closed aggregate instance, which can be updated incrementally and compared with the bound as new instances of aggregate elements are grounded.
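Such a counter might look as follows; this is a sketch for a #sum+ aggregate with a strict lower bound (so all weights are non-negative), with names of our own choosing:

```python
# Sketch: incremental check of  #sum+{...} > bound  during grounding; each
# newly derived, satisfied aggregate element instance updates a running total.
class SumPlusCounter:
    def __init__(self, bound):
        self.bound = bound
        self.total = 0
        self.seen = set()

    def add(self, element, weight):
        if element not in self.seen:     # count each element instance only once
            self.seen.add(element)
            self.total += weight
        return self.total > self.bound   # does the aggregate hold so far?

c = SumPlusCounter(50)
c.add(("e1", "c1", "c3"), 20)            # total 20, bound not yet exceeded
```

Because #sum+ is monotone, the answer can only flip from false to true as more elements arrive, so the check never has to be retracted.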

Next, we see that such counter-based implementations are also possible for #sum aggregates using the <, ≤, >, or ≥ relations. We restrict our attention to finite interpretations because Proposition 52 is intended to give an idea of how to implement an actual propagation algorithm for aggregates (infinite interpretations would add more special cases). Furthermore, we just consider the case that the bound is an integer here; the aggregate is constant for any other ground term.

Proposition 52. Let I be a finite two-valued interpretation, E be a set of aggregate elements, and b be an integer.

For T = H({e ∈ Inst(E) | I |= B(e)}), we get

(a) τ∗(#sum{E} ≻ b)I is classically equivalent to τ∗(#sum+{E} ≻ b′) with ≻ ∈ {≥, >} and b′ = b − #sum−(T), and
(b) τ∗(#sum{E} ≺ b)I is classically equivalent to τ∗(#sum−{E} ≺ b′) with ≺ ∈ {≤, <} and b′ = b − #sum+(T).

See proof on page 92.

The remaining propositions identify properties that can be exploited when propagating aggregates over the = and ≠ relations.

Proposition 53. Let I be a two-valued interpretation, E be a set of aggregate elements, and b be a ground term.

We get the following properties:

(a) τ∗(f{E} < b)I ∨ τ∗(f{E} > b)I implies τ∗(f{E} ≠ b)I, and
(b) τ∗(f{E} = b)I implies τ∗(f{E} ≤ b)I ∧ τ∗(f{E} ≥ b)I.

See proof on page 93.

The following proposition identifies special cases when the implications in Proposition 53 are equivalences. Another interesting aspect of this proposition is that we can actually replace #sum aggregates over = and ≠ with a conjunction or disjunction, respectively, at the expense of calculating a less precise approximate model. The conjunction is even strongly equivalent to the original aggregate under Ferraris' semantics, but not the disjunction.

Proposition 54. Let I and J be two-valued interpretations, f be an aggregate function among #count, #sum+, #sum−, or #sum, E be a set of aggregate elements, and b be an integer.

We get the following properties:

(a) for I ⊆ J , we have J |= τ∗(f{E} < b)I ∨ τ∗(f{E} > b)I iff J |=τ∗(f{E} 6= b)I , and

(b) for J ⊆ I, we have J |= τ∗(f{E} = b)I iff J |= τ∗(f{E} ≤ b)I ∧τ∗(f{E} ≥ b)I .

See proof on page 93.

The following proposition shows that full propagation of #sum, #sum+, or #sum− aggregates over relations = and ≠ involves solving the subset sum problem [51]. We assume that we propagate w.r.t. some polynomial number of aggregate elements. Propagating possible atoms when using the = relation, i.e., when I ⊆ J, involves deciding an NP problem, and propagating certain atoms when using the ≠ relation, i.e., when J ⊆ I, involves deciding a co-NP problem.8 Note that the decision problem for #count aggregates is polynomial, though.

Proposition 55. Let I and J be finite two-valued interpretations, f be an aggregate function, E be a set of aggregate elements, and b be a ground term.

For TI = {H(e) | e ∈ Inst(E), I |= B(e)} and TJ = {H(e) | e ∈ Inst(E), J |= B(e)}, we get the following properties:

(a) for J ⊆ I, we have J |= τ∗(f{E} ≠ b)I iff there is no set X ⊆ TI such that f(X ∪ TJ) = b, and
(b) for I ⊆ J, we have J |= τ∗(f{E} = b)I iff there is a set X ⊆ TJ such that f(X ∪ TI) = b.

See proof on page 95.
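For intuition, case (b) for f = #sum can be decided by a classic subset-sum search over the weights that are possible but not certain; a naive sketch assuming integer weights:

```python
# Sketch for Proposition 55 (b) with f = #sum: is there X ⊆ T_J \ T_I with
# #sum(X ∪ T_I) = b?  Classic subset-sum over the set of reachable values.
def sum_can_equal(certain_weights, optional_weights, b):
    reachable = {sum(certain_weights)}       # the certain part always contributes
    for w in optional_weights:
        reachable |= {v + w for v in reachable}
    return b in reachable

# certain weight 2, optional weights 3 and 5: reachable sums are {2, 5, 7, 10}
sum_can_equal([2], [3, 5], 7)
```

The loop doubles the candidate set in the worst case, matching the NP-hardness of the decision problem stated above; #count only needs a cardinality interval, which explains its polynomial case.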

7.2. Algorithmic Refinements

The calls in Lines 6 and 7 in Algorithm 3 can sometimes be combined to calculate certain and possible atoms simultaneously. This can be done whenever a component does not contain recursive predicates. In this case, it is sufficient to just calculate possible atoms along with rule instances in Line 7, augmenting Algorithm 1 with an additional check to detect whether a rule instance produces a certain atom. Observe that this condition applies to all stratified components but can also apply to components depending on unstratified components. In fact, typical programs following the generate, define, and test methodology [46,53] of ASP, where the generate part uses choice rules [56] (see below), do not contain unstratified negation at all. When a grounder is combined with a solver built to store rules and perform inferences, one can optimize for the case that there are no negative recursive predicates in a component. In this case, it is sufficient to compute possible atoms along with their rule instances and leave the computation of certain atoms to the solver. Finally, note that gringo currently does not separate the calculation of certain and possible atoms, at the expense of computing a less precise approximate model and possibly additional rule instances.

8 Note that clingo's grounding algorithm does not attempt to solve these problems in all cases. It simply over- or underapproximates the satisfiability using Proposition 53.

Example 24. For the following example, gringo computes atom p(4) as unknown, but the algorithms in Section 6 identify it as true.

r(1, 4)    p(1) ← ¬q(1)
r(2, 3)    q(1) ← ¬p(1)
r(3, 1)    p(2)

p(Y) ← p(X) ∧ r(X,Y).

When grounding the last rule, gringo determines p(4) to be possible in the first iteration because p(1) is unknown at this point. In the second iteration, it detects that p(1) is a fact but does not use it for grounding again. If there were further rules depending negatively on predicate p/1, inapplicable rules might appear in gringo's output.
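That p(4) is indeed certain can be checked with a least fixpoint over the rules not involving negation; a small sketch with an ad-hoc atom encoding of our own:

```python
# Certain atoms of the example via a least fixpoint of its positive part:
# facts r(1,4), r(2,3), r(3,1), p(2) and the rule p(Y) <- p(X), r(X,Y).
facts = {("r", (1, 4)), ("r", (2, 3)), ("r", (3, 1)), ("p", (2,))}

def certain(atoms):
    atoms = set(atoms)
    while True:
        ps = {args[0] for name, args in atoms if name == "p"}
        rs = {args for name, args in atoms if name == "r"}
        derived = {("p", (y,)) for (x, y) in rs if x in ps}
        if derived <= atoms:            # nothing new: least fixpoint reached
            return atoms
        atoms |= derived

# the chain p(2) -> p(3) -> p(1) -> p(4) makes p(4) certain
```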

Another observation is that the loop in Algorithm 2 does not produce further rule instances in a second iteration for components without recursive predicates. Gringo maintains an index [23] for each positive body literal to speed up the matching of literals; whenever none of these indexes, used in rules of the component at hand, are updated, further iterations can be skipped.

Just like dlv's grounder, gringo adapts algorithms for semi-naive evaluation from the field of databases. In particular, it works best on linear programs [1], having at most one positive literal occurrence over a recursive predicate in a rule body. The program in Table 1 for the company controls problem is such a linear program because controls/2 is the only recursive predicate. Algorithm 1 can easily be adapted to efficiently ground linear programs by making sure that the recursive positive literal is selected first. We then only have to consider matches that induce atoms not already used for instantiations in previous iterations of the loop in Algorithm 2, reducing the amount of backtracking needed to find rule instances. In fact, the order in which literals are selected in Line 3 is crucial for the performance of Algorithm 1. Gringo uses an adaptation of the selection heuristics presented in [43] that additionally takes into account recursive predicates and terms with function symbols.
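The delta-based idea behind semi-naive evaluation can be pictured on an even simpler linear rule t(X,Y) ← t(X,Z) ∧ e(Z,Y): each round joins only the tuples derived in the previous round with e. A sketch, not gringo's implementation:

```python
# Semi-naive evaluation of the linear rule  t(X,Y) <- t(X,Z), e(Z,Y):
# in every round, only the previous round's new tuples (the delta) are joined
# with e, so no rule instance is derived twice.
def transitive_closure(e):
    t, delta = set(e), set(e)
    while delta:
        delta = {(x, y2) for (x, y) in delta for (y1, y2) in e if y == y1} - t
        t |= delta
    return t

edges = {(1, 2), (2, 3), (3, 4)}
# closure adds (1,3), (2,4) in round one and (1,4) in round two
```

Naive evaluation would instead re-join the whole of t with e in every round, re-deriving all earlier tuples.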

To avoid unnecessary backtracking when grounding general logic programs, gringo instantiates rules using an algorithm similar to the improved semi-naive evaluation with optimizations for linear rules as in [1].
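To make the delta-based idea concrete, the following Python sketch grounds the linear recursive rule p(Y) ← p(X) ∧ r(X, Y) in semi-naive fashion. It is an illustration only; all function and variable names are ours rather than gringo's.

```python
# A sketch (ours, not gringo's code) of semi-naive grounding for the
# linear recursive rule p(Y) <- p(X), r(X,Y): the recursive positive
# literal is matched only against atoms derived in the last iteration.

def ground_linear(facts_p, facts_r):
    """Return the derivable p-atoms and the grounded rule instances."""
    p = set(facts_p)
    delta = set(facts_p)          # atoms new in the previous iteration
    instances = []
    while delta:
        new = set()
        for x in delta:           # recursive literal selected first
            for (a, y) in facts_r:
                if a == x and y not in p:
                    instances.append((('p', y), [('p', x), ('r', x, y)]))
                    new.add(y)
        p |= new
        delta = new               # only fresh atoms feed the next round
    return p, instances

# Facts from Example 24 above (ignoring the negative part).
p_atoms, rule_instances = ground_linear({2}, {(2, 3), (3, 1), (1, 4)})
```

Because each new atom is matched against r/2 only once, the number of matching attempts stays linear in the number of derived instances, mirroring the optimization for linear programs described above.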


7.3. Capturing gringo’s input language

We presented aggregate programs where rule heads are simple atoms. Beyond that, gringo's input language offers more elaborate language constructs to ease modeling.

A prominent such construct is that of so-called choice rules [56]. Syntactically, one-element choice rules have the form {a} ← B, where a is an atom and B a body. Semantically, such a rule amounts to a ∨ ¬a ← B or, equivalently, a ← ¬¬a ∧ B. We can easily add support for grounding choice rules, that is, rules whose head is not a plain atom but an atom marked as a choice, by discarding choice rules when calculating certain atoms and treating them like normal rules when grounding possible atoms. A translation of head aggregates to aggregate rules and choice rules is given in [24]. Note that gringo implements further refinements to omit deriving head atoms if a head aggregate cannot be satisfied.
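The certain/possible distinction for choice rules can be illustrated by a small sketch; the rule representation and the helper derive are ours and merely mimic the described behavior.

```python
# Sketch (illustrative names, not gringo's internals) of how choice
# rules are discarded when computing certain atoms but treated like
# normal rules when computing possible atoms.

def derive(rules, certain):
    """Least fixpoint of the positive rules; a rule is a triple
    (head, body_atoms, is_choice)."""
    atoms = set()
    changed = True
    while changed:
        changed = False
        for head, body, is_choice in rules:
            if certain and is_choice:
                continue          # a choice head never becomes certain
            if set(body) <= atoms and head not in atoms:
                atoms.add(head)
                changed = True
    return atoms

prog = [('a', [], True),          # {a}.
        ('b', ['a'], False)]      # b <- a.
certain_atoms = derive(prog, certain=True)
possible_atoms = derive(prog, certain=False)
```

Here a is possible but not certain, and consequently so is b, which matches the intended reading of {a} as a ∨ ¬a.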

Conditional literals are another language feature that can be instantiated in a similar fashion as body aggregates. Gringo adapts the rewriting and propagation of body aggregates to also support the grounding of conditional literals.

Yet another important language feature are disjunctions in the heads of rules [33]. Like disjunctive logic programs, aggregate programs with disjunctive heads allow us to solve problems from the second level of the polynomial hierarchy. In fact, using Lukasiewicz's theorem [50], we can write a disjunctive rule of the form

a ∨ b← B

as the shifted strongly equivalent R-program:

a ← (b → a) ∧ B
b ← (a → b) ∧ B.

We can use this as a template to design grounding algorithms for disjunctive programs. In fact, gringo calculates the same approximate model for the disjunctive rule and the shifted program.
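The shift above is a purely syntactic transformation, which can be sketched as follows (the tuple representation of rules and literals is ours):

```python
# Sketch of the shift  a ∨ b <- B  =>  a <- (b -> a) ∧ B,
#                                      b <- (a -> b) ∧ B
# as a syntactic transformation on a rule representation of ours.

def shift(head_atoms, body):
    """Produce one rule per head atom, moving the remaining head
    atoms into the body as implications ('impl', b, a) for b -> a."""
    shifted = []
    for a in head_atoms:
        extra = [('impl', b, a) for b in head_atoms if b != a]
        shifted.append((a, extra + list(body)))
    return shifted

shifted_rules = shift(['a', 'b'], [('atom', 'p')])
```

Each resulting rule has a single head atom, so the grounding machinery for aggregate programs applies unchanged.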

The usage of negation as failure is restricted in R-programs. Note that any occurrence of a negated literal l in a rule body can be replaced by an auxiliary atom a, adding the rule a ← l to the program. The resulting program preserves the stable models modulo the auxiliary atoms. This translation can serve as a template for double negation or negation in aggregate elements as supported by gringo.
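This auxiliary-atom translation can be sketched as a small rewriting pass; the aux_i names are hypothetical stand-ins for fresh atoms, and the rule representation is ours.

```python
from itertools import count

# Sketch of replacing each negated body literal by a fresh auxiliary
# atom, adding the defining rule aux_i <- not x for each replaced
# literal; stable models are preserved modulo the auxiliary atoms.

def eliminate_negation(rules):
    """Replace ('not', x) body literals by auxiliary atoms."""
    fresh = count()
    cache, out, aux_rules = {}, [], []
    for head, body in rules:
        new_body = []
        for lit in body:
            if lit[0] == 'not':
                if lit not in cache:
                    cache[lit] = f"aux_{next(fresh)}"
                    aux_rules.append((cache[lit], [lit]))
                new_body.append(('atom', cache[lit]))
            else:
                new_body.append(lit)
        out.append((head, new_body))
    return out + aux_rules

rewritten = eliminate_negation([('p', [('not', 'q')])])
```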

Integrity constraints are a straightforward extension of logic programs. They can be grounded just like normal rules by deriving an auxiliary atom that stands for ⊥. Grounding can be stopped whenever the auxiliary atom is derived as certain. Integrity constraints also allow for supporting negated head atoms, which can be shifted to rule bodies [39], resulting in integrity constraints, and then treated like negation in rule bodies.


Term pools [24, 25] are a frequently used convenience feature of gringo. The grounder handles them by removing them in a rewriting step. For example, a rule of the form

h(X;Y, Z)← p(X;Y ), q(Z)

is factored out into the following rules:

h(X,Z)← p(X), q(Z)

h(X,Z)← p(Y ), q(Z)

h(Y, Z)← p(X), q(Z)

h(Y, Z)← p(Y ), q(Z)
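The factoring amounts to a cross product over the pooled argument positions, which can be sketched as follows (the representation of pools as tuples of alternatives is ours):

```python
from itertools import product

# Sketch of pool elimination as a cross product over pooled argument
# positions: one alternative is picked per pooled position, and every
# combination yields one pool-free rule.

def expand_pools(head_pools, body_pools):
    rules = []
    n = len(head_pools)
    for choice in product(*head_pools, *body_pools):
        rules.append((choice[:n], choice[n:]))
    return rules

# h(X;Y, Z) <- p(X;Y), q(Z): one pooled position in the head
# and one in the literal over p.
expanded = expand_pools([('X', 'Y')], [('X', 'Y')])
```

The four combinations correspond to the four factored rules above.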

We can then apply the grounding algorithms developed in Section 6.

To deal with variables ranging over integers, gringo supports interval terms [24, 25]. Such terms are handled by a translation to built-in range predicates. For example, the program

h(l..u)

for terms l and u is rewritten into

h(A)← rng(A, l, u)

by introducing auxiliary variable A and range atom rng(A, l, u). The range atom provides matches including all substitutions that assign integer values between l and u to A. Special care has to be taken regarding rule safety: the range atom can only provide bindings for variable A but needs the variables in the terms l and u to be provided elsewhere.
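How a range atom provides matches may be sketched as follows, assuming l and u are already bound to integers (as required by safety); the function name and representation are illustrative.

```python
# Sketch of how a range atom rng(A, l, u) provides matches: it extends
# a substitution with every integer value between l and u for A.
# Names and representation are ours, not gringo's.

def rng_matches(var, l, u, substitution):
    """Yield one extended substitution per integer in [l, u]; l and u
    must already be integers, i.e., bound elsewhere (rule safety)."""
    for value in range(l, u + 1):
        yield {**substitution, var: value}

matches = list(rng_matches('A', 1, 3, {}))
```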

A common feature used when writing logic programs are terms involving arithmetic expressions and assignments. Both influence which rules are considered safe by the grounder. For example, the rule

h(X, Y) ← p(X + Y, Y) ∧ X = Y + Y

is rewritten into

h(X, Y) ← p(A, Y) ∧ X = Y + Y ∧ A = X + Y

by introducing auxiliary variable A. The rule is safe because we can match the literals in the order given in the rewritten rule. The comparison X = Y + Y extends the substitution with an assignment for X, and the last comparison serves as a test. Gringo does not try to solve complicated equations but supports simple forms like the one given above.
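The matching order of the rewritten rule can be sketched as follows; the encoding of facts as pairs and the function name are ours.

```python
# Sketch of matching the rewritten rule
#   h(X,Y) <- p(A,Y) ∧ X = Y + Y ∧ A = X + Y
# in the given literal order (representation ours).

def match_rewritten(p_facts):
    for (a, y) in p_facts:   # match p(A, Y), binding A and Y
        x = y + y            # X = Y + Y acts as an assignment for X
        if a == x + y:       # A = X + Y acts as a test
            yield (x, y)     # head instance h(X, Y)

head_instances = set(match_rewritten({(3, 1), (5, 2)}))
```

For the fact p(3, 1) the assignment yields X = 2 and the test 3 = 2 + 1 succeeds, whereas p(5, 2) fails the test.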

Last but not least, gringo supports not just terms but also aggregates in assignments. To handle such aggregates, the rewriting and propagation of aggregates has to be extended. This is achieved by adding an additional variable to the aggregate replacement atoms (21), which is assigned by propagation. Additional care has to be taken during the rewriting to ensure that the rewritten rules are safe.


8. Related Work

This section situates our contributions in the literature, proceeding from theoretical aspects over algorithmic ones to implementations.

Splitting for infinitary formulas has been introduced in [35], generalizing results of [20, 40]. To this end, the concept of A-stable models is introduced [35]. We obtain the following relationship between our definition of a stable model relative to a set Ic and A-stable models: for an N-program P, we have that if X is a stable model of P relative to Ic, then X ∪ Ic is an (A \ Ic)-stable model of P. Similarly, we get that if X is an A-stable model of P, then S^{X\A}_P(X) is a stable model of P relative to X \ A. The difference between the two concepts is that we fix the atoms Ic in our definition, while A-stable models allow for assigning arbitrary truth values to the atoms not in A (cf. [35, Proposition 1]). With this, let us compare our handling of program sequences to symmetric splitting [35]. Let (Pi)i∈I be a refined instantiation sequence of an aggregate program P, and let F = ⋃_{i<j} τ∗(Pi) and G = ⋃_{i≥j} τ∗(Pi) for some j ∈ I such that H(F) ∩ H(G) = ∅. We can use the infinitary splitting theorem in [35] to calculate the stable models of F∧ ∧ G∧ through the H(F)- and A \ H(F)-stable models of F∧ ∧ G∧. Observe that instantiation sequences do not permit positive recursion between their components, and infinite walks are impossible because an aggregate program consists of finitely many rules inducing a finite dependency graph. Note that we additionally require the condition H(F) ∩ H(G) = ∅ because components can be split even if their head atoms overlap. Such a split can only occur if overlapping head atoms in preceding components are not involved in positive recursion.

Next, let us relate our operators to the ones defined in [60]. First of all, it is worthwhile to realize that the motivation in [60] is to conceive operators mimicking model expansion in id-logic by adding certain atoms. More precisely, let Φ, St, and Wf stand for the versions of the Fitting, stable, and well-founded operators defined in [60]. Then, we get the following relations to the operators defined in the previous sections:

St_{P,Ic}(J) = lfp(Φ_{P,Ic}(·, J))
             = lfp(T^{Ic}_{P_J}) ∪ Ic
             = S^{Ic}_P(J) ∪ Ic.

For the well-founded operator we obtain

Wf_{P,Ic}(I, J) = W^{Ic,Ic}_P(I, J) ⊔ Ic.

Our operators allow us to directly calculate the atoms derived by a program. The versions in [60] always include the input facts in their output, and the well-founded operator only takes certain but not possible atoms as input.

In fact, we use operators as in [13] to approximate the well-founded model and to obtain a ground program. While we apply operators to infinitary formulas (resulting from a translation of aggregates) as introduced in [60],


there has also been work on applying operators directly to aggregates. The authors of [66] provide an overview. Interestingly, the high complexity of approximating the aggregates pointed out in Proposition 55 has already been identified in [54].

Simplification can be understood as a combination of unfolding (dropping rules if a literal in the positive body is not among the head atoms of a program, i.e., not among the possible atoms) and negative reduction (dropping rules if an atom in the negative body is a fact, i.e., the literal is among the certain atoms) [6, 7]. Even the process of grounding can be seen as a directed way of applying unfolding (when matching positive body literals) and negative reduction (when matching negative body literals). When computing facts, only rules whose negative body can be removed using positive reduction are considered.
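Under the simple rule representation below (ours, not the paper's), unfolding and negative reduction can be sketched as:

```python
# Sketch of unfolding and negative reduction on ground rules
# represented as triples (head, positive_body, negative_body).

def simplify(rules, possible, certain):
    """Keep a rule only if all positive body atoms are possible
    (unfolding) and no negated body atom is certain (negative
    reduction)."""
    return [(h, pos, neg) for h, pos, neg in rules
            if set(pos) <= possible and not set(neg) & certain]

ground = [('a', ['p'], []),    # dropped: p is not possible
          ('b', [], ['q']),    # dropped: q is certain
          ('c', ['r'], ['s'])]
kept = simplify(ground, possible={'r'}, certain={'q'})
```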

The algorithms in [42] to calculate well-founded models perform a computation inspired by the alternating sequence used to define the well-founded model in [65]. Our work differs in so far as we are not primarily interested in computing the well-founded model but the grounding of a program. Hence, our algorithms stop after the second application of the stable operator (the first computes certain and the second possible atoms). At this point, a grounder can use algorithms specialized for propositional programs to simplify the logic program at hand. Algorithmic refinements for normal logic programs as proposed in [42] also apply in our setting.

Last but not least, let us outline the evolution of grounding systems over the last two decades.

The lparse [57] grounder introduced domain- or omega-restricted programs [58]. Unlike safety, omega-restrictedness is not modular. That is, the union of two omega-restricted programs is not necessarily omega-restricted, while the union of two safe programs is safe. Apart from this, lparse supports recursive monotone and antimonotone aggregates. However, our company controls encoding in Table 1 is not accepted because it is not omega-restricted. For example, variable X in the second aggregate element in Table 1 needs a domain predicate. Even if we supplied such a domain predicate, lparse would instantiate variable X with all terms provided by the domain predicate, resulting in a large grounding. As noted in [21], recursive nonmonotone aggregates (sum aggregates with negative weights) are not supported correctly by lparse.

Gringo 1 and 2 add support for lambda-restricted programs [32], extending omega-restricted programs. This augments the set of predicates that can be used for instantiation but is still restricted compared to safe programs. That is, lambda-restrictedness is also not modular, and our company controls program is still not accepted. At the time, the development goal was to be compatible with lparse but extend the class of accepted programs. Notably, gringo 2 adds support for additional aggregates [28]. Given its origin, gringo up to version 4 handles recursive nonmonotone aggregates in the same incorrect way as lparse.


The grounder of the dlv system has been the first one to implement grounding algorithms based on semi-naive evaluation [15]. Furthermore, it implements various techniques to efficiently ground logic programs [18, 43, 55]. The dlvA system is the first dlv-based system to support recursive aggregates [12]; nowadays, they are also available in recent versions of idlv [10].

Gringo 3 caught up with dlv by being the first gringo version to implement grounding algorithms based on semi-naive evaluation [27]. The system accepts safe rules but still requires lambda-restrictedness for predicates within aggregates. Hence, our company controls encoding is still not accepted.

Gringo 4 implements grounding of aggregates with algorithms similar to the ones presented in Section 6 [29]. Hence, it is the first version that accepts our company controls encoding.

Finally, gringo 5 refines the translation of aggregates as in [3] to properly support nonmonotone recursive aggregates, and refines the semantics of pools and undefined arithmetics [24].

Another system with a grounding component is the idp system [11]. Its grounder instantiates a theory by assigning sorts to variables. Even though it supports inductive definitions, it relies solely on the sorts of variables [67] to instantiate a theory. In case of inductive definitions, this can lead to instances of definitions that can never be applied. We believe that the algorithms presented in Section 6 could also be implemented in the idp system, decreasing the instantiation size of some problems (e.g., the company controls problem presented in the introduction).

9. Conclusion

We have provided a first comprehensive elaboration of the theoretical foundations of grounding in ASP. This was enabled by the establishment of semantic underpinnings of ASP's modeling language in terms of infinitary (ground) formulas [24, 37]. Accordingly, we start by identifying a restricted class of infinitary programs, namely R-programs, by limiting the usage of implications. Such programs allow for tighter semantic characterizations than general F-programs, while being expressive enough to capture logic programs with aggregates. Interestingly, we rely on id-well-founded models [8, 61] to approximate the stable models of R-programs (and simplify them in a stable-models preserving way). This is due to the fact that the id-well-founded operator enjoys monotonicity, which lends itself to the characterization of iterative grounding procedures. The actual semantics of non-ground aggregate programs is then defined via a translation to R-programs. This setup allows us to characterize the inner workings of our grounding algorithms for aggregate programs in terms of the operators introduced for R-programs. It turns out that grounding amounts to calculating an approximation of the id-well-founded model together with a ground program simplified with that model. This does not only allow us to prove the correctness of our grounding algorithms but moreover to characterize the output of a grounder like gringo


in terms of established formal means. To this end, we have shown how to split aggregate programs into components and how to compute their approximate models (and corresponding simplified ground programs). The union of these models corresponds to an approximate model of the whole ground program (and the union of the simplified ground programs corresponds to a simplification of the whole ground program). Even though we limit ourselves to R-programs, we capture the core aspects of grounding: a monotonically increasing Herbrand base and on-the-fly (ground) rule generation. Additional features of gringo's input language are relatively straightforward to accommodate by extending the algorithms presented in this paper.

For reference, we implemented the presented algorithms in a prototypical grounder, µ-gringo, supporting aggregate programs (see Footnote 1). While it is written to be as concise as possible and not with efficiency in mind, it may serve as a basis for experiments with custom grounder implementations. The actual gringo system supports a much larger language fragment. There are some differences compared to the algorithms presented here. First, certain atoms are removed from rule bodies unless this is explicitly disabled via a command line option. Second, translation τ∗ is only used to characterize aggregate propagation. In practice, gringo translates ground aggregates to monotone aggregates [3]. Further translation [5] or even native handling [26] of them is left to the solver. Finally, in some cases, gringo might produce more rules than the algorithms presented above. This should not affect typical programs. A tighter integration of grounder and solver to further reduce the number of ground rules is an interesting topic of future research.

References

[1] S. Abiteboul, R. Hull, and V. Vianu, Foundations of databases, Addison-Wesley, 1995.

[2] M. Alviano, C. Dodaro, N. Leone, and F. Ricca, Advances in WASP, Proceedings of the thirteenth international conference on logic programming and nonmonotonic reasoning (lpnmr'15), 2015, pp. 40–54.

[3] M. Alviano, W. Faber, and M. Gebser, Rewriting recursive aggregates in answer set programming: Back to monotonicity, Theory and Practice of Logic Programming 15 (2015), no. 4-5, 559–573.

[4] C. Baral, G. Brewka, and J. Schlipf (eds.), Proceedings of the ninth international conference on logic programming and nonmonotonic reasoning (lpnmr'07), Lecture Notes in Artificial Intelligence, vol. 4483, Springer-Verlag, 2007.

[5] J. Bomanson, M. Gebser, and T. Janhunen, Improving the normalization of weight rules in answer set programs, Proceedings of the fourteenth european conference on logics in artificial intelligence (jelia'14), 2014, pp. 166–180.

[6] S. Brass and J. Dix, Semantics of (disjunctive) logic programs based on partial evaluation, Journal of Logic Programming 40 (1999), no. 1, 1–46.

[7] S. Brass, J. Dix, B. Freitag, and U. Zukowski, Transformation-based bottom-up computation of the well-founded model, Theory and Practice of Logic Programming 1 (2001), no. 5, 497–538.

[8] M. Bruynooghe, M. Denecker, and M. Truszczynski, ASP with first-order logic and definitions, AI Magazine 37 (2016), no. 3, 69–80.


[9] F. Calimeri, S. Cozza, G. Ianni, and N. Leone, Computable functions in ASP: Theory and implementation, Proceedings of the twenty-fourth international conference on logic programming (iclp'08), 2008, pp. 407–424.

[10] F. Calimeri, D. Fusca, S. Perri, and J. Zangari, I-DLV: the new intelligent grounder of DLV, Intelligenza Artificiale 11 (2017), no. 1, 5–20.

[11] B. De Cat, B. Bogaerts, M. Bruynooghe, and M. Denecker, Predicate logic as a modelling language: The IDP system, CoRR abs/1401.6312 (2014).

[12] T. Dell'Armi, W. Faber, G. Ielpa, N. Leone, and G. Pfeifer, Aggregate functions in disjunctive logic programming: Semantics, complexity, and implementation in DLV, Proceedings of the eighteenth international joint conference on artificial intelligence (ijcai'03), 2003, pp. 847–852.

[13] M. Denecker, V. Marek, and M. Truszczynski, Approximations, stable operators, well-founded fixpoints and applications in nonmonotonic reasoning, Logic-based artificial intelligence, 2000, pp. 127–144.

[14] T. Eiter, W. Faber, and M. Truszczynski (eds.), Proceedings of the sixth international conference on logic programming and nonmonotonic reasoning (lpnmr'01), Lecture Notes in Computer Science, vol. 2173, Springer-Verlag, 2001.

[15] T. Eiter, N. Leone, C. Mateis, G. Pfeifer, and F. Scarcello, A deductive system for nonmonotonic reasoning, Proceedings of the fourth international conference on logic programming and nonmonotonic reasoning (lpnmr'97), 1997, pp. 363–374.

[16] E. Erdem, J. Lee, Y. Lierler, and D. Pearce (eds.), Correct reasoning: Essays on logic-based AI in honour of Vladimir Lifschitz, Lecture Notes in Computer Science, vol. 7265, Springer-Verlag, 2012.

[17] W. Faber, N. Leone, and S. Perri, The intelligent grounder of DLV, Correct reasoning: Essays on logic-based AI in honour of Vladimir Lifschitz, 2012, pp. 247–264.

[18] W. Faber, N. Leone, S. Perri, and G. Pfeifer, Efficient instantiation of disjunctive databases, Technical Report DBAI-TR-2001-44, Technische Universitat Wien, 2001.

[19] P. Ferraris, Logic programs with propositional connectives and aggregates, ACM Transactions on Computational Logic 12 (2011), no. 4, 25.

[20] P. Ferraris, J. Lee, V. Lifschitz, and R. Palla, Symmetric splitting in the general theory of stable models, Proceedings of the twenty-first international joint conference on artificial intelligence (ijcai'09), 2009, pp. 797–803.

[21] P. Ferraris and V. Lifschitz, Weight constraints as nested expressions, Theory and Practice of Logic Programming 5 (2005), no. 1-2, 45–74.

[22] M. Garcia de la Banda and E. Pontelli (eds.), Proceedings of the twenty-fourth international conference on logic programming (iclp'08), Lecture Notes in Computer Science, vol. 5366, Springer-Verlag, 2008.

[23] H. Garcia-Molina, J. Ullman, and J. Widom, Database systems: The complete book, 2nd ed., Pearson Education, 2009.

[24] M. Gebser, A. Harrison, R. Kaminski, V. Lifschitz, and T. Schaub, Abstract Gringo, Theory and Practice of Logic Programming 15 (2015), no. 4-5, 449–463.

[25] M. Gebser, R. Kaminski, B. Kaufmann, M. Lindauer, M. Ostrowski, J. Romero, T. Schaub, and S. Thiele, Potassco user guide, 2nd ed., University of Potsdam, 2015.

[26] M. Gebser, R. Kaminski, B. Kaufmann, and T. Schaub, On the implementation of weight constraint rules in conflict-driven ASP solvers, Proceedings of the twenty-fifth international conference on logic programming (iclp'09), 2009, pp. 250–264.

[27] M. Gebser, R. Kaminski, A. Konig, and T. Schaub, Advances in gringo series 3, Proceedings of the eleventh international conference on logic programming and nonmonotonic reasoning (lpnmr'11), 2011, pp. 345–351.

[28] M. Gebser, R. Kaminski, M. Ostrowski, T. Schaub, and S. Thiele, On the input language of ASP grounder gringo, Proceedings of the tenth international conference on logic programming and nonmonotonic reasoning (lpnmr'09), 2009, pp. 502–508.


[29] M. Gebser, R. Kaminski, and T. Schaub, Grounding recursive aggregates: Preliminary report, Proceedings of the third workshop on grounding, transforming, and modularizing theories with variables (gttv'15), 2015.

[30] M. Gebser, B. Kaufmann, and T. Schaub, Conflict-driven answer set solving: From theory to practice, Artificial Intelligence 187-188 (2012), 52–89.

[31] M. Gebser and T. Schaub, Modeling and language extensions, AI Magazine 37 (2016), no. 3, 33–44.

[32] M. Gebser, T. Schaub, and S. Thiele, Gringo: A new grounder for answer set programming, Proceedings of the ninth international conference on logic programming and nonmonotonic reasoning (lpnmr'07), 2007, pp. 266–271.

[33] M. Gelfond and V. Lifschitz, Classical negation in logic programs and disjunctive databases, New Generation Computing 9 (1991), 365–385.

[34] E. Giunchiglia, Y. Lierler, and M. Maratea, Answer set programming based on propositional satisfiability, Journal of Automated Reasoning 36 (2006), no. 4, 345–377.

[35] A. Harrison and V. Lifschitz, Stable models for infinitary formulas with extensional atoms, Theory and Practice of Logic Programming 16 (2016), no. 5-6, 771–786.

[36] A. Harrison, V. Lifschitz, D. Pearce, and A. Valverde, Infinitary equilibrium logic and strongly equivalent logic programs, Artificial Intelligence 246 (2017), 22–33.

[37] A. Harrison, V. Lifschitz, and F. Yang, The semantics of gringo and infinitary propositional formulas, Proceedings of the fourteenth international conference on principles of knowledge representation and reasoning (kr'14), 2014.

[38] P. Hill and D. Warren (eds.), Proceedings of the twenty-fifth international conference on logic programming (iclp'09), Lecture Notes in Computer Science, vol. 5649, Springer-Verlag, 2009.

[39] T. Janhunen, On the effect of default negation on the expressiveness of disjunctive rules, Proceedings of the sixth international conference on logic programming and nonmonotonic reasoning (lpnmr'01), 2001, pp. 93–106.

[40] T. Janhunen, E. Oikarinen, H. Tompits, and S. Woltran, Modularity aspects of disjunctive stable models, Proceedings of the ninth international conference on logic programming and nonmonotonic reasoning (lpnmr'07), 2007, pp. 175–187.

[41] B. Kaufmann, N. Leone, S. Perri, and T. Schaub, Grounding and solving in answer set programming, AI Magazine 37 (2016), no. 3, 25–32.

[42] D. Kemp, P. Stuckey, and D. Srivastava, Magic sets and bottom-up evaluation of well-founded models, Logic programming, proceedings of the 1991 international symposium, 1991, pp. 337–351.

[43] N. Leone, S. Perri, and F. Scarcello, Improving ASP instantiators by join-ordering methods, Proceedings of the sixth international conference on logic programming and nonmonotonic reasoning (lpnmr'01), 2001, pp. 280–294.

[44] N. Leone, G. Pfeifer, W. Faber, T. Eiter, G. Gottlob, S. Perri, and F. Scarcello, The DLV system for knowledge representation and reasoning, ACM Transactions on Computational Logic 7 (2006), no. 3, 499–562.

[45] Y. Lierler and V. Lifschitz, One more decidable class of finitely ground programs, Proceedings of the twenty-fifth international conference on logic programming (iclp'09), 2009, pp. 489–493.

[46] V. Lifschitz, Answer set programming and plan generation, Artificial Intelligence 138 (2002), no. 1-2, 39–54.

[47] ——, Twelve definitions of a stable model, Proceedings of the twenty-fourth international conference on logic programming (iclp'08), 2008, pp. 37–51.

[48] V. Lifschitz and H. Turner, Splitting a logic program, Proceedings of the eleventh international conference on logic programming, 1994, pp. 23–37.

[49] F. Lin and Y. Zhao, ASSAT: computing answer sets of a logic program by SAT solvers, Artificial Intelligence 157 (2004), no. 1-2, 115–137.


[50] J. Lukasiewicz, Die Logik und das Grundlagenproblem, Les Entretiens de Zurich sur les Fondements et la Methode des Sciences Mathematiques 12 (1941), no. 6-9, 82–100.

[51] S. Martello and P. Toth, Knapsack problems: Algorithms and computer implementations, John Wiley & Sons, 1990.

[52] I. Mumick, H. Pirahesh, and R. Ramakrishnan, The magic of duplicates and aggregates, Proceedings of the sixteenth international conference on very large data bases (vldb'90), 1990, pp. 264–277.

[53] I. Niemela, Answer set programming without unstratified negation, Proceedings of the twenty-fourth international conference on logic programming (iclp'08), 2008, pp. 88–92.

[54] N. Pelov, M. Denecker, and M. Bruynooghe, Well-founded and stable semantics of logic programs with aggregates, Theory and Practice of Logic Programming 7 (2007), no. 3, 301–353.

[55] S. Perri, F. Scarcello, G. Catalano, and N. Leone, Enhancing DLV instantiator by backjumping techniques, Annals of Mathematics and Artificial Intelligence 51 (2007), no. 2-4, 195–228.

[56] P. Simons, I. Niemela, and T. Soininen, Extending and implementing the stable model semantics, Artificial Intelligence 138 (2002), no. 1-2, 181–234.

[57] T. Syrjanen, Lparse 1.0 user's manual, 2001.

[58] ——, Omega-restricted logic programs, Proceedings of the sixth international conference on logic programming and nonmonotonic reasoning (lpnmr'01), 2001, pp. 267–279.

[59] A. Tarski, A lattice-theoretic fixpoint theorem and its applications, Pacific Journal of Mathematics 5 (1955), 285–309.

[60] M. Truszczynski, Connecting first-order ASP and the logic FO(ID) through reducts, Correct reasoning: Essays on logic-based AI in honour of Vladimir Lifschitz, 2012, pp. 543–559.

[61] ——, An introduction to the stable and well-founded semantics of logic programs, Declarative logic programming: Theory, systems, and applications, 2018, pp. 121–177.

[62] H. Turner, Strong equivalence made easy: nested expressions and weight constraints, Theory and Practice of Logic Programming 3 (2003), no. 4-5, 609–622.

[63] J. Ullman, Principles of database and knowledge-base systems, Computer Science Press, 1988.

[64] M. van Emden and R. Kowalski, The semantics of predicate logic as a programming language, Journal of the ACM 23 (1976), no. 4, 733–742.

[65] A. Van Gelder, The alternating fixpoint of logic programs with negation, Journal of Computer and System Sciences 47 (1993), 185–221.

[66] L. Vanbesien, M. Bruynooghe, and M. Denecker, Analyzing semantics of aggregate answer set programming using approximation fixpoint theory, CoRR abs/2104.14789 (2021).

[67] J. Wittocx, M. Marien, and M. Denecker, Grounding FO and FO(ID) with bounds, Journal of Artificial Intelligence Research 38 (2010), 223–269.


Appendix A. Proofs

Proposition 2. Sets H1 and H2 of infinitary formulas are strongly equivalent iff H1^I and H2^I are classically equivalent for all two-valued interpretations I.

Proof of Proposition 2. Let I and J be two-valued interpretations, and H be an infinitary formula. Clearly, I |= H^J iff I ∩ J |= H^J. Thus, we only need to consider interpretations such that I ⊆ J. By Lemma 1 in [36], we have that I |= H^J iff (I, J) is an HT-model of H. The proposition holds because, by Theorem 3 Item (iii) in [36], we have that H1 and H2 are strongly equivalent iff they have the same HT-models. □

Proposition 3. The F-program P has the same stable models as the formula {B(r) → H(r) | r ∈ P}∧.

Proof of Proposition 3. Let F = {B(r) → H(r) | r ∈ P}∧.

First, we consider the case I |= F. By Lemma 1 in [60], this implies that P^I and F^I are classically equivalent and thus have the same minimal models.9 Thus, I is a stable model of P iff I is a stable model of F.

Second, we consider the case I ⊭ F and show that I is neither a stable model of F nor of P. Proposition 1 in [60] states that I is a model of F iff I is a model of F^I. Thus, I is not a stable model of F. Furthermore, because I ⊭ F, there is a rule r ∈ P such that I ⊭ B(r) → H(r). Consequently, we have I |= B(r) and I ⊭ H(r). Using the above proposition again, we get I |= B(r)^I. Because I |= B(r)^I and I ⊭ H(r), we get I ⊭ r^I and in turn I ⊭ P^I. Thus, I is not a stable model of P either. □

Proposition 4. Let F be a formula, and I and J be interpretations.
If F is positive and I ⊆ J, then I |= F implies J |= F.

Proof of Proposition 4. This property can be shown by induction over the rank of the formula. □
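For the interested reader, the induction can be sketched as follows; this is our sketch, assuming that positive formulas are exactly those built from atoms, conjunctions, and disjunctions (no implications):

```latex
% Sketch of the induction for Proposition 4, with $I \subseteq J$.
\textit{Base.} $F$ is an atom $a$: from $I \models a$ we get
$a \in I \subseteq J$, and hence $J \models a$.

\textit{Step.} $F = \mathcal{H}^\wedge$: $I \models \mathcal{H}^\wedge$
implies $I \models G$ for all $G \in \mathcal{H}$; by the induction
hypothesis, $J \models G$ for all $G \in \mathcal{H}$, and thus
$J \models \mathcal{H}^\wedge$. The case $F = \mathcal{H}^\vee$ is
analogous, using some $G \in \mathcal{H}$ with $I \models G$.
```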

Proposition 5. Let F be a formula, and I and J be interpretations.
Then,

(a) if F is positive then F^I is positive,
(b) I |= F iff I |= F^I,
(c) if F is strictly positive and I ⊆ J then I |= F iff I |= F^J.

Proof of Proposition 5.

Property (a). Because the reduct only replaces subformulas by ⊥, the resulting formula is still positive.

Property (b). Corresponds to Proposition 1 in [60].

Property (c). This property can be shown by induction over the rank of the formula. □

9 To be precise, Lemma 1 in [60] is stated for a set of formulas, which can be understood as an infinitary conjunction.


Proposition 6. Let F be a formula, and I, J, and X be interpretations.
Then,

(a) F_I is positive,
(b) I |= F iff I |= F_I,
(c) if F is positive then F = F_I, and
(d) if I ⊆ J then X |= F_J implies X |= F_I.

Proof of Proposition 6.

Property (a). Because the id-reduct replaces all negative occurrences of atoms, the resulting formula is positive.

Property (b). This property is easy to see because if the reduct replaces an atom a, it is replaced with either ⊤ or ⊥ depending on whether I |= a or I ⊭ a. This does not change the satisfaction of the subformula w.r.t. I.

Property (c). Because a positive formula does not contain negative occurrences of atoms, it is not changed by the id-reduct.

Property (d). We prove by induction over the rank of formula F that

(24) X |= F_J implies X |= F_I, and
(25) X |= F_I implies X |= F_J.

Base. We consider the case that F is a formula of rank 0.

F is a formula of rank 0 implies F is an atom.

First, we show (24). We assume X |= F_J:

F is an atom implies F_I = F_J = F.
F_I = F_J and X |= F_J implies X |= F_I.

Second, we show (25). We assume X |= F_I:

F is an atom and X |= F_I implies F_I = ⊤.
F_I = ⊤ implies F ∈ I.
F ∈ I and I ⊆ J implies F ∈ J.
F is an atom and F ∈ J implies F_J = ⊤.
F_J = ⊤ implies X |= F_J.

Hypothesis. We assume that (24) and (25) hold for formulas F of rank smaller than i.

Step. We only show (24) because (25) can be shown in a similar way. We consider formulas F of rank i.

First, we consider the case that F is a conjunction of form H∧.

X |= (H∧)_J implies X |= G_J for all G ∈ H.
X |= G_J implies X |= G_I by Hypothesis.
X |= G_I for all G ∈ H implies X |= (H∧)_I.


The case for disjunctions can be proven in a similar way.
Last, we consider the case that F is an implication of form G → H. Observe that

F_I = G_I → H_I and
F_J = G_J → H_J.

First, we consider the case X ⊭ G_J:

X ⊭ G_J implies X ⊭ G_I by Hypothesis.
X ⊭ G_I implies X |= F_I.

Second, we consider the case X |= H_J:

X |= H_J implies X |= H_I by Hypothesis.
X |= H_I implies X |= F_I. □
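To make the id-reduct concrete, here is a small executable sanity check of Proposition 6. It assumes a hypothetical tuple-based formula encoding and reads the id-reduct as replacing atoms at negative (antecedent) positions by their truth value w.r.t. I; it is a sketch under that assumption, not the paper's formal machinery.

```python
from itertools import combinations

ATOMS = ["a", "b", "c"]

def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def sat(I, F):
    # classical satisfaction over the tuple encoding
    tag = F[0]
    if tag == "top": return True
    if tag == "bot": return False
    if tag == "atom": return F[1] in I
    if tag == "and": return all(sat(I, G) for G in F[1])
    if tag == "or": return any(sat(I, G) for G in F[1])
    if tag == "imp": return (not sat(I, F[1])) or sat(I, F[2])

def id_reduct(F, I, positive=True):
    # replace atoms at negative positions by their truth value w.r.t. I
    tag = F[0]
    if tag in ("top", "bot"): return F
    if tag == "atom":
        if positive: return F
        return ("top",) if F[1] in I else ("bot",)
    if tag in ("and", "or"):
        return (tag, [id_reduct(G, I, positive) for G in F[1]])
    return ("imp", id_reduct(F[1], I, not positive), id_reduct(F[2], I, positive))

def is_positive(F, positive=True):
    # no atom occurs at a negative position
    tag = F[0]
    if tag in ("top", "bot"): return True
    if tag == "atom": return positive
    if tag in ("and", "or"): return all(is_positive(G, positive) for G in F[1])
    return is_positive(F[1], not positive) and is_positive(F[2], positive)

# illustrative formula: a -> (b or (c -> a))
F = ("imp", ("atom", "a"),
     ("or", [("atom", "b"), ("imp", ("atom", "c"), ("atom", "a"))]))

for I in powerset(ATOMS):
    FI = id_reduct(F, I)
    assert is_positive(FI)                     # property (a)
    assert sat(I, F) == sat(I, FI)             # property (b)
    assert id_reduct(FI, I) == FI              # property (c): positive formulas unchanged
    for J in powerset(ATOMS):
        if I <= J:
            FJ = id_reduct(F, J)
            for X in powerset(ATOMS):
                if sat(X, FJ):                 # property (d)
                    assert sat(X, FI)
print("Proposition 6 checked on all interpretations over", ATOMS)
```

Brute-forcing all interpretations over three atoms is enough here because the properties are purely structural in the formula.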

Lemma 10. Let P be an F-program. Then, the well-founded model WM(P) of P is consistent.

Proof of Lemma 10. This lemma follows from Proposition 14 in [13] observing that the well-founded operator is a monotone symmetric operator. The proposition is actually a bit more general, stating that the operator maps any consistent four-valued interpretation to a consistent four-valued interpretation. □

Lemma 56. Let O and O′ be monotone operators over a complete lattice (L, ≤) with O′(x) ≤ O(x) for each x ∈ L.

Then, we get x′ ≤ x where x′ and x are the least fixed points of O′ and O, respectively.

Proof of Lemma 56. Let y be a prefixed point of O. We have O(y) ≤ y. Because O′(y) ≤ O(y), we get O′(y) ≤ y. So each prefixed point of O is also a prefixed point of O′.

Let S′ and S be the sets of all prefixed points of O′ and O, respectively. We obtain S ⊆ S′. By Theorem 1 (a), we get that x′ is the greatest lower bound of S′. Observe that x′ is thus a lower bound for S. By construction of S, we have x ∈ S. Hence, we get x′ ≤ x. □
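On a finite powerset lattice, Lemma 56 can be observed directly by computing least fixed points via Kleene iteration from the bottom element. The two operators below are illustrative assumptions, chosen only so that O′(x) ⊆ O(x) holds pointwise.

```python
def lfp(op):
    # Kleene iteration from the bottom element of the powerset lattice;
    # for a monotone operator on a finite lattice this reaches the least fixed point
    x = frozenset()
    while True:
        y = op(x)
        if y == x:
            return x
        x = y

# two monotone operators on subsets of {0,1,2,3} with O2(x) ⊆ O1(x) for every x
def O1(x):
    return frozenset({0, 1} | ({2} if 1 in x else set()) | ({3} if 2 in x else set()))

def O2(x):
    return frozenset({0} | ({2} if 1 in x else set()))

x1, x2 = lfp(O1), lfp(O2)
assert x2 <= x1          # least fixed points are ordered as the lemma states
print(sorted(x2), sorted(x1))
```

Here O1 closes up to {0,1,2,3} while O2 stops at {0}, illustrating that a pointwise-smaller monotone operator has a smaller least fixed point.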

Lemma 57. Let P and P′ be F-programs and I be an interpretation. Then, P′ ⊆ P implies S_{P′}(I) ⊆ S_P(I).

Proof of Lemma 57. This lemma is a direct consequence of Lemma 56, observing that the one-step provability operator derives fewer consequences for P′. □

Lemma 58. Let P be an R-program, I be a two-valued interpretation, and J = S_P(I).

Then, if X is a stable model of P with I ⊆ X and I ⊆ J, we have X ⊆ J.


Proof of Lemma 58. Because X is a stable model of P, it is the only minimal model of P^X. Furthermore, we have that J is a model of P_I. To show that X ⊆ J, we show that J is also a model of P^X. For this, it is enough to show that for each rule r ∈ P we have J ⊭ B(r)_I implies J ⊭ B(r)^X. We prove inductively over the rank of the formula F = B(r) that J ⊭ F_I implies J ⊭ F^X.

Base. We consider the case that F is a formula of rank 0.
If X ⊭ F, we get J ⊭ F^X because F^X = ⊥. Thus, we only have to consider the case X |= F:

F is a formula of rank 0 implies F is an atom.
F is an atom implies F_I = F.
X |= F and F is an atom implies F^X = F.
F_I = F and F^X = F implies F_I = F^X.
J ⊭ F_I and F_I = F^X implies J ⊭ F^X.

Hypothesis. We assume that J ⊭ F_I implies J ⊭ F^X holds for formulas F of ranks smaller than i.

Step. We consider the case that F is a formula of rank i. As in the base case, we only have to consider the case X |= F. Furthermore, we have to distinguish the cases that F is a conjunction, disjunction, or implication.

We first consider the case that F is a conjunction of form H^∧:

X |= F implies F^X = {G^X | G ∈ H}^∧.
J ⊭ F_I and F_I = {G_I | G ∈ H}^∧ implies J ⊭ G_I for some G ∈ H.
G ∈ H and F has rank i implies G has rank less than i.
J ⊭ G_I and G has rank less than i implies J ⊭ G^X by Hypothesis.
J ⊭ G^X and F^X = {G^X | G ∈ H}^∧ implies J ⊭ F^X.

The case that F is a disjunction can be shown in a similar way to the case that F is a conjunction.

Last, we consider the case that F is an implication of form G → H. Observe that G is positive because F has no occurrences of implications in its antecedent and, furthermore, given that F is a formula of rank i, H is a formula of rank less than i.

We show I |= G:

J ⊭ F_I and F_I = G_I → H_I implies J |= G_I.
J |= G_I and G is positive implies I |= G because G_I ≡ ⊤.


We show J |= G^X:

G is positive, I ⊆ X, and I |= G implies X |= G by Proposition 4.
X |= F, X |= G, and F = G → H implies X |= H.
G is positive, I ⊆ X, and I |= G implies I |= G^X by Proposition 5 (c).
G is positive implies G^X is positive by Proposition 5 (a).
G^X is positive, I ⊆ J, and I |= G^X implies J |= G^X by Proposition 4.

We show J ⊭ H^X:

I |= G and F_I = G_I → H_I implies F_I ≡ H_I because G_I ≡ ⊤.
F_I ≡ H_I and J ⊭ F_I implies J ⊭ H_I.
J ⊭ H_I and H has rank less than i implies J ⊭ H^X by Hypothesis.

Because X |= F, we have F^X = G^X → H^X. Using J |= G^X and J ⊭ H^X, we get J ⊭ F^X. □

Theorem 11. Let P be an R-program and (I, J) be the well-founded model of P.

If X is a stable model of P, then I ⊆ X ⊆ J.

Proof of Theorem 11. Let X be a stable model of P. We prove by transfinite induction over the sequence of postfixed points leading to the well-founded model:

(I_0, J_0) = (∅, Σ),
(I_{α+1}, J_{α+1}) = W_P(I_α, J_α) for ordinals α, and
(I_β, J_β) = (⋃_{α<β} I_α, ⋂_{α<β} J_α) for limit ordinals β.

We have that α < β implies (I_α, J_α) ≤p (I_β, J_β) for ordinals α and β, that I_α ⊆ J_α for all ordinals α, and that there is a least ordinal α such that (I, J) = (I_α, J_α).

Base. We have I_0 ⊆ X ⊆ J_0.

Hypothesis. We assume I_β ⊆ X ⊆ J_β for all ordinals β < α.

Step. If α = β + 1 is a successor ordinal, we have

(I_α, J_α) = W_P(I_β, J_β) = (S_P(J_β), S_P(I_β)).

By the induction hypothesis, we have I_β ⊆ X ⊆ J_β.


First, we show I_α ⊆ X:

X is a (stable) model implies S_P(X) ⊆ X.
X ⊆ J_β implies S_P(J_β) ⊆ S_P(X).
S_P(X) ⊆ X and S_P(J_β) ⊆ S_P(X) implies S_P(J_β) ⊆ X.
I_α = S_P(J_β) and S_P(J_β) ⊆ X implies I_α ⊆ X.

Second, we show X ⊆ J_α:

β < α implies (I_β, J_β) ≤p (I_α, J_α).
(I_β, J_β) ≤p (I_α, J_α) and I_α ⊆ J_α implies I_β ⊆ J_α.
X is a stable model, I_β ⊆ X, J_α = S_P(I_β), and I_β ⊆ J_α implies X ⊆ J_α by Lemma 58.

We have shown I_α ⊆ X ⊆ J_α for successor ordinals.
If α is a limit ordinal, we have

(I_α, J_α) = (⋃_{β<α} I_β, ⋂_{β<α} J_β).

Let x ∈ I_α. There must be an ordinal β < α such that x ∈ I_β. Since I_β ⊆ X by the hypothesis, we have x ∈ X. Thus, I_α ⊆ X.
Let x ∈ X. For each ordinal β < α we have x ∈ J_β because X ⊆ J_β by the hypothesis. Thus, we get x ∈ J_α. It follows that X ⊆ J_α.
We have shown I_α ⊆ X ⊆ J_α for limit ordinals. □
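For finite normal programs the transfinite sequence collapses to a plain loop, so Theorem 11 can be observed directly. The sketch below assumes propositional normal rules, computes the well-founded model by alternating the stable operator, and checks the bracketing I ⊆ X ⊆ J for every stable model; the example program is purely illustrative.

```python
from itertools import combinations

ATOMS = ["a", "b", "c", "e", "f"]
# normal rules: (head, positive body, negative body)
P = [("e", [], []), ("f", ["e"], []),
     ("a", [], ["b"]), ("b", [], ["a"]), ("c", ["a"], [])]

def S(P, I):
    # stable operator: least model of the reduct P^I, via forward chaining
    reduct = [(h, pos) for (h, pos, neg) in P if not set(neg) & I]
    M, changed = set(), True
    while changed:
        changed = False
        for h, pos in reduct:
            if set(pos) <= M and h not in M:
                M.add(h)
                changed = True
    return frozenset(M)

def well_founded(P, atoms):
    # alternating iteration (I, J) -> (S(J), S(I)) from the least precise pair
    I, J = frozenset(), frozenset(atoms)
    while True:
        I2, J2 = S(P, J), S(P, I)
        if (I2, J2) == (I, J):
            return I, J
        I, J = I2, J2

def stable_models(P, atoms):
    # brute force: X is stable iff it is a fixed point of the stable operator
    subsets = [frozenset(c) for r in range(len(atoms) + 1)
               for c in combinations(atoms, r)]
    return [X for X in subsets if S(P, X) == X]

I, J = well_founded(P, ATOMS)
for X in stable_models(P, ATOMS):
    assert I <= X <= J          # Theorem 11: I ⊆ X ⊆ J
print(sorted(I), sorted(J))
```

Here the well-founded model fixes e and f as certain while leaving a, b, and c open, and both stable models lie inside that bracket.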

Lemma 12. Let P be an F-program and (I, J) be a four-valued interpretation.

Then, we have H(P^{I,J}) = T_{P_I}(J).

Proof of Lemma 12. The program P^{I,J} contains all rules r ∈ P such that J |= B(r)_I. These are exactly the rules whose heads are gathered by the T operator. □

Proposition 13. Let P be an F-program and (I, J) be the well-founded model of P.

Then, we have

(a) S_{P^{I,J}}(I′) = J for all I′ ⊆ I, and
(b) S_{P^{I,J}}(J′) = S_P(J′) for all J ⊆ J′.

Proof of Proposition 13. Throughout the proof we use

S_P(J) = I,
S_P(I) = J,
P^{I,J} ⊆ P, and
I ⊆ J because the well-founded model is consistent.


Property (a). We first show S_{P^{I,J}}(I) = J. Let J̄ = S_{P^{I,J}}(I) and r ∈ P \ P^{I,J}:

P^{I,J} ⊆ P, J̄ = S_{P^{I,J}}(I), and J = S_P(I) implies J̄ ⊆ J by Lemma 57.
r ∉ P^{I,J} implies J ⊭ B(r)_I.
J̄ ⊆ J and J ⊭ B(r)_I implies J̄ ⊭ B(r)_I by Proposition 4.
J̄ = S_{P^{I,J}}(I) and J̄ ⊭ B(r)_I implies J̄ |= P_I.
J̄ |= P_I and J = S_P(I) implies J ⊆ J̄.
J ⊆ J̄ and J̄ ⊆ J implies J̄ = J.

Thus, we get that S_{P^{I,J}}(I) = J. With this we can continue to prove S_{P^{I,J}}(I′) = J. Let r ∈ P^{I,J}:

r ∈ P^{I,J} implies J |= B(r)_I.
r ∈ P^{I,J} and P^{I,J} ⊆ P implies r ∈ P.
J |= B(r)_I, r ∈ P, and S_P(I) = J implies H(r) ∈ J.
J |= B(r)_I and I′ ⊆ I implies J |= B(r)_{I′} by Proposition 6 (d).
H(r) ∈ J and J |= B(r)_{I′} implies S_{P^{I,J}}(I′) ⊆ J.
I′ ⊆ I and J = S_{P^{I,J}}(I) implies J ⊆ S_{P^{I,J}}(I′).

Thus, we get S_{P^{I,J}}(I′) = J.

Property (b). Let Ī = S_{P^{I,J}}(J′) and r ∈ P \ P^{I,J}:

r ∉ P^{I,J} implies J ⊭ B(r)_I.
I ⊆ J, J ⊆ J′, and J ⊭ B(r)_I implies J ⊭ B(r)_{J′} by Proposition 6 (d).
Ī ⊆ I, I ⊆ J, and J ⊭ B(r)_{J′} implies Ī ⊭ B(r)_{J′} by Proposition 4.
Ī = S_{P^{I,J}}(J′) and Ī ⊭ B(r)_{J′} implies S_P(J′) ⊆ S_{P^{I,J}}(J′).
P^{I,J} ⊆ P implies S_{P^{I,J}}(J′) ⊆ S_P(J′) by Lemma 57.

Thus, we get S_{P^{I,J}}(J′) = S_P(J′). □

Theorem 14. Let P be an F-program and (I, J) be the well-founded model of P.

Then, P and P^{I,J} have the same well-founded model.


Proof of Theorem 14. By Proposition 13, we have (I, J) = W_{P^{I,J}}(I, J). Let (Ī, J̄) = WM(P^{I,J}):

(Ī, J̄) = WM(P^{I,J}) and (I, J) = W_{P^{I,J}}(I, J) implies (Ī, J̄) ≤p (I, J) by Theorem 1 (c).
Ī ⊆ I implies S_{P^{I,J}}(Ī) = S_{P^{I,J}}(I) by Proposition 13 (a).
J̄ = S_{P^{I,J}}(Ī) = S_{P^{I,J}}(I) = J implies J̄ = J.
Ī = S_{P^{I,J}}(J̄), S_{P^{I,J}}(J) = I, and J̄ = J implies Ī = I.

We obtain (I, J) = (Ī, J̄). □

Theorem 15. Let P be an F-program, and I, J, and X be two-valued interpretations.

If I ⊆ X ⊆ J, then X is a stable model of P iff X is a stable model of P^{I,J}.

Proof of Theorem 15. We first show that all rule bodies removed by the simplification are falsified by X. Let r ∈ P \ P^{I,J} and assume X |= B(r):

X |= B(r) implies X |= B(r)_X by Proposition 6 (b).
X |= B(r)_X and I ⊆ X implies X |= B(r)_I by Proposition 6 (d).
X |= B(r)_I and X ⊆ J implies J |= B(r)_I by Proposition 4.

This contradicts r ∉ P^{I,J} and, thus, X ⊭ B(r). We use the following consequence in the proof below:

X ⊭ B(r) implies (P \ P^{I,J})^X ≡ ∅.

To show the theorem, we show that P^X and (P^{I,J})^X have the same minimal models. Clearly, we have P^X = (P^{I,J})^X ∪ (P \ P^{I,J})^X. Using this and (P \ P^{I,J})^X ≡ ∅, we obtain that P^X and (P^{I,J})^X have the same minimal models. □

Corollary 16. Let P be an R-program and (I, J) be the well-founded model of P.

Then, P and P^{I,J} have the same stable models.

Proof of Corollary 16. The result follows from Theorems 11, 14, and 15. □
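Corollary 16 licenses the on-the-fly simplification performed during grounding: once the well-founded model is known, rules whose bodies it falsifies can be dropped without changing the stable models. The sketch below checks this brute-force for a small, purely illustrative normal program, where the rule g ← h is removed because h is never derivable.

```python
from itertools import combinations

ATOMS = ["a", "b", "c", "g", "h"]
# normal rules: (head, positive body, negative body)
P = [("a", [], ["b"]), ("b", [], ["a"]), ("c", ["a"], []),
     ("g", ["h"], [])]          # h is never derivable

def S(P, I):
    # stable operator: least model of the reduct P^I
    reduct = [(h, pos) for (h, pos, neg) in P if not set(neg) & I]
    M, changed = set(), True
    while changed:
        changed = False
        for h, pos in reduct:
            if set(pos) <= M and h not in M:
                M.add(h)
                changed = True
    return frozenset(M)

def well_founded(P, atoms):
    I, J = frozenset(), frozenset(atoms)
    while True:
        I2, J2 = S(P, J), S(P, I)
        if (I2, J2) == (I, J):
            return I, J
        I, J = I2, J2

def stable_models(P, atoms):
    subsets = [frozenset(c) for r in range(len(atoms) + 1)
               for c in combinations(atoms, r)]
    return [X for X in subsets if S(P, X) == X]

I, J = well_founded(P, ATOMS)
# P^{I,J}: keep a rule iff its body is not falsified, i.e. J |= B(r)_I;
# for normal rules this reads: positive body inside J and negative body disjoint from I
Psimp = [(h, pos, neg) for (h, pos, neg) in P
         if set(pos) <= J and not set(neg) & I]
assert len(Psimp) < len(P)                         # g <- h was dropped
assert stable_models(P, ATOMS) == stable_models(Psimp, ATOMS)
print("simplified from", len(P), "to", len(Psimp), "rules")
```

The rule-selection condition above is the normal-rule instance of the general simplification; the equality of stable model lists is exactly what the corollary guarantees.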

Theorem 17. Let P and Q be F-programs, and (I, J) be the well-founded model of P.

If P^{I,J} ⊆ Q ⊆ P, then P and Q have the same well-founded model.

Proof of Theorem 17. By Lemma 57, we have S_{P^{I,J}}(X) ⊆ S_Q(X) ⊆ S_P(X) for any two-valued interpretation X. Thus, by Theorem 14, we get (I, J) = W_Q(I, J).


Next, let (Ī, J̄) be a prefixed point of W_Q with (Ī, J̄) ≤p (I, J). We have

(S_Q(J̄), S_Q(Ī)) ≤p (Ī, J̄) ≤p (I, J).

J ⊆ J̄ implies S_{P^{I,J}}(J̄) = S_Q(J̄) = S_P(J̄) by Proposition 13 (b).
S_Q(J̄) = S_P(J̄) and S_Q(J̄) ⊆ Ī implies S_P(J̄) ⊆ Ī.
J̄ ⊆ S_Q(Ī) and S_Q(Ī) ⊆ S_P(Ī) implies J̄ ⊆ S_P(Ī).
S_P(J̄) ⊆ Ī and J̄ ⊆ S_P(Ī) implies W_P(Ī, J̄) ≤p (Ī, J̄).
W_P(Ī, J̄) ≤p (Ī, J̄) implies (I, J) ≤p (Ī, J̄) by Theorem 1 (a).
(I, J) ≤p (Ī, J̄) and (Ī, J̄) ≤p (I, J) implies (I, J) = (Ī, J̄).

By Theorem 1 (a), we obtain that WM(Q) = (I, J). □

Corollary 18. Let P and Q be R-programs, and (I, J) be the well-founded model of P.

If P^{I,J} ⊆ Q ⊆ P, then P and Q are equivalent.

Proof of Corollary 18. Observe that P^{I,J} = Q^{I,J}. With this, the corollary follows from Corollary 16 and Theorem 17. □

Proposition 20. Let P be an F-program, and I^c and J be two-valued interpretations.

We get the following properties:

(a) J′ ⊆ J implies S^{I^c}_P(J) ⊆ S^{I^c}_P(J′), and
(b) I^{c′} ⊆ I^c implies S^{I^{c′}}_P(J) ⊆ S^{I^c}_P(J).

Proof of Proposition 20. Both properties can be shown by inspecting the reduced programs.

Property (a). Observe that we can equivalently write S^{I^c}_P(J) = S_{pe_{I^c}(P)}(J) and S^{I^c}_P(J′) = S_{pe_{I^c}(P)}(J′). With this and Proposition 8, we see that the relative stable operator is antimonotone just as the stable operator.

Property (b). Observe that S^{I^c}_P(J) is equal to the least fixed point of T^{I^c}_{P_J} and S^{I^{c′}}_P(J) is equal to the least fixed point of T^{I^{c′}}_{P_J}. Furthermore, observe that T^{I^{c′}}_{P_J}(X) ⊆ T^{I^c}_{P_J}(X) for any two-valued interpretation X because I^{c′} ⊆ I^c and the underlying T operator is monotone. With this and Lemma 56, we have shown the property. □

Proposition 22. Let P be an F-program, and (I, J) and (I^c, J^c) be four-valued interpretations.

We get the following properties:

(a) (I′, J′) ≤p (I, J) implies W^{I^c,J^c}_P(I′, J′) ≤p W^{I^c,J^c}_P(I, J), and
(b) (I^{c′}, J^{c′}) ≤p (I^c, J^c) implies W^{I^{c′},J^{c′}}_P(I, J) ≤p W^{I^c,J^c}_P(I, J).


Proof of Proposition 22. Both properties can be shown by using the monotonicity of the underlying relative id-stable operator.

Property (a). Given that S^{I^c}_P is antimonotone and J′ ∪ J^c ⊆ J ∪ J^c, we have S^{I^c}_P(J ∪ J^c) ⊆ S^{I^c}_P(J′ ∪ J^c). Analogously, we can show S^{J^c}_P(I′ ∪ I^c) ⊆ S^{J^c}_P(I ∪ I^c). We get (S^{I^c}_P(J ∪ J^c), S^{J^c}_P(I ∪ I^c)) ≤p (S^{I^c}_P(J′ ∪ J^c), S^{J^c}_P(I′ ∪ I^c)). Hence, W^{I^c,J^c}_P is monotone.

Property (b). We have to show (S^{I^{c′}}_P(J ∪ J^{c′}), S^{J^{c′}}_P(I ∪ I^{c′})) ≤p (S^{I^c}_P(J ∪ J^c), S^{J^c}_P(I ∪ I^c)).

Given that I^{c′} ⊆ I^c and J ∪ J^c ⊆ J ∪ J^{c′}, we obtain S^{I^{c′}}_P(J ∪ J^{c′}) ⊆ S^{I^c}_P(J ∪ J^c) using Proposition 20. The same argument can be used for the possible atoms of the four-valued interpretations. Given that J^c ⊆ J^{c′} and I ∪ I^{c′} ⊆ I ∪ I^c, we obtain S^{J^c}_P(I ∪ I^c) ⊆ S^{J^{c′}}_P(I ∪ I^{c′}) using Proposition 20. Hence, we have shown W^{I^{c′},J^{c′}}_P(I, J) ≤p W^{I^c,J^c}_P(I, J). □

Observation 59. Let P be an F-program, and I, I′, and I^c be two-valued interpretations.

We get the following properties:

(a) I |= P and I^c ⊆ I implies I |= pe_{I^c}(P),
(b) I |= pe_{I^c}(P) and I′ ∩ B(P)^+ ⊆ I^c implies I ∪ I′ |= pe_{I^c}(P), and
(c) I |= pe_{I^c}(P) implies I |= P.

Proposition 23. Let P^b and P^t be F-programs, I^c and J be two-valued interpretations, I = S^{I^c}_{P^b∪P^t}(J), I^e = I ∩ (B(P^b)^+ ∩ H(P^t)), I^b = S^{I^c∪I^e}_{P^b}(J), and I^t = S^{I^c∪I^b}_{P^t}(J).

Then, we have I = I^b ∪ I^t.

Proof of Proposition 23. Let Ī = I^b ∪ I^t. Furthermore, we use the following programs:

P̄^b = pe_{I^c}(P^b_J),  P̂^b = pe_{I^c∪I^e}(P^b_J) = pe_{I^e}(P̄^b),
P̄^t = pe_{I^c}(P^t_J),  P̂^t = pe_{I^c∪I^b}(P^t_J) = pe_{I^b}(P̄^t).

Observe that

I = S^{I^c}_{P^b∪P^t}(J) = LM(P̄^b ∪ P̄^t),
I^b = S^{I^c∪I^e}_{P^b}(J) = LM(P̂^b), and
I^t = S^{I^c∪I^b}_{P^t}(J) = LM(P̂^t).

To show that I ⊆ Ī, we show that Ī is a model of both P̄^b and P̄^t. To show that Ī ⊆ I, we show that I is a model of both P̂^b and P̂^t.


Property I |= P̂^b.

I = LM(P̄^b ∪ P̄^t) implies I |= P̄^b.
I |= P̄^b and I^e ⊆ I implies I |= P̂^b by Observation 59 (a).

Property I |= P̂^t.

I = LM(P̄^b ∪ P̄^t) implies I |= P̄^t.
I |= P̂^b and I^b = LM(P̂^b) implies I^b ⊆ I.
I |= P̄^t and I^b ⊆ I implies I |= P̂^t by Observation 59 (a).

Property Ī |= P̄^b. Let E = B(P^b)^+ ∩ H(P^t):

Ī ⊆ I and I^e = I ∩ E implies I^t ∩ E ⊆ I^e.
I^t = LM(P̂^t) implies I^t ⊆ H(P^t).
I^t ∩ E ⊆ I^e and I^t ⊆ H(P^t) implies I^t ∩ B(P^b)^+ ⊆ I^e.
I^t ∩ B(P^b)^+ ⊆ I^e implies I^t ∩ B(P̄^b)^+ ⊆ I^e.
I^t ∩ B(P̄^b)^+ ⊆ I^e and I^b = LM(P̂^b) implies Ī |= P̂^b by Observation 59 (b).
Ī |= P̂^b implies Ī |= P̄^b by Observation 59 (c).

Property Ī |= P̄^t.

I^t = LM(P̂^t) implies Ī |= P̂^t by Observation 59 (b).
Ī |= P̂^t implies Ī |= P̄^t by Observation 59 (c). □

Proposition 24. Let P^b and P^t be F-programs, (I^c, J^c) be a four-valued interpretation, (I, J) = WM^{I^c,J^c}(P^b ∪ P^t), (I^e, J^e) = (I, J) ⊓ (B(P^b)^± ∩ H(P^t)), (I^b, J^b) = WM^{(I^c,J^c)⊔(I^e,J^e)}(P^b), and (I^t, J^t) = WM^{(I^c,J^c)⊔(I^b,J^b)}(P^t).

Then, we have (I, J) = (I^b, J^b) ⊔ (I^t, J^t).

Proof of Proposition 24. Let P = P^b ∪ P^t and E = B(P^b)^± ∩ H(P^t). We begin by evaluating P, P^b, and P^t w.r.t. (I, J) and obtain

(I, J) = W^{I^c,J^c}_P(I, J) = (S^{I^c}_P(J^c ∪ J), S^{J^c}_P(I^c ∪ I)),
(Ī^b, J̄^b) = W^{(I^c,J^c)⊔(I^e,J^e)}_{P^b}(I, J) = (S^{I^c∪I^e}_{P^b}(J^c ∪ J^e ∪ J), S^{J^c∪J^e}_{P^b}(I^c ∪ I^e ∪ I)), and
(Ī^t, J̄^t) = W^{(I^c,J^c)⊔(Ī^b,J̄^b)}_{P^t}(I, J) = (S^{I^c∪Ī^b}_{P^t}(J^c ∪ J̄^b ∪ J), S^{J^c∪J̄^b}_{P^t}(I^c ∪ Ī^b ∪ I)).

Using (I^e, J^e) ⊑ (I, J) we get

(Ī^b, J̄^b) = (S^{I^c∪I^e}_{P^b}(J^c ∪ J), S^{J^c∪J^e}_{P^b}(I^c ∪ I)).

By Proposition 23 and Observation 21 (c), we get

(Ī^b, J̄^b) ⊑ (I, J),
(Ī^t, J̄^t) = (S^{I^c∪Ī^b}_{P^t}(J^c ∪ J), S^{J^c∪J̄^b}_{P^t}(I^c ∪ I)), and
(I, J) = (Ī^b, J̄^b) ⊔ (Ī^t, J̄^t).

We first show (I^b, J^b) = (Ī^b, J̄^b) and then (I^t, J^t) = (Ī^t, J̄^t).

Property (I^b, J^b) ≤p (Ī^b, J̄^b).

(Ī^b, J̄^b) ⊑ (I, J) and (I^e, J^e) = (I, J) ⊓ E implies (Ī^b, J̄^b) ⊔ (I^e, J^e) ⊑ (I, J).
(I, J) = (Ī^b, J̄^b) ⊔ (Ī^t, J̄^t) implies (Ī^t, J̄^t) ⊑ (I, J).
(Ī^t, J̄^t) ⊑ H(P^t) implies (Ī^t, J̄^t) ⊓ B(P^b)^± ⊑ (Ī^t, J̄^t) ⊓ E.
(Ī^t, J̄^t) ⊓ B(P^b)^± ⊑ (Ī^t, J̄^t) ⊓ E and (Ī^t, J̄^t) ⊑ (I, J) implies (Ī^t, J̄^t) ⊓ B(P^b)^± ⊑ (I^e, J^e).
(Ī^t, J̄^t) ⊓ B(P^b)^± ⊑ (I^e, J^e) and (I, J) = (Ī^b, J̄^b) ⊔ (Ī^t, J̄^t) implies (I, J) ⊓ B(P^b)^± ⊑ (Ī^b, J̄^b) ⊔ (I^e, J^e).

With the above, we use Observation 21 (c) to show that (Ī^b, J̄^b) is a fixed point of W^{(I^c,J^c)⊔(I^e,J^e)}_{P^b}:

(Ī^b, J̄^b) = W^{(I^c,J^c)⊔(I^e,J^e)}_{P^b}(I, J)
= W^{(I^c,J^c)⊔(I^e,J^e)}_{P^b}((I, J) ⊓ B(P^b)^±)
= W^{(I^c,J^c)⊔(I^e,J^e)}_{P^b}((Ī^b, J̄^b) ⊔ (I^e, J^e))
= W^{(I^c,J^c)⊔(I^e,J^e)}_{P^b}(Ī^b, J̄^b).

Thus, by Theorem 1 (c), (I^b, J^b) ≤p (Ī^b, J̄^b).

Property (I^b, J^b) = (Ī^b, J̄^b). To show the property, let

(I′, J′) = (I^b, J^b) ⊔ (I^{e′}, J^{e′}) ⊔ (I^{t′}, J^{t′}),
(I^{e′}, J^{e′}) = W^{I^c,J^c}_P(I′, J′) ⊓ E,
(I^{b′}, J^{b′}) = (S^{I^c∪I^{e′}}_{P^b}(J^c ∪ J′), S^{J^c∪J^{e′}}_{P^b}(I^c ∪ I′)),
(I^{t′}, J^{t′}) = (S^{I^c∪I^{b′}}_{P^t}(J^c ∪ J′), S^{J^c∪J^{b′}}_{P^t}(I^c ∪ I′)), and
W^{I^c,J^c}_P(I′, J′) = (I^{b′}, J^{b′}) ⊔ (I^{t′}, J^{t′}) by Proposition 23.

We get:

(I^b, J^b) ≤p (Ī^b, J̄^b) implies (I′, J′) ≤p (I, J).
(I′, J′) ≤p (I, J) implies W^{I^c,J^c}_P(I′, J′) ≤p (I, J).
W^{I^c,J^c}_P(I′, J′) ≤p (I, J) implies (I^{e′}, J^{e′}) ≤p (I^e, J^e).
(I^{t′}, J^{t′}) ⊓ B(P^b)^± ⊑ (I^{e′}, J^{e′}) implies (I′, J′) ⊓ B(P^b)^± ⊑ (I^{b′}, J^{b′}) ⊔ (I^{e′}, J^{e′}).
(I′, J′) ⊓ B(P^b)^± ⊑ (I^{b′}, J^{b′}) ⊔ (I^{e′}, J^{e′}) implies I^{b′} = S^{I^c∪I^{e′}}_{P^b}(J^c ∪ J^{e′} ∪ J^{b′}) and J^{b′} = S^{J^c∪J^{e′}}_{P^b}(I^c ∪ I^{e′} ∪ I^{b′}) by Observation 21 (c).
I^{b′} = S^{I^c∪I^{e′}}_{P^b}(J^c ∪ J^{e′} ∪ J^{b′}), J^{b′} = S^{J^c∪J^{e′}}_{P^b}(I^c ∪ I^{e′} ∪ I^{b′}), I^b = S^{I^c∪I^e}_{P^b}(J^c ∪ J^e ∪ J^b), J^b = S^{J^c∪J^e}_{P^b}(I^c ∪ I^e ∪ I^b), and (I^{e′}, J^{e′}) ≤p (I^e, J^e) implies (I^{b′}, J^{b′}) ≤p (I^b, J^b) by Proposition 22 (b).
(I^{b′}, J^{b′}) ≤p (I^b, J^b) and (I^b, J^b) ≤p (Ī^b, J̄^b) implies (I^{b′}, J^{b′}) ≤p (Ī^b, J̄^b).
I^{t′} = S^{I^c∪I^{b′}}_{P^t}(J^c ∪ J′), J^{t′} = S^{J^c∪J^{b′}}_{P^t}(I^c ∪ I′), Ī^t = S^{I^c∪Ī^b}_{P^t}(J^c ∪ J), J̄^t = S^{J^c∪J̄^b}_{P^t}(I^c ∪ I), (I^{b′}, J^{b′}) ≤p (Ī^b, J̄^b), and (I′, J′) ≤p (I, J) implies (I^{t′}, J^{t′}) ≤p (Ī^t, J̄^t) by Proposition 20 (a) and (b).
W^{I^c,J^c}_P(I′, J′) = (I^{b′}, J^{b′}) ⊔ (I^{t′}, J^{t′}), (I^{b′}, J^{b′}) ≤p (Ī^b, J̄^b), and (I^{t′}, J^{t′}) ≤p (Ī^t, J̄^t) implies W^{I^c,J^c}_P(I′, J′) ≤p (Ī^b, J̄^b) ⊔ (Ī^t, J̄^t).
(I^{e′}, J^{e′}) ≤p (I^e, J^e), (I^{e′}, J^{e′}) ⊑ W^{I^c,J^c}_P(I′, J′), W^{I^c,J^c}_P(I′, J′) ≤p (Ī^b, J̄^b) ⊔ (Ī^t, J̄^t), and (I′, J′) = (I^b, J^b) ⊔ (I^{e′}, J^{e′}) ⊔ (I^{t′}, J^{t′}) implies W^{I^c,J^c}_P(I′, J′) ≤p (I′, J′).
WM^{I^c,J^c}(P) = (I, J) and W^{I^c,J^c}_P(I′, J′) ≤p (I′, J′) implies (I, J) ≤p (I′, J′) by Theorem 1 (a).
(I′, J′) ≤p (I, J) and (I, J) ≤p (I′, J′) implies (I, J) = (I′, J′).
(I′, J′) = (I^b, J^b) ⊔ (I^{e′}, J^{e′}) ⊔ (I^{t′}, J^{t′}) implies (I^b, J^b) ⊔ (I^{e′}, J^{e′}) ⊑ (I′, J′).
(I′, J′) = (I^b, J^b) ⊔ (I^{e′}, J^{e′}) ⊔ (I^{t′}, J^{t′}) and (I^{t′}, J^{t′}) ⊓ B(P^b)^± ⊑ (I^{e′}, J^{e′}) implies (I′, J′) ⊓ B(P^b)^± ⊑ (I^b, J^b) ⊔ (I^{e′}, J^{e′}).
(I, J) = (I′, J′), (I^b, J^b) ⊔ (I^{e′}, J^{e′}) ⊑ (I′, J′), (I′, J′) ⊓ B(P^b)^± ⊑ (I^b, J^b) ⊔ (I^{e′}, J^{e′}), (I^b, J^b) = W^{(I^c,J^c)⊔(I^{e′},J^{e′})}_{P^b}(I^b, J^b), and (Ī^b, J̄^b) = W^{(I^c,J^c)⊔(I^e,J^e)}_{P^b}(I, J) implies (I^b, J^b) = (Ī^b, J̄^b) by Observation 21 (c).

Property (I^t, J^t) = (Ī^t, J̄^t). Observe that the lemma can be applied with P^b and P^t exchanged. Let

Ê = B(P^t)^± ∩ H(P^b),
(Î^e, Ĵ^e) = (I, J) ⊓ Ê,
(Î^t, Ĵ^t) = WM^{(I^c,J^c)⊔(Î^e,Ĵ^e)}(P^t), and
(Î^b, Ĵ^b) = W^{(I^c,J^c)⊔(Î^t,Ĵ^t)}_{P^b}(I, J).

Using the properties shown so far, we obtain

(I, J) = (Î^t, Ĵ^t) ⊔ (Î^b, Ĵ^b).

With this we get:

(Î^b, Ĵ^b) = (Ī^b, J̄^b), (I, J) = (Ī^b, J̄^b) ⊔ (Ī^t, J̄^t), (Ī^b, J̄^b) ⊑ H(P^b), and (Î^e, Ĵ^e) = (I, J) ⊓ Ê implies (Ī^b, J̄^b) ⊓ B(P^t)^± ⊑ (Î^e, Ĵ^e).
(I, J) = (Î^t, Ĵ^t) ⊔ (Î^b, Ĵ^b), (Î^b, Ĵ^b) ⊑ H(P^b), and (Î^e, Ĵ^e) = (I, J) ⊓ Ê implies (Î^b, Ĵ^b) ⊓ B(P^t)^± ⊑ (Î^e, Ĵ^e).
(Î^b, Ĵ^b) ⊓ B(P^t)^± ⊑ (Î^e, Ĵ^e) and (I, J) = (Î^t, Ĵ^t) ⊔ (Î^b, Ĵ^b) implies (I, J) ⊓ B(P^t)^± ⊑ (Î^t, Ĵ^t) ⊔ (Î^e, Ĵ^e).
(Ī^b, J̄^b) ⊓ B(P^t)^± ⊑ (Î^e, Ĵ^e), (I, J) ⊓ B(P^t)^± ⊑ (Î^t, Ĵ^t) ⊔ (Î^e, Ĵ^e), (Î^t, Ĵ^t) = W^{(I^c,J^c)⊔(Î^e,Ĵ^e)}_{P^t}(Î^t, Ĵ^t), and (Ī^t, J̄^t) = W^{(I^c,J^c)⊔(Ī^b,J̄^b)}_{P^t}(I, J) implies (Ī^t, J̄^t) = (Î^t, Ĵ^t) by Observation 21 (c).
(Î^b, Ĵ^b) ⊓ B(P^t)^± ⊑ (Î^e, Ĵ^e), (Î^t, Ĵ^t) = WM^{(I^c,J^c)⊔(Î^e,Ĵ^e)}(P^t), and (I^t, J^t) = WM^{(I^c,J^c)⊔(I^b,J^b)}(P^t) implies (I^t, J^t) = (Î^t, Ĵ^t) by Observation 21 (c).

Thus, we get (I^t, J^t) = (Ī^t, J̄^t). □

Theorem 25. Let (P_i)_{i∈I} be a sequence of F-programs. Then, WM((P_i)_{i∈I}) ≤p WM(⋃_{i∈I} P_i).

Proof of Theorem 25. The theorem can be shown by transfinite induction over the sequence indices. We do not give the full induction proof here but focus on the key idea. Let (I′_i, J′_i) be the intermediate interpretations as in (5) when computing the well-founded model of the sequence. Furthermore, let

(I_i, J_i) = WM^{(I^c_i,J^c_i)⊔(I^e_i,J^e_i)}(P_i)

be the intermediate interpretations where (I^c_i, J^c_i) is the union of the intermediate interpretations as in (4) and

(I^e_i, J^e_i) = (I, J) ⊓ E_i

with E_i as in (3). Observe that with Proposition 24, we have WM(⋃_{i∈I} P_i) = ⊔_{i∈I}(I_i, J_i). By Proposition 22 (b), we have (I′_i, J′_i) ≤p (I_i, J_i) and, thus, we obtain that WM((P_i)_{i∈I}) ≤p WM(⋃_{i∈I} P_i). □

Corollary 26. Let (P_i)_{i∈I} be a sequence of F-programs and E_i be defined as in (3).

If E_i = ∅ for all i ∈ I, then WM((P_i)_{i∈I}) = WM(⋃_{i∈I} P_i).

Proof of Corollary 26. This can be proven in the same way as Theorem 25 but note that because E_i is empty, we get (I′_i, J′_i) = (I_i, J_i). □


Lemma 27. Let P be an F-program such that B(P)^− ∩ H(P) = ∅ and (I^c, J^c) be a four-valued interpretation.

Then, WM^{I^c,J^c}(P) = (S^{I^c}_P(J^c), S^{J^c}_P(I^c)).

Proof of Lemma 27. Let (I, J) = WM^{I^c,J^c}(P).
We have J = S^{J^c}_P(I^c ∪ I). By Observation 21 (b), we get J ⊆ H(P). With this and B(P)^− ∩ H(P) = ∅, we get B(P)^− ∩ J = ∅. Thus, S^{I^c}_P(J^c ∪ J) = S^{I^c}_P(J^c) by Observation 21 (c).
The same arguments apply to show S^{J^c}_P(I^c ∪ I) = S^{J^c}_P(I^c). □

Theorem 28. Let (P_i)_{i∈I} be a sequence of F-programs, (I, J) = WM((P_i)_{i∈I}), and E_i, (I^c_i, J^c_i), and (I_i, J_i) be defined as in (3) to (5).

Then, P^{I,J}_k ⊆ P^{(I^c_k,J^c_k)⊔(I_k,J_k)⊔(∅,E_k)}_k ⊆ P_k for all k ∈ I.

Proof of Theorem 28. By Theorem 25, we have

⊔_{i∈I}(I_i, J_i) ≤p (I, J).

We get ⋃_{i∈I} I_i ⊆ I and thus

⋃_{i≤k} I_i = I^c_k ∪ I_k ⊆ I.

Using J ⊆ ⋃_{i∈I} J_i and J_i ⊆ H(P_i), we get

J ∩ B(P_k)^± ⊆ (⋃_{i≤k} J_i ∪ ⋃_{k<i} H(P_i)) ∩ B(P_k)^±
⊆ (⋃_{i≤k} J_i ∪ E_k) ∩ B(P_k)^±
⊆ (J^c_k ∪ J_k ∪ E_k) ∩ B(P_k)^±.

Using both results, we obtain

((I^c_k, J^c_k) ⊔ (∅, E_k) ⊔ (I_k, J_k)) ⊓ B(P_k)^± ≤p (I, J) ⊓ B(P_k)^±.

Because the body literals determine the simplification, we get

P^{I,J}_k ⊆ P^{(I^c_k,J^c_k)⊔(∅,E_k)⊔(I_k,J_k)}_k. □

Corollary 29. Let (P_i)_{i∈I} be a sequence of F-programs, (I, J) = WM((P_i)_{i∈I}), and E_i, (I^c_i, J^c_i), and (I_i, J_i) be defined as in (3) to (5).

If E_i = ∅ for all i ∈ I, then P^{I,J}_k = P^{(I^c_k,J^c_k)⊔(I_k,J_k)}_k for all k ∈ I.

Proof of Corollary 29. This can be proven in the same way as Theorem 28 but note that because E_i is empty, all ≤p and most ⊆ relations can be replaced with equalities. □


Corollary 30. Let (P_i)_{i∈I} be a sequence of R-programs, and (I, J) be the well-founded model of ⋃_{i∈I} P_i.

Then, ⋃_{i∈I} P_i and ⋃_{i∈I} Q_i with P^{I,J}_i ⊆ Q_i ⊆ P_i have the same well-founded and stable models.

Proof of Corollary 30. This corollary is a direct consequence of Theorems 17 and 28 and Corollary 18. □

Corollary 31. Let (P_i)_{i∈I} be a sequence of R-programs, and E_i, (I^c_i, J^c_i), and (I_i, J_i) be defined as in (3) to (5).

Then, ⋃_{i∈I} P_i and ⋃_{i∈I} P^{(I^c_i,J^c_i)⊔(I_i,J_i)⊔(∅,E_i)}_i have the same well-founded and stable models.

Proof of Corollary 31. This corollary is a direct consequence of Theorem 28 and Corollary 30. □

Proposition 32 ([37]).

• Aggregates over functions #sum+ and #count together with aggregate relations > and ≥ are monotone.
• Aggregates over functions #sum+ and #count together with aggregate relations < and ≤ are antimonotone.
• Aggregates over function #sum− have the same monotonicity properties as #sum+ aggregates with the complementary relation.
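These monotonicity properties can be checked exhaustively for a small instance. The sketch below brute-forces all pairs X ⊆ Y of interpretations over three hypothetical weighted elements and verifies that #sum+ with ≥ behaves monotonically and with ≤ antimonotonically; #count is the special case where every weight is 1.

```python
from itertools import combinations

# hypothetical weighted aggregate elements for #sum+ (all weights positive)
ELEMS = {"a": 2, "b": 3, "c": 1}
BOUND = 4

def sum_plus(X):
    # value of #sum+ over the elements whose atom is in X
    return sum(w for atom, w in ELEMS.items() if atom in X)

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

for X in subsets(ELEMS):
    for Y in subsets(ELEMS):
        if X <= Y:
            # monotone: once #sum+{...} >= BOUND holds, it stays true as X grows
            assert not (sum_plus(X) >= BOUND) or (sum_plus(Y) >= BOUND)
            # antimonotone: #sum+{...} <= BOUND can only be lost as X grows
            assert not (sum_plus(Y) <= BOUND) or (sum_plus(X) <= BOUND)
print("monotonicity of #sum+ checked for", len(ELEMS), "elements")
```

The check works because positive weights make the aggregate value itself monotone in X, so comparisons with ≥ and ≤ inherit the stated monotonicity behavior.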

Proposition 33. Let a be a closed aggregate. Then, τ(a) and τ*(a) are strongly equivalent.

Proof of Proposition 33. We use Proposition 2 to show that both formulas are strongly equivalent.

Property I |= τ*(a)^J implies I |= τ(a)^J for arbitrary interpretations I. The formulas τ*(a) and τ(a) only differ in the consequents of their implications. Observe that the consequents in τ*(a) are stronger than the ones in τ(a). Thus, it follows that τ*(a) is stronger than τ(a). Furthermore, observe that the same holds for their reducts.

Property I ⊭ τ*(a)^J implies I ⊭ τ(a)^J for arbitrary interpretations I. Let G be the set of all instances of the aggregate elements of a. Because I ⊭ τ*(a)^J, there must be a set D ⊆ G such that D ⋫ a, I |= (τ(D)^∧)^J, and I ⊭ (τ*_a(D)^∨)^J. With this, we construct the set

D̄ = D ∪ {e ∈ G \ D | I |= τ(e)^J}.

The construction of D̄, D ⋫ a, and I ⊭ (τ*_a(D)^∨)^J implies D̄ ⋫ a and I ⊭ (τ_a(D̄)^∨)^J.
The construction of D̄ and I |= (τ(D)^∧)^J implies I |= (τ(D̄)^∧)^J.
D̄ ⋫ a, I ⊭ (τ_a(D̄)^∨)^J, and I |= (τ(D̄)^∧)^J implies I ⊭ τ(a)^J. □

Proposition 34. Let a be a closed aggregate.
If a is monotone, then τ*(a)^I is classically equivalent to τ*(a) for any two-valued interpretation I.

Proof of Proposition 34. Let G be the set of ground instances of the aggregate elements of a. Furthermore, observe that a monotone aggregate a is either constantly true or not justified by the empty set.
In case that τ*(a) ≡ ⊤, we get τ*(a)^I ≡ ⊤ and the lemma holds.
Next, we consider the case that the empty set does not justify the aggregate. Observe that τ*_a(∅) is stronger than τ*_a(D) for any D ⊆ G. And, we have that τ*(a) contains the implication ⊤ → τ*_a(∅). Because of this, we have τ*(a) ≡ τ*_a(∅). Furthermore, all consequents in τ*(a) are positive formulas and, thus, not modified by the reduct. Thus, the reduct (⊤ → τ*_a(∅))^I is equal to ⊤ → τ*_a(∅). And as before, it is stronger than all other implications in τ*(a)^I. Hence, we get τ*(a)^I ≡ τ*_a(∅). □

Proposition 35. Let a be a closed aggregate, and I ⊆ J and X ⊆ J be two-valued interpretations.

Then,

(a) X |= τ*(a) iff X |= τ*_J(a),
(b) X |= τ*(a)^I iff X |= τ*_J(a)^I, and
(c) X |= τ*(a)_I iff X |= τ*_J(a)_I.

Proof of Proposition 35. Remember that the translation τ*(a) is a conjunction of implications. The antecedents of the implications are conjunctions of aggregate elements and the consequents are disjunctions of conjunctions of aggregate elements.

Property (a). If the conjunction in an antecedent contains an element not in J, then the conjunction is not satisfied by X and the implication does not affect the satisfiability of τ*(a). If a conjunction in a consequent contains an element not in J, then X does not satisfy the conjunction and the conjunction does not affect the satisfiability of the encompassing disjunction. Observe that both cases correspond exactly to those subformulas omitted in τ*_J(a).

The remaining two properties follow for similar reasons. □

Lemma 37. Let P be an aggregate program and (P_i)_{i∈I} be an instantiation sequence for P.

Then, for the sequence (G_i)_{i∈I} with G_i = τ*(P_i), we have E_i = ∅ for each i ∈ I where E_i is defined as in (3).

Proof of Lemma 37. This lemma is a direct consequence of Observation 36 (a) and the anti-symmetry of the dependency relation between components. □

Lemma 38. Let P be an aggregate program and (P_i)_{i∈I} be an instantiation sequence for P.

Then, for the sequence (G_i)_{i∈I} with G_i = τ*(P_i), we have I_i = J_i = S^{I^c_i}_{G_i}(I^c_i) for each stratified component P_i, where (I^c_i, J^c_i) and (I_i, J_i) are defined as in (4) and (5) in the construction of the well-founded model of (G_i)_{i∈I} in Definition 7.

Proof of Lemma 38. In the following, we use E_i and (I^c_i, J^c_i) for the sequence (G_i)_{i∈I} as defined in (3) and (4). Note that, by Lemma 37, we have E_i = ∅. We prove by induction.

Base. Let P_i be a stratified component that does not depend on any other component. Because P_i does not depend on any other component, we have ⋃_{j<i} H(G_j) ∩ B(G_i)^± = ∅. Thus, by Observation 21 (b), we get I^c_i ∩ B(G_i)^± = J^c_i ∩ B(G_i)^± = ∅. By Observation 21 (c), we get (I_i, J_i) = WM^{I^c_i,J^c_i}(G_i) = WM^{I^c_i,I^c_i}(G_i). Because P_i is stratified, we have B(G_i)^− ∩ H(G_i) = ∅. We then use Lemma 27 to obtain I_i = J_i = S^{I^c_i}_{G_i}(I^c_i).

Hypothesis. We assume that the theorem holds for any component P_j with j < i.

Step. Let P_i be a stratified component. For any j < i, component P_i either depends on P_j or not. If P_i depends on P_j, then P_j is stratified and we get I_j = J_j by the induction hypothesis. If P_i does not depend on P_j, then I_j ∩ B(G_i)^± = J_j ∩ B(G_i)^± = ∅. By Observation 21 (c), we get (I_i, J_i) = WM^{I^c_i,J^c_i}(G_i) = WM^{I^c_i,I^c_i}(G_i). Just as in the base case, by Lemma 27, we get I_i = J_i = S^{I^c_i}_{G_i}(I^c_i). □

Lemma 39. Let P be an aggregate program and (P_{i,j})_{(i,j)∈J} be a refined instantiation sequence for P.

Then, for the sequence (G_{i,j})_{(i,j)∈J} with G_{i,j} = τ*(P_{i,j}), we have E_{i,j} ∩ B(G_{i,j})^+ = ∅ for each (i, j) ∈ J where E_{i,j} is defined as in (3).

Proof of Lemma 39. The same arguments as in the proof of Lemma 37 can be used but using Observation 36 (b) instead. □

Lemma 40. Let P be an aggregate program, E be a set of predicates, and (I^c, J^c) be a four-valued interpretation.

If pred(H(P)) ∩ pred(B(P)^−) ⊆ E, then AM^{I^c,J^c}_E(P) ≤p WM^{I^c,J^c∪E^c}(τ*(P)) where E^c is the set of all ground atoms over predicates in E.


Proof of Lemma 40. Let G = τ*(P), G′ = τ*(P′) with P′ as in Definition 14, (I, J) = AM^{I^c,J^c}_E(P), and (I′, J′) = WM^{I^c,J^c∪E^c}(G).

We first show I ⊆ I′, or equivalently

S^{I^c}_{G′}(J^c) ⊆ S^{I^c}_G(J^c ∪ E^c ∪ J′).

Because G′ ⊆ G, we get

S^{I^c}_{G′}(J^c ∪ E^c ∪ J′) ⊆ S^{I^c}_G(J^c ∪ E^c ∪ J′).

Because pred(B(P′)^−) ∩ E = ∅, all rules r ∈ G′ satisfy B(r)^− ∩ E^c = ∅ and we obtain

S^{I^c}_{G′}(J^c ∪ J′) ⊆ S^{I^c}_{G′}(J^c ∪ E^c ∪ J′).

Because pred(H(P)) ∩ pred(B(P)^−) ⊆ E and pred(B(P′)^−) ∩ E = ∅, all rules r ∈ G′ satisfy B(r)^− ∩ J′ = ∅ and we obtain

S^{I^c}_{G′}(J^c) = S^{I^c}_{G′}(J^c ∪ J′) ⊆ S^{I^c}_G(J^c ∪ E^c ∪ J′).

To show J′ ⊆ J, we use I ⊆ I′ and Proposition 20 (a):

S^{J^c}_G(I^c ∪ I′) ⊆ S^{J^c}_G(I^c ∪ I). □

Theorem 41. Let (P_i)_{i∈I} be an instantiation sequence for an aggregate program P and (P_j)_{j∈J} be a refinement of (P_i)_{i∈I}.

Then, AM((P_i)_{i∈I}) ≤p AM((P_j)_{j∈J}) ≤p WM(τ*(P)).

Property (AM ((Pj)j∈J) ≤p WM (τ∗(P ))). Let Ej , (Icj , J

cj ), and (Ij , Jj) be

defined as in (14) to (16) for the sequence (Pj)j∈J. Similarly, let E′j , (Icj′, Jc

j′),

and (I ′j , J′j) be defined as in (3) to (5) for the sequence (Gj)j∈J with Gj =

τ∗(Pj). Furthermore, let Ecj be the set of all ground atoms over atoms in Ej .

We first show E′j ⊆ Ecj for each j ∈ J by showing that E′j ⊆ Ec

j . ByLemma 39, only negative body literals have to be taken into account:

E′j = B(Gj)± ∩

⋃j<k

H(Gk)

= B(Gj)− ∩

⋃j<k

H(Gk).


Observe that pred(B(G_j)^−) ⊆ pred(B(P_j)^−) and pred(H(G_j)) ⊆ pred(H(P_j)). Thus, we get

pred(E′_j) = pred(B(G_j)^− ∩ ⋃_{j<k} H(G_k))
⊆ pred(B(P_j)^−) ∩ pred(⋃_{j<k} H(P_k))
⊆ pred(B(P_j)^−) ∩ pred(⋃_{j≤k} H(P_k))
= E_j.

It follows that E′_j ⊆ E^c_j.

By Theorem 25, we have ⊔_{j∈J}(I′_j, J′_j) ≤p WM(τ*(P)). To show the theorem, we show (I_j, J_j) ≤p (I′_j, J′_j). We omit the full induction proof and focus on the key idea: Using Lemma 40, whose precondition holds by construction of E_j, and Proposition 22, we get

AM^{I^c_j,J^c_j}_{E_j}(P_j) ≤p WM^{(I^c_j,J^c_j)⊔(∅,E^c_j)}(G_j) ≤p WM^{(I^{c′}_j,J^{c′}_j)⊔(∅,E′_j)}(G_j).

Property AM((P_i)_{i∈I}) ≤p AM((P_{i,j})_{(i,j)∈J}). We omit a full induction proof for this property because it would be very technical. Instead, we focus on the key idea why the approximate model of a refined instantiation sequence is at least as precise as the one of an instantiation sequence.

Let E_i and E_{i,j} be defined as in (14) for the instantiation and refined instantiation sequence, respectively. Clearly, we have E_{i,j} ⊆ E_i for each (i, j) ∈ J. Observe that (due to rule dependencies and Observation 21 (c)) calculating the approximate model of the refined sequence, using E_i instead of E_{i,j} in (16), would result in the same approximate model as for the instantiation sequence. With this, the property simply follows from the monotonicity of the stable operator. □

Theorem 42. Let (Pi)i∈I be an instantiation sequence of an aggregate program P such that Ei = ∅ for each i ∈ I as defined in (14).

Then, AM((Pi)i∈I) is total.

Proof of Theorem 42. Clearly, we have Ei = ∅ if all components are stratified. With this, the theorem follows from Lemma 38. □

Theorem 43. Let (Pi)i∈I be a (refined) instantiation sequence of an aggregate program P, and let (Ici, Jci) and (Ii, Ji) be defined as in (15) and (16).

Then, ⋃_{i∈I} τ∗_{Jci∪Ji}(Pi)^{(Ici,Jci)⊔(Ii,Ji)} and τ∗(P) have the same well-founded and stable models.

Proof of Theorem 43. Let Ei, (Ici, Jci), and (Ii, Ji) be defined as in (14) to (16) for the sequence (Pi)i∈I. Similarly, let E′i, (Ici′, Jci′), and (I′i, J′i) be defined


as in (3) to (5) for the sequence (Gi)i∈I with Gi = τ∗(Pi). Furthermore, we assume w.l.o.g. that I = {1, …, n}.

We have already seen in the proof of Theorem 41 that the atoms E′i are a subset of the ground atoms over predicates in Ei and that (Ii, Ji) ≤p (I′i, J′i).

Observing that ground atoms over predicates in Ei can only appear negatively in rule bodies, we obtain that

Gi^{(Ici′,Jci′)∪(I′i,J′i)∪(∅,E′i)} = Gi^{(Ici′,Jci′)∪(I′i,J′i)}.

By Theorem 28 and Corollary 30, we obtain that ⋃_{i∈I} Gi^{(Ici,Jci)⊔(Ii,Ji)} and τ∗(P) have the same well-founded and stable models. To shorten the notation, we let

Fi = τ∗(Pi)^{(Ici,Jci)⊔(Ii,Ji)}, Hi = τ∗_{Jci∪Ji}(Pi)^{(Ici,Jci)⊔(Ii,Ji)},
F = ⋃_{i∈I} Fi, and H = ⋃_{i∈I} Hi.

With this, it remains to show that the programs F and H have the same well-founded and stable models.

We let J = ⋃_{i∈I} Ji. Furthermore, we let τ∗(a) be a subformula in Fi and τ∗_{Jci∪Ji}(a) be a subformula in Hi where both subformulas originate from the translation of the closed aggregate a. (We see below that the existence of one implies the existence of the other because both formulas are identical in their context.)

Because an aggregate always depends positively on the predicates occurring in its elements, the intersection between ⋃_{i<k} H(Fk) = ⋃_{i<k} Jk and the atoms occurring in τ∗(a) is empty. Thus, the two formulas τ∗_{Jci∪Ji}(a) and τ∗_J(a) are identical. Observe that each stable model of either F or H is a subset of J. By Proposition 35, satisfiability of the aggregate formulas as well as their reducts is the same for subsets of J. Thus, both programs have the same stable models. Similarly, the well-founded model of both programs must be more precise than (∅, J). By Proposition 35, satisfiability of the aggregate formulas as well as their id-reducts is the same. Thus, both programs have the same well-founded model. □

Proposition 44. Let r be a safe normal rule, (I, J) be a finite four-valued interpretation, f ∈ {t, f}, and J′ be a finite two-valued interpretation.

Then, a call to GroundRule^{I,J}_{r,f,J′}(ι, B(r)) returns the finite set of instances g of r satisfying

J |= τ(B(g))^∧_I and (f = t or B(g)+ ⊈ J′).  (17)

Proof of Proposition 44. Observe that the algorithm does not modify f, r, (I, J), and J′. To shorten the notation below, let G_{σ,L} = GroundRule^{I,J}_{r,f,J′}(σ, L).


Calling G_{ι,B(r)}, the algorithm maintains the following invariants in subsequent calls G_{σ,L}:

(1) (B(r) \ L)σ+ ⊆ J,
(2) (B(r) \ L)σ− ∩ I = ∅, and
(3) each comparison in (B(r) \ L)σ holds.

We only prove the first invariant because the latter two can be shown in a similar way. We prove by induction.

Base. For the call G_{ι,B(r)}, the invariant holds because the set difference B(r) \ L is empty for L = B(r).

Hypothesis. We assume the invariant holds for a call G_{σ,L} and show that it is maintained in subsequent calls.

Step. Observe that there are only further calls if L is non-empty. In Line 3, a body literal l is selected from L. Observe that it is always possible to select such a literal. In case there are positive literals in L, we can simply select one of them. In case there are no positive literals in L, σ replaces all variables in the positive body of r. Because r is safe, all literals in Lσ are ground and we can select any one of them.

In the case that l is a positive literal, all substitutions σ′, obtained by calling Matches^{I,J}_l(σ) in the following line, ensure lσ′ ∈ J. Furthermore, σ is more general than σ′. Thus, we have

(B(r) \ L)σ′+ = (B(r) \ L)σ+ ⊆ J.

In Line 5, the algorithm calls G_{σ′,L′} with L′ = L \ {l}. We obtain

(B(r) \ L′)σ′+ = (B(r) \ L)σ′+ ∪ {lσ′} ⊆ J.

In the case that l is a negative literal or a comparison, we get (B(r) \ L)+ = (B(r) \ L \ {l})+. Furthermore, the substitution σ is either not changed or is discarded altogether. Thus, the invariant is maintained in subsequent calls to GroundRule.

We prove by induction over subsets L of B(r) with corresponding substitution σ satisfying invariants (1)–(3) that G_{σ,L} is finite and that g ∈ G_{σ,L} iff g is a ground instance of rσ that satisfies (17).

Base. We show the base case for L = ∅. Using invariant (1), we only have to consider substitutions σ with B(r)+σ ⊆ J. Because r is safe and σ replaces all variables in its positive body, σ also replaces all variables in its head and negative body. Thus, rσ is ground and the remainder of the algorithm just


filters the set {rσ}, while the invariants (1)–(3) ensure that J |= τ(B(rσ))^∧_I. The condition in Line 2 cannot apply because L = ∅. The condition in Line 9 discards rules rσ not satisfying f = t or B(rσ)+ ⊈ J′.

Hypothesis. We show that the property holds for L ≠ ∅ assuming that it holds for subsets L′ ⊂ L with corresponding substitutions σ′.

Step. Because L ≠ ∅, we only have to consider the case in Line 2. First, the algorithm selects an element l ∈ L. We have already seen that it is always possible to select such an element. Let L′ = L \ {l}. The algorithm then loops over the set

Σ = Matches^{I,J}_l(σ)

and, in Lines 4 to 5, computes the union

G_{σ,L} = ⋃_{σ′∈Σ} G_{σ′,L′}.

First, we show that the set G_{σ,L} is finite. In case l is not a positive literal, the set Σ has at most one element. In case l is a positive literal, observe that there is a one-to-one correspondence between Σ and the set {lσ′ | σ′ ∈ Σ}. We obtain that Σ is finite because {lσ′ | σ′ ∈ Σ} ⊆ J and J is finite. Furthermore, using the induction hypothesis, each set G_{σ′,L′} in the union G_{σ,L} is finite. Hence, the set G_{σ,L} returned by the algorithm is finite.

Second, we show that g ∈ G_{σ,L} implies that g is a ground instance of rσ satisfying (17). We have that g is a member of some G_{σ′,L′}. By the induction hypothesis, g is a ground instance of rσ′ satisfying (17). Observe that g is also a ground instance of rσ because σ is more general than σ′.

Third, we show that each ground instance g of rσ satisfying (17) is also contained in G_{σ,L}. Because g is a ground instance of rσ, there is a substitution θ more specific than σ such that g = rθ. In case the selected literal l ∈ L is a positive literal, we have lθ ∈ J. Then, there is also a substitution θ′ such that θ′ ∈ match(lσ, lθ). Let σ′ = σ ◦ θ′. By Definition 16, we have σ′ ∈ Σ. It follows that g ∈ G_{σ,L} because g ∈ G_{σ′,L′} by the induction hypothesis and G_{σ′,L′} ⊆ G_{σ,L}. In the case that l is not a positive literal, we have σ ∈ Σ and can apply a similar argument.

Hence, we have shown that the proposition holds for G_{ι,B(r)}. □
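The backtracking scheme analyzed above can be pictured in a few lines of Python. The following is a simplified reconstruction under our own data representation, not gringo's actual code: atoms are tuples, comparisons are omitted, positive body literals are matched against J one at a time, negative literals are checked against I once the rule is ground, and the flag f together with J′ realizes condition (17).

```python
# Simplified sketch of GroundRule (our reconstruction, not gringo's code).
# Atoms are tuples like ("edge", 1, 2); variables are capitalized strings.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def apply_subst(atom, sigma):
    return (atom[0],) + tuple(sigma.get(t, t) if is_var(t) else t for t in atom[1:])

def match(pattern, ground, sigma):
    """Extend sigma so that pattern instantiates to ground, or return None."""
    sigma = dict(sigma)
    for p, g in zip(pattern[1:], ground[1:]):
        if is_var(p):
            if sigma.setdefault(p, g) != g:
                return None
        elif p != g:
            return None
    return sigma

def ground_rule(head, pos, neg, I, J, f, Jp, sigma=None, done=()):
    """Instances of head :- pos, not neg whose positive body holds in J and
    whose negative body avoids I; if f is false, instances whose positive
    body lies entirely in Jp are discarded, mirroring condition (17)."""
    sigma = sigma or {}
    if pos:  # select a positive body literal and match it against J
        lit, rest = pos[0], pos[1:]
        out = []
        for atom in J:
            if atom[0] == lit[0] and len(atom) == len(lit):
                s = match(lit, atom, sigma)
                if s is not None:
                    out += ground_rule(head, rest, neg, I, J, f, Jp, s, done + (atom,))
        return out
    # the rule is safe, so sigma now binds every variable of the rule
    if any(apply_subst(n, sigma) in I for n in neg):
        return []
    if not f and set(done) <= Jp:
        return []
    return [(apply_subst(head, sigma), done, tuple(apply_subst(n, sigma) for n in neg))]
```

For instance, grounding path(X,Y) :- edge(X,Y) against J = {edge(1,2), edge(2,3)} yields two instances, while calling the function with f set to false and J′ = J discards both, which is the splitting behavior exploited in Lemma 46 below.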

Corollary 45. Let r be a safe normal rule and (I, J) be a finite four-valued interpretation.

Then, Inst^{I,J}({r}) = GroundRule^{I,J}_{r,t,∅}(ι, B(r)).

Proof of Corollary 45. The corollary directly follows from Proposition 44 and the definition of Inst^{I,J}({r}). □

Lemma 46. Let r be a safe normal rule, (I, J) be a finite four-valued interpretation, and J′ ⊆ J be a two-valued interpretation.


Then, we have

Inst^{I,J}({r}) = Inst^{I,J′}({r}) ∪ GroundRule^{I,J}_{r,f,J′}(ι, B(r)).

Proof of Lemma 46. Let G be the set of all ground instances of r and

G^{X,Y}_f = {g ∈ G | Y |= τ(B(g))^∧_I, (f = t or B(g)+ ⊈ X)}.

By Proposition 44 and Corollary 45, we can reformulate the lemma as G^{∅,J}_t = G^{∅,J′}_t ∪ G^{J′,J}_f. We have

G^{∅,J}_t = {g ∈ G | J |= τ(B(g))^∧_I},
G^{∅,J′}_t = {g ∈ G | J′ |= τ(B(g))^∧_I}, and
G^{J′,J}_f = {g ∈ G | J |= τ(B(g))^∧_I, B(g)+ ⊈ J′}.

Observe that, given J′ ⊆ J, we can equivalently write G^{∅,J′}_t as

G^{∅,J′}_t = {g ∈ G | J |= τ(B(g))^∧_I, B(g)+ ⊆ J′}.

Because the conditions B(g)+ ⊆ J′ and B(g)+ ⊈ J′ cancel each other, we get

G^{∅,J}_t = G^{∅,J′}_t ∪ G^{J′,J}_f. □
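The set identity behind Lemma 46 — the instances derivable from J split into those already derivable from J′ and those using at least one atom outside J′ — can be checked on toy data. In this sketch the names are ours and the sets of strings stand in for positive bodies B(g)+:

```python
# Toy check of the split in Lemma 46 (names and data are ours).
J = {"a", "b", "c"}
Jp = {"a", "b"}  # J' ⊆ J, the atoms seen in an earlier grounding step
bodies = [{"a"}, {"a", "b"}, {"b", "c"}, {"c"}]  # positive bodies B(g)+

inst_J = [B for B in bodies if B <= J]                 # G^{∅,J}_t
inst_Jp = [B for B in bodies if B <= Jp]               # G^{∅,J'}_t
fresh = [B for B in bodies if B <= J and not B <= Jp]  # G^{J',J}_f

# the two parts are disjoint and together cover exactly G^{∅,J}_t
assert {frozenset(B) for B in inst_J} == {frozenset(B) for B in inst_Jp + fresh}
assert len(inst_J) == len(inst_Jp) + len(fresh)
```

This is precisely the split a semi-naive grounder exploits: only the instances in the third set have to be produced when the atom base grows from J′ to J.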

Proposition 47. Let P be an aggregate program, (I, J) be a finite four-valued interpretation, Gε = Inst^{I,J}(Pε), Gη = Inst^{I,J}(Pη), Jα = Propagate^{I,J}_P(Gε, Gη), and Gα = Inst^{I,J∪Jα}(Pα).

Then,

(a) Assemble(Gα, Gη) = τ∗_J(P)^{I,J} and
(b) H(Gα) = T_{τ∗(P)^I}(J).

Proof of Proposition 47. We first show Property (a) and then (b).

Property (a). For a rule r ∈ P, we use rα to refer to the corresponding rule with replaced aggregate occurrences in Pα. Similarly, for a ground instance g of r, we use gα to refer to the corresponding instance of rα. Observe that τ∗_J(P)^{I,J} = τ∗_J(Inst^{I,J}(P)). We show that g ∈ Inst^{I,J}(P) iff gα ∈ Inst^{I,J∪Jα}(Pα). In the following, because the rule bodies of g and gα only differ regarding aggregates and their replacement atoms, we only consider rules with aggregates in their bodies.

Case g ∈ Inst^{I,J}(P). Let r be a rule in P containing aggregate a, α be the replacement atom of form (21) for a, and σ be a ground substitution such that rσ = g. We show that for each aggregate aσ ∈ B(g), we have ε_{r,a}(Gε, σ) ∪ G ≠ ∅ and J |= τ∗_G(aσ)_I with G = η_{r,a}(Gη, σ), and in turn Jα |= ασ. Because J |= τ∗(B(g))^∧_I, we get J |= τ∗_J(aσ)_I. It remains to show that ε_{r,a}(Gε, σ) ∪ G ≠ ∅ and τ∗_J(aσ) = τ∗_G(aσ). Observe that τ∗_J(aσ) = τ∗_G(aσ) because the set G obtained from rules in Gη contains all instances of elements of aσ whose conditions are satisfied by J, while the remaining literals of these


rules are contained in the body of g. Furthermore, observe that if no aggregate element is satisfied by J, we get ε_{r,a}(Gε, σ) ≠ ∅ because the corresponding ground instance of (22) is satisfied.

Case gα ∈ Inst^{I,J∪Jα}(Pα). Let rα be a rule in Pα containing the replacement atom α of form (21) for aggregate a and σ be a ground substitution such that rασ = gα. Because J ∪ Jα |= τ(B(gα))^∧_I, we have ασ ∈ Jα. Thus, we get that J |= τ∗_G(aσ)_I with G = η_{r,a}(Gη, σ). We have already seen in the previous case that τ∗_J(aσ) = τ∗_G(aσ). Thus, J |= τ∗_J(aσ)_I. Observing that g = rσ and aσ ∈ B(g), we get g ∈ Inst^{I,J}(P).

Property (b). This property follows from Property (a), Proposition 35, and Lemma 12. □

Proposition 48. Let P be an aggregate program, (Ic, Jc) be a finite four-valued interpretation, and J = S^{Jc}_{τ∗(P)}(Ic).

Then,

(a) GroundComponent(P, Ic, Jc) terminates iff J is finite.

If J is finite, then

(b) GroundComponent(P, Ic, Jc) = τ∗_{Jc∪J}(P)^{Ic,Jc∪J} and
(c) H(GroundComponent(P, Ic, Jc)) = J.

Proof of Proposition 48. We prove Properties (a) and (b) by showing that the function calculates the stable model by iteratively applying the T operator until a fixed point is reached.

Properties (a) and (b). At each iteration i of the loop, starting with 1, let Jαi be the value of Propagate^{I,J}_P(Gε, Gη) in Line 6, let Gεi, Gηi, and Gαi be the values on the right-hand side of the assignments in Lines 4, 5, and 7, and let Ji = H(Gαi). Furthermore, let J0 = ∅.

By Corollary 45 and Lemma 46, we get

Gεi = Inst^{Ic,Jc∪Ji−1}(Pε),
Gηi = Inst^{Ic,Jc∪Ji−1}(Pη),
Jαi = Propagate^{Ic,Jc∪Ji−1}_P(Gεi, Gηi), and
Gαi = Inst^{Ic,Jc∪Jαi∪Ji−1}(Pα).

Using Proposition 47 (b) and observing the one-to-one correspondence between Gαi and τ∗(P)^{Ic,Jc∪Ji−1}, we get

Ji = H(Gαi) = T_{τ∗(P)^{Ic}}(Jc ∪ Ji−1).

Observe that, if the loop exits, then the algorithm computes the fixed point of T^{Jc}_{τ∗(P)^{Ic}}, that is, J = S^{Jc}_{τ∗(P)}(Ic). Furthermore, observe that this fixed point calculation terminates whenever S^{Jc}_{τ∗(P)}(Ic) is finite. Finally, we obtain GroundComponent(P, Ic, Jc) = τ∗_{Jc∪J}(P)^{Ic,Jc∪J} using Proposition 47 (a).


Property (c). We have seen above that J is a fixed point of T^{Jc}_{τ∗(P)^{Ic}}. By Proposition 47 (b) and observing that the function Assemble only modifies rule bodies, we get H(GroundComponent(P, Ic, Jc)) = J. □
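The fixed point computation that GroundComponent performs can be pictured for normal rules with a deliberately naive instantiator. The following is a self-contained sketch under our own representation; aggregate propagation and the on-the-fly Matches machinery are omitted, and instantiation enumerates a given finite universe instead of matching against derived atoms:

```python
# Naive sketch of the fixed point J_i = T(Jc ∪ J_{i-1}) behind GroundComponent
# (our simplification for normal rules; not gringo's algorithm).
from itertools import product

def variables(rule):
    head, pos, neg = rule
    return sorted({t for atom in (head, *pos, *neg) for t in atom[1:]
                   if isinstance(t, str) and t[:1].isupper()})

def instances(rule, universe):
    head, pos, neg = rule
    vs = variables(rule)
    for values in product(sorted(universe), repeat=len(vs)):
        s = dict(zip(vs, values))
        def g(a, s=s):
            return (a[0],) + tuple(s.get(t, t) for t in a[1:])
        yield g(head), {g(a) for a in pos}, {g(a) for a in neg}

def ground_component(rules, Ic, Jc, universe):
    """Iterate the T operator over Jc ∪ J until nothing new is derived:
    keep heads whose positive body holds and whose negative body avoids Ic."""
    J = set()
    while True:
        new = {h for rule in rules for h, pos, neg in instances(rule, universe)
               if pos <= Jc | J and not (neg & Ic)}
        if new <= J:
            return J
        J |= new
```

For the transitive closure rules path(X,Y) :- edge(X,Y) and path(X,Z) :- edge(X,Y), path(Y,Z) with Jc = {edge(1,2), edge(2,3)}, the loop stabilizes after adding path(1,3), matching the stable model computed component-wise; on a finite universe this naive version always terminates, whereas Proposition 48 (a) concerns the general case where J may be infinite.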

Theorem 49. Let P be an aggregate program, (Pi)i∈I be a refined instantiation sequence for P, and Ei, (Ici, Jci), and (Ii, Ji) be defined as in (14) to (16). If (Pi)i∈I is selected by Algorithm 3 in Line 2, then we have that

(a) the call Ground(P) terminates iff AM((Pi)i∈I) is finite, and
(b) if AM((Pi)i∈I) is finite, then Ground(P) = ⋃_{i∈I} τ∗_{Jci∪Ji}(Pi)^{(Ici,Jci)⊔(Ii,Ji)}.

Proof of Theorem 49. Since the program is finite, its instantiation sequences are finite, too. We assume w.l.o.g. that I = {1, …, n} for some n ≥ 0. We let Fci and Gci be the values of variables F and G at iteration i at the beginning of the loop in Lines 4 to 7, and Fi and Gi be the results of the calls to GroundComponent in Lines 6 and 7 at iteration i.

By Proposition 48, we get that Lines 5 to 7 correspond to an application of the approximate model operator as in Definition 14. For each iteration i, we get

(Fci, Gci) = ⊔_{j<i} (Fj, Gj),
(Ici, Jci) = (H(Fci), H(Gci)),
(Ii, Ji) = (H(Fi), H(Gi)), and
Gi = τ∗_{Jci∪Ji}(Pi)^{(Ici,Jci)⊔(Ii,Ji)}

whenever (Ii, Ji) is finite. In the case that each (Ii, Ji) is finite, the algorithm returns in Line 8 the program

Gcn ∪ Gn = ⋃_{i∈I} Gi = ⋃_{i∈I} τ∗_{Jci∪Ji}(Pi)^{(Ici,Jci)⊔(Ii,Ji)}.

Clearly, the algorithm terminates iff each call to GroundComponent is finite, which is exactly the case when AM((Pi)i∈I) is finite. □

Corollary 50. Let P be an aggregate program. If Ground(P) terminates, then P and Ground(P) have the same well-founded and stable models.

Proof of Corollary 50. This is a direct consequence of Theorems 43 and 49. □

Proposition 51. Let a be a closed aggregate. If a is antimonotone, then τ∗(a)_I is classically equivalent to ⊤ if I |= τ∗(a) and to ⊥ otherwise, for any two-valued interpretation I.


Proof of Proposition 51. Let G be the set of ground instances of the aggregate elements of a and D ⊆ G be a set such that D ⋫ a.

Due to the antimonotonicity of the aggregate, we get τ∗_a(D) = ⊥. Thus, the reduct is constant because all consequents in τ∗(a) as well as τ∗(a)_I are equal to ⊥ and the antecedents in τ∗(a) are completely evaluated by the reduct. Hence, the proposition follows by Proposition 6 (b). □

Proposition 52. Let I be a finite two-valued interpretation, E be a set of aggregate elements, and b be an integer.

For T = H({e ∈ Inst(E) | I |= B(e)}), we get

(a) τ∗(#sum{E} ≻ b)_I is classically equivalent to τ∗(#sum+{E} ≻ b′) with ≻ ∈ {≥, >} and b′ = b − #sum−(T), and
(b) τ∗(#sum{E} ≺ b)_I is classically equivalent to τ∗(#sum−{E} ≺ b′) with ≺ ∈ {≤, <} and b′ = b − #sum+(T).

Proof of Proposition 52. We only show Property (a) because the proof of Property (b) is symmetric.

Let a = #sum{E} ≻ b, a+ = #sum+{E} ≻ b′, and G = Inst(E). Given an arbitrary two-valued interpretation J, we consider the following two cases:

Case J ⊭ τ∗(a)_I. There is a set D ⊆ G such that D ⋫ a, I |= τ(D)^∧, and J ⊭ τ∗_a(D)^∨. Let D′ = D ∪ {e ∈ G | I |= τ(e), w(H(e)) < 0}.

Clearly, D′ ⋫ a and I |= τ(D′)^∧. Furthermore, J ⊭ τ∗_a(D′)^∨ because we constructed D′ so that τ∗_a(D′)^∨ is stronger than τ∗_a(D)^∨, as more elements with negative weights have to be taken into consideration.

Next, observe that D′ ⋫ a+ holds because we have #sum+(H(D′)) = #sum+(H(D)) and #sum−(H(D′)) = #sum−(T), which corresponds to the value subtracted from the bound of a+. To show that J ⊭ τ∗_{a+}(D′)^∨, we show that τ∗_{a+}(D′)^∨ is stronger than τ∗_a(D′)^∨. Let C ⊆ G \ D′ be a set of elements such that D′ ∪ C ▷ a+. Because the justification of a+ is independent of elements with negative weights, each clause in τ∗_{a+}(D′)^∨ involving an element with a negative weight is subsumed by another clause without that element. Thus, we only consider sets C containing elements with positive weights. Observe that D′ ∪ C ▷ a holds because we have #sum(H(D′ ∪ C)) = #sum+(H(D′ ∪ C)) + #sum−(T). Hence, we get J ⊭ τ∗_{a+}(D′)^∨.

Case J ⊭ τ∗(a+)_I. There is a set D ⊆ G such that D ⋫ a+, I |= τ(D)^∧, and J ⊭ τ∗_{a+}(D)^∨. Let D′ = D ∪ {e ∈ G | I |= τ(e), w(H(e)) < 0}.

Observe that D′ ⋫ a+, I |= τ(D′)^∧, and J ⊭ τ∗_{a+}(D′)^∨. As in the previous case, we can show that τ∗_a(D′)^∨ is stronger than τ∗_{a+}(D′)^∨ because clauses in τ∗_a(D′)^∨ involving elements with negative weights are subsumed. Hence, we get J ⊭ τ∗_a(D′)^∨. □
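Numerically, the bound shift in Proposition 52 (a) is easy to sanity-check: once all negative tuples satisfied under I are forced into a candidate set, #sum against b agrees with #sum+ against b′ = b − #sum−(T). In the following toy check the weights stand in for the tuples of T; the helper names are ours and the example is not part of the proof:

```python
# Toy check of the bound shift of Proposition 52 (a); names and data are ours.
from itertools import combinations

def sum_pos(ws):
    return sum(w for w in ws if w > 0)

def sum_neg(ws):
    return sum(w for w in ws if w < 0)

T = (3, 2, -1, -4)        # weights of the tuples whose conditions hold in I
b = -2
b_shift = b - sum_neg(T)  # b' = b - #sum-(T) = -2 - (-5) = 3

negatives = tuple(w for w in T if w < 0)
for k in range(len(T) + 1):
    for X in combinations(T, k):
        if all(n in X for n in negatives):  # the reduct adds all negative tuples
            assert (sum(X) >= b) == (sum_pos(X) >= b_shift)
```

With X = (3, −1, −4), for example, both sides hold (−2 ≥ −2 and 3 ≥ 3); dropping the weight 3 falsifies both sides at once.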

Proposition 53. Let I be a two-valued interpretation, f be an aggregate function, E be a set of aggregate elements, and b be a ground term.


We get the following properties:

(a) τ∗(f{E} < b)_I ∨ τ∗(f{E} > b)_I implies τ∗(f{E} ≠ b)_I, and
(b) τ∗(f{E} = b)_I implies τ∗(f{E} ≤ b)_I ∧ τ∗(f{E} ≥ b)_I.

Proof of Proposition 53. Let G be the set of ground instances of E, a≺ = f{E} ≺ b for an aggregate relation ≺, and J be a two-valued interpretation.

Property (a). We show that J |= τ∗(a<)_I ∨ τ∗(a>)_I implies J |= τ∗(a≠)_I.

Case J |= τ∗(a<)_I. Observe that τ∗(a≠) is a conjunction of implications of form τ(D)^∧ → τ∗_{a≠}(D)^∨ with D ⊆ G and D ⋫ a≠. Furthermore, note that D ⋫ a≠ implies D ⋫ a<. Thus, τ∗(a<) contains the implication τ(D)^∧ → τ∗_{a<}(D)^∨. Because J |= τ∗(a<)_I, we get I ⊭ τ(D)^∧ or J |= τ∗_{a<}(D)^∨. Hence, the property holds in this case because J |= τ∗_{a<}(D)^∨ implies J |= τ∗_{a≠}(D)^∨.

Case J |= τ∗(a>)_I. The property can be shown analogously for this case.

Property (b). This property can be shown in a similar way as the previous one. We show by contraposition that J |= τ∗(a=)_I implies J |= τ∗(a≤)_I ∧ τ∗(a≥)_I.

Case J ⊭ τ∗(a≤)_I. Observe that τ∗(a≤) is a conjunction of implications of form τ(D)^∧ → τ∗_{a≤}(D)^∨ with D ⊆ G and D ⋫ a≤. Furthermore, note that D ⋫ a≤ implies D ⋫ a=. Thus, τ∗(a=) contains the implication τ(D)^∧ → τ∗_{a=}(D)^∨. Because J ⊭ τ∗(a≤)_I, we get I |= τ(D)^∧ and J ⊭ τ∗_{a≤}(D)^∨ for some D ⊆ G with D ⋫ a≤. Hence, the property holds in this case because J ⊭ τ∗_{a≤}(D)^∨ implies J ⊭ τ∗_{a=}(D)^∨.

Case J ⊭ τ∗(a≥)_I. The property can be shown analogously for this case. □

Proposition 54. Let I and J be two-valued interpretations, f be an aggregate function among #count, #sum+, #sum−, or #sum, E be a set of aggregate elements, and b be an integer.

We get the following properties:

(a) for I ⊆ J, we have J |= τ∗(f{E} < b)_I ∨ τ∗(f{E} > b)_I iff J |= τ∗(f{E} ≠ b)_I, and
(b) for J ⊆ I, we have J |= τ∗(f{E} = b)_I iff J |= τ∗(f{E} ≤ b)_I ∧ τ∗(f{E} ≥ b)_I.

Proof of Proposition 54. We only consider the case that f is the #sum function because the other ones are special cases of this function. Furthermore, we only consider the only-if directions because we have already established the other directions in Proposition 53.

Let G be the set of ground instances of E, TI = H({g ∈ G | I |= B(g)}), and TJ = H({g ∈ G | J |= B(g)}).

Property (a). Because I ⊆ J, we get #sum+(TI) ≤ #sum+(TJ) and #sum−(TJ) ≤ #sum−(TI). We prove by contraposition.


Case J ⊭ τ∗(a<)_I and J ⊭ τ∗(a>)_I. We use Propositions 34 and 52 to get the following two inequalities:

#sum−(TJ) ≥ b − #sum+(TI) because J ⊭ τ∗(a<)_I, and
#sum+(TJ) ≤ b − #sum−(TI) because J ⊭ τ∗(a>)_I.

Using #sum−(TJ) ≤ #sum−(TI), we can rearrange as

b − #sum+(TI) ≤ #sum−(TJ) ≤ #sum−(TI) ≤ b − #sum+(TJ).

Using #sum+(TI) ≤ #sum+(TJ), we obtain

#sum+(TI) = #sum+(TJ).

Using #sum+(TI) = #sum+(TJ), we get

b − #sum+(TI) ≤ #sum−(TJ) ≤ #sum−(TI) ≤ b − #sum+(TI)

and, thus, obtain

#sum−(TI) = #sum−(TJ) and b = #sum(TI) = #sum(TJ).

Observe that this gives rise to an implication in τ∗(a≠)_I that is not satisfied by J. Hence, we get J ⊭ τ∗(a≠)_I.

Property (b). Because J ⊆ I, we get #sum+(TJ) ≤ #sum+(TI) and #sum−(TI) ≤ #sum−(TJ).

Case J |= τ∗(a≤)_I and J |= τ∗(a≥)_I. Using Propositions 34 and 52, we get

#sum+(TJ) ≥ b − #sum−(TI) because J |= τ∗(a≥)_I, and
#sum−(TJ) ≤ b − #sum+(TI) because J |= τ∗(a≤)_I.

Observe that we can proceed as in the proof of the previous property because the relation symbols are just flipped. We obtain

#sum−(TI) = #sum−(TJ) and b = #sum(TI) = #sum(TJ).

We get J |= τ∗(a=)_I because for any subset of tuples in TI that does not satisfy the aggregate, there are additional tuples in TJ that satisfy the aggregate. □


Proposition 55. Let I and J be finite two-valued interpretations, f be an aggregate function, E be a set of aggregate elements, and b be a ground term.

For TI = {H(e) | e ∈ Inst(E), I |= B(e)} and TJ = {H(e) | e ∈ Inst(E), J |= B(e)}, we get the following properties:

(a) for J ⊆ I, we have J |= τ∗(f{E} ≠ b)_I iff there is no set X ⊆ TI such that f(X ∪ TJ) = b, and
(b) for I ⊆ J, we have J |= τ∗(f{E} = b)_I iff there is a set X ⊆ TJ such that f(X ∪ TI) = b.

Proof of Proposition 55. Let G = Inst(E) and a≺ = f{E} ≺ b for ≺ ∈ {=, ≠}.

Property (a). We prove by contraposition that J |= τ∗(a≠)_I implies that there is no set X ⊆ TI such that f(X ∪ TJ) = b.

Case: there is a set X ⊆ TI such that f(X ∪ TJ) = b. Let D = {e ∈ G | I |= B(e), H(e) ∈ X ∪ TJ}. Because TJ ⊆ TI, we get D ⋫ a≠. Furthermore, we have I |= τ(D)^∧. Observe that D contains all elements whose conditions are satisfied by J. Hence, we get J ⊭ τ∗_{a≠}(D)^∨ and, in turn, J ⊭ τ∗(a≠)_I.

We prove the remaining direction, again, by contraposition.

Case J ⊭ τ∗(a≠)_I. There is a set D ⊆ G such that I |= τ(D)^∧ and J ⊭ τ∗_{a≠}(D)^∨. Let X = H(D). Because J ⊭ τ∗_{a≠}(D)^∨, we get f(X ∪ TJ) = b.

Property (b). This property can be shown in a similar way as the previous one. □
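Proposition 55 (a) suggests a direct, if naive, way to evaluate f{E} ≠ b under the reduct: enumerate the subsets X of TI and test whether f(X ∪ TJ) hits b. A brute-force sketch for f = #sum follows; the function name is ours, and for simplicity it assumes the tuples of TI carry pairwise distinct weights:

```python
# Brute-force check of the criterion in Proposition 55 (a) for f = #sum
# (our sketch; assumes the tuples of TI carry pairwise distinct weights).
from itertools import combinations

def holds_neq(TI, TJ, b):
    """True iff no X ⊆ TI yields sum(X ∪ TJ) = b, i.e., J |= τ*(#sum{E} ≠ b)_I."""
    base = sum(TJ)
    extra = [w for w in TI if w not in TJ]  # X ∪ TJ only adds weights outside TJ
    return all(base + sum(X) != b
               for k in range(len(extra) + 1)
               for X in combinations(extra, k))
```

For TI = (1, 2, −3) and TJ = (1,), the reachable sums are 1, 3, −2, and 0, so the aggregate ≠ 5 holds under the criterion while ≠ 0 does not.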