
Combining introspection and communication with rationality and reactivity in agents*

P. Dell'Acqua¹, F. Sadri² and F. Toni²

¹ Computing Science Department, Uppsala University, Box 311, S-751 05 Uppsala, Sweden, [email protected]
² Department of Computing, Imperial College of Science, Technology and Medicine, 180 Queen's Gate, London SW7 2BZ, UK, [email protected]

July 1998

* To appear as LNAI, Springer-Verlag. © Springer-Verlag.

Abstract. We propose a logic-based language for programming agents that can reason about their own beliefs as well as the beliefs of other agents, and that can communicate with each other. The agents can be reactive, rational/deliberative or hybrid, combining both reactive and rational behaviour. We illustrate the language by means of examples.

1 Introduction

Kowalski & Sadri [12] propose an approach to agents within an extended logic programming framework. In the remainder of the paper we will refer to their agents as KS-agents. KS-agents are hybrid in that they exhibit both rational (or deliberative) and reactive behaviour. The reasoning core of KS-agents is a proof procedure that combines forward and backward reasoning. Backward reasoning is used primarily for planning, problem solving and other deliberative activities. Forward reasoning is used primarily for reactivity to the environment, possibly including other agents. The proof procedure is executed within an observe-think-act cycle that allows the agent to be alert to the environment and react to it, as well as think and devise plans. Both the proof procedure and the KS-agent architecture can deal with temporal information. The proof procedure (the IFF proof procedure [9]) treats both inputs from the environment and agents' actions as abducibles (hypotheses).

Barklund et al. [1] and Costantini et al. [6] present an extension of the Reflective Prolog programming paradigm [7, 8] to model agents that are introspective and that communicate with each other. We will refer to this approach as Reflective Prolog with Communication (RPC). In RPC introspection is achieved via the meta-predicate solve, and communication is achieved via the meta-predicates tell and told. A communication act is triggered every time an agent agent1 has a goal of the form agent1 : told(agent2, A). This stands both for "agent1 is told by agent2 that A" and for "agent1 asks agent2 whether A". This goal is solved by agent2 telling agent1 that A, that is, agent2 : tell(agent1, A), standing for "agent2 tells agent1 that A". The information is passed from agent2 to agent1 by eventually instantiating A. A main limitation of this approach is that agents cannot tell anything to other agents unless explicitly asked.

The ability to provide agents with some sort of "proactive" communication primitive is widely documented in the literature [18-20, 5, 14, 17]. For example, one can model agents that advertise their services so that other agents, possibly with the help of mediators, can find agents that provide services for them.

An interesting application of agents is that of a virtual marketplace on the Web where users create autonomous agents to buy and sell goods on their behalf. Chavez & Maes [3], for example, propose a marketplace where users can create selling and buying agents by giving them a description of the item they want to sell or buy. The main goal of the approach is to help users in the negotiations between buyers and sellers, and to sell the goods better (i.e., at a higher price) than the user would be able to otherwise, by taking advantage of the agents' processing speed and communication bandwidth. Chavez & Maes' agents are (i) proactive: "... they try to sell themselves, by going into a marketplace, contacting interested parties (namely, buying agents) and negotiating with them to find the best deal", and (ii) autonomous: "... once released into the marketplace, they negotiate and make decisions on their own, without requiring user intervention". Chavez & Maes point out their agents' lack of rationality: "Our experiment demonstrated the need and desire for 'smarter' agents whose decision making processes more closely mimic those of people and which can be directed at a more abstract, motivational level."

In this paper we propose a combination of (a version of) the RPC programming paradigm and KS-agents. In the resulting framework, reactive, rational or hybrid agents can reason about their own beliefs as well as the beliefs of other agents, and can communicate proactively with each other. In such a framework, the agents' behaviour can be regulated by condition-action rules such as: if I am asked by another, friendly agent about something and I can prove it from my beliefs, then I will tell the agent about it.

In the proposed approach, the two primitives for communication in RPC, tell and told, are treated as abducibles within the cycle of the KS-agent architecture.

The remainder of the paper is structured as follows. In section 2 we give some basic definitions (in particular, we define ordinary and abductive logic programs) and we review the IFF proof procedure. In section 3 we review the KS-agent architecture. In section 4 we review RPC. In sections 5 and 6 we define our approach to introspection and communication in agents, respectively. In section 7 we illustrate the framework by means of an example. In section 8 we conclude and discuss future work.

2 Preliminaries

2.1 Basic definitions

A logic program is a set of clauses of the form:

  A ← L1 ∧ ... ∧ Ln   (n ≥ 0)

where every Li (1 ≤ i ≤ n) is a literal, A is an atom, and all variables are implicitly universally quantified, with scope the entire clause. If A = p(t), with t a vector of terms, then the clause is said to define p.

The completion of a predicate p [4] defined in a given logic program P by the set of clauses:

  p(t1) ← D1   ...   p(tk) ← Dk   (k ≥ 1)

is the iff-definition:

  p(x) ↔ [x = t1 ∧ D1] ∨ ... ∨ [x = tk ∧ Dk]

where x is a vector of variables, all implicitly universally quantified, with scope the entire iff-definition. Any variable in a disjunct Di which is not in x is implicitly existentially quantified, with scope the disjunct. If p is not defined in P, then the completion of p is the iff-definition p(x) ↔ false.

The selective completion comp_S(P) of a logic program P with respect to a set S of predicates of the language of P is the union of the completions of all the predicates in S. Note that the completion of a logic program P [4] is the selective completion of P with respect to all the predicates of the language of P, together with Clark's equality theory [4].

An integrity constraint is an implication of the form:

  L1 ∧ ... ∧ Ln ⇒ A   (n ≥ 0)

where L1, ..., Ln are literals and A is an atom, possibly false. All variables in an integrity constraint are implicitly universally quantified, with scope the entire integrity constraint.

An abductive logic program [10] is a triple ⟨P, A, I⟩, where P is a logic program, A is a set of predicates in the language of P, and I is a set of integrity constraints. The predicates in A are referred to as the abducible predicates, and the atoms built from the abducible predicates are referred to as abducibles. We write Ā for the set of non-abducible predicates in the language of P.

Without loss of generality (see [10]), we can assume that abducible predicates have no definitions in P. Abducibles can be thought of as hypotheses that can be used to extend the given logic program in order to provide an "explanation" for given queries (or observations). Explanations are required to "satisfy" the integrity constraints. Different notions of explanation and satisfaction have been used in the literature. The simplest notion of satisfaction is consistency of the explanation with the program and the integrity constraints.

In the sequel, constant, function and predicate symbols are written as sequences of characters in typewriter style. Variables are written as sequences of characters in italic style starting with a lower-case character. Thus, for example, buys may be a predicate symbol, Tom may be a constant symbol, while x and city are variables.
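As a concrete instance of the completion construction (an illustration of ours, not an example from the paper): if p is defined in P only by the clauses p(a) ← q and p(y) ← r(y), then its completion is the iff-definition

  p(x) ↔ [x = a ∧ q] ∨ [x = y ∧ r(y)]

in which x is universally quantified with scope the entire definition, while y, which does not occur in x, is existentially quantified with scope its own disjunct.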

Example 1. Let the abductive logic program ⟨P, A, I⟩ be given as follows:

  P = { has(x, y) ← buys(x, y)
        has(x, y) ← steals(x, y)
        honest(Tom) }
  A = { buys, steals }
  I = { honest(x) ∧ steals(x, y) ⇒ false }.

Then, given the observation G = has(Tom, computer), the set of abducibles {buys(Tom, computer)} is an explanation for G, satisfying I, whereas the set {steals(Tom, computer)} is not, because it is inconsistent with I.

2.2 The IFF proof procedure

The IFF proof procedure [9] is a rewriting procedure, consisting of a number of inference rules, each of which replaces a formula by one which is equivalent to it in a theory T of iff-definitions. We assume T = comp_Ā(P), for some given abductive logic program ⟨P, A, I⟩. The basic inference rules are:¹

1. Unfolding: given an atom p(t) and a definition p(x) ↔ D1 ∨ ... ∨ Dn in T, p(t) is replaced by (D1 ∨ ... ∨ Dn)σ, where σ is the substitution {x/t}.
2. Propagation: given an atom p(s) and an integrity constraint L1 ∧ ... p(t) ... ∧ Ln ⇒ A, a new integrity constraint L1 ∧ ... t = s ... ∧ Ln ⇒ A is added.
3. Logical simplification:
   [B ∨ C] ∧ E is replaced by [B ∧ E] ∨ [C ∧ E] (splitting);
   not A ∧ B ⇒ C is replaced by B ⇒ C ∨ A (negation elimination);
   B ∧ false is replaced by false, B ∨ false is replaced by B, and so on.
4. Equality rewriting: applies equality rewrite rules (see [9]) simulating the unification algorithm of [16] and the application of substitutions.

Given an initial goal G (a conjunction of literals, whose variables are free), a derivation for G is a sequence of formulae F1 = G ∧ I, ..., Fm such that each derived goal Fi+1 is obtained from Fi by applying one of the inference rules, as follows: unfolding, to atoms that are either conjuncts in Fi or conjuncts in bodies of integrity constraints in Fi; propagation, to atoms that are conjuncts in Fi and integrity constraints in Fi; equality rewriting and logical simplification. Every negative literal not A occurring as a conjunct in the initial goal, as well as in any derived goal, is rewritten as an integrity constraint A ⇒ false.

Every derivation relies upon some control strategy, and some strategies are preferable to others. For example, splitting should always be postponed as long as possible, because it is an explosive operation.

¹ The full IFF proof procedure includes two additional inference rules, case analysis and factoring. In this paper, we omit these rules for simplicity.

Let F1 = G ∧ I, ..., Fn = N ∨ Rest be a derivation for G such that N ≠ false, N is some conjunction of literals and integrity constraints, and no inference step can be applied to N which has not already been applied earlier in the derivation. Then, F1, ..., Fn is a successful derivation. An answer extracted from N is a pair (D, σ) such that

- σ′ is a substitution replacing all free and existentially quantified variables in N by variable-free terms in the underlying language, and σ′ satisfies all equalities and disequalities in N;
- D is the set of all abducible atoms that are conjuncts in Nσ′, and σ is the restriction of σ′ to the variables in G.

The full proof procedure is proven sound and complete with respect to a given semantics [9], based on Kunen's three-valued completion semantics.

Example 2. Let ⟨P, A, I⟩ be the abductive logic program in Example 1. Then, comp_Ā(P) is

  { has(x, y) ↔ buys(x, y) ∨ steals(x, y)
    honest(x) ↔ x = Tom }.

The following is a (successful) derivation for the goal G = has(Tom, computer):

  F1 = G ∧ I
  F2 = G ∧ [x = Tom ∧ steals(x, y) ⇒ false]   (by unfolding)
  F3 = G ∧ [steals(Tom, y) ⇒ false]   (by equality rewriting)
  F4 = [buys(Tom, computer) ∨ steals(Tom, computer)] ∧ [steals(Tom, y) ⇒ false]   (by unfolding)
  F5 = [buys(Tom, computer) ∧ [steals(Tom, y) ⇒ false]] ∨
       [steals(Tom, computer) ∧ [steals(Tom, y) ⇒ false]].   (by splitting)

The answer (D = {buys(Tom, computer)}, σ = {}) can be extracted from the first disjunct of F5. An additional successful derivation is F1, ..., F9 with

  F6 = [buys(Tom, computer) ∧ [steals(Tom, y) ⇒ false]] ∨
       [steals(Tom, computer) ∧ [steals(Tom, y) ⇒ false] ∧ [y = computer ⇒ false]]   (by propagation)
  F7 = [buys(Tom, computer) ∧ [steals(Tom, y) ⇒ false]] ∨
       [steals(Tom, computer) ∧ [steals(Tom, y) ⇒ false] ∧ false]   (by equality rewriting)
  F8 = [buys(Tom, computer) ∧ [steals(Tom, y) ⇒ false]] ∨ false   (by logical simplification)
  F9 = [buys(Tom, computer) ∧ [steals(Tom, y) ⇒ false]]   (by logical simplification)

from which the same answer (D, σ) as above can be extracted.

Note that in this example, unfolding and equality rewriting in the conditions of the integrity constraint are performed before any other operation. In general, many of the operations involving the integrity constraints can be done at compile time, and the simplified (more efficient) version of the integrity constraints conjoined to the initial goal.
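The IFF procedure itself is non-trivial to implement. For readers who want something executable, the following is a minimal Prolog sketch of ours, covering Example 1 only: it is not the IFF procedure, but a naive abductive meta-interpreter that collects abducibles into a candidate explanation and tests it against the integrity constraints by consistency checking. The predicate names (rule/2, ic/1, demo/3, explanation/2) are our own.

  :- use_module(library(lists)).   % member/2

  % Abducible predicates and object-level clauses of Example 1.
  abducible(buys(_, _)).
  abducible(steals(_, _)).
  rule(has(X, Y), [buys(X, Y)]).
  rule(has(X, Y), [steals(X, Y)]).
  rule(honest(tom), []).

  % ic(Body): integrity constraint Body => false.
  ic([honest(X), steals(X, _)]).

  % demo(Goals, Delta0, Delta): Goals provable assuming the abducibles in Delta.
  demo([], Delta, Delta).
  demo([G|Gs], Delta0, Delta) :-
      abducible(G), !,
      ( member(G, Delta0) -> Delta1 = Delta0 ; Delta1 = [G|Delta0] ),
      demo(Gs, Delta1, Delta).
  demo([G|Gs], Delta0, Delta) :-
      rule(G, Body),
      demo(Body, Delta0, Delta1),
      demo(Gs, Delta1, Delta).

  % explanation(G, Delta): Delta explains G and makes no constraint body provable.
  explanation(G, Delta) :-
      demo([G], [], Delta),
      \+ ( ic(Body), demo(Body, Delta, Delta) ).

The query ?- explanation(has(tom, computer), E). yields E = [buys(tom, computer)] and, on backtracking, rejects the steals explanation, matching Example 1.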

3 Kowalski-Sadri agents

Every KS-agent can be thought of as an abductive logic program, equipped with an initial goal. The abducibles are actions to be executed as well as observations to be performed. Updates, observations, requests and queries are treated uniformly as goals. The abductive logic program can be a temporal theory. For example, the event calculus [13] can be written as an abductive logic program:

  holds_at(p, t2) ← happens(e, t1) ∧ (t1 < t2) ∧ initiates(e, p) ∧ not broken(t1, p, t2)
  broken(t1, p, t2) ← happens(e, t) ∧ terminates(e, p) ∧ (t1 < t < t2).

The first clause expresses that a property p holds at some time t2 if it is initiated by an event e at some earlier time t1 and is not broken (i.e. persists) from t1 to t2. The second clause expresses that a property p is broken (i.e. does not persist) from a time t1 to a later time t2 if an event e that terminates p happens at a time t between t1 and t2.

The predicate happens is abducible, and can be used to represent both observations, as events that have taken place in the past, and events scheduled to take place in the future. An integrity constraint

  I1: happens(e, t) ∧ preconditions(e, t, p) ∧ not holds_at(p, t) ⇒ false

expresses that an event e cannot happen at a time t if the preconditions p of e do not hold at time t.

The predicates preconditions, initiates and terminates have application-specific definitions, e.g.

  preconditions(carry_umbrella, t, p) ← p = own_umbrella
  preconditions(carry_umbrella, t, p) ← p = borrowed_umbrella
  initiates(rain, raining)
  terminates(sun, raining).

Additional integrity constraints might be given to represent reactive behaviour of intelligent agents, e.g.

  I2: happens(raining, t) ⇒ happens(carry_umbrella, t + 1)

or to prevent concurrent execution of actions (events):

  happens(e1, t) ∧ happens(e2, t) ⇒ e1 = e2.

The basic "engine" of a KS-agent is the IFF proof procedure, executed via the following cycle:

To cycle at time t:

  (i) observe any input at time t,
  (ii) record any such input,
  (iii) resume the IFF procedure by propagating the inputs,
  (iv) continue applying the IFF procedure, using for steps (iii) and (iv) a total of r units of time,
  (v) select an atomic action which can be executed at time t + r + 2,
  (vi) execute the selected action at time t + r + 2 and record the result,
  (vii) cycle at time t + r + 3.

The cycle starts at time t by observing and recording any inputs from the environment (steps (i) and (ii)). Steps (i) and (ii) are assumed to take one unit of time each. Then, the proof procedure is applied for r units of time (steps (iii) and (iv)). The amount of resources r available in steps (iii) and (iv) is bounded by some predefined amount n: by decreasing n the agent is made more reactive, by increasing n the agent is made more rational. Propagation is applied first (step (iii)), in order to allow for an appropriate reaction to the inputs. Afterwards, an action is selected and executed, taking care to record the result (steps (v) and (vi)). Steps (v) and (vi) together are assumed to take one unit of time. Selected actions can be thought of as outputs into the environment, and observations as inputs from the environment. From every agent's viewpoint, the environment contains all other agents.

Selected actions correspond to abducibles in an answer extracted from a disjunct in a derived goal in a derivation. The disjunct represents an intention, i.e. a (possibly partial) plan executed in stages. A sensible action selection strategy may select actions from the same disjunct (intention) at different iterations of the cycle. Failure of a selected plan is obtained via logical simplification, after having propagated false into the selected disjunct.

Actions that are generated in an intention may have times associated with them. The times may be absolute, for example happens(ring_bell, 3), or may lie within a constrained range, for example happens(step_forward, t) ∧ (1 < t < 10). In step (v), the selected action will either have an absolute time equal to t + r + 2 or a time range compatible with an execution time of t + r + 2. In the latter case, recording the result of the execution instantiates the time of the action.

Integrity constraints provide a mechanism not only for constraining explanations and plans, as in I1, but also for allowing reactive, condition-action type of behaviour, as in I2.
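As a rough executable rendering of this cycle (a sketch of ours, under simplifying assumptions: observe/2, record/1, propagate/1, think/1, select_action/2 and execute/2 are placeholder predicates standing in for observation, IFF reasoning and action execution, and the resource bound r is passed in rather than adapted):

  % Sketch of the observe-think-act cycle; a real agent replaces the stubs below.
  cycle(T, R) :-
      observe(T, Inputs),                % step (i): inputs at time T
      record(Inputs),                    % step (ii)
      propagate(Inputs),                 % step (iii): forward reasoning first
      think(R),                          % step (iv): IFF steps for R time units
      ExecTime is T + R + 2,
      select_action(ExecTime, Action),   % step (v)
      execute(Action, Result),           % step (vi)
      record(Result),
      NextT is T + R + 3,
      cycle(NextT, R).                   % step (vii): cycle forever, by design

  % Trivial stubs so that the sketch loads; they make the agent do nothing.
  observe(_, []).
  record(_).
  propagate(_).
  think(_).
  select_action(_, noop).
  execute(noop, ok).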

4 Reflective Prolog with communication

4.1 Reflective Prolog

Reflective Prolog (RP) [7, 8] is a metalogic programming language that extends the language of Horn clauses [11, 15] with higher-order-like features.

The language is that of Horn clauses, except that terms are defined differently, in order to include names that represent at the meta-level the expressions of the language itself. The alphabet of RP differs from the usual alphabet of Horn clauses by distinguishing variables from metavariables and by the presence of metaconstants. Metavariables can only be substituted with names of sentences of RP, and metaconstants are intended as names for constant, function and predicate symbols of RP. If c is a constant, a function or a predicate symbol, then we write 'c as a convenient way to represent the metaconstant that names c. Similarly, if 'c is a metaconstant, then its name is ''c, and so on.

Furthermore, the alphabet of RP contains the (unary) predicate symbol solve. This allows us to extend at the meta-level the intended meaning of predicates (partially) defined at the object-level. Clauses defining solve will be referred to as meta-level clauses, and clauses defining object-level predicates will be referred to as object-level clauses. Metavariables are written as sequences of characters in italic style starting with an upper-case character.

Compound terms and atoms are represented at the meta-level as name terms. For example, the name of the term f(a, x) is the name term 'f('a, 'x), where 'f and 'a are the metaconstants that name the function symbol f and the constant a, respectively, and 'x stands for the name of the value of the variable x.

The intended connection between the object-level and the meta-level of RP is obtained by means of the following (inter-level) reflection axioms:

  A ← solve('A)   and   solve('A) ← A,   for all atoms A.

The first asserts that whenever an atom of the form solve('A) is provable at the meta-level, then A is provable at the object-level. These axiom schemata are not explicitly present in any given program; rather, they are simulated within the modified SLD-resolution underlying RP.

Suppose, for example, that we want to express the fact that an object obj satisfies all the relations in a given class. If we want to formalise that statement at the object-level, then for every predicate q in class we have to write the clause q(obj). Instead, we may formalise our statement at the meta-level as:

  solve(P('obj)) ← belongs_to(P, class)

where P is a metavariable ranging over the names of predicate symbols.

4.2 Communication

RPC [1, 6] is an extension of Reflective Prolog [7, 8] that accommodates communication between agents. Agents are seen as logic programs. In an agent setting, solve may be seen as representing a given agent's beliefs; for example agent1 : solve('A), for some atom A, stands for "agent1 believes A". RPC allows for two communication acts, expressed via the two meta-predicates tell(X, Y) and told(X, Y). An atom tell('agent2, 'A) in the theory representing agent1 stands for "agent1 tells agent2 that A holds". An atom told('agent2, 'A) in the theory representing agent1 stands for "agent1 is told by agent2 that A holds". The latter atom can also be interpreted as "agent1 asks agent2 whether A holds", i.e. the predicate told can be used to express queries of agents to other agents.

Communication acts are formalized in RPC by means of (inter-theory) reflection axioms based on the predicate symbols tell and told:

  [agent1 : told('agent2, 'A)] ← [agent2 : tell('agent1, 'A)]

for all atoms A and agent names 'agent1 and 'agent2. The intuitive meaning of each such axiom is that every time an atom of the form tell('agent1, 'A) can be derived from a theory agent2 (which means that agent2 wants to communicate proposition A to agent1), the atom told('agent2, 'A) is consequently derived in the theory agent1 (which means that proposition A becomes available to agent1). These axioms are not explicitly present in any given program; rather, they are simulated within the modified SLD-resolution underlying RPC.

These two predicates are intended to model the simplest and most neutral form of communication among agents, with no implication about provability (or truth) of what is communicated, and no commitment about how much of its information an agent communicates, or to whom. An agent may communicate to another agent everything it can derive (in its associated theory), or only part of what it can derive, or it may even lie, that is, communicate something it cannot derive.

The intended connection between tell and told is that an agent may receive from another agent (by means of told) only the information the second agent has explicitly addressed to it (by means of tell). Thus, an agent can regulate its interaction with other agents by means of appropriate clauses defining the predicate tell.

What use an agent makes of any information given to it by others is entirely up to the agent itself. Thus, the way an agent communicates with others is not hard-wired in the language. Rather, it is possible to define in a program different behaviours for different agents, or different behaviours of one agent in different situations. (Several examples of the use of agents in RPC can be found in [1, 6].) For example, the following clauses:

  Bob : [solve(X) ← reliable(A) ∧ told(A, X)]
  Bob : [solve(X) ← told('John, 'not X)]

express that an agent Bob trusts every agent that is reliable but distrusts John. The clause:

  Bob : [tell(A, X) ← agent(A) ∧ solve(X)]

says that the agent Bob tells the others whatever he can prove.

RP and its extension RPC rely upon a Prolog-style control strategy, and therefore do not allow for agents to tell other agents or ask for information proactively.
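To make the tell/told connection concrete, here is a small Prolog simulation of ours (not RPC itself: names are flattened to ordinary terms, agent labels become an explicit first argument, and the inter-theory reflection axiom becomes the single bridging clause at the top; the agent John and his fact are invented for the demonstration):

  % Sketch: the inter-theory reflection axiom as an explicit bridge.
  told(Agent1, Agent2, A) :- tell(Agent2, Agent1, A).

  % Bob trusts every reliable agent, and tells others whatever he believes.
  solve(bob, X) :- reliable(bob, Other), told(bob, Other, X).
  tell(bob, Other, X) :- agent(Other), solve(bob, X).

  % A second, hypothetical agent who volunteers one fact to anyone who asks.
  tell(john, _Anyone, weather(sunny)).

  agent(bob).
  agent(john).
  reliable(bob, john).

The query ?- solve(bob, X). yields X = weather(sunny) (Bob believes what the reliable John tells him), and ?- told(john, bob, X). yields the same binding (Bob passes his belief on when asked), illustrating that information flows only through explicit tell clauses.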

5 Adding introspection to KS-agents

In this paper, following the framework of KS-agents, agents are represented as abductive logic programs rather than ordinary logic programs as in RPC. The abductive logic program, in its completed form, is embedded within a KS-agent's cycle.

In order to accommodate within agents beliefs about different agents, we give solve an additional argument rather than using labels as in RPC. The atom solve(Agent, A) stands for "Agent believes A". We will assume that solve can only take names of atoms as its second argument. By convention, solve(A) will be an abbreviation for solve(Agent, A) within the program P of Agent itself.

Instead of incorporating the reflection principle linking object- and meta-level within the proof procedure, as in RP, we incorporate it via (a finite set of) clauses added to the given (abductive) logic program. (A similar approach is presented in [2].)

Let ⟨P, A, I⟩ be an abductive logic program and let C = H ← B be a clause in P. Then we define a reflection principle RP:

  RP(C) = O(C) if C is an object-level clause
  RP(C) = M(C) if C is a meta-level clause

where:

- If H = p(t), then
    O(C) = { solve(X) ← X = 'p('t) ∧ B }.
  Note that necessarily p ∉ A, since we assume, without loss of generality, that abducible predicates are not defined in P.
- If H = solve('p('t)) and p ∉ A, then
    M(C) = { p(x) ← x = t ∧ B }.
- If H = solve(X('t)), then
    M(C) = { p(t) ← X = 'p ∧ B | for every p ∉ A and p ≠ solve }.
- If H = solve(X), then
    M(C) = { p(x) ← X = 'p('x) ∧ B | for every p ∉ A and p ≠ solve }.

RP(P) is given by the union of RP(C) for all C ∈ P.

Let ⟨P, A, I⟩ be an abductive logic program. The abducibility set of A, written Γ(A), is the set of clauses:

  Γ(A) = { solve(X) ← X = 'a('x) ∧ a(x) | for all a ∈ A }.

The abducibility set of an abductive logic program allows a meta-level representation of provability of abducible predicates.

The intended connection among object-level, meta-level and abducible atoms is captured by the following definition. Let ⟨P, A, I⟩ be an abductive logic program. The associated program of P with respect to A, written Φ(P, A), is defined as:

  Φ(P, A) = P ∪ RP(P) ∪ Γ(A).

The associated integrity constraints of I with respect to A, written Φ(I, A), are defined as:

  Φ(I, A) = I ∪ { solve('p('x)) ⇒ p(x) | for all p ∈ A }.

The meta-abductive logic program associated with ⟨P, A, I⟩ is:

  ⟨Φ(P, A), A, Φ(I, A)⟩.

The addition of the new integrity constraints in Φ(I, A) allows the agent to propagate (and thus compute the consequences of) any new information it receives about abducible predicates, in whatever (meta-level or object-level) form, without any need to alter the original set of integrity constraints I or the program P.

6 Adding communication to KS-agents

In this paper, we interpret tell(X, Y) and told(X, Y) as abducible predicates in meta-abductive logic programs. As for solve, we can give tell and told an additional argument, instead of introducing labels, to represent communication between agents. For simplicity, we will abbreviate tell(Agent, X, Y) (resp. told(Agent, X, Y)) within the program P of Agent itself as tell(X, Y) (resp. told(X, Y)). As with solve, we will assume that tell and told take only names of atoms as their last argument.

Example 3. Let agent1 be represented by the abductive logic program ⟨P, A, I⟩ with:

  P = { solve(X) ← told(A, X)
        desire(y) ← y = car
        good_price(p, x) ← p = 0 }
  A = { tell, told, offer }
  I = { desire(x) ∧ told(A, 'good_price('p, 'x)) ⇒ tell(A, 'offer('p, 'x)) }.

Namely, agent1 believes anything it is told (by any other agent), and it desires to have a car. The third clause in P says that anything that is free is at a good price. Moreover, if the agent desires something and it is told (by some other agent) of a good price for it, then it makes an offer to the other agent, by telling it. Note that a more accurate representation of the integrity constraint would include time.

The corresponding meta-abductive logic program is

  Φ(P, A) = P ∪
    { solve(X) ← X = 'desire('y) ∧ y = car
      solve(X) ← X = 'good_price('p, 'x) ∧ p = 0
      desire(x) ← X = 'desire('x) ∧ told(A, X)
      good_price(p, x) ← X = 'good_price('p, 'x) ∧ told(A, X)
      solve(Y) ← Y = 'tell('A, 'X) ∧ tell(A, X)
      solve(Y) ← Y = 'told('A, 'X) ∧ told(A, X)
      solve(Y) ← Y = 'offer('p, 'x) ∧ offer(p, x) }

  A = { tell, told, offer }

  Φ(I, A) = I ∪
    { solve('tell('A, 'X)) ⇒ tell(A, X)
      solve('told('A, 'X)) ⇒ told(A, X)
      solve('offer('p, 'x)) ⇒ offer(p, x) }.

Given an abductive logic program ⟨P, A, I⟩, corresponding to some agent, and the associated meta-abductive logic program ⟨Φ(P, A), A, Φ(I, A)⟩, the agent's cycle applies the IFF procedure with

  T = comp_Ā(Φ(P, A)).

The equality rewriting rule needs to be modified to take into account equality between names.

Example 4. Let ⟨P, A, I⟩ be the abductive logic program of Example 3. Assume that agent1 has the input observation:

  told('agent2, 'good_price('50, 'car))

meaning that agent1 has been told by agent2 of a good price (of 50) for a car. Then, the initial goal G is:

  told('agent2, 'good_price('50, 'car)).

A computed answer for G is

  D = { told('agent2, 'good_price('50, 'car)), tell('agent2, 'offer('50, 'car)) }
  σ = {}.

Within the KS-agent architecture, the following requirements are met:

- KS-agents are known by their symbolic names.
- When a KS-agent sends a message, it directs that message to a specific addressee.
- When a KS-agent receives a message, it knows the sender of that message.
- Messages may get lost.
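To see the condition-action behaviour of Example 3 in isolation, here is a tiny Prolog sketch of ours (not the modified IFF procedure: quoted names are flattened to ordinary terms, and the integrity constraint is run as an explicit forward rule react/2, a predicate we invent for the demonstration):

  % Sketch: the condition-action rule of Example 3 as a forward rule.
  desire(car).

  % desire(x) ∧ told(A, good_price(p, x)) ⇒ tell(A, offer(p, x))
  react(told(Agent, good_price(Price, Item)),
        tell(Agent, offer(Price, Item))) :-
      desire(Item).

The query ?- react(told(agent2, good_price(50, car)), Action). yields Action = tell(agent2, offer(50, car)), i.e. the communicative action abduced in the computed answer of Example 4.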

Both tell and told are treated as actions: every time (the cycle of) an agent agent1 selects a communicative action, i.e., an action of the form

  told('agent2, A)   or   tell('agent2, A),

agent1 will attempt to execute it. If the attempt is successful, the record

  told('agent2, A)   or   tell('agent2, A)

is conjoined to the goals of agent1, as an additional input. If the attempt is not successful, the record

  told('agent2, A) ⇒ false   or   tell('agent2, A) ⇒ false

is conjoined to the goals of agent1, as an additional input. In the next section we formalise an example showing how proactive communication is achieved by executing the proof procedure within the cycle.

7 Example

The following example demonstrates the running of the proof procedure within the cycle, action selection, proactive communication whereby one agent volunteers information to another, and how, during the planning phase, such information can help in the choice of intention.

The example is as follows: an agent wishes to register for a conference, let us say Jelia, that is to take place in Paris on the 10th, and to make travel arrangements to go to Paris on the 10th (for simplicity we omit the month). So the agent's original goal is the conjunction:

  register(Jelia) ∧ travel(Paris, 10).

The agent has the following program P:

  C1: travel(city, date) ← go(city, date, train)
  C2: travel(city, date) ← go(city, date, plane)
  C3: early_registration(Jelia) ← send_form(Jelia, date) ∧ (date < 3)
  C4: go(city, date, means) ← book(city, date, means)
  C5: book(city, date, means) ← told('ticket_agent, 'available('city, 'date, 'means)) ∧ tell('ticket_agent, 'reserve('city, 'date, 'means))
  C6: solve(X) ← told('ticket_agent, X).

Clauses C1 and C2 say that one travels to a city on a given date if one goes there on that date by train or by plane. C3 says that the deadline for early registration for Jelia is the 3rd. C4 says that one goes to a city on a given date by some means if one makes a booking for that journey. C5 says that one makes a booking if the ticket agent confirms availability of a ticket and one makes a reservation. Note that in a more thorough representation we would record the transaction time at which a booking is made (which must be before the time of travel). In that case we would have an extra argument in tell and told representing such transaction times. We ignore this issue here for simplicity.

The agent has the following set of integrity constraints I:

  I1: register(conference) ⇒ early_registration(conference)
  I2: go(city, date, means) ∧ strike(city, date, means) ⇒ false.

I1 states the agent's departmental policy that anyone registering for a conference must take advantage of early registration. I2 states that one cannot use a mode of transport which is subject to a strike.

Finally, the agent has the following set of abducibles A:

  { send_form, tell, told, register, available, reserve, strike }.

Now suppose that the cycle of the agent starts at time 1 and that the agent does not observe any input. Thus, the cycle effectively starts at step (iv) by applying the IFF proof procedure (using ⟨Φ(P, A), A, Φ(I, A)⟩, which we do not show here) to the goal obtained by conjoining the original goal and the integrity constraints Φ(I, A). By repeatedly applying the IFF proof procedure, the goal is transformed (within the initial cycle or within some later iteration, depending on the resource parameter r) into

  register(Jelia) ∧ travel(Paris, 10) ∧ send_form(Jelia, date) ∧ (date < 3).

At this point the cycle can select (at step (v)) the action send_form(Jelia, date), to be performed at a time less than 3. If the agent has been too slow and time 3 has already passed, the agent has failed its goals. Suppose the agent succeeds. Note that at any time any other agent can send this agent a message. So suppose the ticket agent sends a message that trains are on strike in Paris on the 10th, i.e. at some iteration of the cycle, at step (i), the agent receives the following input:

  told('ticket_agent, 'strike('Paris, '10, 'train)).

In that iteration of the cycle this information is propagated (step (iii)) and the simplified constraint

  go(Paris, 10, train) ⇒ false

is added to the goal. Meanwhile the sub-goal travel(Paris, 10) is unfolded (step (iv)) into

  go(Paris, 10, train) ∨ go(Paris, 10, plane).

The information about the strike will be used to remove the first possibility (i.e. the first disjunct), leaving only

  go(Paris, 10, plane)

which will become the agent's intention. This will be unfolded into the plan

  told('ticket_agent, 'available('Paris, '10, 'plane)) ∧ tell('ticket_agent, 'reserve('Paris, '10, 'plane)).

The appropriate actions will be selected during some iterations of the cycle. For lack of space we have ignored the cycle of the ticket agent.
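As a rough executable approximation of the travel part of this example (a sketch of ours: plain backward chaining instead of the IFF cycle, names flattened to ordinary terms, I2 approximated by negation as failure, travel/3 given a third argument to expose the chosen means, and tell_action/2 an invented stub that merely prints the communicative action):

  :- use_module(library(lists)).   % member/2

  travel(City, Date, Means) :-
      go(City, Date, Means).

  go(City, Date, Means) :-
      member(Means, [train, plane]),
      \+ strike(City, Date, Means),    % I2 approximated by negation as failure
      book(City, Date, Means).

  book(City, Date, Means) :-
      told(ticket_agent, available(City, Date, Means)),
      tell_action(ticket_agent, reserve(City, Date, Means)).

  % Observations recorded so far, as facts rather than abducibles.
  strike(paris, 10, train).                        % volunteered by the ticket agent
  told(ticket_agent, available(paris, 10, plane)).

  % Selecting and executing a communicative action is stubbed as a side effect.
  tell_action(Agent, Msg) :-
      format("tell(~w, ~w)~n", [Agent, Msg]).

The query ?- travel(paris, 10, Means). prints tell(ticket_agent, reserve(paris,10,plane)) and yields Means = plane: the strike report prunes the train branch, as in the derivation above.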

8 Conclusions

We have presented an approach to agents that can reason about their own beliefs as well as the beliefs of other agents, and that can communicate with each other. The approach results from the combination of the approach to agents in [12] and the approach to meta-reasoning and communication in [1, 6]. We have illustrated the approach by means of a number of examples.

The approach needs to be extended in a number of ways. For simplicity, we have ignored the treatment of time. However, time plays an important role in most agent applications and should be explicitly taken into account.

We have considered only two communication performatives, tell and told. Existing communication languages, e.g. [5], consider additional performatives, e.g. deny, achieve and unachieve. We are currently investigating whether some of these additional performatives could be defined via communication protocols, as definitions and integrity constraints within our framework.

The primitive told is used to express both active requests for information and passive communication. The two roles should be separated out, possibly with the addition of a third predicate ask, distinguished from told. Then the predicate told could be defined in terms of ask and tell, rather than being an abducible. For example, an agent agent1 is told of X by another agent agent2 if and only if agent2 tells agent1 of X, or agent1 actively asks agent2 about X and agent2 gives a positive answer.

We have implicitly assumed that different agents share the same content language. However, this assumption is not essential. Indeed, "translator agents" could be defined, acting as mediators between agents with different content languages. This is possible by virtue of the metalogic features of the language.

Acknowledgments

The authors are grateful to the anonymous referees for their comments. The second and third authors are supported by the UK EPSRC project "Logic-based multi-agent systems".

References

1. J. Barklund, S. Costantini, P. Dell'Acqua, and G. A. Lanzarone. Metareasoning agents for query-answering systems. In Troels Andreasen, Henning Christiansen, and Henrik Legind Larsen, editors, Flexible Query-Answering Systems, pages 103-122. Kluwer Academic Publishers, Boston, Mass., 1997.
2. J. Barklund, S. Costantini, P. Dell'Acqua, and G. A. Lanzarone. Reflection principles in computational logic. Submitted to J. of Logic and Computation, 1997.
3. A. Chavez and P. Maes. Kasbah: An agent marketplace for buying and selling goods. In B. Crabtree and N. Jennings, editors, Proc. 1st Intl. Conf. on the Practical Application of Intelligent Agents and Multi-Agent Technology, pages 75-90. The Practical Application Company, 1996.
4. K. L. Clark. Negation as failure. In H. Gallaire and J. Minker, editors, Logic and Data Bases. Plenum Press, New York, 1978.
5. P. R. Cohen and H. J. Levesque. Communicative actions for artificial agents. In V. Lesser, editor, Proc. 1st Intl. Conf. on Multiagent Systems, pages 65-72. AAAI Press / MIT Press, 1995.

6. S. Costantini, P. Dell'Acqua, and G. A. Lanzarone. Reflective agents in metalogic programming. In Alberto Pettorossi, editor, Meta-Programming in Logic, LNCS 649, pages 135-147, Berlin, 1992. Springer-Verlag.
7. S. Costantini and G. A. Lanzarone. A metalogic programming language. In G. Levi and M. Martelli, editors, Proc. 6th Intl. Conf. on Logic Programming, pages 218-233, Cambridge, Mass., 1989. MIT Press.
8. S. Costantini and G. A. Lanzarone. A metalogical programming approach: Language, semantics and applications. Int. J. of Experimental and Theoretical Artificial Intelligence, 6:239-287, 1994.
9. T. H. Fung and R. Kowalski. The IFF proof procedure for abductive logic programming. J. Logic Programming, 33(2):151-165, 1997.
10. A. C. Kakas, R. A. Kowalski, and F. Toni. The role of abduction in logic programming. In D. Gabbay, C. Hogger, and A. Robinson, editors, Handbook of Logic in Artificial Intelligence and Logic Programming, volume 5, pages 235-324. Oxford University Press, UK, 1998.
11. R. A. Kowalski. Predicate logic as a programming language. In J. L. Rosenfeld, editor, Information Processing 74, pages 569-574, Amsterdam, 1974. North-Holland.
12. R. A. Kowalski and F. Sadri. Towards a unified agent architecture that combines rationality with reactivity. In Dino Pedreschi and Carlo Zaniolo, editors, Logic in Databases, Intl. Workshop LID'96, LNCS 1154, pages 137-149, Berlin, 1996. Springer-Verlag.
13. R. A. Kowalski and M. Sergot. A logic-based calculus of events. New Generation Computing, 4:67-95, 1986.
14. Y. Labrou and T. Finin. Semantics and conversations for an agent communication language. In M. N. Huhns and M. P. Singh, editors, Readings in Agents, pages 234-242, San Francisco, 1997. Morgan Kaufmann.
15. John W. Lloyd. Foundations of Logic Programming, Second Edition. Springer-Verlag, Berlin, 1987.
16. Alberto Martelli and Ugo Montanari. An efficient unification algorithm. ACM TOPLAS, 4:258-282, 1982.
17. M. P. Singh. Towards a formal theory of communication for multiagent systems. In Proc. 12th Intl. Joint Conf. on Artificial Intelligence, pages 69-74, Sydney, Australia, 1991. Morgan Kaufmann.
18. B. van Linder, W. van der Hoek, and J.-J. Ch. Meyer. Communicating rational agents. In B. Nebel and L. Dreschler-Fischer, editors, KI-94: Advances in Artificial Intelligence, LNAI 861, pages 202-213, Berlin, 1994. Springer-Verlag.
19. E. Verharen and F. Dignum. Cooperative information agents and communication. In P. Kandzia and M. Klusch, editors, Cooperative Information Agents, LNAI 1202, pages 195-208, Berlin, 1997. Springer-Verlag.
20. G. Wagner. Multi-level security in multiagent systems. In P. Kandzia and M. Klusch, editors, Cooperative Information Agents, LNAI 1202, pages 272-285, Berlin, 1997. Springer-Verlag.