
Sensible Agents

K. S. Barber, A. Goel, T. J. Graser, T. H. Liu, R. H. Macfadzean, C. E. Martin, S. Ramaswamy

The Laboratory for Intelligent Processes and Systems
The Department of Electrical and Computer Engineering
The University of Texas at Austin, Austin, TX 78712, USA

ABSTRACT

The practical deployment of distributed agent-based systems mandates that each agent behave sensibly. This paper focuses on the development of flexible, responsive, adaptive systems based on Sensible Agents. Sensible Agents perceive, process, and respond based on an understanding of both local and system goals. Each agent is capable of (1) deliberative or reactive planning and execution of one or more domain-specific services/tasks, (2) maintaining and interpreting knowledge about states, events, and goals related to itself, other agents, and the environment, and (3) adapting its behavior according to its understanding of its own local goals and overall system goals. This paper addresses the above issues in the context of applied semiotics, a field that analyzes and develops the formal tools of knowledge acquisition, representation, organization, generation, enhancement, communication, and utilization. A Sensible Agent architecture has been developed where each agent is composed of five modules: a Self-Agent Modeler, an External Agent Modeler, an Action Planner, an Autonomy Reasoner, and a Conflict Resolution Advisor.

1. INTRODUCTION

The development of flexible, responsive, and adaptive systems is a highly desirable goal when deploying automated systems operating in complex and dynamic domains. The Sensible Agent architecture is proposed with this goal in mind. Capable of dynamically adapting their level of autonomy, Sensible Agents perceive, process, and respond based on an understanding of both system goals and their own local goals. This paper provides an overview of the Sensible Agent architecture and discusses the methodologies by which domain-specific responsibilities are assigned to agents.

A critical consideration for Sensible Agent behavior is the agent's level of autonomy. The term level of autonomy refers to the types of roles an agent plays in its planning interactions with other agents. This research seeks to prove the following hypothesis [1]:

The operational level of agent autonomy is key to an agent's ability to respond to dynamic situational context (i.e. the states, events, and goals that exist in a multi-agent system), conflicting goals, and constraints on behavior.

An autonomy level is assigned for each goal an agent holds. Levels of autonomy are defined along a spectrum, as shown in Figure 1: (1) Command-driven -- the agent does not plan and must obey orders given by another agent; (2) Consensus -- the agent works as a team member to devise plans; (3) Locally Autonomous -- the agent plans alone, unconstrained by other agents; and (4) Master -- the agent devises plans for itself and its followers, who are command-driven.

Figure 1: Spectrum of Autonomy (Command-driven: agent responds to external command; Consensus: agents work as a team to devise actions; Locally Autonomous: agent in charge of planning its own actions; Master: agent plans for self and others, issues commands)
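As a minimal illustration, the four-point spectrum can be encoded as a simple enumeration; the Python sketch below is an assumed encoding for exposition, not part of the published architecture.

    from enum import Enum

    class AutonomySpectrum(Enum):
        """The four autonomy levels of Figure 1 (illustrative encoding)."""
        COMMAND_DRIVEN = 1      # obeys orders from another agent; does not plan
        CONSENSUS = 2           # devises plans jointly as a team member
        LOCALLY_AUTONOMOUS = 3  # plans alone, unconstrained by other agents
        MASTER = 4              # plans for itself and its command-driven followers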

2. SENSIBLE AGENT ARCHITECTURE

Each Sensible Agent is composed of five major modules as shown in Figure 2:

A Self-Agent Modeler contains the behavioral model of an agent.

An External Agent Modeler contains knowledge about other agents and the environment.

An Autonomy Reasoner determines the appropriate autonomy level for each goal, assigns an autonomy level to each goal, and reports autonomy-level constraints to other modules in the self-agent.

An Action Planner solves domain problems, stores agent goals, and executes problem solutions.

* This research is supported in part by the Texas Higher Education Coordinating Board #003658452 and the Applied Research Laboratories, and by Office of Naval Research Grant N00014-96-1-0298.



A Conflict Resolution Advisor identifies, classifies, and generates possible solutions for conflicts occurring between the self-agent and other agents.

Figure 2: Sensible Agent Architecture
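A structural sketch of this five-module composition is given below; the class names mirror the module names above, but the stub layout is an illustrative assumption rather than the authors' implementation.

    class SelfAgentModeler: pass           # behavioral model of the self-agent
    class ExternalAgentModeler: pass       # models of other agents and the environment
    class AutonomyReasoner: pass           # assigns autonomy levels to goals
    class ActionPlanner: pass              # domain planning and execution
    class ConflictResolutionAdvisor: pass  # identifies and classifies conflicts

    class SensibleAgent:
        """One agent composed of the five modules shown in Figure 2."""
        def __init__(self):
            self.self_modeler = SelfAgentModeler()
            self.external_modeler = ExternalAgentModeler()
            self.autonomy_reasoner = AutonomyReasoner()
            self.action_planner = ActionPlanner()
            self.conflict_advisor = ConflictResolutionAdvisor()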

Self-Agent and External Agent Modelers

Sensible Agents are system components which actively respond based on a dynamic understanding of system goals and local agent goals. An agent's behaviors are described by its internal states, events, and goals. An agent's understanding of the external world is built from its interpretations of the states, events, and goals of other agents and the environment. A Sensible Agent must be able to readily perceive, appreciate, understand, and demonstrate cognizant behavior.

As a product of object-oriented domain analysis, each agent is assigned responsibility for particular domain-specific services or tasks and is modeled with a specification including: (1) data/knowledge the agent is responsible for, (2) attributes which characterize the agent, (3) dependencies between the respective agent, other agents, and its environment, and (4) the agent's behavior as the set of states, events, and transitions that define how an agent performs its designated tasks [3].

In addition to what an agent must know about itself, it must also have some knowledge of other agents with which it interacts in the system. The knowledge of these other agents includes descriptive attributes as well as limited knowledge of the high-level actions performed when fulfilling their respective domain-specific responsibilities.

The resulting meta-agent model is held in two modules in the Sensible Agent, the Self Agent Modeler (SAM) and the External Agent Modeler (EAM). The SAM contains the agent’s representation of itself and dependencies, while the EAM contains representations for each of the other agents and the environment with which the agent interacts. These modules contain a representation of the needed knowledge, and other modules interface with the SAM and EAM to retrieve and update this knowledge.

Agent models include both declarative and behavioral knowledge [1]. Declarative knowledge includes classes, objects, and attributes. Behavioral knowledge includes the dynamic behavior of an object as described by its states and the transitions among those states. Agent models also contain rules of composition involving agent interdependency. Two types of rules of composition are those based on data or service dependencies among agents and those based on dynamic interactions among agents.

A Sensible Agent must completely capture domain-specific knowledge through its representations. Given this perspective, a modeling mechanism that is flexible and sufficiently rich to allow a designer to represent, in detail, the internal structure of these agents is critical to the realization of a Sensible Agent architecture. Moreover, these representations must be similar to what a user might observe in the real-world operation of the system in question. In this respect, identification and handling of failures is a very important issue. Extended Statecharts (ESCs), a derivation of Statecharts with temporal and hierarchical extensions, have been developed as a comprehensive mechanism for the behavioral modeling of Sensible Agents. ESCs allow for the explicit representation of declarable problem-specific agent soft failures, thereby allowing failure-related information to be incorporated within the high-level system design. ESCs exploit the XOR state representation in Statecharts to integrate failure information in a high-level system representation. Combined with a powerful event and transition structure, this modeling technique allows the development of detailed high-level specifications of agent behaviors.

An ESC is a 4-tuple (S, E, G, T), where S is a set of states, E is a set of events, G is a set of guards, and T is a set of transitions. The set of states S is defined as

S = ∪_{i=1}^{n} S_i



where n is the number of levels in the model hierarchy. Exit-safe states are states in which a substate is stable enough to process a transition out of its parent state. That is, events causing a transition between higher-level (parent) states are handled only if the substate of the parent state is an exit-safe substate. This extension explicitly allows the developer to specify certain non-interruptible critical operations. Non-exit-safe states are states that are not exit-safe and hence do not allow higher-level state transitions.

Figure 3: ESC Graphical Notations (agent definition, state, decomposed state, exit-safe state, non-exit-safe state, intra-level transitions, inter-level transitions)

Figure 3 illustrates the graphical ESC notations. The dashed rectangle represents the definition of an agent, which is a domain-specific definition. The solid rectangle is used to represent a state that is decomposed immediately. The arrow pointing to a state denotes a default state that the agent reaches first among other states at the same level. The shaded rectangle is used to represent decomposed states; this means that the state S is decomposed further in another diagram. Two types of transitions are distinguished to manage the hierarchical information transfer between states: (i) intra-level transitions (T_intra), between states in the same hierarchical level, and (ii) inter-level transitions (T_inter), between states in different hierarchical levels. Intra-level transitions are used to represent normal behavior and inter-level transitions are used to represent abnormal behavior.

Figure 4: ESC Example Graphical Representation

Figure 4 shows an example ESC representation. In the figure, Y is an agent that has states C and D in an AND composition. State C has substates A and B that are in a XOR combination, A being the default state. D has substates L, M, and N, where L is the default state. Notice that J, K, and M are exit-safe states. Therefore, a transition from Y to another higher-level parent state may occur if and only if the substate of C is either J or K and the substate of D is M.
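The exit-safe gate in this example can be stated compactly. The sketch below (names assumed) checks the Figure 4 condition: a higher-level transition out of Y may fire only when the active substate of every AND region is exit-safe.

    EXIT_SAFE = {"J", "K", "M"}  # exit-safe states from Figure 4

    def can_exit_parent(active_substates):
        """An inter-level transition out of the parent may fire only if the
        active substate of every AND region is exit-safe."""
        return all(s in EXIT_SAFE for s in active_substates.values())

    print(can_exit_parent({"C": "J", "D": "M"}))  # True: J and M are exit-safe
    print(can_exit_parent({"C": "K", "D": "L"}))  # False: L is not exit-safe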

ESCs have been used in the modeling of manufacturing control software in [2] and in the design of the radar and weapon subsystems in [3].

Autonomy Reasoner

The representation of autonomy is based on four autonomy constructs [4]:

Responsibility is a measure of how much the agent must plan to see a goal solved.

Commitment is a measure of the extent to which a goal must be solved.

Authority is a measure of the agent's ability to access system resources.

Independence is a measure of how freely the agent can plan.

The autonomy level, AL, is a 4-tuple, (R, C, A, I), resulting from assignments to the autonomy constructs. These four autonomy constructs provide a guide for agent planning, ensure consistent agent behavior within autonomy levels, enable agents to choose productive problem-solving groups, and support flexibility for system constraints.
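A minimal record-type sketch of this 4-tuple follows; the field types and defaults are assumptions, since the paper does not fix concrete representations.

    from dataclasses import dataclass, field

    @dataclass
    class AutonomyLevel:
        """The 4-tuple (R, C, A, I) of autonomy constructs for one goal."""
        responsibility: str                          # planning-interaction framework (e.g. "consensus")
        commitment: float                            # price of giving up the goal
        authority: set = field(default_factory=set)  # agents available for subgoal allocation
        independence: float = 0.0                    # how freely the agent can plan (assumed 0..1)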

The responsibility construct represents how a particular goal relates to other agents and their goals. It also designates which system agents are involved in creating the plan to achieve the goal. An agent's planning resources are divided among the goals it is trying to achieve and the goals it is attempting to help other agents achieve. The responsibility construct is the representation of the planning interaction framework for an agent goal and corresponds directly to the autonomy levels listed above.

The commitment construct represents the price an agent must pay to give up a goal that it had previously intended to achieve. Agents can accept and give up goals based on their commitment assignments. An agent’s commitment to achieving a goal (internal commitment) is represented separately from the agent’s commitment to achieving the goal within a particular planning framework (organizational commitment). This allows agents to track goal ownership during system operation.

The authority construct is used by Sensible Agents to determine and record how their access to resources in the system is changed by autonomy-level transitions during system operation. System agents may use their own resources on behalf of another agent when they accept goals from that agent. In addition, agents who are part of a problem-solving group (in a consensus or master/command-driven relationship) are more likely to distribute goals among agents in this group. Therefore, forming such autonomy-level agreements with other agents can increase an agent's access to system resources. Dissolving such agreements would decrease the agent's access to those resources controlled by other agents. The authority construct maintains a list of agents to whom the self-agent can readily allocate subgoals for any goal.

Finally, the independence construct represents how much an agent is constrained (or unconstrained) by its system with respect to its planning activities. The interpretation of the independence construct relies on two concepts: (1) bounding the impact of an agent's planning choices on its overall system, based on the inherent system utility of candidate goals, and (2) defined levels of system constraints. An agent assigns two utility values (U_agent, U_system) to each goal it considers accepting, where U_agent represents the goal's value to the agent itself, and U_system represents the goal's value to the agent's overall system. As an agent's level of independence increases, it can accept goals with higher U_agent/U_system ratios. In addition, the social laws (rules of conduct) under which an agent operates are relaxed as the agent gains more independence.
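One possible realization of this bound is sketched below; the linear form of the threshold is an assumption chosen for illustration, not the paper's calculation.

    def may_accept(u_agent, u_system, independence, max_ratio=5.0):
        """Accept a goal only if its agent-to-system utility ratio falls within
        the bound allowed by independence (0.0 = fully constrained, 1.0 = free)."""
        allowed_ratio = 1.0 + independence * (max_ratio - 1.0)
        return (u_agent / u_system) <= allowed_ratio

    print(may_accept(3.0, 1.0, independence=0.2))  # False: ratio 3.0 > allowed 1.8
    print(may_accept(3.0, 1.0, independence=0.8))  # True: ratio 3.0 <= allowed 4.2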

The Autonomy Reasoner determines the autonomy level for each Sensible Agent goal and assigns values to the autonomy constructs. These autonomy constructs effectively control how the agent interacts with other agents during planning. The primary objective of autonomy reasoning is predicting which agents in a system can best plan for a particular goal, such that constraints on cost, time, and utility are likely to be satisfied. A secondary objective of autonomy reasoning is to ensure that undesirable interactions between agents are minimized. Therefore, the Autonomy Reasoner considers the potential for a particular autonomy level to result in violations of system constraints.

There are a number of causal factors in determining the appropriate autonomy level for a goal. These include information availability, information errors, information ambiguity, the availability of resources for planning and execution, goal priority, and goal deadline. From the perspective of one agent, information about itself (located in the Self Agent Modeler) will generally be complete and accurate; information on the environment and on external agents (some of whom may not be members of the system) is subject to varying availability, accuracy, ambiguity, and timeliness.

Given these causal factors and an understanding of the relationships among them, a probabilistic model of autonomy can be devised. Belief nets offer a powerful and expressive approach on which to base such a model, and their application to autonomy reasoning is briefly outlined below.

Each node in a belief network represents a random variable. Links between nodes represent conditional probability transformations between the parent and child. Root nodes (nodes without parents) are initially assigned prior probabilities. Given the prior probabilities, beliefs in each possible value of the other nodes in the network can be calculated. If the values of certain nodes are known or estimated (e.g. from the Self Agent or External Agent Modelers), the beliefs in all nodes can be modified to reflect the known information.

In applying belief nets to autonomy reasoning, the key nodes represent the autonomy level (with values from the autonomy spectrum shown in Figure 1) and the contribution of each agent to planning. Supporting nodes represent the variables that have a causal effect on the key variables. The conditional probability transformations may be learned from simulation, or be specified by expert judgment.
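As a toy instance of this scheme, the sketch below evaluates a two-node net (an information-quality root and an autonomy-level child) by direct enumeration; all probability values are invented for illustration.

    # Prior on the root node (information quality).
    P_INFO = {"good": 0.7, "poor": 0.3}

    # Conditional probability table P(autonomy level | information quality).
    P_LEVEL = {
        "good": {"local": 0.6, "consensus": 0.3, "master": 0.1},
        "poor": {"local": 0.1, "consensus": 0.4, "master": 0.5},
    }

    def level_beliefs(observed_info=None):
        """Marginal belief in each autonomy level, optionally clamping the
        information-quality node to an observed value."""
        info = {observed_info: 1.0} if observed_info else P_INFO
        return {lvl: sum(p * P_LEVEL[i][lvl] for i, p in info.items())
                for lvl in ("local", "consensus", "master")}

    print(level_beliefs())        # {'local': 0.45, 'consensus': 0.33, 'master': 0.22}
    print(level_beliefs("poor"))  # local autonomy becomes much less attractive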

The control of radar interference in groups of naval ships is an excellent domain for demonstrating autonomy reasoning. A battle group always has a mission goal, such as transit to location P, rescue n people, and transit to location Q. A sub-goal of any mission in a combat area is “survive attacks.” In order to survive attacks, targets must be detected by radar. Interference can substantially reduce the range at which targets can be detected, thereby minimizing the time (and the options) for defending against an attack.

A radar on one ship in a battle group may interfere with another radar either in the group or external to the group. When a radar agent detects interference, it can attempt to resolve the problem using its own knowledge of itself and other agents, e.g. it can be locally autonomous. For example, it could determine a new frequency and cause the radar to adjust its frequency to the new value. Unfortunately, the victim radar (after adjustments) may become a source that now interferes with other radars in the group that were previously in the clear.

Alternatively, the victim may become command-driven, calling on some other agent to (effectively) serve as a central controller, or master agent. Assuming the central controller has (or can get) all the relevant data, this approach may minimize the overall resulting interference, but would require a significant amount of message passing.



As another alternative, the victim could choose to use consensus. This would involve multiple message exchanges during negotiation with each ship/radar agent in the group, and this is likely to be costly in terms of communication and time.

A simulation of the radar interference domain has been used to study the effect of local and command-driven/master autonomy in a group of L-band search radars. The only control variables were the frequencies of each radar, constrained to fall at 5 MHz intervals between 1300 and 1340 MHz. For each trial, the position and bandwidth of the radars were assumed constant. A model of the radar interactions produced data on the minimum frequency separation needed for a high probability of low interference.

Using local autonomy, the problem of determining frequency assignments for the radars to mitigate interference became problematic; in many cases, two or more radars would have had to change frequencies to bring the interference at each radar below the acceptable threshold. Thus, even though interference in the original victim radar might have been reduced, the control actions under local autonomy resulted in interference at other radars where there had been none before. However, in those cases where local autonomy was workable, solutions were found quickly.

When a master/command-driven relationship was established between the victim and an agent, the master was assumed to have access to global data about other radar agents and the environment. In addition, a mathematical model of interference was assumed to be available to each agent that allowed predictions of interference as a function of the relevant domain variables. A utility function was defined, and the master agent conducted a hill-climbing search for a combination of frequencies that not only reduced interference in the original victim, but ensured that the control actions did not result in new interference occurring in another radar at another position. In this approach, the frequency assignments were always correct, but the time to obtain a solution was excessive. In addition, the communication overhead necessary to implement the master/command-driven approach was high.
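The sketch below illustrates such a search; the interference model and utility function are simple stand-ins for the simulation's models, and the minimum-separation threshold is assumed.

    import itertools
    import random

    CHANNELS = list(range(1300, 1345, 5))  # 5 MHz steps from 1300 to 1340 MHz
    MIN_SEPARATION = 10                    # assumed separation threshold in MHz

    def utility(freqs):
        """Negated count of radar pairs closer than the minimum separation."""
        return -sum(1 for a, b in itertools.combinations(freqs, 2)
                    if abs(a - b) < MIN_SEPARATION)

    def hill_climb(n_radars, steps=500, seed=0):
        rng = random.Random(seed)
        freqs = [rng.choice(CHANNELS) for _ in range(n_radars)]
        for _ in range(steps):
            candidate = list(freqs)
            candidate[rng.randrange(n_radars)] = rng.choice(CHANNELS)
            if utility(candidate) >= utility(freqs):  # accept non-worsening moves
                freqs = candidate
        return freqs, utility(freqs)

    print(hill_climb(4))  # e.g. a 4-radar assignment with utility 0 (no interfering pairs)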

Although consensus autonomy was not investigated in this simulation, it might have kept the utility measure at an acceptable level while achieving nearly the same quality of result as the master/command-driven approach. Consensus autonomy will be investigated further in the future.

The Autonomy Reasoner seeks to assign the autonomy level with the highest probability of producing a plan (problem solution) that results in the best overall performance while satisfying all constraints and preferences. Thus, local autonomy is most appropriate when the agent has good-quality information about the other agents in the group and there is little chance of violating a system constraint.

Consensus autonomy would be assigned when it is likely that a few agents can work together to share information on their respective radar performance and agree on actions that may minimize the original victim’s interference without adverse effects on agents in the consensus group or other agents in the system who do not participate in the consensus group.

Master/command-driven autonomy would be invoked when a large number of agents are required for a consensus group, or when there is a substantial probability of violating a system constraint unless the state and needs of all agents are considered in resolving the interference in the original victim radar/agent.

Action Planner

The Action Planner solves domain problems, stores agent goals, and executes problem solutions. Action Planner responsibilities (i.e. responsibilities for delivering particular services) are domain dependent. These responsibilities are derived from domain-analysis activities: knowledge acquisition, domain modeling, and subsequent object-oriented analysis and design techniques. The Action Planner for a particular agent is assigned some specified responsibility for system functionality.

This research focuses on controlling agent autonomy in such a way that planning interactions contribute to agent goals. In pursuing this objective, this research is not attempting to improve the agent planner itself. That is, it is not developing new, more efficient search algorithms. Instead, we focus on optimizing agent planning interactions to maximize goal satisfaction. These interactions are guided by the autonomy construct assignments made by the Autonomy Reasoner as described above. Planning capabilities are also enhanced through the use of planning advice given by the Conflict Resolution Advisor regarding potential conflicts, as discussed below.

Conflict Resolution Advisor

Domain-independent reasoning mechanisms for conflict resolution in Sensible Agents employ the concept of utility and its relation to autonomy and conflict resolution. Each autonomy construct governs a different method for evaluating the utility (cost or benefit) of a goal to an agent. The use of utility has been previously explored in coalition formation [5], as well as in conflict resolution and contract nets to minimize communication and maximize efficiency [6]. Unfortunately, though organizational structures have been used to reduce the complexity of problem-solving tasks, the autonomy of agents within these organizations has not been tied together within a single framework.

Our framework [7] addresses the impact of autonomy on (1) utility, (2) optimal autonomy-level assignments, and (3) conflict resolution techniques. This framework extends utility theory by detailing an autonomy-based calculation for conflict and goal assessment. The autonomy level can affect the perceived utility provided by a goal. For example, a goal for which an agent has a low independence value is forced to maximize the utility to the system, while one with a high independence value would be focused on the utility to the agent itself. Similar arguments can be made for the other three autonomy constructs.
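This effect can be expressed as a simple utility blend; the linear form below is an assumed illustration, not the framework's actual calculation.

    def perceived_utility(u_agent, u_system, independence):
        """Low independence emphasizes system utility; high independence
        emphasizes the agent's own utility (assumed linear blend)."""
        return independence * u_agent + (1.0 - independence) * u_system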

The goal being decomposed can only accept an autonomy level assignment for a sub-goal that does not run counter to its own autonomy level and the resulting utility levels. Thus, utility calculations can actually drive the assignment process by identifying assignments that are invalid given the current goal structure.

Figure 5: Conflict transfer in Sensible Agents

With other types of conflicts between goals, utility calculations can guide the resolution process by helping an agent decide if it should re-plan for the conflicting goal or if the actual cause of the conflict lies at a higher level of planning abstraction, while the currently conflicting goals are merely symptoms. Figure 5 illustrates (briefly) how the conflict can move up the goal hierarchies of two different agents.

3. CONCLUSION

This paper has given an overview of the Sensible Agent architecture. A Sensible Agent uses knowledge about its own states, events, and goals (contained in the Self-Agent Modeler) and the states, events, and goals of other system agents and the environment (contained in the External Agent Modeler) to develop an understanding of its potential actions (performed by the Action Planner) and problems/conflicts in the solution space (handled by the Conflict Resolution Advisor). Dynamic adaptation of the autonomy level of a Sensible Agent is performed by the Autonomy Reasoner both to promote efficient problem solving and to resolve conflicts.

All these modules are domain independent except for the Action Planner. Specifically, these modules contain domain-independent representations and reasoning mechanisms. Knowledge instantiated in a particular representation is, of course, domain dependent. Action Planner responsibilities for delivering particular services are domain dependent. Analysis and development of a component-based system architecture for the application domain is essential for the development of Sensible Agent-based systems. The concept and capabilities of Sensible Agents are introduced in this paper with the aim of promoting the dynamic response of system components under dynamic and uncertain conditions.

4. BIBLIOGRAPHY

[1] T. Graser and K. S. Barber, "Meta-Modeling of Sensible Agents," presented at the International Multidisciplinary Conference, Intelligent Systems: A Semiotic Perspective, Gaithersburg, MD, 1996.

[2] A. Suraj, S. Ramaswamy, and K. S. Barber, "Extended Statecharts: A Specification Formalism for High Level Design," 1996.

[3] A. Suraj, S. Ramaswamy, and K. S. Barber, "Behavioral Specification of Sensible Agents," presented at the International Multidisciplinary Conference, Intelligent Systems: A Semiotic Perspective, Gaithersburg, MD, 1996.

[4] C. E. Martin and K. S. Barber, "Representation of Autonomy in Distributed Agent-based Systems," presented at the International Multidisciplinary Conference, Intelligent Systems: A Semiotic Perspective, Gaithersburg, MD, 1996.

[5] C. Castelfranchi, "Commitments: From Individual Intentions to Groups and Organizations," presented at the First International Conference on Multi-Agent Systems, San Francisco, CA, 1995.

[6] K. Sycara, "Utility Theory in Conflict Resolution," Annals of Operations Research, vol. 12, pp. 65-84, 1988.

[7] A. Goel, T. H. Liu, and K. S. Barber, "Conflict Resolution in Sensible Agents," presented at the International Multidisciplinary Conference, Intelligent Systems: A Semiotic Perspective, Gaithersburg, MD, 1996.
