
Pergamon Comput. & Graphics, Vol. 20, No. 3, pp. 377-384, 1996

Copyright © 1996 Elsevier Science Ltd. Printed in Great Britain. All rights reserved

0097-8493/96 $15.00+0.00

S0097-8493(96)00006-4 Computer Graphics in Brazil

AGENTS WITH EMOTIONS IN BEHAVIORAL ANIMATION

MÔNICA COSTA¹ and BRUNO FEIJÓ¹,²†

¹ICAD-Laboratório de CAD Inteligente, Departamento de Informática, PUC-Rio, Rua Marquês de São Vicente, 225, 22453-900 Rio de Janeiro, Brazil

²LEC-Laboratório de Engenharia Concorrente, UERJ, Rio de Janeiro, Brazil
e-mail: [email protected]
†Author for correspondence.

Abstract-This paper presents an innovative architecture for emotional characters in computer animation based on a reactive agent structure. The architecture is based on a cognitive model where both controlled and automatic procedures coexist. The proposed architecture breaks with traditional AI paradigms and establishes an efficient approach to develop behavioral animation systems. Copyright © 1996 Elsevier Science Ltd

1. INTRODUCTION

Behavioral animation has been identified with a number of concepts, such as: knowledge-based animation [1], task-oriented animation [2], distributed behavioral models [3], synthetic actors [4], stimulus-response models [5], synthetic vision for behavioral actors [6], ecosystem simulation [7] and virtual creatures generated by genetic algorithms [8]. Generally speaking, one can say that behavioral animation shares its problems with those found in artificial intelligence, robotics and artificial life [9-11].

The authors believe that performing before an audience is a question of planning and this is, in fact, the essence of behavioral animation. However, traditional AI-based planning programs are not reactive systems and, consequently, cannot cope with the continuity, surprises, interactions and ongoing history of the real world. In a previous work, the authors presented an innovative architecture for behavioral animation systems based on reactive autonomous agents that perform in a virtual environment [12]. The present paper is a complement of that work and deals with the question of emotional actors in behavioral animation.

In this paper, the reactive nature of emotion is explored and a Reactive Emotional Response Architecture is proposed. The feasibility of the solution is illustrated by a prototype.

2. EMOTIONS

Generally speaking, emotions are generated when a sequence of behaviors has been completed, when an interruption occurs in such a sequence or when the character explicitly generates them. Models of emotion should address those problems and also some other questions, such as: how emotions are sustained, how they influence each other, how they change over time, how they arise from interactions with other characters, how they are influenced by personal tastes/preferences and how arousal increases after an interruption. Pragmatically speaking, a model of emotion should at least consider interrupt-driven emotion generation and emotions generated by a combination of other emotions.

There are only a few works in the area of emotional characters for computer animation. In some ecosystem simulations there is an attempt to generate creatures with primitive emotional behaviors [7], but this is far from a complete model of emotion. The best works on emotional characters are due to the team of the Oz project at Carnegie Mellon [13-17]. However, the Oz project is not oriented to computer animation but to the broader area of drama and virtual reality.

The question of emotion in computer animation can be tackled from two points of view: the reactive nature of the virtual environment and the cognitive aspects of the mental models of emotion. The present paper is focused on the first viewpoint. A model of emotion is still under investigation by the authors, based on the psychology literature [18, 19] and the reports of the Oz project.

3. AGENTS

Agent technology has been applied in distributed AI [20], in groupware [21], in virtual environments [15] and in robotics [22]. Agent-oriented programming has also been proposed as a post-object paradigm [23]. Agent theory is not mature yet and leads to several definitions of agents and their properties. A complete survey on agent theories, architectures and languages can be found elsewhere [24].

There are three approaches to building agent-based computer systems: deliberative, reactive and hybrid architectures. The deliberative architecture is based on the classical symbolic AI paradigm.


Examples of this approach can be found in [25, 26]. In this case, the symbolic model of the world is explicitly represented and the agents act via explicit logical reasoning. Usually, in this approach, an AI planning system is the central component of the agent. This architecture, however, has several drawbacks and alternative approaches to agent architectures have been proposed [16, 22].

The reactive architecture is an alternative approach that breaks with the traditional symbolic AI paradigm. This sort of architecture is strongly advocated by Rodney Brooks, who claims that intelligence can emerge without any explicit manipulable internal representation or explicit reasoning system [27]. This architecture is based on reactive agents that must respond dynamically to changes in their environment.

The hybrid architecture attempts to harmonize the classical architecture with the reactive approach [28, 29]. The present work adopts a hybrid approach based on a previous work by the authors [12].

4. AGENCY PRINCIPLES

The principles underlying the reactive agent model proposed by the authors [12] are the following: cognition, emergence, situatedness, recursion and cooperation. Most of these principles are inspired by Rodney Brooks' work [27].

By adopting cognition as one of the cornerstones of the model, the authors state that any agent architecture should be influenced by human models of mind. This is important not only because some animation characters are human-like (with emotions, desires and beliefs), but also because this is a new tendency in computer architecture.

The principle of emergence states that the intelligence of the agent system emerges from the interaction of agents among themselves and with their environment [30]. As pointed out by Rodney Brooks: "It is hard to identify the seat of intelligence within any system, as intelligence is produced by the interactions of many components. Intelligence can only be determined by the total behavior of the system and how that behavior appears in relation to the environment. The key idea from emergence is: Intelligence is in the eyes of the observer" [27]. This principle can also be identified in the work by Minsky, where he proposes that intelligence emerges from a society of mindless agents [31].

Situatedness is an idea proposed by Rodney Brooks [27], who claims that the agent's intelligence is situated in the world and not in any formal model of the world built into the agent. Therefore, an agent uses its perception of the world rather than deductions based on a symbolic representation of this world (such as those found in theorem provers or expert systems). This is a dramatic change from the traditional AI paradigm. However, the authors believe that this change is of the utmost importance for behavioral animation. Maintaining the traditional AI paradigm means that behavioral actors will always have access to direct and perfect perceptions/actions and, consequently, no external world will really exist with its continuity, surprises and ongoing history.

Recursive structure and agent cooperation are principles attached to the object nature of the agents. In this view, an agent is built from sub-agents that have the same structure. Furthermore, agents work cooperatively, distributing tasks, interchanging messages and sharing databases.

5. THE REACTIVE AGENT STRUCTURE

Following the proposal by Costa et al. [12], actors are reactive agents with the structure presented in Fig. 1. Actually, actors have motors which are themselves actors with the same structure. This focus on motors is the essence of animation, where characters are constantly performing actions and changing their environment. The elements of this structure are drawn from general principles of cognition [32].

Fig. 1. The actor structure.
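As an illustration only (the original system is not given in code form), the recursive structure of Fig. 1 could be encoded along the following lines in Python; the class and field names below are assumptions made for this sketch, not the authors' implementation.

# A minimal, illustrative encoding of the actor structure of Fig. 1.
# Every agent has the same parts plus a list of sub-agents (motors),
# which are themselves agents; the field names are assumptions.

class Agent:
    def __init__(self, name, body=None):
        self.name = name
        self.ltm = set()          # declarative facts (Long Term Memory)
        self.procedures = []      # processes of the Cognition Centre
        self.body = body          # geometric/physical data, often empty
        self.motors = []          # sub-agents with the same structure
        self.parent = None

    def add_motor(self, agent):
        # Recursion principle: a motor is itself an agent.
        agent.parent = self
        self.motors.append(agent)
        return agent

    def receive(self, message):
        # Sensory Centre: receive a message from the parent-agent and
        # let the Cognition Centre distribute tasks to the motors.
        for motor in self.motors:
            motor.receive(message)
        self.report(success=True)

    def report(self, success):
        # A motor always reports success or failure to its caller.
        if self.parent is not None:
            print(f"{self.name} -> {self.parent.name}: "
                  f"{'success' if success else 'failure'}")

if __name__ == "__main__":
    kid = Agent("KID")
    act = kid.add_motor(Agent("act"))
    act.add_motor(Agent("go-to"))
    kid.receive("walk to the farthest room")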

The Sensory Centre has two kinds of basic functions: (1) functions to send and receive messages; (2) sensory perception functions. An agent is activated by a message sent by its parent-agent. Most of the time, this message is passed to the agent's motors by the Cognition Centre, in order to distribute tasks. A motor always reports success or failure to the agent that called it. This interchange of messages is the basis for cooperative work as required by the agency principle of cooperation. The other kind of basic function, i.e. the sensory perception function, detects events in the virtual environment associated with vision, hearing and touch. As far as vision is concerned, stereoscopic capacity is not required because the distances between objects can be calculated straightforwardly. Also, no pattern recognition is required because the list of the objects is available to the character. These two latter facilities clearly illustrate the advantages of working with virtual characters instead of real robots. Perception functions are part of the strategy for supporting the agency principle of situatedness.
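A minimal sketch of such a perception function is given below in Python; the object-list representation and the range test are assumptions made for illustration, not details taken from the prototype.

# Illustrative perception function: no stereoscopic vision and no pattern
# recognition are needed, because object positions and identities are
# directly available to the character in the virtual environment.

import math

def distance(p, q):
    return math.dist(p, q)   # straight-line distance between two 3-D points

def visible_objects(eye_position, scene_objects, max_range=10.0):
    """Return (name, distance) pairs for every listed object within range.

    'scene_objects' is assumed to be a dict mapping object names to
    positions; a real system would also test occlusion and field of view.
    """
    seen = []
    for name, position in scene_objects.items():
        d = distance(eye_position, position)
        if d <= max_range:
            seen.append((name, d))
    return sorted(seen, key=lambda pair: pair[1])

if __name__ == "__main__":
    scene = {"toy": (2.0, 0.0, 1.0), "mouse": (4.0, 0.0, 3.0), "door": (9.0, 0.0, 9.0)}
    print(visible_objects((0.0, 0.0, 0.0), scene))   # [('toy', ...), ('mouse', ...)]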

The architecture of the Cognition Centre and the LTM (Long Term Memory) is also inspired by the principles of cognition [32]. The LTM is a declarative memory with facts specified by the animator and facts perceived by the character during its existence in the virtual environment. Sharing facts in the LTM is a way of supporting cooperative work. These facts are, however, inert structures that should be operated on by processes in the Cognition Centre. Processes are procedural knowledge of two types: controlled and automatic procedures (Fig. 2). Controlled procedures require conscious attention and work like an interpreter. Automatic procedures are like compiled programs, automatically triggered by events or goals. The Logical Procedures in Fig. 2 are sentences in mathematical logic. They are used in situations where deductive thought is required in specific domains. General path-planning with a low degree of detail is usually done by logical procedures. The logical procedures represent the deliberative aspect of the hybrid agent architecture. A previous work by the authors [33] is an example of how to use logic in behavioral animation.
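The distinction between controlled and automatic procedures can be illustrated with the Python sketch below; the dispatch mechanism and the names used are assumptions, not the authors' implementation.

# Controlled procedures are run step by step under conscious attention,
# like an interpreter; automatic procedures fire as soon as a matching
# event or goal appears, like compiled routines.

class CognitionCentre:
    def __init__(self, ltm):
        self.ltm = ltm                 # shared declarative memory (facts)
        self.automatic = {}            # trigger -> procedure

    def on(self, trigger, procedure):
        self.automatic[trigger] = procedure

    def notify(self, event):
        # Automatic procedure: triggered directly by the event, no deduction.
        proc = self.automatic.get(event)
        if proc:
            proc(self.ltm)

    def deliberate(self, plan_steps):
        # Controlled procedure: an interpreted, step-by-step plan
        # (e.g. coarse path planning done by logical procedures).
        for step in plan_steps:
            print("considering step:", step)

if __name__ == "__main__":
    ltm = set()
    mind = CognitionCentre(ltm)
    mind.on("mouse seen", lambda m: m.add("fear"))
    mind.deliberate(["leave room", "cross corridor", "enter farthest room"])
    mind.notify("mouse seen")
    print(ltm)   # {'fear'}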

Learned Procedures represent reactive plans encoded as compiled programs and are not based on traditional AI techniques, such as reasoning based on first-order logic or expert systems. In this context, there is no distinction between planner and executor. These plans are continuously revised and, consequently, can adapt themselves to unexpected events. The name Learned Procedure comes from the fact that these procedures represent learned skills that require no conscious attention.

Behavioral functions are primitive forms of automatic procedures defined by a single expression. They are used by agents that should react on a stimulus-response basis. Sometimes simple creatures are defined by a single agent and just one behavioral function, such as a very small insect flying around a lamp.
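For illustration, such a single-expression behavioral function could look as follows in Python; the circling formula and the constants are assumed for the sketch and are not taken from the prototype.

# A behavioral function is a primitive automatic procedure defined by a
# single expression: the next move is computed directly from the stimulus.

import math
import random

# Single-expression stimulus-response rule: circle the lamp with a little
# random jitter (angle in radians, radius in scene units).
fly_step = lambda lamp, angle: (
    lamp[0] + 0.5 * math.cos(angle) + random.uniform(-0.05, 0.05),
    lamp[1] + 0.5 * math.sin(angle) + random.uniform(-0.05, 0.05),
)

if __name__ == "__main__":
    lamp = (0.0, 0.0)
    for i in range(5):
        print(fly_step(lamp, angle=i * 0.6))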

The emotion generator operates on the LTM to generate emotional states from primitive emotional propositions. It is important to notice that emotions are supposed to be generated by automatic procedures rather than by symbolic deductions, as in traditional AI applications.

Fig. 2. Processes in the Cognition Centre.
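A minimal sketch of such an emotion generator is given below in Python; the particular propositions and combination rules are illustrative assumptions, not the rules used in the prototype.

# Illustrative emotion generator: an automatic procedure that operates on
# the LTM, turning primitive emotional propositions into emotional states,
# with no symbolic deduction involved.

def emotion_generator(ltm):
    # Interrupt-driven generation: a perceived threat asserts fear.
    if ("sees", "mouse") in ltm and ("scared-of", "mouse") in ltm:
        ltm.add(("emotion", "fear"))
    # Combination of emotions: fear plus an interrupted plan yields panic.
    if ("emotion", "fear") in ltm and ("plan", "interrupted") in ltm:
        ltm.add(("emotion", "panic"))
    return ltm

if __name__ == "__main__":
    ltm = {("scared-of", "mouse"), ("sees", "mouse"), ("plan", "interrupted")}
    print(sorted(emotion_generator(ltm)))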

The automatic procedures presented in Fig. 2 implement the reactive behavior of the characters in the same spirit proposed by [27, 29]. The agents are driven by their automatic procedures (and not by deductions from a formal representation of the world) and the intelligence they exhibit is a result of the interactions that occur within the virtual environment. The authors' idea of implementing the reactive nature of the agents in terms of automatic procedures is a graceful way of satisfying the principles of cognition, emergence and situatedness.

The Body (see Fig. 1) contains information about the physical structure of the character to which the agent belongs. Only the agents at the very low end of the hierarchy tree contain this sort of information.

The reactive capacity of the virtual environment is defined by a time step called the system clock. After each clock, the system has the opportunity to check whether any event has happened and then take the necessary steps. Similarly to time stepping in numerical integration, large clock steps degrade the reactive response of the system. Different agents with different accelerations still share the same clock; the difference between these agents lies in the number of displacement or rotation steps performed by the agent during the clock.
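The role of the system clock can be illustrated with the following Python sketch; the step sizes, speeds and the event list are assumed values chosen only for the illustration.

# Illustrative system clock: the reactive granularity is one clock tick.
# All agents share the tick; a "faster" agent just performs more basic
# displacement steps inside each tick.

def run(agents, events_per_tick, ticks):
    for tick in range(ticks):
        # After each clock the system checks whether any event happened.
        for event in events_per_tick.get(tick, []):
            print(f"tick {tick}: event detected -> {event}")
        for name, (position, steps_per_tick, step_size) in agents.items():
            position += steps_per_tick * step_size    # basic movements
            agents[name] = (position, steps_per_tick, step_size)
            print(f"tick {tick}: {name} at x={position:.2f}")

if __name__ == "__main__":
    agents = {"KID": (0.0, 2, 0.1),         # 2 displacement steps per clock
              "scared KID": (0.0, 6, 0.1)}  # same clock, more steps: faster
    run(agents, events_per_tick={1: ["mouse appears"]}, ticks=3)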

Usually an agent has certain parts that are always empty. Decoration objects, for instance, usually have only bodies and no cognition or sensory centres. The agent controlling the human gait may have a sensory centre (to receive stimuli), a cognition centre with just one behavioral function (to perform the gait), a small LTM and a body.

6. THE REACTIVE EMOTIONAL RESPONSE ARCHITECTURE

The Reactive Emotional Response Architecture is defined by the reactive agent structure presented in the previous section, with the following characteristics: (i) the primitive emotional states are triggered by events perceived by the character's sensors, associated with recognition behavioral functions; (ii) the result of the external stimuli is the generation of emotional facts (such as fear or love) in the LTM of the topmost agent in the hierarchical structure; and (iii) the decisions to act are imposed on the agents down the tree. The definition of this architecture also requires a clearer description of the LTM, as presented below.
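Characteristics (i)-(iii) can be read as a simple upward/downward flow, sketched below in Python; the event names, messages and the two-level hierarchy are assumptions made for the illustration.

# Illustrative flow of the Reactive Emotional Response Architecture:
# (i) a sensor event triggers a recognition behavioral function,
# (ii) an emotional fact is asserted in the topmost agent's LTM,
# (iii) the decision to act is imposed on the agents down the tree.

def recognize(event):
    # (i) recognition behavioral function attached to the sensors
    return ("emotion", "fear") if event == "mouse in sight" else None

def topmost_agent(event, ltm, sub_agents):
    fact = recognize(event)
    if fact:
        ltm.add(fact)                          # (ii) emotional fact in the LTM
        for agent in sub_agents:               # (iii) decision pushed downward
            agent("run away from mouse")
    return ltm

if __name__ == "__main__":
    sub_agents = [lambda order: print("act received order:", order)]
    print(topmost_agent("mouse in sight", set(), sub_agents))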

The LTM of an agent is in fact a window onto a vast area of factual data, similar to a certain extent to the classical blackboard technique in AI [34]. Figure 3 illustrates the mechanism of asserting a fact in the LTM. Sometimes there are facts that are common to more than one agent. Only a special agent called the Universal Agent is conscious of the entire factual database. The Universal Agent is at the top of the hierarchy and is in contact with the user.

Fig. 3. LTM with emotional facts.

Strictly speaking, external events should activate a propositional network of facts, as in a real associative memory, rather than simply creating isolated facts in a memory area. In fact, aligned with cognitive science [32], the authors propose the definition of the LTM as a propositional network with the following characteristics: (1) activation level, i.e. each node is at some level of activation; (2) spreading activation, i.e. activation spreads through the network along its links; (3) threshold of consciousness, i.e. if the activation level reaches a high enough value in some portion of the network, that portion becomes accessible to conscious attention (that portion is then called the Working Memory); (4) concurrent spread, i.e. activation spreads simultaneously, in parallel, to all of the links leading from a node; (5) spreading decline, i.e. activation at a node is divided among the links that lead from it (thus, activation is weakened rapidly by successive divisions); (6) decay, i.e. activation at a node fades rapidly with time; (7) history-based capacity, i.e. higher-capacity links (for activation) are the ones that have been more frequently activated in the past. Without characteristics (5) and (6), a single event or thought would rapidly activate the character's entire LTM, throwing it into a state of permanent confusion. In the prototype developed by the authors, however, the propositional network is replaced by a simple mechanism of asserting and deleting facts, and the working memory corresponds to the entire LTM.
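The following Python sketch illustrates such a propositional network; the decay factor, the threshold and the example links are assumed values, chosen only to make characteristics (3), (5) and (6) visible.

# Illustrative spreading-activation memory: activation spreads along links,
# is divided among outgoing links (5), decays with time (6), and any node
# above the threshold of consciousness enters the Working Memory (3).

DECAY = 0.5          # fraction of activation kept after each step
THRESHOLD = 0.2      # threshold of consciousness

def spread(activation, links, steps=3):
    for _ in range(steps):
        incoming = {node: 0.0 for node in activation}
        for node, level in activation.items():
            targets = links.get(node, [])
            if targets:
                share = level / len(targets)      # (5) division weakens activation
                for target in targets:
                    incoming[target] = incoming.get(target, 0.0) + share
        # (4) concurrent spread accumulated in 'incoming', then (6) decay
        activation = {node: DECAY * (level + incoming.get(node, 0.0))
                      for node, level in activation.items()}
    return activation

def working_memory(activation):
    # (3) the portion above the threshold is accessible to conscious attention
    return {node for node, level in activation.items() if level >= THRESHOLD}

if __name__ == "__main__":
    links = {"mouse": ["fear", "run"], "fear": ["run"], "run": []}
    activation = {"mouse": 1.0, "fear": 0.0, "run": 0.0}
    final = spread(activation, links)
    print(final, working_memory(final))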

The best way to understand the Reactive Emotional Response Architecture is through an example. Figure 4 shows an agent structure for a character KID that is capable of moving around in a house. This character is scared of mice.

Fig. 4. An agent hierarchy for navigation.

The emotional states are represented in the agent act, which can make the decision of changing the high-level navigation plan based on the current emotional state. This agent has a vision function in its sensory centre. The agent go-to generates the high-level navigation plan (with no details) based on mathematical logic. The agent follow-path follows a planned path, changing it in order to avoid obstacles. The agent face points the character to a new direction in order to avoid a collision. The agent move takes the character to a certain position along a straight line. This agent also has a vision function in its sensory centre in order to detect obstacles.

The agents turn-delta and trans-delta perform small rotation and translation steps during a system clock. Learned procedures determine the size of these basic movements. In the example implemented by the authors, the size of the basic movements is constant and arbitrary. After each clock, in a more complex case, these learned procedures could read the positions of their agents from the dynamic analysis performed by a physics-based model associated with the character’s body.

After each clock, the agent clock-checker looks for any event occurrence in the virtual environment. This agent will stop and send messages back to its parent if an event threatens the plan. These messages go up the hierarchy, from agent to agent, until the appropriate agent takes the necessary steps. Because more than one agent can have perception capacity, the clock-checker keeps a list of these agents and follows the strategy of notifying the topmost agents first.
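This escalation of messages can be illustrated with the Python sketch below; the handling predicate and the capabilities attached to each agent are assumptions made for the illustration.

# Illustrative escalation of an event message up the agent hierarchy:
# each agent either handles the event or forwards it to its parent.

class Node:
    def __init__(self, name, handles=(), parent=None):
        self.name, self.handles, self.parent = name, set(handles), parent

    def notify(self, event):
        if event in self.handles:
            print(f"{self.name} handles '{event}'")
        elif self.parent is not None:
            print(f"{self.name} passes '{event}' up to {self.parent.name}")
            self.parent.notify(event)
        else:
            print(f"{self.name} ignores '{event}'")

if __name__ == "__main__":
    act = Node("act", handles={"mouse in sight"})        # can change the plan
    follow = Node("follow-path", handles={"obstacle"}, parent=act)
    move = Node("move", parent=follow)
    # clock-checker found an event that threatens the current plan:
    move.notify("mouse in sight")    # climbs from move to follow-path to act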

A prototype was built by the authors and an example of a character moving in a house was tested. Figures 5-8 show a kid avoiding a toy lying on the floor while executing the plan of going to the farthest room in the house. The animation is automatically changed by the presence of a mouse, which causes panic, an escape scene and faster movements. The resulting animation is more vivid and convincing than the one achieved with emotionless characters. The animated sequence was rendered using the script language available in 3D Studio.

7. CONCLUSIONS

The authors present an innovative architecture for emotional characters in computer animation, based on a reactive agent structure proposed in their previous work [12] and designed to satisfy the principles of cognition, emergence, situatedness, recursion and cooperation conjointly.

In the present work, emotions are generated by automatic procedures, in accordance with the architecture of mind proposed by cognitive science [32]. In this context, the emotional state is generated by procedures rather than by logical deductions from a formal representation of the world. These straightforward procedures render the implementation of the proposed model computationally efficient.

Fig. 5. The KID starts moving towards the farthest room in the house.


Fig. 6. The KID makes a detour around the toy.

Fig. 7. The KID faces the mouse and gets scared.


Furthermore, this approach is totally consistent with the global proposal of establishing agency based on cognition, emergence and situatedness, where the agent is a mindless object with a recursive structure. A formal model of emotion is not within the scope of the present work and, in fact, a very simple mechanism for emotion generation is used. However, the architecture permits one to plug an emotion generator or procedures into the agents in order to test more complete models of emotion.

The proposed architecture is focused on the principle of emergence, where agents are small objects whose behaviors result from intensive interaction among them. This architecture is also designed to support generic behavior and emotion models. Moreover, the proposed architecture is suitable for incorporating reuse technology and parallel processing. As far as the authors are aware, there is no work in the animation literature with the above-mentioned characteristics. The focus of the well-known Oz project [13-17] is neither on computer animation nor on a generic framework for developing behavioral animation systems. Also, the work by Brooks [22, 27, 35] that inspired the present authors is focused on robotics, especially navigation problems. Generally speaking, in robotics there is no need to represent emotions and behaviors as real actors do.

Fig. 8. The KID runs away.

The implemented prototype should be improved towards more complex animation and actors. There are also several theoretical topics for future work. The animator could use a library of agents to set up a scene or even the entire story. He/she could also use this library to create a new character. These ambitious steps require, however, the development of an agent language for computer animation and the solution of component-reuse problems. Furthermore, a formal model of emotion should be investigated. Another topic under investigation by the authors is the use of parallel processing to implement concurrent actions in real time. This last topic is essential for artificial life and virtual worlds in high-performance computing environments.

Acknowledgements-The authors would like to thank CNPq for financial support and Hélio Magalhães for technical support in the rendering part. Thanks are also due to the reviewers for their valuable comments.

REFERENCES

1. D. Zeltzer, Knowledge-based animation. In Proceedings of the SIGGRAPH/SIGART Workshop on Motion, 187-192 (1983).

2. D. Zeltzer, Towards an integrated view of 3-D computer animation. The Visual Computer, 1, 249-259 (1985).

3. C. Reynolds, Flocks, herds and schools: a distributed behavioral model. Computer Graphics, 21, 25-34 (1987).

4. N. Magnenat-Thalmann and D. Thalmann, Complex models for animating synthetic actors. IEEE Computer Graphics and Applications, 32-44 (1991).

5. J. Wilhelms and R. Skinner, A "notion" for interactive behavioral animation control. IEEE Computer Graphics and Applications, 10, 14-22 (1990).

6. O. Renault, N. Magnenat-Thalmann and D. Thalmann, A vision-based approach to behavioral animation. The Journal of Visualization and Computer Animation, 1, 18-21 (1990).

7. X. Tu and D. Terzopoulos, Artificial fishes: physics, locomotion, perception, behavior. In Computer Graphics Proceedings, Annual Conference Series, ACM SIGGRAPH, 43-50 (1994).

8. K. Sims, Evolving virtual creatures. In Computer Graphics Proceedings, Annual Conference Series, ACM SIGGRAPH, 14-21 (1994).

9. C. Langton, Artificial Life III, Addison-Wesley (1994).

10. S. Levy, Artificial Life, Vintage Books, New York (1992).

11. P. Maes, Designing Autonomous Agents: Theory and Practice from Biology to Engineering and Back, MIT Press, Cambridge, MA (1990).

12. M. Costa, B. Feijó and D. Schwabe, Reactive agents in behavioral animation. In Proceedings of SIBGRAPI'95, UFSCar, São Carlos, SP, 159-165 (1995).

13. J. Bates, Virtual reality, art, and entertainment. Presence: Teleoperators and Virtual Environments, 1, 133-138 (1992).

14. J. Bates, A. B. Loyall and W. S. Reilly, An architecture for action, emotion, and social behavior. Technical Report CMU-CS-92-142, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA (1992).

15. J. Bates, A. B. Loyall and W. S. Reilly, Integrating reactivity, goals, and emotion in a broad agent. Technical Report CMU-CS-92-144, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA (1992).

16. A. B. Loyall and J. Bates, Hap: a reactive, adaptive architecture for agents. Technical Report CMU-CS-91-147, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA (1991).

17. W. S. Reilly and J. Bates, Building emotional agents. Technical Report CMU-CS-92-143, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA (1992).

18. R. M. Gordon, The Structure of Emotions, Cambridge University Press (1990).

19. A. Ortony, G. Clore and A. Collins, The Cognitive Structure of Emotions, Cambridge University Press (1990).

20. A. H. Bond and L. Gasser, Readings in Distributed Artificial Intelligence, Morgan Kaufmann Publishers, Inc., San Mateo, CA (1988).

21. R. M. Baecker, Readings in Groupware and Computer-Supported Cooperative Work, Morgan Kaufmann Publishers, Inc., San Mateo, CA (1993).

22. R. A. Brooks, A robust layered control system for a mobile robot. Technical Report A.I. Memo 864, Artificial Intelligence Laboratory, MIT (1985).

23. Y. Shoham, Agent-oriented programming. Artificial Intelligence, 60, 51-92 (1993).

24. M. J. Wooldridge and N. R. Jennings, Agent theories, architectures, and languages: a survey. In Proceedings of the ECAI'94 Workshop on Agent Theories, Architectures and Languages, Amsterdam, The Netherlands, 1-32 (1994).

25. S. Vere and T. Bickmore, A basic agent. Computational Intelligence, 6, 41-60 (1990).

26. S. Wood, Planning and Decision Making in Dynamic Domains, Ellis Horwood Ltd. (1993).

27. R. A. Brooks, Intelligence without reason. Technical Report A.I. Memo 1293, Artificial Intelligence Laboratory, MIT (1991).

28. R. C. Arkin, Integrating behavioral, perceptual and world knowledge in reactive navigation. In Designing Autonomous Agents: Theory and Practice from Biology to Engineering and Back, P. Maes (Ed.), MIT Press, Cambridge, MA, 105-122 (1990).

29. M. P. Georgeff, A. L. Lansky and M. J. Schoppers, Reasoning and planning in dynamic domains: an experiment with a mobile robot. Technical Report 380, Artificial Intelligence Center, SRI International, Menlo Park, CA (1983).

30. L. Steels, Towards a theory of emergent functionality. In Proceedings of the First International Conference on Simulation of Adaptive Behavior, MIT Press, Cambridge, MA, 451-461 (1990).

31. M. Minsky, The Society of Mind, Pan Books, London (1987).

32. N. A. Stillings, M. H. Feinstein, J. L. Garfield, E. L. Rissland, D. A. Rosenbaum, S. E. Weisler and L. Baker-Ward, Cognitive Science: An Introduction, MIT Press, Cambridge, MA (1987).

33. B. Feijó and M. Costa, Animação Comportamental Baseada em Lógica. In Proceedings of SIBGRAPI'93, UFPE, Recife, PE, 117-122 (1993).

34. B. Hayes-Roth, A blackboard architecture for control. Artificial Intelligence, 26, 251-321 (1985).

35. R. A. Brooks, Elephants don't play chess. In Designing Autonomous Agents: Theory and Practice from Biology to Engineering and Back, P. Maes (Ed.), MIT Press, Cambridge, MA, 3-15 (1990).