

COMPUTER ANIMATION AND VIRTUAL WORLDS
Comp. Anim. Virtual Worlds 2014; 25:485–493

Published online 16 May 2014 in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/cav.1583

SPECIAL ISSUE PAPER

Collaborative virtual training with physical and communicative autonomous agents

Thomas Lopez1*, Pierre Chevaillier2, Valérie Gouranton1, Paul Evrard3, Florian Nouviale1, Mukesh Barange2, Rozenn Bouville1 and Bruno Arnaldi1

1 Institut National des Sciences Appliquées RENNES (INSA), IRISA - Hybrid, Rennes, France
2 ENIB Brest, Brest, France
3 CEA LIST Institute, Paris, France

ABSTRACT

Virtual agents are a real asset in collaborative virtual environments for training (CVETs) as they can replace missing team members. Collaboration between such agents and users, however, is generally limited. We present here a whole integrated model of CVET focusing on the abstraction of the real or virtual nature of an actor to define a homogeneous collaboration model. First, we define a new collaborative model of interaction. This model notably makes it possible to abstract the real or virtual nature of a teammate. Moreover, we propose a new role exchange approach so that actors can swap their roles during training. The model also permits the use of physically based object and character animation to increase the realism of the world. Second, we design a new communicative agent model, which aims at improving collaboration with other actors by using dialog to coordinate actions and share knowledge. Finally, we evaluated the proposed model to estimate the resulting benefits for the users, and we show how it integrates into existing CVET applications. Copyright © 2014 John Wiley & Sons, Ltd.

KEYWORDS

interaction for virtual humans; conversational agents; autonomous actors; avatars; virtual reality

*Correspondence

Thomas Lopez, IRISA - Hybrid, Institut National des Sciences Appliquées RENNES (INSA). E-mail: [email protected]

1. INTRODUCTION

The use of virtual reality for training offers many advantages. First, it reduces the costs and risks of the training for both the trainees and the equipment. It also allows the trainees to learn collaborative procedures along with other team members, who can either be other users or autonomous agents. Our paper focuses on such collaborative virtual environments for training (CVETs), where real users and autonomous agents efficiently collaborate toward a common goal as equal teammates. In this context, each role of the training can be handled seamlessly by either a user or an autonomous agent.

Virtual agents are crucial in training as they generally replace team members when there are not enough trainees. Thus, they have to be able to fill a precise role in the procedure and also help other trainees in order to enhance their learning experience. To do so, an autonomous agent should be able to (i) collaborate with other team members, whether they are users or other autonomous agents; (ii) exhibit credible behaviors and gestures that help users comprehend its actions on its surroundings; and (iii) communicate easily with other team members in order to share information or simply synchronize its actions with them.

To improve the user experience as well as immersion during the training, we propose in this paper a CVET that uses such virtual agents. In this CVET, we gather two important contributions that impact the collaboration between actors. Our first contribution consists of a unified interaction model for users and autonomous agents, allowing them to efficiently collaborate during the training. This model not only abstracts the nature of the actors but also enables a role exchange between actors and the use of physical interactions, as well as physically simulated virtual agents, to increase the realism of the training. Second, these agents are also communicative agents able to handle a dialog with a user in order to provide details about the procedure or the training, using their knowledge and natural language.




2. RELATED WORK

2.1. Collaborative Virtual Environments for Training

In the CVET literature, autonomous agents generally interact with users in three different manners [1]: (i) as a personal assistant assigned to a single trainee to help him/her; (ii) as a team assistant ensuring the communication between users and helping them to coordinate their actions; and (iii) as an equal team member operating autonomously and performing the collaborative procedure alongside users and other autonomous agents. Most CVETs focus on this last case: team members are able to perform tasks, interact with objects, and communicate with other teammates [2]. Regardless of their nature, autonomous agents and users work toward a shared objective. In most CVETs, autonomous agents are able to perform their tasks independently [3,4]. Thus, they are generally able to play different roles such as collaborators, instructors, or assistants. They can also replace team members needed for the training [5,6]. Unfortunately, interactions between team members, and particularly between autonomous agents and users, are limited. They perform parallel tasks, working toward the team's shared goal, but can neither interact collaboratively on the same object nor exchange their roles during the simulation. Some recent platforms handle collaborative interactions between teammates. This is the case of the Generic Virtual Training platform [7,8]. In the collaborative version of STEVE [9], agents play the double role of collaborator and instructor. As collaborators, they simply perform their part of the procedure, wait for the actions of their teammates, and communicate when needed, but as instructors, they directly interact with the trainee to help in the task.

2.2. Communication with Autonomous Agents in CVET

Some CVETs already use conversational agents to enhance user training [10]. The dialog is generally based on the knowledge of the agent, and different structures are used to design this agent. For instance, an agent such as STEVE [9] uses production rules, whereas MAX [11] uses logical frames to represent the knowledge concerning the world. Both, however, use a hierarchical structure to represent the domain associated with their procedural task. Nevertheless, none of these architectures provides a unified knowledge representation allowing an actor to collaborate and communicate with other teammates to achieve a team goal.

In CVETs, integrating dialog models (state/frame based [12], agent based [13], etc.) with the task model is needed for the coordination and the sharing of knowledge between teammates. Some dialog systems such as TrindiKit [12] are based on information state (IS) updates, whereas others such as COLLAGEN [13] use an agent-based model for the conversational behavior. However, in these systems, users do not actively perform the collaborative tasks with other teammates. Furthermore, these works focused either on dialog management for the exchange of information [12] or on the planning and execution of a goal-directed plan [13], but little work [11] has been done to integrate these two aspects together to achieve mutual understanding and shared goals.

2.3. Limitations

Regarding the literature, collaboration between users and autonomous agents is still limited as they barely understand their respective needs and choices. Moreover, in most CVETs, autonomous agents' actions are precomputed beforehand and not adapted on-the-fly to the users' interactions. Some solutions have been proposed to handle task-driven postures as well as physical object interactions for autonomous characters [14,15]. These methods, however, have not yet been adapted for CVETs composed of users and autonomous agents.

In addition, the dialogs between users and autonomous agents are usually composed of predefined sentences and are limited to instructions given by an autonomous agent. Thus, users and autonomous agents can hardly communicate with each other. Even in [16,17], the work on conversational agents focused on a limited aspect of the dialog and did not really consider users actively engaged in collaborative tasks with other teammates.

3. OVERVIEW

In order to tackle the limitations of current systems, we present a new functional CVET. Our actors' architecture is summed up in Figure 1. To enhance the training, we integrated the following:

(1) A model of collaborative and physical interactions. Our interaction model allows actors to act collaboratively on the same objects. Moreover, we used this model to abstract the real and virtual nature of the actors during the simulation and to allow them to exchange on-the-fly their avatars as well as the role attached to each avatar. We also extended our model to handle physically simulated objects and avatars. Thus, the movements and gestures of the autonomous agents, as well as the actors' interactions with the environment, look more natural and realistic to the users.

(2) A communication module allowing trainees to communicate with autonomous agent teammates and ask them for information. Moreover, it also allows these agents to interact and collaborate with the users using natural language. Conversational agents improve the training by updating their knowledge during the simulation and by sharing it with trainees, giving information about the procedure.




Figure 1. Global architecture of the communicative autonomous agents and users present in the virtual environment and collaborating through their Shells.

Finally, we present some results based on our concepts in the results section. First, we present two different training scenarios that answer industrial needs of our partners and use our functionalities. Second, to demonstrate the benefits of these new functionalities, we also present some experiments we conducted in order to evaluate our contributions. These experiments aim at evaluating three elements: (i) the exchange protocol, a concept based on our collaborative interaction model; (ii) the communication between users and autonomous actors in the context of a CVET using our model; and (iii) the whole CVET system for novice trainees in an immersive setup.

4. COLLABORATIVE VIRTUAL TRAINING

Contrary to real-life training, the use of virtual reality offers the opportunity to learn a collaborative procedure with a single user and to use autonomous agents to replace missing members. However, current CVETs are limited because collaborative interactions on the same objects barely exist. Moreover, the collaboration between actors has to be easy and natural, whether these actors are users or autonomous agents. This way, the training can be performed even if only a single trainee is available, as the other actors can be controlled by autonomous agents. Thus, we have to define a new model for collaborative interaction as well as an abstraction of the actor that is similar for both users and autonomous agents.

4.1. Collaborative Interactions

Using the STORM model, presented in [18], we defined a new collaborative model of interaction. Our model, detailed in [19], allows an object to be controlled by several other objects, sharing its control with multiple sources. To do so, each object presents a set of manipulable parameters and itself handles the multiple modifications from the controlling sources. An object of the simulation can then have a new capability: either Interactor, if it can manipulate other objects' parameters, or Interactive, if its parameters can be manipulated by others, or even both. In our context, an object can be a simple tool, such as a screwdriver, as well as an avatar in the environment.

Considering this, an avatar is then a specific object that can be controlled by different sources such as a user or an autonomous agent. Moreover, it could also be configured to be manipulated by multiple sources at the same time, for instance, by a trainer who wants to show specific moves of a particular avatar to the trainee controlling this avatar.
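The following Python sketch illustrates this shared-control principle. It is only an illustration under assumed names (Interactive, Interactor, requested_changes, and the merge policy); the actual STORM-based implementation is the one described in [18,19].

class Interactive:
    """An object whose manipulable parameters can be modified by several sources."""
    def __init__(self, parameters):
        self.parameters = dict(parameters)  # e.g. {"position": (0.0, 0.0, 0.0)}
        self.controllers = []               # Interactors sharing control

    def attach(self, interactor):
        self.controllers.append(interactor)

    def update(self):
        # The object itself merges the modifications requested by all
        # controlling sources (last-writer-wins here, for simplicity).
        for interactor in self.controllers:
            for name, value in interactor.requested_changes(self).items():
                self.parameters[name] = value

class Interactor:
    """A source able to manipulate the parameters of Interactive objects."""
    def requested_changes(self, target):
        return {}  # overridden by concrete controllers (user input, agent AI)

# An avatar holds both capabilities: it is controlled by actors
# (Interactive) and acts upon the objects of the world (Interactor).
class Avatar(Interactive, Interactor):
    pass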

4.2. The Shell

Designing a new collaborative procedure requires acknowledging various elements concerning the users, such as the knowledge they acquire and use during the procedure. While learning such a procedure, it is also important for a trainee to understand the decisions made by his/her teammates, who can be autonomous agents or users. In order to meet these requirements, we introduce the Shell as an entity containing a knowledge base and whose control can be exchanged between autonomous agents and users. We introduced the term Shell† to emphasize the difference with the term Avatar, commonly used in the computer graphics field, which usually designates only the visual representation (i.e., the mannequin) of an actor in the virtual world. The Shell extends the concept of avatar and makes it possible to abstract the nature of an actor from its representation in the virtual training. Although this does not prevent an actor from having his own knowledge base, it is also essential for the Shell to save some pieces of knowledge at runtime, either to allow an actor to resume its actions or to keep track of which knowledge has been acquired so far.

In our model, a Shell is an entity including the essential components required to complete a procedure in a CVET; its architecture is presented in Figure 2.

†The term Shell refers to the Japanese manga series Ghost in the Shell written by Masamune Shirow, where a Ghost is a thinking mind that can be embedded in a Shell, that is, a body, and takes control over it to act in the world.




Figure 2. The Shell: this entity gathers an interaction module and a knowledge base coupled with an acquisition module.

These components are the following: an interface to represent a trainee in the environment, possessing the capacity to interact with elements of the virtual world; an acquisition module to perceive information about the state of the world during the simulation; and finally, a knowledge base, either gathered by the avatar or known a priori. We find this concept to be particularly relevant in CVETs with mixed teams of users and autonomous agents, where each actor has a role in the collaborative task, whatever its nature. Moreover, the knowledge base of a Shell can be accessed by both types of actors, for example, to help in their decision process or to retrieve information about the beginning of the training. Thus, we are able to completely abstract the real or virtual nature of the actors during the task.

The Shell is designed as both an interactive object, as it is controllable by an actor, and an interactor, as it can act upon the objects of the virtual environment. Thus, using this entity, both users and autonomous agents share a common representation and common capacities in the virtual world, as they both use the Shell as their entry point to interact with the virtual world.

In the context of CVET, a critical element is the knowledge gathered by the trainees at the end of the procedure. Thus, we completed the Shell's architecture by including a knowledge base. This knowledge has two main purposes. First, when a user is controlling the Shell, the knowledge is still gathered in the Shell, which helps him/her in the training as he/she can retrieve some information needed to complete the procedure. Second, when an autonomous agent is controlling the Shell, the knowledge is used to retrieve information about the previous actions of this Shell and the coming tasks, its role in the procedure, its teammates, and the virtual world history to support its decision process.

To better retrieve relevant information in the Shell's knowledge, we identified different kinds of knowledge needed by a trainee to understand and carry out a procedure. First, the trainee must be able to get information about his/her avatar and its current state. Then, information concerning the procedure to perform and the individual or collaborative tasks is also needed. In the context of CVETs, information about the team and the teammates is important. Indeed, actors need to be aware of their partners in order to synchronize their actions with them. Finally, the trainee must update his/her knowledge about the surroundings in the virtual world, for instance, to locate needed tools or to retrieve information about them. Based on these observations, we defined four knowledge categories to sort the gathered information: Ego, Task, Team, and World. The knowledge is either known a priori or filled in by the acquisition module during the simulation. The four categories not only help users to easily retrieve knowledge but also support the decision process and the communication of autonomous agents.
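As a concrete illustration, a Shell's knowledge base could be organized as in the minimal sketch below. Only the four categories (Ego, Task, Team, World) come from our model; the class and method names are assumptions.

from enum import Enum

class Category(Enum):
    EGO = "ego"      # the avatar and its current state
    TASK = "task"    # the procedure and its individual/collaborative tasks
    TEAM = "team"    # the teammates and their roles
    WORLD = "world"  # tools and objects in the surroundings

class KnowledgeBase:
    def __init__(self):
        self.facts = {category: {} for category in Category}

    def store(self, category, key, value):
        # Filled a priori or by the acquisition module during the simulation.
        self.facts[category][key] = value

    def retrieve(self, category, key, default=None):
        return self.facts[category].get(key, default)

# Example: the acquisition module records the location of a needed tool.
kb = KnowledgeBase()
kb.store(Category.WORLD, "screwdriver.position", (1.2, 0.0, 3.4))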

Moreover, regarding the needs of training environments, we defined an exchange protocol, which provides new usages for both the trainers and the users. For instance, a teacher performing a virtual medical procedure using a Shell can pick a student and exchange their roles, allowing this student to perform a part of the medical procedure. In industrial training applications, a trainer could take control over an embodiment controlled by an autonomous agent and either help or perturb the work of the trainees by modifying the training conditions in real time. Using our exchange protocol, this can be done without the trainees noticing the active intervention of their teacher. This protocol mainly consists of an exchange of Shells between the two involved actors, implying an exchange of the avatars and knowledge associated with each Shell.
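In its simplest form, the protocol amounts to swapping the Shells of the two actors, which carries over both the avatar and the accumulated knowledge; the metaphors presented to the users during this swap are evaluated in [26]. A minimal sketch, with assumed attribute names:

def exchange_shells(actor_a, actor_b):
    # Each actor releases its current Shell and takes control of the
    # other one; the avatar and the knowledge base travel with the
    # Shell, so a trainer can step into an agent-controlled role
    # without the trainees noticing the exchange.
    actor_a.shell, actor_b.shell = actor_b.shell, actor_a.shell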

4.3. Physical Collaboration with Autonomous Agents

The credibility of the training is a critical issue: the more realistic the CVET, the better the training. Indeed, the realism of the environment allows the trainee to easily transfer the virtual training to a real context. Some solutions have been proposed to handle task-driven postures and physical object interactions for autonomous characters [14,15] but have not yet been adapted for CVETs. Currently, in most CVETs, the behavior of objects involved in the scenario and the virtual teammates' interactions are precomputed and played when the trainee triggers a specific action. Thus, neither the training environment nor the autonomous agents adapt their physical behaviors to the trainees' actions. Moreover, the different objects involved in the training generally cannot be manipulated as freely as they would be in the real world.

In order to tackle this issue, we use in our CVET a physics simulation for both avatars and objects. In this simulation, a physically simulated avatar is used to represent the actor.




Using physics, this avatar can directly interact with the reactive virtual environment, and agents can adopt realistic postures in the virtual environment. Our implementation is based on the work of Liu et al. and is concerned more with the representation of the user [20] than with the representation of an autonomous agent [15].

Moreover, this avatar is controlled by the actors through the Shell, which allows users and autonomous agents to have similar controls over it. In the case of an autonomous agent, the avatar controlled by the agent automatically handles basic interactions such as grabbing an element or reaching a place by adapting its posture in order to interact with the virtual environment. In the case of a user, many different means exist to control an avatar, from a simple clickable interface, which determines the element in the focus of the user, to more immersive setups such as tracking markers capturing the motion of the user. For non-immersive setups, the physics simulation automatically handles the postures to adopt for the user's avatar, as it does for autonomous agents. In the case of an immersive setup, however, the user can totally control the postures of the avatar. The use of such a physically simulated environment allows the user to perform any gesture he/she wants and to find on his/her own the one he/she prefers.

Concerning the objects of the world, the use of physics increases the realism of the interactions, as their reactions correspond to real-life situations. This observation is even more relevant when using an immersive setup, as the user is able to have natural interactions with the objects and has the impression of manipulating objects of different weights. He/she also has to deal with the collisions between the different elements of the environment. Thus, physics facilitates the design of the environment, as it directly reacts to users' interactions and these interactions do not need to be scripted beforehand.

Moreover, the physics simulation could be used to check the validity of the adopted postures. To extend this contribution, it would be interesting, especially in the context of CVETs, to give the user feedback on the adopted postures in order to lead him/her to correct them if a posture is unsafe or not ergonomic [15].

5. VERBAL COMMUNICATION TO FOSTER COLLABORATION

Our architecture endows the autonomous agents with both deliberative and conversational capabilities: agents are able to engage themselves in shared plans and to produce the necessary dialog acts for the coordination of their actions (Figure 1). First, the agents' deliberative behavior is based on the joint-intention theory from [21], which states that, to coordinate their activities, agents must have the joint intention to achieve the collective goal and must agree upon a common action plan. Second, following the principles of the shared-plan theory [22], agents make decisions and communicate in order to make commitments toward the group to achieve their common goals. The agent architecture borrows the principles of the belief, desire, intention (BDI) architecture [23,24]. Our behavioral architecture treats deliberative and conversational behaviors uniformly, as guided by the goal-directed shared activity. It considers dialog as a collaborative activity and ensures its intertwining with the task-oriented activity of the agents. The last theoretical foundation of our model is the IS approach to dialog modeling from [12]. The IS maintains the necessary contextual information of the agent about the various dimensions of the context: dialogical, semantic, cognitive, perceptual, and social.

Using these components of the architecture, the agent can combine the unified knowledge representation hosted by the Shell with its IS in order to decide whether to elaborate the plan, react to the current situation, or exchange information with other real or virtual teammates. Therefore, the agent makes decisions based on the overall context of the task, its mental state, and the course of the dialog.

5.1. Knowledge Organization

Knowledge representation. Agents deal with different sources of knowledge: information collected by the Shell throughout the training and sorted into four categories (ego, world, team, and task), semantic knowledge (static at this scale and containing information about the task), and contextual information about the ongoing decision-making and conversational processes handled by the agent.

Following the shared-plan theory, the perceived state of the current situation is an instantiation of shared concepts that agents hold in their semantic knowledge. Perception and dialog allow agents to update their knowledge using the Shell and to monitor the progress of the shared activity. This condition ensures the consistency of the agent's knowledge after any update of its beliefs initiated by communication or perception. Besides, although the agents share the same semantic knowledge, due to their location and perception they only have partial beliefs about the world and do not necessarily share the same information about the situation.

Information about the task is the central element of the agents' knowledge base. It defines the global activity plan (GAP) and the conversation protocols (see the following section), which are represented as hierarchical frame structures. On the one side, the GAP is shared by all actors, and each of its nodes represents a subgoal that must be achieved by the team. A subactivity plan, modeling a shared activity, is represented as a node of the GAP that refers to another graph. A quick overview of this GAP is shown in Figure 3. On the other side, the conversation protocols are used to make the dialog evolve toward the achievement of a common goal, and they can be viewed as sets of production rules.
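Such a hierarchical frame structure could look like the sketch below; the class and goal names are hypothetical, and only the idea of nodes as team subgoals that may refer to nested graphs comes from our model.

class GapNode:
    def __init__(self, subgoal, subplan=None):
        self.subgoal = subgoal   # subgoal the team must achieve
        self.subplan = subplan   # optional nested graph modeling a shared subactivity
        self.children = []       # subsequent subgoals in the plan

    def add(self, node):
        self.children.append(node)
        return node

# Hypothetical fragment of a GAP for the mold-exchange scenario of Section 6.
root = GapNode("exchange mold")
stop = root.add(GapNode("stop machine"))
stop.add(GapNode("remove mold",
                 subplan=GapNode("Setter and Operator lift the mold together")))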

Information state. The agent architecture integrates deliberation and task-oriented communication aspects together by using the semantic data structure IS. It contains contextual information of the dialog, such as the conversation context or the social context. The social context also includes an agenda of dialog goals. These contextual features are inherited from the model defined in [25].





Figure 3. (a) A Setter and an Operator need to communicate and collaborate in a procedure. (b) The autonomous agent provides information about the GAP and updates its knowledge.

We extended this model by introducing a new component, the task context. It contains the agent's desires, goals related to the task, and task intentions. The agent uses this task context to monitor the task in progress and to provide this context to the conversation. Thus, agents are able to talk about the ongoing task, the properties of the objects in the environment, their own intentions, and the shared goals.
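The resulting IS could be summarized as follows; the field names are assumptions, and only the contextual dimensions and the added task context come from our model.

from dataclasses import dataclass, field

@dataclass
class TaskContext:
    desires: list = field(default_factory=list)          # agent's desires
    task_goals: list = field(default_factory=list)       # goals related to the task
    task_intentions: list = field(default_factory=list)  # adopted task intentions

@dataclass
class InformationState:
    conversation_context: dict = field(default_factory=dict)
    social_context: dict = field(default_factory=dict)
    agenda: list = field(default_factory=list)            # dialog goals (social context)
    task_context: TaskContext = field(default_factory=TaskContext)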

5.2. Collaborative and Conversational Behavior

Decision-making. Our conversational agents have to make decisions about their course of actions, their knowledge updates, and the necessity to exchange information with others. The decision-making mechanism fulfills two functions: deliberative control and belief revision. The deliberative control aims at deciding which goal the agent must pursue. It uses semantic information about the GAP, the IS, and the knowledge from the Shell to decide whether a conversational behavior is required to collaborate with other team members. Belief revision maintains the consistency of both the knowledge base and the IS by updating the agent's beliefs using (i) its current perceptions through the Shell about resources and capabilities and (ii) the new information acquired after a Shell exchange with another actor.

The decision-making algorithm verifies whether the agenda in the IS is non-empty, that is, holds some communicative intentions. If so, control is passed to the conversational behavior. Otherwise, the agent chooses the plan to be executed using a deliberative BDI-style behavior. If the agent identifies some cooperative situations in the collective activity, then control passes again to the conversational behavior: the cooperative situations generate communicative intentions in the agenda, which cause the agent to exchange information with other team members. Then, control is passed to the deliberative behavior, which generates new intentions. Finally, the agent selects actions to execute and updates its IS to maintain the knowledge about the current task context.
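This loop can be sketched as follows; the method names on the agent are hypothetical stand-ins for the behaviors described above.

def decision_step(agent):
    # 1. Pending communicative intentions? Hand control to the conversation.
    if agent.information_state.agenda:
        agent.run_conversational_behavior()
        return
    # 2. Otherwise deliberate (BDI-style) and choose the plan to execute.
    plan = agent.deliberate()
    # 3. Cooperative situations generate communicative intentions in the
    #    agenda, which the conversational behavior handles before acting.
    for situation in agent.detect_cooperative_situations(plan):
        agent.information_state.agenda.append(situation)
    if agent.information_state.agenda:
        agent.run_conversational_behavior()
    # 4. Execute the selected actions and keep the IS up to date.
    agent.execute(plan)
    agent.information_state.update_task_context(plan)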

Conversational behavior. Our agents hold reactive and proactive conversational abilities. The reactive behavior allows them to manage information-seeking dialogs. Thus, users can ask questions to their virtual partners about the ongoing activity (e.g., actions of the procedure, resources, current goal, and state of objects). The proactive behavior corresponds to so-called deliberation dialogs. Here, the agent generates its own communicative intentions for the coordination of the collaborative activity. We defined three collaborative communication protocols that lead the agent to engage in this kind of dialog: (i) when the agent decides to pursue a new collective goal, it communicates with other team members to establish a joint commitment and to ensure that everyone is going to use the same plan of action to achieve this goal; (ii) when the agent has performed its planned actions and the shared activity is not yet finished, it requests information from other team members to know when the activity will be finished; (iii) the agent who has just finished the last action of the shared activity informs the other team members that the activity has ended.
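The three proactive protocols can be summarized by their triggers, as in the sketch below; the predicates and message-sending helpers are assumptions for illustration.

def proactive_dialogs(agent, activity):
    # (i) New collective goal: establish joint commitment on a shared plan.
    if agent.adopted_new_collective_goal(activity):
        agent.propose_plan_to_team(activity.plan)
    # (ii) Own actions done but shared activity unfinished: ask teammates
    #      when the activity will be finished.
    elif agent.finished_own_actions(activity) and not activity.finished:
        agent.request_completion_status()
    # (iii) Just performed the last action: inform the team it has ended.
    elif agent.performed_last_action(activity):
        agent.inform_activity_ended()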

6. RESULTS

6.1. Two Industrial Applications

Using the concepts presented in Sections 4 and 5, we developed two different CVETs. These two environments have common needs. First, they both involve several trainees that are controlled seamlessly by users or autonomous agents with no impact on the learning procedure. Second, they can be used on various virtual reality platforms, from a computer station to a complete immersive system using tracking devices. Finally, the use of physics and communication in both of these trainings is an asset that improves the user's learning.

Military application. The first virtual environment for training we designed is a collaborative procedure involving five trainees. The procedure consists of the deployment of a military vehicle.





Figure 4. Collaborative virtual environments developed using the presented concepts.

An overview of the resulting environment is shown in Figure 4(a).

Industrial application. The second scenario is a virtual factory where two teammates have to learn how to exchange the mold of an industrial machine. This environment is visible in Figure 4(b).

Moreover, a video demonstrating each of our contributions (independently as well as together in the same application) is available at http://youtu.be/5olmgpxoTUg.

6.2. Evaluation of the Theoretical Concepts

In order to measure the benefits of our CVET, we designed three experiments. The first experiment aimed at evaluating the metaphors used in the exchange of Shells in order to determine how such a metaphor should be defined for a better comprehension and use. This experiment and its results are detailed in [26]. The conclusion of this experiment is that different parameters should be taken into account depending on the application context (user-friendliness versus efficiency) and also that different metaphors should be used depending on the familiarity of the end user with virtual reality applications, which is really important in the design of CVETs.

Moreover, we conducted two other experiments. The first concerned the verbal communication between the user and the autonomous virtual agents in the context of a collaborative task. This experiment mainly aimed at analyzing how the user's activity articulates with those of the virtual teammates. The preliminary results, based on verbalization analysis, highlight that users consistently react to the information provided by the agents and to their questions or requests.

Finally, our last experiment aims at evaluating the collaborative training system with end users. Its purpose is to evaluate the 'usability' (ISO 9241-11) of the global system. This evaluation consists of a collaborative scenario with five actors, where one of them is a real user who has to collaborate, synchronize, and dialog with four autonomous agents in order to complete a complex collaborative task and to correct their mistakes. This experiment is currently ongoing.

7. CONCLUSION

We presented a whole and complex model of CVET intensively using autonomous agents to assist trainees as full-fledged team members. Our contributions fall under two main areas. First, we presented an advanced model of collaborative interactions. This model has been completed by the Shell to gather the control of an avatar and its associated knowledge and to abstract the real or virtual nature of the actors. Moreover, we proposed to improve the user experience during the training using a physics simulation for the avatars and the objects of the virtual world. This contribution aims at intensifying the realism of the feedback given to the user and allows him/her to interact more naturally with the environment. Thus, it can be seen as a baseline of reasoning components that can be considered when building new CVETs.

Second, we detailed a new model of communicative agent that handles the specific constraints of CVETs. Thanks to this communicative behavior, agents are now able to engage themselves in shared plans, to produce and interpret dialog acts to fulfill commitments toward collaborative goals, and to coordinate their actions with users and other autonomous agents. Moreover, they are able to share their knowledge of the procedure or of the elements of the simulated environment with other trainees, which is a real benefit for users as they can ask agents for help as they would do with a real teammate.

To conclude, we presented a model of autonomous agents that fits the various needs of CVETs. These agents are able to collaborate seamlessly with virtual agents as well as users. This collaboration is possible at different levels, from collaborative actions to collaborative conversation and knowledge exchange. The use of physics is a further asset that helps reinforce the user's immersion during the collaboration. This whole CVET has been developed in the French research project CORVETTE, involving academic and industrial partners.




More details on the project and on the different individual contributions can be found on the CORVETTE website (http://corvette.irisa.fr). Our current research now aims at evaluating the concept of our entire CVET with end users in order to estimate the benefits of our various contributions. We will also estimate these benefits with regard to the setup used for the training, from a basic computer station to a fully immersive setup.

REFERENCES

1. Sycara K, Sukthankar G. Literature review of teamwork models. Robotics Institute, 2006.

2. Swartout WR, Gratch J, Hill RW, Hovy EH, Marsella S, Rickel J, Traum DR. Toward virtual humans. AI Magazine 2006; 27(2): 96–108.

3. Dugdale J, Pavard B, Pallamin N, el Jed M, Maugan CL. Emergency fire incident training in a virtual world. In Proceedings of ISCRAM 2004, Brussels, Belgium, 2004.

4. Edward L, Lourdeaux D, Barthes J-P, Lenne D, Burkhardt J-M. Modelling autonomous virtual agent behaviours in a virtual environment for risk. International Journal of Virtual Reality 2008; 7(3): 13–22.

5. Chevaillier P, Trinh T-H, Barange M, De Loor P, Devillers F, Soler J, Querrec R. Semantic modeling of virtual environments using MASCARET. In Software Engineering and Architectures for Realtime Interactive Systems, Orange County, CA, USA, 2012; 1–8.

6. Barot C, Lourdeaux D, Burkhardt J-M, Amokrane K, Lenne D. V3S: a virtual environment for risk-management training based on human-activity models. Presence: Teleoperators and Virtual Environments 2013; 22(1): 1–19.

7. Mollet N, Arnaldi B. Storytelling in virtual reality for training. In Edutainment, Hangzhou, China, 2006; 334–347.

8. Gerbaud S, Mollet N, Ganier F, Arnaldi B, Tisseau J. GVT: a platform to create virtual environments for procedural training. In IEEE VR, Reno, Nevada, USA, 2008; 225–232.

9. Rickel J, Johnson WL. Virtual humans for team training in virtual reality. In Proceedings of the Ninth International Conference on Artificial Intelligence in Education, 1999; 578–585.

10. Lok B. Teaching communication skills with virtual humans. IEEE Computer Graphics and Applications 2006; 26(3): 10–13.

11. Leßmann N, Kopp S, Wachsmuth I. Situated interaction with a virtual human - perception, action, cognition. In Situated Communication. Walter de Gruyter: Berlin, 2006; 287–323.

12. Traum DR, Larsson S. The information state approach to dialogue management. In Current and New Directions in Discourse and Dialogue. Springer, 2003; 325–353.

13. Rich C, Sidner CL, Lesh N. COLLAGEN: applying collaborative discourse theory to human-computer interaction. AI Magazine 2001; 22(4): 15–25.

14. Yamane K, Kuffner JJ, Hodgins JK. Synthesizing animations of human manipulation tasks. In TOG, Los Angeles, California, USA, 2004, ACM; 532–539.

15. Liu M, Micaelli A, Evrard P, Escande A. Task-driven posture optimization for virtual characters. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA), Lausanne, Switzerland, 2012; 155–164.

16. Traum D, Marsella S, Gratch J, Lee J, Hartholt A. Multi-party, multi-issue, multi-strategy negotiation for multi-modal virtual agents. In Intelligent Virtual Agents, Tokyo, Japan, 2008; 117–130.

17. Buschmeier H, Kopp S. Towards conversational agents that attend to and adapt to communicative user feedback. In Intelligent Virtual Agents, Reykjavik, Iceland, 2011; 169–182.

18. Mollet N, Gerbaud S, Arnaldi B. STORM: a generic interaction and behavioral model for 3D objects and humanoids in a virtual environment. In IPT-EGVE, Weimar, Germany, 2007; 95–100.

19. Saraos Luna A, Gouranton V, Arnaldi B. Collaborative virtual environments for training: a unified interaction model for real humans and virtual humans. In Edutainment, Darmstadt, Germany, 2012; 1–12.

20. Liu M, Micaelli A, Evrard P, Escande A, Andriot C. Interactive dynamics and balance of a virtual character during manipulation tasks. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 2011; 1676–1682.

21. Cohen PR, Levesque HJ. Confirmations and joint action. In Proceedings of IJCAI'91, Sydney, New South Wales, Australia, 1991; 951–957.

22. Grosz BJ, Kraus S. Collaborative plans for complex group action. Artificial Intelligence 1996; 86(2): 269–357.

23. Rao AS, Georgeff MP. BDI agents: from theory to practice. In First International Conference on Multi-Agent Systems, San Francisco, California, 1995; 312–319.

24. Mascardi V, Demergasso D, Ancona D. Languages for programming BDI-style agents: an overview. In Proceedings of WOA'05, Camerino, Italy, 2005; 9–15.

25. Bunt H. Multifunctionality in dialogue. Computer Speech & Language 2011; 25(2): 222–245.




26. Lopez T, Bouville R, Loup-Escande E, Nouviale F, Gouranton V, Arnaldi B. Exchange of avatars: toward a better perception and understanding. IEEE Transactions on Visualization and Computer Graphics 2014; 20(4): 644–653. DOI 10.1109/TVCG.2014.22.

AUTHORS’ BIOGRAPHIES

Thomas Lopez is a research engineer in computer science at INSA de Rennes. He received his master's degree in 2008 and his PhD degree in Computer Science from INSA de Rennes in 2012. His research interests focus on virtual reality, immersion, autonomous agents, and path planning.

Pierre Chevaillier is a full professor in Computer Science at ENIB (Brest, France) and a member of the Lab-STICC. His research interests are in artificial intelligence (multi-agent systems) and virtual reality. The overall objective is to provide solutions for the design of collaborative virtual environments that improve users' engagement. His current work aims at endowing virtual agents with adaptive decision-making capabilities that contribute to natural interactions between virtual humans and users for the efficient coordination of their actions.

Valérie Gouranton received her PhD degree in Computer Science from the University of Rennes (France) in 1997. In 1999, she became an assistant professor at the University of Orléans, France. In 2006, she became an associate professor of Computer Science at INSA de Rennes (Engineer School) and IRISA/Inria-Rennes. Her research interests include virtual reality, 3D interaction, and collaborative virtual environments for training (CVETs).

Paul Evrard received his MS degree in Computer Science from the French Institut d'Informatique d'Entreprise in 2006 and his PhD degree in robotics in 2010 from the Université de Montpellier 2, France. He is currently a research scientist at CEA-LIST in Gif-sur-Yvette, France. His research interests include digital human modelling and control, humanoid robotics, and human-robot interaction.

Florian Nouviale received his master's degree at INSA de Rennes in 2008. He is an engineer for INSA de Rennes at the IRISA laboratory, working on the Immersia platform and in the Hybrid team. His research interests include computer vision, computer graphics, virtual reality, immersion, and virtual training.

Mukesh Barange is a PhD student in Computer Science at École Nationale d'Ingénieurs de Brest (ENIB), France. After an MSc degree in Computer Science and a Master Research degree in Human Centered Information Systems at Telecom Bretagne, France, he is pursuing his PhD in Computer Science at ENIB. His current research interests are virtual reality, natural language dialogue management, semantic modeling, and interaction and collaboration between users and virtual agents.

Rozenn Bouville After three years in industry as an R&D engineer, Rozenn Bouville started her PhD in 2009 at INSA (Rennes, France) and obtained her degree in 2012. She is now a research engineer in the Hybrid team of IRISA in Rennes. Her research fields concern computer graphics and virtual reality environments, with a special interest in interoperability issues and virtual environments for training.

Bruno Arnaldi received his PhD degree in Computer Science from the University of Rennes in 1988. For his thesis, he introduced physically based models in computer animation. In 1988, he became a full-time INRIA researcher in computer graphics at IRISA/INRIA. In 1994, he became a professor in Computer Science at INSA de Rennes (Engineer School). He was the head of the project SIAMES (Image Synthesis, Animation, Modelling, and Simulation). He was the chairman of PERF-RV and Part@ge, French national platforms dedicated to virtual reality. He was a member of the Network Management Committee of the NOE INTUITION and is currently the vice president of the French Association for Virtual Reality (AFRV). He is now the vice director of the IRISA laboratory (around 750 members).
