
International Symposium on Robotics and Automation 2004, August 25-27, 2004, Querétaro, México

Planning Cooperative Motions for Animated Characters

Claudia Esteves, Gustavo Arechavaleta, Jean-Paul Laumond
7, avenue du Colonel Roche, 31077 Toulouse

{cesteves,garechav,jpl}@laas.fr

Abstract

This paper presents a motion planning scheme to plan, generate and edit motions for two or more virtual characters in cluttered environments. The main challenge is to deal with 3D collision avoidance while preserving the believability of the agents' behaviors. To accomplish a coordinated task, a geometrical decoupling of the system is proposed. Several techniques, such as probabilistic path planning for open and closed kinematic chains, motion controllers and motion editing, are integrated within a single algorithmic framework.

1 Introduction

The interaction between the fields of robotics and computer animation is not new. The human figure has frequently been represented in computer animation with the articulated mechanisms used in robotics to control manipulators [1]. With this common representation, many techniques have been developed within both fields, with two different goals in mind: on one hand, applications of computer animation, mainly in the entertainment industry, have made it necessary to develop techniques that generate realistic-looking motions; on the other hand, the interest in motion from a robotics point of view is to generate it automatically, without concern for realism.

In recent years, emerging applications in both areas (e.g. ergonomics, interactive video games) have drawn researchers toward automatically generating human-like, plausible motions.

With such a motivation, several approaches to automatically plan motions for a human mannequin have been developed. These works have focused on animating one behavior at a time. For instance, in [6] the authors make a virtual mannequin perform manipulation planning. Another approach, using an automated footprint planner that deals with locomotion on rough terrain, is described in [2]. A two-step path planner for walking virtual characters was proposed in [8]. This approach consists in planning a collision-free path for a cylinder in a 2D world and then animating the mannequin along the path. This planner was extended in [12] in order to deal with 3D obstacle avoidance and produce eye-believable motions.

In the same spirit, this work combines techniques drawn from robotics and computer animation into a single motion planning scheme accounting for two behaviors: locomotion and manipulation. Here, the main challenge is to deal with 3D collision avoidance while preserving the believability of the motions and allowing cooperation between two or more virtual mannequins. In order to achieve this, we define a simplified model of the task (walking while carrying a bulky object) by performing a geometric and kinematic decomposition of the system's degrees of freedom (DOF). This task model is extended to allow cooperation between characters by automatically computing an approximation of the so-called "reachable cooperative space".

The remainder of this paper is structured as follows: Section 2 gives a brief overview of the techniques used in this work. Section 3 describes the system and task models. Section 4 describes how all the techniques are combined in a three-step algorithm in order to obtain an animation. Section 5 shows and discusses examples of individual and cooperative tasks. Finally, conclusions and future work are presented in Section 6.

2 Techniques Overview

In order to generate complete motion sequences of one or more virtual mannequins transporting a bulky object in a cluttered environment, we use three main components:

• a motion planner that handles open and closed kinematic chains

• motion controllers adapted for virtual human mannequins

• a 3D collision-avoidance editing strategy.

There are many techniques that could be used to cover these requirements. In the paragraphs below, we describe the techniques that we think are best adapted to our problem.


2.1 Probabilistic Motion Planning Techniques

2.1.1 Probabilistic Roadmaps

The interest of this method is to obtain a representation of the topology of the collision-free space in a compact data structure called a roadmap. Such a structure is computed without requiring an explicit representation of the obstacles. A roadmap can be obtained by using two types of algorithms: sampling or diffusion. The main idea of the former (e.g. PRM [5]) is to sample random configurations lying in the free space and to trace edges connecting them to neighboring samples. Edges, or local paths, must also be collision-free, and their shape depends on the kinematic constraints (steering method) of the moving device. Diffusion techniques [9] consist in sampling the collision-free space with only a few configurations, called roots, and diffusing the exploration into their neighborhoods along randomly chosen directions.

In this work we use a variant of the first approach: the Visibility PRM [14]. In this method there are two types of nodes: guards and connectors. A node is added only if it is not "visible" from previously sampled nodes, or if it allows two or more connected components of the roadmap to be linked. The roadmap generated with this approach is more compact than the one obtained using the basic PRM.
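
For concreteness, here is a minimal sketch of this guard/connector logic, assuming two hypothetical helpers that are not part of the paper's implementation: sample_free_config(), which returns a random collision-free configuration, and visible(a, b), which tests whether the local path between two configurations is collision-free:

```python
def visibility_prm(sample_free_config, visible, max_tries=1000):
    """Sketch of Visibility-PRM node insertion: a sample is kept as a
    *guard* if no existing guard sees it, kept as a *connector* if it
    links two or more roadmap components, and discarded otherwise."""
    guards = []          # (configuration, component_id) pairs
    edges = []           # roadmap edges (local paths)
    next_component = 0
    for _ in range(max_tries):
        q = sample_free_config()
        seen = {}        # component_id -> a guard of that component seeing q
        for g, comp in guards:
            if comp not in seen and visible(g, q):
                seen[comp] = g
        if not seen:                         # invisible from all guards
            guards.append((q, next_component))
            next_component += 1
        elif len(seen) > 1:                  # q merges several components
            target = min(seen)
            # q becomes a connector, linked to one guard per merged component
            edges.extend((g, q) for g in seen.values())
            guards = [(g, target if c in seen else c) for g, c in guards]
        # else: q is visible from exactly one component and is discarded
    return guards, edges
```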

2.1.2 Planning for open kinematic chains

A motion path can be found in a roadmap by using a two-step algorithm consisting of a learning phase and a query phase. For an articulated mechanism, the roadmap is computed in the learning phase by generating random configurations within the allowed range of each DOF. In the query phase, the initial and final configurations are added as new nodes of the roadmap and connected with an adapted steering method. Then, a graph search is performed to find a collision-free path between the start and goal configurations. If a path is found, it is converted into a trajectory (a time-parameterized path).
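
A minimal sketch of the query phase, assuming the roadmap is stored as an adjacency dictionary over hashable configurations and connect(q, nodes) is a hypothetical helper returning the roadmap nodes the steering method can reach from q without collision:

```python
from collections import deque

def query(roadmap, connect, start, goal):
    """Sketch of the query phase: insert the start and goal
    configurations into the roadmap, then search for a path (BFS)."""
    for q in (start, goal):
        neighbors = connect(q, list(roadmap))   # steering-method links
        roadmap[q] = list(neighbors)
        for n in neighbors:
            roadmap[n].append(q)
    parent, queue = {start: None}, deque([start])
    while queue:                                 # breadth-first search
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:              # unwind the parent chain
                path.append(node)
                node = parent[node]
            return path[::-1]
        for n in roadmap[node]:
            if n not in parent:
                parent[n] = node
                queue.append(n)
    return None                  # start and goal are not connected
```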

2.1.3 Planning for closed kinematic chains

In order to handle the motions of closed kinematic mechanisms, several path planning methods have been proposed in the literature [10, 4, 3]. In our work we have chosen to use the Random Loop Generator (RLG) algorithm proposed in [3]. To apply this method, a closed kinematic chain is divided into active and passive parts. The main idea of the algorithm is to decrease the complexity of the closed kinematic chain at each iteration until the active part becomes reachable by all passive chain segments simultaneously. The reachable workspace of a kinematic chain is defined as the volume that its end-effector can reach. An approximation of this volume is automatically computed by the RLG using a simple bounding volume (a spherical shell) consisting of the intersection of concentric spheres and cones. A guided random sampling of the configurations of the active part is done inside the computed shell and within the current joint limits. When several loops are present in the mechanism, they are treated as separate closed chains.

Once the roadmap is constructed, a path is found in the same way as for open kinematic chains.
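
To illustrate the shell approximation, here is a minimal membership test for a volume bounded by two concentric spheres and a cone around an axis; this parameterization is a simplification for illustration, not the RLG's actual data structure:

```python
import numpy as np

def in_spherical_shell(p, center, axis, r_min, r_max, half_angle):
    """True if point p lies between two concentric spheres of radii
    r_min/r_max and inside the cone of the given half-angle around
    `axis` (a unit vector), both centered at `center`."""
    v = np.asarray(p, float) - np.asarray(center, float)
    d = np.linalg.norm(v)
    if not (r_min <= d <= r_max):
        return False
    cos_angle = np.dot(v / d, axis)        # angle between v and the axis
    return cos_angle >= np.cos(half_angle)

# Hypothetical usage: rejection-sample a reachable end-effector position.
rng = np.random.default_rng(0)
center, axis = np.zeros(3), np.array([0.0, 0.0, 1.0])
while True:
    p = rng.uniform(-1.0, 1.0, 3)
    if in_spherical_shell(p, center, axis, 0.4, 0.9, np.deg2rad(60)):
        break
```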

2.2 Motion Generation Techniques

2.2.1 Kinematics-based methods

Kinematics-based techniques specify motions independently of the underlying forces that produce them. A motion can either be defined by specifying the value of each joint (forward kinematics) or be derived from a given end-effector configuration (inverse kinematics). In this work we are especially interested in generating the motions of virtual human characters. In computer animation this approach has frequently been used to generate the motions of articulated human characters, as in [18]. Several inverse kinematics (IK) algorithms for 7-DOF anthropomorphic limbs have been developed based on biomechanical data in order to best reproduce human arm motions (e.g. [7, 15]). In our work we have chosen the combined analytic and numerical IK method presented in [15]. Kinematics-based methods are well adapted when a specific target is given (as in reaching motions), but generating believable motions with them is often problematic.
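
For intuition only, a minimal analytic IK sketch for a two-link planar arm; the 7-DOF human-arm solver of [15] used in this work is considerably more involved:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic IK for a planar two-link arm: returns the shoulder and
    elbow angles placing the end-effector at (x, y), elbow-down."""
    d2 = x * x + y * y
    # Clamp the cosine to [-1, 1] so boundary/unreachable targets degrade
    # gracefully instead of raising a domain error.
    c2 = max(-1.0, min(1.0, (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)))
    elbow = math.acos(c2)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```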

2.2.2 Motion-capture-based methods

Motion capture allows rapid generation of character motions and has been widely used in interactive and real-time applications. The idea behind these techniques is to record human motions by placing sensors on the subject's body and to apply the data later to a synthetic model [11]. Motion libraries are built by filtering and characterizing the data obtained from the recorded motions. One particular motion at a time is later chosen from the library. The disadvantage of these techniques is that, when used alone, the motions generated cannot be adapted or reused.


2.2.3 Motion Editing Techniques

When a limited set of data is available, as in a motion capture library, the set of possible motions needs to be expanded in order to produce an animation. Interpolation methods to modify, combine and adapt these motions have been developed to meet this need. For example, in [17] motion parameters are represented as curves that are blended in order to combine motions. In [16] the authors extract the motion characteristics from original data and define a functional model based on Fourier series expansions, in order to interpolate or extrapolate human locomotion and generate emotion-based animations. These methods deal with the generation of new motions while preserving the qualities of the original ones, such as realism and expressiveness.
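
A minimal sketch of curve blending in the spirit of [17], linearly interpolating two joint-angle curves sampled at the same frames; the weight schedule below is an illustrative choice, not the paper's:

```python
import numpy as np

def blend(curve_a, curve_b, weights):
    """Per-frame linear blend of two joint-angle curves (arrays of the
    same shape); weights in [0, 1] morph from curve_a to curve_b."""
    a, b = np.asarray(curve_a), np.asarray(curve_b)
    w = np.asarray(weights)[:, None]          # one weight per frame
    return (1.0 - w) * a + w * b

# Hypothetical usage: ease from one motion into another over 100 frames.
frames = 100
walk  = np.zeros((frames, 7))                 # placeholder joint curves
reach = np.ones((frames, 7))
w = 0.5 - 0.5 * np.cos(np.linspace(0.0, np.pi, frames))  # smooth ease-in
blended = blend(walk, reach, w)
```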

3 System and Task Modeling

Our system can be composed of one or several human or robot mannequins. All characters are modeled as a hierarchy of rigid bodies connected by constrained links that try to reproduce (in the case of human mannequins) the human joint limits.

Human mannequins are modeled with 53 DOF in 18 spherical and rotational joints, arranged in a tree structure of 5 kinematic chains that converge at the character's pelvis (Figure 1).

Here, we consider locomotion and manipulation as the robots' basic complementary behaviors. In order to combine them, we perform a geometrical decoupling of the system's DOF according to their main function. The mannequins are thus divided into three DOF groups: Locomotion, Grasp and Mobility.

Figure 1: DOF are decomposed into three groups according to their function: locomotion, mobility and grasp.

The Locomotion (resp. Grasp) DOF are those involved in producing the walking (resp. manipulating) behavior of the virtual mannequin. The Mobility group contains the DOF that allow posture control complementary to the specified behaviors. In our virtual mannequin these are the DOF located in the head and spine.
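
A minimal sketch of this grouping as a data structure; the joint names and the per-group assignment are illustrative guesses, since the paper specifies only the 53-DOF total and the head/spine mobility group:

```python
# Hypothetical DOF grouping for the mannequin; names are illustrative.
DOF_GROUPS = {
    "locomotion": ["pelvis_tx", "pelvis_ty", "pelvis_tz",   # root pose
                   "hip_l", "knee_l", "ankle_l",
                   "hip_r", "knee_r", "ankle_r"],
    "grasp":      ["shoulder_l", "elbow_l", "wrist_l",
                   "shoulder_r", "elbow_r", "wrist_r"],
    "mobility":   ["spine_1", "spine_2", "neck", "head"],
}

def select(config, group):
    """Extract the sub-configuration handled by one behavior."""
    return {j: config[j] for j in DOF_GROUPS[group] if j in config}
```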

This geometrical decoupling strategy is fundamental in our approach because, in this way, a reduced system model can be employed to simplify the control and the description of the current task.

As we are aiming for two mannequins to interact with each other in the manipulation behavior, a geometrical decoupling of the system is not enough to describe the cooperative task. The paragraphs below describe the strategy used to compute a reachable cooperative space that completes the task description.

As can be seen in Figure 2, when two virtual mannequins hold the same object, several closed kinematic loops are formed. In this case, the system is considered as a multiple-loop articulated mechanism and treated with the RLG technique overviewed in Section 2.1.3.

Figure 2: Closed kinematic chains are formed in cooperative manipulation.

For a single human mannequin, the arm inverse kinematics algorithm defines the range of its reachable space. This space is automatically approximated using the spherical shell volume. For a cooperative task, the reachable cooperative space is considered to be the intersection of all the individual spaces (Figure 3a).

Figure 3b shows that even when the individual reachable spaces do not intersect, a cooperative reachable space approximation can still be computed by using the large object as the end-effector of each kinematic chain.



Figure 3: Individual reachable spaces are intersected to obtain a cooperative space for manipulation.
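
The intersection itself reduces to a conjunction of the individual membership tests. A minimal sketch, where in_shell is a hypothetical per-mannequin reachability test such as the one sketched in Section 2.1.3:

```python
def in_cooperative_space(p, shells, in_shell):
    """A point is in the reachable cooperative space if it lies inside
    every mannequin's (approximated) individual reachable volume."""
    return all(in_shell(p, shell) for shell in shells)
```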

4 Algorithm

Our algorithm consists of three stages:

1. Plan a collision-free trajectory for a reduced model of the system;

2. Animate the locomotion and manipulation behaviors in parallel;

3. Edit the generated motions to avoid residual collisions.
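
A minimal sketch of how these three stages chain together; the stage functions are hypothetical placeholders for the components described in the following subsections, not the paper's API:

```python
def animate(plan, synthesize, edit, start, goal, n_frames):
    """Sketch of the three-stage pipeline; each argument is a callable
    standing in for one stage of the algorithm."""
    frames = plan(start, goal, n_frames)   # 1. reduced-model planning
    animation = synthesize(frames)         # 2. parallel behavior synthesis
    return edit(animation)                 # 3. residual-collision editing
```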

The user-specified inputs for the algorithm are:

• a geometric and kinematic description of the system

• maximal linear velocity and acceleration

• number of desired frames in the animation

• a motion library containing captured data from different walking sequences

• an initial and a final configuration.

The output is an animated sequence of the combined behaviors. In the next paragraphs each step of the algorithm is described.

4.1 Motion planning

In this step, a simplified model of the system is employed in order to reduce the complexity of the planning problem. For this, each mannequin's locomotion DOF are covered with a bounding box, as seen in Figure 4.

In this example a 12-DOF reduced system is obtained, with 6 parameters specifying the mannequin's position and orientation and the other 6 specifying the object's motion. With this model, a collision-free path is found using the probabilistic roadmap method described in Section 2.1.1. In this work we have considered that smooth human-like paths can be approximated by third-degree Bézier curves, but other local paths could be used instead.

Figure 4: Simplified model used for planning a trajectory.

Once a collision-free path is found, it is transformed into a set of discrete, time-stamped positions (animation frames), which form the input to the next step of the algorithm.
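
A minimal sketch of a cubic Bézier local path and its discretization into time-stamped frames; the uniform time sampling is an illustrative simplification, since the paper's trajectories also respect velocity and acceleration bounds:

```python
import numpy as np

def bezier3(p0, p1, p2, p3, ts):
    """Points on a third-degree Bézier curve at parameters ts in [0, 1]."""
    t = np.asarray(ts, dtype=float)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Hypothetical usage: discretize one local path into time-stamped frames.
p0, p1, p2, p3 = map(np.array, ([0., 0.], [1., 2.], [3., 2.], [4., 0.]))
n_frames, duration = 50, 5.0                     # illustrative values
ts = np.linspace(0.0, 1.0, n_frames)
frames = list(zip(ts * duration, bezier3(p0, p1, p2, p3, ts)))
```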

4.2 Motion generation

In this step, locomotion and manipulation behaviors are synthesized in parallel using the complete model of the system.

The locomotion controller described in [12] uses motion-capture editing techniques in order to combine different walking cycles from the library and to adapt them to the velocity- and acceleration-constrained trajectory. Here, the walking behavior sets the joint values for the locomotion and mobility DOF, leaving the grasp DOF unanimated.

The grasping behavior is animated separately by applying 7-DOF inverse kinematics, so as to reach the values imposed by the object configurations along the trajectory defined in the planning step. In this process only the grasp DOF values are specified.
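
A sketch of how the two behaviors could be evaluated in parallel per frame, assuming hypothetical walk_controller and grasp_ik callables that return dictionaries of DOF values:

```python
def synthesize(frames, walk_controller, grasp_ik):
    """Per-frame parallel synthesis: the walk controller fills the
    locomotion and mobility DOF; inverse kinematics fills the grasp DOF
    from the planned object pose."""
    animation = []
    for time, base_pose, object_pose in frames:
        posture = walk_controller(time, base_pose)      # locomotion + mobility
        posture.update(grasp_ik(posture, object_pose))  # grasp DOF only
        animation.append((time, posture))
    return animation
```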

4.3 Motion editing

In the first step of the algorithm, only a system involving the lower part of the mannequins was considered. The generated trajectory may therefore leave residual collisions involving the remaining DOF (mobility and grasp). The purpose of this last step is to solve these possible remaining collisions while preserving the believability of the generated motions. Collisions involving the mobility and grasp DOF are treated differently depending on the nature of the kinematic chains that contain them (open or closed).


In the case of an open kinematic chain (head-spine), a local deformation of the chain is performed until a valid (collision-free) configuration is reached. Thereafter, a warping method is applied to obtain the minimal deformation and preserve the smoothness of the motions.

When collisions are found in a closed chain (arms-object), a new local collision-free path is found for the chain and the same warping method is applied.

If no collision-free configuration can be found at this stage, the computed trajectory is invalidated and a new one is planned.
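
A minimal sketch of warping in the spirit of [17]: a local joint correction at a collision frame is smoothly spread over a window so the edit fades in and out. The window size and smoothstep falloff are illustrative choices, not the paper's exact method:

```python
import numpy as np

def warp(curve, frame, correction, window):
    """Add `correction` to a joint curve at `frame`, blended to zero at
    the window edges with a smoothstep falloff."""
    curve = np.array(curve, dtype=float)
    lo, hi = max(0, frame - window), min(len(curve) - 1, frame + window)
    for i in range(lo, hi + 1):
        s = 1.0 - abs(i - frame) / float(window)   # 1 at frame, 0 at edges
        s = s * s * (3.0 - 2.0 * s)                # smoothstep easing
        curve[i] += s * correction
    return curve
```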

5 Experimental Results

We have implemented our algorithm within the software platform Move3D [13], developed at LAAS. We have generated animations in different environments, varying their size and complexity. Two types of virtual mannequins have been used. In the paragraphs below, two examples are presented and discussed.

5.1 Pizza Delivery Service

In this example the virtual pizza delivery boy has to take a pizza from one office to another (Figure 5). Here, we would like the mannequin to keep the boxes horizontal along the trajectory, to prevent the pizzas from losing their ingredients. For this, we impose kinematic constraints by restricting two of the six DOF of the free-flying object. This means that, in a roll-pitch-yaw representation, we have removed the DOF allowing the object to pitch and roll.

Figure 5: Initial and final configurations.
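
A minimal sketch of this restriction, assuming object configurations are stored as (x, y, z, roll, pitch, yaw) tuples (an illustrative convention, not the planner's actual representation):

```python
def constrain_horizontal(config):
    """Keep a free-flying object's boxes level: zero the roll and pitch
    DOF, leaving position and yaw free."""
    x, y, z, roll, pitch, yaw = config
    return (x, y, z, 0.0, 0.0, yaw)
```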

Once the initial and final configurations, the velocity and acceleration constraints and the number of frames are specified by the user, the algorithm is applied and the animation generated. Figure 6 shows the resulting trajectory as a sequence of configurations, selected for image clarity.

After the locomotion and grasping behaviors were generated, a residual collision was found between the mannequin's arm and the bookshelf at the second office entrance. In this case, a solution was quickly found by modifying the elbow's configuration, as shown in Figure 7. This collision could hardly have been avoided in the original path because the doorways are narrow and the bookshelf is very near the final configuration.

Figure 6: Selected configurations of the computed trajectory.

Figure 7: The mannequin moves his elbow in order to avoid collision with the bookshelf.

5.2 In the factory

The second example takes place in a typical industrial environment. Here a heavy plate has to be transported across the room by a human mannequin along with a virtual robot manipulator (Figure 8).

As can be seen, the environment is complex: it contains plenty of obstacles, and collisions are likely to occur. The drums lying around the room leave only one collision-free pathway for the system to traverse.


Figure 8: Virtual mannequins cooperating in the industrial facilities.

In the first step of our algorithm this collision-free path is found and sampled. Then, locomotion and cooperative grasping behaviors are synthesized. Here, the virtual robot is considered to be holonomic, so a straight line was used as the steering method. In the editing step, several residual collisions were found and avoided. Some frames of the final animation are shown in Figure 9.

Figure 9: The agents deal with several obstacles while transporting the plate.

5.3 Computational time

Both examples were computed on a Sun Blade 100 workstation with a 500 MHz UltraSPARC-IIe processor and 512 MB of RAM.

For the first example, the office environment is composed of 148,977 polygons, all of which take part in collision testing. The industrial environment is composed of 159,698 polygons, but only the lower 92,787 (below mannequin height) take part in collision testing.

Table 1 presents the time needed to compute each of the examples considering a pre-computed roadmap (i.e. only the query phase). The time it took to build this roadmap was 1.69 s for the office and 31.4 s for the factory. At this stage, the time varies with the complexity of the environment and with the probabilistic nature of the algorithm.

Table 1: Computational time in seconds.

Stage                 Office    Factory
No. frames            308       268–530
I. Planning
  - Path              0.5       6.5–6.5
  - Trajectory        2.0       2.1–4.5
II. Animating
  - Locomotion        0.8       0.8–1.6
  - Manipulation      0.3       0.4–0.8
III. Editing          0.2       5.7–11.4
Total time            3.8       15.5–24.8

Two different animations were generated for the trajectory in the industrial environment, and one in the office. We can see that the time to compute the trajectory in the factory is higher than in the office because there are two agents in the scene, there are more obstacles to avoid, and the distance between the initial and final configurations is greater.

The time it takes to synthesize the locomotion and manipulation behaviors is clearly proportional to the number of frames in the animation. The editing-step computing time depends on the number of residual collisions found along the trajectory.

6 Conclusions

We presented an approach to plan, generate and edit collision-free motions for cooperating virtual characters handling bulky objects. To accomplish this task, a geometric decoupling of the system is proposed. A three-step algorithm that integrates several techniques, such as motion planning algorithms, motion controllers and motion editing, is described.

This approach copes well with complementary behaviors such as locomotion and manipulation, where different DOF are used for each. It should be extended in order to generate a larger set of behaviors.

A major component of realistic motion that is not considered in this work is physically based reaction to commonly encountered forces. Work should be done to integrate this into our current motion planning scheme; this is one of our near-future goals.

At this stage, manipulation planning is not considered. This problem has been previously tackled in [6]. However, more complicated instances of this problem should be considered in order to plan the motions of several interacting arms.

Animations related to this work can be found at http://www.laas.fr/RIA/RIA-research-motion-character.html

Acknowledgment

C. Esteves and G. Arechavaleta benefit from SFERE-CONACyT grants. This work is partially funded by the European Community projects FP5 IST 2001-39250 Movie and FP6 IST 002020 Cogniron.

References

[1] N. Badler, C. Phillips, and B. Webber, Simulating Humans: Computer Graphics, Animation, and Control. University of Pennsylvania, Philadelphia: Oxford University Press, 1992.

[2] M. Choi, J. Lee, and S. Shin, "Planning biped locomotion using motion capture data and probabilistic roadmaps," ACM Transactions on Graphics, vol. 22, no. 2, 2003.

[3] J. Cortés and T. Siméon, "Probabilistic motion planning for parallel mechanisms," in Proc. IEEE International Conference on Robotics and Automation (ICRA 2003), 2003.

[4] L. Han and N. Amato, "A kinematics-based probabilistic roadmap method for closed chain systems," in Proc. International Workshop on Algorithmic Foundations of Robotics (WAFR'00), 2000.

[5] L. Kavraki, P. Svestka, J.-C. Latombe, and M. Overmars, "Probabilistic roadmaps for path planning in high-dimensional configuration spaces," IEEE Transactions on Robotics and Automation, 1996.

[6] Y. Koga, K. Kondo, J. Kuffner, and J. Latombe, "Planning motions with intentions," ACM SIGGRAPH Computer Graphics, vol. 28, pp. 395–408, 1994.

[7] K. Kondo, "Inverse kinematics of a human arm," Journal of Robotic Systems, vol. 8, no. 2, pp. 115–175, 1991.

[8] J. Kuffner, "Autonomous agents for real-time animation," Ph.D. dissertation, Stanford University, Stanford, CA, December 1999.

[9] S. LaValle, "Rapidly-exploring random trees: A new tool for path planning," Tech. Rep., Computer Science Department, Iowa State University, October 1998.

[10] S. LaValle, J. H. Yakey, and L. E. Kavraki, "A probabilistic roadmap approach for systems with closed kinematic chains," in Proc. IEEE International Conference on Robotics and Automation, 1999.

[11] R. Parent, Computer Animation: Algorithms and Techniques. The Ohio State University: Morgan Kaufmann Publishers, 2001.

[12] J. Pettré, J.-P. Laumond, and T. Siméon, "A 2-stages locomotion planner for digital actors," in Proc. ACM SIGGRAPH/Eurographics Symposium on Computer Animation, San Diego, California: Eurographics Association, 2003, pp. 258–264.

[13] T. Siméon, J.-P. Laumond, and F. Lamiraux, "Move3D: A generic platform for motion planning," in Proc. 4th International Symposium on Assembly and Task Planning (ISATP 2001), 2001.

[14] T. Siméon, J.-P. Laumond, and C. Nissoux, "Visibility-based probabilistic roadmaps for motion planning," Advanced Robotics, vol. 14, no. 6, 2000.

[15] D. Tolani, A. Goswami, and N. Badler, "Real-time inverse kinematics techniques for anthropomorphic limbs," Graphical Models, vol. 62, no. 5, pp. 353–388, 2000.

[16] M. Unuma, K. Anjyo, and R. Takeuchi, "Fourier principles for emotion-based human figure animation," in Proc. SIGGRAPH '95, 1995.

[17] A. Witkin and Z. Popović, "Motion warping," in Proc. SIGGRAPH '95, 1995.

[18] J. Zhao and N. Badler, "Inverse kinematics positioning using nonlinear programming for highly articulated figures," ACM Transactions on Graphics, vol. 14, no. 4, 1994.