
Learning how to approach industrial robot tasks from natural demonstrations

Stefano Michieletto1 and Nicola Chessa1 and Emanuele Menegatti1

Abstract— In recent years, Robot Learning from Demonstration (RLfD) [1] [2] has become a major topic in robotics research. The main reason for this is that programming a robot can be a very difficult and time-consuming task.

The RLfD paradigm has been applied to a great variety of robots, but it is still difficult to make the robot learn a task properly. Often the teacher is not an expert in the field, and vice versa an expert may not know the robot well enough to be a teacher.

With this paper, we aim at closing this gap by proposing a novel motion re-targeting technique to make a manipulator learn from natural demonstrations. An RLfD framework based on Gaussian Mixture Models (GMM) and Gaussian Mixture Regression (GMR) was set up to test the accuracy of the system in terms of precision and repeatability.

The robot used during the experiments is a Comau Smart5 SiX, and a novel virtual model of this manipulator has also been developed to simulate an industrial scenario, allowing valid experimentation while avoiding damage to the real robot.

I. INTRODUCTION

Several works in Robot Learning from Demonstration (RLfD) highlight the importance of human demonstrations. Acquiring examples from humans provides a powerful mechanism to simplify the process of programming complex robot motions. Different modalities can be used to convey the demonstrations from the teacher to the robot: motion sensors [3], kinesthetic teaching [14], or vision systems [8].

The main drawback of non-vision-based techniques is that the human cannot act in a natural way: motion sensors must be attached to the human body, while kinesthetic demonstrations involve guiding the robot motion by a human performer. The introduction of low-cost RGB-D sensors [13] with good resolution and frame rate has spurred the rapid development of computer vision algorithms for human pose estimation [24], skeleton tracking [15], and activity recognition [20]. This information can be used as input for motion re-targeting techniques that make an avatar replicate the poses performed by an actor. Computer graphics [11] [17] uses such techniques to generate feasible motion for virtual characters off-line, but robotics requires on-line methods that can be applied in dynamic environments, in which sensors provide feedback to avoid dangerous collisions, follow moving objects, or react to changing requests.

On-line robotics-oriented systems have been developed to retarget motion from human to humanoids [21] [6] [19] or from human to robot limbs [9] [18]. In the cited works,

1Stefano Michieletto and Nicola Chessa and Emanuele Menegatti are with the Department of Information Engineering (DEI), University of Padova, Via Gradenigo 6/B, 35131 Padova, Italy {michieletto, chessani, emg}@dei.unipd.it

the mapping from human to robot is anthropomorphous. This practice is good if the demonstrator is aware of teaching tasks to a robot, while a different approach can be adopted if he is acting in a natural way. For example, if the human knows that his arm directly controls a manipulator, the movements he performs will be concentrated on the arm, with no lower-body motion. A natural execution, instead, involves the whole body. In this work, we developed a novel mapping from the whole human body to a manipulator, namely the Comau Smart5 SiX, and tested this approach in a Learning from Demonstration framework.

In order to avoid damage to the real robot, a novel virtual model of the robot has also been developed. The model is suitable for different Open Source 3D simulators; in particular, we tested it in Gazebo [16] and V-REP [10]. As highlighted in [25], one of the most popular and best performing simulators is Gazebo, while V-REP is an emerging platform which became Open Source at the beginning of 2013.

The system has been tested using two different scenarios. In the first scenario a specific object should be properly reached within a set of similar items stacked close to each other. In the second scenario a box is moved linearly over a certain distance. The tasks are planned to verify the flexibility and accuracy of the adopted framework, looking especially at precision and repeatability.

The novel motion re-targeting technique we propose in this paper aims to make the natural demonstrations collected from humans helpful in programming new tasks in an industrial environment.

The remainder of the paper is organized as follows: in Section II, the data acquisition system, the manipulator model, and the motion re-targeting method are presented. The Robot Learning from Demonstration framework based on Gaussian Mixture Model (GMM) / Gaussian Mixture Regression (GMR) [5] is described in Section III. The architectures used, the tests performed and the results collected are reported in Section IV. Conclusions and future work are contained in Section V.

II. MOTION RETARGETING

A. Data acquisition

In this work, a manipulator should learn a new task from a human demonstrator. Our main objective is to let the demonstrator act as naturally as possible, thus choosing the right acquisition system is a crucial step. A visual technique fits our needs better than a kinesthetic demonstration: the user is not required to rethink the task from the robot's point of view. Using a motion capture system (MoCap), a great accuracy


TABLE I
SUMMARY OF THE MAIN COMAU SMART5 SIX FEATURES.

          Lock lower angle   Lock upper angle   Speed
Axis 1        -170°              170°           140°/s
Axis 2         -85°              155°           160°/s
Axis 3        -170°                0°           170°/s
Axis 4        -210°              210°           450°/s
Axis 5        -130°              130°           375°/s
Axis 6       -2700°             2700°           550°/s

in data acquisition can be achieved, but markers have to be worn by the demonstrator. A marker system could be unwieldy or dangerous in an industrial environment, and the demonstrator might also not feel comfortable.

We propose to use an RGB-D sensor able to acquire the scene at 30 fps with a resolution of 640x480 pixels for both color and depth information. A skeleton tracking algorithm, also running at 30 fps, allows us to get data from 15 skeleton joints (Figure 1). The joint positions and orientations are processed to interpret the motion and guide the robot during the learning phase.
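As a concrete reference, the following is a minimal sketch of the per-frame skeleton data this acquisition stage is assumed to produce. The 15 joint names follow the common OpenNI/NiTE convention (the paper does not list them explicitly), and the read_skeleton_frame callable is purely hypothetical.

```python
# Sketch of the skeleton data assumed to be produced by the tracker at 30 fps.
# Joint names follow the usual OpenNI/NiTE 15-joint convention (assumption);
# read_skeleton_frame is a hypothetical callable standing in for the tracker API.
from dataclasses import dataclass
from typing import Dict, List, Tuple

JOINT_NAMES = [
    "head", "neck", "torso",
    "l_shoulder", "l_elbow", "l_hand",
    "r_shoulder", "r_elbow", "r_hand",
    "l_hip", "l_knee", "l_foot",
    "r_hip", "r_knee", "r_foot",
]

@dataclass
class Joint:
    position: Tuple[float, float, float]              # x, y, z (meters)
    orientation: Tuple[float, float, float, float]    # quaternion (x, y, z, w)

SkeletonFrame = Dict[str, Joint]                       # one entry per tracked joint

def record_demonstration(read_skeleton_frame, n_frames: int) -> List[SkeletonFrame]:
    """Collect n_frames skeleton frames (~30 fps) for a single demonstration."""
    return [read_skeleton_frame() for _ in range(n_frames)]
```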

Fig. 1. Skeleton joints provided by the tracker.

B. Robot Model

The robot used is a Comau Smart5 SiX. It is a small manipulator with 6 DoF, particularly suitable for all operations that require low payload, fast movement, and a high degree of repeatability. In Table I, the main characteristics of the robot are listed1.

Figure 2 shows the robot we used in this work and its operating area. From these technical data a URDF model (Figure 3 (a)) has been built. Information not provided by Comau, like mass and inertial matrices, has been estimated from the original 3D robot model by approximating the

1A more exhaustive description of the robot and its technical specifications can be found at http://www.comau.com

different parts as parallelepipeds and applying the formulas in Equations 1 and 2:

m = w \cdot h \cdot d \cdot SW_{steel}; \quad I = \begin{pmatrix} I_{xx} & 0 & 0 \\ 0 & I_{yy} & 0 \\ 0 & 0 & I_{zz} \end{pmatrix}    (1)

I_{xx} = \frac{1}{12} m (h^2 + d^2), \quad I_{yy} = \frac{1}{12} m (w^2 + d^2), \quad I_{zz} = \frac{1}{12} m (w^2 + h^2)    (2)

where:
• m is the parallelepiped mass;
• w, h, and d are respectively the parallelepiped width, height, and depth;
• SW_steel is the steel specific weight, approximated to 8 · 10⁻³ kg/cm³;
• I is the parallelepiped inertial matrix.

The final model has been integrated in both Gazebo and V-REP, but only the former (Figure 3 (b)) has been used as experimental environment, since the Gazebo robot plug-in can be better customized for our purposes.
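As a minimal sketch of Equations 1 and 2, the following computes the mass and diagonal inertia matrix of one robot part approximated as a steel parallelepiped; dimensions are assumed to be in meters, so the steel specific weight is expressed here as roughly 8000 kg/m³.

```python
import numpy as np

SW_STEEL = 8000.0  # steel specific weight in kg/m^3 (i.e. 8e-3 kg/cm^3)

def box_mass_inertia(w: float, h: float, d: float):
    """Mass and inertia matrix of a part approximated as a steel parallelepiped
    of width w, height h and depth d (meters), as in Equations 1 and 2."""
    m = w * h * d * SW_STEEL                      # Equation (1)
    i_xx = m * (h ** 2 + d ** 2) / 12.0           # Equation (2)
    i_yy = m * (w ** 2 + d ** 2) / 12.0
    i_zz = m * (w ** 2 + h ** 2) / 12.0
    return m, np.diag([i_xx, i_yy, i_zz])
```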

C. Retargeting

The motion re-targeting system has to consider two different articulated models: the human body and the manipulator. Recent works in computer graphics highlighted that this kind of retargeting is feasible also for complex models [26] [23]. In this work, the motion captured from the 15 skeleton joints is remapped to the 3 robot joints related to the "rough" robot motion: Axis1, Axis2, Axis3. We did not take into account the 3 remaining axes (Axis4, Axis5, Axis6), which control the "fine" end effector motion. A vision-based hand gesture interface [12] could be used to obtain a more effective control of the end effector, especially if combined with recent manipulation techniques from uncertain depth data [27].

The Axis1 rotation angle α is computed from the projection P_XY, onto the XY plane of the robot coordinate system, of the vector v(neck, r_hand) starting from the neck skeleton joint and ending in the right hand skeleton joint. In Equation 3 the analytic formula is reported.

\alpha = \begin{cases} \arccos\left(\dfrac{x_v}{\|P_{XY}(v)\|}\right) & \text{if } y_v \geq 0 \\ -\arccos\left(\dfrac{x_v}{\|P_{XY}(v)\|}\right) & \text{if } y_v < 0 \end{cases}    (3)

where x_v and y_v are the x and y components of the considered vector v.
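A small sketch of Equation 3 follows; it assumes the neck and right hand joint positions are already expressed in the robot coordinate system (the frame transformation itself is outside the scope of the snippet).

```python
import numpy as np

def axis1_angle(neck: np.ndarray, r_hand: np.ndarray) -> float:
    """Axis1 rotation alpha (Equation 3) from the XY-plane projection of the
    vector v(neck, r_hand). Inputs are 3D positions in the robot frame."""
    v = r_hand - neck
    x_v, y_v = v[0], v[1]
    proj_norm = np.hypot(x_v, y_v)                       # ||P_XY(v)||
    alpha = np.arccos(np.clip(x_v / proj_norm, -1.0, 1.0))
    return alpha if y_v >= 0 else -alpha
```

Note that the two branches of Equation 3 together are equivalent to atan2(y_v, x_v).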

The rotation β around Axis2 refers to the motion of the demonstrator relative to his initial position. In order to be robust to sensor noise, we considered the position of the segment w(l_shoulder, r_shoulder) going from the left shoulder skeleton joint to the right shoulder skeleton joint; in this way the orientation of the vector w can also easily be included in the formula (Equation 4).

\beta = \frac{\pi}{2} - \arccos\left(\frac{P_X\left(M_R^{-1} \cdot d_w\right)}{l_{Axis2}}\right)    (4)

M_R = \begin{pmatrix} y_w/\|w\| & x_w/\|w\| & 0 \\ -x_w/\|w\| & y_w/\|w\| & 0 \\ 0 & 0 & 1 \end{pmatrix}    (5)

where
• M_R is the rotation matrix between the initial and current w segment (Equation 5);
• d_w is the displacement between the initial and current position of the w mid point p_m ∈ w;
• P_X denotes the projection onto the X axis;
• l_Axis2 is the length of the robot part involved in the motion (namely Axis2 – Axis1).

Fig. 2. Comau Smart5 SiX operating area (red line). The overall dimensions are also reported.
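The sketch below mirrors Equations 4 and 5; d_w is interpreted here as the displacement vector of the shoulder-segment midpoint, and all joint positions are assumed to be already expressed in the robot frame.

```python
import numpy as np

def axis2_angle(l_sh0, r_sh0, l_sh, r_sh, l_axis2: float) -> float:
    """Axis2 rotation beta (Equations 4-5) from the displacement of the
    shoulder segment w(l_shoulder, r_shoulder) with respect to its initial
    position. Joint positions are 3D numpy arrays in the robot frame
    (assumption); l_axis2 is the length of the robot part involved."""
    w = r_sh - l_sh                                      # current shoulder segment
    x_w, y_w = w[0] / np.linalg.norm(w), w[1] / np.linalg.norm(w)
    m_r = np.array([[ y_w, x_w, 0.0],
                    [-x_w, y_w, 0.0],
                    [ 0.0, 0.0, 1.0]])                   # Equation (5)
    d_w = 0.5 * (l_sh + r_sh) - 0.5 * (l_sh0 + r_sh0)    # midpoint displacement
    arg = (np.linalg.inv(m_r) @ d_w)[0] / l_axis2        # P_X(M_R^-1 d_w) / l_Axis2
    return np.pi / 2.0 - np.arccos(np.clip(arg, -1.0, 1.0))
```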


Fig. 3. The Comau Smart5 SiX robot represented through URDF links and joints (a), and simulated in Gazebo (b).

The Axis3 rotation γ essentially follows the human arm and depends on both v and w. Equation 6 explains how to calculate the desired angle γ starting from the skeleton joints.

\gamma = \begin{cases} \beta - \frac{\pi}{2} - \arccos\left(\dfrac{\sqrt{x_v^2 + y_v^2}}{\|v\|}\right) & \text{if } z_v \geq 0 \\ \beta - \frac{\pi}{2} + \arccos\left(\dfrac{\sqrt{x_v^2 + y_v^2}}{\|v\|}\right) & \text{if } z_v < 0 \end{cases}    (6)

where the notation is the same used in the previous equations.
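Equation 6 can be sketched in the same style, reusing the arm vector v and the previously computed β:

```python
import numpy as np

def axis3_angle(beta: float, neck: np.ndarray, r_hand: np.ndarray) -> float:
    """Axis3 rotation gamma (Equation 6): beta combined with the elevation of
    the arm vector v(neck, r_hand) with respect to the XY plane."""
    v = r_hand - neck
    elevation = np.arccos(np.clip(np.hypot(v[0], v[1]) / np.linalg.norm(v), -1.0, 1.0))
    base = beta - np.pi / 2.0
    return base - elevation if v[2] >= 0 else base + elevation
```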

III. THE LEARNING SYSTEM

In order to test the motion retargeting system in a real Robot Learning from Demonstration environment, a framework based on a paper by Calinon et al. [4] was set up. In his work, Calinon compared several techniques to evaluate the imitation performance; our aim is different: the imitation performance is connected to the motion retargeting system, so we used only one framework configuration, the best performing one. The demonstrated trajectories were encoded in a Gaussian Mixture Model (GMM) and Gaussian Mixture Regression (GMR) was applied to retrieve a generalized trajectory. In our scenario, no reduction of dimensionality is needed; as explained in Section II, there is no redundancy in the human motion extracted to calculate the 3 robot angles. And even if a reduced dimensionality could be achieved for some particular tasks, the number of joints is pretty low and a better retargeting evaluation is feasible retaining all of them. Figure 4 shows an overview of the complete framework.

Fig. 4. Overview of the Robot Learning from Demonstration framework.

A. Gaussian Mixture Model

Gaussian Mixture Model is a probabilistic way to encode a set of N data, in this work demonstrations. The aim is to obtain the best sum of K Gaussian components which approximates the input dataset ξ = {ξ_j}_{j=1,...,N}, ξ_j ∈ R^D. Each demonstration consists of a sequence of robot joint angles ξ_s ∈ R^{D−1} recorded at a specific temporal instant ξ_t ∈ R. The resulting probability density function is described in Equation 7.

p(\xi_j) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(\xi_j; \mu_k, \Sigma_k)    (7)

where
• π_k are the prior probabilities;
• \mathcal{N}(\xi_j; \mu_k, \Sigma_k) are Gaussian distributions defined by μ_k and Σ_k, respectively the mean vector and covariance matrix of the k-th distribution.

The Expectation-Maximization (EM) [7] algorithm is iteratively used to estimate the optimal mixture parameters (π_k, μ_k, Σ_k). The Bayesian Information Criterion (BIC) [22] was used to estimate the optimal number of Gaussian components K in the mixture. The score assigned to each considered K is described in Equation 8

S_{BIC} = -\mathcal{L} + \frac{n_p}{2} \log N    (8)

where
• \mathcal{L} = \sum_{j=1}^{N} \log(p(\xi_j)) is the model log-likelihood;
• n_p = (K-1) + K\left(D + \frac{1}{2} D(D+1)\right) is the number of free parameters required for a mixture of K components with full covariance matrices.

The log-likelihood measures how well the model fits the data, while the second term aims to avoid data overfitting, keeping the model general.
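A compact way to reproduce this model-selection step is sketched below with scikit-learn, whose EM fitting and BIC score play the roles of Equations 7 and 8 (its BIC convention differs from Equation 8 by constant factors but leads to the same kind of selection); the input matrix ξ is assumed to stack time in the first column and the three joint angles in the others.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm_with_bic(xi: np.ndarray, k_max: int = 10) -> GaussianMixture:
    """Fit full-covariance GMMs with EM for K = 1..k_max on the demonstration
    data xi (N x D, time first, joint angles after) and keep the model with
    the lowest Bayesian Information Criterion."""
    best_model, best_bic = None, np.inf
    for k in range(1, k_max + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              n_init=5, random_state=0).fit(xi)
        bic = gmm.bic(xi)
        if bic < best_bic:
            best_model, best_bic = gmm, bic
    return best_model
```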

B. Gaussian Mixture Regression

The aim of Gaussian Mixture Regression is to retrieve a smooth generalized version of the signal encoded in the associated Gaussian Mixture Model. In this work, we calculated the conditional expectation of the robot joint angles ξ_s, starting from the consecutive temporal values ξ_t known a priori. As already stated, a Gaussian component k is defined by the parameters (π_k, μ_k, Σ_k), with:

\mu_k = \{\mu_{t,k}, \mu_{s,k}\}, \qquad \Sigma_k = \begin{pmatrix} \Sigma_{t,k} & \Sigma_{ts,k} \\ \Sigma_{st,k} & \Sigma_{s,k} \end{pmatrix}    (9)

The conditional expectation and its covariance can be estimated respectively using Equations 10 and 11.

\hat{\xi}_s = E[\xi_s \,|\, \xi_t] = \sum_{k=1}^{K} \beta_k \, \hat{\xi}_{s,k}    (10)

\hat{\Sigma}_s = \mathrm{Cov}[\xi_s \,|\, \xi_t] = \sum_{k=1}^{K} \beta_k^2 \, \hat{\Sigma}_{s,k}    (11)

where
• \beta_k = \dfrac{\pi_k \mathcal{N}(\xi_t \,|\, \mu_{t,k}, \Sigma_{t,k})}{\sum_{j=1}^{K} \pi_j \mathcal{N}(\xi_t \,|\, \mu_{t,j}, \Sigma_{t,j})} is the weight of the k-th Gaussian component within the mixture;
• \hat{\xi}_{s,k} = E[\xi_{s,k} \,|\, \xi_t] = \mu_{s,k} + \Sigma_{st,k} (\Sigma_{t,k})^{-1} (\xi_t - \mu_{t,k}) is the conditional expectation of ξ_{s,k} given ξ_t;
• \hat{\Sigma}_{s,k} = \mathrm{Cov}[\xi_{s,k} \,|\, \xi_t] = \Sigma_{s,k} - \Sigma_{st,k} (\Sigma_{t,k})^{-1} \Sigma_{ts,k} is the conditional covariance of ξ_{s,k} given ξ_t.

Thus, the generalized form of the motion \hat{\xi} = \{\xi_t, \hat{\xi}_s\} is generated by keeping in memory only the means and covariance matrices of the Gaussian components calculated through the GMM.
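The following sketch of GMR implements Equations 10 and 11 on top of a model fitted as above (weights_, means_ and covariances_ follow scikit-learn's GaussianMixture layout, with time as the first dimension); it is an illustrative implementation, not the exact code used for the experiments.

```python
import numpy as np
from scipy.stats import norm

def gmr(gmm, t_query: np.ndarray):
    """Condition the fitted GMM on the query times t_query and return the
    expected joint angles (Equation 10) and covariances (Equation 11)."""
    pis, mus, sigmas = gmm.weights_, gmm.means_, gmm.covariances_
    K, D = mus.shape
    xs_hat = np.zeros((len(t_query), D - 1))
    cov_hat = np.zeros((len(t_query), D - 1, D - 1))
    for i, t in enumerate(t_query):
        # beta_k: responsibility of each component for this time instant
        lik = np.array([pis[k] * norm.pdf(t, mus[k, 0], np.sqrt(sigmas[k, 0, 0]))
                        for k in range(K)])
        beta = lik / lik.sum()
        for k in range(K):
            mu_t, mu_s = mus[k, 0], mus[k, 1:]
            s_tt = sigmas[k, 0, 0]
            s_st, s_ts = sigmas[k, 1:, 0], sigmas[k, 0, 1:]
            s_ss = sigmas[k, 1:, 1:]
            xs_k = mu_s + s_st / s_tt * (t - mu_t)        # conditional mean
            cov_k = s_ss - np.outer(s_st, s_ts) / s_tt    # conditional covariance
            xs_hat[i] += beta[k] * xs_k
            cov_hat[i] += beta[k] ** 2 * cov_k
    return xs_hat, cov_hat
```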

IV. EXPERIMENTAL RESULTS

The experiments consist of two different scenarios.

In the first experiment the robot learns how to reach an object at a specific position within a set of several similar items stacked close to each other. The scenario (Figure 5) is composed of three stacks of boxes placed in front of the demonstrator, who pushes a specific box. Three demonstrators repeated the task reaching three different boxes, 15 times per box. The first selected box is the one on the top of the central stack, the second box is the one on the bottom of the right stack, and the third box is the one in the middle of the left stack. The robot has to reproduce the three subtasks, reaching the right object each time.

Fig. 5. Overview of the first experimental scenario in the simulated world. The three subtasks performed by the demonstrators are numbered in the corresponding order.

In the second scenario the demonstrators are asked to move a box along a linear path from the beginning to the end of a small table (Figure 6); the covered distance is 45 cm. As in the previous experiment, three demonstrators repeated the task 15 times. The robot should be able to correctly reproduce the task shown by the actors in a similar simulated environment (Figure 7). The virtual box is equivalent to the real one except for the color; the table, instead, is larger in order to avoid singularities in the simulator when the box reaches the end of the table.

The experiments were performed on the Comau Smart5 SiX simulated in Gazebo. No experiment has actually been performed on the real robot yet, but we are working on testing our system also in the real world. The demonstrations were manually segmented and, subsequently, resampled using a spline interpolation technique to T = 1750 data points. All the collected examples are properly re-targeted to the robot motion system, but only 107 out of 180 led to a qualitatively correct task once performed by the robot.
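The resampling step mentioned above can be sketched as follows; it assumes each segmented demonstration is an array of joint angles sampled at 30 fps, and uses a cubic spline as one possible choice of spline interpolation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def resample_demonstration(joint_angles: np.ndarray, T: int = 1750) -> np.ndarray:
    """Resample one segmented demonstration (n_frames x 3 joint angles) to T
    equally spaced data points so that all demonstrations share the same
    temporal length before GMM training."""
    n_frames = joint_angles.shape[0]
    t_old = np.linspace(0.0, 1.0, n_frames)
    t_new = np.linspace(0.0, 1.0, T)
    return CubicSpline(t_old, joint_angles, axis=0)(t_new)
```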


Fig. 6. An actor performing the movement of the box requested in the second task.

In the first task the wrong box was pushed, while in the second one the box was not properly moved.

Fig. 7. Overview of the second simulated scenario.

This experiment aims to test the precision and the repeatability of each task using the data collected through our motion re-targeting system. These features are very important to achieve the high performance required in an industrial environment.

We tested the repeatability by reproducing the first task 20 times for each selected box, while the second task was simulated 25 times. Overall, the robot was able to correctly reproduce both tasks, for a total of 85 reproductions. From the first experiment we do not have an accurate measure of the precision, thus we used the covered distance in the second scenario to test this aspect. The data collected from the 25 attempts vary from 44.286 to 46.542 cm, giving a resulting mean of 45.011 ± 0.402 cm. This is a good simulation result; in fact the variability is mainly due to two factors: the initial conditions in the simulation environment and the sensor noise. Such precision can be sufficient when working with a big gripper and large plates, but it is not as good as expected for a soldering task.

The results (Figure 8) show that using our GMM/GMR framework the tasks are well learned with a low number of demonstrations (11 examples in the worst case). We tested single-actor and multi-actor learning: even if the tasks can be correctly performed with both methods, the single-actor approach needs fewer examples than the multi-actor one. The path generated by the framework avoids rapid accelerations between timesteps and large oscillations in robot velocity. The resulting trend is very smooth and it is easy to notice that the sensor noise is removed; in fact the vertical straight-line peaks present in the blue demonstrations are completely absent in the final red path.

In this work, we did not look at velocity, acceleration or force of the robot during the performance. This is a direct consequence of using positions to control the robot. We chose this modality on purpose, to better focus on precision and repeatability, even at the cost of omitting other important parameters.

V. CONCLUSIONS

In this paper, an RLfD framework was developed in order to test a novel motion re-targeting technique mapping the whole human body to a manipulator. Information extracted from natural human motion was reprojected to the robot joint space and used to train the GMM/GMR system at the base of the learning framework. The new trajectories generated by the regression were tested on a virtual model of the Comau Smart5 SiX, presented for the first time in this work. Results highlight that it is feasible to control an industrial robot using natural demonstrations.

As future work we will test the motion re-targeting system in a more challenging scenario. We also plan to control the manipulator in velocity and acceleration as well. In order to improve the precision of the robot, a computer vision system can be integrated in the loop to identify objects and guide the end effector to the right position.

REFERENCES

[1] A. Billard, S. Calinon, R. Dillmann, and S. Schaal. Robot programming by demonstration. In B. Siciliano and O. Khatib, editors, Handbook of Robotics, pages 1371–1394. Springer, Secaucus, NJ, USA, 2008.

[2] B. D. Argall, S. Chernova, M. Veloso, and B. Browning. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5):469–483, 2009.

[3] S. Calinon and A. Billard. Stochastic gesture production and recognition model for a humanoid robot. In Intelligent Robots and Systems, 2004 (IROS 2004). Proceedings. 2004 IEEE/RSJ International Conference on, volume 3, pages 2769–2774. IEEE, 2004.

[4] S. Calinon, F. Guenter, and A. Billard. On learning, representing, and generalizing a task in a humanoid robot. Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, 37(2):286–298, 2007.

[5] S. Calinon and A. Billard. A probabilistic programming by demonstration framework handling constraints in joint space and task space. In Intelligent Robots and Systems, 2008. IROS 2008. IEEE/RSJ International Conference on, pages 367–372. IEEE, 2008.

[6] B. Dariush, M. Gienger, A. Arumbakkam, Y. Zhu, B. Jian, K. Fujimura, and C. Goerick. Online transfer of human motion to humanoids. International Journal of Humanoid Robotics, 6(02):265–289, 2009.


Fig. 8. Results obtained in the first task using a GMM trained with 11 examples (blue) to calculate the regression (red) through GMR for Axis1 (a), Axis2 (b), Axis3 (c). The vertical straight-line peaks are outliers due to the sensor noise. Using the GMM/GMR model the robot avoids rapid accelerations between timesteps and large oscillations in its velocity.

[7] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. Series B (Methodological), pages 1–38, 1977.

[8] R. Dillmann. Teaching and learning of robot tasks via observation of human performance. Robotics and Autonomous Systems, 47(2):109–116, 2004.

[9] G. Du, P. Zhang, J. Mai, and Z. Li. Markerless kinect-based hand tracking for robot teleoperation. International Journal of Advanced Robotic Systems, 9, 2012.

[10] M. Freese, S. Singh, F. Ozaki, and N. Matsuhira. Virtual robot experimentation platform V-REP: a versatile 3D robot simulator. In Proceedings of the Second International Conference on Simulation, Modeling, and Programming for Autonomous Robots, SIMPAR'10, pages 51–62, Berlin, Heidelberg, 2010. Springer-Verlag.

[11] M. Gleicher. Retargetting motion to new characters. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, pages 33–42. ACM, 1998.

[12] R. Gopalan and B. Dariush. Toward a vision based hand gesture interface for robotic grasping. In Intelligent Robots and Systems, 2009. IROS 2009. IEEE/RSJ International Conference on, pages 1452–1459. IEEE, 2009.

[13] J. Han, L. Shao, D. Xu, and J. Shotton. Enhanced computer vision with Microsoft Kinect sensor: A review. IEEE Transactions on Cybernetics, 2013.

[14] M. Hersch, F. Guenter, S. Calinon, and A. Billard. Dynamical system modulation for robot learning via kinesthetic demonstrations. Robotics, IEEE Transactions on, 24(6):1463–1467, 2008.

[15] A. Kar. Skeletal tracking using Microsoft Kinect. Methodology, 1:1–11, 2010.

[16] N. Koenig and A. Howard. Design and use paradigms for Gazebo, an open-source multi-robot simulator. In Intelligent Robots and Systems, 2004 (IROS 2004). Proceedings. 2004 IEEE/RSJ International Conference on, volume 3, pages 2149–2154, 2004.

[17] J. Lee, J. Chai, P. S. Reitsma, J. K. Hodgins, and N. S. Pollard. Interactive control of avatars animated with human motion data. In ACM Transactions on Graphics (TOG), volume 21, pages 491–500. ACM, 2002.

[18] M. M. Marinho, A. A. Geraldes, A. P. Bo, and G. A. Borges. Manipulator control based on the dual quaternion framework for intuitive teleoperation using Kinect. In Robotics Symposium and Latin American Robotics Symposium (SBR-LARS), 2012 Brazilian, pages 319–324. IEEE, 2012.

[19] K. Miura, M. Morisawa, S. Nakaoka, F. Kanehiro, K. Harada, K. Kaneko, and S. Kajita. Robot motion remix based on motion capture data towards human-like locomotion of humanoid robots. In Humanoid Robots, 2009. Humanoids 2009. 9th IEEE-RAS International Conference on, pages 596–603. IEEE, 2009.

[20] M. Munaro, G. Ballin, S. Michieletto, and E. Menegatti. 3D flow estimation for human action recognition from colored point clouds. Journal on Biologically Inspired Cognitive Architectures, 2013.

[21] N. S. Pollard, J. K. Hodgins, M. J. Riley, and C. G. Atkeson. Adapting human motion for the control of a humanoid robot. In Robotics and Automation, 2002. Proceedings. ICRA'02. IEEE International Conference on, volume 2, pages 1390–1397. IEEE, 2002.

[22] G. Schwarz. Estimating the dimension of a model. The Annals of Statistics, 6(2):461–464, 1978.

[23] Y. Seol, C. O'Sullivan, and J. Lee. Creature features: Online motion puppetry for non-human characters. In Proceedings of the 2013 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2013.

[24] J. Shotton, T. Sharp, A. Kipman, A. Fitzgibbon, M. Finocchio, A. Blake, M. Cook, and R. Moore. Real-time human pose recognition in parts from single depth images. Communications of the ACM, 56(1):116–124, 2013.

[25] A. Staranowicz and G. L. Mariottini. A survey and comparison of commercial and open-source robotic simulator software. In Proceedings of the 4th International Conference on PErvasive Technologies Related to Assistive Environments, PETRA '11, pages 56:1–56:8, New York, NY, USA, 2011. ACM.

[26] K. Yamane, Y. Ariki, and J. Hodgins. Animating non-humanoid characters with human motion data. In Proceedings of the 2010 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '10, pages 169–178, Aire-la-Ville, Switzerland, 2010. Eurographics Association.

[27] C. Zito, M. Kopicki, R. Stolkin, C. Borst, F. Schmidt, M. Roa, and J. Wyatt. Sequential re-planning for dextrous grasping under object-pose uncertainty. In RSS 2013 Workshop on Manipulation with Uncertain Models, Berlin, Germany, 2013.