ICT Call 9 RECONFIG FP7-ICT-600825 Deliverable D5.6: Multi-agent demonstrator visualizing heterogeneity June 6, 2016


Source: vision.mas.ecp.fr/Personnel/iasonas/reconfig/userfiles/downloads/D5.6.pdf (CentraleSupelec)

ICT Call 9 RECONFIG

FP7-ICT-600825

Deliverable D5.6:

Multi-agent demonstrator visualizing heterogeneity

June 6, 2016


D5.6 FP7-ICT-600825 RECONFIG June 6, 2016

Project acronym: RECONFIG
Project full title: Cognitive, Decentralized Coordination of Heterogeneous Multi-Robot Systems via Reconfigurable Task Planning
Work Package: WP5
Document number: D5.6
Document title: Multi-agent demonstrator visualizing heterogeneity
Version: 0.1
Delivery date: June 06, 2016
Nature: Report
Dissemination level: Public

Authors: Michele Colledanchise (KTH), Alejandro Marzinotto (KTH), Petter Ogren (KTH), Meng Guo (KTH), Jana Tumova (KTH), Dimos Dimarogonas (KTH), Anastasios Tsiamis (NTUA), George C. Karras (NTUA), Charalampos P. Bechlioulis (NTUA), Kostas J. Kyriakopoulos (NTUA), Kondaxakis Polychronis (Aalto), Khurram Gulzar (Aalto), Ville Kyrki (Aalto), Iasonas Kokkinos (ECP), Stefan Kinauer (ECP)

The research leading to these results has received funding from the European Union Seventh Framework Programme FP7/2007-2013 under grant agreement no. 600825 RECONFIG.


Chapter 1: Introduction

This document includes the experiment results of the multi-agent demonstrator that visualizes heterogeneity.

Chapter 2 describes the theoretical background from [3] and [4], which is also included in Deliverable 4.1. Then we present the experiment setup of the distributed plan reconfiguration scheme via knowledge transfer for multi-agent systems. The agents are assumed to have different capabilities. The proposed design is distributed, as only local interactions are assumed.

Chapter 3 shows the experiment results. In particular, compared with Deliverable 5.5, which contains the scenario that visualizes real-time adaptability, this scenario features heterogeneity with the NAO humanoid robot and two KUKA youBots. Moreover, real-time adaptability and collaboration between these heterogeneous robots are also demonstrated: knowledge about the workspace is transferred between the youBots and the NAO via implicit gesture recognition. Based on the knowledge update, the youBots then revise their local plans accordingly to satisfy their local tasks.

In particular, we first emphasize the heterogeneity between the youBot and the NAO, which is embodied in the following aspects:

• The onboard vision system of NAO allows it to easily recognize the object of interest,estimate its location and perform the pointing action in a natural and accurate way.

• In contrast, the youBot has more accurate motion and actuation abilities, e.g., to navigate to a goal point via a reference trajectory and to grasp the object of interest with high success rate.

• The gesture recognition algorithms developed for the NAO robot can be extended easily toother humanoid robots with a change of scale.

Lastly, the videos mentioned in this deliverable have been uploaded to the RECONFIG YouTube channel [7] and are summarized below:


Extension  Media       Content Description
1          Video [8]   Experiment Video of Section 3.
2          Video [9]   Experiment Video of Section 4.1.
3          Video [10]  Experiment Video of Section 4.2.

Table 1.1: Index to Multimedia Extensions


Chapter 2: System Model and Experiment Setup

In this chapter, we first present the system model and main objective in Section 2.1 and then we introduce the experiment setup in Section 2.2. In Section 2.3, we describe the scenario that will be demonstrated.

2.1 System Model and Objective

For the general system model, we consider a team of heterogeneous agents k ∈ K = {1, 2, · · · , K}, with unique identities (IDs) k. They move within a workspace Π which consists of N regions with unique and pre-defined labels πn, n = 1, 2, · · · , N. Denote by APk = {a1, a2, · · · , a|APk|} the set of atomic propositions (APs) known to agent k. Its task specification ϕk is an LTL formula over APk. We assume that ϕk remains unchanged for all agents once the system starts.
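For readers who prefer code to notation, the agent model above can be sketched as a small data structure. The following Python fragment is purely illustrative; the names, the Spin-style text encoding of the LTL formula, and the example values are ours, not part of the project software.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Sketch of agent k: its ID, known atomic propositions, and LTL task."""
    k: int                 # unique identity (ID)
    ap: set                # AP_k, the set of APs known to agent k
    phi: str               # task specification, an LTL formula over AP_k
    labels: dict = field(default_factory=dict)   # partial workspace knowledge

# Workspace Pi with N = 6 pre-defined region labels:
regions = [f"r{n}" for n in range(1, 7)]

# A surveillance agent in Spin-style LTL syntax ([] = always, <> = eventually):
agent_G = Agent(k=1, ap={"r1", "r2"}, phi="[]<>r1 && []<>r2")
```

The empty `labels` dictionary reflects the assumption below that each agent starts with only partial knowledge of the workspace.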

Denote by ϕk|APk the set of APs appearing in ϕk. In particular, we consider task specifications ϕk with the following structure:

ϕk = ϕk^soft ∧ ϕk^hard,

where ϕk^soft and ϕk^hard are the “soft” and “hard” sub-formulas over APk. ϕk^hard could include safety constraints like collision avoidance (“avoid all obstacles”) or an energy-supply guarantee (“visit the charging station infinitely often”). We introduce soft and hard specifications because the partially-known workspace might initially render parts of the specification infeasible, which then need to be relaxed, while the safety-critical parts must not be relaxed during the process. Moreover, the motion of agent k in the workspace is described by a deterministic finite-transition system (FTS):

Tk = (Π, →k, Π0,k, APk, Lk, Wk),

where (i) →k ⊆ Π × Π is the transition relation; (ii) Π0,k ⊆ Π is the set of initial regions where agent k may start; (iii) Lk : Π → 2^APk is the labelling function that represents agent k’s knowledge, i.e., Lk(πi) is the set of APs satisfied by πi; (iv) Wk : →k → R+ is the cost of each transition. We assume that each agent has only partial knowledge of the workspace initially, so Tk might be updated afterwards. Denote by Tk^t the FTS of agent k at time t. Note that APk might differ among the agents due to heterogeneity.

The objective of this experiment is to demonstrate the effectiveness of the proposed framework in handling heterogeneity in a partially-known workspace. The proposed solution for


distributed plan reconfiguration via knowledge transfer consists of the following three major steps:

• Initial Optimal Plan Synthesis. When the system starts, each agent synthesizes its initial task and motion plan based on its initial knowledge and task specification. The algorithm provided in [4] guarantees that the initial plan satisfies the hard task specification while satisfying the soft specification as much as possible.

• Knowledge Update and Transfer. The agents have both the sensing ability to discover the workspace and the communication functionality to share knowledge with their neighboring agents. In this step, the agents share their individual knowledge about the workspace via a subscriber-publisher communication scheme, whose communication payload is significantly reduced compared with a fully synchronized solution.

• Real-time Plan Reconfiguration. Given the received new knowledge, the agents may re-evaluate their plans regarding validity, safety, and optimality. The algorithms for updating the plan are given in [4]; they guarantee that the hard task specification is always satisfied and that the fulfillment of the soft specification is gradually improved.
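As a rough illustration, the three steps can be wired into an event-driven loop. In the Python sketch below, `synthesize_plan` is a hypothetical placeholder for the synthesis and revision algorithms of [4], and the dictionary-based workspace model is our own simplification:

```python
# Hedged sketch of the three-step scheme; not the project's actual software.

def synthesize_plan(knowledge, spec):
    """Steps 1 and 3 placeholder: (re-)synthesize a plan from current knowledge.
    Here we only record what the plan was built from."""
    return {"built_from": dict(knowledge), "spec": spec}

def agent_loop(knowledge, spec, inbox, publish):
    plan = synthesize_plan(knowledge, spec)          # Step 1: initial plan
    for region, aps in inbox:                        # Step 2: incoming facts
        if knowledge.get(region) != aps:             # forward only *new* facts,
            knowledge[region] = aps                  # keeping the payload small
            publish((region, aps))
            plan = synthesize_plan(knowledge, spec)  # Step 3: reconfigure
    return plan

sent = []
knowledge = {"r4": set(), "r5": {"r5"}}
plan = agent_loop(knowledge, "phi_A", inbox=[("r4", {"object"})],
                  publish=sent.append)
```

The guard before publishing mirrors the payload-reduction point above: unchanged facts are neither re-broadcast nor do they trigger re-synthesis.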

2.2 Experiment Setup

The experiment setup involves two youBots, one NAO humanoid, a localization system, and one object of interest.

Each youBot is equipped with two arms, capable of picking up and dropping the object of interest (as shown in Figure 2.1). The motion control of the youBot relies on a nonlinear feedback control scheme that navigates the youBot from any initial position to any goal position in the workspace. The controller takes as input the real-time position of the robot within a pre-defined and fixed coordinate frame. A collision-avoidance feature has been added to the control scheme used in Deliverable 5.5, which is explained in detail in Section 3.4.

The NAO robot, as shown in Figure 2.2, is an autonomous, programmable humanoid robot with various motion primitives implemented, such as “pick” to pick up a ball, “throw” to throw a ball in hand, “crouch” to crouch, and “wave” to wave one hand. These primitives are coordinated and implemented using the Behavior Tree motion control scheme [6]. In this demo, the NAO is used as the gesture performer to transfer the identity of the object of interest in the workspace.
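A Behavior Tree composes such primitives from simple control-flow nodes. The minimal Python sketch below shows only a Sequence node ticking three hypothetical leaf actions; it is a drastic simplification of the scheme in [6], with invented action names:

```python
SUCCESS, FAILURE, RUNNING = "SUCCESS", "FAILURE", "RUNNING"

class Sequence:
    """Tick children left to right; stop at the first child that is not SUCCESS."""
    def __init__(self, *children):
        self.children = children

    def tick(self):
        for child in self.children:
            status = child()
            if status != SUCCESS:
                return status
        return SUCCESS

log = []
def action(name, status=SUCCESS):
    """A leaf node that records its execution and returns a fixed status."""
    def tick():
        log.append(name)
        return status
    return tick

# Hypothetical decomposition of a pointing behavior into primitives:
point_at_object = Sequence(
    action("locate_object"),
    action("raise_arm"),
    action("hold_pointing_pose"),
)
result = point_at_object.tick()
```

A failing or still-running primitive short-circuits the sequence, which is what makes Behavior Trees reactive compared with a fixed script.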

The customized localization system we developed is based on the ARToolKit application, which uses a single camera to track markers in the shape of black squares. It offers fast and reliable 3-D position data of all markers in the field of view. By putting markers on the youBot platforms and the camera on the ceiling, we can retrieve the real-time position data of both the youBots and the object of interest with respect to one reference marker, which is used as the origin.
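Expressing a marker's camera-frame pose relative to the reference marker amounts to composing the inverse of the reference marker's pose with the target marker's pose. The sketch below does this in 2-D with plain Python; the real system works in 3-D with ARToolKit's pose estimates, and all numbers here are illustrative:

```python
import math

def to_matrix(x, y, th):
    """Homogeneous 2-D rigid transform for a pose (x, y, theta)."""
    c, s = math.cos(th), math.sin(th)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def invert(T):
    """Inverse of a rigid transform: transpose the rotation, re-express the translation."""
    (c, ms, x), (s, _, y), _ = T
    return [[c, s, -(c * x + s * y)], [ms, c, -(ms * x + c * y)], [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def relative_pose(ref_pose, marker_pose):
    """Pose of `marker_pose` expressed in the frame of `ref_pose`
    (both given in camera coordinates)."""
    T = matmul(invert(to_matrix(*ref_pose)), to_matrix(*marker_pose))
    return T[0][2], T[1][2], math.atan2(T[1][0], T[0][0])

# Reference marker at (1, 1); a youBot marker 2 units away along the camera x-axis:
x, y, th = relative_pose((1.0, 1.0, 0.0), (3.0, 1.0, 0.0))
```

Because every pose is reported relative to the same reference marker, the camera's own placement drops out of the coordinates the planners consume.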

In the workspace, there are predefined regions of interest for each youBot. In particular, the youBot with marker “G” (the one in the front in Figure 2.1) has three regions of interest: one in the middle (r3) and two on the left side of the workspace (r1, r2), while the youBot with marker “A” (the one in the back in Figure 2.1) has three regions of interest: one in the middle (r4) and two on the right side of the workspace (r5, r6). The object is of interest to youBot “A” and is always put in one of its regions described above. The NAO robot with marker “C” stays around region r3, without prior knowledge of where the object of interest might be.

The local task for each agent is assigned independently as follows. YouBot “G” is required to surveil its two regions of interest on the left side; this can be expressed as the Linear Temporal Logic (LTL) formula ϕG = □◇r1 ∧ □◇r2. YouBot “A” is required to go to region r4, pick up the object of interest, transport it to region r5, and then surveil regions r5 and r6; this can be expressed as the LTL formula ϕA = ◇((r4 ∧ ◇pick) ∧ ◇(r5 ∧ ◇drop)) ∧ (□◇r5 ∧ □◇r6). On


the other hand, the NAO robot “C” is required to surveil region r3 and perform the pointing action whenever the object of interest is placed at region r4, i.e., ϕC = (◇□r3) ∧ ◇(r4 ∧ (◇point)).
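Plans for such recurrent (“always eventually”) goals are typically represented as a finite prefix followed by a suffix cycle that repeats forever; a □◇ goal then holds exactly when the goal appears in the repeated suffix. The check below is our own minimal illustration of that idea, not the project's planner:

```python
def satisfies_surveillance(prefix, suffix_cycle, goals):
    """True iff every goal g (read: []<>g) occurs in the infinitely repeated
    suffix. The prefix only matters for non-recurrent goals, so it is unused."""
    return all(g in suffix_cycle for g in goals)

# youBot "G": cycle r1 -> r2 -> r1 -> ... satisfies []<>r1 and []<>r2.
ok = satisfies_surveillance(prefix=[], suffix_cycle=["r1", "r2"],
                            goals={"r1", "r2"})

# youBot "A" before the object appears: surveilling r5, r6 cannot realize "pick",
# which is why only a maximal-satisfying plan exists at that stage.
bad = satisfies_surveillance(prefix=["r4"], suffix_cycle=["r5", "r6"],
                             goals={"r5", "r6", "pick"})
```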

2.3 Scenario Description

We now describe the experiment scenario, which features real-time knowledge transfer and plan adaptation. In particular, when the system starts, there is no object of interest in region r3. Thus youBot “G” simply surveils regions r1 and r2 as specified in its task; youBot “A” visits region r4 and surveils regions r5 and r6, as shown in Figure 2.1; and the NAO robot stays at region r3. Since there is no object of interest in region r4, youBot “A” cannot perform the pick-up action there.

After the system has been running for around one minute, the object of interest is added manually, as shown in Figure 2.2. Moreover, this object is located closer to youBot “G”, meaning that youBot “G” detects this object earlier. This information is shared with the NAO robot, which has the ability to perform the pointing gesture. After initial synchronization with youBot “A”, the NAO points at the object while youBot “A” observes the gesture and estimates the location of the object relative to its own location. Afterwards, youBot “A” incorporates this new knowledge into its workspace model and revises its plan, i.e., it navigates to region r3 and picks up the object of interest.

Figure 2.1: Two youBots and one NAO execute the initial plan.

Figure 2.2: The object of interest is added manually at region r4 later in the experiment.


Chapter 3: Experiment Results

In this chapter, we present the actual experiment results: (i) the stage before the object of interest is placed, in Section 3.1. Since the local task of youBot “A” is not yet feasible at this stage, its maximal-satisfying plan is used. (ii) The knowledge transfer via gesture after the object is placed, in Section 3.2. The new knowledge regarding the object of interest is passed from youBot “G” to the NAO, which then performs the gesture towards youBot “A” to convey the existence and location of the object of interest. (iii) The plan adaptation after the knowledge is transferred, in Section 3.3. YouBot “A” observes this gesture, updates its workspace model, and improves its plan by incorporating this update.

3.1 Prior to Object Placement

The experiment results prior to the object of interest being placed are shown in Figure 3.1. In particular, youBot “G” fulfills its task specification by visiting regions r1 and r2 repetitively, as shown in Figures 3.1a-3.1d. Since initially the object of interest is not at region r4, youBot “A” cannot perform the action “pick” there as required by its task specification. Instead, the least-violating (or maximal-satisfying) plan is synthesized as its initial plan, which is to surveil regions r5 and r6 by navigating to region r5 and then r6 repetitively, as shown in Figures 3.1a-3.1d. The NAO robot simply stays at region r3, as there is no object of interest at region r4. The discrete plan synthesis algorithm used for both the nominal plan and the maximal-satisfying plan is described in Section 2.1.

3.2 Knowledge Transfer via Implicit Gesture

After the system has been running for some time, a human operator puts the object of interest close to region r3, as shown in Figure 3.2a. As a result, youBot “G” discovers this object in its region of interest r3 and then sends this information (including the object identity and its region) directly to the NAO robot. Note, however, that the exact location of the object is not known and thus is not transferred to the NAO.

After the NAO robot receives this information, it incorporates it into its workspace model and, more importantly, updates its discrete plan in order to reduce the cost or improve the satisfiability of the current plan. The plan adaptation algorithm is described in Section 2.1. The derived plan requires that the NAO robot perform the pointing action at region r3, as shown in Figure 3.2b. The pointing action can be decomposed into three components: firstly, the object



Figure 3.1: Experiment results before the object placement as described in Section 3.1.

recognition module from Deliverables 1.1-1.3 is used to recognize the segmentation related to the object identity, as shown in Figure 3.2c; secondly, the object-pointing gesture planning and implicit identity sharing from Deliverables 2.1-2.3 are activated to transfer this knowledge to youBot “A”, as shown in Figure 3.2d; thirdly, youBot “A” observes the pointing gesture and anchors the object identity and position, as shown in Figures 3.2e-3.2f. A more detailed description of the implementation of the pointing gesture can be found in [11].
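One simple way to anchor a pointed-at object, and our own 2-D simplification of the method in [11], is to cast a ray from the NAO along the observed pointing direction and select the candidate region whose center lies closest to that ray; all names and coordinates below are illustrative:

```python
import math

def ray_distance(origin, theta, p):
    """Distance from point p to the ray starting at `origin` with heading `theta`."""
    dx, dy = p[0] - origin[0], p[1] - origin[1]
    t = max(0.0, dx * math.cos(theta) + dy * math.sin(theta))  # forward projection only
    cx = origin[0] + t * math.cos(theta)
    cy = origin[1] + t * math.sin(theta)
    return math.hypot(p[0] - cx, p[1] - cy)

def anchor_object(nao_pos, pointing_theta, region_centers):
    """Return the region whose center best explains the observed gesture."""
    return min(region_centers,
               key=lambda r: ray_distance(nao_pos, pointing_theta,
                                          region_centers[r]))

# Illustrative geometry: NAO at the origin pointing along +x; r4 lies that way.
centers = {"r4": (2.0, 0.0), "r5": (0.0, 2.0), "r6": (-2.0, 0.0)}
region = anchor_object((0.0, 0.0), 0.0, centers)
```

Snapping to a known region center, rather than trusting the raw ray intersection, keeps the anchoring robust to the gesture-estimation uncertainty noted in Section 3.3.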

3.3 Revised Plan Execution

After the knowledge transfer described in Section 3.2, youBot “A” revises its local plan in order to improve the satisfaction of its local task. In particular, since region r4 now satisfies the property that there is an object of interest, its task specification requires that it perform the action “pick” at region r4 and transport the object to region r5. Note that the identity and location of the object are transferred implicitly via gesture, which is subject to uncertainty and disturbance. Thus youBot “A” navigates to region r4 and performs the action “pick” (as shown in Figure 3.3a). The grasping action relies on real-time visual servoing and is capable of precise grasping. After a successful grasp, the object is transported to region r5 (as shown in Figure 3.3b). Afterwards, youBot “A” visits regions r5 and r6 repetitively (as shown in Figures 3.3c-3.3d). Note that youBot “G” keeps surveilling regions r1 and r2.

Afterwards, the system converges to a steady state as the workspace remains unchanged, i.e., the NAO stays at region r3; youBot “G” keeps surveilling regions r1 and r2; and youBot “A” surveils regions r5 and r6.


3.4 Collision Avoidance

It is worth mentioning that the navigation scheme of the youBot has been enhanced with a collision-avoidance scheme, compared with that in Deliverable 5.5. In particular, while youBot “G” is navigating between its regions of interest, it takes into account the position of the NAO robot and avoids it. It can be seen in Figures 3.4a-3.4b that after youBot “G” transfers the knowledge to the NAO robot, it avoids the NAO robot while moving from region r1 to r2. Details about the reactive navigation control scheme can be found in [12].



Figure 3.2: Implicit knowledge transfer via gesture as described in Section 3.2.



Figure 3.3: Revised plan execution of youBots “G” and “A” as described in Section 3.3.


Figure 3.4: Navigation control of youBot “G” with collision avoidance as described in Section 3.4.


Chapter 4: Heterogeneous Leader-Follower Scheme for Collaboration

This chapter describes the experimental procedure, which involves two or more agents that are nominally assigned local surveillance tasks within the workspace. In addition, some agents are responsible for contingent transportation tasks of specific objects, which may occur during runtime. These transportation tasks require collaboration with another agent. Since the agents are able to communicate only within a specific range, some of the agents are assigned surveillance tasks in order to collect and transfer help requests among the agents responsible for transporting the objects. Finally, after fulfilling their contingent transportation tasks, the agents continue with their nominal specifications.

4.1 Case A: Two youBots

This scenario includes two youBots, which are initially assigned surveillance tasks within the workspace, as depicted in Figure 4.1a. At a specific time instant, an object is placed by a human user in the workspace, as shown in Figure 4.1b. The youBot responsible for the transportation of this object leaves its surveillance task and switches to a leader mode in order to proceed to the object. As soon as it has reached the desired goal, it broadcasts a help-request message. While the other youBot performs its surveillance task, it receives the request message when it arrives in the proximity of the leader youBot. It then switches to a follower mode and takes its place close to the object to be transported. The two youBots simultaneously grasp the object, as shown in Figure 4.1c, and transport it cooperatively to a predefined location. They release the object and continue with their nominal surveillance tasks. At a later time instant, two objects are simultaneously inserted into the workspace by a human user. In this case, each of the youBots switches to the leader mode, moves to its corresponding object, and broadcasts a help-request message for collaboration. Thus, the system halts, as there is a conflict of interest, as illustrated in Figure 4.1d. The complete experiment video can be found in [9].

4.2 Case B: Two youBots and a Pioneer

This scenario includes two youBots and one Pioneer mobile robot, which are initially assigned surveillance tasks within the workspace, as shown in Figure 4.2a. At a specific time instant, two objects are simultaneously inserted into the workspace by a human user (see Figure 4.2b). As



Figure 4.1: Experiments with two youBots as described in Section 4.1.

a result, each of the youBots switches to the leader mode, moves to its corresponding object, and broadcasts a help-request message for collaboration. The mobile robot, which is still performing its surveillance task, collects the request messages from each youBot together with the priority of each message (e.g., which of the youBots asked for help first). It broadcasts the collected messages when it reaches the proximity of each youBot. The youBot that asked for help last switches to follower mode, as depicted in Figure 4.2c, and takes its place close to the object to be transported. The two youBots simultaneously grasp the object and transport it cooperatively to a predefined location, as illustrated in Figure 4.2d. They release the object and continue with the transportation of the other object. After all transportation tasks have been fulfilled, the robots continue with their nominal surveillance tasks. The complete experiment video can be found in [10].
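The priority rule of this scenario, where the robot that asked for help last yields, can be sketched in a few lines of Python; the message format and all names are our own invention, not the project's protocol:

```python
def resolve_conflict(requests):
    """requests maps robot id -> time of its help request. With two or more
    simultaneous leaders, the earliest requester keeps the lead and the
    latest one yields to become its follower. Returns None if no conflict."""
    if len(requests) < 2:
        return None
    ordered = sorted(requests, key=requests.get)
    return ordered[0], ordered[-1]     # (keeps leading, becomes follower)

# "G" asked for help first, so "A" yields and follows:
outcome = resolve_conflict({"youbot_G": 10.2, "youbot_A": 11.7})
```

In Case A there is no relay robot to collect and compare request times, which is why the same double-leader situation halts the system instead of resolving.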



Figure 4.2: Experiments with two youBots and a mobile robot as described in Section 4.2.


Chapter 5: Summary and Future Plan

5.1 Summary

To conclude, in this deliverable we have presented the following content.

Chapter 2 describes the experiment setup for the distributed plan reconfiguration scheme via knowledge transfer for multi-agent systems. In particular, the system consists of two youBots with locally-assigned tasks: one for surveillance and one for transporting objects. Three different scenarios are considered, including a nominal scenario where the workspace is fully known and a scenario where new features of the workspace are added after the system starts. Moreover, we have demonstrated the improved object detection and grasping modules in the third scenario.

Chapter 3 presents the experiment results in detail and highlights the contributions. In particular, the heterogeneity between the youBot and the NAO is embodied in the following aspects:

• The onboard vision system of NAO allows it to easily recognize the object of interest,estimate its location and perform the pointing action in a natural and accurate way.

• In contrast, the youBot has more accurate motion and actuation abilities, e.g., to navigate to a goal point via a reference trajectory and to grasp the object of interest with high success rate.

• The gesture recognition algorithms developed for the NAO robot can be extended easily toother humanoid robots with a change of scale.

Chapter 4 describes the experimental procedure that demonstrates the heterogeneous leader-follower scheme for collaboration.


Bibliography

[1] C. Baier and J.-P. Katoen. Principles of Model Checking. The MIT Press, 2008.

[2] P. Gastin and D. Oddoux. LTL2BA tool. URL: http://www.lsv.ens-cachan.fr/~gastin/ltl2ba/ (viewed September 2012).

[3] M. Guo, K. H. Johansson and D. V. Dimarogonas. Motion and Action Planning under LTL Specification using Navigation Functions and Action Description Language. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013.

[4] M. Guo and D. V. Dimarogonas. Distributed Plan Reconfiguration via Knowledge Transfer in Multi-agent Systems under Local LTL Specifications. In IEEE International Conference on Robotics and Automation (ICRA), 2014.

[5] M. Guo, K. H. Johansson and D. V. Dimarogonas. Revising Motion Planning under Linear Temporal Logic Specifications in Partially Known Workspaces. In IEEE International Conference on Robotics and Automation (ICRA), 2013.

[6] M. Colledanchise, A. Marzinotto and P. Ogren. Performance Analysis of Stochastic Behavior Trees. In IEEE International Conference on Robotics and Automation (ICRA), 2014.

[7] https://www.youtube.com/channel/UCDnZ3apkhOYG256Hni0A_bA

[8] https://youtu.be/YTwrQKFPHX0

[9] https://youtu.be/bLV4nclf3eo

[10] https://youtu.be/MoRLzHjrhNo

[11] P. Kondaxakis, J. Pajarinen and V. Kyrki. Real-Time Recognition of Pointing Gestures for Robot to Robot Interaction. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2014.

[12] C. P. Bechlioulis and K. J. Kyriakopoulos. Robust Model-Free Formation Control with Prescribed Performance and Connectivity Maintenance for Nonlinear Multi-Agent Systems. In IEEE Conference on Decision and Control (CDC), 2014.
