


SHARED CONTROL STRATEGY APPLIED TO COMMAND OF AN ASSISTIVE TELEPRESENCE ROBOT

Guilherme Baldo∗, Alan Floriano†, Berthil Longo‡, Teodiano Bastos†

∗Electrical Engineering Department, Federal University of Espirito Santo - UFES, Av. Fernando Ferrari, 514 - 29075-910, Vitoria, Espirito Santo, Brazil

†Post-Graduate Program of Electrical Engineering, Federal University of Espirito Santo - UFES, Av. Fernando Ferrari, 514 - 29075-910, Vitoria, Espirito Santo, Brazil

‡Post-Graduate Program of Biotechnology, Federal University of Espirito Santo - UFES, Av. Marechal Campos, 1468, Vitoria, Espirito Santo, Brazil

Emails: [email protected], [email protected], [email protected],

[email protected]

Abstract— This paper presents a Shared Control strategy designed to command an Assistive Telepresence Robot integrated with a Brain Computer Interface (BCI), intended for people with motor disabilities. To implement the control, a tangential escape method of obstacle avoidance was used. The role of the shared control is to facilitate the command of the robot by requiring fewer commands, decreasing the time spent and the mental workload needed to perform the same tasks. This work compares the task completion time and the number of commands needed to complete a task proposed in our experiment. A reduction of 18.39% in time and of 17.06% in the number of commands was achieved with the implementation of the shared control.

Keywords— Shared Control, Telepresence, Tangential Escape, Brain Computer Interface

1 Introduction

The capability of extracting information from brain signals and using it to communicate with other people or interact with the environment is a key point in the development of Brain Computer Interfaces (BCI) (Tonin et al., 2010).

A BCI can be used as an assistive technology (AT) and increase the quality of life of people with severe disabilities. In this scenario, many such ATs are being developed to assist people with motor disabilities, such as Robotic Telepresence combined with a BCI. Telepresence Robots are mobile robots equipped with a camera and a screen that can move, interact and communicate with the environment and with the people around them. That gives the user the sensation of presence in the environment (Park, 2013).

Telepresence combined with a BCI can be seen as an extension of the user, and can enable people with severe disabilities to perceive, explore and interact with the real environment using only brain activity (Escolano et al., 2012).

Operating a Telepresence Robot using a BCI in an environment that may contain obstacles can be a complex and frustrating task, since the user has to deal with the variability of the environment, obstacle avoidance, and the reduced viewing angle caused by the use of a video camera (Tonin et al., 2011). Moreover, BCIs are limited by a low information transfer rate, which makes the control more difficult to use (Leeb and Millan, 2013).

Thus, it is important to implement a strategy that facilitates the usage of the system. In this scenario, Shared Control facilitates the operation of the system by the user: the robot combines the user's inputs with the information from its sensors to modify the commands and decide the best way to carry out the user's will (Goodrich et al., 2013). The system can thus perform the user's intention with fewer commands and in less time. Another key factor is the reduction of the user's mental workload, since it can be difficult to send many commands using a BCI.

The Material and Methods section presents the telepresence robot, the programming environment, the discussion of Shared Control and the implementation of the control system. The Evaluation section presents the evaluation of the system and the experiment. The Results and Discussion section presents the results achieved with both Direct Control (control without any autonomy of the robot) and Shared Control, and shows the performance improvement obtained with Shared Control. Finally, the Conclusion presents this work's conclusions and the next steps towards the development of the whole system.

2 Material and Methods

2.1 Programming Environment

The control system was implemented in C++ together with the ARIA library (Advanced Robot Interface for Applications) provided by MobileRobots. The ARIA library allows dynamic control of velocity, relative orientation and other movement parameters of the robot, using high-level functions to access the robot's internal and external sensors.


The MobileSim simulator, also provided by MobileRobots, was used during the development of the control system.

2.2 Telepresence Robot

The research robotic platform Pioneer 3DX, developed by MobileRobots, was used in the construction of the telepresence robot (Figure 1). The Pioneer 3DX is a sturdy robot with two-wheel differential traction. The platform also has ultrasonic sensors, a SICK LMS-200 laser, batteries, wheel encoders, a microcontroller and an on-board computer. The programming environment was executed on Ubuntu 12.04 LTS.

Figure 1: TRON - Telepresence Robotic Navigation of UFES/Brazil.

2.3 Final Position Controller

A Kinematics Model

The kinematics model was used to model the robot. The Cartesian equations of the kinematics model are given by (Figure 2)

\dot{x} = v\cos\varphi, \quad \dot{y} = v\sin\varphi, \quad \dot{\varphi} = \omega \qquad (1)

B Position Controller

The mobile robot's objective while moving is to leave the origin point and reach the destination point. This means that the position errors are given by

x_e = x_d - x_r, \quad y_e = y_d - y_r \qquad (2)

These errors in polar coordinates are given by

\rho = \sqrt{x_e^2 + y_e^2}, \quad \theta = \arctan\frac{y_e}{x_e} \qquad (3)

From this, the objective of the mobile robot becomes reducing these errors to zero. Writing Eq. (1) in polar coordinates,

\dot{\rho} = -v\cos\alpha, \quad \dot{\alpha} = \frac{v\sin\alpha}{\rho} - \omega, \quad \dot{\theta} = \frac{v\sin\alpha}{\rho} \qquad (4)

Figure 2: Coordinate system for the final position controller.

For the position controller (Secchi, 1998), the balance point is given by

\rho = 0, \quad \alpha = 0 \qquad (5)

Therefore it should be ensured that

\rho \to 0, \quad \alpha \to 0, \text{ as } t \to \infty \qquad (6)

Taking the Lyapunov candidate function

V(\rho, \alpha) = \frac{1}{2}\rho^2 + \frac{1}{2}\alpha^2 \qquad (7)

it is possible to prove the global asymptotic stability of the position controller. Taking the time derivative of Eq. (7) and applying Eq. (4),

\dot{V}(\rho, \alpha) = -\rho v\cos\alpha + \frac{\alpha v\sin\alpha}{\rho} - \alpha\omega \qquad (8)

Setting the control actions (Secchi, 1998) as

v = v_{max}\tanh(\rho)\cos\alpha, \quad \omega = k_w\alpha + v_{max}\frac{\tanh\rho}{\rho}\sin\alpha\cos\alpha \qquad (9)

Applying Eq.(9) in Eq.(8)

\dot{V}(\rho, \alpha) = -v_{max}\rho\tanh(\rho)\cos^2\alpha - k_w\alpha^2 < 0, \quad \text{for } v_{max}, k_w > 0 \qquad (10)

Therefore, the asymptotic stability at the balance point is shown.

C Orientation Controller

It is important for the telepresence robot to perform rotation commands; thus, an orientation controller was developed that rotates the robot to a determined orientation. The controller is proposed in (Toibero et al., 2006). The orientation error is given by

\varphi_e = \varphi_d - \varphi \qquad (11)

Given that the destination orientation is constant during the command, and taking Eq. (1), we obtain

\dot{\varphi}_e = -\omega \qquad (12)


Taking the proposed Lyapunov candidate function

V(\varphi_e) = \frac{1}{2}\varphi_e^2 \qquad (13)

To ensure global asymptotic stability at the balance point

\varphi_e = 0 \qquad (14)

taking the time derivative of Eq. (13) and considering Eq. (12),

\dot{V}(\varphi_e) = -\varphi_e\omega \qquad (15)

The control actions in (Toibero et al., 2006) are

v = 0, \quad \omega = \omega_{max}\tanh\varphi_e \qquad (16)

We then have

\dot{V}(\varphi_e) = -\omega_{max}\varphi_e\tanh\varphi_e < 0, \quad \text{for } \omega_{max} > 0 \qquad (17)

Therefore, the global asymptotic stability is ensured.

2.4 Obstacle Avoidance Strategy

The proposed obstacle avoidance method is based on tangential escape (Ferreira et al., 2008). This approach avoids a detected obstacle by keeping the robot's trajectory tangent to the obstacle (Figure 3).

Figure 3: Tangential Escape Method.

When an obstacle is detected through the ultrasonic sensors, the robot determines which sensor is nearest to the obstacle and its angle β relative to the robot orientation. If the distance measured by the nearest sensor is smaller than the safe distance, the angle γ is defined to create a virtual destination, which the robot follows until there is no obstacle.

\gamma = \beta - \alpha - 90^{\circ}, \text{ if } \beta \ge 0; \quad \gamma = \beta - \alpha + 90^{\circ}, \text{ if } \beta < 0 \qquad (18)

So, while an obstacle is detected near the path of the robot, the robot follows the virtual destination, which is a rotation of the real destination. As soon as the robot overcomes the obstacle, it starts to follow the real destination again.

2.5 Shared Control

Direct Control is defined in the work of (Goodrich et al., 2013) as the control scheme in which the user sends the commands to the robot and no autonomy of the robot is used. In the case of Telepresence under Direct Control, the user's input comes from his/her brain activity, which is processed, and a command is sent to the robot, which executes it regardless of the environment. The main feedback is the visual feedback provided by the camera (Figure 4).

Figure 4: Direct Control Diagram.

On the other hand, Shared Control is defined in the work of (Goodrich et al., 2013) as the control scheme in which the user continuously sends commands to the robot, while the robot has the autonomy to modify the action determined by each command to increase performance and ensure safety.

In the case of Telepresence under Shared Control, the user's input again comes from processed information of his/her brain activity, and a command is sent to the robot as in Direct Control. However, in Shared Control, the robot also uses information about the environment obtained by its sensors and has the autonomy to modify its action, compensating for the user's lack of presence and the low information transfer rate of the BCI, so as to perform the user's intent with more accuracy (Figure 5). Shared Control thus gives the user the sensation of "riding a horse" (Leeb and Millan, 2013).

Figure 5: Shared Control Diagram.

The Shared Control system implemented in this work can receive four commands from the user: Forward, Backward, Turn Left and Turn Right. The Forward command makes the robot move 0.5 meters ahead, using the odometer to control the distance, while the Shared Control ensures that the robot does not hit any obstacle on the way. The Backward command works similarly to Forward, but moves the robot backward. The Turn Right and Turn Left commands work identically in Shared Control and Direct Control: both use the orientation controller described above and make the robot turn 30 degrees.

3 Evaluation

3.1 Experiment Protocol

The goal of the experiment is to drive the Telepresence Robot through an environment with obstacles. Since the objective of the experiment is to evaluate the control system, the commands are given by the users through a wireless keyboard. Eight volunteers were asked to perform the task as fast as possible but without hitting obstacles (Figure 6). The area of the experiment is a 4 x 4 m square. The proposed task consists of driving the robot from target 1 to target 2 and so on, until reaching target 4, and then returning to target 1. The volunteers performed the task twice, first using Direct Control and then Shared Control.

Figure 6: Proposed Task.

3.2 Evaluated Criteria

The performance measures proposed to compare Shared Control with Direct Control are (Tonin et al., 2010):

• Task Completion Time: A key point to evaluate is the time spent by the volunteers to complete the determined task, since the time the user takes to move around an environment while avoiding obstacles can be frustrating.

• Number of Commands: The number of commands is also very important for the user, since it is difficult to send many commands while using a BCI due to the low information transfer rate.

4 Results and Discussion

The trajectories of the volunteers were recorded by the odometer of the robot and are plotted in Figure 7.

Figure 7: Trajectories produced by the volunteers performing the proposed task (left: Direct Control; right: Shared Control).

The number of commands and the time spent by each volunteer were recorded during the experimental tasks; the obtained values are shown in Figure 8.

Figure 8: Number of commands and Task Completion Time using Direct Control and Shared Control performed by each volunteer.

The first thing that can be inferred from Figure 8 is that all volunteers needed fewer commands and less time to perform the task when Shared Control was enabled. Applying the Wilcoxon test (Japkowicz and Shah, 2011) to the obtained results shows that the results with Direct Control and Shared Control are in fact statistically different.

Figure 9 shows the average values of the obtained results comparing Direct and Shared Control. The average number of commands needed using Direct Control was 63.88 ± 6.85, while using Shared Control it was 53 ± 7.93. The average time spent in the experiment using Direct Control was 261.38 ± 28.98 s, while using Shared Control it was 213.13 ± 28.07 s. The obtained results show a reduction of 17.06% in the average number of commands needed and a reduction of 18.39% in the time needed.

Figure 9: Average values of Number of Commands and Task Completion Time using Direct Control and Shared Control.

These results indicate that Shared Control can indeed reduce the users' effort in operating the Telepresence Robot, reduce the time and the number of commands needed to perform the tasks, and improve the performance and safety of the system.

5 Conclusion

The experimental results reported in this work demonstrate that the Shared Control strategy can be used in Robotic Telepresence combined with a BCI to facilitate the user's control. Shared Control can decrease the mental workload, since it was shown that this strategy reduces the number of commands and the time needed to perform simple tasks and to move through environments with obstacles. The next steps of this work are to improve the Shared Control, to integrate the control system with the BCI, and to execute tests for its evaluation.

6 Acknowledgment

The authors thank UFES/Brazil for the technical and scientific support, and CAPES and CNPq for the financial funding.

References

Escolano, C., Antelis, J. M. and Minguez, J. (2012). A telepresence mobile robot controlled with a noninvasive brain–computer interface, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 42(3): 793–804.

Ferreira, A., Pereira, F. G., Vassallo, R. F., Bastos Filho, T. F. and Sarcinelli Filho, M. (2008). An approach to avoid obstacles in mobile robot navigation: the tangential escape, Sba: Controle & Automação Sociedade Brasileira de Automática 19(4): 395–405.

Goodrich, M. A., Crandall, J. W. and Barakova, E. (2013). Teleoperation and beyond for assistive humanoid robots, Reviews of Human Factors and Ergonomics 9(1): 175–226.

Japkowicz, N. and Shah, M. (2011). Evaluating Learning Algorithms: A Classification Perspective, Cambridge University Press.

Leeb, R. and Millan, J. d. R. (2013). Introduction to devices, applications and users: Towards practical BCIs based on shared control techniques, Towards Practical Brain-Computer Interfaces, Springer, pp. 107–129.

Park, E. (2013). The adoption of tele-presence systems: Factors affecting intention to use tele-presence systems, Kybernetes 42(6): 869–887.

Secchi, H. (1998). Control de vehículos autoguiados con realimentación sensorial.

Toibero, J., Carelli, R., Kuchen, B. and Canali, L. (2006). Switching controllers for navigation with obstacles in unknown environments, IV Jornadas Argentinas de Robótica (JAR).

Tonin, L., Carlson, T., Leeb, R. and del Millan, J. (2011). Brain-controlled telepresence robot by motor-disabled people, 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, pp. 4227–4230.

Tonin, L., Leeb, R., Tavella, M., Perdikis, S. and del Millan, J. (2010). The role of shared-control in BCI-based telepresence, 2010 IEEE International Conference on Systems, Man and Cybernetics (SMC), IEEE, pp. 1462–1466.