Depth camera based collision avoidance via active robot control

Bernard Schmidt a,∗, Lihui Wang a,b

a Virtual Systems Research Centre, University of Skövde, PO Box 408, 541 28 Skövde, Sweden
b Department of Production Engineering, KTH Royal Institute of Technology, Brinellvägen 68, 100 44 Stockholm, Sweden

Journal of Manufacturing Systems 33 (2014) 711–718
http://dx.doi.org/10.1016/j.jmsy.2014.04.004

Article history: Received 20 November 2013; received in revised form 8 April 2014; accepted 10 April 2014; available online 21 May 2014.

Keywords: Depth camera; Monitoring; Collision avoidance; Human–robot collaboration

Abstract

A new type of depth camera can improve the effectiveness of safety monitoring in human–robot collaborative environments. On today's manufacturing shop floors in particular, safe human–robot collaboration is of paramount importance for enhanced work efficiency, flexibility, and overall productivity. Within this context, this paper presents a depth camera based approach for cost-effective real-time safety monitoring of a human–robot collaborative assembly cell. The approach is further demonstrated in adaptive robot control. Stationary and known objects are first removed from the scene for efficient detection of obstacles in a monitored area. Collision detection is then performed between a virtual model driven by real sensors and the 3D point cloud data of the obstacles, supporting different safety scenarios. The results show that this approach can be applied to real-time work cell monitoring.

© 2014 The Society of Manufacturing Engineers. Published by Elsevier Ltd. All rights reserved.

∗ Corresponding author. Tel.: +46 500 44 8547; fax: +46 500 44 8598. E-mail addresses: [email protected], [email protected] (B. Schmidt), [email protected] (L. Wang).

1. Introduction

When humans and robots coexist, as in any human–robot collaborative environment, real-time safety protection of human operators is of paramount importance. The challenge is not only collision detection in real time but also collision avoidance at runtime via active closed-loop robot control. Novel cost-effective sensing systems such as the Microsoft Kinect open up new opportunities for surveillance in collaborative environments.

Several approaches to human–robot collaboration have been reported in recent years. Bussy et al. [1] proposed a control scheme for a humanoid robot to perform a transportation task jointly with a human partner. Takata and Hirano [2] proposed a planning algorithm that can adaptively allocate humans and robots in a hybrid assembly system. Fei et al. [3] presented a simulation-based multi-objective optimisation process for allocating assembly subtasks to both human and robot. Krüger et al. [4] summarised the advantages of human–robot collaboration in assembly lines. A hybrid assembly system exhibits both the efficiency of robots and the flexibility of humans. Nevertheless, it may induce extra stress on human operators if the assembly lines are improperly designed. Arai et al. [5] therefore assessed operator stress from the perspective of the distance and speed between a robot and an operator, aiming to guide the design of a productive hybrid assembly system.

Regarding safety monitoring and protection in human–robot collaborative manufacturing environments, two approaches have been popular: vision-based approaches such as 3D surveillance [6] via motion, colour and texture analysis, and inertial-sensor-based approaches [7] via a motion capture suit. From a practical point of view, the latter may not be feasible for real-world applications because it depends on a special suit equipped with sensors and captures the motion of the suit wearer only, leaving the surroundings unmonitored. This may lead to a safety leak, e.g. a mobile object may hit a stationary operator.

In the area of vision-based approaches, efficient collision detection has been a research focus for many years. Ebert et al. [8] proposed an emergency-stop approach to avoid collisions using a purpose-built vision chip, whereas Gecks and Henrich [9] adopted a multi-camera system for detecting obstacles. Vogel et al. [10] presented a projector-camera based approach for dynamic safety zone boundary projection and violation detection. Tan and Arai [11] applied a triple stereovision system for tracking the upper-body movement of a sitting operator wearing a suit with colour markers. However, mobile operators that appear in a monitored area may not show consistent colour or texture. Moreover, changes in illumination may also affect the overall effectiveness of the collision-detection process. Range imaging with depth information, in contrast, is robust to changes in both colour and illumination and allows a direct representation of a view in 3D space. Schiavi et al. [12] used a ToF (time-of-flight) camera to implement a collision detection system. Fischer and Henrich [13] presented an approach using multiple 3D depth images for collision detection.



For range imaging, laser scanners provide high resolution but with relatively low processing speed, because each point or line of the scanned scene is sampled separately. Comparatively, ToF cameras acquire depth images at high speed, but with low image resolution (up to 200 × 200 pixels) and at relatively high cost. Lately, Rybski et al. [14] fused data from stereo and range cameras to obtain a volumetric evidence grid for object localisation and collision detection. Flacco et al. [15] presented an integrated framework for collision avoidance using a single Kinect depth sensor and a depth-space approach, without representing obstacles in 3D space.

In recent research, Yeung et al. [16] used two Kinect sensors to improve the skeleton tracking provided by the Microsoft Kinect SDK. This approach shows the potential of human motion extraction; however, it has some limitations: the whole human body needs to be visible to properly initialise the skeleton recognition. Morato et al. [17] presented a similar approach that uses multiple Kinect skeletons, with application to safe human–robot collaboration. Their framework has been verified in a case study with a small laboratory 5-DOF robot. Approaches that use skeleton tracking are faster than general 3D approaches; nevertheless, they can have problems with the proper identification of an operator who is handling equipment. Moreover, the skeleton needs to be enriched with additional point cloud data to obtain the real size of the human body.

One common choice among commercially available safety protection systems is SafetyEYE® [18]. It calculates 3D data about the surveilled space from three cameras embedded in a sensing device and checks for intrusions into predefined safety zones. Entry into a safety zone may limit the movement speed or trigger an emergency stop of a robot in the zone. During active surveillance, the safety zones are static and cannot be changed.

For close human–robot collaboration without sacrificing productivity, there is a need for a cost-effective yet real-time safety solution suitable for dynamic manufacturing environments where human operators co-exist with robots. Targeting this need, this research proposes a new approach to human safety protection. Its novelty includes: (1) timely collision detection between depth images and 3D models in an augmented environment; and (2) active collision avoidance via path modification and robot control in real time with zero robot programming.

The rest of the paper is organised as follows. Section 2 describes the principle of collision detection; Section 3 provides the details of the new collision avoidance method; Section 4 depicts the system implementation and test results; and finally, Section 5 concludes the paper and highlights our future work.

2. Collision detection

For the purpose of collision detection, 3D models linked to real sensors are used to represent the structured manufacturing environment. A cost-effective depth camera is then chosen for surveillance of any other mobile, foreign objects, including operators, which are not present in the virtual 3D models. In the current implementation, two redundant Kinect sensors are used as the depth cameras for better area coverage and for eliminating possible blind spots in the surveillance area.

As shown in Fig. 1, in the simplified human–robot co-existing environment, the current status of the robot is obtained from the robot controller and shown via a virtual 3D robot model, whereas the appearance of the human operator is captured by the Kinect sensors as a set of real depth images. The human skeleton that can be obtained from the Kinect sensor with the Kinect SDK is not used in this work.

Fig. 1. Concept of collision detection in an augmented environment.

2.1. Depth images processing

The procedures of depth image acquisition and processing are illustrated in Fig. 2. Efficient processing is enabled by removing the background, using reference images obtained during the calibration process. Such a background reference image is obtained for each of the depth sensors. During the acquisition process, the robot performs some movements so that the background behind the robot can also be registered.

Objects corresponding to the modelled moving robot are also removed from the depth images by back-projecting the robot model onto the images. After the background removal, only foreign objects are left in the depth images, as shown in the second row of the right column in Fig. 2. The third-row images reveal a mobile operator as the subject of interest after applying a noise-removal filter and a connected-component algorithm.
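For illustration, the background removal and connected-component filtering can be sketched as follows. This is a minimal Python sketch with OpenCV and NumPy, not the authors' C/C++ implementation; the thresholds, array layouts and function name are assumptions.

```python
import cv2
import numpy as np

def extract_foreground(depth_mm, background_mm, robot_mask,
                       diff_thresh_mm=50, min_area_px=500):
    """Return a mask of foreign objects in a depth frame.

    depth_mm, background_mm: uint16 depth images in millimetres.
    robot_mask: binary mask of the back-projected robot model (1 = robot pixel).
    """
    valid = (depth_mm > 0) & (background_mm > 0)     # ignore pixels without depth
    closer = background_mm.astype(np.int32) - depth_mm.astype(np.int32) > diff_thresh_mm
    fg = (valid & closer & (robot_mask == 0)).astype(np.uint8)

    # Noise removal: small morphological opening.
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

    # Keep only connected components large enough to be an operator or obstacle.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(fg, connectivity=8)
    keep = np.zeros_like(fg)
    for i in range(1, n):                            # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area_px:
            keep[labels == i] = 1
    return keep
```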

The depth images of the detected operator from both cameras are converted to point clouds in the robot coordinate system and merged into a single 3D point cloud after image registration. Finally, this point cloud of the operator is superimposed on the 3D model of the robot in augmented virtuality, where the minimum distance between the point cloud and the 3D robot model is calculated. The bottom image in Fig. 2 demonstrates the result of the depth image processing.
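Merging the registered clouds from the two sensors amounts to applying each camera's extrinsic transformation and concatenating the results; a minimal sketch under assumed names (the 4 × 4 matrices would come from the position calibration of Section 2.4):

```python
import numpy as np

def to_robot_frame(points_cam, T_robot_cam):
    """Transform an N x 3 point cloud from a camera frame to the robot (world)
    frame using a 4 x 4 homogeneous transformation matrix."""
    homo = np.hstack([points_cam, np.ones((points_cam.shape[0], 1))])
    return (T_robot_cam @ homo.T).T[:, :3]

# Merge the clouds of the detected operator from both Kinect sensors:
# cloud_robot = np.vstack([to_robot_frame(cloud_cam1, T_robot_cam1),
#                          to_robot_frame(cloud_cam2, T_robot_cam2)])
```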

2.2. Minimum distance calculation

Although the size of the depth images is significantly reduced after background removal, the point cloud of the operator still consists of a large amount of data. To heuristically speed up the computation of collision detection, minimum bounding boxes are applied to the 3D point cloud. This decreases the number of distance calculations per frame from hundreds of thousands to thousands. In the current implementation, an axis-aligned bounding box is chosen for faster computation and easier representation by only two opposite corners: one corner consists of the maximum values of all point coordinates, and the other of the minimum values. Fig. 3 depicts the point cloud with bounding boxes at different granularity. The actual level of granularity relates to the sensitivity of the collision detection and is governed by a threshold parameter. To speed up the distance calculation, each of the subdivided boxes is treated as the minimal sphere bounding that sub-box; in this case, only the distance to the centre of each sub-box needs to be calculated. The robot model used for distance calculation is also represented by a set of bounding spheres. The distance between any operator or unknown object and any robot part is used for warning the operator or stopping the robot, whereas the distance to the robot TCP is used to control the robot movement for active collision avoidance; the other distances are monitored.

Fig. 2. Procedures of depth image processing.
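For illustration, the sub-box and bounding-sphere distance test described above can be sketched as follows. This is a minimal Python/NumPy sketch; the cell size, array layouts and function names are assumptions, and the authors' implementation is in C/C++.

```python
import numpy as np

def subdivide_aabb(points, cell_size=0.1):
    """Subdivide the axis-aligned bounding box of an N x 3 point cloud into
    cubic cells; return the centres of the occupied cells and the radius of
    the sphere bounding one cell."""
    p_min = points.min(axis=0)                      # one corner: minimum coordinates
    cell_idx = np.floor((points - p_min) / cell_size).astype(int)
    occupied = np.unique(cell_idx, axis=0)
    centres = p_min + (occupied + 0.5) * cell_size
    radius = 0.5 * cell_size * np.sqrt(3.0)         # sphere enclosing a cube
    return centres, radius

def min_distance(cell_centres, cell_radius, robot_spheres):
    """Conservative minimum distance between the obstacle cloud and the robot,
    where robot_spheres has one row (cx, cy, cz, r) per bounding sphere."""
    c, r = robot_spheres[:, :3], robot_spheres[:, 3]
    d = np.linalg.norm(cell_centres[:, None, :] - c[None, :, :], axis=2)
    return max(0.0, float((d - r[None, :] - cell_radius).min()))
```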

2.3. Sensitivity analysis

The depth cameras adopted in this research are Kinect sensors with a spatial resolution of 640 × 480 pixels, 11-bit depth, a field of view of 58° × 40°, and a measurement range of 0.8–3.5 m, as presented in Fig. 4. Since the Kinect sensors are not designed as measuring devices and do not provide explicit distance values, a calibration process is required. This is carried out through measurements of a target surface from varying distances, where Eq. (1) is used for parameter optimisation to reduce errors in distance measurement:

d = m_3 \tan(n/m_2 + m_1) - m_4    (1)

where n is the raw 11-bit disparity from the Kinect sensor and m_1–m_4 are the calibration parameters whose optimised values are listed in Table 1.

Table 1
Parameters for Kinect sensor calibration.

Parameter   Kinect 1   Kinect 2   Unit
m_1         1.1873     1.18721    rad
m_2         2844.7     2844.1     1/rad
m_3         0.12423    0.1242     m
m_4         0.0045     0.01       m
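Applied to raw sensor readings, Eq. (1) with the Table 1 parameters converts disparity values to metres. A minimal Python/NumPy sketch (the vectorised form and function name are our own):

```python
import numpy as np

# Optimised calibration parameters for Kinect 1 from Table 1 (rad, 1/rad, m, m).
M1, M2, M3, M4 = 1.1873, 2844.7, 0.12423, 0.0045

def disparity_to_distance(n):
    """Convert raw 11-bit Kinect disparity values to distances in metres, Eq. (1)."""
    n = np.asarray(n, dtype=np.float64)
    return M3 * np.tan(n / M2 + M1) - M4

# Example over part of the sensor's raw disparity range.
print(disparity_to_distance([600, 700, 800, 900]))
```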

Fig. 3. (a) Point cloud in a minimum bounding box; (b) an array of subdivided boxes.

Fig. 4. Kinect sensor's field of view.

The calibration results of the distance measurement and the range resolution of one of the Kinect sensors are given in Figs. 5 and 6, respectively. The range resolution corresponds to the change in measured distance per change in the least significant bit of the raw disparity value. When deciding a safety threshold value, both the workplace situation and the range resolution shown in Fig. 6 must be considered so as not to compromise safety.
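The range resolution in Fig. 6 follows directly from Eq. (1) as the change in computed distance per least-significant-bit step of the raw disparity; a self-contained sketch of this derivation (the disparity range used here is our own choice):

```python
import numpy as np

M1, M2, M3, M4 = 1.1873, 2844.7, 0.12423, 0.0045   # Table 1, Kinect 1

def distance(n):
    """Eq. (1): distance in metres for raw disparity n."""
    return M3 * np.tan(np.asarray(n, dtype=np.float64) / M2 + M1) - M4

# Range resolution: distance change caused by a one-LSB step of the raw disparity.
n = np.arange(600, 1000)
resolution_mm = 1000.0 * (distance(n + 1) - distance(n))
```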

Fig. 5. Calibration results of distance measurement.

For the optics calibration of the IR camera, the open source computer vision library OpenCV is utilised. Several raw infrared images of a known chessboard pattern are used to obtain the intrinsic parameters and distortion coefficients, whereas the pin-hole camera model defined in Eqs. (2)–(5) is used for camera calibration and 3D reconstruction:

x' = x/z, \quad y' = y/z    (2)

x'' = x' \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + 2 p_1 x' y' + p_2 (r^2 + 2 x'^2)

y'' = y' \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + p_1 (r^2 + 2 y'^2) + 2 p_2 x' y'    (3)

r^2 = x'^2 + y'^2    (4)

u = f_x x'' + c_x, \quad v = f_y y'' + c_y    (5)

where (x, y, z) are camera coordinates, k_1–k_6 are the radial and p_1, p_2 the tangential distortion coefficients, f_x and f_y are the focal lengths, (c_x, c_y) are the principal point coordinates, and (u, v) are the depth image coordinates.

Fig. 6. Range resolutions of one Kinect sensor.


Fig. 7. Positioning camera around robot.

During the calibration process we obtain the intrinsic parameters f_x, f_y, c_x, c_y and the distortion coefficients k_1–k_6, p_1, p_2 of the IR camera. This enables us to convert the obtained depth image to a 3D point cloud in the camera's coordinate system.
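With the intrinsic parameters and distortion coefficients known, each valid depth pixel can be back-projected to a 3D point in the camera frame. A minimal Python sketch with OpenCV/NumPy; the variable names, the placeholder intrinsics and the use of cv2.undistortPoints to invert Eqs. (2)–(5) are our own choices, not the authors' implementation:

```python
import cv2
import numpy as np

def depth_image_to_points(depth_m, K, dist_coeffs):
    """Back-project a depth image (metres, 0 = no measurement) to an N x 3
    point cloud in the camera coordinate system."""
    v, u = np.nonzero(depth_m > 0)                  # rows (v) and columns (u) with valid depth
    z = depth_m[v, u]

    # Undo lens distortion and the intrinsic projection of Eqs. (2)-(5);
    # cv2.undistortPoints returns normalised coordinates (x', y').
    pix = np.stack([u, v], axis=1).astype(np.float64).reshape(-1, 1, 2)
    norm = cv2.undistortPoints(pix, K, dist_coeffs).reshape(-1, 2)

    return np.column_stack([norm[:, 0] * z, norm[:, 1] * z, z])

# Placeholder intrinsics (illustrative only, not the calibrated values):
# K = np.array([[580.0, 0.0, 320.0], [0.0, 580.0, 240.0], [0.0, 0.0, 1.0]])
# dist_coeffs = np.zeros(5)
```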

2.4. Position calibration

In order to combine the depth images with the 3D models, the position of the camera with respect to the modelled environment needs to be obtained. To retrieve the positions of the depth cameras mounted around the robot, the procedure from [19] is applied. The relationship between the transformation matrices shown in Fig. 7 can be expressed as Eq. (6):

T^W_O = T^W_{TCP} T^{TCP}_O = T^W_C T^C_O    (6)

where T^W_O is the transformation matrix between the calibration object frame and the world frame, T^{TCP}_O the transformation matrix between the calibration object frame and the robot TCP frame, T^W_C the transformation matrix between the camera frame and the world frame, T^C_O the transformation matrix between the calibration object frame and the camera frame, which can be resolved from a captured image, and T^W_{TCP} the transformation matrix between the robot TCP frame and the world frame, which can be resolved by the forward kinematics of the robot.
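Solving Eq. (6) for the unknown camera pose gives T^W_C = T^W_{TCP} T^{TCP}_O (T^C_O)^{-1}. A minimal sketch with assumed 4 × 4 homogeneous matrices (the function name is ours, and obtaining T^C_O via e.g. cv2.solvePnP is only one possible choice, not necessarily the procedure of [19]):

```python
import numpy as np

def camera_pose_in_world(T_W_TCP, T_TCP_O, T_C_O):
    """Camera pose in the world frame from Eq. (6):
    T_W_O = T_W_TCP @ T_TCP_O = T_W_C @ T_C_O, hence
    T_W_C = T_W_TCP @ T_TCP_O @ inv(T_C_O).

    T_W_TCP -- robot TCP in the world frame (forward kinematics),
    T_TCP_O -- calibration object in the TCP frame (known mounting),
    T_C_O   -- calibration object in the camera frame (from a captured image).
    """
    return T_W_TCP @ T_TCP_O @ np.linalg.inv(T_C_O)
```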

3. Collision avoidance

For assembly operations, we can distinguish two behaviour scenarios for collision avoidance. The first is to stop the robot movement when close proximity of an operator is detected, and to continue the movement after the operator walks away. This scenario is suitable for operations with limited degrees of freedom, for example inserting a pin into an object held by the robot. The second scenario is applicable to operations such as transportation, where the path can be dynamically modified to avoid collisions with human operators and other obstacles.

As shown in Fig. 8, when a short distance between the TCP (tool centre point) of the robot end-effector and the operator is detected, the direction of movement of the end-effector is modified to maintain a defined minimal distance. However, if the operator appears behind the robot and the distance d_i between the operator and the robot becomes less than a safety threshold, the robot stops moving. Once the situation is relaxed and the collision distance ||c|| is greater than the threshold, the end-effector moves back in the direction of its destination position.

The vectors taken into consideration for path modification are also shown in Fig. 8. The collision vector c is calculated as the vector between the TCP of the robot and the closest obstacle point. The vector representing the direction of the robot movement, v_c, is decomposed into a component v_c∥c parallel and a component v_c⊥c perpendicular to the collision vector c. The parallel component is calculated in Eq. (7) as the dot product of the movement vector and the collision versor (a unit vector in the direction of the collision vector), and has the direction of the collision vector. The perpendicular component is then calculated using Eq. (8). The modification is based on the distance from the obstacle and the direction of the movement. The parallel component, as it is responsible for the movement towards the obstacle, is modified to avoid a collision: its length is modified to maintain the minimal distance. An example of this modification, considering only positions, is presented in Fig. 9.

Fig. 8. Collision avoidance vectors.

Fig. 9. Example of vector modification.

Fig. 9 presents the modification in two cases: when the robot is moving towards the operator and when it is moving away from the operator. The threshold value d_th1 defines the distance at which the vector modification is activated, whereas d_th2 defines the nearest distance, below which the robot moves away at a configured speed to avoid rapid movement close to the human operator. As a result of these modifications, the vector v_a∥c is obtained, and the vector of the modified movement is composed from v_c⊥c and v_a∥c as given in Eq. (9).

v_{c \parallel c} = (v_c \cdot \hat{c}) \, \hat{c}, \quad \text{where } \hat{c} = c/|c|    (7)

v_{c \perp c} = v_c - v_{c \parallel c}    (8)

v_a = v_{a \parallel c} + v_{c \perp c}    (9)
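The decomposition of Eqs. (7)–(9), together with the d_th1/d_th2 behaviour of Fig. 9, can be sketched as follows. This is a minimal Python/NumPy sketch; the linear scaling of the parallel component between the two thresholds and all default values are our own assumptions, since the paper describes the modification only qualitatively.

```python
import numpy as np

def modify_velocity(v_c, c, d_th1=0.60, d_th2=0.30, v_retreat=0.05):
    """Modify the commanded TCP velocity v_c given the collision vector c
    (from the TCP to the closest obstacle point), following Eqs. (7)-(9)."""
    d = np.linalg.norm(c)
    if d >= d_th1:                       # obstacle far away: keep the command
        return v_c
    c_hat = c / d                        # collision versor
    v_par = np.dot(v_c, c_hat) * c_hat   # Eq. (7): component towards the obstacle
    v_perp = v_c - v_par                 # Eq. (8)

    if d <= d_th2:
        # Closer than d_th2: move away from the obstacle at a configured speed.
        v_par_mod = -v_retreat * c_hat
    elif np.dot(v_c, c_hat) > 0:
        # Approaching: scale down the parallel component as the obstacle gets closer.
        v_par_mod = (d - d_th2) / (d_th1 - d_th2) * v_par
    else:
        # Already moving away: leave the parallel component unchanged.
        v_par_mod = v_par

    return v_par_mod + v_perp            # Eq. (9): modified movement vector
```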

4. Robotic assembly testing

4.1. System configuration

The proposed active collision avoidance system is integrated with our earlier Wise-ShopFloor framework [20] as shown in Fig. 10. An ABB IRB 140 industrial robot is used to create a mini assembly cell for testing and validation. The collision avoidance application server is deployed on a PC with a 2.7 GHz Intel Core i7 CPU and 4 GB RAM, running a 64-bit Windows 7 operating system.

For implementing the active collision avoidance system, two programming platforms have been used. Communication with the depth cameras and the image processing are implemented in C/C++ as a library. 2D and 3D visualisation, as well as communication with the robot controller and the Wise-ShopFloor framework, are implemented in Java. To integrate the Java-based web application with the C++ library, Java Native Access (JNA) is employed. The integrated system allows both local and remote monitoring and robot control with secure safety protection.

Fig. 10. System configuration.

Fig. 11. Task structure of the robot controller.

The two parallel tasks running in the robot controller for robot control and monitoring are expanded with a third parallel task to handle the active collision avoidance; this task is added for independent communication with the collision avoidance application, as shown in Fig. 11.



Fig. 12. Recorded path for collision avoidance.

A test case was performed in the robotic mini assembly cell, where the robot assembles shafts and washers and inserts them into the magazine of finished products. A human operator is responsible for taking the finished products out of that magazine as well as for filling the magazines of components at the left-hand side. For the picking and inserting operations, the collision avoidance is limited to stopping the movement of the robot arm. During the movements to and from the magazines, the active collision avoidance algorithm is applied.

Another exemplary scenario for the assembly of the shafts and washers utilises the capability of the system to follow the operator, where the robot has to grab a washer and deliver it to the human operator. The robot first grips a shaft and moves it to the human operator, who then inserts the washer on the shaft held by the robot. Afterwards, the robot takes the assembled product and puts it in the magazine of finished products. The delivery of the washer is realised by tracking and following the human operator's position: when the washer is gripped by the human operator, the robot releases it. This action can be triggered by a switch on the end-effector or by a voice command.

5. Results

A recorded example of the robot's TCP path during collision avoidance is shown in Fig. 12. The starting position of the robot and the subsequent targets on the path are marked in the figure; when no obstacle is detected, the robot moves linearly between the specified targets and returns to the starting position. As soon as the presence of an obstacle is detected in proximity to the TCP, the current direction of movement is modified to avoid a collision on the way to the next target. The points of the modified path are marked by circles, and the obstacle points taken into consideration during the path modification, i.e. the closest points to the robot TCP, are marked by crosses. Lines represent the distances between the TCP and the obstacle during the movement.

The recorded results of the robot's TCP following an operator are shown in Fig. 13. The detected points of the operator's hand are marked by red crosses; the clusters of these points correspond to the places where the operator held his hand for longer periods of time. The points of the robot TCP are marked by blue crosses, and the robot TCP's stopping points correspond to the hand positions.

Fig. 13. Recorded path for following.

6. Conclusions

This paper presents an integrated and cost-effective approach for real-time active collision avoidance in a human–robot collaborative work cell that enables all-time safety protection. The approach connects a set of vision and motion sensors to virtual 3D models for real-time monitoring and collision detection in augmented virtuality, aiming to improve the overall manufacturing performance. Rather than relying on emergency stops that prevent human–robot coexistence, our approach links collision detection to active robot control through three collision-avoidance strategies: warning an operator, stopping a robot, or modifying the robot path to avoid a collision with the operator. The outcomes for human–robot collaborative assembly are much improved flexibility, absolute human safety without a fence, and better overall productivity.

The limitations of the proposed solution are related to the sensors. High-intensity direct sunlight can disturb the detection, and illumination of the same surface by multiple Kinect sensors can affect the detectability of that surface; tests showed that some parts of a surface scanned by two sensors were detected by only one of them.

Our future work will focus more on developing and improving the collision avoidance strategies for more complex collaborative manufacturing tasks, as well as on including RGB images in the detection and tracking process to increase the accuracy and robustness of the detection system.

References

[1] Bussy A, Kheddar A, Crosnier A, Keith F. Human–humanoid haptic joint object transportation case study. In: IEEE/RSJ international conference on intelligent robots and systems. 2012. p. 3633–8.
[2] Takata S, Hirano T. Human and robot allocation method for hybrid assembly systems. CIRP Ann Manuf Technol 2011;60:9–12.
[3] Fei C, Sekiyama K, Sasaki H, Jian H, Baiqing S, Fukuda T. Assembly strategy modeling and selection for human and robot coordinated cell assembly. In: IEEE/RSJ international conference on intelligent robots and systems. 2011. p. 4670–5.
[4] Krüger J, Lien TK, Verl A. Cooperation of human and machines in assembly lines. CIRP Ann Manuf Technol 2009;58:628–46.
[5] Arai T, Kato R, Fujita M. Assessment of operator stress induced by robot collaboration in assembly. CIRP Ann Manuf Technol 2010;59:5–8.
[6] Krüger J, Nickolay B, Heyer P. Image based 3D surveillance for flexible man–robot-cooperation. Ann CIRP 2005;54:19–22.
[7] Corrales JA, Candelas FA, Torres F. Safe human–robot interaction based on dynamic sphere-swept line bounding volumes. Robot Comput Integr Manufac 2011;27:177–85.


[8] Ebert D, Komuro T, Namiki A, Ishikawa M. Safe human–robot-coexistence: emergency-stop using a high-speed vision-chip. In: IEEE/RSJ international conference on intelligent robots and systems. 2005. p. 2923–8.
[9] Gecks T, Henrich D. Human–robot cooperation: safe pick-and-place operations. In: IEEE international workshop on robot and human interactive communication. 2005. p. 549–54.
[10] Vogel C, Poggendorf M, Walter C, Elkmann N. Towards safe physical human–robot collaboration: a projection-based safety system. In: IEEE/RSJ international conference on intelligent robots and systems. 2011. p. 3355–60.
[11] Tan JTC, Arai T. Triple stereo vision system for safety monitoring of human–robot collaboration in cellular manufacturing. In: IEEE/CIRP international symposium on assembly and manufacturing. 2011. p. 1–6.
[12] Schiavi R, Bicchi A, Flacco F. Integration of active and passive compliance control for safe human–robot coexistence. In: IEEE international conference on robotics and automation. 2009. p. 259–64.
[13] Fischer M, Henrich D. 3D collision detection for industrial robots and unknown obstacles using multiple depth images. In: Kröger T, Wahl FM, editors. Advances in robotics research. 2009. p. 111–22.
[14] Rybski P, Anderson-Sprecher P, Huber D, Niessl C, Simmons R. Sensor fusion for human safety in industrial workcells. In: IEEE/RSJ international conference on intelligent robots and systems. 2012. p. 3612–9.
[15] Flacco F, Kröger T, De Luca A, Khatib O. A depth space approach to human–robot collision avoidance. In: IEEE international conference on robotics and automation. 2012. p. 338–45.
[16] Yeung KY, Kwok TH, Wang CCL. Improved skeleton tracking by duplex Kinects: a practical approach for real-time applications. J Comput Inf Sci Eng 2013;13.
[17] Morato C, Kaipa KN, Zhao B, Gupta SK. Toward safe human robot collaboration by using multiple Kinects based real-time human tracking. J Comput Inf Sci Eng 2014;14.
[18] Pilz GmbH & Co. KG (2006). http://www.safetyeye.com/
[19] Schmidt B, Wang L. Automatic robot calibration via a global-local camera system. In: International conference on flexible automation and intelligent manufacturing. 2012. p. 423–31.
[20] Wang L, Givehchi M, Adamson G, Holm M. A sensor-driven 3D model-based approach to remote real-time monitoring. CIRP Ann Manuf Technol 2011;60:493–6.