
Institutionen för systemteknik
Department of Electrical Engineering

Master's thesis (Examensarbete)

A method for collision handling for industrial robots

Master's thesis carried out in Automatic Control
at Linköping Institute of Technology

by

Fredrik Danielsson and Anders Lindgren

LITH-ISY-EX--08/4105--SE

Linköping 2008

Department of Electrical Engineering, Linköpings universitet, SE-581 83 Linköping, Sweden



Supervisors: Christian Lyzell, ISY, Linköpings universitet

Richard Warldén, Motoman Robotics

Examiner: Svante Gunnarsson, ISY, Linköpings universitet

Linköping, 16 May, 2008


Division, Department: Division of Automatic Control, Department of Electrical Engineering, Linköpings universitet, SE-581 83 Linköping, Sweden

Date: 2008-05-16

Language: English

Report category: Examensarbete (Master's thesis)

URL for electronic version: http://www.control.isy.liu.se
http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-ZZZZ

ISBN: —

ISRN: LITH-ISY-EX--08/4105--SE

ISSN: —

Title: Metod för kollisionshantering av industrirobotar (A method for collision handling for industrial robots)

Authors: Fredrik Danielsson and Anders Lindgren



Abstract

This master's thesis presents the development of a collision handling function for Motoman industrial robots and investigates further use of the developed software. When a collision occurs, the arm is to be retracted to a safe home location and the job restarted to resume production. The retraction can be done manually, which demands that the operator has good knowledge of robot handling, and it can be a time-consuming task. To minimise the time for restarting the job after a collision, and to allow employees with limited knowledge of robot handling to retract the arm and restart the job, Motoman provides an automatic retraction function. However, this retraction function may itself cause further collisions, and therefore a new function for retracting the arm is needed. The new function is based on recording the motion of the robot by sampling the servo values, which are stored in a buffer. A job file is automatically created and loaded into the control system, and the position variables of the job file are updated using the contents of the buffer. This ensures a safe retraction of the arm as long as the environment surrounding the robot remains the same.

The developed software made it possible to control the robot in real time by changing the buffer information, which has led to a cognitive system called the Pathfinder. By initiating the Pathfinder function with at least a start and an end point, the function generates a collision-free path between the start point and the end point. A pilot study has also been made concerning the integration of a vision system with the Pathfinder to improve the decision handling of the function.

Sammanfattning

This master's thesis primarily treats a method for collision handling for Motoman's industrial robots, but it also investigates further uses of the developed method. When a collision has occurred and the arm is to be moved back to a safe home location, so that the system can be reset and production resumed, today's systems offer two options: driving the arm back manually or using an automatic function. Manual operation requires that the operator has good knowledge of the robot, and it is usually time consuming, which is something one wants to minimise when production is to be restarted after a stop. The current automatic function minimises the time and makes it easier for the operator to reset the job and resume production, but it has limitations that can lead to further collisions when the arm is retracted. A new method has therefore been developed and implemented to improve and simplify the retraction for the operator. The new method is based on sampling the robot's motion and storing it in a buffer. The buffer is then used to recreate the motion: a job is created automatically and its position variables are updated from the buffer. In this way a safe path to the home location can be guaranteed as long as the surrounding world remains constant.

The developed software has also made it possible to control the robot in real time by changing the contents of the buffer, which has led on to a cognitive system, Pathfinder, that can learn from its mistakes and correct existing jobs. By initiating Pathfinder with at least a start point and an end point, it generates a collision-free path between these two points, which is then uploaded as an ordinary robot job. A pilot study has also been made on complementing the Pathfinder function with a camera-based vision system in order to extend the decision making of the function.


Acknowledgments

We would like to thank the employees at Motoman Robotics AB, Kalmar, for never getting tired of our foolish questions, and a special thanks to Richard Warldén, who made this master's thesis possible. Thanks to Svante Gunnarsson and Christian Lyzell for commenting on the report and helping us throughout the process. Finally, we would like to thank Per Lindgren, who gave us a roof over our heads during our first month in Kalmar, and all our friends and family who have shown their support.



Contents

1 Introduction
   1.1 Background
       1.1.1 Research and Development
       1.1.2 Motoman Robotics AB
   1.2 Purpose
   1.3 Thesis outline

I Motion Control

2 Robotics
   2.1 Industrial robots in general
   2.2 Sensors on the robot
   2.3 Kinematics of the robot
   2.4 System Overview
   2.5 Robot Control
       2.5.1 NX100 control system
       2.5.2 NX100 Parameters
       2.5.3 Safe home location
   2.6 Kinematics
       2.6.1 Rigid motion
       2.6.2 Forward Position Kinematics
       2.6.3 Inverse Position Kinematics
       2.6.4 Forward Velocity Kinematics

3 Backtracking
   3.1 Problem Overview
   3.2 How to retrieve a safe way home
       3.2.1 Using external sensors
       3.2.2 Path recording
       3.2.3 Test bench
   3.3 How to get a steady motion
   3.4 Environmental change
   3.5 Restore the monitored job
   3.6 INI-file
   3.7 Limitations

4 Future use of the backtracking software
   4.1 Real-time motion control
   4.2 Pathfinder
       4.2.1 Basic-Pathfinder
       4.2.2 Path processing
       4.2.3 Object Estimation

5 Test Results on Part I
   5.1 Backtrack
   5.2 Basic-Pathfinder

II Integration of a Vision System

6 Theory
   6.1 The Stereo Problem
       6.1.1 System Overview
   6.2 Stereo Algorithms
       6.2.1 Block Matching
       6.2.2 Local Phase Estimation
   6.3 Image processing
       6.3.1 Object segmentation using histogram
       6.3.2 Otsu's method
       6.3.3 Morphological operations

7 Vision System
   7.1 Problem Discussion
   7.2 The Developing Procedure
   7.3 Image processing
   7.4 Find the solution

8 Test Results on Part II

9 Discussion

10 Conclusions And Future Work
   10.1 Conclusions
   10.2 Future Work

Bibliography

A Technical Specifications

B Kinematics
   B.1 Forward Position Kinematics
   B.2 Forward Velocity Kinematics


Chapter 1

Introduction

This chapter will first give an overview of the background to this master's thesis and an introduction to Motoman Robotics Europe AB. Then the purpose of this master's thesis is explained, and finally the outline of the thesis is presented.

1.1 Background

This master's thesis was carried out at Motoman Robotics AB in Kalmar, and it covers a collision handling system for their industrial robots.

Today, when a collision occurs, the arm has to be retrieved, the cause of the collision corrected, and the programmed job restarted. This process has to be performed quickly to maintain production. The arm can be retrieved manually or automatically by pressing a button. If the robot is retrieved manually, especially in tight spaces, it is a time-consuming task, and the robot operator has to have good knowledge of how to control the robot or a new collision might take place. To improve the process of restarting the job after a collision, and to allow employees with limited knowledge of robot handling to retract the arm and restart the job, Motoman provides an automatic retraction function. However, problems may occur when using Motoman's current retraction solution, because it only executes the programmed job backwards. This may cause problems when executing multi-task jobs, as explained in Chapter 3.

The problem description for this thesis was developed in collaboration between Richard Warldén at Motoman and us. It was not fully stated at the start of the project, because all the problems had not yet been identified; the problem description evolved during the background research.


1.1.1 Research and Development

During the research for Part I it was hard to find previously published articles about the collision problem, because the solution had to be in-house and specific to Motoman's NX100 system; the reason for this is described in Chapter 3. Almost all the information was therefore collected from the Motoman system manuals and documents, and we also had great help from the employees at Motoman. To be able to control and program the robot, we took a course in robot handling at Motoman. After this we could generate jobs on our own and start to study the motion and the limits of the robot. All our tests were performed on an HP3L, but the results can be applied to almost all the products in Motoman's robot park with only small modifications in the software setup. To increase our knowledge of robot motion, we read several papers on robot kinematics and motion control.

To find information for the pilot study of a vision system in Part II, we read literature and papers on image processing and had email contact with the image processing group at the Department of Electrical Engineering at Linköping University.

The software solutions for this project were implemented using C/C++ and VxWorks for the robot handling, while Matlab was used for the vision system. In the early stages of the development process for the robot handling function, a test bench was developed for easier error diagnosis. The test bench was developed in C#, and a communication protocol was developed in C++ to handle the communication between the test bench and the robot. In this test bench the main structure of the software was developed and tested.

1.1.2 Motoman Robotics AB

Motoman Robotics Europe AB was founded 30 years ago as a manufacturer and supplier of welding machinery, primarily to the automotive industry. The company became part of the Japanese electronics group Yaskawa Electric Corporation in the mid-1990s.

Motoman products and services are well established in Europe and other parts of the world, with more than 150,000 unit installations in full production (more than 15,000 units in Europe).

Motoman Robotics Europe AB's head office for the European organisation is located in Kalmar, Sweden. Development and manufacturing mainly take place in production units in Sweden and Germany. Since Motoman started in 1976, the company has steadily developed as a robot supplier. As early as 1996 it introduced a solution for controlling several robots with a single control system, which helped it achieve its current position as one of the world's leading robot manufacturers.


1.2 Purpose

The main purpose of this master's thesis was to design and develop a software solution for a safe retraction of the robot arm. The thesis should also investigate further use of the software in applications like real-time motion control and an autonomous pathfinder with integration of a vision system.

1.3 Thesis outline

This master's thesis is divided into two main parts. Part one describes the development of the backtracking function and its spin-off functions; the second part is a pilot study of the integration of a vision system, which complements the spin-off functions. The contents of the following chapters are:

Part I - Motion Control

Chapter 2 - This chapter presents the basics of an industrial robot and the NX100 control system. It also contains the kinematics theory which describes the motion of the robot.

Chapter 3 - A more detailed description of the problem is given and the solution methods for a safe retraction are presented.

Chapter 4 - This chapter presents the possibility of using the backtracking function in applications like real-time motion control and an autonomous pathfinder.

Chapter 5 - This chapter displays the test results from the backtracking solution method and the autonomous pathfinder.

Part II - Integration of a Vision System

Chapter 6 - The theory for the chosen vision system is presented.

Chapter 7 - How the vision system was developed is explained and the problems are discussed.

Chapter 8 - This chapter displays the test results from the vision system.

Chapter 9 - In this chapter the solution methods are discussed along with the working process.

Chapter 10 - Presents the conclusions from the discussion in Chapter 9 and discusses further work on the software.


Part I

Motion Control


Chapter 2

Robotics

Figure 2.1. A Motoman Robot

The word robot has been used to describe a wide range of different items, from things in our homes such as autonomous vacuum cleaners and lawn mowers to autonomous submarines and missiles. Almost anything that operates with some degree of autonomy, usually under computer control, has at some point been called a robot.

2.1 Industrial robots in general

In 1956 the company Unimation, founded by George Devol and Joseph F. Engelberger, produced the first industrial robot [1]. It was mainly used to transfer objects from one point to another, less than a dozen feet apart. Today, industrial robots are used for a wide range of applications including welding, painting, ironing, assembly, pick and place, packaging and palletizing, product inspection, and testing. The size of an industrial robot varies depending on the application and the amount of load it is designed to carry.


An industrial robot consists of a moving mechanical arm, see Figure 2.1. The arm consists of a sequence of rigid bodies connected by revolute or prismatic joints, also called axes. The most common industrial robot on the market today has six axes: three axes are required to reach a point in space, and three more are required to fully control the orientation of the end of the arm. An electrical motor, a servo, controls every axis, and a computer controls the movements of the system.

The robot uses pre-programmed jobs to perform a variety of tasks. The setup or programming of motions and sequences for an industrial robot is typically done by moving the robot to the desired location and storing the position together with the desired motion type. This can be very time consuming, but in the last few years the industrial robotics business has provided enough speed, accuracy and ease of use for most applications.

2.2 Sensors on the robot

The use of sensors on industrial robots is of vital significance for achieving the high-performance robotic systems that exist today. There are various types of sensors available, often divided into sensors that measure the internal state of the robot (proprioceptive sensors) and sensors that provide knowledge about the surrounding environment (heteroceptive sensors) [14].

Examples of proprioceptive sensors are encoders and resolvers for joint position measurements and tachometers for joint velocity measurements. Heteroceptive sensors include, for example, force sensors for end effector force measurements and vision sensors for object image measurements when the manipulator interacts with the environment.

Due to cost and reliability requirements, the sensors in a standard robot used today are usually proprioceptive sensors, which only measure the servo angular positions. In some special applications heteroceptive sensors are used: for example, a vision system can be used where an object is to be picked from a transport band, and force sensors are used on welding robots, which need contact with the welded object but cannot push too hard.

2.3 Kinematics of the robot

The standard Motoman robot on the market today consists of six axes, S, L, U, R, B and T, see Figure 2.2, forming a chain of serially linked axes. There is a lot of resemblance between the robot arm and a human arm. Like the human arm, the robot arm can be divided into two main parts, an "arm part" and a "hand part", where the wrist is at servo B. The S, L and U axes control the "arm part" and generate the big movements. The "hand part" is controlled by R, B and T, and it is used for the precision movements of the tool center point, TCP. The R servo is placed before the wrist, but it has the same effect on the TCP as twisting the forearm has on the hand. In this thesis we have defined three different reference systems, to make it easier for the reader when we refer to them throughout the text:

• The Base Frame

• The Wrist Frame

• The Tool Frame

The robot's base position is when the axes are aligned as shown in Figure 2.2. In this position all the servo values are zero, and the kinematics calculations in this thesis are based on this definition.

The Base Frame

The base frame has its origin defined in the center point of the robot base, as shown in Figure 2.2. The S-servo has its rotation ωS around z in the base frame.

The Wrist Frame

The wrist frame origin has been defined at point P on the robot, see Figure 2.2. The relation between the base frame and the wrist frame depends only on the first three servos S, L and U, which, as described above, control the big movements.

The Tool Frame

The tool frame has its origin defined at the TCP of the robot, see Figure 2.2. The relation between the wrist frame and the tool frame depends only on the servos R, B and T.

2.4 System Overview

The robot cell consists of a robot arm, an NX100 control box and a programming pendant box, see Figure 2.3. The NX100 control box generates commands that the robot arm executes, and the programming pendant box is the user interface toward the NX100, where the robot can be programmed.

2.5 Robot Control

The robot is controlled by programmed jobs, see Table 2.1 and Figure 2.4 for a short example. A job consists of job lines, and on every job line there is an instruction giving a position, a motion type and a velocity. The three main motion types are


Figure 2.2. The dynamics of the robot

Figure 2.3. System Overview

• MOVL - Moves the TCP, see Section 2.3, along a straight line toward the desired coordinate.

• MOVC - Moves along a circle. At least three coordinates have to be used to define the circle, and the MOVC instructions then generate a circular motion through them.

• MOVJ - Moves the robot arm in a way that is optimal for the servos.

The velocities can be defined either as joint speed or as tool speed. The joint speed is set between 0-100% and specifies the maximum allowed speed for all the servos during the execution of an instruction; see Appendix A for maximum servo speed definitions. The tool speed is set in the range 0-9000 cm/min and specifies the maximum allowed speed for the TCP during the execution of an instruction.


0 NOP
1 MOVL P000 V=500.0
2 MOVJ P001 VJ=0.25
3 MOVC P001 V=500.0
4 MOVC P002 V=500.0
5 MOVC P003 V=500.0
6 MOVJ P003 VJ=0.25
7 MOVJ P004 VJ=0.25
8 MOVL P000 V=500.0
9 END

Table 2.1. Short example of a job where P00i are the positions.

Figure 2.4. The result from the job example in Table 2.1.

2.5.1 NX100 control system

The NX100 control system controls the arm by generating pulses for the servos. The NX100 makes it possible to control up to 36 axes, with synchronized control of four robots and external axes. The programming capacity is 60,000 steps, 10,000 instructions and 40 input and 40 output signals (extendable to 1024/1024). Because of the control system's PC architecture, it is possible to communicate with other systems via a communication protocol.

2.5.2 NX100 Parameters

A robot can be seen as a closed box that is manipulated by programmed jobs executed in the NX100 control system. One way to generate these jobs is to change to TEACH mode, which is one of the three modes of the robot. In TEACH mode the robot can be programmed from the programming pendant box by moving it through the desired route. The other two modes are PLAY and REMOTE: PLAY mode is activated when a pre-programmed job is to be executed autonomously, and REMOTE mode is a monitor mode which can be used with a simulation program called MotoSim. It is possible to upload information to some registers in the NX100 by using IOs on the robot or by using functions in the NxAddon programming library. The uploaded information can then be used in the pre-programmed jobs. Three kinds of variables are extra significant for this thesis, especially for Chapter 3:

• C-variables

• D-variables

• P-variables

C-variables

A job line always consists of a motion type, a position variable and a motion velocity, as described in Section 2.5. When a job is programmed using the pendant box, the position variable is not visible on the job line; only the motion type and the velocity are visible, but the position is stored in a C-variable. The C-variables only exist inside the NX100 and cannot be modified from the outside while the job is running. They can only be modified before the job has been loaded, because all C-variables have to be pre-defined.

D-variables

D-variables are of type double in the NX100 system, and they can be changed from the outside while a job is executing.

P-variables

The P-variables are, like the C-variables, position variables. The difference is that a P-variable can be changed during the execution of the job. It cannot be changed directly, but by using the SETE instruction in the NX100 a P-variable can be assigned a D-variable value.

2.5.3 Safe home location

A safe home location (SHL) is a place that the arm can retract to. There can be several SHLs in a cell, and an SHL can be specified by using existing cubes in the NX100. The cubes are specified with a height, a width, a depth and a location in the robot cell. Another way to specify an SHL is to insert a SET instruction in the programmed job file, which sets a predefined IO port and defines where in the job file the SHL is located.
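A cube-defined SHL reduces to a point-in-box test. A minimal C++ sketch with our own types; the real NX100 cube definition may differ:

    // A safe home location defined as an axis-aligned cube in the robot cell.
    struct Cube {
        double x, y, z;                  // location of one corner
        double width, depth, height;
    };

    // True if the TCP position (px, py, pz) lies inside the cube.
    bool insideSHL(const Cube& c, double px, double py, double pz) {
        return px >= c.x && px <= c.x + c.width &&
               py >= c.y && py <= c.y + c.depth &&
               pz >= c.z && pz <= c.z + c.height;
    }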


2.6 Kinematics

The theory in this section is primarily taken from [7], [4], [8] and [14].

2.6.1 Rigid motion

Figure 2.5. Two reference frames

The relation between two frames $(x_0, y_0, z_0)$ and $(x_1, y_1, z_1)$, seen in Figure 2.5, can be described with a combination of a pure rotation and a pure translation, called a rigid motion. The position of a point $p^1$ in frame 1 with respect to frame 0 is given by

$p^0 = R_1^0\, p^1 + d_1^0$    (2.1)

where $R_1^0$ is the rotation of frame 1 relative to the reference frame 0 and $d_1^0$ is the translation of frame 1 with respect to the reference frame 0.

Homogeneous transformation

For an easier representation, the relation between the two frames can be described as a homogeneous transformation given by

$\begin{pmatrix} p^0 \\ 1 \end{pmatrix} = T_1^0 \begin{pmatrix} p^1 \\ 1 \end{pmatrix} = \begin{pmatrix} R_1^0 & d_1^0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} p^1 \\ 1 \end{pmatrix}$    (2.2)

where $T_1^0$ includes all the necessary information about the position and rotation of frame 1 with respect to the reference frame 0. The inverse transformation $T_0^1$ is given by

$T_0^1 = \begin{pmatrix} R_0^1 & -R_0^1 d_1^0 \\ 0 & 1 \end{pmatrix}$    (2.3)

Equation (2.2) can easily be extended to describe more complex relations between frames. If there were three frames, the mapping from frame 2 to frame 0 is accomplished by multiplication of the individual homogeneous transformations:

$T_2^0 = T_1^0\, T_2^1$    (2.4)
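To make equations (2.1)-(2.4) concrete, the following C++ sketch implements a homogeneous transformation with composition and inversion. The types and function names are our own illustration and not part of the NX100 software.

    #include <array>

    using Mat3 = std::array<std::array<double, 3>, 3>;
    using Vec3 = std::array<double, 3>;

    struct Transform {        // homogeneous transformation T = [R d; 0 1]
        Mat3 R;               // rotation part
        Vec3 d;               // translation part
    };

    // Rotate a vector: R * v
    static Vec3 rotate(const Mat3& R, const Vec3& v) {
        Vec3 r{};
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                r[i] += R[i][j] * v[j];
        return r;
    }

    // Map a point from frame 1 to frame 0, equation (2.1): p0 = R*p1 + d
    Vec3 apply(const Transform& T, const Vec3& p) {
        Vec3 r = rotate(T.R, p);
        for (int i = 0; i < 3; ++i) r[i] += T.d[i];
        return r;
    }

    // Composition, equation (2.4): T20 = T10 * T21
    Transform compose(const Transform& T10, const Transform& T21) {
        Transform T20;
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j) {
                T20.R[i][j] = 0.0;
                for (int k = 0; k < 3; ++k)
                    T20.R[i][j] += T10.R[i][k] * T21.R[k][j];
            }
        T20.d = apply(T10, T21.d);        // d20 = R10*d21 + d10
        return T20;
    }

    // Inverse, equation (2.3): T^-1 = [R^T, -R^T d; 0 1]
    Transform inverse(const Transform& T) {
        Transform Ti;
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                Ti.R[i][j] = T.R[j][i];   // transpose
        Vec3 t = rotate(Ti.R, T.d);
        for (int i = 0; i < 3; ++i) Ti.d[i] = -t[i];
        return Ti;
    }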


2.6.2 Forward Position Kinematics

Forward position kinematics is a method for finding the position of the TCP expressed in the base frame when the servo angles and the lengths of the axes are known. The robot can be seen as a set of rigid links connected by various joints, and the transformation between the end points of an axis is based on the angle of rotation and the amount of linear displacement. The joint angles are

$q = [q_S, q_L, q_U, q_R, q_B, q_T]^T$    (2.5)

To express the position of the TCP in the base frame, the homogeneous transformation matrix between the tool frame and the base frame is needed. To calculate this transformation matrix, the homogeneous transformation between the base frame and the wrist frame is first calculated. The complete transformation from a point defined in the wrist frame to the base frame is given by the following equations:

$p^{bs} = d_S^{bs} + R_S^{bs}\, p^S$    (2.6)

$p^S = d_L^S + R_L^S\, p^L$    (2.7)

$p^L = d_U^L + R_U^L\, p^U$    (2.8)

$p^U = d_{wr}^U + R_{wr}^U\, p^{wr}$    (2.9)

The rotation of the wrist frame in relation to the base frame and the position of the wrist frame center point with respect to the base frame follow from (2.6)-(2.9):

$R_{wr}^{bs} = R_S^{bs} R_L^S R_U^L R_{wr}^U$    (2.10)

$d_{wr}^{bs} = d_S^{bs} + R_S^{bs} d_L^S + R_S^{bs} R_L^S d_U^L + R_S^{bs} R_L^S R_U^L d_{wr}^U$    (2.11)

The position of the wrist frame center point with respect to the base frame can be reduced to

$d_{wr}^{bs} = \begin{pmatrix} d_h \cos q_S \\ d_h \sin q_S \\ d_v \end{pmatrix}$    (2.12)

where $d_h$ and $d_v$ are the horizontal and vertical distances given by

$d_h = l_{LS} + l_{UL} \sin(q_L) + l_{BU} \cos(q_U - q_L)$    (2.13)

$d_v = l_{UL} \cos(q_L) + l_{BU} \sin(q_U - q_L)$    (2.14)

The complete homogeneous transformation between the base frame and the wrist frame is given by

$\begin{pmatrix} p^{bs} \\ 1 \end{pmatrix} = T_{wr}^{bs} \begin{pmatrix} p^{wr} \\ 1 \end{pmatrix} = \begin{pmatrix} R_{wr}^{bs} & d_{wr}^{bs} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} p^{wr} \\ 1 \end{pmatrix}$    (2.15)

where $R_{wr}^U = I$, because the relation between frame U and the wrist frame is only a translation along the positive z-axis of frame U.


The relation between the wrist frame and the tool frame is given by the following equations:

$p^{wr} = d_R^{wr} + R_R^{wr}\, p^R$    (2.16)

$p^R = d_B^R + R_B^R\, p^B$    (2.17)

$p^B = d_T^B + R_T^B\, p^T$    (2.18)

The rotation of the tool frame in relation to the wrist frame and the position of the tool frame center point with respect to the wrist frame follow from (2.16)-(2.18):

$R_{tool}^{wr} = R_R^{wr} R_B^R R_T^B$    (2.19)

$d_{tool}^{wr} = R_R^{wr} R_B^R d_T^B$    (2.20)

The complete homogeneous transformation between the wrist frame and the tool frame is given by

$\begin{pmatrix} p^{wr} \\ 1 \end{pmatrix} = T_{tool}^{wr} \begin{pmatrix} p^{tool} \\ 1 \end{pmatrix} = \begin{pmatrix} R_{tool}^{wr} & d_{tool}^{wr} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} p^{tool} \\ 1 \end{pmatrix}$    (2.21)

Finally, the position of the TCP expressed in the base frame follows from (2.15) and (2.21):

$\begin{pmatrix} p^{bs} \\ 1 \end{pmatrix} = T_{wr}^{bs}\, T_{tool}^{wr} \begin{pmatrix} p^{tool} \\ 1 \end{pmatrix}$    (2.22)

or, equivalently,

$p^{bs} = R_{tool}^{bs}\, p^{tool} + d_{tool}^{bs}$    (2.23)

$R_{tool}^{bs} = R_{wr}^{bs} R_{tool}^{wr}$    (2.24)

$d_{tool}^{bs} = d_{wr}^{bs} + R_{wr}^{bs} d_{tool}^{wr}$    (2.25)

In Appendix B.1 the transformations are explained more thoroughly.
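As a small worked example of equations (2.12)-(2.14), the C++ sketch below computes the wrist frame center point in the base frame from the angles of the S, L and U servos. The link lengths lLS, lUL and lBU are parameters that must come from the specific robot model; the function name is ours.

    #include <cmath>

    struct Vec3 { double x, y, z; };

    // Wrist center in the base frame, equations (2.12)-(2.14).
    Vec3 wristCenter(double qS, double qL, double qU,
                     double lLS, double lUL, double lBU) {
        const double dh = lLS + lUL * std::sin(qL) + lBU * std::cos(qU - qL); // (2.13)
        const double dv = lUL * std::cos(qL) + lBU * std::sin(qU - qL);       // (2.14)
        return { dh * std::cos(qS), dh * std::sin(qS), dv };                  // (2.12)
    }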

2.6.3 Inverse Position Kinematics

The inverse kinematics problem is, given a position and orientation of the TCP, to compute the corresponding joint angles. Compared to forward kinematics, where a unique closed-form solution always exists, this is a much more complex problem. It is not certain that a solution to the inverse kinematics problem exists, for example if the desired position is beyond the reach limit of the robot, that is, outside its workspace volume; and if a valid solution exists, it is not certain that it is unique. An example of multiple solutions can be seen in Figure 2.6, where there are four different configurations that reach the same position for the wrist frame center point, obtained by combining the binary decisions "forward/backward" and "elbow up/elbow down". If the R, B and T servos are included, there is an infinite number of solutions for reaching the desired position with the TCP.


Figure 2.6. Multiple solutions

A simple way to handle this and minimize the calculations is to lock the tool frame at a certain angle by predefining the rotation matrix $R_{tool}^{bs}$, see Section 2.6.2. In that case the position of the wrist frame center point expressed in the base frame is given by

$d_{wr}^{bs} = d_{tool}^{bs} - R_{tool}^{bs}\, d_T^B$    (2.26)

The S joint angle $q_S$ is then

$q_S = \arctan\left(d_{wr,x}^{bs},\ \pm d_{wr,y}^{bs}\right)$    (2.27)

where a positive $d_{wr,y}^{bs}$ corresponds to the robot facing "forward" and a negative one to "backward". The horizontal and vertical distances $d_h$ and $d_v$ from the wrist frame center point to the center point of frame S are given by

$d_h = \sqrt{(d_{wr,x}^{bs})^2 + (d_{wr,y}^{bs})^2}, \qquad d_v = d_{wr,z}^{bs}$    (2.28)

The law of cosines gives the angle $q_U$:

$q_U = \pm \arccos\left(\dfrac{d_h^2 + d_v^2 - d_{UL}^2 - d_{wrU}^2}{2\, d_{UL}\, d_{wrU}}\right)$    (2.29)

A positive value of $q_U$ gives the "elbow up" configuration shown in Figure 2.6 and a negative value gives the "elbow down" configuration. The last servo angle, $q_L$, is given by

$q_L = \arctan\left(\dfrac{d_h}{d_v}\right) - \arctan\left(\dfrac{d_{wrU} \sin q_U}{d_{UL} + d_{wrU} \cos q_U}\right)$    (2.30)

The inverse position for the wrist uses $R_{tool}^{wr}$ as input; this is straightforwardly derived from the known rotation matrix $R_{tool}^{bs}$ and the rotation matrix $R_{wr}^{bs}$, which can be calculated as soon as $q_S$, $q_L$ and $q_U$ are known:

$R_T^{wr} = R_{tool}^{wr} = R_{bs}^{wr} R_{tool}^{bs}$    (2.31)

Two solutions exist: one where $q_B$ is positive, called "non-flip", and one where $q_B$ is negative, called "flip".


2.6.4 Forward Velocity Kinematics

This problem is similar to the forward position kinematics problem, but the result is the velocity of the tool reference frame. To calculate it, the joint velocities and the results from Section 2.6.2 have to be known. Given the joint angles $q$ from equation (2.5), the joint velocities are

$\dot{q} = [\dot{q}_S, \dot{q}_L, \dot{q}_U, \dot{q}_R, \dot{q}_B, \dot{q}_T]^T$    (2.32)

The relation between the joint positions $q$ and the position of the TCP is nonlinear, which can be seen in the transformation matrix $T$, but the relation between the joint velocities $\dot{q}$ and the TCP velocity in the base frame $v^{bs}$ is linear: if one servo drives twice as fast, $v^{bs}$ will be twice as high. If the length of an axis, $l$, the rotational speed of the axis at point A, $\dot{q}_A$, and the velocity at the end point A, $v_A$, are known, it is possible to calculate the velocity at the other end point B, $v_B$, with

$v_B = \dot{q}_A \times \vec{AB} + v_A$    (2.33)

where

$\left|\vec{AB}\right| = l$    (2.34)

With the results from forward position kinematics in Section 2.6.2, where the positions and transformations of the joints have been calculated, the velocity $v^{bs}$ of the TCP in the base reference frame is found by recursion from the base frame to the wrist frame, knowing the joint velocities $\dot{q}$ and the positions of the joints. For a more thorough description, see Appendix B.2.
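One step of the recursion in equation (2.33) in C++: the velocity of an axis end point B follows from the angular velocity vector at A, the vector from A to B (of length l), and the velocity at A. A minimal sketch with our own names:

    #include <array>

    using Vec3 = std::array<double, 3>;

    static Vec3 cross(const Vec3& a, const Vec3& b) {
        return { a[1] * b[2] - a[2] * b[1],
                 a[2] * b[0] - a[0] * b[2],
                 a[0] * b[1] - a[1] * b[0] };
    }

    // Equation (2.33): vB = qdotA x AB + vA. qdotA is the angular velocity
    // vector of the link at point A and AB the vector from A to B, both
    // expressed in the base frame.
    Vec3 endPointVelocity(const Vec3& qdotA, const Vec3& AB, const Vec3& vA) {
        Vec3 vB = cross(qdotA, AB);
        for (int i = 0; i < 3; ++i) vB[i] += vA[i];
        return vB;
    }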


Chapter 3

Backtracking

This chapter will present a more detailed problem description and discuss different solution methods for a safe retraction of the arm.

3.1 Problem Overview

The Backtracking concept is meant to be used in tight spaces or when the robot is controlled by a person without any knowledge of robot handling. The function has to be active at all times; it cannot depend on specific servo or alarm situations, and it also has to work after an emergency stop. Whenever the robot has to be stopped during production, it must be possible to retract the robot to the last passed safe home location by pushing a single button. When the retraction has been completed, the monitored job has to be reset to start on the job line where the retraction ended, so that the job can safely be restarted by pressing the play button on the programming pendant (PP). The function has to work on the existing NX100 control system with a minimum of modifications outside the robot.

When using robots in industry, there is always a possibility that a collision may occur. The collision can be the result of careless programming or a change in the environment surrounding the robot. When a collision occurs on a Motoman robot, the emergency brakes are activated and the current to the servos is turned off to prevent the robot from damaging itself and the surrounding objects even more. When the collision alarm has been reset, the robot has to be retrieved from the collision area. This can be done manually or, as in most cases, using the existing retraction function in the NX100. The existing retraction function can be activated by pressing a back button on the programming pendant box, but this does not always ensure a safe retraction of the robot arm. An industrial robot usually has several different tasks to perform, which can be totally independent of each other. For example, it may lift a box from a feeder band, put some metal in a lathe, and lift another box off another feeder band, see Figure 3.2.


All these tasks are written in the same job file, named with labels, and do not have to be executed in chronological order. They might instead be controlled by external sensors which signal when a task is ready to be executed.

When a collision has occurred and the existing retraction function is activated, the system executes the job in reversed order by stepping line by line, see Figure 3.1. If the job is divided into small tasks, as described in the example above, and a collision occurs while performing the motions specified by task 2, the arm will be retracted correctly until it reaches the end of task 2. It will then execute the motions in task 1, and this may cause a new collision, especially when the robot works in narrow spaces.

Figure 3.1. Using the back button

A Motoman robot consists of at least six axes, but there are usually also some external axes that make it move along a working line or hold on to its work piece. To ensure a safe retraction, the arm and the external axes must be retracted in a synchronized way to avoid further collisions.

In some cases there are several robots that are synchronized and working together, which means that they move around each other while performing different tasks. If one of the robots should collide, with an object or another robot, all the robots have to be retracted in a synchronized way or a new collision may occur.

3.2 How to retrieve a safe way home

Figure 3.2. Retracting arm

The first thought about how to solve this problem was to use external sensors and a sensor fusion algorithm to guide the robot back, as described in [16] and [5], but during the work we realized that it is a complex problem to create a general solution with only sensors, as explained further below.

3.2.1 Using external sensors

An industrial robot has a specific motion compared to a mobile robot, because it is always connected to the ground. This means that when the head has passed an object the danger is not over, since no point along the six axes may pass through the collision area. The conclusion was that the whole arm would have to be covered by a sensor shield to guarantee total collision avoidance when the arm is retracted. One method is to let a vision system monitor the arm and control the movement. This approach demands that the vision system monitors the whole arm at all times, which might not be possible if the arm is working inside an object or in tight spaces. Another method is to use sensors mounted on the robot arm, covering the surfaces of all the axes, to avoid collisions when retracting the arm. This would however mean major modifications of the robot and could be expensive, which would make it hard to promote as a selling product on the market.

Because of the reasons mentioned above, and because Motoman wished for a simple solution that would not demand major modifications of the robot, none of these methods seemed suitable for the retraction problem. Instead the method described in the next section was chosen. It is a more general solution and works on all NX100 systems without any modifications of the robot.


3.2.2 Path recording

By sampling the motion of the robot, a collision-free path can be retrieved. This ensures a safe retraction of the arm and does not demand any modifications outside the NX100 control system. To ensure a collision-free path, the sample rate has to be high enough; otherwise the arm might collide with additional objects when retracted, because it starts to diverge too much from the original path, see Figure 3.3. If the sample rate is too high when calling for pulse information, the fetching will take too much of the CPU in the NX100, so the rest of the system will fail and the system will be stopped.

Figure 3.3. Bad Sampling

To be able to store a large amount of information from the sampled robot motion, a circular buffer was introduced. The advantage of a circular buffer over a dynamic buffer is that if the robot keeps on going, a dynamic buffer would keep on growing and eventually cause a memory overflow. This problem never occurs with a circular buffer, because when it is full it simply overwrites the oldest data. The disadvantage is that if the circular buffer is too small it cannot bring the robot all the way back to the safe home location, but a safety measure has been taken: when a pulse is loaded into the buffer it is also written to a text file. The text file allocates less memory than the buffer to store the same amount of data and can be used to retract the arm if the buffer has been overwritten. This also makes it possible to use the software for executing text files created on an external unit.
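The circular buffer can be sketched as follows: a fixed-size store whose write index wraps around so that, once full, the oldest samples are overwritten. Types, sizes and names below are illustrative, not those of the actual Backtrack module.

    #include <cstddef>
    #include <vector>

    // One sampled path point: one servo pulse value per axis.
    struct PathPoint {
        long pulses[6];
    };

    // Fixed-size circular buffer: when full, the oldest samples are
    // overwritten, so memory use is bounded however long the robot runs.
    class CircularBuffer {
    public:
        explicit CircularBuffer(std::size_t capacity)
            : buf_(capacity), head_(0), count_(0) {}

        void push(const PathPoint& p) {
            buf_[head_] = p;
            head_ = (head_ + 1) % buf_.size();      // wrap around
            if (count_ < buf_.size()) ++count_;     // saturates at capacity
        }

        // Newest sample at index 0, oldest at size()-1: convenient for
        // executing the recorded path backwards during retraction.
        const PathPoint& fromNewest(std::size_t i) const {
            return buf_[(head_ + buf_.size() - 1 - i) % buf_.size()];
        }

        std::size_t size() const { return count_; }

    private:
        std::vector<PathPoint> buf_;
        std::size_t head_;
        std::size_t count_;
    };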

Instead of a text file, it is also possible to send the data to a memory located on an external unit. This increases the amount of data that can be stored, so a data overflow will never occur. Storing data on an external unit has not been tested thoroughly and is more of a future work; the preparations have been done, and it would not require much work to implement.

3.2.3 Test bench

Our first idea was to use an existing function in the API for the NX100 to generate the motion of the arm. By sending the sampled pulses to the function, the desired retraction path would be executed.

Figure 3.4. System overview of the test bench

It is hard to perform error diagnosis on the NX100 control system once the software is loaded, so a test bench was developed and placed on an external unit. In the test bench the backtracking software could be tested, and error diagnosis of the created software became possible. To be able to load pulses from the robot into the buffer and evaluate the solution method, a communication protocol was needed. In the beginning a manual was used which described the communication between the NX100 and an external unit. This manual did not, however, contain all the information necessary for developing our own communication with the robot. To retrieve the missing information, the communication between the NX100 and the program MotoSim was monitored using a software tool called SNIFFER. When the information had been retrieved, the communication protocol was developed. In the resulting test bench it is possible to perform motions of the robot that have been stored in the buffer. As seen in Figure 3.5, the test bench consists of the following main functions:

• Start: The pulses from the robot are retrieved and stored in the buffer.

• Stop: Stops the sampling of the motion.

• Read: Displays the content of the buffer.

• Execute Forward: Executes the motion specified by the pulses stored in the buffer. The motion is executed from the first sampled pulse up to the last sampled pulse.

• Execute Backward: Executes the motion specified by the pulses stored in the buffer. The motion is executed from the last sampled pulse back to the first sampled pulse.

Figure 3.5. The test bench

When we started to execute the motion stored in the buffer, we discovered that the motion became very unsteady. The arm moved toward the point specified by the first loaded pulse, stopped, then started to move toward the point specified by the second loaded pulse, and so on. Because the sampled path consists of a large number of pulses, the solution of using the existing API function to perform the motion did not give a satisfactory result. Instead another approach was used, which is described in Section 3.3.


3.3 How to get a steady motion

There is an existing API in the NX100 which provides an interface for sending a move instruction to the robot from a module inside the NX100 or from an external unit. The problem is that it is only possible to send one instruction at a time. This means that the robot will execute one instruction, then stop and wait. The result is an unsteady motion when trying to execute several move instructions.

To be able to generate a smooth motion, the NX100 system reads four instructions ahead in a job file so it can optimize its motion, see Chapter 2. By letting the software create a job file and load it into the NX100, this advantage could be used.

Two different solutions that generate a steady motion have been developed. One is to let the software construct a job file which uses the P-variables, see Chapter 2, to execute the motion, and then load the buffered data into the P-variables. Because of limitations in the NX100, the P-variables can only be modified before the robot initiates the job. After the initiation, the only way to modify a P-variable is to update the D-variables from the buffer and use the SETE instruction. By continuously updating the D-variables from the buffer in every job cycle, the robot will modify the P-variables on its own.
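The update-then-move idea can be sketched as a generator for such a job file. The INFORM-style lines emitted below mimic the format of Table 2.1, but the instruction layout, variable numbering and speed are invented for illustration; the job actually generated by the module differs.

    #include <fstream>
    #include <string>

    // Writes an illustrative job skeleton: each cycle copies fresh axis
    // values from D-variables into a P-variable (SETE) and then moves to
    // it. Only the update-then-move principle is shown here.
    void writeRetractionJob(const std::string& path, int pointsPerCycle) {
        std::ofstream job(path);
        job << "NOP\n";
        for (int i = 0; i < pointsPerCycle; ++i) {
            for (int axis = 1; axis <= 6; ++axis) {
                // D-variable index: six values (one per axis) per path point.
                int d = i * 6 + (axis - 1);
                job << "SETE P00" << i << " (" << axis << ") D" << d << "\n";
            }
            job << "MOVJ P00" << i << " VJ=10.00\n";
        }
        job << "END\n";
    }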

A crucial point in this solution is to know when to update the D-variables from the buffer. If the updating is too slow, the same set of D-variables will be executed again and the same motion will be repeated. If this occurs, the arm will move the shortest distance from the last to the first position specified by the D-variables, which may cause an additional collision, see Figure 3.6.

Figure 3.6. The effect of the path when the update is too slow

If the D-variables instead are updated too fast, there will be a gap in the motion information and the arm will skitter, see Figure 3.7. This may also cause an additional collision.


Figure 3.7. The effect of the retraction path when the update is too fast

To solve this problem, a communication channel between the executing job and the module was established. By using IO outputs on the robot that both the module and the job can write to, the problem of when to update the D-variables could be solved. This, however, led to some additional problems regarding the steady motion.

When the upload of the new position variables takes longer than the time for the job to complete its motion instructions, the job will stop and wait for the data transfer to complete. The only way to avoid this stop is to increase the time it takes for the job to execute its motion instructions. There are two ways to accomplish that: either decrease the motion speed or decrease the sample rate. The problem with decreasing the motion speed is that it does not guarantee that the stop won't occur, but by designing a dynamic sampling algorithm the problem could be solved.

The algorithm is based on knowledge of the current executing job speed and the speed of the future retraction, see equations (3.1)-(3.6). Knowing these, it is possible to make sure that the time to execute the retraction instructions exceeds the time for uploading the new position variables.

The dynamic sampling algorithm also has the benefit of optimizing the sample rate. A slower speed in the monitored job does not require the same sample rate to guarantee a safe retraction of the arm, so by decreasing the sample rate when possible, memory can be saved for later.

The distance between two sample points with sampling time $t$ in the monitored job is given by $X[t]$. The value of $X[t]$ is predicted from the information at $t-1$:

$X[t] = T_s[t-1]\, V_o[t-1]$    (3.1)

where $T_s$ is the sample rate and $V_o$ is the velocity of the TCP. It is only possible to receive the joint velocities from the NX100, so $V_o$ has to be calculated with forward velocity kinematics, see Section 2.6.4.

The distance between two sample points in the created job is given by

$X[t] = T_{move}[t-1]\, V_b$    (3.2)

where $T_{move}$ is the time it takes to move from point $i$ to point $i+1$ and $V_b$ (constant) is the velocity between the points.

The time $T_{load}$, which is the time it takes to update the new positions from the buffer to the D-variables, must not exceed $T_{move}$ or a stop will occur. $T_{load}$ was measured to 10 ms/pulse. This gives the following condition:

$T_{move}[t] > T_{load}$    (3.3)

where $T_{load}$ is given by

$T_{load} = 10\, N_{NrOfAxis}$    (3.4)

Using equations (3.1), (3.2), (3.3) and (3.4), the minimum sample time is given by

$T_s[t] > \dfrac{10\, N_{NrOfAxis}\, V_b}{V_o[t]}$    (3.5)

The equation used in the software is

$T_s[t] = \dfrac{10\, N_{NrOfAxis}\, V_b}{V_o[t]} + K$    (3.6)

where a safety margin $K$ has been added. The software updates the sample rate in every sampling cycle by calculating $V_o$ and evaluating equation (3.6).
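Equation (3.6) translates directly into code. A minimal sketch, where the 10 ms/pulse figure is the measured T_load from above, V_b the constant retraction speed, V_o the current TCP speed from forward velocity kinematics (assumed nonzero while sampling), and K the safety margin:

    // Dynamic sample time, equation (3.6), in milliseconds.
    // Vb and Vo must be given in the same unit; Vo is assumed nonzero.
    double sampleTimeMs(int nrOfAxes, double Vb, double Vo, double K) {
        const double msPerPulse = 10.0;                 // measured T_load
        return msPerPulse * nrOfAxes * Vb / Vo + K;     // (3.6)
    }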

The other solution to the problem of steady motion is to store all the sampled data from the buffer into the C-variables, see Chapter 2, in the NX100. The C-variables are initiated before the job is executed, which guarantees an entirely smooth motion with none of the disturbances mentioned above. It is possible to store the sampled data in up to 15000 C-variables, depending on how much of the memory is occupied, so it is very unlikely to run out of C-variables. This solution, however, does not support some of the features described in Section 4.1, like the possibility of real-time modification of the position variables.

3.4 Environmental change

When retracting the arm, environmental changes must also be considered. If the robot picks up an object and a collision occurs while moving the object, the stored path from the collision point back to the picking point will be safe. However, if the object is not released at the picking point, the path from the picking point to the safe home location may not be safe, because the object that is now attached to the robot may cause a collision. To solve this problem, the values of the IO ports that determine the different environmental settings have to be saved as well. If the IO ports that control the environmental changes are known, they can be saved in the buffer. The stored environmental changes are analyzed before the job file is created: when a change in the environment occurred between two stored positions in the buffer, a job instruction called DOUT is inserted along with the address of the IO port and the new binary value. This environmental-change handling is only possible when the Backtrack module uses the C-variables method, because the solution for the other method became too complex.
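When the job file is created, the stored IO state of consecutive buffer positions can be compared and a DOUT instruction emitted for every bit that changed. Sketched below in C++; the port numbering and the DOUT operand format are illustrative only, not the module's actual output.

    #include <ostream>

    // Emit a DOUT instruction for every environment IO bit that changed
    // between two consecutive stored positions.
    void emitEnvironmentChanges(std::ostream& job,
                                unsigned prevState, unsigned currState,
                                const int* ports, int nPorts) {
        for (int bit = 0; bit < nPorts; ++bit) {
            bool was = (prevState >> bit) & 1u;
            bool now = (currState >> bit) & 1u;
            if (was != now)
                job << "DOUT OT#(" << ports[bit] << ") "
                    << (now ? "ON" : "OFF") << "\n";
        }
    }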

3.5 Restore the monitored job

When the function is activated, the current job information of the monitored job is retrieved and stored. The information consists of the job name and the current job line being executed. When the retraction has been completed, the monitored job is reloaded and reset to the job line where the retraction ended, so that the job can safely be restarted by pressing the play button on the programming pendant box. If this is not done and the operator presses the play button, the arm will travel the shortest distance to the motion specified by the job line where the collision occurred, which may cause further collisions. The solution method using D-variables allows the operator to stop the retraction of the arm by pressing the hold button on the programming pendant box. When the hold button is pressed, the software retracts the arm to the closest job line in the monitored job and reloads the job. This is accomplished by connecting each loaded pulse in the buffer to the corresponding job line in the monitored job. When the software detects that the hold button has been pressed, pulses are loaded into the executed job file until the closest job line is reached.


3.6 INI-file

The INI-file makes it possible to adapt the backtracking software to the application and the model of the robot. When the NX100 control system is activated, it sets the parameters of the software based on the information provided in the INI-file, see Figure 3.8. The INI-file provides the following information (an illustrative example is given after the list):

• Buffer: Specifies the size of the buffer.

• Address for the triggering IO port: Specifies the IO port used to trigger the retraction.

• Address for the sampling triggering: Specifies the IO port used to trigger the sampling of the motion.

• Control parameters: Specifies the IO ports which control different environmental settings.

• Control group: Specifies the robot and the external axes that will be used in the application.

• Robot information: Lengths of the axes and servo limits.
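For illustration, an INI-file along these lines could supply the parameters. Every key name and value below is invented; the actual file layout is defined by the Backtrack module.

    [Buffer]
    Size=5000                    ; number of path points

    [IO]
    RetractTrigger=40010         ; triggers the retraction
    SampleTrigger=40011          ; triggers the sampling
    ControlPorts=40020,40021     ; environmental settings

    [ControlGroup]
    Robot=R1
    ExternalAxes=B1

    [Robot]
    AxisLengths=150,614,200,640,100,30    ; mm, placeholder values
    ServoLimits=180,110,165,180,135,360   ; degrees, placeholder values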

Figure 3.8. System Overview of the Backtrack module

3.7 Limitations

A safe retraction of the arm cannot always be guaranteed. For example, if an object has been picked from an area and a unit that the NX100 does not control places a new object in that area, the arm will collide with the new object when it tries to put back the object it carries. If the backtracking function is to be used, it is up to the operator who creates the job file to decide how far the arm is to be retracted when a collision occurs.


Chapter 4

Future use of the backtracking software

The creation of the Backtracking software gave new possibilities for manipulating the motions of the robot. We investigated the possibility of using the software for more than just retracting the arm to a safe zone. This chapter presents the results of this investigation.

4.1 Real-time motion control

This section describes a method for controlling a Motoman robot in real-time by combining the backtracking function with an external unit. This new possibility to control the motion became the foundation for the cognitive system in Section 4.2.

One of the motion functions created in the backtracking software has opened the possibility to create a real-time motion control. The main approach for the function is that sets of X path points, see Figure 4.1, where one path point consists of one pulse for each axis of the robot, are loaded from the buffer into a created job on the NX100 in every cycle, see Section 3.3.

Figure 4.1. How the path consists of pulse sets

If the buffer consists of pulses that generate route A, it is possible to replace the pulses in the buffer so that the new pulses generate route B instead, see Figure 4.2. By doing this, a form of real-time motion control is achieved.

Figure 4.2. Real-time control

To avoid disturbances in the motion, the time to execute the set of path points in every cycle has to be longer than the upload time for the new set of path points for the next cycle. If this is not the case, the robot will stop and wait until the path points have been uploaded. The distance the arm has to travel before a change of the path takes effect is determined by the sample distance between the path points representing the current path and the number of path points used in each set, see Figure 4.3.

If the sampling distance between the path points is small and only one point is used to represent each set, the distance traveled before the path is changed will be shorter. However, to avoid the disturbance mentioned above, the speed Vs of the executing job has to be limited to fulfill (4.1), which leads to a slower motion of the arm. If a higher speed is wanted, the distance between the sample points has to be enlarged, at the cost of a longer motion distance before the path is changed. There will also be a small disturbance due to the restart of every cycle. This will, however, always occur, and the only way to minimize its appearance when executing a path is to reduce the number of cycles needed to reach the end point of the path. This can be done by using a larger sample distance between the path points, reducing the total number of points in the path, and/or using more path points in each set. This gives a smoother motion but a loss in motion control.

$$T_{\mathrm{TimeOfMotion}} > N_{\mathrm{NrOfAxis}} \cdot N_{\mathrm{NumberOfPathPoints}} \ \ [\mathrm{ms}] \qquad \text{(4.1)}$$

$$T_{\mathrm{TimeOfMotion}} = \frac{X}{V_s} \qquad \text{(4.2)}$$

X is the traveled distance when the set n, shown in Figure 4.4, is executed.

Figure 4.3. The distance traveled before the path is changed, depending on the number of path points used in each set.

Figure 4.4. A set consisting of one path point
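As a hedged numeric illustration of (4.1) and (4.2), assuming that the upload time in milliseconds equals the number of axes times the number of path points per set (one interpretation of the right-hand side of (4.1)):

```python
# Illustrative check of the cycle-time constraint (4.1)-(4.2).
# Assumption: uploading one pulse value takes about 1 ms, so the upload
# time is N_axes * N_pathpoints milliseconds.
def max_speed(sample_distance_mm, n_axes, n_pathpoints_per_set):
    """Upper bound on the job speed Vs [mm/ms] so that executing one
    set takes longer than uploading the next one."""
    upload_time_ms = n_axes * n_pathpoints_per_set            # right side of (4.1)
    travelled_mm = sample_distance_mm * n_pathpoints_per_set  # X in (4.2)
    return travelled_mm / upload_time_ms                      # Vs < X / T_upload

# Example: 6 axes, sets of 5 points sampled 2 mm apart give
# 10 mm / 30 ms ≈ 0.33 mm/ms, i.e. roughly 333 mm/s.
print(max_speed(2.0, 6, 5))
```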

If the real-time motion control is to be used only to reach a desired end point, with no desired path of how to reach it, there is another solution which gives a more direct real-time motion control. If the original path only consists of a start point and an end point, the path can be changed by just updating the end point. The job is designed with only one motion instruction with a constraint which is controlled by an IO-signal. When the software receives a new end point it updates the D-variables, and when the update is completed the IO-signal goes high. The current move instruction is aborted, the job starts over in a new cycle, and the arm starts to move towards the new end point, see Figure 4.5. This leads to no limitation of the speed between the points, because the path is changed as soon as the update is completed.


Figure 4.5. Change of end point

4.2 Pathfinder

By using the backtracking software to create a buffer and a job file, which is updated using D-variables as described in Section 3.3, the motion of the arm can be changed during the execution of the job by loading new data into the buffer. The new data will be loaded into the D-variables and the new motion will be executed in the next job cycle. This makes it possible to develop software solutions for applications that demand an autonomous change of the arm's motion while the job is being executed.

This section describes a method for how the robot can learn from its mistakes and generate a collision-free path using the backtracking software. By using a vision system, a more advanced decision method can be used to find a path around the collision area, which is discussed in Chapter 7.

4.2.1 Basic-Pathfinder

The basic pathfinder function is used to generate a collision-free path from point A to point B and back to point A, which can be loaded into the NX100 as a job file. The assumption is that the robot is positioned and moved as shown in Figure 4.6, and that a safe path can be found by moving over the objects. By adding more points [A, B, C, D, ...] when initiating the pathfinder, constraints can be given in the form of certain points that have to be passed when traveling from point A to D.

The function is first initiated with a start point A and an end point B, which are used to generate a linear path consisting of S_{A-B} samples.

Figure 4.6. Collision free path

The linear path is then stored in the buffer, shown as the black line in Figure 4.7, and the backtracking software creates a job file, consisting of D-variables and MOVL instructions, and loads it into the NX100. The generated job file will interpret the stored data in the D-variables as the Cartesian coordinates and the rotation of the TCP, expressed in the base frame, instead of pulses. A part of the path in the buffer will be loaded into the D-variables and the robot will start to move along the path between the two points A and B.

When a collision occurs, the current position of the TCP, expressed as the Cartesian coordinates and the rotation of the TCP, is retrieved from the NX100. The arm is then retracted by loading a part of the path, in reversed order, from the buffer into the D-variables and executing the job. When the arm has been retracted, the path from the current position to point B is abandoned. Instead, an alternative path from the current position to point B is calculated. The new path is calculated using the following two steps.

• The path from the current position to the collision point is calculated by moving the collision point in the positive z-direction of the base frame and generating a linear path between the two points, consisting of S_{R-C} samples, which is stored in the buffer, shown as the green line in Figure 4.7.

• The path from the collision point to the end point B is a generated linear path consisting of S_{C-B} samples between these two points, which is stored in the buffer, shown as the blue line in Figure 4.7.

This procedure is repeated until the arm reaches the end point B.

To ensure that the arm does not get stuck in an endless loop when trying to reach the modified collision point, the number of path points used to retract the arm is based on the location of the collision point. If the retraction point is located as shown in Figure 4.8, the robot will not be able to move over the object unless the arm is retracted to the second retraction point. The retraction condition is given by


Figure 4.7. Modified path

$$\sqrt{(x_c - x_r)^2 + (y_c - y_r)^2} > D \qquad \text{(4.3)}$$

where D is a pre-defined minimum distance and (x_c, y_c) and (x_r, y_r) are the coordinates of the collision point and the retraction point. If the condition is not satisfied for the chosen retraction point, a new retraction point is chosen until the condition is fulfilled. When the condition is fulfilled, the path from the collision point to the chosen retraction point is loaded into the D-variables and the job is executed.

Figure 4.8. Retraction based on the distance to the collision point

However, problems may occur when retracting the arm from the collision point. If the arm has collided with a flexible object, some of the passed sample points may not be valid as part of the collision-free path, see Figure 4.9. The arm may have managed to pass the object without triggering the collision alarm, but when another collision occurs it might collide with the object while being retracted, see Figure 4.9. The sensitivity of the collision sensors can be modified, but if the sensitivity is too low the collision sensor may be triggered simply by the motion of the arm. Another solution to this problem is to insert a new retraction point on the line between the two collision points. The point is then loaded into the D-variables and the arm is moved to the new point; Figure 4.9 shows the modified path. The insertion of the new point is only done if there is no sample point between the two collision points.
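A minimal sketch of the retraction-point selection implied by condition (4.3), assuming the sampled path is available as a list of (x, y, z) tuples:

```python
import math

# Step backwards through the sampled path until the horizontal distance
# to the collision point exceeds the minimum distance D, cf. (4.3).
# The buffer layout (a list of (x, y, z) tuples) is an assumption.
def choose_retraction_point(path, collision_index, D):
    xc, yc, _ = path[collision_index]
    for i in range(collision_index - 1, -1, -1):
        xr, yr, _ = path[i]
        if math.hypot(xc - xr, yc - yr) > D:   # condition (4.3)
            return i
    return 0  # fall back to the start point if no sample satisfies (4.3)
```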


If the collision occurs when the arm is being retracted to a second retraction point, as shown in Figure 4.10, the arm will retract to the first retraction point and modify the second retraction point. When the arm has successfully reached the modified second retraction point, the new path to the original collision point is generated and executed.

Figure 4.9. Insertion of retraction point

When the arm has reached the end point B, the path may still cause collisions when it is executed from point B to point A. If the arm has collided with a flexible object and managed to pass it without triggering the collision sensors, and none of the above situations have occurred, the path may cause a collision when moving from point B to point A, see Figure 4.11. Therefore the path is executed from point B to point A, and if a collision occurs the arm is retracted to the closest passed sample point and the next path point in the motion direction is modified as shown in Figure 4.11. If the modified path point is the point A, another point has to be inserted at the original location of point A, see Figure 4.12, to ensure that the arm will end its motion at the desired start point. When the safe path has been generated, it is processed to remove unnecessary points before it is used as a job. The path processing is described in Section 4.2.2. After the path has been processed, the remaining path points are used to create a job file consisting of C-variables, as described in Section 3.3. However, the created job file for this application will interpret the stored data in the C-variables as Cartesian coordinates and rotation of the TCP instead of pulses. The job is then loaded onto the NX100 and can be used as any other manually created job.

Figure 4.10. Modified second retraction point

Figure 4.11. Correction of generated path

In the basic pathfinder the arm may collide with the same object several times due to the simple path planning. However, if there were some constraints on the environment, a more advanced path planning could be used, or the basic pathfinder could be used to locate and estimate the size of the objects in the environment. This is discussed in Section 4.2.3.

4.2.2 Path processing

When the safe way around the objects has been generated, the path will consist of a number of points. When creating a job file consisting of MOVL instructions, the path points that describe a change in the motion direction are the only ones needed for the future job, see Figure 4.13. By looking at how the gradient changes between the points in the path, it can be determined where in the path the information is redundant and can be removed.

Figure 4.12. Modified A point

The algorithm is

$$\nabla g = \frac{(x_2 - x_1,\ y_2 - y_1,\ z_2 - z_1)}{\left\| (x_2 - x_1,\ y_2 - y_1,\ z_2 - z_1) \right\|}, \qquad \nabla f = \frac{(x_3 - x_2,\ y_3 - y_2,\ z_3 - z_2)}{\left\| (x_3 - x_2,\ y_3 - y_2,\ z_3 - z_2) \right\|} \qquad \text{(4.4)}$$

with the constraint

$$\text{if } \nabla g = \nabla f \text{: remove point 2} \qquad \text{(4.5)}$$

where $\nabla g$ and $\nabla f$ are the normalized gradients between the points. By stepping through the path points and removing points in this way, a robot job without unnecessary information can be generated.
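A small sketch of this path-processing step, assuming the path points are available as an array of coordinates; the tolerance parameter is an addition, since the thesis compares the normalized gradients exactly:

```python
import numpy as np

# Drop every point whose incoming and outgoing normalized direction
# vectors coincide, cf. (4.4)-(4.5): an intermediate point on a straight
# MOVL segment carries no information. Assumes consecutive points differ.
def simplify_path(points, tol=1e-9):
    points = np.asarray(points, dtype=float)
    keep = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        g = (cur - prev) / np.linalg.norm(cur - prev)   # first gradient in (4.4)
        f = (nxt - cur) / np.linalg.norm(nxt - cur)     # second gradient in (4.4)
        if not np.allclose(g, f, atol=tol):             # constraint (4.5)
            keep.append(cur)
    keep.append(points[-1])
    return np.array(keep)
```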

Figure 4.13. Resampling

4.2.3 Object Estimation

When a collision occurs using the basic pathfinder, the collision point is retrieved and stored, and the robot tries to move over the object by planning a linear path. If the objects are pre-defined as spheres, the collision points can be used to estimate the location and size of the object to improve the path planning.


When the arm has collided with the same object three times, the collision points can be used to determine the midpoint and radius of the sphere using the equation

$$(x_i - x_0)^2 + (y_i - y_0)^2 + (z_i - z_0)^2 = r^2 \qquad \text{(4.6)}$$

where (x_0, y_0, z_0) is the center point of the sphere and r is its radius. When the parameters of the sphere have been determined, equation (4.6) can be used to generate a path that avoids the object completely by placing the path points around the sphere, as shown in Figure 4.14.

Figure 4.14. Path over sphere
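The sphere parameters can be obtained by linearizing (4.6): subtracting the equation for one collision point from the others eliminates r² and leaves a linear system in the center. A hedged sketch follows; note that a unique solution generally requires four non-coplanar points, so with only three collision points an extra constraint would have to be assumed in practice.

```python
import numpy as np

# Least-squares estimate of the sphere in (4.6) from n collision points.
# Subtracting the equation for point 0 from the others eliminates r^2:
#   2 (p_i - p_0) . c = |p_i|^2 - |p_0|^2
def fit_sphere(points):
    p = np.asarray(points, dtype=float)
    A = 2.0 * (p[1:] - p[0])
    b = np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
    center, *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.linalg.norm(p[0] - center)   # back-substitute into (4.6)
    return center, radius
```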

If the objects are pre-defined as consisting of flat surfaces, Equation (4.7) is used to describe the side of the object which the arm has collided with, using the collision points shown in Figure 4.15. However, this information is not enough to generate a path that avoids the object completely, as it was for the sphere. Instead the information can be used to decide where to move the collision point: it might be better to move the collision point in the direction of the plane instead of just moving it in the positive z-direction of the base frame.

$$n_x (x - x_c) + n_y (y - y_c) + n_z (z - z_c) = 0 \qquad \text{(4.7)}$$

where $(x, y, z)$ is a point in the plane, $(x_c, y_c, z_c)$ is a collision point, and $(n_x, n_y, n_z)$ is the vector orthogonal to $(x_c - x_a, y_c - y_a, z_c - z_a)$ and $(x_c - x_b, y_c - y_b, z_c - z_b)$, given by

$$(n_x, n_y, n_z) = (x_c - x_a,\ y_c - y_a,\ z_c - z_a) \times (x_c - x_b,\ y_c - y_b,\ z_c - z_b) \qquad \text{(4.8)}$$
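A small sketch of (4.7)-(4.8), computing the normal of the collided face from three collision points; the sign convention of the side test is an assumption:

```python
import numpy as np

# Normal of the collided face from three collision points a, b, c, and
# a test for which side of the plane a candidate path point lies on.
def face_normal(a, b, c):
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    n = np.cross(c - a, c - b)        # equation (4.8)
    return n / np.linalg.norm(n)

def signed_distance(point, c, n):
    # Plane through the collision point c with normal n, cf. (4.7);
    # positive values lie on the side the normal points towards.
    return float(np.dot(n, np.asarray(point, dtype=float) - c))
```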

Figure 4.15. Collision points


Chapter 5

Test Results on Part I

In this chapter the results from experiments and tests from Part I are presented.

5.1 Backtrack

To test the backtracking function, a job was created which jumped to different instruction sets in the job file. A collision was caused while the robot was performing the motions specified by the instructions in label 2. The path from the start point of the job file, through the motions defined in label 2, until the collision occurred is shown in Figure 5.1. When the job was retracted using the existing function, the arm was correctly retracted until it reached the end of the motions in label 2. Instead of retracting to the start point of the job, the arm was retracted toward the point defined by the last motion in label 1, see Figure 5.2. This may cause further collisions of the robot. If the developed backtracking software is used, this is avoided. Figure 5.3 shows the original path, the retracted path using the existing function in the NX100, and the retracted path using the backtracking software. How well the backtracking software follows the original path depends on the sampling rate, see Section 3.

5.2 Basic-Pathfinder

The pathfinder testing was performed in the environment shown in Figure 5.4. Figure 5.4 also shows the approximate positions of the start point A and the end point B. The positions of the start point A and end point B in the base frame are shown in Figure 5.5.

The two points were then used to generate a linear path between them. In this test the generated linear path consisted only of these two points. In general there is no need for sample points between the start point and the end point, because the MOVL instruction will move the TCP in a linear motion between the two points. The advantage of having more sample points is that the retraction distance will be shorter.


Figure 5.1. Path from start point of job file to the motion defined in label 2

Figure 5.2. The retraction path for the existing function in NX100 and the original path.


Figure 5.3. The retracted path using the developed backtracking software, the original path, and the retracted path using the existing function in NX100

Figure 5.4. Testing environment for the basic pathfinder


Figure 5.5. The location of A and B in the base frame

As seen in Figure 5.4, a collision occurred when the arm reached object one. The arm was then retracted to the start point and a new path was planned to the end point, as described in Section 4.2. This procedure was repeated until the arm reached the end point. The original path and the generated path after the end point was reached are shown in Figure 5.6.


Figure 5.6. The generated path


Part II

Integration of a Vision System


Chapter 6

Theory

The following theory is based on information found during the pilot-study of the Vision System.

6.1 The Stereo Problem

The theory in this section is based on facts found in [4], [13], [15], [2], [17], [11], [6] and [10]. By comparing two images of the same object taken from different locations, the object will appear to move relative to the background between the two images, as shown in Figure 6.1. This is the same effect one gets by closing one eye, looking with the other, and then switching eyes. The displacements that occur between the pixels in the images are called disparity.

Figure 6.1. The Stereo Problem

The disparity can be calculated by triangulation if the distance between the cameras and the focal length are known. The point (X, Y, Z) in Figure 6.2, where there is only a displacement in the x-direction, will be reproduced at (x_l, y_l) in the left and (x_r, y_r) in the right camera, where

$$x_r = f\,\frac{X - h}{Z}, \qquad y_r = f\,\frac{Y}{Z} \qquad \text{(6.1)}$$

$$x_l = f\,\frac{X + h}{Z}, \qquad y_l = f\,\frac{Y}{Z} \qquad \text{(6.2)}$$

$$x_l - x_r = f\,\frac{2h}{Z}, \qquad y_r = y_l \qquad \text{(6.3)}$$

The difference $x_l - x_r$ is called the disparity, $d$:

$$d = f\,\frac{2h}{Z} \qquad \text{(6.4)}$$

$$Z = f\,\frac{2h}{d} \qquad \text{(6.5)}$$
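As a small numeric illustration of (6.5), with the focal length expressed in pixels (the values here are made up):

```python
# Depth from disparity, equation (6.5): Z = f * 2h / d.
# Illustrative values: focal length f = 800 px, 2h = 20 mm between the
# cameras, measured disparity d = 16 px.
f_px = 800.0
two_h_mm = 20.0
d_px = 16.0
Z_mm = f_px * two_h_mm / d_px
print(Z_mm)  # -> 1000.0 mm, i.e. the point lies about 1 m away
```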

Figure 6.2. Canonical stereo configuration consisting of two pinhole camera models.

6.1.1 System Overview

A vision system that is based on the stereo problem needs a left and a right image taken with a displacement from each other. This can be achieved with two cameras, or by moving one camera between the shots. The distance between the cameras affects the maximum distance the system can measure. A human has approximately 5 cm between the eyes and can judge distances up to 10-20 m; larger distances can only be estimated by using known references such as trees and houses. The distances that an industrial robot works with are usually around 0.5-3 m, depending on the model and the surrounding environment, so the objects that need to be detected and avoided will not be further away than approximately 1 m. This means that a distance of more than 5 cm between the cameras is unnecessary, and 1-2 cm will be more than sufficient.

6.2 Stereo Algorithms

The methods that find corresponding points in the left and the right image can be divided into two main groups: area based and character based. The area based methods look at every pixel, usually take a block surrounding the pixel, match it against the other image, and see where the correlation is best. The character based methods find structures in the image, such as straight lines from walls, and then try to locate the same structures in the other image. The result from both approaches is the same: a disparity map with high values for close objects and low values for objects further away.

There are some problems common to all stereo algorithms. An object can be visible in the left image but hidden behind another object in the right, which affects the matching between the images. Sometimes there are refractive errors in the camera lens, which can be detected around the edges of the image. The image content can also cause problems, for example if the objects are of shiny metal or if a pattern is repeated through the image. There can also be differences in lighting between the images, because they are taken at different angles.

Two methods to retrieve a disparity map are

• Block matching

• Local Phase Estimation

6.2.1 Block Matching

Block matching consists of four steps:

1. Pre step

2. Disparity estimation

3. Sturdiness

4. Sub-pixel interpolation

Pre Step

In the pre step the images are normalized. This is done to compensate for the differences in lighting that may occur between the images. The standard deviation of the intensity within each block is also estimated, to get a measure of how well the area is textured.


Disparity Estimation

The block matching method tries to estimate the disparity for a pixel by taking blocks from the left image and comparing them to blocks from the right image. Suppose that the left image is used as reference and the comparison block in the right image is displaced by a factor d ∈ [0, d_max]. The best match for L(x, y) is retrieved by calculating the cost for every pixel in the interval [R(x, y), ..., R(x − d_max, y)]; the lowest cost returns the wanted disparity for the pixel.

There are different methods for calculating the cost; SSD (Sum of Squared Differences) and SAD (Sum of Absolute Differences) are two methods presented in [10]. There are only small differences between them: SAD is slightly faster according to [12], but SSD also returns the covariance matrix, so if that is needed it is the better alternative.

Given a left and a right image, and a block size of (2n + 1) × (2n + 1), SAD can be calculated for a block with center point (u, v) in the left image and a disparity d as

$$\mathrm{SAD}(u, v, d) = \sum_{j=-n}^{n} \sum_{i=-n}^{n} \left| L(u+j,\ v+i) - R(u-d+j,\ v+i) \right| \qquad \text{(6.6)}$$

This sum is calculated for all disparities from zero to the given d_max. The disparity is retrieved from the lowest SAD, which most likely gives the correct disparity. Higher values of n and d_max give a larger computational cost but will probably give a more reliable system. For every pixel the four lowest values of SAD are saved for steps 3 and 4, where a post-processing of the result is done to improve its reliability.
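A straightforward, unoptimized sketch of the SAD matching in (6.6), assuming rectified gray scale images of equal size; real implementations vectorize this heavily:

```python
import numpy as np

# For each pixel, slide a (2n+1)x(2n+1) block over disparities 0..d_max
# and keep the displacement with the smallest sum of absolute differences.
def sad_disparity(left, right, n=3, d_max=32):
    left = left.astype(np.float64)
    right = right.astype(np.float64)
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for v in range(n, h - n):
        for u in range(n + d_max, w - n):
            block = left[v - n:v + n + 1, u - n:u + n + 1]
            costs = [
                np.abs(block - right[v - n:v + n + 1,
                                     u - d - n:u - d + n + 1]).sum()
                for d in range(d_max + 1)
            ]
            disp[v, u] = int(np.argmin(costs))  # lowest SAD wins, cf. (6.6)
    return disp
```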

Sturdiness

An easy way to check whether the right disparity has been found is to first calculate the disparity map with the left image as reference, then with the right image as reference, and see if the two disparity maps agree. This is, however, a computationally demanding way to investigate the reliability of the disparity map. A less demanding way to increase the certainty of the disparity map is presented in [6]. It is based on two tests: first the edginess of the disparity is investigated, and then a certainty test of the SAD value is performed. The edginess test describes how well the disparities fit together; the lower Δd, the better the result

$$\Delta d = \sum_{i=0}^{3} \left| d_i - d_{\min} \right| \qquad \text{(6.7)}$$


The other test measures how certain the SAD value is; the higher ΔSAD, the more reliable the result

$$\Delta \mathrm{SAD} = \sum_{i=0}^{3} \left| \mathrm{SAD}_i - \mathrm{SAD}_{\min} \right| \qquad \text{(6.8)}$$

Whether the obtained result is reliable or not can now be established by threshold values on Δd and ΔSAD.

Sub-Pixel Interpolation

A disadvantage of block matching is that it does not return sub-pixel accuracy, so the returned value is 3 instead of the real value of 3.45, causing a miscalculation of 0.45 pixels. This can be corrected by interpolation, but interpolation is slow. Almost the same result can instead be achieved by a method presented in [12], which is faster. By using SAD(d − 1), SAD(d) and SAD(d + 1), a result similar to interpolation is obtained.

$$k_1 = \mathrm{SAD}(d) - \mathrm{SAD}(d-1), \qquad k_2 = \mathrm{SAD}(d+1) - \mathrm{SAD}(d), \qquad k = \max(|k_1|,\ k_2) \qquad \text{(6.9)}$$

The steepest slope determines the sub-pixel value, see Figure 6.3. The continuous function on both sides of the minimum can be written as

$$\mathrm{SAD}(d_{\mathrm{subpixel}}) = \mathrm{SAD}(d-1) - k\,\big(d_{\mathrm{subpixel}} - (d-1)\big)$$

$$\mathrm{SAD}(d_{\mathrm{subpixel}}) = \mathrm{SAD}(d+1) + k\,\big(d_{\mathrm{subpixel}} - (d+1)\big)$$

Setting the two expressions equal gives

$$d_{\mathrm{subpixel}} = \frac{\mathrm{SAD}(d-1) - \mathrm{SAD}(d+1)}{2k} + d \qquad \text{(6.10)}$$

Example

Figure 6.5 shows the disparity map computed from the real object in Figure 6.4. It was obtained with a simple SAD block matching, where only step 2 of the four steps in the block matching algorithm was performed.

6.2.2 Local Phase Estimation

Because of the complexity of this method only the basics are explained here; a more thorough description can be found in [15]. The idea of Local Phase Estimation is that the local phases in small areas of the left and the right image are compared with each other. The assumption is that in small areas of the images the


Figure 6.3. Sub-pixel interpolation

Figure 6.4. Real Object

Figure 6.5. Disparity map


phase is unambiguous and has a frequency. So by locally estimating the phase and the frequency, it is possible to estimate the disparity. A local area can be described as follows

$$f(u, v) = A + \cos(2\pi f u + p) \qquad \text{(6.11)}$$

where A is the DC-gain, f is the horizontal frequency and p is the phase. Assume that there is an identical local environment in the right image with the same frequency and phase, but with a small displacement:

$$f_{\mathrm{left}}(u, v) = f_{\mathrm{right}}(u - d,\ v) \qquad \text{(6.12)}$$

where

$$f_{\mathrm{left}}(u, v) = A + \cos(2\pi f u + p_{\mathrm{left}}), \qquad f_{\mathrm{right}}(u, v) = A + \cos(2\pi f u + p_{\mathrm{right}}) \qquad \text{(6.13)}$$

$$\cos(2\pi f u + p_{\mathrm{left}}) = \cos(2\pi f (u - d) + p_{\mathrm{right}}) \qquad \text{(6.14)}$$

$$p_{\mathrm{left}} = 2\pi f d + p_{\mathrm{right}} \qquad \text{(6.15)}$$

$$d = \frac{p_{\mathrm{left}} - p_{\mathrm{right}}}{2\pi f} \qquad \text{(6.16)}$$

One of the advantages of this approach is that the disparity obtained from (6.16) is not restricted to integer values. This means that sub-pixel accuracy is retrieved directly, in contrast to block matching. Thus only the local phase and the frequency have to be estimated in this method.

6.3 Image processing

6.3.1 Object segmentation using histogram

A gray scale image can be expressed as a matrix of size M × N consisting of MN pixels. Each element, or pixel, of the matrix holds an intensity value I(x, y), where x = 0, ..., M − 1 and y = 0, ..., N − 1, that specifies how light or dark the element is when viewed as an image. Usually the intensity value goes from 0 to 255, where 0 means black and 255 means white. A histogram P(f) of an image is a probability function that describes how often a certain intensity level appears in the image. The histogram can be retrieved by looping through all pixels, i.e. all x and y coordinates of the image f(x, y), incrementing P(f) = P(f) + 1 for every intensity value given by f(x, y), and then normalizing P(f) with the total number of pixels in the image. The image f(x, y) and its corresponding histogram are shown in Figure 6.6.
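A small sketch of the histogram computation just described, assuming an 8-bit gray scale image:

```python
import numpy as np

# Count how often each intensity 0..255 occurs in a gray scale image f
# and normalize by the total number of pixels so that P sums to one.
def normalized_histogram(f):
    P = np.zeros(256, dtype=np.float64)
    for value in f.ravel():          # loop over all pixels f(x, y)
        P[int(value)] += 1.0
    return P / f.size                # normalize with M*N

# Equivalent vectorized form:
#   np.bincount(f.ravel(), minlength=256) / f.size
```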


Figure 6.6. Image f(x, y) and the corresponding histogram

Global thresholding

If the gray level intensity between the object and the background differs, as shown in Figure 6.6, the resulting histogram has a bimodal distribution. The peaks shown in Figure 6.6 can then be used to locate objects in the image by using a threshold that separates the background from the objects. This can be done by setting the threshold T between the two peaks seen in Figure 6.6. Every point in the image where the intensity level f(x, y) < T is called an object point, provided the background is lighter than the object. The binary segmented image g(x, y) is given by

$$g(x, y) = \begin{cases} 1 & \text{if } f(x, y) < T \\ 0 & \text{if } f(x, y) \geq T \end{cases} \qquad \text{(6.17)}$$

Figure 6.7 shows the image f(x, y), the resulting binary image g(x, y), and the threshold value T = 90 marked in the histogram.

Figure 6.7. Image f(x, y), resulting binary image g(x, y) and the corresponding histogram.

The main problem in separating objects from the background is to find a suitable threshold. To find one automatically, Otsu's method was used, which is described in Section 6.3.2. The chances of selecting an appropriate threshold improve if the histogram peaks are tall, narrow and separated by deep valleys.


6.3.2 Otsu's method

To automatically find a suitable threshold, the well-known Otsu's method [9] was used. Otsu's method is based on finding the threshold that minimizes the within-class variance, defined as a weighted sum of the variances of the two classes:

$$\sigma_p^2(T) = P_1(T)\,\sigma_1^2(T) + P_2(T)\,\sigma_2^2(T) \qquad \text{(6.18)}$$

The weights $P_1$ and $P_2$ are the probabilities of the two classes separated by the threshold $T$, and $\sigma_1^2$ and $\sigma_2^2$ are the variances of these classes.

Otsu shows that minimizing the within-class variance is the same as maximizing the between-class variance

$$\sigma_b^2(T) = \sigma^2(T) - \sigma_p^2(T) = P_1(T)\,P_2(T)\,\big(m_1(T) - m_2(T)\big)^2 \qquad \text{(6.19)}$$

which is expressed in terms of the class probabilities $P_i$ and class means $m_i$, which in turn can be updated iteratively.

To calculate the variances and probabilities, the normalized histogram is used. The normalized histogram has components $p_i = n_i/MN$, where $n_i$ is the number of pixels with intensity level $i = 0, 1, 2, \ldots, L-1$. The sum of the elements is

$$\sum_{i=0}^{L-1} p_i = 1 \qquad \text{(6.20)}$$

The image is separated into two groups $C_1$ and $C_2$ using a threshold $T(k) = k$, $0 < k < L-1$. This means that $C_1$ consists of all pixels with intensity values in the range $[0, k]$ and $C_2$ of all pixels with intensity values in the range $[k+1, L-1]$. The probability that a pixel is assigned to class $C_1$ is given by

$$P_1(k) = \sum_{i=0}^{k} p_i \qquad \text{(6.21)}$$

and the probability that a pixel is assigned to class $C_2$ is given by

$$P_2(k) = \sum_{i=k+1}^{L-1} p_i = 1 - P_1(k) \qquad \text{(6.22)}$$

The mean intensity value in $C_1$ is given by

$$m_1(k) = \frac{\sum_{i=0}^{k} i\,\frac{n_i}{MN}}{\sum_{i=0}^{k} \frac{n_i}{MN}} = \frac{\sum_{i=0}^{k} i\,n_i}{\sum_{i=0}^{k} n_i} \qquad \text{(6.23)}$$

and the mean intensity value in $C_2$ is given by

$$m_2(k) = \frac{\sum_{i=k+1}^{L-1} i\,n_i}{\sum_{i=k+1}^{L-1} n_i} \qquad \text{(6.24)}$$

The algorithm is as follows:

1. Step through all possible thresholds $T = k$, $0 < k < L-1$.

2. For each new $k$, update $P_i(k)$ and $m_i(k)$ and compute $\sigma_b^2(k)$.

3. The desired threshold $T$ corresponds to the maximum of $\sigma_b^2(k)$.

For more information on Otsu’s method, see [9].
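A compact sketch of the three steps above, assuming an 8-bit image; the exhaustive sweep mirrors the algorithm as stated rather than the incremental update Otsu also allows:

```python
import numpy as np

# Otsu's method: sweep all thresholds and keep the one maximizing the
# between-class variance (6.19).
def otsu_threshold(image, L=256):
    hist = np.bincount(image.ravel(), minlength=L).astype(np.float64)
    p = hist / hist.sum()                    # normalized histogram, (6.20)
    i = np.arange(L)
    best_k, best_var = 0, -1.0
    for k in range(1, L - 1):
        P1 = p[:k + 1].sum()                 # class probability, (6.21)
        P2 = 1.0 - P1                        # class probability, (6.22)
        if P1 == 0.0 or P2 == 0.0:
            continue
        m1 = (i[:k + 1] * p[:k + 1]).sum() / P1      # class mean, (6.23)
        m2 = (i[k + 1:] * p[k + 1:]).sum() / P2      # class mean, (6.24)
        var_b = P1 * P2 * (m1 - m2) ** 2             # between-class, (6.19)
        if var_b > best_var:
            best_k, best_var = k, var_b
    return best_k
```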

6.3.3 Morphological operations

Erosion

If A(x, y) is a binary image consisting of objects and B is a structuring element, the erosion of A by B, denoted A ⊖ B, gives new points for the objects. The new points are defined as the set of all points x and y such that the structuring element B is contained in the objects of A(x, y). The structuring element B consists of a pattern specified as the coordinates of a number of discrete points relative to some origin. Examples of structuring elements are shown in Figure 6.8. An example of erosion with the square structuring element is shown in Figure 6.9. As shown in Figure 6.9, erosion can be seen as a shrinking or thinning operation.


Figure 6.8. Structuring elements

Figure 6.9. Erosion with a 3 × 3 structuring element

Dilation

If A(x, y) is a binary image consisting of objects and B is a structuring element, the dilation of A by B, denoted A ⊕ B, gives new points for the objects, defined as the set of all points x and y such that at least one element in B overlaps an object point in A.

An example of dilation is shown in Figure 6.10, which uses the same binary image and structuring element as in Figure 6.9.

Figure 6.10. Dilation with a 3 × 3 structuring element

Unlike erosion, dilation grows or thickens the objects in a binary image.
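A minimal illustration of the two operations with a 3 × 3 square structuring element, using SciPy's standard binary morphology routines:

```python
import numpy as np
from scipy import ndimage

# Erosion and dilation of a binary image with a 3x3 square structuring
# element, as in Figures 6.9 and 6.10.
A = np.zeros((7, 7), dtype=bool)
A[2:5, 1:6] = True                      # a small rectangular object
B = np.ones((3, 3), dtype=bool)         # square structuring element

eroded = ndimage.binary_erosion(A, structure=B)   # object shrinks by one pixel
dilated = ndimage.binary_dilation(A, structure=B) # object grows by one pixel
```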

Morphological reconstruction

Morphological reconstruction involves a marker image, a mask image and a structuring element. Central to morphological reconstruction is the concept of geodesic dilation. The geodesic dilation of size 1 is given by

$$D_G^{(1)}(F) = (F \oplus B) \cap G \qquad \text{(6.25)}$$

where F is the marker image, G is the mask image and B is the structuring element. An example of geodesic dilation is shown in Figure 6.11.

The geodesic dilation of size n is defined as

$$D_G^{(n)}(F) = \left( D_G^{(n-1)}(F) \oplus B \right) \cap G \qquad \text{(6.26)}$$


Figure 6.11. Geodesic dilation

Morphological reconstruction can be used to fill holes in the objects. An automated hole-filling algorithm (described in [3]) can be implemented by choosing a marker image F that is zero everywhere except for the image border, where it is set to 1 − I(x, y). For more information about the hole-filling algorithm we refer to [3].
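A hedged sketch of this hole-filling algorithm, iterating the geodesic dilation (6.26) until stability; SciPy's ndimage.binary_fill_holes implements the same idea:

```python
import numpy as np
from scipy import ndimage

# Fill holes in a binary (bool) image I by morphological reconstruction:
# reconstruct a border marker inside the complement of I, then invert.
def fill_holes(I, B=np.ones((3, 3), dtype=bool)):
    G = ~I                                   # mask: complement of the image
    F = np.zeros_like(I)
    F[0, :] = G[0, :]; F[-1, :] = G[-1, :]   # marker: 1 - I(x, y) on the
    F[:, 0] = G[:, 0]; F[:, -1] = G[:, -1]   # image border, zero elsewhere
    while True:
        nxt = ndimage.binary_dilation(F, structure=B) & G   # (6.25)-(6.26)
        if np.array_equal(nxt, F):
            break                            # reconstruction has converged
        F = nxt
    return ~F                                # filled image
```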


Chapter 7

Vision System

7.1 Problem Discussion

To increase the intelligence of the decision handling in the pathfinder, we made a pilot-study of integrating a camera based vision system on the robot. During our tests, we mounted a camera on the robot as shown in Figure 7.1. This made us realize the difficulties in receiving relevant information at short distances with a narrow-angled vision system such as a camera. Since the robot acts in a small area, it is never far from the potential obstacle, so depending on the size of and distance to the obstacle, usually only parts of the whole obstacle can be captured, as shown in Figure 7.1.

The major problem with the camera system is that its field of view is so narrow that it cannot observe the whole robot all the way in to the base. When the robot moves around and the camera is directed in the motion direction of the arm, there is a blind zone where no objects can be observed, as shown in Figure 7.2.

The optimal vision system would be something that could receive information in a sphere around the TCP, as in Figure 7.3. This would not solve the problem completely, because there is still a blind zone behind the robot, so there is a problem when the TCP moves towards the origin of the base frame. If a camera based system is to be used, it needs to be complemented with either another camera or sensors placed on exposed areas. Even with a vision system capable of generating information fast enough, there is another complex problem: it is not enough to prevent the TCP from colliding, all the axes have to be kept clear. This generates a zone around the robot, when it is moving along a path, in which no collision may occur. If an object is located inside the zone, the path has to be modified so that the object is excluded from the safe zone, see Figure 7.4.

Because of the complexity of these problems and the time limit for the project, it was decided to limit the study to investigating the possibility of developing a cheap vision system with performance good enough for the decision handling in the basic function after a collision.


Figure 7.1. When using a camera on the robot.

Figure 7.2. The blind zone of the robot

Figure 7.3. Preferred vision zone.


Figure 7.4. The zone of the robot.

7.2 The Developing Procedure

We developed a simple vision system with a laptop and a web camera, based on the theory in Chapter 6. Our primary goal was to see if it was possible to develop a cheap vision system with performance good enough for the decision handling in the basic pathfinder after a collision.

The two major problems in collecting the information with the camera system were how to direct the camera so that the image would include the obstacle, and how to ensure that the image was taken at a distance large enough for the edges of the object to be detected.

Figure 7.5. Field of Sight

Our conclusion was that the camera should always be angled so that it looks in the direction where the collision occurred, in order to collect the information so that the possibilities of finding a way around obstacles are maximized, see Figure 7.5. A problem is that the camera may not always be directed towards the collision zone after the arm has been retracted. By looking into the buffer, a vector $\vec{r}$ can be estimated which points toward the collision zone. This is done by collecting the positions of the collision and of the retracted location from the buffer.

$$\vec{r} = p_{n-1} - p_{n-2} \qquad \text{(7.1)}$$

By rotating the tool frame, see Chapter 2, so that the camera is angled parallel with the collision vector, a good image should be guaranteed. The angles needed for the R, B and T servos to achieve this can be estimated with the inverse position kinematics from Section 2.6.3.

Figure 7.6. Minimum distance

As described earlier, it is important that the retraction distance d is large enough, see Figure 7.6, or the edges of the object will not be captured. A solution to this is that, if the image is flat and no edge is found, the robot retracts further back for a better view, following the same principle as in Section 4.2.

7.3 Image processing

When the images have been received, a disparity map D of the collision zone is calculated. This gives a measurement of how the objects are placed relative to each other in the world. There are several different algorithms that can be used for these calculations, see Section 6.2 where some are explained. We had some problems with noise in the disparity map, so it was processed for a better result. The depth in the image was calculated using (6.5), and we then estimated the appearance of the scene by processing the disparity map. For segmenting the obstacle from the rest of the world in the image, a threshold was calculated using Otsu's method, Section 6.3.2. For an even cleaner result we first used morphological reconstruction


to fill in the holes in the object caused by noise. By finding the edges of the object, a vector of points at a safe distance x from the edge could be returned, as in Figure 7.7. To find these points we used a dilation

Figure 7.7. Possible Solutions around the object

filter, from Section 6.3.3, on the reconstructed disparity map. This means that the image is filtered by a kernel of ones, and the result is that the objects grow larger in every direction; how much is decided by the kernel size. Finally we filtered the result with a Laplace kernel, which returns the edges of the enlarged objects, and these edges are the safe solutions around the object. By changing the kernel size, the safety distance x can thus be changed.
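A hedged sketch of the whole chain, reusing the otsu_threshold and fill_holes sketches from Sections 6.3.2 and 6.3.3; thresholding with '>' reflects that close obstacles have high disparity, which is an assumption about the sign convention, and the disparity map is assumed scaled to 0..255:

```python
import numpy as np
from scipy import ndimage

# Threshold the disparity map, fill noise holes, dilate by the safety
# distance, and extract the edge of the enlarged object.
def safe_points(disparity, safety_px=10):
    T = otsu_threshold(disparity.astype(np.uint8))
    obstacle = disparity > T                  # close objects: high disparity
    obstacle = fill_holes(obstacle)           # morphological reconstruction
    kernel = np.ones((2 * safety_px + 1,) * 2, dtype=bool)
    grown = ndimage.binary_dilation(obstacle, structure=kernel)
    edges = ndimage.laplace(grown.astype(np.float64)) != 0   # Laplace filter
    return np.argwhere(edges)                 # candidate solution points
```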

7.4 Find the solution

To find a final result, the output from the vision system has to be transformed from camera coordinates to world coordinates. The solution alternatives should then be verified by inverse kinematics, checking that no candidate solution makes any axis pass through the object and cause a new collision. From the remaining solutions, the one with the shortest distance to the collision point is returned as the final result.


Figure 7.8. Possible solutions


Chapter 8

Test Results on Part II

One of the two original images is shown in Figure 8.1; the image pair was then analysed by our image-processing module. The calculated disparity is displayed in Figure 8.2. There was

Figure 8.1. One of the two images

some noise in the image, but the result was good enough for testing the image processing. After finding the disparity map, we separated the obstacle from

Figure 8.2. Disparity on Obstacle


the rest of the image using the calculated threshold, see Figure 8.3. There are still

Figure 8.3. Threshold on the disparity

disturbances left from the bad disparity map, but by morphological reconstruction the holes from miscalculated disparities can be filled in, as in Figure 8.4. Then, by using a Laplace filter, the edges around the object can be obtained, see Figure 8.5. The edges are very thin because of the large contrast achieved with the threshold function, which is a satisfying result. To find the points at a safe distance around the object we used dilation, see Section 6.3.3, applied to the image in Figure 8.4. By filtering the resulting image with a Laplace filter, the desired points were obtained. In Figure 8.6 the solution points surrounding the object can be observed.

Figure 8.4. Morphological reconstruction


Figure 8.5. Edges of the obstacle

Figure 8.6. Solution points surrounding the obstacle.


Chapter 9

Discussion

This chapter includes a discussion about our different solution methods, the working process and our thoughts about the functions.

The two different methods for creating a smooth motion had their merits and demerits in functionality. The major advantage of the D-variable based method was that the motion could be changed during execution of the job. This was not possible with the C-variable method, where the whole path had to be defined and loaded before the retraction of the robot. The disadvantage was that if the positions were too close together and the speed was too high in the created job file, there was not enough time to update the variables, and the robot had to stop and wait for the upload to finish. The solution to this was to decrease the speed or increase the distance between the points, which led to a slow motion or to a poor following of the original path. This was not a problem with the C-variable method, which could have short distances between the positions and still keep a high speed without losing the smooth motion.

Almost all the tests were performed on a HP3L, see Appendix A, but because of lack of access, a system with multiple robots could not be tested. This should not affect any of the results, because the same type of control system, the NX100, controls all the robots, but the INI-file and the automatic creation of a job file have not been thoroughly tested and verified.

The time available for developing the vision system became very short, because learning the NX100 took longer than expected due to its complexity. The development of the test bench and the final software also took longer than expected, due to insufficient knowledge in areas like C#, communication protocols and VxWorks. During this process we learned a lot and improved our programming skills, which has been useful throughout the project.

The spin-off functions started out as just tests of the Backtrack function, but when we realized that it was possible to stop the robot during the retraction and go for-


ward again, the ideas for the pathfinder were born. It is an interesting thought that the robot becomes more and more independent and will correct jobs if it finds faults in them.


Chapter 10

Conclusions And Future Work

This chapter presents the conclusions from the discussion in Chapter 9 and discusses further improvements of the software.

10.1 Conclusions

The resulting function for retracting the arm used the method of creating a job file consisting of C-variables. The job created with the C-variables made it possible to insert the external events that needed to be handled. When the job is created with the C-variables, the sample time does not depend on the velocity of the current job. This gives some redundant information if the monitored motion is performed at a low velocity, but it always ensures that a collision-free path is sampled.

The resulting software solution did solve the problems the existing function had when retracting the arm, as was shown in Chapter 5. However, the operator that programs the job has to be aware of the limitations of the function to be able to use it properly.

The investigation of further uses of the software resulted in the development of the basic pathfinder, which was presented in Section 4.2. The results from the basic pathfinder show that the software can be used to open up the possibility for the robot to control its own motion and correct the path if necessary. The integration of a vision system with the basic pathfinder was never performed, because of the time limit and the complexity of the problem, which was discovered during the pilot study of the vision system. The use of the software for real-time motion control was also investigated, but was never tested in combination with an external unit. The possibility of loading a new path into the buffer, which generates a change of the original path, was tested in the basic pathfinder with the robot as the provider of the new path.


A kind of real-time motion control was developed, see Section 3.2.3, to be able to test the first approach, using the existing move function in the API to perform the motion. The jerky motion that the move function provides is not suitable for moving the arm when it is to be used for real-time motion control. Instead, the approach of loading a job file and using it to create the motion, as described in Section 4.1, is probably the best way to generate a real-time motion control that provides a steady motion.

10.2 Future Work

The developed pathfinder makes it possible for the robot to generate a collision-free path to a point specified in the base frame, under the assumption mentioned in Section 4.2. The developed software for the pathfinder opens up the possibility of changing the path while the job is executed. This is one of the basic software solutions needed for a self-programming robot or a real-time motion control. For future work, the basic path-planning algorithm can be improved by extending the software with a better decision of how to generate the path to avoid a collision.


Bibliography

[1] Barnaby J. Feder. He brought the robot to life. 1982.

[2] Gunnar Farnebäck. Polynomial expansion for orientation and motion estimation. Dissertation No 790, 2002.

[3] Rafael C. Gonzalez and Richard E. Woods. Digital Image Processing, 3rd Edition. Prentice Hall, 2008.

[4] Johan Hallenberg. Robot tool center point calibration using computer vision. 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-9520.

[5] Per Holmberg. Sensor fusion with coordinated mobile robots. 2003.

[6] L. Di Stefano and M. Marchionno. A PC-based real-time stereo vision system. MGV, 13(3):197–220, 2004.

[7] Mark W. Spong, S. Hutchinson and M. Vidyasagar. Robot Modeling and Control. John Wiley and Sons Inc., 2005.

[8] Stig Moberg. On modeling and control of flexible manipulators. 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-10463.

[9] N. Otsu. A threshold selection method from gray level histograms. IEEE Trans. Systems, Man and Cybernetics, 9:62–66, 1979.

[10] Daniel Scharstein and Richard Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int. J. Comput. Vision, 47(1-3):7–42, 2002.

[11] Maria Magnusson Seger. Lite om kamerageometri. 2005. http://www.cvl.isy.liu.se/Education/UnderGraduate/TSBB52/download/kamerageometri.pdf.

[12] Johan Skoglund. Robust real-time estimation of region displacements in video sequences. 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8006.

[13] Oskar Thulin. Intermediate view interpolation of stereoscopic images for 3D-display. 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7651.

[14] Erik Wernholt. Multivariable frequency-domain identification of industrial robots. 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-10149.

[15] Carl-Johan Westelius. Focus of attention and gaze control for robot vision. 1995. http://www.cvl.isy.liu.se/ScOut/Theses/PaperInfo/westelius95.html.

[16] Birgitta Wingqvist and Mattias Källstrand. Navigation, sensor fusion and control of an autonomous ground vehicle. 2005.

[17] Zhengyou Zhang. Flexible camera calibration by viewing a plane from unknown orientations. ICCV, pages 666–673, 1999.


Appendix A

Technical Specifications

Data for a Motoman Robotics AB industrial robot, model HP3L.


Figure A.1. Technical Specifications


Appendix B

Kinematics

The following calculations are based on data for the HP3L industrial robot, see Appendix A.

B.1 Forward Position Kinematics

The different transformations between the servo frames can be described as

bs → S

$$p_{bs} = R_{bs}^{S}\,p_S + d_{bs}^{S} = \begin{pmatrix} \cos q_S & -\sin q_S & 0 \\ \sin q_S & \cos q_S & 0 \\ 0 & 0 & 1 \end{pmatrix} p_S + \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \qquad \text{(B.1)}$$

S → L

$$p_S = R_{S}^{L}\,p_L + d_{S}^{L} = \begin{pmatrix} \cos q_L & 0 & -\sin q_L \\ 0 & 1 & 0 \\ \sin q_L & 0 & \cos q_L \end{pmatrix} p_L + \begin{pmatrix} 100 \\ 0 \\ 0 \end{pmatrix} \qquad \text{(B.2)}$$

L → U

$$p_L = R_{L}^{U}\,p_U + d_{L}^{U} = \begin{pmatrix} \cos q_U & 0 & \sin q_U \\ 0 & 1 & 0 \\ -\sin q_U & 0 & \cos q_U \end{pmatrix} p_U + \begin{pmatrix} 0 \\ 0 \\ 370 \end{pmatrix} \qquad \text{(B.3)}$$

U → wr

$$p_U = R_{U}^{wr}\,p_{wr} + d_{U}^{wr} = I\,p_{wr} + \begin{pmatrix} 0 \\ 0 \\ 380 \end{pmatrix} \qquad \text{(B.4)}$$


wr → R

$$p_{wr} = R_{wr}^{R}\,p_R + d_{wr}^{R} = \begin{pmatrix} \cos q_R & \sin q_R & 0 \\ -\sin q_R & \cos q_R & 0 \\ 0 & 0 & 1 \end{pmatrix} p_R + \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \qquad \text{(B.5)}$$

R → B

$$p_R = R_{R}^{B}\,p_B + d_{R}^{B} = \begin{pmatrix} \cos q_B & 0 & \sin q_B \\ 0 & 1 & 0 \\ -\sin q_B & 0 & \cos q_B \end{pmatrix} p_B + \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \qquad \text{(B.6)}$$

B → T

$$p_B = R_{B}^{T}\,p_T + d_{B}^{T} = \begin{pmatrix} \cos q_T & \sin q_T & 0 \\ -\sin q_T & \cos q_T & 0 \\ 0 & 0 & 1 \end{pmatrix} p_T + \begin{pmatrix} 0 \\ 0 \\ 90 \end{pmatrix} \qquad \text{(B.7)}$$

The transformation between the base frame and the wrist frame can be described as

S → wr

The position of the center point of the wrist frame relative to the base frame, i.e. the translation $d_{bs}^{wr}$ between the frames, is given by

$$d_{bs}^{wr} = \begin{pmatrix} d_h \cos q_S \\ d_h \sin q_S \\ d_v \end{pmatrix} \qquad \text{(B.8)}$$

where

$$d_h = 100 + 370\sin q_L + 380\cos(q_U - q_L) \qquad \text{(B.9)}$$

$$d_v = 370\cos q_L + 380\sin(q_U - q_L) \qquad \text{(B.10)}$$

and the rotation $R_{bs}^{wr}$ of the wrist frame relative to the base frame is given by the following equations:

$$R_{bs}^{S} = \begin{pmatrix} \cos q_S & -\sin q_S & 0 \\ \sin q_S & \cos q_S & 0 \\ 0 & 0 & 1 \end{pmatrix} \qquad \text{(B.11)}$$

$$R_{S}^{U} = \begin{pmatrix} \sin(q_U - q_L) & 0 & \cos(q_U - q_L) \\ 0 & 1 & 0 \\ -\cos(q_U - q_L) & 0 & \sin(q_U - q_L) \end{pmatrix} \qquad \text{(B.12)}$$

$$R_{U}^{wr} = I \qquad \text{(B.13)}$$

where

$$R_{bs}^{wr} = R_{bs}^{S}\,R_{S}^{U}\,R_{U}^{wr} \qquad \text{(B.14)}$$

The transformation between the wrist frame and the tool frame can be described as

wr → tool

The rotation matrix $R_{wr}^{T}$ is given by

$$R_{wr}^{T} = R_{wr}^{R}\,R_{R}^{B}\,R_{B}^{T} \qquad \text{(B.15)}$$

where

$$R_{wr}^{R} = \begin{pmatrix} \cos q_R & \sin q_R & 0 \\ -\sin q_R & \cos q_R & 0 \\ 0 & 0 & 1 \end{pmatrix} \qquad \text{(B.16)}$$

$$R_{R}^{B} = \begin{pmatrix} \cos q_B & 0 & \sin q_B \\ 0 & 1 & 0 \\ -\sin q_B & 0 & \cos q_B \end{pmatrix} \qquad \text{(B.17)}$$

$$R_{B}^{T} = \begin{pmatrix} \cos q_T & \sin q_T & 0 \\ -\sin q_T & \cos q_T & 0 \\ 0 & 0 & 1 \end{pmatrix} \qquad \text{(B.18)}$$

The translation $d_{B}^{T}$ is only along the z-axis, 90 mm, as shown in (B.7). The translation $d_{wr}^{T}$ depends on the rotation matrices $R_{wr}^{R}$ and $R_{R}^{B}$, and can be described as

$$d_{wr}^{T} = R_{wr}^{R}\,R_{R}^{B}\,d_{B}^{T} \qquad \text{(B.19)}$$

A position $p_T$ in the tool frame then transforms to the wrist frame as

$$p_{wr} = R_{wr}^{T}\,p_T + d_{wr}^{T} \qquad \text{(B.20)}$$

Finally, the complete transformation between the tool frame and the base frame can be described as

$$p_{bs} = R_{bs}^{tool}\,p_{tool} + d_{bs}^{tool} \qquad \text{(B.21)}$$

where the rotation matrix $R_{bs}^{tool}$ follows from (B.14) and (B.15) as

$$R_{bs}^{tool} = R_{bs}^{wr}\,R_{wr}^{T}\,R_{T}^{tool} \qquad \text{(B.22)}$$

$$R_{T}^{tool} = I \qquad \text{(B.23)}$$

and the translation $d_{bs}^{tool}$ follows from (B.8) and (B.19):

$$d_{bs}^{tool} = d_{T}^{tool} + R_{bs}^{wr}\,d_{wr}^{T} + d_{bs}^{wr} \qquad \text{(B.24)}$$

$$d_{T}^{tool} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \qquad \text{(B.25)}$$
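As a hedged numeric cross-check of the chain (B.8)-(B.25), here is a small sketch composing the rotations and translations for the HP3L link lengths; any transcription error in the matrices above would show up here as well.

```python
import numpy as np

# Forward position kinematics (B.8)-(B.25); angles in radians, lengths in
# mm. Only the S, L, U, R, B angles affect the TCP position: the T
# rotation leaves the 90 mm z-offset of the tool invariant.
def tool_position(qS, qL, qU, qR, qB):
    dh = 100 + 370 * np.sin(qL) + 380 * np.cos(qU - qL)       # (B.9)
    dv = 370 * np.cos(qL) + 380 * np.sin(qU - qL)             # (B.10)
    d_wr = np.array([dh * np.cos(qS), dh * np.sin(qS), dv])   # (B.8)

    cS, sS = np.cos(qS), np.sin(qS)
    R_S = np.array([[cS, -sS, 0], [sS, cS, 0], [0, 0, 1]])    # (B.11)
    cu, su = np.cos(qU - qL), np.sin(qU - qL)
    R_U = np.array([[su, 0, cu], [0, 1, 0], [-cu, 0, su]])    # (B.12)
    R_wr = R_S @ R_U                                          # (B.13)-(B.14)

    cR, sR = np.cos(qR), np.sin(qR)
    R_R = np.array([[cR, sR, 0], [-sR, cR, 0], [0, 0, 1]])    # (B.16)
    cB, sB = np.cos(qB), np.sin(qB)
    R_B = np.array([[cB, 0, sB], [0, 1, 0], [-sB, 0, cB]])    # (B.17)
    d_T_wr = R_R @ R_B @ np.array([0, 0, 90])                 # (B.19)

    return R_wr @ d_T_wr + d_wr                               # (B.24)

# All joints at zero: dh = 480, dv = 370, and the 90 mm tool offset is
# mapped onto the base x-axis by R_wr, giving the TCP at (570, 0, 370).
print(tool_position(0, 0, 0, 0, 0))
```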


B.2 Forward Velocity Kinematics

In this section the index of the velocity $v_i$ has a different meaning: the index $i$ now denotes the velocity at position $i$. The indices of the other variables are the same as in Section B.1, as are the definitions of the rotation matrices $R_i^j$ and translations $d_i^j$.

The speed in joint L

$$v_L = \dot{q}_S \times R_{bs}^{S}\,d_{S}^{L} + v_S = \begin{pmatrix} 0 \\ 0 \\ \dot{q}_S \end{pmatrix} \times \begin{pmatrix} 100\cos q_S \\ 100\sin q_S \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \qquad \text{(B.26)}$$

The speed in joint U

$$v_U = R_{bs}^{S}\,\dot{q}_L \times R_{bs}^{L}\,d_{L}^{U} + v_L = \begin{pmatrix} -\dot{q}_L \sin q_S \\ \dot{q}_L \cos q_S \\ 0 \end{pmatrix} \times R_{bs}^{L} \begin{pmatrix} 0 \\ 0 \\ 370 \end{pmatrix} + v_L \qquad \text{(B.27)}$$

The speed in the wrist frame

$$v_{wr} = R_{bs}^{S}\,\dot{q}_U \times R_{bs}^{wr}\,d_{U}^{wr} + v_U = \begin{pmatrix} \dot{q}_U \sin q_S \\ -\dot{q}_U \cos q_S \\ 0 \end{pmatrix} \times R_{bs}^{wr} \begin{pmatrix} 0 \\ 0 \\ 380 \end{pmatrix} + v_U \qquad \text{(B.28)}$$

The speed of the tool

$$v_{tool} = R_{bs}^{wr}\,\dot{q}_{R,B,T} \times R_{bs}^{tool}\,d_{wr}^{T} + v_{wr} = R_{bs}^{wr} \begin{pmatrix} -\dot{q}_B \\ 0 \\ \dot{q}_R \end{pmatrix} \times R_{bs}^{tool} \begin{pmatrix} 0 \\ 0 \\ 90 \end{pmatrix} + v_{wr} \qquad \text{(B.29)}$$