
Robotics and Computer Integrated Manufacturing 17 (2001) 487–497

Robot calibration using a 3D vision-based measurement system with a single camera

José Maurício S.T. Motta a,*, Guilherme C. de Carvalho b, R.S. McMaster c

a Department of Mechanical Engineering, University of Brasilia, 70910-900 Brasília, DF, Brazil
b Department of Mechanical Engineering, University of Brasilia, 70910-900 Brasília, DF, Brazil

c School of Industrial and Manufacturing Science, Cranfield University, Building 62, Cranfield MK43 0AL, England, UK

Received 1 March 2001; received in revised form 3 June 2001; accepted 27 August 2001

Abstract

One of the problems that slows the development of off-line programming is the low static and dynamic positioning accuracy of robots. Robot calibration improves the positioning accuracy and can also be used as a diagnostic tool in robot production and maintenance. This work presents techniques for modeling and performing robot calibration processes with off-line programming using a 3D vision-based measurement system. The measurement system is portable, accurate and low cost, consisting of a single CCD camera mounted on the robot tool flange to measure the robot end-effector pose relative to a world coordinate system. Radial lens distortion is included in the photogrammetric model. Scale factors and image centers are obtained with innovative techniques, making use of a multiview approach. Results show that the achieved average accuracy using a common off-the-shelf CCD camera varies from 0.2 to 0.4 mm at distances from 600 to 1000 mm from the target, respectively, with different camera orientations. Experimentation is performed on two industrial robots to test their position accuracy improvement using the proposed calibration system: an ABB IRB-2400 and a PUMA-500. The robots were calibrated in different regions and volumes within their workspaces, achieving accuracy three to six times better, measured locally, when comparing errors before and after calibration. The proposed off-line robot calibration system is fast, accurate and easy to set up. © 2001 Elsevier Science Ltd. All rights reserved.

Keywords: Kinematic model; Robot calibration; Absolute accuracy; Camera calibration

1. Introduction

Most currently used industrial robots are still programmed by a teach pendant, especially in the automotive industry. However, the importance of off-line programming in industry as an alternative to teach-in programming is steadily increasing. The main reason for this trend is the need to minimize machine downtime and thus to improve the rate of robot utilization. A typical welding line with 30 robots and 40 welding spots per robot takes about 400 h for robot teaching [1]. Nonetheless, for a successful accomplishment of off-line programming, the robots need to be not only repeatable but also accurate.

Robot repeatability is unaffected by the method of programming, since it is due to random errors (e.g. the finite resolution of the joint encoders). In contrast, the systematic errors in absolute position matter almost entirely when programming the robot off-line. One of the leading sources of lack of accuracy is the mismatch between the prediction made by the kinematic model and the actual system. Robot constant-pose errors are attributed to several sources, including errors in geometrical parameters (e.g. link lengths and joint offsets) and deviations that vary predictably with position (e.g. compliance or gear transmission errors) [1–3].

Robot static calibration is an integrated process of modeling, measurement, numerical identification of actual physical characteristics of a robot, and implementation of a new model. The measurement step of the calibration process is necessary for the determination of the "best fit" set of parameters to the actual robot.

*Corresponding author. Tel.: +55-61-368-7137; fax: +55-61-307-2978.
E-mail address: [email protected] (J.M.S.T. Motta).
0736-5845/01/$ - see front matter © 2001 Elsevier Science Ltd. All rights reserved.
PII: S0736-5845(01)00024-2


The types of measurement required comprise tool positions corresponding to joint positions. Since the fitting cannot be of better quality than that of the original data collection, it is very important to obtain a good set of data.

Vision-based measurement systems using a single camera for robot calibration appear to be one of the trends in the development of robot calibration systems today [2], but the knowledge involved in this area is still restricted to research groups sponsored by large-budget companies, with little interest in publishing technical details. For this technology to become available and used extensively in industry, there is still a long way to go.

The objectives of this research were to investigate theoretical aspects involved in robot calibration methods and systems, to develop a feasible low-cost vision-based measurement system using a single camera placed on a robot tool flange and, finally, to construct a prototype of a robot calibration system. Experimental results to assess the measurement system accuracy are presented. Results from experiments on an ABB IRB-2400 and a PUMA-500 robot to test their accuracy improvement using the proposed calibration system are also shown.

2. Kinematic modeling and parameter identification

The first step in kinematic modeling is the proper assignment of coordinate frames to each link. Each coordinate system here is orthogonal, and the axes obey the right-hand rule.

The first rule for assigning coordinate frames to joints is to make the z-axis coincident with the joint axis. This convention is used by many authors and in many robot controllers [3,4].

The origin of the frame is placed as follows [5,6]: if the joint axes of a link intersect, then the origin of the frame attached to the link is placed at the intersection of the joint axes; if the joint axes are parallel or do not intersect, then the frame origin is placed at the distal joint; subsequently, if a frame origin is described relative to another coordinate frame by using more than one direction, then it must be moved to make use of only one direction, if possible. Thus, the frame origins will be described using the minimum number of link parameters.

The x- or the y-axis has its direction according to the convention used to parametrize the transformations between links. For perpendicular or parallel joint axes, the Denavit–Hartenberg or the Hayati modeling convention was used, respectively. The requirements of a singularity-free parameter identification model prevent the use of a single minimal modeling convention that can be applied uniformly to all possible robot geometries [7,8]. At this point, the homogeneous transformations between joints must have already been determined. The other axis (x or y) can be determined by using the right-hand rule.

The end-effector or tool-frame location and orientation are defined according to the controller conventions. Geometric length parameters are defined to have an index of joint and direction. The length $p_{ni}$ is the distance between coordinate frames $i-1$ and $i$, and $n$ is the parallel axis in the coordinate system $i-1$. Fig. 1 shows the above rules applied to the IRB-2400 robot, with all coordinate frames and geometric features.

The kinematic equation of the robot manipulator is obtained by consecutive homogeneous transformations from the base frame to the last frame. Thus,

$$\hat{T}_N^0 = \hat{T}_N^0(p) = T_1^0\,T_2^1 \cdots T_N^{N-1} = \prod_{i=1}^{N} T_i^{i-1}, \qquad (1)$$

where $N$ is the number of joints (or coordinate frames), $p = [\,p_1^T\;p_2^T\;\cdots\;p_N^T\,]^T$ is the parameter vector for the manipulator, and $p_i$ is the link parameter vector for joint $i$, including the joint errors. The exact link transformation $A_i^{i-1}$ is [9]

$$A_i^{i-1} = T_i^{i-1} + \Delta T_i, \qquad \Delta T_i = \Delta T_i(\Delta p_i), \qquad (2)$$

where $\Delta p_i$ is the link parameter error vector for joint $i$.

The exact manipulator transformation $\hat{A}_N^0$ is

$$\hat{A}_N^0 = \prod_{i=1}^{N} \left( T_i^{i-1} + \Delta T_i \right) = \prod_{i=1}^{N} A_i^{i-1}. \qquad (3)$$

Fig. 1. Skeleton of the IRB-2400 robot with coordinate frames in the zero position and geometric variables for kinematic modeling (not to scale).


Thus,

$$\hat{A}_N^0 = \hat{T}_N^0 + \Delta\hat{T}, \qquad \Delta\hat{T} = \Delta\hat{T}(q, \Delta p), \qquad (4)$$

where $\Delta p = [\,\Delta p_1^T\;\Delta p_2^T\;\cdots\;\Delta p_N^T\,]^T$ is the manipulator parameter error vector and $q = [\,\theta_1^T\;\theta_2^T\;\cdots\;\theta_N^T\,]^T$ is the vector of joint variables. It must be stated here that $\Delta\hat{T}$ is a non-linear function of the manipulator parameter error vector $\Delta p$.
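To make the chain-product structure of Eqs. (1)–(3) concrete, the sketch below builds the nominal forward kinematics as a product of homogeneous link transforms. It is a minimal illustration assuming standard Denavit–Hartenberg parameters only (the paper itself mixes the D-H and Hayati conventions depending on joint geometry); all names are illustrative.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous link transform T_i^{i-1} for standard D-H parameters."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(q, link_params):
    """Eq. (1): T_N^0 = prod_i T_i^{i-1}, for joint angles q and a list of
    (d, a, alpha) tuples; perturbing the parameters gives Eq. (3)."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(q, link_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T
```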

Considering $m$ the number of measurement positions, it can be stated that

$$\hat{A} = \hat{A}_N^0 = \hat{A}(q, p), \qquad (5)$$

where $\hat{A} : \mathbb{R}^n \times \mathbb{R}^N$ is a function of two vectors of dimensions $n$ and $N$, where $n$ is the number of parameters and $N$ is the number of joints (including the tool). It follows that

$$\hat{A} = \hat{A}_N^0 = \hat{A}(q, p) = \left( \hat{A}(q_1, p), \ldots, \hat{A}(q_m, p) \right)^T : \mathbb{R}^n \times \mathbb{R}^{mN} \qquad (6)$$

and

$$\Delta\hat{T} = \Delta\hat{T}(q, \Delta p) = \left( \Delta\hat{T}(q_1, \Delta p), \ldots, \Delta\hat{T}(q_m, \Delta p) \right)^T : \mathbb{R}^n \times \mathbb{R}^{mN}. \qquad (7)$$

All matrices or vectors in bold are functions of $m$. The identification itself is the computation of the model parameter values $p^* = p + \Delta p$ that result in an optimal fit between the actual measured positions and those computed by the model, i.e., the solution of the non-linear equation system

$$B(q, p^*) = M(q), \qquad (8)$$

where $B$ is a vector formed with the position and orientation components of $\hat{A}$ and

$$M(q) = \left( M(q_1), \ldots, M(q_m) \right)^T \in \mathbb{R}^{fm} \qquad (9)$$

are all the measured components, and $f$ is the number of measurement equations provided by each measured pose. If orientation measurements can be provided by the measurement system, then six measurement equations can be formulated per pose. If the measurement system can only measure position, each pose supplies data for three measurement equations, and $B$ then includes only the position components of $\hat{A}$.

When one attempts to fit data to a non-linear model, the non-linear least-squares method arises most commonly, particularly when $m$ is much larger than $n$ [10]. In this case, from Eqs. (2), (4) and (8) we have

$$B(q, p^*) = M(q) = B(q, p) + C(q, \Delta p), \qquad (10)$$

where $C$ is the differential motion vector formed by the position and rotation components of $\Delta\hat{T}$. From the definition of the Jacobian matrix, and ignoring second-order products,

$$C(q, \Delta p) = J\,\Delta p \qquad (11)$$

and so,

$$M(q) - B(q, p) = J\,\Delta p. \qquad (12)$$

The following notation can be used:

$$b = M(q) - B(q, p) \in \mathbb{R}^{fm}, \qquad (13)$$

$$J = J(q, \Delta p) \in \mathbb{R}^{fm \times n}, \qquad (14)$$

$$x = \Delta p \in \mathbb{R}^{n}, \qquad (15)$$

$$r = Jx - b \in \mathbb{R}^{fm}. \qquad (16)$$

Eq. (10) can be solved by a non-linear least-squares method in the form

$$Jx = b. \qquad (17)$$

One method for solving non-linear least-squares problems that has proved very successful in practice, and is therefore recommended for general solutions, is the Levenberg–Marquardt algorithm (LM algorithm) [10]. Several versions of the LM algorithm have been proved successful (globally convergent). From Eq. (17), the method can be formulated as

$$x^{j+1} = x^{j} - \left( J(x^{j})^T J(x^{j}) + \mu_j I \right)^{-1} J(x^{j})^T\, b(x^{j}), \qquad (18)$$

where, following Marquardt's suggestion, $\mu_j = 0.001$ if $x^{j}$ is the initial guess, $\mu_j = \lambda(0.001)$ if $\|b(x^{j+1})\| \geq \|b(x^{j})\|$, $\mu_j = 0.001/\lambda$ if $\|b(x^{j+1})\| \leq \|b(x^{j})\|$, and $\lambda$ is a constant valid in the range $2.5 < \lambda < 10$ [11].
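As a concrete reading of Eq. (18), the sketch below iterates the damped normal equations with the damping schedule quoted above. It is a minimal sketch, not the authors' implementation; `residual` and `jacobian` are user-supplied callables for $b(x)$ and $J(x)$, and $\lambda = 4$ is an arbitrary choice inside the stated range.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=4.0, tol=1e-10, max_iter=100):
    """Iterates Eq. (18): x <- x - (J^T J + mu*I)^-1 J^T b."""
    x = np.asarray(x0, dtype=float)
    mu = 0.001                              # initial damping, as in the text
    b = residual(x)
    for _ in range(max_iter):
        J = jacobian(x)
        step = np.linalg.solve(J.T @ J + mu * np.eye(x.size), J.T @ b)
        x_new = x - step
        b_new = residual(x_new)
        if np.linalg.norm(b_new) >= np.linalg.norm(b):
            mu *= lam                       # fit got worse: increase damping
        else:
            mu /= lam                       # fit improved: accept and relax
            x, b = x_new, b_new
        if np.linalg.norm(step) < tol:
            break
    return x
```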

3. Vision-based measurement system

There are basically two typical setups for vision-based robot calibration. The first is to fix cameras in the robot surroundings so that a camera can frame a calibration target mounted on the robot end-effector. The other setup is called hand-mounted camera robot calibration. This latter setup can use a single camera or a pair of cameras. A single moving camera presents the advantages of a large field of view with a potentially large depth of field, and considerably reduced hardware and software complexity of the system. On the other hand, a single-camera setup needs full camera re-calibration at each pose.

The goal of camera calibration is to develop a mathematical model of the transformation between world points and observed image points resulting from the image formation process. The parameters which affect this mapping can be divided into three categories [12,13]: (a) extrinsic (or external) parameters, which describe the relationship between the camera frame and the world frame, including position (three parameters)


and orientation (three parameters); (b) intrinsic (or internal) parameters, which describe the characteristics of the camera, including the lens focal length, pixel scale factors, and the location of the image center; and (c) distortion parameters, which describe the geometric non-linearities of the camera. Some authors include the distortion parameters in the group of intrinsic parameters (e.g. Tsai [14] and Weng et al. [15]). Distortion parameters may or may not be present in a model.

The algorithm developed here to obtain the camera parameters (intrinsic, extrinsic and distortion) is a two-step method based on the radial alignment constraint (RAC) model (see Ref. [14]). It involves a closed-form solution for the external parameters and the effective focal length of the camera. A second stage is then used to estimate three parameters: the depth component of the translation vector, the effective focal length, and the radial distortion coefficient. The RAC model is recognized as a good compromise between accuracy and simplicity, which means short processing time [13]. A few modifications were introduced here to the original RAC algorithm, and they are explained later in the text.

3.1. RAC-based camera model

In Fig. 2, the world coordinate system is $\{x_w, y_w, z_w\}$ and the camera coordinate system is $\{x, y, z\}$. The origin of the camera coordinate system is centered at $O_c$, and the z-axis coincides with the optical axis. $(X, Y)$ is the image coordinate system at $O_i$ (the intersection of the optical axis with the front image plane) and is measured in pixels. $(u, v)$ are the analog coordinates of the object point in the image plane (usually measured in meters or millimeters). $(X, Y)$ lies on a plane parallel to the $x$ and $y$ axes. $f$ is the distance between the front image plane and the optical center.

The rigid-body transformation from the object coordinate system $(x_w, y_w, z_w)$ to the camera coordinate system $(x, y, z)$ is

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = R \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} + T, \qquad (19)$$

where $R$ is the orthogonal rotation matrix that aligns the camera coordinate system with the object coordinate system, which can be represented as

$$R = \begin{bmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{bmatrix}, \qquad (20)$$

and $T$ is the translation vector, represented as

$$T = \begin{bmatrix} T_x \\ T_y \\ T_z \end{bmatrix}, \qquad (21)$$

which is the distance from the origin of the camera coordinate system to the origin of the object coordinate system, represented in the camera coordinate system. $(x_i, y_i, z_i)$ is the representation of any point $(x_{wi}, y_{wi}, z_{wi})$ in the camera coordinate system.

The distortion-free camera model, or "pinhole" model, assumes that every real object point is connected to its corresponding image point on the image plane by a straight line passing through the focal point of the camera lens, $O_c$. In Fig. 2, the undistorted image point of the object point, $P_u$, is shown.

The transformation from the 3D coordinates $(x, y, z)$ to the coordinates of the object point in the image plane follows the perspective equations below [16]:

$$u = f\,\frac{x}{z} \quad \text{and} \quad v = f\,\frac{y}{z}, \qquad (22)$$

where $f$ is the focal length (in physical units, e.g. millimeters).

The image coordinates $(X, Y)$ are related to $(u, v)$ by the following equations (see Refs. [13,14]):

$$X = s_u\,u \quad \text{and} \quad Y = s_v\,v, \qquad (23)$$

where $s_u$ and $s_v$ are scale factors accounting for TV scanning and timing effects, converting camera coordinates in millimeters or meters into image coordinates $(X, Y)$ in pixels. Lenz and Tsai [17] attribute the hardware-timing mismatch between the image acquisition hardware and the camera scanning hardware, or the imprecision of the timing of TV scanning, only to the horizontal scale factor $s_u$. In Eq. (23), $s_v$ can be considered as a conversion factor between different coordinate units.

The scale factors, $s_u$ and $s_v$, and the focal length, $f$, are considered to be the intrinsic parameters of the distortion-free camera model, and reveal the internal information about the camera components and about the interface of the camera to the vision system.

Fig. 2. Pin-hole model and the RAC hypothesis.


The extrinsic parameters are the elements of $R$ and $T$, which carry the information about the camera position and orientation with respect to the world coordinate system.

Combining Eqs. (22) and (23), it follows that

$$X = s_u\,u = f s_u\,\frac{x}{z} = f_x\,\frac{x}{z}, \qquad (24)$$

$$Y = s_v\,v = f s_v\,\frac{y}{z} = f_y\,\frac{y}{z}. \qquad (25)$$

The equations above, combined with Eq. (19), produce the distortion-free camera model

$$X = f_x\,\frac{r_1 x_w + r_2 y_w + r_3 z_w + T_x}{r_7 x_w + r_8 y_w + r_9 z_w + T_z}, \qquad (26)$$

$$Y = f_y\,\frac{r_4 x_w + r_5 y_w + r_6 z_w + T_y}{r_7 x_w + r_8 y_w + r_9 z_w + T_z}, \qquad (27)$$

which relates the world coordinate system $(x_w, y_w, z_w)$ to the image coordinate system $(X, Y)$. $f_x$ and $f_y$ are non-dimensional constants defined in Eqs. (24) and (25).
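As a small worked example of Eqs. (19) and (26)–(27), the sketch below projects a world point into pixel coordinates with the distortion-free model. It is an illustrative sketch only; all names are assumptions.

```python
import numpy as np

def project_pinhole(point_w, R, T, fx, fy):
    """Eqs. (26)-(27): maps a world point (xw, yw, zw) to image
    coordinates (X, Y), with extrinsics R (3x3) and T (3,) from Eq. (19)
    and the non-dimensional constants fx, fy of Eqs. (24)-(25)."""
    x, y, z = R @ np.asarray(point_w, dtype=float) + np.asarray(T, dtype=float)
    return fx * x / z, fy * y / z
```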

Radial distortion can be included in the model as [14,15]:

$$X(1 + k r^2) = f_x\,\frac{r_1 x_w + r_2 y_w + r_3 z_w + T_x}{r_7 x_w + r_8 y_w + r_9 z_w + T_z}, \qquad (28)$$

$$Y(1 + k r^2) = f_y\,\frac{r_4 x_w + r_5 y_w + r_6 z_w + T_y}{r_7 x_w + r_8 y_w + r_9 z_w + T_z}, \qquad (29)$$

where $r = \sqrt{\mu X^2 + Y^2}$ and $\mu$ is the ratio of scale, defined further in the text. Whenever all distortion effects other than radial lens distortion are zero, the RAC equation is maintained. In Fig. 2, the distorted image point of the object point, $P_d$, is shown.

Eqs. (28) and (29) can be linearized as [18]:

$$\frac{X}{1 - k r^2} \cong f_x\,\frac{r_1 x_w + r_2 y_w + r_3 z_w + T_x}{r_7 x_w + r_8 y_w + r_9 z_w + T_z}, \qquad (30)$$

$$\frac{Y}{1 - k r^2} \cong f_y\,\frac{r_4 x_w + r_5 y_w + r_6 z_w + T_y}{r_7 x_w + r_8 y_w + r_9 z_w + T_z}. \qquad (31)$$

This transformation can be made under the assumption that $k r^2 \ll 1$, i.e., $(1 + k r^2) \cong 1/(1 - k r^2)$ when $k r^2 \ll 1$. The first stage of the solution of Eqs. (30) and (31) determines the rotation matrix $R$ (Eq. (20)) and the translation components $T_y$ and $T_x$. The algorithm for this stage is found in Refs. [13,14].

The second stage is proposed here as

$$\begin{bmatrix} -X_i & x_i & -x_i r_i^2 \\ -Y_i & \mu y_i & -\mu y_i r_i^2 \end{bmatrix} \begin{bmatrix} T_z \\ f_x \\ k f_x \end{bmatrix} = \begin{bmatrix} X_i w_i \\ Y_i w_i \end{bmatrix}, \qquad (32)$$

where $x_i = r_1 x_{wi} + r_2 y_{wi} + T_x$, $y_i = r_4 x_{wi} + r_5 y_{wi} + T_y$, $w_i = r_7 x_{wi} + r_8 y_{wi}$, and $z_{wi}$ is made null (all calibration points are coplanar). The overdetermined linear system in Eq. (32) can be solved by a linear least-squares routine. The calibration points used to solve the system were on the border of a square grid of $9 \times 9$ circular points of 1 mm diameter, totalling 32 points. The grid measures approximately $200 \times 200\,\mathrm{mm}^2$. The angle between the calibration plane and the image plane should not be smaller than 30°, to avoid ill-conditioned solutions.
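The following sketch stacks the two rows of Eq. (32) for every calibration point and solves the overdetermined system by linear least squares. It is a minimal sketch under the stated assumptions (coplanar target; first-stage $R$, $T_x$, $T_y$ already known); the function and argument names are illustrative.

```python
import numpy as np

def rac_second_stage(X, Y, xw, yw, R, Tx, Ty, mu):
    """Solves Eq. (32) for (Tz, fx, k); X, Y, xw, yw are arrays over the
    calibration points, R is the rotation matrix of Eq. (20), zw = 0."""
    r1, r2, _, r4, r5, _, r7, r8, _ = R.ravel()
    xi = r1 * xw + r2 * yw + Tx
    yi = r4 * xw + r5 * yw + Ty
    wi = r7 * xw + r8 * yw
    r2i = mu * X**2 + Y**2            # r^2, with r as defined after Eq. (29)
    A = np.vstack([
        np.column_stack([-X, xi, -xi * r2i]),
        np.column_stack([-Y, mu * yi, -mu * yi * r2i]),
    ])
    rhs = np.concatenate([X * wi, Y * wi])
    (Tz, fx, kfx), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return Tz, fx, kfx / fx
```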

The second stage of the algorithm originally proposed by Tsai [14] was not a linear solution, as it was based on Eq. (29), with components only in the y direction. Tsai's argument was based on the fact that y components were not contaminated by the mismatch between the frame grabber and camera frequencies. However, it was observed experimentally during this research that, because of the various angles and distances at which the images of the target plate were acquired, a linear solution considering both directions (x and y) proved substantially more accurate than using only one direction. The improvement in accuracy of the solutions using Eq. (32) was evaluated by comparing three different images at different orientations and similar distances, and observing the focal length calculated from the second stage of the RAC algorithm using Eq. (29) (only y components), Eq. (30) (only x components), and Eq. (32) (both x and y components). Ideally, the focal length has to be the same for all images, whatever the orientation of the camera. The use of Eq. (32) resulted in considerably closer values of $f_x$ for the three images than using only one direction (x or y).

3.2. Calibration of the scale factors

A ratio of scale is defined as

$$\mu = \frac{f_y}{f_x} = \frac{s_v}{s_u}, \qquad (33)$$

and dividing Eq. (30) by Eq. (31) yields

$$\frac{X}{Y} = \mu^{-1}\,\frac{r_1 x_w + r_2 y_w + r_3 z_w + T_x}{r_4 x_w + r_5 y_w + r_6 z_w + T_y}. \qquad (34)$$

Two-step camera calibration techniques such as the RAC model can be used to determine the ratio of scale accurately if more than one plane of calibration points is used. This is accomplished by moving a single-plane calibration setup vertically with a z-stage [18]. However, if only one coplanar set of calibration points is to be used, pre-calibration of the horizontal scale factor is essential.

Based on the fact that $\mu$ can be determined by two-step calibration methods when non-coplanar, parallel planes are used [14], several images were obtained from different orientations and positions of the camera. This was accomplished while the camera was mounted on the robot hand during the robot calibration measurements. Using the RAC model, the residuals calculated from Eq. (34) [17,19] are

$$r_4 X_i x_{wi} + r_5 X_i y_{wi} + X_i T_y - r_1 Y_i x_{wi} - r_2 Y_i y_{wi} - Y_i T_x = 0, \qquad (35)$$


where $i$ is the index representing each calibration point in the image. Based on the fact that $\mu$ and the image center are quite independent of each other in the RAC model (a poor guess for one does not affect the determination of the optimal value of the other), the residuals found for each image were averaged, and $\mu$ was then searched for the value minimizing the average of the residuals. The solution found was $\mu = 0.9825$.
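A minimal sketch of this search is shown below, assuming a routine `first_stage` (hypothetical name) that re-runs the first-stage RAC solution of one image for a candidate $\mu$ and returns $(R, T_x, T_y)$; the image container fields are likewise illustrative. The same residual-averaging idea is what Section 3.3 reuses to locate the image center.

```python
import numpy as np

def eq35_residual(R, Tx, Ty, X, Y, xw, yw):
    """Mean absolute residual of Eq. (35) over the points of one image."""
    r1, r2, _, r4, r5, _, _, _, _ = R.ravel()
    res = (r4 * X * xw + r5 * X * yw + X * Ty
           - r1 * Y * xw - r2 * Y * yw - Y * Tx)
    return float(np.mean(np.abs(res)))

def search_mu(images, first_stage, candidates):
    """Returns the candidate mu minimizing the residual averaged over all
    images (the text reports mu = 0.9825 for the camera used)."""
    def avg_residual(mu):
        return np.mean([eq35_residual(*first_stage(img, mu),
                                      img.X, img.Y, img.xw, img.yw)
                        for img in images])
    return min(candidates, key=avg_residual)
```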

3.3. Calibration of the image center

The image center is defined as the frame buffer coordinates $(C_x, C_y)$ of the intersection of the optical axis with the image plane. It is usually used as the origin of the imaging process and appears in the perspective equations. In high-accuracy applications, it also serves as the center of radially modeled lens distortion [13]. The image center is defined from

$$C_x = X_f - X \quad \text{and} \quad C_y = Y_f - Y, \qquad (36)$$

where $(X_f, Y_f)$ are the computed image coordinates of an arbitrary point and $(C_x, C_y)$ are the computed image coordinates of the center $O_i$ in the image plane.

The RAC model holds true, and is independent of radial lens distortion, when the image center is chosen correctly. Otherwise, a residual exists and, unfortunately, the RAC is highly non-linear in terms of the image center coordinates.

The method devised here to find the image center was to search for the "best" image center, as an average over all images in a sequence of robot measurements, using the RAC residuals. This method could actually find the optimal image center of the model, which could easily be checked by calculating the overall robot errors after calibration. The tests with a coordinate milling machine (CMM) (explained further in the text) also showed that the determination of the image center by this method led to the best measurement accuracy in each sequence of constant camera orientation.

4. Measurement accuracy assessment

The evaluation of the accuracy obtained by the vision measurement system was carried out using a CMM. A CCD camera (Pulnix 6EX, 752 × 582 pixels) was fixed on the CMM's table and moved along pre-defined paths, as shown in Fig. 3. The calibration board was fixed in front of the camera, externally to the CMM, at a position that allowed the angles between the camera optical axis and the normal of the calibration plane to be higher than 30° (avoiding ill-conditioned solutions in the RAC model).

The CMM's table motion produced variations in the z, x and y coordinates of the target plate relative to the camera coordinate system. Fig. 4 shows the coordinate systems of the camera and the calibration board.

There were three different measurement sequences over the 25 camera positions illustrated by cross marks in Fig. 3. Each sequence was performed with the calibration board placed at a different distance and orientation from the camera. The distances were calculated using the photogrammetric model.

Once all the external parameters of the camera ($R$, $T_x$, $T_y$ and $T_z$) are determined for a particular camera pose in the camera coordinate system, it is necessary to establish an inverse transformation to determine the translation vector $(X_W, Y_W, Z_W)$ from any point (on the calibration board) represented in the world coordinate system to the camera optical center. This translation vector has to be represented in the x, y and z directions of the world coordinate system. As defined in Eq. (19), $R$ is the rotation matrix that aligns the world coordinate system with the camera coordinate system. From this, it is clear that

$$T = -R \begin{bmatrix} X_W + x_w \\ Y_W + y_w \\ Z_W + z_w \end{bmatrix}, \qquad (37)$$

Fig. 3. Path of camera positions on the CMM table during the evaluation of the measurement accuracy of the vision system.

Fig. 4. Diagram showing the coordinate systems and the experimental setup for the evaluation of the measurement system accuracy.


where $T$ is the translation vector from the camera optical center to the origin of the world coordinate system (represented in the camera coordinate system) and $(x_w, y_w, z_w)$ is any point of the world coordinate system. This definition agrees with Eq. (19). Multiplying both sides of Eq. (37) by $-R^{-1}$ and using Eq. (19) yields

$$\begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} = -R^{-1} \begin{bmatrix} x \\ y \\ z \end{bmatrix}, \qquad (38)$$

where $(x, y, z)$ is the representation of any point $(x_w, y_w, z_w)$ in the camera coordinate system, from Eq. (19). Thus, the vector $(X_W, Y_W, Z_W)$ is the distance from any reference point on the calibration board to the optical center, represented in the world coordinate system.

The orientation of the line from the reference point to the optical center (the target line), represented in the world coordinate system, can be obtained from the rotation matrix

$$R_W = -R^{-1}. \qquad (39)$$
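The sketch below applies Eqs. (19), (38) and (39) to recover, for one camera pose, the vector from a board reference point to the optical center and the target-line orientation, both in world coordinates. It is a minimal sketch with illustrative names; for an orthogonal $R$, the inverse equals the transpose.

```python
import numpy as np

def camera_pose_in_world(R, T, point_w):
    """Given the extrinsics R, T of Eq. (19) and a reference point on the
    board (world coordinates), returns (XW, YW, ZW) per Eq. (38) and the
    orientation matrix RW per Eq. (39)."""
    p_cam = R @ np.asarray(point_w, dtype=float) + np.asarray(T, dtype=float)
    R_inv = R.T                      # orthogonal rotation: inverse = transpose
    W = -R_inv @ p_cam               # Eq. (38)
    RW = -R_inv                      # Eq. (39)
    return W, RW
```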

The calibration of the remaining intrinsic parameters of the camera, $f_x$ and $k$, was performed using the first sequence of 25 camera positions, which was the one chosen to be closest to the target plate. For each image, the values of $f_x$ and $k$ calculated by the algorithm were recorded. Due to noise and geometric inaccuracies, each image yielded different values of $f_x$ and $k$. The average values of $f_x$ and $k$ calculated from the 25 images were 1523 and $7.9 \times 10^{-8}$, respectively; the standard deviations were 3.38 and $2.7 \times 10^{-9}$, respectively.

The average values of $f_x$ and $k$ were to be kept constant from then on and considered the "best" ones. To check this assumption, the distance travelled by the camera, calculated through the Euclidean norm of $(X_W, Y_W, Z_W)$ at each camera position, and the distance read from the CMM display were compared to each other. Errors were calculated as the difference between the distances travelled by the camera according to the two methods. It was then observed that the average value of $f_x$ did not minimize the errors.

Subsequently, an optimal value of $f_x$ was searched for, to minimize the errors explained above, by slightly changing the previous value. The value of $k$ did not show an important influence on the errors when changed slightly. The optimal values of $f_x$ and $k$ were checked in the same way for the other two measurement sequences, at different distances, and were shown to produce the best accuracy. The optimal values of $f_x$ and $k$ were found to be 1566 and $7.9 \times 10^{-8}$, and these were kept constant from then on.
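A sketch of this tuning loop is given below, assuming a hypothetical routine `recompute_W` that re-runs the photogrammetric model for one pose under a candidate $f_x$ and returns $(X_W, Y_W, Z_W)$; the CMM readings are the distances between consecutive positions.

```python
import numpy as np

def distance_error(fx, poses, cmm_distances, recompute_W):
    """Total mismatch between camera-computed travelled distances and the
    CMM readings, for a candidate focal constant fx."""
    W = np.array([recompute_W(pose, fx) for pose in poses])
    travelled = np.linalg.norm(np.diff(W, axis=0), axis=1)  # len(poses)-1 steps
    return float(np.sum(np.abs(travelled - cmm_distances)))

# Illustrative search around the averaged value (1523 in the text); the
# experiments reported an optimum near 1566.
# best_fx = min(np.linspace(1500, 1600, 201),
#               key=lambda fx: distance_error(fx, poses, cmm_distances,
#                                             recompute_W))
```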

Assessment of 3D measurement accuracy using a single camera is not straightforward. The method designed here to assess the measurement accuracy was to compare each component $X_W$, $Y_W$ and $Z_W$ corresponding to each camera position with the same vector calculated assuming an average value of the rotation matrix. This method is justified considering that, since there are no rotations between the camera and the calibration board during the tests, the rotation matrix must ideally be the same for each camera position. Since the vector $(X_W, Y_W, Z_W)$ depends on $R$ in Eq. (20) and on $T$ in Eq. (21), and since $T$ is calculated from $R$, all measurement errors would be accounted for in the rotation matrix, apart from the errors due to the ground accuracy of the calibration board. Of course, this assumption is valid only if another method, comparing the camera measurements to an external measurement system (in physical units), is used to validate it. That is, the accuracy calculated from the travelled distance validates the accuracy calculated from the rotation matrix. The first method does not consider each error component independently (x, y, z), but serves as a basis to optimize the camera parameters. The second method takes the 3D measurement error into account, assuming that the "correct" rotation matrix is the average of the ones calculated for each of the 25 positions. Table 1 shows the average, median and standard deviation of the 3D position error for each component in x, y and z, calculated from the rotation matrix error.

The maximum position errors for the three measurement sequences were 0.43, 0.97 and 0.72 mm, respectively.

Table 1
Average, median and standard deviation of the measurement system position accuracy for each coordinate axis, calculated as the position error using the average rotation matrix

Measurement accuracy (mm)

Sequence   Average (x / y / z)     Median (x / y / z)      Standard deviation (x / y / z)
1          0.13 / 0.11 / 0.07      0.10 / 0.10 / 0.09      0.10 / 0.09 / 0.06
2          0.18 / 0.30 / 0.12      0.13 / 0.26 / 0.11      0.12 / 0.20 / 0.09
3          0.17 / 0.20 / 0.16      0.10 / 0.18 / 0.13      0.17 / 0.11 / 0.13


Fig. 5 graphically shows the average, median and standard deviation values of the measurement system position accuracy, calculated as explained above, as a function of the average distance from the camera (optical center) to the central point of the calibration board.

5. Robot calibration and experimental results

Within the IRB-2400 robot workspace, three calibration regions were defined for data collection, each one with a different volume ($V_1$, $V_2$ and $V_3$). Fig. 6 represents the three regions within the robot workspace graphically. The results of the calculation of the average errors and their standard deviations in each region, before and after calibration, can be seen in the graphs shown in Fig. 7. To compare the calculated and measured data in different coordinate systems with each other, the robot base coordinate frame was moved to coincide with the world coordinate system at the measurement target plate. This procedure was carried out through a recalibration of the robot base in each region.

The results presented in Fig. 7 show that the average position errors after calibration were very close in value for the three calibration regions tested. Within the PUMA-500 robot workspace, two calibration regions were defined for data collection. In each region, three volumes were defined with different dimensions. Whenever two volumes in different regions had the same volume, they also had the same dimensions, whatever the region they were in.

Fig. 8 graphically represents all the regions and volumes within the PUMA-500 workspace. The results presented in Figs. 9 and 10 show that the average position errors before and after calibration were higher when the volumes were larger, for both regions tested. This robot was also calibrated locally, which means that the robot was recalibrated in each region.

Fig. 5. Measurement system 3D position accuracy (average, median and standard deviation) versus the average distance from the camera focal point to the central point of the target.

Fig. 6. Side and top view of the IRB-2400 robot workspace showing regions, volumes and their dimensions and locations.


Fig. 7. Average error and standard deviation calculated before and after calibration in each region.

Fig. 8. Side and top view of the PUMA-500 robot workspace showing regions, volumes and their dimensions and locations.

Fig. 9. Average error and standard deviation calculated before and after calibration in each volume in region 1.

Fig. 10. Average error and standard deviation calculated before and after calibration in each volume in region 2.


6. Conclusions

The proposed calibration system was shown to improve robot accuracy to well below 1 mm. The system allows a large variation in robot configurations, which is essential for proper calibration. The robot calibration approach proposed here proved to be a feasible alternative to the expensive and complex systems available on the market today, using a single camera and offering good accuracy together with ease of use and setup. The results showed that the RAC model used (with slight modifications) is not very robust, since even for images filling the entire screen and captured at approximately the same distances from the target, the focal length was not constant, and its average value was shifted by approximately 3% from the exact one. This amount of error can produce 3D measurement errors much larger than acceptable. Practically speaking, the solution developed here for a given camera and lens was to use an external measurement system to calibrate the camera, at least once. The measurement accuracy obtained is comparable to the best found in the academic literature for this type of system, with median accuracy values of approximately 1:3000 of the distance to the target. However, this accuracy was obtained at considerably larger distances and more varied camera orientations than usual camera applications require, making the system suitable for robotic metrology. Moreover, if a larger calibration board and/or a camera with higher resolution is used, the measurement accuracy can increase considerably.

References

[1] Bernhardt R. Approaches for commissioning time reduction. Ind Robot 1997;24(1):62–71.
[2] Albada GD, Visser A, Lagerberg JM, Hertzberger LO. A portable measuring system for robot calibration. Proceedings of the 28th international symposium on automotive technology and automation, dedicated conference on mechatronics and efficient computer support for engineering, 1995. p. 441–8.
[3] McKerrow PJ. Introduction to robotics. Singapore: Addison-Wesley, 1995.
[4] Paul RP. Robot manipulators: mathematics, programming, and control. Boston, MA: MIT Press, 1981.
[5] Motta JM, McMaster RS. Improving robot calibration results using modeling optimization. IEEE proceedings of the international symposium on industrial electronics, ISIE'97, 1997. p. SS291–6.
[6] Motta JM. Optimised robot calibration using a vision-based measurement system with a single camera. Ph.D. thesis, Cranfield University, UK, April 1999.
[7] Baker DR. Some topological problems in robotics. The Mathematical Intelligencer 1990;12(1):66–76.
[8] Schröer K, Albright SL, Grethlein M. Complete, minimal and model-continuous kinematic models for robot calibration. Robotics Comput Integrated Manufacturing 1997;13(1):73–85.
[9] Driels MR, Pathre US. Significance of observation strategy on the design of robot calibration experiments. J Robotic Systems 1990;7(2):197–223.
[10] Dennis JE, Schnabel RB. Numerical methods for unconstrained optimisation and non-linear equations. New Jersey: Prentice-Hall, 1983.
[11] Press WH, Teukolsky SA, Flannery BP, Vetterling WT. Numerical recipes in Pascal: the art of scientific computing. New York: Cambridge University Press, 1994.
[12] Prescott B, McLean GF. Line-based correction of radial lens distortion. Graphical Models and Image Processing 1997;59(1):39–47.
[13] Zhuang H, Roth ZS. Camera-aided robot calibration. Boca Raton: CRC Press, 1996.
[14] Tsai RY. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J Robotics Autom 1987;RA-3(4):323–44.
[15] Weng J, Cohen P, Herniou M. Camera calibration with distortion models and accuracy evaluation. IEEE Trans Pattern Anal Machine Intell 1992;14(10):965–80.
[16] Wolf PR. Elements of photogrammetry. Singapore: McGraw-Hill, 1983.
[17] Lenz RK, Tsai RY. Techniques for calibration of the scale factor and image centre for high accuracy 3D machine vision metrology. IEEE proceedings of the international conference on robotics and automation, 1987. p. 68–75.
[18] Zhuang H, Roth ZS. A linear solution to the kinematic parameter identification of robot manipulators. IEEE Trans Robotics Automation 1993;9(2):174–85.
[19] Zhuang H, Roth ZS, Xu X, Wang K. Camera calibration issues in robot calibration with eye-on-hand configuration. Robotics Comput Integrated Manufacturing 1993;10(6):401–12.