# Robot calibration using a 3D vision-based measurement system with a single camera


*Robotics and Computer Integrated Manufacturing* 17 (2001) 487–497

José Maurício S.T. Motta (a,\*), Guilherme C. de Carvalho (a), R.S. McMaster (b)

(a) Department of Mechanical Engineering, University of Brasilia, 70910-900 Brasília, DF, Brazil
(b) School of Industrial and Manufacturing Science, Cranfield University, Building 62, Cranfield MK43 0AL, England, UK

Received 1 March 2001; received in revised form 3 June 2001; accepted 27 August 2001

## Abstract

One of the problems that slows the development of off-line programming is the low static and dynamic positioning accuracy of robots. Robot calibration improves the positioning accuracy and can also be used as a diagnostic tool in robot production and maintenance. This work presents techniques for modeling and performing robot calibration processes with off-line programming using a 3D vision-based measurement system. The measurement system is portable, accurate and low cost, consisting of a single CCD camera mounted on the robot tool flange to measure the robot end-effector pose relative to a world coordinate system. Radial lens distortion is included in the photogrammetric model. Scale factors and image centers are obtained with innovative techniques, making use of a multiview approach. Results show that the achieved average accuracy using a common off-the-shelf CCD camera varies from 0.2 to 0.4 mm, at distances from 600 to 1000 mm from the target, respectively, with different camera orientations. Experimentation is performed on two industrial robots to test their position accuracy improvement using the calibration system proposed: an ABB IRB-2400 and a PUMA-500.
The robots were calibrated at different regions and volumes within their workspace, achieving accuracy from three to six times better when comparing errors before and after calibration, if measured locally. The proposed off-line robot calibration system is fast, accurate and easy to set up. © 2001 Elsevier Science Ltd. All rights reserved.

*Keywords:* Kinematic model; Robot calibration; Absolute accuracy; Camera calibration

## 1. Introduction

Most currently used industrial robots are still programmed by a teach pendant, especially in the automotive industry. However, the importance of off-line programming in industry as an alternative to teach-in programming is steadily increasing. The main reason for this trend is the need to minimize machine downtime and thus to improve the rate of robot utilization. A typical welding line with 30 robots and 40 welding spots per robot takes about 400 h for robot teaching [1]. Nonetheless, for a successful accomplishment of off-line programming, the robots need to be not only repeatable but also accurate.

Robot repeatability is unaffected by the method of programming, since it is due to random errors (e.g. the finite resolution of joint encoders). In contrast, the systematic errors in absolute position are almost entirely due to programming the robot off-line. One of the leading sources of lack of accuracy is the mismatch between the prediction made by the kinematic model and the actual system. Robot constant-pose errors are attributed to several sources, including errors in geometrical parameters (e.g. link lengths and joint offsets) and deviations which vary predictably with position (e.g. compliance or gear transmission errors) [1–3].

Robot static calibration is an integrated process of modeling, measurement, numerical identification of actual physical characteristics of a robot, and implementation of a new model.
The measurement step of the calibration process is necessary for the determination of the best-fit set of parameters to the actual robot. The types of measurement required comprise tool positions corresponding to joint positions. Since the fitting cannot be of better quality than that of the original data collection, it is very important to obtain a good set of data.

\* Corresponding author. Tel.: +55-61-368-7137; fax: +55-61-307-2978. E-mail address: jmmotta@unb.br (J.M.S.T. Motta).
0736-5845/01/$ – see front matter © 2001 Elsevier Science Ltd. All rights reserved. PII: S0736-5845(01)00024-2

Vision-based measurement systems using a single camera for robot calibration seem to be one of the trends in the development of robot calibration systems today [2], but the knowledge involved in this area is still restricted to research groups sponsored by large-budget companies, with little interest in publishing technical details. For this technology to become available and used extensively in industry, there is still a long way to go.

The objectives of this research were to investigate the theoretical aspects involved in robot calibration methods and systems, to develop a feasible low-cost vision-based measurement system using a single camera placed on a robot tool flange and, finally, to construct a prototype of a robot calibration system. Experimental results to assess the measurement system accuracy are presented. Results from experimentation on an ABB IRB-2400 and a PUMA-500 robot to test their accuracy improvement using the proposed calibration system are also shown.

## 2. Kinematic modeling and parameter identification

The first step in kinematic modeling is the proper assignment of coordinate frames to each link. Each coordinate system here is orthogonal, and its axes obey the right-hand rule. The first rule for assigning coordinate frames to joints is to make the z-axis coincident with the joint axis.
This convention is used by many authors and in many robot controllers [3,4].

The origin of the frame is placed as follows [5,6]: if the joint axes of a link intersect, the origin of the frame attached to the link is placed at the intersection of the joint axes; if the joint axes are parallel or do not intersect, the frame origin is placed at the distal joint; subsequently, if a frame origin is described relative to another coordinate frame by using more than one direction, it must be moved to make use of only one direction, if possible. Thus, the frame origins are described using the minimum number of link parameters.

The x- or the y-axis has its direction according to the convention used to parametrize the transformations between links. For perpendicular or parallel joint axes, the Denavit–Hartenberg or the Hayati modeling convention is used, respectively. The requirements of a singularity-free parameter identification model prevent the use of a single minimal modeling convention that can be applied uniformly to all possible robot geometries [7,8]. At this point, the homogeneous transformations between joints must already have been determined. The remaining axis (x or y) is determined by the right-hand rule.

The end-effector or tool-frame location and orientation are defined according to the controller conventions. Geometric parameters of length are defined with an index of joint and direction: the length $p_{ni}$ is the distance between coordinate frames $i-1$ and $i$, and $n$ is the parallel axis in coordinate system $i-1$. Fig. 1 shows the above rules applied to the IRB-2400 robot with all coordinate frames and geometric features.

The kinematic equation of the robot manipulator is obtained by consecutive homogeneous transformations from the base frame to the last frame.
Thus,

$$\hat{T}^0_N = \hat{T}^0_N(p) = T^0_1\, T^1_2 \cdots T^{N-1}_N = \prod_{i=1}^{N} T^{i-1}_i, \qquad (1)$$

where $N$ is the number of joints (or coordinate frames), $p = [p_1^T\; p_2^T \cdots p_N^T]^T$ is the parameter vector for the manipulator, and $p_i$ is the link parameter vector for joint $i$, including the joint errors. The exact link transformation $A^{i-1}_i$ is [9]

$$A^{i-1}_i = T^{i-1}_i + \Delta T_i, \qquad \Delta T_i = \Delta T_i(\Delta p_i), \qquad (2)$$

where $\Delta p_i$ is the link parameter error vector for joint $i$.

The exact manipulator transformation $\hat{A}^0_N$ is

$$\hat{A}^0_N = \prod_{i=1}^{N} \left( T^{i-1}_i + \Delta T_i \right) = \prod_{i=1}^{N} A^{i-1}_i. \qquad (3)$$

Fig. 1. Skeleton of the IRB-2400 robot with coordinate frames in the zero position and geometric variables for kinematic modeling (not to scale).

Thus,

$$\hat{A}^0_N = \hat{T}^0_N + \Delta\hat{T}, \qquad \Delta\hat{T} = \Delta\hat{T}(q, \Delta p), \qquad (4)$$

where $\Delta p = [\Delta p_1^T\; \Delta p_2^T \cdots \Delta p_N^T]^T$ is the manipulator parameter error vector and $q = [\theta_1^T, \theta_2^T, \ldots, \theta_N^T]^T$ is the vector of joint variables. It must be stated here that $\Delta\hat{T}$ is a non-linear function of the manipulator parameter error vector $\Delta p$.

Considering $m$, the number of measured positions, it can be stated that

$$\hat{A} = \hat{A}^0_N = \hat{A}(q, p), \qquad (5)$$

where $\hat{A} : \mathbb{R}^n \times \mathbb{R}^N$ is a function of two vectors of $n$ and $N$ dimensions, $n$ is the number of parameters and $N$ is the number of joints (including the tool).
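As a concrete sketch of the chain product in Eq. (1), the nominal link transforms $T^{i-1}_i$ can be built from Denavit–Hartenberg parameters and composed by matrix multiplication. This is a minimal illustration using the standard textbook DH form; the parameter values below are made up for a two-link planar arm, not the IRB-2400's.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform T^{i-1}_i.

    theta, alpha in radians; d, a in the robot's length units (e.g. mm).
    """
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params):
    """Eq. (1): T^0_N = prod_i T^{i-1}_i for a list of (theta, d, a, alpha)."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_params:
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Illustrative 2-link planar arm: both links 100 mm long, both joints at 0 rad,
# so the end frame sits 200 mm along x with no rotation.
T = forward_kinematics([(0.0, 0.0, 100.0, 0.0), (0.0, 0.0, 100.0, 0.0)])
```

The calibration model perturbs each such transform by the error term $\Delta T_i$ of Eq. (2); here only the nominal chain is shown.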
It follows that

$$\hat{\mathbf{A}} = \hat{\mathbf{A}}^0_N = \hat{\mathbf{A}}(q, p) = [\hat{A}(q_1, p), \ldots, \hat{A}(q_m, p)]^T : \mathbb{R}^n \times \mathbb{R}^{mN} \qquad (6)$$

and

$$\Delta\hat{\mathbf{T}} = \Delta\hat{\mathbf{T}}(q, \Delta p) = [\Delta\hat{T}(q_1, \Delta p), \ldots, \Delta\hat{T}(q_m, \Delta p)]^T : \mathbb{R}^n \times \mathbb{R}^{mN}. \qquad (7)$$

All matrices or vectors in bold are functions of $m$. The identification itself is the computation of those model parameter values $p^* = p + \Delta p$ which result in an optimal fit between the actual measured positions and those computed by the model, i.e., the solution of the non-linear equation system

$$\mathbf{B}(q, p^*) = \mathbf{M}(q), \qquad (8)$$

where $\mathbf{B}$ is a vector formed with position and orientation components of $\hat{\mathbf{A}}$ and

$$\mathbf{M}(q) = [M(q_1), \ldots, M(q_m)]^T \in \mathbb{R}^{fm} \qquad (9)$$

are all measured components, $f$ being the number of measurement equations provided by each measured pose. If orientation measurement can be provided by the measurement system, six measurement equations can be formulated per pose. If the measurement system can only measure position, each pose supplies data for three measurement equations, and $\mathbf{B}$ then includes only the position components of $\hat{\mathbf{A}}$.

When one is attempting to fit data to a non-linear model, the non-linear least-squares method arises most commonly, particularly in the case where $m$ is much larger than $n$ [10]. In this case, we have from Eqs. (2), (4) and (8):

$$\mathbf{B}(q, p^*) = \mathbf{M}(q) = \mathbf{B}(q, p) + \mathbf{C}(q, \Delta p), \qquad (10)$$

where $\mathbf{C}$ is the differential motion vector formed by the position and rotation components of $\Delta\hat{\mathbf{T}}$. From the definition of the Jacobian matrix and ignoring second-order products,

$$\mathbf{C}(q, \Delta p) = \mathbf{J}\, \Delta p \qquad (11)$$

and so

$$\mathbf{M}(q) - \mathbf{B}(q, p) = \mathbf{J}\, \Delta p. \qquad (12)$$

The following notation can be used:

$$\mathbf{b} = \mathbf{M}(q) - \mathbf{B}(q, p) \in \mathbb{R}^{fm}, \qquad (13)$$

$$\mathbf{J} = \mathbf{J}(q, \Delta p) \in \mathbb{R}^{fm \times n}, \qquad (14)$$

$$\mathbf{x} = \Delta p \in \mathbb{R}^{n}, \qquad (15)$$

$$\mathbf{r} = \mathbf{J}\mathbf{x} - \mathbf{b} \in \mathbb{R}^{fm}. \qquad (16)$$

Eq. (10) can then be solved by a non-linear least-squares method in the form

$$\mathbf{J}\mathbf{x} = \mathbf{b}. \qquad (17)$$

One method for solving non-linear least-squares problems that has proved very successful in practice, and is therefore recommended for general solutions, is the Levenberg–Marquardt algorithm (LM algorithm) [10].
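The damped least-squares iteration behind the LM algorithm can be sketched on a small synthetic problem. This is only an illustration: the system $\mathbf{J}\mathbf{x} = \mathbf{b}$ below is made up, and the multiplicative damping update is a common variant of Marquardt's rule, not necessarily the exact fixed-value schedule used in the paper.

```python
import numpy as np

def lm_step(J, b, x, mu):
    """One Levenberg-Marquardt update: x+ = x + (J^T J + mu I)^{-1} J^T b."""
    n = J.shape[1]
    dx = np.linalg.solve(J.T @ J + mu * np.eye(n), J.T @ b)
    return x + dx

def update_mu(mu, res_new, res_old, lam=4.0):
    """Raise the damping when the residual grows, lower it otherwise
    (a multiplicative variant of Marquardt's rule; lam stands in for
    the constant lambda of the text)."""
    return mu * lam if res_new >= res_old else mu / lam

# Synthetic overdetermined system standing in for Eq. (17):
# fm = 3 measurement equations, n = 2 parameters.
J = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([0.3, -0.1])            # "true" parameter error Delta p
b_target = J @ x_true                     # residual vector M(q) - B(q, p)

x = np.zeros(2)                           # initial guess Delta p = 0
mu = 0.001                                # Marquardt's suggested initial damping
res_old = np.linalg.norm(b_target - J @ x)
for _ in range(50):
    b = b_target - J @ x                  # residual at the current iterate
    x = lm_step(J, b, x, mu)
    res_new = np.linalg.norm(b_target - J @ x)
    mu = update_mu(mu, res_new, res_old)
    res_old = res_new
```

In the actual calibration, $\mathbf{J}$ and $\mathbf{b}$ would be recomputed from the kinematic model at each iterate rather than held fixed as in this linear toy problem.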
Several versions of the LM algorithm have been proved to be successful (globally convergent). From Eq. (17), the method can be formulated as

$$x_{j+1} = x_j + \left[ J(x_j)^T J(x_j) + \mu_j I \right]^{-1} J(x_j)^T\, b(x_j), \qquad (18)$$

where, following Marquardt's suggestion, $\mu_j = 0.001$ if $x_j$ is the initial guess; $\mu_j = \lambda\, 0.001$ if $\|b(x_{j+1})\| \geq \|b(x_j)\|$; $\mu_j = 0.001/\lambda$ if $\|b(x_{j+1})\| \leq \|b(x_j)\|$; and $\lambda$ is a constant valid in the range $2.5 < \lambda < 10$ [11].

## 3. Vision-based measurement system

There are basically two typical setups for vision-based robot calibration. The first is to fix cameras in the robot surroundings so that a camera can frame a calibration target mounted on the robot end-effector. The other setup is named hand-mounted camera robot calibration; it can use a single camera or a pair of cameras. A single moving camera presents the advantages of a large field-of-view with a potentially large depth-of-field, and considerably reduced hardware and software complexity of the system. On the other hand, a single-camera setup needs full camera re-calibration at each pose.

The goal of camera calibration is to develop a mathematical model of the transformation between world points and observed image points resulting from the image formation process. The parameters which affect this mapping can be divided into three categories [12,13]: (a) extrinsic (or external) parameters, which describe the relationship between the camera frame and the world frame, including position (three parameters) and orientation (three parameters); (b) intrinsic (or internal) parameters, which describe the characteristics of the camera, and include the lens focal length, pixel scale factors and the location of the image center; and (c) distortion parameters, which describe the geometric non-linearities of the camera. Some authors include the distortion parameters in the group of intrinsic parameters (e.g. Tsai [14] and Weng et al.
[15]). Distortion parameters may or may not be present in a model.

The algorithm developed here to obtain the camera parameters (intrinsic, extrinsic and distortion) is a two-step method based on the radial alignment constraint (RAC) model (see Ref. [14]). It involves a closed-form solution for the external parameters and the effective focal length of the camera. A second stage is then used to estimate three parameters: the depth component of the translation vector, the effective focal length, and the radial distortion coefficient. The RAC model is recognized as a good compromise between accuracy and simplicity, which means short processing time [13]. A few modifications were introduced here to the original RAC algorithm, and will be explained later.

### 3.1. RAC-based camera model

In Fig. 2, the world coordinate system is $\{x_w, y_w, z_w\}$ and the camera coordinate system is $\{x, y, z\}$. The origin of the camera coordinate system is centered at $O_c$, and the z-axis coincides with the optical axis. $(X, Y)$ is the image coordinate system at $O_i$ (the intersection of the optical axis with the front image plane) and is measured in pixels. $(u, v)$ are the analog coordinates of the object point in the image plane (usually measured in meters or millimeters). $(X, Y)$ lies on a plane parallel to the x and y axes.
$f$ is the distance between the front image plane and the optical center.

The rigid-body transformation from the object coordinate system $(x_w, y_w, z_w)$ to the camera coordinate system $(x, y, z)$ is

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = R \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} + T, \qquad (19)$$

where $R$ is the orthogonal rotation matrix which aligns the camera coordinate system with the object coordinate system,

$$R = \begin{bmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{bmatrix}, \qquad (20)$$

and $T$ is the translation vector

$$T = \begin{bmatrix} T_x \\ T_y \\ T_z \end{bmatrix}, \qquad (21)$$

which is the distance from the origin of the camera coordinate system to the origin of the object coordinate system, represented in the camera coordinate system. $(x_i, y_i, z_i)$ is the representation of any point $(x_{wi}, y_{wi}, z_{wi})$ in the camera coordinate system.

The distortion-free camera model, or "pinhole" model, assumes that every real object point is connected to its corresponding image point on the image plane by a straight line passing through the focal point of the camera lens, $O_c$. In Fig. 2, the undistorted image point of the object point, $P_u$, is shown.

The transformation from the 3D coordinates $(x, y, z)$ to the coordinates of the object point in the image plane follows the perspective equations below [16]:

$$u = f\,\frac{x}{z} \quad \text{and} \quad v = f\,\frac{y}{z}, \qquad (22)$$

where $f$ is the focal length (in physical units, e.g. millimeters).

The image coordinates $(X, Y)$ are related to $(u, v)$ by the following equations (see Refs. [13,14]):

$$X = s_u\, u \quad \text{and} \quad Y = s_v\, v, \qquad (23)$$

where $s_u$ and $s_v$ are scale factors accounting for TV sca…
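The chain of Eqs. (19), (22) and (23) — rigid transform, pinhole projection, pixel scaling — can be sketched in a few lines. This is a minimal distortion-free illustration: $R$, $T$, $f$ and the scale factors are made-up values, and the radial distortion term of the full RAC model is omitted.

```python
import numpy as np

def project(pw, R, T, f, su, sv):
    """World point -> pixel coordinates via Eqs. (19), (22) and (23)."""
    x, y, z = R @ pw + T            # Eq. (19): rigid transform to camera frame
    u, v = f * x / z, f * y / z     # Eq. (22): pinhole perspective projection
    return su * u, sv * v           # Eq. (23): analog -> pixel scale factors

# Illustrative values: camera axes aligned with the world frame,
# optical center 1000 mm from the target plane, f = 8 mm lens.
R = np.eye(3)
T = np.array([0.0, 0.0, 1000.0])    # mm
X, Y = project(np.array([10.0, 20.0, 0.0]), R, T, f=8.0, su=100.0, sv=100.0)
```

Camera calibration runs this mapping in reverse: given many observed $(X, Y)$ for known world points, it recovers $R$, $T$, $f$, the scale factors and the distortion coefficient.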
