


Error evaluation technique for three-dimensional digital image correlation

Zhenxing Hu,1 Huimin Xie,1,* Jian Lu,2 Huaixi Wang,1 and Jianguo Zhu1

1Applied Mechanical Laboratory, Department of Engineering Mechanics, Tsinghua University, Beijing 100084, China
2College of Science and Engineering, City University of Hong Kong, Hong Kong, China

*Corresponding author: [email protected]

Received 21 March 2011; revised 11 August 2011; accepted 7 September 2011; posted 21 September 2011 (Doc. ID 144350); published 16 November 2011

Three-dimensional (3D) digital image correlation (DIC) is one of the most popular techniques used in engineering for strain and deformation measurement. However, the error analysis of 3D DIC, especially which kind of parameters dominates the error of 3D coordinate reconstruction in any kind of configuration, is still under study. In this paper, a technique that can be used for error determination of reconstruction is presented. The influence of the system calibration and image correlation on the error is theoretically analyzed. Numerical experiments on a one-dimensional line and a two-dimensional plane validate the flexibility of the evaluation procedure. A typical test with standard objects is also conducted. With this technique, once a 3D DIC system is set up and images of objects with speckles and calibration boards are recorded, the error of the configuration can be immediately evaluated. The standard deviation of every point in the world coordinate system can be determined by statistical analysis. © 2011 Optical Society of America
OCIS codes: 100.2000, 330.1400, 120.4880.

1. Introduction

Three-dimensional (3D) digital image correlation (DIC), also known as stereo-based deformation measurement [1,2], is now widely used in experimental mechanics. It has been proven to be a full-field, robust, convenient, and powerful noncontact technique for measuring the displacement or strain field of a 3D object surface [3]. It is based on binocular stereo vision, recovering the 3D structure of a scene from two different viewpoints [4,5]. It is currently used prevalently in materials science, engineering mechanics [6,7], biomechanics [8], polymer forming [9], high-speed impact testing [10,11], fatigue and fracture testing [12], and many other fields [13].

However, the accuracy of 3D DIC depends on many factors: the quality of the cameras and their resolution, the configuration of the two cameras (the distance between them governs the triangulation accuracy), the accuracy of the stereo vision sensor calibration, and the accuracy of image stereo matching [3]. In other words, the accuracy of 3D DIC depends on the accuracy of its parameters: intrinsic parameters (focal length, principal point, skew, and distortion coefficients), extrinsic parameters (swing, tilt, and pan angle, and the relative distance of the two cameras), and image correlation errors.

Wang and coworkers [1,2,14] have studied error assessment in 3D DIC. Theoretically, the factors affecting the variance of the calibration and reconstruction results are presented. General equations are employed to model a simplified two-camera stereo vision configuration that can be solved explicitly. Wang used a simplified configuration to study how specific experimental parameters affect bias and variance in the reconstructed 3D position of points. The parameters that affect the error of 3D DIC can be divided into two categories: calibration errors and correlation errors [15–17]. Different calibration procedures [14,18–20] yield different levels of error in the parameters of binocular stereo

0003-6935/11/336239-09$15.00/0 © 2011 Optical Society of America

20 November 2011 / Vol. 50, No. 33 / APPLIED OPTICS 6239


vision. Bias in image correlation has been shown to be primarily a function of (a) the decimal part of the motion, (b) the interpolation method, (c) the matching subset size, (d) the intensity noise level, and (e) the sum of squared gradients of intensity [14,21]. Pan et al. [22] also proposed a theoretical model indicating that the displacement measurement accuracy of DIC can be accurately predicted from the variance of the image noise and the sum of squared subset intensity gradients (SSSIG).

In this paper, a technique for assessing the error of 3D DIC is presented. Theoretical prediction and numerical simulation are conducted to verify this technique. The technique is applied not only to the simplified configuration but also to a practical stereo vision configuration. The error evaluation technique is quite useful for determining the accuracy of 3D DIC once the configuration of the 3D DIC system is set up and the images of the object speckle are recorded.

2. Principle and Theory

The principle of 3D DIC was first proposed by Luo et al. [23] and has been presented in many studies [14,24]. The schematic diagram is shown in Fig. 1. Here is a brief introduction. A point P in world coordinates is expressed as P = [X_W, Y_W, Z_W, 1]^T. Suppose that the rotation matrix and translation vector of camera 1 relative to the world coordinate system are R_1 and T_1, respectively, with camera intrinsic matrix K_1. Then the pixel p_1 = [X_s1, Y_s1, 1]^T on the camera sensor and the world point P are related by (provided that the camera satisfies a pinhole model)

z_c1 p_1 = K_1 [ R_1  T_1 ; 0  1 ] P,    K_1 = [ f_x1  γ_1  u_01 ; 0  f_y1  v_01 ; 0  0  1 ],    (1)

where z_c1 is a scale factor. The rotation matrix R_1 is an expression of a_x1, a_y1, and a_z1, which represent the angles of the transformation between coordinate systems; more details can be found in [2,14]. K_1 is the camera intrinsic matrix, where (u_01, v_01) are the coordinates of the principal point and f_x1 and f_y1 are the scale factors in the image horizontal and vertical directions, respectively. The translation vector T_1 = [t_x1, t_y1, t_z1]^T gives the coordinates of the camera system in the world coordinate system.
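The pinhole projection of Eq. (1) can be sketched numerically. The intrinsic and extrinsic values below are illustrative only (loosely patterned on Table 1, not an actual calibration):

```python
import numpy as np

def project(K, R, T, P_world):
    """Project a 3D world point with the pinhole model of Eq. (1).
    K: 3x3 intrinsic matrix; R: 3x3 rotation; T: translation 3-vector.
    Returns the sensor coordinates (Xs, Ys)."""
    Pc = R @ P_world + T      # world frame -> camera frame
    p = K @ Pc                # camera frame -> homogeneous pixel coords
    return p[:2] / p[2]       # divide out the scale z_c

# Illustrative camera: world frame aligned with camera 1, 1000 mm away.
K1 = np.array([[12000.0,     0.0, 256.0],
               [    0.0, 12000.0, 256.0],
               [    0.0,     0.0,   1.0]])
R1 = np.eye(3)
T1 = np.array([0.0, 0.0, 1000.0])
print(project(K1, R1, T1, np.array([10.0, 0.0, 10.0])))
```

The printed sensor coordinates are f_x X/Z + u_0 and f_y Y/Z + v_0, i.e., the perspective division that z_c1 performs in Eq. (1).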

Similarly, the rotation and translation from the camera 1 to the camera 2 system can be written as R_0 and T_0, respectively. Thus, the second camera is related to world coordinates as

z_c2 p_2 = K_2 [ R_0  T_0 ; 0  1 ] [ R_1  T_1 ; 0  1 ] P = K_2 [ R_2  T_2 ; 0  1 ] P,    (2)

where R_2 and T_2 denote the rotation matrix and translation vector of camera 2, respectively.

In a 3D DIC system, the intrinsic and extrinsic parameters can be calibrated with numerous calibration algorithms [25], and the matching procedure can be realized by using the DIC method [26]. For the reconstruction procedure, simplifying the expressions and combining Eq. (1) with Eq. (2) gives

z_c1 [ X_s1 ; Y_s1 ; 1 ] = [ m^(1)_11  m^(1)_12  m^(1)_13  m^(1)_14 ; m^(1)_21  m^(1)_22  m^(1)_23  m^(1)_24 ; m^(1)_31  m^(1)_32  m^(1)_33  m^(1)_34 ] [ X_W ; Y_W ; Z_W ; 1 ],    (3)

z_c2 [ X_s2 ; Y_s2 ; 1 ] = [ m^(2)_11  m^(2)_12  m^(2)_13  m^(2)_14 ; m^(2)_21  m^(2)_22  m^(2)_23  m^(2)_24 ; m^(2)_31  m^(2)_32  m^(2)_33  m^(2)_34 ] [ X_W ; Y_W ; Z_W ; 1 ],    (4)

where m_ij is an element of the projection matrix, which combines all the parameters, and the superscripts (1) and (2) denote the left and right camera, respectively. Thus, combining Eqs. (3) and (4), the world coordinates are given by

(X_s1 m^(1)_31 − m^(1)_11) X_W + (X_s1 m^(1)_32 − m^(1)_12) Y_W + (X_s1 m^(1)_33 − m^(1)_13) Z_W = m^(1)_14 − X_s1 m^(1)_34,
(Y_s1 m^(1)_31 − m^(1)_21) X_W + (Y_s1 m^(1)_32 − m^(1)_22) Y_W + (Y_s1 m^(1)_33 − m^(1)_23) Z_W = m^(1)_24 − Y_s1 m^(1)_34,
(X_s2 m^(2)_31 − m^(2)_11) X_W + (X_s2 m^(2)_32 − m^(2)_12) Y_W + (X_s2 m^(2)_33 − m^(2)_13) Z_W = m^(2)_14 − X_s2 m^(2)_34,
(Y_s2 m^(2)_31 − m^(2)_21) X_W + (Y_s2 m^(2)_32 − m^(2)_22) Y_W + (Y_s2 m^(2)_33 − m^(2)_23) Z_W = m^(2)_24 − Y_s2 m^(2)_34.    (5)

Fig. 1. (Color online) Schematic diagram of the simplified stereo configuration; the rotation and translation from the world coordinate system to camera 2 are calculated from the transform matrices of the world coordinate system to camera 1 and of camera 1 to camera 2.


From Eq. (5), it can be seen that X_W, Y_W, and Z_W are functions of all the parameters and can be solved by the least-squares method. Although the solution of Eq. (5) is simple, the explicit expression is complicated. Up to now, the lens distortions of the cameras have not been taken into consideration. The distortion coefficients are usually obtained during camera calibration [20,27–31]. In Eq. (5), the stereo-matched points (X_s1, Y_s1) and (X_s2, Y_s2) should first be corrected by a distortion model [20]. Thus, the expressions of X_W, Y_W, and Z_W are as follows:
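The least-squares solution of Eq. (5) can be sketched directly: stack the four linear equations and solve for the three world coordinates. The projection matrices below are made up purely for the round-trip check:

```python
import numpy as np

def triangulate(M1, M2, p1, p2):
    """Solve the four linear equations of Eq. (5) for (XW, YW, ZW) by
    least squares.  M1, M2: 3x4 projection matrices; p1, p2: matched
    sensor coordinates (Xs1, Ys1) and (Xs2, Ys2)."""
    A, b = [], []
    for M, (xs, ys) in ((M1, p1), (M2, p2)):
        A.append(xs * M[2, :3] - M[0, :3]); b.append(M[0, 3] - xs * M[2, 3])
        A.append(ys * M[2, :3] - M[1, :3]); b.append(M[1, 3] - ys * M[2, 3])
    return np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0]

# Synthetic check: project a known point with made-up cameras, then recover it.
K = np.array([[6000.0, 0.0, 256.0], [0.0, 6000.0, 256.0], [0.0, 0.0, 1.0]])
Rt1 = np.hstack([np.eye(3), [[0.0], [0.0], [1000.0]]])
c, s = np.cos(0.3), np.sin(0.3)
R2 = np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])
Rt2 = np.hstack([R2, [[300.0], [0.0], [1000.0]]])
M1, M2 = K @ Rt1, K @ Rt2
P = np.array([5.0, -3.0, 20.0])
h1 = M1 @ np.append(P, 1.0); p1 = h1[:2] / h1[2]
h2 = M2 @ np.append(P, 1.0); p2 = h2[:2] / h2[2]
print(triangulate(M1, M2, p1, p2))   # recovers ≈ [5, -3, 20]
```

With noise-free matches the overdetermined 4 × 3 system is consistent and the reconstruction is exact; with noisy matches, `lstsq` returns the minimum-residual solution.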

X_W = X_W(X_s1, Y_s1, X_s2, Y_s2, f_x1, f_y1, s_1, c_x1, c_y1, κ_1, κ_2, α_x1, α_y1, α_z1, t_x1, t_y1, t_z1, f_x2, f_y2, s_2, c_x2, c_y2, κ_3, κ_4, α_x2, α_y2, α_z2, t_x2, t_y2, t_z2),
Y_W = Y_W(X_s1, Y_s1, X_s2, Y_s2, f_x1, f_y1, s_1, c_x1, c_y1, κ_1, κ_2, α_x1, α_y1, α_z1, t_x1, t_y1, t_z1, f_x2, f_y2, s_2, c_x2, c_y2, κ_3, κ_4, α_x2, α_y2, α_z2, t_x2, t_y2, t_z2),
Z_W = Z_W(X_s1, Y_s1, X_s2, Y_s2, f_x1, f_y1, s_1, c_x1, c_y1, κ_1, κ_2, α_x1, α_y1, α_z1, t_x1, t_y1, t_z1, f_x2, f_y2, s_2, c_x2, c_y2, κ_3, κ_4, α_x2, α_y2, α_z2, t_x2, t_y2, t_z2),    (6)

where κ_1, κ_2, κ_3, and κ_4 are the distortion coefficients in the radial and tangential directions of camera 1 and camera 2. In a 3D DIC system, the world coordinate system can be set arbitrarily, which means R_1 and T_1 are error free; thus R_1, T_1 and R_2, T_2 can be replaced by the transformation between camera 1 and camera 2 in Eq. (6). As a further simplification, Eq. (6) reduces to

X_W = X_W(X_s1, Y_s1, X_s2, Y_s2, f_x1, f_y1, s_1, c_x1, c_y1, κ_1, κ_2, f_x2, f_y2, s_2, c_x2, c_y2, κ_3, κ_4, α_x0, α_y0, α_z0, t_x0, t_y0, t_z0),
Y_W = Y_W(X_s1, Y_s1, X_s2, Y_s2, f_x1, f_y1, s_1, c_x1, c_y1, κ_1, κ_2, f_x2, f_y2, s_2, c_x2, c_y2, κ_3, κ_4, α_x0, α_y0, α_z0, t_x0, t_y0, t_z0),
Z_W = Z_W(X_s1, Y_s1, X_s2, Y_s2, f_x1, f_y1, s_1, c_x1, c_y1, κ_1, κ_2, f_x2, f_y2, s_2, c_x2, c_y2, κ_3, κ_4, α_x0, α_y0, α_z0, t_x0, t_y0, t_z0).    (7)

Assume the reference sensor plane location is error free; then (X_s1, Y_s1), the center-point pixel location of the reference subset in camera 1, is "exact," and (X_s2, Y_s2) denotes the center-point pixel location of the optimally matched subset in camera 2. The error of X_W, Y_W, and Z_W is then affected by the accuracy of the other 22 parameters.
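The distortion correction of the matched sensor points (applied before they enter Eq. (5)) can be sketched as follows. This assumes the common two-coefficient radial model x_d = x_u (1 + κ_1 r² + κ_2 r⁴) in normalized coordinates; the paper's exact model follows its Ref. [20], so treat this as an approximation:

```python
import numpy as np

def undistort(xd, yd, fx, fy, cx, cy, k1, k2, n_iter=10):
    """Remove radial lens distortion from a measured pixel (xd, yd) by
    fixed-point iteration of the two-coefficient radial model."""
    xn = (xd - cx) / fx; yn = (yd - cy) / fy   # normalized distorted coords
    xu, yu = xn, yn
    for _ in range(n_iter):                    # invert the forward model
        r2 = xu * xu + yu * yu
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xn / factor, yn / factor
    return cx + fx * xu, cy + fy * yu

# Round-trip check with made-up intrinsics: distort, then undistort.
fx = fy = 3000.0; cx = cy = 256.0; k1, k2 = -0.2, 0.05
xu, yu = 0.1, -0.05                            # true normalized coords
r2 = xu * xu + yu * yu
f = 1.0 + k1 * r2 + k2 * r2 * r2
xd, yd = cx + fx * xu * f, cy + fy * yu * f    # synthetically distorted pixel
print(undistort(xd, yd, fx, fy, cx, cy, k1, k2))   # ≈ (556.0, 106.0)
```

For mild distortion the fixed-point map contracts quickly, so a handful of iterations recovers the undistorted pixel to well below the matching accuracy.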

Let u = U(θ) stand for Eq. (7), where θ is the m-dimensional vector of variables in Eq. (7), U represents the function of Eq. (7), and u is the reconstructed 3D coordinate vector.

A flow chart of the numerical error simulation and theoretical analysis is illustrated in Fig. 2.

Numerically, we test the error sensitivity of the method by Monte Carlo simulation. First, the parameters of the binocular stereo vision system are selected, and zero-mean Gaussian noise with standard deviation (SD) σ(θ_i) is generated for each parameter θ_i; the variance is V(θ) = σ²(θ). The normally distributed pseudorandom noise R_θi is then added to the corresponding parameter. In the simulation, the SD is Std_XW = σ(X_W), where X_W here denotes the series of reconstructed coordinates obtained with Gaussian random noise added to each specific parameter about its expectation value.
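The Monte Carlo step can be sketched generically; the mapping U below is a toy stand-in for the reconstruction function of Eq. (7), used only to show the procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_sd(U, theta_mean, theta_sd, n_trials=4000):
    """Estimate the SD of the reconstructed coordinates u = U(theta) by
    adding zero-mean Gaussian noise (SD theta_sd[i]) to each parameter
    and collecting the scattered reconstructions."""
    theta_mean = np.asarray(theta_mean, float)
    noise = rng.normal(0.0, theta_sd, size=(n_trials, len(theta_mean)))
    samples = np.array([U(theta_mean + dn) for dn in noise])
    return samples.std(axis=0)

# Toy stand-in for Eq. (7): any smooth mapping from parameters to coordinates.
def U(theta):
    x, z = theta
    return np.array([x * z, x + z])

sd = monte_carlo_sd(U, theta_mean=[2.0, 5.0], theta_sd=[0.01, 0.02])
print(sd)   # first-order prediction: [hypot(5*0.01, 2*0.02), hypot(0.01, 0.02)]
```

The same routine applies unchanged to the real 22-parameter reconstruction function: only U, the mean values, and the SD list change.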

Theoretically, taking the partial derivatives of the reconstructed coordinate u with respect to θ, the Jacobian matrix J can be calculated numerically; then

V(u) ≅ J · V(θ) · J^T,    J_ij = ∂u_i / ∂θ_j,    (8)

where J is a 3 × 22 matrix, V denotes the variance of a vector, and J_ij denotes the element in the ith row and jth column of the Jacobian matrix. The variance of the reconstructed coordinate caused by the varied parameters can be calculated with Eq. (8). In the theoretical derivation, the variances of the parameters are independent; assume each parameter follows a Gaussian distribution with a mean error of zero. Since the expected position of the 3D point in the world coordinate system is determined by the expectation values of all parameters, the partial derivative of the world coordinate with respect to every parameter can be obtained numerically, J_ij = Δu_i / Δθ_j.
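A sketch of the first-order propagation of Eq. (8), with the Jacobian built by central differences; the mapping U is a toy stand-in for Eq. (7), and V(θ) is diagonal because the parameters are assumed independent:

```python
import numpy as np

def propagate_variance(U, theta, sd, h=1e-6):
    """First-order error propagation of Eq. (8): build J_ij = du_i/dtheta_j
    by central differences, then V(u) ≈ J V(theta) J^T.
    Returns the SD of each reconstructed coordinate."""
    theta = np.asarray(theta, float)
    u0 = np.asarray(U(theta))
    J = np.zeros((u0.size, theta.size))
    for j in range(theta.size):
        dt = np.zeros_like(theta); dt[j] = h
        J[:, j] = (np.asarray(U(theta + dt)) - np.asarray(U(theta - dt))) / (2 * h)
    V_theta = np.diag(np.asarray(sd, float) ** 2)   # independent parameters
    V_u = J @ V_theta @ J.T
    return np.sqrt(np.diag(V_u))

# Toy stand-in mapping from parameters to reconstructed coordinates.
def U(theta):
    x, z = theta
    return np.array([x * z, x + z])

print(propagate_variance(U, [2.0, 5.0], [0.01, 0.02]))
```

Agreement between this theoretical SD and the Monte Carlo SD is exactly the consistency check the paper uses to validate the technique.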

In this section, we have presented a procedure for analyzing the SD errors of the reconstructed coordinate contributed by every parameter. A flow procedure for simulation and theoretical prediction has also been presented.

3. Numerical Simulation

A. Simplified Configuration Analysis

In a simplified configuration, the parameters of Table 1 are chosen. Gaussian noise with zero mean and a known SD is added to each parameter; the SD values are listed next to each specific parameter in Table 1. The SDs of the extrinsic parameters a_x0, a_z0, t_y0, and t_z0 are zero; a_y0 is π/3 with SD 0.0001, and t_x0 is 500 with SD 0.001.

Based on these parameters, the SD results of the reconstructed X_W, Y_W, and Z_W can be obtained. Because X_W and Y_W are 3D coordinates in a plane and the resultant SD obeys a symmetric distribution, Y_W is assumed to be zero. We generate 4000 groups of these parameters obeying a Gaussian normal distribution with the specific SDs listed in Table 1. X_W is in the range (−10 mm, 10 mm), and all Z_W are 10 mm in the world coordinate system. The partial derivatives of X_W and Z_W with respect to all these parameters can be calculated numerically. Each world coordinate is projected onto the exact position on camera sensors 1 and 2, and the exact position on camera sensor 2 is modified to include random variability. Based on the literature on image correlation accuracy [26,32,33], assume that the accuracy of image matching in the horizontal and vertical directions is 0.005 pixels. Figure 3 shows the SD results of X_W and Z_W affected only by image matching errors, by simulation and theoretical prediction. From Fig. 3, we can see that the in-plane accuracy is much higher than the out-of-plane accuracy in this configuration. The good agreement between the simulation and theory reveals the feasibility of the technique.

Figure 4 shows the SD results of X_W and Z_W caused by all parameters and by the pan angle and image correlation error alone. From the results, the pan angle has a great influence on the error of the reconstructed coordinates in this configuration.

B. 3D DIC System

In Subsection 3.A, the SD results of a line in theworld coordinate system are obtained. In this

Fig. 2. Procedure for comparing simulation and theoretical results.

Table 1. Intrinsic Parameters of 3D DIC System

System Parameters   Camera 1              System Parameters   Camera 2
                    Mean Value   SD                           Mean Value   SD
f_x1 (pixel)        12000        0.1      f_x2 (pixel)        12000        0.1
f_y1 (pixel)        12000        0.1      f_y2 (pixel)        12000        0.1
C_x1 (pixel)        256          0.01     C_x2 (pixel)        256          0.01
C_y1 (pixel)        256          0.01     C_y2 (pixel)        256          0.01
s_1                 0            0        s_2                 0            0
κ_1 (pixel^−2)      0            0        κ_3 (pixel^−2)      0            0
κ_2 (pixel^−4)      0            0        κ_4 (pixel^−4)      0            0

Fig. 3. (Color online) SD of X_W and Z_W caused only by image matching error, in simulation and theory.


subsection, the errors of points on the plane Z_W ≡ 5 mm are calculated with the same parameters and the same SDs; only the distortion coefficients differ. The mean values of κ_1, κ_2, κ_3, and κ_4 are 0.08 pixel^−2, −0.05 pixel^−4, −0.06 pixel^−2, and 0.06 pixel^−4, respectively, and their SD values are 8.0 × 10^−5, 1.4 × 10^−4, 7.0 × 10^−4, and 1.2 × 10^−4, respectively. Figure 5 shows the SD results of the reconstructed coordinates by simulation and theoretical prediction. For the errors obtained over a plane in the 3D world coordinate system, the theoretical prediction has a distribution identical to the simulation results. From the results, the errors of the extrinsic parameters a_y0 and t_x0 dominate the accuracy of 3D DIC reconstruction. Thus, the error evaluation technique for 3D DIC is quite fundamental in this system.

4. Numerical Experiment

In this section, a pair of 3D speckle images is generated by using a 3D speckle image generating algorithm [34]. The intrinsic parameters are chosen as shown in Table 2. The extrinsic parameters a_x0, a_z0, and t_y0 are zero, a_y0 is 0.3840 rad, t_x0 is 539.318, and t_z0 is −132.426. Once the parameters are determined, the 3D speckle images can be generated as shown in Fig. 6. The speckles on a cylinder with a radius of 40 mm are recorded; the black field is the empty area of the cylinder viewed from the two camera position angles, and the rectangle in Fig. 6(a) is the

Fig. 4. (Color online) SD of X_W and Z_W caused by the accuracy of all parameters and by the pan angle and image matching alone.

Fig. 5. (Color online) SD results of simulation (top) and theoretical prediction (bottom) of X_W (left), Y_W (middle), and Z_W (right).


area of interest. The size of the images is 512 pixels × 512 pixels.

With this technique, when the configuration of the 3D DIC system is set up, the extrinsic and intrinsic parameters can be obtained by using a camera calibration algorithm. Once speckle patterns are captured, two theoretical models can predict the displacement measurement error of DIC. One was proposed by Pan et al. [22], using the variance of the image noise and the SSSIG. The other was proposed by Wang and coworkers [14,21]. The main difference between these two models is that the latter takes into account more factors that affect the accuracy of image correlation, including the interpolation method, subpixel motion, and intensity contrast. Wang et al.'s technique [21] can be applied to evaluate the variance of the displacement of image correlation. Here is a brief introduction. The technique is realized in three steps: first, two image correlations are calculated by bilinear interpolation with a least-squares correlation coefficient function and iteration by the Levenberg–Marquardt method. Second, once the correlation fields are obtained, the intensity gradients can be calculated by using Eq. 34 in [21]. Finally, the SD in image matching can be predicted.
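The first model's prediction has the form σ_u ≈ √2 · σ_noise / √SSSIG. A hedged sketch follows; the precise constants and the gradient estimator used by Pan et al. are those of their Ref. [22], so treat this as an approximation rather than their implementation:

```python
import numpy as np

def predicted_matching_sd(subset, noise_sd):
    """Predict the DIC displacement SD from image noise and the sum of
    squared subset intensity gradients (SSSIG), using the approximate
    form sd(u) ≈ sqrt(2)·sigma / sqrt(SSSIG).  `subset` is a 2D array
    of gray values; gradients are estimated by central differences."""
    gx = (subset[:, 2:] - subset[:, :-2]) / 2.0   # d(intensity)/dx
    gy = (subset[2:, :] - subset[:-2, :]) / 2.0   # d(intensity)/dy
    sd_u = np.sqrt(2.0) * noise_sd / np.sqrt((gx ** 2).sum())
    sd_v = np.sqrt(2.0) * noise_sd / np.sqrt((gy ** 2).sum())
    return sd_u, sd_v

# Synthetic speckle-like subset (illustrative only).
rng = np.random.default_rng(1)
subset = rng.uniform(0, 255, size=(31, 31))
print(predicted_matching_sd(subset, noise_sd=8.0))
```

The formula captures the key behavior exploited in this section: the predicted matching SD grows linearly with image noise and falls as the subset's intensity contrast (SSSIG) increases.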

Gaussian noise with a mean value of zero and an SD of 8 is added to the original 8 bit gray-scale image. One hundred images with the same level of noise are generated. In this simulation test, the SD of image correlation is obtained by correlating one left image with the 100 right images. The SD results of image correlation in the horizontal and vertical fields are illustrated in Fig. 7.

Because X_s1 and Y_s1 are error free, the theoretical SD of the image correlation is the SD of X_s2 and Y_s2. Assume the other parameters are all error free; then, using the theory presented in Section 2, the theoretical SDs of X_W, Y_W, and Z_W can be obtained.

The actual SDs of X_W, Y_W, and Z_W are obtained from the reconstructed coordinates of each of the 100 pairs of images. As shown in Figs. 8(a)–8(c), top (numerical simulation) and bottom (theoretical prediction), the numerical results are in good accordance with the theoretical prediction.

5. Experiment

A typical 3D DIC experiment is conducted in this section using three standard cylinders. The CCD cameras are AVT Oscar F-810C (Germany), the lenses are Computar F 25 mm (Japan), and the image size is 1088 pixels × 322 pixels. After several calibrations, the mean values and SDs can be obtained as shown in Tables 3 and 4, which list the intrinsic and extrinsic parameters, respectively. Figures 9(a) and 9(b) show the three standard cylinders from

Table 2. Camera Parameters Used for Numerical Simulation

System Parameters   Camera 1   System Parameters   Camera 2
f_x1 (pixel)        6020       f_x2 (pixel)        6057
f_y1 (pixel)        6010       f_y2 (pixel)        6049
C_x1 (pixel)        256        C_x2 (pixel)        255
C_y1 (pixel)        259        C_y2 (pixel)        257
s_1                 0          s_2                 0
κ_1 (pixel^−2)      0          κ_3 (pixel^−2)      0
κ_2 (pixel^−4)      0          κ_4 (pixel^−4)      0

Fig. 6. (Color online) Pair of 3D speckle images of a cylinder: (a) view from camera 1, (b) view from camera 2.

Fig. 7. (Color online) SD in the (a) horizontal and (b) vertical fields obtained by correlating one left image with 100 right images.

Fig. 8. (Color online) SD of world coordinates by numerical simulation (top) and theory (bottom): (a) SD of X_W, (b) SD of Y_W, (c) SD of Z_W.


the left and right views. In order to acquire the SD results of image correlation, the field of view is captured 60 times simultaneously. The diameters of the cylinders are shown at the top of Fig. 9(a). The rectangle in Fig. 9(a) is the area of interest.

Considering the left camera coordinate system as the world coordinate system, the shape reconstruction of the cylinders is shown in Fig. 10.

Using the mean values of all the camera parameters and the image correlation, the expectation of the reconstructed coordinates can be obtained, and thus the partial derivatives with respect to all parameters. The SD of image correlation is obtained by correlating one left image with the 60 right images. The SD values of the intrinsic and extrinsic parameters are shown in Tables 3 and 4. Thus, the theoretical SD values can be calculated for this situation, and the result is shown in Fig. 11.

From these results, the SDs of the camera calibration parameters (such as the principal point and focal length) have a significant influence on the SD values of the coordinate reconstruction, whereas the image correlation errors can be neglected in this configuration. In the experiment, the SD of the reconstructed world coordinate point can be evaluated once the configuration is set up.

Table 4. Optimal Extrinsic Camera Parameter Estimates

Extrinsic Parameters   Transformation from Camera 1 to Camera 2
                       Mean Value      SD
a_x0                   0.828 (deg)     0.0263
a_y0                   7.991 (deg)     0.0212
a_z0                   1.774 (deg)     0.0201
t_x0                   −50.070 (mm)    0.0324
t_y0                   5.094 (mm)      0.0157
t_z0                   0.854 (mm)      0.0252
Baseline               50.337          0.0303

Fig. 9. (Color online) (a) Left view and (b) right view of cylinders.

Fig. 10. (Color online) Shape of the three cylinders (one of the results of 60 reconstructions).

Fig. 11. (Color online) SD values (in millimeters) of the three cylinders: (a) left, (b) middle, and (c) right in this configuration.

Table 3. Mean Value and SD of Each Parameter

System Parameters   Camera 1               System Parameters   Camera 2
                    Mean Value   SD                            Mean Value   SD
f_x1 (pixel)        3186.078     1.569     f_x2 (pixel)        3209.071     1.368
f_y1 (pixel)        3186.078     1.569     f_y2 (pixel)        3209.071     1.368
C_x1 (pixel)        493.624      2.836     C_x2 (pixel)        480.301      2.970
C_y1 (pixel)        325.840      8.076     C_y2 (pixel)        304.786      9.336
s_1                 0            0         s_2                 0            0
κ_1 (pixel^−2)      −0.229       1.231e−3  κ_3 (pixel^−2)      −0.220       1.110e−3
κ_2 (pixel^−4)      0.413        1.150e−3  κ_4 (pixel^−4)      −0.679       1.033e−3


6. Conclusion

In this paper, a technique for evaluating the error of a 3D DIC system is presented and verified by theoretical derivation and numerical simulation. It is applied to evaluate the SD of reconstructed world coordinates by using generated 3D speckle images. The theoretical prediction of the SD of the reconstructed coordinates is in good accordance with the numerical results. The error evaluation technique is also applied to the measurement of standard cylinders. The technique can be readily applied to a practical 3D DIC test once the system is set up.

In brief, the procedure of this error evaluation technique is as follows. First, when the configuration of the 3D DIC system is set up, the mean value and SD of the intrinsic and extrinsic parameters can be obtained from repeated calibration. Second, many images of the objects can be acquired from the left and right views, from which the SD of image correlation can be calculated. Finally, based on the theoretical prediction model, the SD of the reconstruction error can be evaluated.

The error evaluation technique can be extended to practical measurements to evaluate the error level of any kind of 3D DIC configuration.

The authors are grateful for the financial support of the National Basic Research Program of China ("973" Project) (grants 2010CB631005, 2011CB606105), the National Natural Science Foundation of China (NSFC) (grants 10732080, 90916010), and the Specialized Research Fund for the Doctoral Program of Higher Education (grant 20090002110048).

References
1. X. D. Ke, H. Schreier, M. Sutton, and Y. Wang, "Error assessment in stereo-based deformation measurements," Exp. Mech. 51, 423–441 (2011).
2. Y. Q. Wang, M. Sutton, X. D. Ke, H. Schreier, P. Reu, and T. Miller, "On error assessment in stereo-based deformation measurements," Exp. Mech. 51, 405–422 (2011).

3. J.-J. Orteu, "3-D computer vision in experimental mechanics," Opt. Lasers Eng. 47, 282–291 (2009).
4. Y. B. Guo, Y. Yao, and X. G. Di, "Research on structural parameter optimization of binocular vision measuring system for parallel mechanism," in Proceedings of the IEEE International Conference on Mechatronics and Automation (IEEE, 2006), pp. 1131–1135.
5. K. Zhang, B. Xu, L. X. Tang, and H. M. Shi, "Modeling of binocular vision system for 3D reconstruction with improved genetic algorithms," Int. J. Adv. Manuf. Technol. 29, 722–728 (2006).
6. L. Robert, F. Nazaret, T. Cutard, and J. J. Orteu, "Use of 3-D digital image correlation to characterize the mechanical behavior of a fiber reinforced refractory castable," Exp. Mech. 47, 761–773 (2007).
7. M. J. McGinnis, S. Pessiki, and H. Turker, "Application of three-dimensional digital image correlation to the core-drilling method," Exp. Mech. 45, 359–367 (2005).

8. M. A. Sutton, X. Ke, S. M. Lessner, M. Goldbach, M. Yost, F. Zhao, and H. W. Schreier, "Strain field measurements on mouse carotid arteries using microscopic three-dimensional digital image correlation," J. Biomed. Mater. Res. A 84, 178–190 (2008).
9. P. Compston, M. Styles, and S. Kalyanasundaram, "Low energy impact damage modes in aluminum foam and polymer foam sandwich structures," J. Sandw. Struct. Mater. 8, 365–379 (2006).
10. F. Barthelat, Z. Wu, B. C. Prorok, and H. D. Espinosa, "Dynamic torsion testing of nanocrystalline coatings using high-speed photography and digital image correlation," Exp. Mech. 43, 331–340 (2003).
11. V. Tiwari, M. Sutton, and S. McNeill, "Assessment of high speed imaging systems for 2D and 3D deformation measurements: methodology development and validation," Exp. Mech. 47, 561–579 (2007).
12. M. A. Sutton, J. Yan, X. Deng, C.-S. Cheng, and P. Zavattieri, "Three-dimensional digital image correlation to quantify deformation and crack-opening displacement in ductile aluminum under mixed-mode I/III loading," Opt. Eng. 46, 051003 (2007).
13. B. Pan, H. Xie, L. Yang, and Z. Wang, "Accurate measurement of satellite antenna surface using 3D digital image correlation technique," Strain 45, 194–200 (2009).

14. Y. Wang, "Error assessment in 3D computer vision," in Theses and Dissertations (University of South Carolina, 2010).
15. T. Siebert, T. Becker, K. Spiltthof, I. Neumann, and R. Krupka, "Error estimations in digital image correlation technique," Appl. Mech. Mater. 7–8, 265–270 (2007).
16. T. Becker, K. Splitthof, T. Siebert, and P. Kletting, "Error estimations of 3D digital image correlation measurements," Proc. SPIE 6341, 63410F (2006).
17. M. Whelan, E. Hack, T. Siebert, R. Burguete, E. A. Patterson, and Q. Saleem, "On the calibration of optical full-field strain measurement systems," Appl. Mech. Mater. 3–4, 397–402 (2005).
18. R. Y. Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE J. Robot. Autom. 3, 323–344 (1987).
19. J. Heikkila and O. Silven, "A four-step camera calibration procedure with implicit image correction," in Proceedings of the 1997 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 1997), pp. 1106–1112.
20. Z. Y. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell. 22, 1330–1334 (2000).

21. Y. Q. Wang, M. A. Sutton, H. A. Bruck, and H. W. Schreier, "Quantitative error assessment in pattern matching: effects of intensity pattern noise, interpolation, strain and image contrast on motion measurements," Strain 45, 160–178 (2009).
22. B. Pan, H. M. Xie, Z. Y. Wang, and K. M. Qian, "Study on subset size selection in digital image correlation for speckle patterns," Opt. Express 16, 7037–7048 (2008).
23. P. Luo, Y. Chao, M. Sutton, and W. Peters, "Accurate measurement of three-dimensional deformations in deformable and rigid bodies using computer vision," Exp. Mech. 33, 123–132 (1993).
24. M. Sutton, J. Orteu, and H. Schreier, Image Correlation for Shape, Motion and Deformation Measurements: Basic Concepts, Theory and Applications (Springer Verlag, 2009).
25. F. Remondino and C. Fraser, "Digital camera calibration methods: considerations and comparisons," in International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (ISPRS, 2006), Vol. 36, pp. 266–272.
26. B. Pan, K. M. Qian, H. M. Xie, and A. Asundi, "Two-dimensional digital image correlation for in-plane displacement and strain measurement: a review," Meas. Sci. Technol. 20, 062001 (2009).


27. C. Bräuer-Burchardt, "A simple new method for precise lens distortion correction of low cost camera systems," in Pattern Recognition, C. Rasmussen, H. Bülthoff, B. Schölkopf, and M. Giese, eds. (Springer, 2004), pp. 570–577.
28. C. S. Fraser and S. Al-Ajlouni, "Zoom-dependent camera calibration in digital close-range photogrammetry," Photogramm. Eng. Remote Sensing 72, 1017–1026 (2006).
29. Y. L. Lay and C. S. Lin, "Lens distortion correction by adjusting image of calibration target," Indian J. Pure Appl. Phys. 40, 770–774 (2002).
30. S. Yoneyama, H. Kikuta, A. Kitagawa, and K. Kitamura, "Lens distortion correction for digital image correlation by measuring rigid body displacement," Opt. Eng. 45, 023602 (2006).
31. S. Yoneyama, A. Kitagawa, K. Kitamura, and H. Kikuta, "In-plane displacement measurement using digital image correlation with lens distortion correction," JSME Int. J. A 49, 458–467 (2006).
32. D. Nelson, A. Makino, and T. Schmidt, "Residual stress determination using hole drilling and 3D image correlation," Exp. Mech. 46, 31–38 (2006).
33. W. Tong, "Subpixel image registration with reduced bias," Opt. Lett. 36, 763–765 (2011).
34. Z. Hu, H. Xie, J. Lu, T. Hua, and J. Zhu, "Study of the performance of different subpixel image correlation methods in 3D digital image correlation," Appl. Opt. 49, 4044–4051 (2010).
