
Optics and Lasers in Engineering 50 (2012) 800–811
doi:10.1016/j.optlaseng.2011.12.011

A four-camera videogrammetric system for 3-D motion measurement of deformable object

Hao Hu a, Jin Liang a, Zhen-zhong Xiao a,b,*, Zheng-zong Tang a, Anand K. Asundi b, Yong-xin Wang a

a School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an 710049, China
b School of Mechanical and Aerospace Engineering, Nanyang Technological University, 639798, Singapore

Article info

Article history:

Received 16 August 2011

Received in revised form 21 December 2011

Accepted 24 December 2011
Available online 20 January 2012

Keywords:

Videogrammetry

Close-range photogrammetry

Four-camera calibration

Trajectory and displacement measurement

Deformable moving object

* Corresponding author at: School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an 710049, China.
E-mail addresses: [email protected] (H. Hu), xiaoxice@hotmail.com (Z.-z. Xiao).

Abstract

A four-camera videogrammetric system with large field-of-view is proposed for 3-D motion measurement of deformable objects. Four high-speed commercial-grade cameras are used for image acquisition. Based on close-range photogrammetry, an accurate method for calibrating the four cameras simultaneously is proposed and verified, in which a cross target with feature points pasted on its two sides is used as the calibration pattern. The key issues of the videogrammetric process, including feature point recognition and matching, 3-D coordinate and displacement reconstruction, and motion parameter calculation, are discussed in detail. A camera calibration experiment indicates that the proposed calibration method, with a re-projection error of less than 0.05 pixels, has considerable accuracy. Accuracy evaluation experiments prove that the accuracy of the proposed system reaches 0.5 mm in dynamic length measurement within a 5000 mm × 5000 mm field-of-view. A motion measurement experiment on an automobile tire is conducted to validate the performance of the system. The experimental results show that the proposed four-camera videogrammetric system is suitable and reliable for position, trajectory, displacement and speed measurement of deformable moving objects.

© 2011 Elsevier Ltd. All rights reserved.

1. Introduction

Tracking the position of deformable moving bodies has been a difficult and challenging task. Certain non-contact approaches, including laser vibrometers [1] and capacitive sensors [2], are capable of measuring the out-of-plane displacement of a single point. Electronic laser speckle interferometry (ELSI) and moiré interferometry can be used to obtain in-plane full-field displacement. However, those approaches have insufficient dynamic range to measure large-scale motion [3]. The global positioning system (GPS) is a potential alternative position measurement technique for large-scale motion [4]. However, it is very expensive and cannot locate a large number of positions simultaneously.

Videogrammetric techniques, dating from the 1970s, provide the motion and structure parameters of a moving object within the field-of-view by detecting and tracking the coordinates of feature points on it. In videogrammetry, multiple sets of images (frames) are taken sequentially at a high rate, allowing the 3-D position of a feature point array to be determined as a function of time. Provided a sufficiently dense feature point array is chosen, this technique can potentially enable an object's overall surface motion to be reconstructed.

It is known that good camera calibration is critical for the precision and reliability of videogrammetric measurement, and much research has been done on it. Luo et al. [5] used a multiple-precision moving object to calibrate the camera, but the results show that this is quite laborious and time-consuming. A popular and practical algorithm was developed by Tsai [6] using the radial alignment constraint; however, initial camera parameters are required and only radial lens distortion is considered in this method. Zhang [7] and Bouguet [8] separately proposed a flexible technique for camera calibration by viewing a plane from several arbitrary views, in which the calibration target is assumed to be an ideal plane and its manufacturing errors are ignored. These errors can affect the accuracy of calibration, especially in large field-of-view vision measurement. Self-calibration methods [9–12], which do not rely on external 3-D measurement, have also been investigated; they only require corresponding points between images and are thus more flexible in practical applications.

With the rapid development of optical measurement and image acquisition technology, videogrammetric techniques have been applied in many fields, including biomechanics, mechanics, aircraft and agriculture [13–17]. Nowadays there are also some commercial 3-D motion measurement systems on the market, such as the PONTOS system (GOM) and the MoveInspect system (AICON). However, these systems are normally too expensive for many research institutes to afford, especially for institutes in China. Thus there is a need to develop a low-cost videogrammetric system for 3-D motion measurement. In this paper, a four-camera videogrammetric system with large field-of-view is proposed and developed. Based on close-range photogrammetry, an accurate method for simultaneous calibration of the four cameras is presented and verified, in which a cross target with feature points pasted on its two sides is used as the calibration pattern. The key issues of the videogrammetric process are discussed in detail. To validate the proposed system, camera calibration, accuracy evaluation and motion measurement experiments are conducted. Experimental results show that the proposed system can meet the requirements of position, trajectory, displacement and speed measurement of deformable moving objects.

2. Methodology

2.1. Principle of videogrammetry

Let us take a binocular videogrammetric system [16] as an example. Fig. 1 illustrates the principle of videogrammetry, where two cameras are used for image acquisition. At time i, a feature point P_i on the surface of the moving object is imaged in the left sequential image series as p_i and in the right sequential image series as p'_i. If the cameras have been calibrated, the 3-D coordinates of the feature point P_i can be reconstructed based on the triangulation principle of photogrammetry. When the point moves to position P_{i+1} after a constant interval (at time i+1), its two corresponding image points move to p_{i+1} and p'_{i+1}, respectively. By locating their 2-D coordinates, the 3-D coordinates of the feature point P_{i+1} on the moved object can be reconstructed in the same way. The spatial displacement of the feature point during this interval can then be calculated. Therefore, with the image sequences captured during the moving process, the position, displacement and even speed of the feature points on the moving object can be fully determined.

Fig. 1. Principle of videogrammetry.
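The triangulation step can be made concrete with a short sketch. The following Python fragment is a minimal illustration, not the authors' code: it assumes each calibrated camera has been summarized as a 3×4 projection matrix (world to image, distortion already removed) and reconstructs one feature point from a synchronized image pair by linear (DLT-style) triangulation; the displacement between stages then follows by subtraction, as described above.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one feature point from two views.

    P1, P2 : 3x4 projection matrices of the two calibrated cameras.
    x1, x2 : (x, y) image coordinates of the same feature point.
    Returns the 3-D point in the common world coordinate system.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Displacement of a point between stage i and stage i+1:
# D = triangulate(P1, P2, p_next, q_next) - triangulate(P1, P2, p_i, q_i)
```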

2.2. Design of the four-camera videogrammetric system

In order to obtain more motion and deformation information, four high-speed digital cameras are used for image acquisition. Fig. 2 shows the positions and attitudes of the four cameras in the proposed four-camera videogrammetric system.

Here, two binocular stereovision systems (two-camera videogrammetric systems) are formed for feature point reconstruction and tracking: one is composed of camera 1 and camera 2, and the other of camera 3 and camera 4. For each binocular stereovision system, the angle between the two cameras directly influences its precision and stability. Extensive tests carried out according to reference [18] suggest that the optimal camera angle for our system lies between 25 and 35 degrees.

The camera used for image acquisition is the Photron Fastcam SA-1.1, shown in Fig. 3. It offers high frame rates, high light sensitivity and true 12-bit performance. It is capable of over 5000 full frames per second at a resolution of 1024 × 1024 pixels, and the physical size of each pixel is 20 μm. Each camera is equipped with a 50-mm fixed focal-length lens and is placed about 12 m away from the measured object in our experiments.

Fig. 2. Camera arrangement of the four-camera videogrammetric system: cameras 1–4 are placed about 12 m from the object, with an angle of about 30° between the paired cameras.

Fig. 3. (a) Photron Fastcam SA-1.1 camera and (b) Synchronization trigger box.

Fig. 4. Workflow of the videogrammetric system: image acquisition, camera calibration, feature point recognition, feature point matching, 3-D reconstruction, displacement and motion calculation, and 3-D display.

Fig. 5. Pinhole camera model: projection centre S, object point P, image point p, principal point O and focal length f, with world coordinates (X_w, Y_w, Z_w) and camera coordinates (X_c, Y_c, Z_c).

To ensure the synchronization of the four cameras, a camera trigger box compatible with the Photron Fastcam SA-1.1 is used (Fig. 3(b)).

3. Videogrammetric process

Fig. 4 shows the workflow of the proposed four-camera videogrammetric system. Seven procedures are included, as below.

1. Conduct camera calibration to obtain the precise interior parameters and exterior orientation parameters of the four cameras.
2. Capture sequential images of the moving deformable object with the four calibrated cameras.
3. Recognize the feature points pasted discretely on the measured object and locate their image coordinates.
4. Compute the disparity between corresponding feature points in the synchronized images captured by different cameras.
5. Reconstruct the 3-D coordinates of the feature points according to the camera parameters and the stereo matching results.
6. Calculate the displacement and motion parameters using the 3-D coordinates of the feature points.
7. Display the displacement and motion information in the OpenGL 3-D view window.

Details of these procedures are discussed in the following.

3.1. Multi-camera calibration

3.1.1. Principle of close-range photogrammetry

Close-range photogrammetry [19] is a 3-D measurement technique that uses photographs to establish the geometrical relationship between a 3-D object and its 2-D photographic images. Its fundamental mathematical model is the pinhole imaging model shown in Fig. 5, where S is the optical centre of the lens, P is the object point, p is the image point, O is the principal point of the camera, SO is the camera optical axis, which is perpendicular to the image plane, and the distance between S and O is the focal length f. Ideally, P, S and p lie on the same line. Their relationship can be expressed by:

$$x = -f\,\frac{a_1(X-X_s)+b_1(Y-Y_s)+c_1(Z-Z_s)}{a_3(X-X_s)+b_3(Y-Y_s)+c_3(Z-Z_s)}, \qquad y = -f\,\frac{a_2(X-X_s)+b_2(Y-Y_s)+c_2(Z-Z_s)}{a_3(X-X_s)+b_3(Y-Y_s)+c_3(Z-Z_s)} \quad (1)$$

Eq. (1) is the well-known collinearity equation, where (X, Y, Z) are the 3-D coordinates of the object point in the world coordinate system, (X_s, Y_s, Z_s) are the 3-D coordinates of the projection centre in the world coordinate system, (x, y) are the image point coordinates, and

$$\begin{pmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{pmatrix}$$

is the transform matrix from the world coordinate system to the camera coordinate system. When the interior and exterior parameters of the cameras are given and the two corresponding image points are known, four equations can be written according to Eq. (1), and the 3-D coordinates of the object point P can be calculated.
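As an illustration of Eq. (1), the following Python sketch (a toy under our own conventions, not the authors' implementation) projects an object point into ideal image coordinates. The rotation matrix R is taken with rows (a1, b1, c1), (a2, b2, c2), (a3, b3, c3), so that the matrix products reproduce the numerators and denominator of Eq. (1).

```python
import numpy as np

def collinearity_project(X, Xs, R, f):
    """Ideal (distortion-free) image coordinates of a world point, Eq. (1).

    X  : object point (X, Y, Z) in world coordinates.
    Xs : projection centre (Xs, Ys, Zs) in world coordinates.
    R  : 3x3 world-to-camera rotation, rows (a1, b1, c1), (a2, b2, c2), (a3, b3, c3).
    f  : focal length.
    """
    u = R @ (np.asarray(X, float) - np.asarray(Xs, float))  # point in camera coordinates
    return -f * u[0] / u[2], -f * u[1] / u[2]
```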

3.1.2. Mathematical model of camera calibration

Camera calibration determines a set of interior and exterior camera parameters that relate a 3-D object to its 2-D images. Interior parameters define the geometric and optical characteristics of the camera, while exterior parameters describe the rotation and translation of an image coordinate system with respect to a predefined global coordinate system. For a commercial camera, the principal point (x_0, y_0) does not necessarily coincide with the origin of the x–y image coordinate system (see Fig. 5), and there is some lens distortion (dx, dy) that affects its accuracy in photogrammetric applications. So the mathematical model is expressed by:

$$x - x_0 + dx = -f\,\frac{a_1(X-X_s)+b_1(Y-Y_s)+c_1(Z-Z_s)}{a_3(X-X_s)+b_3(Y-Y_s)+c_3(Z-Z_s)}, \qquad y - y_0 + dy = -f\,\frac{a_2(X-X_s)+b_2(Y-Y_s)+c_2(Z-Z_s)}{a_3(X-X_s)+b_3(Y-Y_s)+c_3(Z-Z_s)} \quad (2)$$

Considering the radial and decentering distortion of the camera lens, and the non-orthogonality and shear of the x–y image coordinate system [20], dx and dy can be calculated by:

$$dx = A_1 x (r^2 - r_0^2) + A_2 x (r^4 - r_0^4) + A_3 x (r^6 - r_0^6) + B_1(r^2 + 2x^2) + 2xyB_2 + E_1 x + E_2 y$$
$$dy = A_1 y (r^2 - r_0^2) + A_2 y (r^4 - r_0^4) + A_3 y (r^6 - r_0^6) + B_2(r^2 + 2y^2) + 2xyB_1 \quad (3)$$

where A_1, A_2, A_3 are the radial distortion parameters, B_1 and B_2 are the decentering distortion parameters (covering tangential and non-symmetric radial distortion), E_1 and E_2 describe the non-orthogonality and shear of the image coordinate system, r is the radial distance from the principal point (r^2 = (x - x_0)^2 + (y - y_0)^2), and r_0 is the second zero crossing of the distortion curve. Altogether, 10 parameters (f, x_0, y_0, A_1, A_2, A_3, B_1, B_2, E_1, E_2) are used in our calibration algorithm, which is referred to as the 10-parameter nonlinear camera model. It differs slightly from Bouguet's model [8] in that r_0 is employed to balance the parameters.
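A direct transcription of Eq. (3) may help the reader; the sketch below is a hypothetical helper whose names simply mirror the 10-parameter model above, and it expects image coordinates already reduced to the principal point.

```python
def lens_distortion(x, y, A1, A2, A3, B1, B2, E1, E2, r0):
    """Distortion corrections (dx, dy) of the 10-parameter model, Eq. (3).

    x, y : image coordinates relative to the principal point (x0, y0).
    r0   : second zero crossing of the radial distortion curve.
    """
    r2 = x * x + y * y
    # Balanced radial term, common to dx and dy.
    radial = A1 * (r2 - r0**2) + A2 * (r2**2 - r0**4) + A3 * (r2**3 - r0**6)
    dx = x * radial + B1 * (r2 + 2 * x * x) + 2 * x * y * B2 + E1 * x + E2 * y
    dy = y * radial + B2 * (r2 + 2 * y * y) + 2 * x * y * B1
    return dx, dy
```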

3.1.3. Multi-camera calibration using bundle adjustment

A two-side cross calibration target was designed, as shown in Fig. 6. Sixteen coded feature points and 33 un-coded feature points are pasted discretely on each side, and the ID of every coded point is unique. The feature points of each side are not all located on a common plane. Using this cross target as the calibration pattern, we propose an accurate multi-camera calibration method to calibrate the four cameras simultaneously. Reliable calibration results can be achieved based on the 10-parameter nonlinear camera model and bundle adjustment in close-range photogrammetry.

Fig. 6. Cross calibration target.

The calibration method includes the following six steps:

1. Reconstruct the 3-D coordinates of the feature points laid on the calibration target using a coordinate measuring system. Some commercially available systems, such as the V-Stars system of GSI (USA), the TRITOP system of GOM (Germany) and the XJTUDP system of Xi'an Jiaotong University (China), can provide a high measurement accuracy of 0.1 mm/m. In this study, the XJTUDP system is employed to measure the 3-D coordinates of the feature points on the calibration target. The main measuring procedure is as follows (more details can be found in reference [21]):
a. Fix the calibration target on flat open ground and place some coded feature points around it for measurement.
b. Hold the camera at different heights, then walk around the calibration target and record images at orientations of about 45 degrees from each other.
c. Import the captured images into the XJTUDP software (Fig. 7) for computation. After feature point recognition and matching, camera orientation, bundle adjustment and related calculations, the 3-D coordinates of the feature points on the calibration target are reconstructed in a photogrammetric coordinate system.

Fig. 7. Software interface of the XJTUDP system.

2. Place the calibration target about 12 m from the four cameras, then simultaneously capture images of the calibration target shown at different orientations. Fig. 10 shows one set of calibration target images taken at one orientation.

3. Recognize the feature points in the images acquired by the four cameras to obtain the image coordinates of the feature point centres and the codes of the ring-coded feature points. This is realized using the feature point recognition algorithm (see Section 3.2), regardless of their rotation or translation during the calibration process.

4. Calculate the rotation and translation vectors for all captured images using the Direct Linear Transform (DLT) method. Since the DLT algorithm requires known points with considerable extension in depth, we first ensure that the feature points used in the DLT calculation are not coplanar.

5. Adjust the interior and exterior parameters of the four cameras using the bundle adjustment method. Eq. (2) allows the direct formulation of the primary observed values (image coordinates) as functions of all unknown parameters (3-D coordinates of object points, interior and exterior orientation parameters, and lens distortion parameters). All these unknowns can be determined iteratively using the image coordinates as observations. The observation equation [22] is obtained through the linearization of Eq. (2):

$$V = AX_1 + BX_2 + CX_3 - L \quad (4)$$

where V is the re-projection residual (the distance between the projected and the observed image points), X_1, X_2 and X_3 are the corrections to the interior orientation parameters (including lens distortion parameters), the exterior orientation parameters and the 3-D coordinates of the object points, A, B and C are the corresponding partial-derivative matrices, and L is the observation vector. Since all 3-D coordinates of the feature points are already obtained in step 1, the observation equation reduces to:

$$V = AX_1 + BX_2 - L \quad (5)$$

The camera parameters are then obtained through the following optimization:

$$\min V^T V \quad (6)$$

The objective of the optimization is to determine the ten interior parameters and the six exterior parameters of each image that minimize the sum of the re-projection errors.

6. Transform the exterior orientation parameters of the four cameras to a unified coordinate system and take their mean values as the calibration results (see the sketch after this list). Assume that n (n = 8–24) images of the calibration target shown at different orientations are acquired by each of the four cameras to be calibrated. For each camera, the rotation and translation vectors of these n images with respect to the photogrammetric coordinate system are expressed as R_{i,j} and T_{i,j}, where i is the camera number and j is the image serial number, i = 1, 2, 3, 4, j = 1, 2, …, n. After completing the five steps above, R_{i,j} and T_{i,j} are known. To achieve global calibration of the four cameras, first set the camera coordinate system of the first image captured by camera 1 as the reference coordinate system (that is, R_{1,1} = I and T_{1,1} = O, where I is the 3×3 identity matrix and O is a 3×1 null vector). Second, transform the rotation and translation vectors of the other n−1 images of camera 1 to [I O]. Third, transform the rotation and translation vectors of the images of the other three cameras to the reference coordinate system by:

$$R'_{i,j} = R_{i,j}^{-1} R_{1,j}, \qquad T'_{i,j} = R_{i,j}^{-1}\left(T_{1,j} - T_{i,j}\right) \quad (7)$$

Finally, the global exterior parameters of the four cameras are calculated by:

$$R_i = \frac{1}{n}\sum_{j=1}^{n} R'_{i,j}, \qquad T_i = \frac{1}{n}\sum_{j=1}^{n} T'_{i,j} \quad (8)$$

Obviously, R_1 = I and T_1 = O.

For a better calibration, images with bad recognition and/or large re-projection errors should be excluded from the calculation. In our calibration algorithm, all camera exterior orientations are treated as fully independent. Reference [12] provides a more rigorous approach in which the unknowns of the bundle adjustment are only the exterior orientations of the first camera plus three invariant relative orientations for the other three cameras. We plan to compare this approach with our algorithm in future work.
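The transformation and averaging of step 6 (Eqs. (7) and (8)) reduce to a few matrix operations. Below is a minimal Python sketch under the assumption that the per-image R[i][j] (3×3 arrays) and T[i][j] (3-vectors) from steps 1–5 are available as nested lists of numpy arrays; note that, as in Eq. (8), the rotation matrices are averaged arithmetically, so the mean is only approximately orthogonal.

```python
import numpy as np

def global_exterior(R, T, n_cameras=4):
    """Global exterior parameters per camera, Eqs. (7) and (8)."""
    n = len(R[0])                        # number of images per camera
    R_glob, T_glob = [], []
    for i in range(n_cameras):
        Rp = [np.linalg.inv(R[i][j]) @ R[0][j] for j in range(n)]              # Eq. (7)
        Tp = [np.linalg.inv(R[i][j]) @ (T[0][j] - T[i][j]) for j in range(n)]
        R_glob.append(sum(Rp) / n)       # arithmetic mean, Eq. (8)
        T_glob.append(sum(Tp) / n)
    return R_glob, T_glob                # camera 1 yields R = I, T = 0
```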


3.2. Feature points recognition

For target tracking and measurement, two kinds of circular feature points are used in the proposed system, as shown in Fig. 8. One is the circular un-coded point, a white circular dot on a black background. The other is the concentric-ring coded point, which also has a white central circle but is surrounded by discrete ring sections that encode a unique ID. Both can also be used for camera calibration, and their dimensions and distribution density can be adjusted flexibly to meet the requirements of the target measurement.

The Canny algorithm [23] is used to extract feature points from the images. However, as a multi-stage process, applying the Canny edge detector to the whole image is complex and time-consuming, since the feature areas are relatively small compared to the image dimensions. Moreover, since different feature areas can have different gray-level distributions, no single pair of high and low gray thresholds is appropriate for edge detection over the whole image. A "Local Canny Detector" algorithm is therefore proposed and applied for feature point recognition.

First, the original image is converted to a binary image (with only white or black gray values) based on the image gray gradient. Second, all high-intensity rectangular areas (shown in Fig. 9(a)) are searched as feature point candidate areas. These areas are processed by the Canny edge detector to obtain a set of single-pixel edges, as shown in Fig. 9(b). In addition, some areas are eliminated according to denoising rules.

To improve the accuracy of edge detection, the single-pixel edges are corrected using the mean gradient method, resulting in a set of sub-pixel edges. For more accurate and reliable sub-pixel edges, a bilinear interpolation algorithm is used to calculate the gray values at sub-pixel positions before the mean gradient detector is applied. This provides a continuous intensity function of the gray image:

$$I(x,y) = (1-a)(1-b)I(i,j) + a(1-b)I(i+1,j) + (1-a)bI(i,j+1) + abI(i+1,j+1) \quad (9)$$

where I(x, y) is the intensity function of the image, (i, j) is the image coordinate of the single-pixel edge, (x, y) is the image coordinate of the sub-pixel edge, i < x < i+1, j < y < j+1, a = x − i and b = y − j.
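Eq. (9) translates directly into code; a minimal sketch with hypothetical names:

```python
def bilinear(I, x, y):
    """Gray value at sub-pixel position (x, y) by bilinear interpolation, Eq. (9).

    I is a 2-D gray image indexed as I[i][j], with i < x < i + 1, j < y < j + 1.
    """
    i, j = int(x), int(y)
    a, b = x - i, y - j
    return ((1 - a) * (1 - b) * I[i][j] + a * (1 - b) * I[i + 1][j]
            + (1 - a) * b * I[i][j + 1] + a * b * I[i + 1][j + 1])
```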

Fig. 8. Circular feature points: (a) un-coded feature point and (b) coded feature point.

Fig. 9. Result of feature point recognition: (a) coarse positioning of the feature point; (b) result of the "Local Canny Detector" and (c) ring sampling of the coded feature point.

To locate the feature point centre, a least-squares ellipse fitting algorithm is applied to the sub-pixel edges to achieve precise positioning of the feature point centres and to obtain the parameters of the ellipses (centre coordinates, long axis, short axis and rotation angle) [24]. The ring region is then determined according to the long and short axes of the fitted ellipse. If the ring region does not exist, the feature point is an un-coded feature point. Otherwise, we sample along the ring region (Fig. 9(c)), setting the white sections of the ring to 1 and the black sections to 0, to obtain an n-bit code sequence, and finally find its ID using a look-up table. According to the experimental results, with this algorithm the recognition time for an image of 3888 × 2592 pixels is about 1–2 s, with an accuracy of 0.02 pixels.
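The ring sampling and look-up step can be sketched as follows. The paper only states that the sampled n-bit sequence is mapped to an ID through a look-up table; normalizing the code over all cyclic rotations, as below, is a common convention for making the ID independent of the sampling start angle and is our assumption.

```python
def code_id(bits, table):
    """Map a sampled ring code to a point ID via a look-up table.

    bits  : list of 0/1 sampled around the ring (white = 1, black = 0).
    table : dict mapping canonical code values to point IDs.
    """
    n = len(bits)
    value = lambda s: int("".join(map(str, s)), 2)
    # Canonical form: minimum over all cyclic rotations (assumed convention).
    canonical = min(value(bits[k:] + bits[:k]) for k in range(n))
    return table.get(canonical)
```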

3.3. Feature points matching

Feature point matching locates homologous points between different contemporaneous images. Coded feature points are matched easily according to their unique IDs. For un-coded feature points, epipolar geometry is often used as a matching constraint: for a point in one image, the corresponding points in the other images are constrained to lie on epipolar lines that depend only on the cameras' internal parameters and relative orientation, which are known after camera calibration.

For sparsely distributed feature points, matching based on the epipolar geometry constraint works well. For densely distributed feature points, however, if more than one point lies on or near the same epipolar line, wrong matches will occur. To improve matching reliability, a manual matching method is provided as a complementary tool; it is also based on the epipolar geometry constraint. If there are wrongly matched point pairs, we manually sort them out based on their identified centre coordinates. The matching deviation and the 3-D space coordinates of the feature points are computed automatically using the triangulation method, and their ID numbers are set manually and related to the existing feature points.
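A sketch of the epipolar test for un-coded points is given below. It uses hypothetical helpers, and the fundamental matrix F of a camera pair is assumed to have been derived beforehand from the calibrated interior and relative orientation parameters, a step not shown here.

```python
import numpy as np

def epipolar_distance(F, x1, x2):
    """Distance (pixels) of x2 from the epipolar line of x1 in the second image."""
    p1 = np.array([x1[0], x1[1], 1.0])
    p2 = np.array([x2[0], x2[1], 1.0])
    line = F @ p1                                    # epipolar line in image 2
    return abs(p2 @ line) / np.hypot(line[0], line[1])

def match_uncoded(points1, points2, F, tol=0.5):
    """Greedy matching of un-coded points by the epipolar constraint."""
    pairs = []
    for i, x1 in enumerate(points1):
        d = [epipolar_distance(F, x1, x2) for x2 in points2]
        j = int(np.argmin(d))
        if d[j] < tol:                               # ambiguous cases go to manual matching
            pairs.append((i, j))
    return pairs
```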

3.4. Three-dimensional coordinate and displacement reconstruction

After feature point recognition and matching, all corresponding feature points and their coordinates are determined. In addition, the interior and exterior parameters of the cameras have already been obtained through multi-camera calibration. The 3-D coordinates of all feature points can then be reconstructed using the triangulation method.

Since all cameras are kept fixed during the object motion process, the orientation parameters are taken to be the same for the remaining image pairs. The second and subsequent image pairs can thus be oriented using these parameters, and the moved feature points can be reconstructed into the same coordinate system.


Given a known 3-D feature point (X(t_i), Y(t_i), Z(t_i)) at stage t_i, the displacement (ΔX, ΔY, ΔZ) of the feature point during each time interval can be calculated directly by:

$$\Delta X = X(t_{i+1}) - X(t_i), \qquad \Delta Y = Y(t_{i+1}) - Y(t_i), \qquad \Delta Z = Z(t_{i+1}) - Z(t_i) \quad (10)$$

where i = 1, 2, …, N and N is the number of recorded stages.

3.5. Determination of selected motion parameters

3.5.1. Instantaneous velocity calculation of feature point

Given the 3-D coordinates of a feature point at the stages immediately before and after stage t_i (i.e. t_{i−1} and t_{i+1}), the instantaneous velocity (V_X(t_i), V_Y(t_i), V_Z(t_i)) of the feature point at stage t_i can be calculated by:

$$V_X(t_i) = \frac{X(t_{i+1}) - X(t_{i-1})}{t_{i+1} - t_{i-1}}, \qquad V_Y(t_i) = \frac{Y(t_{i+1}) - Y(t_{i-1})}{t_{i+1} - t_{i-1}}, \qquad V_Z(t_i) = \frac{Z(t_{i+1}) - Z(t_{i-1})}{t_{i+1} - t_{i-1}} \quad (11)$$
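Eqs. (10) and (11) amount to first differences and central differences along a point's trajectory. A minimal numpy sketch (the array layout is our assumption):

```python
import numpy as np

def displacements(track):
    """Per-interval displacements of one feature point, Eq. (10).
    track : (N, 3) array of 3-D coordinates, one row per stage."""
    return np.diff(track, axis=0)

def velocities(track, t):
    """Central-difference instantaneous velocities, Eq. (11).
    t : (N,) array of stage times; returns velocities for stages 2..N-1."""
    return (track[2:] - track[:-2]) / (t[2:] - t[:-2])[:, None]
```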

3.5.2. Centroid calculation of moving object

After obtaining the 3-D coordinates of all feature points on the measured object throughout the whole movement, the centroid of the object at each stage can be determined. Since the feature points are pasted discretely and randomly on the surface of the object, which is relatively small in the field-of-view, the object centroid can be approximated by averaging the coordinates of all feature points (Eq. (12)), and its displacement and velocity can be calculated with the methods described above.

$$C_X(t_i) = \frac{1}{n}\sum_{j=1}^{n} X_j(t_i), \qquad C_Y(t_i) = \frac{1}{n}\sum_{j=1}^{n} Y_j(t_i), \qquad C_Z(t_i) = \frac{1}{n}\sum_{j=1}^{n} Z_j(t_i) \quad (12)$$

where (C_X(t_i), C_Y(t_i), C_Z(t_i)) are the 3-D coordinates of the target centroid at stage t_i, and n is the number of measured feature points at stage t_i.
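Eq. (12) is a per-stage mean over the tracked points; a short numpy sketch (the array layout is our assumption), whose output can be fed to the displacement and velocity helpers of Eqs. (10) and (11):

```python
import numpy as np

def centroid_track(tracks):
    """Approximate centroid trajectory of the object, Eq. (12).
    tracks : (n_points, N, 3) array of feature point coordinates per stage.
    Returns an (N, 3) array of centroid coordinates."""
    return tracks.mean(axis=0)
```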

4. Results and discussion

4.1. Camera calibration experiment

Following the calibration process described in Section 3.1, we obtained the parameters of the four cameras. Fig. 10 shows one set of images of the calibration target at one orientation; the images were captured separately and simultaneously by the four cameras being calibrated. The calibrated interior parameters of the four cameras are shown in Table 1.

Table 1. The interior parameters of the four cameras (σ0 = 0.156 pixel).

Parameters    Camera 1         Camera 2         Camera 3         Camera 4
f (pixel)     2646.306±0.807   2627.512±0.792   2671.762±0.727   2659.414±0.815
x0 (pixel)    -56.244±1.356    -36.723±1.635    -53.854±1.418    -60.683±1.593
y0 (pixel)    -46.280±1.280    -27.958±1.314    -130.981±1.319   -115.461±1.227
A1 (×10^-8)   -2.172±0.859     2.717±0.775      -1.755±0.670     -1.339±0.814
A2 (×10^-15)  6.036±0.555      -2.417±0.605     -4.335±0.579     5.502±0.712
A3 (×10^-20)  -1.377±1.335     18.855±1.359     8.545±1.280      -2.338±0.973
B1 (×10^-7)   5.986±0.354      -1.387±0.564     -1.647±0.468     -2.281±0.387
B2 (×10^-6)   -7.449±0.357     -1.134±0.850     -3.344±0.864     -3.788±0.455
E1 (×10^-2)   -2.293±0.293     -2.208±0.154     -2.396±0.099     -2.357±0.176
E2 (×10^-3)   5.225±0.445      6.193±0.367      7.291±0.264      3.348±0.314

The calibrated exterior parameters of each camera are as follows (the components of the translation vectors T_1 to T_4 are in mm):

$$R_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad T_1 = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$

$$R_2 = \begin{pmatrix} 0.89062\pm0.00005 & 0.02703\pm0.00002 & -0.45393\pm0.00053 \\ -0.02712\pm0.00011 & 0.99961\pm0.00017 & 0.00630\pm0.00001 \\ 0.45393\pm0.00025 & 0.00669\pm0.000015 & 0.89101\pm0.00035 \end{pmatrix}, \qquad T_2 = \begin{pmatrix} -5356.50\pm0.19 \\ 86.47\pm0.35 \\ -1236.87\pm0.28 \end{pmatrix}$$

$$R_3 = \begin{pmatrix} -0.99893\pm0.00015 & -0.04508\pm0.00001 & 0.01031\pm0.00045 \\ -0.04474\pm0.00003 & 0.99849\pm0.00038 & 0.03165\pm0.00005 \\ -0.01172\pm0.00007 & 0.03116\pm0.00011 & -0.99945\pm0.00024 \end{pmatrix}, \qquad T_3 = \begin{pmatrix} 826.97\pm0.23 \\ 756.99\pm0.45 \\ -22867.50\pm0.30 \end{pmatrix}$$

$$R_4 = \begin{pmatrix} 0.88285\pm0.00025 & 0.08329\pm0.00012 & -0.46219\pm0.00008 \\ -0.08496\pm0.00009 & 0.99623\pm0.00025 & 0.01725\pm0.00005 \\ 0.46188\pm0.00010 & 0.02405\pm0.00001 & 0.88661\pm0.00015 \end{pmatrix}, \qquad T_4 = \begin{pmatrix} -5558.91\pm0.33 \\ 157.14\pm0.26 \\ -1284.78\pm0.22 \end{pmatrix}$$

Fig. 10. Images for calibration taken by the four calibrating cameras.


Fig. 11. Re-projection error in the images of cross target shown at three orientations.

Fig. 12. Invar scale bar used as length standard.

Table 2. Results of the scale bar length measurement.

Position   Measured length (mm)   Error (mm)
1          1100.336               0.279
2          1100.366               0.309
3          1100.315               0.257
4          1100.337               0.279
5          1100.424               0.367
6          1100.445               0.388
7          1100.603               0.545
8          1100.405               0.348
9          1099.944               -0.113
10         1099.954               -0.102
11         1099.887               -0.170
RMS: 0.235



To verify the accuracy of the calibration method, we re-projected the 3-D feature points of the calibration target using the obtained parameters. Fig. 11 shows the re-projection error in the images of the calibration target at three different orientations; at each orientation, the four images were taken simultaneously by the four cameras. The mean re-projection error is less than 0.05 pixels, which indicates that the proposed calibration method has considerable accuracy.

4.2. Accuracy assessment experiments

It is hard to evaluate the accuracy of such a dynamic 3-D optical measurement system, for lack of comparison methods of higher precision. An acceptable and practical test for verifying the proposed videogrammetric system was designed based on the VDI 2634 standard. To evaluate the measuring accuracy, an invar scale bar with a circular target at each of its two ends is used as the artifact, as shown in Fig. 12. Its nominal length is 1100.057 mm, with an uncertainty of 0.003 mm. In this test, the scale bar is moved within the field-of-view at arbitrary orientations and speeds, and the length of the moving scale bar is measured eleven times using the videogrammetric system.


The measurement results (Table 2) show that the root mean square (RMS) of the length measurement error of the moving scale bar is less than 0.5 mm (0.15 pixels), and the relative accuracy is better than 1/4000. With higher-quality images and a more suitable environment, a better accuracy could be obtained.

4.3. Motion estimation of deformable object

To validate the performance of the videogrammetric system, a motion measurement experiment was conducted using an automobile tire. Feature points were pasted discretely on the surface of the tire prior to the measurement as the tracked feature points, as shown in Fig. 13.

Fig. 13. Automobile tire pasted with videogrammetric targets.

Fig. 14. Images acquired by the four cameras in three motion stages and the result of feature point recognition: (a) motion stage 1, (b) motion stage 60 and (c) motion stage 120.

Two hundred motion stages were captured by the four cameras throughout the whole movement; Fig. 14 shows three of them. Each stage contains four images, in which the recognized feature points are marked with their centres and IDs. Fig. 15 shows the 3-D coordinates and displacements of the feature points on the tire at three different motion stages. Here, each green point is a feature point, and its colored ray describes the direction and magnitude of its displacement with respect to the first motion stage; feature points 4–11 are reconstructed from the images captured by cameras 1 and 2, and feature points 21–30 from the images captured by cameras 3 and 4. From Fig. 15 we can conclude that, at any stage, feature points 4–11 and 21–30 are correctly reconstructed in a uniform coordinate system and approximately constitute the 3-D morphology of the tire. Their displacements directly reflect the whole motion and deformation of the tire surface. When the tire fell to the ground, the maximum displacement occurred, with obvious deformation. Fig. 16 shows the trajectory, displacement time histories and speed time histories of the tire centroid during the falling movement. The results are consistent with the kinematics of falling motion.

To evaluate the accuracy of the tire motion measurement, the vertical displacement measured by our system over the first 100 motion stages, acquired completely during the free fall, is compared with that obtained by theoretical calculation:

$$s = -\frac{1}{2}gt^2 \quad (13)$$

where s is the displacement of the free-falling object, g is the gravitational acceleration and t is the time.

Throughout the 100 motion stages, the vertical displacements resulting from the videogrammetric measurement and from the theoretical calculation are employed, respectively, as the x and y coordinates of a point, yielding one hundred points. From these points (blue points in Fig. 17), a straight line (red line in Fig. 17) can be fitted. The fitted line equation is y = 1.0188x + 0.9643, from which it can be seen that only 1.88% deviation exists between the two methods.
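The comparison behind Fig. 17 can be reproduced schematically. The sketch below is hypothetical: the raw displacement data are not published, so it only shows how the theoretical curve of Eq. (13) and the fitted line y = kx + b (x measured, y theoretical) would be computed.

```python
import numpy as np

def free_fall_fit(s_measured, t, g=9810.0):
    """Fit the line of Fig. 17 comparing measurement with Eq. (13).

    s_measured : (N,) vertical displacements from the videogrammetric system (mm).
    t          : (N,) stage times in seconds (g given in mm/s^2).
    Returns (k, b) of the least-squares line y = k * x + b.
    """
    s_theory = -0.5 * g * t**2           # Eq. (13)
    k, b = np.polyfit(s_measured, s_theory, 1)
    return k, b
```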

Fig. 15. Display of the 3-D coordinate and displacement reconstruction of the feature points on the falling tire in three different motion stages (the left column is displayed in front view and the right column in right view): (a) motion stage 1, (b) motion stage 60 and (c) motion stage 120.

Fig. 16. Trajectory, displacement and speed time histories of the moving tire centroid: (a) motion trajectory of the tire centroid; (b) displacement time histories of the tire centroid and (c) speed time histories of the tire centroid.

Fig. 17. Comparison of videogrammetric measurement and theoretical calculation of vertical displacement.


5. Conclusions

In this paper, we have presented a four-camera videogrammetric system with large field-of-view for motion measurement of deformable objects. The novelty of the system is the implementation of a four-camera calibration method. The camera calibration experiment indicates that the proposed calibration method has considerable accuracy, with a re-projection error of less than 0.05 pixels. The accuracy evaluation experiment proves that the proposed system can achieve an accuracy of 0.5 mm in dynamic length measurement within a 5000 mm × 5000 mm field-of-view. Experimental results of motion measurement on an automobile tire show that the measured surface coordinates and displacements visually reflect the motion and deformation process of the tire. From the analysis of all experimental results, the following conclusions are drawn:

1. The proposed four-camera videogrammetric system provides a good solution for position, trajectory, displacement and speed measurement of deformable moving objects.
2. The feature points pasted on the measured object are light and thin, so they do not interfere with the movement of the object itself.
3. Many feature points pasted on the measured object can be reconstructed simultaneously with the proposed measurement method, which can be used to approximately describe the outline of the moving object at different motion stages.
4. By comparing the outline of the measured object between different motion stages, its full-field deformation during the whole movement can be obtained.
5. The proposed four-camera videogrammetric system is best regarded as a complement to traditional approaches for position tracking.
6. The system is not suitable for all-around tracking of moving objects. For this, calibrated cameras in more directions would be needed, and a new calibration pattern should be designed for calibrating more cameras. Some key issues also need to be studied and improved in future work.

Acknowledgment

This study is supported by the National Natural Science Foundation of China (project code: 50975219) and the Science and Technology Support Plan of Jiangsu (project code: SYG201014).

References

[1] Dharamsi UK, Blandino JR. Comparing photogrammetry with a conventional displacement measurement technique on a square Kapton membrane. Proceedings of the 3rd AIAA Gossamer Structures Forum, Denver, CO; 2002. AIAA 2002-1258.
[2] Jenkins CH, Spicher WH. Experimental measurement of wrinkling in membranes undergoing planar deformation. Exp Mech 1998;38:147–52.
[3] Leifer J, Weems B, Kienle S, Sims A. Three-dimensional acceleration measurement using videogrammetry tracking data. Exp Mech 2011;51:199–217.
[4] Çelebi M, Sanli A. GPS in pioneering dynamic monitoring of long-period structures. Earthquake Spectra 2002;18:47–61.
[5] Luo PF, Chao YJ, Sutton MA. Application of stereo vision to three-dimensional deformation analyses in fracture experiments. Opt Eng 1994;33:981–90.
[6] Tsai RY. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. In: Radiometry. Jones and Bartlett Publishers, Inc.; 1992. p. 221–44.
[7] Zhang Z. A flexible new technique for camera calibration. IEEE Trans Pattern Anal Mach Intell 2000;22:1330–4.
[8] Bouguet J-Y. Camera Calibration Toolbox for Matlab; 2010-07-09 [accessed 2011-07-20]. http://www.vision.caltech.edu/bouguetj/calib_doc/.
[9] Grammatikopoulos L, Karras G, Petsa E. An automatic approach for camera calibration from vanishing points. ISPRS J Photogramm Remote Sens 2007;62:64–76.
[10] Song LM, Wang MP, Lu L, Jing Huan H. High precision camera calibration in vision measurement. Opt Laser Technol 2007;39:1413–20.
[11] Tian J, Ding Y, Peng X. Self-calibration of a fringe projection system using epipolar constraint. Opt Laser Technol 2008;40:538–44.
[12] Prokos A, Karras G, Petsa E. Automatic 3D surface reconstruction by combining stereovision with the slit-scanner approach. Int Arch Photogramm Remote Sens Spatial Inf Sci 2010;XXXVIII:505–9.
[13] Mesqui F, Niederer P, Schlumpf M, Lehareinger Y, Walton J. Semi-automatic reconstruction of the spatial trajectory of an impacted pedestrian surrogate using high-speed cinephotogrammetry and digital image analysis. J Biomech Eng 1984;106:357–9.
[14] Chang CC, Xiao XH. Three-dimensional structural translation and rotation measurement using monocular videogrammetry. J Eng Mech 2010;136:840–8.
[15] Graves SS, Burner AW, Edwards JW, Schuster DM. Dynamic deformation measurements of an aeroelastic semispan model. J Aircr 2003;40:977–84.
[16] Liu J-W, Liang J, Liang X-H, Tang Z-Z. Videogrammetric system for dynamic deformation measurement during metal sheet welding processes. Opt Eng 2010;49:033601.
[17] Hobbs S, Seynat C, Matakidis P. Videogrammetry: a practical method for measuring vegetation motion in wind demonstrated on wheat. Agric For Meteorol 2007;143:242–51.
[18] Wang Y-Q, Sutton MA, Ke X-D, Schreier HW, Reu PL, Miller TJ. On error assessment in stereo-based deformation measurements. Part I: theoretical developments for quantitative estimates. Exp Mech 2011;51:405–22.
[19] Luhmann T, Robson S, Kyle S, Harley I. Close Range Photogrammetry: Principles, Techniques and Applications. Whittles Publishing; 2006.
[20] Gruen A, Huang TS. Calibration and Orientation of Cameras in Computer Vision. New York: Springer-Verlag; 2001.
[21] Zhang D-h, Liang J, Guo C, Liu J-W, Zhang X-Q, Chen Z-X. Exploitation of photogrammetry measurement system. Opt Eng 2010;49:037005.
[22] Xiao Z, Jin L, Yu D, Tang Z. A cross-target-based accurate calibration method of binocular stereo systems with large-scale field-of-view. Measurement 2010;43:747–54.
[23] Canny JF. A computational approach to edge detection. IEEE Trans Pattern Anal Mach Intell 1986;8:679–714.
[24] Mai F, Hung YS, Zhong H, Sze WF. A hierarchical approach for fast and robust ellipse extraction. Pattern Recognit 2008;41:2512–24.