


Optics and Lasers in Engineering 49 (2011) 97–103


0143-8166/$ - see front matter © 2010 Elsevier Ltd. All rights reserved.

doi:10.1016/j.optlaseng.2010.08.013

* Corresponding author. E-mail addresses: [email protected] (X. Zhang), [email protected] (L. Zhu).

journal homepage: www.elsevier.com/locate/optlaseng

Determination of edge correspondence using color codes for one-shot shape acquisition

Xu Zhang, Limin Zhu*

State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China

Article info

Article history:

Received 4 June 2010

Received in revised form 19 August 2010

Accepted 19 August 2010

Available online 15 September 2010

Keywords:

Structured light

Decoding of color codes

3D shape acquisition


Abstract

A robust method for measuring 3D shapes is proposed, in which only one stripe pattern image is required. To determine edge correspondence, we match color codes instead of edge codes because the former are more stable and immune to the standard deviation of the Gaussian filter in edge detection and the width of the color band. The color code is identified by K-means. This method exhibits huge advantages in adaptability and automation over thresholding techniques. The proposed decoding method is compared with two well-known algorithms, dynamic programming and multi-pass dynamic programming. Using ground truth, we evaluate the performance of the methods in measuring three different objects. Quantitative and qualitative comparisons are shown in the experiments, and results affirm that our method is effective and robust.

© 2010 Elsevier Ltd. All rights reserved.

1. Introduction

3D measurement is an important problem that has been extensively investigated [1,2]. Recent years have seen numerous advancements in real-time 3D-shape acquisition, which is becoming increasingly crucial in key industries, including manufacturing, medical science, computer science, home security, and entertainment [3,4].

Coded structured light is an optical technique based on active stereovision, which simplifies the correspondence problem with the help of controlled illumination. The position information is encoded into a pattern image or a sequence of patterns projected onto the scene. Their reflections are captured by taking photographs from a shifted position over time. The correspondence is determined by decoding the modulated image, and depth information is triangulated.

Many studies have contributed to pattern codification strategies [5,6]. The various pattern images proposed include color multi-slits [7], stripe patterns [8-11], grid patterns [12-14], matrix dots [15,16], geometric coding [17], and hybrid methods [18]. Most of them are based on spatial coding, which encodes position information in a local neighborhood. Many decoding methods are based on different coding strategies, such as local matching based on the window property [19], dynamic programming [8], graph cut [20], and so on.


Phase shifting [4] and frequency multiplexing methods [3] use sets of patterns showing continuous variations in intensity or color along one or two axes. Both periodic and absolute patterns can be found among these methods. For phase shifting methods, phase decoding is performed in the spatial domain, whereas for frequency multiplexing methods it is performed in the frequency domain. These methods typically use complex patterns for the phase unwrapping process and require assumptions of smooth reflectance, either locally or globally. If the assumptions do not hold, the decoding process is easily disturbed, leading to ambiguities near depth discontinuities.

In this paper, we propose a robust method for measuring 3D shapes, in which only one stripe pattern image is required for projection. The correspondence is determined through matching color codes instead of edge codes. Color codes are more stable and immune to the standard deviation of the Gaussian filter for edge detection and the width of the color band. The code is identified by a clustering method. This method exhibits huge advantages in adaptability and automation over thresholding techniques. The proposed decoding method is compared with two well-known algorithms, the dynamic programming (DP) [21] and multi-pass dynamic programming (M-DP) [8] algorithms. The performance of the methods is evaluated using ground truth.

The rest of the paper is structured as follows. Section 2 introduces our proposed method. The experiments implemented are discussed in Section 3. We obtain ground truth data by robust spacetime analysis [22]. To conduct quantitative evaluation, we compute the correct rates and recalls for three decoding methods on theoretical and practical codes. The conclusions drawn are presented in Section 4.

2. The proposed method

Our proposed one-shot shape acquisition method includes six steps, namely encoding, image capture, edge detection, code identification, matching, and triangulation (Fig. 1).

The stripe pattern, which can be seen as a color sequence, is adopted. Edges instead of intensities are used to represent codes. The advantages of this approach are: the edge can be detected with higher precision using sub-pixel methods; more codes can be designed in a pattern image to obtain denser reconstruction; and the band size can reach one pixel because the stripe pattern does not need isolated pixels. The color edge is detected using Cumani's method [23] with non-maximum suppression and without hysteresis thresholding. In code identification, the color code instead of the edge code is recognized using a clustering method because color gradients are not stably detected; they are considerably affected by the camera aperture, the standard deviation of the Gaussian filter for edge detection, and the width of the color band. Although hard [19] and soft thresholds [8] are simple and effective, selecting the threshold is bothersome and requires repeated attempts before a good threshold is determined. Conversely, clustering can intuitively identify the classes in different situations. In the matching step, local spatial coherence is assumed, and the edge correspondence is determined through matching the color code sequences instead of the edge code sequences. The window-based method derived from our coding strategy is adopted to match the color code sequences. Finally, the 3D shape is obtained with the calibrated structured light system. Details on our calibration method can be found in Ref. [24].

2.1. Encoding information into the stripe pattern

The stripe pattern can be seen as a color sequence P = (p0, p1, …, pN) with good windowed uniqueness, which yields an edge sequence Q = (q1, q2, …, qN). Each color code or edge code has three bits, corresponding to the R, G, and B channels. The edge code is produced by performing a bitwise exclusive-or (XOR) on two adjacent color codes. Every two adjacent color codes also conform to the condition of being different in at least one bit. This rule can be expressed as follows:

q_i = p_{i-1} XOR p_i,  ∀ p_i ∈ P and q_i ≠ (0, 0, 0).  (1)

The window property means that if a window of prescribed size, say k, is slid over the color sequence, each possible nonzero subsequence of size k can be observed through the window exactly once. The Hamming distance between any two subsequences at two positions is larger than 1.

Fig. 1. Pipeline for 3D-shape reconstruction: encoding, capturing image, edge detection, code identification (crisp clustering, K-means), matching (window-based method), calibration, and triangulation.

If there are M color labels, the theoretical number of unique windows can be easily calculated by the following equation:

N(M, k) = M(M − 1)^(k−1).  (2)

All these windows appear only once in sequence P. In that case, the length of sequence P is N + k − 1, the maximum. Obtaining this maximum sequence is a combinatorial optimization problem that is not easily solved when k > 2. The practical approach is to find a suboptimal solution whose length is enough to code the pattern image. The algorithm we use to generate sequence P is based on pseudorandom coding.

First, k color codes that conform to Eq. (1) are generated to form the seed of sequence P. Each time, a new code different from its preceding code is randomly selected from the M color labels. This new code, together with the foregoing k − 1 codes, forms a new subsequence. The Hamming distances between the new subsequence and all preceding subsequences at different positions are examined. If all Hamming distances are larger than 1, the new color code is appended to the end of the present sequence, and the sequence is updated. If not, another new color code is randomly generated and checked, until the number of sampling attempts exceeds the preset threshold st. The sequence grows by iterating the sampling and checking processes. A detailed explanation of pseudorandom coding can be found in Ref. [15].

Many difficulties, such as shadow and shading, occlusion, specular reflection, and so on, arise in practical 3D-shape measurement. Black appears in the captured image where there is little or no light. By contrast, white appears where specular reflection or the saturation intensity is reached. Thus, we assign six color labels, excluding black and white. Parameter k is set to 4 and st to 30. The sequence is obtained using the aforementioned algorithm and transformed into a pattern image, shown in Fig. 1.
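The sampling-and-checking loop above can be sketched in Python. This is our own illustration of the pseudorandom coding idea, not the authors' code; the function names and the greedy stopping rule are assumptions.

```python
import random

# Six 3-bit color labels, excluding black (0,0,0) and white (1,1,1),
# as chosen in the paper (M = 6, k = 4, st = 30).
LABELS = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (1, 0, 1), (0, 1, 1)]

def edge_code(a, b):
    """Eq. (1): bitwise XOR of two adjacent 3-bit color codes."""
    return tuple(x ^ y for x, y in zip(a, b))

def hamming(w1, w2):
    """Hamming distance between two windows of color codes."""
    return sum(c1 != c2 for c1, c2 in zip(w1, w2))

def generate_sequence(labels, k=4, st=30, target_len=60, rng=None):
    """Grow a pseudorandom color sequence whose length-k windows are unique
    and pairwise at Hamming distance > 1 (sketch of Section 2.1)."""
    rng = rng or random.Random(0)
    # Seed: k codes, each differing from its predecessor (edge code != (0,0,0)).
    seq = [rng.choice(labels)]
    while len(seq) < k:
        c = rng.choice(labels)
        if c != seq[-1]:
            seq.append(c)
    windows = [tuple(seq)]
    while len(seq) < target_len:
        for _ in range(st):  # up to st sampling attempts per new code
            c = rng.choice(labels)
            if c == seq[-1]:
                continue
            w = tuple(seq[-(k - 1):] + [c])
            if all(hamming(w, u) > 1 for u in windows):
                seq.append(c)
                windows.append(w)
                break
        else:
            break  # st attempts exhausted: stop growing the sequence
    return seq
```

The Hamming-distance check subsumes uniqueness: any two windows at distance greater than 1 are necessarily distinct.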

2.2. Color edge detection

After the stripe pattern is projected onto the object, the modulated image is recorded by a camera. In an ideal setting, one camera color channel is affected only by the correspondingly projected light. In practice, however, each camera channel receives contributions from all three projector channels, a phenomenon called color crosstalk. In some extreme situations, the maximum channel of the camera image differs from the projected monochromatic light. Color crosstalk is represented as a transformation matrix by Caspi et al. [25], and we conduct color correction to reduce the effect of color crosstalk [26].
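In the linear crosstalk model, a camera RGB value is a 3×3 matrix times the projector RGB value, so correction amounts to inverting that matrix. A minimal sketch, with matrix values invented purely for illustration:

```python
import numpy as np

# Hypothetical crosstalk matrix A (values made up for illustration):
# column j is the camera RGB response to the pure projector channel j,
# as would be measured once during calibration.
A = np.array([[0.90, 0.08, 0.03],
              [0.10, 0.85, 0.07],
              [0.04, 0.09, 0.88]])

def correct_color(camera_rgb):
    """Invert the linear crosstalk model camera = A @ projector."""
    return np.linalg.solve(A, np.asarray(camera_rgb, dtype=float))

# A pixel lit by pure projector red arrives mixed at the camera ...
mixed = A @ np.array([1.0, 0.0, 0.0])
# ... and the correction recovers the projected channel.
restored = correct_color(mixed)
```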

For edge detection in color or multispectral images, Cumani suggests extending procedures based on the second partial derivatives of the image functions [23]. The extremal edges are defined as loci of transversal zero crossings of the first derivative of the contrast function in the direction of maximal contrast. The location and direction of the edges are finally determined by computing the eigenvalues and corresponding eigenvectors. More details can be found in [23].
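The eigenvalue/eigenvector step can be illustrated with the first-order Di Zenzo structure tensor, a close relative of Cumani's second-derivative scheme. This sketch is our own and only shows how the direction of maximal contrast follows from a 2×2 eigenproblem; it is not Cumani's full detector.

```python
import numpy as np

def max_contrast(image):
    """Per-pixel maximal-contrast magnitude and direction for a multichannel
    image (H x W x C), via the Di Zenzo structure tensor. Shown only to
    illustrate the eigenvalue/eigenvector step of multispectral edge
    detection; Cumani's method additionally uses second derivatives."""
    gy, gx = np.gradient(image.astype(float), axis=(0, 1))
    # Structure tensor entries, summed over the color channels.
    jxx = (gx * gx).sum(axis=2)
    jyy = (gy * gy).sum(axis=2)
    jxy = (gx * gy).sum(axis=2)
    # Largest eigenvalue of [[jxx, jxy], [jxy, jyy]] = squared contrast,
    # and the angle of its eigenvector = direction of maximal contrast.
    trace_half = 0.5 * (jxx + jyy)
    delta = np.sqrt(0.25 * (jxx - jyy) ** 2 + jxy ** 2)
    lam_max = trace_half + delta
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    return lam_max, theta
```

Non-maximum suppression along theta would then thin the response to one-pixel edges, as in Section 2.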

2.3. Color code identification

The edges divide the image into several parts in each scan line. A zoomed-in example is shown in Fig. 1. The white edge is surrounded by color pixels. Thus, each scan line in the acquired image can be seen as a pattern sequence. The color codes in the obtained pattern have to be identified first to determine the correspondence between the obtained color sequences and the projected pattern.

Our method consists of two steps. First, K-means is applied to the three channels to label the pixels. For each channel, there are only two values, 0 and 1. We take the minimum and maximum values as the initial cluster centroids, and K-means then iterates the process of updating the cluster centroids until no cluster changes. After that, each pixel in the obtained image is labeled. Second, the color code between two edges in one row is identified by the maximum vote of the pixel labels. Generally, the pixel labels between two neighboring edges are not all identical because of complex situations, such as non-uniform reflection, surface discontinuity, color texture, noise, etc. To reduce the effect of erroneous pixel labels, the frequency of each label is computed from all the pixel labels in the color code window, and the label with the maximum frequency is set as the color code label. The width of the color code window is determined by the neighboring edge pixels, and the height is set to 3. In our experiments, this method is effective and the correct rate is satisfactory.
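The two steps can be sketched as follows. This is a simplified illustration of our own (the helper names are assumptions); the real pipeline works per scan line with a 3-pixel-high voting window between detected edges.

```python
import numpy as np

def kmeans_binarize(channel, iters=20):
    """Two-cluster K-means on one color channel, seeded with the channel's
    minimum and maximum values, as described in Section 2.3."""
    x = channel.astype(float).ravel()
    c0, c1 = x.min(), x.max()
    for _ in range(iters):
        labels = (np.abs(x - c1) < np.abs(x - c0)).astype(int)
        n0, n1 = c0, c1
        if (labels == 0).any():
            n0 = x[labels == 0].mean()
        if (labels == 1).any():
            n1 = x[labels == 1].mean()
        if n0 == c0 and n1 == c1:
            break  # centroids converged
        c0, c1 = n0, n1
    return labels.reshape(channel.shape)

def vote_code(labels_rgb, lo, hi):
    """Majority vote of per-pixel 0/1 labels between two edges
    (columns lo..hi) in a window; returns the 3-bit color code."""
    code = []
    for ch in labels_rgb:                       # one binarized array per channel
        window = ch[:, lo:hi]
        code.append(int(window.mean() >= 0.5))  # majority of 0/1 labels
    return tuple(code)
```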

2.4. Matching

Fig. 2 illustrates the problem of matching the obtained pattern sequence with the projected pattern sequence. In the figure, the corresponding edge codes are connected by lines. Because of occlusion, edge codes {e6,e7,e8} fall behind edge codes {e3,e4,e5}; however, in the projected pattern sequence, the corresponding code segment of the former is ahead of that of the latter. Edge codes {q1,q2,q3,q7,q12,q16} in the projected pattern sequence are missing from the obtained pattern sequence; edge codes {e1,e9,e10} have no corresponding edge codes. Code e2 corresponds to code q8, but the codes are not the same: because the corresponding surface reflects only red light when purple light is projected, the camera captures the color code c1 as red. Consequently, an erroneous red code is received instead of purple.

In practice, the formation of the obtained pattern image is more complex than depicted in Fig. 2, and four code situations occur, namely error codes, missing codes, fake codes, and code swaps. Error codes are codes in E that differ from the corresponding codes in Q. Missing codes are codes in Q that have no corresponding codes in E. Fake codes are codes in E that have no corresponding codes in Q. A code swap is the situation in which some corresponding codes appear in different orders in Q and E.

Fig. 2. Example of matching the projected pattern and the obtained pattern.

The key problem in 3D measurement is determining the edge correspondence from these two sequences under such complex code situations. Methods that directly match edge codes, such as DP and M-DP, have been proposed. In this paper, the edge correspondence is obtained through matching color codes, which can be denoted as

cor_edge = Match(P, C).  (3)

Because local continuity can be assumed for the surface of the object, local spatial coherence is assumed for the obtained pattern sequence. The entire color code sequence is composed of one or several code segments; the consecutive codes in each segment correspond to consecutive projected codes, and the size of each segment is larger than the window size k. For example, {c2,c3,c4,c5}, {c5,c6,c7,c8}, and {c10,c11,c12,c13} have unique corresponding window subsequences in the projected pattern, {p8,p9,p10,p11}, {p3,p4,p5,p6}, and {p12,p13,p14,p15}. Then, the edge correspondence is determined through Eq. (1). For example, in Fig. 2, {e3,e4,e5}, {e6,e7,e8}, and {e11,e12,e13} correspond with {q4,q5,q6}, {q9,q10,q11}, and {q13,q14,q15}.

Our matching method is based on this local spatial coherence. First, a k-code segment that is uniquely identical to one window in the projected pattern sequence is determined as the core. Second, the code segment grows from its two ends in an outward direction until matching codes can no longer be found. These two processes, finding the core and growing, are iterated until no further core can be found. Third, rule S2 [27] is adopted to obtain the best match, i.e., choosing the one with the best support. Finally, the edge correspondence is resolved from the color code match through Eq. (1).
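The find-core-and-grow loop can be sketched compactly. This is our own simplification: the S2 best-support rule is omitted, and the first unique core found is simply accepted.

```python
def match_color_codes(P, C, k=4):
    """Match an observed color-code sequence C against the projected
    sequence P. Returns a dict mapping indices in C to indices in P.
    Sketch of the core-and-grow idea of Section 2.4; the S2 best-support
    rule is omitted for brevity."""
    # Index every length-k window of P (windows are unique by construction).
    windows = {tuple(P[i:i + k]): i for i in range(len(P) - k + 1)}
    corr = {}
    i = 0
    while i + k <= len(C):
        w = tuple(C[i:i + k])
        if w not in windows or any(j in corr for j in range(i, i + k)):
            i += 1
            continue
        p = windows[w]                      # core found: unique match in P
        for d in range(k):                  # record the core correspondences
            corr[i + d] = p + d
        # Grow forward while the next codes keep matching P.
        a, b = i + k, p + k
        while a < len(C) and b < len(P) and C[a] == P[b] and a not in corr:
            corr[a] = b
            a, b = a + 1, b + 1
        # Grow backward from the start of the core.
        a, b = i - 1, p - 1
        while a >= 0 and b >= 0 and C[a] == P[b] and a not in corr:
            corr[a] = b
            a, b = a - 1, b - 1
        i += k
    return corr
```

Note how a swapped pair of segments, as in Fig. 2, is handled naturally: each segment finds its own core and grows independently.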

3. Experiments

We developed a portable structured light system (Fig. 3(a)), which is composed of a camera with 1024×1024 resolution and a digital light processing projector with 1024×768 resolution. The proposed method is implemented on this system for measuring three different objects (Fig. 3). The first subject has weak color, steep slopes, surface texture, and occlusion. The surface of the second object has discontinuities. The third object has saturated color and specular reflection, which makes reconstructing the 3D shape substantially more difficult. Six pattern images from the same color code sequence were generated to reduce the dependence on a particular pattern image. One color label represented different colors in the six pattern images, i.e., 3D measurement was conducted six times for each object.



Fig. 3. Portable structured light system and four subjects: (a) portable structured light system; (b) box and doll; (c) hands; and (d) feet and pig.


Fig. 4. Spacetime mapping of a Gaussian illuminant [22].


The proposed method was implemented on our system, and the well-known algorithms DP [21] and M-DP [8] were run for a direct comparison with our approach.

To compare these decoding methods, we obtained ground truth using spacetime analysis [22]. The ground truth provides the correspondence between the captured and projected images and helps evaluate the decoding results of these methods.

3.1. Obtaining ground truth

Ground truth is essential because it is used to assess whether each decoding result is correct and to evaluate the performance of the different algorithms. The ground truth provides the correspondence between each pixel in the captured image and the position in the projected image. Spacetime analysis, proposed by Curless and Levoy [22], substantially increases immunity to shape and reflectance variations.

This method projects only one stripe in each frame, and the stripe position shifts across the pattern image with time, as shown in Fig. 4. The results from this temporal analysis are of high accuracy and can be used as ground truth. Fig. 5 shows the ground truth data at different steps of the reconstruction process. The ground truth of the color codes is the standard data used in identifying the color codes. The surface comes from all correct correspondences, and it serves to judge whether each correspondence from the decoding methods is correct.

3.2. Results

The correct rate of the identified color codes is shown in Table 1. The minimum, maximum, and mean are computed from the results of the six implementations. The correct rates for the first two subjects are good. Because the third object has saturated color, its correct rate is much lower. The color codes in the three channels are independently identified. Although color correction has been applied, interaction between the different channels is inevitable. The correct rate of the color code is the combination of the correct rates of the three channels; thus, it is lower than 50%.

The proposed method was conducted on these three objects. DP and M-DP were also implemented to determine the edge correspondence.

Using the ground truth data as the basis, we can determine whether each computed correspondence is correct or not.


Fig. 5. Ground truth: (a) the codes; (b) the surfaces; and (c) the zoomed-in surfaces.

Table 1. The correct rate of color codes (%).

       Box and doll   Hands   Feet and pig
Min    66.46          63.95   31.87
Max    68.84          67.22   41.24
Mean   67.66          65.99   36.47

Table 2. Performance of the three algorithms on theoretical codes (%).

                      S2             DP             M-DP
                      Cr      Rec    Cr      Rec    Cr      Rec
Box and doll   Min    95.27   85.45  81.07   78.59  73.57   79.12
               Max    95.34   85.62  84.98   81.66  76.27   81.84
               Mean   95.30   85.51  82.09   79.67  74.55   80.09
Hands          Min    97.31   91.94  46.63   53.94  44.13   53.96
               Max    97.49   92.29  57.17   66.13  54.20   66.27
               Mean   97.45   92.14  52.58   60.50  49.53   60.56
Feet and pig   Min    97.66   94.92  74.05   79.17  69.10   79.19
               Max    97.70   95.02  78.99   83.65  73.15   83.71
               Mean   97.68   94.95  76.52   81.27  70.99   81.30

Cr: the correct rate; Rec: the recall.

Table 3. Performance of the three algorithms on practical codes (%).

                      S2             DP             M-DP
                      Cr      Rec    Cr      Rec    Cr      Rec
Box and doll   Min    90.17   44.84  63.34   44.06  53.61   44.37
               Max    94.47   49.09  68.99   47.19  57.37   47.61
               Mean   92.60   47.43  65.65   45.24  54.91   45.55
Hands          Min    91.64   44.76  40.33   32.66  36.67   32.71
               Max    97.33   50.34  55.89   46.98  53.12   46.99
               Mean   95.43   47.62  47.93   39.41  44.88   39.62
Feet and pig   Min    65.37    5.06   7.73    3.22   6.78    3.47
               Max    96.98   17.71  30.04   13.91  25.67   14.43
               Mean   80.48   11.85  19.56    8.23  16.50    8.50

Cr: the correct rate; Rec: the recall.


Then, two indexes, the correct rate and the recall, are calculated to depict the performance of each method:

Correct rate = Correct correspondences / Total detected correspondences  (4)

Recall = Correct correspondences / Total correspondences in ground truth  (5)

A higher correct rate means a more robust method, while a higher recall means enhanced efficiency.
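Eqs. (4) and (5) amount to precision and recall over edge correspondences. Computed against ground truth, they look like this (a minimal sketch; the correspondence dictionaries are invented for illustration):

```python
def correct_rate_and_recall(detected, ground_truth):
    """detected and ground_truth map camera-edge indices to projector-edge
    indices. Eq. (4): correct / total detected correspondences.
    Eq. (5): correct / total correspondences in ground truth."""
    correct = sum(1 for e, q in detected.items() if ground_truth.get(e) == q)
    return correct / len(detected), correct / len(ground_truth)
```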

First, the theoretical color codes coming from the ground truth data were used, meaning that all the codes are correct. The performance of the three algorithms is depicted in Table 2. All three exhibit high correct rates and recalls; however, ours are much better.

Second, the three algorithms were run on the same practical codes identified using our method. Comparing the data in Tables 2 and 3, the correct rates of our method decrease slightly but remain very high, up to 95%, even though the correct rate of the color codes is only about 69%. Conversely, the correct rates of the other two methods are very low. In terms of recall, our matching method also exhibits better performance. The two tables show that the recalls decrease faster than the correct rates because of the errors in the color codes, indicating that the correct rate of the color codes and the recall of the matching methods are closely related.


Fig. 6. Point clouds and surfaces from different algorithms: (a) point cloud from DP; (b) surface from correct points in (a); (c) point cloud from M-DP; (d) surface from correct points in (c); (e) point cloud from our method; and (f) surface from correct points in (e).


Third, the point clouds and surfaces obtained using the different methods were also compared. In Figs. 6(a) and (c), a large number of outliers exist; by contrast, Fig. 6(e) has fewer outliers. This conforms to the correct rates in Table 3: our method has a higher correct rate. In Fig. 6(f), a larger portion of the surface is reconstructed because the recall of our method is higher.

4. Conclusion

A robust method for measuring 3D shapes is proposed in which only one stripe pattern image is adopted. Color codes instead of edge codes were used to determine the edge correspondence because color codes are more stable and immune to the standard deviation of the Gaussian filter for edge detection and the width of the color band. The color code was identified by K-means. This method exhibits huge advantages in adaptability and automation over thresholding techniques. The proposed decoding method was compared with two well-known algorithms, DP and M-DP. The performance of the methods in measuring three different objects was evaluated using the ground truth data. Quantitative and qualitative comparisons in the experiments affirm that our method is effective and robust.

Acknowledgements

This work was partially supported by the National Natural Science Foundation of China under Grant Nos. 50821003 and 50775147, the National Key Basic Research Program under Grant No. 2007CB714005, and the Science & Technology Commission of Shanghai Municipality under Grant No. 10JC1408000.

References

[1] Chen F, Brown G, Song M. Overview of three-dimensional shape measurement using optical methods. Optical Engineering 2000;39:10.

[2] Blais F. Review of 20 years of range sensor development. Journal of Electronic Imaging 2004;13:231.

[3] Su X, Zhang Q. Dynamic 3-D shape measurement method: a review. Optics and Lasers in Engineering 2010;48(2):191-204.

[4] Zhang S. Recent progresses on real-time 3D shape measurement using digital fringe projection techniques. Optics and Lasers in Engineering 2010;48(2):149-58.

[5] Salvi J, Pages J, Batlle J. Pattern codification strategies in structured light systems. Pattern Recognition 2004;37(4):827-49.

[6] Salvi J, Fernandez S, Pribanic T, Llado X. A state of the art in structured light patterns for surface profilometry. Pattern Recognition 2010;43:2666-80.

[7] Fechteler P, Eisert P. Adaptive color classification for structured light systems. In: The 15th international conference on computer vision and pattern recognition - workshop on 3D face processing; 2008. p. 1-7.

[8] Zhang L, Curless B, Seitz S. Rapid shape acquisition using color structured light and multi-pass dynamic programming. In: The 1st IEEE international symposium on 3D data processing, visualization, and transmission; 2002. p. 24-36.

[9] Hall-Holt O, Rusinkiewicz S. Stripe boundary codes for real-time structured-light range scanning of moving objects. In: Eighth IEEE international conference on computer vision; 2001. p. 359-66.

[10] Li H, Straub R, Prautzsch H. Structured light based reconstruction under local spatial coherence assumption. In: The third international symposium on 3D data processing, visualization, and transmission; 2006. p. 575-82.

[11] Je C, Lee SW, Park R-H. High-contrast color-stripe pattern for rapid structured-light range imaging. In: 8th European conference on computer vision; 2004. p. 95-107.

[12] Kawasaki H, Furukawa R, Sagawa R, Yagi Y. Dynamic scene shape reconstruction using a single structured light pattern. In: IEEE conference on computer vision and pattern recognition; 2008. p. 1-8.

[13] Chen S, Li Y, Zhang J. Vision processing for realtime 3-D data acquisition based on coded structured light. IEEE Transactions on Image Processing 2007;17(2):167.

[14] Salvi J, Batlle J, Mouaddib E. A robust-coded pattern projection for dynamic 3D scene measurement. Pattern Recognition Letters 1998;19(11):1055-65.

[15] Morano R, Ozturk C, Conn R, Dubin S, Zietz S, Nissanov J. Structured light using pseudorandom codes. IEEE Transactions on Pattern Analysis and Machine Intelligence 1998:322-7.

[16] Albitar I, Graebling P, Doignon C. Robust structured light coding for 3D reconstruction. In: IEEE 11th international conference on computer vision; 2007. p. 1-6.

[17] Koninckx T, Van Gool L. Real-time range acquisition by adaptive structured light. IEEE Transactions on Pattern Analysis and Machine Intelligence 2006;28(3):432-45.

[18] Pages J, Salvi J, Collewet C, Forest J. Optimised De Bruijn patterns for one-shot shape acquisition. Image and Vision Computing 2005;23(8):707-20.

[19] Boyer K, Kak A. Color-encoded structured light for rapid active ranging. IEEE Transactions on Pattern Analysis and Machine Intelligence 1987;9(1):14-28.

[20] Koninckx T, Geys I, Jaeggli T, Van Gool L, Leuven B. A graph cut based adaptive structured light approach for real-time range acquisition. In: International symposium on 3D data processing, visualization and transmission; 2004. p. 413-21.

[21] Chen C, Hung Y, Chiang C, Wu J. Range data acquisition using color structured lighting and stereo vision. Image and Vision Computing 1997;15(6):445-56.

[22] Curless B, Levoy M. Better optical triangulation through spacetime analysis. In: Proceedings of IEEE international conference on computer vision; 1995. p. 987-94.

[23] Cumani A. Edge detection in multispectral images. CVGIP: Graphical Models and Image Processing 1991;53(1):40-51.

[24] Zhang X, Zhu L. Projector calibration from the camera image point of view. Optical Engineering 2009;48:117-208.

[25] Caspi D, Kiryati N, Shamir J. Range imaging with adaptive color structured light. IEEE Transactions on Pattern Analysis and Machine Intelligence 1998;20(5):470-80.

[26] Zhang X, Zhu L. Robust calibration of a color structured light system using color correction. In: 2nd international conference on intelligent robotics and applications, Singapore; 2009.

[27] Hugli H, Maitre G. Generation and use of color pseudorandom sequences for coding structured light in active ranging. In: Society of Photo-Optical Instrumentation Engineers (SPIE) conference series, vol. 1010; 1989. p. 75.