

Expert Systems with Applications 39 (2012) 5012–5018


Expert Systems with Applications

journal homepage: www.elsevier.com/locate/eswa

Constructing 3D human model from front and side images

Yueh-Ling Lin, Mao-Jiun J. Wang *
Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Taiwan, ROC

Article info

Keywords: 2D images; Feature extraction; 3D human model; Shape deformation

0957-4174/$ - see front matter © 2011 Elsevier Ltd. All rights reserved. doi:10.1016/j.eswa.2011.10.011

* Corresponding author. Address: Department of Industrial Engineering and Engineering Management, National Tsing Hua University, 101, Section 2, Kuang Fu Road, Hsinchu 30013, Taiwan, ROC. Tel.: +886 3 5742655; fax: +886 3 5737107.

E-mail addresses: [email protected] (Y.-L. Lin), [email protected] (M.-J.J. Wang).

Abstract

Constructing a 3D human model from 2D images provides a cost-effective approach to visualizing a digital human in a virtual environment. This paper presents a systematic approach for constructing a 3D human model using the front and side images of a person. The silhouettes of the human body are first detected, and the feature points on the silhouettes are subsequently identified. The feature points are further used to obtain the body dimensions that are necessary for identifying a template 3D human model. The shape of the template human model can then be modified by the free-form deformation method. Moreover, the proposed approach has been applied to construct the 3D human models of 30 subjects. The comparisons between the constructed 3D models and the 3D scanning models of the 30 subjects indicate that the proposed system is effective and robust.

© 2011 Elsevier Ltd. All rights reserved.

1. Introduction

Digital human modeling is very useful in the areas of customized product design, multimedia games, and virtual reality (Kuo & Wang, 2009). Once the 3D human model is constructed, the digital human can be visualized and animated. With the advancement of optoelectronic technology, the 3D human model can be constructed by using a 3D whole-body scanner. The 3D whole-body scanning system has been used in several national anthropometric surveys, such as the Civilian American and European Surface Anthropometry Resource (CAESAR) project, for establishing anthropometric databases (Robinette, Daanen, & Paquet, 1999). The obtained 3D scanning human models can be used for various applications, including proactive product design and workplace evaluation (Lu & Wang, 2008). However, the use of a 3D scanner for constructing 3D human models is very costly and lacks mobility (Seo, Yeo, & Wohn, 2006). A low-cost system for constructing 3D human models is therefore needed.

Constructing 3D human models based on 2D images is a simple and logical approach. There is increasing demand for generating appropriately shaped and plausible human models from photographs (Seldon, Stevens, & Tucker, 1940; Wang, Wang, Chang, & Yuen, 2003). To construct a 3D human model from photographs, the silhouette information of the human body and its corresponding relationships can be obtained from the images (Lee, 2000; Lu, Wang, Chena, & Wua, 2010). With the detection of shape features from the body


silhouette, the spatial relations between the 2D images and the 3D template model can be established in the coordinate system. The relationship is subsequently used for morphing the body shape of the human model. The 3D model of the human body can be modified according to the relevant anthropometric measurements (Lee, Gu, & Magnenat-Thalmann, 2000). Since the variation of body shape is associated with anthropometric measurements in different parts of the body, the anthropometric data can be used as parameters to vary the body shape with the required attributes.

For morphing the body shape, free-form deformation (FFD) is a general approach (Mochimaru & Kouchi, 1998). With a 3D whole-body laser scanner, a rich source of data on the human body surface can be obtained. From the database, a 3D human model is selected as a common template mesh. The body shape of the template model can be deformed by moving the control points through translation and scaling in specific body regions. The regional displacement in the template model generates a change of body shape, and the shape of the current 3D model can then be freely deformed. As one part of the body shape changes, the proportion of that body part is directly adjusted. Then the mesh surface of the 3D model can be allometric, depending on the shape variation in all parts of the body. Thus, the customized 3D body shape of a real human can be presented.

For constructing a 3D human model from 2D images, the captured images are analyzed to obtain the feature points. However, some important feature points, such as the neck, thigh, knee, and shank points, may not be fully adopted (Hilton et al., 2000; Seo et al., 2006; Yeo, 2005). The deformation of body shape with few feature points may affect the quality and reality of the resulting 3D body shape (Lee et al., 2000). In order to have a genuine representation of the 3D human model, sufficient feature information should be adopted. In this study, a systematic



approach for constructing a 3D human model using the feature information from the front and side 2D images is proposed. With an automated feature extraction method, the detected feature points on the silhouette of the human body facilitate the collection of anthropometric data. Subsequently, the anthropometric measurements can be used for identifying a template 3D human model. Then the free-form deformation method is applied to construct the customized 3D human model from the template human model. Finally, the resulting human models are quantitatively evaluated and validated.

2. Feature extraction and shape deformation

The proposed system for constructing a 3D human model involves feature extraction, body shape deformation, and 3D model construction. The framework of the proposed system is illustrated in Fig. 1. For image analysis of the 2D photographs, the silhouette of the human body is first detected by using an edge detector. Then the feature points on the body silhouette are extracted by using a feature extraction algorithm. The extracted feature points can further be used to obtain the body dimensions that are necessary for identifying a template 3D human model and morphing the 3D body shape.

2.1. Silhouette detection

To extract the body silhouette from 2D images, two photographs are taken from the front and side views, respectively, by a digital camera. In image processing, the color images are first converted to binary images. The Canny edge detector (Canny, 1986) is subsequently applied to locate the silhouettes of the human body in the two binary images. Then the edge pixels of the body silhouettes are encoded by using Freeman's 8-connected chain codes (Freeman, 1961).
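As a minimal sketch of the chain-coding step, the following Python fragment encodes an ordered sequence of 8-connected boundary pixels as Freeman chain codes. It assumes the edge pixels have already been ordered by a contour-following step (which a real pipeline would derive from the edge-detector output); the direction-to-code convention shown is one common choice, not necessarily the paper's.

```python
# Freeman 8-connected chain coding of an ordered boundary-pixel sequence.
# Direction (dx, dy) -> chain code, counter-clockwise from east; note that
# y grows downward in image coordinates, so (0, -1) points "up" on screen.
DIRS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
        (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def chain_code(boundary):
    """Encode consecutive 8-connected boundary pixels (x, y) as Freeman codes."""
    return [DIRS[(x1 - x0, y1 - y0)]
            for (x0, y0), (x1, y1) in zip(boundary, boundary[1:])]

# A small square silhouette traced clockwise (in image coordinates):
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1), (0, 0)]
print(chain_code(square))  # -> [0, 0, 6, 6, 4, 4, 2, 2]
```

Runs of equal codes correspond to straight boundary segments; the code changes at corners, which is what the feature extraction step exploits.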

2.2. Feature extraction

To extract feature points from the chain-coded body silhouette, an automatic feature extraction algorithm is proposed to analyze the directional differences between two connected edge pixels of the body silhouette (Lin & Wang, 2011). The proposed method characterizes a feature by calculating the directional difference between two connecting edge pixels, d_j and d_{j+1}, as the absolute difference e_k. If e_k is equal to 2, the edge pixel with a 90-degree directional change is considered a feature point. The rule for detecting a feature point is:

e_k = |d_{j+1} − d_j| = 2    (2.1)

Based on this procedure, a series of feature points can be detected. However, these detected feature points are not enough to obtain all the body dimensions; other feature points need to be further extracted by referring to the detected ones. Thus the detected feature points are used as auxiliary points to determine new feature points, which have a 45-degree directional change. Following this procedure, 38 feature points F1–F38 on the front silhouette and 22 feature points S1–S22 on the side silhouette can be detected. They are illustrated in Fig. 2(a) and (b).
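The turn-detection rule can then be applied to the chain codes. In the sketch below the directional difference is taken circularly, so that codes 7 and 0 count as one 45-degree step; this wrap-around handling is our assumption, as the paper states the rule of Eq. (2.1) as a plain absolute difference.

```python
def direction_change(d1, d2):
    """Turn between two Freeman codes in 45-degree steps, taken circularly
    (codes 0 and 7 are one step apart)."""
    diff = abs(d2 - d1) % 8
    return min(diff, 8 - diff)

def feature_indices(codes, turn=2):
    """Positions along the silhouette where the chain direction turns by
    turn * 45 degrees: turn=2 gives the 90-degree points of Eq. (2.1),
    turn=1 the auxiliary 45-degree points."""
    return [j + 1 for j in range(len(codes) - 1)
            if direction_change(codes[j], codes[j + 1]) == turn]

# Chain codes of a square silhouette traced clockwise:
codes = [0, 0, 6, 6, 4, 4, 2, 2]
print(feature_indices(codes, turn=2))  # -> [2, 4, 6]  (the corner pixels)
```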

2.3. Body dimension measurement

In order to collect body dimensions from the feature points, the feature points of the front-body silhouette and side-body silhouette are represented in the x–z and y–z coordinates, respectively. By connecting a line between two relevant feature points in the coordinate space, several feature lines and body dimensions can be obtained.

For example, as shown in Fig. 2, the stature is defined by a straight line linking the head vertex point F1 and the ground base point F36. Similarly, the head breadth is obtained by a horizontal line linking the two head points F2 and F3. Based on this procedure, a series of body dimensions, including breadths and depths, can be obtained. The vertical lines in Fig. 2(a) and (b) indicate the stature of a person. The horizontal lines in Fig. 2(a) illustrate the breadth measures of the head, neck, shoulder, chest, waist, hand, hip, thigh, knee, shank, ankle, and foot. The horizontal lines in Fig. 2(b) illustrate the depth measures of the head, neck, chest, waist, hip, thigh, knee, shank, ankle, and foot.

The body dimensions can be obtained by directly converting the pixel data into millimeter values using the scaling relation. The height, breadth, and depth dimensions are listed in Tables 1 and 2. In addition, the curvilinear dimensions and girth measures can be obtained by calculating the arc lengths of the body silhouettes. Thus a total of 25 body dimensions can be obtained from the identified 60 feature points. The collected body dimensions are then used to identify a template 3D human model for body shape deformation.
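The pixel-to-millimeter conversion amounts to a single scaling factor. A tiny illustration with hypothetical values, assuming the scale is derived from a known reference length such as the subject's stature:

```python
def to_mm(pixels, stature_mm, stature_px):
    """Convert a pixel distance to millimetres via the scaling relation
    (reference length in mm over the same length in pixels)."""
    return pixels * (stature_mm / stature_px)

# Hypothetical: a 1700 mm stature spans 850 px, so 170 px is 340 mm.
print(to_mm(170.0, stature_mm=1700.0, stature_px=850.0))  # -> 340.0
```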

In order to quantify the consistency of the body dimension measurements, the extracted measurements were compared with those measured by the traditional manual method. Thirty subjects, 15 males and 15 females aged 18–30, were tested. Each subject was measured three times by a trained staff member using traditional anthropometry. The precision of the extracted measurements is assessed by comparing the mean absolute difference (MAD) with the maximum allowable error (MAE) proposed by the Canadian Land Forces (Forest, Chamberland, Billette, & Meunier, 1999). The MAD is defined as

MAD = (1/n) Σ_{i=1}^{n} | (1/r) Σ_{j=1}^{r} X_{ij} − (1/r) Σ_{j=1}^{r} Y_{ij} |    (2.2)

where X_{ij} denotes the j-th extracted measurement of the i-th subject, Y_{ij} denotes the j-th manual measurement of the i-th subject, n denotes the number of subjects (n = 30), and r denotes the number of repetitions (r = 3).
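Eq. (2.2) can be computed directly. A short sketch with hypothetical measurement values (in mm):

```python
import numpy as np

def mean_absolute_difference(X, Y):
    """MAD of Eq. (2.2): X[i, j] is the j-th extracted measurement of subject i,
    Y[i, j] the j-th manual measurement. The means over the r repetitions are
    compared per subject and averaged over the n subjects."""
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    return np.mean(np.abs(X.mean(axis=1) - Y.mean(axis=1)))

# Two subjects, three repetitions each (hypothetical values):
X = [[320.1, 320.5, 320.3], [287.2, 287.0, 286.8]]  # extracted
Y = [[319.8, 320.0, 320.2], [287.5, 287.3, 287.4]]  # manual
print(mean_absolute_difference(X, Y))  # -> 0.35 (mm)
```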

Here, the MAD is used to quantify the consistency between measurements (Gordon et al., 1989). The MAD between the manual and automated measurements is calculated and compared with the MAE for each dimension (Forest et al., 1999). Eleven of the body dimensions, including length, breadth, and depth measures, were selected for comparison. Table 3 lists the mean absolute difference between the manual and automated measurements. The results indicate that all of the MAD values are within the acceptable limits. In other words, the dimension measurements obtained from the images were rather consistent.

Furthermore, Table 4 compares the body dimensions obtained by the proposed feature extraction algorithm with those obtained by other methods (Hilton et al., 2000; Lee, 2000; Lee et al., 2000; Meunier & Yin, 2000; Seldon et al., 1940; Seo et al., 2006; Wang et al., 2003). As can be seen, the proposed algorithm collects more body dimensions than the other methods in both the front and side images. The inclusion of more dimensions facilitates body morphing to generate a more realistic body model.

2.4. Selection of template 3D human model

In this study, the Vitronic Vitus-1600 3D whole-body scanning system was used to construct a database of 3D scanning human models. The 3D scanning database contains 263 human subjects (172 males and 91 females) aged from 18 to 30. Here, the 3D body shape of the subjects in the 3D human model database can be characterized by using the ratio of chest circumference to


Fig. 1. The framework for constructing 3D human model from 2D images.



Fig. 2. The 38 and 22 feature points from the front (a) and side (b) silhouettes are used for obtaining the 25 body dimensions.

Table 1
The front body dimensions obtained from each body region.

Body region    Front body dimension    Code (a)
Head           Stature                 1
Head           Head breadth            2
Neck           Neck breadth            3, 4
Shoulder       Biacromial breadth      5
Chest          Chest breadth           6
Waist          Waist breadth           7
Hand           Body breadth            8
Hip            Hip breadth             9
Thigh          Thigh breadth           10
Knee           Knee breadth            11
Shank          Shank breadth           12
Ankle          Bimalleolar breadth     13
Foot           Foot breadth            14

(a) Refers to Fig. 2(a).

Table 2
The side body dimensions obtained from each body region.

Body region    Side body dimension    Code (a)
Head           Stature                1
Head           Head length            2
Neck           Neck depth             3
Chest          Chest depth            4
Waist          Waist depth            5
Hip            Buttock depth          6
Thigh          Thigh depth            7
Knee           Knee depth             8
Shank          Shank depth            9
Ankle          Ankle depth            10
Foot           Foot length            11

(a) Refers to Fig. 2(b).


waist circumference and the ratio of hip circumference to waist circumference. On the other hand, the ratio of chest breadth to waist breadth and the ratio of hip breadth to waist breadth are used to characterize the body shape of a person in a 2D image.


Table 3
The MAD between manual and automated measurements.

Item                   MAE (mm)    Mean (mm)    SD (mm)    MAD (mm)
Head length            2           199.8        0.4        1.0
Biacromial breadth     8           349.0        1.6        4.1
Chest breadth          8           320.4        1.9        3.6
Chest depth            4           236.8        1.0        2.3
Waist breadth          6           287.0        1.5        3.4
Waist depth            8           207.0        1.9        4.2
Hip breadth            7           372.5        1.7        4.0
Buttock depth          8           248.6        2.0        3.9
Bimalleolar breadth    2           69.8         0.4        1.0
Foot breadth           2           100.7        0.4        0.9
Foot length            3           242.4        0.7        1.6

The maximum allowable error (MAE) is based on the survey of the Canadian Land Forces (Forest et al., 1999).


A template 3D human model can then be identified by comparing the breadth ratios obtained from the 2D image with the circumference ratios from the 3D human model database.
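As an illustrative sketch of this matching step (the field names, values, and the sum-of-absolute-differences criterion are assumptions on our part, not the paper's stated procedure), the template could be selected as the database model whose circumference ratios are closest to the image's breadth ratios:

```python
def select_template(models, chest_breadth, waist_breadth, hip_breadth):
    """Pick the scanned model whose chest/waist and hip/waist circumference
    ratios are closest to the breadth ratios measured on the 2D image."""
    r_chest = chest_breadth / waist_breadth
    r_hip = hip_breadth / waist_breadth
    return min(models,
               key=lambda m: abs(m["chest_c"] / m["waist_c"] - r_chest)
                           + abs(m["hip_c"] / m["waist_c"] - r_hip))

# Hypothetical database entries (circumferences in mm):
models = [
    {"id": 1, "chest_c": 940.0, "waist_c": 780.0, "hip_c": 950.0},
    {"id": 2, "chest_c": 990.0, "waist_c": 850.0, "hip_c": 980.0},
]
best = select_template(models, chest_breadth=320.0, waist_breadth=265.0,
                       hip_breadth=330.0)
print(best["id"])  # -> 1
```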

2.5. Relationship between the 2D images and the 3D model

To establish a corresponding relationship between the original 2D images and the template 3D human model, the 3D template model is first projected onto 2D images. An edge detection method is applied to detect the silhouettes of the projected 2D images, and the feature points of the body silhouette are subsequently identified. Fig. 1 illustrates the procedure of extracting the feature points from the 2D images. By mapping the projected 2D images onto the original 2D images, each feature point from the projected 2D image

Table 4
The comparison of the obtained body measurements from the proposed method and the other methods (Seldon et al., 1940; Hilton et al., 2000; Lee, 2000; Lee et al., 2000; Meunier & Yin, 2000; Wang et al., 2003; Seo et al., 2006). The proposed method obtains all 25 measurements listed in Tables 1 and 2 (14 front-view and 11 side-view dimensions), whereas the other methods obtain between 8 and 14 measurements each.

has a corresponding feature point on the original 2D image (Fig. 1(e)). The spatial relations between the feature points on the projected 2D image and their corresponding feature points on the original 2D image are used for body shape deformation.

2.6. Shape deformation of 3D model

Since each feature point from the projected 2D images has a corresponding feature point on the 3D template model, the feature points on the template human model can be taken as the control points in the x–z and y–z coordinate systems. A free-form body shape deformation method is used to manipulate the control points (Mochimaru & Kouchi, 1998). For example, a control point P is set for changing the shape of a body region in the 3D template model. To perform the translation and scaling of the control points, the local coordinate P = (x, y, z) is mapped to the world coordinate by the following conversion equation:

P(t) = Σ_{i=0}^{I} C(I, i) (1 − x)^{I−i} x^{i} { Σ_{j=0}^{J} C(J, j) (1 − y)^{J−j} y^{j} [ Σ_{k=0}^{K} C(K, k) (1 − z)^{K−k} z^{k} p_{ijk} ] }    (2.3)

where P(t) is the world-space position of a control point located at the local coordinate t = (x, y, z), p_{ijk} is the coordinate of the (i, j, k)-th control point on the model, I, J, and K are the numbers of control points in the x, y, and z directions, respectively, and C(n, m) denotes the binomial coefficient.

By using the conversion Eq. (2.3), smooth deformation of the different body regions can be effectively executed.
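A direct implementation of Eq. (2.3) is a nested sum over the control lattice. The sketch below evaluates the trivariate Bernstein polynomial; the identity-lattice check at the end illustrates that an undeformed lattice leaves points unchanged, so only moved control points deform the body region.

```python
from math import comb

def bernstein_point(t, control):
    """Evaluate Eq. (2.3): map the local coordinate t = (x, y, z) in [0,1]^3
    to a world-space point via the trivariate Bernstein polynomial over the
    (I+1) x (J+1) x (K+1) lattice control[i][j][k] of 3D points."""
    x, y, z = t
    I, J, K = len(control) - 1, len(control[0]) - 1, len(control[0][0]) - 1
    p = [0.0, 0.0, 0.0]
    for i in range(I + 1):
        bi = comb(I, i) * (1 - x) ** (I - i) * x ** i
        for j in range(J + 1):
            bj = comb(J, j) * (1 - y) ** (J - j) * y ** j
            for k in range(K + 1):
                bk = comb(K, k) * (1 - z) ** (K - k) * z ** k
                w = bi * bj * bk
                for a in range(3):
                    p[a] += w * control[i][j][k][a]
    return p

# An undeformed 2x2x2 lattice (control points at the unit-cube corners)
# reproduces the identity map: every local point maps to itself.
lattice = [[[(i, j, k) for k in (0, 1)] for j in (0, 1)] for i in (0, 1)]
print(bernstein_point((0.25, 0.5, 0.75), lattice))  # -> [0.25, 0.5, 0.75]
```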

Further, for collecting the height, breadth, and depth dimensions, the distance between two relevant control points in the 2D



Fig. 3. The user interface of the 3D human model construction system.

Table 5
The comparison of silhouette variation before and after deformation.

                      Front view Ea (%)    Side view Ea (%)
Before deformation    52.4                 41.7
After deformation     11.5                 6.8


images is calculated. Then the 3D body shape of the template model is deformed by using the 14 front and 11 side body dimensions collected from the original 2D images. Each body dimension represents an attribute of shape variation.

The shape variation relative to the body dimensions in the different body regions can be freely deformed. Once the body dimensions in the different body regions are given, the proportion of each body part can be directly adjusted. The body shape of the 3D model can be allometric, depending on the dimension control of each body region. Thus, the shape of the body can be changed.

2.7. 3D human model construction

The commercial software 3D MAX™ and its software development kit (SDK) Maxscript™ were used as the platform to implement the proposed system, which was developed as a plug-in for 3D MAX™. By integrating the proposed approaches of feature extraction and body shape deformation, the 3D human model can be constructed. Fig. 3 illustrates the interface design of the proposed system. After logging in to the system, the user can input his or her 2D images. Through a series of image analyses, the feature points of the body silhouettes are extracted automatically. The obtained body dimensions can be used for identifying a template 3D human model and morphing the 3D body shape. Finally, the constructed 3D human model can be visualized.

3. Performance evaluation

In order to evaluate the effectiveness of the proposed 3D human model construction system, the differences between the silhouette of the constructed 3D model and that of the original 2D image were analyzed by using the silhouette variation comparison method (Gu et al., 1999; Seo et al., 2006; Xi, 2007). The silhouette variation Ea, which represents the non-overlapping area of the two silhouettes, is used to indicate the level of matching between the constructed 3D model and the original 2D image (Seo et al., 2006). A smaller Ea, i.e., a smaller non-overlapping area, indicates a higher similarity. Ea can be obtained by the following equation:

Ea ¼PðTði; jÞ � Dði; jÞÞP

Tði; jÞ þPðTði; jÞ � Dði; jÞÞP

Dði; jÞ ð3:1Þ


Table 6
The comparison of silhouette variation with the previous studies.

Ea (%)        Seo et al. (2006)    Yeo et al. (2005)    Xi (2007)    The proposed method
Front view    14.2                 22.0                 15.7         9.3 ± 2.7
Side view     28.0                 27.0                 –            8.9 ± 2.1

Table 7
The comparison of the dimensional difference between the constructed 3D human models and the original 3D scan human models with the previous studies.

                 Xi et al. (2007)    Xi (2007)    The proposed method
Mean error       13.3 mm             19.5 mm      12.8 mm
SD               26.3 mm             32.5 mm      10.3 mm
Max error (+)    127.0 mm            176.5 mm     72.3 mm
Min error (−)    −67.5 mm            −74.6 mm     −10.2 mm


where T(i, j) indicates that the pixel at (i, j) is located inside the projected 3D template model, T′(i, j) indicates that it is located outside the 3D template model, D(i, j) indicates that the pixel at (i, j) is a foreground pixel of the original image, and D′(i, j) indicates that it is a background pixel.
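Reading T, T′, D, and D′ as binary masks (template-interior and image-foreground indicators), the non-overlap measure can be sketched as follows. This is one plausible reading of the Ea computation, not code from the paper:

```python
import numpy as np

def silhouette_variation(template_mask, image_mask):
    """Non-overlap between two binary silhouettes: the fraction of template
    pixels not covered by the image foreground, plus the fraction of
    foreground pixels not covered by the template."""
    t = template_mask.astype(bool)  # T(i, j): inside the projected template
    d = image_mask.astype(bool)     # D(i, j): foreground of the original image
    return (t & ~d).sum() / t.sum() + (~t & d).sum() / d.sum()

# Two synthetic 6x6-pixel square silhouettes offset by one row:
t = np.zeros((10, 10), dtype=bool); t[2:8, 2:8] = True
d = np.zeros((10, 10), dtype=bool); d[3:9, 2:8] = True
print(silhouette_variation(t, d))  # -> 0.333... (6/36 + 6/36)
```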

The calculated Ea between the silhouette of the constructed 3D model and that of the original 2D image of a male subject is shown in Table 5. As can be seen, the silhouette variation was relatively large prior to deformation. After body shape deformation was completed, the silhouette of the resulting 3D model matched that of the original 2D image well. This demonstrates that a 3D human model can be effectively constructed by the proposed system.

In order to further validate the robustness and effectiveness of the proposed system, 30 subjects, 15 males and 15 females aged 18–30, were recruited and scanned with the 3D whole-body scanner. For each subject, the silhouette of the 3D model constructed using the proposed approach was compared with the silhouette of the original 2D image. As shown in Table 6, the Ea values of the front-view and side-view images were 9.3 ± 2.7% and 8.9 ± 2.1%, respectively. The silhouette variations of the two views were much smaller than those of the previous studies (Seo et al., 2006; Xi, 2007; Yeo et al., 2005).

Moreover, a comparison was made between the constructed 3D human models and the original 3D scan human models. Xi (2007) and Xi, Lee, and Shu (2007) compared constructed 3D human models with the original 3D scan models by using the "IMInspect" module in the PolyWorks software (Polyworks, 2007). Here, the evaluation was performed by using the "Qualify-Tolerance" module in the Geomagic software (User Manual, 2002). Table 7 shows that the mean error, maximum error, and minimum error over the 30 subjects were 12.8, 72.3, and −10.2 mm, respectively. All of the performance indices between the constructed 3D models and the original scan models are much smaller than the results reported in the previous studies (Xi, 2007; Xi et al., 2007). These findings support the effectiveness of the proposed system.

4. Conclusion

This paper proposes a 3D human model construction system using the feature information from the front and side 2D images of a person. The integrated system for constructing 3D human models involves feature extraction, body shape deformation, and 3D model construction. After taking the 2D photographs, a body feature extraction algorithm was developed to obtain the feature points and

collect body dimensions. Based on the obtained body dimensions,a 3D template human model can be identified for body shape defor-mation. Thus, a customized 3D human model can be constructed. Inorder to evaluate the effectiveness of the proposed system, 30 sub-jects were scanned and their 3D human models were also con-structed by using the proposed system. Comparing theconstructed 3D human models with the original 3D scanning hu-man models, the results support the effectiveness and robustnessof the proposed system. Additionally, the mechanism of interac-tively morphing of 3D human models was developed. Moreover,the constructed 3D human models can further be used for custom-ized product design and dynamic virtual simulation.

References

Canny, J. (1986). A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6), 679–698.

Forest, F., Chamberland, A., Billette, J., & Meunier, P. (1999). Anthropometric survey of the Land Forces: Measurement error analysis. DCIEM No. CR 1999-041. Toronto: DCIEM.

Freeman, H. (1961). On the encoding of arbitrary geometric configurations. IRE Transactions on Electronic Computers, 10, 264–268.

Geomagic User Manual (2002). Raindrop Geomagic, Research Triangle, NC, USA.

Gordon, C. C., Bradtmiller, B., Clausen, C. E., Churchill, T., McConville, J. T., & Tebbetts, I. (1989). 1987–1988 Anthropometric survey of US Army personnel: Methods & summary statistics. Natick/TR-89-044. Natick, MA: US Army Natick Research Development and Engineering Center.

Gu, X., Gortler, S. J., Hoppe, H., McMillan, L., Brown, B. J., & Stone, A. D. (1999). Silhouette mapping. Tech. Rep. TR-1-99, Harvard University.

Hilton, A., Beresford, D., Gentils, T., Smith, R., Sun, W., & Illingworth, J. (2000). Whole-body modelling of people from multiview images to populate virtual worlds. The Visual Computer, 16(7), 411–436.

Kuo, C. F., & Wang, M. J. (2009). Motion generation from MTM semantics. Computers in Industry, 60, 339–348.

Lee, W. (2000). Feature-based approach on animatable virtual human cloning. PhD thesis, MIRALab, University of Geneva, Switzerland.

Lee, W., Gu, J., & Magnenat-Thalmann, N. (2000). Generating animatable 3D virtual humans from photographs. Computer Graphics Forum (Eurographics), 19(3).

Lin, Y. L., & Wang, M. J. (2011). Automated body feature extraction from 2D images. Expert Systems with Applications, 38, 2585–2591.

Lu, J. M., & Wang, M. J. (2008). Automated data collection using 3D whole body scanner. Expert Systems with Applications, 35, 407–414.

Lu, J. M., Wang, M. J., Chena, C. W., & Wua, J. H. (2010). The development of an intelligent system for customized clothing making. Expert Systems with Applications, 37(1), 799–803.

Meunier, P., & Yin, S. (2000). Performance of a 2D image-based anthropometric measurement and clothing sizing system. Applied Ergonomics, 31(5), 445–451.

Mochimaru, M., & Kouchi, M. (1998). A new method for classification and averaging of 3D human body shape based on the FFD technique. International Archives of Photogrammetry and Remote Sensing, XXXII, 888–893.

Polyworks, Innovmetric Company (2007). http://www.innovmetric.com

Robinette, K. M., Daanen, H., & Paquet, E. (1999). The CAESAR project: A 3D surface anthropometry survey. In Proceedings of the Second International Conference on 3D Digital Imaging and Modeling (pp. 380–386).

Seldon, W., Stevens, S. S., & Tucker, W. B. (1940). The varieties of human physique: An introduction to constitutional psychology. Darien, Conn.: Hafner.

Seo, H., Yeo, Y., & Wohn, K. (2006). 3D body reconstruction from photos based on range scan. Lecture Notes in Computer Science, 3942, 849–860.

Wang, C. C. L., Wang, Y., Chang, T. K. K., & Yuen, M. M. F. (2003). Virtual human modeling from photographs for garment industry. Computer-Aided Design, 35(6), 577–589.

Xi, P. (2007). A PCA-based approach to the 3D reconstruction of human body from single frontal-view silhouette. Master thesis, University of Ottawa.

Xi, P., Lee, W., & Shu, C. (2007). A data-driven approach to human-body cloning using a segmented body database. Pacific Graphics, 139–147.

Yeo, Y. (2005). A data-driven method for the 3D avatars creation from photos. Master thesis, Department of Computer Science, Korea Advanced Institute of Science and Technology.