
Saliency-guided Luminance Enhancement for 3D Shape Depiction

Wen Hao, Yinghui Wang
Department of Computer Science and Engineering
Xi'an University of Technology, Xi'an, China
e-mail: [email protected]

Abstract: We present a novel saliency-guided shading scheme for 3D shape depiction that incorporates mesh saliency into luminance enhancement. Building on a distance-based mesh saliency computation, we propose a new perceptual saliency measure that captures salient surface regions. Guided by these visually salient regions, we emphasize both the details and the overall shape of models by locally enhancing the high-frequency component of the vertex luminance. The enhancement strength is not controlled by the user but is determined by the surface shape. Experimental results demonstrate that our method produces satisfying results with Phong shading, Gooch shading and cartoon shading. Compared to previous techniques, our approach can effectively improve shape depiction without impairing the desired appearance.

Keywords: mesh saliency; non-photorealistic rendering; shape enhancement; shape index

I. INTRODUCTION

Shape is arguably the most important property of 3D objects and has received extensive attention in the Computer Graphics community. Line drawing is one of the most prominent families of techniques for depicting the shape of objects. Many line-based approaches have been proposed since the work of Saito and Takahashi [1], including suggestive contours [2], ridges and valleys [3], apparent ridges [4], demarcating curves [5] and Laplacian lines [6]. Such techniques find curves that have special properties in terms of the differential geometry of the surface. However, line-based algorithms often ignore material and illumination information and depict mainly sharp surface features, discarding less important details.

Shape-through-shading is another highly popular approach to depicting the shape of objects. Artists routinely convey shape through subtle tweaking of shading behavior. Owing to their efficiency in conveying perceptual information about the underlying shape, shading-based techniques have become widely used for depicting 3D object shape. Many Non-Photorealistic Rendering (NPR) shading techniques have been proposed that aim at enhancing shape depiction explicitly [7-16]. However, most of these methods do not take the effect of visual saliency into consideration during shape depiction. In this paper, we present a novel saliency-guided shading scheme that improves shape depiction by incorporating mesh saliency into luminance enhancement. Our goal is to improve the shape information of salient regions by dynamically perturbing the luminance of each vertex while keeping the desired appearance unimpaired. Guided by the visual saliency measure, we adaptively enhance the vertex luminance of the salient regions. The focus of our algorithm is precisely on finding alternatives that preserve material and illumination information while still efficiently depicting shape. Fig. 1 shows an overview of our algorithm. Fig. 1(a) shows the Phong shading of the Golf ball. Fig. 1(c) shows the mesh saliency of the Golf ball using the color code of Fig. 1(b), where κ(v) denotes the vertex curvature. Fig. 1(d) shows the shading smoothed over the surface of the mesh. Fig. 1(e) shows the contrast signal, i.e., the difference between the original signal (Fig. 1(a)) and the smoothed signal (Fig. 1(d)). Fig. 1(f) shows the final enhanced result. The whole process of our algorithm is as follows:

(1) Mesh saliency computation. We introduce a mesh saliency computation method based on a distance-weighted center-surround evaluation of surface curvature, which gives very promising results on several 3D models (Fig. 1 A).

(2) Contrast signal computation. The difference between the original and smoothed signals is denoted as the contrast signal (Fig. 1 B).

(3) Shape enhancement. According to the Phong lighting model, the vertex luminance in the high-saliency regions is enhanced by adding back a λ-scaled contrast signal. The scaling factor λ is not a user-defined parameter but is computed according to the local surface shape (Fig. 1 C).

    Figure 1. Overview of our algorithm.

2013 International Conference on Virtual Reality and Visualization
978-0-7695-5150-0/13 $26.00 © 2013 IEEE
DOI 10.1109/ICVRV.2013.10

The remainder of the paper is organized as follows. After a brief review of shading-based shape depiction techniques in Section 2, a new mesh saliency computation algorithm is described in Section 3. Section 4 presents the shape enhancement procedure. Experimental results are shown in Section 5. The limitations of our method and future research directions are discussed in the last section.

II. RELATED WORK

Appropriate shading supplies both surface details and overall shape information that help viewers qualitatively understand 3D shape. The most widely used of these techniques is Ambient Occlusion [7], which is related to the Accessibility shading technique [8]. Such methods tend to darken surface regions such as concavities, whereas shallow (yet salient) surface details are often missed or even smoothed out. Kindlmann [9] used curvature-based transfer functions to determine the color of feature regions. Barla [10] extended classic 1D texture mapping by adding a vertical detail axis to build a 2D texture mapping; it can highlight near-silhouette regions and simulate various material highlights such as metals. Cignoni [11] enhanced geometric features during rendering by scaling up the high-frequency component of the surface normal; the enhancement strength is controlled by the user, and the method works well for regular CAD models but is not suitable for highly detailed 3D models. Rusinkiewicz [12] introduced a new shading model that exposes shape features and surface details by positioning a local light per vertex to achieve maximum contrast; this method uses a very specific cosine shading model that cannot accommodate most existing materials or illumination. Vergne [13] presented a light warping approach that locally deforms lighting patterns to reveal important surface features. Vergne [14,15] enhanced surface features by adjusting the reflected light intensity per incoming light direction in a way that depends on both surface curvature and material characteristics. Ritschel [16] proposed a 3D Unsharp Masking technique that increases the contrast of the reflected radiance in 3D scenes. This technique is not selective: performed indiscriminately at every surface point, it tends to make flat surfaces appear rounded.

None of the methods mentioned above take visual saliency, which guides visual attention in low-level human vision, into account during shape depiction. Inspired by the image saliency map proposed by Itti [17], Lee [18] introduced the concept of mesh saliency as a measure of regional importance for 3D models. Mesh saliency captures visually interesting regions on a model and is an important measure of visual saliency that has been used in shape enhancement [19-21]. Kim [19] presented a saliency-based enhancement technique that draws visual attention to user-specified regions by changing luminance and saturation in a direct volume rendering application. Kim [20] introduced a geometry modification method to persuasively direct visual attention. Miao [21] proposed a normal perturbation technique that enhances the visually salient features of 3D shapes by incorporating the visual saliency measure of a polygonal mesh into a normal enhancement operation.

In this paper, we establish a relationship between shape enhancement and visual attention. The final luminance of each vertex in high-saliency regions is enhanced in order to depict the object shape explicitly. By pushing the influence of attention into the graphics pipeline, the perception of the details and of the overall shape characteristics of 3D models is clearly enhanced. Experimental results demonstrate that our method is effective and efficient.

III. DISTANCE-BASED MESH SALIENCY

Mesh saliency is defined as a measure of regional importance for 3D models that captures the most interesting regions on a model. It is changes in curvature that lead to saliency or non-saliency [18]. The first step of our saliency-guided shading involves computing surface curvature; we adapt the algorithm of Rusinkiewicz [22] to compute the curvature of each vertex. As a measure of surface bending, the curvature is largely influenced by the surrounding points. In this paper, we propose a distance-based mesh saliency computation, expressed as a center-surround operator on distance-weighted averages of the mean curvature.

Suppose κ(v) is the mean curvature of vertex v, and M(v) is the set of direct neighbors connected to vertex v, which we call the 1-ring of v. Let ‖x − v‖ be the Euclidean distance between vertices x and v, and let G(κ(v), M(v)) denote the 1-ring distance-weighted average of the mean curvature. We compute it as:

$$G(\kappa(v), M(v)) = \frac{\sum_{x \in M(v)} \kappa(x)\,\|x - v\|}{\sum_{x \in M(v)} \|x - v\|} \qquad (1)$$

Furthermore, we use a kd-tree to find the k nearest neighbors of vertex v. Let N(v) denote the k nearest neighbors of vertex v. G'(κ(v), N(v)) is defined as:

$$G'(\kappa(v), N(v)) = \frac{\sum_{x \in N(v)} \kappa(x)\,\|x - v\|}{\sum_{x \in N(v)} \|x - v\|} \qquad (2)$$

We compute the saliency ς(v) of a vertex v as the absolute difference between the distance-weighted averages computed over the two neighborhoods:

$$\varsigma(v) = \left|\, G(\kappa(v), M(v)) - G'(\kappa(v), N(v)) \,\right| \qquad (3)$$

Different numbers of nearest neighbors may lead to different mesh saliency. Fig. 2 shows saliency computed for several models with different values of k, displayed using the color code of Fig. 1(b). The first row shows the Phong shading of the models, the second row shows their mesh saliency for k = 30, and the third row shows the mesh saliency for k = 200. In the color images shown in this paper, reds, yellows and greens indicate high saliency and blues indicate low saliency. Regions with high saliency attract the viewer's attention more easily.

    (a) Phong shading of Horse (d) Phong shading of Golf ball

    (b) k =30 (e) k =30

    (c) k =200 (f) k =200

Figure 2. Mesh saliency for different values of k.
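As an illustration, a minimal Python/NumPy sketch of the distance-based saliency of this section is given below. It assumes the per-vertex mean curvatures κ have already been estimated (e.g., with the method of Rusinkiewicz [22]); the function names, the SciPy kd-tree, and the handling of the self-neighbor are choices made for this sketch rather than details taken from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def one_ring_neighbors(num_verts, faces):
    """Collect the 1-ring M(v) of every vertex from the triangle list."""
    rings = [set() for _ in range(num_verts)]
    for a, b, c in faces:
        rings[a].update((b, c))
        rings[b].update((a, c))
        rings[c].update((a, b))
    return [np.array(sorted(r), dtype=int) for r in rings]

def distance_weighted_average(v, neighbors, verts, kappa):
    """Distance-weighted average of the mean curvature over a neighborhood, Eqs. (1)-(2)."""
    d = np.linalg.norm(verts[neighbors] - verts[v], axis=1)
    return np.sum(kappa[neighbors] * d) / np.sum(d)

def distance_based_saliency(verts, faces, kappa, k=30):
    """Center-surround saliency of Eq. (3): absolute difference between the
    distance-weighted curvature averages over the 1-ring and the k nearest neighbors."""
    rings = one_ring_neighbors(len(verts), faces)
    tree = cKDTree(verts)
    saliency = np.zeros(len(verts))
    for v in range(len(verts)):
        # Query k + 1 points because the vertex itself is returned as its own nearest neighbor.
        _, knn = tree.query(verts[v], k=k + 1)
        knn = knn[knn != v]
        g_ring = distance_weighted_average(v, rings[v], verts, kappa)
        g_knn = distance_weighted_average(v, knn, verts, kappa)
        saliency[v] = abs(g_ring - g_knn)
    return saliency
```

Calling `distance_based_saliency(verts, faces, kappa, k=30)` and again with `k=200` corresponds to the two neighborhood sizes compared in Fig. 2.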

    IV. SHAPE ENHANCEMENT

A. Contrast Signal Computation

Once the salient regions have been computed, we can use them to modulate various visualization parameters. Here, we focus on adjusting the final luminance of each vertex as determined by the Phong lighting model.

According to the Phong lighting model, the final luminance L of each vertex is calculated as:

$$L = k_a I_a + k_d I_d\,(\vec{n} \cdot \vec{l}) + k_s I_s\,(\vec{n} \cdot \vec{h})^{s} \qquad (4)$$

where $I_a$, $I_d$ and $I_s$ are the intensities of the ambient, diffuse and specular light, respectively, $k_a$, $k_d$ and $k_s$ are the corresponding ambient, diffuse and specular reflection coefficients, and $s$ is the shininess exponent. $\vec{l}$ is the light direction and $\vec{v}$ is the view direction. The halfway vector $\vec{h}$ can be easily determined as $\vec{h} = (\vec{l} + \vec{v})/2$.

We only enhance the vertex luminance in the high-saliency regions and leave the low-saliency regions untouched. We use a threshold τ to determine which regions should be enhanced: if the mesh saliency ς(v) is larger than τ, W(ς(v)) is set to 1; otherwise W(ς(v)) is set to 0.

$$W(\varsigma(v)) = \begin{cases} 1, & \varsigma(v) > \tau \\ 0, & \varsigma(v) \le \tau \end{cases}$$
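Putting Section IV.A together, the sketch below is a minimal illustration, assuming uniform 1-ring averaging for the smoothed signal of Fig. 1(d) and combining the signals as L + λ·W·(L − L_smooth), per the description of step (3) in the introduction; the per-vertex coefficient λ comes from Eq. (9) and is computed in the sketch that follows Fig. 3.

```python
import numpy as np

def smooth_luminance(L, faces, iterations=5):
    """Smooth the per-vertex luminance by uniform 1-ring averaging.
    The choice of smoothing operator is an assumption made for this sketch."""
    rings = [set() for _ in range(len(L))]
    for a, b, c in faces:
        rings[a].update((b, c))
        rings[b].update((a, c))
        rings[c].update((a, b))
    L_s = np.asarray(L, dtype=float).copy()
    for _ in range(iterations):
        L_s = np.array([L_s[list(r)].mean() if r else L_s[i]
                        for i, r in enumerate(rings)])
    return L_s

def enhance_luminance(L, faces, saliency, lam, tau=1.0):
    """Add back the lambda-scaled contrast signal only where saliency exceeds tau."""
    contrast = L - smooth_luminance(L, faces)   # contrast signal (Fig. 1(e))
    W = (saliency > tau).astype(float)          # binary mask of the high-saliency regions
    return L + lam * W * contrast               # enhanced per-vertex luminance
```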

... saddle-like shapes. Given the principal curvatures $k_1, k_2$ of each vertex, the shape index [23] is defined as:

$$s = \frac{2}{\pi}\arctan\!\left(\frac{k_1 + k_2}{k_1 - k_2}\right), \quad k_1 \ge k_2, \quad s \in [-1, 1] \qquad (8)$$

We calculate the enhancement coefficient λ as:

$$\lambda = \frac{1 + e^{-\alpha s}}{1 + e^{\alpha s}}, \quad 0 < \alpha < 1 \qquad (9)$$

Equation (9) is tailored to enhance surface details and overall shape information, which we translate into three properties: (1) as shown in Fig. 3, the function is monotonically decreasing; (2) when s = 0, λ = 1; (3) the luminance enhancement coefficient of concave surfaces is greater than that of convex surfaces. In addition, α is a user-defined parameter, and the enhancement coefficient tends to decrease as α increases. In our implementation, we set α = 0.1.

Figure 3. The correspondence between the shape index s and the enhancement coefficient λ.
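The mapping from principal curvatures to the enhancement coefficient, Eqs. (8)-(9), can be sketched as follows in Python/NumPy. The sign convention that convex regions carry positive principal curvatures, as well as the small epsilon used at umbilical points, are assumptions of this sketch.

```python
import numpy as np

def shape_index(k1, k2):
    """Shape index of Eq. (8). Under the convention that convex regions have positive
    principal curvatures, s approaches +1 on dome-like regions, -1 on cup-like regions,
    and equals 0 on symmetric saddles."""
    k_max = np.maximum(k1, k2)
    k_min = np.minimum(k1, k2)
    denom = np.where(k_max == k_min, 1e-12, k_max - k_min)  # avoid division by zero at umbilics
    return (2.0 / np.pi) * np.arctan((k_max + k_min) / denom)

def enhancement_coefficient(k1, k2, alpha=0.1):
    """Enhancement coefficient lambda of Eq. (9): equal to 1 at s = 0 and decreasing
    monotonically with the shape index, so concave regions receive a stronger boost
    than convex ones."""
    s = shape_index(k1, k2)
    return (1.0 + np.exp(-alpha * s)) / (1.0 + np.exp(alpha * s))
```

The resulting per-vertex array can be passed as `lam` to the enhancement sketch above, so that concave creases are scaled more strongly than convex bumps.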

V. EXPERIMENTAL RESULTS AND DISCUSSION

In this paper, the threshold τ is set to 1, and regions with ς(v) > τ are treated as the high-saliency regions. Using similar viewpoints and models, Fig. 4 shows the final results after the saliency-guided luminance enhancement operation. The Phong shading of the Golf ball is shown in Fig. 2(d). Fig. 4(a) and Fig. 4(b) show the final rendering results for different values of k; different k values generally lead to different enhanced results. Fig. 4(a) shows the enhanced rendering result guided by the mesh saliency in Fig. 2(e), and Fig. 4(b) shows the result guided by the mesh saliency in Fig. 2(f). It can be seen from Fig. 4(a) and (b) that the edges of the Golf ball are enhanced effectively.

    (a) k=30 (b) k =200

Figure 4. Enhanced results for the Golf ball with different values of k.

Fig. 5 shows the enhanced results for different values of α. The Phong shading of the Golf ball is shown in Fig. 2(d). Fig. 5(a) shows the enhanced result when α = 1 and Fig. 5(b) shows the enhanced result when α = 0.1; both are computed from the mesh saliency shown in Fig. 2(e). The experimental results demonstrate that the enhancement strength decreases as α increases.

(a) α = 1 (b) α = 0.1

Figure 5. Enhanced results with different values of α.

Fig. 6 shows the enhanced rendering results for the Buddha model. Fig. 6(a) shows the Phong shading of the Buddha model, and Fig. 6(b) shows its mesh saliency computed with k = 30. The creases of the Buddha's clothes are the visually salient regions (red, yellow and green). Fig. 6(c) shows the rendering result guided by the distance-based saliency measure: the luminance is enhanced to reveal surface features such as the details in the creases of the clothes.

    (a) Phong shading (b) Mesh saliency (c) Enhanced result

Figure 6. The enhanced result for the Buddha model.

Fig. 7 shows the enhanced rendering results for the Horse model. Fig. 7(a) shows the Phong shading of the Horse model, and Fig. 7(b) shows the enhanced result based on the saliency measure shown in Fig. 2(b). The head and legs of the horse are enhanced, whereas the body of the horse is left unprocessed because those regions have low saliency. Saliency-guided shading properly enhances the shape depiction of the 3D model while avoiding impairing its desired appearance.

Fig. 8 shows the enhanced rendering results after the saliency-guided luminance enhancement operation for the Hand model. Fig. 8(a) shows the Phong shading of the Hand, and Fig. 8(b) shows the enhanced result after the saliency-guided luminance enhancement operation. As shown in Fig. 8(b), the palm and wrist of the Hand model are also clearly improved and attract the viewer's attention.

    (a) Phong shading

    (b) Enhanced result

    Figure 7. The enhanced result for Horse model.

    (a) Phong shading (b) Enhanced result

    Figure 8. The enhanced result for Hand model.

As shown in Fig. 9, surface features are also convincingly enhanced with Gooch shading and cartoon shading. Fig. 9(a) shows the Gooch shading of Lucy and Fig. 9(b) shows the enhanced result obtained by our saliency-based luminance enhancement technique; surface features of Lucy are convincingly enhanced under Gooch shading (e.g., observe the head, the torch, or the regions of the body). Fig. 9(c) shows the cartoon shading of Gargo and Fig. 9(d) shows the enhanced result obtained with our saliency-based luminance enhancement technique; the head and wings of Gargo are effectively depicted. The experimental results demonstrate that the enhancement preserves the cartoon appearance of the results.

Fig. 10 demonstrates that the enhanced result of our method is comparable to that of Miao's method. Fig. 10(a) shows the enhanced rendering result obtained with Miao's method [21], in which the interior of each cell of the Golf ball is enhanced, guided by Lee's mesh saliency. The images of previous techniques are extracted from their corresponding original papers. Fig. 10(b) shows the enhanced result guided by our saliency measure (Fig. 2(e)), in which the rim of each cell of the Golf ball is enhanced, explicitly bringing out the feature details of the model. Compared to the saliency-guided normal enhancement technique, our method can effectively bring out the geometric details of the 3D model.

    (a) Gooch shading (b) Enhanced result

    (c) Cartoon shading (d) Enhanced result

    Figure 9. Saliency-guided luminance enhancement in simple lighting scenarios.

(a) Miao's method [21] (b) Our method

Figure 10. Comparison of our method and Miao's method on the Golf ball.

Table II shows the running time of the saliency-guided shape enhancement on a PC with an Intel(R) Core(TM) 2 CPU at 2.80 GHz and 2 GB of memory. The shape enhancement time is measured with k = 30.

TABLE II. RUN TIMES FOR SALIENCY-GUIDED SHAPE ENHANCEMENT.

Model         Horse   Golf ball   Buddha   Hand    Lucy     Gargo
Face number   4219    122882      189372   50085   131455   107882
Time (ms)     78      1969        3500     1110    3375     2641

Our algorithm achieves real-time performance in different scenarios. The experimental results demonstrate that our saliency-based luminance enhancement technique can effectively improve the depiction of visually salient shape features.

VI. CONCLUSION

We have proposed a saliency-based enhancement for 3D model visualization that successfully brings out the geometric details of 3D shapes. Our approach takes visual saliency into account to depict surface shape through shading. The key idea is to adjust the final luminance in the salient regions in a way that depends on both the surface geometry and the current lighting. Our algorithm efficiently improves shape depiction for highly complex 3D models. Compared to previous work, the main advantage of our method is that we enhance the human perception of 3D models only by increasing the luminance in visually salient regions, which avoids impairing the desired illumination of the model. Instead of relying on a user-defined parameter, the enhancement strength is determined by the local surface shape. Furthermore, the saliency-guided enhancement framework can be used with Phong shading, Gooch shading and cartoon shading, which efficiently aids viewers in tasks such as understanding complex or detailed geometric models.

However, a limitation of our method is that we only focus on enhancing the high-frequency component of the vertex luminance and have little control over which properties of the objects are enhanced. It will be interesting to see how other properties can be considered for shape depiction in the future. Moreover, we plan to introduce more differential-geometry information to improve shape depiction, which could yield better visual effects.

ACKNOWLEDGMENT

This work is supported in part by the National Natural Science Foundation of China under Grants No. 61072151 and No. 61272284; in part by the Shaanxi Educational Science Research Plan under Grant No. 2010JK734; in part by the Shaanxi Science Research Plan under Grant No. 2011K06-35 and the Xi'an Science Research Plan under Grant No. CX1252(3); and in part by the Doctoral Fund of the Ministry of Education of China under Grant No. 20126118120022.

REFERENCES

[1] T. Saito and T. Takahashi, Comprehensible rendering of 3D shapes, in Proc. of ACM SIGGRAPH, 1990, pp. 197-206.

[2] D. Decarlo, A. Finkelstein, S. Rusinkiewicz and A. Santella, Suggestive contours for conveying shape, ACM Transactions on Graphics, vol.22, no.3, 2003, pp.848-855.

    [3] Y. Ohtake, A. Belyaev and H.P. Seidel, Ridge-valley lines on meshes via implicit surface fitting, ACM Transactions on Graphics, vol.23, no.3, 2004, pp.609-612.

    [4] T. Judd, F. Durand and E.H. Adelson, Apparent ridges for line drawing, ACM Transactions on Graphics, vol.26, no.3, 2007, pp.1-7.

[5] M. Kolomenkin, I. Shimshoni and A. Tal, Demarcating curves for shape illustration, ACM Transactions on Graphics, vol.27, no.5, 2008, pp.157:1-9.

    [6] L. Zhang, Y. He, J.Z. Xia, X. Xie and W. Chen, Real-Time Shape Illustration Using Laplacian Lines, IEEE Transactions on Visualization and computer graphics, vol.17, no.7, 2011, pp.993-1006.

    [7] M. Pharr and S. Green, GPU Gems, Addison-Wesley, pp. 279-292, 2004.

    [8] G. Miller, Efficient Algorithms for Local and Global Accessibility Shading, in Proc. of ACM SIGGRAPH, 1994, pp. 319-326.

[9] G. Kindlmann, R. Whitaker, T. Tasdizen and T. Moller, Curvature-based transfer functions for direct volume rendering: methods and applications, in Proc. of IEEE Visualization, 2003, pp.513-520.

[10] P. Barla, J. Thollot and L. Markosian, X-Toon: An extended toon shader, in Proc. of International Symposium on Non-Photorealistic Animation and Rendering, 2006, pp.127-132.

    [11] P. Cignoni, R. Scopigno and M. Tarini, A simple normal enhancement technique for interactive non-photorealistic renderings, Computers & Graphics, 2005, pp.125-133.

    [12] S. Rusinkiewicz, M. Burns and D. Decarlo, Exaggerated shading for depicting shape and detail, ACM Transactions on Graphics, vol. 25, no.3, 2006, pp. 1199-1205.

[13] R. Vergne, R. Pacanowski, P. Barla, X. Granier and C. Schlick, Light Warping for Enhanced Surface Depiction, ACM Transactions on Graphics, vol.28, no.3, 2009, pp.25:1-8.

    [14] R. Vergne, R. Pacanowski, P. Barla, X. Granier and C. Schlick, Radiance Scaling for versatile surface enhancement, in Proc. of Interactive 3D Graphics and Games, 2010, pp. 143-150.

    [15] R. Vergne, R. Pacanowski, P. Barla, X. Granier and C. Schlick, Improving Shape Depiction under Arbitrary Rendering, IEEE Transactions on visualization and computer graphics, vol.17, no.8, 2011, pp.1071-1081.

    [16] T. Ritschel, K. Smith, M. Ihrke, T. Grosch, K. Myszkowski and H.P. Seidel, 3D Unsharp Masking for scene coherent enhancement, ACM Transactions on Graphics, vol.27, no.3, 2008, pp.90:1-8.

    [17] L. Itti, C. Koch and E. Niebur, A model of saliency-based visual attention for rapid scene analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no.11, 1998, pp.1254-1259.

    [18] C.H. Lee, A. Varshney and D.W. Jacobs, Mesh Saliency, ACM Transactions on Graphics, 2005, pp.659-666.

    [19] Y. Kim and A. Varshney, Saliency-guided enhancement for volume visualization, IEEE Transactions on Visualization and Computer Graphics, vol.12, no.5, 2006, pp.925-932.

    [20] Y. Kim and A. Varshney, Persuading visual attention through geometry, IEEE Transactions on Visualization and Computer Graphics, vol.14, no.4, 2008, pp.772-782.

[21] Y.W. Miao, J.Q. Feng and R. Pajarola, Visual saliency guided normal enhancement technique for 3D shape depiction, Computers & Graphics, vol.35, no.3, 2011, pp.706-712.

    [22] S. Rusinkiewicz, Estimating curvatures and their derivatives on triangle meshes, in Proc. of IEEE Symposium on 3D Data Processing, Visualization, and Transmission, 2004, pp. 486-493.

[23] J.J. Koenderink and A.J. van Doorn, Surface shape and curvature scales, Image and Vision Computing, 1992, pp.557-564.
