
Page 1: Saliency-guided Enhancement for Volume Visualization

Saliency-guided Enhancement for Volume Visualization

Youngmin Kim and Amitabh Varshney

Department of Computer Science

University of Maryland at College Park

Page 2: Saliency-guided Enhancement for Volume Visualization

Motivation

Volume datasets have grown in complexity
• Visible Human Project
  • 13 GB ~ 60 GB
  • National Library of Medicine (NIH)
• Richtmyer-Meshkov Instability Simulation
  • 2 TB (= 7.5 GB × 273 time steps)
  • Lawrence Livermore National Laboratory

Human visual capabilities remain fixed
• Hence the need to draw visual attention to appropriate regions in a visualization

Page 3: Saliency-guided Enhancement for Volume Visualization

Motivation

We can draw viewer attention in several ways
• Obtrusive methods, like arrows or flashing pixels
  • Distract the viewer from exploring other regions
• Principles of visual perception, as used by artists and illustrators
  • Gently guide viewers to the regions they wish to emphasize

Page 4: Saliency-guided Enhancement for Volume Visualization

Contributions

A new saliency-based enhancement operator
• Guides visual attention in volume visualization without sacrificing local context
• Considers the influence of each voxel at multiple scales

Augments the existing visualization pipeline
• Enhances regional visual saliency

Validation by an eye-tracking-based user study
• Our method elicits greater visual attention

Page 5: Saliency-guided Enhancement for Volume Visualization

Related Work – Saliency

Computation and evaluation
• Computational models for images [Itti et al. PAMI 98] and meshes [Lee et al. SIGGRAPH 05]
• Evaluation by predicting eye movements [Parkhurst et al. 02], [Privitera and Stark PAMI 00]

Use of eye movements
• Volume composition [Lu et al. EuroVis 06]
• Abstractions of photographs [DeCarlo and Santella SIGGRAPH 02, NPAR 04]

Use of saliency
• Progressive visualization [Machiraju et al. 01]
• Importance-based enhancement [Rheingans and Ebert TVCG 01]
• Interior and exterior visualization [Viola et al. TVCG 05]
• Generalizing focus+context [Hauser Dagstuhl 03]

Saliency has not been used for guiding visual attention

[Figure: Mesh Saliency]

Page 6: Saliency-guided Enhancement for Volume Visualization

Related Work – Transfer Functions

Transfer functions map local geometric attributes to physical appearance, using attributes such as:
• Gradient magnitude [Levoy CG&A 88]
• First and second derivatives [Kindlmann and Durkin Volume Rendering 98]
• Multi-dimensional transfer functions [Kindlmann et al. Vis 03], [Kniss et al. TVCG 02], [Kniss et al. Vis 03], [Machiraju et al. 01]

Transfer functions have played a crucial role in informative visualization
It is difficult to emphasize (or deemphasize) regions specified exclusively by their locations in a volume

Page 7: Saliency-guided Enhancement for Volume Visualization

Overview

[Pipeline diagram]
• Saliency field specified by user input
• Emphasis field computed by the enhancement operators
• Saliency enhancement applied via the transfer functions
• Saliency-enhanced volume rendering
• Validation by an eye-tracking-based user study

Page 8: Saliency-guided Enhancement for Volume Visualization

Basic Idea from Saliency Computation

Saliency maps:
• Mesh saliency is based on curvature values
• Image saliency is based on intensity and color
• In general, saliency may be defined on any given scalar field

S(v) = |G(C, v, σ) − G(C, v, 2σ)|

where C is the mean curvature and G(C, v, σ) denotes the Gaussian-weighted average of C around v at scale σ
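A minimal sketch of this difference-of-Gaussians saliency computation on a 3D scalar field, using NumPy/SciPy; the function name and the random test volume are illustrative, not the authors' code:

```python
# Minimal sketch of difference-of-Gaussians saliency on a 3D scalar field.
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_field(C, sigma):
    """S(v) = |G(C, v, sigma) - G(C, v, 2*sigma)| for a scalar field C."""
    fine = gaussian_filter(C, sigma)        # G(C, v, sigma)
    coarse = gaussian_filter(C, 2 * sigma)  # G(C, v, 2*sigma)
    return np.abs(fine - coarse)

# Example: saliency of a random 64^3 volume at a scale of 2 voxels.
S = saliency_field(np.random.rand(64, 64, 64), sigma=2.0)
```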

Page 9: Saliency-guided Enhancement for Volume Visualization

Emphasis Field Computation

Mesh saliency: S(v) = G(C, v, σ) − G(C, v, 2σ)

We introduce the concept of an emphasis field E to define a saliency field S in a volume:

S(v) = G(E, v, σ) − G(E, v, 2σ)

Here S is known (specified by the user) and E is unknown.

Given a saliency field, can we design a scalar field that will generate it?

Page 10: Saliency-guided Enhancement for Volume Visualization

Emphasis Field Computation

Expressible as a system of simultaneous linear equations:

C E = S

where c_ij is the difference between the two Gaussian weights, at scale σ and at scale 2σ, for a voxel v_j relative to the center voxel v_i

Saliency enhancement operator (C⁻¹)
• C E = S implies E = C⁻¹ S
• Given a saliency field S, the enhancement operator C⁻¹ generates the emphasis field E
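A sketch of the single-scale operator in 1D: build the matrix C of difference-of-Gaussian weights, then recover the emphasis field E from a desired saliency field S. This is illustrative only; the slides do not show the paper's actual solver, and dog_matrix is a hypothetical helper:

```python
# Build C from difference-of-Gaussian weights and solve C E = S in 1D.
import numpy as np

def dog_matrix(positions, sigma):
    """c_ij = normalized Gaussian weight at sigma minus the one at 2*sigma."""
    d2 = (positions[:, None] - positions[None, :]) ** 2
    g1 = np.exp(-d2 / (2 * sigma ** 2))
    g2 = np.exp(-d2 / (2 * (2 * sigma) ** 2))
    g1 /= g1.sum(axis=1, keepdims=True)  # rows average like G(E, v, sigma)
    g2 /= g2.sum(axis=1, keepdims=True)  # rows average like G(E, v, 2*sigma)
    return g1 - g2

positions = np.linspace(0.0, 1.0, 100)         # voxel centers on one axis
S = np.exp(-((positions - 0.5) ** 2) / 0.005)  # desired saliency bump
C = dog_matrix(positions, sigma=0.05)
# C is rank-deficient (its rows sum to zero), so solve in the least-squares
# sense rather than inverting it directly.
E = np.linalg.lstsq(C, S, rcond=None)[0]       # emphasis field, E ~ C^-1 S
```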

Page 11: Saliency-guided Enhancement for Volume Visualization

Emphasis Field Computation

We would like to apply enhancement operators at multiple scales σᵢ (combined as in the sketch below)
• Let Eᵢ be the emphasis field at scale σᵢ
• Compute it by applying the enhancement operator Cᵢ⁻¹ to the saliency field S
• The final emphasis field is the summation of the Eᵢ
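Continuing the earlier sketch (and reusing the hypothetical dog_matrix helper and the positions and S arrays defined there), the multi-scale combination is simply a sum of per-scale solutions:

```python
# Multi-scale emphasis: solve the system at each scale sigma_i, then sum.
# Reuses dog_matrix, positions, and S from the earlier sketch.
import numpy as np

scales = [0.02, 0.05, 0.1]  # illustrative choice of sigma_i
E_total = sum(
    np.linalg.lstsq(dog_matrix(positions, s), S, rcond=None)[0]
    for s in scales
)
```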

Page 12: Saliency-guided Enhancement for Volume Visualization

Emphasis Field in Practice

A system of simultaneous linear equations in n variables
• Generally, can handle arbitrary saliency regions and values
• Computationally expensive: O(kn²) or O(n³)

Alleviate this by solving a 1D system of equations
• Given a saliency field, solve 1D systems of equations at multiple scales and sum them up
• Approximate the results using piecewise polynomial radial functions [Wendland 1995] (see the sketch below)

Interpret the results along the radial dimension
• Assume spherical regions of interest (ROI)
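For concreteness, one commonly used member of Wendland's family of compactly supported piecewise polynomial radial functions; the slide does not say which member was used, so this C² kernel is only a representative choice:

```python
# One member of Wendland's [1995] family of compactly supported, piecewise
# polynomial radial functions (the C^2 kernel, valid for up to 3 dimensions).
# A representative choice; the slides do not specify which member was used.
import numpy as np

def wendland_c2(r, support=1.0):
    """phi(r) = (1 - r)^4 * (4r + 1) for r in [0, 1], zero outside."""
    r = np.asarray(r, dtype=float) / support
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)
```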

Page 13: Saliency-guided Enhancement for Volume Visualization

Visualization Enhancement

Emphasis fields can alter visualization parameters in several ways
• Various rendering stylizations and effects are possible

We outline a couple of possibilities (brightness is sketched in code after this list)

• Brightness
  • Widely used by artists to elicit visual attention
  • Modulate the Value parameter of the HSV model as follows:
    – V_new(v) = V(v) · (1 + E(v)), where −λ⁻ ≤ E(v) ≤ λ⁺
    – We used 0.4 ≤ λ⁺ ≤ 0.6 and 0.15 ≤ λ⁻ ≤ 0.35

• Saturation
  • Can modulate Saturation instead of Value where the latter is not effective (for instance, in regions that are already very bright)
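A minimal sketch of the brightness modulation above, assuming per-voxel RGB colors and Python's standard colorsys module; modulate_brightness and its default lambda values are illustrative, not the authors' code. Saturation would be modulated analogously:

```python
# Sketch of brightness modulation in HSV: V_new = V * (1 + E), with the
# emphasis E clamped to [-lambda_minus, lambda_plus]. In practice the
# modulation would happen inside the volume renderer's transfer function.
import colorsys
import numpy as np

def modulate_brightness(rgb, emphasis, lam_plus=0.5, lam_minus=0.25):
    """Scale the HSV Value of an RGB color by (1 + E), with E clamped."""
    e = np.clip(emphasis, -lam_minus, lam_plus)
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    v_new = min(v * (1.0 + e), 1.0)  # keep Value in [0, 1]
    return colorsys.hsv_to_rgb(h, s, v_new)

# Example: brighten a mid-gray voxel color with emphasis 0.4.
print(modulate_brightness((0.5, 0.5, 0.5), 0.4))
```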

Page 14: Saliency-guided Enhancement for Volume Visualization

Gaussian-based vs. Saliency-guided Enhancement

Previous Gaussian-based enhancement of a volume
• Volume illustration [Rheingans and Ebert TVCG 01]
• Importance-based regional enhancement

As the baseline for comparison, we use a Gaussian fall-off from the boundary of the ROI (sketched below)
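A sketch of what such a Gaussian fall-off could look like for a spherical ROI; the function and its parameters are assumptions for illustration, not the baseline's exact formulation:

```python
# Gaussian-based baseline: full emphasis inside the ROI, and emphasis that
# falls off as a Gaussian of the distance past the ROI boundary.
import numpy as np

def gaussian_falloff(dist_to_center, roi_radius, sigma):
    """Emphasis weight for a spherical ROI with a Gaussian boundary fall-off."""
    d = np.maximum(dist_to_center - roi_radius, 0.0)  # distance past boundary
    return np.exp(-d ** 2 / (2 * sigma ** 2))
```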

Page 15: Saliency-guided Enhancement for Volume Visualization

Visualization Enhancement – Brightness

[Figures, two examples: traditional volume rendering vs. Gaussian-based enhancement vs. saliency-guided enhancement]

Page 16: Saliency-guided Enhancement for Volume Visualization

Visualization Enhancement – Saturation

[Figures: traditional volume rendering vs. saliency-guided enhancement of the Sheep Heart model]

Increasing brightness would diminish the appearance of the blood vessels at the center of the Sheep Heart model, so saturation is modulated instead

Page 17: Saliency-guided Enhancement for Volume Visualization

User Study

We validated the results with an eye-tracking-based user study

Hypotheses: Saliency-guided enhancement increases eye fixations over the region of interest (ROI) in a volume compared to
• traditional volume visualization (Hypothesis H1)
• Gaussian-based enhancement (Hypothesis H2)

Page 18: Saliency-guided Enhancement for Volume Visualization

User Study – Experimental Design

Eye-tracker and general settings
• ISCAN ETL-500
  • Records eye movements at 60 Hz
• 17-inch LCD monitor
  • Resolution of 1280x1024
  • Placed at a distance of 50 cm (19.7") from the subjects

Eye-tracker calibration
• Desired accuracy of 30 pixels
• Two-step calibration process
  • Standard calibration with 5 points
  • Look and click on 13 points
    – Triangulation and interpolation with 4 corner points
• Accuracy test on 16 random points

Page 19: Saliency-guided Enhancement for Volume Visualization

User Study – Experimental Design

Extracting fixations from raw points (sketched in code below)
• Raw points: all points reported by the eye-tracker
• Saccade removal
  • Discard points with velocity > 15°/sec
• Fixation combining
  • Filter out points that stay less than 100 ms within 15 pixels
  • Average eye locations within 15 pixels and 100 ms
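A rough sketch of this fixation-extraction pipeline. The thresholds come from the slides, but the degrees-to-pixels conversion assumes the stated 50 cm viewing distance and a ~96 dpi screen, and the grouping logic is an illustrative reconstruction rather than the study's code:

```python
# Fixation extraction: remove saccades by a velocity threshold, then merge
# nearby, long-enough samples into fixations (15 deg/sec, 15 px, 100 ms).
import numpy as np

HZ = 60                       # eye-tracker sampling rate
PX_PER_DEG = 33.0             # approx. px per visual degree at 50 cm, ~96 dpi
V_MAX = 15 * PX_PER_DEG / HZ  # max per-sample displacement for a non-saccade

def extract_fixations(points):
    """points: (n, 2) array of gaze samples at 60 Hz -> list of fixations."""
    # Saccade removal: drop samples whose velocity exceeds 15 deg/sec.
    step = np.linalg.norm(np.diff(points, axis=0), axis=1)
    pts = points[np.insert(step <= V_MAX, 0, True)]

    # Fixation combining: group consecutive samples staying within 15 px of
    # the group mean; keep groups lasting >= 100 ms (6 samples at 60 Hz).
    fixations, group = [], [pts[0]]
    for p in pts[1:]:
        if np.linalg.norm(p - np.mean(group, axis=0)) <= 15:
            group.append(p)
        else:
            if len(group) >= 0.1 * HZ:
                fixations.append(np.mean(group, axis=0))
            group = [p]
    if len(group) >= 0.1 * HZ:
        fixations.append(np.mean(group, axis=0))
    return fixations
```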

Page 20: Saliency-guided Enhancement for Volume Visualization

User Study – Experimental Design

Image ordering
• 10 users (who passed the accuracy tests)
• Total of 20 images: 4 models × (1 original + 2 regions × 2 enhancement methods (Gaussian, saliency))
• Each user saw 12 of these 20 images
  • 4 models × (1 original + 2 altered)
  • Different regions enhanced with different methods
• Placed similar images far apart to alleviate differential carryover effects
• Randomized the order of regions and the order of enhancement types (Gaussian and saliency-based) to counterbalance overall effects

Duration
• 12 trials (images), each lasting 5 seconds

Page 21: Saliency-guided Enhancement for Volume Visualization

User Study – Result I

[Figures: the saliency field, plus traditional volume rendering, Gaussian-based enhancement, and saliency-guided enhancement, each shown with and without fixation points]

Page 22: Saliency-guided Enhancement for Volume Visualization

User Study – Result II

[Figures: the saliency field, plus traditional volume rendering, Gaussian-based enhancement, and saliency-guided enhancement, each shown with and without fixation points]

Page 23: Saliency-guided Enhancement for Volume Visualization

Data Analysis I

[Chart: the percentage of fixations on the ROI for the original, Gaussian-enhanced, and saliency-enhanced visualizations]

Page 24: Saliency-guided Enhancement for Volume Visualization

Data Analysis II

A two-way ANOVA on the percentage of fixations with two factors, region and enhancement method, for each volume (sketched below)

For regions, no statistically significant effect, as expected
• F(1,34) = 0.2827 ~ 3.3336, p > 0.05

For enhancement methods, a statistically significant effect
• F(2,34) = 7.2668 ~ 31.479, p ≤ 0.01
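A sketch of how such a two-way ANOVA can be run with statsmodels; the data frame below is synthetic and the column names are illustrative, not the study's data:

```python
# Two-way ANOVA on fixation percentage vs. region and enhancement method.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
df = pd.DataFrame({                              # synthetic placeholder data
    "fixation_pct": rng.uniform(5, 40, size=36),
    "region": np.tile(["r1", "r2"], 18),
    "method": np.repeat(["original", "gaussian", "saliency"], 12),
})
model = ols("fixation_pct ~ C(region) + C(method)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))           # F and p per factor
```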

Page 25: Saliency-guided Enhancement for Volume Visualization

Data Analysis III

Carried out a pairwise t-test on the percentage of fixations before and after applying the enhancement techniques for each model (sketched below)

Found a statistically significant difference in the percentage of fixations with saliency-guided enhancement for all the models

[Chart: per-model t-test results, annotated with H1 and H2]

Hypothesis H1: More fixations than the traditional rendering
Hypothesis H2: More fixations than the Gaussian-based enhancement
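A sketch of a paired t-test in this setting, with synthetic stand-in numbers; the real analysis used the study's per-subject fixation percentages:

```python
# Paired t-test on fixation percentages before vs. after enhancement.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
before = rng.uniform(10, 30, size=10)          # synthetic stand-in data
after = before + rng.uniform(5, 20, size=10)   # synthetic "enhanced" data
t, p = ttest_rel(after, before)
print(f"t = {t:.3f}, p = {p:.4f}")
```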

Page 26: Saliency-guided Enhancement for Volume Visualization

Conclusions

Introduced the concept of the emphasis field for selective visual emphasis (or de-emphasis)

Developed a computational framework to generate the emphasis field from a given saliency field

Illustrated the use of the emphasis field in visualization

Validated its ability to successfully guide visual attention to desired regions

Saliency-guided enhancement provides a powerful tool to help scientists, engineers, and medical researchers explore large visual datasets

Page 27: Saliency-guided Enhancement for Volume Visualization


Future Work

Measure the comprehensibility of volume-rendered images

Explore other appearance attributes such as opacity and texture detail

Generalize to handle time-varying datasets with multiple superposed scalar and vector fields

Identify the relative importance of various scales

Page 28: Saliency-guided Enhancement for Volume Visualization


Acknowledgments

Datasets: Stefan Roettger (University of Erlangen) and Dirk Bartz (University of Tuebingen)

Discussions: David Jacobs, François Guimbretière, Derek Juba, and Robert Patro (University of Maryland)

Eye-tracker: François Guimbretière

The Anonymous Referees

Supported by NSF grants: CCF 05-41120, CCF 04-29753, CNS 04-03313, and IIS 04-14699