
Page 1: Direct Volume Rendering

Direct Volume Rendering

Acknowledgement: Han-Wei Shen's lecture notes are used

Page 2: Direct Volume Rendering

Direct Volume Rendering

• Direct: no conversion to surface geometry

• Three methods
  – Ray-casting
  – Splatting
  – 3D texture-based method

Page 3: Direct Volume Rendering

Data Representation

• 3D volume data are represented by a finite number of cross-sectional slices (hence a 3D raster)
• Each volume element (voxel) stores a data value (if only a single bit is used, it is a binary data set; normally each voxel holds a gray value of 8 to 16 bits)

N x 2D arrays = 3D array

Page 4: Direct Volume Rendering

Data Representation (2)

What is a Voxel? – Two definitions

A voxel is a cubic cell, which has a single value covering the entire cubic region

A voxel is a data point at a corner of the cubic cell. The value of a point inside the cell is determined by interpolation.

Page 5: Direct Volume Rendering

Basic Idea

Based on the idea of ray tracing:
• Trace a ray from each pixel into object space
• Compute a color value along the ray
• Assign the value to the pixel

Page 6: Direct Volume Rendering

Viewing

Ray Casting
• Where to position the volume and the image plane
• What is a ‘ray’
• How to march a ray

Page 7: Direct Volume Rendering

Viewing (1)

1. Position the volume. Assume the volume dimensions are w x w x w.

We position the center of the volume at the world origin

[Figure: world coordinate axes x, y, z with the origin at (0,0,0)]

Volume center = [w/2, w/2, w/2] (local space)

Translate T(-w/2, -w/2, -w/2)

(data-to-world matrix? world-to-data matrix?)

Page 8: Direct Volume Rendering

Viewing (2)

2. Position the image plane. Assuming the distance between the image plane and the volume center is D, the center of the image plane is initially at (0,0,-D).

[Figure: the image plane positioned at (0,0,-D), facing the volume centered at the origin]

Page 9: Direct Volume Rendering

Viewing (3)

3. Rotate the image plane

A new position of the image plane can be defined in terms of three rotation angles with respect to the x, y, z axes.

Assuming the original view vector is [0,0,1], the new view vector g becomes:

g = [0,0,1] x R,   R = Rx(α) Ry(β) Rz(γ)

where Rx, Ry, Rz are the rotation matrices about the x, y and z axes:

Rx(α) = | 1    0       0    |    Ry(β) = |  cos β  0  sin β |    Rz(γ) = | cos γ  -sin γ  0 |
        | 0  cos α  -sin α  |            |    0    1    0   |            | sin γ   cos γ  0 |
        | 0  sin α   cos α  |            | -sin β  0  cos β |            |   0       0    1 |

Page 10: Direct Volume Rendering

Viewing (4)

[Figure: the initial image-plane frame (S0, u0, v0) is rotated about the volume center B into the new frame (S, u, v, E)]

B = [0,0,0], S0 = [0,0,-D], u0 = [1,0,0], v0 = [0,1,0]

Now, with R the rotation matrix:
S = B - D x g
u = [1,0,0] x R
v = [0,1,0] x R

Page 11: Direct Volume Rendering

Viewing (5)

R: the rotation matrix
S = B - D x g
u = [1,0,0] x R
v = [0,1,0] x R

Image Plane: L x L pixels


Then

E = S – L/2 x u – L/2 x v

So each pixel (i, j) has coordinates

P = E + i x u + j x v

We enumerate the pixels by changing i and j (0..L-1)
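
As a rough illustration of the setup above, here is a small C++ sketch (not from the slides) that builds g, u, v, S and E from assumed rotation angles, then evaluates P = E + i x u + j x v for one pixel; the helper names (rotate, add, scale) and all numeric values are made up for the example.

    // Sketch of the image-plane setup; rotation is applied about x, then y, then z.
    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };

    Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    Vec3 scale(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }

    // Rotate a vector by Euler angles ax, ay, az about the x, y and z axes.
    Vec3 rotate(Vec3 p, double ax, double ay, double az) {
        Vec3 q = {p.x, p.y * cos(ax) - p.z * sin(ax), p.y * sin(ax) + p.z * cos(ax)}; // about x
        Vec3 r = {q.x * cos(ay) + q.z * sin(ay), q.y, -q.x * sin(ay) + q.z * cos(ay)}; // about y
        return {r.x * cos(az) - r.y * sin(az), r.x * sin(az) + r.y * cos(az), r.z};    // about z
    }

    int main() {
        const double D = 256.0;               // image-plane to volume-center distance (assumed)
        const int    L = 512;                 // image plane is L x L pixels (assumed)
        double ax = 0.3, ay = 0.5, az = 0.0;  // view rotation angles (assumed)

        Vec3 B = {0, 0, 0};                          // volume center in world space
        Vec3 g = rotate({0, 0, 1}, ax, ay, az);      // view vector
        Vec3 u = rotate({1, 0, 0}, ax, ay, az);      // image-plane axis u
        Vec3 v = rotate({0, 1, 0}, ax, ay, az);      // image-plane axis v
        Vec3 S = add(B, scale(g, -D));               // image-plane center: S = B - D*g
        Vec3 E = add(S, add(scale(u, -L / 2.0), scale(v, -L / 2.0)));  // plane corner

        int i = 100, j = 200;                        // pixel (i, j): P = E + i*u + j*v
        Vec3 P = add(E, add(scale(u, i), scale(v, j)));
        printf("pixel (%d,%d) -> (%.1f, %.1f, %.1f)\n", i, j, P.x, P.y, P.z);
        return 0;
    }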

Page 12: Direct Volume Rendering

Viewing (6)

4. Cast rays. Remember, for each pixel on the image plane
P = E + i x u + j x v
and the view vector is g = [0,0,1] x R, so the ray has the equation:

Q = P + k (d x g), where d is the sampling distance at each step and k = 0, 1, 2, …
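
A minimal sketch of this marching loop in C++, assuming an axis-aligned view vector and a volume of side w centered at the origin (the stopping test is simplified to that case; all values are placeholders).

    // Sketch of the ray-marching loop Q = P + k*(d*g).
    #include <cstdio>

    struct Vec3 { double x, y, z; };

    int main() {
        Vec3 P = {-64, -64, -300};     // pixel position on the image plane (assumed)
        Vec3 g = {0, 0, 1};            // view vector (assumed unit length, +z)
        const double d = 1.0;          // sampling distance per step
        const double w = 128.0;        // volume is w x w x w, centered at the origin

        for (int k = 0; ; ++k) {
            Vec3 Q = {P.x + k * d * g.x, P.y + k * d * g.y, P.z + k * d * g.z};
            if (Q.z > w / 2) break;    // sample has passed beyond the volume
            if (Q.x < -w/2 || Q.x > w/2 || Q.y < -w/2 || Q.y > w/2 || Q.z < -w/2)
                continue;              // sample not yet inside the volume
            // ... reconstruct the volume at Q, classify, shade, composite ...
            printf("sample %d at (%.1f, %.1f, %.1f)\n", k, Q.x, Q.y, Q.z);
        }
        return 0;
    }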

Page 13: Direct Volume Rendering

Old Methods

• Before 1988
• Did not consider transparency
• Did not consider sophisticated light transportation theory
• Were concerned with quick solutions
• Hence more or less applied to binary data

Non-binary data require sophisticated classification/compositing methods!

Page 14: Direct Volume Rendering

Ray Tracing -> Ray Casting

• “another” typical method from traditional graphics

• Typically we only deal with primary rays - hence: ray-casting

• A natural image-order technique
• As opposed to surface graphics: how do we calculate the ray/surface intersection?
• Since we have no surfaces, we need to carefully step through the volume

Page 15: Direct Volume Rendering

Ray Casting

• Stepping through the volume: a ray is cast into the volume, sampling the volume at certain intervals

• The sampling intervals are usually equi-distant, but don’t have to be (e.g. importance sampling)

• At each sampling location, a sample is interpolated / reconstructed from the grid voxels

• popular filters are: nearest neighbor (box), trilinear (tent), Gaussian, cubic spline

• Along the ray - what are we looking for?

Page 16: Direct Volume Rendering

Example: Using the nearest neighbor kernel

In Tuy's paper: Q = P + k x V (V = d x g)

At each step k, Q is rounded off to the nearest voxel (like the DDA algorithm)

Check if the voxel is on the boundary or not (compare against a threshold)

If yes, perform shading
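
A minimal sketch of this first-hit scheme, with an assumed 8-bit volume, threshold and step size; Q is rounded to the nearest voxel at every step.

    // Nearest-neighbour, first-hit traversal (illustrative values only).
    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main() {
        const int N = 64;                       // N x N x N volume (assumed)
        std::vector<unsigned char> vol(N * N * N, 0);
        // ... fill vol with data ...

        const unsigned char threshold = 128;    // boundary threshold (assumed)
        double P[3] = {32.0, 32.0, 0.0};        // ray start (assumed)
        double V[3] = {0.0, 0.0, 1.0};          // V = d * g, here d = 1

        for (int k = 0; k < N; ++k) {
            // Q = P + k * V, rounded to the nearest voxel (DDA-like stepping)
            int x = (int)std::lround(P[0] + k * V[0]);
            int y = (int)std::lround(P[1] + k * V[1]);
            int z = (int)std::lround(P[2] + k * V[2]);
            if (x < 0 || x >= N || y < 0 || y >= N || z < 0 || z >= N) break;
            if (vol[(z * N + y) * N + x] >= threshold) {
                printf("boundary voxel at (%d, %d, %d) -> shade and stop\n", x, y, z);
                break;
            }
        }
        return 0;
    }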

Page 17: Direct Volume Rendering

Basic Idea of Ray-casting Pipeline

- Data are defined at the corners of each cell (voxel)

- The data value inside the voxel is determined using interpolation (e.g. tri-linear; see the sketch below)

- Composite colors and opacities along the ray path

- Can use other ray-traversal schemes as well
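
For the tri-linear case, a possible reconstruction routine might look like the following sketch (flat z-major indexing and in-range cell coordinates are assumed; this is not code from the slides).

    // Tri-linear reconstruction of a sample from the eight surrounding voxel corners.
    #include <cmath>
    #include <vector>

    double trilinear(const std::vector<float>& vol, int N, double x, double y, double z) {
        int i = (int)std::floor(x), j = (int)std::floor(y), k = (int)std::floor(z);
        double fx = x - i, fy = y - j, fz = z - k;   // fractional offsets in the cell
        auto V = [&](int a, int b, int c) { return vol[(c * N + b) * N + a]; };

        // Interpolate along x, then y, then z.
        double c00 = V(i, j,     k    ) * (1 - fx) + V(i + 1, j,     k    ) * fx;
        double c10 = V(i, j + 1, k    ) * (1 - fx) + V(i + 1, j + 1, k    ) * fx;
        double c01 = V(i, j,     k + 1) * (1 - fx) + V(i + 1, j,     k + 1) * fx;
        double c11 = V(i, j + 1, k + 1) * (1 - fx) + V(i + 1, j + 1, k + 1) * fx;
        double c0 = c00 * (1 - fy) + c10 * fy;
        double c1 = c01 * (1 - fy) + c11 * fy;
        return c0 * (1 - fz) + c1 * fz;
    }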


Page 18: Direct Volume Rendering

Ray Traversal Schemes

[Figure: intensity profile along the ray (depth axis), annotated with the four traversal schemes: Max, Average, Accumulate, First]

Page 19: Direct Volume Rendering

Ray Traversal - First


• First: extracts iso-surfaces (again!) - done by Tuy & Tuy '84

Page 20: Direct Volume Rendering

Ray Traversal - Average


• Average: produces basically an X-ray picture

Page 21: Direct Volume Rendering

Ray Traversal - MIP


• Max: Maximum Intensity Projection (MIP) - used for Magnetic Resonance Angiograms

Page 22: Direct Volume Rendering

Ray Traversal - Accumulate


• Accumulate opacity while compositing colors: makes transparent layers visible! (Levoy '88)

Page 23: Direct Volume Rendering

Raycasting

[Figure: volumetric compositing - a ray steps through the object (color, opacity); at each sample an interpolation kernel reconstructs color and opacity, which are accumulated (up to 1.0) along the ray]


Page 25: Direct Volume Rendering

Raycasting

color:   c = c_s α_s (1 - α) + c
opacity: α = α_s (1 - α) + α

[Figure: volumetric compositing - the sample (color, opacity) reconstructed by the interpolation kernel is composited into the ray]


Page 31: Direct Volume Rendering

Volume Rendering Pipeline

Acquired values
  - data preparation ->
Prepared values
  - shading -> Voxel colors - ray-tracing / resampling -> Sample colors
  - classification -> Voxel opacities - ray-tracing / resampling -> Sample opacities
Sample colors + Sample opacities
  - compositing ->
Image pixels

Page 32: Direct Volume Rendering

DCH DVR Pipeline

Page 33: Direct Volume Rendering

Common Components of General Pipeline

• Interpolation/reconstruction

• Classification or transfer function

• Gradient/normal estimation for shading
  – Question: are normals also interpolated?

Page 34: Direct Volume Rendering

Shading and Classification

- Shading: compute a color (lighting) for each data point in the volume
- Classification: compute color and opacity for each data point in the volume
- Done by table lookup (transfer function)
- Levoy preshaded the entire volume

f(x_i) -> C(x_i), α(x_i)

Page 35: Direct Volume Rendering

Shading

• Common shading model – Phong model

• For each sample, evaluate:
  – C = ambient + diffuse + specular
      = constant + Ip Kd (N·L) + Ip Ks (N·H)^n   (see the sketch below)
  – Ip: emission color at the sample
  – N: normal at the sample
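
A sketch of evaluating this model per sample for one color channel; the vector helpers, the clamping with max(0, ·), and the parameter names are assumptions for the example, not code from the slides.

    // Per-sample Phong evaluation: C = ambient + diffuse + specular.
    #include <algorithm>
    #include <cmath>

    struct Vec3 { double x, y, z; };

    double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    Vec3 normalize(Vec3 a) {
        double len = std::sqrt(dot(a, a));
        return {a.x / len, a.y / len, a.z / len};
    }

    // N: normal at the sample, L: direction to the light, H: half-way vector,
    // Ip: light color/intensity for one channel, n: specular exponent.
    double shadeSample(Vec3 N, Vec3 L, Vec3 H, double Ip,
                       double ambient, double kd, double ks, double n) {
        double diff = std::max(0.0, dot(normalize(N), normalize(L)));
        double spec = std::pow(std::max(0.0, dot(normalize(N), normalize(H))), n);
        return ambient + Ip * kd * diff + Ip * ks * spec;
    }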

Page 36: Direct Volume Rendering

Gradient/Normals (Levoy 1988)

• Central difference, per voxel

G_x = ( v_{i+1,j,k} - v_{i-1,j,k} ) / 2
G_y = ( v_{i,j+1,k} - v_{i,j-1,k} ) / 2
G_z = ( v_{i,j,k+1} - v_{i,j,k-1} ) / 2

[Figure: the six neighbors (x±1, y±1, z±1) used by the central-difference stencil]
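
A direct sketch of these central differences for an interior voxel (flat z-major indexing assumed; boundary handling omitted).

    // Central-difference gradient per voxel.
    #include <vector>

    struct Grad { double gx, gy, gz; };

    Grad centralDifference(const std::vector<float>& vol, int N, int i, int j, int k) {
        auto V = [&](int a, int b, int c) { return vol[(c * N + b) * N + a]; };
        Grad g;
        g.gx = (V(i + 1, j, k) - V(i - 1, j, k)) / 2.0;
        g.gy = (V(i, j + 1, k) - V(i, j - 1, k)) / 2.0;
        g.gz = (V(i, j, k + 1) - V(i, j, k - 1)) / 2.0;
        return g;   // normalize this to get the shading normal
    }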

Page 37: Direct Volume Rendering

Shading (Levoy 1988)

• Phong Shading + Depth Cueing

C(x) = C_p k_a + ( C_p / (k_1 + k_2 d(x)) ) [ k_d (N(x)·L) + k_s (N(x)·H)^n ]

• Cp = color of parallel light source

• ka / kd / ks = ambient / diffuse / specular light coefficient

• k1, k2 = fall-off constants

• d(x) = distance to picture plane
• L = normalized vector to light
• H = normalized vector for maximum highlight

• N(xi) = surface normal at voxel xi

Page 38: Direct Volume Rendering

Compositing

[Figure: samples with colors c1, c2, c3 composited along the ray]

You can use the ‘front-to-back’ compositing formula

Front-to-back compositing: use the ‘over’ operator

C = background ‘over’ C1
C = C ‘over’ C2
C = C ‘over’ C3
…

C_out = C_in + C(x) (1 - α_in)
α_out = α_in + α(x) (1 - α_in)
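
A sketch of this front-to-back ‘over’ accumulation; whether the sample color is already opacity-weighted (associated color, as in the formula above) is an assumption called out in the comments.

    // Front-to-back compositing with the 'over' operator.
    struct RGBA { double r, g, b, a; };

    // Accumulate one sample into the running pixel value, front to back.
    void compositeOver(RGBA& acc, const RGBA& sample) {
        double w = 1.0 - acc.a;            // remaining transparency
        acc.r += sample.r * sample.a * w;  // sample colors assumed NOT premultiplied;
        acc.g += sample.g * sample.a * w;  // drop the extra sample.a factor if they are
        acc.b += sample.b * sample.a * w;
        acc.a += sample.a * w;             // alpha_out = alpha_in + alpha(x)(1 - alpha_in)
    }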

Page 39: Direct Volume Rendering

Classification

• Map from numerical values to visual attributes
  – Color
  – Transparency

• Transfer functions
  – Color function: c(s)
  – Opacity function: α(s)

[Figure: a small grid of sample data values (21.05, 27.05; 24.03, 20.05) being mapped through the transfer functions]

Page 40: Direct Volume Rendering

Classification/Transfer Function

• Maps raw voxel value into presentable entities: color, intensity, opacity, etc.

Raw data -> material (R, G, B, α, Ka, Kd, Ks, ...)

• May require probabilistic methods (Drebin). Derive material volume from input. Estimate % of each material in all voxels. Pre-computed. AKA segmentation.

• Often use look-up tables (LUTs) to store the transfer functions that are discovered
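
As an illustration, a transfer-function LUT for 8-bit values might be built like this; the particular color/opacity ramp is an arbitrary choice for the sketch, not one from the slides.

    // Transfer-function look-up table: 8-bit voxel value -> (R, G, B, A).
    #include <array>
    #include <cstdint>

    struct RGBA8 { uint8_t r, g, b, a; };

    std::array<RGBA8, 256> buildLUT() {
        std::array<RGBA8, 256> lut{};
        for (int v = 0; v < 256; ++v) {
            lut[v].r = (uint8_t)v;                  // simple color ramp (illustrative)
            lut[v].g = (uint8_t)(255 - v);
            lut[v].b = 128;
            lut[v].a = (uint8_t)(v > 80 ? v : 0);   // make low densities transparent
        }
        return lut;
    }

    // Classification is then a single lookup per sample: RGBA8 c = lut[voxelValue];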

Page 41: Direct Volume Rendering

Levoy - Classification

• Usually not only interested in a particular iso-surface but also in regions of “change”

• Feature extraction - High value of opacity exists in regions of change

• Transfer function (Levoy)
  – Saliency
  – Surface “strength”

Page 42: Direct Volume Rendering

Opacity function (1)

Goal: visualize voxels that have a selected threshold value f_v
- No intermediate geometry is extracted
- The idea is to assign voxels that have value f_v the maximum opacity (say α_v)
- Then create a smooth transition for the surrounding area from 1 to 0
- Levoy wants to maintain a constant thickness for the transition area.

Page 43: Direct Volume Rendering

Opacity function (2)

Maintain a constant isosurface thickness

[Figure: constant-thickness band around the isosurface - opacity = α_v on the surface, opacity = 0 outside the band]

Can we assign opacity based on function value instead of distance? (local operation: we don't know where the isosurface is)

Yes - we can, based on the value difference f - f_v, but we need to take into account the local gradient

Page 44: Direct Volume Rendering

Opacity function (3)

Assign opacity based on value difference (f-fv) and local gradient

gradient: the value fall-off rate, grad = |∇f|
Assuming a region has a constant gradient and the isosurface transition has a thickness R:

opacity = α_v   where f(x) = f_v
opacity = 0     where f(x) = f_v - grad * R

thickness = R

For F = f(x) in between, we interpolate the opacity:

opacity = α_v - α_v * ( f_v - f(x) ) / (grad * R)
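
A sketch of this opacity assignment; treating the value difference symmetrically with |f_v - f| and handling the zero-gradient case separately follow Levoy's description, but the parameter names are assumptions.

    // Iso-surface opacity: alpha_v at f = fv, falling linearly to 0 over a band
    // of thickness R scaled by the local gradient magnitude.
    #include <algorithm>
    #include <cmath>

    double isoOpacity(double f, double gradMag, double fv, double alphaV, double R) {
        if (gradMag == 0.0)                    // flat region: opaque only exactly at fv
            return (f == fv) ? alphaV : 0.0;
        double t = std::fabs(fv - f) / (gradMag * R);  // normalized distance to the surface
        return alphaV * std::max(0.0, 1.0 - t);
    }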

Page 45: Direct Volume Rendering

Levoy - Classification A

Opacity α(x_i) as a function of acquired value f(x_i) and gradient magnitude |∇f(x_i)|:

α(x_i) = α_v                                              if |∇f(x_i)| = 0 and f(x_i) = f_v
α(x_i) = α_v ( 1 - (1/r) |f_v - f(x_i)| / |∇f(x_i)| )     if |∇f(x_i)| > 0 and |f_v - f(x_i)| ≤ r |∇f(x_i)|
α(x_i) = 0                                                otherwise

Page 46: Direct Volume Rendering

DCH - Material Percentage V.

• Probabilistic classifier
• Probability that a voxel has intensity I:

• pi - percentage of material

• Pi(I) - prob. that material i has value I

• Pi(I) given through statistics/physics

• pi then given by:

P(I) = Σ_{i=1..n} p_i P_i(I)

p_i(I) = P_i(I) / Σ_{j=1..n} P_j(I)

Page 47: Direct Volume Rendering

DCH - Classification

• Like Levoy - assumes only two materials per voxel
• That will lead to material percentage volumes

• from them we conclude color/opacity:

C = Σ_{i=1..n} p_i C_i

  – where C_i = (α_i R_i, α_i G_i, α_i B_i, α_i)

Page 48: Direct Volume Rendering

DCH- Classification

[Figure: histogram of CT numbers with the constituent distributions (Air, Fat, Tissue, Bone) and the resulting material-percentage assignment]

Page 49: Direct Volume Rendering

Levoy - Improvements

• Levoy 1990
• Front-to-back with early ray termination when α ≥ 0.95 (sketched below)
• Hierarchical oct-tree data structure
  – skip empty cells efficiently
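
A sketch of early ray termination inside a front-to-back compositing loop; the sample values and the maximum step count are placeholders, only the 0.95 threshold comes from the slide.

    // Early ray termination during front-to-back compositing.
    struct RGBA { double r, g, b, a; };

    void marchRay(/* ray parameters omitted */ RGBA& acc) {
        const double earlyExit = 0.95;                 // threshold from Levoy 1990
        for (int k = 0; k < 1000; ++k) {               // assumed maximum step count
            RGBA s{0.5, 0.5, 0.5, 0.05};               // placeholder sample (classify/shade here)
            double w = 1.0 - acc.a;
            acc.r += s.r * s.a * w;
            acc.g += s.g * s.a * w;
            acc.b += s.b * s.a * w;
            acc.a += s.a * w;
            if (acc.a >= earlyExit) break;             // early ray termination
        }
    }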

Page 50: Direct Volume Rendering

Texture Based Volume Rendering

Page 51: Direct Volume Rendering

3D Texture Based Volume Rendering

• Best known practical volume rendering method for rectilinear grid datasets

• Real-time rendering is possible

Page 52: Direct Volume Rendering

Interpolation of Samples

• Volume stored as 3D texture

• Viewport-aligned slices

• Blended back-to-front

• Trilinear interpolation by hardware

Page 53: Direct Volume Rendering

Classification

• Density values from texture map

• Classification via lookup table

• Takes place in texture mapping stage

Page 54: Direct Volume Rendering

Shading is possible

• Principle
  – Precompute gradient plus density in a texture
  – Shade first intensity (keep density!)
  – Classification via 2D pixel texture

Page 55: Direct Volume Rendering

Texture Mapping

2D image + 2D polygon -> texture-mapped polygon

Page 56: Direct Volume Rendering

Tex. Mapping for Volume Rendering

Consider ray casting …

[Figure: rays cast through an axis-aligned volume (top view)]

Page 57: Direct Volume Rendering

Texture based volume rendering

• Render every xz slice in the volume as a texture-mapped polygon
• The proxy polygons sample the volume data
• Per-fragment RGBA (color and opacity) as classification results
• The polygons are blended from back to front

Use proxy geometry for sampling

Page 58: Direct Volume Rendering

Texture based volume rendering

Page 59: Direct Volume Rendering

Changing Viewing Direction

What if we change the viewing position?

That is okay, we just change the eye position (or rotate the polygons and re-render),

Until …

Page 60: Direct Volume Rendering

Changing View Direction (2)

Until … You are not going to see anything this way …

This is because the view direction is now parallel to the slice planes

What do we do?

Page 61: Direct Volume Rendering

Switch Slicing Planes

What do we do?

• Change the orientation of slicing planes

• Now the slice polygons are parallel to YZ plane in the object space


Page 62: Direct Volume Rendering

Some Considerations… (5)

When do we need to change the slicing orientation?

When the major component of view vector changes from y to -x


Page 63: Direct Volume Rendering

Some Considerations… (6)

Major component of view vector?

Given the view vector (x,y,z) -> get the maximal component

If x: then the slicing planes are parallel to the yz plane
If y: then the slicing planes are parallel to the xz plane
If z: then the slicing planes are parallel to the xy plane

-> This is called (object-space) axis-aligned method.
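
A small sketch of this selection; the enum and function name are assumptions made for the example.

    // Pick the slicing orientation from the major component of the view vector
    // (object-space axis-aligned method).
    #include <cmath>

    enum class SliceAxis { YZ, XZ, XY };   // planes the slice polygons are parallel to

    SliceAxis pickSlicingPlanes(double vx, double vy, double vz) {
        double ax = std::fabs(vx), ay = std::fabs(vy), az = std::fabs(vz);
        if (ax >= ay && ax >= az) return SliceAxis::YZ;  // x dominant -> slices parallel to yz
        if (ay >= az)             return SliceAxis::XZ;  // y dominant -> slices parallel to xz
        return SliceAxis::XY;                            // z dominant -> slices parallel to xy
    }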


Page 64: Direct Volume Rendering

Three copies of data needed

[Figure: three slice stacks - xz slices, yz slices, xy slices]

• We need to reorganize the input textures for different view directions.
• Reorganizing the textures on the fly is too time consuming; we want to prepare the texture sets beforehand.

Page 65: Direct Volume Rendering

Texture based volume rendering

Algorithm (using 2D texture mapping hardware):

Turn off the depth test; enable blending
For (each slice from back to front) {
  - Load the 2D slice of data into texture memory
  - Create a polygon corresponding to the slice
  - Assign texture coordinates to the four corners of the polygon
  - Render and blend the polygon (use OpenGL alpha blending) into the frame buffer
}
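
A sketch of this loop in legacy OpenGL; sliceData(), slabZ(), the slice count N and the slice-to-RGBA classification are assumed to exist elsewhere, and texture setup is reduced to the minimum needed for the example.

    // Back-to-front rendering of object-aligned slices with 2D textures.
    #include <GL/gl.h>

    void renderSlices2D(int numSlices, int N,
                        const unsigned char* (*sliceData)(int), float (*slabZ)(int)) {
        glDisable(GL_DEPTH_TEST);                     // turn off the depth test
        glEnable(GL_BLEND);                           // enable alpha blending
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glEnable(GL_TEXTURE_2D);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        for (int i = 0; i < numSlices; ++i) {         // back to front
            // Load the 2D slice (already classified to RGBA) into texture memory.
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, N, N, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, sliceData(i));
            float z = slabZ(i);
            glBegin(GL_QUADS);                        // proxy polygon for this slice
            glTexCoord2f(0, 0); glVertex3f(-1, -1, z);
            glTexCoord2f(1, 0); glVertex3f( 1, -1, z);
            glTexCoord2f(1, 1); glVertex3f( 1,  1, z);
            glTexCoord2f(0, 1); glVertex3f(-1,  1, z);
            glEnd();
        }
    }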

Page 66: Direct Volume Rendering

Problem (1)

• Non-even sampling rate

[Figure: rays crossing the object-aligned slices at increasing angles, with sample spacings d, d', d'']

d'' > d' > d - the sampling artifact will become visible

Page 67: Direct Volume Rendering

Problem (2)

The object-space axis-aligned method can create an artifact: the popping effect

There is a sudden change of slicing direction when the view vector transits from one major direction to another. The change in the image intensity can be quite visible.

Page 68: Direct Volume Rendering

Solution (1)

• Insert intermediate slices to maintain the sampling rate

Page 69: Direct Volume Rendering

Solution (2)

Use image-space axis-aligned slicing planes: the slicing planes are always parallel to the view plane

Page 70: Direct Volume Rendering

3D Texture Based Volume Rendering

Page 71: Direct Volume Rendering

Image-Space Axis-Aligned

Arbitrary slicing through the volume and texture mapping capabilities are needed

- Arbitrary slicing: this can be computed using software in real time

This is basically polygon-volume clipping

Page 72: Direct Volume Rendering

Image-Space Axis-Aligned (2)

Texture mapping to the arbitrary slices

This requires 3D Solid texture mapping

Input texture: A bunch of slices (volume)

Depending on the position of the polygon, appropriate textures are mapped to the polygon.

Page 73: Direct Volume Rendering

3D Texture Mapping

Now the input texture space is 3D

Texture coordinates (r,s) -> (r,s,t)

[Figure: the texture space is the unit cube with corners (0,0,0) … (1,1,1); the slicing polygon's vertices carry 3D texture coordinates (r0,s0,t0) … (r3,s3,t3)]
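
A sketch of drawing one such slice with 3D texture coordinates in legacy OpenGL; it assumes 3D texturing is available (OpenGL 1.2+) and that the clipped polygon's vertices and texture coordinates have already been computed - the numbers below are placeholders.

    // Texture a view-aligned slice polygon with 3D texture coordinates.
    #include <GL/gl.h>

    void drawViewAlignedSlice() {
        glEnable(GL_TEXTURE_3D);          // the whole volume is bound as a 3D texture
        glBegin(GL_POLYGON);
        glTexCoord3f(0.1f, 0.2f, 0.3f); glVertex3f(-0.8f, -0.6f, 0.0f);
        glTexCoord3f(0.9f, 0.2f, 0.3f); glVertex3f( 0.8f, -0.6f, 0.0f);
        glTexCoord3f(0.9f, 0.8f, 0.7f); glVertex3f( 0.8f,  0.6f, 0.0f);
        glTexCoord3f(0.1f, 0.8f, 0.7f); glVertex3f(-0.8f,  0.6f, 0.0f);
        glEnd();
    }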

Page 74: Direct Volume Rendering

Pros and Cons

2D textured object-aligned slices
  + Very high performance
  + High availability
  - High memory requirement
  - Bi-linear interpolation
  - Inconsistent sampling rates
  - Popping artifacts

Page 75: Direct Volume Rendering

Pros and Cons

3D textured view-aligned slices
  + Higher quality
  + No popping effect
  - Need to compute the slicing planes for every view angle
  - Limited availability (not anymore)

Page 76: Direct Volume Rendering

Classification Implementation

[Figure: four transfer-function curves mapping the data value v to Red, Green, Blue and Alpha, giving (R, G, B, A)]

Page 77: Direct Volume Rendering

Classification Implementation (2)

• Pre-classification - using a color palette:

    glColorTableEXT(GL_SHARED_TEXTURE_PALETTE_EXT, GL_RGBA8, 256*4,
                    GL_RGBA, GL_UNSIGNED_BYTE, color_palette);

• Post-classification - using a 1D (2D, 3D) texture:

    glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA8, 256*4, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, color_palette);

Page 78: Direct Volume Rendering

Classification implementation (3)

• Post-classification – dependent texture

[Figure: dependent texture lookup - Texture Unit 0 samples the volume intensity v at (s, t, r); v then indexes Texture Unit 1 (the transfer function) to produce (R, G, B, A)]

Page 79: Direct Volume Rendering

Shading

• Use a per-fragment shader
  – Store the pre-computed gradient into an RGBA texture
  – Light 1 direction as constant color 0
  – Light 1 color as primary color
  – Light 2 direction as constant color 1
  – Light 2 color as secondary color

Page 80: Direct Volume Rendering

Non-polygonal isosurface

• Store the voxel gradient as an RGB texture

• Store the volume density as alpha

• Use OpenGL alpha test to discard volume density not equal to the threshold

• Use the gradient texture to perform shading
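
A sketch of the alpha-test setup in OpenGL for this technique; using GL_GEQUAL with a threshold rather than exact equality is a practical assumption, since an exact match rarely passes.

    // Alpha test for the non-polygonal isosurface: only fragments whose alpha
    // (the stored density) reaches the iso-value survive and get shaded.
    #include <GL/gl.h>

    void setupIsoSurfaceAlphaTest(float isoValue /* in [0, 1] */) {
        glEnable(GL_ALPHA_TEST);
        glAlphaFunc(GL_GEQUAL, isoValue);
    }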

Page 81: Direct Volume Rendering

Non-polygonal isosurface (2)

- Isosurface rendering results

[Figure: results with no shading, diffuse only, and diffuse + specular]