Shadows
Dinesh Manocha
Computer Graphics
COMP770 lecture
Spring 2009
What are Shadows?
From Webster’s dictionary:
Shadow (noun): partial darkness or obscurity within a part of space from which rays from a source of light are cut off by an interposed opaque body
Is this definition sufficient?
What are Shadows?
• Does the occluder have to be opaque to have a shadow?
  – transparency (no scattering)
  – translucency (scattering)
• What about indirect light?
  – reflection
  – atmospheric scattering
  – wave properties: diffraction
• What about volumetric or atmospheric shadowing?
  – changes in density
Is this still a shadow?
What are Shadows Really?
• Is this definition sufficient?
• In practice, too general!
• We need some restrictions

Volumes of space that receive no light or light that has been attenuated through obscuration
Common Shadow Algorithm Restrictions
• No transparency or translucency!
  – Limited forms can sometimes be handled efficiently
  – Backwards raytracing has no trouble with these effects, but it is much more expensive than typical shadow algorithms
• No indirect light!
  – More sophisticated global illumination algorithms handle this at great expense (radiosity, backwards raytracing)
• No atmospheric effects (vacuum)!
  – No indirect scattering
  – No shadowing from density changes
• No wave properties (geometric optics)!
What Do We Call Shadows?
• Regions not completely visible from a light source
• Assumptions:
  – Single light source
  – Finite area light sources
  – Opaque objects
• Two parts:
  – Umbra: totally blocked from light
  – Penumbra: partially obscured

[Figure: an area light source and an occluder casting a shadow with umbra and penumbra regions]
Basic Types of Light & Shadows
• From more realistic to simpler: area light (direct & indirect), area light (direct only), point light (direct only), directional light (direct only)
• Area lights produce SOFT SHADOWS; point and directional lights produce HARD or SHARP SHADOWS
• Point lights are more realistic for small-scale scenes; directional light is realistic for scenes lit by sunlight (in space!)
Goal of Shadow Algorithms
• Shadow computation can be considered a global illumination problem
  – this includes raytracing and radiosity!
• Most common shadow algorithms are restricted to direct light and point or directional light sources
• Area light sources are usually approximated by many point lights or by filtering techniques

Ideally, for all surfaces, find the fraction of light that is received from a particular light source
Global Shadow Component in Local Illumination Model
• Shadow_i is the fraction of light received at the surface
  – For point lights, 0 (shadowed) or 1 (lit)
  – For area lights, a value in [0,1]
• Ambient term approximates indirect light
Without shadows:

  I = GlobalAmbient + Σ_{i=1..NumLights} SpotDist_i (Ambient_i + Diffuse_i + Specular_i)

With shadows:

  I = GlobalAmbient + Σ_{i=1..NumLights} SpotDist_i Ambient_i + Σ_{i=1..NumLights} Shadow_i SpotDist_i (Diffuse_i + Specular_i)
What else does this say?
• Multiple lights are not really difficult (conceptually)
• Complex multi-light effects are many single-light problems summed together!
  – Superposition property of the illumination model
• This works for shadows as well!
• Focus on single-source shadow computation
• Generalization is simple, but efficiency may be improved

  I = GlobalAmbient + Σ_{i=1..NumLights} SpotDist_i Ambient_i + Σ_{i=1..NumLights} Shadow_i SpotDist_i (Diffuse_i + Specular_i)
Characteristics of Shadow Algorithms
• Light-source types
  – Directional
  – Point
  – Area
• Light transfer types
  – Direct vs. indirect
  – Opaque only
  – Transparency / translucency
  – Atmospheric effects
• Geometry types
  – Polygons
  – Higher-order surfaces
Characteristics of Shadow Algorithms
• Computational precision (like visibility algorithms)
  – Object precision (geometry-based, continuous)
  – Image precision (image-based, discrete)
• Computational complexity
  – Running time
  – Speedups from static viewer, lights, scene
  – Amount of user intervention (object sorting)
• Numerical degeneracies
Characteristics of Shadow Algorithms
• When shadows are computed
  – During rendering of the fully-lit scene (additive)
  – After rendering of the fully-lit scene (subtractive): not correct, but fast and often good enough
• Types of shadow/object interaction
  – Between shadow-casting object and receiving object
  – Object self-shadowing
  – General shadow casting
Taxonomy of Shadow Algorithms
• Object-based
  – Local illumination model (Warnock69, Gouraud71, Phong75)
  – Area subdivision (Nishita74, Atherton78)
  – Planar projection (Blinn88)
  – Radiosity (Goral84, Cohen85, Nishita85)
  – Lloyd (2004)
• Image-based
  – Shadow maps (Williams78, Hourcade85, Reeves87, Stamminger/Drettakis02, Lloyd07)
  – Projective textures (Segal92)
• Hybrid
  – Scanline approach (Appel68, Bouknight70)
  – Raytracing (Appel68, Goldstein71, Whitted80, Cook84)
  – Backwards raytracing (Arvo86)
  – Shadow volumes (Crow77, Bergeron86, Chin89)
Good Surveys of Shadow Algorithms
Early complete surveys found in Crow77 & Woo90
Recent survey on hard shadows: Lloyd 2007 (Ph.D. thesis)
Recent survey on soft shadows: Laine 2007 (Ph.D. thesis)
Survey of Shadow Algorithms
Focus is on the following algorithms:
– Local illumination
– Raytracing
– Planar projection
– Shadow volumes
– Projective textures
– Shadowmaps
Will briefly mention:
– Scanline approach
– Area subdivision
– Backwards raytracing
– Radiosity
Local Illumination “Shadows”
• Backfacing polygons are in shadow (only lit by ambient)
• Point/directional light sources only
• Partial self-shadowing
  – like backface culling is a partial visibility solution
• Very fast (often implemented in hardware)
• General surface types in almost any rendering system!
Local Illumination “Shadows”
• Typically, not considered a shadow algorithm
• Just handles shadows of the most restrictive form
• Dramatically improves the look of other restricted algorithms
Local Illumination “Shadows”
Properties:
– Point or directional light sources
– Direct light
– Opaque objects
– All types of geometry (depends on rendering system)
– Object precision
– Fast, local computation (single pass)
– Only handles limited self-shadowing (convenient, since many algorithms do not handle any self-shadowing)
– Computed during normal rendering pass
– Simplest algorithm to implement
Raytracing Shadows
Only interested in shadow-ray tracing (shadow feelers)
– For a point P in space, determine if it is in shadow with respect to a single point light source L by intersecting the line segment PL (the shadow feeler) with the environment
– If the line segment intersects an object, then P is in shadow; otherwise, point P is illuminated by light source L

[Figure: light L, surface point P, and the shadow feeler (segment PL)]
Raytracing Shadows
• Arguably, the simplest general algorithm
• Can even handle area light sources
  – point-sample the area source: distributed raytracing (Cook84)

[Figure: left, a point light L_i fully blocked from P (Shadow_i = 0); right, an area light L_i sampled with several shadow feelers from P, 2 of 5 reaching the light (Shadow_i = 2/5)]

  I = GlobalAmbient + Σ_{i=1..NumLights} SpotDist_i Ambient_i + Σ_{i=1..NumLights} Shadow_i SpotDist_i (Diffuse_i + Specular_i)
Raytracing Shadows
Sounds great, what’s the problem?
– Slow
  • Intersection tests are (relatively) expensive
  • May be sped up with standard raytracing acceleration techniques
– Shadow feeler may incorrectly intersect the object touching P
  • Depth bias
  • Object tagging: don’t intersect the shadow feeler with the object touching P
    – Works only for objects not requiring self-shadowing
Raytracing Shadows
How do we use the shadow feelers?
Two different rendering methods:
– Standard raycasting with shadow feelers
– Hardware Z-buffered rendering with shadow feelers
Raytracing Shadows
Raycasting with shadow feelers
For each pixel:
• Trace a ray from the eye through the pixel center
• Compute the closest object intersection point P along the ray
• Calculate Shadow_i for the point by performing a shadow feeler intersection test
• Calculate illumination at point P

[Figure: eye rays through pixels, each with a shadow feeler toward the light]
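The shadow-feeler test in the loop above can be sketched in code. This is a minimal illustrative Python sketch, not from the lecture: the scene is just a list of spheres, and `intersects_any` / `shadow_term` are invented names.

```python
import math

def intersects_any(origin, direction, max_t, spheres):
    """True if the segment origin + t*direction, t in (eps, max_t), hits
    any sphere. `direction` must be unit length; spheres are
    (center, radius) pairs."""
    eps = 1e-4   # depth bias: skip the surface the feeler starts on
    for center, radius in spheres:
        oc = [o - c for o, c in zip(origin, center)]
        b = 2.0 * sum(d * x for d, x in zip(direction, oc))
        c = sum(x * x for x in oc) - radius * radius
        disc = b * b - 4.0 * c           # discriminant of t^2 + b t + c
        if disc < 0.0:
            continue                     # feeler misses this sphere
        root = math.sqrt(disc)
        for t in ((-b - root) / 2.0, (-b + root) / 2.0):
            if eps < t < max_t:
                return True              # occluder between P and L
    return False

def shadow_term(p, light, spheres):
    """Shadow_i for point p w.r.t. a point light: 0 shadowed, 1 lit."""
    to_light = [li - pi for li, pi in zip(light, p)]
    dist = math.sqrt(sum(d * d for d in to_light))
    direction = [d / dist for d in to_light]
    return 0.0 if intersects_any(p, direction, dist, spheres) else 1.0
```

The `eps` term is the depth bias mentioned later in the slides: without it, the feeler can re-intersect the surface it starts on.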
Raytracing Shadows
Z-buffering with shadow feelers
• Render the scene into the depth buffer (no need to compute color)
• For each pixel, determine if it is in shadow:
  – “Unproject” the screen-space pixel point to transform it into eye space
  – Perform the shadow feeler test with the light in eye space to compute Shadow_i
  – Store Shadow_i for each pixel
• Light the scene using the per-pixel Shadow_i values

[Figure: eye and light, with shadow feelers from the unprojected pixel points]
Raytracing Shadows
Z-buffering with shadow feelers
How do we use per-pixel Shadow_i values to light the scene?
Method 1: compute lighting at each pixel in software
• Deferred shading
• Requires object surface info (normals, materials)
• Could use a more complex lighting model
Raytracing Shadows
Z-buffering with shadow feelers
How do we use per-pixel Shadow_i values to light the scene?
Method 2: use graphics hardware
For point lights:
• Shadow_i values are either 0 or 1
• Use the stencil buffer; stencil values = Shadow_i values
• Re-render the scene with the corresponding light on, but use the stencil test to write only into lit pixels (stencil = 1). Perform additive blending; the ambient-lit scene should be rendered in the depth-computation pass.
For area lights:
• Shadow_i values are continuous in [0,1]
• Multiple passes and modulation blending
• Pixel Contribution = Ambient_i + Shadow_i (Diffuse_i + Specular_i)
Raytracing Shadows
Properties:
– Point, directional, and area light sources
– Direct light (may be generalized to indirect)
– Opaque (thin-film transparency easily handled)
– All types of geometry (just need an edge intersection test)
– Hybrid: object precision (line intersection), image precision for generating pixel rays
– Slow, but many acceleration techniques are available
– General shadow algorithm
– Computed during illumination (additive, but subtractive is possible)
– Simple to implement
Planar Projection Shadows
• Shadows cast by objects onto planar surfaces
• Brute force: project the shadow-casting objects onto the plane and draw each projected object as a shadow
  – Directional light: parallel projection
  – Point light: perspective projection
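The projection step can be expressed as a single 4×4 matrix. A common construction (an assumption here, not given on the slide) builds it from the plane equation and a homogeneous light position, covering both the parallel (directional, w = 0) and perspective (point light, w = 1) cases:

```python
import numpy as np

def planar_shadow_matrix(plane, light):
    """Matrix projecting points onto the plane ax + by + cz + d = 0,
    away from a homogeneous light position. light = (lx, ly, lz, 1)
    gives a perspective projection (point light); w = 0 gives a
    parallel projection (directional light)."""
    plane = np.asarray(plane, dtype=float)
    light = np.asarray(light, dtype=float)
    return (plane @ light) * np.eye(4) - np.outer(light, plane)

# Point light 10 units above the ground plane y = 0
M = planar_shadow_matrix((0, 1, 0, 0), (0, 10, 0, 1))
s = M @ np.array([2.0, 5.0, 0.0, 1.0])   # a caster vertex
s /= s[3]                                 # homogeneous divide
# s[:3] is the shadow point on the plane: (4, 0, 0)
```

Geometrically: the ray from (0, 10, 0) through (2, 5, 0) reaches y = 0 at x = 4, which the matrix reproduces after the homogeneous divide.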
Planar Projection Shadows
Not sufficient:
– coplanar polygons (Z-fighting): depth bias
– requires clipping to the relevant portion of the plane: shadow-receiver stenciling
Planar Projection Shadows
Better approach: subtractive strategy
Render the scene fully lit by a single light
For each planar shadow receiver:
• Render receivers: stencil the pixels covered
• Render the projected shadow casters in a shadow color with depth testing on, depth biasing (offset from the plane), modulation blending, and stenciling (to write only on the receiver and to avoid double pixel writes)
  – Receiver stencil value = 1; only write where the stencil equals 1, and change it to zero after modulating the pixel. Texture remains visible in shadow.
Planar Projection Shadows
Problems with the subtractive strategy
• Called subtractive because it begins with full lighting and removes light in shadows (modulates)
• Can be more efficient than additive (avoids passes)
• Not as accurate as additive; doesn’t follow the lighting model
  – Specular and diffuse components remain in shadow
  – Modulates the ambient term
  – Shadow color is chosen by the user

  I = GlobalAmbient + Σ_{i=1..NumLights} ShadowColor_i SpotDist_i (Ambient_i + Diffuse_i + Specular_i)

as opposed to the correct version:

  I = GlobalAmbient + Σ_{i=1..NumLights} SpotDist_i Ambient_i + Σ_{i=1..NumLights} Shadow_i SpotDist_i (Diffuse_i + Specular_i)
Planar Projection Shadows
Even better approach: additive strategy
• Draw the ambient-lit shadow-receiving scene (global and all lights’ local ambient)
• For each light source:
  For each planar receiver:
  – Render receiver: stencil the pixels covered
  – Render the projected shadow casters into the stenciled receiver area: depth testing on, depth biasing, stencil the pixels covered by shadow
  – Re-render receivers lit by the single light source (no ambient light): depth test set to EQUAL, additive blending, write only into stenciled areas on the receiver that are not in shadow
• Draw the shadow-casting scene: full lighting
Planar Projection Shadows
Properties
– Point or directional light sources
– Direct light
– Opaque objects (could fake transparency using subtractive)
– Polygonal shadow-casting objects, planar receivers
– Object precision
– Number of passes: L = num lights, P = num planar receivers
  • subtractive: 1 fully-lit pass, L·P special passes (no lighting)
  • additive: 1 ambient-lit pass, 2·L·P receiver passes, L·P caster passes
Planar Projection Shadows
Properties (continued)
– Can take advantage of static components:
  • static objects & lights: precompute the silhouette polygon from the light source
  • static objects & viewer: precompute the first pass over the entire scene
– Visibility from the light is handled by the user (must choose casters and receivers)
– No self-shadowing (relies on local illumination)
– Both subtractive and additive strategies presented
– Conceptually simple, surprisingly difficult to get right; gives the techniques needed to handle more sophisticated multipass methods
Shadow Volumes
What are they?
Volume of space in shadow of a single occluder with respect to a point light source,
OR
Volume of space swept out by extruding an occluding polygon away from a point light source along the projector rays originating at the point light and passing through the vertices of the polygon

[Figure: a point light, an occluding triangle, and the resulting 3D shadow volume]
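The extrusion can be sketched directly. A minimal illustrative Python sketch: `shadow_volume` is an invented name, and the finite `extent` is a stand-in for “far enough” (real implementations extrude to infinity or clip to the view volume).

```python
def shadow_volume(triangle, light, extent=1000.0):
    """Extrude an occluding triangle away from a point light.

    Returns the far (extruded) triangle; the volume itself is bounded
    by the original triangle, this far cap, and the quads joining each
    original edge to the corresponding extruded edge."""
    far = []
    for v in triangle:
        d = [vi - li for vi, li in zip(v, light)]   # projector ray direction
        length = sum(x * x for x in d) ** 0.5
        far.append(tuple(v[i] + extent * d[i] / length for i in range(3)))
    return far

# Triangle in the plane z = 5, point light at the origin
far_tri = shadow_volume([(0, 0, 5), (1, 0, 5), (0, 1, 5)], (0, 0, 0))
# far_tri[0] == (0.0, 0.0, 1005.0): straight along the projector ray
```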
Shadow Volumes
How do you use them?
• Parity test to see if a point P on a visible surface is in shadow:
  – Initialize parity to 0
  – Shoot a ray from the eye to point P
  – Each time a shadow-volume boundary is crossed, invert the parity
• If parity = 1, P is in shadow; if parity = 0, P is lit
What are some potential problems?

[Figure: an eye ray crossing the shadow volume of an occluder; the parity flips 0 → 1 → 0 along the ray]
Shadow Volumes
Problems with the Parity Test
• Eye inside a shadow volume
  – Incorrectly shadows points (reversed parity)
• Self-shadowing of visible occluders
  – Should a point on the occluder flip the parity? (consistent if not flipped)
  – A point on the occluder should not flip the parity
  – Touching the boundary is not counted as a crossing
• Multiple overlapping shadow volumes
  – Incorrectly shadows points (incorrect parity)
  – Is parity’s binary condition sufficient?
Shadow Volumes
Solutions to the Parity Test Problems
• Eye inside a shadow volume
  – Initialize parity to 0 when starting outside and 1 when inside
• Self-shadowing of visible occluders
  – Do not flip parity when viewing the “in” side of an occluder
  – Do not flip parity when viewing the “out” side of an occluder either
• Multiple overlapping shadow volumes
  – A binary parity value is not sufficient; we need a general counter for boundary crossings: +1 entering a shadow volume, −1 exiting
Shadow Volumes
A More General Solution
Determine if point P is in shadow:
– Initialize the boundary-crossing counter to the number of shadow volumes containing the eye point
  Why? Because the ray must leave this many shadow volumes to reach a lit point
– Along the ray, increment the counter each time a shadow volume is entered, decrement each time one is exited
– If the counter is > 0, P is in shadow
Special case when P is on an occluder:
– Do not increment or decrement the counter
– A point on the boundary does not count as a crossing
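Since the test reduces to a signed count of boundary crossings, it can be sketched in a few lines (illustrative Python; `in_shadow` is an invented name, not from the slides):

```python
def in_shadow(crossings, volumes_containing_eye=0):
    """Shadow-volume counting test for a visible point P.

    crossings: the ordered +1/-1 events along the eye ray before it
    reaches P (+1 entering a shadow volume, -1 exiting one). The
    counter starts at the number of volumes containing the eye, since
    the ray must leave that many volumes to reach a lit point."""
    return volumes_containing_eye + sum(crossings) > 0

# Overlapping volumes: enter two, exit one -> P still inside one volume
# in_shadow([+1, +1, -1])                   -> True
# Eye starts inside one volume, ray exits it before reaching P
# in_shadow([-1], volumes_containing_eye=1) -> False
```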
Shadow Volumes
More Examples
Can you calculate the final boundary count for these visible points?

[Figure: eye rays through several overlapping shadow volumes, annotated with +1/−1 crossings and final counts per visible point]
Shadow Volumes
How do we use this information to find shadow pixels?
Could just use raycasting (a ray through each pixel)
– Too slow; possibly more primitives to intersect with
– Could use silhouettes of complex objects to simplify the shadow volumes

[Figure: shadow volumes with front-facing (+) and back-facing (−) boundaries crossed by eye rays, with per-pixel counts]
Shadow Volumes
Using Standard Graphics Hardware
Simple observations:
– For convex occluders, shadow volumes form a convex shape
– Enter through front-facing shadow-volume boundaries; exit through back-facing ones

[Figure: the same scene, with + marking front-facing and − marking back-facing boundary crossings]
Shadow Volumes
Using Standard Graphics Hardware
Use standard Z-buffered rendering and the stencil buffer (8 bits) to calculate the boundary count for each pixel:
– Create shadow volumes for each occluding object (should be convex)
– Render the ambient-lit scene, keep the depth values
– For each light source:
  • Initialize stencil values to the number of volumes containing the eye point
  • Still using the Z-buffer depth test (strictly less-than), but with no depth update:
    – Render the front-facing shadow-volume boundary polygons, incrementing stencil values for all pixels covered by polygons that pass the depth test
    – Render the back-facing boundary polygons, but decrement the stencil
  • Pixels with a stencil value of zero are lit; re-render the scene with lighting on (no ambient, depth test set to EQUAL)
Shadow Volumes
Using Standard Graphics Hardware: step by step
• Create shadow volumes
• Initialize stencil buffer values to the number of volumes containing the eye (per-pixel stencil values initially 0)
Shadow Volumes
Using Standard Graphics Hardware: step by step
• Render the ambient-lit scene
• Store the Z-buffer
• Set the depth test to strictly less-than
Shadow Volumes
Using Standard Graphics Hardware: step by step
• Render front-facing shadow-volume boundary polygons
  – Why front faces first? Unsigned stencil values
• Increment stencil values for covered pixels that pass the depth test
Shadow Volumes
Using Standard Graphics Hardware: step by step
• Render back-facing shadow-volume boundary polygons
• Decrement stencil values for covered pixels that pass the depth test
Shadow Volumes
Using Standard Graphics Hardware: step by step
• Pixels with a stencil value of zero are lit
• Set the depth test to strictly equals
• Re-render the lit scene with no ambient into the lit pixels
Shadow Volumes
More Potential Problems
• Lots o’ geometry!
  – Only create volumes for shadow-casting objects (approximation)
  – Use only silhouettes
• Lots o’ fill!
  – Reduce geometry
  – Have a good “max distance”
  – Clip to the view volume
• Near-plane clipping
Shadow Volumes
Properties
– Point or directional light sources
– Direct light
– Opaque objects (could fake transparency using subtractive)
– Restricted to polygonal objects (could be generalized)
– Hybrid: object precision in the creation of shadow volumes, image-precision per-pixel stencil evaluation
– Number of passes: L = num lights, N = number of tris
  • additive: 1 ambient-lit, 3·N·L shadow-volume, 1 fully-lit
  • subtractive: 1 fully-lit, 3·N·L shadow-volume, 1 image pass (modulation)
– Could be made faster by silhouette simplification and by hand-picking shadow casters and receivers
Shadow Volumes
Properties (continued)
– Can take advantage of static components:
  • static objects & lights: precompute shadow volumes from the light sources
  • static objects & viewer: precompute the first pass over the entire scene
– General shadow algorithm, but could be restricted for more speed
– Both subtractive and additive strategies presented
Projective Texture Shadows
What are Projective Textures?
Texture maps that are mapped to a surface through a projective transformation of the vertices into the texture’s “camera” space
Projective Texture Shadows
How do we use them to create shadows?
Project a modulation image of the shadow-casting objects from the light’s point of view onto the shadow-receiving objects

[Figure: left, the light’s point of view and the resulting shadow projective texture (modulation image or light map); right, the eye’s point of view with the projective texture applied to the ground plane (self-shadowing is from another algorithm)]
Projective Texture Shadows
More details
Fast, subtractive method
• For each light source:
  – Create a light camera that encloses the shadowed area
  – Render the shadow-casting objects into the light’s view; only need to create a light map (1 in light, 0 in shadow)
  – Create a projective texture from the light’s view
  – Render the fully-lit shadow-receiving objects with the applied modulation projective textures (need additive blending for all light sources except the first one)
• Render the fully-lit shadow-casting objects
Projective Texture Shadows
More examples
• Cast shadows from complex objects onto complex objects in only 2 passes over shadow casters and 1 pass over receivers (for 1 light)
• Lighting for shadowed objects is computed independently for each light source and summed into a final image
• Colored light sources: lit areas are modulated by a value of 1, and shadow areas can be any ambient modulation color
Projective Texture Shadows
Problems
• Does not use visibility information from the light’s view
  – Objects must be depth-sorted
  – Parts of an object that are not visible from the light also have the projective texture applied (ambient light appears darker on shadow-receiving objects)
• Receiving objects may already be textured
  – Typically, only one texture can be applied to an object at a time
Projective Texture Shadows
Solutions… well, sort of...
• Does not use visibility information from the light’s view
  – User selects shadow casters and receivers
  – Casters can be receivers, and receivers can be casters
  – Must create and apply projective textures in front-to-back order from the light
  – Darker ambient lighting is accepted; finding these regions requires a more general shadow algorithm
• Receiving objects may already be textured
  – Use two passes: first apply the base texture, then apply the projective texture with modulation blending
  – Use multitexture: this is what it is for! Avoids passes over the geometry!
Projective Texture Shadows
Properties
• Point or directional light sources
• Direct light (fake transparency with different modulation colors)
• All types of geometry (depends on the rendering system)
• Image precision (image-based)
• For each light: 2 passes over shadow-casting objects (1 to create the modulation image, 1 with full lighting), 1 pass over shadow-receiving objects (fully lit with the projective texture)
• More passes will be required for shadow-receiving objects that are already textured
• Benefits mostly from a static scene (precompute shadow textures)
• User must partition objects into casters and receivers (casters could be receivers and vice versa)
Projective Texture Shadows
How do we apply projective textures?
• All points on the textured surface must be mapped into the texture’s camera space (projective transformation)
• Position on the texture camera’s view-plane window maps into the 2D texture map
How can this be done efficiently?
A slight modification to perspectively-correct texture mapping
Projective Texture Shadows
Perspectively-incorrect Texture Mapping
• Relies on interpolating screen-space values along a projected edge
• Vertices after the perspective transformation and perspective divide:
  (x, y, z, w) → (x/w, y/w, z/w, 1)

  A = (x1/w1, y1/w1, z1/w1, s1, t1)
  B = (x2/w2, y2/w2, z2/w2, s2, t2)
  I(t) = (1 − t) A + t B
Projective Texture Shadows
Perspectively-correct Texture Mapping
• Add a 3D homogeneous coordinate to the texture coords: (s, t, 1)
• Divide all vertex components by w after the perspective transformation
• Interpolate all values, including 1/w
• Obtain perspectively-correct texture coords (s′, t′) by applying another homogeneous normalization (divide the interpolated s/w and t/w terms by the interpolated 1/w term)

  A = (x1/w1, y1/w1, z1/w1, s1/w1, t1/w1, 1/w1)
  B = (x2/w2, y2/w2, z2/w2, s2/w2, t2/w2, 1/w2)
  I(t) = (1 − t) A + t B

Final perspectively-correct values, by normalizing the homogeneous texture coords:
  (x′, y′, z′, s′, t′) = ( I_x, I_y, I_z, I_{s/w} / I_{1/w}, I_{t/w} / I_{1/w} )
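A small numeric sketch of this divide-by-interpolated-1/w step (illustrative Python, not from the slides; `perspective_correct_tex` is an invented name):

```python
def perspective_correct_tex(A, B, f):
    """Interpolate texture coords along a screen-space edge at fraction f.

    A, B: (s, t, w) -- texture coords and clip-space w at the two
    endpoints. Linearly interpolate s/w, t/w and 1/w (the quantities
    that ARE linear in screen space), then divide by the interpolated
    1/w to recover the true texture coords."""
    (s1, t1, w1), (s2, t2, w2) = A, B
    lerp = lambda a, b: (1.0 - f) * a + f * b
    sw = lerp(s1 / w1, s2 / w2)
    tw = lerp(t1 / w1, t2 / w2)
    iw = lerp(1.0 / w1, 1.0 / w2)
    return sw / iw, tw / iw

# Midpoint of an edge whose far endpoint (w = 3) is three times as deep:
s_mid, t_mid = perspective_correct_tex((0.0, 0.0, 1.0), (1.0, 1.0, 3.0), 0.5)
# s_mid = 0.25, not the naive 0.5: the texture bunches up toward the
# nearer endpoint, which is exactly the perspective foreshortening we want.
```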
Projective Texture Shadows
Projective Texture Mapping
• Texture coords become 4D just like vertex coords: (x, y, z, w) → (s, t, r, q)
• A full 4×4 matrix transformation is applied to the texture coords
• Projective transformations are also allowed; another perspective divide is needed for the texture coords:
  Vertices, homogeneous space to screen space: (x, y, z, w) → (x/w, y/w, z/w)
  Texture coords, homogeneous space to texture space: (s, t, r, q) → (s/q, t/q, r/q)
• Requires another per-vertex transformation, but the per-pixel work is the same as in perspectively-correct texture mapping (Segal92)
Projective Texture Shadows
Projective Texture Mapping
Given vertex v, corresponding texture coords t, and two 4×4 matrix transformations M and T (M = composite modeling, viewing, and projection transformation; T = texture-coordinate transformation matrix):
– Each vertex is represented as [ M·v, T·t ] = [ x y z w s t r q ]
– Transform into screen space through a perspective divide of all components by w:
  [ x y z w s t r q ] → [ x/w y/w z/w s/w t/w r/w q/w ]
– All values are linearly interpolated along edges (and across the polygon face)
– Perform a per-pixel homogeneous normalization of the texture coords by dividing by the interpolated q/w value:
  [ x′ y′ z′ s′ t′ r′ ] = [ x/w y/w z/w (s/w)/(q/w) (t/w)/(q/w) (r/w)/(q/w) ]
– Same as perspectively-correct texture mapping, but instead of dividing by the interpolated 1/w, divide by the interpolated q/w (Segal92)
Projective Texture Shadows
Projective Texture Mapping

  A = (x1/w1, y1/w1, z1/w1, s1/w1, t1/w1, r1/w1, q1/w1)
  B = (x2/w2, y2/w2, z2/w2, s2/w2, t2/w2, r2/w2, q2/w2)
  I(t) = (1 − t) A + t B

Final perspectively-correct values, by normalizing the homogeneous texture coords:
  (x′, y′, z′, s′, t′, r′) = ( I_x, I_y, I_z, I_{s/w} / I_{q/w}, I_{t/w} / I_{q/w}, I_{r/w} / I_{q/w} )
Projective Texture Shadows
Projective Texture Mapping
So how do we actually use this to apply the shadow texture?
• Use the vertex’s original coords as the texture coords
• Texture transformation: T = LightProjection · LightViewing · NormalModeling
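A sketch of composing that texture matrix (illustrative Python with NumPy; the extra `bias` matrix, which remaps the light's [-1, 1] clip range into [0, 1] texture space, is a common practical addition and an assumption here, not shown on the slide):

```python
import numpy as np

def shadow_texture_matrix(light_projection, light_viewing, modeling):
    """Texture-coordinate transform for projective shadow textures,
    per the slide: T = LightProjection * LightViewing * Modeling,
    mapping an object-space vertex into the light's clip space.
    The bias matrix (assumed) then maps clip space into texture space."""
    bias = np.array([[0.5, 0.0, 0.0, 0.5],
                     [0.0, 0.5, 0.0, 0.5],
                     [0.0, 0.0, 0.5, 0.5],
                     [0.0, 0.0, 0.0, 1.0]])
    return bias @ light_projection @ light_viewing @ modeling

# With identity transforms, the origin lands at the texture center
T = shadow_texture_matrix(np.eye(4), np.eye(4), np.eye(4))
center = T @ np.array([0.0, 0.0, 0.0, 1.0])   # -> (0.5, 0.5, 0.5, 1.0)
```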
Shadow Maps
For accelerating raytraced shadow feelers
• Previously, shadow feelers had to be intersected against all objects in the scene
• What if we knew the nearest intersection point for all rays leaving the light?
• The depth buffer of the scene rendered from a camera at the light gives us a discretized version of this
• This depth buffer is called a shadow map
• Instead of intersecting rays with objects, we intersect the ray with the light’s view plane and look up the nearest depth value
• If the light’s depth value at this point is less than the depth to the eye-ray’s nearest intersection point, then this point is in shadow!

[Figure: eye-ray nearest intersection point E and light-ray nearest intersection point L; if L is closer to the light than E, then E is in shadow]
Shadow Maps
For accelerating raytraced shadow feelers
Cool, we can really speed up raytraced shadows now!
– Render from the eye view to accelerate first-hit raycasting
– Render from the light view to store first hits from the light
– For each pixel ray in the eye’s view, we can project the first hit point into the light’s view and check whether anything intersects the shadow feeler with a simple table lookup!
– The shadow map is discretized, but we can just use the nearest value
What are the potential problems?
Shadow Maps
Problems with raytraced shadow maps
• Still too slow
  – requires many per-pixel operations
  – does not take advantage of pixel coherence in the eye view
• Still has the self-shadowing problem
  – need a depth bias
• Discretization error
  – Using the depth value nearest to the projected point may not be sufficient
  – How can we filter the depth values? The standard way does not really make sense here.
Shadow Maps
Faster way: the standard shadow-map approach
• Not normally used as a raytracing acceleration technique; normally used in a standard Z-buffered graphics system
• Two methods presented (Williams78):
  – Subtractive: postprocessing on the final lit image (like full-scene image warping)
  – Additive: as implemented in graphics hardware (OpenGL extension on InfiniteReality)
Shadow Maps
Illustration of the basic idea
[Figure: shadow map from light 1, shadow map from light 2, and the final view]
Shadow Maps
Subtractive
• Render the fully-lit scene
• Create the shadow map: render depth from the light’s view
• For each pixel in the final image:
  – Project the point at each pixel from eye screen space into light screen space (keep the eye-point depth De)
  – Look up the light depth value Dl
  – Compare depth values: if Dl < De, the eye point is in shadow
  – Modulate, if the point is in shadow
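The per-pixel comparison can be sketched over whole buffers at once (illustrative Python with NumPy; the array layouts, the `bias` term, and the function name are assumptions, not from the slides):

```python
import numpy as np

def shadow_mask(points_light_space, shadow_map, bias=1e-3):
    """For each eye-view pixel, compare its depth from the light (De)
    against the stored shadow-map depth (Dl).

    points_light_space: (H, W, 3) array -- each pixel's point projected
    into light screen space, with x, y in [0, 1] and z = De.
    shadow_map: (Hs, Ws) depth buffer rendered from the light's view.
    Returns a boolean (H, W) mask, True where the pixel is in shadow."""
    hs, ws = shadow_map.shape
    # Nearest-texel lookup of Dl (no filtering -- see the sampling slides)
    x = np.clip((points_light_space[..., 0] * ws).astype(int), 0, ws - 1)
    y = np.clip((points_light_space[..., 1] * hs).astype(int), 0, hs - 1)
    dl = shadow_map[y, x]
    de = points_light_space[..., 2]
    return dl + bias < de   # Dl < De: something nearer blocks the light
```

The `bias` term is the depth bias discussed below; without it, a surface tends to shadow itself through round-off in the two depth values.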
Shadow Maps
Subtractive: advantages
• Constant-time shadow computation!
  Just like full-scene image warping: eye-view pixels are warped to the light view and then a depth comparison is performed
• Only a 2-pass algorithm:
  1 eye pass, 1 light pass (and 1 constant-time image-warping pass)
• Deferred shading (for shadow computation)

Zhang98 presents a similar approach using a forward mapping (from light to eye, reversing this whole process)
Shadow Maps
Subtractive: disadvantages
• Not as accurate as additive (same reasons)
  – Specular and diffuse components remain in shadow
  – Modulates the ambient term
• Has the standard shadow-map problems:
  – Self-shadowing: depth bias needed
  – Depth sampling error: how do we accurately reconstruct depth values from a point sampling?
Shadow Maps
Additive
• Create the shadow map: render depth from the light’s view
• Use the shadow map as a projective texture!
• While scan-converting triangles:
  – apply the shadow-map projective texture
  – instead of modulating with the looked-up depth value Dl, compare the value against the r value (De) of the transformed point on the triangle
  – Compare De to Dl: if Dl < De, the eye point is in shadow

Basically, scan-convert the triangle in both eye and light spaces simultaneously and perform a depth comparison in light space against the previously stored depth values
Shadow Maps
Additive: advantages
• Easily implemented in hardware
  Only a slight change to the standard perspectively-correct texture-mapping hardware: add an r-component compare op
• Fastest, most general implementation to date!
  As fast as projective textures, but general!
Shadow Maps
Additive: disadvantages
• Computes shadows on a per-primitive basis
  All pixels covered by all primitives must go through the shadowing and lighting operation whether visible or not (no deferred shading)
• Still has the standard shadow-mapping problems
  – Self-shadowing
  – Depth sampling error
Shadow Maps
Solving the main problems: self-shadowing
Use a depth bias during the transformation into light space:
– Add a z translation towards the light source after the transformation from eye to light space, OR
– Add a z translation towards the eye before transforming into light space, OR
– Translate the eye-space point along the surface normal before transforming into light space
Shadow Maps
Solving the main problems: depth sampling
Could just use the nearest sample, but how would you antialias depth?
Shadow Maps
Depth sampling: normal filtering
• Averaging depth doesn’t really make sense (unrelated to the surface, especially at shadow boundaries!)
• Still a binary result (no antialiased, softer shadows)
Shadow Maps
Depth sampling: percentage-closer filtering (Reeves87)
• Average the binary results of all depth-map pixels covered
• Soft, antialiased shadows
• Very similar to point-sampling across an area light source in raytraced shadow computation
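A sketch of percentage-closer filtering (illustrative Python with NumPy; the kernel `radius`, the `bias` term, and the function name are assumptions). The key point is that the binary comparison results are averaged, never the depths themselves:

```python
import numpy as np

def pcf_shadow(shadow_map, x, y, de, radius=1, bias=1e-3):
    """Percentage-closer filtering around shadow-map texel (x, y).

    Average the BINARY lit/shadowed results of the depth comparisons in
    a (2*radius+1)^2 neighborhood against the receiver depth de.
    Returns the lit fraction in [0, 1]: 0 fully shadowed, 1 fully lit."""
    h, w = shadow_map.shape
    results = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            tx = min(max(x + dx, 0), w - 1)   # clamp to the map edges
            ty = min(max(y + dy, 0), h - 1)
            lit = 0.0 if shadow_map[ty, tx] + bias < de else 1.0
            results.append(lit)
    return sum(results) / len(results)
```

A point straddling a shadow boundary gets a fractional value, which is what produces the soft, antialiased edge.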
Shadow Maps
How do you choose the samples?
A quadrilateral represents the area covered by a pixel’s projection onto a polygon after being projected into the shadow map
Scanline Algorithms
Classic, by Bouknight and Kelly
• Project the edges of shadow-casting triangles onto receivers
• Use a shadow-volume-like parity test during scanline rasterization
Area-Subdivision Algorithms
Based on Atherton-Weiler clipping
• Find the actual visible polygon fragments (geometrically) through a generalized clipping algorithm
• Create a model composed of shadowed and lit polygons
• Render as surface-detail polygons
Multiple Light Sources
For any single-light algorithm
• Accumulate all fully-lit single-light images into a single image through a summing blend op (standard accumulation buffer or blending operations)
• The global ambient-lit scene should be added in separately
• Very easy to implement
• Could be inefficient for some algorithms
• Use the higher accuracy of the accumulation buffer (usually 12 bits per color component)
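The accumulation itself is just a clamped sum, per the superposition property (illustrative Python with NumPy; `combine_lights` is an invented name):

```python
import numpy as np

def combine_lights(global_ambient, single_light_images):
    """Superposition: the final image is the global-ambient pass plus
    the sum of the fully-lit single-light images (each rendered with
    its own shadows and no global ambient). Accumulate at higher
    precision (float here, a 12-bit accumulation buffer in hardware)
    and clamp to the displayable range only once, at the end."""
    total = np.asarray(global_ambient, dtype=np.float64).copy()
    for image in single_light_images:
        total = total + np.asarray(image, dtype=np.float64)
    return np.clip(total, 0.0, 1.0)

# One ambient pass at 0.1 plus two lights at 0.3 and 0.5
out = combine_lights([[0.1]], [[[0.3]], [[0.5]]])
```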
Area Light Sources
For any point-light algorithm
• Soft or “fuzzy” shadows (penumbra)
• Some algorithms have some “natural” support for these
• For restricted algorithms, we can always sample the area light source with many point light sources: jitter and accumulate
• Very expensive: many “high quality” passes to obtain something fuzzy
• Not really feasible in most interactive applications
• Convolution and image-based methods are usually more efficient here
Backwards Raytracing
• Big topic: sorry, no time
Radiosity
• Big topic: sorry, no time
References
Appel, A. “Some Techniques for Shading Machine Renderings of Solids,” Proc. AFIPS SJCC, Vol. 32, 1968, pgs 37-45.
Arvo, J. “Backward Ray Tracing,” in A.H. Barr, ed., Developments in Ray Tracing, Course Notes 12 for SIGGRAPH 86, Dallas, TX, August 18-22, 1986.
Atherton, P.R., Weiler, K., and Greenberg, D. “Polygon Shadow Generation,” SIGGRAPH 78, pgs 275-281.
Bergeron, P. “A General Version of Crow’s Shadow Volumes,” IEEE CG&A, 6(9), September 1986, pgs 17-28.
Blinn, Jim. “Jim Blinn’s Corner: Me and My (Fake) Shadow,” IEEE CG&A, vol 8, no 1, Jan 1988, pgs 82-86.
Bouknight, W.J. “A Procedure for Generation of Three-Dimensional Half-Toned Computer Graphics Presentations,” CACM, 13(9), September 1970, pgs 527-536. Also in FREE80, pgs 292-301.
Bouknight, W.J. and Kelly, K.C. “An Algorithm for Producing Half-Tone Computer Graphics Presentations with Shadows and Movable Light Sources,” SJCC, AFIPS Press, Montvale, NJ, 1970, pgs 1-10.
Chin, N., and Feiner, S. “Near Real-Time Shadow Generation Using BSP Trees,” SIGGRAPH 89, pgs 99-106.
References
Cohen, M.F., and Greenberg, D.P. “The Hemi-Cube: A Radiosity Solution for Complex Environments,” SIGGRAPH 85, pgs 31-40.
Cook, R.L. “Shade Trees,” SIGGRAPH 84, pgs 223-231.
Cook, R.L., Porter, T., and Carpenter, L. “Distributed Ray Tracing,” SIGGRAPH 84, pgs 127-145.
Crow, Frank. “Shadow Algorithms for Computer Graphics,” SIGGRAPH 77.
Goldstein, R.A. and Nagel, R. “3-D Visual Simulation,” Simulation, 16(1), January 1971, pgs 25-31.
Goral, C.M., Torrance, K.E., Greenberg, D.P., and Battaile, B. “Modeling the Interaction of Light Between Diffuse Surfaces,” SIGGRAPH 84, pgs 213-222.
Gouraud, H. “Continuous Shading of Curved Surfaces,” IEEE Trans. on Computers, C-20(6), June 1971, pgs 623-629. Also in FREE80, pgs 302-308.
Hourcade, J.C. and Nicolas, A. “Algorithms for Antialiased Cast Shadows,” Computers & Graphics, 9(3), 1985, pgs 259-265.
Nishita, T. and Nakamae, E. “An Algorithm for Half-Tone Representation of Three-Dimensional Objects,” Information Processing in Japan, Vol. 14, 1974, pgs 93-99.
Nishita, T., and Nakamae, E. “Continuous Tone Representation of Three-Dimensional Objects Taking Account of Shadows and Interreflection,” SIGGRAPH 85, pgs 23-30.
References
Reeves, W.T., Salesin, D.H., and Cook, R.L. “Rendering Antialiased Shadows with Depth Maps,” SIGGRAPH 87, pgs 283-291.
Segal, M., Korobkin, C., van Widenfelt, R., Foran, J., and Haeberli, P. “Fast Shadows and Lighting Effects Using Texture Mapping,” Computer Graphics, 26(2), July 1992, pgs 249-252.
Warnock, J. “A Hidden-Surface Algorithm for Computer Generated Half-Tone Pictures,” Technical Report TR 4-15, NTIS AD-753 671, Computer Science Department, University of Utah, Salt Lake City, UT, June 1969.
Whitted, T. “An Improved Illumination Model for Shaded Display,” CACM, 23(6), June 1980, pgs 343-349.
Williams, L. “Casting Curved Shadows on Curved Surfaces,” SIGGRAPH 78, pgs 270-274.
Woo, Andrew, Pierre Poulin, and Alain Fournier. “A Survey of Shadow Algorithms,” IEEE CG&A, Nov 1990, pgs 13-32.
Zhang, H. “Forward Shadow Mapping,” Rendering Techniques ’98, Proceedings of the 9th Eurographics Rendering Workshop.
Acknowledgements
Mark Kilgard (nVidia): for various pictures from presentation slides (www.opengl.org)
Advanced OpenGL Rendering course notes (www.opengl.org)