Wilf LaLonde ©2012, Comp 4501
95.4501
Advanced Detailing
References
• Rendering Surface Details in Games with Relief Mapping Using a Minimally Invasive Approach, Policarpo and Oliveira, pp 109-119, ShaderX4, 2006.
• Practical Parallax Occlusion Mapping with Approximate Soft Shadows for Detailed Surface Rendering, Tatarchuk, pp 75-105, ShaderX5, 2007.
• Relief Mapping of Non-Height-Field Surface Details, Policarpo and Oliveira, Proceedings of the 2006 Symposium on Interactive 3D Graphics and Games, pp. 55–62.
• Relaxed Cone Stepping for Relief Mapping, Policarpo and Oliveira, GPU Gems 3, pp. 409–428.
Introduction
Advanced Detailing?
a flat surface
making it “bumpy”
Advanced Detailing?
Advanced Detailing: Using shaders and special textures to make flat surfaces appear 3-dimensional…
Two techniques:
• Bump mapping: a pure side effect of lighting. Problem: silhouettes (grazing views) are flat.
• Displacement mapping: a 3D effect independent of lighting; originally done by materializing actual geometry but more recently done via geometry shaders.
Policarpo Bumpmapping Demo
Not yet a topic in the course
Techniques
Simple: Procedurally perturb normals via noise or via modifications to the color texture itself.
Normal textures: Provide entire normals via a separate texture.
Normal + height textures: Provide both normals and heights via one or more separate textures.
With geometry shaders, can build geometry from normal + height textures:
Fast enough for demos but perhaps not fast enough for games (certainly does not work on iPhone, Xbox, or PlayStation).
Parallax
Parallax: the difference in apparent position as you move from side to side. What some people call “the 3D effect”; namely, that near objects MOVE MORE than far objects.
From Wikipedia
Artistic Issues
Is the effect barely noticeable? Is it just a perturbation or a more sophisticated ray-traced pullout effect?
Does it self-occlude? Is a bump occluded by another in front of it?
Does it self-shadow? Is a bump valley dark when the light is occluded by a bump mountain?
Does it have a flat silhouette? Are grazing angles flat?
Is it expensive? Does it make the game crawl?
Increasingly Realistic Bumpmapping Techniques
Simple bump mapping (Blinn 78): Using generated normals OR normals computed from the original color texture.
Normal mapping (Cohen 98, Cignoni 98): Using normals stored in a texture. Can’t self-shadow.
Parallax mapping (Kaneko 01): Using normals + depth stored in a texture to simulate parallax in one pass.
Increasingly Realistic Bumpmapping Techniques
Relief mapping (Policarpo 05): Using textures with normals + depth + iterative ray tracing techniques to allow for better occlusion and more depth.
Cone mapping (Dummer 06): Using cones to indicate how far you can step before you need to check for a possible intersection.
Implementation Issues
Texture encodings? How do we store and access the normal and optional height information in a texture?
Texture coordinate space? How many spaces do shaders have to deal with? Texture coordinate space, local space (for a model), world space, camera space, perspective space…
Ray tracing? Do shaders loop, and if so, do they need to perform linear and binary searches?
Texture Encodings
Encoding Normal [nx, ny, nz] in 8-bit RGB Textures
What is often done: Quantize each float value in range [-1.0, +1.0] by mapping to range [0.0, 255.0] and flooring to convert to an integer.
[Number line: floor maps 0.0…1.0 -> 0, 1.0…2.0 -> 1, …, 254.0…255.0 -> 254; only exactly 255.0 -> 255. Very few values map to 255.]
Encoding Normals in Textures
Fairer technique: Quantize each float value in range [-1.0, +1.0] to range [0.0, 256.0], floor to convert to an integer, then map 256.0 to 255.
[Number line: floor maps 0.0…1.0 -> 0, 1.0…2.0 -> 1, …, 255.0…256.0 -> 255. As many float values from 255.0 to 256.0 map to 255 as from 0.0 to 1.0 map to 0.]
Encoding Normals in Textures
• Can encode nx, ny, nz in an 8-bit RGB texture.
Building (-1 to 1 -> 0 to 255):
  x in range -1 to 1
  x+1 in range 0 to 2
  (x+1)*0.5 in range 0 to 1
  (x+1)*0.5*256 in range 0 to 256
  min (255, floor ((x+1)*128)) in range 0 to 255
All numbers from 0.0 to 1.0 exclusive map to 0; we also want all numbers from 255.0 to 256.0 inclusive to map to 255.
Accessing (via a normalizing texture): x in range 0 to 255 is provided in range 0 to 1
  x-0.5 in range -0.5 to +0.5
  (x-0.5)*2 in range -1 to 1
• Or store directly in non-normalizing float RGB textures.
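The build/access arithmetic above fits in a few lines. A minimal Python model of the slide's scheme (illustrative only; the function names are mine, not shader code):

```python
def encode_component(x):
    # "Fairer" quantization from the slides: map [-1, 1] to [0, 256],
    # floor, then clamp 256 down to 255.  (x+1)*0.5*256 == (x+1)*128.
    return min(255, int((x + 1.0) * 128.0))

def decode_component(byte):
    # A normalizing sampler provides byte/255 in [0, 1]; map back to [-1, 1].
    return (byte / 255.0 - 0.5) * 2.0

def encode_normal(nx, ny, nz):
    return tuple(encode_component(c) for c in (nx, ny, nz))

# The flat normal [0, 0, 1] encodes to the familiar light-blue texel.
flat = encode_normal(0.0, 0.0, 1.0)   # -> (128, 128, 255)
```

Note that decoding 128 gives a small nonzero value (about 0.004), which is why sampled normals are renormalized in the shader.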
Relief Texture = Normal + Height
Use an 8-bit RGBA texture to encode nx, ny, nz, h.
Note: Some authors recommend that h represent a normalized value in range 0 to 1 and that a scale factor be encoded in vertices… to get small versus large bumps.
relief texture
Relief Texture = Normal + Height OR Normal + Depth
RGB = normal, A = depth (brighter = deeper)
Do we use a depth map or a height map?
(Black is 0, white is 1.)
• depth map (since top is 0): bottom LOW, top HIGH
• height map (since top is 1): bottom HIGH, top LOW
Tangent Space
Texture Coordinate Space (Or Tangent Space)
In a texture, x goes right, y goes up, z goes out (at least in a right-handed system).
[Diagram, artist terminology: the u-axis is the tangent (x), the v-axis is the bitangent (y), and the w-axis is the normal (z).]
Transforming TO/FROM Tangent Space
• Use the term vertex space to mean the space of the vertices; i.e., either
  Model space for geometry that can move.
  World space for static geometry.
• The conventional notation/terminology (all in vertex space):
  T = [Tx, Ty, Tz] for the u-axis (tangent vector)
  B = [Bx, By, Bz] for the v-axis (bitangent vector)
  N = [Nx, Ny, Nz] for the w-axis (normal)
Transforming TO/FROM Tangent Space
• Shaders need access to the average tangent, bitangent, and normal at all vertices (pixels).
• Need access to the triangles and texture coordinates to compute this.
• In the discussion to follow, we’ll show how to compute this from one triangle… An average vector is computed by
• Averaging the values of neighboring triangles, and
• Normalizing the result (average changes the length)
Transforming TO/FROM Tangent Space
• First, we compute T, B, N from an existing triangle. Then we build the following 3x3 transformations (right-handed system):

fromTangent (to vertex space) =
  | Tx Ty Tz |
  | Bx By Bz |
  | Nx Ny Nz |

toTangent (from vertex space) =
  | T'x B'x Nx |
  | T'y B'y Ny |
  | T'z B'z Nz |

(almost a transpose; many use just the transpose)

continued
Mapping To/From Tangent Space
toTangent mapping:

  [tx,ty,tz] = [x,y,z] * | T'x B'x Nx |
                         | T'y B'y Ny |
                         | T'z B'z Nz |

fromTangent mapping (T is the u-axis, B the v-axis):

  [x,y,z] = [tx,ty,tz] * | Tx Ty Tz |
                         | Bx By Bz |
                         | Nx Ny Nz |
Computing T, B, and N
• We compute N from the triangle vertices using a cross product which is normalized; i.e., normalize (A x B) for edge vectors A and B in a right-handed system.
• T and B are more complex. We start off giving you the answer... Then we provide the derivation, which comes from the following:
Mathematics for 3D Game Programming and Computer Graphics, Third Edition, Eric Lengyel, 2012, pp. 180-185.
Transforming TO/FROM Tangent Space
• Given vertices P0, P1, P2 with texture coordinates [u0,v0], [u1,v1], [u2,v2], solve for T, B as follows (derivation coming up):

Compute:
  Q1 = P1-P0, Q2 = P2-P0
  [s1,t1] = [u1-u0,v1-v0], [s2,t2] = [u2-u0,v2-v0]

fromTangentSpace:
  T = normalize ( t2 Q1 - t1 Q2)
  B = normalize (-s2 Q1 + s1 Q2)

toTangentSpace (and normalize):
  T' = T - (N.T)N
  B' = B - (N.B)N - (T'.B)T'
Before we Derive The Result
• Need 2 capabilities:
  • The inverse of a 2x2 transformation.
  • The 3x3 rotation matrix that will map the old axes [1,0,0], [0,1,0], [0,0,1] to new axes X-Axis, Y-Axis, Z-Axis (which for us will be T, B, N).
Inverse of a 2x2 Transformation
| s1 t1 |^-1        1        |  t2 -t1 |
| s2 t2 |     = ----------- * | -s2  s1 |
                s1t2 - s2t1

Proof:

     1        |  t2 -t1 |   | s1 t1 |        1        | s1t2-s2t1     0     |   | 1 0 |
----------- * | -s2  s1 | * | s2 t2 | = ----------- * |     0     s1t2-s2t1 | = | 0 1 |
s1t2 - s2t1                             s1t2 - s2t1
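The closed-form inverse is easy to sanity-check numerically. A small illustrative Python check (the matrix values are arbitrary, chosen just for the test):

```python
def inverse2x2(s1, t1, s2, t2):
    # Closed-form inverse of [[s1, t1], [s2, t2]] derived above.
    det = s1 * t2 - s2 * t1
    return [[ t2 / det, -t1 / det],
            [-s2 / det,  s1 / det]]

def multiply2x2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

m = [[3.0, 1.0], [2.0, 4.0]]   # arbitrary invertible matrix
product = multiply2x2(m, inverse2x2(3.0, 1.0, 2.0, 4.0))
# product is the 2x2 identity (up to floating point error)
```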
What transformation will map standard axes to new Axes?
• My current axes are [1,0,0], [0,1,0], [0,0,1].
• I want my new axes to be xAxis, yAxis, zAxis.
• The transformation R that will do it is

R = | xAxisx xAxisy xAxisz |
    | yAxisx yAxisy yAxisz |
    | zAxisx zAxisy zAxisz |

Proof on next page.
Proof: Try transforming [1,0,0]!

[1,0,0] * | xAxisx xAxisy xAxisz |
          | yAxisx yAxisy yAxisz | = [xAxisx, xAxisy, xAxisz]   (original x-axis -> transformed x-axis)
          | zAxisx zAxisy zAxisz |

Similarly:
[0,1,0] * R = [yAxisx, yAxisy, yAxisz]
[0,0,1] * R = [zAxisx, zAxisy, zAxisz]
Starting in Texture Space
• In texture space, the u-, v-, w-axes are [1,0,0], [0,1,0], [0,0,1]. We are defining T, B, N to be the corresponding vertex space vectors; i.e., with the fromTangent mapping

M = | Tx Ty Tz |
    | Bx By Bz |
    | Nx Ny Nz |

we have

T = [1,0,0] * M
B = [0,1,0] * M
N = [0,0,1] * M

This proves this matrix maps from texture space to vertex space. But we don’t YET know their actual numerical values.
Deriving T and B From Vertex + Text. Coord.
• In texture space, the u-, v-, w-axes are [1,0,0], [0,1,0], [0,0,1]. In vertex space, they are T, B, N. How do we compute T, B, N?

[Diagram: a triangle with vertices P0 [u0,v0], P1 [u1,v1], P2 [u2,v2] (vertex coordinate, then texture coordinate) and an interior point Q [u,v]; T, B, N are the tangent, bitangent and normal.]

Q can be obtained from P0 by moving a little in the T direction and a little in the B direction.
Derivation From Texture Coordinates [u,v]
• Let Q be ANY point inside the triangle. Then (directions only)

Q-P0 = (u-u0)T + (v-v0)B    (go to the right along T, go up along B)

• To solve for T and B (6 unknowns), plug in known values for Q [u,v]; i.e., P1 [u1,v1] and P2 [u2,v2].
Derivation
Q-P0 = (u-u0)T + (v-v0)B

Plug in P1:  P1-P0 = (u1-u0)T + (v1-v0)B,  i.e.,  Q1 = s1T + t1B
Plug in P2:  P2-P0 = (u2-u0)T + (v2-v0)B,  i.e.,  Q2 = s2T + t2B
Derivation
Q1 = s1T + t1B
Q2 = s2T + t2B

• Solve for T, B. How hard can it be?
• In matrix form,

| Q1x Q1y Q1z |   | s1 t1 |   | Tx Ty Tz |   | s1Tx+t1Bx  s1Ty+t1By  s1Tz+t1Bz |
| Q2x Q2y Q2z | = | s2 t2 | * | Bx By Bz | = | s2Tx+t2Bx  s2Ty+t2By  s2Tz+t2Bz |

• Flipped:

| s1 t1 |   | Tx Ty Tz |   | Q1x Q1y Q1z |
| s2 t2 | * | Bx By Bz | = | Q2x Q2y Q2z |
Derivation

• So far:

| s1 t1 |   | Tx Ty Tz |   | Q1x Q1y Q1z |
| s2 t2 | * | Bx By Bz | = | Q2x Q2y Q2z |

• Premultiply both sides by

| s1 t1 |^-1
| s2 t2 |

• Result:

| Tx Ty Tz |   | s1 t1 |^-1   | Q1x Q1y Q1z |
| Bx By Bz | = | s2 t2 |    * | Q2x Q2y Q2z |

• More compactly:

| T |   | s1 t1 |^-1   | Q1 |
| B | = | s2 t2 |    * | Q2 |
Derivation
• From

| T |   | s1 t1 |^-1   | Q1 |
| B | = | s2 t2 |    * | Q2 |

and

| s1 t1 |^-1        1        |  t2 -t1 |
| s2 t2 |     = ----------- * | -s2  s1 |
                s1t2 - s2t1

• Get

| T |        1        |  t2 Q1 - t1 Q2 |
| B | = ----------- * | -s2 Q1 + s1 Q2 |
        s1t2 - s2t1
Notation
• The short form

| T |        1        |  t2 Q1 - t1 Q2 |
| B | = ----------- * | -s2 Q1 + s1 Q2 |
        s1t2 - s2t1

is equivalent to

T = (1 / (s1t2 - s2t1)) * ( t2 Q1 - t1 Q2)
B = (1 / (s1t2 - s2t1)) * (-s2 Q1 + s1 Q2)
Summarizing
Compute:
  Q1 = P1-P0, Q2 = P2-P0
  [s1,t1] = [u1-u0,v1-v0], [s2,t2] = [u2-u0,v2-v0]

| T |        1        |  t2 Q1 - t1 Q2 |
| B | = ----------- * | -s2 Q1 + s1 Q2 |   …and normalize
        s1t2 - s2t1

where T = [Tx, Ty, Tz], B = [Bx, By, Bz] and

fromTangent (to vertex space) =
  | Tx Ty Tz |
  | Bx By Bz |
  | Nx Ny Nz |

WAIT: This is not as simple as it gets...
Summarizing
Compute:
  Q1 = P1-P0, Q2 = P2-P0
  [s1,t1] = [u1-u0,v1-v0], [s2,t2] = [u2-u0,v2-v0]

T = normalize ( t2 Q1 - t1 Q2)
B = normalize (-s2 Q1 + s1 Q2)

(The 1/(s1t2 - s2t1) factor only changes the length, so it is absorbed by the normalize.)

where

fromTangent (to vertex space) =
  | Tx Ty Tz |
  | Bx By Bz |
  | Nx Ny Nz |
Mapping from Vertex to Tangent Space
fromTangent (to vertex space) =
  | Tx Ty Tz |
  | Bx By Bz |
  | Nx Ny Nz |

• Mapping the other way is just the inverse; the inverse of a rotation is just the transpose…
• That does not quite work for normals and tangents… Why not?
• The transpose is not the inverse if the axes (u-axis, v-axis, w-axis) are skewed (not perpendicular).

toTangent (from vertex space) = inverse of the above =
  | T'x B'x Nx |
  | T'y B'y Ny |
  | T'z B'z Nz |
So We Fix It (ASSUMES a normalized T and B)
• Fix T by making it perpendicular to N:
  T' = T - (N.T)N
• Fix B by making it perpendicular to N and T' (the fixed T):
  B' = B - (N.B)N - (T'.B)T'
• Also normalize T' and B'.

Recall projections: the shadow of A on B = |A| cos θ (B/|B|) = |A||B| cos θ (B/|B|²) = (A.B)(B/|B|²) = (A.B)B if |B| = 1.
Conclusion
• Given vertices P0, P1, P2 with texture coordinates [u0,v0], [u1,v1], [u2,v2], solve for T, B as follows:

Compute:
  Q1 = P1-P0, Q2 = P2-P0
  [s1,t1] = [u1-u0,v1-v0], [s2,t2] = [u2-u0,v2-v0]

fromTangentSpace:
  T = normalize ( t2 Q1 - t1 Q2)
  B = normalize (-s2 Q1 + s1 Q2)

toTangentSpace:
  T' = T - (N.T)N
  B' = B - (N.B)N - (T'.B)T'
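The whole recipe fits in one short function. A sketch in Python (illustrative; names are mine, and it handles a single triangle, where a real mesh tool would average over neighboring triangles as described earlier):

```python
import math

def sub(a, b):   return [a[i] - b[i] for i in range(3)]
def dot(a, b):   return sum(a[i] * b[i] for i in range(3))
def scale(v, k): return [c * k for c in v]
def normalize(v):
    length = math.sqrt(dot(v, v))
    return [c / length for c in v]
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def tangent_basis(p0, p1, p2, uv0, uv1, uv2):
    """T, B, N for one triangle: T = normalize(t2 Q1 - t1 Q2),
    B = normalize(-s2 Q1 + s1 Q2), then orthogonalize against N."""
    q1, q2 = sub(p1, p0), sub(p2, p0)
    s1, t1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
    s2, t2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    n = normalize(cross(q1, q2))
    t = normalize(sub(scale(q1, t2), scale(q2, t1)))           # t2*Q1 - t1*Q2
    b = normalize(sub(scale(q2, s1), scale(q1, s2)))           # -s2*Q1 + s1*Q2
    t = normalize(sub(t, scale(n, dot(n, t))))                 # T' = T - (N.T)N
    b = sub(sub(b, scale(n, dot(n, b))), scale(t, dot(t, b)))  # B' = B - (N.B)N - (T'.B)T'
    return t, normalize(b), n

# Hypothetical axis-aligned triangle with standard UVs: T, B, N should come
# out as the x-, y- and z-axes.
t, b, n = tangent_basis([0,0,0], [1,0,0], [0,1,0], (0,0), (1,0), (0,1))
```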
Beginning Our
Investigations
Background
• Based on “ParallaxOcclusionMapping” demo by Tatarchuk from the June 2010 DirectX SDK.
• Demo uses a 2-sided disc and builds tangent vectors using built-in DirectX routines.
Demo “looks” good
Look at Demo
Publication
• Dynamic Parallax Occlusion Mapping with Approximate Soft Shadows, Natalya Tatarchuk, ATI Research, Inc, Siggraph 2005.
[Images: with POM versus with normal mapping.] Other authors call this relief mapping.
Desired Modifications
• We wanted to be able to use a cube shape and move the demo to a local directory…
• We created a texturedCube.x file but problems resulted.
Problems Encountered
• File lookup does NOT use LOCAL MEDIA first (so we hard-wired the file path).
• Parallax occlusion only worked on 2 of the 6 faces… Suspected the DirectX routines for computing tangent/bitangent vectors. Could not get it to work correctly for all faces.
• Solution: We hand built the DirectX mesh using vertex and index buffers and wrote our own routines.
Real game engine building tools probably want their own routines anyway.
Demo (drawWilfCube: false for “texturedCube.x” and D3DXComputeTangentFrameEx; true for the new version) + CubeForParallaxOcclusionMappingDemo.cpp
Further Changes
• Wanted to do everything in a right handed system.
D3DXMatrixLookAtRH, D3DXMatrixPerspectiveFovRH
• This screwed up the camera movement directions and the trackball for looking around… A host of low-level changes were needed to fix it…
We kept it that way but in retrospect, it might have been a mistake…
Changing the
Original Shader
Effect File
Modified ParallaxOcclusionMapping.fx as Follows
//Comments at the top of "ParallaxOcclusionMapping.fx"
#include "WilfBumpMappingLibrary.choice"

//Global variables…
//End of global variables

#include "WilfBumpMappingLibrary.all"
#include "WilfBumpMappingLibrary.shader"

//The existing code that was there as part of the demo… all

technique RenderSceneWithWilfExperiments {
  pass P0 {
    VertexShader = compile vs_3_0 RenderWilfsVS ();
    PixelShader = compile ps_3_0 RenderWilfPS ();
  }
}
Look at Demo (Shader Variables)
Comparing Input Structures (Vertex Shader)
Original Version (structure is implicit):

VS_OUTPUT RenderSceneVS (
  float4 inPositionOS : POSITION,
  float2 inTexCoord : TEXCOORD0,
  float3 vInNormalOS : NORMAL,
  float3 vInBinormalOS : BINORMAL,
  float3 vInTangentOS : TANGENT )
{ … }

New Version (basically unchanged; VS = Vertex Space):

struct WILF_VS_INPUT {
  float4 position : POSITION;
  float2 textureCoordinate : TEXCOORD0;
  float3 normalVS : NORMAL;
  float3 bitangentVS : BINORMAL;
  float3 tangentVS : TANGENT;
};
Comparing Input Structures (Pixel Shader)
Original Version (wow: ignore the extra fields since we are not using them):

struct VS_OUTPUT {
  float4 position : POSITION;
  float2 texCoord : TEXCOORD0;
  float3 vLightTS : TEXCOORD1;          // light vector in tangent space, denormalized
  float3 vViewTS : TEXCOORD2;           // view vector in tangent space, denormalized
  float2 vParallaxOffsetTS : TEXCOORD3; // parallax offset vector in tangent space
  float3 vNormalWS : TEXCOORD4;         // normal vector in world space
  float3 vViewWS : TEXCOORD5;           // view vector in world space
};

New Version (basically unchanged, for now):

struct WILF_COMMON { //Both a vertex shader output and a pixel shader input…
  float4 position : POSITION;
  float2 textureCoordinate : TEXCOORD0;
};
Experiment 0:
Drawing Red
WilfBumpMappingLibrary.choice
//--------------------------------------------------------------------------------------
// Wilf's Bumpmapping library experiment selection facility
//--------------------------------------------------------------------------------------

//Note that shader compilers are so efficient that unused code is completely
//discarded and functions are completely inline expanded. Thus there is no
//overhead in, for example, passing parameters that are unused or for that matter
//providing a small list of utilities that are selectively used depending on the
//current state of the implementation...

//--------------------------------------------------------------------------------------
// Wilf's Experiments: Controlled by macro "experiment" below....
//--------------------------------------------------------------------------------------

#define drawRed 0
#define experiment drawRed
WilfBumpMappingLibrary.all
//--------------------------------------------------------------------------------------
// Wilf's Bumpmapping library functions...
//--------------------------------------------------------------------------------------

#define clamp01 saturate

//Define the ambient/diffuse/specular specifics for this application....
float4 Ka () {return g_materialAmbientColor;}
float4 Kd () {return g_materialDiffuseColor;}
float4 Ks () {return g_materialSpecularColor * g_bAddSpecular;} //* 1 if true; * 0 if false...
float g_SpecularExponent = g_fSpecularExponent;

//Then the simple lighting routines discussed earlier…
WilfBumpMappingLibrary.shader
struct WILF_VS_INPUT {…};
struct WILF_COMMON {…};

WILF_COMMON RenderWilfsVS (WILF_VS_INPUT In) {
  WILF_COMMON Out = (WILF_COMMON) 0;
  //Transform input position to clip (projection) space and output...
  Out.position = mul (In.position, g_mWorldViewProjection);
  #if (experiment == drawRed)
    return Out;
  #endif
}

float4 RenderWilfPS (WILF_COMMON In) : COLOR0 {
  #if (experiment == drawRed)
    return float4 (1, 0, 0, 1);
  #endif
}

Look at Demo
Result
Experiment 1:
Drawing With The
Texture
The Texture Being Used
• Texture “wood.jpg” in shader variable “g_baseTexture” accessed via sampler “tBase”.
Texture Accessing
• When accessing textures, we sometimes need to scale the result in some way.
• A convenient way of doing this consists of adding an accessing macro that does the dirty work; e.g., sampleColor works differently than sampleNormal.
Addition to WilfBumpMappingLibrary.all
//--------------------------------------------------------------------------------------
// Texture accessing Functions...
//--------------------------------------------------------------------------------------

//Note that texture accesses in variable length loops need to use either
//tex2DGrad or tex2Dlod instead of tex2D... These macros were added so we can
//change our minds without needing to change the routines that use them...
//They also provide simple capabilities such as additionally scaling the result of a
//texture probe, switching height data into depth data or converting "right handed
//data" into "left handed data"...

#define sampleColor(sampler, textureCoordinate) tex2D (sampler, textureCoordinate)

Look at Demo (to find out what the color sample variable is called)
Renaming Variables
• It would also be nice if we could rename some of the more obscure variables or the variables that are NOT precise enough without actually changing the working code…
• Add the following to the “.shader” file.
//--------------------------------------------------------------------------------------// Wilf's shader variable renaming section... //--------------------------------------------------------------------------------------
#define colorSampler tBase
Additions to WilfBumpMappingLibrary.choice
#define drawRed 0
#define drawTexture 1

#define experiment drawTexture
Additions To WilfBumpMappingLibrary.shader
WILF_COMMON RenderWilfsVS (WILF_VS_INPUT In) {
  WILF_COMMON Out = (WILF_COMMON) 0;
  //Transform input position to clip (projection) space and output...
  Out.position = mul (In.position, g_mWorldViewProjection);
  #if (experiment == drawRed)
    return Out;
  #elif (experiment == drawTexture)
    Out.textureCoordinate = In.textureCoordinate;
    return Out;
  #endif
}

float4 RenderWilfPS (WILF_COMMON In) : COLOR0 {
  #if (experiment == drawRed)
    return float4 (1, 0, 0, 1);
  #elif (experiment == drawTexture)
    float2 uv = In.textureCoordinate;
    return sampleColor (colorSampler, uv.xy);
  #endif
}

Look at Demo

Recall:
struct WILF_VS_INPUT { float4 position : POSITION; float2 textureCoordinate : TEXCOORD0; };
struct WILF_COMMON { float4 position : POSITION; float2 textureCoordinate : TEXCOORD0; };
Experiment 2:
Lighting The Texture
What Space Should We Perform Lighting In?
• World space… Should work.
• Camera space (or view space)… Should work too.
• Tangent space… Should also work.

Tatarchuk suggests tangent space since other things need to be done in that space too…
Recall: From “Lighting for Dummies”
• There was no notion of which SPACE the lighting was done in…

float4 combinedColor (float4 textureColor, float3 normal, float3 toLight,
    float3 toCamera, float shadowBrightness) { //1 for bright, 0 for dark…
  //All vectors must be normalized…
  return textureColor * (ambientColor () + diffuseColor (normal, toLight, shadowBrightness))
    + specularColor (normal, toLight, toCamera);
}

float4 combinedColor (float4 textureColor, float3 normal, float3 toLight, float3 toCamera) {
  return combinedColor (textureColor, normal, toLight, toCamera, 1);
}

Need toLightTS and toCameraTS.
But
• But most information is given in global (world) space…
• Our tangent/bitangent/normals are in model (vertex) space.
• Note: For lighting, we will only need to transform vectors (directions), not positions (implying that if we build a transformation, we only need a 3x3 transformation).
Tangent space => Vertex space (via TBN) => World space (via g_mWorld)
Let’s have a look
• So for lighting we have

float3 g_LightDir; //Light's direction in world space
float4 g_vEye;     //Camera's location (wilf: likely world space)

• To start off, we need the camera in world space to match the light in world space...

float4 positionWS = mul (In.position, g_mWorld);
float3 toCameraWS = g_vEye - positionWS;

• To be more precise, we could add

//--------------------------------------------------------------------------------------
// Wilf's shader variable renaming section...
//--------------------------------------------------------------------------------------
#define colorSampler tBase
#define g_toLight g_LightDir

On to building a transformation (and inverse) mapping from tangent space to world space.
Tangent to World Space
float3x3 tangentToWorldSpace (float3 tangentVS, float3 bitangentVS, float3 normalVS) {
  //Transform the tangent, bitangent and normal vectors from vertex (model) space
  //to world space and normalize the results in case the world space
  //transformation has scale factors.
  float3 T = normalize (mul (tangentVS, (float3x3) g_mWorld));   //T in WS.
  float3 B = normalize (mul (bitangentVS, (float3x3) g_mWorld)); //B in WS.
  float3 N = normalize (mul (normalVS, (float3x3) g_mWorld));    //N in WS.
  //Build the transformation that maps tangent space axis vectors
  //[1,0,0], [0,1,0], [0,0,1] to world space axis vectors T, B, N respectively.
  return float3x3 (T, B, N); //It's easy to check that this performs the desired mapping...
}

What’s the easiest way of getting the inverse?
Consider the Following
• There is no HLSL matrix inverse function.
• Recall the FIXED UP TRANSPOSE from Lengyel’s “Mathematics for 3D Game Programming and Computer Graphics”.
• Let’s use the UNFIXED UP VERSION (since we are not using skewed texturing).
• Do we need to physically build the transpose?
Pre- Versus Post- Multiplication
• Consider pre-multiplication versus post-multiplication:

[1,0,0] * | Tx Ty Tz |                    [1,0,0] * | Tx Bx Nx |
          | Bx By Bz | = [Tx, Ty, Tz]               | Ty By Ny | = [Tx, Bx, Nx]
          | Nx Ny Nz |                              | Tz Bz Nz |

versus post-multiplying the original matrix by a column vector:

| Tx Ty Tz |   | 1 |   | Tx |
| Bx By Bz | * | 0 | = | Bx |
| Nx Ny Nz |   | 0 |   | Nx |

Post-multiplying by a matrix gives the same components as pre-multiplying by its transpose.
In Other Words
• If TtoW maps from tangent space to world space and transpose (TtoW) = inverse (TtoW) maps back, then for toLightTS and toLightWS:

toLightWS = mul (toLightTS, TtoW) //typical matrix multiply is pre-multiply
toLightTS = mul (TtoW, toLightWS) //but post-multiply gives you the inverse
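The pre- versus post-multiply trick is easy to check numerically. An illustrative Python sketch (the matrix is a hypothetical orthonormal TBN basis made up for the test, not taken from the demo):

```python
def matvec(m, v):
    # Post-multiply: matrix times column vector.
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def vecmat(v, m):
    # Pre-multiply: row vector times matrix.
    return [sum(v[i] * m[i][j] for i in range(3)) for j in range(3)]

third = 1.0 / 3.0
TtoW = [[ 1 * third, 2 * third,  2 * third],   # T (unit length)
        [ 2 * third, 1 * third, -2 * third],   # B (unit, perpendicular to T)
        [-2 * third, 2 * third, -1 * third]]   # N (unit, perpendicular to both)

v_tangent = [0.3, -0.5, 0.8]
v_world = vecmat(v_tangent, TtoW)   # tangent -> world (pre-multiply)
back = matvec(TtoW, v_world)        # world -> tangent (post-multiply)
# back recovers v_tangent: post-multiplying by an orthonormal matrix is
# pre-multiplying by its transpose, which is its inverse.
```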
Additions to WilfBumpMappingLibrary.choice
#define drawRed 0
#define drawTexture 1
#define drawLitTexture 2

#define experiment drawLitTexture
Additions To WilfBumpMappingLibrary.shader

WILF_COMMON RenderWilfsVS (WILF_VS_INPUT In) {
  WILF_COMMON Out = (WILF_COMMON) 0;
  //Transform input position to clip (projection) space and output...
  Out.position = mul (In.position, g_mWorldViewProjection);
  #if (experiment == drawRed)
    return Out;
  …
  #elif (experiment == drawLitTexture)
    Out.textureCoordinate = In.textureCoordinate;
    float3x3 tangentToWorldSpaceMap = tangentToWorldSpace (
      In.tangentVS, In.bitangentVS, In.normalVS); //Assuming no scaling factors...
    float3 toLightWS = g_toLight;
    Out.toLightTS = mul (tangentToWorldSpaceMap, toLightWS);
      //i.e., toLightWS * inverse (tangentToWorldSpaceMap)
    float4 positionWS = mul (In.position, g_mWorld);
    float3 toCameraWS = g_vEye - positionWS;
    Out.toCameraTS = mul (tangentToWorldSpaceMap, toCameraWS);
      //i.e., toCameraWS * inverse (tangentToWorldSpaceMap)
    return Out;
  #endif
}

struct WILF_VS_INPUT {
  float4 position : POSITION;
  float2 textureCoordinate : TEXCOORD0;
  float3 normalVS : NORMAL;
  float3 bitangentVS : BINORMAL;
  float3 tangentVS : TANGENT;
};

struct WILF_COMMON {
  float4 position : POSITION;
  float2 textureCoordinate : TEXCOORD0;
  float3 toLightTS : TEXCOORD1;
  float3 toCameraTS : TEXCOORD2;
};
Additions To WilfBumpMappingLibrary.shader
float4 RenderWilfPS (WILF_COMMON In) : COLOR0 {
  #if (experiment == drawRed)
    return float4 (1, 0, 0, 1);
  #elif (experiment == drawTexture)
    float2 uv = In.textureCoordinate;
    return sampleColor (colorSampler, uv.xy);
  #elif (experiment == drawLitTexture)
    float2 uv = In.textureCoordinate;
    float3 toCameraTS = normalize (In.toCameraTS); //Interpolation changes the length...
    float3 toLightTS = normalize (In.toLightTS);   //Interpolation changes the length...
    float4 textureColor = sampleColor (colorSampler, uv.xy);
    float3 normalTS = float3 (0, 0, 1);
    return combinedColor (textureColor, normalTS, toLightTS, toCameraTS);
  #endif
}

Look at Demo
Result
Much of the code we just wrote will get repeated many times, so create MACROS to make it easier for later. Note the “\” at the end of each line; “//” comments need to be converted into “/*...*/” comments.

#define VSOutputUVCameraAndLights \
  Out.textureCoordinate = In.textureCoordinate; \
  float3x3 tangentToWorldSpaceMap = tangentToWorldSpace ( \
    In.tangentVS, In.bitangentVS, In.normalVS); /*Assuming no scaling factors... */ \
  float3 toLightWS = g_toLight; \
  Out.toLightTS = mul (tangentToWorldSpaceMap, toLightWS); \
    /*i.e., toLightWS * inverse (tangentToWorldSpaceMap)*/ \
  float4 positionWS = mul (In.position, g_mWorld); \
  float3 toCameraWS = g_vEye - positionWS; \
  Out.toCameraTS = mul (tangentToWorldSpaceMap, toCameraWS); \
    /*i.e., toCameraWS * inverse (tangentToWorldSpaceMap)*/

#define PSInputUVCameraAndLights \
  float2 uv = In.textureCoordinate; \
  float3 toCameraTS = normalize (In.toCameraTS); /*Interpolation changes the length...*/ \
  float3 toLightTS = normalize (In.toLightTS);   /*Interpolation changes the length...*/
The shader Code With The Macros
WILF_COMMON RenderWilfsVS (WILF_VS_INPUT In) {
  …
  #elif (experiment == drawLitTexture)
    VSOutputUVCameraAndLights;
    return Out;
  #endif
}

float4 RenderWilfPS (WILF_COMMON In) : COLOR0 {
  …
  #elif (experiment == drawLitTexture)
    PSInputUVCameraAndLights;
    float4 textureColor = sampleColor (colorSampler, uv.xy);
    float3 normalTS = float3 (0, 0, 1);
    return combinedColor (textureColor, normalTS, toLightTS, toCameraTS);
  #endif
}
Review

• Spaces: T(angent), M(odel), W(orld).
• Forward converters: TBNM (tangent to model), World (model to world), View?
• What we know (red): EyeW, toLightW, PositionM, TBNM.
  What we want (blue): toCameraT, toLightT.
• Trick: p * TBN converts forward, but TBN * q converts backward.
Step 1: Get the Pixel POSITION in World Space So We Can Compute toCameraW

PositionW = PositionM * World
toCameraW = EyeW - PositionW
Step 2: Construct TBNW So We Can Convert toLightW and toCameraW to Tangent Space

Build TBNW from T * World, B * World, N * World.
Step 3: Compute toLightT and toCameraT from TBNW

toCameraT = TBNW * toCameraW
toLightT = TBNW * toLightW

(Post-multiply is multiplying by the inverse (or transpose).)
Note: There is Another Way To Compute TBNW
• Your task is to figure out how...
Normal Mapping
Normal Mapping
• Normal mapping (also called “Dot3 bump mapping”) is a technique used for faking the lighting of bumps and dents via a texture containing the pixel normals.
from Wikipedia
The Texture Being Used
• Texture “four_NM_height.tga” in shader variable “g_nmhTexture” accessed via sampler “tNormalHeightMap”.
Experiment 3:
Normal Mapping
Texture Accessing
• Add a new entry to the library…

//--------------------------------------------------------------------------------------
// Texture accessing Functions...
//--------------------------------------------------------------------------------------
#define sampleColor(sampler, textureCoordinate) \
  tex2D (sampler, textureCoordinate)
#define sampleNormal(reliefSampler, textureCoordinate) \
  normalize (tex2D (reliefSampler, textureCoordinate).xyz * 2 - 1)

• Add a new entry to the shader…

//--------------------------------------------------------------------------------------
// Wilf's shader variable renaming section...
//--------------------------------------------------------------------------------------
#define colorSampler tBase
#define reliefMapSampler tNormalHeightMap
• Add to the “choice” file.
#define drawRed 0
#define drawTexture 1
#define drawLitTexture 2
#define drawNormalMappedTexture 3

#define experiment drawNormalMappedTexture
New Shader Code
WILF_COMMON RenderWilfsVS (WILF_VS_INPUT In) {
  …
  #elif (experiment == drawNormalMappedTexture)
    VSOutputUVCameraAndLights;
    return Out;
  #endif
}

float4 RenderWilfPS (WILF_COMMON In) : COLOR0 {
  …
  #elif (experiment == drawNormalMappedTexture)
    PSInputUVCameraAndLights;
    float4 textureColor = sampleColor (colorSampler, uv.xy);
    float3 normalTS = sampleNormal (reliefMapSampler, uv.xy);
    return combinedColor (textureColor, normalTS, toLightTS, toCameraTS);
  #endif
}
no change
Look at Demo
Result
Analyzing The Normal Map With The PHOTOSHOP Sampler Tool
[Normal-map samples via the Photoshop sampler tool: G = 217 in one region, G = 37 in another; RGB encodes [Nx, Ny, Nz]]
What Do You Conclude?
Texture Accessing
• The given map is a RIGHT HANDED NORMAL MAP where up is +1.
• In the DirectX shader, +1 is down (it needs a LEFT HANDED NORMAL MAP).
//Sampling also converts from a right-handed to a left-handed normal map.
#define sampleNormal(reliefSampler, textureCoordinate) \
    (normalize (tex2D (reliefSampler, textureCoordinate).xyz * 2 - 1) * float3 (1, -1, 1))
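The decode above can be sketched in plain Python (a CPU-side illustration, not the shader itself): map the 8-bit texel from [0, 255] to [-1, 1], flip Y to go from the right-handed map to the left-handed convention, then renormalize.

```python
import math

# CPU-side sketch of sampleNormal: decode an 8-bit RGB texel into a unit
# normal, flipping Y (right-handed map -> left-handed DirectX convention).
def decode_normal(r, g, b):
    n = [c / 255.0 * 2.0 - 1.0 for c in (r, g, b)]  # [0,255] -> [-1,1]
    n[1] = -n[1]                                    # handedness flip
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]
```

A "flat" texel (128, 128, 255) decodes to a normal pointing almost straight up.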
Look at Demo
Result
old new
Another MACRO to Make It Easier For Later
• Pixel shader now looks like the following...
#define PSSampleTextureAndNormalAndBuildColor(uv) \
    float4 textureColor = sampleColor (colorSampler, uv.xy); \
    float3 normalTS = sampleNormal (reliefMapSampler, uv.xy); \
    float4 color = combinedColor (textureColor, normalTS, toLightTS, toCameraTS);
note “\” at the end of each line
float4 RenderWilfPS (WILF_COMMON In) : COLOR0 {
  …
  #elif (experiment == drawNormalMappedTexture)
    PSInputUVCameraAndLights;
    PSSampleTextureAndNormalAndBuildColor (uv);
    return color;
  #endif
}
Parallax Mapping
Parallax Mapping
• Parallax mapping is a technique that displaces the texture coordinates of the pixel as a function of the view direction in tangent space and the value of the height map at that point (BUT NOT TOO MUCH).
• This gives the illusion of depth due to parallax as the view changes; i.e., points at depth 0 (on the surface) don’t move, and points at depth 1 (deepest below the surface) move the most.
Parallax Mapping: In Tangent Space
• Displace [u,v] away from the camera (eye) by an amount proportional to depth d.
[Diagram: original [u,v] and displaced [u,v]]
• [u,v] = [u,v] - normalize (toCamera.xy) * (d * scale)
Only works if displacement is small
Parallax Mapping
[Diagram: the eye looks along toCamera at the surface; the depth below [u,v] is d]
• Get the depth at [u,v] and displace away from the eye:
[u,v] = [u,v] - normalize (toCamera.xy) * (d * scale)
Add ParallaxOffset To The Library
• Note that parallaxOffset needs to sample a depth map but the map it has is a height map... We’ll let sampleDepthFromHeight convert it.

float2 parallaxOffset (float2 uv, sampler2D heightSampler, float3 toCameraTS) {
  float scaleFudge = 1.0; //No idea what to use yet...
  float depth = sampleDepthFromHeight (heightSampler, uv.xy);
  float2 direction = normalize (-toCameraTS.xy); //away from camera…
  return (depth * scaleFudge) * direction;
}

• [u,v] -= normalize (toCamera.xy) * (d * scale)
If we permanently fix the texture, Tatarchuk’s version will break…
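The offset math can be sketched in plain Python (the depth value and the scale are made-up inputs here; in the shader they come from the height map and g_fHeightMapScale): shift uv away from the camera by an amount proportional to the sampled depth.

```python
import math

# CPU-side sketch of the parallax offset: displace uv away from the camera
# (negated horizontal part of toCameraTS) by depth * scale.
def parallax_offset(uv, depth, to_camera_ts, scale=0.05):
    x, y = to_camera_ts[0], to_camera_ts[1]
    inv_len = 1.0 / math.hypot(x, y)
    direction = (-x * inv_len, -y * inv_len)   # away from the camera
    return (uv[0] + direction[0] * depth * scale,
            uv[1] + direction[1] * depth * scale)
```

A point at depth 0 is not displaced at all; the deeper the point, the larger the shift.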
Experiment 4:
Parallax Mapping
Note That The Demo Has A Scale Control
This controls the shader variable “g_fHeightMapScale”.
Texture Accessing
• Add a new entry to the library…

//--------------------------------------------------------------------------------------
// Texture accessing functions...
//--------------------------------------------------------------------------------------
#define sampleDepthFromHeight(reliefSampler, textureCoordinate) \
    ((1.0 - tex2D (reliefSampler, textureCoordinate).w) * g_fHeightMapScale)

• Add a new entry to the “choice” file…

#define drawRed 0
#define drawTexture 1
#define drawLitTexture 2
#define drawNormalMappedTexture 3
#define drawParallaxMappedTexture 4

#define experiment drawParallaxMappedTexture
depth = 1-height
Only Changed The Pixel Shader
• Note: parallaxOffset samples the height map...

WILF_COMMON RenderWilfsVS (WILF_VS_INPUT In) {
  …
  #elif (experiment == drawParallaxMappedTexture)
    VSOutputUVCameraAndLights;
    return Out;
  #endif
}

float4 RenderWilfPS (WILF_COMMON In) : COLOR0 {
  …
  #elif (experiment == drawParallaxMappedTexture)
    PSInputUVCameraAndLights;
    uv += parallaxOffset (uv, reliefMapSampler, toCameraTS);
    PSSampleTextureAndNormalAndBuildColor (uv);
    return color;
  #endif
}
no change
Look at Demo
Result
Parallax Offset
• Refining the parallax offset function...
float2 parallaxOffset (float2 uv, sampler2D heightSampler, float3 toCameraTS) {
  float scaleFudge = 0.05; //Scale 1 is way too large...
  float depth = sampleDepthFromHeight (heightSampler, uv.xy);
  float2 direction = normalize (-toCameraTS.xy); //away from camera…
  return (depth * scaleFudge) * direction;
}
Result
Relief Mapping
Relief Mapping
• Relief mapping is a pixel shader ray tracing technique that dramatically enhances parallax mapping.
• It seeks to provide accurate depictions of self-occlusion, self-shadowing, and parallax.
from Wikipedia
Original paper
Oliveira, Manuel M., Gary Bishop, and David McAllister. 2000. "Relief Texture Mapping." In Proceedings of SIGGRAPH 2000, pp. 359–368.
without relief mapping with relief mapping
Definition: Stepping vector
• Stepping vector: a vector pointing away from the eye, from the surface top to the surface bottom.
[Diagram: eye ray e = toCameraTS entering the surface at d = 0 and exiting at d = 1; there is an x, y, z component to e…]
We intend to step by small deltas in the stepping vector direction...
Aside: Tatarchuk Uses ParallaxOffset Vector Instead
• Defined as the BLUE vector in the diagram... Also, it works on an actual height map.
[Diagram: eye ray e = toCameraTS over a height field from h = 1 (top) down to h = 0; the parallax offset vector is the horizontal part of the stepping vector; there is an x, y, z component to e…]
• She computes the vertical displacement when needed...
Definition: Stepping vector
• The stepping vector can be computed from the normalized tangent-space toCamera vector e and an adjustable scale s (from 0 to 1) applied to d.
[Diagram: eye ray e = toCameraTS from d = 0 to d = 1; ez and exy are the vertical and horizontal parts of e; L is the stepping length]
SteppingVector = -L * e where L = s * (|e| / |ez|)
By similar triangles: |L| / s = |e| / |ez|
(almost correct)
Definition: Stepping vector
• In tangent space, ez is up, so Lz is down BUT d is supposed to be increasing. So the z part of L must be flipped. Also, |e| = 1 since e is normalized.
Almost correct: SteppingVector = -L * e where L = s * (|e| / |ez|)
Corrected: SteppingVector = L * [-1, -1, +1] * e where L = s / |ez|
A Library Routine To Compute This
//--------------------------------------------------------------------------------------
// Relief mapping functions...
//--------------------------------------------------------------------------------------
float3 steppingVector (float3 normalizedToCameraTS) {
  //Stepping vector is L * [-1, -1, +1] * e where L = s * (|e| / |ez|) = s / |ez|, s the scale.
  float s = g_fHeightMapScale;
  float3 e = normalizedToCameraTS;
  float ez = abs (e.z);
  float length = ez < 1.0e-5 //Looking horizontally...
    ? s //Can't make it infinite...
    : s / ez;
  return e * float3 (-length, -length, length);
}
SteppingVector = L * [-1, -1, +1] * e where L = s / |ez|
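The formula can be sketched in plain Python (a CPU-side illustration of the library routine, with a made-up eye vector): scale the negated eye vector so its z spans the full scaled depth range, flipping z so depth increases along the ray.

```python
# CPU-side sketch of steppingVector: e is the normalized toCamera vector,
# s the height-map scale; the result's z equals s (full depth travel).
def stepping_vector(e, s):
    ez = abs(e[2])
    length = s if ez < 1.0e-5 else s / ez   # can't make it infinite
    return [-length * e[0], -length * e[1], length * e[2]]
```

For e = (0.6, 0.0, 0.8) and s = 0.1, L = 0.125, giving (-0.075, 0.0, 0.1): the z part covers exactly the scaled depth s.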
Experiment 5: Relief Mapping
(A constant number of iterations)
New Ray Tracing Library Routines
• Add a new entry to the library…

int constantSteps () {return 10;}

void linearSearchUntilBelow (in sampler2D reliefSampler, inout float3 uv,
    inout float3 direction, int steps) {
  direction /= steps; //Divide into small pieces.
  for (int step = 0; step < steps; step++) {
    float depth = sampleDepthFromHeight (reliefSampler, uv.xy);
    if (uv.z < depth) uv += direction; else break; //If above, go further...
  }
}
[Diagram: the ray marches from uv.z = 0 down to uv.z = d, comparing uv.z against the sampled depth at each step. But we need uv.z to start at the TOP; i.e., at 0.]
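The search can be sketched in plain Python over a hypothetical surface (`flat`, a made-up constant-depth function standing in for the depth map): march uv in small increments of the stepping vector until the ray depth passes below the surface.

```python
# CPU-side sketch of linearSearchUntilBelow: divide the stepping vector
# into small pieces and advance while the ray is still above the surface.
def linear_search_until_below(sample_depth, uv, direction, steps):
    d = [c / steps for c in direction]           # divide into small pieces
    for _ in range(steps):
        if uv[2] < sample_depth(uv[0], uv[1]):
            uv = [a + b for a, b in zip(uv, d)]  # still above: go further
        else:
            break
    return uv

flat = lambda u, v: 0.5   # hypothetical constant-depth surface
uv = linear_search_until_below(flat, [0.0, 0.0, 0.0], [0.1, 0.0, 1.0], 10)
```

With 10 steps the march stops once uv.z reaches the surface depth 0.5, having advanced the horizontal uv proportionally.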
New Routines
• Add a new entry to the shader file…
#define PSInput3DUVCameraAndLights \
    float3 uv = float3 (In.textureCoordinate, 0.0); /* height (top) */ \
    float3 toCameraTS = normalize (In.toCameraTS); /* Interpolation changes length... */ \
    float3 toLightTS = normalize (In.toLightTS); /* Interpolation changes length... */
We Can Now Provide The Pixel Shader
• No change is needed to the vertex shader
float4 RenderWilfPS (WILF_COMMON In) : COLOR0 {
  …
  #elif (experiment == drawReliefMappedTexture_BASIC_CONSTANT_STEPS)
    PSInput3DUVCameraAndLights;
    //Perform computation that affects uv...
    float3 direction = steppingVector (toCameraTS);
    const int steps = constantSteps ();
    linearSearchUntilBelow (reliefMapSampler, uv, direction, steps);
    //Use the modified uv for everything else...
    PSSampleTextureAndNormalAndBuildColor (uv);
    return color;
  #endif
}
Look at Demo
Result
How Expensive is Sample Scaling?
• Assume 1 multiply per probe, 30 probes per pixel, ¼ of a 1024X1024 screen.
#define sampleDepthFromHeight(reliefSampler, textureCoordinate) \
    ((1.0 - tex2D (reliefSampler, textureCoordinate).w) * g_fHeightMapScale)
this multiply
30 * 0.25 * 1024 * 1024 = 7,864,320 multiplies per frame... Perhaps it’s worth eliminating!
• We can get rid of it if the stepping vector z-part is in unit space (like the texture w-part) instead of scaled space...
Conversion Between Unit/Scaled DEPTH Space
toScaledSpace(v) = [v.xy, v.z * s]
toUnitSpace(v) = [v.xy, v.z / s]

[Diagram: unit depth space runs from d = 0 down to d = 1 (the UNIT space the depth map was built in); scaling down to s (only z, not xy) gives the real DEPTH space, which runs from d = 0 down to d = s. Note that both start with d = 0 at the top and increase; i.e., POSITIVE DOWNWARD.]
Recall: When We Created The Stepping Vector
[Diagram: the real TANGENT space runs from z = 0 at the top to z = -s at the bottom (INCREASING Z GOES UP); the real DEPTH space runs from d = 0 at the top to d = s at the bottom (INCREASING d GOES DOWN).]
We actually went from TANGENT space to DEPTH space.
How do We Provide These Conversions
• Two pairs of conversions:
  Tangent space ↔ Real Depth Space
  Real Depth Space ↔ Unit Depth Space
• One pair of conversions:
  Tangent space ↔ Unit Depth Space
Seems simpler since there are fewer converters.
Designing The Converters
• One pair of conversions: Tangent space ↔ Unit Depth Space
Handles scale + sign flip:

float3 unitDepthSpaceToTangentSpace (float3 uv) {return float3 (uv.xy, -uv.z * s);}
float3 tangentSpaceToUnitDepthSpace (float3 uv) {return float3 (uv.xy, -uv.z / s);}
long names to prevent misconceptions
s = g_fHeightMapScale
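The two converters can be sketched in plain Python (CPU-side illustration; s is the height-map scale): only z changes, with a sign flip because tangent-space z goes up while depth goes down, and the two functions invert each other.

```python
# CPU-side sketch of the unit-depth <-> tangent space converters.
def unit_depth_to_tangent(uv, s):
    return [uv[0], uv[1], -uv[2] * s]   # scale z and flip its sign

def tangent_to_unit_depth(uv, s):
    return [uv[0], uv[1], -uv[2] / s]   # inverse of the above
```

Round-tripping a point returns it unchanged, which is what makes the pair a valid conversion.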
Additionally, Indicate the Stepping Vector Space
• tangentSpaceSteppingVector versus unitDepthSpaceSteppingVector, where
  unitDepthSpaceSteppingVector = tangentSpaceToUnitDepthSpace (tangentSpaceSteppingVector)
We only need this one.
• Recall: both have the same “.xy” but different “.z”. What are the z-components again?
  tangentSpaceSteppingVector: -s
  unitDepthSpaceSteppingVector: +1
To avoid confusion, let’s derive unitDepthSpaceSteppingVector AGAIN
Definition: Tangent Space Stepping vector
[Diagram: eye ray e = normalizedToCameraTS from z = 0 down to z = -s; ez and exy are the vertical and horizontal parts of e; L is the stepping length]
By similar triangles (L is a length): L / s = |e| / |ez|; |ez| = ez (z goes up), so L = s / ez.
TangentSpaceSteppingVector = L(-e) = [-Lex, -Ley, -Lez] = [-Lex, -Ley, -s] where L = s / ez

float3 unitDepthSpaceToTangentSpace (float3 uv) {return float3 (uv.xy, -uv.z * s);}
float3 tangentSpaceToUnitDepthSpace (float3 uv) {return float3 (uv.xy, -uv.z / s);}

Applying tangentSpaceToUnitDepthSpace:
UnitDepthSpaceSteppingVector = [-Lex, -Ley, -(-s) / s] = [-Lex, -Ley, +1] where L = s / ez
A Unit Depth Space Stepping Vector
float3 unitDepthSpaceSteppingVector (float3 normalizedToCameraTS) {
  //Unit depth space stepping vector is [-Lex, -Ley, +1] where L = s / ez,
  //ez is positive, s the scale.
  float s = g_fHeightMapScale;
  float3 e = normalizedToCameraTS;
  float length = e.z < 1.0e-5 //Looking horizontally...
    ? s //Can't make it infinite...
    : s / e.z;
  return float3 (-length * e.x, -length * e.y, 1.0);
}
UnitDepthSpaceSteppingVector = [-Lex, -Ley, +1] where L = s / ez
Experiment 6: Relief Mapping
(Removing Sample Scaling, or switching to UNIT DEPTH space)
Now We Can Make The Following Changes
• Add

float3 unitDepthSpaceSteppingVector (float3 normalizedToCameraTS) {...}

• Change

#define sampleDepthFromHeight(reliefSampler, textureCoordinate) \
    ((1.0 - tex2D (reliefSampler, textureCoordinate).w) * g_fHeightMapScale)

• To (dropping the scale)

#define sampleDepthFromHeight(reliefSampler, textureCoordinate) \
    (1.0 - tex2D (reliefSampler, textureCoordinate).w)

• Add

#define experiment drawReliefMappedTexture_BASIC_CONSTANT_STEPS_IN_UNIT_DEPTH_SPACE
And the Shader
• No change is needed to the vertex shader
float4 RenderWilfPS (WILF_COMMON In) : COLOR0 {
  …
  #elif (experiment == drawReliefMappedTexture_BASIC_CONSTANT_STEPS_IN_UNIT_DEPTH_SPACE)
    PSInput3DUVCameraAndLights;
    //Perform computation that affects uv...
    float3 direction = unitDepthSpaceSteppingVector (toCameraTS);
    const int steps = constantSteps ();
    linearSearchUntilBelow (reliefMapSampler, uv, direction, steps);
    //Use the modified uv for everything else...
    PSSampleTextureAndNormalAndBuildColor (uv);
    return color;
  #endif
}
Look at Demo
Result
Working Just Like Before
Should We Convert uv Back To Tangent Space?
• No change is needed to the vertex shader
float4 RenderWilfPS (WILF_COMMON In) : COLOR0 {
  …
  #elif (experiment == drawReliefMappedTexture_BASIC_CONSTANT_STEPS_IN_UNIT_DEPTH_SPACE)
    PSInput3DUVCameraAndLights;
    //Perform computation that affects uv...
    float3 direction = unitDepthSpaceSteppingVector (toCameraTS);
    const int steps = constantSteps ();
    linearSearchUntilBelow (reliefMapSampler, uv, direction, steps);
    //Use the modified uv for everything else...
    uv = unitDepthSpaceToTangentSpace (uv);
    PSSampleTextureAndNormalAndBuildColor (uv);
    return color;
  #endif
}
Do We NeedTo Do This?
After all, everything seemed to be working...
Do We Need To Convert?
• Recall the conversion function...

float3 unitDepthSpaceToTangentSpace (float3 uv) {return float3 (uv.xy, -uv.z * s);}
s = g_fHeightMapScale

• What do we do with uv?

uv = unitDepthSpaceToTangentSpace (uv);
PSSampleTextureAndNormalAndBuildColor (uv);

We sample the texture and normal using ONLY uv.xy.

• Conclusion?
Do We Need To Convert?
• Recall the conversion function and what we do with uv (the same as above)...
• Conclusion? NO. We sample the texture and normal using ONLY uv.xy, so the converted z is never used.
Improving The Relief Mapping Result
• Given that the linear search gives us a reasonable (though quite approximate) result, why not improve the solution?
• Add a binary search routine that refines the linear search result...
New Ray Tracing Library Routines
• Provide a refinement to the previous search.

void binarySearchForMoreAccurateBottomPosition (in sampler2D reliefSampler,
    inout float3 uv, inout float3 direction, int steps) {
  //Go further or come back by successively smaller amounts...
  for (int step = 0; step < steps; step++) {
    direction *= 0.5; //DECREASE step size further for more accuracy.
    float depth = sampleDepthFromHeight (reliefSampler, uv.xy);
    uv += (uv.z < depth) ? direction : -direction; //go further or come back
  }
}
[Diagram: the search straddles the surface between uv.z = 0 and uv.z = d. Initially uv.z > depth since we ended BELOW; successive halvings then bracket the intersection.]
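The refinement can be sketched in plain Python over a hypothetical constant-depth surface (`flat` is made up): starting from the below-surface point the linear search left us at, halve the step each iteration and move forward or back depending on which side of the surface we are on.

```python
# CPU-side sketch of binarySearchForMoreAccurateBottomPosition: halve the
# step every iteration; step forward while above the surface, back while below.
def binary_refine(sample_depth, uv, direction, steps):
    d = list(direction)
    for _ in range(steps):
        d = [c * 0.5 for c in d]                     # smaller step each time
        if uv[2] < sample_depth(uv[0], uv[1]):
            uv = [a + b for a, b in zip(uv, d)]      # above: go further
        else:
            uv = [a - b for a, b in zip(uv, d)]      # below: come back
    return uv

flat = lambda u, v: 0.5   # hypothetical constant-depth surface
uv = binary_refine(flat, [0.06, 0.0, 0.6], [0.1, 0.0, 1.0], 8)
```

After 8 halvings the estimate converges to within a fraction of a percent of the true crossing depth 0.5.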
Experiment 7: Relief Mapping
(A constant number of steps
BUT MORE REFINED)
We Can Now Provide The Pixel Shader
• No change is needed to the vertex shader
float4 RenderWilfPS (WILF_COMMON In) : COLOR0 {
  …
  #elif (experiment == drawReliefMappedTexture_REFINED_CONSTANT_STEPS)
    PSInput3DUVCameraAndLights;
    //Perform computation that affects uv...
    float3 direction = unitDepthSpaceSteppingVector (toCameraTS);
    const int steps = constantSteps ();
    linearSearchUntilBelow (reliefMapSampler, uv, direction, steps);
    const int refinementSteps = 6;
    binarySearchForMoreAccurateBottomPosition (reliefMapSampler, uv, direction, refinementSteps);
    //Use the modified uv for everything else...
    PSSampleTextureAndNormalAndBuildColor (uv);
    return color;
  #endif
}
Look at Demo (tweaks?)
Result
Without Refinement With Refinement
Experiment 8: Relief Mapping
(Artist controlled steps)
Allow The Number of Steps To Be Controlled
• Add a new entry to the library and a new case with the minor pixel shader change…
int constantSteps () {return 10;}
int artistControlledSteps () {return g_nMinSamples;}
Look at Demo
...
#define drawReliefMappedTexture_ARTIST_CONSTROLLED_VARIABLE_SIZE 8

#define experiment drawReliefMappedTexture_ARTIST_CONSTROLLED_VARIABLE_SIZE
The Shader Stopped Working. RECALL
• Early GPU shaders work on 2x2 pixels in parallel running in lock step.
• This was to allow gradients to be computed on any variable by taking neighbor differences.
• This is used internally by tex2D to compute the mipmap level to use.
• Dynamic branching breaks lock stepping requiring subsequent access to use tex2DLOD or tex2Dgrad instead.
Texture Lookup Functions
• tex2D (s, t): Samples a 2D texture via 2D t. Internally determines which mipmap to use.
• tex2Dlod (s, t): Samples a 2D texture with mipmaps via 4D t. The mipmap level of detail is specified in t.w.
• tex2Dgrad (s, t, ddx, ddy): Samples a 2D texture via 2D t and a gradient to select the mip level of detail.
s is the sampler
NOTE
So, Let’s switch from tex2D to tex2Dlod?
• But to use tex2Dlod requires a 4D texture coordinate [tx, ty, tz, tw].

Old:
#define sampleDepthFromHeight(reliefSampler, textureCoordinate) \
    (1.0 - tex2D (reliefSampler, textureCoordinate).w)

New:
#if (experiment < drawReliefMappedTexture_ARTIST_CONSTROLLED_VARIABLE_SIZE)
#define sampleDepthFromHeight(reliefSampler, textureCoordinate) \
    (1.0 - tex2D (reliefSampler, textureCoordinate).w)
#else
#define sampleDepthFromHeight(reliefSampler, textureCoordinate) \
    (1.0 - tex2Dlod (reliefSampler, float4 (textureCoordinate, 0, 1)).w) //The extra components give the mipmap level to use.
#endif //experiment

Warning: Normals, Heights, and Depths SHOULD BE ACCESSED at MIPMAP LEVEL OF DETAIL 0.
Recall: the normal is [x,y,z]; height is w; depth = 1 - height.
• Only the height is repeatedly accessed in a loop (color and normal are accessed once at the end).
Result
10 samples 40 samples
Experiment 9: Relief Mapping
(Let the shader control the
number of steps)
Getting Rid of The Adjustable Number of Steps
• In texture space, a vector going from x=0 to x=1 steps through 256 pixels IF THE WIDTH OF THE TEXTURE IS 256.
int stepsInPixels (float3 uvSteppingVector) {
  float2 duv = clamp01 (abs (uvSteppingVector.xy)) * g_vTextureDims;
  return max (duv.x, duv.y) + 1; //And make sure we have at least one...
}
Just the horizontal part
Look at Demo
e.g., 512x512
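The step count can be sketched in plain Python (CPU-side illustration; the texture dimensions are a made-up input): the horizontal part of the stepping vector, measured in texels, tells us how many texels the ray crosses.

```python
# CPU-side sketch of stepsInPixels: texels crossed by the horizontal part
# of the stepping vector, with a guaranteed minimum of one step.
def steps_in_pixels(stepping_vector, texture_dims):
    clamp01 = lambda x: max(0.0, min(1.0, x))
    du = clamp01(abs(stepping_vector[0])) * texture_dims[0]
    dv = clamp01(abs(stepping_vector[1])) * texture_dims[1]
    return int(max(du, dv)) + 1   # at least one step
```

A vector spanning a quarter of a 256-wide texture crosses 64 texels, so 65 steps; a vector with no horizontal travel still gets one step.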
Oh Oh: A Bug Showed Up
• This is easier to see when running the demo. You should be able to see that the floor around the donut comes UP as you look at it at a steeper angle...
As if the shader stopped stepping too soon
How Do We Debug The Shader
• Maybe stepsInPixels is not giving us enough steps...
• Makes me want to be able to draw the number of steps taken by the shader...
To do that, we need some way of turning “steps” into a color...
An idea
• Let’s come up with a way of drawing a color range...
• Why not use 0 to 255? Because red 0 = green 0 = blue 0 = black...
[Color ramp: R 128→255, G 128→255, B 128→255, W 128→255]
An idea
• What if we supply 2 numbers:
  integerLimit: the range of the number
  integer: the number to be drawn.
• The 1st quarter will give red, the 2nd quarter green, etc... and we CLAMP IF OUTSIDE.
[Color ramp over 0..integerLimit: R 128→255, G 128→255, B 128→255, W 128→255]
A color function for debugging
float4 colorForPositiveInteger (int integer, int integerLimit) {
  integer = clamp (integer, 0, integerLimit);
  float quarter = integerLimit * 0.25;
  float half = quarter * 0.5;
  float quarterReciprocal = 1.0 / quarter;
  #define test(case) integer < (quarter * case)
  #define color(case) (half + (integer - (quarter * case))) * quarterReciprocal
  if (test(1)) return float4 (color(0),0,0,1); //reddish (dark to bright)
  if (test(2)) return float4 (0,color(1),0,1); //greenish (dark to bright)
  if (test(3)) return float4 (0,0,color(2),1); //bluish (dark to bright)
  return float4 (color(3),color(3),color(3),1); //grayish (dark to white)
  #undef test
  #undef color
}
Experiment Results
const int steps = stepsInPixels (direction);
return colorForPositiveInteger (steps, 256);
< 64 steps < 128 steps
< 192 steps
> 192 steps
The steeper it gets, the more steps it needs. WHAT’S THE MAXIMUM?
Try 1000
const int steps = stepsInPixels (direction);
return colorForPositiveInteger (steps, 1000);
So it’s between 500 and 750 (blue is the 3rd quarter). THAT’S A LOT OF STEPS...
4 Hours of Debugging Later
• Initially, we thought that linear search might be accumulating delta errors as it added 500 deltas.... No. That wasn’t it.
• Then we tried the following at the start of the shader just to prove that we should get white for everything..
• The following shader compilation error resulted...
int count = 0;
for (; count < 10000; count++);
return colorForPositiveInteger (count, 10000);
Failed to compile "ParallaxOcclusionMapping.fx"...
WilfBumpMappingLibrary.shader(252,21): warning X3569: loop executes for more than 255 iterations (maximum for this shader target), forcing loop to unroll
WilfBumpMappingLibrary.shader(252,21): error X3511: unable to unroll loop, loop does not appear to terminate in a timely manner (1024 iterations)
Maybe We Don’t Need So Many Steps
• If we use only 50% of the steps, we’ll be jumping 2 pixels at a time instead of 1.
• Should still work provided we don’t have bumps that look like needles (2 pixels wide)...
int stepsInPixels (float3 uvSteppingVector) {
  float2 duv = clamp01 (abs (uvSteppingVector.xy)) * g_vTextureDims;
  duv *= 0.5; //Take a chance and skip some pixels; divide steps by 2.
  return max (duv.x, duv.y) + 1; //And make sure we have at least one...
}
Look at Demo
Experiment 10: Relief Mapping
(Use Tatarchuk’s Mipmap LOD)
Use LOD for Control
At lower levels of detail (shown in blue), don’t relief map
Tutorial on Differentials ddx and ddy
• ddx gives the change of any variable f with respect to x (the slope):
  ∂f/∂x = (f2.xyz - f1.xyz) / Δx = [∂fx/∂x, ∂fy/∂x, ∂fz/∂x]
• If f is 3D, the result is 3D. If f is 2D, the result is 2D.
• Similarly, ddy is the change of f with respect to y.
In More Detail
• Calculate how much the extent changes at uv...

float2 Dw = ddx (uv) * textureExtent.x;
float2 Dh = ddy (uv) * textureExtent.y;

[Dw.x, Dw.y] = how much the width changes wrt x
[Dh.x, Dh.y] = how much the height changes wrt y
In More Detail
• The maximum change is max (Mw, Mh), where Mw is the maximum width change and Mh the maximum height change:
  (Mw)² = (Dw.x)² + (Dw.y)² = Dw·Dw
  (Mh)² = (Dh.x)² + (Dh.y)² = Dh·Dh
• So max (Mw, Mh) = max (sqrt (Dw·Dw), sqrt (Dh·Dh)) = sqrt (max (Dw·Dw, Dh·Dh)) = d^(1/2) if we let d = max (Dw·Dw, Dh·Dh)
In More Detail
• Translate the maximum change into a mipmap level as follows:
  mipmapLevel = log2 (maximum change) = log2 (max (Mw, Mh)) = log2 (d^(1/2)) = 0.5 * log2 (d) if we let d = max (Dw·Dw, Dh·Dh)

Using log (A^B) = B * log (A)

float2 Dw = ddx (uv) * textureExtent.x;
float2 Dh = ddy (uv) * textureExtent.y;
d = max (Dw.Dw, Dh.Dh);
mipmapLevel = 0.5 * log2 (d);
To implement it
float mipmapLevel (float2 uv, float2 textureExtent, out float2 dx, out float2 dy) {
  dx = ddx (uv); dy = ddy (uv); //In case you use tex2Dgrad.
  float2 Dw = dx * textureExtent.x;
  float2 Dh = dy * textureExtent.y;
  float d = max (dot (Dw, Dw), dot (Dh, Dh));
  return 0.5 * log2 (d); //= log2 (sqrt (d));
}
The only reason to output dx and dy is IN CASE OF A FUTURE NEED SUCH AS tex2Dgrad
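The same math can be sketched in plain Python (on the CPU there is no ddx/ddy, so the uv derivatives are made-up inputs standing in for what the hardware would supply).

```python
import math

# CPU-side sketch of the mipmapLevel math: scale the uv derivatives by the
# texture extent, take the larger squared length, and halve its log2.
def mipmap_level(dx, dy, texture_extent):
    Dw = [c * texture_extent[0] for c in dx]
    Dh = [c * texture_extent[1] for c in dy]
    d = max(sum(c * c for c in Dw), sum(c * c for c in Dh))
    return 0.5 * math.log2(d)   # = log2(sqrt(d))
```

One texel per pixel gives level 0; shrinking the surface by 4x in each direction gives level 2, exactly as the slide's formula predicts.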
Add a new case
• Choose a maximum mipmap level and modify the pixel shader to use it...
Look at Demo
float maximumMipmapLevel = 3;
if (mipmapLevel <= maximumMipmapLevel) {
  //Perform computation to modify uv....
} //Otherwise, use the unchanged uv; i.e., normal mapping...
Note: It might be worth drawing the LOD
• Add code such as the following.
return colorForPositiveInteger (level, 4);
Experiment 11: Relief Mapping
(Shadows)
Tatarchuk Self Shadowing
• Tatarchuk discusses both hard and soft shadows.
soft shadow result
Wilf: Details
Skipped To Allow
Students a Chance
To Do it…
Without Too Much Detail
• Create a stepping vector to the light and flip its direction (to go toward rather than away).
• Create a variation of linearSearchUntilBelow, such as linearSearchReachedAbove, which returns true if you manage to finish the stepping without going below anything.
• Use the result as a brightness (true => 1; false => 0) for the lighting routine.
• Just be sure the uv stepping for the light doesn’t start slightly inside...
Good description in “Dynamic Parallax Occlusion Mapping with Approximate Soft Shadows, Natalya Tatarchuk, ATI Research, Inc, Siggraph 2005”. Also, her shader is provided.
Hard shadows via one probe
• Suggestion from the web: compute shadows only if the surface faces the light; i.e., if N ● L > 0.
• Wilf: I didn’t do this and it still worked...
Cone Step Mapping
Cone Step Mapping
• Cone step mapping is an acceleration technique for allowing ray tracing to perform fewer steps without missing intersections...
Dummer, Jonathan. 2006. “Cone Step Mapping: An Iterative Ray-Heightfield Intersection Algorithm.” Available online at
http://www.lonesock.net/files/ConeStepMapping.pdf.
Cone Step Mapping Requirements
• Requires the building of a “cone step ratio” which can be provided
  • in a separate texture, or
  • encoded in the relief map by replacing one of [nx, ny, nz], since the missing component can be computed.
Policarpo provides a cone map builder WITH SOURCE (input is a relief map, output is a cone map).
It also uses a relief map with depth (NOT HEIGHT).
Look at Demo
Building a Cone Step Map
Choose single, load, start, WAIT UNTIL DONE, save
Could NOT get Relaxed cones to work... BUT AN EXISTING PRE-BUILT TEXTURE EXISTS IN Folder “fxcomposer”
From “Relaxed Cone Stepping for Relief Mapping”, GPU Gems 3, Policarpo
Without an enormous number of sampling steps, artifacts will arise...
Conventional Single Stepping: Many Steps Needed To Avoid Skipping Over Obstacle
from Policarpo
Sample cone angle, jump within the cone, repeat
from Policarpo
Relaxed Cone Stepping: Allow Going INSIDE BUT NOT THROUGH (Larger Cone Angles)
from Policarpo
Definition of cone ratio: cone half-width / height
[Diagram: a cone with half-widths w1, w2 at heights h1, h2]
Let r be the cone ratio; r = w1/h1 = w2/h2.
Cone ratio properties
Note: the cone ratio r = w1/h1 ranges from 0 to ∞. Also, r = tan (cone angle).
[Diagram: cones at 0 degrees (r = 0), 45 degrees (r = 1), and 90 degrees (r = ∞)]
For relief mapping, r is clamped to a number between 0 and 1 when building a cone map texture; i.e., if the cone angle is larger than 45 degrees, use 45. Why? Hint: cone map
But we need d (different each step) so we can execute uv += d
[Diagram: successive cone steps d1, d2 along the ray]
We need a way of computing d from the cone ratio r.
Deriving d (so we can say uv += d) from the cone ratio
[Diagram: the stepping vector D leaves uv (at depth uv.z) and crosses a cone of half-width w; depth runs from 0.0 at the top to 1.0 at the bottom; a and b split the remaining depth h]
Let D be the stepping vector. Known values (blue): D, uv, r. All we want is d, but a, b, w are unknown.
Cone ratio: r = w/b
Similar triangles: w/a = |Dxy|/1 and d/a = D/1
Pieces: h = a + b, where h = sampledDepth - uv.z
Derivation of cone step: d = (r h / (|Dxy| + r)) D
(1) w/a = |Dxy|/1   (2) d/a = D/1   (3) h = a + b   (4) r = w/b
From (1) and (4)+(3): w = a|Dxy| and w = rb = r(h - a)
Solve for a:
  a|Dxy| = r(h - a)
  a|Dxy| + ra = r h
  a = r h / (|Dxy| + r)
From (2), solve for d:
  d = aD = (r h / (|Dxy| + r)) D
d and w are interrelated... solve for w first, then a, then d, with all variables interpreted as lengths.
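The resulting scalar step can be sketched in plain Python (CPU-side illustration; r, h, and |Dxy| are made-up inputs): how far along the stepping vector we may safely advance before touching the cone.

```python
# CPU-side sketch of the cone step factor a = r*h / (|Dxy| + r); the actual
# uv advance is this factor times the stepping vector D.
def cone_step(r, h, dxy):
    return r * h / (dxy + r)
```

Looking straight down (|Dxy| = 0) with a 45-degree cone (r = 1) lets us jump the entire remaining depth h in one step; a grazing ray (large |Dxy|) or a narrow cone (small r) shrinks the step.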
Experiment 12: Relief Mapping
(Cone Mapping)
We Need the Cone Ratio IN the Depth Texture
• Instead of a texture with normal + depth: [nx, ny, nz, d] (accessed via sampleNormal)
• We need a texture with the cone ratio too: [nx, ny, r, d] (accessed via sampleTexel)
Since nx² + ny² + nz² = 1, nz = ±sqrt (1 - nx² - ny²); in tangent space, the normal has positive z.
texel = texture pixel
Extending The Library: Sampling Routines
#if (experiment < drawReliefMappedTexture_WITH_CONE_STEPPING)
#define sampleNormal(reliefSampler, textureCoordinate) \
    normalize ((tex2D (reliefSampler, textureCoordinate).xyz * 2 - 1) * float3 (1, -1, 1))
#else
float3 sampleNormal (sampler2D reliefSampler, float2 textureCoordinate) {
  //Obtain nx and ny taking care to convert ny from
  //a right-handed to a left-handed system.
  float2 n = (tex2D (reliefSampler, textureCoordinate).xy * 2 - 1) * float2 (1, -1);
  float nz = sqrt (1.0 - dot (n, n)); //n.n = nx*nx + ny*ny.
  return float3 (n, nz);
}
#endif //experiment

#define sampleTexel(reliefSampler, textureCoordinate) \
    tex2Dlod (reliefSampler, float4 (textureCoordinate, 0, 1));
tex2D: NOT used inside a loop (used only for lighting)
tex2Dlod: used inside a loop
Note two float2 declarations
Extending ParallaxOcclusionMapping.cpp
• The “.cpp” file needs to provide a cone map...

#include "WilfBumpMappingLibrary.choice"
...
#define wilfUseConeMappingTexture (experiment == drawReliefMappedTexture_WITH_CONE_STEPPING)

WCHAR* g_strNMHTextureNames[] = {
  TEXT( "stones_NM_height.tga" ),
  TEXT( "rocks_NM_height.tga" ),
  TEXT( "wall_NM_height.tga" ),
  //TEXT( "four_NM_height.tga" ), //WILF
  //TEXT( "TILE1.TGA" ), //WILF
  //TEXT( "TILE1_CONE.TGA" ), //WILF
  //TEXT( "TILE1_RELAXEDCONE.TGA" ), //WILF
  wilfUseConeMappingTexture ? L"TILE1_CONE.TGA" : L"four_NM_height.tga",
  TEXT( "bump_NM_height.tga" ),
  TEXT( "dent_NM_height.tga" ),
  TEXT( "saint_NM_height.tga" )
};
Tatarchuk’s Version No Longer Works (she expects a height map without cone ratio)
Extending The Library: Computing the coneStep

float coneStepDelta (sampler2D reliefSampler, float3 uv, float Dxy) {
    float4 sample = sampleTexel (reliefSampler, uv.xy); //xy is normal, z is cone ratio, w is depth.
    float coneRatio = sample.z;
    float sampleDepth = sample.w;
    float depthDelta = clamp01 (sampleDepth - uv.z); //Once delta is negative, force it to zero...
    float coneStep = coneRatio * depthDelta / (Dxy + coneRatio);
    return coneStep;
}

cone step = r h / (|Dxy| + r)
The derivation does not deal with negative h.
Extending The Library: Linear Search by Cone Steps

bool coneStepFinished (sampler2D reliefSampler, inout float3 uv, inout float3 uvPrevious,
        float3 steppingVector, float Dxy, int coneSteps) {
    //Cone step until either the limit is reached or no more progress can be made
    //and remember the previous uv value...
    uvPrevious = uv;
    for (int step = 0; step < coneSteps; step++) {
        float amount = coneStepDelta (reliefSampler, uv, Dxy);
        if (amount > 0.0) uvPrevious = uv; else {return true;} //Prevent stepping by 0.
        uv += steppingVector * amount;
    }
    return false;
}

Track the previous uv and quit if we can't move...
Extending The Library: Binary Search Variation
float3 binarySearch (sampler2D reliefSampler, inout float3 uvBelow, in float3 uvAbove, int steps) {//Given uvBelow (too far) and uvAbove (not far enough), iterate "steps" times //to find a more accurate cross-over point.for (int step = 0; step < steps; step++) {
float3 middle = (uvBelow + uvAbove) * 0.5;float4 sample = sampleTexel (reliefSampler, middle.xy);
//xy is normal, z is cone ratio, w is depth.float depth = sample.w;if (middle.z > depth) {
uvBelow = middle; /* too far */} else {
uvAbove = middle; /* not far enough */}
} return (uvBelow + uvAbove) * 0.5; //Return most accurate estimate...
}
This is a more conventional binary search...
Extending The Library: Using Both Searches

void coneStep (in sampler2D reliefSampler, inout float3 uv, inout float3 steppingVector,
        int coneSteps, int binarySteps) {
    float Dxy = length (steppingVector.xy); //Called rayRatio by Policarpo.
    float3 uvPrevious;
    bool done = coneStepFinished (reliefSampler, uv, uvPrevious, steppingVector, Dxy, coneSteps);
    if (!done) done = coneStepFinished (reliefSampler, uv, uvPrevious, steppingVector, Dxy, coneSteps);
    if (!done) uv = binarySearch (reliefSampler, uv, uvPrevious, binarySteps);
}

This is the main routine invoked by the pixel shader...
Look at Demo
Result
Ideas for Future Work
• Relief mapping with bumped silhouettes on 2005 hardware...
Oliveira, Manuel M., and Fabio Policarpo. 2005. "An Efficient Representation for Surface Details." UFRGS Technical Report RP-351, 2005.
Ideas for Future Work
• Dual height field relief mapping.
Policarpo, Fabio, and Manuel M. Oliveira. 2006. "Relief Mapping of Non-Height-Field Surface Details." In Proceedings of the 2006 Symposium on Interactive 3D Graphics and Games, pp. 55–62.