Monte Carlo Integration
COS 323
Acknowledgment: Tom Funkhouser
Integration in 1D
[Figure: plot of f(x) over the interval from x = 0 to x = 1]
$\int_0^1 f(x)\,dx = \ ?$
Slide courtesy of Peter Shirley
We can approximate
[Figure: f(x) and a simpler approximating function g(x) on [0, 1]]
$\int_0^1 f(x)\,dx \approx \int_0^1 g(x)\,dx$
Slide courtesy of Peter Shirley
Or we can average
[Figure: f(x) and its average value E(f(x)) on [0, 1]]
$\int_0^1 f(x)\,dx = E(f(x))$
Slide courtesy of Peter Shirley
Estimating the average
[Figure: random samples x_1 … x_N of f(x), scattered around E(f(x))]
$\int_0^1 f(x)\,dx \approx \frac{1}{N}\sum_{i=1}^{N} f(x_i)$
Slide courtesy of Peter Shirley
Other Domains
[Figure: f(x) and its average value ⟨f⟩ over [a, b]]
$\int_a^b f(x)\,dx \approx \frac{b-a}{N}\sum_{i=1}^{N} f(x_i)$
Slide courtesy of Peter Shirley
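The estimator on these two slides is only a few lines of code. A minimal sketch in Python (the function name and interface are my own, not from the slides):

```python
import random

def mc_integrate(f, a, b, n, seed=0):
    """Monte Carlo estimate of the integral of f over [a, b]:
    (b - a) times the average of f at n uniform random samples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += f(rng.uniform(a, b))
    return (b - a) * total / n

# Example: integrate x^2 over [0, 3]; the exact answer is 9.
estimate = mc_integrate(lambda x: x * x, 0.0, 3.0, 100_000)
```

With 100,000 samples the estimate typically lands within a few hundredths of the exact value; the error shrinks only as 1/sqrt(N), as the variance slides below discuss.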
Benefits of Monte Carlo
• No “exponential explosion” in required number of samples with increase in dimension
• Resistance to badly-behaved functions
Variance
[Figure: samples x_1 … x_N scattered around E(f(x))]
$\mathrm{Var}_N(f(x)) = \frac{1}{N}\sum_{i=1}^{N}\bigl[f(x_i) - E(f(x))\bigr]^2$
Variance
[Figure: samples x_1 … x_N scattered around E(f(x))]
$\mathrm{Var}\bigl(\hat{E}(f(x))\bigr) = \frac{1}{N}\,\mathrm{Var}(f(x))$
Variance decreases as 1/N
Error decreases as 1/sqrt(N)
Variance
• Problem: variance decreases with 1/N
  – Increasing # samples removes noise slowly
[Figure: samples x_1 … x_N scattered around E(f(x))]
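The 1/sqrt(N) error behavior is easy to see empirically. A small sketch (helper name is my own) that measures the RMS error of the unit-interval estimator at two sample counts:

```python
import math
import random

def mc_rms_error(f, exact, n, trials=200, seed=1):
    """RMS error, over many independent trials, of an n-sample
    Monte Carlo estimate of the integral of f over [0, 1]."""
    rng = random.Random(seed)
    sq_err = 0.0
    for _ in range(trials):
        est = sum(f(rng.random()) for _ in range(n)) / n
        sq_err += (est - exact) ** 2
    return math.sqrt(sq_err / trials)

# Quadrupling the sample count should roughly halve the error:
err_100 = mc_rms_error(lambda x: x * x, 1.0 / 3.0, 100)
err_400 = mc_rms_error(lambda x: x * x, 1.0 / 3.0, 400)
```

The measured ratio err_100 / err_400 comes out near 2, matching the 1/sqrt(N) prediction.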
Variance Reduction Techniques
• Problem: variance decreases with 1/N
  – Increasing # samples removes noise slowly
• Variance reduction:
  – Stratified sampling
  – Importance sampling
Stratified Sampling
• Estimate subdomains separately
[Figure: domain split into subdomains, each with its own estimate E_k(f(x))]
Arvo
Stratified Sampling
• This is still unbiased
$F_N = \frac{1}{N}\sum_{i=1}^{N} f(x_i) = \sum_{k=1}^{M} \frac{N_k}{N}\,F_{N_k}$
[Figure: domain split into subdomains, each with its own estimate E_k(f(x))]
Stratified Sampling
• Less overall variance if less variance in subdomains
$\mathrm{Var}(F_N) = \sum_{k=1}^{M} \left(\frac{N_k}{N}\right)^2 \mathrm{Var}(F_{N_k})$
[Figure: domain split into subdomains, each with its own estimate E_k(f(x))]
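As an illustration of these slides, with one uniform sample per equal-width stratum the stratified estimator reduces to the following sketch (function name is my own):

```python
import random

def stratified_estimate(f, n, seed=0):
    """Estimate the integral of f over [0, 1] by splitting the
    domain into n equal strata and drawing one uniform sample
    from each stratum [k/n, (k+1)/n)."""
    rng = random.Random(seed)
    total = 0.0
    for k in range(n):
        x = (k + rng.random()) / n   # uniform within stratum k
        total += f(x)
    return total / n

estimate = stratified_estimate(lambda x: x * x, 1000)
```

For a smooth integrand the variance within each narrow stratum is tiny, so this converges far faster than plain uniform sampling at the same N.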
Importance Sampling
• Put more samples where f(x) is bigger
$\int f(x)\,dx \approx \frac{1}{N}\sum_{i=1}^{N} Y_i, \qquad Y_i = \frac{f(x_i)}{p(x_i)}$
[Figure: samples x_1 … x_N scattered around E(f(x))]
Importance Sampling
• This is still unbiased
$E(Y_i) = \int Y(x)\,p(x)\,dx = \int \frac{f(x)}{p(x)}\,p(x)\,dx = \int f(x)\,dx$
for all N
[Figure: samples x_1 … x_N scattered around E(f(x))]
Importance Sampling
• Zero variance if p(x) ~ f(x)
$p(x) = c\,f(x) \;\Rightarrow\; Y_i = \frac{f(x_i)}{p(x_i)} = \frac{1}{c}, \qquad \mathrm{Var}(Y_i) = 0$
Less variance with better importance sampling
[Figure: samples x_1 … x_N scattered around E(f(x))]
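A concrete instance of the estimator on these slides, using the illustrative choice f(x) = 3x² on [0, 1] (whose integral is exactly 1) with the well-matched density p(x) = 2x; the function name is my own, and x ~ p is drawn by inverting the CDF (x = sqrt(u)):

```python
import math
import random

def importance_estimate(n, seed=0):
    """Estimate the integral of f(x) = 3x^2 over [0, 1] (exactly 1)
    by importance sampling with the density p(x) = 2x."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = math.sqrt(rng.random())          # x drawn with density p(x) = 2x
        total += (3.0 * x * x) / (2.0 * x)   # Y_i = f(x_i) / p(x_i)
    return total / n

estimate = importance_estimate(100_000)
```

Because p rises where f is big, Y_i = 1.5 x has much lower variance than f sampled uniformly; if we could sample p(x) = 3x² exactly, every Y_i would equal 1 and the variance would be zero, as the slide states.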
Generating Random Points
• Uniform distribution:
  – Use pseudorandom number generator
[Figure: flat probability density over [0, 1]]
Pseudorandom Numbers
• Deterministic, but have statistical properties resembling true random numbers
• Common approach: each successive pseudorandom number is a function of the previous
• Linear congruential method:
$x_{n+1} = (a\,x_n + b) \bmod c$
  – Choose constants carefully, e.g.
    a = 1664525
    b = 1013904223
    c = 2^32 – 1
Pseudorandom Numbers
• To get floating-point numbers in [0..1), divide integer numbers by c + 1
• To get integers in range [u..v], divide by (c+1)/(v–u+1), truncate, and add u
  – Better statistics than using modulo (v–u+1)
  – Only works if u and v are small compared to c
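Putting the two slides together: a minimal sketch of a linear congruential generator with the constants above, plus the divide-by-(c + 1) scaling to floats (helper names are my own):

```python
def lcg(seed, a=1664525, b=1013904223, c=2**32 - 1):
    """Generator for the sequence x_{n+1} = (a * x_n + b) mod c,
    using the constants from the slide."""
    x = seed
    while True:
        x = (a * x + b) % c
        yield x

def lcg_floats(seed, n, c=2**32 - 1):
    """First n pseudorandom floats in [0, 1): divide each
    integer in the sequence by c + 1."""
    g = lcg(seed)
    return [next(g) / (c + 1) for _ in range(n)]

values = lcg_floats(42, 5)
```

The sequence is fully deterministic: the same seed always reproduces the same stream, which is exactly the property the slide describes.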
Generating Random Points
• Uniform distribution:
  – Use pseudorandom number generator
[Figure: flat probability density over [0, 1]]
Importance Sampling
• Specific probability distribution:
  – Function inversion
  – Rejection
[Figure: target density f(x)]
Importance Sampling
• “Inversion method”
  – Integrate f(x): Cumulative Distribution Function
[Figure: density f(x) and its CDF $\int f(x)\,dx$, rising from 0 to 1]
Importance Sampling
• “Inversion method”
  – Integrate f(x): Cumulative Distribution Function
  – Invert CDF, apply to uniform random variable
[Figure: density f(x) and its CDF $\int f(x)\,dx$, rising from 0 to 1]
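A concrete instance of the inversion method, for the illustrative density p(x) = 2x on [0, 1]: the CDF is P(x) = x², so inverting gives x = sqrt(u) for uniform u (function name is my own):

```python
import math
import random

def sample_linear_density(n, seed=0):
    """Draw n samples from p(x) = 2x on [0, 1] by inversion:
    the CDF is P(x) = x^2, so x = P^{-1}(u) = sqrt(u)."""
    rng = random.Random(seed)
    return [math.sqrt(rng.random()) for _ in range(n)]

samples = sample_linear_density(100_000)
mean = sum(samples) / len(samples)   # should be near E[x] = 2/3
```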
Importance Sampling
• Specific probability distribution:
  – Function inversion
  – Rejection
[Figure: target density f(x)]
Generating Random Points
• “Rejection method”
  – Generate random (x, y) pairs, y between 0 and max(f(x))
Generating Random Points
• “Rejection method”
  – Generate random (x, y) pairs, y between 0 and max(f(x))
  – Keep only samples where y < f(x)
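The two steps on this slide, sketched for a density proportional to f on [0, 1] (function name and interface are my own):

```python
import random

def rejection_sample(f, fmax, n, seed=0):
    """Sample from the density proportional to f on [0, 1] by
    rejection: draw (x, y) uniformly in [0, 1] x [0, fmax] and
    keep x only when y < f(x)."""
    rng = random.Random(seed)
    samples = []
    while len(samples) < n:
        x = rng.random()
        y = rng.uniform(0.0, fmax)
        if y < f(x):          # accept: the point lies under the curve
            samples.append(x)
    return samples

# Example: f(x) = 2x, so accepted samples have density 2x on [0, 1].
samples = rejection_sample(lambda x: 2.0 * x, 2.0, 50_000)
```

Here half the candidate pairs are rejected (the area under f is half the bounding box), which is why inversion is usually preferred when the CDF can be inverted in closed form.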
Monte Carlo in Computer Graphics
or, Solving Integral Equations for Fun and Profit
or, Ugly Equations, Pretty Pictures
Computer Graphics Pipeline
[Figure: pipeline stages: Modeling, Animation, Lighting and Reflectance, Rendering]
Rendering Equation
[Figure: light source, viewer, and surface point x with normal n; incoming direction ω, outgoing direction ω']
$L_o(x, \omega') = L_e(x, \omega') + \int L_i(x, \omega)\, f_r(x, \omega, \omega')\,(\omega \cdot n)\, d\omega$
[Kajiya 1986]
Rendering Equation
• This is an integral equation
• Hard to solve!
  – Can’t solve this in closed form
  – Simulate complex phenomena
$L_o(x, \omega') = L_e(x, \omega') + \int L_i(x, \omega)\, f_r(x, \omega, \omega')\,(\omega \cdot n)\, d\omega$
Heinrich
Rendering Equation
• This is an integral equation
• Hard to solve!
  – Can’t solve this in closed form
  – Simulate complex phenomena
$L_o(x, \omega') = L_e(x, \omega') + \int L_i(x, \omega)\, f_r(x, \omega, \omega')\,(\omega \cdot n)\, d\omega$
Jensen
Monte Carlo Integration
[Figure: f(x) with random samples]
$\int_0^1 f(x)\,dx \approx \frac{1}{N}\sum_{i=1}^{N} f(x_i)$
Shirley
Monte Carlo Path Tracing
Estimate integral for each pixel by random sampling
Monte Carlo Global Illumination
• Rendering = integration
  – Antialiasing
  – Soft shadows
  – Indirect illumination
  – Caustics
Monte Carlo Global Illumination
• Rendering = integration
  – Antialiasing
  – Soft shadows
  – Indirect illumination
  – Caustics
[Figure: eye, pixel P, and surface point x]
$L_P = \int_S L(x \to e)\, dA$
Monte Carlo Global Illumination
• Rendering = integration
  – Antialiasing
  – Soft shadows
  – Indirect illumination
  – Caustics
[Figure: eye, light, and surface points x and x']
$L(x' \to \omega) = L_e(x' \to \omega) + \int_S f_r(x \to x' \to \omega)\, L_e(x \to x')\, V(x, x')\, G(x, x')\, dA$
Monte Carlo Global Illumination
• Rendering = integration
  – Antialiasing
  – Soft shadows
  – Indirect illumination
  – Caustics
$L(x' \to \omega) = L_e(x' \to \omega) + \int_S f_r(x \to x' \to \omega)\, L_e(x \to x')\, V(x, x')\, G(x, x')\, dA$
Herf
Monte Carlo Global Illumination
• Rendering = integration
  – Antialiasing
  – Soft shadows
  – Indirect illumination
  – Caustics
[Figure: eye, light, surface point x, a second surface, and incoming direction ω']
$L_o(x, \omega) = L_e(x, \omega) + \int f_r(x, \omega', \omega)\, L_i(x, \omega')\,(\omega' \cdot n)\, d\omega'$
Monte Carlo Global Illumination
• Rendering = integration
  – Antialiasing
  – Soft shadows
  – Indirect illumination
  – Caustics
$L_o(x, \omega) = L_e(x, \omega) + \int f_r(x, \omega', \omega)\, L_i(x, \omega')\,(\omega' \cdot n)\, d\omega'$
Debevec
Monte Carlo Global Illumination
• Rendering = integration
  – Antialiasing
  – Soft shadows
  – Indirect illumination
  – Caustics
[Figure: eye, light, diffuse surface point x, specular surface, and incoming direction ω']
$L_o(x, \omega) = L_e(x, \omega) + \int f_r(x, \omega', \omega)\, L_i(x, \omega')\,(\omega' \cdot n)\, d\omega'$
Monte Carlo Global Illumination
• Rendering = integration
  – Antialiasing
  – Soft shadows
  – Indirect illumination
  – Caustics
$L_o(x, \omega) = L_e(x, \omega) + \int f_r(x, \omega', \omega)\, L_i(x, \omega')\,(\omega' \cdot n)\, d\omega'$
Jensen
Challenge
• Rendering integrals are difficult to evaluate
  – Multiple dimensions
  – Discontinuities
    • Partial occluders
    • Highlights
    • Caustics
$L(x' \to \omega) = L_e(x' \to \omega) + \int_S f_r(x \to x' \to \omega)\, L_e(x \to x')\, V(x, x')\, G(x, x')\, dA$
Drettakis
Challenge
• Rendering integrals are difficult to evaluate
  – Multiple dimensions
  – Discontinuities
    • Partial occluders
    • Highlights
    • Caustics
$L(x' \to \omega) = L_e(x' \to \omega) + \int_S f_r(x \to x' \to \omega)\, L_e(x \to x')\, V(x, x')\, G(x, x')\, dA$
Jensen
Monte Carlo Path Tracing
Big diffuse light source, 20 minutes
Jensen
Monte Carlo Path Tracing
1000 paths/pixel
Jensen
Monte Carlo Path Tracing
• Drawback: can be noisy unless lots of paths simulated
• 40 paths per pixel:
Lawrence
Monte Carlo Path Tracing
• Drawback: can be noisy unless lots of paths simulated
• 1200 paths per pixel:
Lawrence
Reducing Variance
• Observation: some paths are more important (carry more energy) than others
  – For example, shiny surfaces reflect more light in the ideal “mirror” direction
• Idea: put more samples where f(x) is bigger
Importance Sampling
• Idea: put more samples where f(x) is bigger
$\int_0^1 f(x)\,dx \approx \frac{1}{N}\sum_{i=1}^{N} Y_i, \qquad Y_i = \frac{f(x_i)}{p(x_i)}$
Effect of Importance Sampling
• Less noise at a given number of samples
• Equivalently, need to simulate fewer paths for some desired limit of noise
[Figure comparison: uniform random sampling vs. importance sampling]