
REAL-TIME RENDERING
IMAGE BASED RENDERING, PART 2

Mihai Aldén & Fredrik Salomonsson

Thursday 9th September, 2010

Abstract

In this report we will go through how to light synthetic objects with light captured from a real world scene, using HDR data stored in a longitude-latitude map. This project is done in the course TNM083 Image Based Rendering. The main purpose of this project is to implement a real-time renderer.

We have implemented several techniques from the ground up: HDR image lighting, spherical harmonics, screen space ambient occlusion, blooming and depth darkening.


1 Introduction

Photorealism is a desired quality when rendering computer generated scenes and objects for movies and many modern computer games. Many different details need to be considered when rendering photorealistic images, but the most important aspect is the lighting. Light plays a key role in how we perceive our surroundings, since the human visual system is very sensitive to light. Global illumination is a term that describes light propagation in an environment. In the field of computer graphics many different methods have been developed over the years to simulate the effects of global illumination. The most general techniques try to solve the rendering equation.

The rendering equation, introduced by Kajiya in 1986 [8], is an integral equation which describes how light energy is distributed within an environment. The equation can be formulated as follows:

L(x, ω_r) = L_e(x, ω_r) + ∫_Ω f_r(x, ω_r, ω_i) L_i(x, ω_i) cos(θ_i) dω_i    (1)

where

L is the radiance returned from the point x in direction ω_r.

L_e is the radiance emitted from x in direction ω_r.

The integral describes the light radiance reflected from x in direction ω_r.

f_r is the surface bidirectional reflectance distribution function (BRDF) at x.

L_i is the incoming radiance at x from direction ω_i.

θ_i is the angle between the normal and the incoming direction ω_i.

Solving this equation analytically is not possible; we can only approximate the result using discrete samples. Since light is additive, it is common to divide the light into two parts: specular light I_s and Lambertian/diffuse light I_d. Each part is calculated separately and the results are combined to give the final radiance L.

The specular light I_s is often calculated using a basic ray-tracing method like the one introduced by Whitted in 1980 [18]. This method traces light rays backwards from the camera into the scene in order to find an intersection point x. If x is found, secondary rays are traced towards each light source in the scene in order to calculate the contribution of each light. If these rays intersect other scene objects there is no contribution from that particular light source, and the point becomes shadowed. Then, depending on the BRDF at point x, we can either abort the computation or reflect/refract the ray and repeat the procedure.

To enhance the realism of the rendered image we also have to consider the indirect illumination caused by the diffuse reflectors in the scene. Several methods exist for approximating this diffuse light I_d, such as the radiosity method introduced by Goral et al. [6] and the Monte Carlo method [3]. The Monte Carlo method approximates integrals using discrete random samples and can be formulated as follows:

I = ∫_a^b f(x) dx    (2)

where the integral can be approximated with the Monte Carlo estimation function:

⟨I⟩ = (1/N) Σ_{i=1}^{N} f(x_i) / p(x_i)    (3)

A nice property of this estimator is that the result improves as the number of samples N increases: the variance decreases in proportion to 1/N. This means that the more computational power we spend on the solver, the more accurate the result. Unfortunately this kind of approach is not suitable for real-time rendering.
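To make the estimator concrete, below is a minimal C++ sketch of equation 3 applied to a 1D integral with uniform sampling; the function names are illustrative and not part of our renderer.

```cpp
#include <cstdio>
#include <cstdlib>
#include <cmath>

const double PI = 3.14159265358979323846;

// Uniform canonical random number in [0, 1].
double canonical() { return std::rand() / (double)RAND_MAX; }

// Estimate I = integral of f over [a, b] with N uniform samples.
// p(x) = 1/(b - a) for uniform sampling, so f(x)/p(x) = f(x)*(b - a).
double monteCarlo(double (*f)(double), double a, double b, int N) {
    double sum = 0.0;
    for (int i = 0; i < N; ++i) {
        double x = a + (b - a) * canonical();
        sum += f(x) * (b - a);            // f(x_i) / p(x_i)
    }
    return sum / N;                       // <I> = (1/N) * sum
}

int main() {
    // Integral of sin(x) over [0, pi] is 2; the estimate converges as N grows.
    std::printf("%f\n", monteCarlo([](double x) { return std::sin(x); }, 0.0, PI, 100000));
}
```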

One real-time rendering technique that has become very popular in recent years is HDR image based rendering. This method is also aimed at simulating global illumination but takes a different approach. Instead of trying to solve the rendering equation 1 to obtain


the light information at a certain point, the light information is instead sampled in real world scenes. This approach captures the incident light at a certain point, which can later be used to light synthetic computer graphics objects. One major issue with this method is that it neglects the spatial properties of light: the single sampled point is used to render all points on the computer generated model. Light energy radiates in all directions and interacts with the environment, which results in spatially varying illumination. These volumetric properties give rise to lighting effects such as shadows and beams of light, which are entirely missed by this method.

Even though no spatial variation is accounted for, we chose to implement this method because it produces good results and is straightforward to implement. For more information on how to capture and use spatially varying illumination, please consult [16].

2 Method

We produced our own HDR panoramic images by photographing a reflective sphere at different exposures. This process is explained in the first part of our project, please see [1].

2.1 Environment lighting

Most current graphics hardware has built-in support for environment lighting using cube maps. Omnidirectional HDR images of real world environments can easily be sampled into cube maps, as we already did in the previous part of the project [1]. This allows for complex shading effects which approximate how synthetic objects would look if placed inside a real environment; this rendering technique is called image based rendering.

In order to perform image based rendering we have to separate the light into specular and Lambertian components, as explained in the previous section. The specular light is represented using environment maps, which is a very straightforward technique. The diffuse light is harder: there are several ways of calculating it, and we chose to approximate it using a technique called spherical harmonics lighting.

2.2 Spherical harmonics lighting

In this section we will be looking at a special set of spherical harmonics, SH functions, that form an orthogonal system. Just like the Fourier basis, which decomposes any periodic function into a sum of oscillating sine and cosine functions in one- or two-dimensional space, the spherical harmonics basis is a similar technique defined on the surface of a sphere. This orthogonal basis can be used to project spherical functions, like the function that describes the incident light at a point in space, into a set of spherical harmonics coefficients. The SH coefficients can later be used to reconstruct a band-limited approximation of the projected function. Because of the band limitation and the approximating nature of this technique, it is only practical for representing low-frequency information like diffuse light. The technique is very powerful in image based rendering applications, because it allows us to project any captured environment map into a set of coefficients; this representation drastically reduces the memory footprint and it also reduces the shading computation. The general form of SH functions is defined on complex numbers, but we are only concerned with the real spherical harmonics. Below follows an overview of the theory and implementation; for a more thorough description please see [7] and [14].

2.2.1 Orthogonal basis functions

The basis functions B_i are small pieces of information that can be scaled and combined to either reproduce the original function f (if an infinite series of basis functions is combined, or if the original function f is band limited) or an approximation f′ of the original function (if a finite series of basis functions is used). In order to approximate a function with basis functions we first have to find the scalar weights that define how much each basis function is like the original function. This is done by integrating the product f B_i, also known as a projection.


Figure 1: Acquiring the scalar weights, image courtesy of [7].

These weights can be used to reconstruct the original function by scaling each basis function with the appropriate weight and accumulating the results:

Figure 2: Reconstruction, image courtesy of [7].

The spherical harmonics basis is based on a special group of basis functions called orthogonal polynomials, which have the following property:

∫_{-1}^{1} F_m(x) F_n(x) dx = { 0 for n ≠ m; c for n = m }    (4)

meaning that when integrating the product of two orthogonal polynomials, the operation either returns 0 if they are different or a constant value c if they are the same. This property is further constrained to return either 0 or 1, which gives a sub-family of functions called Legendre polynomials, see [17].

The Associated Legendre Polynomials, P_l^m, are defined over the [−1, 1] range and return real numbers. The argument l is the band index and takes non-negative integer values, and the argument m takes integer values in the range [0, l]. These arguments are used to define different bands of functions. The Associated Legendre Polynomials are a central part of the spherical harmonics orthogonal basis, and we have to be able to generate different polynomials in an efficient way. We do this with a recursive scheme that can generate the desired polynomial from earlier results in the series; this scheme is explained in [7], and a code sketch follows at the end of section 2.2.3.

Figure 3: The first six associated Legendre polynomials, image courtesy of [7].

2.2.2 Spherical functions

Spherical functions map spherical coordinates (θ, φ) to a scalar value. These functions can be used to express any circularly symmetric function in terms of Associated Legendre Polynomials, by mapping θ into the [−1, 1] range using cos θ and setting φ = 0.

2.2.3 Spherical harmonics

As we previously saw, the Associated Legendre Polynomials can be used to express any piecewise continuous function over the [−1, 1] interval, and spherical functions can be used to express circularly symmetric functions. These methods work well when we want to represent one-dimensional functions that only depend on one coordinate axis, but in order to express omnidirectional functions, like the incident light at a point, we need to provide orthogonality for non-circularly symmetric functions. This is realized by combining the Associated Legendre Polynomials for the θ dependence with sine and cosine functions for the φ dependent part. The spherical harmonics functions are denoted as:

Y_l^m(θ, φ) = √2 K_l^m cos(mφ) P_l^m(cos θ)        for m > 0
Y_l^m(θ, φ) = √2 K_l^m sin(−mφ) P_l^{−m}(cos θ)    for m < 0
Y_l^0(θ, φ) = K_l^0 P_l^0(cos θ)                   for m = 0    (5)


where θ ∈ [0, π] is the polar (colatitudinal) coordinate, φ ∈ [0, 2π] is the azimuthal (longitudinal) coordinate, P is the Associated Legendre Polynomial, and K is a scaling factor that normalizes the function:

K_l^m = √( (2l + 1)(l − |m|)! / (4π (l + |m|)!) )    (6)

where l ∈ N and −l ≤ m ≤ l.

Figure 4: Demonstrating the dependencies of both axis,image courtesy of [14].

More theory can be found in [14].
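To make equations 5 and 6 concrete, the following C++ sketch generates the Associated Legendre Polynomials with the recurrence scheme from [7] and evaluates the real SH basis on top; the function names are ours, and no error checking is done on the arguments.

```cpp
#include <cmath>

const double PI = 3.14159265358979323846;

// P_l^m(x) via the standard three recurrences:
//   P_m^m     = (-1)^m (2m-1)!! (1-x^2)^(m/2)
//   P_{m+1}^m = x (2m+1) P_m^m
//   (l-m) P_l^m = x (2l-1) P_{l-1}^m - (l+m-1) P_{l-2}^m
double legendreP(int l, int m, double x) {
    double pmm = 1.0;
    if (m > 0) {
        double somx2 = std::sqrt((1.0 - x) * (1.0 + x));
        double fact = 1.0;
        for (int i = 1; i <= m; ++i) { pmm *= -fact * somx2; fact += 2.0; }
    }
    if (l == m) return pmm;
    double pmmp1 = x * (2.0 * m + 1.0) * pmm;
    if (l == m + 1) return pmmp1;
    double pll = 0.0;
    for (int ll = m + 2; ll <= l; ++ll) {
        pll = ((2.0 * ll - 1.0) * x * pmmp1 - (ll + m - 1.0) * pmm) / (ll - m);
        pmm = pmmp1;
        pmmp1 = pll;
    }
    return pll;
}

double factorial(int n) {
    double r = 1.0;
    for (int i = 2; i <= n; ++i) r *= i;
    return r;
}

// Normalization constant K_l^m from equation (6).
double K(int l, int m) {
    int am = m < 0 ? -m : m;
    return std::sqrt((2.0 * l + 1.0) * factorial(l - am) /
                     (4.0 * PI * factorial(l + am)));
}

// Real spherical harmonic Y_l^m(theta, phi) from equation (5).
double SH(int l, int m, double theta, double phi) {
    const double sqrt2 = std::sqrt(2.0);
    if (m > 0) return sqrt2 * K(l, m) * std::cos(m * phi) * legendreP(l, m, std::cos(theta));
    if (m < 0) return sqrt2 * K(l, m) * std::sin(-m * phi) * legendreP(l, -m, std::cos(theta));
    return K(l, 0) * legendreP(l, 0, std::cos(theta));
}
```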

2.2.4 Projection

The orthogonal spherical harmonics basis can be used to project a spherical function into a set of spherical harmonics coefficients. This is done by integrating the product of the spherical function f and the spherical harmonics function Y:

c_l^m = ∫_S f(s) Y_l^m(s) ds    (7)

In order to reconstruct the approximated function f′ we just need to reverse the process:

f′ = Σ_{l=0}^{n−1} Σ_{m=−l}^{l} c_l^m Y_l^m(s) = Σ_{i=0}^{n²−1} c_i Y_i(s)    (8)

where the flattened index is i = l(l + 1) + m.

2.2.5 Monte-Carlo sampling

The projection equation 7 is stated in a continuous form, but in computer graphics we only work with discrete functions. Therefore it is necessary to use a Monte Carlo estimator to generate the spherical harmonic coefficients for our function.

In order to evaluate the Monte Carlo estimator, equation 3, we need a way of randomly sampling the desired function. Because this function defines the incident light at a point p in space from all directions, what we ultimately have to do in order to randomly sample it is to generate random directions from p and use them to sample the incident light function.

The most straightforward way of generating these random directions is to first generate two independent canonical random numbers¹ ξ_x and ξ_y and map them into spherical coordinates:

θ = 2 cos⁻¹(√(1 − ξ_x))
φ = 2π ξ_y    (9)

These random directions all have the same probability, p(x_i) = 1/4π in equation 3. Generating random directions this way produces a lot of variance, so in order to lower the variance and speed up the convergence of the Monte Carlo estimator we have implemented a technique called stratified sampling: we divide the surface of the unit sphere into N×N cells and pick a random point inside each cell. Applying this process to the incident light function produces a set of SH coefficients. Each coefficient contains one component per spectral channel, meaning that if a normal RGB color function is used the SH coefficients are three-dimensional vectors.

¹ Canonical random numbers are in the [0, 1] range.
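Combining equations 3, 7 and 9, the stratified projection can be sketched as follows. `lightFn` is a hypothetical stand-in for the incident light function, and `SH` refers to the basis evaluation sketched in section 2.2.3; a real implementation would sample the environment map instead.

```cpp
#include <vector>
#include <cstdlib>
#include <cmath>

const double PI = 3.14159265358979323846;

double canonical() { return std::rand() / (double)RAND_MAX; }
double SH(int l, int m, double theta, double phi);  // from the previous sketch

// Projects lightFn onto the first bands^2 SH coefficients (equation 7),
// one scalar per coefficient; an RGB light function would use 3-vectors.
std::vector<double> projectSH(double (*lightFn)(double, double),
                              int bands, int N) {
    std::vector<double> coeffs(bands * bands, 0.0);
    const double weight = 4.0 * PI;                  // 1 / p(x_i)
    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < N; ++j) {
            // Jittered canonical numbers, stratified over an N x N grid.
            double xi_x = (i + canonical()) / N;
            double xi_y = (j + canonical()) / N;
            double theta = 2.0 * std::acos(std::sqrt(1.0 - xi_x));  // eq. (9)
            double phi   = 2.0 * PI * xi_y;
            double L = lightFn(theta, phi);
            for (int l = 0; l < bands; ++l)
                for (int m = -l; m <= l; ++m)
                    coeffs[l * (l + 1) + m] += L * SH(l, m, theta, phi);
        }
    }
    for (double& c : coeffs) c *= weight / (N * N);  // Monte Carlo average
    return coeffs;
}
```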


2.3 Materials

In the real world we see objects because they reflect or emit light; this is also true in computer graphics. How objects reflect the incoming light is defined by the surface material, which determines how and whether the incident light is scattered or absorbed. The material plays a very important part in how we perceive an object, and it is therefore very important to have good material approximations in order to produce realistic images.

2.3.1 Diffuse

A diffuse material scatters light uniformly in all directions; this is commonly defined with the Lambertian reflection model:

f_d(x, ω_i, ω_r) = (ρ_d / π) k    (10)

where k is a spectral weight. To reproduce the diffuse reflection we preprocess the environment map to produce coefficients that define the frequency space representation of the image over the unit sphere. We calculate nine 3-component coefficients for each light probe, and approximate the diffuse reflection using a quadratic polynomial that depends on the SH coefficients and the vertex normal, equation 11; this is implemented in a shader.

f_d(x, ω_i, ω_r) = k ( c_1 L_{22} (x² − y²) + c_3 L_{20} z² + c_4 L_{00} − c_5 L_{20}
    + 2 c_1 (L_{2,−2} xy + L_{21} xz + L_{2,−1} yz)
    + 2 c_2 (L_{11} x + L_{1,−1} y + L_{10} z) )    (11)

where L denotes the SH coefficients and c_1 to c_5 are the polynomial constants:

c_1 = 0.429043, c_2 = 0.511664, c_3 = 0.743125, c_4 = 0.886227, c_5 = 0.247708    (12)

This approximation is thoroughly described in [13].
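A possible evaluation of equation 11, written as plain C++ for brevity (ours runs in a shader); the layout of the nine coefficients in `L` is an assumption and must match however the projection stage stores them.

```cpp
// Quadratic SH irradiance polynomial of equation (11) with the constants
// of equation (12), for one color channel. Assumed coefficient layout:
// L[0..8] = L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22.
struct Vec3 { double x, y, z; };

double shIrradiance(const double L[9], Vec3 n) {
    const double c1 = 0.429043, c2 = 0.511664, c3 = 0.743125,
                 c4 = 0.886227, c5 = 0.247708;
    double x = n.x, y = n.y, z = n.z;
    return c1 * L[8] * (x * x - y * y)                      // c1 L22 (x^2 - y^2)
         + c3 * L[6] * z * z                                // c3 L20 z^2
         + c4 * L[0]                                        // c4 L00
         - c5 * L[6]                                        // c5 L20
         + 2.0 * c1 * (L[4] * x * y + L[7] * x * z + L[5] * y * z)
         + 2.0 * c2 * (L[3] * x + L[1] * y + L[2] * z);
}
```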

2.3.2 Specular reflection

The reflected intensity of a specular surface can be calculated by using the reflection vector to index into the environment map. The perfect reflection vector is defined as follows:

R = 2(E · N)N − E    (13)

where N is the normal and E is the view vector, pointing from the surface toward the eye.

2.3.3 Specular refraction

Refractive materials also reflect some of the light. The Fresnel equation 14 describes the fraction of light that is reflected or refracted when moving from a medium with refractive index n_1 into a second medium with a different refractive index n_2:

f = (1 − n_1/n_2)² / (1 + n_1/n_2)²    (14)

The reflectance ratio can now be calculated by:

F = f + (1 − f)(1 − E · N)^5    (15)

where N is the normal and E is the view vector.

The refraction vector is calculated using the following equation:

T = −(n_1/n_2) E + N ( (n_1/n_2)(N · E) − √( 1 − (n_1/n_2)² (1 − (N · E)²) ) )    (16)

To calculate the final intensity value we also need to calculate the reflection vector R and use both these vectors to index into the environment map. The obtained refractive and reflective pixel values are then combined as follows:

I = F R + (1 − F) T    (17)

For more information please see [13].
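The following sketch collects equations 13 to 17, using the incident-vector convention of GLSL's reflect and refract functions, where I = −E points from the eye toward the surface; the environment map lookups are omitted, so the functions only compute the vectors and the blend factor.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Equation (13), with I = -E as the incident vector.
Vec3 reflectDir(Vec3 I, Vec3 N) {
    return I - 2.0 * dot(I, N) * N;
}

// Equation (16), eta = n1/n2; returns false on total internal reflection.
bool refractDir(Vec3 I, Vec3 N, double eta, Vec3& T) {
    double k = 1.0 - eta * eta * (1.0 - dot(N, I) * dot(N, I));
    if (k < 0.0) return false;
    T = eta * I - (eta * dot(N, I) + std::sqrt(k)) * N;
    return true;
}

// Equations (14)-(15). With I incident, 1 - E.N equals 1 + I.N.
double fresnel(Vec3 I, Vec3 N, double n1, double n2) {
    double f = (1.0 - n1 / n2) * (1.0 - n1 / n2) /
               ((1.0 + n1 / n2) * (1.0 + n1 / n2));
    double c = 1.0 + dot(I, N);
    return f + (1.0 - f) * c * c * c * c * c;
}

// Final blend, equation (17): color = F * env(R) + (1 - F) * env(T),
// where env() is the environment map lookup of the renderer.
```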

2.3.4 Chromatic dispersion

Chromatic dispersion is a more general refraction method: we compute a different refraction for each wavelength of light (RGB) by using a slightly different refractive index for each component, see [13].
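A small sketch of the idea, reusing refractDir from the sketch above; the per-channel index ratios are assumed values for illustration.

```cpp
struct Vec3 { double x, y, z; };
bool refractDir(Vec3 I, Vec3 N, double eta, Vec3& T);  // from the previous sketch

// One refraction direction per RGB channel; each is used to index the
// environment map separately. Total internal reflection ignored for brevity.
void dispersionDirs(Vec3 I, Vec3 N, Vec3 out[3]) {
    const double etaRGB[3] = {0.65, 0.67, 0.69};  // assumed n1/n2 per channel
    for (int c = 0; c < 3; ++c)
        refractDir(I, N, etaRGB[c], out[c]);
}
```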


2.4 Postprocessing

Before entering the post-processing stage we first render the scene into a texture, together with the other data we need for the different passes; what data we store, and how, is described in the corresponding subsections.

2.4.1 Screen space ambient occlusion

Ambient occlusion is a shading technique that adds realism to the rendered scene by darkening surfaces that are partially occluded, either by the object itself or by nearby objects. As Bunnell describes in [2], Lambertian objects that are lit by multiple light sources can look flat and unrealistic. But when taking into consideration the light attenuation due to occlusion in the rendered scene, soft shadows are created, making the scene look much more vivid and giving it perceived depth.

The ambient occlusion method calculates the occlusion at each point by ray tracing multiple random directions in order to see if any surface is occluding that particular point. This can be very costly, because many directions need to be traced in order to reduce the variance and obtain a noise-free result, which makes the approach impractical in a real-time application. Therefore other approaches have been developed to make this run in real time; examples are Nvidia's Ambient Occlusion Volumes, AOV [10], and Crytek's Screen Space Ambient Occlusion, SSAO [11]. We chose to implement a technique based on Crytek's SSAO, by Filion et al. [4]. This method operates in screen space, meaning that it performs the calculations on a number of pixels instead of on the actual geometry.

SSAO is a post-process technique, and the method requires knowledge of the normals and the pixel depths in the scene. We store this data in a texture called a G-buffer (geometry buffer), see [4] for more details about G-buffers; the normal is stored in the RGB channels and the pixel depth is linearized and stored in the alpha channel. The reason for linearizing the pixel depth is that the depth buffer stores depth values in such a way that the depth resolution is good near the camera but quickly degrades farther away into the scene, see the Redbook [15] for more detail on the depth buffer.

As Filion et al. mention in [4], the basic SSAO method is fairly straightforward to implement, but some details need to be added to make the method produce good results.

SSAO works roughly the same way as the original method, but instead of ray tracing directions in world coordinates, where all geometry needs to be analyzed, SSAO takes multiple random directions in screen space and thus only operates on the geometry that is visible to the viewer, which reduces the computation time. The number of random directions, which are still in 3D, lies between 8 and 32 per pixel, depending on how much the computer can handle while still producing frames in real time. We chose 16 random directions, because this is a good trade-off between speed and visual quality. One naive approach for storing these random directions is to simply store everything in a texture and upload it to the GPU; this would result in an enormous texture that eats up all of the VRAM. Instead Filion et al. suggest generating a random normal for each pixel and separately generating 16 random directions using the same stratified sampling method as in the spherical harmonics section; these random directions are also scaled to a random length between 0.5 and 1.0. The reason for scaling the length is to avoid sampling too close to the source, which would form clusters. These two maps are passed to the GPU.

On the GPU, 16 unique random directions are created for each pixel by taking the pixel's corresponding random normal and using it to reflect each stratified direction, figure 5. These directions are then multiplied by a user-controlled weight, which corresponds to the sampling area around the pixel. There is one problem with these directions: they are evenly distributed over the unit sphere, so half of them point inwards, which would make the object self-occluding all the time.

7

Page 8: R E A L - T I M E R E N D E R I N G · Photorealism is a desired quality when rendering computer generated scenes and objects for movies and many modern computer games. Many different

Figure 5: Combining random normals for each pixel with 16 stratified random directions (uniformly distributed 3D vectors with lengths between 0.5 and 1.0) to create 16 unique, well distributed random 3D vectors per pixel.

Fortunately this is simple to fix: Filion et al. flip these directions, and the sampling area becomes a hemisphere instead of a sphere. Flipping is done by taking the dot product of the current pixel's normal with the direction, and negating the direction if the dot product is negative, figure 6.

The stratified directions are then projected onto screen space, and for each direction the corresponding pixel's depth is sampled by using the pixel coordinates in view space to get the correct depth from the G-buffer. The sampled depth is then compared to the current pixel's depth to see if any occlusion occurs; as Filion et al. describe, this is not a simple boolean operation between the two depth values. Instead, a function evaluates the difference between the current pixel's depth and the sampled depth and returns a weight according to the distance. Negative differences and values close to zero map to zero weight; small positive values get a high weight, and the weight then decreases as the sample gets farther away until it reaches zero again. The reason for using such a function is that a pixel should only be shaded when it is occluded by a surface that is relatively close to it, and not when they are far apart, see figure 7.


Figure 6: Illustration of how the sample directions are flipped, seen from the side. The dashed vector is the original direction, the solid vector is the flipped one, and red is the normal.

Figure 7: Weight function for the SSAO, image courtesy of [4].

Since there are not enough samples to lower the variance, the result will be noisy; even though the approach above breaks up the noise pattern, a grain pattern remains. Filion et al. describe in [4] that ambient occlusion is a low-frequency phenomenon, so the result can be blurred to remove the grain completely. One common trick for blurring in computer graphics is to first downsample the image, making use of the hardware interpolation to get some blur for free, and then convolve the downsampled image with a small Gaussian kernel to blur it even more. Because ambient occlusion is a low-frequency phenomenon we do not need to apply this algorithm at the full image resolution. Instead we downsample the image, and the G-buffer containing the normals and depths, four times, and generate the random normals for the downsampled image, reducing the number of pixels to be processed and giving us a speed boost. Before upscaling the image again we perform a small Gaussian blur on the downsampled image.

An overview of the SSAO algorithm (a code sketch follows the list):

• Downsample the image four times.

• Generate 16 random directions, using stratified sampling on a unit sphere, the same way as in the spherical harmonics section. The lengths of these directions are randomly scaled between 0.5 and 1.0.

• Generate one random normal for each pixel in the downsampled image.

• For each pixel in the downsampled image, take the corresponding random normal and use it to reflect each of the 16 directions, giving unique directions for each pixel.

• Project these vectors to screen space in order to get the sampled pixels' depth values.

• Compare each sampled depth value to the current pixel's depth to see if the sample is closer or farther away. If it is closer, some occlusion may occur, and it is the average weight of these samples that determines the overall occlusion of the current pixel.
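Below is a condensed CPU-side sketch of this loop. The buffer layout, the simplified projection helper and the falloff constants are assumptions for illustration; the real implementation runs in a fragment shader.

```cpp
#include <vector>
#include <algorithm>

struct Vec3 { float x, y, z; };
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator*(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Assumed, simplified screen projection; a real renderer applies the
// perspective projection matrix here.
static void projectToScreen(int px, int py, Vec3 offset, int w, int h,
                            int& sx, int& sy) {
    sx = std::clamp(px + (int)offset.x, 0, w - 1);
    sy = std::clamp(py + (int)offset.y, 0, h - 1);
}

// Assumed falloff matching figure 7: zero for negative or near-zero
// differences, high for small positive ones, fading out with distance.
static float occlusionWeight(float depthDiff) {
    if (depthDiff < 0.001f) return 0.0f;
    return 1.0f / (1.0f + 100.0f * depthDiff * depthDiff);
}

// Ambient term for one pixel: 1 = fully unoccluded, 0 = fully occluded.
float ssaoPixel(int px, int py, int w, int h,
                const std::vector<float>& linearDepth,   // from the G-buffer
                Vec3 pixelNormal, Vec3 randomNormal,
                const Vec3 stratified[16], float radius) {
    float occlusion = 0.0f;
    for (int k = 0; k < 16; ++k) {
        // Reflect the stratified direction by the per-pixel random normal.
        Vec3 d = stratified[k] - 2.0f * dot(stratified[k], randomNormal) * randomNormal;
        // Flip into the hemisphere around the surface normal (figure 6).
        if (dot(d, pixelNormal) < 0.0f) d = -1.0f * d;
        int sx, sy;
        projectToScreen(px, py, radius * d, w, h, sx, sy);
        // Positive difference: the sampled surface is in front of this pixel.
        float diff = linearDepth[py * w + px] - linearDepth[sy * w + sx];
        occlusion += occlusionWeight(diff);
    }
    return 1.0f - occlusion / 16.0f;
}
```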

2.4.2 Bloom

Bloom is a technique in computer graphics which mimics an image artifact of real-world cameras and of our own visual system. The effect produces a glow around high-intensity objects in the image, and the reason for reproducing this flaw is to make the synthetic scene more believable and not as perfect as computer generated scenes usually are. The physical effect occurs in the real world because no lens can focus light perfectly: light does not hit only one sensor but spills onto adjacent sensors as well. This always occurs, but it is not always visible, since low-intensity light does not contribute as much to its adjacent sensors as high-intensity light does, see [12] for further information.

The overall bloom algorithm is simple: the image of the rendered scene is blurred and stored in a separate texture, and the blurred image is then added to the rendered image with a small weight. To speed up this algorithm we used two techniques. The first is to take advantage of the built-in hardware support for linear interpolation: downsampling the image with linear interpolation already results in a blurred image. To downsample our image we used the OpenGL viewport to scale it down. But downsampling alone does not look as good as convolving with a Gaussian kernel, so we also convolve the downsampled image with a Gaussian kernel; this kernel can be smaller than the one we would have needed without downsampling. The second technique is to exploit that the Gaussian kernel is separable [5]: splitting the convolution into two passes, horizontal and vertical, reduces the number of texture lookups per pixel from k² to 2k for a k×k kernel.
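A minimal sketch of the separable blur on a grayscale float image; the kernel radius and sigma are illustrative choices.

```cpp
#include <vector>
#include <cmath>
#include <algorithm>

// One 1D convolution pass; `horizontal` selects the axis.
std::vector<float> blurPass(const std::vector<float>& src, int w, int h,
                            const std::vector<float>& kernel, bool horizontal) {
    int r = (int)kernel.size() / 2;
    std::vector<float> dst(src.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            for (int k = -r; k <= r; ++k) {
                int sx = horizontal ? std::clamp(x + k, 0, w - 1) : x;
                int sy = horizontal ? y : std::clamp(y + k, 0, h - 1);
                sum += kernel[k + r] * src[sy * w + sx];
            }
            dst[y * w + x] = sum;
        }
    return dst;
}

std::vector<float> gaussianKernel(int radius, float sigma) {
    std::vector<float> k(2 * radius + 1);
    float total = 0.0f;
    for (int i = -radius; i <= radius; ++i)
        total += k[i + radius] = std::exp(-0.5f * i * i / (sigma * sigma));
    for (float& v : k) v /= total;        // normalize so weights sum to 1
    return k;
}

// Usage: two 1D passes replace one k x k 2D convolution.
// auto kern = gaussianKernel(3, 1.5f);
// auto tmp  = blurPass(img, w, h, kern, true);
// auto out  = blurPass(tmp, w, h, kern, false);
```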

2.4.3 Tone mapping

When working with HDR data, one of the major problems is that ordinary commercial displays, e.g. computer monitors and flat-screens, cannot display HDR content; hence the HDR data needs to be mapped into low dynamic range (LDR) data. This is called tone mapping and can be divided into two categories, global tone mapping and local tone mapping; see the section about tone mapping in our previous report [1] for more details on the difference.

We used two different tone maps: one is an S-curve and the other a logarithmic function, both of which can be found in the book [12].
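As an illustration, a global logarithmic operator in the spirit of [12] can be sketched as follows; this particular mapping is an assumed example, not the exact curve from the book.

```cpp
#include <vector>
#include <cmath>
#include <algorithm>

// Maps HDR luminance L into [0, 1], given the scene's maximum luminance.
float logToneMap(float L, float Lmax) {
    return std::log(1.0f + L) / std::log(1.0f + Lmax);
}

void toneMapImage(std::vector<float>& pixels) {       // grayscale HDR buffer
    float Lmax = *std::max_element(pixels.begin(), pixels.end());
    for (float& p : pixels) p = logToneMap(p, Lmax);
}
```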

Figure 8: Outline of the bloom algorithm: downsample the image, filter horizontally, then filter vertically.

2.4.4 Depth darkening

Depth darkening is a method developed by a group of researchers from the University of Konstanz, Germany [9]. They analyzed different techniques that artists use to enhance their images, see figure 9, and also looked at how we perceive objects and how different details are enhanced by our visual system.

Depth darkening creates dark halos around the objects in the scene to make them stand out from the background, making us perceive the image as having more depth. The effect is created by first taking the difference between the original depth buffer D and a Gaussian-filtered copy G ∗ D, see equation 18.

ΔD = G ∗ D − D    (18)

The difference ΔD is called the spatial importance function and can be interpreted as follows: ΔD ≈ 0 represents spatial areas of no importance, while |ΔD| > 0 represents spatial areas of interest for this algorithm. ΔD < 0, the negative spatial importance, represents areas of background objects that are close to other occluding objects; ΔD > 0, the positive spatial importance, represents areas of foreground objects. With the spatial importance function, depth darkening can be added to the original image I with the following equation:

I′ = I + ΔD⁻ · λ    (19)

where λ ∈ R, λ > 0, and ΔD⁻ is the negative spatial importance; if λ < 0 the algorithm instead produces white halos around the objects. The implementation was pretty much straightforward, since we already had the depth buffer stored in a G-buffer from the SSAO pass, and we used the same blur technique as both SSAO and bloom.
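A sketch of equations 18 and 19 on a grayscale image; `gaussianBlur` stands in for the separable blur used by the SSAO and bloom passes.

```cpp
#include <vector>
#include <algorithm>

// Stand-in for the separable Gaussian blur sketched in the bloom section.
std::vector<float> gaussianBlur(const std::vector<float>& img, int w, int h);

void depthDarken(std::vector<float>& image,           // grayscale for brevity
                 const std::vector<float>& depth, int w, int h, float lambda) {
    std::vector<float> blurred = gaussianBlur(depth, w, h);
    for (size_t i = 0; i < image.size(); ++i) {
        float dD = blurred[i] - depth[i];             // equation (18)
        if (dD < 0.0f)                                // negative importance only
            image[i] = std::max(0.0f, image[i] + dD * lambda);  // equation (19)
    }
}
```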

Figure 9: Drawings by S. Dalí and P. Picasso, examples of how painters separate objects in their paintings by locally changing the contrast, thus enhancing depth perception. Image courtesy of Luft et al. [9].

3 Results

All images render at 220-240 FPS on an AMD Radeon 4890 graphics card and at 60-70 FPS on an NVIDIA GeForce 8600M GT graphics card, at 1200x800 resolution. The desert valley environment map used for the spherical harmonics images is from the sIBL archive at www.hdrlabs.com.

Figure 10: SSAO Map

Figure 11: Different materials: refractive, reflective and chromatic dispersion.


Figure 12: Spherical harmonics lighting.

Figure 13: Spherical harmonics lighting.

Figure 14: Spherical harmonics lighting with SSAO.

Figure 15: Spherical harmonics lighting with SSAO.


Figure 16: Depth darkening off.

Figure 17: Depth darkening on.

Figure 18: The blooming effect.

Figure 19: The blooming effect.


4 Discussion and future work

When we started this project we tried to organize everything into different objects (classes) and make use of different interfaces (virtual classes) in order to design a simple and easy-to-grasp rendering pipeline. This approach has worked well for the most part, but some areas have become too complex. The scene object is quite messy and needs more work; it should be refined into several smaller objects, and a model-view design pattern would probably be very efficient.

The spherical harmonics orthogonal basis offers an extremely efficient representation of the environment map. The major limitation of this method is the band-limit requirement on the projected spherical function, which makes high-frequency information, like the specular light, impractical to represent. Spherical harmonics lighting is demonstrated in figures 12 and 13.

The SSAO method gives good results, figures 10, 14 and 15, but it needs a lot of tweaking and fine-tuning for different scenes. This is because the method is view dependent, which means that the settings need to be adjusted for different zoom levels and geometry sizes.

The depth darkening method, figures 16 and 17, is not so convincing when only one object is rendered in front of an environment map, but the method is probably better suited for scenes with many small objects and details.

Our own captured light probes are good for lighting objects, but they are a bit boring to look at. This is because the singularity and the camera reflections take up much of the view, destroying the illusion of reality.

In this project we have implemented several real-time rendering methods from the ground up. This turned out to be quite a time consuming task, and we did not have time to experiment much with each of the methods. If we ever find more time to put into this project, the first thing to do is not to add anything new; instead we should play around with the current methods and fine-tune everything.

References

[1] Mihai Aldén and Fredrik Salomonsson. Image Based Rendering Part 1: High Dynamic Range Imaging. Student project in the course TNM083 Image Based Rendering, Linköping University, 2010.

[2] Michael Bunnell. GPU Gems 2, Chapter 14. Addison-Wesley, 2005.

[3] Philip Dutré, Kavita Bala, and Philippe Bekaert. Advanced Global Illumination, Second Edition. AK Peters Ltd, 2006.

[4] Dominic Filion and Rob McNaughton. Effects & techniques. ACM SIGGRAPH 2008 Classes, 2008.

[5] Rafael C. Gonzalez and Richard E. Woods. Digital Image Processing, Third Edition, International Edition. Pearson Education, 2008.

[6] Cindy M. Goral, Kenneth E. Torrance, Donald P. Greenberg, and Bennett Battaile. Modeling the interaction of light between diffuse surfaces. ACM SIGGRAPH '84, Proceedings of the 11th Annual Conference on Computer Graphics and Interactive Techniques, 1984.

[7] Robin Green. Spherical Harmonic Lighting: The Gritty Details. Sony Computer Entertainment America, 2003.

[8] James T. Kajiya. The rendering equation. ACM SIGGRAPH '86, Computer Graphics Proceedings, Volume 20, 1986.

[9] Thomas Luft, Carsten Colditz, and Oliver Deussen. Image Enhancement by Unsharp Masking the Depth Buffer. ACM Press, 2006.

[10] Morgan McGuire. Ambient occlusion volumes. I3D '10: Proceedings of the 2010 ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2010.

[11] Martin Mittring. Finding next gen: CryEngine 2. ACM SIGGRAPH 2007 Courses, 2007.

[12] Erik Reinhard, Greg Ward, Sumanta Pattanaik, and Paul Debevec. High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting. Morgan Kaufmann, 2006.

[13] Randi J. Rost and Bill Licea-Kane. OpenGL Shading Language, Third Edition. Pearson Education, Inc., 2010.

[14] Volker Schönefeld. Spherical Harmonics. 2005.

[15] Dave Shreiner and The Khronos OpenGL ARB Working Group. OpenGL Programming Guide, 7th Edition. Addison-Wesley, 2009.

[16] Jonas Unger. Incident Light Fields. Doctoral thesis, Linköping University, 2009.

[17] Eric W. Weisstein. Legendre Polynomial. MathWorld, A Wolfram Web Resource, 2010. http://mathworld.wolfram.com/LegendrePolynomial.html.

[18] Turner Whitted. An Improved Illumination Model for Shaded Display. ACM SIGGRAPH '80, Graphics and Image Processing, Volume 23, 1980.
