
Source: image.diku.dk/projects/media/rene.truelsen.07.pdf

Real-time Shallow Water Simulation and Environment Mapping and Clouds

Rene Truelsen
Department of Computer Science, University of Copenhagen
Universitetsparken 1, DK-2100 Copenhagen, Denmark
[email protected]

April 25, 2007


Contents

1 Abstract

2 Water
   2.1 Previous Work
   2.2 Shallow Water Problem
       2.2.1 Refraction
       2.2.2 Snell’s Law
       2.2.3 Fresnel’s Equation
   2.3 The Basic Steps
   2.4 Reflection Texture
       2.4.1 Render steps
       2.4.2 Notice
   2.5 Refraction Texture
       2.5.1 Screen-to-texture issues
       2.5.2 Displacement by Snell’s Law
   2.6 Wave Distortion
       2.6.1 Dudv-maps
       2.6.2 Moving waves
       2.6.3 Artifacts
   2.7 Fresnel
   2.8 Results
       2.8.1 Reflection-texture size
       2.8.2 Copy-to-texture
   2.9 Future Work
   2.10 Summary

3 Environment Mapping and Clouds
   3.1 Skyplane
       3.1.1 Pros and Cons
   3.2 Skybox
       3.2.1 Benefits
       3.2.2 Drawbacks
       3.2.3 Pros and Cons
   3.3 Skydome
       3.3.1 Pros and Cons
   3.4 Atmospheric Light Scattering
   3.5 Clouds
       3.5.1 Mapping using a virtual plane
       3.5.2 Implementation of the virtual plane
       3.5.3 Aliasing
       3.5.4 Multi-layered clouds
   3.6 Combined Skydome Color Formula
   3.7 Results
       3.7.1 The costs of the effects
       3.7.2 Number of faces on skydome
   3.8 Future Work
   3.9 Summary

Bibliography

A Source Code
   A.1 Water
       A.1.1 Rendering Reflection
       A.1.2 Grab Scene
       A.1.3 Vertex program
       A.1.4 Fragment program
   A.2 Sky
       A.2.1 Vertex program
       A.2.2 Fragment program


1 Abstract

In computer graphics today we want to leave as many calculations as possible to the graphics card (GPU). The reason is that graphics cards are constantly expanding in memory size and speed, and there are therefore often a lot of unused resources on the GPU compared to the CPU, which also spends resources on the operating system, background programs and so forth.

This report therefore focuses on how to render a real-time sky/environment and water simulation using the GPU, which consists of coding two types of programs: the vertex program and the fragment program. It is assumed that the reader is already familiar with the concept of these programs, along with the regular terms related to OpenGL programming and the mathematics associated with it. If you are not, we recommend reading [T.D].

This paper is an extension to the Game Animation Course in 2005 at DIKU Copenhagen, where the curriculum included topics like smooth deterministic noise, virtual grids, height maps, multi-layered textures, dynamic sky domes and billboards. The topics we discuss in this paper are outside the scope of those topics, for which we have already been credited.

The following topics are covered in this paper:

• An inexpensive shallow water simulation using bump mapping and uv displacement.

• Discussion of skydome vs. skyplane vs. skybox.

• Multi-layered clouds.

• An inexpensive model for representing atmospheric light scattering.

Furthermore, it is important to notice that the experienced reader can probably find technical optimizations to our solution. Some optimizations are already mentioned in the project, but we decided to focus mainly on the speed of the methods rather than on technical optimizations, as these are always dictated by the current development of graphics cards.


2 Water

We want to implement a water model in a real-time environment that will give the user the impression of a watery surface consistent with the level of detail and realism intended by the programmer. This means that the water may not be intended as a fully realistic lake or ocean, only that it fits the level of realism defined by the rest of the scene. This usually means as realistic as possible, but also as cheap in resources as possible; two goals that are diametrical opposites in 3D engines.

As with any implementation there are several methods to follow, and usually each method focuses on different goals. For a water simulation this goal could be a shallow water model, a deep water model, or a model which allows interaction with the surface.

In the following sections we describe how we chose to implement our model so that it coincides with our needs for a shallow water model, and at the same time we go into some of the technical issues that should be considered when implementing it.

Figure 2.1: A comparison between our proposed water simulation and a photo of a real lake: (a) an evening photo of a lake; (b) an evening snapshot of our simulated water. This paper describes the process for making this simulated water.

2.1 Previous Work

There are several different ways of implementing a water model in a real-time environment, and which method is chosen should mainly depend on which type of water one wishes to implement.

Lasse Jensen and Robert Goliás [Jen04] describe an impressive implementation of a deep-water ocean which combines Gerstner-like waves with bump mapping and includes additional effects like caustics, foam and spray, Navier-Stokes equations for calculating a more dynamic bump-mapped surface, and god rays for improved underwater effects.

Another widely recognized work was done by Jerry Tessendorf [Tes04], who describes a more comprehensive implementation of the Navier-Stokes equations (fluid dynamics) which allows external interactions with the water, and uses a statistical wave model based on the FFT to represent waves. Neither of these methods fit our requirements, as we do not intend to interact much with the water, as for example a game taking place in a boat would. Also, both methods are considerably more expensive and complex than what we really need.

Instead we were inspired by a paper written by Ben Humphrey [Hum05], which describes the basic steps needed to implement a simple but nice real-time water simulation. The simulation is based on bump mapping, dudv displacement and Fresnel effects, and although the method is simple, there are many different ways of implementing it and many possibilities for adding additional effects. His paper differs from the previous examples in the simplicity of its method: a bump map is rendered onto a flat quad and distorted over time to give the impression of waves. This simple wave structure and the fact that we render small waves imply easier and faster calculations, and hence shorter render time for the animation, and it also fits our needs for a shallow water rendering in an environment of clean, transparent water.

2.2 Shallow Water Problem

In the following section we will describe the physics behind the shallow water rendering.

Figure 2.2: Illustrations of Snell’s Law and the Fresnel Weight: (a) Snell’s Law describes how the angle of the light changes when travelling between two materials with different refraction indices; (b) the Fresnel Weight describes which of the points P1 (reflection) or P2 (refraction) should be more dominant at a specific pixel in the viewport.

Figure 2.2 illustrates some of the problems we are facing when implementing a shallow water simulation. When looking at some point on the water surface, we need to apply a reflection and a refraction to that point, and we need to combine these in a way which coincides with nature. When this is done, we need to apply the distortion caused by the waves, and finally apply the wavy motion to the surface.

2.2.1 Refraction

To begin with we will just explain our definition of the refraction as seen in figure 2.2(b).

As 3D programmers, our definitions and use of words do not always comply with, for example, how physicists define them, as we often focus on simplifying things in order to make them run faster on our hardware. For this reason we would like to focus on the term refraction, as this paper uses the word slightly differently from the correct definition. As described by Snell’s Law, refraction is the bending of light as it passes from one medium to another (Θ1 ≠ Θ2 in fig. 2.2(a)). The results of this definition, combined with waves, include several of the visual effects we see when looking down through the surface of a lake, e.g. caustics and the distorted lake floor.

In this paper we use the word refraction to describe the transparent part of the water when we are looking down into it, as opposed to the reflection, which is the reflective part of our surface.

2.2.2 Snell’s Law

The physics behind the angles of the reflection and refraction is described by Snell’s Law [Bla00], which calculates the exit angle of a light beam going from one material to another; in our case going from air to water. The law is commonly written as:

n1 sin(Θ1) = n2 sin(Θ2)    (2.1)

With n representing the refraction indices of the two materials, nwater = 1.333 and nair = 1.0003, this gives us:

nwater sin(Θwater) = nair sin(Θair)  ⇒    (2.2)

sin(Θwater) = (nair / nwater) sin(Θair)  ⇒    (2.3)

sin(Θwater) = 0.75 sin(Θair)    (2.4)
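As a sanity check, eq. (2.1)-(2.4) can be evaluated directly. A minimal sketch (our own illustration, not part of the paper's shader code), with angles in radians:

```python
import math

N_AIR, N_WATER = 1.0003, 1.333  # refraction indices from section 2.2.2

def refracted_angle(theta_air):
    """Exit angle in the water, from Snell's law:
    sin(theta_water) = (n_air / n_water) * sin(theta_air), eq. (2.3)."""
    return math.asin((N_AIR / N_WATER) * math.sin(theta_air))
```

A ray hitting the surface at 45° continues at roughly 32° below it, matching the factor 0.75 in eq. (2.4).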

2.2.3 Fresnel’s Equation

"The Fresnel equations, deduced by Augustin-Jean Fresnel, describe the behavior of light when moving between media of differing refractive indices. When light moves from a medium of a given refractive index n1 into a second medium with refractive index n2, both reflection and refraction of the light may occur." 1

This means that when we look at some point on the water surface we will see both a reflection and a refraction at that point, and the specific weights of these two effects are distributed by the Fresnel Weight (fig. 2.2(b)). This weight is based on the angle between the viewer and the surface of the interface between the two materials; in our case this is the angle between the camera and the normal of the water surface. More precisely: when looking at a water surface from near the surface, the amount of reflected light increases, the refracted light decreases, and it is impossible to see anything below the surface. Looking directly down into the water results in a much reduced reflection, and instead the lake floor below the surface becomes clearly visible.

How this Fresnel Weight is calculated is one of the most important, and often overlooked, steps for getting a good visual result, and our solution is explained in section 2.7.
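The paper's own weight is derived in section 2.7. As an illustration of the behaviour described above, a cheap and widely used stand-in is Schlick's approximation; this is our own assumption for the sketch, not the author's formula:

```python
def fresnel_weight(cos_theta, n1=1.0003, n2=1.333):
    """Schlick's approximation of the Fresnel reflectance F.
    cos_theta: cosine of the angle between the view vector and the
    water surface normal (1.0 = looking straight down)."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2   # reflectance at normal incidence
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5
```

At grazing angles (cos_theta near 0) the weight approaches 1 and the reflection dominates; looking straight down it drops to about 2%, letting the refraction, i.e. the lake floor, show through.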

We will represent the reflection and the refraction by textures, and the resulting color of the water surface will be the weighted sum of these two textures plus some ambient light. This gives us the following formula for our surface:

1http://www.mathdaily.com/lessons/Fresnel_equations


Figure 2.3: An illustration of the steps described in section 2.3. (a) The reflection texture is created (step 1); (b) the refraction texture, slightly darkened for perceptibility (step 2); (c) the reflection texture distorted according to the dudv-map (step 3); (d) the resulting distorted textures combined with the Fresnel weight and bump mapping (steps 4 and 5). Notice the realism, although we have only used bump mapping and a dudv-map.

water = F · reflection + (1 − F) · refraction + light    (2.5)
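Per color channel, eq. (2.5) is just a linear blend. A sketch (texture colors as RGB tuples; the function name is ours):

```python
def surface_color(F, reflection, refraction, light):
    """Eq. (2.5): weight the reflection and refraction textures by the
    Fresnel term F and add the light contribution, per channel."""
    return tuple(F * a + (1.0 - F) * b + c
                 for a, b, c in zip(reflection, refraction, light))
```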

2.3 The Basic Steps

Implementing the water consists of five steps. Additional effects could and should be added according to needs, and some of these extra effects are described later on, but we consider the following five to be the minimum required for giving a somewhat realistic impression.

1. Create the reflection texture

2. Create the refraction texture

3. Distort the textures according to a dudv-map

4. Apply bump mapping

5. Combine with Fresnel effects

These five steps are described in the following sections and illustrated in figure 2.3.

2.4 Reflection Texture

This section describes the steps and procedures required to implement a mirror reflection. The source code for the function can be found in appendices A.1.1 and A.1.2. For the sake of ease, the reflection plane is assumed to be parallel to the XY-plane, with normal (0,0,1), located at the Z-value waterlevel.

2.4.1 Render steps

The steps of the reflection are simple because of the flat wave structure this method represents. Other water algorithms, like the ones mentioned in the beginning, are based on larger waves and therefore require cube mapping to ensure a correct reflection, but due to our small waves we can avoid this costly step. We can settle for rendering the objects we wish reflected flipped upside down, and save this rendered image to a texture on the graphics card, so that we are later able to manipulate the rendered scene.

Were we only interested in rendering a mirror image, then strategies like rendering and combining with a stencil buffer would be considerably faster and more effective [Kil99]. This, however, would also make it impossible for us to distort the reflection, as described later in section 2.5.

The steps for rendering the reflection texture are illustrated in figure 2.4.

Figure 2.4: The process for rendering the reflection-texture.

1. Resizing the viewport

We will have to reduce the size of the viewport for two reasons:

• The main reason for resizing the viewport is performance. We achieve this speed by re-rendering to a smaller window, and although this causes the reflection to be more coarse, the speed gain is far more noticeable than the lack of detail once we apply the wave effect.

• Secondly, one should consider compatibility; the primary render window is probably sized at something like 1024x768, but for better hardware support the textures should have a width and height that are powers of 2, e.g. 128, 256, 512, 1024. Graphics cards prior to the GeForce 6 series [Cin] and the ATI 9800 [Sup] do not have full support for NPOT-sized textures.

The visible difference between the different-sized viewports is illustrated in figure 2.5. The framerates when applying the wave-distorted textures can be seen in section 2.8.
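Picking a compatible reflection-texture size for such cards amounts to taking the largest power of two at or below each viewport dimension. A small helper of our own (not from the paper):

```python
def pot_below(n):
    """Largest power of two that is <= n (assumes n >= 1)."""
    p = 1
    while p * 2 <= n:
        p *= 2
    return p
```

A 1024x768 viewport would give at most a 1024x512 texture this way, typically downsized further (e.g. to 256x256) for speed.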

2. Clipping plane

The clipping plane is added because we only need to render the objects below the water level. We also need to ensure that we do not accidentally reflect anything that is rendered below the water surface. So the plane helps us both with increased speed and with avoiding artifacts.

3. Flipping the image

The scene is flipped upside down, which in our case implies that z = −z, and is done by simple OpenGL or DirectX statements.

Take notice that when flipping the scene, some items, like the terrain, might not have a backplane, and you therefore need to consider whether some of the render states should include frontplane culling.


Figure 2.5: The difference between resolutions for the reflection texture (256x256 vs. 1024x1024, each shown with and without wave distortion). Notice that with no wave distortion there is a clear difference between the resolutions, but with wave distortion applied the difference becomes negligible.

4. Translate according to reflection level

The scene needs to be translated according to our reflection plane; because the difference between the reflection plane before and after the flip is 2 × waterlevel, we have to translate the whole scene by the same amount.

Again this can be done by a simple OpenGL or DirectX statement that multiplies a translation matrix onto the current matrix stack.
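Together, the flip and the translation form a reflection across the plane z = waterlevel. In coordinates (our own illustration of the geometry, not the paper's code):

```python
def mirror_point(p, waterlevel):
    """Reflect a point across the plane z = waterlevel:
    first flip (z -> -z), then translate by 2 * waterlevel."""
    x, y, z = p
    return (x, y, -z + 2.0 * waterlevel)
```

A treetop 5 units above a water level of 2 lands at z = −1, i.e. 3 units below the surface, as a mirror image should, while points on the plane itself stay fixed.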

5. Copying the viewport to a texture

The viewport is copied to an empty texture in the memory of the graphics card for later use, and we can then clear the screen buffers for the correct scene rendering. The experienced programmer might wonder why we do not render directly to a texture, for example by using a framebuffer object on the graphics card as a render target, which would save memory, since we would only have one copy of the scene, and save us the texture copy. However, the framebuffer object does not support a multisampled buffer [Gre05], which we experimented with in other parts of the program, so we decided to stay with copying the scene to a predefined texture.

2.4.2 Notice

Due to the resizing and clearing of the viewport, it is important that the reflection is rendered during the first render pass. Furthermore, because the objects have to be redrawn, it is also vital to differentiate between which objects require reflection and which do not, so precious GPU and CPU time is not wasted rendering unnecessary items.


2.5 Refraction Texture

To create the refraction texture we arrange the render order so that the water quad is rendered last. This allows us to take a snapshot of the screen by copying the screen buffer to a texture, which is later combined with the reflection texture to create the surface.

2.5.1 Screen-to-texture issues

The problem with copying the viewport to a texture is, as mentioned in section 2.4.1, that the viewport may have some NPOT 2 size, e.g. 1024x768, which may not be compatible with the hardware or the texture type. So in order to copy the screen "as is" to a texture, we have to apply some technical requirements to either our viewport or our refraction texture.

Texture types

In OpenGL the texture type GL_TEXTURE_2D is limited to POT sizes, but the "perhaps" slower and more limited texture type GL_TEXTURE_RECTANGLE_NV, which is part of the GL_ARB_texture_rectangle extension, allows the use of NPOT-sized textures. According to [Kil05] it is still unclear whether GL_TEXTURE_RECTANGLE_NV is actually slower, "but developers should not be surprised if conventional POT textures will render slightly faster than NPOT textures". For more documentation on the precise limitations of the NPOT extension we refer to [Kil05]. Another NPOT texture type, which is part of ARB_texture_non_power_of_two, was introduced in the core of OpenGL 2.0. However, we did not look into the properties of this texture.

This is where our requirements become clear; either the viewport is sized to POT and the objects are redrawn so we can use the GL_TEXTURE_2D texture, or we have to use the slower texture type and copy the scene "as is". We decided to use the latter solution. But take notice of which resolution it is run at, because copying a 1600x1200 texture back and forth every frame might take its toll on the graphics card memory bus, which will be noticeable in the framerate. At our 1024x768 viewport the cost is noticeable, but acceptable. For more information on the specific penalty of this operation, see the results in section 2.8.2.

UV coordinates

When using the texture type GL_TEXTURE_RECTANGLE_NV, the uv coordinates change from the interval [0;1] to the actual dimensions of the texture. This makes it easier for our texture lookup in the fragment program, where we can use the coordinates of the pixel we are currently drawing as the lookup coordinates; these values are accessible to us through the WPOS semantics.
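The two addressing conventions are related by a simple scale. A sketch of the conversion (the helper is our own, with a 1024x768 texture as the example):

```python
def wpos_to_unit_uv(wpos, tex_size):
    """Convert a pixel-space coordinate (the convention used by
    GL_TEXTURE_RECTANGLE_NV, e.g. a fragment's WPOS) to the [0;1]
    convention used by GL_TEXTURE_2D."""
    return (wpos[0] / tex_size[0], wpos[1] / tex_size[1])
```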

At this point we realised that it would be easier if we could use the same uv coordinates for both the reflection and the refraction texture, and because the speed penalty is not clearly documented by OpenGL [Kil05], we decided to go against our own recommendations in section 2.4.1 and also use the GL_TEXTURE_RECTANGLE_NV texture type for our reflection texture. This also allows us to resize the viewport to a size which takes into account that the distortion along the u-coordinate (horizontally on the screen) is usually larger than along the v-coordinate (vertically), due to perspective distortion.

2 Non Power Of Two, being any size that is not a power of two.


Figure 2.6: The maps used for displacement and bump mapping: (a) normal-map; (b) dudv-map.

2.5.2 Displacement by Snell’s Law

Snell’s Law, as described in section 2.2.2, states that because Θ1 ≠ Θ2 we would have to displace the refraction as a function of the angle of the vector from the eye to the specific point we are looking at. We will try to avoid these calculations by considering the consequences of this displacement:

Eq. (2.4) indicates that the difference between Θair and Θwater is larger for large values of Θair. This means that if we were to ignore Snell’s Law, by letting nwater = nair, we would get the largest errors when watching the surface from near the horizon. Fortunately this is where the reflection dominates due to the Fresnel term, and the refraction texture is hardly visible. This means that we can make it easy for ourselves by letting nwater = nair, and although this is a rough approximation, it is hardly noticeable when all the effects are added to the scene, so it is an acceptable estimate compared to the performance gain.

2.6 Wave Distortion

So far all we have done is copy the screen buffer to the refraction and reflection textures, and now it is time to apply the wavy surface. This is very similar to applying uv displacement effects to a glass surface: we apply an offset to the texture lookup for the two textures, and combine the result with bump mapping.

The distortion is applied using dudv-maps, fig. 2.6(b), and is explained more thoroughly in section 2.6.1, but a dudv-map is basically a derivative of the normal-map, fig. 2.6(a), which in turn is derived from an original water texture.

In this paper we will not go into the details of how the bump mapping is applied, as this is described well in other papers; we use the traditional approach of dotting a light source vector with the normal-map to compute the diffuse lighting on the surface. We will instead focus on the distortion, as we have not seen any papers which thoroughly cover dudv-maps and the method used in this project. A slightly similar approach is described in GPU Gems 2 [Sou05].


Figure 2.7: Texture lookup and uv displacement. The yellow line is a lookup which is not displaced; the line above is adjusted by (du,dv), found by first doing a lookup in the dudv-map and then adding the resulting value to the original (u,v).

2.6.1 Dudv-maps

A dudv-map, also called a uv-displacement map, is used to displace the u and v coordinates when doing a texture lookup in the fragment program for a specific pixel, as illustrated in fig. 2.7. This displacement is usually caused by a change in the surface of the rendered object, which for example can affect the reflection of the object.
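The displaced lookup of fig. 2.7 can be sketched as follows, with textures modelled as functions of (u, v); the names and the scale factor are our own assumptions:

```python
def displaced_lookup(texture, dudv_map, u, v, scale=1.0):
    """Fetch (du, dv) from the dudv-map at (u, v) and use it to offset
    the lookup into `texture`. The dudv values are assumed to be
    already remapped to the signed interval [-1;1] (see eq. 2.7)."""
    du, dv = dudv_map(u, v)
    return texture(u + scale * du, v + scale * dv)
```

The scale factor lets the programmer tune the wave strength without touching the map itself.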

In our specific case, the dudv-map is based on the derivative of the normal-map in figure 2.6(a). This means that a high wave in the original water texture causes a large change in one of the three coordinates of the normal-map, which in turn results in a large value in the same coordinate of the dudv-map. We hereby create a displacement which is coherent with the original water texture. This process is clarified in figure 2.8.

Figure 2.8: The coherence between a wave and the corresponding uv displacement in the reflection and the refraction texture. This also illustrates why the displacement of the refraction is the same as that of the reflection when we use the approximation from section 2.2.2 that Θ1 = Θ2.

At this point we utilize the fact that we are rendering a shallow water and shallow wave simulation, because we can directly use the dudv lookup as the offset value in our reflection and refraction textures. Deep water methods, like the ones mentioned by [Tes04] and [Jen04], usually imply high waves because they are based on combining underlying geometric waves (e.g. Gerstner waves) with a normal- and a displacement-map. The high waves entail the requirement of cube mapping to ensure a proper reflection, as illustrated in figure 2.9. Finding the correct uv coordinates in the 6-sided cube map, based on the vector sum of the slope of the geometric wave and the normal-map, is a problem that requires a lot of extra calculations compared to our proposed solution.

Figure 2.9: The advantage of simulating low waves compared to high waves. (a) The reflection with high waves requires cube mapping to ensure correct reproduction of the objects behind the camera; (b) low waves do not reflect objects behind the camera, and the reflection can therefore be represented by copying the flipped scene to a texture. The problem with high waves is that, because of the cube mapping, calculating the reflection coordinates becomes a very complex problem.

We have already indicated that the dudv-map is a derivative of the normal-map, and we now want to clarify how exactly the dudv-map is generated from the normal-map. Figure 2.10 compares the 1st and 2nd coordinates of the dudv-map at row y = 20 with the first-order derivative of the same coordinates in the normal-map. The 3rd coordinate is uninteresting, as only the first two coordinates are used to displace the uv coordinates. The coordinates are derived individually using a numerical 3-point method with 2nd-order precision and then finally normalized based on the sum of all three coordinates. Equation (2.6) shows the numerical 3-point method we used to calculate the derived values for the components of the normal-map, n, at y = 20, as illustrated in figure 2.10.

dn/dx = ( n(x+1, 20) − n(x−1, 20) ) / 2    (2.6)

It is clear from figure 2.10 that the dudv representation is not an exact representation of the derivatives of the normal-map. The reason for this is the normalizing process the map has gone through. Comparing the dudv-map (dotted yellow line) with the normalized derivative (black line), there is a strong noticeable resemblance between the two lines. The inconsistency between them is probably caused by different numerical differentiation algorithms, whose differences are emphasized when normalizing the coordinates. However, neither of these issues is noticeable in the implementation, so we will not deal with them further. The normalization process also emphasizes the values in the 1st and 2nd coordinates, which is why the dudv-maps are dominated by combinations of red and green colors (fig. 2.6(b)).
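The 3-point method of eq. (2.6) applied to one row of one normal-map component can be sketched as below; the border handling is our own assumption, as the paper does not specify it:

```python
def central_diff(row):
    """Three-point central difference, 2nd-order accurate (eq. 2.6).
    At the two borders we fall back to the one available neighbour,
    which gives a (halved) one-sided difference."""
    n = len(row)
    return [(row[min(x + 1, n - 1)] - row[max(x - 1, 0)]) / 2.0
            for x in range(n)]
```

Running this over each of the first two normal-map components, and then normalizing, yields rows like the dotted black line in figure 2.10.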

Another important point when using normal- and dudv-maps is that the normals and dudv values are represented as color values in the maps, which lie in the interval [0;1]. These values need to be reformatted to vectors fitting the interval [−1;1], or else we would only end up with distortions going in a positive direction, which is of course not what we intend. The transformation is described by eq. (2.7), with cx being any of the three color coordinates and nx being the corresponding coordinate of the normal.

Figure 2.10: Graphs of the 1st and 2nd component of the normal-map (solid red line) and the dudv-map (dotted yellow line) along y = 20. The dotted black line is the normalized 1st-order numerical derivative of the normal-map (using the three-point method with 2nd-order precision), which we tried to approximate to the predefined dudv-map.

nx = 2cx−1 (2.7)
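In code form, eq. (2.7) and its inverse (the encoding used when baking vectors back into a color map) are one line each; a minimal sketch:

```python
def color_to_vector(c):
    # Eq. (2.7): map a color component in [0, 1] to a signed value in [-1, 1].
    return 2.0 * c - 1.0

def vector_to_color(n):
    # The inverse mapping, used when encoding a vector component as a color.
    return (n + 1.0) / 2.0
```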

2.6.2 Moving waves

So far we have applied a static wave to the scene, and we now need to make the waves move. To do this we introduce two motions with two distinct roles: the primary motion describes the direction of the waves, which could for example be the direction of the wind or of a current. The secondary motion distorts the primary motion and adds the impression of a chaotic fluid motion.

The primary motion is just a directional vector multiplied with time, but to apply the secondary motion we make use of the smoothness and continuity of the dudv-map, as seen in figure 2.10. This motion is again a directional vector, almost perpendicular to the primary motion vector, which is used as lookup coordinates in the dudv-map. These two 2D values constitute the offset which describes the movement of the waves, and although it is simple in form, the result is remarkable.

The fragment program for this can be found in appendix A.1.4.
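The combination of the two motions can be modeled in a few lines. The sketch below is illustrative only: the tiny dudv table and the scale factor are made up, and in the real implementation the lookup happens in the fragment program:

```python
def wrap(c):
    # Texture coordinates repeat in [0, 1), as with GL_REPEAT.
    return c % 1.0

def sample_dudv(dudv, u, v):
    # Nearest-texel lookup in a small table of (du, dv) pairs in [-1, 1].
    h, w = len(dudv), len(dudv[0])
    return dudv[int(wrap(v) * h) % h][int(wrap(u) * w) % w]

def wave_offset(t, primary, secondary, dudv, scale=0.05):
    # Primary motion: a directional vector multiplied with time.
    pu, pv = primary[0] * t, primary[1] * t
    # Secondary motion: a near-perpendicular drift used as a dudv lookup,
    # whose value distorts the primary motion.
    du, dv = sample_dudv(dudv, secondary[0] * t, secondary[1] * t)
    return (wrap(pu + scale * du), wrap(pv + scale * dv))
```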

2.6.3 Artifacts

The combination of using the clipping plane to render the reflection and distorting the texture lookup leaves us with an inevitable artifact near edges at water level. This happens when the distorted lookup goes outside the reflection texture, due to the clipping, and returns a color value of the background or something random (fig. 2.11). We reduced the problem by raising the clipping plane just a little.



Figure 2.11: Artifacts caused by uv lookup outside the reflection texture.

2.7 Fresnel

The Fresnel weight is the factor that is multiplied onto the reflection texture, and 1 minus the weight is multiplied onto the refraction texture. The physically correct calculations of these weights are unnecessarily difficult [Fre], especially considering that they would have to be done in the fragment program, and many simpler approximations have been proposed. The suggestion in [Hum05] is to use a simple cosine of the angle between the normal and the eye vector as the approximated Fresnel term, thereby ignoring Snell's Law. The following definition of the fresnelWeight implies that this weight is used as F in eq. (2.5).

fresnelWeight = 1 − (eyevector · normalvector) (2.8)

Although the result of this term is acceptable despite its simplicity, it is not possible to adjust the contrasts in the waves, and the reflection becomes negligible near the camera, see fig. 2.12. This does not correspond very well with reality, where one will for example see some mirror reflection when looking straight down into the water.

(a) Digibens’s Fresnel term (b) Tiago’s Fresnel

Figure 2.12: The difference between the two Fresnel terms is very noticeable close to the camera. The reflection in fig. 2.12(a) becomes negligible near the camera, whereas 2.12(b) will always have some reflection according to eqs. (2.9)-(2.11).

Instead, Tiago Sousa proposed a Fresnel term [Sou05], which was used for a similar animated bump texture in the game Far Cry:



fBias = 0.20373 (2.9)

facing = 1.0 − max(eyevector · normalvector, 0) (2.10)

fresnelWeight = max(fBias + (1.0 − fBias) · pow(facing, fPower), 0) (2.11)

The facing variable is derived from the cosine of the angle between the wave normal and the eye vector, clamped to the interval 0 to 1: facing is 0 when the normal points straight toward the camera, and 1 when the normal is perpendicular to, or facing away from, the camera. The graph for fresnelWeight with different Fresnel powers is illustrated in fig. 2.13, with the x-axis representing the value of facing.
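Both approximations are easy to express directly. The sketch below mirrors eq. (2.8) and eqs. (2.9)-(2.11), taking the cosine of the angle between the eye and normal vectors as input:

```python
def simple_fresnel(cos_angle):
    # Eq. (2.8), the approximation from [Hum05]: vanishes when looking
    # straight down into the water (cos_angle = 1).
    return 1.0 - cos_angle

def tiago_fresnel(cos_angle, f_bias=0.20373, f_power=3.0):
    # Eqs. (2.9)-(2.11), the term from [Sou05] used in Far Cry.
    facing = 1.0 - max(cos_angle, 0.0)
    return max(f_bias + (1.0 - f_bias) * facing ** f_power, 0.0)
```

Per eq. (2.5), the resulting weight F blends the two textures as color = F · reflection + (1 − F) · refraction. Note that tiago_fresnel(1.0) returns fBias, so some mirror reflection survives even when looking straight down, whereas simple_fresnel(1.0) returns 0.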

Figure 2.13: The graph of the Fresnel weight for different Fresnel powers, as a function of the facing variable going from 0 (surface completely facing the camera) to 1 (surface completely perpendicular to the camera).

(a) Fresnel power = 1 (b) Fresnel power = 3 (c) Fresnel power = 5

Figure 2.14: A higher Fresnel power results in a more curved graph, which visually means less reflection and sharper transitions between the reflection and refraction textures, i.e. sharper waves.

The sharpness of the curve shows in the scene as how quickly the reflection comes to dominate the water when the camera moves from looking along the wave normal to viewing the surface near water level. This also affects the sharpness of the water waves. The Fresnel bias, fBias, moves the starting point, and our value of 0.204 results in always allowing some reflection in the surface, even when looking straight down. A Fresnel power, fPower, of 3 gave our scene a nice result, with coherency between the viewing angle and the transparency, as seen in figure 2.14.



(a) Tiago’s Fresnel term (b) Tiago’s Fresnel with our distanceterm

(c) Illustrating the depth of 75 units (d) Illustrating the depth of 150 units

Figure 2.15: The difference between Tiago's Fresnel term with and without our distance term. Notice the reflections are smoother in the distance in 2.15(b), while still preserving the contrasts in the waves close to the camera. Furthermore, the sky is reflected more in the distance, which gives a nice reflective color.

The drawback of the proposed term is that it does not take the distance to the water into account. For example, when looking straight down from high above water level, the small particles (air, plankton, etc.) in even clean water will make it practically impossible to see through, and the surface will instead act as a mirror. To adjust for this we added our own distance term to the previous formula, which weights between the previously described fresnelWeight and the distance between the camera and the water surface:

float distVar = 150;

fresnelWeight = min((fresnelWeight + max(IN.zpos/distVar, fresnelWeight)) * 0.5, 1);

It applies a linear interpolation to the existing term: it interpolates the weight from the previously calculated fresnelWeight toward 1, depending on the distance and angle to the point. This interpolation occurs if the distance to the pixel is above a minimum value that depends on the previously calculated fresnelWeight, which again is dependent on the angle. The term also fixes some visual jittering in the distance caused by the discrete limitations of the pixels on our screen. The resulting effect is a combination of an increased and smoother reflection in the distance, which works a lot better in the virtual world. The figures in 2.15 illustrate the difference with and without the distance term, and figure 2.16 compares the two terms based on the angle and distance to a point.

Figure 2.16: This illustration is connected to figure 2.15, as the location of the eye corresponds to the position of the camera, and the distances along the x-axis correspond to the distances to the water surface. The graphs illustrate the two Fresnel weights as a function of distance and angle to the flat quad. Notice how our term causes a stronger reflection compared to Tiago's term after some distance that is dependent on angle and scale. What these graphs do not show is how the contrasts in the waves are still nicely preserved close to the camera.

The distVar is a distance variable which should be changed according to the scale of the environment.
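In Python form the distance term reads as follows (IN.zpos, the distance from the camera to the fragment, becomes a plain parameter; the names follow the shader snippet above):

```python
def fresnel_with_distance(fresnel_weight, zpos, dist_var=150.0):
    # Averages the plain weight with max(zpos/distVar, weight), then clamps:
    # near the camera (zpos < fresnel_weight * dist_var) the term is
    # unchanged, far away it is pulled linearly toward full reflection (1.0).
    return min((fresnel_weight + max(zpos / dist_var, fresnel_weight)) * 0.5, 1.0)
```

With a weight of 0.4 and distVar = 150, the term leaves the weight untouched below 0.4 · 150 = 60 units and reaches full mirror reflection (1.0) at 240 units.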

Figure 2.17 compares the Fresnel effect in our simulated shallow water with that in real shallow water.

(a) A photo of shallow water (b) Our water simulation.

Figure 2.17: A comparison between real shallow water and our shallow water simulator. Notice how alike the transition (Fresnel) between the transparency close to the camera and the reflection is in the two images.



2.8 Results

The following results display framerates for different types of tests. To produce an average framerate representative of the tested type, each test was individually run 3 times for 45 seconds in a deterministic scene and environment.

The framerates should only be compared within each table, as the conditions under which the tests were made can have changed, e.g. background software on the computer affecting the framerate.

2.8.1 Reflection-texture size

We are testing the speed for different texture sizes. This size is defined by the size of the viewport while rendering the reflection texture. The smaller the window, the coarser the reflection will be, but the better the performance. With all the uv displacement and the Fresnel term, the visual difference between 256x256 and 1024x1024 is very small compared to the frame rate difference, so we ended up choosing 256x256 as our permanent reflective resolution for our 1024x768 render viewport. The following table and figure 2.18 illustrate the trade-offs between the frame rate and the visual result.

Type                           1. run   2. run   3. run   Average (FPS)
Water reflection @ 128x128      51.28    51.26    51.27    51.27
Water reflection @ 256x256      50.53    50.64    50.16    50.44
Water reflection @ 512x512      46.30    46.29    46.30    46.30
Water reflection @ 1024x1024    36.62    36.58    36.56    36.58

2.8.2 Copy-to-texture

To determine the cost of copying the viewport to a texture, we tested this for our 1024x768 viewport. The test consisted of rendering the scene with and without copying the viewport to the refraction texture, and it can be seen from the results that the cost of copying a 1024x768 viewport to a texture is very small. Compared to re-rendering the whole scene, as we do for the reflection, copying to a texture is much faster.

Type                           1. run   2. run   3. run   Average (FPS)
@1024x768:
Without glCopyTexImage2D        55.57    55.57    55.56    55.57
With glCopyTexImage2D           53.43    53.49    53.43    53.45
@1600x1200:
Without glCopyTexImage2D        30.21    30.20    30.20    30.20
With glCopyTexImage2D           28.67    28.67    28.69    28.68

Copying the texture at a resolution of 1024x768 results in a 3.8% performance loss, and at 1600x1200 in a 5% loss. It is surprising that the loss is not bigger, considering that the amount of data more than doubles between the two resolutions.



(a) Reflection at 128x128 (b) Reflection at 256x256

(c) Reflection at 512x512 (d) Reflection at 1024x1024

Figure 2.18: The visual trade-off at lower reflective resolutions. There is a visible difference between the 128x128, 256x256 and 512x512 resolutions. At 1024x1024 we noticed some jittering effects caused by the discrete limitations of the screen. In our opinion the trade-off between visual quality and frame rate was best at 256x256.

2.9 Future Work

Although this method is fast and low-cost compared to the alternatives, there are still many ways to improve it. One simple step would be hardware optimization by using a frame buffer object for the reflection texture. This saves copying the entire buffer to a texture, and instead takes advantage of the graphics card's ability to draw to several buffers without extra cost.

A permanent fix for the artifact mentioned in section 2.6.3 is also on the list for future work, provided it can be done without too high a price, since the current solution of raising the clipping plane already produces relatively nice results.

Caustics is another visual effect which could be added to the project. We decided not to focus on this effect, nor on adding any underwater effects. Applying these effects using our method of copying the scene to a texture and using it for the refraction would create some difficulties, as calculating proper caustics requires the shape of the lake floor. Alternatively, it is possible to limit the caustics to the terrain and add them to the terrain shader, which would be the easiest to implement, but you would lose the ideal concept of a modular program, as you would implicitly create dependencies between the different shaders. To keep the focus on the water shader itself, we decided to leave this for future work.

Finally, we would consider subdividing the quad into smaller quads, as this would allow us to depth- and occlusion-check each sub-quad.



2.10 Summary

In this chapter we described how to capture the refraction and reflection textures and how to apply a uv displacement to them in order to give the impression of a wavy surface. Using the same dudv-maps and inexpensive texture lookups, we have illustrated how to apply motion to the surface. This procedure was inspired by a project written by Ben Humphrey [Hum05], but further enhanced by using the Fresnel weight proposed by Tiago Sousa. This, combined with our own distance term, was finally used to distort the textures over time, in a way we have not previously seen and with an impressive result, fully usable in real-time rendering.

We have described a method whose main penalty lies in the need to redraw the entire scene flipped upside down, but this penalty is very common when rendering reflective surfaces. The other bottleneck of this method, when rendering the water surface itself, is the fragment shader: since the method is based on a single quad, only 6 vertices (2 triangles) are passed to the vertex shader, and all the calculations and displacements are left to the fragment shader. This can cause serious bottlenecks if multiple shaders put a strain on the fragment program.

Due to a lack of information on correct uv displacement for refractive and reflective purposes, a lot of this project was spent on researching and describing the proper solution to this problem, along with researching the correct properties of the dudv-map, which was also undisclosed information.


3 Environment Mapping and Clouds

This project is based on an outdoor environment, which leaves us with the question of how to apply a proper sky to our engine.

There are many ways to address the problem of illustrating a dynamic sky, and all of these methods have positive and negative sides to them, depending on how the gameplay or the animations interact, or do not interact, with the sky. For example, in a simple ground-based shoot'em up game it might not be as necessary to create volume-based clouds as it would be in a flight simulator, since they basically just take up unnecessary resources.

Our world is a ground-based outdoor environment, meaning that we experience the world without interacting with the sky itself. This is the basis for our solution, and in the following sections we will focus on some of the possibilities related to this, and also discuss some good and bad alternatives.

In our opinion there are three classical ways of implementing a sky:

• Skybox

• Skydome

• Skyplane

The names of these methods describe the geometrical figures used to represent the sky and the environment, and although they might sound relatively alike, and although they only differ in geometry, the three methods have very different advantages and disadvantages. Our solution is based on a combination of a skyplane and a skydome, and it is inspired by different sources and forums on the Internet. We have combined several methods that we have not previously seen combined, and in the following sections we will describe this method, which we found feasible for our outdoor environment and which has a level of realism that corresponds to the rest of our engine.

Initially we will shortly describe the properties of the three methods.

3.1 Skyplane

A skyplane is the simplest of the three methods to simulate moving clouds. The concept is to create a plane placed above the user with a sky texture mapped onto it; by simple uv displacement we move the cloud texture, hence creating the impression of moving clouds. As the plane is considered to be infinitely large, it will cover the entire sky.

Since this method amounts to applying a texture to a quad, it is fast and simple, focusing all its GPU usage on the fragment shader. For older graphics cards this is good, since they usually have more parallel resources available there. Because of the flat structure of this method, it is not possible to add further environmental mappings and effects to the skyplane, and there will be an aliasing issue at the horizon as the texture is drawn further and further into the distance. This is a classical issue that occurs when there is a disparity between the size of the features in a texture and the discrete limitations of the resolution of the screen. More information on the aliasing problem is given in sec. 3.5.3 on page 33.

Figure 3.1: Skyplane - A plane with a sky texture applied to it. By displacing the texture along a direction it can give the impression of a moving sky.

To improve the realism, we can apply multi-layered motion by combining two or more texture lookups on the same texture, but with different displacement vectors, thereby adding an impression of depth to the sky. Multi-layered clouds will also be covered in later sections.

3.1.1 Pros and Cons

Pros

• Fast and simple.

• Graphics cards are optimized to draw flat objects.

Cons

• Advanced environmental mapping and effects are not possible.

• Aliasing issues in the distance that need to be handled.

3.2 Skybox

A more advanced alternative to the skyplane is the skybox, also called a cubic environment map. This method consists of a big textured cube with 6 images surrounding the camera/user, one on each of its sides. The images make up the scenery as seen to the north, south, east, west, up and down from the user's perspective, as illustrated in fig. 3.2. Figure 3.3¹ is an example of a texture which can be applied to a skybox.

3.2.1 Benefits

The upside of this method compared to the previous one is that it quickly adds environment and realism to a scene which otherwise would be simple and unreal. Fig. 3.4 is a good example of how applying the skybox texture to our world quickly adds a new level of realism to the scene. Furthermore, the increase of memory on graphics adapters opens up new possibilities, like animated skyboxes, which e.g. could cycle one of the sides through an exploding volcano, or animate through a sunset.

¹The skybox texture was created by Hazel Whorley, Scotland, 2006.

Figure 3.2: Skybox - A box consisting of 6 quads with a texture applied to each of them. For this effect to work the box has to be very big and translate along with the camera.

Figure 3.3: Skybox texture

Its simple structure makes it faster to render than for example a skydome (sphere), since the skybox consists of far fewer primitives than the skydome. The speed and the ease of implementation are why this method has survived through the last 25 years, and it is still the most used method for environment mapping in games today, e.g. in the Valve engine, Unreal Tournament and Far Cry. Still, these games often use multiple layers of skyboxes to apply animated effects and objects [Bel98].

3.2.2 Drawbacks

Just as in figure 3.3, skybox textures are often based on environments of indeterminable/unreachable size. The reason for this requirement is that, for the map to work properly, the camera/user should not be allowed to reach the walls of the box, as any movement away from the center of the box will expose the perspective distortion near the edges and on the walls, and the flat structure of the texture also becomes clearer, which is not desired. This is clearly visible in fig. 3.5. However, if the viewer is stuck at his location and is only able to rotate around the same point, then even small environments will qualify for this method.

Figure 3.4: A screenshot of the skybox used in our demo. It looks very nice, but the possibilities for applying clouds to the sky are very limited.

(a) Moon viewed from center (b) Same moon, off center

Figure 3.5: Why it is important to always center the skybox around the camera/user. Notice how the moon is distorted in fig. (b).

Another issue with skyboxes is the general perspective distortion one gets when viewing a box through the perspective lens of the 3D rendering engine. Due to this perspective distortion and the flat quads, any structures in the texture will converge toward the corners, so it is necessary to counter-distort the images for them to appear correct. This has been done in our texture in fig. 3.3, where you can compare how skewed the moon is with how it appears in the skybox engine, as illustrated in fig. 3.5(a).

Finally, a recurring problem with skyboxes is visible seams at the edges and corners (figure 3.6). It is a problem that can ruin the illusion of a "realistic" environment, and it usually comes from one of two things: either a stitching mismatch in the texture, which is a design issue, or the way texture addressing works on the graphics card. The latter can be fixed by clamping (GL_CLAMP) the GL_TEXTURE_2D instead of wrapping (GL_REPEAT):

glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);



Figure 3.6: Potential problems with seams at the edges of the skybox.

3.2.3 Pros and Cons

Pros

• Fast, due to few polygons.

• Easy to implement.

• Easy to apply environmental objects like mountains, buildings and static clouds.

Cons

• Not as fast as skyplane.

• Perspective distortion, though fixable by counter-distorting the textures.

• Possible errors near stitched edges.

• Off-center warping.

3.3 Skydome

Figure 3.7: Skydome - A sphere with the orientation and representation of the hemisphere, allowing easier application of celestial objects, as their positions can be represented by a latitude and a longitude.

A skydome or skysphere, which is represented by a sphere or a hemisphere, is similar in concept to the skybox. Although the difference in geometry might appear a small thing, it completely changes the conceptual purpose: whereas the skybox is strong for environmental texture mapping, the skydome has its strength in its close relation to our perception of the celestial sphere. This relation allows us to use concepts like the horizon, zenith, altitude and longitude, which e.g. can be used for positioning celestial objects on our skydome and for light scattering models. The specific advantages of these concepts will become clear as you read through the following pages.

Of course these concepts can also be adapted to skyboxes, but this requires extra and always expensive calculations, and we would have to consider warping problems when projecting onto the environmental box. This problem is less likely to appear as obviously in a skydome, as the surface of the sphere is perpendicular to the center, where our camera is preferably positioned, but also because the surface is smooth, and therefore not as exposed to sharp warps as illustrated in figure 3.5. As illustrated in section 3.7.2 on page 41, the skydome is much more robust toward off-centered cameras, as long as the resolution (number of faces) of the skydome is high enough.

Implementing a skydome is basically just implementing a colored geometry around the world, which in itself is not an interesting concept. What makes the skydome strong to work with is the ease with which dynamic effects can be applied to it, and this is the reason why we chose this method of the three mentioned. The following sections describe the effects we chose to apply to our skydome, and some of the issues we encountered while implementing them.

3.3.1 Pros and Cons

Pros

• More natural representation of the sky, sun and stars.

• Easy to work with celestial objects.

• Easy to apply inexpensive moving clouds.

• More robust toward off-centered camera.

Cons

• Uses more polygons than a skybox.

3.4 Atmospheric Light Scattering

A simple way to apply a sky color to our skydome is to just apply some tint color to our sphere, e.g. a blue color for a nice sunny day with a clear sky. The problem is that this does not comply with the real world, where the earth's atmosphere generates atmospheric light scattering, as seen in fig. 3.8(a).

Atmospheric light scattering occurs when the sunlight passes down through the different kinds of gases in the different layers of the atmosphere, which causes the scattered sunlight to take on different colors. This means that local conditions like pollution or weather, or even the angle of the sun, will affect the resulting scattering, which makes it a very complex problem to solve. A fully dynamic Rayleigh light scattering engine is outside the scope of this paper, but well-documented papers were written by [Smi99] and [Nie03], and a chapter in GPU Gems 2 explains a nice but relatively expensive method [Sea05].

A fast and very simple alternative to these analytical scattering models is presented by [Aba06], who describes a method where we use a small 16x16 texture to represent the sky color along the longitude (from the horizon to the pole) on the texture's y-axis, and the time of day on the x-axis. By simple uv displacement in the fragment program of our sky shader, we are now able to apply a color as a function of time and the z-value of the sphere. The u displacement is some function that represents the time of day; we use the time multiplied with some scale that determines the speed of a day-cycle. Just remember to load the texture with GL_REPEAT on the u coordinate, to ensure a cyclic day.

(a) The light scattering on a beautiful Copenhagen winter afternoon (b) Light scattering using the method presented by [Aba06]

Figure 3.8: A photo of the atmospheric scattering of light caused by the different particles occurring down through the atmosphere. The light on the right in both images is the glow of the setting sun. Notice how realistic the sky appears in 3.8(b) despite its simplicity.

The v displacement is some representation of the latitude of the pixel that we are currently rendering on the sphere. The geodetic latitude of a pixel, p, is calculated using the 3D vector representation of a sphere with center s and radius r as:

latitude = acos( (px − sx, py − sy, pz − sz) · (0, 0, 1/r) )    (3.1)

         = acos( (pz − sz) / r )    (3.2)

Since acos is a relatively expensive lookup, we simplified this representation of the latitude to the normalized coordinates of p transformed to modelview coordinates, as they will automatically have their origin in the center of the sphere. Normalizing these coordinates implies that we project the point p from a sphere with radius r onto a unit sphere, and therefore our z-value will be a linear representation of the height, going from [−1;1]. Although this is not the correct representation of the height, the visual difference between this faster method and the latitude in eq. (3.1) is insignificant to the viewer, since the error is smallest near the horizon, which is where the biggest color changes are. See fig. 3.9(b).
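Both variants are cheap to sketch side by side. The Python below compares the exact latitude of eqs. (3.1)-(3.2) with the normalized-z shortcut; the day-cycle speed is an arbitrary illustrative value:

```python
import math

def latitude(p, s, r):
    # Eqs. (3.1)-(3.2): angle between the dome point and the zenith axis,
    # 0 at the pole (zenith) and pi/2 at the horizon.
    return math.acos((p[2] - s[2]) / r)

def scatter_uv(t, p, s, r, day_speed=1.0 / 60.0):
    # u cycles with the time of day (GL_REPEAT keeps the day cyclic);
    # v is the cheap height shortcut: normalized z in [-1, 1], clamped so
    # everything below the horizon reuses the horizon color.
    u = (t * day_speed) % 1.0
    v = max((p[2] - s[2]) / r, 0.0)
    return (u, v)
```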

Because our terrain is not infinitely big, the coloration of the skydome below the horizon should be considered, as it will be visible when the camera is elevated away from the terrain. We fixed this by clamping the texture along the v coordinate, so that all negative lookups return the same color as the horizon. This clamping also takes care of any artifacts near zenith.

From the scattering texture (fig. 3.9(b)) we extracted our predefined tint color, which is mentioned later in this project and was used in our terrain shader as the color that the terrain slowly fades to in the distance. This is noticeable in the distance in the snapshots in fig. 3.9(a). We discovered that the third row from the horizon was a good representation of the general color at the horizon. The problem with using the color of the lower rows is that this color, and the horizon itself, is only visible when the camera is elevated from the terrain, which is not the idea with this engine. Figure 3.9(a) illustrates the results of this method, as 6 snapshots taken over a period of one day.

(a) Looping through the day using a simple 16x16 texture. (b) The texture used for scattering.

Figure 3.9: A shortcut to atmospheric scattering by using a texture to represent the light values as a function of sun position (time of day) and longitude.

3.5 Clouds

There are a number of ways to apply moving clouds to the skydome; some of these methods are:

• Directly applying a warped texture of clouds to the sphere.

• Using volumetric clouds.

• Mapping a cloud texture to the sphere using an AABB (Axis Aligned Bounding Box).

• Mapping a cloud texture to the sphere using a virtual plane.

We will not go into detail about these methods, but will briefly explain some of the ideas behind them and why we chose the one we did.

Figure 3.10: The texture used for moving clouds.


Directly applying a warped texture works similarly to applying a texture to a skybox. The texture is pulled down over the top of the dome, so it must be adjusted to counter the shape of the skydome. As with skyboxes, this usually results in a static environment.

Volumetric clouds is a method where each cloud is represented by either a geometric figure or a billboard. Depending on the environment, it is possible to end up rendering a lot of clouds, which makes this the most costly method. However, it allows for interaction with the clouds and a more realistic representation of the sky.

Mapping the sky using an AABB is the method that was proposed to us when we were given this assignment. The method is described by Abdul Bezrati [Bez05], and it works by encapsulating the skydome in an AABB and using the coordinates of this AABB to calculate the texture coordinates on the dome. It is a fast method, which only requires texture lookups, but because it maps a box onto a sphere, it results in the texture being stretched at the horizon; see figure 3.11. Bezrati reduced this problem by reducing the interval along the height-axis, z, but it was still visible. Furthermore, this method does not produce the plane of perspective on the clouds that is experienced in real life and is illustrated in figure 3.19 on page 38.

Figure 3.11: A figure from [Erl] illustrating the problem when mapping a cloud texture onto the skydome using an AABB. Notice the lack of perspective realism in the clouds and how the clouds are unnaturally stretched.

Mapping the sky using a virtual plane is a method we came across in a forum on gamedev.net [Sky]. Its strength, compared to the previous methods, is that it has the natural plane of perspective but is still very low-cost, as it only requires a few texture lookups. When combined with a multi-layered cloud texture it also allows for some very nice dynamics, as described in section 3.5.4. Since our world is viewed from a ground-based or near-ground-based viewer, we will focus on this representation of clouds, and in the following sections we will describe some of the problems and issues that should be considered when implementing this method.

3.5.1 Mapping using a virtual plane

The concept behind this method is that we are interested in giving our clouds a plane of perspective, as you would see in the real world. This is done by letting a virtual plane define the uv coordinates in the cloud texture lookup, thereby preserving the plane perspective. To use the plane's coordinates we need to project the point on the sphere that we are currently coloring in our fragment program onto


the plane using simple geometry. The geometry behind this process is illustrated in fig. 3.12 and will now be explained.

Figure 3.12: A 2D projection of a virtual plane onto the skydome, exploiting the relation between the two right-angled triangles A and B, both with an angle in point c. Knowing the x and z values in A and the height of the plane, we can calculate the corresponding projection point on the virtual plane and use the coordinates of this point in the cloud texture lookup.

The virtual plane is defined at some height along the z-axis. Based on a point, p, on the skydome, we calculate the corresponding projection point on the plane by considering the two equiangular triangles A and B in fig. 3.12. We already know the lengths of the sides in A, and we know the ratio between the triangles:

ratio = height / p_z    (3.3)

We can calculate the length of the sides in triangle B by:

x_plane = (height / p_z) · p_x    (3.4)

And similarly for y_plane.

Because these values are calculated in model space, where the center of the sphere is located at C = (0,0,0), these lengths correspond to the coordinates of the projection point, which in turn are equivalent to the texture coordinates for the cloud. The only problem with calculating the texture coordinates by eq. (3.4) is that the visual result will then depend on the size of the skydome. This can be considered a nice feature, but our perspective is that we might at some point be interested in changing this size to cover a larger area of terrain, without having to reconfigure the dimensions of the clouds. To avoid this, we instead multiply the normalized coordinates of the point, p, with our ratio, which is equivalent to scaling the sphere down to a unit sphere. The visual result is now


independent of the size of the skydome and we calculate the texture coordinates as:

u = (height / p_z) · p_x^norm    (3.5a)

v = (height / p_z) · p_y^norm    (3.5b)

Another advantage of representing our cloud texture this way is that, due to the flat structure of the plane, we can easily simulate moving clouds by displacing the uv coordinates over time along some direction vector. In our case we used a global wind vector.
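The projection above can be sketched in plain C++; this is our own minimal sketch, not the engine's shader code, and all names (Vec3, cloud_uv, height) are ours. The point is normalized onto the unit sphere and the normalized coordinates are scaled by the ratio from eq. (3.3), so the result is independent of the dome radius:

```cpp
#include <cmath>
#include <cassert>

// Toy sketch of the virtual-plane lookup, eq. (3.5). 'height' is the height
// of the virtual plane; p is any point on the skydome in model space.
struct Vec3 { double x, y, z; };

void cloud_uv(Vec3 p, double height, double& u, double& v) {
    double len = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    double nx = p.x / len, ny = p.y / len, nz = p.z / len; // p on the unit sphere
    u = height / nz * nx;  // eq. (3.5a)
    v = height / nz * ny;  // eq. (3.5b)
}
```

At zenith the lookup lands at the plane origin, and a point 45° above the horizon maps to u = height, as the similar triangles in fig. 3.12 predict.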

3.5.2 Implementation of the virtual plane

If we implement eq. (3.5) purely in the vertex shader and just pass the uv coordinates on to the pixel shader, we see clear issues at the horizon, as illustrated in fig. 3.13. The problem with this solution lies in the division by the z-value, which happens in the vertex program. When we render the triangles at the horizon, one of their corners will be at z = 0, which according to eq. (3.5) results in x_plane = ∞. Infinity interpolates very poorly in the GPU pipeline to anything except infinity, resulting in the whole triangle having this value and therefore causing the texture lookups to go wrong.

There are two possible solutions to this problem: either fade the clouds before these triangles to make the error invisible, or move the division by z to the pixel shader. The problem with the first solution is that, depending on the number of faces in the sphere, the clouds would either fade quite high in the sky, or the number of faces would have to be increased for the fade to reach lower. Either way, it would reduce the visual result or make rendering a little slower. We chose the latter solution and accepted the small cost in performance.
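The effect of moving the division can be illustrated with a toy stand-in for the GPU's linear interpolation; this is our own plain C++, not shader code, and all names are ours. Interpolating a per-vertex u = x/z propagates the infinity from the z = 0 vertex across the edge, while interpolating x and z first and dividing "per fragment" stays finite:

```cpp
#include <cmath>
#include <cassert>

// Linear interpolation, as the rasterizer would do between two vertices.
static double lerp(double a, double b, double t) { return a * (1.0 - t) + b * t; }

// u computed per vertex (u = x/z) and then interpolated at the edge midpoint.
double u_via_vertex_stage() {
    double x0 = 1.0, z0 = 0.0;   // horizon vertex, z = 0
    double x1 = 1.0, z1 = 0.5;   // vertex higher on the dome
    return lerp(x0 / z0, x1 / z1, 0.5);   // x0/z0 is inf and poisons the result
}

// x and z interpolated first, division done per "fragment": stays finite.
double u_via_fragment_stage() {
    double x0 = 1.0, z0 = 0.0;
    double x1 = 1.0, z1 = 0.5;
    return lerp(x0, x1, 0.5) / lerp(z0, z1, 0.5);   // 1.0 / 0.25 = 4
}
```

The first variant yields infinity at every interpolated point, which is the fig. 3.13 artifact; the second is well defined everywhere except at the exact z = 0 pixel.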

Figure 3.13: The sky-texture combined with the wireframe, illustrating the artifact that occurs in the sky-texture if we do all the calculation of the uv coordinates in the vertex shader. The problem is caused by the division by z = 0 at the horizon (eq. 3.5), which diverges to infinity. This does not interpolate well between the vertex shader and the fragment shader on the GPU, and causes the whole triangle to perform improper texture lookups in the fragment program.

3.5.3 Aliasing

Aliasing is a phenomenon that arose in digital signal processing (DSP) when a continuous signal is discretized, or sampled, into a digital signal using too few samples relative to the signal itself. The problem is stated in the Shannon sampling theorem, which states that any continuous signal can


(a) No mipmapping (b) With mipmapping

Figure 3.14: How mipmapping solves the problem of aliasing in textures. Notice the grainy texture in fig. (a), which is completely smoothed in (b). This is done by computer-generated mipmaps, which are generated at the start of the program.

be reconstructed if it is sampled with at least 2 samples per period. This minimum sample rate is called the Nyquist rate.

In the discretized 3D world that we are working in, we constantly experience undersampling of signals in the representation of lines, textures and edges. For lines and polygon edges the problem is commonly seen as rough, jagged lines, but today graphics cards usually have built-in support for full-scene anti-aliasing (FSAA), which helps avoid these aliasing problems.

For textures the problem occurs when rendering an object at some distance, which causes the texture to be undersampled; it is commonly corrected by graphics cards using mipmapping. Mipmapping is based on creating what is basically a set of scale-space textures derived from the original, with each scale having a lower frequency than the previous, thereby allowing a lower sampling rate before causing aliasing. The graphics card can use these higher levels of scale when rendering a texture at some distance, and the visual result of using mipmapping compared to no mipmapping is illustrated in fig. 3.14. There is obviously more to mipmapping than this, but it explains the basics behind solving general texture aliasing problems today.

In our case this problem occurs in eq. (3.5): when we render the clouds at the horizon, p_z converges to zero, resulting in u and v rapidly diverging toward infinity. Because these variables are the coordinates in the texture lookup, this divergence will be visible as texture aliasing. In the following section we determine at what value of p_z the aliasing begins, by considering the properties of the cloud texture compared to the Nyquist rate, and the properties of our camera, which defines the relationship between pixels and angles in the scene.

Fig. 3.15 illustrates our problem, and the figure also shows the variables we will be referring to in the following pages.

To determine where the aliasing begins, we will use the following procedure:

1. Calculate ω, since this gives us a relation between the pixel size and our virtual world.

2. Determine ∆u, since this determines the requirements for aliasing.

3. Calculate α based on ω and ∆u, as this now determines the angle at which the aliasing begins.


Figure 3.15: An illustration of our aliasing problem when rendering the skydome and the virtual plane. When α becomes too small, the difference in texture lookup, ∆u, in the cloud texture for two neighboring pixels becomes too large for the pixels to recreate the details in the texture. The figure also illustrates the correspondence between the angle α and the details in the texture; had the cloud puffs been bigger, a larger ∆u would have been allowed, and hence the angle α could have been smaller before the aliasing problem became visible. The angle FOVy is defined at the initialization of the camera in the program.

Calculate ω

We start by looking at the proportion between the camera's field of view along the vertical axis, FOVy, which is the angle that the camera covers along the viewport's y-axis, and the resolution of the viewport. We know that the total angle along the viewport's y-axis is the total number of pixels multiplied by the angle between them, ω:

FOVy ≈ ω · r_y  ⇒  ω ≈ FOVy / r_y    (3.6)

The reason why this is only an approximate value is that our viewport is a plane while the FOV is an angle, which makes the value calculated in eq. (3.8) an average over the entire viewport. It means that our pixel angular size is slightly larger in the middle of the viewport and slightly smaller near the sides (see fig. 3.16), which for high values of FOV gives a fish-eye effect, as everything moves faster near the edges. In the end it means that the aliasing will occur higher above the horizon when viewed in the middle of the viewport than it will near the sides, but we will nevertheless base the following calculations on this approximation.

In our program the resolution of the viewport along the y-axis is r_y = 768 and FOVy = 30°, which are inserted into eq. (3.6):

ω ≈ 30° / 768    (3.7)
  ≈ 0.039°/pixel    (3.8)

Because of the small-angle approximation [App98] we can assume that the pixel angular size is constant over the entire pixel.


Figure 3.16: This shows how the pixel angular size differs between the center and the sides of the viewport. The lines from the eye divide the viewport into four equally large parts, which can be viewed as 4 pixels in height. It is clear how the pixel angular size is smaller near the edges and larger in the middle, causing the fish-eye effect.

Determine ∆u

The value for ∆u is based on the features in the texture and the Shannon sampling theorem, which states that we need each signal to be represented by at least two samples. Note, however, that although this theorem is based on a perfect reproduction of the original signal, this is not possible with the signals that we are working with (fig. 3.17). An acceptable result is a reproduction that is smooth to the eye and does not cause jittering, and for this goal we can still use the theorem.

As we illustrate in fig. 3.17, we can in our case consider the distance between each cloud puff to be a signal, and we measured this distance to be approximately 300 pixels. This means that to stay within the Nyquist rate we need at least two samples (pixels) per period, and with ∆u being the difference in texture lookups between two neighboring pixels, this corresponds to:

2 · ∆u < 300 pixels  ⇒  ∆u < 150 pixels    (3.9)

So between every two pixels we cannot have a difference in texture lookup larger than 150 pixels if we want to avoid aliasing.

Figure 3.17: A graph of the pixel intensity of the cloud texture in fig. 3.10 at row y = 200, illustrating how we define the period of a cloud signal. We measure one period to have a length of approximately 300 pixels.


Calculate α

To calculate α we will look at the equation for calculating ∆u, according to fig. 3.15:

∆u = V_α − V_(ω+α)    (3.10)

And Vα is calculated as:

V_α = h · tan(90° − α) = h · cot(α)    (3.11)

Considering that V_(ω+α) is calculated the same way, we combine eq. (3.11) and eq. (3.10):

∆u = h · cot(α) − h · cot(α + ω) = h · (cot(α) − cot(α + ω))    (3.12)

We combine this with eq. (3.9), our previously calculated ω in eq. (3.8), and the height of the virtual plane, h = 150:

150 > 150 · (cot(α) − cot(α + 0.039°))  ⇒  1 > cot(α) − cot(α + 0.039°)    (3.13)

We have to find an angle α which satisfies eq. (3.13), and the analytical solution to this equation can be found by rewriting the last term, using [Def], to:

1 > cot(α) − cot(α + 0.039°)
  = cot(α) − (cot(α) · cot(0.039°) − 1) / (cot(α) + cot(0.039°))  ⇒

cot(α) + cot(0.039°) > cot²(α) + cot(0.039°) · cot(α) − cot(α) · cot(0.039°) − 1  ⇒

0 > cot²(α) − cot(α) − (1 + cot(0.039°))    (3.14)

We have now reduced the problem to the 2nd order polynomial, eq. (3.14), which is solved as follows:

cot(α) < (−(−1) ± √((−1)² − 4 · 1 · (−1 − cot(0.039°)))) / 2  ⇒

cot(α) < 38.8454  ∨  cot(α) < −37.8454  ⇒

α > 1.475°  ∨  α < −1.514°  ⇒

α > 1.475°    (3.15)

We ignore the negative result, since we are not interested in aliasing below the horizon.

We have now calculated the angle we need to be above the horizon in order to avoid aliasing. This corresponds with the visually estimated result illustrated in fig. 3.18.
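The analytical bound can be cross-checked numerically; the sketch below is our own C++ (names and the bisection approach are ours, not the thesis code). It bisects for the angle α where ∆u = h · (cot(α) − cot(α + ω)) drops to h, i.e. where cot(α) − cot(α + ω) = 1, with ω = 30/768 degrees per pixel from eq. (3.8); the plane height h = 150 cancels out:

```cpp
#include <cmath>
#include <cassert>

const double PI = std::acos(-1.0);

double cot_deg(double deg) { return 1.0 / std::tan(deg * PI / 180.0); }

// Bisect for the angle (in degrees) where the lookup step between two
// neighboring pixels, cot(alpha) - cot(alpha + omega), falls to 1.
double aliasing_limit_deg() {
    const double omega = 30.0 / 768.0;  // approx. 0.039 deg/pixel, eq. (3.8)
    double lo = 0.1, hi = 10.0;         // bracket the root
    for (int i = 0; i < 200; ++i) {
        double mid = 0.5 * (lo + hi);
        double du = cot_deg(mid) - cot_deg(mid + omega);
        if (du > 1.0) lo = mid; else hi = mid;  // du decreases as alpha grows
    }
    return 0.5 * (lo + hi);
}
```

The bisection converges to roughly 1.48°, in agreement with the analytical α > 1.475° of eq. (3.15).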


Figure 3.18: Aliasing occurs near the horizon on the skydome. The sphere is divided into 256 segments along the latitude, which means that each segment covers 0.7°. The bright white line marks the horizon and the bright yellow line is placed at the 2nd segment, which equals 1.4° above the horizon. This is where the visible aliasing begins, and it is especially visible when moving the camera, which results in a very jittery texture. Below the horizon, the dome has been grayed out for easier visibility.

Texture fading

We have now calculated where the problem occurs, and the solution is to introduce a fading of the texture to a predefined tint color, which corresponds well with the distance fading that can be experienced in real life, fig. 3.19. This fade is nothing but a coefficient on our cloud texture, which by some function we decrease to zero as the clouds disappear at the horizon.

Figure 3.19: Figures illustrating how real clouds fade out in the distance.

To give ourselves improved flexibility of the fade, we fade using the general exponential function, eq. (3.16), illustrated in fig. 3.20. This function gives us the ability to control all parts of the fading: the end intensity, α, the starting intensity, β, the point from which the fading should start, c, and finally the speed of the fade, k.

fade_value = α − (α − β) · e^(−k · max(ω − c, 0)²)    (3.16)

As previously calculated we would like to use the angle as our fade variable, ω, but this value requires additional calculations, so instead we decided to use a value that is already at our disposal: the z-value on the unit sphere. The corresponding value for our aliasing limit is at:

c = sin(1.4°) ≈ 0.025

And with our fade going from β = 0 (no texture at the horizon) to α = 1 (full texture at


Figure 3.20: The graph of the exponential function defining our fade value, eq. (3.16).

zenith z = 1), and with the fade starting at c = 0.025, it leads us to the function:

fade_value = 1 − (1 − 0) · e^(−k · max(z − 0.025, 0)²)
           = 1 − e^(−k · max(z − 0.025, 0)²)    (3.17)

The variable k depends on the scale of the scene and how strongly one wants the clouds to fade. Figure 3.21 illustrates the aliasing and how it is corrected by the above function with k = 128.
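Eq. (3.17) is a one-liner in practice; the sketch below is our own C++ version (the function name is ours), with z being the height on the unit sphere and k the fade speed:

```cpp
#include <algorithm>
#include <cmath>
#include <cassert>

// Fade coefficient from eq. (3.17): 0 at and below the aliasing limit
// c = 0.025, approaching 1 toward zenith (z = 1).
double cloud_fade(double z, double k) {
    double t = std::max(z - 0.025, 0.0);
    return 1.0 - std::exp(-k * t * t);
}
```

With k = 128 the fade is exactly 0 for z ≤ 0.025 and close to full intensity well before zenith, matching fig. 3.21(b).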

(a) The aliasing is very distinct at the horizon

(b) The fading factor as described by eq. (3.17)

(c) Cloud texture faded according to (b)

Figure 3.21: An illustration of the aliasing problem when using a virtual plane for moving clouds, fixed by fading the cloud texture, so that (a) * (b) = (c).

3.5.4 Multi-layered clouds

To apply further dynamics to the scene we combine the first texture layer with an additional layer.

The layers can be combined in different ways; the two most obvious methods are either multiplying the color values of the two layers or taking their mean value. We found that neither one is more correct than the other, but they produce different effects. Averaging the values results in generally more visible clouds, as the spread is decreased around the mean value of the original cloud texture, which in our case was close to 0.4 (fig. 3.22(d)), but the movement of the flat layers is more visible. Multiplying the layers, on the other hand, results in a much more discrete layer of clouds, as multiplication of values less than one results in even smaller values. This means that the clouds are only really visible at the coordinates where one of the layers has visible clouds (fig. 3.22(c)), which also makes this method more dynamic when the layers are displaced, and it makes the individual layers less noticeable.
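The two combination rules can be sketched as follows (our own minimal C++, function names ours), for grayscale layer values in [0;1]:

```cpp
#include <cassert>

// Multiplying darkens: clouds only show where both layers overlap.
double combine_multiply(double a, double b) { return a * b; }

// Averaging keeps values near the texture mean (~0.4): more visible clouds.
double combine_average(double a, double b) { return 0.5 * (a + b); }
```

For two mid-gray samples of 0.4, multiplication yields 0.16 while averaging stays at 0.4, which is why the multiplied combination in fig. 3.22(c) is so much more discrete than the averaged one in fig. 3.22(d).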

The resulting effect when using multi-layered clouds is that as the clouds move they will not appear so


flat, and the movement also gives the impression that small clouds are created and dissolved again, an effect which is most visible when multiplying the layers, as seen in fig. 3.24.

We will mention at this point that, apart from fig. 3.24 and fig. 3.23(b), we have used the mean-value method in our illustrations, as it is easier to demonstrate some of the problems we encountered when the clouds are more visible.

3.6 Combined Skydome Color Formula

The color at some pixel on our skydome is calculated by the following formula:

color = tint_color.z * cloud_color.x * fade_value * cloud_color + tint_color;

The variables fade_value (exponential function), cloud_color (cloud texture lookup) and tint_color (atmospheric light scattering lookup) have already been explained; the two remaining factors will be explained in the following.

We discovered that during the "night" our clouds were still as visible as they were during the day. To correct this we decided to multiply the cloud texture with tint_color.z, as this represents the blue component of the tint. When using the atmospheric light scattering proposed by Abad and described in section 3.4, our tint color changes according to the "time of day", with blue being much more dominant during the day. The result is bright clouds during the day and more discrete clouds during the night. This solution turned out to work very well, and the result is shown in fig. 3.25.

The multiplication with cloud_color.x is done to let the tint color be more apparent in the dark areas of the cloud texture; with the cloud texture being grayscale, fig. 3.10, any single component of its color represents its intensity. This makes the clouds transparent with respect to the tint color.
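The combined formula above can be written per channel as a small C++ sketch; the Color struct and function name are ours, not the engine's types:

```cpp
#include <cassert>

struct Color { double x, y, z; };

// Per-channel version of: color = tint.z * cloud.x * fade * cloud + tint.
// s is the scalar cloud intensity; it vanishes at night (small tint.z),
// at the horizon (fade = 0) and in dark areas of the cloud texture (cloud.x).
Color skydome_color(Color tint, Color cloud, double fade) {
    double s = tint.z * cloud.x * fade;
    return Color{ s * cloud.x + tint.x, s * cloud.y + tint.y, s * cloud.z + tint.z };
}
```

Note that whenever fade is zero (at the horizon) the clouds drop out entirely and the pixel reduces to the pure tint color.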

Figure 3.25 displays the result when we put everything together, along with a simple billboard to represent the sun.

(a) 1st cloud layer (b) 2nd cloud layer (c) The layers multiplied (d) The mean value

Figure 3.22: How to combine the multi-layered clouds. The two layers are represented by the same texture but with displaced coordinates. Averaging and multiplying their values give very different results.


(a) The cloud layers combined by averaging the color values. The clouds are generally more visible, but slightly less dynamic over time.

(b) The cloud layers combined by multiplying their values. The clouds are less visible, but more dynamic over time.

Figure 3.23: The visual result of the different ways of combining the clouds.

Figure 3.24: The dynamics of multiplying the multi-layered clouds. The pictures were taken at small time intervals, but the dynamics of the closest clouds are still clear.

3.7 Results

3.7.1 The costs of the effects

In the following we tested the cost of rendering the skydome with the multi-layered, exponentially fading cloud texture combined with the atmospheric light scattering, versus rendering the skydome with just a static color.

Type                   1. run   2. run   3. run   Average (FPS)
With static color      40.38    40.37    40.38    40.38
With all the effects   39.28    39.32    39.29    39.30

It is very clear how inexpensive the described methods are.

3.7.2 Number of faces on skydome

We tested the impact the number of faces on the sphere had on the framerate. A lower number of faces should theoretically mean a higher framerate, but at the cost of more visible primitives, since the GPU interpolates linearly between the vertex and the fragment program and therefore cannot capture the roundness of the sphere.

The following table displays the results with a sphere rendered with X number of triangles/faces. The


Figure 3.25: The effects in the skydome put together. The pictures are taken over a period of "one day", beginning with morning in the top left and ending with night in the lower right. We have added a sun (billboard) for better visual recognition of the sunset. Notice how the intensity of the clouds is enhanced during the day and dimmed during the night. This is the effect of our multiplication with the blue value in the tint color, as described in sec. 3.6.

numbers correspond to a resolution of 4x4, 16x16 and 32x32 segments along the longitude and half latitude, which is how our spheres (skydomes) are defined.

Type                   1. run   2. run   3. run   Average (FPS)
(4x4) 24 faces         39.36    39.33    39.34    39.34
(16x16) 544 faces      39.28    39.32    39.29    39.30
(32x32) 2112 faces     39.27    39.27    39.28    39.27

3.8 Future Work

For future work there are possibilities to improve the correspondence between the motion of the sun and the light scattering texture. We only implemented a simple motion of the sun along the circumference of the skydome.

It is always possible to apply additional effects if needed, but at the risk of sounding smug, we believe that the solution we present includes the effects needed and strikes a very good balance between visual results and performance.


(a) Skydome at 4x4 (b) Skydome at 16x16 (c) Skydome at 32x32

Figure 3.26: The skydome at different resolutions. At lower resolutions the vertices become very clear, but without any noticeable performance gain, so based on these results the resolution of 32x32 (2112 faces) is preferred.

3.9 Summary

In this chapter we have described the basics behind different ways of implementing a sky.

With focus on the skydome, we have described some dynamic methods that can be added to the environment, including an inexpensive implementation of atmospheric light scattering, and we combined it with multi-layered clouds using a virtual skyplane. All in all, a method that we in section 3.7.1 showed to be very inexpensive.

We have shown in depth some of the problems that occur while implementing the multi-layered clouds, such as showing that the theory behind aliasing, which is usually described for DSP, also applies to 3D texture rendering; we used this theory to propose an analytical solution to the aliasing problem. We also focused on some concerns caused by divisions in the vertex program, which we solved by moving the division to the fragment program. We finally combined the solutions to these problems with our own method of exploiting the horizon color of the light scattering texture to determine the time of day, and thereby set the intensity of the clouds so they are more visible during the day and less during the night.


Bibliography

[Aba06] Jesús Alonso Abad. A Fast, Simple Method to Render Sky Color Using Gradients Maps. University of Burgos, October 2006.

[App98] Trigonometric Infinite Series. http://hyperphysics.phy-astr.gsu.edu/hbase/trgser.html, December 1998. Georgia State University. Accessed 23 Mar 2007.

[Bel98] Gavin Bell. Creating Backgrounds for 3D Games. http://www.gamasutra.com/features/19981023/bell_01.htm, October 1998. Gamasutra. Accessed 7 Dec 2006.

[Bez05] A. Bezrati. Nature Scene - Technical Report. NVIDIA Corporation, August 2005.

[Bla00] D. T. Blackstock. Fundamentals of Physical Acoustics, pages 191-193. Wiley Interscience, 2000.

[Cin] GeForce 6 Tech Specs. http://www.nvidia.com/object/geforce6_techspecs.html. NVIDIA Corporation. Accessed 24 Feb 2007.

[Def] Definitions of trigonometric functions. http://mathforum.org/dr.math/faq/formulas/faq.trig.html. The Math Forum. Accessed 17 Mar 2007.

[EH05] Kenny Erleben, Jon Sporring, Knud Henriksen and Henrik Dohlmann. Physics-based Animation. Charles River Media, 2005.

[Erl] Kenny Erleben. Game Animation, 2. Quarter, 2005. http://www.diku.dk/~kenny/game.animation.05/. DIKU. Accessed 28 Dec 2006.

[Fla98] Andrew Flavell. Run-Time MIP-Map Filtering. http://www.gamasutra.com/features/19981211/flavell_01.htm, December 1998. Gamasutra. Accessed 17 Mar 2007.

[Fre] Fresnel Equations: Definition and Much More from Answers.com. http://www.answers.com/topic/fresnel-equations. Answers.com. Accessed 14 Sep.

[Gre05] S. Green. The OpenGL Framebuffer Object Extension. NVIDIA Corporation, March 2005.

[Hum05] Ben "DigiBen" Humphrey. Realistic Water Using Bump Mapping and Refraction. GameTutorials, 2005.

[Jen04] Lasse Staff Jensen and Robert Goliás. Deep-Water Animation and Rendering. Funcom Oslo AS, 2004.

[Kil99] Mark J. Kilgard. Improving Shadows and Reflections via the Stencil Buffer, pages 1-13. NVIDIA Corporation, 1999.

[Kil05] Mark J. Kilgard. http://www.opengl.org/registry/specs/NV/texture_rectangle.txt, March 2005. NVIDIA Corporation. Accessed 19 Nov 2006.

[Nie03] Ralf S. Nielsen. Real Time Rendering of Atmospheric Scattering Effects for Flight Simulators. DTU, Lyngby, 2003.

[Sea05] Sean O'Neil. GPU Gems II, chapter Accurate Atmospheric Scattering, pages 253-268. Addison Wesley, 2005.

[Sky] Sky Plane - Where to start? http://www.gamedev.net/community/forums/topic.asp?topic_id=324413. GameDev.net forum. Accessed 19 Nov 2006.

[Smi99] A. J. Preetham, Peter Shirley and Brian Smits. A Practical Analytic Model for Daylight. University of Utah, October 1999.

[Sou05] Tiago Sousa. GPU Gems II, chapter Generic Refraction Simulation, pages 295-305. Addison Wesley, 2005.

[Sup] Open Inventor - Release Notes. http://www.tgs.com/support/oiv_doc/ReleaseNotes/OIV/text.htm. Mercury Computer Systems Inc. Accessed 24 Feb 2007.

[Szy06] Mark Szymczyk. Drawing Tiles With OpenGL. http://www.meandmark.com/tilingpart1.html, August 2006. Accessed 7 Dec 2006.

[T.D] D. Shreiner, M. Woo, J. Neider and T. Davis. The OpenGL Programming Guide - The Red Book. http://www.opengl.org/documentation/red_book/. Accessed 6 Sep 2006.

[Tes04] Jerry Tessendorf. Simulating Ocean Water. Finelight Visual Technology, SIGGRAPH 2001/2004, 2004.

[Win] Michal Lumsden and Meggie Winchell. Cloud Atlas. http://www.astro.umass.edu/~arny/winchnew.html. Department of Astronomy. Accessed 3 Mar 2007.

[Yod99] James H. McClellan, Ronald W. Schafer and Mark A. Yoder. DSP First: A Multimedia Approach, chapter Sampling and Aliasing, pages 87-89. Prentice Hall, 1999.


A Source Code

A.1 Water

A.1.1 Rendering Reflection

void Application::render_water_reflection(bool enabled) {
    if (!enabled) return;

    // Change the viewport to be the size of the texture we will render to
    glViewport(0, 0, m_waterview_width, m_waterview_height);

    // Draw reflection, but only where the floor is
    double plane[4] = {0.0, 0.0, -1.0, m_water_depth};
    glPushMatrix();

    // Turn on a clip plane and set the clipping equation
    glEnable(GL_CLIP_PLANE0);
    glClipPlane(GL_CLIP_PLANE0, plane);

    // Translate according to water level
    glTranslatef(0.0, 0.0, 2 * m_water_depth);

    // The skydome is not rendered inverse, as it is already inverse
    render_sky_dome(m_sky_enabled, m_sky_shader_on, m_sky_wireframe, true);

    // Flip upside down
    glScalef(1.0, 1.0, -1.0);

    // No backface culling (due to the inverse view)
    glCullFace(GL_FRONT);
    glEnable(GL_NORMALIZE);

    // Render scene
    render_terrain(m_terrain_visible, m_terrain_shader_on, m_terrain_wireframe,
                   m_water_depth);
    render_boid(m_boid_visible, m_boid_mesh_on, m_boid_shader_on);
    render_trees(m_trees_visible);

    // Reset OpenGL states
    glCullFace(GL_BACK);
    glDisable(GL_NORMALIZE);

    // Turn clipping off
    glDisable(GL_CLIP_PLANE0);

    glPopMatrix();
    glColorPicker(1, 1, 1, 1);

    m_water_shader.grab_screen_1(); // 1 = Reflection

    glViewport(0, 0, sWidth, sHeight);
}

A.1.2 Grab Scene

void grab_screen_1() {
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, g_uiReflectionTex);
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, ScreenWidth, ScreenHeight, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glDisable(GL_TEXTURE_2D);
}

A.1.3 Vertex program

struct vertex_input
{
    float3 position : POSITION;
};

struct vertex_output
{
    float4 position : POSITION;
    // zpos is the distance from the water surface to the camera - not interpolated
    float  zpos   : TEXCOORD0;
    float3 orgpos : TEXCOORD1;
};

vertex_output main(vertex_input IN
    , uniform float4x4 ModelViewProj ///< Concatenation of the modelview and projection matrices
    , uniform float4x4 ModelViewIT   ///< The inverse transpose of the modelview matrix.
    , uniform float time
    , uniform float4 camera_pos
    )
{
    vertex_output OUT;
    OUT.orgpos = IN.position;
    OUT.position = mul(ModelViewProj, float4(IN.position, 1.0));

    // zpos is the distance from the camera to the water surface
    OUT.zpos = OUT.position.z;

    return OUT;
} // end of main

A.1.4 Fragment program

struct vertex_output
{
    float4 pos    : POSITION;
    float  zpos   : TEXCOORD0; // Distance from water surface to camera - not interpolated
    float3 orgpos : TEXCOORD1;
};

struct frag_out
{
    float4 color : COLOR;
};

frag_out main(vertex_output IN,
    float2 spos : WPOS,
    uniform sampler2D normalTex,
    uniform sampler2D dudvTex,
    uniform samplerRECT reflectionTex : TEXUNIT0,
    uniform samplerRECT refractionTex : TEXUNIT1,
    uniform float4 camera_pos,
    uniform float4 light_position,
    uniform float4 grab_screen,
    // xy => reflection view size, zw => normal render view size
    uniform float time)
{
    frag_out OUT;

    const float const_shine = 128.0;
    // const_dist describes the distortion. Higher => more distortion
    const float const_dist = 0.007;
    const float const_dudv = 0.02;
    const float const_screenratio = 32;

    float3 light_vector = normalize(light_position.xyz - IN.orgpos);
    float3 eye_vector   = normalize(camera_pos.xyz - IN.orgpos);

    float2 primary_motion = (IN.orgpos.xy / const_screenratio + time * float2(0.09, 0.13));

    // Texture offset based on dudv-map
    float2 secundary_mot = IN.orgpos.xy / const_screenratio + time * float2(0.002, -0.001);
    float4 secundary_motion = tex2D(dudvTex, secundary_mot);
    secundary_motion = (secundary_motion * 2 - 1) * const_dist;

    // Lookup normal according to dudv-offset
    float4 normal_vector = tex2D(normalTex, primary_motion + secundary_motion.xy);
    normal_vector.xyz = normalize(normal_vector.xyz * 2 - 1);
    normal_vector.a = 0;

    // Lookup dudv-refraction according to dudv-offset
    float4 dudv_offset = tex2D(dudvTex, primary_motion + secundary_motion.xy);
    float2 dudv_offsets = (dudv_offset.xy * 2 - 1) * const_dudv;

    float2 vCoords = (spos / grab_screen.zw + dudv_offsets) * grab_screen.xy;
    float4 reflColor = texRECT(reflectionTex, vCoords);

    vCoords = spos + (dudv_offsets * grab_screen.zw);
    float4 refrColor = texRECT(refractionTex, vCoords);

    // Highlights on the surface
    float3 light_reflection = normalize(reflect(-light_vector, normal_vector.xyz));
    float3 sunlight = max(pow(dot(light_reflection, eye_vector), const_shine), 0);
    float3 light = sunlight;

    // Fresnel effect
    const float R = 0.20373;
    const float distVar = 150;

    float facing = 1.0 - max(dot(eye_vector, normal_vector.xyz), 0);
    float fresnelWeight = max(R + (1.0 - R) * pow(facing, 3.0), 0.0);
    fresnelWeight = min((fresnelWeight + max(IN.zpos / distVar, fresnelWeight)) * 0.5, 1);

    refrColor *= (1 - fresnelWeight);
    reflColor *= fresnelWeight;

    OUT.color.rgb = light + reflColor.xyz + refrColor.xyz;
    OUT.color.a = 1;
    return OUT;
}


A.2 Sky

A.2.1 Vertex program

#define PI 3.141592654
#define TWOPI 6.283185308

struct vertex_input
{
    float3 position : POSITION;
    float2 uv : TEXCOORD0;
};

struct vertex_output
{
    float4 position : POSITION;
    float4 color : COLOR;
    float2 uvcoords1    : TEXCOORD0; // UV for moving clouds. XY and ZW.
    float  intensity    : TEXCOORD1; // Intensity of the cloud
    float2 uvcoords2    : TEXCOORD2;
    float2 skydomecoluv : TEXCOORD3; // UV for skydome color
    float  orgposz      : TEXCOORD4;
};

vertex_output main(
      vertex_input IN
    , uniform float4 params          // x, y, z = AABB w, h, d; w = dt
    , uniform float3 offset          // lower corner of AABB
    , uniform float plane_height
    , uniform float3 sun_vector
    , uniform float4x4 ModelViewProj
    , uniform float4 wind_direction
    )
{
    vertex_output OUT;

    OUT.position = mul(ModelViewProj, float4(IN.position, 1.0));

    //
    // Clouds - using a virtual sky plane
    //

    // Finding vertex coordinates corresponding to the plane
    float2 d1 = wind_direction.xy;
    float2 d2 = wind_direction.zw;

    // Normalizing coordinates to fit the unit circle
    float3 normVect = normalize(IN.position);

    OUT.orgposz = abs(IN.position.z);
    float2 vectLength = float2(normVect.x, normVect.y) * plane_height;

    // Multiplying movement with IN.position.z because we have to
    // divide it in the fragment program later.
    // The reason we need to divide it later is that z = 0 in the
    // vertex program => infinity in (plane_height / z),
    // and infinity is bad to interpolate against anything.
    // These are the high-level clouds. 0.9 makes them more compact.
    OUT.uvcoords1.xy = 0.9 * vectLength + params.w * d1 * OUT.orgposz;

    // Low-level clouds
    OUT.uvcoords2.xy = 0.4 * vectLength + params.w * d2 * OUT.orgposz;

    float fadeheight = params.z / 32;
    OUT.intensity = max(normVect.z - 0.025, 0);

    //
    // Skydome Color Change
    //
    // U texture coordinate is "time of day".
    // This changes with time/10 (corresponding to variable in application.cpp)
    OUT.skydomecoluv.x = params.w / 10;
    // V texcoord. is the distance to zenith
    // OUT.skydomecoluv.y = max(1 - max(acos(normVect.z) * 2 / 3.141592654, 0.05), 0.05);
    OUT.skydomecoluv.y = min(max(normVect.z, 0.05), 0.95);

    return OUT;
} // end of main

A.2.2 Fragment program

#define PI 3.141592654

struct vertex_output
{
    float4 position : POSITION;
    float4 color : COLOR;
    float2 uvcoords1    : TEXCOORD0; // UV for moving clouds, 1st layer.
    float  intensity    : TEXCOORD1; // Intensity of the cloud
    float2 uvcoords2    : TEXCOORD2; // UV for moving clouds, 2nd layer
    float2 skydomecoluv : TEXCOORD3; // UV for skydome color
    float  orgposz      : TEXCOORD4;
};

void main( vertex_output IN
    , out float4 color : COLOR
    , uniform float3 tint
    , uniform sampler2D noisetex
    , uniform sampler2D skydome
    , uniform sampler2D skydomecol
    )
{
    // Have to divide by the z value here, or else it will break the
    // interpolation of triangles with a vertex in z = 0
    float4 noise1 = tex2D(noisetex, IN.uvcoords1.xy / IN.orgposz);
    float4 noise2 = tex2D(noisetex, IN.uvcoords2.xy / IN.orgposz);

    // Multiplying layers
    float4 cloud_color = (noise1 * noise2);

    // Mean value of layers:
    // float4 cloud_color = 0.5 * (noise1 + noise2);

    // Calculating intensity of the cloud texture
    float intensity = 1 - exp(-256 * pow(IN.intensity, 2));

    //
    // Clouds
    //
    // tint.z = blue level (clouds are less visible during night)
    color = tint.z * cloud_color.x * intensity * cloud_color;

    //
    // Applying Atmospheric Light Scattering texture
    //
    float4 bgcolor = tex2D(skydomecol, IN.skydomecoluv);
    color += 0.9 * bgcolor;
    color.a = 1.;
}