Global Illumination using Precomputed Light Paths for Interactive Light Condition Manipulation (sap 0275)
Yonghao Yue, Univ. Tokyo
Kei Iwasaki, Wakayama Univ.
Yoshinori Dobashi, Hokkaido Univ.
Tomoyuki Nishita, Univ. Tokyo
In this document, we describe our results and limitations in more detail. Before that, we give some details of our method. All the results shown in this document and in the one-page abstract were rendered on a desktop PC with a Pentium D 3.0 GHz CPU, 2.0 GB of main memory, and an NVIDIA GeForce 7800 GTX GPU.
1 Precomputation
Particle Tracing To make the computation during rendering correct, entry points must be weighted appropriately according to the density of their distribution. We sample entry points uniformly on the surfaces of the scene, so each entry point represents the same differential area. To sample an entry point, we first choose a polygon with probability proportional to its area, and then select a point uniformly at random on that polygon. After each entry point is determined, we build a light path by iteratively extending a light sub-path. If the material characteristics do not need to be modified, we can importance-sample from the BRDFs; otherwise, only the cosine term is used for importance sampling.
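The uniform surface sampling of entry points described above can be sketched as follows (a minimal sketch with illustrative helper names; the scene is assumed to be a plain triangle list):

```python
import random

def tri_area(a, b, c):
    # Half the magnitude of the cross product of two edge vectors.
    e1 = [b[i] - a[i] for i in range(3)]
    e2 = [c[i] - a[i] for i in range(3)]
    cx = e1[1] * e2[2] - e1[2] * e2[1]
    cy = e1[2] * e2[0] - e1[0] * e2[2]
    cz = e1[0] * e2[1] - e1[1] * e2[0]
    return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5

def sample_entry_point(triangles):
    """Pick a triangle with probability proportional to its area,
    then a uniform point inside it (barycentric sampling)."""
    areas = [tri_area(*t) for t in triangles]
    a, b, c = random.choices(triangles, weights=areas, k=1)[0]
    u, v = random.random(), random.random()
    if u + v > 1.0:           # reflect to stay inside the triangle
        u, v = 1.0 - u, 1.0 - v
    return tuple(a[i] + u * (b[i] - a[i]) + v * (c[i] - a[i]) for i in range(3))
```

Because the triangle is chosen area-proportionally and the point inside it uniformly, every entry point carries the same differential area, which is what the weighting during rendering relies on.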
Final Gathering To construct final gathering paths, we must determine the locations of the cache points for (ir)radiance caching in the scene. We use a rejection-sampling technique: we first densely sample candidate locations, then reject those whose energy can likely be interpolated from the energies at other locations. The rejection is based on the error metric proposed by Tabellion and Lamorlette [2004]. After obtaining the set of cache locations, we sample final gathering rays from each location.
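The rejection test can be sketched as below. This is only an illustration of the idea (combining a positional term and a normal-deviation term, in the spirit of the error metric of [Tabellion and Lamorlette 2004]); the exact metric and the threshold values here are assumptions, not the paper's actual constants:

```python
import math

def accept_cache_point(candidate, normal, accepted, max_dist=1.0, max_err=1.0):
    """Accept `candidate` (with surface `normal`) as a new cache point only
    if no already-accepted point could interpolate it within the threshold.
    `accepted` is a list of (position, normal) pairs."""
    for p, n in accepted:
        d = math.dist(candidate, p)
        # Positional error relative to an allowed radius, plus a term that
        # grows with the deviation between the two surface normals.
        cosang = max(-1.0, min(1.0, sum(a * b for a, b in zip(normal, n))))
        err = d / max_dist + math.sqrt(max(0.0, 1.0 - cosang))
        if err < max_err:
            return False  # an existing cache point covers this location
    return True
```

Densely sampled candidates are streamed through this test; each accepted candidate is appended to `accepted`, so later candidates near it are rejected.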
2 The Hierarchical Volumetric Data Structure
We construct the hierarchical volumetric data structure (HVDS) in the same fashion as an octree. First, the energies are stored at the highest resolution, which we call the base level. The resolution of the base level is chosen according to the required accuracy. Since each grid-cell in the HVDS may contain several faces or curved surfaces, we cluster the normal directions of the faces in the cell and store the energies per clustered normal direction. We store the outgoing radiance or irradiance, rather than the incident radiance, so that the stored values can be used directly in final gathering. To handle glossy surfaces, the outgoing directions are discretized and the energies are accumulated per discretized outgoing direction. After all the energies are stored at the base level, we normalize the stored radiance or irradiance to account for the surface area within each grid-cell and the solid angle of each outgoing direction.
After the normalization of the base level is completed, coarser resolution levels are constructed in the same manner as the bottom-up construction of an octree from its leaf nodes: we accumulate the energies stored at finer levels into the corresponding cells of the coarser levels. To take into account the occlusion due to surfaces within a grid-cell (Figure 1), we assume that the energy has an invariant distribution within each grid-cell, and then calculate an occlusion ratio for each discretized direction.
Figure 1: Occlusion problem in the HVDS.
Figure 2: Choosing the grid-cell with the appropriate level in the hierarchy.
Figure 3: Limitation of the assumption for solving the occlusion problem in the HVDS.
3 Rendering
Once the locations of the light sources are determined during rendering, we can calculate the form factor between each sampling point on a light source and each entry point. Since the entry points are uniformly distributed in the scene, the differential area per entry point is known. By calculating these form factors, we can compute the energy distributed to each entry point, and consequently the energies at the remaining vertices of the light paths.
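A sketch of the differential form-factor evaluation between a light sample and an entry point (visibility is assumed to be tested elsewhere; function and parameter names are illustrative):

```python
import math

def point_form_factor(light_p, light_n, entry_p, entry_n, entry_area, visible=True):
    """Differential form factor from a point on a light source toward an
    entry point: cos(theta_l) * cos(theta_e) / (pi * r^2) * dA,
    clamped to zero when either surface faces away or is occluded."""
    d = [e - l for l, e in zip(light_p, entry_p)]
    r2 = sum(c * c for c in d)
    r = math.sqrt(r2)
    w = [c / r for c in d]                     # unit direction: light -> entry
    cos_l = sum(a * b for a, b in zip(light_n, w))
    cos_e = -sum(a * b for a, b in zip(entry_n, w))
    if not visible or cos_l <= 0.0 or cos_e <= 0.0:
        return 0.0
    return cos_l * cos_e / (math.pi * r2) * entry_area
```

Multiplying this factor by the emitted power of the light sample gives the energy injected at the entry point; the precomputed sub-path then propagates that energy to the remaining path vertices.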
For final gathering, we must determine the grid-cell at the appropriate level of the hierarchy for each final gathering ray. We do this by projecting the solid angle of each final gathering ray onto its intersection point (A in Figure 2), and using the grid-cell at the level whose cell size is nearly equal to the projected area.
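The level selection can be sketched as below (an assumption-laden sketch: cell width is taken to double per level, level 0 is the base level, and the ray footprint is approximated as solid angle times squared distance):

```python
import math

def choose_level(solid_angle, dist, base_cell_size, num_levels):
    """Pick the HVDS level whose cell width best matches the footprint the
    final-gathering ray projects onto the surface at its hit point."""
    projected_area = solid_angle * dist * dist   # footprint area at the hit point
    target_width = math.sqrt(projected_area)     # side of the equivalent square
    # Level whose cell width (base_cell_size * 2**level) is closest to target.
    level = round(math.log2(max(target_width / base_cell_size, 1.0)))
    return min(max(level, 0), num_levels - 1)
```

Nearby hits thus read from fine cells and distant hits from coarse cells, which keeps the per-ray lookup cost roughly constant.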
4 Results
We show the rendered results in Figures 4, 5, and 6. Figures 4 and 5 contain comparisons with reference solutions obtained by photon mapping and pure Monte Carlo ray tracing. Figure 6 shows a rendered example of a complex scene. The scene descriptions are given in Table 1. Table 2 summarizes the computation time (all measured in ms) of each step in various cases. In the case where the materials are fixed, the rendering process consists of the calculation of direct illumination (Tdi in Table 2), the recalculation of the energies of the light paths (Tlpe), the construction of the HVDS (Thvds), the final gathering (Tfg), and the (ir)radiance splatting (Tsplat); Ttot is the total time to render a single frame. Note that most of the rendering time is spent on (ir)radiance splatting. Table 2 also shows the computation time when we move a light source (Tlight), move the viewpoint (Tview), or modify a material (Tmat).
Table 3 shows the decrease in rendering performance as the number of light sources in the scene increases. For each scene, 400k entry points are sampled, and up to 40k cache points are used. We emit 256 final gathering rays from each cache point.
Please refer to the accompanying animations, which show the Box scene and the Sponza scene. In the movie of the Box scene, we demonstrate the features of our algorithm. We first move the viewpoint to look around the Box scene. Then we change materials, including changing the BRDF of the teapot from a diffuse one to a glossy one. We then move the viewpoint around the Box scene again to show the glossy surface. After that, we move the point light source and add an area light source, showing that we can handle several light sources. The area light source is approximated by 10 spot light sources. Finally, we switch off the point light source and move the area light source to show the indirect illumination due to the left wall. In the movie of the Sponza scene, we show the interactive rendering of the Sponza scene: we first move the viewpoint, and then move the point light source and add an area light source. These movies were made by recording real-time interactive operations. Because we used capture software, the frame rates in the movies appear lower than those of the actual interactive operations, and Mach band artifacts were introduced by the capture software.
5 Limitations and Drawbacks
We describe two main limitations and drawbacks of our method here. First, due to the discretization of directions for storing energies, we cannot handle high-frequency BRDFs. From our experiments, we find that a Phong BRDF with an exponent of up to 40 can be handled without noticeable differences using 256 discretized directions. Second, due to the assumption made to solve the occlusion problem in the HVDS, the rendered results may be inaccurate under some special lighting conditions. If a final gathering ray gathers energy from a grid-cell containing a blocker with a light source just behind it, as illustrated in Figure 3, the energy computed at the final gathering point may be wrong.
6 Future Work
In future work, we aim to achieve the interactive rendering of dynamic scenes, where objects in the scene move.
References
TABELLION, E., AND LAMORLETTE, A. 2004. An approximate global illumination system for computer generated films. ACM Transactions
on Graphics 23, 3, 469–476.
Figure 4: Comparison of the Box scene (direct illumination only / photon mapping / our method / pure Monte Carlo).
Figure 5: Comparison of the Sponza scene (direct illumination only / photon mapping / our method / pure Monte Carlo).
Figure 6: Examples of a room scene (direct illumination only / our method).
Table 1: Scene description.
Scene #Triangle Image size Precomp. Time Frame rate Memory usage
Box 2,914 512 × 512 32min 9.2 fps 165MB
Sponza 76,154 640 × 480 43min 5.1 fps 172MB
Room 141,287 640 × 480 50min 1.7 fps 197MB
Table 2: Detailed timings.
Scene Tdi Tlpe Thvds Tfg Tsplat Ttot Tlight Tview Tmat
Box 6 11 11 5 76 109 109 87 1271
Sponza 15 15 12 27 122 191 191 164 1310
Room 30 23 112 142 281 588 588 453 1807
Table 3: Fps vs. the number of lights.
Scene 1 light 2 lights 4 lights 8 lights
Box 9.1 8.6 8.2 7.2
Sponza 5.2 4.8 4.3 3.6
Room 1.7 1.5 1.2 1.0