
EUROGRAPHICS '92 / A. Kilgour and L. Kjelldahl (Guest Editors), Blackwell Publishers © Eurographics Association, 1992

Volume 11, (1992), number 3

Adaptive Sampling of Area Light Sources in Ray Tracing Including Diffuse Interreflection

Arjan J. F. Kok and Frederik W. Jansen

Faculty of Technical Mathematics and Informatics, Delft University of Technology, Julianalaan 132, 2628 BL Delft, The Netherlands

Email: [email protected]

Abstract Ray tracing algorithms that sample both the light received directly from light sources and the light received indirectly by diffuse reflection from other patches can accurately render the global illumination in a scene and can display complex scenes with accurate shadowing. A drawback of these algorithms, however, is the high cost of sampling the direct light, which is done by shadow ray testing. Although several strategies are available to reduce the number of shadow rays, a large number of rays is still needed, in particular to sample large area light sources. An adaptive sampling strategy is proposed that reduces the number of shadow rays by using statistical information from the sampling process and by applying information from a radiosity preprocessing. A further reduction in shadow rays is obtained by applying shadow pattern coherence, i.e. reusing the adaptive sampling pattern for neighboring sampling points.

Keywords: rendering, radiosity, ray tracing, adaptive shadow testing

1. Introduction

Realistic images of rooms and scenes can be generated by high-quality rendering algorithms that accurately calculate the distribution of light in an environment. A well-known technique for high-quality rendering is ray tracing (Whitted, 1980; Glassner, 1989), which samples the light by tracing rays from the eye point into the scene and looking for intersections with the objects in the scene. Ray tracing can generate images with shadows, specular surfaces and transparency. Standard ray tracing, however, samples only the reflected light on surfaces that is received directly from (point) light sources. It does not account for the light reflected by other surfaces. To calculate this indirect light, a large number of secondary rays would have to be cast in all directions. Because this sampling has to be done recursively, the total computation is extremely expensive. For that reason it has long been deemed infeasible to compute this contribution, and an ambient term has been assumed instead.

Diffuse interreflection only came into the picture with the development of radiosity algorithms (Goral et al., 1984; Nishita and Nakamae, 1985). In analogy to thermal exchange algorithms, these algorithms determine the global illumination in a scene by calculating the amount of energy exchanged between discrete patches in the scene. Because of the discretization, exact values are only known at certain points on the surfaces, and values at intermediate points have to be interpolated from these. High shading frequencies (sharp shadows) can therefore only be represented accurately at high cost (Campbell and Fussell, 1990).

Attention has therefore shifted back to extended ray tracing methods that sample both the direct and the indirect light by sending shadow rays to the main light sources and additional secondary rays into the main directions of specular and diffuse reflection. The challenge is to reduce the exorbitantly high sampling cost. Kajiya (1986) observed that if enough primary rays are cast for each pixel, then it is not necessary to send a large number of shadow and secondary rays for each primary ray, but only a few will do (i.e. 'path tracing'). By stochastically distributing these rays in proportion to the importance of the light sources and reflection directions, an efficient Monte Carlo integration of the light reflection can be achieved. Ward (1988) improved on this by devising a coherence method that is based on the observation that the amount of diffusely reflected light received by a surface from other surfaces is fairly constant over a surface. Storing sampled values at the


first sample points ('illuminance caching') and reusing them for later samples in the same region considerably reduces the sampling effort.

Alternatively, Rushmeier (1988), followed by Shirley (1990), Kok and Jansen (1991) and Chen et al. (1991), introduced a radiosity preprocessing to calculate an approximate radiosity value for each surface in the scene. Secondary rays can directly take this value ('one level' patch tracing), avoiding a recursive sampling of the light at each ray-surface intersection point. A further speed-up of the calculation can be achieved by sampling only the most important light sources. After sampling the main light sources, the smaller light sources can be added by estimating their occlusion from light source statistics (Ward, 1991) or by deferring the sampling of these sources to a low-frequency pass (Chen et al., 1991).

In (Kok and Jansen, 1991), an algorithm was introduced that explicitly classifies the light sources for each surface patch and selects the most important ones for shadow testing. In this way, sampling of a strong but totally occluded light source is avoided, whereas a weak light source that is partly occluded and will give a visible shadow boundary is selected. The light sources are classified on the basis of information from the radiosity preprocessing. During the radiosity preprocessing, the most important contributions are stored for each patch. The other contributions are summed to create a general radiosity value. After the preprocessing, a further selection is done and only the light sources that contribute significantly to the light intensity or give rise to shadow boundaries are selected for shadow ray tracing. The contributions of the other light sources are added to the radiosity value, which is displayed in the normal way.

Although the source selection cuts down considerably on the number of shadow rays, the number will still be high. This is due to the large area light sources that require a large number of shadow rays to get the expected soft shadowing. For point light sources only one shadow ray is needed, because the sample point either is in shadow or is not. The light source cannot be partly obscured, and therefore shadow boundaries are sharp. Area light sources, however, can be partly obscured, resulting in penumbrae, regions that receive only a portion of the light from the light source. A large number of shadow rays is needed in this case to obtain smooth shadow gradients.

If an area light source is so large that the shadow gradients almost disappear, or when obstructing objects are so near to a light source that shadows disappear, then there is no need to sample that light source; its contribution can just as well be added to the general radiosity value. There is thus a strange discontinuity in the algorithm here: whereas for a large area light source a large number of rays will have to be cast, in some cases no shadow rays are needed at all. The basic idea behind our sampling approach, discussed in the following sections, is to eliminate this discontinuity and to reduce the number of shadow rays for area light sources that create only small shadow gradients.

In section 2, we discuss the sampling of large area light sources and motivate the choice for adaptive sampling. In sections 3 and 4, we describe our adaptive sampling method and the use of sampling pattern coherence. In section 5, the methods are compared with non-adaptive sampling. In section 6, conclusions are drawn and directions for further research are given.

2. Sampling area light sources

The radiance seen at a point x caused by a light source S with area A having radiance L_s is given by the following integral:

L(x) = \int_{A} g(x, x')\, \rho_{bd}(x, \lambda, \vec{\omega})\, L_s(x')\, \frac{\cos\theta \cos\theta'}{\|x' - x\|^{2}}\, dA'

In this integral, g(x, x') is the geometry term indicating whether points x and x' can see each other, \rho_{bd} the bidirectional reflection function, which is a function of wavelength \lambda and the direction \vec{\omega} from x' to x, \theta the angle between the normal at x and \vec{\omega}, \theta' the angle between the normal at x' and \vec{\omega}, and dA' the differential area at x' (see figure 1).


Figure 1. Area sampling.

An exact analytic solution for this integral is not feasible. With ray tracing the integral can be approximated with a Monte Carlo method (Shirley and Wang, 1991) using a probability density on the source.
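This Monte Carlo approximation can be sketched as follows. The code is an illustrative estimator for a rectangular source sampled with a uniform density, assuming a purely diffuse (constant) reflection function and a two-sided emitter; it is not the authors' implementation, and the `visible` predicate merely stands in for the geometry term g(x, x'):

```python
import math
import random

# Minimal vector helpers.
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def mc_direct_light(x, n_x, corner, edge_u, edge_v, L_s, rho, visible,
                    n_samples=64, rng=random):
    """Monte Carlo estimate of the radiance at x due to a rectangular area
    source spanned by edge_u and edge_v, sampled uniformly over its area."""
    sn = cross(edge_u, edge_v)                # unnormalized source normal
    area = math.sqrt(dot(sn, sn))
    sn = [c / area for c in sn]
    total = 0.0
    for _ in range(n_samples):
        u, v = rng.random(), rng.random()
        xp = [corner[i] + u * edge_u[i] + v * edge_v[i] for i in range(3)]
        d = [xp[i] - x[i] for i in range(3)]  # vector from x to x'
        r2 = dot(d, d)
        w = [c / math.sqrt(r2) for c in d]
        cos_t = max(0.0, dot(n_x, w))         # angle at the receiver
        cos_tp = abs(dot(sn, w))              # angle at the source (two-sided)
        g = 1.0 if visible(x, xp) else 0.0    # geometry / visibility term
        total += rho * L_s * g * cos_t * cos_tp / r2
    return total * area / n_samples           # estimator: (A/N) * sum of integrand
```

With a uniform density p(x') = 1/A on the source, each sample of the integrand is weighted by A/N, which is exactly the factor on the last line.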

Wallace et al. (1989) already observed that the most efficient form factor calculation can be obtained by adaptively subdividing light sources until the amount of energy received from each delta area falls below a specified criterion. Similar approaches can be found in (Hanrahan et al., 1991) and (Tampieri and Lischinski, 1991).

From a sampling point of view, adaptive subdivision strongly reduces the variance of the sampling mean if the sampling domain can be subdivided into several subdomains and the variance for some of the subdomains is lower than the original variance for the whole domain. This is known as stratification (Lee et al., 1985). It can be illustrated by the following example from (Lee et al., 1985).

Figure 2. Stratification and adaptive sampling.

If the rectangle is treated as one domain (fig. 2a), then with random sampling within this domain it may take a while before the variance is below a given threshold. If the domain is subdivided (fig. 2b), the variance in the uniform areas will be zero, which will be easily detected. For the non-uniform areas now a higher variance can be accepted, still giving the same overall variance. This process can be repeated if the sampling is done 'adaptively'. The non-uniform area can be subdivided again, the old samples reclassified or thrown away, and new samples added (fig. 2c). Adaptive sampling done in this way can optimally take advantage of the variance reducing qualities of stratification. Of course the right distribution of samples and the correct normalization have to be taken into account to avoid bias (Arvo and Kirk, 1990; Kirk and Arvo, 1991).
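The variance reduction that stratification buys can be demonstrated numerically. In the sketch below (hypothetical names, not from the paper), a half-occluded unit square plays the role of the domain of figure 2: because the shadow edge coincides with stratum boundaries of a 4x4 grid, the stratified estimate has zero variance, whereas purely random sampling does not:

```python
import random

def mean_of(f, samples):
    # Sample mean of f over a list of (u, v) points.
    return sum(f(u, v) for u, v in samples) / len(samples)

def uniform_samples(n, rng):
    # n independent random samples over the unit square.
    return [(rng.random(), rng.random()) for _ in range(n)]

def jittered_samples(nx, ny, rng):
    # One jittered sample per cell of an nx-by-ny grid (stratification).
    return [((i + rng.random()) / nx, (j + rng.random()) / ny)
            for i in range(nx) for j in range(ny)]

def variance_of_mean(f, make_samples, trials, rng):
    # Empirical variance of the sampling mean over repeated trials.
    means = [mean_of(f, make_samples(rng)) for _ in range(trials)]
    m = sum(means) / trials
    return sum((x - m) ** 2 for x in means) / trials

# Left half of the square is lit (1), right half is in shadow (0).
half_occluded = lambda u, v: 1.0 if u < 0.5 else 0.0
```

For subdomains that are entirely uniform the per-stratum variance is zero, which is exactly why the sampling effort can then be redirected to the non-uniform strata.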

Applying these notions to the sampling of light sources, it then seems better to subdivide the light source into separate subdomains. Some of these domains may be completely unoccluded as seen from the sample point, some may be completely occluded by obstructing objects, and some may be partly occluded (as in fig. 2b). For the first two categories the variance will be zero and the sampling effort can be directed to the last category. This can be done dynamically by directing the sampling to the subdomain with the highest variance. If the number of samples in a subdomain is above a certain level and there still is a high variance, then the subdomain can be split again (as in 2c), and so on, until the overall variance satisfies the image quality criteria.

The sampling can also be directed by a priori information: if it is known that a light source is partly or completely occluded, then we can adjust our subdivision level and sampling density. This visibility information, however, is not cheap to acquire. It will require some hidden-surface testing and in the worst case it will be as expensive as ray tracing itself. However, because our algorithm (Kok and Jansen, 1991) already stores the most important contributions for each patch, we already have some information on the most


important light sources for each patch. Comparing the preprocessed radiosity values with the unoccluded (without taking into account obstructions) radiosity values will give an indication of whether light sources are partly occluded or not. If they are completely occluded or completely visible, then explicit sampling is not needed. If they are partly occluded, then the light source has to be sampled. Starting with 4 to 16 samples (depending on the size and importance of the light source), the sampling is adaptively refined until the variance is below a specified level.
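The comparison described above can be sketched as a simple classifier. The threshold `eps` and the three-way outcome are illustrative assumptions, not values from the paper:

```python
def classify_source(unoccluded, preprocessed, eps=0.05):
    """Classify a light source's visibility from a patch by comparing its
    unoccluded radiosity contribution with the (occlusion-aware) value found
    during the radiosity preprocessing.

    Returns 'visible', 'occluded', or 'penumbra'. eps is a hypothetical
    relative tolerance."""
    if unoccluded <= 0.0:
        return 'occluded'
    ratio = preprocessed / unoccluded
    if ratio >= 1.0 - eps:
        return 'visible'     # no explicit sampling needed
    if ratio <= eps:
        return 'occluded'    # no explicit sampling needed
    return 'penumbra'        # sample this source adaptively
```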

Of course, the preprocessed radiosity value is only an approximate value. Small shadows or small light bundles could easily have been missed during the preprocessing. We should therefore cast at least a few rays to each important light source to test for unexpected occlusion or visibility. This is particularly important for point light sources because they will generate sharp shadows.

3. Generation of adaptive sampling pattern

For the adaptive sampling of area sources, the light sources are adaptively binary subdivided, alternately in the u and v directions. Each leaf node of the resulting binary subdivision tree represents a small area of the light source to which one shadow ray is cast (see figure 3). The result of the shadow ray (shadow or not) is stored at the leaf node. Additionally, at each intermediate node, the following information is stored:
- the level of subdivision,
- a mean value, representing a weighted average of all samples in its subtrees,
- a variance value for all samples in its subtrees,
- an indication (shadow flag) of what kind of shadow is generated by the subtrees (shadow, no shadow or penumbra),
- the number of samples in the subtrees, and
- the u,v parameters of the sample (for leaf nodes).
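The per-node bookkeeping listed above might be represented as follows; this is a hypothetical sketch of the data structure, with names of our own choosing:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SampleNode:
    """One node of the binary subdivision tree over a source's (u,v) domain."""
    level: int                                # subdivision depth
    mean: float = 0.0                         # weighted average of samples below
    variance: float = 0.0                     # variance of those samples
    shadow_flag: str = 'unknown'              # 'shadow', 'light' or 'penumbra'
    n_samples: int = 0                        # number of samples in the subtrees
    uv: Optional[Tuple[float, float]] = None  # sample position (leaf nodes only)
    left: Optional['SampleNode'] = None
    right: Optional['SampleNode'] = None

    def is_leaf(self):
        return self.left is None and self.right is None
```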

Figure 3. Subdivision (a) and corresponding bintree (b).

When the influence of a light source S on an (intersection) point P has to be calculated, first an initial number of shadow rays is cast. This number can be chosen on the basis of the information derived during a radiosity preprocessing (Kok and Jansen, 1991), or can be made dependent on some parameter, for instance the solid angle with which S is seen from P. The initial (jittered) shadow rays are evenly distributed over source S. The results of the sampling are stored in the leaf nodes of the sampling tree and are also passed to the parents of the leaves, so that each node in the tree will have the mean, the variance, the type of shadow (shadow, light, penumbra) it represents, and the number of rays (leaves) in its subtrees. Non-uniform subtrees (that are neither totally in shadow nor totally visible and thus will show a penumbra) are subdivided further, and after re-classifying the earlier samples, additional shadow rays are cast for empty leaf nodes. Refinement is continued until the variance of the mean for the whole tree is below a given threshold.
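Passing sample results up to the parents can be sketched as a statistics merge. Nodes are plain dicts here, and the pooled-variance formula is the standard combination of two sample populations, assumed rather than taken from the paper:

```python
def combine(left, right):
    """Combine the sample statistics of two subtrees into their parent node.

    Each node is a dict with 'n' (sample count), 'mean', 'var' and 'flag'
    ('light', 'shadow' or 'penumbra'); a sketch, not the authors' code."""
    n = left['n'] + right['n']
    mean = (left['mean'] * left['n'] + right['mean'] * right['n']) / n
    # Pooled variance about the combined mean.
    var = (left['n'] * (left['var'] + (left['mean'] - mean) ** 2)
           + right['n'] * (right['var'] + (right['mean'] - mean) ** 2)) / n
    if left['flag'] == right['flag'] and left['flag'] != 'penumbra':
        flag = left['flag']       # uniform subtree: all shadow or all light
    else:
        flag = 'penumbra'         # mixed results below this node
    return {'n': n, 'mean': mean, 'var': var, 'flag': flag}
```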

To find the area to be subdivided, we use a recursive algorithm that descends the subdivision tree. At each node it is decided which subtree is descended first, because it is important to refine areas with shadow transitions first. The following cases can be distinguished:
- only one of the subtrees of the node represents a penumbra; this subtree is chosen first;
- both subtrees represent a penumbra; the subtree that contains the lowest number of samples is chosen first;
- none of the subtrees represents a penumbra; in this case both root nodes of the subtrees are compared with their neighbours at the same subdivision level, and either the one that differs or the one that contains the smaller number of samples is taken.
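The three descent cases can be sketched directly. `differs_from_neighbour` is a hypothetical predicate standing in for the comparison with neighbours at the same subdivision level:

```python
def choose_first(left, right, differs_from_neighbour):
    """Decide which subtree to refine first, following the three cases above.

    Subtrees are dicts with 'flag' ('light', 'shadow' or 'penumbra') and 'n'
    (sample count); a sketch with assumed names, not the authors' code."""
    lp = left['flag'] == 'penumbra'
    rp = right['flag'] == 'penumbra'
    if lp != rp:
        return left if lp else right          # case 1: exactly one penumbra
    if lp and rp:
        # case 2: both penumbra -> fewest samples first
        return left if left['n'] <= right['n'] else right
    # case 3: neither is a penumbra -> prefer the one that differs from its
    # neighbour, otherwise the one with the smaller number of samples.
    if differs_from_neighbour(left):
        return left
    if differs_from_neighbour(right):
        return right
    return left if left['n'] <= right['n'] else right
```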


To further reduce the chance that shadow details are missed, a maximum difference in subdivision level is enforced between neighbouring subtrees. In this way areas around shadow boundaries are sampled more densely and shadow gradients are detected earlier.

An example of the sampling process is given in figure 4. After an initial sampling (4a), more samples are taken along the shadow boundary (4b-e). Now the level difference between the lower left area and the samples to the right is too large; therefore, the lower left part is subdivided further (4f). Subdivision is continued along the shadow boundary (4g-h). In (4i) another subdivision is done because of the level difference.

Figure 4. Area sampling.

If the variance of the samples is high, then part of the source is obstructed by some objects, or the solid angle with which the area source is seen is very large (so that the samples each represent a very different energy). In these cases more samples are needed to get a good approximation of the amount of light that is received. Sometimes, however, the variance will decrease very slowly because of a major intensity gradient (for example a sharp shadow), while the average of the samples does not change very much. Therefore, a stop criterion based on the change in the average sample value after a few iteration steps may be used in addition: if this average does not change much, then the shadow sampling is also stopped. This stop criterion is only applied if enough shadow samples have been cast already.
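The combined stop criterion might look like this; all threshold values are illustrative assumptions, not the paper's settings:

```python
def should_stop(variance, n_samples, mean_history,
                var_threshold=0.01, min_samples=16, mean_eps=0.005):
    """Combined stop criterion for the adaptive shadow sampling (a sketch).

    Stop when the variance of the mean is low enough, or when enough samples
    have already been cast and the running mean has stabilized between the
    last two iteration steps (mean_history holds the mean after each step)."""
    if variance < var_threshold:
        return True
    if n_samples >= min_samples and len(mean_history) >= 2:
        if abs(mean_history[-1] - mean_history[-2]) < mean_eps:
            return True
    return False
```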

4. Reuse of sampling pattern

If a light source S is sampled from an intersection point P of a viewing ray with an object, then the intersection point P' of the next (neighboring) viewing ray with that same object will probably have a similar sampling pattern for that light source. If there is a shadow boundary for P, then it will have moved only slightly for P'. The sampling pattern for P can thus be used as an estimate for the sampling pattern for P'. However, the sampling pattern should be corrected for the transition of the shadow boundaries. After the sampling pattern is applied to P', some areas have to be refined further and some areas can be reduced in subdivision. This shrinking of the subdivision tree is done as follows (see figure 5). If the shadow result for


subtree s is all light or all shadow, and the shadow results for its children s1 and s2 are the same, then the subtrees of s1 and s2 add no new shadow information and can be eliminated. Now s1 and s2 become leaves. The shrinking should not be continued to a level below the initial sampling rate, to avoid missing shadows. The sampling pattern that remains after shrinking can be used as the sampling pattern for the following point, and so on.
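The shrinking step can be sketched as follows, with nodes as nested dicts and `min_level` representing the initial sampling level; the encoding is assumed, not the authors' code:

```python
def mk(level, flag, left=None, right=None):
    # Helper to build a node of the (hypothetical) subdivision tree.
    return {'level': level, 'flag': flag, 'left': left, 'right': right}

def shrink(node, min_level):
    """Prune subtrees that add no shadow information, before reusing the
    sampling pattern at a neighbouring point.

    If a node s and both its children agree on 'light' or 'shadow', the
    children's subtrees are dropped and the children become leaves, but the
    tree is never shrunk below the initial sampling level min_level."""
    if node['left'] is None:
        return
    shrink(node['left'], min_level)
    shrink(node['right'], min_level)
    uniform = (node['flag'] in ('light', 'shadow')
               and node['left']['flag'] == node['flag']
               and node['right']['flag'] == node['flag'])
    if uniform and node['left']['level'] >= min_level:
        node['left']['left'] = node['left']['right'] = None    # s1 becomes a leaf
        node['right']['left'] = node['right']['right'] = None  # s2 becomes a leaf
```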

Figure 5. Shrinking: original pattern (a) and pattern after shrinking (b).

5. Experiments

The adaptive sampling and pattern coherence methods were implemented in a ray tracing program with a progressive radiosity preprocessing (Cohen et al., 1988). During the preprocessing, in addition to a general radiosity value, the n most important contributions of neighboring patches and light sources are stored in a list. After the preprocessing, a further selection is made to select the m most important light sources for each patch (m may be different for each patch). This selection is based on the criteria given in (Kok and Jansen, 1991). The contributions of source patches that are not selected are added to the general radiosity value.

The rendering is comparable with a standard ray tracing algorithm, except that the general radiosity value is now used as an ambient term for the 'indirect' or 'minor' light, and shadow rays are cast to the m selected light sources to sample the 'direct' or 'major' light. For point light sources only one shadow ray is needed; for area sources more samples are taken in an adaptive way, as described above.

For the experiments, the following test scene was created. A sketch of this scene is given in figure 6a, and a rendered image in figure 6b. The scene consists of a room with a table and a cabinet. The scene is lit by four light sources, two small (almost point) sources A and B at the right wall, and two large area sources C and D on the left wall and on the ceiling.

Figure 6. Test scene.*

* See page C-479 for colour pictures of Figure 6.

A radiosity preprocessing was done with four contributions stored separately at each sample point on the patches (n = 4). The number of sample points on each of the patches is limited, so a direct radiosity display


of the results of the preprocessing will give almost no shadows. The most difficult shadow is the one on the floor caused by source D and the left back table leg. Therefore the tests were concentrated on this area of the scene (see fig. 7). For the floor, only source D is selected and the contributions of the other sources and patches are added to the general radiosity values (that are interpolated during rendering). These sources will therefore give no shadows in the pictures.

a. (1/64)  b. (64/1)
c. norm picture (4/64)  d. adaptive sampling (4/ad)

Figure 7. Picture quality as a result of the number of viewing and shadow rays.*

Pictures were generated with different numbers of viewing rays per pixel and shadow rays per viewing ray. First, a picture was generated with one viewing ray per pixel and 64 shadow rays for each viewing ray (see figure 7a). To determine where the samples are cast, the source is subdivided into 64 elements, and a shadow ray is cast to a random point in each of the elements. Second, a picture (figure 7b) was made with 64 viewing rays per pixel but only one shadow ray for each viewing ray, as done by Kajiya (1986) and Shirley and Wang (1991). The advantage of this approach is that the picture is antialiased at the edges at the same time, but the total number of rays is almost twice as high (and the resulting shadow is poorer than the shadow of the previous picture, because the shadow rays of the viewing rays within one pixel are not stratified; with stratification, shadows of the same quality as in figure 7a may be expected). For this area of the test scene, four viewing rays per pixel are enough to give good antialiasing. Figure 7c was generated with 4 viewing rays per pixel and 64 shadow rays for each viewing ray. The quality of this picture is good and it was therefore chosen as the norm picture for comparing the other pictures.

* See page C-479 for colour pictures of Figure 7.


Figure 7d was made with the adaptive technique described in the previous sections. The average number of shadow rays for each viewing ray is determined by the maximum allowed variance in the sampling mean. Setting this value lower means more shadow rays per pixel (see table 1). To see how much the figures differ, a comparison was made with the norm picture (figure 7c) on the basis of the following metric:

d(p, q) = \frac{1}{n} \sum_{i=1}^{n} \left| c_{p,i} - c_{q,i} \right|

where c_{p,i} and c_{q,i} are the pixel rgb values of the pictures p and q, and n is the number of pixels.
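Such an average pixel difference can be computed as below; the exact norm over the rgb channels is an assumption (a per-channel mean absolute difference), since it is not fully recoverable from the text:

```python
def average_difference(p, q):
    """Average per-pixel difference between two images p and q, each given as
    a list of (r, g, b) tuples of equal length. The per-channel mean absolute
    difference used here is an assumed reading of the metric."""
    assert len(p) == len(q) and len(p) > 0
    total = 0.0
    for cp, cq in zip(p, q):
        total += sum(abs(a - b) for a, b in zip(cp, cq)) / 3.0
    return total / len(p)
```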

Table 1 shows the average difference and the number of shadow rays for different allowed variance thresholds. As expected, increasing the variance threshold will reduce the number of shadow rays, but will also introduce more error. However, the error remains small.

Table 1. Statistics for figures with image size 256x256 (vpp = viewing rays per pixel, variance = maximum allowed variance during shadow casting, spv = shadow rays per viewing ray, av. diff. = average difference in picture pixel values compared to figure 7c).

Figure 8a gives the average difference as a function of the number of shadow rays per viewing ray.

a. without pattern coherence b. with pattern coherence

Figure 8. Number of shadow rays and average difference as a function of the sampling variance.

The same tests were also done with shadow pattern coherence applied in scanline order. The effect is that, for the same variance, the number of shadow rays needed is a little larger, but the error is much smaller and the noise is reduced considerably (fig. 8b). Thus, to obtain the same accuracy, a smaller number of


shadow rays is needed. Figure 8b also shows that the error is less dependent on the number of shadow rays and the allowed variance. This is because a minimal number of rays is necessary to resample and adjust the pattern to get a proper idea of the shadow boundary on the source; additional sampling does not add any more to the accuracy. Sampling pattern coherence thus allows us to use a higher sampling variance (and fewer shadow rays) to obtain the same image quality.

It is interesting to see how the shadow rays are distributed. Figure 9 gives the sampling pattern of figure 7d without (9a) and with (9b) sampling pattern coherence. Each dot represents a viewing ray. The color of the dot indicates the number of shadow rays used for that viewing ray, from dark gray (8 shadow rays) to white (32 or more shadow rays). If no dot is shown, no shadow rays were needed, because during source selection the source was not chosen as important for this patch. As expected, many shadow rays are cast in areas where soft shadowing occurs: some shadow rays are blocked and others are not, so the samples show a variance larger than the allowed threshold until a large number of rays has been cast. Areas close to the light source, where the solid angle with which the source is seen is larger, also require more shadow rays. Areas completely in shadow need only a few shadow rays.

a. without pattern coherence b. with pattern coherence

Figure 9. Shadow sampling.*

Sampling pattern coherence can also be applied within an adaptive image refinement strategy, as in adaptive stochastic ray tracing (Painter and Sloan, 1989). After an initial low-resolution sampling, the image is locally refined by casting additional viewing rays. By storing the sampling pattern at each viewing ray, the sampling patterns of neighboring rays can be compared, and new shadow rays have to be cast only for areas where the sampling patterns differ. Sampling pattern coherence can thus be viewed as a generalization of the use of shadow coherence (Kok et al., 1991).

6. Discussion and conclusions

We have presented methods to reduce the large number of shadow rays that are needed for sampling large area light sources. The area light sources are sampled in an adaptive way using statistics derived during the sampling process. With this method, most of the sampling effort is directed to areas where it is needed, for example penumbra regions or areas close to the source.

Using sampling pattern coherence, the same sampling quality can be obtained with a smaller number of shadow rays, because we have a good estimate of where shadow boundaries might occur. Additionally, if a small shadow is found, it will also be noticed in the next sampling, avoiding spatial aliasing.

We are currently investigating whether a further reduction of shadow rays can be obtained by reducing the resampling of the sampling pattern and by a better exploitation of stratification through sharing information between neighboring samples.

* See page C-479 for colour pictures of Figure 9.


Acknowledgements We would like to thank Paul Heckbert for his comments on an earlier version of this paper.

References

Arvo, J., Kirk, D. (1990), Particle Transport and Image Synthesis, Computer Graphics 24(4): 63-66, Siggraph'90.

Campbell, A.T., Fussell, D.S. (1990), Adaptive Mesh Generation for Global Diffuse Illumination, Computer Graphics 24(4): 155-164, Siggraph'90.

Chen, S.E., Rushmeier, H.E., Miller, G., Turner, D. (1991), A Progressive Multi-Pass Method for Global Illumination, Computer Graphics 25(4): 165-174, Siggraph'91.

Cohen, M.F., Chen, S.E., Wallace, J.R., Greenberg, D.P. (1988), A Progressive Refinement Approach to Fast Radiosity Image Generation, Computer Graphics 22(4): 75-84, Siggraph'88.

Glassner, A.S. (1989), Introduction to Ray Tracing, Academic Press.

Goral, C.M., Torrance, K.E., Greenberg, D.P., Battaile, B. (1984), Modelling the Interaction of Light between Diffuse Surfaces, Computer Graphics 18(3): 212-222, Siggraph'84.

Hanrahan, P., Salzman, D., Aupperle, L. (1991), A Rapid Hierarchical Radiosity Algorithm, Computer Graphics 25(4): 197-206, Siggraph'91.

Kajiya, J.T. (1986), The Rendering Equation, Computer Graphics 20(4): 143-150, Siggraph'86.

Kirk, D., Arvo, J. (1991), Unbiased Sampling Techniques for Image Synthesis, Computer Graphics 25(4): 153-156, Siggraph'91.

Kok, A.J.F., Jansen, F.W. (1991), Source Selection for the Direct Lighting Computation in Global Illumination, Proceedings of the 2nd Eurographics Workshop on Rendering, Barcelona. To be published by Springer Verlag.

Kok, A.J.F., Jansen, F.W., Woodward, C. (1991), Efficient Complete Radiosity Ray Tracing Using a Shadow Coherence Method, Report of the Faculty of Technical Mathematics and Informatics, nr. 91-63. Submitted for publication.

Lee, M.E., Redner, A., Uselton, S.P. (1985), Statistically Optimized Sampling for Distributed Ray Tracing, Computer Graphics 19(3): 61-67, Siggraph'85.

Nishita, T., Nakamae, E. (1985), Continuous Tone Representation of Three-Dimensional Objects Taking Account of Shadows and Interreflection, Computer Graphics 19(3): 23-30, Siggraph'85.

Painter, J., Sloan, K. (1989), Antialiased Ray Tracing by Adaptive Progressive Refinement, Computer Graphics 23(3): 281-288, Siggraph'89.

Rushmeier, H. (1988), Realistic Image Synthesis for Scenes with Radiatively Participating Media, PhD thesis, Cornell University.

Shirley, P. (1990), A Ray Tracing Method for Illumination Calculation in Diffuse Specular Scenes, Proceedings Computer Graphics Interface '90, pp. 205-212.

Shirley, P., Wang, C. (1991), Direct Lighting Calculation by Monte Carlo Integration, Proceedings of the 2nd Eurographics Workshop on Rendering, Barcelona. To be published by Springer Verlag.

Tampieri, F., Lischinski, D. (1991), The Constant Radiosity Syndrome, Proceedings of the 2nd Eurographics Workshop on Rendering, Barcelona. To be published by Springer Verlag.

Wallace, J.R., Elmquist, K.A., Haines, E.A. (1989), A Ray Tracing Algorithm for Progressive Radiosity, Computer Graphics 23(3): 315-324, Siggraph'89.

Ward, G.J., Rubinstein, F.M., Clear, R.D. (1988), A Ray Tracing Solution for Diffuse Interreflection, Computer Graphics 22(4): 85-92, Siggraph'88.

Ward, G.J. (1991), Adaptive Shadow Testing for Ray Tracing, Proceedings of the 2nd Eurographics Workshop on Rendering, Barcelona. To be published by Springer Verlag.

Whitted, T. (1980), An Improved Illumination Model for Shaded Display, Communications of the ACM 23(6): 343-349.