
A Fast Method for Applying Rigid Transformations to Volume Data

J. Fischer, A. del Río
WSI/GRIS, University of Tübingen
Sand 14, D-72076 Tübingen, Germany
{fischer, anxo}@gris.uni-tuebingen.de

ABSTRACT

Volume rendering is a widespread method for displaying graphical models in fields such as medical visualization or engineering. The required image information is stored in a volume dataset, which is a three-dimensional array of voxel intensities. Sometimes it is necessary to transform a given volume dataset by geometric transformations like rotation, scaling or translation. In this process, a new volume dataset containing the transformed volume is generated. Straightforward algorithms with an emphasis on minimizing the resampling error are the standard approach to computing such transformations. In this paper we present a new algorithm which is significantly faster than these. Since it is based on a simple interpolation scheme, it is useful for generating fast previews. Depending on volume size and voxel bit depth, it can even transform volumes in real time.

Keywords: Volume Data, Rigid Transformations, Interpolation, Bresenham algorithm.

1. INTRODUCTION

Unlike many approaches for real-time display of 3D computer graphics, direct volume rendering is not based on drawing polygonal models. Instead, the graphical data to be rendered is structured as a three-dimensional array of image elements, the so-called voxels. Each voxel contains a value describing the material properties (e.g. density) at its location in the graphical model. Such volume datasets are usually acquired and preprocessed before rendering and remain static afterwards. During the display of the dataset, only the viewer position and orientation, as well as parameters for image generation, are altered. Direct volume rendering has been described by Levoy [Lev88]. A more recent overview is available in [SML98].

In some applications, though, it is necessary to apply changes to the volume data itself outside of the rendering process. In many cases this means applying geometric transformations like rotation, scaling or translation to the graphical model. Sometimes several datasets are combined to form a new one using a process called constructive volume geometry. A description of constructive volume geometry is given by Leu and Chen in [LC99].

In this work we investigate several methods for generating a new volume dataset from an existing one by application of a geometric transformation. Along with describing standard approaches for this task, which try to achieve small resampling errors, we also suggest a new algorithm designed to shorten computing times significantly. Since our algorithm is only capable of processing rigid transformations (i.e. combinations of rotations, translations and coordinate inversions, but no scaling), we will limit our considerations to these.

In order to apply a geometric transformation to a volume dataset, two major steps have to be performed. First, the transformed position has to be determined for every voxel. Then the new voxel intensities are computed for all grid positions in the resulting volume. The latter step is done by using an interpolation scheme to approximate the voxel values.

Different methods for performing and optimizing the two steps of volume transformation exist. They are discussed in detail in Section 2. The new algorithm, which combines both parts in a novel way, is described in Section 3.

In addition to the software-based methods that this paper focuses on, some research has also been done into special hardware for volume transformation. Examples of this related work include [TQ97] and [CK00].

2. STANDARD APPROACH

In this section the standard approach to transforming volume data is described. An implementation of this method can be found in the VTK software library [SML98]. We assume here that a rigid transformation A is to be applied to a volume dataset V. A is given as a 4-by-4 matrix containing a combination of rotations, translations and coordinate inversions.

$$A \in \mathbb{R}^{4 \times 4}, \qquad V = \{v_{ijk}\}, \quad i = 1, \dots, I, \; j = 1, \dots, J, \; k = 1, \dots, K \qquad (1)$$

As shown in Equation 1, the volume to be transformed, V, has K slices, each comprising J lines of I voxels. Without loss of generality we assume that the lower left corner of the volume dataset is located at the coordinate origin. The slices are to be parallel to the x-y-plane and to have increasing z-coordinates, starting at zero. We also require the volume to be isotropic and have a regular spacing of 1.0, as shown in Equation 2.

Given voxel index $(i, j, k)$ for voxel $v_{ijk}$:

$$p_{ijk} = \begin{pmatrix} x_{p_{ijk}} \\ y_{p_{ijk}} \\ z_{p_{ijk}} \\ w_{p_{ijk}} \end{pmatrix} = \begin{pmatrix} i - 1 \\ j - 1 \\ k - 1 \\ 1 \end{pmatrix} \qquad (2)$$

In Equation 2, $p_{ijk}$ denotes the position of voxel $v_{ijk}$ in world coordinates. Also note that we have already introduced a homogeneous coordinate $w_{p_{ijk}}$, which is required for multiplication by a 4-by-4 matrix.

Possibilities for removing the isotropy and spacing preconditions are discussed in Section 5. The expected configuration of the volume dataset is depicted in Figure 1.

Figure 1: Assumed isotropic and regular spacing of slices. (The figure shows a unit-spaced grid with axes x, y, z and its origin at (0,0,0).)

2.1. Transformation of Voxel Positions

In order to transform the volume, it is processed sequentially. The slices are dealt with from front to back. Within each slice, first the voxel lines and then single voxels are considered in an order of increasing coordinates.

Each voxel position (i, j, k) that is generated in this process is transformed according to the given transformation matrix. This can be done by simply performing a matrix-vector multiplication.

$$p'_{ijk} = A \cdot p_{ijk} = A \cdot \begin{pmatrix} i - 1 \\ j - 1 \\ k - 1 \\ 1 \end{pmatrix} \qquad (3)$$

Equation 3 shows the transformation of voxel position $p_{ijk}$ to its transformed position $p'_{ijk}$. Computing the matrix-vector product requires up to 16 multiplications and 12 additions, depending on the kind of transformations that are permitted. Even given the restriction to rigid transformations and constant homogeneous coordinates, at least 9 multiplications and 9 additions still have to be performed. Note that the matrix-vector product must be calculated for every single voxel position in the entire volume. Thus it contributes significantly to the overall computational complexity.
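For illustration, the per-voxel transform of Equation 3 might look as follows in C++. This is a minimal sketch; the Vec4/Mat4 types and names are our illustrative choices, not the paper's.

    #include <array>

    using Vec4 = std::array<double, 4>;
    using Mat4 = std::array<std::array<double, 4>, 4>;  // Row-major 4x4 matrix.

    // Full matrix-vector product: 16 multiplications, 12 additions.
    Vec4 transform(const Mat4& A, const Vec4& p) {
        Vec4 r{};  // Zero-initialized result.
        for (int row = 0; row < 4; ++row)
            for (int col = 0; col < 4; ++col)
                r[row] += A[row][col] * p[col];
        return r;
    }

    // Called once per voxel with p = (i-1, j-1, k-1, 1)^T as in Equation 3.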

However, there is an effective optimization for this step of the process. It can be observed that the column vectors of matrix A represent the directions of the transformed unit vectors. This leads to a faster replacement of the matrix-vector product.

$$A = \begin{pmatrix} a_1 & a_2 & a_3 & a_4 \end{pmatrix} \qquad (4)$$

In Equation 4, matrix A is defined to consist of the column vectors $a_1$ to $a_4$. In the following discussion we will denote the coordinate origin as point $p_0$ and the coordinate system's unit vectors as $e_1$ through $e_3$.

$$p_0 = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}, \quad e_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \quad e_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \quad e_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}$$

$$p'_0 = A \cdot p_0 = a_4, \qquad e'_i = A \cdot e_i = a_i, \quad i \in \{1, \dots, 3\} \qquad (5)$$

As shown in Equation 5, both the transformed origin and the transformed unit vectors correspond to columns of matrix A. Now every transformed voxel's position can be obtained through a linear combination of $e'_1$, $e'_2$, $e'_3$ and $p'_0$.

$$p'_{ijk} = p'_0 + (i-1) \cdot e'_1 + (j-1) \cdot e'_2 + (k-1) \cdot e'_3 \qquad (6)$$

Equation 6 is essentially a reshaping of the matrix-vector product described above. It can be evaluated incrementally while traversing the volume, as shown in the following code fragment, where the slice and row start positions are kept so that each loop level advances by a single vector addition:

    slicePos := p0'
    for k := 1 to K do
        rowPos := slicePos
        for j := 1 to J do
            curPos := rowPos
            for i := 1 to I do
                computeInterpolation(curPos)
                curPos := curPos + e1'
            done
            rowPos := rowPos + e2'
        done
        slicePos := slicePos + e3'
    done

This optimization reduces the computational cost to 3 scalar additions in the inner loop, as well as the same amount in each of the outer loops.

2.2. Reversal of Processing Direction

Until now we have assumed in our discussion that the voxels of the original volume are transformed into the target volume's coordinate system. Whereas this method is suitable for performing the first step of the process, it is not practical for the subsequent voxel value interpolation.

By transforming a voxel position according to matrix A, in most cases a non-integer position in the target volume is obtained. This makes it very difficult to distribute the original voxel's intensity value among its neighbours in the target coordinate system. Moreover, it cannot be ensured that every target voxel is assigned a value, especially when using nearest-neighbour interpolation.

Figure 2: Relation between transformed, original and target coordinate systems. (Corners and crossings correspond to voxel positions.)

An illustration of how a transformed original volume might lie within the target volume's coordinate grid is shown in Figure 2. Note that the depicted 2D grids are only a graphical representation in the plane of the actual 3D relationships. They do not directly correspond to any slices in the volume dataset.

In order to make an easy interpolation of voxel intensities possible, the processing direction is inverted. Instead of the original volume, the target volume is traversed sequentially. Then, for every target voxel, the backprojection into the original coordinate system is computed. Within the original volume dataset, standard interpolation methods can be easily applied.

Target volume $\{w_{ijk}\}$ with voxel positions $q_{ijk}$:

$$q'_{ijk} = A^{-1} \cdot q_{ijk}, \quad i = 1, \dots, I, \; j = 1, \dots, J, \; k = 1, \dots, K \qquad (7)$$

Equation 7 shows how each voxel position $q_{ijk}$ in the target volume can be backprojected into the original coordinate system. Due to the type of transformations permitted for matrix A, it is always invertible. The optimization of the matrix-vector product discussed in Section 2.1 can also be applied to the backprojection. Here, the columns of $A^{-1}$ are used for incrementally calculating the voxel positions in the original coordinate system.
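Because A is rigid, its inverse can be formed without general 4-by-4 inversion: the upper-left 3-by-3 rotation/reflection block R is orthogonal, so $R^{-1} = R^T$, and the inverse translation is $-R^T t$. A sketch, assuming the same row-major Mat4 layout as in the earlier fragment:

    #include <array>

    using Mat4 = std::array<std::array<double, 4>, 4>;

    // Invert a rigid 4x4 transform: A = [R t; 0 1]  =>  A^-1 = [R^T -R^T*t; 0 1].
    Mat4 invertRigid(const Mat4& A) {
        Mat4 inv{};  // Zero-initialized.
        // Transpose the 3x3 rotation/reflection block.
        for (int r = 0; r < 3; ++r)
            for (int c = 0; c < 3; ++c)
                inv[r][c] = A[c][r];
        // New translation: -R^T * t, where t is the last column of A.
        for (int r = 0; r < 3; ++r) {
            double s = 0.0;
            for (int c = 0; c < 3; ++c)
                s += A[c][r] * A[c][3];
            inv[r][3] = -s;
        }
        inv[3][3] = 1.0;  // Homogeneous row stays (0 0 0 1).
        return inv;
    }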

2.3. Common Interpolation Methods

Given the backprojected position $q'_{ijk}$ in the original volume, the new value for voxel $w_{ijk}$ now has to be computed. A number of common methods exist for interpolating voxel intensities. A detailed discussion of various interpolation techniques can be found in [Wol90]. In [LCN98] specialized variations for volume rendering are described.

The simplest and cheapest approach is nearest-neighbour interpolation. For all coordinates of the given three-dimensional position, the nearest integer number is determined. Then the voxel value at this integer location is assigned to the target voxel.

$$i' = \lfloor x_{q'_{ijk}} + 0.5 \rfloor, \quad j' = \lfloor y_{q'_{ijk}} + 0.5 \rfloor, \quad k' = \lfloor z_{q'_{ijk}} + 0.5 \rfloor; \qquad w_{ijk} := v_{i'j'k'} \qquad (8)$$

In addition to the calculations shown in Equation 8, in practice a boundary check has to be performed. If the resulting integer position is outside of the original volume, a default background intensity has to be returned instead.
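A sketch of nearest-neighbour sampling with this boundary check, under an assumed slice-major 8-bit volume layout; the Volume struct, DEFAULT_INT and the 0-based indexing are our illustrative choices:

    #include <cmath>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Volume {
        int I, J, K;                  // Dimensions (voxels per axis).
        std::vector<std::uint8_t> v;  // Slice-major storage: v[i + I*(j + J*k)].
    };

    const std::uint8_t DEFAULT_INT = 0;  // Assumed background intensity.

    std::uint8_t sampleNearest(const Volume& vol, double x, double y, double z) {
        // Round each coordinate to the nearest grid position (Equation 8).
        int i = static_cast<int>(std::floor(x + 0.5));
        int j = static_cast<int>(std::floor(y + 0.5));
        int k = static_cast<int>(std::floor(z + 0.5));
        // Outside the original volume: return the default background value.
        if (i < 0 || i >= vol.I || j < 0 || j >= vol.J || k < 0 || k >= vol.K)
            return DEFAULT_INT;
        return vol.v[i + vol.I * (j + static_cast<std::size_t>(vol.J) * k)];
    }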

Since the resampling errors caused by the nearest-neighbour algorithm are usually large, more sophisticated interpolation schemes are often used. A widespread method is the so-called trilinear interpolation.

Figure 3: Voxel cell for trilinear interpolation. (The eight cell corners are the neighbouring voxels $v_{000}$ to $v_{111}$, surrounding the backprojected position $q'_{ijk}$.)

$$x = x_{q'_{ijk}} - \lfloor x_{q'_{ijk}} \rfloor, \quad y = y_{q'_{ijk}} - \lfloor y_{q'_{ijk}} \rfloor, \quad z = z_{q'_{ijk}} - \lfloor z_{q'_{ijk}} \rfloor$$

$$\begin{aligned} w_{ijk} = {} & v_{000} \cdot (1-x) \cdot (1-y) \cdot (1-z) + v_{100} \cdot x \cdot (1-y) \cdot (1-z) \\ & + v_{010} \cdot (1-x) \cdot y \cdot (1-z) + v_{001} \cdot (1-x) \cdot (1-y) \cdot z \\ & + v_{101} \cdot x \cdot (1-y) \cdot z + v_{011} \cdot (1-x) \cdot y \cdot z \\ & + v_{110} \cdot x \cdot y \cdot (1-z) + v_{111} \cdot x \cdot y \cdot z \end{aligned} \qquad (9)$$

The voxel cell surrounding a backprojected voxel $q'_{ijk}$ is shown in Figure 3, with the neighbours denoted as $v_{000}$ to $v_{111}$. The interpolated value for $w_{ijk}$ is then computed as a weighted sum of the neighbours' intensities. The interpolation result depends on the position of the voxel backprojection within its surrounding voxel cell, denoted $(x, y, z)$ in Equation 9.

Trilinear interpolation requires access to several voxels, as well as at least 7 multiplications and 14 additions, even in its most optimized form. Thus it is significantly more expensive than nearest-neighbour interpolation.
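For comparison, a trilinear sampling sketch following Equation 9, reusing the illustrative Volume struct and DEFAULT_INT background from the nearest-neighbour sketch above:

    #include <cmath>
    #include <cstddef>
    #include <cstdint>

    // (Volume and DEFAULT_INT as in the nearest-neighbour sketch above.)
    double sampleTrilinear(const Volume& vol, double x, double y, double z) {
        int i0 = static_cast<int>(std::floor(x));
        int j0 = static_cast<int>(std::floor(y));
        int k0 = static_cast<int>(std::floor(z));
        // Fractional position inside the surrounding voxel cell.
        double fx = x - i0, fy = y - j0, fz = z - k0;
        auto at = [&](int i, int j, int k) -> double {
            if (i < 0 || i >= vol.I || j < 0 || j >= vol.J || k < 0 || k >= vol.K)
                return DEFAULT_INT;  // Background outside the volume.
            return vol.v[i + vol.I * (j + static_cast<std::size_t>(vol.J) * k)];
        };
        // Weighted sum over the eight cell corners (Equation 9).
        return at(i0,   j0,   k0  ) * (1-fx) * (1-fy) * (1-fz)
             + at(i0+1, j0,   k0  ) *  fx    * (1-fy) * (1-fz)
             + at(i0,   j0+1, k0  ) * (1-fx) *  fy    * (1-fz)
             + at(i0,   j0,   k0+1) * (1-fx) * (1-fy) *  fz
             + at(i0+1, j0,   k0+1) *  fx    * (1-fy) *  fz
             + at(i0,   j0+1, k0+1) * (1-fx) *  fy    *  fz
             + at(i0+1, j0+1, k0  ) *  fx    *  fy    * (1-fz)
             + at(i0+1, j0+1, k0+1) *  fx    *  fy    *  fz;
    }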

Moreover, a number of interpolation schemes exist which take an even greater number of surrounding voxels into account. These methods, like cubic or quadratic interpolation, achieve even smaller resampling errors while requiring still more computation time (see [LCN98] and [Wol90]).

3. THE NEW ALGORITHM

In this paper we propose a novel algorithm for performing rigid transformations of a volume dataset. Unlike the standard approach discussed in Section 2, our method combines backprojection and interpolation into a single step.

The basic idea is to traverse a scanline of a slice in the target volume while simultaneously traversing its backprojection in the original volume. In order to efficiently generate the backprojection of a line in the volume dataset, we employ the 3D Bresenham line generation algorithm [KS86].

Figure 4: Example of a scanline in the target volume and its backprojection into the original volume. In this figure, each square symbolises one voxel's intensity. Different filling patterns represent different intensities.

Using this algorithm, integer voxel positions in the original volume corresponding to voxels in the target volume are determined. Voxel intensities can then be directly copied. This is similar to applying a nearest-neighbour interpolation (see Section 2.3), but eliminates the need for explicit computation of the backprojection and coordinate flooring operations.

Figure 4 illustrates how a target volume scanline corresponds to its backprojected equivalent in the original coordinate system. In the figure, the combination of a rotation around the volume center and a coordinate inversion is assumed as the rigid transformation. This is the same as mirroring the volume at a slightly inclined symmetry plane. Here again it has to be noted that the grids shown in the diagram do not directly represent any slices of the volume data. Their purpose is to demonstrate the underlying 3D relationships in 2D.

Figure 5: The target scanline and its rasterized backprojection are processed simultaneously. (The figure shows eight numbered target voxels paired with eight iteratively generated Bresenham steps in the original volume.)

The central idea of our algorithm is depicted in Figure 5. The backprojected line is iteratively generated using the Bresenham algorithm. Meanwhile, the corresponding scanline in the target volume is traversed simultaneously. For every line step generated by the Bresenham rasterizer in the original volume, one voxel in the scanline is accessed. The voxel intensity at the integer position generated by the line rasterizer is directly copied to the currently considered target voxel.

By repeating the process for all scanlines in the target volume, the entire volume dataset is transformed according to matrix A. For each scanline, the backprojected positions of its endpoints have to be computed. These backprojected endpoints are then used for initializing the line rasterization algorithm.

3.1. Implementation of the Algorithm

The complete algorithm is shown as pseudocode in Figure 7. It requires an input volume dataset $\{v_{ijk}\}$ and a rigid transformation A. After performing the transformation, the result volume $\{w_{ijk}\}$ is returned. For the sake of simplicity, we suppose that the input and output volume datasets have the same dimensions. It is also assumed that in both volumes the lower left voxel of the front slice is at the coordinate origin (0,0,0). This corresponds to the assumptions made in Section 2.

In practice it might be reasonable to place the output volume's boundaries in a different way. The axis-aligned bounding box of a rotated volume normally has a different size because its corner voxels lie outside of the original boundary. When the volume is translated, it would be useful to also translate the position of the output volume in world coordinates. These changes could easily be incorporated into the algorithm by adding appropriate coordinate offsets during the computations.

The initializeBresenham() and nextBresenhamStep() procedures are assumed to contain the respective elements of the line rasterization algorithm. An explanation of the Bresenham method can be found in [Fv82] as well as in the original work [Bre65]. An example of a 3D implementation is available at [Pen92].
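As one possible stand-in for the initializeBresenham()/nextBresenhamStep() pair, the following sketch implements the classic integer 3D Bresenham stepper (in the spirit of [Pen92]). It assumes the endpoints have already been rounded to integers; an implementation closer to the paper would initialize the error terms from the fractional backprojected endpoints.

    #include <cstdlib>

    struct Bresenham3D {
        int x, y, z;     // Current integer position.
        int dx, dy, dz;  // Absolute coordinate distances.
        int sx, sy, sz;  // Step direction per axis (+1 or -1).
        int err1, err2;  // Error terms for the two minor axes.
        int dominant;    // 0 = x, 1 = y, 2 = z.

        void init(int x0, int y0, int z0, int x1, int y1, int z1) {
            x = x0; y = y0; z = z0;
            dx = std::abs(x1 - x0); dy = std::abs(y1 - y0); dz = std::abs(z1 - z0);
            sx = x1 >= x0 ? 1 : -1; sy = y1 >= y0 ? 1 : -1; sz = z1 >= z0 ? 1 : -1;
            if (dx >= dy && dx >= dz)      { dominant = 0; err1 = 2*dy - dx; err2 = 2*dz - dx; }
            else if (dy >= dx && dy >= dz) { dominant = 1; err1 = 2*dx - dy; err2 = 2*dz - dy; }
            else                           { dominant = 2; err1 = 2*dx - dz; err2 = 2*dy - dz; }
        }

        void step() {  // Advance one unit along the dominant axis.
            switch (dominant) {
            case 0:
                if (err1 > 0) { y += sy; err1 -= 2*dx; }
                if (err2 > 0) { z += sz; err2 -= 2*dx; }
                err1 += 2*dy; err2 += 2*dz; x += sx;
                break;
            case 1:
                if (err1 > 0) { x += sx; err1 -= 2*dy; }
                if (err2 > 0) { z += sz; err2 -= 2*dy; }
                err1 += 2*dx; err2 += 2*dz; y += sy;
                break;
            default:
                if (err1 > 0) { x += sx; err1 -= 2*dz; }
                if (err2 > 0) { y += sy; err2 -= 2*dz; }
                err1 += 2*dx; err2 += 2*dy; z += sz;
                break;
            }
        }
    };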

It is possible that a backprojected scanline lies partly or entirely outside of the original volume. Thus the validity of each generated voxel position is checked within the innermost loop. The necessary coordinate boundary tests account for a large portion of the method's computational complexity.

3.2. Line Error Correction

The new algorithm described in Section 3.1 and shown in Figure 7 works well for most transformations. However, when the volume is rotated by a steep angle, considerable distortion is visible in the result. The reason for this effect is the way the Bresenham algorithm generates discretized line steps.

Figure 6: Steps generated by the line algorithm can correspond to line segments that are longer than 1.0. (The figure compares six Bresenham steps along the dominant axis with the longer line segments they cover.)

The Bresenham algorithm selects as dominant axis the coordinate axis along which the line endpoints have their largest distance. The line generation method then iterates along this dominant axis. In each step a new integer voxel position with an incremented dominant-axis coordinate is generated.


    (* Define constant background voxel intensity *)
    const DEFAULT_INT := ...

    function transformVolume( {vijk} : Volume(I,J,K), A : Matrix4x4 )
    var
        {wijk}      : Volume(I,J,K)  (* Result volume *)
        i, j, k     : Integer        (* Coordinate loop counters *)
        start, end  : Point4d        (* Double datatype 4d-vectors *)
        i', j', k'  : Integer        (* Current backprojected point *)
    begin
        (* i, j and k iterate through the target volume *)
        for k := 1 to K do
            for j := 1 to J do
                (* Project start and end of scanline back into original volume *)
                start := A^-1 · (0,   j-1, k-1, 1)^T
                end   := A^-1 · (I-1, j-1, k-1, 1)^T

                (* Initialize line generation using computed line ends *)
                (i', j', k') := initializeBresenham(start, end)

                for i := 1 to I do
                    if (i', j', k') is within volume boundaries then
                        w[i,j,k] := v[i',j',k']
                    else
                        w[i,j,k] := DEFAULT_INT
                    endif
                    (i', j', k') := nextBresenhamStep()
                done
            done
        done

        return {wijk}
    end

Figure 7: Pseudocode formulation of the new algorithm.

As a consequence, each Bresenham step usually covers a line segment longer than 1.0. This is illustrated in Figure 6 using a two-dimensional example. In 3D, depending on the line slope, a single discretization step can advance the position by as much as $\sqrt{3}$ along the line.

The result of this discrepancy between line generation steps and actually processed line length is visible distortion in the target volume. In fact, it could be said that, due to the change of sampling direction when rotating a volume, there is not enough data to fill the corresponding areas in the result volume. We solve this problem by optionally repeating single voxels from the original volume. In order to decide whether this is necessary, we have introduced an additional error variable, called the "line error". The principle is demonstrated in the following pseudocode fragment, which is in the context of the code in Figure 7.

    (i', j', k') := initializeBresenham(start, end)

    δ := end - start
    δ_dom := max(δ_x, δ_y, δ_z)
    {δ_x1, δ_x2} := {δ_x, δ_y, δ_z} \ {δ_dom}
    lineSlope := sqrt( (δ_x1/δ_dom)^2 + (δ_x2/δ_dom)^2 + 1 )
    lineError := lineSlope - 1.0

    for i := 1 to I do
        w[i,j,k] := v[i',j',k']

        (* Repeat last voxel if lineError > 1 *)
        if lineError > 1.0 then
            i := i + 1
            w[i,j,k] := v[i',j',k']
            lineError := lineError - 1.0
        endif

        (i', j', k') := nextBresenhamStep()
        lineError := lineError + lineSlope
    done

In this source code, δ is the difference vector between the line endpoints. Its maximum coordinate is denoted as δ_dom, whereas the other two vector components are called δ_x1 and δ_x2. Using this information, the length of the line segment covered by each Bresenham step (lineSlope) is computed.

Within the inner loop, the line error variable is accumulated using the line slope. Whenever the line error is greater than one after a Bresenham step, the current original voxel is copied twice into the target volume instead of only once. Note that we have left the necessary volume boundary checks out of this code fragment for the sake of clarity.

After adding line error correction to the new volume transformation algorithm, it produces undistorted results for all kinds of rigid transformations.
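The error bookkeeping from the fragment above can be isolated in a small helper. A sketch with names of our choosing, assuming the line has a nonzero dominant extent:

    #include <algorithm>
    #include <cmath>

    struct LineError {
        double slope;  // Line length covered per Bresenham step (>= 1).
        double error;  // Accumulated excess length.

        void init(double dx, double dy, double dz) {
            // Absolute extents guard against negative direction components.
            double ax = std::fabs(dx), ay = std::fabs(dy), az = std::fabs(dz);
            double dom = std::max({ax, ay, az});  // Dominant axis extent (assumed > 0).
            double m1, m2;                        // The two minor extents.
            if (dom == ax)      { m1 = ay; m2 = az; }
            else if (dom == ay) { m1 = ax; m2 = az; }
            else                { m1 = ax; m2 = ay; }
            slope = std::sqrt((m1/dom)*(m1/dom) + (m2/dom)*(m2/dom) + 1.0);
            error = slope - 1.0;
        }

        // Returns true when the current source voxel should be written twice.
        bool repeatVoxel() {
            if (error > 1.0) { error -= 1.0; return true; }
            return false;
        }

        void advance() { error += slope; }  // Called after each Bresenham step.
    };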

4. EXPERIMENTAL RESULTS

We have implemented a software framework for comparing various volume transformation algorithms. In addition to the new algorithm, we have also developed our own implementations of several variations of the standard approach. Moreover, we included capabilities for comparing our own results with transformation results obtained using the popular volume software package VTK described in [SML98].

Using the benchmarking framework, both the algorithm runtimes and the quality of the produced results can be examined. In order to measure the quality, one volume transformation algorithm is selected as reference. The result of each of the other approaches is then compared on a voxel-to-voxel basis to the reference volume.
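The paper does not spell out the difference metric it uses; purely as an illustration, one plausible formulation is the mean absolute voxel difference relative to the 8-bit intensity range:

    #include <cstdint>
    #include <cstdlib>
    #include <vector>

    // Hypothetical metric: mean absolute difference as a percentage of the
    // 8-bit range. Assumes both volumes are non-empty with identical layout.
    double relativeDifferencePercent(const std::vector<std::uint8_t>& ref,
                                     const std::vector<std::uint8_t>& test) {
        double sum = 0.0;
        for (std::size_t n = 0; n < ref.size(); ++n)
            sum += std::abs(static_cast<int>(ref[n]) - static_cast<int>(test[n]));
        return 100.0 * sum / (255.0 * ref.size());
    }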

Runtimes [msecs]

    Algorithm      Minimum   Average   Maximum
    Nearest N.         984      1003      1046
    Trilinear         1141      1543      2203
    New method         156       178       188
    N.N. (vtk)         140       392       578
    Cubic (vtk)        125      2117      3079

Relative differences [%]

    Algorithm      Minimum   Average   Maximum
    Nearest N.         0.0      0.34      0.49
    Trilinear          0.0      0.17      0.24
    New method         0.0      0.34      0.49
    N.N. (vtk)         0.0      0.34      0.49

Table 1: Benchmark results with input volume CThead (256x256x113 voxels). Rotations around the x-axis with angles from 0° to 360° and a stepping of 10° (36 transformations).

Tables 1 and 2 contain the benchmarking results of two test series. In both experiments a number of transformations was generated automatically, each of which was then used as an input parameter for all algorithms.

We have recorded minimum, average and maximum runtimes and relative differences for all test runs. The input volumes used for the benchmarks were taken from the Stanford volume data archive [Lev02]. They were converted to 8-bit voxel depth because of limitations in our current implementation.

We compared the following algorithms in the experiments: Nearest N. is the standard approach with nearest-neighbour interpolation, but without the optimized matrix-vector product. Trilinear is the same with trilinear interpolation. New method is our new algorithm, still without a number of possible optimizations. N.N. (vtk) is the Visualization Toolkit's standard approach with nearest-neighbour interpolation, which includes many optimizations (see [SML98]). Cubic (vtk) is the Toolkit's algorithm with cubic interpolation, which provides a considerably bigger interpolation kernel. This algorithm was used as the reference for quality comparisons in all experiments.

Runtimes [msecs]

    Algorithm      Minimum   Average   Maximum
    Nearest N.       12906     15527     19671
    Trilinear         7563     19011     28015
    New method        1907      5117      9922
    N.N. (vtk)        2266      6054      9656
    Cubic (vtk)       9734     33277     55547

Relative differences [%]

    Algorithm      Minimum   Average   Maximum
    Nearest N.        0.41      1.17      1.75
    Trilinear         0.16      0.46      0.71
    New method        0.41      1.58      3.01
    N.N. (vtk)        0.41      1.17      1.75

Table 2: Benchmark results with input volume bunny-ctscan (512x512x360 voxels). Rotations around the y-axis with angles from 0° to 180° and a stepping of 60°, multiplied with an x-axis rotation of 15° (3 transformations).

Figure 8 shows example renderings of volume transformation results produced by different algorithms.

5. DISCUSSION AND FUTURE WORK

We have presented a novel approach for applying rigid transformations to volume data. Unlike existing methods, it is aimed at improving runtime performance. Our experiments indicate that the algorithm is considerably faster than the standard approach because it does not have to explicitly transform voxel positions or compute the nearest-neighbour interpolation. Instead, integer positions are generated by the modified Bresenham algorithm, which is relatively cheap to compute. This allows for direct copying of voxel intensities.


Figure 8: Volume transformation results. ((a) unrotated volume, (b) nearest neighbour, (c) new method, (d) cubic (vtk).)

Our method produces results comparable in quality to nearest-neighbour interpolation in most situations, and only slightly worse in some cases. We have so far limited our considerations to isotropic volume datasets and transformations that do not include scaling. We hope to overcome these limitations in the future by further modifying the line error correction.

Efficient optimization techniques exist that are not yet used in our current implementation of the new approach. At the moment, every Bresenham-generated voxel position is tested against the volume boundaries. These tests constitute a large share of the implementation's computational complexity. By determining the intersections of the backprojected scanline with the original volume before discretizing the line, it will be possible to eliminate these boundary checks.
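This proposed optimization amounts to clipping the backprojected scanline against the volume's axis-aligned bounding box before rasterizing it, for example with the standard slab method. A sketch with illustrative names:

    #include <algorithm>

    // Returns false if the segment start + t*dir, t in [0,1], misses the box;
    // otherwise [tMin, tMax] parameterizes the surviving in-volume segment.
    bool clipToBox(const double start[3], const double dir[3],
                   const double boxMin[3], const double boxMax[3],
                   double& tMin, double& tMax) {
        tMin = 0.0; tMax = 1.0;
        for (int axis = 0; axis < 3; ++axis) {
            if (dir[axis] == 0.0) {
                // Parallel to this slab: reject if the start lies outside it.
                if (start[axis] < boxMin[axis] || start[axis] > boxMax[axis])
                    return false;
            } else {
                double t0 = (boxMin[axis] - start[axis]) / dir[axis];
                double t1 = (boxMax[axis] - start[axis]) / dir[axis];
                if (t0 > t1) std::swap(t0, t1);
                tMin = std::max(tMin, t0);
                tMax = std::min(tMax, t1);
                if (tMin > tMax) return false;  // Empty intersection.
            }
        }
        return true;
    }

Only the Bresenham steps inside [tMin, tMax] would then need to be generated, so the per-voxel boundary test in the inner loop disappears.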

Another important issue when dealing with volume data is the way memory is accessed. Ideally, the data should be processed in a linear, sequential manner. This ensures the best utilisation of the caching mechanisms in the computer hardware. We already partially exploit cache coherence by traversing the output volume according to its structure in memory. But a sequential access mode is more difficult to achieve when reading from the original volume. Depending on the angle of the backprojected lines, consecutive Bresenham steps may read from different scanlines or even slices. This seriously hampers caching efficiency. Thus we intend to investigate ways of improving cache utilisation in our algorithm.

6. ACKNOWLEDGEMENTS

We would like to thank Urs Kanus and Dirk Staneker for interesting and helpful discussions on the topic of volume transformation, as well as for reviews of early drafts of this paper.

This work has been supported by the DFG projectVIRTUE.

7. REFERENCES

[Bre65] J. E. Bresenham. Algorithm for computer control of a digital plotter. IBM Systems Journal, 4(1):25–30, 1965.

[CK00] Baoquan Chen and Arie E. Kaufman. 3D volume rotation using shear transformations. Graphical Models, 62(4):308–322, 2000.

[Fv82] J. D. Foley and A. van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[KS86] A. Kaufman and E. Shimony. 3D scan-conversion algorithms for voxel-based graphics. In Proceedings of the Workshop on Interactive 3D Graphics, pages 45–75, October 1986.

[LC99] A. Leu and M. Chen. Modeling and rendering graphics scenes composed of multiple volumetric datasets. Computer Graphics Forum, 18(2):159–171, June 1999.

[LCN98] B. Lichtenbelt, R. Crane, and S. Naqvi. Introduction to Volume Rendering. Hewlett-Packard Professional Books, 1998.

[Lev88] M. Levoy. Display of surfaces from volume data. IEEE Computer Graphics and Applications, 8(3):29–37, May 1988.

[Lev02] M. Levoy. The Stanford volume data archive. http://www-graphics.stanford.edu/data/voldata/, 2002.

[Pen92] B. Pendleton. line3d – 3D Bresenham's (a 3D line drawing algorithm). ftp://ftp.isc.org/pub/usenet/comp.sources.unix/volume26/line3d, 1992.

[SML98] W. Schroeder, K. Martin, and B. Lorensen. The Visualization Toolkit. Prentice Hall PTR, second edition, 1998.

[TQ97] Tommaso Toffoli and Jason Quick. Three-dimensional rotations by three shears. Graphical Models and Image Processing, 59(2):89–95, 1997.

[Wol90] G. Wolberg. Digital Image Warping. IEEE Computer Society Press, 1990.