Out-of-Core Simplification with Guaranteed Error Tolerance

Pavel Borodin, Michael Guthe, Reinhard Klein

University of Bonn, Institute of Computer Science II, Römerstraße 164, 53117 Bonn, Germany

Email: {borodin,guthe,rk}@cs.uni-bonn.de

Abstract

In this paper we present a high quality end-to-end out-of-core mesh simplification algorithm that is capable of guaranteeing a given geometric error compared to the original model.

The method consists of three parts: memory insensitive cutting; hierarchical simplification; memory insensitive stitching of adjacent parts. Since the first and last parts of the algorithm work entirely on disk and the number of vertices during each simplification step is bounded by a constant value, the whole algorithm can process models that are far too large to fit into memory.

In contrast to most previous out-of-core approaches we do not use vertex clustering, since for a given error tolerance its reduction rates are low compared to vertex contraction techniques. Since we use a high quality simplification method during the whole reduction and we guarantee a maximum geometric error between the original and simplified model, the computation time is higher compared to recent approaches, but the gain in quality and/or reduction rate is significant.

1 Introduction

Modern 3D acquisition and modeling tools generate high-quality, detailed geometric models. To cope with the associated complexity, which increases much faster than hardware performance, a great number of mesh decimation methods have been developed in recent years. Whereas earlier simplification algorithms worked only with models that completely fit into main memory, the necessity of methods which can deal with arbitrarily large meshes has become obvious. These out-of-core algorithms do not load the whole model geometry into in-core memory, but temporarily store large parts of it on disk. Therefore the memory

Figure 1: The Lucy and David models simplified to 26 772 and 25 888 triangles respectively.

requirements of these methods are independent of the complexity of both the input and the output models.

The fact that the model cannot be loaded into memory prevents efficient comparison of the simplified and original objects, which in turn complicates control over the geometric error of the simplified mesh. As long as the resulting model does not fit into main memory either, error control is simply impossible in most cases.

In this paper we present a high quality end-to-end out-of-core mesh simplification algorithm (neither the input nor the output model fits into main memory) which is capable not only of measuring the Hausdorff distance between the original and simplified meshes, but also of simplifying a model up to a given

VMV 2003, Munich, Germany, November 19–21, 2003

error threshold. It guarantees that no operation is performed that would exceed this threshold, which allows a very high reduction of the model complexity at a given maximum geometric error.

Furthermore, applying generalized pair contractions instead of vertex contractions only allows for controlled modifications of the topology. This way small gaps are automatically sewn and parts which are close together are merged in a controlled way during the simplification process.

The amount of main memory required by our algorithm does not depend on the size of the input or output models and can easily be configured to consume a fixed amount of memory depending on the system it is running on.

Of course, these advantages lead to lower computation rates compared to other recent out-of-core simplification methods.

The paper is structured as follows. First we discuss the related work. In section 3 we describe our out-of-core simplification algorithm in detail. Results are presented in section 4. Finally we conclude and outline future work.

2 Related Work

Mesh Simplification. Since mesh simplification is one of the fundamental techniques for polygonal meshes, there is an extensive amount of literature on this topic. However, we focus only on methods allowing topology changes during simplification. The vertex clustering family of methods was introduced by Rossignac and Borrel [20] and has been refined in numerous more recent publications, see e.g. [17]. Algorithms of this family essentially proceed by applying a 3D grid to the object and contracting all vertices inside each cell. Although the degenerate faces are subsequently removed, it is difficult to influence the fidelity of the result due to the lack of control over the induced topological changes, and the reduction rate is quite low in flat parts of the model.

The vertex pair contraction operation, introduced simultaneously by Popović and Hoppe [18] and Garland and Heckbert [7], allows contracting any two vertices independent of whether they are topologically adjacent or just geometrically close. The vertex pair contraction offers more control over the topological modifications, but does not always connect close or even intersecting surfaces in the early stages of the simplification.

In [2] Borodin et al. generalized the vertex pair contraction operation by performing the contraction of a vertex with another vertex, edge or triangle and the contraction of two edges. These modifications improve the connecting potential of pair contraction simplification and allow connecting close and intersecting surface parts that are not topologically incident in early stages of simplification. Furthermore, small gaps in the model are closed during simplification as soon as they are smaller than the current approximation error.

Out-Of-Core Simplification. To simplify models of ever increasing size a number of out-of-core simplification algorithms have been developed. El-Sana and Chiang [4] sort all edges according to their length and use this ordering as the decimation sequence. In a more efficient algorithm [15] vertex clustering is used to reduce the number of vertices. But since the geometry data is stored in a voxel grid, the memory requirement of this algorithm depends on the output size of the model. For cases where neither the input nor the output model fits into main memory an out-of-core vertex clustering [16] was developed. The multiphase algorithm [8] first uses vertex clustering to reduce the complexity of the input model and then greedy simplification for high quality results.

Another general strategy for out-of-core simplification is to split the model into smaller blocks, simplify these blocks and stitch them together for further simplification. In [11] this approach is applied to terrain and in [5] and [19] to arbitrary meshes. This approach has the problem that triangles intersecting the octree cells used to partition the model cannot be simplified before the cells are combined in a higher level of the hierarchy. Therefore, the number of triangles in an octree cell may very well exceed the available main memory, so these are not real out-of-core simplification algorithms, although they allow simplification of large models. To overcome this problem a special method to simplify these border triangles has been developed by Cignoni et al. [3]. In this paper we show that generalized pair contractions combined with cutting of the model at octree cell boundaries provide a more elegant and general solution to this problem.

Recently Wu and Kobbelt developed a stream decimation algorithm [21] for out-of-core simplification which performs decimation by collapsing randomly chosen edges. But the geometric distance between the original and simplified models cannot be truly controlled, since the original model in the active working region does not fit into main memory. Here again the problem may arise that the currently processed triangles do not fit into main memory.

3 Algorithm

Since the generalized pair contractions close gaps more efficiently than vertex pair contractions, a simple and fast out-of-core simplification is possible by cutting the model into subparts and simplifying each subpart independently. Using the generalized pair contractions, gaps are automatically closed when the subparts of a node are simplified together. To simplify gigabyte models, the cutting and independent hierarchical simplification are applied recursively.

During each node simplification a maximum geometric error threshold for the node is determined as a constant fraction of the edge length of its bounding box. Therefore, the error threshold doubles with each level of the octree. This leads to an almost constant order of magnitude of triangles in the simplified node¹, as shown in table 1. The geometric approximation error of the simplified model is measured against the geometry two levels below the current node. This way the geometry of at most 64 nodes has to be loaded into memory. Nevertheless a good upper bound for the geometric deviation from the original model can be guaranteed.

Depth            7     9    11    13
Armadillo      737  1022  n.a.  n.a.
Happy Buddha  1336   488  1022  n.a.
David 2mm     2637  2029   826  n.a.
Lucy          1551  1008   430  1024

Table 1: Maximum triangle numbers of the (simplified) nodes at different levels of the octree hierarchy.

If the accumulated maximum error in the next level already exceeds the given global error threshold, the nodes are simplified up to this error instead and the hierarchical simplification is stopped. In this way all nodes are simplified up to the desired error. Since the simplification of nodes in the same level of the hierarchy is completely independent of each other, it can be parallelized in a straightforward way by distributing the nodes to simplify between different computers.

¹ Of course this number depends on the fractal dimension of the underlying mesh. But most meshes have a fractal dimension near 2, which is verified by our experiments.

To combine the subparts into one connected model, we use two different approaches depending on the size of the final simplified model. In general, during simplification all 64 grandchildren of a node are gathered into the current node and simplified. The gaps introduced along the cutting planes between them are automatically closed during simplification, since we know that their geometric distance is at most half of the approximation error threshold of the current node. Therefore, if possible we do not perform independent simplification of the subparts on the last level of the hierarchy, but in-core simplification of the combined model. When end-to-end out-of-core simplification is required, we perform an out-of-core stitching of the subparts after the last level of the hierarchy is simplified.

The following sections describe each phase of our algorithm in more detail.

3.1 Cutting

Since the gaps are automatically closed during hierarchical simplification, we do not need to preserve the triangles at node boundaries, in contrast to [3]. But if the triangles are simply sorted into one of the child nodes during partitioning, a sawtooth boundary is created which cannot be simplified efficiently without exceeding the given error tolerance of the node along the boundary. Therefore, the model is partitioned by cutting the geometry of a node into eight subparts if it contains more than Tmax triangles and storing it in its children. This partitioning is repeated until no node is split. If no geometry is contained in a node, it is marked and not partitioned further. In this way a sparse octree is built.
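The recursive partitioning can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the paper's implementation: the names `TMAX`, `build_sparse_octree` and `leaf_sizes` are ours, and each triangle is routed whole by its centroid instead of being cut at the three dividing planes as in the actual method.

```python
import random

TMAX = 4  # illustrative node capacity; the real Tmax would be much larger

def centroid(tri):
    # average of the three corner positions
    return tuple(sum(c[i] for c in tri) / 3.0 for i in range(3))

def build_sparse_octree(tris, center, half):
    """Split a node into eight children until no node holds more than
    TMAX triangles; empty octants are never created (sparse octree).
    Triangles are routed by centroid, so the Sutherland-Hodgman cutting
    step of the paper is omitted here."""
    if len(tris) <= TMAX:
        return {"leaf": True, "tris": tris}
    buckets = {}
    for tri in tris:
        c = centroid(tri)
        key = tuple(c[i] > center[i] for i in range(3))
        buckets.setdefault(key, []).append(tri)
    q = half / 2.0
    children = {}
    for key, sub in buckets.items():
        ccenter = tuple(center[i] + (q if key[i] else -q) for i in range(3))
        children[key] = build_sparse_octree(sub, ccenter, q)
    return {"leaf": False, "children": children}

def leaf_sizes(node):
    if node["leaf"]:
        return [len(node["tris"])]
    return [s for ch in node["children"].values() for s in leaf_sizes(ch)]

# route 40 random triangles in the unit cube
random.seed(1)
tris = [tuple(tuple(random.random() for _ in range(3)) for _ in range(3))
        for _ in range(40)]
tree = build_sparse_octree(tris, (0.5, 0.5, 0.5), 0.5)
sizes = leaf_sizes(tree)
```

Because triangles are only routed here, every leaf ends up with at most `TMAX` triangles and the total count is preserved; the real cutting step instead increases the triangle count slightly at every level.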

Since the whole geometry of a node and all its children generally does not fit into main memory, the vertices and normals of the mesh are stored in blocks and swapped in and out from disk using a least-recently-used (LRU) algorithm. The indices of the triangles need not be stored in memory and can therefore be streamed from the geometry file of the node to the files of its children. This is accomplished by loading the current triangle from the geometry file of the node, cutting it and then saving the generated triangles in the child geometry files. Therefore, only the current triangle and the triangles generated from it are stored in memory. After the triangle is cut it is not needed any more. When first saving all triangles in the root node, the vertex normals are calculated.
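A least-recently-used block cache of this kind fits in a few lines; the sketch below is ours, with `BlockCache` and `load_block` as assumed names standing in for the paper's geometry-block I/O.

```python
from collections import OrderedDict

class BlockCache:
    """Minimal sketch of LRU block swapping: geometry blocks are loaded
    on demand and the least recently touched block is evicted once the
    in-core budget (capacity) is exceeded."""
    def __init__(self, capacity, load_block):
        self.capacity = capacity
        self.load_block = load_block  # stands in for the actual disk read
        self.blocks = OrderedDict()

    def get(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)  # mark as most recently used
        else:
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)  # evict the LRU block
            self.blocks[block_id] = self.load_block(block_id)
        return self.blocks[block_id]

# usage: with capacity 2, the access pattern 0, 1, 0, 2, 1 evicts and
# reloads block 1 (loads records every disk read)
loads = []
cache = BlockCache(2, lambda b: loads.append(b) or b)
for b in (0, 1, 0, 2, 1):
    cache.get(b)
```

The fixed `capacity` is what makes the memory consumption configurable independently of model size, as described in the introduction.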

At each cutting step every triangle is cut with the three planes dividing the node into its children using the Sutherland-Hodgman algorithm [6] and the resulting triangles are stored in the appropriate geometry files. When a triangle edge is cut, the normal of the new point is calculated by linear interpolation. Note that new vertices may have the same coordinates as existing vertices, but this is resolved when the whole tree is built. After cutting the triangles of a node and storing them in its children, the geometry file of this node is not used any more and is deleted.
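One clipping pass with normal interpolation can be sketched as below. This is an illustrative version under stated assumptions (the function names are ours): it clips a convex polygon against a single axis-aligned plane and assumes no edge lies exactly in the plane.

```python
def clip(poly, axis, x0, keep_less):
    """One Sutherland-Hodgman pass: clip a convex polygon against the
    axis-aligned plane coord[axis] == x0 and keep one side.  Each vertex
    is a (position, normal) pair; when an edge crosses the plane the new
    vertex's normal is linearly interpolated (a real implementation
    would renormalize it)."""
    def inside(v):
        return v[0][axis] <= x0 if keep_less else v[0][axis] >= x0

    def cut(a, b):
        t = (x0 - a[0][axis]) / (b[0][axis] - a[0][axis])
        pos = tuple(p + t * (q - p) for p, q in zip(a[0], b[0]))
        nrm = tuple(p + t * (q - p) for p, q in zip(a[1], b[1]))
        return (pos, nrm)

    out = []
    for i, cur in enumerate(poly):
        prev = poly[i - 1]  # wraps around to the last vertex when i == 0
        if inside(cur):
            if not inside(prev):
                out.append(cut(prev, cur))
            out.append(cur)
        elif inside(prev):
            out.append(cut(prev, cur))
    return out

# cutting a triangle at x = 0.5 yields a quad on one side, a triangle on the other
up = (0.0, 0.0, 1.0)
tri = [((0.0, 0.0, 0.0), up), ((1.0, 0.0, 0.0), up), ((0.0, 1.0, 0.0), up)]
left = clip(tri, 0, 0.5, True)
right = clip(tri, 0, 0.5, False)
```

The quad on the kept side would then be triangulated before being written to the child geometry file.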

When the cutting is complete, new indices for the leaf node triangles are calculated and duplicate points are removed.

Since the octree data structure may grow very large for huge models, we store subtrees on disk and load them into main memory only when they need to be processed.

The total complexity of the cutting algorithm is O(n log n), since on each level of the octree all triangles need to be processed once.

3.2 Hierarchical Simplification

After cutting, the geometry contained in the octree leaves is stored on disk. Starting from the geometry of these nodes the model is simplified recursively from bottom to top using the following algorithm:

• At every even depth (2, 4, ...) of the octree gather the simplified geometry from all child nodes that are two levels below the current node (or the original geometry if there is no pre-simplified geometry at this depth). Its approximation error ε_prev is then the maximum error of the simplified geometry in these child nodes.

• Simplify the resulting geometry as long as the distance ε_h to the gathered geometry² is less than ε_s = e_node/res − ε_prev, where e_node is the edge length of the current node's bounding cube and res is the desired resolution in fractions of e_node.

² As distance measure the double-sided Hausdorff distance or the one-sided Hausdorff distance from the simplified to the original mesh can be used.

• Store ε = ε_h + ε_prev as the approximation error in the current node.

By using the children two levels below the current node instead of its direct children, the simplified geometry contains fewer triangles, since the approximation of the real geometric error is better. This is due to the fact that the difference between the estimated geometric error ε and the real geometric error ε_real is low, since

    ε_real ≥ ε_s = e_node/res − ε_prev ≥ e_node/res − e_node/(4·res) = (3/4)·e_node/res ≥ (3/4)·ε

and thus (3/4)·ε ≤ ε_real ≤ ε.
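As a numerical sanity check on this bound, the worst-case budget arithmetic can be spelled out; `node_budget` and the concrete numbers are illustrative, not from the paper.

```python
def node_budget(e_node, res, eps_prev):
    # eps_s = e_node/res - eps_prev: error budget left for the current node
    return e_node / res - eps_prev

e_node, res = 8.0, 100.0
# a grandchild cube has edge e_node/4, so its accumulated error eps_prev
# is at most (e_node/4)/res ...
worst_prev = (e_node / 4.0) / res
# ... leaving at least three quarters of the total budget e_node/res
eps_s = node_budget(e_node, res, worst_prev)
```

With `e_node = 8` and `res = 100` the total budget is 0.08 and at least 0.06 of it remains for the current pass, matching the 3/4 factor above.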

Starting with the already simplified geometry gathered from the grandchildren of the current node greatly reduces the computation cost and still leads to high quality drastic simplifications. Since the input and output numbers of triangles in an octree cell generally remain in the same order of magnitude, and since the vertices inside a node that are no closer to each other than ε = e_node/res are bound to number less than (12/π)·res³, the complexity of the simplification algorithm depends linearly on the number of nodes in the octree and therefore is O(n). This means that the total simplification time depends only linearly on the number of leaf nodes and thus linearly on the number of triangles in the base geometry. Therefore, the total time for this out-of-core simplification algorithm sums up to O(n log n), where n is the number of input triangles.

In order to close the cracks introduced by the cutting and independent simplification in previous stages of the recursion, the simplifier has to be capable of performing topological simplification. Performing standard vertex pair contraction simplification on such data can have undesirable results (figure 2, left).

Therefore, the generalized pair contractions operator described by Borodin et al. [2] has been used. This approach extends the vertex pair contraction by introducing new contraction operations: vertex-edge, vertex-triangle and edge-edge contractions. In the case of vertex-edge and vertex-triangle contractions the contraction vertex


Figure 2: Hierarchical simplification using only vertex pair contractions (left) and generalized pair contractions (right). The arrows point to some of the cracks introduced by cutting and independent simplification and not closed by vertex pair contractions.

is contracted onto an intermediate vertex which is created on the contraction edge or triangle. In the case of an edge-edge contraction two intermediate vertices are created, one on each contraction edge, and then contracted together. Note that these three operations perform no reduction, but increase the connectedness of the mesh³. Nevertheless, the use of this technique resolves the previously shown problems by sewing disconnected parts together (figure 2, right). More details on generalized pair contractions can be found in [2].

3.3 Stochastic Simplification

As the criterion for the choice of the next contraction operation we use the quadric error metric presented by Garland and Heckbert [7].

Although the quadric error metric is a fast technique which provides good results, it does not deliver the Hausdorff distance. In our case this is a necessary requirement. Therefore, in addition to the quadric error metric we calculate the Hausdorff distance between the original and the simplified meshes. This is done in the same way as described by Hoppe [10] and Klein et al. [12]. Before contracting the chosen candidate pair we always check whether the Hausdorff error which would be produced by this operation is less than the given error threshold. If not, we reject the operation. Thus we avoid all operations whose errors exceed the maximum error set for the given hierarchical level.

³ In the presented algorithm we do not perform contractions between two edges, as the search for corresponding edges is very time consuming.
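The acceptance test can be illustrated with a sampled one-sided distance. This sketch uses our own names and measures sample points against a reference point set, whereas the actual method of [10, 12] measures distances to the original surface:

```python
def one_sided_distance(samples, reference):
    """Largest distance from any sample point to its nearest reference
    point: a sampled stand-in for the one-sided Hausdorff distance."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return max(min(dist(p, q) for q in reference) for p in samples)

def accept_contraction(samples_after, original_points, threshold):
    # reject any operation whose resulting error would exceed the threshold
    return one_sided_distance(samples_after, original_points) <= threshold

original = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
proposed = [(0.0, 0.05), (2.0, 0.0)]  # surface after a candidate contraction
ok = accept_contraction(proposed, original, 0.1)
```

A contraction is applied only when `accept_contraction` holds; otherwise the candidate is rejected and flagged as described below.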

During each node simplification an idea proposed by Wu and Kobbelt [21] is used. Instead of using a priority queue to order candidates for contraction operations, at each simplification step we stochastically pick N_rand vertices v_i as candidates for the next contraction operation. Then, for each candidate vertex the neighbour simplex s_i is found such that the contraction of v_i and s_i results in the smallest quadric error. In [2] this search procedure is described in detail. Since the search for nearest neighbour simplices is expensive, we do it for N_search vertices only. For the remaining N_rand − N_search vertices we check only their adjacent vertices (this means that for these vertices only edge collapses can be found). Of course, for vertices which lie on boundaries we always have to perform the complete search in order to close the cracks introduced by the cutting⁴.

After defining N_rand candidate contraction pairs, we choose the one with the smallest quadric error that will arise after contracting it. The new position of the contraction vertex is chosen in order to minimize this error.

Once an operation is rejected we mark the vertex with a flag, which is valid only until an operation on a neighbour simplex is performed. If a randomly chosen vertex is marked with this flag, we choose the next vertex. Once all operation candidates have been rejected and marked, the simplification of the given node cannot be continued further without exceeding the maximum error threshold and we stop.
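One selection step can be sketched as follows; this is a hedged illustration where `quadric_error` is a caller-supplied cost function standing in for the real per-pair quadric evaluation, and the names are ours:

```python
import random

def pick_contraction(vertices, quadric_error, n_rand, rejected):
    """One step of stochastic candidate selection: sample up to n_rand
    vertices that are not flagged as rejected and return the one whose
    best contraction has the smallest (mocked) quadric error."""
    pool = [v for v in vertices if v not in rejected]
    if not pool:
        return None  # every candidate rejected: node simplification stops
    candidates = random.sample(pool, min(n_rand, len(pool)))
    return min(candidates, key=quadric_error)

# with all vertices but one flagged, only that vertex can be chosen
v = pick_contraction(list(range(10)), lambda x: x, 4, set(range(9)))
# with nothing flagged and n_rand >= pool size, the cheapest vertex wins
w = pick_contraction(list(range(10)), lambda x: x, 20, set())
```

Sampling replaces the global priority queue, trading the globally optimal next operation for much cheaper bookkeeping, which is exactly the trade-off quantified in table 2.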

N_rand    ∆output   Time (m:ss)   Rate (∆/sec)
4         33 780    6:18          826
6         33 733    6:35          790
8         33 775    6:47          767
10        33 933    7:15          717
queue     33 829    8:56          582

Table 2: Impact of the number N_rand of vertices randomly selected at each simplification step on the reduction and performance rates for the Armadillo model.

Table 2 demonstrates how the quality and performance rates of our algorithm depend on the number N_rand of vertices randomly selected at each simplification step. Computations have been done for the Armadillo model with an error threshold set to 0.129% of the diagonal of the bounding box. For all other models the results are similar. In the last row of the table the rates for a similar simplification algorithm driven by a priority queue are shown. In shorter time the stochastic approach achieves even greater reduction rates than the priority queue. Note that all times include the cutting time (≈1:10), which does not depend on the simplification parameters.

⁴ Practically, for the models presented in this paper, we performed the complete search of nearest neighbour simplices only for boundary vertices.

3.4 Stitching

To generate a consistent mesh from the independently simplified nodes we move a stitching frame over the model.

This frame is placed as shown in figure 3. For all border vertices inside this frame the closest simplex in the other seven nodes is determined and a contraction operation is applied if the distance is less than 2ε. In this way all gaps introduced by the independent simplification of the nodes are closed.
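The per-frame matching step can be sketched as below. This is a simplified illustration with our own names: the paper contracts a border vertex against the closest simplex (vertex, edge or triangle), whereas this sketch compares vertices only and merges a matched pair to its midpoint.

```python
def stitch_borders(border_a, border_b, eps):
    """For each border vertex of one node, find the closest vertex on a
    neighbouring node's border and contract the pair (here: snap both to
    their midpoint) if they are closer than 2*eps."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    merged = {}
    for i, p in enumerate(border_a):
        j, q = min(enumerate(border_b), key=lambda jq: dist(p, jq[1]))
        if dist(p, q) < 2.0 * eps:
            merged[(i, j)] = tuple((a + b) / 2.0 for a, b in zip(p, q))
    return merged

# two borders that almost coincide are sewn; the far-away vertex is left alone
a = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
b = [(0.01, 0.0, 0.0), (0.01, 1.0, 0.0), (5.0, 5.0, 5.0)]
pairs = stitch_borders(a, b, 0.1)
```

The 2ε threshold is safe because two independently simplified nodes each deviate from the original surface by at most ε, so matching border vertices can be at most 2ε apart.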

Figure 3: Stitching frames for the torso of the Armadillo model.

Finally, duplicate vertices are removed and new global indices are stored in each node. In this way a new vertex index can be calculated by checking only the direct neighbor nodes, leading to a stitching time of O(n), where n is the number of input triangles. Then the simplified and stitched geometry is written into a single file that may again exceed the amount of main memory available.

Figure 4: The head of the Happy Buddha model before (left) and after (right) stitching.

Figure 4 demonstrates the stitching on the head of the Happy Buddha model.

4 Results

All results presented in this paper have been measured on a 1.8 GHz Pentium 4 PC with 512 MB main memory. Like other methods we restrict ourselves during the simplification to the one-sided Hausdorff distance from the simplified to the original model.

In table 3 the reduction and performance rates of our algorithm for four models from the Stanford 3D Scanning Repository [14] and The Digital Michelangelo Project [13] are shown. The simplified Lucy and David models are shown in figure 1.

The simplification time for these models is split into three parts. The cutting of the model has an approximate splitting rate of 25 000/log n triangles/sec, where n is the number of input triangles, and the simplification algorithm has an approximate reduction rate of 960 triangles/sec. The stitching algorithm was not applied since the simplified models fit into main memory, but it performs at more than 100 000 triangles/sec. Since the hierarchical simplification can be parallelized, we ran the simplification on ten PCs, achieving a linear speedup of the reduction rate by a factor of ten [9].

A quality comparison of our algorithm with previous methods [7, 15, 3, 21] is shown in table 4 and in figure 5.

Reduction rates for the simplification of the Happy Buddha model (∆input = 1 087 716) were measured using the MESH tool [1]. As table 4 demonstrates, both the one-sided and the symmetric Hausdorff distances between the simplified and original meshes


Model          ∆input       ∆output   Error (% of diag.)   Cutting time (h:mm:ss)   Simpl. time (h:mm:ss)   Rate (∆/sec)
Armadillo      345 944      33 780    0.129                0:01:12                  0:05:06                 826
Happy Buddha   1 087 716    32 377    0.170                0:04:40                  0:19:28                 728
David 2mm      8 254 150    25 888    0.178                0:38:01                  2:22:02                 762
Lucy           28 055 742   26 772    0.163                2:19:08                  8:03:57                 779

Table 3: Reduction and performance rates of our algorithm for four standard models using a single PC.

Method          ∆output   One-sided (% of diag.)   Symm. (% of diag.)
QSlim v2.0      18 338    0.261                    0.786
OOCC            19 071    0.919                    0.919
OEMM-QEM        18 338    0.505                    0.821
Stream decim.   18 486    0.488                    0.818
Our method      18 248    0.176                    0.706

Table 4: Results of different simplification methods for the Happy Buddha model.

in our approach are smaller even than those of the in-core QSlim. Of course, since we use the one-sided Hausdorff distance during simplification, it is significantly lower than the symmetric (double-sided) Hausdorff distance.

In figure 5 it is clearly visible that, compared to the other methods, details (e.g. the necklace and the mouth) and silhouettes are better preserved by our algorithm.

5 Conclusion

In this paper we presented a high quality end-to-end out-of-core mesh simplification algorithm. The main features of the algorithm are that it guarantees a maximum geometric distance between the original and simplified model and that topological simplification is performed in a geometric error controlled manner. Furthermore, the maximum allocated main memory can be restricted by the user. Although, due to these advantages, the reduction rates are lower than those of other recent algorithms, they are almost constant regardless of the size of the input model. This demonstrates the optimality of the approach.

Acknowledgements

We thank Marc Levoy, Paolo Cignoni and Jianhua Wu for providing us with the models used for measurements.

Figure 5: Results of different out-of-core simplification methods for the Happy Buddha model. Original mesh: 1 087 716 triangles; OEMM-QEM: 18 338 triangles; stream decimation: 18 486 triangles; our method: 18 248 triangles.

References

[1] Nicolas Aspert, Diego Santa-Cruz, and Touradj Ebrahimi. MESH: Measuring errors between surfaces using the Hausdorff distance. In Proceedings of the IEEE International Conference on Multimedia and Expo, volume I, pages 705–708, 2002. http://mesh.epfl.ch.

[2] Pavel Borodin, Stefan Gumhold, Michael Guthe, and Reinhard Klein. High-quality simplification with generalized pair contractions. In GraphiCon 2003, September 2003.

[3] Paolo Cignoni, Claudio Rocchini, Claudio Montani, and Roberto Scopigno. External memory management and simplification of huge meshes. In IEEE Transactions on Visualization and Computer Graphics. IEEE, 2002.

[4] Jihad El-Sana and Yi-Jen Chiang. External memory view-dependent simplification and rendering. Computer Graphics Forum, 19(3), 2000.

[5] Carl Erikson and Dinesh Manocha. HLODs for faster display of large static and dynamic environments. In ACM Symposium on Interactive 3D Graphics, 2000.

[6] James D. Foley, Andries van Dam, Steven K. Feiner, and John F. Hughes. Computer Graphics: Principles and Practice. Addison-Wesley, 2nd edition, 1990.

[7] Michael Garland and Paul S. Heckbert. Surface simplification using quadric error metrics. Computer Graphics, 31(Annual Conference Series):209–216, 1997.

[8] Michael Garland and Eric Shaffer. A multiphase approach to efficient surface simplification. In IEEE Visualization, pages 117–124. IEEE, 2003.

[9] Michael Guthe, Pavel Borodin, and Reinhard Klein. Efficient view-dependent out-of-core visualization. In The 4th International Conference on Virtual Reality and its Application in Industry (VRAI 2003), October 2003.

[10] Hugues Hoppe. View-dependent refinement of progressive meshes. Computer Graphics, 31(Annual Conference Series):189–198, 1997.

[11] Hugues Hoppe. Smooth view-dependent level-of-detail control and its application to terrain rendering. In IEEE Visualization, pages 35–52. IEEE, 1998.

[12] Reinhard Klein, Gunther Liebich, and Wolfgang Straßer. Mesh reduction with error control. In Roni Yagel and Gregory M. Nielson, editors, IEEE Visualization '96, pages 311–318, 1996.

[13] Marc Levoy. The Digital Michelangelo Project – http://www-graphics.stanford.edu/projects/mich.

[14] Marc Levoy. The Stanford 3D Scanning Repository – http://www-graphics.stanford.edu/data/3dscanrep.

[15] Peter Lindstrom. Out-of-core simplification of large polygonal models. In ACM SIGGRAPH, 2000.

[16] Peter Lindstrom and Claudio T. Silva. A memory insensitive technique for large model simplification. In IEEE Visualization. IEEE, 2001.

[17] Kok-Lim Low and Tiow Seng Tan. Model simplification using vertex-clustering. In Symposium on Interactive 3D Graphics, pages 75–82, 188, 1997.

[18] Jovan Popović and Hugues Hoppe. Progressive simplicial complexes. In SIGGRAPH, 1997.

[19] Chris Prince. Progressive meshes for large models of arbitrary topology. Master's thesis, Department of Computer Science and Engineering, University of Washington, Seattle, 2000.

[20] Jarek Rossignac and Paul Borrel. Multi-resolution 3D approximations for rendering. In Modeling in Computer Graphics. Springer-Verlag, 1993.

[21] Jianhua Wu and Leif Kobbelt. A stream algorithm for the decimation of massive meshes. In Graphics Interface Proceedings, 2003.