Up to this point, we have implicitly assumed that the meshes we work with are much smaller than main memory. The algorithms we have seen all begin by loading the entire mesh into memory and building some in-memory data structure (e.g., a halfedge structure). If we are dealing with multi-gigabyte meshes, this assumption breaks down. Now we will look at the issues involved in efficiently processing massive mesh models, typically in the range of 10 million to 1 billion triangles.
We’ll begin by discussing the problem of simplifying massive mesh models.
More generally, the central concern when processing large meshes is locality. The following two papers develop a stream-oriented approach that enforces good locality by processing the mesh in the order it is stored on disk; a minimal sketch of the streaming idea follows the references.
M. Isenburg, P. Lindstrom, S. Gumhold, and J. Snoeyink, Large mesh simplification using processing sequences. In Proceedings of IEEE Visualization 2003. [PDF]
M. Isenburg and P. Lindstrom. Streaming meshes. In Proceedings of IEEE Visualization 2005. [PDF]
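To make the streaming idea concrete, here is a minimal sketch (not the file format or API from the papers above) of a processor that consumes an interleaved stream of vertex, triangle, and vertex-finalization records. Because each vertex is explicitly finalized once no later triangle refers to it, the processor only ever holds the "active" front of the mesh in memory; here it simply accumulates total surface area. The event encoding and function names are illustrative assumptions.

```python
import math

def stream_surface_area(events):
    """Consume a streaming mesh: ('v', id, x, y, z), ('f', i, j, k),
    and ('x', id) events, where 'x' finalizes a vertex that no later
    triangle will reference. Only active vertices stay in memory."""
    active = {}          # vertex id -> (x, y, z), the in-memory "front"
    total_area = 0.0
    for ev in events:
        if ev[0] == 'v':                     # vertex introduction
            _, vid, x, y, z = ev
            active[vid] = (x, y, z)
        elif ev[0] == 'f':                   # triangle; all corners must be active
            _, i, j, k = ev
            a, b, c = active[i], active[j], active[k]
            u = [b[d] - a[d] for d in range(3)]
            v = [c[d] - a[d] for d in range(3)]
            cx = u[1]*v[2] - u[2]*v[1]
            cy = u[2]*v[0] - u[0]*v[2]
            cz = u[0]*v[1] - u[1]*v[0]
            total_area += 0.5 * math.sqrt(cx*cx + cy*cy + cz*cz)
        elif ev[0] == 'x':                   # finalization: safe to evict
            del active[ev[1]]
    return total_area

# Usage: a tiny hand-built stream for a single triangle.
events = [('v', 0, 0.0, 0.0, 0.0), ('v', 1, 1.0, 0.0, 0.0),
          ('v', 2, 0.0, 1.0, 0.0), ('f', 0, 1, 2),
          ('x', 0), ('x', 1), ('x', 2)]
print(stream_surface_area(events))   # 0.5
```

The same pattern supports any per-element computation whose working set is bounded by the width of the active front rather than by the size of the whole mesh.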
Uniform grids are the most natural method for spatially clustering vertices, but you can do better with data-dependent BSP trees and multiphase algorithms; a sketch of grid-based clustering follows these references.
P. Lindstrom. Out-of-core simplification of large polygonal models. In Proceedings of SIGGRAPH 2000, pp. 259–262, July 2000. [PDF]
E. Shaffer and M. Garland. Efficient adaptive simplification of massive meshes. In Proceedings of IEEE Visualization 2001. [PDF]
M. Garland and E. Shaffer. A multiphase approach to efficient surface simplification. In Proceedings of IEEE Visualization 2002. [PDF]
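As a point of reference, here is a minimal sketch of uniform-grid vertex clustering over a triangle soup, in the spirit of Lindstrom's out-of-core simplification but greatly simplified: each occupied cell's representative is just the centroid of the vertices that fall in it, whereas the paper places representatives by minimizing a quadric error. One streaming pass over the triangles suffices, and memory scales with the number of occupied cells rather than with the input size. All names here are illustrative.

```python
from collections import defaultdict

def grid_cluster_simplify(triangles, cell_size):
    """triangles: iterable of ((x,y,z), (x,y,z), (x,y,z)) triples (triangle soup).
    Returns (vertices, faces) of the simplified mesh. The representative of
    each cell is the centroid of its vertices (the paper instead minimizes
    a quadric error metric per cell)."""
    cell_sum = defaultdict(lambda: [0.0, 0.0, 0.0, 0])   # cell -> coord sums + count
    faces = set()

    def cell_of(p):
        return tuple(int(c // cell_size) for c in p)

    for tri in triangles:                    # single streaming pass over the soup
        cells = []
        for p in tri:
            c = cell_of(p)
            s = cell_sum[c]
            s[0] += p[0]; s[1] += p[1]; s[2] += p[2]; s[3] += 1
            cells.append(c)
        if len(set(cells)) == 3:             # keep only triangles spanning 3 distinct cells
            faces.add(tuple(cells))

    # Number the occupied cells and emit one representative vertex per cell.
    index = {c: i for i, c in enumerate(cell_sum)}
    vertices = [(s[0]/s[3], s[1]/s[3], s[2]/s[3]) for s in cell_sum.values()]
    out_faces = [tuple(index[c] for c in f) for f in faces]
    return vertices, out_faces
```

In a genuinely out-of-core setting the output faces would be streamed to disk as they are produced; only the per-cell accumulators need to stay resident.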
Using out-of-core sorting, Lindstrom and Silva show how to simplify large meshes with only a constant amount of memory; the sketch below shows the basic sort-then-scan pattern.
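To illustrate the general sort-then-scan pattern (not the paper's specific pipeline), here is a minimal external merge sort followed by a streaming reduction: per-cell records are spilled to sorted run files whose size is bounded by a fixed chunk size, the runs are combined with a k-way merge, and contiguous records with the same key are reduced one key at a time. Peak memory is bounded by the chunk size, not by the amount of data. The record format and function names are illustrative assumptions.

```python
import heapq, os, pickle, tempfile

def external_sort_reduce(records, reduce_group, chunk_size=100_000):
    """records: iterable of (key, value) pairs, e.g. (cell id, vertex contribution).
    Spills sorted runs to disk, k-way merges them, and yields
    (key, reduce_group(values)) for each distinct key, using bounded memory."""
    runs = []
    chunk = []

    def spill(chunk):
        chunk.sort(key=lambda r: r[0])
        f = tempfile.NamedTemporaryFile(delete=False)
        for r in chunk:
            pickle.dump(r, f)
        f.close()
        runs.append(f.name)

    for rec in records:                      # phase 1: bounded-memory sorted runs
        chunk.append(rec)
        if len(chunk) >= chunk_size:
            spill(chunk)
            chunk = []
    if chunk:
        spill(chunk)

    def read_run(path):
        with open(path, 'rb') as f:
            while True:
                try:
                    yield pickle.load(f)
                except EOFError:
                    return

    merged = heapq.merge(*(read_run(p) for p in runs), key=lambda r: r[0])

    # phase 2: scan the globally sorted stream, reducing one key at a time
    cur_key, group = None, []
    for k, v in merged:
        if k != cur_key and group:
            yield cur_key, reduce_group(group)
            group = []
        cur_key = k
        group.append(v)
    if group:
        yield cur_key, reduce_group(group)

    for p in runs:
        os.remove(p)

# Usage: per-cell centroids with memory bounded by the chunk size.
recs = [((0, 0, 0), (0.0, 0.0, 0.0)), ((1, 0, 0), (2.0, 0.0, 0.0)),
        ((0, 0, 0), (1.0, 1.0, 1.0))]
mean = lambda vs: tuple(sum(c) / len(vs) for c in zip(*vs))
print(list(external_sort_reduce(recs, mean, chunk_size=2)))
```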
Here’s an earlier stream-oriented simplification paper that relies on randomization rather than a strict priority queue.
This paper investigates a method for compressing very large meshes.