Out-of-Core Mesh Processing

Up to this point, we have implicitly assumed that the meshes we work with are much smaller than main memory. The algorithms we’ve covered all begin by loading the entire mesh and building some kind of in-memory data structure (e.g., halfedges). If we’re dealing with multi-gigabyte meshes, that assumption breaks down completely. We’ll now look at the issues involved in efficiently processing massive mesh models, typically in the range of 10 million to 1 billion triangles.
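
To make the contrast concrete, here is a minimal sketch of the out-of-core style of computation: a single streaming pass over an OBJ file (the file name is hypothetical) that computes the mesh’s bounding box while holding only a constant amount of state, no matter how large the file is.

    def bounding_box(path):
        """One streaming pass over an OBJ file; memory use is O(1)."""
        lo = [float("inf")] * 3
        hi = [float("-inf")] * 3
        with open(path) as f:
            for line in f:
                if line.startswith("v "):        # vertex record: "v x y z"
                    x, y, z = map(float, line.split()[1:4])
                    for i, c in enumerate((x, y, z)):
                        lo[i] = min(lo[i], c)
                        hi[i] = max(hi[i], c)
        return lo, hi

    # lo, hi = bounding_box("lucy.obj")   # hypothetical multi-gigabyte input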

Required reading

First lecture

We’ll begin by discussing the problem of simplifying massive mesh models.
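
Most out-of-core simplifiers are built on vertex clustering. As a minimal sketch (not any particular paper’s method), the version below quantizes each vertex to a uniform grid cell, keeps one running average per occupied cell, and keeps a triangle only if its corners land in three distinct cells. Memory scales with the number of occupied cells, not the input size; production systems accumulate quadric error matrices per cell rather than plain averages.

    def cell(v, lo, size):
        """Quantize a point to the index of its uniform grid cell."""
        return tuple(int((v[i] - lo[i]) / size) for i in range(3))

    def cluster_simplify(triangles, lo, size):
        """triangles: iterable of 3-tuples of (x, y, z) positions."""
        sums, counts, kept = {}, {}, []
        for tri in triangles:
            keys = [cell(v, lo, size) for v in tri]
            for k, v in zip(keys, tri):
                s = sums.setdefault(k, [0.0, 0.0, 0.0])
                for i in range(3):
                    s[i] += v[i]
                counts[k] = counts.get(k, 0) + 1
            if len(set(keys)) == 3:              # survives clustering
                kept.append(keys)
        reps = {k: tuple(c / counts[k] for c in s) for k, s in sums.items()}
        return [tuple(reps[k] for k in keys) for keys in kept]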

Second lecture

More generally, the central concern when processing large meshes is locality. Here are two papers that develop a stream-oriented approach to processing large meshes, one that enforces good locality by design.
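
The key device in the streaming formulation is vertex finalization: the stream interleaves vertices and triangles with tags announcing that a vertex will never be referenced again, so it can be evicted from memory. Here is a minimal sketch, with a hypothetical event format, that counts each vertex’s incident triangles while keeping only the live (unfinalized) vertices resident.

    def triangle_counts(events):
        """Yield (vertex id, incident-triangle count) as vertices finalize."""
        live = {}                       # only unfinalized vertices reside here
        for tag, data in events:
            if tag == "v":              # ("v", vertex_id): vertex introduced
                live[data] = 0
            elif tag == "f":            # ("f", (i, j, k)): triangle
                for i in data:
                    live[i] += 1
            elif tag == "x":            # ("x", vertex_id): finalized; evict
                yield data, live.pop(data)

    stream = [("v", 0), ("v", 1), ("v", 2), ("f", (0, 1, 2)),
              ("x", 0), ("x", 1), ("x", 2)]
    for vid, count in triangle_counts(stream):
        print(vid, count)               # each vertex touches one triangle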

Recommended reading

Uniform grids are the most natural method for spatial clustering of vertices, but you can do better by using data-dependent BSP trees.
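
A minimal sketch of the data-dependent alternative: rather than fixed cells, split the vertex set recursively where the data actually lies. For brevity this uses axis-aligned median splits (a kd-style stand-in); the BSP trees in the paper choose general split planes from the vertex distribution.

    def adaptive_clusters(points, max_pts):
        """Recursively split (max_pts >= 1) until every cluster is small."""
        if len(points) <= max_pts:
            return [points]
        spans = [(max(p[a] for p in points) - min(p[a] for p in points), a)
                 for a in range(3)]
        axis = max(spans)[1]                 # axis of greatest spread
        points = sorted(points, key=lambda p: p[axis])
        mid = len(points) // 2               # median split plane
        return (adaptive_clusters(points[:mid], max_pts) +
                adaptive_clusters(points[mid:], max_pts))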

Using out-of-core sorting, Lindstrom and Silva show how to simplify arbitrarily large meshes in O(1) memory.
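
The structure of a sort-based simplifier looks roughly like this (a sketch, not the paper’s exact method): spill (cell, vertex) records to sorted runs on disk, merge the runs so that all records for a cell arrive together, and stream out one representative vertex per cell. Peak memory is the fixed run size, independent of the mesh. The paper accumulates quadrics per cell; this sketch averages positions to keep the I/O structure visible.

    import heapq, tempfile

    def write_run(buf):
        """Sort one in-memory batch of records and spill it to disk."""
        f = tempfile.TemporaryFile(mode="w+")
        for rec in sorted(buf):
            f.write("%d %f %f %f\n" % rec)
        f.seek(0)
        return f

    def read_run(f):
        """Stream a spilled run back as (cell, x, y, z) tuples."""
        for line in f:
            k, x, y, z = line.split()
            yield int(k), float(x), float(y), float(z)

    def simplify(records, run_size=1_000_000):
        """records: iterable of (cell_id, x, y, z); yields one vertex/cell."""
        runs, buf = [], []
        for rec in records:                  # pass 1: spill sorted runs
            buf.append(rec)
            if len(buf) >= run_size:
                runs.append(write_run(buf))
                buf = []
        if buf:
            runs.append(write_run(buf))
        cur, s, n = None, [0.0, 0.0, 0.0], 0
        for k, x, y, z in heapq.merge(*map(read_run, runs)):
            if k != cur and n:               # merging groups records by cell
                yield cur, (s[0] / n, s[1] / n, s[2] / n)
                s, n = [0.0, 0.0, 0.0], 0
            cur = k
            s[0] += x; s[1] += y; s[2] += z; n += 1
        if n:
            yield cur, (s[0] / n, s[1] / n, s[2] / n)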

Here’s an earlier stream-oriented simplification paper that relies on randomization rather than a strict priority queue.
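
The randomized idea, sketched below with placeholder cost() and collapse() routines: rather than keeping every candidate collapse in a global priority queue, draw a handful at random and apply the cheapest. With even modest sample sizes this tracks the greedy ordering closely while needing no queue maintenance.

    import random

    def multiple_choice_decimate(edges, cost, collapse, target, k=8):
        """Collapse cheap edges found by random sampling; no global queue."""
        edges = list(edges)
        while len(edges) > target:
            sample = random.sample(edges, min(k, len(edges)))
            best = min(sample, key=cost)     # near-cheapest, with high probability
            collapse(best)                   # placeholder: real edge collapse
            edges.remove(best)               # a real decimator would also update
                                             # the neighborhood of `best`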

This paper investigates a method for compressing very large meshes.
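
Whatever the surrounding machinery, large-mesh geometry compressors share a predict-and-encode core, sketched below: quantize coordinates to an integer grid, predict each vertex from an already-decoded one, and store only the small residuals. This is only illustrative; real schemes, including this paper’s, use stronger predictors (e.g., parallelogram prediction) and entropy coding.

    def encode(vertices, lo, step):
        """Quantize to a grid of spacing `step`; emit deltas from prev vertex."""
        prev = (0, 0, 0)
        for v in vertices:
            q = tuple(round((v[i] - lo[i]) / step) for i in range(3))
            yield tuple(q[i] - prev[i] for i in range(3))   # small residual
            prev = q

    def decode(residuals, lo, step):
        """Exact inverse of encode, up to quantization error of step/2."""
        prev = (0, 0, 0)
        for r in residuals:
            q = tuple(prev[i] + r[i] for i in range(3))
            yield tuple(lo[i] + q[i] * step for i in range(3))
            prev = q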