Research

Main Pages

Author

Michael Garland
garland@uiuc.edu

My students and I currently have a number of ongoing research projects. This page provides an overview of the main research topics that we are pursuing. I have added links to relevant papers, but for complete information you’ll want to consult my list of publications.

Surface Analysis

We have been developing new methods for analyzing the topological and geometric structure of polygonal meshes. The foundation of our work here is the computation of discrete harmonic functions over triangulated surfaces. We have developed an irregular multigrid algorithm that makes computing these harmonic fields efficient. We have demonstrated that these functions can be used as Morse functions for analyzing the topology of the manifold. These harmonic fields also have many important connections to conformal parameterization, minimal surfaces, and the theory of differential forms on manifolds.
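As a rough illustration, a discrete harmonic function with prescribed values at a few vertices can be found by solving a Laplacian linear system. The sketch below uses uniform combinatorial edge weights and a dense direct solve; these are simplified stand-ins for the cotangent weights and irregular multigrid solver of the actual method.

```python
import numpy as np

def harmonic_field(n_verts, edges, constraints):
    """Solve for a discrete harmonic function on a mesh graph.

    Uses uniform (combinatorial) weights and a dense direct solve,
    standing in for cotangent weights and a multigrid solver.
    `constraints` maps pinned vertex indices to their fixed values.
    """
    # Assemble the graph Laplacian L.
    L = np.zeros((n_verts, n_verts))
    for i, j in edges:
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    free = [v for v in range(n_verts) if v not in constraints]
    fixed = sorted(constraints)
    # Move the known (pinned) values to the right-hand side.
    b = -L[np.ix_(free, fixed)] @ np.array([constraints[v] for v in fixed])
    u = np.zeros(n_verts)
    for v in fixed:
        u[v] = constraints[v]
    u[free] = np.linalg.solve(L[np.ix_(free, free)], b)
    return u

# Path graph 0-1-2-3 with pinned endpoints.
u = harmonic_field(4, [(0, 1), (1, 2), (2, 3)], {0: 0.0, 3: 3.0})
```

On a path graph with pinned endpoints the harmonic solution is simply linear interpolation, which makes the example easy to check by hand.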

Modeling by Example

Traditional modeling packages are often cumbersome for non-expert users. We have been exploring methodologies that would allow most users to perform interesting modeling tasks without the need to learn complex software packages. The paradigm we have been using is modeling by example. The key idea is that a user specifies in a fairly direct and easy manner what should happen to the surface. The system then automatically applies this example to the entire surface. For example, in our similarity-based editing system a user can paint the entire surface by coloring only a handful of points. The rest of the surface is then colored according to its similarity to the anchor points painted by the user.
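To make the paradigm concrete, the sketch below propagates anchor colors to every vertex by nearest-neighbor matching in a per-vertex feature space. Both the feature descriptors and the nearest-anchor rule are illustrative placeholders, not the similarity measure used in our system.

```python
import numpy as np

def paint_by_similarity(features, anchors):
    """Propagate colors from a few painted anchor vertices to all
    vertices, based on similarity in a per-vertex feature space.

    `features` is an (n, d) array of descriptors; `anchors` maps a
    vertex index to its painted color. This nearest-anchor rule is a
    simplified stand-in for the actual similarity measure.
    """
    idx = list(anchors)
    anchor_feats = features[idx]
    colors = []
    for f in features:
        # Assign the color of the most similar anchor.
        d = np.linalg.norm(anchor_feats - f, axis=1)
        colors.append(anchors[idx[int(np.argmin(d))]])
    return colors

# Four vertices with 1-D descriptors; two of them are painted.
feats = np.array([[0.0], [0.1], [1.0], [0.9]])
colors = paint_by_similarity(feats, {0: "red", 2: "blue"})
```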

Texture Synthesis

It is often convenient to be able to generate arbitrary amounts of texture from a single texture sample. While there has been considerable work in this area of late, our particular focus has been on developing algorithms that can synthesize texture at near real-time rates.

The key idea behind our work is to partition the problem of texture synthesis into an off-line analysis phase and an on-line synthesis phase. The analysis phase generates a graph encoding probabilistic links between similar neighborhoods, which we call a jump map. The synthesis phase consists of a random walk through this graph. Our initial development of the jump map was used in generating texture images, which we were able to do at rates up to 3 million texels/second. More recently, we have demonstrated that the jump map can be used to synthesize textures directly on surfaces interactively. No additional texture space is required beyond that needed for the original sample, and the texture scale can be set independently of the mesh resolution.
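The two phases can be illustrated in one dimension. In this sketch, the analysis step links positions with matching texel values, a stand-in for comparing full pixel neighborhoods, and the synthesis step performs the random walk through the resulting graph.

```python
import random

def build_jump_map(sample):
    """Analysis phase: for each sample position, record the other
    positions whose (here: single-texel) neighborhoods match.
    The real jump map compares full pixel neighborhoods and stores
    jump probabilities; this 1-D sketch uses exact value matches."""
    jumps = {}
    for i, v in enumerate(sample):
        jumps[i] = [j for j, w in enumerate(sample) if j != i and w == v]
    return jumps

def synthesize(sample, jumps, length, p_jump=0.3, rng=None):
    """Synthesis phase: walk through the sample, copying texels and
    occasionally jumping to a recorded similar position."""
    rng = rng or random.Random(0)
    pos, out = 0, []
    for _ in range(length):
        out.append(sample[pos])
        if jumps[pos] and rng.random() < p_jump:
            pos = rng.choice(jumps[pos])
        pos = (pos + 1) % len(sample)
    return out

sample = [1, 2, 1, 3]
out = synthesize(sample, build_jump_map(sample), 10)
```

Because all the neighborhood comparison happens off-line, the on-line walk does only constant work per output texel, which is the source of the method's speed.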

Surface Simplification

Extremely precise polygonal surface models are now widely available, in large part due to advances in scanning technology. Models of this complexity, perhaps several million polygons, frequently contain far more detail than required for the target application. Simplification can provide more economical models, containing only the amount of detail necessary.

I developed a quadric-based simplification algorithm for automatically producing high-quality approximations of polygonal surface models. This approach is efficient, produces good results, and is quite simple to implement. In addition to working well in practice, the method has theoretical support: analysis has shown a direct connection between the quadric error metric and the curvature of the surface under consideration.
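At the heart of the method is the quadric error metric: each plane p = (a, b, c, d) through a vertex contributes a fundamental quadric K_p = pp^T, quadrics are summed per vertex, and the cost of placing a contracted vertex at v is the quadratic form v^T Q v. A minimal sketch of the metric itself (the contraction loop and placement optimization are omitted):

```python
import numpy as np

def plane_quadric(a, b, c, d):
    """Fundamental quadric K_p = p p^T for the plane ax+by+cz+d = 0,
    with unit normal (a^2 + b^2 + c^2 = 1)."""
    p = np.array([a, b, c, d])
    return np.outer(p, p)

def quadric_error(Q, v):
    """Error v^T Q v for a vertex v = (x, y, z) in homogeneous form.
    For a sum of plane quadrics, this is the sum of squared
    distances from v to those planes."""
    h = np.append(v, 1.0)
    return h @ Q @ h

# Sum the quadrics of the planes z = 0 and x = 0; the error at
# (1, 2, 3) is then 3^2 + 1^2.
Q = plane_quadric(0, 0, 1, 0) + plane_quadric(1, 0, 0, 0)
err = quadric_error(Q, np.array([1.0, 2.0, 3.0]))
```

Because each quadric is just a symmetric 4x4 matrix, merging the error information of two vertices during an edge contraction reduces to adding their matrices, which is what makes the algorithm so efficient.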

All of my papers describing the quadric-based algorithm are available online. I have also made my experimental implementation — QSlim — freely available. The most authoritative account of my surface simplification work can be found in my Ph.D. dissertation.

Processing Massive Meshes

Over time, the size of typical polygonal meshes has expanded enormously. At this point, meshes containing hundreds of millions or even several billion polygons are not uncommon. For instance, the Digital Michelangelo Project has generated surface scans of large-scale statues at 1/4 mm sample spacing, producing multi-gigabyte mesh models. Some extremely complex mechanical systems, such as the Boeing 777, have been designed digitally and the full system models contain hundreds of millions of polygons. Processing meshes of this magnitude in any kind of reasonable way requires fundamentally new mesh processing algorithms.

We have developed two different schemes for out-of-core simplification of such massive mesh data. In both cases, the amount of RAM required is independent of the input surface size. Consequently, even multi-gigabyte meshes can be simplified on a simple laptop.
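The flavor of out-of-core processing can be conveyed by a generic streaming sketch; this is not either of our specific algorithms. Vertices are read once from a stream and accumulated into a uniform grid, so memory scales with the number of occupied cells in the output rather than with the input size.

```python
from collections import defaultdict

def cluster_vertices(vertex_stream, cell_size):
    """Streaming vertex clustering: read each vertex once and keep a
    running sum per occupied grid cell. Memory is proportional to the
    number of occupied cells (the output resolution), not to the
    number of input vertices. A generic sketch, not the specific
    out-of-core algorithms from our papers."""
    sums = defaultdict(lambda: [0.0, 0.0, 0.0, 0])
    for x, y, z in vertex_stream:
        key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        s = sums[key]
        s[0] += x; s[1] += y; s[2] += z; s[3] += 1
    # Each cell is represented by the mean of the vertices it absorbed.
    return {k: (s[0] / s[3], s[1] / s[3], s[2] / s[3]) for k, s in sums.items()}

# Two nearby vertices fall into one cell; a distant one gets its own.
reps = cluster_vertices(iter([(0.1, 0.1, 0.1), (0.2, 0.2, 0.2),
                              (5.0, 5.0, 5.0)]), 1.0)
```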

More recently, we have been focused on developing an efficient multiresolution storage representation for massive meshes. High-precision scans, such as the Digital Michelangelo data, are very desirable as archival-quality datasets. But for many applications, only a small portion of the full data is really required.

Surface Clustering

Surface simplification methods are intimately related to hierarchical partitioning of surfaces. One instance of this is our work on hierarchical face clustering, which provides a method for hierarchically partitioning a surface into a set of face clusters. These clusters, which are connected sets of faces, represent the aggregate properties of the original surface at different scales rather than providing geometric approximations of varying complexity. The clustering algorithm itself is in many ways the dual form of the quadric-based simplification algorithm. It uses a dual form of the quadric error metric to guide a process of iterative contraction on the dual graph of the original mesh. Just as normal edge contraction induces a binary tree of vertex neighborhoods on the surface, dual edge contraction induces a binary tree of face clusters on the surface.
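A greedy sketch of the dual contraction process is shown below. It repeatedly merges the pair of adjacent face clusters whose combined normals are most coherent, recording each merge as a node of a binary tree. The normal-spread cost used here is an illustrative stand-in for the dual quadric error metric of the actual algorithm.

```python
import numpy as np

def cluster_faces(normals, dual_edges, target):
    """Hierarchical face clustering by greedy dual-edge contraction.
    Each cluster stores (normal sum, face count, merge tree); the
    cost of contracting a dual edge is the spread of the merged
    normals (0 when all normals agree), a stand-in for the dual
    quadric error metric."""
    normals = [np.asarray(n, float) for n in normals]
    cluster = {i: (normals[i], 1, i) for i in range(len(normals))}
    edges = {frozenset(e) for e in dual_edges}
    next_id = len(normals)
    while len(cluster) > target and edges:
        def cost(e):
            a, b = tuple(e)
            s = cluster[a][0] + cluster[b][0]
            n = cluster[a][1] + cluster[b][1]
            return n - np.linalg.norm(s)
        e = min(edges, key=cost)
        a, b = tuple(e)
        # Record the merge as an internal node of the binary tree.
        cluster[next_id] = (cluster[a][0] + cluster[b][0],
                            cluster[a][1] + cluster[b][1],
                            (cluster[a][2], cluster[b][2]))
        # Rewire dual edges from a and b to the new cluster.
        new_edges = set()
        for f in edges:
            if f == e:
                continue
            g = {next_id if x in (a, b) else x for x in f}
            if len(g) == 2:
                new_edges.add(frozenset(g))
        edges = new_edges
        del cluster[a], cluster[b]
        next_id += 1
    return cluster

# Four faces, two coherent pairs of normals, a chain of dual edges.
normals = [(0, 0, 1), (0, 0, 1), (1, 0, 0), (1, 0, 0)]
result = cluster_faces(normals, [(0, 1), (1, 2), (2, 3)], 2)
```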

Graph Clustering

More recently, we have been exploring the use of clustering in much more general settings. Rather than focusing on triangulated meshes embedded in a 3-D Euclidean space, we consider the much broader class of weighted undirected graphs.

We have been considering the case of scale-free networks: graphs with vertex degrees exhibiting a power-law distribution. We have shown that we can successfully reduce these graphs to much coarser approximations, while still being able to answer distance queries with relatively little distortion. Consequently, this provides a powerful data mining technique; the coarsened graph still encodes much of the important structural information of the original.
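As an illustration of the general idea, and not our specific technique for scale-free graphs, the sketch below collapses a weighted graph according to a given partition, keeps the minimum weight among parallel edges, and answers shortest-path queries on the coarse graph.

```python
import heapq
from collections import defaultdict

def dijkstra(adj, src):
    """Standard Dijkstra shortest paths on an adjacency map
    node -> [(neighbor, weight), ...]."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def coarsen(edges, partition):
    """Collapse each group of vertices (partition: vertex -> cluster
    id) into a single supernode, keeping the minimum weight among
    parallel edges. Distance queries on the coarse graph then
    approximate distances in the original. A generic sketch of
    coarsening, not the scale-free-specific method."""
    coarse = {}
    for u, v, w in edges:
        cu, cv = partition[u], partition[v]
        if cu == cv:
            continue
        key = (min(cu, cv), max(cu, cv))
        coarse[key] = min(w, coarse.get(key, float("inf")))
    adj = defaultdict(list)
    for (cu, cv), w in coarse.items():
        adj[cu].append((cv, w))
        adj[cv].append((cu, w))
    return adj

# Two clusters {0, 1} and {2, 3}; the cheaper of the two crossing
# edges survives the coarsening.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 2, 3.0)]
adj = coarsen(edges, {0: 0, 1: 0, 2: 1, 3: 1})
d = dijkstra(adj, 0)
```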

Spacetime Meshing and Visualization

We are currently involved in a project developing a new class of spacetime discontinuous Galerkin finite element methods. One of the very interesting problems that arises here is how to efficiently tessellate spacetime while respecting causality. We have been focused on the problem of visualizing spacetime finite element solutions at interactive rates. Recently, we developed a system that uses modern programmable GPU hardware to provide pixel-exact renderings of the solution data. By evaluating the finite element solution on a per-pixel basis, we can generate visualizations of much greater fidelity than typical piecewise linear interpolation over elements.