Building Reconstruction using Manhattan-World Grammars
Abstract
We present a passive computer vision method that
exploits existing mapping and navigation databases in
order to automatically create 3D building models. Our
method defines a grammar for representing changes in
building geometry that approximately follow the
Manhattan-world assumption, which states that there is a
predominance of three mutually orthogonal directions in
the scene. By using multiple calibrated aerial images, we
extend previous Manhattan-world methods to robustly
produce a single, coherent, complete geometric model of a
building with partial textures. Our method uses an
optimization to discover a 3D building geometry that
produces the same set of façade orientation changes
observed in the captured images. We have applied our
method to several real-world buildings and have analyzed
our approach using synthetic buildings.
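
The sketch below is a minimal, hypothetical illustration (not the authors' implementation) of the idea behind a Manhattan-world representation of façade orientation changes: a building footprint can be encoded purely as edge lengths plus a sequence of ±90° turns, and a valid derivation must close back on itself. All names and structure here are illustrative assumptions.

```python
from typing import List, Tuple


def footprint_from_grammar(edge_lengths: List[float],
                           turns: List[int]) -> List[Tuple[float, float]]:
    """Trace a Manhattan-world footprint from façade orientation changes.

    edge_lengths[i] is the length of edge i; turns[i] is +1 (left) or -1
    (right), the 90-degree orientation change after edge i. Returns the
    corner vertices, starting at the origin and heading along +x.
    """
    directions = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # three mutually orthogonal axes (2D footprint)
    heading = 0                                       # index into directions
    x, y = 0.0, 0.0
    corners = [(x, y)]
    for length, turn in zip(edge_lengths, turns):
        dx, dy = directions[heading]
        x, y = x + dx * length, y + dy * length
        corners.append((x, y))
        heading = (heading + turn) % 4               # +-90 degree façade orientation change
    return corners


if __name__ == "__main__":
    # An L-shaped footprint: the orientation changes sum to four net left
    # turns (360 degrees), the closure condition a derivation must satisfy.
    lengths = [4, 2, 2, 2, 2, 4]
    turns = [1, 1, -1, 1, 1, 1]
    print(footprint_from_grammar(lengths, turns))
```

In this toy setting, an optimization like the one described above could be imagined as searching over such length/turn sequences until the traced footprint reproduces the façade orientation changes observed in the images; the actual formulation in the paper is more general and handles full 3D geometry.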