Virtual cities are in demand for computer games, movies, and urban planning, but creating the numerous 3D building models they require takes a lot of time. Procedural modeling is a popular approach to synthesizing urban environments, but it requires writing suitably parameterized grammars. In this tool, we automate the generation of procedural buildings by taking a photograph as example input. Our system does not aim at an exact reproduction of a building, but rather at capturing its overall shape, the layout of its façade, and the style of its windows. To do so, we decompose the problem into logical stages (mass, façade, windows) and treat each stage with a common methodology: simplify the input to make it amenable to analysis by deep networks trained on synthetic data, then refine the output with custom optimizations. The resulting pipeline can generate a diversity of procedural buildings with no user intervention.
This tool automatically generates a procedural model from a single image of a building. The user selects a photograph and highlights the silhouette of the target building; the method then produces an OBJ file and its textures.
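The staged workflow described above can be sketched as a small pipeline: each stage simplifies its input, runs an inference step, and refines the result. This is an illustrative sketch only; every function name and data structure below is hypothetical and does not reflect the tool's actual API, and the inference stubs stand in for the deep networks trained on synthetic data.

```python
def simplify(image, stage):
    """Reduce the raw input to a form a stage-specific analysis could use.
    Hypothetical stand-in: the real system simplifies the imagery itself."""
    return {"input": image, "stage": stage}

def infer(simplified):
    """Stand-in for a deep network trained on synthetic data, which would
    predict procedural-grammar parameters for this stage."""
    return {"stage": simplified["stage"], "params": {"style_id": 0}}

def refine(prediction):
    """Stand-in for the custom per-stage optimization that adjusts the
    raw prediction into a consistent procedural grammar."""
    prediction["params"]["refined"] = True
    return prediction

def build_procedural_model(photo, silhouette):
    """Run the three stages named in the text (mass, facade, windows)
    and collect their grammar parameters."""
    image = {"photo": photo, "silhouette": silhouette}
    stages = ["mass", "facade", "windows"]
    return {s: refine(infer(simplify(image, s))) for s in stages}

model = build_procedural_model("building.jpg", "silhouette_mask.png")
print(sorted(model))  # the three stages of the assembled model
```

The sketch only shows the control flow (simplify, infer, refine per stage); exporting the final OBJ file and textures would be a further step outside this outline.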
Manush Bhatt, Rajesh Kalyanam, Gen Nishida, Liu He, Chris May, Dev Niyogi, Daniel Aliaga
NSF CSSI 1835739, NSF CBET 1250232, NSF IIS 1302172
The graphical user interface was created by:

Cite this work

Researchers should cite this work as follows:

Manush Bhatt, et al. [paper-under-preparation]