Constructing a Realistic 3D Model from Stereoscopic Image Pairs
Alora Killian
Abstract
How can we create a realistic model using data from stereoscopic image pairs? This paper
explores this question by using generated 3D data from image pairs taken of the St. Olaf
College Regents building and comparing it to a model hand sculpted from Regents'
architectural plans. We will investigate not only the process behind creating usable data from
image pairs, but also how to build an accurate 3D model in the open source modeling
software Blender. We will also factor in the realism gained from exporting it to the 3D
application engines three.js and Copperlicht. We will discuss the applications of such work,
with the overall goal of accurately displaying a model of the Regents building at St. Olaf
College.
Introduction
Due to great leaps in computer hardware and displays, creating and viewing a 3D model
has become accessible to everyone. Models are becoming a ubiquitous part of everyday life.
A 3D model can help a doctor understand the inner workings of an ear, or describe the
topography of the ocean. Such models, however, must possess a certain level of realism.
Because sculpting models by hand is inherently slow, constructing a hyper-realistic model
from scratch takes far too long to be efficient. A possible solution to this problem is to
construct a model directly from photographs.
Stereoscopic image pairs can be analyzed for depth information for use in modeling.
There are many different methods for gathering 3D data from static images [1]. For this
paper, we will use data gathered with software developed specifically for our project. While an
explanation of these methods is outside the scope of this paper, we will rely on the data they
produce to construct our realistic models.
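As a rough illustration of the principle behind such methods (not the actual software used in our pipeline), depth in a rectified stereo pair can be recovered from disparity as Z = f·B/d, where f is the focal length in pixels, B is the camera baseline, and d is the disparity. The focal length, baseline, and disparity below are hypothetical values, not measurements from our Regents data:

```python
# Hypothetical sketch of depth-from-disparity for a rectified stereo pair.
# The focal length, baseline, and disparity are illustrative values only.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Return depth in meters via Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A feature shifted 40 px between views, with f = 800 px and B = 0.2 m:
z = depth_from_disparity(40, 800, 0.2)  # 4.0 m
```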
Overview
In this paper, we will focus on the goal of creating a realistic model of St. Olaf College's
Regents building. We will first outline the previous work and necessary background of the
project. We will then discuss the logistics of sculpting a model in order to check it against our
data gathered from image pairs. We will also go over the methods required to correct
polygonal data received and the file export types necessary for appropriate display. We will
conclude with the future direction of the project.
Background
This project is part of a larger program at St. Olaf College called the “Palantir project,” which
focuses on research in computer vision. The process of modeling Regents has been ongoing
for a year in three different courses, with new students rotating in on various research
positions. Each course involves a “pipeline” structure, where one team of students passes data
onto the next. It is in this way that we receive source images, vertex lists in the image
coordinate space, and vertex lists in 3D coordinate space to use in our model. The original
team in our stage of the pipeline previously created tools to convert this data into .obj files for
importing into Blender [2].
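The vertex-list-to-.obj conversion described above can be sketched as follows. This is a minimal illustration of the Wavefront .obj format, not the pipeline's actual tool [2], and the quad below is hypothetical data rather than Regents vertices:

```python
# Minimal sketch of writing a vertex/face list to a Wavefront .obj file.
# The quad below is hypothetical, not actual Regents vertex data.

def write_obj(path, vertices, faces):
    """vertices: list of (x, y, z); faces: tuples of 1-based vertex indices."""
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for face in faces:
            f.write("f " + " ".join(str(i) for i in face) + "\n")

vertices = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
faces = [(1, 2, 3, 4)]
write_obj("quad.obj", vertices, faces)
```

The resulting file imports directly into Blender via File → Import → Wavefront (.obj).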
In order to compile our stereoscopic image data, as well as construct a model by hand,
we will use the 3D modeling software Blender. Blender is an open source tool used by
animators, illustrators, and architects [3]. We chose this software due to the strong developer
community, its flexibility, and the many plugins available for it. Aside from the Palantir
project's previous work, little to no research has been done on realistically modeling
objects from stereoscopic image pairs in Blender. We hope this paper will provide some
insight into how to correctly produce and check an accurate model.
Modeling from Scratch
For this project, we were able to request access to the Regents blueprints in a vector .pdf
format. We used these to-scale blueprints as a starting point from which we created a model.
Thanks to the Blender community, a detailed tutorial exists on building a model from
blueprints, which is a helpful starting point for anyone new to 3D modeling [4].
In order to correctly size our model, we cleaned up the vector .pdf files in the open
source vector graphics editor Inkscape [5]. After importing the files and following the
tutorial mentioned, we scaled our model and used simple Blender processes to set the global
origin to the same origin as our image data coordinates. By creating a model in this way, one
is able to accurately measure the disparity between 3D models generated from stereoscopic-
based data and the actual scale of the building.
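Once both models share a global origin, this disparity can be expressed as a per-vertex distance. The following is a minimal sketch under the assumption that corresponding vertices in the two models are already matched; the two short vertex lists are toy data, not measurements from Regents:

```python
# Sketch of measuring the disparity between the stereoscopic-data model
# and the hand-built model, assuming both share the same global origin
# and corresponding vertices are matched. Toy data, not Regents values.
import numpy as np

def mean_vertex_error(model_a, model_b):
    """Mean Euclidean distance between corresponding vertices."""
    a = np.asarray(model_a, dtype=float)
    b = np.asarray(model_b, dtype=float)
    return float(np.linalg.norm(a - b, axis=1).mean())

stereo = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
hand   = [(0.0, 0.1, 0.0), (2.0, 0.0, 0.1)]
err = mean_vertex_error(stereo, hand)  # 0.1
```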
It is worth mentioning that, due to time constraints, we were not able to complete the
full six-story model from scratch. We only managed to complete the first two floors, and
thus, will only compare stereoscopic data from the first two floors with our model.
Applying Texture and Color
In order to test accurate texture and color coordinate mapping and display between models,
we procured the design information from the blueprints. The design document consisted of a
list of all materials used in the construction of the building, such as paints, linoleum, and
concrete types, as well as the companies these materials were purchased from. As most
companies offer sample images for their wares, we found all swatches available and collected
them for use with our model. Some companies also offer reflectance values and finish
information, which is useful for accurate modeling in Blender.
Due to time constraints, we were not able to apply these textures, colors, and other
maps to our model. If we were not limited by time, we would have used a Windows program
called xNormal to create normal and specularity maps for textured surfaces [6]. By taking
four photographs of a surface, with a different light source in each image, a tangent-based
normal map is generated. The specularity map would be generated from the normal map with
slight modification and consideration to reflectance specifications on various materials. We
would apply all colors, textures, and maps by hand to our model in Blender. Maps would also
be applied to our stereoscopic image based model, in order to promote the most realistic
model possible.
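The four-photograph technique described above is a form of photometric stereo. The following is a hedged sketch of the underlying idea, not xNormal's actual implementation [6]: with known light directions L and observed intensities I, Lambert's law I = L·n gives a least-squares estimate of the surface normal at each pixel. The light directions and the synthetic pixel below are illustrative assumptions:

```python
# Sketch of per-pixel normal estimation from four images lit from known
# directions (photometric stereo), the idea behind tools like xNormal.
# The light directions and synthetic pixel are illustrative assumptions.
import numpy as np

def estimate_normal(light_dirs, intensities):
    """Solve I = L @ n for a unit surface normal n (least squares)."""
    L = np.asarray(light_dirs, dtype=float)   # shape (4, 3)
    I = np.asarray(intensities, dtype=float)  # shape (4,)
    n, *_ = np.linalg.lstsq(L, I, rcond=None)
    return n / np.linalg.norm(n)

# Synthetic pixel: a surface facing straight up (+z), lit from four sides.
lights = [(1, 0, 1), (-1, 0, 1), (0, 1, 1), (0, -1, 1)]
lights = [np.array(l, dtype=float) / np.linalg.norm(l) for l in lights]
obs = [float(np.dot(l, (0.0, 0.0, 1.0))) for l in lights]
n = estimate_normal(lights, obs)  # recovers approximately (0, 0, 1)
```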
Exporting Data
The next, and final, stage in the Palantir project's pipeline deals with accurately displaying a
model on various electronic media. This stage color corrects the model for different screen
displays, preserving realism across devices. In order to display the
model, we need to export it into both three.js, an open source 3D engine for the web, and
Copperlicht, a licensed 3D display engine [7] [8].
On the surface, the export process is extremely straightforward. Blender is a versatile
exporter with many different available formats, and three.js and Copperlicht have just as
many ways to import and display models. However, due to discrepancies in 3D export
standards in file formats such as COLLADA and .obj, as well as differences between Blender
versions and their export process, finding an appropriate format to export a model in became
a tedious process. One important detail discovered along the way is that Copperlicht
requires additional code for colors, materials, and textures to display correctly when
importing COLLADA files. However, the process of applying this code is
outside the scope of this paper, and if one is curious about the process, there is plenty of
documentation available on the Copperlicht website.
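One simple sanity check during export (a sketch of our own devising, not part of Copperlicht's documented workflow) is to inspect the COLLADA file directly and confirm that every Blender material survived the export. The .dae fragment below is a minimal hypothetical example:

```python
# Sketch: list the material ids in a COLLADA (.dae) export to verify that
# every Blender material survived. The XML fragment is hypothetical.
import xml.etree.ElementTree as ET

DAE = """<?xml version="1.0"?>
<COLLADA xmlns="http://www.collada.org/2005/11/COLLADASchema">
  <library_materials>
    <material id="Concrete-material" name="Concrete"/>
    <material id="Linoleum-material" name="Linoleum"/>
  </library_materials>
</COLLADA>"""

NS = {"c": "http://www.collada.org/2005/11/COLLADASchema"}
root = ET.fromstring(DAE)
ids = [m.get("id") for m in root.findall(".//c:material", NS)]
# ids == ["Concrete-material", "Linoleum-material"]
```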
Results
Comparing the exported models yielded a reasonably accurate analysis. Due to time
constraints and the limits placed on teams earlier in the pipeline, we could only compare one
hallway from the stereoscopic image-based model. From initial observations, our strategy of
comparing the image-based model with the handmade model works extremely well for
promoting realistic 3D model creation. However, there is more work to be done in order to
determine if this comparison method can work on a larger scale.
Image Appendix
Figure 1: Full view of completed handmade model.
Figure 2: Inside view of hallway in handmade model.
Figure 3: Stereoscopic image data aligned with handmade model from an aerial view.
Figure 4: Stereoscopic image data aligned with hallway in handmade model.
Resources
[1] "Photogrammetry: Current Suite of Software." Wikipedia. Wikipedia, 6 Dec 2012. Web.
16 Dec 2012. <http://en.wikipedia.org/wiki/Photogrammetry>.
[2] Ribe, James, Alora Killian, and Dan Anderson. "3D Modelling in Blender Based on
Polygonal Data." St. Olaf College, 2012. Print.
[3] Blender. Blender Foundation, 14 Dec 2011. Web. 16 Dec 2012.
<http://www.blender.org/>.
[4] Chaloupka, Vaclav, prod. Tutorial: building a house in Blender 3d software - part 1.
2008. Web. 16 Dec 2012. <http://vimeo.com/785249>.
[5] Inkscape. Inkscape Administrators, 19 Nov 2012. Web. 16 Dec 2012.
<http://inkscape.org/>.
[6] Orgaz, Santiago. xNormal. Santiago Orgaz & co., 29 Dec 2011. Web. 16 Dec 2012.
<http://www.xnormal.net/>.
[7] three.js. mrdoob, 10 Dec 2012. Web. 16 Dec 2012. <http://mrdoob.github.com/three.js/>.
[8] Gebhardt, Nikolaus. Copperlicht. Ambiera, n.d. Web. 16 Dec 2012.
<http://www.ambiera.com/copperlicht/>.