How to use photographs and Photosynth to create 3-D mesh models
Ever find an interesting object and want to show it to your friends and colleagues? Pictures might capture some aspects of it but leave certain details out. It’s especially hard to capture the depth and dimensionality of an object in a single photo. You could take dozens of photos, but something about looking at a slideshow of one object isn’t appealing. Photosynth, ARC3D, and PhotoSynthToolkit allow you to display these photographs in an intuitive, 3-dimensional space.
This document will show you how to take a collection of photographs and transform them into a 3-D model. This model can be viewed on a standard computer with open-source software or be displayed in 3-D virtual environments such as the Duke immersive Virtual Environment (DiVE). You can also recreate a replica of the object with a 3D printer.
The general computing requirement is a Windows-compatible device; a Windows virtual machine running on a Mac is acceptable. The programs are all free and available online.
Photos-->Pointcloud-->Mesh-->Texture-->3D Print/Virtual Reality
For all questions, and for a lot more information, see the pgrammetry forum:
The directions below show, step by step, how to reconstruct an object in three dimensions from a set of photographs. Steps 1-8 detail the simplest process available, which uses Photosynth to create the point cloud and Meshlab to import it and create the mesh. There are several more advanced programs available that you can learn more about on the pgrammetry forum site linked above. I recommend PhotoSynthToolkit by Henri Astre and have included basic instructions for using it. Once you get a handle on steps 1-8 below, start exploring other options in order to learn how to create a highly detailed object.
1. Find an object
2. Photograph it!
a. Take 1-300 photographs of the object for Photosynth or 5-10 for ARC3D
b. Take the photographs along the perimeter of the object with the object as the center
c. Zoom in and focus on areas of interest
d. Overlap (and parallax) is key!
e. Don’t turn the camera sideways; some programs can’t figure this out even if you try to manually rotate the image back
Alternatively, Video it!
a. Take an HD video of the object
b. Convert the movie to images
i. QuickTime Pro 7: Export-->Image Sequence-->Choose # of frames
ii. VirtualDub? Have not attempted yet
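If you take the video route, a command-line tool such as ffmpeg can also extract the stills. Below is a minimal sketch that builds the ffmpeg command from Python so the frame rate is easy to tweak; the file name `object_walkaround.mp4` and the helper name are my own examples, and actually running the command assumes ffmpeg is installed and on your PATH:

```python
import subprocess

def ffmpeg_frame_cmd(video_path, out_pattern="frame_%04d.png", fps=2):
    """Build an ffmpeg command that extracts `fps` frames per second.

    A low rate (1-3 fps) usually gives plenty of overlap between
    consecutive frames without flooding Photosynth with near-duplicates.
    """
    return ["ffmpeg", "-i", video_path, "-vf", f"fps={fps}", out_pattern]

cmd = ffmpeg_frame_cmd("object_walkaround.mp4", fps=2)
print(" ".join(cmd))
# To actually run it (requires ffmpeg installed):
# subprocess.run(cmd, check=True)
```

Extracting two frames per second from a slow walk around the object approximates the "photos along the perimeter" advice above.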
3. Photosynth
a. Create a Microsoft Live account (free)
b. Load photos onto Photosynth Server
c. Explore point cloud
i. The better the point cloud, the better the mesh
ii. Experiment to figure out what makes better point clouds
4. Meshlab (or viewing software of choice)
1. Import pointcloud from photosynth
Filters-->Create New Mesh Layer-->Import Photosynth data
Input URL of your photosynth
Deselect download images (unless you want them, then specify image directory)
Deselect show cameras
Click Points Icon
2. Remove unwanted points
Point select>Delete Point
Hold Control or Command to select multiple groups
Filters--->Selection--->Select Faces by Color. You can select using RGB and HSV, and tweak the range of values.
3. Save/export cleaned up image as .ply
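For reference, the ASCII .ply files Meshlab exports and imports are simple text. The sketch below writes one from plain Python, assuming just xyz points with optional per-vertex color; it is a minimal illustration of the format, not a full PLY implementation:

```python
def write_ascii_ply(path, points, colors=None):
    """Write an ASCII .ply point cloud (the format Meshlab reads/writes).

    `points` is a list of (x, y, z) tuples; `colors` is an optional
    parallel list of (r, g, b) byte values.
    """
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        if colors is not None:
            f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("end_header\n")
        for i, (x, y, z) in enumerate(points):
            line = f"{x} {y} {z}"
            if colors is not None:
                r, g, b = colors[i]
                line += f" {r} {g} {b}"
            f.write(line + "\n")

# Three example points; a real cleaned-up cloud would have thousands.
write_ascii_ply("cleaned.ply", [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)])
```

Knowing the layout makes it easy to sanity-check an export in a text editor (the vertex count in the header should match the number of data lines).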
4. Compute normals [skip this if using PhotoSynthToolkit (4_C, below)]:
Filters>Normals, Curvatures, and Orientation>Compute Normals for Point sets
This filter has a single parameter for the number of neighbors. The default is 10 neighbors, but use 100.
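For intuition about what the normals filter is doing (and why more neighbors gives a smoother result), here is a toy sketch of the same idea in Python with NumPy: fit a plane through each point's k nearest neighbors and take the direction of least variance as the normal. The function name and the brute-force O(n²) neighbor search are my own, not Meshlab's:

```python
import numpy as np

def estimate_normals(points, k=10):
    """Per-point normal estimation: the normal is the eigenvector of the
    neighborhood covariance matrix with the smallest eigenvalue."""
    points = np.asarray(points, dtype=float)
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]      # k nearest points (incl. p itself)
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        w, v = np.linalg.eigh(cov)            # eigenvalues in ascending order
        normals[i] = v[:, 0]                  # direction of least variance
    return normals

# Sanity check: points scattered on the z = 0 plane should all get
# normals of (0, 0, +/-1).
rng = np.random.default_rng(0)
pts = np.column_stack([rng.random(50), rng.random(50), np.zeros(50)])
n = estimate_normals(pts, k=10)
```

Note the sign ambiguity: PCA gives a line, not a direction, which is why Meshlab's filter also has to orient the normals consistently.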
5. Generate a mesh using the Poisson reconstruction. The function can be found under: Filters>Point Set>Surface Reconstruction: Poisson Reconstruction.
This filter has four parameters. Click the help button for more information on them. It is important to read about their function, as altering the parameters profoundly changes the results. I typically use the following values: Octree Depth = somewhere between 10-12, Solver Divide = either 7 or 8, Samples per Node = 1 (default), Surface offsetting = 1 (default). This can be quite glitchy; I’ve found that Octree Depth = 11 and Solver Divide = 7 tend to work, so I’ve stuck with those.
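Once you settle on parameters, Meshlab can save them as a filter script (.mlx) that you can re-run in batch (historically via the meshlabserver command-line tool). The fragment below is a sketch encoding the values above; the exact `Param` names vary between Meshlab versions, so treat them as an assumption and instead export a script from your own install (Filters > Show current filter script > Save) to get the correct ones:

```
<!DOCTYPE FilterScript>
<FilterScript>
 <filter name="Surface Reconstruction: Poisson">
  <Param type="RichInt"   name="OctDepth"       value="11"/>
  <Param type="RichInt"   name="SolverDivide"   value="7"/>
  <Param type="RichFloat" name="SamplesPerNode" value="1"/>
  <Param type="RichFloat" name="Offset"         value="1"/>
 </filter>
</FilterScript>
```

This makes it easy to try Octree Depth 10, 11, and 12 on the same point cloud without re-clicking through the dialog each time.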
6. If the resulting mesh is a bubble (which it very often is), remove the unwanted parts of the bubble. This task is performed using the triangle selection tool and the triangle deletion tool. Both of these are on the tool bar. They can be found near the point selection and deletion tools.
7. Remove extra edges with faces. This function can be found under Filters>Cleaning and Repairing>Remove edges with faces longer than...
8. Transfer color from the point cloud to the mesh:
Filters>Sampling>Vertex Attribute Transfer
Make sure the original point cloud is selected as the source (top) and the Poisson mesh as the target (bottom)
Click the light bulb to apply
Other programs: (notes constantly being updated)
You can simply transfer the color attribute from the point cloud to the mesh, or you can ‘paint’ a texture on from the original photos. Using the original photos lets you paint the tiny details onto the 3-D mesh you just generated, but it also takes more time and is not “true” 3-D reconstruction.
Vertex attribute transfer:
1. Meshlab (pointcloud to mesh)
1. Blender (free)
2. Maya (most familiar)
3. Autodesk 3ds Max
PhotoSynthToolkit4 is geared for this
You can get Maya and Autodesk products free as a student
A how-to video for the toolkit: http://blog.neonascent.net/archives/photosynth-toolkit/
Does it have to be 100% synthy?
How to run PhotoSynthToolkit4 (Windows OS 64 bit version)
1) Create a PhotoSynth synth from a set of photos.
2) Download PhotoSynthToolkit4:
3) Decompress PhotoSynthToolkit4.zip to C:\
4) Run script: 1 - Download Synth.wsf
4.1) Enter your Photosynth URL
4.2) Choose output path:
5) Copy all the original photos (NOT the downloaded thumbnails) used to create your synth to:
6) Run script: 3 - Prepare for PMVS2.wsf
6.1) Choose your input path:
6.2) The program will ask to choose some point cloud creation options*, and finally will create a file named: launch_pmvs.bat
7) Run launch_pmvs.bat
The program will take some time to run. When it ends you will find a PLY file in:
How to run PhotoSynthToolkit4 (Windows OS 32 bit version)
Go to: C:\PhotoSynthToolkit4\bin\PMVS2
delete the 64-bit files:
and replace them with their 32-bit versions:
You can find these two files inside this zip:
*Notes on pmvs_options.txt
Josh Harle’s Video showing results of different parameters: http://vimeo.com/15223228
Level: 0 will give you a denser point cloud but will be slower than using the default 1 option
wsize: 7 default, but go up to 9 if possible
Using masks allows you to inform the software which part of your image you find interesting and to limit the noise coming from irrelevant points. This not only produces a point cloud that is semi-cleaned up, it also can increase the density for the region of interest.
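For reference, pmvs_options.txt is a plain text file of `key value` lines. A typical file, with the values above and the remaining keys at their usual defaults, looks roughly like the sketch below; the image count in the `timages` line (24 here) is a made-up example, so double-check everything against the file the toolkit generates for you:

```
level 0
csize 2
threshold 0.7
wsize 9
minImageNum 3
CPU 4
useVisData 1
sequence -1
timages -1 0 24
oimages 0
```

Dropping `level` from 1 to 0 uses the full-resolution images, which is why it is denser but slower.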
Another good, quick tutorial from Josh Harle: http://vimeo.com/18517975
There are several websites that allow you to simply upload your photos and they will send you a point cloud. The quality is again dependent upon your photographs as well as how each site will compute them. There are both free and pay alternatives out there and I’ve included a brief description of some of the free ones below.
Positives: Free, makes the point cloud for you (one less step), can do on Mac OS?
My 3D Scanner
Can take videos! (not that complicated since they just framegrab but still, makes it quick)
Zip and upload
Wow, worked pretty well
+ Very dense point cloud, probably ½ of PhotoSynthToolkit at level 0 density
+ They auto-clean it up very, very well
- Do they own my photos? I don’t understand the TOS
- Only small photo sets?
Post on pgrammetry forums:
300mb .zip file limit
10 mb per photo
+they give feedback
some of my results in their tutorials?
You get at most a 180-degree FOV, so you need to break a 360° object into multiple faces and then combine them:
CMP SfM Web Service
Need to email to ask for username
Zip and upload
Less documentation/tutorials, but they say they’re working on it.
I am attempting a few trials and will report back.
5-10 photographs taken in sequence around a circumference
create account (free)
Download .zip, unzip
Click on .v3D
Select photographs on right side that have good depth maps (red)
Put in high subsample (11?)
Click Fast merge?
Select all images on the right (or those that are good according to heat map?)
Click export to .ply
Then click ok
Very glitchy; select a high subsample, low resolution, and fast-merge
3_D. Bundler (Noah Snavely) options?
Worked pretty well...
needs small amount of photos
Is there a way to get output into meshlab?
3_F. Insight 3d
Haven’t tried yet. Open-source but looks like it crashes often.
The demo only allows you to use photos from the company
Supposedly you can contact a local reseller who will give you a code that enables you to use your own photos with the demo.
Goldingart has a pretty amazing reconstruction from it (in example gallery)
Free to upload and reconstruct but costs money to export into formats you want
~$100 per download
No portrait photos, only landscape. Keep the photo size and zoom level constant.
6_A. Viewing in the DiVE
a. Convert the .ply to a Maya-compatible extension
b. Import into Virtools
c. Provide figure dimensions
d. View in DiVE
6_B. 3-D printing
Great link with video tutorials
A Meshlab team blog post describing their attempts at the Photosynth pointcloud-->mesh process.
A good general meshlab tutorial?
Quick overview of process
More detailed look at importer
A look at some of the parameters
Noah Snavely’s Bundler version of photosynth
ESPN’s 3-D scans of student athletes’ bodies:
Statue Photosynth: http://photosynth.net/view.aspx?cid=de9d2943-5628-4fd5-aba3-c0ce3b6eaf4d
ARC3D (only 8 photos!)
ARC3D, Duke Chapel (20 photos)
Toolkit4, Duke Chapel 20 photos, 3.5MB High-Resolution
(captured much more detail of the front face vs. the above ARC3D reconstruction but did not capture the entire structure as well)
Toolkit2 (Thumbnail, low resolution version so less details but very photorealistic color-wise)
Another Toolkit rendering, look familiar? (low resolution thumbnails)
Photofly. Photo Editor. Statue with 8? photos (fewer than ARC3D). Good sides...best art in back
Video side by side: a video walk-around on the left and a screen capture of a similar path through the model on the right. Screenshot them side by side.
UMDNJ Robert Wood Johnson Medical School
Regular digital photographs can now be used to create models for use in virtual environments (such as the DiVE) and with 3-D printers. This talk will demonstrate a simplified process for transforming a series of object-oriented photographs into 3-D models. Applications range widely, from art and archaeology to businesses interested in a quick way to share a prototype. A tutorial will follow the talk for those interested in hands-on help.
Email the author and let him know about photogrammetry (http://topics.nytimes.com/top/reference/timestopics/people/v/ashlee_vance/index.html?inline=nyt-per)
Work on resizing in blender, then see what shapeways says
can get up to 64gb RAM but need to try it out first
can use 2 kinects to get a ~180 degree 3d video model
Video Game Animation
Pretty cool use of 360 degrees of video cameras to recreate facial expressions for a video game
3D Photo Notes:
Get plenty of background in the photos because it helps provide context for the images to find relationships. Be sure to delete these background points along the way when doing the reconstruction, though, since they’re not that useful and cause huge bubbles. Make sure the background isn’t moving or just huge (e.g., out a window), since that isn’t useful either.
Take an HD video and walk around the object. Extract stills? Would that work well enough?
Links to try
by bhowiebkr under Creative Commons
Good because you will see whether the gaps are filled in or not
Statue of Liberty:
Jason’s mesh one:
good point cloud, a statue
alright point cloud, a good outdoor structure
Any of M4’s synths