Georgia Tech’s Computational Photography
Portfolio
Project #1: A Photograph is a Photograph
Link to extended presentation
https://docs.google.com/presentation/d/1kpkJz-qgV6iUSgFPTy3yv9nmD4vgj--BgS_0PA96-2k/edit?usp=sharing
Arroz con Pollo, Cafe Cancun, GA
Taken with an iPhone 6.
Project #2: Image I/O
Link to extended presentation
https://docs.google.com/presentation/d/1oa_MJ0f8nwUZQ-4V71zhLE3pfuCd7VK5WUsdWaTVaHE/edit?usp=sharing
For this project, we wrote five functions. The first function calculates the number of pixels in the image. The second computes the average pixel value. The third converts the image to black and white. The fourth averages two images together. The fifth flips the image horizontally.
I’ve shown examples for functions 3-5.
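As a quick illustration, here is a minimal sketch of what these five functions might look like in Python with OpenCV and NumPy; the function names and signatures are my assumptions, not the graded code.

```python
import cv2
import numpy as np

def number_of_pixels(image):
    # Function 1: total pixel count (rows x columns).
    return image.shape[0] * image.shape[1]

def average_pixel(image):
    # Function 2: mean value over every pixel (and channel) in the image.
    return np.mean(image)

def black_and_white(image):
    # Function 3: convert a BGR image to single-channel grayscale.
    return cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

def average_two_images(img1, img2):
    # Function 4: pixel-wise average of two same-sized images.
    return ((img1.astype(np.float64) + img2.astype(np.float64)) / 2).astype(np.uint8)

def flip_horizontal(image):
    # Function 5: mirror the image by reversing the column order.
    return image[:, ::-1]
```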
Input / Output Images
Function 3: Black and White
Original / Black and White
Input / Output Images
Function 4: Average Two Images
Image 1 / Image 2 / Averaged
Input / Output Images
Function 5: Flip Horizontal
Original / Flipped
Project #3: Epsilon Photography
Link to extended presentation
https://docs.google.com/presentation/d/19wmd8kqWrYJUK5DR4wqOz4C2uUl8b2ghgXknED_9nUE/edit?usp=sharing
I took a series of images of my meal over the course of about an hour and turned them into a timelapse GIF.
I’ve provided a visual representation below with all of the images.
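As a rough illustration, the frames can be assembled into a GIF with a few lines of Python; the imageio library and file names here are assumptions, not necessarily how the original GIF was produced.

```python
import imageio.v2 as imageio

# Hypothetical file names for the five meal photos.
frames = [imageio.imread(f"meal_{i}.jpg") for i in range(1, 6)]

# Write the frames out as an endlessly looping timelapse GIF,
# roughly one second per frame.
imageio.mimsave("meal_timelapse.gif", frames, duration=1.0, loop=0)
```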
Image 1 / Image 2
Image 3 / Image 4
Image 5 - GIF
Project #4: Camera Obscura
Link to extended presentation
https://docs.google.com/presentation/d/1uT8HO2LWJ6BQPhVjFvJc3SBWpL9IhU-XlV1sQ-yLod0/edit?usp=sharing
In this project, I made two different “pinhole” cameras to capture images. The scene I was trying to capture was a lamp light.
I’ve displayed the setup and results on the next slide.
“The Setup”
“The Image”
“The Scene”
Project #5: Gradients and Edges
Link to extended presentation
https://docs.google.com/presentation/d/12m6TpCdT9Ul3xK8wp488BPHfOUZ89WVdqM6W83SW838/edit?usp=sharing
Part 1
In this part I computed the image gradients in both the X and Y directions. In my computeGradient function, I convolved a small image kernel over the larger image. The results on the next slides were produced with OpenCV and Python, as well as with two other platforms.
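Here is a minimal sketch of that gradient computation, assuming Sobel-style 3x3 kernels and OpenCV's filter2D for the filtering; my actual computeGradient implementation may differ in its kernels and border handling.

```python
import cv2
import numpy as np

def compute_gradient(image, kernel):
    # Slide the small kernel over the larger grayscale image
    # (filter2D computes correlation, which differs from convolution
    # only by a flip of the kernel).
    return cv2.filter2D(image.astype(np.float64), -1, kernel)

# Sobel kernels for the X and Y directions (assumed; the graded code may differ).
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)
sobel_y = sobel_x.T

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
grad_x = compute_gradient(gray, sobel_x)
grad_y = compute_gradient(gray, sobel_y)
```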
Part 2
For this part of the project, I found an image of pennies online with a noticeably white background. I used OpenCV's Canny edge detection inside my computeGradient function to produce the second picture, so I could compare it against the results I got from other tools.
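For reference, the OpenCV Canny call itself is only a few lines; the thresholds below are placeholder values, not the ones used for the pennies image.

```python
import cv2

# Load the pennies photo as grayscale (Canny expects an 8-bit image).
pennies = cv2.imread("pennies.jpg", cv2.IMREAD_GRAYSCALE)

# Canny edge detection; the low/high thresholds (100, 200) are assumed values.
edges = cv2.Canny(pennies, 100, 200)
cv2.imwrite("pennies_edges.png", edges)
```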
In the first image, I used a free online photo editor (http://www.photo-kako.com/en/edge.cgi) for edge extraction/edge detection. According to the editor, it applies a “Sobel, Laplacian (3x3)” filter; I then used its monochrome option to convert the result into a black and white image.
In the second image, I used the magic wand tool in Photoshop to select the edges based on the color tones, and then used another feature to convert the selection to black and white. This was done simply by selecting parts of the image and applying built-in Photoshop features.
Project #6: Blending
Link to extended presentation
https://docs.google.com/presentation/d/1sB6yyneZ-2OoDBhmoemDBpzj4-39fjISB5LTr5GgV0w/edit?usp=sharing
In this project, I took two images and a mask image to create a final image that blends the first two images together using the mask.
We coded functions for reducing an image, expanding an image, building Gaussian and Laplacian pyramids, blending the images, and collapsing the pyramids. I have shown my two sample images, the mask, and the output image.
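The sketch below shows the general shape of that pipeline, leaning on OpenCV's pyrDown/pyrUp rather than our hand-written reduce and expand; the pyramid depth and function names are my assumptions, and it assumes single-channel images with a mask pyramid scaled to [0, 1].

```python
import cv2
import numpy as np

def gaussian_pyramid(img, levels=4):
    # Repeatedly reduce (blur + downsample) the image.
    pyr = [img.astype(np.float64)]
    for _ in range(levels):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyramid(gauss_pyr):
    # Each level stores the detail lost between one Gaussian level and the next.
    lap = []
    for i in range(len(gauss_pyr) - 1):
        size = (gauss_pyr[i].shape[1], gauss_pyr[i].shape[0])
        lap.append(gauss_pyr[i] - cv2.pyrUp(gauss_pyr[i + 1], dstsize=size))
    lap.append(gauss_pyr[-1])
    return lap

def blend_and_collapse(lap_black, lap_white, gauss_mask):
    # Blend each pyramid level with the mask (white image where the mask is 1,
    # black image where it is 0), then collapse back to a single image.
    blended = [gm * lw + (1 - gm) * lb
               for lb, lw, gm in zip(lap_black, lap_white, gauss_mask)]
    out = blended[-1]
    for level in reversed(blended[:-1]):
        size = (level.shape[1], level.shape[0])
        out = cv2.pyrUp(out, dstsize=size) + level
    return np.clip(out, 0, 255).astype(np.uint8)
```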
Sample Images I Chose
“Black Image”
“White Image”
“Mask”
Output Image
Project #7: Feature Detection
Link to extended presentation
https://docs.google.com/presentation/d/1bav2OaV0R62_6RGDai-risbL5_Jk6n8ihHz3DmrjxEs/edit?usp=sharing
In this project, we use feature detection to match features between images. I used a stuffed animal for my images under different conditions (dark lighting, larger scale, and rotation). First we compute SIFT keypoints and descriptors for both images, then create a brute-force matcher using the Hamming distance. We then compute the matches between the two images and sort them by distance so that the best matches come first. We return the keypoints and the top matches for the images.
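As a rough sketch of that pipeline: Hamming distance normally pairs with binary descriptors such as ORB, while SIFT descriptors are usually matched with an L2 norm, so the example below uses ORB to keep the matcher consistent. The function and parameter choices are assumptions, not the graded code.

```python
import cv2

def match_features(img1, img2, num_matches=10):
    # Detect keypoints and compute binary descriptors for both grayscale images.
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force matcher with Hamming distance; sort so the best
    # (smallest-distance) matches come first, then keep the top few.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches[:num_matches]
```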
Sample
This was the other sample image that didn’t match up correctly after running the tests.
Lighting
This was the other image that didn’t give me good results.
Scale
Rotation
Project #8: Panoramas
Sadly, my code didn’t run for this part of the project. I wish I could have implemented it better, since this would have been a very useful tool for future work.
Project #9: Photos of Space
Link to extended presentation
https://docs.google.com/presentation/d/1lODILngZlrONTOIlJ0AYPEN-FUpPW-fIRKUxot4FGAY/edit?usp=sharing
For this project, I made a photosynth and panoramas using existing applications. The photosynth required at least 20 photos; mine uses 26. For my panorama images, I downloaded an application called PTGui, which takes more than one photo and stitches them all together. My results are on the next slides.
Site 1: Photosynth
Site 2: Panorama
My Panorama
Here is another panorama I took of my bed to play around with PTGui.
Project #10: HDR
Link to extended presentation
https://docs.google.com/presentation/d/1aULcGvodArkqnwTMtTmfJqZBqphOpmBrtZJT3Agwqf0/edit?usp=sharing
For this project, we took multiple images at different exposures to create an HDR image. My results were a little skewed (I believe because of the exposure values in my code), which I show in the next slides.
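For comparison, OpenCV ships a Debevec-style HDR pipeline; the sketch below uses it with placeholder exposure times. This is not the course implementation, but it shows where the exposure values enter the computation (and why getting them wrong skews the result).

```python
import cv2
import numpy as np

# Exposure-bracketed photos and their exposure times in seconds (assumed values).
files = ["exp_1.jpg", "exp_2.jpg", "exp_3.jpg"]
times = np.array([1 / 30.0, 1 / 8.0, 1 / 2.0], dtype=np.float32)
images = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge the exposures into a radiance map.
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)

# Tonemap the radiance map back into a displayable 8-bit image.
ldr = cv2.createTonemap(gamma=2.2).process(hdr)
cv2.imwrite("hdr_result.png", np.clip(ldr * 255, 0, 255).astype(np.uint8))
```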
Thumbnails of the HDR Images I Chose:
HDR Result
The actual HDR image should’ve looked like this:
Project #11: Video Textures
Link to extended presentation
https://docs.google.com/presentation/d/1T5HTF0_KjiRIkDjw8hyLUyUdYcyOV01XWBemCG4xt-E/edit?usp=sharing
In this project, we computed frame-to-frame comparisons over a sequence of images and combined the frames into a looping GIF for our results.
Results on the next slides.
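The core quantity behind those transitions is a frame-to-frame difference matrix; the sketch below computes a plain sum-of-squared-differences version of it, which is roughly what the alpha value later scales. The filtering of the matrix and the choice of transition frames are omitted, and the details here are assumptions rather than the graded code.

```python
import numpy as np

def ssd_matrix(frames):
    # frames: list of same-sized frames (NumPy arrays).
    # Entry (i, j) is the sum of squared differences between frame i and frame j;
    # small off-diagonal values mark frame pairs where a seamless jump is possible.
    n = len(frames)
    flat = [f.astype(np.float64).ravel() for f in frames]
    ssd = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ssd[i, j] = np.sum((flat[i] - flat[j]) ** 2)
    return ssd
```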
Results
Link to candle gif: http://imgur.com/ozhkLAR
Alpha is 0.0104166666667
My starting and ending indices were both 0.
My Own Results
Link to my own video gif: http://imgur.com/88Ic72L
I used the same default alpha values for my results.
Final Project - Hybrid Images
A single static image, composed from two source images, that shows different representations at different viewing distances or sizes.
The goal of our project was to create hybrid images using various sample pictures and to experiment with creating hybrid “videos” to see how the concept of hybrid images translated into objects in motion.
Team: Kanan Patel & Megi Guliashvili
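A minimal sketch of how a hybrid image can be formed: low-pass one input with a Gaussian blur, high-pass the other by subtracting its own blur, and add the two bands. The kernel size, sigma, and function name are assumptions, not our exact parameters.

```python
import cv2
import numpy as np

def hybrid_image(img_low, img_high, ksize=(31, 31), sigma=8):
    # Low frequencies from the first image: a strong Gaussian blur.
    low = cv2.GaussianBlur(img_low.astype(np.float64), ksize, sigma)
    # High frequencies from the second image: the image minus its own blur.
    high = img_high.astype(np.float64) - cv2.GaussianBlur(
        img_high.astype(np.float64), ksize, sigma)
    # Sum the two bands: up close the high-pass image dominates,
    # from far away the low-pass image takes over.
    return np.clip(low + high, 0, 255).astype(np.uint8)
```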
Input
Output
In this case, we switched the low-pass and high-pass filters between the pencil and the Bank of America building (the high-pass filter being on the building). We still get good results from doing this.
Pictures with well-defined lines and features seem to create better hybrid images, such as these with the basketball and soccer ball.
Input
Output
A Hybrid GIF Video