Georgia Tech’s Computational Photography

Portfolio

Kanan Patel

kananpatel@gatech.edu

Project #1: A Photograph is a Photograph

Link to extended presentation

https://docs.google.com/presentation/d/1kpkJz-qgV6iUSgFPTy3yv9nmD4vgj--BgS_0PA96-2k/edit?usp=sharing

Arroz con Pollo, Cafe Cancun, GA

Taken with an iPhone 6.

Project #2: Image I/O

Link to extended presentation

https://docs.google.com/presentation/d/1oa_MJ0f8nwUZQ-4V71zhLE3pfuCd7VK5WUsdWaTVaHE/edit?usp=sharing

For this project, we implemented five functions. The first computes the number of pixels in an image. The second computes the image's average pixel value. The third converts the image to black and white. The fourth averages two images together. The fifth flips the image horizontally.

I’ve shown examples for functions 3-5.
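
For reference, here are minimal sketches of the five functions, assuming grayscale NumPy arrays as input; the names and the black-and-white threshold are illustrative, not my submitted code:

```python
import numpy as np

def number_of_pixels(image):
    # Function 1: total pixel count (rows * columns).
    return image.shape[0] * image.shape[1]

def average_pixel(image):
    # Function 2: average value over all pixels.
    return np.mean(image)

def black_and_white(image):
    # Function 3: push each pixel to pure black or white at a midpoint
    # threshold (the exact rule in my code may differ).
    return np.where(image >= 128, 255, 0).astype(np.uint8)

def average_two(image1, image2):
    # Function 4: per-pixel average of two same-sized images.
    return ((image1.astype(np.float64) + image2.astype(np.float64)) / 2).astype(np.uint8)

def flip_horizontal(image):
    # Function 5: mirror the image left to right.
    return image[:, ::-1]
```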

Input / Output Images

Function 3: Black and White

Original | Black and White

Input / Output Images

Function 4: Average Two Images

Image 1 | Image 2 | Averaged

Input / Output Images

Function 5: Flip Horizontal

Original | Flipped

Project #3: Epsilon Photography

Link to extended presentation

https://docs.google.com/presentation/d/19wmd8kqWrYJUK5DR4wqOz4C2uUl8b2ghgXknED_9nUE/edit?usp=sharing

I took a series of photos of my meal over the course of about an hour and turned them into a time-lapse GIF.

I've provided a visual representation below with all of the images.
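
A minimal sketch of how such a GIF can be assembled, assuming the frames are saved as numbered files like meal_01.jpg (hypothetical names; this uses the imageio library, which is not necessarily the tool I used):

```python
import glob
import imageio.v2 as imageio

# Hypothetical filenames; sorting keeps the frames in shooting order.
frames = [imageio.imread(f) for f in sorted(glob.glob("meal_*.jpg"))]
imageio.mimsave("meal_timelapse.gif", frames, duration=0.5)  # 0.5 s per frame
```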

Image 1 | Image 2

Image 3 | Image 4

Project #4: Camera Obscura

Link to extended presentation

https://docs.google.com/presentation/d/1uT8HO2LWJ6BQPhVjFvJc3SBWpL9IhU-XlV1sQ-yLod0/edit?usp=sharing

In this project, I built two different “pinhole” cameras to capture images. The scene I was trying to capture was a lit lamp.

I’ve displayed the setup and results on the next slide.
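
For context, a common rule of thumb (attributed to Lord Rayleigh) relates the sharpest pinhole diameter d to the pinhole-to-screen distance f and the wavelength of light λ: d ≈ 1.9√(fλ). A quick sanity check with illustrative numbers (not my boxes' actual measurements):

```python
from math import sqrt

f = 0.25             # pinhole-to-screen distance in meters (illustrative)
wavelength = 550e-9  # mid-visible green light, in meters
d = 1.9 * sqrt(f * wavelength)
print(f"optimal pinhole diameter ~ {d * 1000:.2f} mm")  # ~0.70 mm
```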

“The Setup”

“The Image”

“The Scene”

  • Other images on later slides

Project #5: Gradients and Edges

Link to extended presentation

https://docs.google.com/presentation/d/12m6TpCdT9Ul3xK8wp488BPHfOUZ89WVdqM6W83SW838/edit?usp=sharing

Part 1

In this part, I computed the image gradients in both the X and Y directions. In my computeGradient function, I slid a small image kernel over the larger image and recorded the response at each location. The results on the next slides were produced using OpenCV and Python, and also two other platforms for comparison.
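
A sketch of the idea, assuming grayscale input (the function name mirrors my computeGradient, but the body is illustrative; cross-correlating with Sobel kernels approximates the X and Y gradients):

```python
import cv2
import numpy as np

def compute_gradient(image, kernel):
    # Slide the kernel over the image and record the response at each
    # pixel (a 2D cross-correlation; border handling is OpenCV's default).
    return cv2.filter2D(image.astype(np.float64), -1, kernel)

# Sobel kernels approximate the X and Y derivatives.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)
grad_x = compute_gradient(img, sobel_x)
grad_y = compute_gradient(img, sobel_x.T)
```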

Part 2

For this part of the project, I found an image of pennies online with a noticeably white background. I used OpenCV's Canny edge detector, called from inside my computeGradient function, to produce the second picture, and compared that result against results I got from other tools.
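
For reference, the OpenCV call looks roughly like this (the hysteresis thresholds are illustrative, not the values from my code):

```python
import cv2

img = cv2.imread("pennies.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 100, 200)  # low/high hysteresis thresholds
cv2.imwrite("pennies_edges.jpg", edges)
```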

In the first image, I used a free online photo editor with edge detection (http://www.photo-kako.com/en/edge.cgi). I performed edge extraction/edge detection using what the editor describes as a “Sobel, Laplacian (3x3)” filter, then converted the result to monochrome to get a black-and-white image.

In the second image, I used the Magic Wand tool in Photoshop to select the edges based on color tones, then used another feature to convert the result to black and white. This was done simply by selecting parts of the image and applying Photoshop features.

Project #6: Blending

Link to extended presentation

https://docs.google.com/presentation/d/1sB6yyneZ-2OoDBhmoemDBpzj4-39fjISB5LTr5GgV0w/edit?usp=sharing

In this project, I took two images and a mask image, and used the mask to blend the first two images into a final composite.

We coded functions for reducing and expanding images, building Gaussian and Laplacian pyramids, blending the images, and collapsing the pyramids. My two sample images, the mask, and the output image are shown below.
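
A sketch of that pipeline using OpenCV's built-in reduce/expand operators, assuming same-sized BGR inputs and a mask in the 0-255 range (this mirrors the functions named above but is not my submitted code):

```python
import cv2
import numpy as np

def pyramid_blend(black, white, mask, levels=4):
    a, b = black.astype(np.float64), white.astype(np.float64)
    m = mask.astype(np.float64) / 255.0
    # Gaussian pyramids of both images and the mask (reduce step).
    ga, gb, gm = [a], [b], [m]
    for _ in range(levels):
        ga.append(cv2.pyrDown(ga[-1]))
        gb.append(cv2.pyrDown(gb[-1]))
        gm.append(cv2.pyrDown(gm[-1]))
    # Laplacian pyramids: each level minus the expanded next level.
    la, lb = [ga[-1]], [gb[-1]]
    for i in range(levels, 0, -1):
        size = (ga[i - 1].shape[1], ga[i - 1].shape[0])
        la.append(ga[i - 1] - cv2.pyrUp(ga[i], dstsize=size))
        lb.append(gb[i - 1] - cv2.pyrUp(gb[i], dstsize=size))
    # Blend each level with the matching mask level, then collapse.
    out = None
    for k, (fa, fb) in enumerate(zip(la, lb)):
        gmk = gm[levels - k]
        level = gmk * fa + (1.0 - gmk) * fb
        if out is None:
            out = level
        else:
            size = (level.shape[1], level.shape[0])
            out = cv2.pyrUp(out, dstsize=size) + level
    return np.clip(out, 0, 255).astype(np.uint8)

# e.g. pyramid_blend(cv2.imread("black.jpg"), cv2.imread("white.jpg"),
#                    cv2.imread("mask.jpg"))
```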

Sample Images I Chose

“Black Image”

“White Image”

“Mask”

Output Image

Project #7: Feature Detection

Link to extended presentation

https://docs.google.com/presentation/d/1bav2OaV0R62_6RGDai-risbL5_Jk6n8ihHz3DmrjxEs/edit?usp=sharing

In this project, we use feature detection to match features between images. I used a stuffed animal, photographed under different conditions (dark lighting, larger scale, and rotation), for my images. First we compute SIFT keypoints and descriptors for both images, then create a brute-force matcher using the Hamming distance. We then compute the matches between the two images and sort them by distance so we get the best results, returning the keypoints and the top matches.
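
A sketch of that pipeline (ORB is shown here because, in OpenCV, Hamming distance applies to binary descriptors; SIFT descriptors would pair with cv2.NORM_L2 instead; filenames are hypothetical):

```python
import cv2

def find_matches(img1, img2, num_matches=10):
    # Detect keypoints and compute descriptors for both images.
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    # Brute-force matcher with Hamming distance; sort by distance
    # so the best (closest) matches come first.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches[:num_matches]

template = cv2.imread("template.jpg", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("rotated.jpg", cv2.IMREAD_GRAYSCALE)
kp1, kp2, top = find_matches(template, scene)
vis = cv2.drawMatches(template, kp1, scene, kp2, top, None)
cv2.imwrite("matches.jpg", vis)
```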

Sample

  • For this image in particular, the matcher seems to have gotten all the features correct: every line points to the same part of the plush in both images. I'm surprised it didn't detect the tongue or the horns (on top of the head) to see whether those features matched up, as they did in the sample Buzz pictures I tested.
  • I took this sample photo a couple of times; another image I tried didn't work out as well, so I used this one, which had better results.

This was the other sample image that didn't match up correctly after running the tests.

Lighting

  • This test got almost all of the features correct, except for one match, which went from the Poro's right eye in the template picture to its left eye in the lighting picture. I think errors like this occur because of similar features in the photo: when there is more than one eye, the matcher is bound to occasionally connect one eye to the other.
  • I took this picture multiple times with different amounts of shading as well, and this one surprisingly had better results than my previous picture, which was more brightly lit but still slightly dark.

This was the other image that didn't give me good results.

Scale

  • In this matchup the features correlated for the most part, but some didn't quite match up. One match points from the horn in the template to the left eye in the scaled picture, which is not a correct match for the same feature. I think this happens because the descriptor computed at that location resembles the descriptor at the location it points to in the other picture.
  • Since this case was straightforward, I took a close-up, zoomed-in picture of the plush. I used only one photo here, since it gave me decent results with some error.

Rotation

  • This test got a few matches correct and some wrong. When it tries to detect the eyes, it does a decent job under rotation, but not a perfect one. It also made other mistakes: for example, it matched the furry head knob to a furry part of the rotated picture that isn't the exact same spot on the Poro, tried to detect part of the couch, and crossed the connections between the two eyes.
  • I took this image multiple times with quite similar results; this one was the best. It shows some correct matches along with some errors.

Project #8: Panoramas

Unfortunately, my code didn't run for this part of the project. I wish I could have implemented it better, as this would have been a very useful tool for future work.

Project #9: Photos of Space

Link to extended presentation

https://docs.google.com/presentation/d/1lODILngZlrONTOIlJ0AYPEN-FUpPW-fIRKUxot4FGAY/edit?usp=sharing

For this project, I made a Photosynth and panoramas using existing applications. The Photosynth required at least 20 photos; mine uses 26. For the panorama images, I downloaded an application called PTGui, which takes more than one overlapping photo and stitches them all together. My results are on the next slides.
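
PTGui is a GUI application, but the same stitching idea can be sketched with OpenCV's high-level stitcher (filenames are hypothetical):

```python
import cv2

# Hypothetical overlapping shots of the same scene, left to right.
images = [cv2.imread(f"site_{i}.jpg") for i in range(1, 4)]
stitcher = cv2.Stitcher_create()
status, panorama = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
```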

Site 1: Photosynth

Site 2: Panorama

My Panorama

Here is another panorama, of my bed, which I took to play around with PTGui.

Project #10: HDR

Link to extended presentation

https://docs.google.com/presentation/d/1aULcGvodArkqnwTMtTmfJqZBqphOpmBrtZJT3Agwqf0/edit?usp=sharing

For this project, we took multiple images at different exposures and merged them into an HDR image. My results came out a little skewed (I believe because of the exposure values in my code), as shown on the next slides.
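
For reference, a sketch of the pipeline using OpenCV's Debevec calibration and merge; the exposure times here are placeholders, which is exactly the kind of value I suspect skewed my results:

```python
import cv2
import numpy as np

# Exposure stack and times (in seconds); placeholder values.
files = ["exp_1.jpg", "exp_2.jpg", "exp_3.jpg", "exp_4.jpg"]
images = [cv2.imread(f) for f in files]
times = np.array([1 / 320, 1 / 80, 1 / 20, 1 / 5], dtype=np.float32)

# Recover the camera response curve, merge to HDR, then tone-map to 8-bit.
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)
ldr = cv2.createTonemap(gamma=2.2).process(hdr)
cv2.imwrite("result.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))
```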

Thumbnails of the input exposures I chose for the HDR image:

HDR Result

For comparison, the HDR image should have looked like this:

Project #11: Video Textures

Link to extended presentation

https://docs.google.com/presentation/d/1T5HTF0_KjiRIkDjw8hyLUyUdYcyOV01XWBemCG4xt-E/edit?usp=sharing

In this project, we computed similarity over a sequence of video frames and combined them into a looping GIF for our results.

Results on the next slides.
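
The core computation can be sketched as follows: score how similar every pair of frames is, then pick the most similar non-adjacent pair as the loop's start and end (this mirrors the assignment's idea; the code itself is illustrative):

```python
import numpy as np

def find_best_loop(frames, min_length=10):
    # Sum-of-squared-differences between every pair of frames.
    stack = np.stack([f.astype(np.float64) for f in frames])
    n = len(frames)
    best, best_pair = np.inf, (0, 0)
    for i in range(n):
        for j in range(i + min_length, n):
            ssd = np.sum((stack[i] - stack[j]) ** 2)
            if ssd < best:
                best, best_pair = ssd, (i, j)
    return best_pair  # loop over frames[start:end]
```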

Results

Link to candle gif: http://imgur.com/ozhkLAR

Alpha is 0.0104166666667

My starting and ending indices were both 0.

My Own Results

Link to my own video gif: http://imgur.com/88Ic72L

I used the same default alpha values for my results.

Final Project: Hybrid Images

A hybrid image is a single static image, composed from two source images, that shows different interpretations at different viewing distances or sizes.

The goal of our project was to create hybrid images using various sample pictures and to experiment with creating hybrid “videos” to see how the concept of hybrid images translated into objects in motion.

Team: Kanan Patel & Megi Guliashvili
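
A minimal sketch of the technique, assuming two aligned, same-sized input images (the blur parameters are illustrative, not our tuned values): the blurred image supplies the low frequencies seen from far away, and the detail image supplies the high frequencies seen up close.

```python
import cv2
import numpy as np

def hybrid_image(far_img, near_img, ksize=(31, 31), sigma=8):
    # Low-pass: the frequencies of far_img that survive a Gaussian blur
    # dominate when the image is viewed from a distance or shrunk.
    low = cv2.GaussianBlur(far_img.astype(np.float64), ksize, sigma)
    # High-pass: near_img minus its own blur keeps only fine detail,
    # which dominates when the image is viewed up close.
    near = near_img.astype(np.float64)
    high = near - cv2.GaussianBlur(near, ksize, sigma)
    return np.clip(low + high, 0, 255).astype(np.uint8)

# e.g. hybrid_image(cv2.imread("building.jpg"), cv2.imread("pencil.jpg"))
```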

Input

Output

In this case, we switched the low- and high-pass filters between the pencil and the Bank of America building (the high-pass filter being on the building). We still get good results from doing this.

Pictures with well-defined lines and characteristics, such as these of a basketball and a soccer ball, seem to create better hybrid images.

Input

Output

A Hybrid GIF Video