Project List

Side Projects from 13/03/2016 to 13/03/2017

Slides from Previous Years

These slides contain high-level summaries of some of the personal projects I worked on between March 13, 2016 and March 13, 2017. For every two projects listed, roughly one other fell on its face or was left out for other reasons.

I have attempted to list them in chronological order.

Outline

  1. ImageAutoencoder
  2. ImageResolver
  3. MusicClassifier
  4. ImageCreation V6
  5. Position -> Colour NN
  6. Colour -> Position NN
  7. Pix2Vec
  8. WritingOptimization
  9. Char2Vec
  10. Multiscale Music Entropy Optimization
  11. Multiscale Image Entropy Optimization
  12. Game of Life with Fixable Cells
  13. Scrabble Optimization
  14. TABAISEC V1
  15. Keyboard Piano
  16. Temperature Climbers
  17. Recursive Music Generation
  18. EmotiveCircle
  19. TABAISEC V2
  20. QuickReader V2

ImageAutoencoder

Goal: Style transfer (taking the style from one image and combining it with the content of another image)

Approach: Train a convolutional autoencoder to reconstruct patches of the style image, then send patches from the content image through the autoencoder (multiple times if necessary)

Outcome: With some fine-tuning, it could become competitive with established style-transfer methods.
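
A minimal sketch of the patch-autoencoder idea, assuming TensorFlow/Keras; the patch size, architecture, and stand-in images are illustrative, not the setup actually used:

```python
# Sketch: convolutional autoencoder trained on style patches, then applied
# (repeatedly) to content patches. Stand-in images; load real ones instead.
import numpy as np
from tensorflow.keras import layers, models

PATCH = 32
style_img = np.random.rand(256, 256, 3)    # stand-in for the style image
content_img = np.random.rand(256, 256, 3)  # stand-in for the content image

def random_patches(img, n):
    h, w, _ = img.shape
    ys = np.random.randint(0, h - PATCH, n)
    xs = np.random.randint(0, w - PATCH, n)
    return np.stack([img[y:y + PATCH, x:x + PATCH] for y, x in zip(ys, xs)])

autoencoder = models.Sequential([
    layers.Input((PATCH, PATCH, 3)),
    layers.Conv2D(32, 3, activation='relu', padding='same'),
    layers.MaxPooling2D(),
    layers.Conv2D(16, 3, activation='relu', padding='same'),
    layers.UpSampling2D(),
    layers.Conv2D(3, 3, activation='sigmoid', padding='same'),
])
autoencoder.compile(optimizer='adam', loss='mse')

patches = random_patches(style_img, 5000)
autoencoder.fit(patches, patches, epochs=20, batch_size=64)

# Repeated passes pull content patches toward the learned style manifold.
stylized = random_patches(content_img, 64)
for _ in range(3):
    stylized = autoencoder.predict(stylized)
```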

ImageResolver

Goal: Generate new images that share the texture of an existing image at many scales.

Approach: Train a neural network (or a set of them) to repeatedly increase the resolution of an initially noisy set of pixels.

Outcome: The results do not reliably reproduce the input texture, but what the network does produce is also interesting!
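
A rough sketch of the repeated-upscaling idea, assuming Keras; the 2x-downscale training pairs and tiny network are illustrative:

```python
# Sketch: a net that doubles resolution, trained on (2x-downscaled, original)
# patch pairs from one image, then applied repeatedly starting from noise.
import numpy as np
from tensorflow.keras import layers, models

img = np.random.rand(256, 256, 3)  # stand-in for the source image

def patch_pairs(img, size=16, n=2000):
    h, w, _ = img.shape
    ys = np.random.randint(0, h - size, n)
    xs = np.random.randint(0, w - size, n)
    hi = np.stack([img[y:y + size, x:x + size] for y, x in zip(ys, xs)])
    return hi[:, ::2, ::2], hi  # crude 2x downscale as the low-res input

upscaler = models.Sequential([
    layers.Input((None, None, 3)),  # works at any resolution
    layers.UpSampling2D(),
    layers.Conv2D(32, 3, activation='relu', padding='same'),
    layers.Conv2D(3, 3, activation='sigmoid', padding='same'),
])
upscaler.compile(optimizer='adam', loss='mse')
lo, hi = patch_pairs(img)
upscaler.fit(lo, hi, epochs=20, batch_size=64)

out = np.random.rand(1, 8, 8, 3)  # start from noise...
for _ in range(5):                # ...and upscale: 8 -> 256 pixels per side
    out = upscaler.predict(out)
```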

MusicClassifier

Goal: Classify songs by energy level and mood.

Approach: Generate spectrograms of songs, then train a model to classify the images. Also trained an autoencoder to try to create 2D encodings of songs.

Outcome: It worked slightly better than random.
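
A minimal sketch of the pipeline, assuming SciPy for spectrograms and Keras for the classifier; the audio, labels, and three-class energy scheme are stand-ins:

```python
# Sketch: songs -> log-spectrogram images -> small CNN classifier.
import numpy as np
from scipy import signal
from tensorflow.keras import layers, models

RATE = 22050
songs = [np.random.randn(RATE * 5) for _ in range(8)]  # stand-in audio clips
labels = np.random.randint(0, 3, len(songs))           # stand-in energy labels

def spectrogram_image(samples, size=128):
    _, _, sxx = signal.spectrogram(samples, fs=RATE, nperseg=512)
    sxx = np.log1p(sxx)[:size, :size]          # crop to a fixed-size image
    return (sxx / sxx.max())[..., np.newaxis]

model = models.Sequential([
    layers.Input((128, 128, 1)),
    layers.Conv2D(16, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation='relu'),
    layers.GlobalAveragePooling2D(),
    layers.Dense(3, activation='softmax'),     # e.g. low/medium/high energy
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(np.stack([spectrogram_image(s) for s in songs]), labels, epochs=5)
```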

ImageCreation V6

Goal: Generate images with similar texture to existing image.

Approach: Train a neural network to predict the centre pixel from its neighbouring pixels. Apply it to randomly chosen pixels until(ish) convergence.

Outcome: Did not behave exactly as expected, but still produced pretty pictures.
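
A sketch of the neighbours-to-centre idea, assuming Keras; the image, network size, and update count are illustrative, and batching the single-pixel updates would be much faster in practice:

```python
# Sketch: learn the centre pixel from its 8 neighbours, then repeatedly
# apply the net to random pixels of a noise canvas.
import numpy as np
from tensorflow.keras import layers, models

img = np.random.rand(64, 64, 3)  # stand-in for the texture source image

def neighbours(a, y, x):  # the 8 surrounding pixels, flattened to 24 values
    patch = a[y - 1:y + 2, x - 1:x + 2].reshape(9, 3)
    return np.delete(patch, 4, axis=0).ravel()

ys = np.random.randint(1, 63, 5000)
xs = np.random.randint(1, 63, 5000)
X = np.stack([neighbours(img, y, x) for y, x in zip(ys, xs)])
Y = np.stack([img[y, x] for y, x in zip(ys, xs)])

net = models.Sequential([layers.Input((24,)),
                         layers.Dense(64, activation='relu'),
                         layers.Dense(3, activation='sigmoid')])
net.compile(optimizer='adam', loss='mse')
net.fit(X, Y, epochs=20, batch_size=64)

canvas = np.random.rand(64, 64, 3)
for _ in range(20000):  # until(ish) convergence
    y, x = np.random.randint(1, 63), np.random.randint(1, 63)
    canvas[y, x] = net.predict(neighbours(canvas, y, x)[np.newaxis], verbose=0)[0]
```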

Position -> Colour

Goal: What does it look like when you train a neural network to take the x, y coordinates of a pixel and predict its colour?

Approach: That. Also added a z-axis so a single NN could learn multiple images.

Outcome: Works as predicted. Mostly relies on overfitting a NN. Very pleasing results.
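
A minimal sketch, assuming Keras; the image is a stand-in and the z-axis input is omitted for brevity:

```python
# Sketch: overfit an MLP on (x, y) -> (r, g, b) for one image. Adding a z
# input (one value per image) lets a single net memorize several images.
import numpy as np
from tensorflow.keras import layers, models

img = np.random.rand(64, 64, 3)  # stand-in for the target image
h, w, _ = img.shape
ys, xs = np.mgrid[0:h, 0:w]
coords = np.stack([ys.ravel() / h, xs.ravel() / w], axis=1)
colours = img.reshape(-1, 3)

net = models.Sequential([layers.Input((2,)),
                         layers.Dense(128, activation='relu'),
                         layers.Dense(128, activation='relu'),
                         layers.Dense(3, activation='sigmoid')])
net.compile(optimizer='adam', loss='mse')
net.fit(coords, colours, epochs=100, batch_size=256)  # overfitting is the point

recon = net.predict(coords).reshape(h, w, 3)  # render at any grid density
```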

Colour -> Position

Goal: What does it look like when you train a neural network to take the colour of a pixel and predict its coordinates in an image?

Approach: That.

Outcome: Produces nice minimal representations of the image. You can almost see the twisting intersected hypersurface projections.
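
A sketch of the inverse mapping, mirroring the previous one (Keras assumed; image and architecture are stand-ins):

```python
# Sketch: the inverse map, (r, g, b) -> (x, y). Many pixels share a colour,
# so the net learns roughly the mean location per colour; that averaging is
# where the minimal look comes from.
import numpy as np
from tensorflow.keras import layers, models

img = np.random.rand(64, 64, 3)  # stand-in for the source image
h, w, _ = img.shape
ys, xs = np.mgrid[0:h, 0:w]
coords = np.stack([ys.ravel() / h, xs.ravel() / w], axis=1)
colours = img.reshape(-1, 3)

net = models.Sequential([layers.Input((3,)),
                         layers.Dense(128, activation='relu'),
                         layers.Dense(2, activation='sigmoid')])
net.compile(optimizer='adam', loss='mse')
net.fit(colours, coords, epochs=100, batch_size=256)

positions = net.predict(colours)  # render: paint each colour where it lands
```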

This just shows how the predicted locations for colours change as the NN is trained.

Pix2Vec

Goal: What does it look like when you apply the skipgram model to the colours in an image as opposed to words?

Approach: That.

Outcome: Produces nice “fingerprints” of images.
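
One way to sketch this, assuming gensim's off-the-shelf skipgram implementation; the quantization level and stand-in image are illustrative:

```python
# Sketch: quantize pixel colours into tokens, treat image rows as sentences,
# and run skipgram (gensim) to get a 2-D embedding per colour.
import numpy as np
from gensim.models import Word2Vec

img = (np.random.rand(64, 64, 3) * 255).astype(int)  # stand-in image
rows = [['-'.join(str(v) for v in px // 32) for px in row] for row in img]

model = Word2Vec(sentences=rows, vector_size=2, window=2, sg=1, min_count=1)
fingerprint = {c: model.wv[c] for c in model.wv.index_to_key}
# Scatter-plotting each embedding in its own colour gives the "fingerprint".
```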

WritingOptimization

Goal: How can letters of the alphabet and short bigrams and trigrams (and some 4-grams) be assigned characters so as to minimize writing time?

Approach: Come up with a set of easily distinguishable characters that are also easy to write, calculate how long it takes to write each one, and optimize assignments by simulating writing. Also added some constraints because of reasons.

Outcome: Provided a nice speedup over my previous character assignments and the standard alphabet. It would be beneficial to gain fluency with the new assignments.
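
A toy sketch of the optimization loop; the writing units, stroke times, and frequencies are made-up stand-ins for the measured values:

```python
# Sketch: hill-climb an assignment of glyphs to frequent n-grams so that
# expected writing time is minimized.
import random

ngrams = ['e', 't', 'th', 'he', 'in', 'ing', 'the']        # assumed units
glyphs = list(range(len(ngrams)))                          # glyph ids
stroke_time = {g: 0.2 + 0.05 * g for g in glyphs}          # assumed seconds/glyph
freq = {n: 1.0 / (i + 1) for i, n in enumerate(ngrams)}    # assumed frequencies

def cost(assign):  # expected writing time per unit written
    return sum(freq[n] * stroke_time[assign[n]] for n in ngrams)

assign = dict(zip(ngrams, glyphs))
best = cost(assign)
for _ in range(10000):
    a, b = random.sample(ngrams, 2)
    assign[a], assign[b] = assign[b], assign[a]  # propose swapping two glyphs
    c = cost(assign)
    if c <= best:
        best = c
    else:
        assign[a], assign[b] = assign[b], assign[a]  # revert a worse swap
print(best, assign)
```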

Char2Vec

Goal: What happens when you try to encode each letter of the alphabet in 2 dimensions?

Approach: Using the skipgram model, use character contexts to find encodings (similar idea to word2vec).

Outcome: The letter ‘y’ is indeed weird: this project shows that its usage lies somewhere between that of the vowels and that of the consonants. Also, the letter ‘h’ is a loner. The structure discovered here was used to create another character assignment that reflects it.
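
A minimal sketch of the idea, assuming gensim; the corpus here is a stand-in string:

```python
# Sketch: words become "sentences" of letters; skipgram (gensim) then embeds
# each letter in 2-D based on its character contexts.
from gensim.models import Word2Vec

text = 'the yellow hyena may carry heavy things home when hungry ' * 200
sentences = [list(w) for w in text.split()]
model = Word2Vec(sentences=sentences, vector_size=2, window=2,
                 sg=1, min_count=5)
print(model.wv['y'], model.wv['h'])  # the odd ones out
```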

Multiscale Music Entropy Optimization

Goal: How do you generate songs that sound good?

Approach: Using the hypothesis that aesthetic music has restrictions on its multiscale entropy curve, have an algorithm try to learn which curves produce better music.

For samples: https://soundcloud.com/tanner-bohn/sets/multiscalemusicentropy

More about MSE: https://tannerbohn.wordpress.com/2015/12/17/multiscale-image-entropy/

Outcome: Did not work terribly well. The entropy constraints are sufficient to create acceptable structure in the music, but not sufficient to produce pleasing note combinations.
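
One plausible formalization of a multiscale entropy curve for a note sequence (the linked post has the definition actually used): coarse-grain the sequence at several scales and take the Shannon entropy of the symbol distribution at each scale.

```python
# Sketch: Shannon entropy of a note sequence at several coarse-grainings.
from collections import Counter
from math import log2

def entropy(seq):
    counts = Counter(seq)
    total = len(seq)
    return -sum(c / total * log2(c / total) for c in counts.values())

def multiscale_entropy(notes, scales=(1, 2, 4, 8)):
    return [entropy([round(sum(notes[i:i + s]) / s)
                     for i in range(0, len(notes), s)])
            for s in scales]

print(multiscale_entropy([60, 62, 64, 60, 67, 65, 64, 62] * 8))
# A generator can then score candidate songs by how close their curve is to
# a target curve, and search over target curves.
```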

Multiscale Image Entropy Optimization

Goal: How do you generate images that look good?

Approach: Using the hypothesis that aesthetic images have restrictions on the multiscale entropy (MSE) curve, see if particular curves produce interesting images. This is similar to a project from last year, but with improvements and focusing on tractable optimization problems.

Outcome: It turns out that using the variation in colours at a certain scale instead of the MSE is better at creating interesting images.
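
A sketch of the measure that worked better: colour variation at scale s, computed here as the mean per-block colour standard deviation over s-by-s tiles (the exact statistic is an assumption):

```python
# Sketch: colour variation at a given scale; an optimizer can then push an
# image toward target values at chosen scales.
import numpy as np

def colour_variation(img, s):
    h, w, _ = img.shape
    blocks = img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s, 3)
    return blocks.std(axis=(1, 3)).mean()  # std within each s-by-s tile

img = np.random.rand(64, 64, 3)  # stand-in image
print([round(colour_variation(img, s), 4) for s in (2, 4, 8, 16)])
```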

Game of Life with Fixable Cells

Goal: What interesting dynamics occur when some cells can be fixed as always-on?

Approach: Make a GOL simulator with that capability. Also modified colour updating for aesthetics.

Outcome: Nice.
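
A minimal sketch of the rule change, assuming NumPy/SciPy: a normal B3/S23 step, followed by forcing the fixed cells back on.

```python
# Sketch: one Game of Life step plus a mask of always-on cells.
import numpy as np
from scipy.signal import convolve2d

KERNEL = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])

def step(grid, fixed):
    n = convolve2d(grid, KERNEL, mode='same', boundary='wrap')
    alive = (n == 3) | ((grid == 1) & (n == 2))      # usual B3/S23 rule
    return (alive | fixed.astype(bool)).astype(int)  # fixed cells stay on

grid = np.random.randint(0, 2, (64, 64))
fixed = np.zeros_like(grid)
fixed[32, 30:35] = 1  # a short bar of always-on cells
for _ in range(100):
    grid = step(grid, fixed)
```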

Scrabble Optimization

Goal: How can you place tiles in a grid to maximize the number of words it contains? Also, how can you place all the tiles onto a Scrabble board?

Approach: These are simple(ish) optimization problems. Use a dictionary of valid words, then optimize the arrangement to maximize the number of valid words that appear, or to maximize the score subject to constraints.

Outcome: The optimization algorithm creates solutions with a strong bias towards shorter words, since they are more easily formed by random processes.
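
A toy sketch of the first variant as a hill climb; the dictionary and alphabet are tiny stand-ins:

```python
# Sketch: hill-climb tile placements on a small grid to maximize how many
# dictionary words appear within rows and columns.
import random

WORDS = {'cat', 'at', 'on', 'no', 'ton', 'not', 'can', 'tan'}  # stand-in dict
SIZE = 4
grid = [[random.choice('acnot') for _ in range(SIZE)] for _ in range(SIZE)]

def lines(g):
    return [''.join(r) for r in g] + [''.join(c) for c in zip(*g)]

def score(g):  # count dictionary words appearing as substrings of any line
    return sum(line[i:j] in WORDS
               for line in lines(g)
               for i in range(len(line)) for j in range(i + 2, len(line) + 1))

best = score(grid)
for _ in range(20000):
    y, x = random.randrange(SIZE), random.randrange(SIZE)
    old = grid[y][x]
    grid[y][x] = random.choice('acnot')  # propose changing one tile
    s = score(grid)
    if s >= best:
        best = s
    else:
        grid[y][x] = old  # revert changes that hurt the score
```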

TABAISEC V1

Goal: How can you make a program that automatically parses and responds to emails?

Approach: Do that.

Outcome: Worked well. Should be hosted on a server as opposed to my laptop…

Update: Raspberry pi!

Update: wifi intermittently unreliable on the pi :/
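
A bare-bones sketch of the loop, using Python's standard imaplib/smtplib; the hosts, credentials, and reply logic are placeholders, not the actual setup:

```python
# Sketch: poll an inbox over IMAP, parse unseen messages, reply over SMTP.
import email
import imaplib
import smtplib
from email.mime.text import MIMEText

USER, PASSWORD = 'me@example.com', 'app-password'  # placeholders

imap = imaplib.IMAP4_SSL('imap.example.com')
imap.login(USER, PASSWORD)
imap.select('INBOX')
_, data = imap.search(None, 'UNSEEN')
for num in data[0].split():
    _, msg_data = imap.fetch(num, '(RFC822)')
    msg = email.message_from_bytes(msg_data[0][1])
    subject = msg['Subject'] or ''
    reply = MIMEText('Got it: ' + subject)  # stand-in response logic
    reply['To'], reply['From'] = msg['From'], USER
    reply['Subject'] = 'Re: ' + subject
    with smtplib.SMTP_SSL('smtp.example.com', 465) as smtp:
        smtp.login(USER, PASSWORD)
        smtp.send_message(reply)
```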

Keyboard Piano

Goal: Is it possible to associate each letter and number with a specific note?

Approach: Write a program that runs in the background and plays a corresponding note every time you press a key.

Outcome: Did not use it long enough for conclusive results. Instead, created programs to help learn the associations faster. I was only able to learn the note sequences of about 30 of the most common words before I could no longer distinguish between them reliably. This would be an interesting experiment to run while raising children… if one were so inclined.
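
A minimal sketch, assuming the pynput and simpleaudio packages; the key-to-semitone mapping is illustrative:

```python
# Sketch: map each key to a pitch and play a short sine tone on key press.
import numpy as np
import simpleaudio as sa
from pynput import keyboard

RATE = 44100
KEYS = 'abcdefghijklmnopqrstuvwxyz0123456789'

def tone(freq, dur=0.15):
    t = np.linspace(0, dur, int(RATE * dur), False)
    wave = (np.sin(2 * np.pi * freq * t) * 32767 * 0.3).astype(np.int16)
    sa.play_buffer(wave, 1, 2, RATE)

def on_press(key):
    ch = getattr(key, 'char', None)
    if ch in KEYS:  # one semitone per character, starting at middle C
        tone(261.63 * 2 ** (KEYS.index(ch) / 12))

with keyboard.Listener(on_press=on_press) as listener:
    listener.join()
```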

Temperature Climbers

Goal: What does it look like when agents want to be in an environment of a certain temperature, but they actively change the temperature of their surroundings?

Approach: Simulate that.

Outcome: Somewhat interesting results.
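
A toy 1-D version of the simulation; all constants are illustrative:

```python
# Sketch: agents walk toward their preferred temperature while heating
# whichever cell they stand on; the field also diffuses each step.
import numpy as np

field = np.full(100, 20.0)  # ambient temperature along a ring of cells
agents = [{'pos': p, 'pref': 25.0} for p in (10, 50, 90)]

for _ in range(1000):
    field += 0.1 * (np.roll(field, 1) + np.roll(field, -1) - 2 * field)
    for a in agents:
        left, right = (a['pos'] - 1) % 100, (a['pos'] + 1) % 100
        # move toward whichever neighbour is closer to the preferred temp
        if abs(field[left] - a['pref']) < abs(field[right] - a['pref']):
            a['pos'] = left
        else:
            a['pos'] = right
        field[a['pos']] += 0.5  # the agent heats its own cell
```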

Recursive Music Generation

Goal: How do you create pleasant and interesting music?

Approach: Write an algorithm that can generate music with recursive structure. This should lead to music that strikes a nice balance between predictability and complexity.

https://soundcloud.com/tanner-bohn/sets/recursive-music-tests-v1

Outcome: Able to produce interesting music despite the strong constraints.
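
A toy sketch of one way to get recursive structure: a motif expands into transposed copies of itself. The transposition pattern and depth are illustrative, not the generator's actual rules.

```python
# Sketch: recursive expansion of a motif into nested, transposed copies.
def expand(motif, depth):
    if depth == 0:
        return motif
    out = []
    for shift in (0, 4, -3, 0):  # each part is the whole motif, transposed
        out += expand([n + shift for n in motif], depth - 1)
    return out

motif = [60, 62, 64, 60]  # MIDI note numbers
song = expand(motif, 3)   # 4 notes -> 256 notes with self-similar structure
print(song[:16])
```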

EmotiveCircle

Goal: What are the necessary properties of a pet?

Approach: Create and simulate an agent with various mental and physiological properties. Note any personal reaction.

http://tannerbohn.github.io/2017/02/13/EmotiveCircle/

Outcome: Whenever I forget to feed it, I kinda feel bad… Also, apparently listening to a heartbeat and breathing, even if artificial, can affect the rate of your own.

TABAISEC V2

Goal: Can I make a more intelligent chatbot than existing ones?

Approach: Try that.

http://tannerbohn.github.io/2017/02/20/FBChatAssistant/

Outcome: Was able to reuse skills and some algorithms from creating Fermi to speed up development. As far as I know, no existing personal assistant has a similar capacity for ambiguity resolution.

TABAISEC V2 - Context/Ambiguity

No context management

  1. Question, how large is Toronto?
  2. find a couple pictures of Toronto
  3. show me a couple pictures of a spire
  4. define spire
  5. how’s the weather in Toronto?
  6. add Toronto to the list of places to visit
  7. add Berlin to the list of places to visit
  8. remind me soon to make supper
  9. remind me soon to close the windows

With context management

  1. How large is Toronto?
  2. find a couple pictures of it
  3. show me that many pictures of a spire
  4. define
  5. how’s the weather there?
  6. add it to the list of places to visit
  7. add Berlin to the list
  8. remind me soon to make supper
  9. at that time remind me to close the windows
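
A deliberately crude sketch of the context idea behind the second list: remember the most recent entity, place, and number, and substitute them for referring expressions. The real system's resolution is richer than this.

```python
# Sketch: resolve "it", "there", and "that many" against stored context.
context = {'entity': None, 'place': None, 'number': None}

def resolve(command):
    cmd = command
    if context['number'] is not None:
        cmd = cmd.replace('that many', str(context['number']))
    if context['entity']:
        cmd = cmd.replace(' it', ' ' + context['entity'])
    if context['place']:
        cmd = cmd.replace('there', 'in ' + context['place'])
    return cmd

context.update(entity='Toronto', place='Toronto', number=2)  # earlier turns
print(resolve('show me that many pictures of a spire'))
# -> show me 2 pictures of a spire
```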

QuickReader V2

Goal: Can I make an algorithm that summarizes text?

Approach: By optimizing trigram histograms, select an optimal subset of sentences from the text. This is similar to a project from last year, but with prettier, more flexible code and an updated algorithm.

Outcome: Seems to work well for short articles, but takes too long to run for long pieces of text like Wikipedia pages.
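
A toy sketch of the selection idea: greedily add sentences whose combined trigram histogram best covers the document's histogram. Tokenization and scoring are simplified stand-ins for the actual algorithm.

```python
# Sketch: greedy sentence selection by trigram-histogram coverage.
from collections import Counter

def trigrams(text):
    w = text.lower().split()
    return Counter(zip(w, w[1:], w[2:]))

def summarize(sentences, k=3):
    target = trigrams(' '.join(sentences))
    chosen, hist = [], Counter()
    for _ in range(k):
        def gain(s):  # overlap with the target histogram after adding s
            h = hist + trigrams(s)
            return sum(min(h[t], target[t]) for t in h)
        best = max((s for s in sentences if s not in chosen), key=gain)
        chosen.append(best)
        hist += trigrams(best)
    return [s for s in sentences if s in chosen]  # keep original order

doc = ["The cat sat on the mat.", "A dog barked at the cat.",
       "The cat ran up a tree.", "Later the dog slept on the mat.",
       "The cat sat on the mat again."]
print(summarize(doc, k=2))
```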
