1 of 56

RunwayML & Google Colab

Derrick Schultz

Lia Coleman

2 of 56

SCHEDULE

WEEK 1 Intro to RunwayML

WEEK 2 Intro to Colab

WEEK 3 Types of GAN

WEEK 4 Creating Datasets

WEEK 5 (no class)

WEEK 6 Training StyleTransfer & NFP Models

WEEK 7 Training StyleGAN, MUNIT Models

WEEK 8 (no class)

WEEK 9 Inference

WEEK 10 Integrations, Additional Resources

TBD Show N Tell

3 of 56

TODAY: Training NFP & Style Transfer Models

BREAKOUT How did dataset making go?

LECTURE Training in Colab

DEMO Training NFP

DEMO Training StyleTransfer

HOMEWORK

4 of 56

BREAKOUT 10 MINS

Homework discussion:

How did dataset making go?

  • What made sense?
  • What didn’t?
  • What did you like about the process?
  • What was annoying?

5 of 56

TRAINING ON COLAB

6 of 56

GOOGLE DRIVE + TRAINING

Some training runs can be done in < 10 hours, but often we’re talking days or weeks.

The problem with training sessions that last longer than your Colab session limit is that you can lose your files at any moment when the runtime disconnects.

7 of 56

USUALLY...

COLAB SERVER

Hardware (GPU/CPU)

Dependencies (pip)

Git Repo

Files

8 of 56

GOOGLE DRIVE + TRAINING

Another problem you’ll likely run into without Colab Pro is hard disk space on your Colab server. (Without Pro you get ~50GB.)

NFP can produce ~750MB per epoch
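
A quick back-of-the-envelope check (assuming ~750 MB per epoch and the 200-epoch default used later in the demo):

# Rough storage estimate for a full NFP run (approximate figures from these slides).
mb_per_epoch = 750                       # ~750 MB of checkpoints per epoch
epochs = 200                             # the demo's default epoch count
total_gb = mb_per_epoch * epochs / 1024
print(f"~{total_gb:.0f} GB")             # ~146 GB, far beyond the ~50 GB free-tier disk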

9 of 56

USUALLY...

COLAB SERVER

Hardware (GPU/CPU)

Dependencies (pip)

Git Repo

Files

All on server HD

10 of 56

GOOGLE DRIVE + TRAINING

Saving files directly to Google Drive means that as soon as a file is written, it’s saved in a location you won’t lose when your server disconnects.
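
For reference, mounting Drive from a Colab cell uses the standard google.colab API; the checkpoint folder below is just an example name, not the notebook’s actual path:

# Mount Google Drive inside the Colab runtime (standard google.colab API).
from google.colab import drive
drive.mount('/content/drive')

# Anything written under /content/drive survives a runtime disconnect.
# "nfp-colab/checkpoints" is only an example folder name.
import os
checkpoint_dir = '/content/drive/My Drive/nfp-colab/checkpoints'
os.makedirs(checkpoint_dir, exist_ok=True)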

11 of 56

NOW

COLAB SERVER

Hardware (GPU/CPU)

Dependencies (pip)

DRIVE

Git Repo

Files

12 of 56

GOOGLE DRIVE + TRAINING

Google Drive acts as virtual storage for Colab. You can write to it without it counting against your Colab HD, but it’s accessible (almost) like your Colab HD.

13 of 56

NOW

COLAB SERVER

Hardware (GPU/CPU)

Dependencies (pip)

DRIVE

Git Repo

Files

<—————————Disk storage split—————————>

14 of 56

NEXT FRAME PREDICTION

15 of 56

DATASET

16 of 56

TRAINING

17 of 56

TRAINING

18 of 56

TRAINING

19 of 56

And on until all pairs have been trained...

TRAINING

20 of 56

INFERENCE/TESTING/GENERATING

Predicted Frame

21 of 56

INFERENCE/TESTING/GENERATING

Predicted Frame

Predicted Frame

22 of 56

INFERENCE/TESTING/GENERATING

Predicted Frame

Predicted Frame

Predicted Frame

23 of 56

DEMO: NFP

Colab Notebook

24 of 56

STEP ONE

Upload your video to Colab. I recommend the video be 1280x720.

The video’s dimensions should be multiples of 32px (it sometimes works if they’re not, but it’s safer if they are).
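
If you want to double-check before uploading, here is a quick sketch using OpenCV (preinstalled on Colab); 'video.mp4' is a placeholder filename:

# Check whether a video's width and height are divisible by 32.
import cv2

cap = cv2.VideoCapture('video.mp4')      # placeholder filename
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
cap.release()

print(width, height)
if width % 32 or height % 32:
    print('Consider resizing: dimensions that are multiples of 32 are safer.')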

25 of 56

STEP TWO: SEPARATE FRAMES

Next we need to break the video into individual frames and create paired folders of images.
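
The notebook has a cell for this; conceptually it boils down to an ffmpeg call like the sketch below (not the notebook’s exact cell; filenames are placeholders):

# Split a video into numbered frames with ffmpeg (preinstalled on Colab).
# The notebook's own cell also builds the paired-folder structure the model expects.
import pathlib
import subprocess

pathlib.Path('frames').mkdir(exist_ok=True)
subprocess.run(
    ['ffmpeg', '-i', 'video.mp4', 'frames/frame_%05d.jpg'],  # placeholder paths
    check=True,
)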

26 of 56

STEP THREE: TRAIN MODEL

Training a model runs your data through the GAN. Each complete pass through the entire video is called an epoch. By default the model trains for 200 epochs, but you may need to train it longer.
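
To make “epoch” concrete, here is a toy, runnable sketch of the training-loop idea; the stand-in model and random frames are purely illustrative and are not the NFP code:

import torch
import torch.nn as nn

# One epoch = one pass over every consecutive (frame_t, frame_t+1) pair.
frames = [torch.rand(1, 3, 64, 64) for _ in range(10)]    # placeholder "frames"
pairs = list(zip(frames[:-1], frames[1:]))

model = nn.Conv2d(3, 3, kernel_size=3, padding=1)          # stand-in generator
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.L1Loss()

for epoch in range(3):                                      # NFP defaults to 200
    for current_frame, next_frame in pairs:
        prediction = model(current_frame)                   # predict the next frame
        loss = criterion(prediction, next_frame)            # compare to the real one
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()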

27 of 56

STEP FOUR: CONTINUE TRAINING

After ~20 hours (on Colab Pro) your machine will disconnect. Reconnect your machine and run the first couple of cells. Skip over the frame extraction cell and run the cell that includes --continue_train.

28 of 56

STYLE TRANSFER

29 of 56

30 of 56

31 of 56

STYLE TRANSFER

Some style transfer models are trained ahead of time (“Fast Style Transfer”) so that they work better in real time. My personal preference is for style transfers that don’t use a pre-trained model and instead use iterative and scaling processes.
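
As a minimal sketch of what “iterative” means here, the loop below optimizes the output image’s pixels directly against content and style losses. It is a generic Gatys-style example using torchvision, not neural-style-tf itself; sizes, layer choices, and learning rate are illustrative assumptions.

import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

device = 'cuda' if torch.cuda.is_available() else 'cpu'
vgg = models.vgg19(pretrained=True).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def load(path, size=256):
    tf = transforms.Compose([transforms.Resize((size, size)), transforms.ToTensor()])
    return tf(Image.open(path).convert('RGB')).unsqueeze(0).to(device)

def features(x, layers=(1, 6, 11, 20)):           # a handful of VGG layers
    out = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            out.append(x)
    return out

def gram(f):                                       # style compared via Gram matrices
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

content = load('floracon1.jpg')                    # filenames borrowed from the slides
style = load('rodina.jpg')
img = content.clone().requires_grad_(True)         # start from the content image
opt = torch.optim.Adam([img], lr=0.02)

style_grams = [gram(f).detach() for f in features(style)]
content_feats = [f.detach() for f in features(content)]

for step in range(300):                            # every iteration nudges the pixels
    feats = features(img)
    loss = F.mse_loss(feats[-1], content_feats[-1])
    loss = loss + sum(F.mse_loss(gram(f), g) for f, g in zip(feats, style_grams))
    opt.zero_grad()
    loss.backward()
    opt.step()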

32 of 56

STYLE TRANSFER

Neural-Style-TF (my fork)

Neural-Style-PT

Comparison video

33 of 56

Style

Content

34 of 56

DEMO: NEURAL-STYLE-TF

Colab Notebook

35 of 56

At its most basic:

python neural_style.py --content_img floracon1.jpg --style_imgs rodina.jpg

(Any parameters not set in the command use their defaults. In this case that means --max_size is 512 and --max_iterations is 1000.)

python neural_style.py --content_img floracon1.jpg --style_imgs rodina.jpg --max_size 1400 --max_iterations 500 --img_output_dir ./floracon1-rodina

36 of 56

python neural_style.py

--content_img floracon1.jpg

--style_imgs rodina.jpg

--max_size 1400

--max_iterations 500

--img_output_dir ./floracon1-rodina

37 of 56

--max_iterations 100

38 of 56

--max_iterations 200

39 of 56

--max_iterations 300

40 of 56

--max_iterations 400

41 of 56

--max_iterations 500

42 of 56

--max_iterations 600

43 of 56

--max_iterations 700

44 of 56

--max_iterations 800

45 of 56

--max_iterations 800

46 of 56

--style_scale 1.0

(default value)

47 of 56

--style_scale 0.75

48 of 56

--style_scale 0.5

49 of 56

--style_scale 0.25

50 of 56

--content_weight 0e0

--init_img_type random

(default: --seed 0

--style_scale 0.25)

51 of 56

--style_imgs davy2.jpg rodina.jpg

--style_imgs_weight 0.5 0.5

52 of 56

--style_imgs davy2.jpg rodina.jpg

--style_imgs_weight 0.25 0.75

53 of 56

--style_imgs davy2.jpg rodina.jpg

--style_imgs_weight 0.75 0.25

54 of 56

--original_colors

55 of 56

HOMEWORK

Have a (Pix2Pix OR CycleGAN/MUNIT) OR StyleGAN model for the following Monday

Pix2Pix:

  • 2 aligned image folders (300-500 images each)

CycleGAN/MUNIT:

  • 2 unaligned image folders (300-500 images each)

StyleGAN:

  • One dataset (1000+ images)

56 of 56

HOMEWORK

Train an NFP model

  • Run your model until the session disconnects, then continue training until it hits 200 epochs (or more)