1 of 13

AI Society

Evolving Neural Networks

2 of 13

Movie Night

CDS B63 @7:00pm

3 of 13

XOR


[Network diagram: a small feed-forward network that solves XOR, with connection weights 1.0, 1.0, -2.0, 1.0, 1.0, -1.0]

4 of 13

But how did we get here?


[Network diagram: the same XOR network evaluated on inputs A = 1, B = 1; the hidden neuron outputs 1 and the output neuron outputs 0]

Output neuron: max(0, 1*1 + 1*1 + -2.0*1 + 0) = 0
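
A minimal Python sketch of this forward pass, assuming the wiring implied by the diagram (the hidden neuron takes both inputs with weight 1.0 and bias -1.0; the output neuron takes both inputs with weight 1.0 each and the hidden neuron with weight -2.0):

  def relu(x):
      return max(0.0, x)

  def xor_net(a, b):
      # Hidden neuron: both inputs weighted 1.0, bias -1.0 (assumed wiring)
      h = relu(1.0 * a + 1.0 * b - 1.0)
      # Output neuron: inputs weighted 1.0 each, hidden neuron weighted -2.0
      return relu(1.0 * a + 1.0 * b - 2.0 * h)

  for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
      print(a, b, xor_net(a, b))  # prints the XOR truth table: 0.0, 1.0, 1.0, 0.0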

5 of 13

Learning


  • Backpropagation (which we will learn about later)
  • Genetic Evolution (this one is “easy”)

Most learning methods work off of some kind of “reward” or “loss” function.

6 of 13

Squared Loss Function


Imagine you have a dataset of inputs matched to outputs (in other words, a function):

  Input A | Input B | “Expected” Output
  --------|---------|------------------
     1    |    1    |         0
     1    |    0    |         1
     0    |    1    |         1
     0    |    0    |         0
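
In Python, that dataset might be written as a list of (inputs, expected output) pairs (a sketch; this is just one possible representation):

  dataset = [((1, 1), 0),
             ((1, 0), 1),
             ((0, 1), 1),
             ((0, 0), 0)]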

7 of 13

Squared Loss Function

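For each row of the dataset, take the difference between the expected and the actual output, square it, and sum over all rows. In LaTeX form (consistent with the worked example on the next two slides):

  L = \sum_i (\text{expected}_i - \text{actual}_i)^2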

8 of 13

Squared Loss Function


  “Expected” Output | “Actual” Output | Diff
  ------------------|-----------------|-----
          1         |       0.9       | 0.1
          1         |       0.1       | 0.9
          0         |       0.4       | 0.4
          0         |       0.6       | 0.6

9 of 13

Squared Loss Function


  Diff | Squared Diff
  -----|-------------
  0.1  |     0.01
  0.9  |     0.81
  0.4  |     0.16
  0.6  |     0.36

  Total Diff: 0.01 + 0.81 + 0.16 + 0.36 = 1.34
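
The same computation as a small Python function (a sketch; the function name is illustrative):

  def squared_loss(expected, actual):
      # Sum of squared differences between expected and actual outputs
      return sum((e - a) ** 2 for e, a in zip(expected, actual))

  print(squared_loss([1, 1, 0, 0], [0.9, 0.1, 0.4, 0.6]))  # ~1.34 (floating point)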

10 of 13

Learning


We can randomly change a neural network by (see the sketch after this list):

  1. Adding a new hidden neuron
  2. Adding a new connection
  3. Randomly shifting the weight of a connection
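
A minimal sketch of those three mutations in Python, assuming a network stored as a dict with a list of neurons and a list of weighted connections (all names and numeric ranges here are illustrative choices, not a fixed scheme):

  import random

  def mutate(network):
      choice = random.choice(["add_neuron", "add_connection", "shift_weight"])
      if choice == "add_neuron":
          # 1. Add a new hidden neuron
          network["neurons"].append({"bias": 0.0})
      elif choice == "add_connection":
          # 2. Add a new connection between two random neurons
          src = random.randrange(len(network["neurons"]))
          dst = random.randrange(len(network["neurons"]))
          network["connections"].append(
              {"src": src, "dst": dst, "weight": random.uniform(-1.0, 1.0)})
      else:
          # 3. Randomly shift the weight of an existing connection
          if network["connections"]:
              connection = random.choice(network["connections"])
              connection["weight"] += random.gauss(0.0, 0.1)
      return network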

11 of 13

Learning


Let’s replicate evolution in nature. Let’s start with 100 neural networks, then (see the loop sketched after this list):

  • Score them using the loss function
  • “Kill off” the worst-scoring networks (the ones with the highest loss)
  • “Copy-paste” the good neural networks onto the dead ones
  • Randomly mutate the new networks
  • Repeat
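
That loop as a Python sketch, reusing the hypothetical mutate helper from earlier and assuming score(network) returns the network’s loss on the dataset (population size, survivor fraction, and generation count are arbitrary picks):

  import copy
  import random

  def evolve(population, score, generations=100):
      for _ in range(generations):
          # Score every network with the loss function (lower is better)
          population.sort(key=score)
          # "Kill off" the worst half and keep the rest
          survivors = population[:len(population) // 2]
          # "Copy-paste" random survivors onto the dead slots, then mutate the copies
          offspring = [mutate(copy.deepcopy(random.choice(survivors)))
                       for _ in range(len(population) - len(survivors))]
          population = survivors + offspring
      return min(population, key=score)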

12 of 13

Evolution


When we repeat those steps, we expect our loss to approach 0… In other words, we expect to solve our problem!

13 of 13

We’ve done it
