AI Society
Evolving Neural Networks
Movie Night
CDS B63 @7:00pm
XOR
[Diagram: a small ReLU network that computes XOR. The labeled weights are 1.0, -2.0, 1.0, 1.0, 1.0, -1.0.]
But how did we get here?
[Diagram: the same network evaluated on inputs 1 and 1; the output unit computes max(0, 1*1 + 1*1 + -2.0*1 + 0) = 0, matching XOR(1, 1) = 0.]
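As a sanity check, here is a short Python sketch of that network, assuming the diagram wires both inputs into one hidden ReLU unit (weights 1.0 and 1.0, bias -1.0) and then feeds both inputs plus that hidden unit into the output ReLU unit (weights 1.0, 1.0, -2.0, bias 0):

def relu(x):
    return max(0.0, x)

def xor_net(a, b):
    # Hidden ReLU unit: inputs a, b with weights 1.0, 1.0 and bias -1.0
    h = relu(1.0 * a + 1.0 * b + -1.0)
    # Output ReLU unit: inputs a, b, h with weights 1.0, 1.0, -2.0 and bias 0
    return relu(1.0 * a + 1.0 * b + -2.0 * h + 0)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))  # 0.0, 1.0, 1.0, 0.0

Running it on inputs 1 and 1 reproduces the slide's calculation: max(0, 1*1 + 1*1 + -2.0*1 + 0) = 0.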
Learning
Most learning methods work off of some kind of “reward” or “loss” function that scores how well the network is currently doing
Squared Loss Function
Imagine you have a dataset of inputs matched to outputs (in other words, a function):
Input A | Input B | “Expected” Output |
1 | 1 | 0 |
1 | 0 | 1 |
0 | 1 | 1 |
0 | 0 | 0 |
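In code, this dataset is nothing more than a list of (inputs, expected output) pairs; for example, in Python:

# The XOR truth table as a dataset of (inputs, expected output) pairs
dataset = [((1, 1), 0), ((1, 0), 1), ((0, 1), 1), ((0, 0), 0)]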
Squared Loss Function
Squared Loss Function
“Expected” Output | “Actual” Output | Diff |
1 | 0.9 | 0.1 |
1 | 0.1 | 0.9 |
0 | 0.4 | 0.4 |
0 | 0.6 | 0.6 |
Squared Loss Function
Diff | Squared Diff | Total Squared Diff |
0.1 | 0.01 | … |
0.9 | 0.81 | … |
0.4 | 0.16 | … |
0.6 | 0.36 | 1.34 |
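The same calculation as a short Python sketch (the “actual” outputs here are the hypothetical ones from the table above):

expected = [1, 1, 0, 0]
actual = [0.9, 0.1, 0.4, 0.6]

# Squared loss: square each difference, then add them all up
loss = sum((e - a) ** 2 for e, a in zip(expected, actual))
print(loss)  # ≈ 1.34  (0.01 + 0.81 + 0.16 + 0.36)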
Learning
We can randomly change a neural network by adding small random values to (“mutating”) its weights and biases
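A minimal sketch of one way to do that in Python, assuming the weights live in a flat list and each one gets nudged by a little Gaussian noise:

import random

def mutate(weights, strength=0.1):
    # Return a copy of the weights with small random noise added to each one
    return [w + random.gauss(0, strength) for w in weights]

weights = [1.0, -2.0, 1.0, 1.0, 1.0, -1.0]  # e.g. the XOR weights from earlier
print(mutate(weights))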
Learning
Let’s replicate what evolution does in nature. Start with 100 neural networks, then:
Measure each network’s loss on our dataset
Keep the networks with the lowest loss
Copy and randomly mutate the survivors until we’re back up to 100 networks
Repeat
Evolution
When we repeat those steps, we expect our loss function to approach 0… In other words, we expect to solve our problem!
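Here is a rough, self-contained Python sketch of that loop for the XOR problem. The population size of 100 comes from the slide; the survivor count, mutation strength, and number of generations are arbitrary choices for illustration:

import random

DATASET = [((1, 1), 0), ((1, 0), 1), ((0, 1), 1), ((0, 0), 0)]

def relu(x):
    return max(0.0, x)

def forward(w, a, b):
    # Same shape as the hand-wired XOR network: one hidden ReLU unit,
    # one output ReLU unit, seven weights/biases in total
    h = relu(w[0] * a + w[1] * b + w[2])
    return relu(w[3] * a + w[4] * b + w[5] * h + w[6])

def loss(w):
    # Squared loss over the whole dataset
    return sum((expected - forward(w, a, b)) ** 2 for (a, b), expected in DATASET)

def mutate(w, strength=0.1):
    return [x + random.gauss(0, strength) for x in w]

# Start with 100 random networks
population = [[random.uniform(-1, 1) for _ in range(7)] for _ in range(100)]

for generation in range(500):
    population.sort(key=loss)          # best (lowest loss) networks first
    survivors = population[:20]        # keep the best 20
    # Refill the population by copying and mutating the survivors
    population = survivors + [mutate(random.choice(survivors)) for _ in range(80)]

best = min(population, key=loss)
print("best loss:", loss(best))
for (a, b), expected in DATASET:
    print(a, b, "->", round(forward(best, a, b), 2))

With enough generations the best network’s loss usually drops close to 0, i.e. it learns XOR, though any single random run is not guaranteed to get there.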
We’ve done it