Neural Networks: a Brief History
Deniz Yuret, Feb 27, 2016
Real vs Artificial Neurons
A biological neuron
A computational neuron
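To make the contrast with the biological neuron concrete, here is a minimal sketch (not from the original slides) of what a computational neuron does, assuming NumPy vectors for the inputs and weights and tanh as an example activation:

```python
import numpy as np

def neuron(x, w, b, f=np.tanh):
    """A computational neuron: a weighted sum of the inputs plus a bias,
    passed through an activation function f."""
    return f(np.dot(w, x) + b)

# example: neuron(np.array([1.0, 0.5]), np.array([0.2, -0.3]), 0.1)
```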
Summary
Perceptrons
Perceptrons (Rosenblatt, 1958)
Perceptron algorithm
Perceptron algorithm: example run
Perceptron convergence theorem
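As a rough sketch of the update rule behind these slides (not the lecture's own code), a minimal perceptron in Python/NumPy, assuming labels in {-1, +1} and a fixed number of passes over the data:

```python
import numpy as np

def perceptron_train(X, y, epochs=10):
    """Perceptron learning rule: on each mistake, add (or subtract)
    the misclassified example to the weight vector.
    X: (n_samples, n_features), y: labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:  # mistake on this example
                w += yi * xi                   # move the boundary toward it
                b += yi
    return w, b
```

If the data are linearly separable, the convergence theorem guarantees this loop stops making mistakes after a finite number of updates.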
Perceptrons, the book (Minsky and Papert 1969)
Summary
Multilayer perceptrons
PDP (Rumelhart and McClelland, 1986)
Networks of perceptrons
Differentiable activation functions
Backpropagation algorithm
y = Wx + b “model prediction”
J = |y-y’|² “objective function”
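A minimal sketch of backpropagation for the quantities named above (model prediction y = Wx + b, objective J = |y - y'|²), extended with one hidden layer and a logistic sigmoid as the differentiable activation; the variable names and two-layer shape are illustrative assumptions, not the lecture's code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_backward(W1, b1, W2, b2, x, y_target):
    """One forward/backward pass for a tiny two-layer network.
    Because the hidden activation is differentiable, the chain rule
    can push the gradient of J = |y - y'|^2 back through every weight."""
    # forward pass
    z1 = W1 @ x + b1
    h = sigmoid(z1)            # differentiable activation
    y = W2 @ h + b2            # model prediction, y = Wx + b at the top layer
    diff = y - y_target
    J = np.sum(diff ** 2)      # objective function J = |y - y'|^2
    # backward pass (backpropagation)
    dy = 2 * diff              # dJ/dy
    dW2 = np.outer(dy, h); db2 = dy
    dh = W2.T @ dy             # gradient flowing into the hidden layer
    dz1 = dh * h * (1 - h)     # through the sigmoid: sigma' = sigma(1 - sigma)
    dW1 = np.outer(dz1, x); db1 = dz1
    return J, (dW1, db1, dW2, db2)
```

The returned gradients are then used for a gradient-descent update of each weight matrix and bias.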
Summary
Convolutional Neural Networks
The human visual system
Receptive fields of ganglion cells in the retina (Kuffler 1953)
Receptive fields of cells from the cat visual cortex (Hubel and Wiesel, 1959)
A fully connected artificial neural network
From: http://cs231n.github.io/convolutional-networks
Convolutional neural networks have sparse connectivity (LeCun, 1998)
From: http://deeplearning.net/tutorial/lenet.html
Convolutional neural networks share weights
From: http://deeplearning.net/tutorial/lenet.html
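A minimal sketch (assumed for illustration, not taken from the tutorials linked above) of the operation behind both slides: each output unit reads only a small local patch (sparse connectivity), and every patch is filtered with the same kernel (weight sharing):

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2D convolution, 'valid' mode (strictly a cross-correlation,
    as in most CNN libraries): each output looks at one local patch and
    all outputs reuse the same kernel weights."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)
    return out
```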
Activations of an example ConvNet
From: http://cs231n.github.io/convolutional-networks
Receptive fields learnt by an example ConvNet
From: http://cs231n.github.io/convolutional-networks
Summary
Recurrent neural networks
Recurrent connections
From: https://www.willamette.edu/~gorr/classes/cs449/rnn1.html
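A minimal sketch of what the recurrent connection does, assuming a vanilla tanh RNN with illustrative weight names (Wxh, Whh, Why), not the code from the page linked above:

```python
import numpy as np

def rnn_forward(inputs, Wxh, Whh, Why, bh, by):
    """Vanilla RNN: the hidden state h is fed back into the network at
    every time step (the recurrent connection), so the state can summarize
    the whole sequence seen so far."""
    h = np.zeros(Whh.shape[0])
    outputs = []
    for x in inputs:                         # one sequence element at a time
        h = np.tanh(Wxh @ x + Whh @ h + bh)  # new state depends on old state
        outputs.append(Why @ h + by)
    return outputs, h
```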
Processing sequences
From: http://karpathy.github.io/2015/05/21/rnn-effectiveness/
Example: machine translation
From: http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/
Example: generating image descriptions
From: http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/
Training RNNs
From: http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/
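A minimal sketch of backpropagation through time for the vanilla RNN above, assuming a squared-error loss at every step (the linked tutorial trains a softmax language model, so this only mirrors the shape of the computation, not its code):

```python
import numpy as np

def bptt(inputs, targets, Wxh, Whh, Why, bh, by):
    """Backpropagation through time: unroll the recurrence over the
    sequence, run an ordinary forward pass, then push the error
    gradients backward step by step."""
    # forward pass, keeping every hidden state for the backward pass
    hs = {-1: np.zeros_like(bh)}
    ys, loss = {}, 0.0
    for t, x in enumerate(inputs):
        hs[t] = np.tanh(Wxh @ x + Whh @ hs[t - 1] + bh)
        ys[t] = Why @ hs[t] + by
        loss += np.sum((ys[t] - targets[t]) ** 2)
    # backward pass through time
    dWxh, dWhh, dWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
    dbh, dby = np.zeros_like(bh), np.zeros_like(by)
    dh_next = np.zeros_like(bh)
    for t in reversed(range(len(inputs))):
        dy = 2 * (ys[t] - targets[t])        # dJ/dy at step t
        dWhy += np.outer(dy, hs[t])
        dby += dy
        dh = Why.T @ dy + dh_next            # from the output and from the future
        dz = (1 - hs[t] ** 2) * dh           # through tanh
        dWxh += np.outer(dz, inputs[t])
        dWhh += np.outer(dz, hs[t - 1])
        dbh += dz
        dh_next = Whh.T @ dz
    return loss, (dWxh, dWhh, dWhy, dbh, dby)
```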
Demo: writing Shakespeare
From: http://karpathy.github.io/2015/05/21/rnn-effectiveness/
Summary
Conclusion
We are trying to build computer models with three features: