CS 451 Quiz 29
Shallow neural nets
* Required
In the updated notation for neural nets, superscripts
*
in square brackets [i] denote training examples; in round brackets (i) denote nodes
in square brackets [i] denote nodes; in round brackets (i) denote network layers
in square brackets [i] denote network layers; in round brackets (i) denote training examples
In the updated notation, which of the following is considered a 3-layer neural net?
*
one input layer, one hidden layer, one output layer
one input layer, two hidden layers, one output layer
Stacking up all training examples as column vectors from left to right in a matrix X allows us to do forward propagation in vectorized form without any transpose operations
*
True
False
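For reference, a minimal numpy sketch of the claim above (the dimensions here are illustrative assumptions: 3 input features, 4 hidden units, m = 5 examples). With each training example stored as a column of X, the layer computation W @ X + b needs no transposes:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 5                             # number of training examples
X = rng.standard_normal((3, m))   # each COLUMN x^(i) is one training example

W1 = rng.standard_normal((4, 3))  # layer-1 weights: (hidden units, input features)
b1 = np.zeros((4, 1))             # bias column broadcasts across the m examples

# Vectorized forward step: no transpose operations needed
Z1 = W1 @ X + b1                  # shape (4, m): one activation column per example
A1 = np.tanh(Z1)

# Same result as processing the examples one column at a time
Z1_loop = np.stack([W1 @ X[:, i] + b1.ravel() for i in range(m)], axis=1)
assert np.allclose(Z1, Z1_loop)
```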
For hidden layers, which activation function usually works best? Order from best (first) to worst (last):
*
tanh, sigmoid, ReLU
ReLU, sigmoid, tanh
ReLU, tanh, sigmoid
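A quick numeric sketch of why this ordering is usually cited (the test points here are arbitrary): sigmoid and tanh gradients saturate toward 0 for large |z|, while ReLU keeps a constant gradient of 1 for positive z:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

z = np.array([-10.0, 0.0, 10.0])

# Gradient of each activation at these points
sig_grad  = sigmoid(z) * (1 - sigmoid(z))   # ~0 at |z| = 10 (saturates)
tanh_grad = 1 - np.tanh(z) ** 2             # ~0 at |z| = 10 (saturates)
relu_grad = (z > 0).astype(float)           # exactly 1 for any positive z

assert sig_grad[2] < 1e-4 and tanh_grad[2] < 1e-4 and relu_grad[2] == 1.0
```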
Why do we need nonlinear activation functions?
*
Without them, the computation done by all layers could be "collapsed" into a single layer since a combination of many linear functions is still a linear function
I have no idea since this was covered in an optional video I didn't watch
Even though I didn't watch the optional video, I remember from earlier in the course that the first answer is correct
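The "collapse" in the first answer can be checked directly (layer shapes here are illustrative assumptions): with identity activations, two stacked linear layers equal one linear layer with W = W2 W1 and b = W2 b1 + b2:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 10))

# Two "layers" with a linear (identity) activation
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal((4, 1))
W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal((2, 1))
A2 = W2 @ (W1 @ X + b1) + b2

# Collapse both layers into a single equivalent linear layer
W = W2 @ W1
b = W2 @ b1 + b2
assert np.allclose(A2, W @ X + b)   # the two-layer net is just one linear map
```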
Suppose g(z) = a. Which activation function has derivative g'(z) = 1 - a^2 ?
*
sigmoid
tanh
ReLU
Leaky ReLU
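The identity g'(z) = 1 - a^2 for a = tanh(z) can be verified numerically against a central finite-difference approximation:

```python
import numpy as np

z = np.linspace(-3, 3, 101)
a = np.tanh(z)

# The claimed identity: g'(z) = 1 - a^2
analytic = 1 - a ** 2

# Central finite-difference approximation of the derivative
h = 1e-6
numeric = (np.tanh(z + h) - np.tanh(z - h)) / (2 * h)

assert np.allclose(analytic, numeric, atol=1e-8)
```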
Which of the following is NOT part of the VECTORIZED backprop algorithm?
*
A
B
C
D
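For reference, a minimal numpy sketch of the standard vectorized backprop equations for a 2-layer (one hidden layer) net with tanh hidden units and a sigmoid output; the sizes are illustrative assumptions, and gradients for all m examples are computed at once with no per-example loop:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 8
X = rng.standard_normal((3, m))                    # examples as columns
Y = rng.integers(0, 2, size=(1, m)).astype(float)  # binary labels

W1, b1 = rng.standard_normal((4, 3)) * 0.01, np.zeros((4, 1))
W2, b2 = rng.standard_normal((1, 4)) * 0.01, np.zeros((1, 1))

# Forward pass
Z1 = W1 @ X + b1
A1 = np.tanh(Z1)
Z2 = W2 @ A1 + b2
A2 = 1 / (1 + np.exp(-Z2))          # sigmoid output

# Vectorized backprop over all m examples
dZ2 = A2 - Y
dW2 = (dZ2 @ A1.T) / m
db2 = np.sum(dZ2, axis=1, keepdims=True) / m
dZ1 = (W2.T @ dZ2) * (1 - A1 ** 2)  # tanh derivative: g'(z) = 1 - a^2
dW1 = (dZ1 @ X.T) / m
db1 = np.sum(dZ1, axis=1, keepdims=True) / m

assert dW1.shape == W1.shape and dW2.shape == W2.shape
```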
I'm still on campus and attending class today and I'm happy to receive 3 points for that :)
*
True
False