CS 451 Quiz 6
To address overfitting, one can (check all that apply)
reduce the number of features
increase the number of features
The regularization term affects all parameters except theta_0
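For reference (not part of the quiz), a minimal numpy sketch of the regularized linear-regression cost, showing the convention the statement above refers to: the penalty sums over `theta[1:]`, so the intercept `theta[0]` is not regularized. The variable names are illustrative, not from the quiz.

```python
import numpy as np

def regularized_cost(theta, X, y, lam):
    """Regularized linear-regression cost. Note theta[1:] in the penalty:
    the intercept theta[0] is left out of the regularization term."""
    m = len(y)
    err = X @ theta - y
    return (err @ err) / (2 * m) + lam / (2 * m) * np.sum(theta[1:] ** 2)

# Tiny example: the first column of ones is the intercept feature.
X = np.array([[1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 2.0])
theta = np.array([5.0, 0.0])  # only the (unpenalized) intercept is non-zero

# Because theta[1:] is all zero, lambda has no effect on the cost here:
print(regularized_cost(theta, X, y, 0.0))    # 6.25
print(regularized_cost(theta, X, y, 10.0))   # still 6.25
```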
In the regularized cost function, the parameter lambda controls the weight of the regularization term. What happens if lambda is very large?
We get overfitting
We get underfitting
Gradient descent fails to converge
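As a hedged illustration of the effect being asked about (not part of the quiz): with a very large lambda the penalty dominates the cost, all non-intercept parameters are driven toward zero, and the model becomes too simple for the data. The sketch below fits perfectly linear data via the regularized closed form.

```python
import numpy as np

# Perfectly linear data: y = 2x.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # intercept column + x
y = np.array([2.0, 4.0, 6.0])

def ridge_fit(X, y, lam):
    # Regularized normal equation; the identity's top-left entry is zeroed
    # so the intercept is not penalized.
    E = np.eye(X.shape[1])
    E[0, 0] = 0.0
    return np.linalg.solve(X.T @ X + lam * E, X.T @ y)

print(ridge_fit(X, y, 0.0))   # slope ~ 2: the model fits the data
print(ridge_fit(X, y, 1e6))   # slope ~ 0: nearly flat line -> underfitting
```

With the huge lambda the fit collapses to roughly a constant at the mean of y, i.e. it underfits.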
Regularization means (check one)
adding an extra term to the cost function that makes gradient descent converge faster
adding an extra term to the cost function that encourages small parameter values
adding polynomial features so the model becomes more expressive
limiting the number of non-zero parameters
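The option about encouraging small parameter values is usually realized as an extra term in the gradient; a minimal sketch (not part of the quiz, names are illustrative) of one regularized gradient-descent step:

```python
import numpy as np

def gradient_step(theta, X, y, alpha, lam):
    """One gradient-descent step on the regularized cost: the extra
    (lam/m)*theta[1:] term shrinks every parameter except the intercept."""
    m = len(y)
    grad = X.T @ (X @ theta - y) / m     # gradient of the data-fit term
    grad[1:] += (lam / m) * theta[1:]    # gradient of the penalty term
    return theta - alpha * grad

X = np.array([[1.0, 1.0], [1.0, 2.0]])
theta = np.array([0.0, 2.0])
y = X @ theta   # zero training error, so the data gradient vanishes

# With lam = 0 nothing changes; with lam > 0 the slope decays toward zero:
print(gradient_step(theta, X, y, alpha=0.1, lam=0.0))   # [0., 2.]
print(gradient_step(theta, X, y, alpha=0.1, lam=1.0))   # [0., 1.9]
```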
An algorithm that overfits
predicts both training examples and unseen test examples well
predicts training examples well, but does not predict unseen test examples well
does not predict the training examples well, but does predict unseen test examples well
predicts neither the training examples nor unseen test examples well
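The second option can be demonstrated in a few lines of numpy (a sketch, not part of the quiz): a degree-7 polynomial interpolates 8 noisy training points exactly, yet misses unseen points drawn from the same underlying curve.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 8)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.2, 8)

# A degree-7 polynomial has 8 coefficients, so it passes through all
# 8 noisy training points exactly: near-zero training error.
coef = np.polyfit(x_train, y_train, 7)
train_err = np.mean((np.polyval(coef, x_train) - y_train) ** 2)

# Unseen test points from the same underlying curve:
x_test = np.linspace(0.05, 0.95, 8)
y_test = np.sin(2 * np.pi * x_test)
test_err = np.mean((np.polyval(coef, x_test) - y_test) ** 2)

print(train_err)  # ~ 0: the model memorized the training set, noise included
print(test_err)   # much larger: it does not generalize
```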
Once we add a regularization term to linear regression we can only use gradient descent for minimization, since there is no normal equation for an analytical solution
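For reference when evaluating the statement above: regularized linear regression does still admit an analytical solution; the normal equation just gains one term (a sketch in the usual notation, where E is the identity with its top-left entry zeroed so that theta_0 is not penalized):

```latex
\theta = \left( X^{\top} X + \lambda E \right)^{-1} X^{\top} y,
\qquad
E = \begin{pmatrix} 0 & & & \\ & 1 & & \\ & & \ddots & \\ & & & 1 \end{pmatrix}
```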
We can use regularization only with linear regression, not with logistic regression
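As a hedged illustration relevant to the statement above (not part of the quiz): the same penalty term plugs into the logistic-regression cross-entropy cost unchanged. Names below are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_cost_reg(theta, X, y, lam):
    """Regularized logistic-regression cost: the usual cross-entropy term
    plus the same lambda/(2m) * sum(theta[1:]**2) penalty used for
    linear regression (intercept excluded)."""
    m = len(y)
    h = sigmoid(X @ theta)
    ce = -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))
    return ce + lam / (2 * m) * np.sum(theta[1:] ** 2)

X = np.array([[1.0, -1.0], [1.0, 1.0]])
y = np.array([0.0, 1.0])
# With theta = 0 every prediction is 0.5, giving cost log(2) ~ 0.693:
print(logistic_cost_reg(np.zeros(2), X, y, lam=1.0))
```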
What would the following numpy code be in Octave?
a = [1, 2, 3; 4, 5, 6];
a = [1, 4; 2, 5; 3, 6];
a = [[1, 2, 3], [4, 5, 6]];
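The numpy snippet this question refers to is not reproduced here; as a hedged illustration of the numpy/Octave correspondence, the array below is the numpy equivalent of the first Octave option:

```python
import numpy as np

# numpy nests each row as an inner list:
a = np.array([[1, 2, 3],
              [4, 5, 6]])   # Octave: a = [1, 2, 3; 4, 5, 6];

print(a.shape)              # (2, 3) -- 2 rows, 3 columns
print(a.T)                  # transpose; Octave a' gives [1, 4; 2, 5; 3, 6]
```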
Suppose N is a 3x3 numpy array and M is a 3x3 Octave matrix. What would the expression N be in Octave?