We know by now that neural networks learn by making guesses about the parameters of a function (filters in a convolutional layer, weights in a dense layer), and updating those guesses based on how closely the network's outputs match the known labels in our data.
We might also know that networks somehow do this using derivatives, and, as I learned not too long ago, that a derivative is just the rate of change of a thing.
In order to do this, we need a way to collectively compare the outputs from our network to the labels in our data. There are multiple loss functions we can use here, but I like RMSE (root mean squared error) because it's pretty straightforward mathematically, and I'm a simple man.
To understand RMSE, let's imagine that we have the following set of training data where x and y are linearly related. A linear relationship can be represented in the form y = mx + b.
In this case, m = 2 and b = 30.
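If you'd rather play along outside of Sheets, here's a quick Python sketch of that training data. (The exact x values in the sheet are my assumption; the 20-row range matches the table we'll use later.)

```python
# x and y linearly related, with m = 2 and b = 30 as above.
m_true, b_true = 2, 30
xs = list(range(1, 21))
ys = [m_true * x + b_true for x in xs]

print(list(zip(xs, ys))[:3])  # → [(1, 32), (2, 34), (3, 36)]
```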
We know that the relationship between x and y is linear, but we don't know the value of the parameters m and b.
What we can do is make a guess that m = 1 and b = 1 and see what values of y we'd predict using those values of m and b.
We can compare our predicted y values to our actual y values by subtracting one from the other and squaring the result. Taking the square root of the mean of (y_pred - y)^2 across all our rows gives us RMSE, a single number that tells us how close our guesses are to our actual labels.
The lower our RMSE, the more accurate our guesses.
RMSE = sqrt(mean((y_pred - y)^2))
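Here's that calculation as a small Python sketch (the helper name rmse is mine; the m = 1, b = 1 guesses come from above):

```python
import math

def rmse(y_pred, y_true):
    """Square root of the mean of squared differences."""
    n = len(y_true)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(y_pred, y_true)) / n)

xs = list(range(1, 21))
y_true = [2 * x + 30 for x in xs]   # the actual labels (m = 2, b = 30)
y_pred = [1 * x + 1 for x in xs]    # predictions from our guess m = 1, b = 1

print(round(rmse(y_pred, y_true), 2))  # → 39.92
```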
We can see that our guesses are not that accurate right now! That's fine though - we haven't done any optimization yet.
Let's start by visualizing our loss surface, which is just our loss function evaluated across lots of different guesses for m and b. (The gradient, which we'll get to shortly, points in the direction of steepest ascent on that surface.) Unfortunately, Google Sheets doesn't support 3D area charts, so we're going to have to make do with conditional formatting.
Our columns represent different guesses for the value of m, and our rows different guesses for the value of b. RMSE decreases as we get closer to the actual values of m and b:
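Here's a sketch of what that grid computes, assuming a small range of guesses around the true values (the exact ranges in the sheet are my invention):

```python
import math

def rmse_for(m, b, data):
    """RMSE of the line y = m*x + b against a list of (x, y) pairs."""
    return math.sqrt(sum((m * x + b - y) ** 2 for x, y in data) / len(data))

data = [(x, 2 * x + 30) for x in range(1, 21)]

# Columns are guesses for m, rows are guesses for b, as in the sheet.
print("b\\m", *range(0, 5), sep="\t")
for b in range(28, 33):
    print(b, *[round(rmse_for(m, b, data), 1) for m in range(0, 5)], sep="\t")
```

RMSE bottoms out at exactly 0 in the cell where m = 2 and b = 30.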
Imagine the loss surface above as a landscape with peaks and valleys. If we were blindfolded and dropped onto this landscape with the goal of getting to its lowest possible point, one way to accomplish our objective would be to test the ground around us with a foot, take a step wherever the descent feels steepest, and repeat.
This is what derivatives allow us to do.
In the following table, we have 20 values of x along with their actual y values. In the first row, we make guesses about the values of m and b, make a prediction about the value of y, and calculate squared error using our m and b values.
Next, we calculate what squared error would be if we added 0.01 to our guess of m. We find the derivative of our error with respect to m (how much our error changed when we made that change to m), and use that information to pick a new value of m, essentially taking a step in the direction of steepest descent. The learn parameter decides how large or small a step we take.
We do the same thing with b, copy our values of m and b to the next row, and do it all over again.
When we've completed this process against every pair of x and y values in our dataset, we've completed one epoch.
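Putting the whole per-row procedure together in Python (the learn value and the 0.01 bump mirror the description above; the exact sheet formulas and defaults are assumptions on my part):

```python
import math

def run_epoch(data, m, b, learn=0.001, bump=0.01):
    """One pass over the data: for each (x, y) row, estimate the derivative
    of squared error with respect to m (then b) by bumping the guess,
    and step in the direction of steepest descent."""
    for x, y in data:
        # How does squared error change if we add `bump` to m?
        err = (m * x + b - y) ** 2
        d_m = (((m + bump) * x + b - y) ** 2 - err) / bump
        m -= learn * d_m
        # Same question for b, using the freshly updated m.
        err = (m * x + b - y) ** 2
        d_b = ((m * x + (b + bump) - y) ** 2 - err) / bump
        b -= learn * d_b
    return m, b

data = [(x, 2 * x + 30) for x in range(1, 21)]
m, b = 1.0, 1.0
for _ in range(5):  # five epochs, matching the sheet's default
    m, b = run_epoch(data, m, b)

final_rmse = math.sqrt(sum((m * x + b - y) ** 2 for x, y in data) / len(data))
print(round(m, 2), round(b, 2), round(final_rmse, 2))
```

One thing worth noticing: RMSE drops quickly even though b barely moves at first, because b's per-row derivative doesn't scale with x the way m's does. The same lag shows up in the sheet.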
The "Run Epoch" button below completes as many epochs as we specify (5 by default) and records the results in a table, showing how RMSE changes after each epoch. Hitting "Reset" removes the recorded values and sets our guesses for both parameters back to 1.
Try messing around with the parameters and running a few epochs to see if you can build an intuition around gradient descent.