Deep Learning (DEEP-0001)
Prof. André E. Lazzaretti
https://sites.google.com/site/andrelazzaretti/graduate-courses/deep-learning-cpgei/2025
2 – Supervised Learning
Supervised learning
Supervised learning overview
Notation:
Variables are always Roman letters:
  normal face = scalar
  bold = vector
  capital bold = matrix
Functions always use square brackets:
  normal face = returns a scalar
  bold = returns a vector
  capital bold = returns a matrix
Notation example:
Structured or tabular data
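A small illustration of the convention (my own example, not taken from the slides) for a scalar, a vector, a matrix, and a function of a vector input:

\[
x \in \mathbb{R}, \qquad
\mathbf{x} \in \mathbb{R}^{D}, \qquad
\mathbf{X} \in \mathbb{R}^{D \times I}, \qquad
y = \mathrm{f}[\mathbf{x}, \boldsymbol{\phi}]
\]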
Model
Parameters are always Greek letters
Loss function:
\[
L\bigl[\boldsymbol{\phi}, \{x_i, y_i\}\bigr],
\]
or for short:
\[
L[\boldsymbol{\phi}]
\]
Returns a scalar that is smaller when the model maps inputs to outputs better
Training
Find the parameter values that minimize the loss
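Training can be written compactly as minimizing the loss over the parameters (standard formulation, with symbols assumed to match the slides):

\[
\hat{\boldsymbol{\phi}} = \underset{\boldsymbol{\phi}}{\operatorname{argmin}}\; L\bigl[\boldsymbol{\phi}\bigr]
\]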
Testing
Example: 1D Linear regression model
\[
y = \mathrm{f}[x, \boldsymbol{\phi}] = \phi_0 + \phi_1 x
\]
where the parameter \(\phi_0\) is the y-offset and \(\phi_1\) is the slope.
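The 1D linear model can be sketched in a couple of lines of Python (the names `linear_model`, `phi0`, `phi1` are mine, not from the slides):

```python
def linear_model(x, phi0, phi1):
    """1D linear regression model: phi0 is the y-offset, phi1 the slope."""
    return phi0 + phi1 * x

# A line with y-offset 1.0 and slope 2.0 evaluated at x = 3.0:
print(linear_model(3.0, 1.0, 2.0))  # 7.0
```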
Example: 1D Linear regression training data
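Training data consists of input/output pairs {x_i, y_i}. A synthetic example (the underlying line and noise level here are invented for illustration, not the data shown on the slides):

```python
import random

random.seed(0)

# Synthetic training pairs {x_i, y_i} scattered around the line y = 1 + 2x.
xs = [random.uniform(0.0, 1.0) for _ in range(12)]
ys = [1.0 + 2.0 * x + random.gauss(0.0, 0.1) for x in xs]
```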
Example: 1D Linear regression loss function
Loss function ("least squares loss function"):
\[
L[\boldsymbol{\phi}] = \sum_{i=1}^{I} \bigl(\phi_0 + \phi_1 x_i - y_i\bigr)^2
\]
This sums the squared deviations between the model predictions \(\mathrm{f}[x_i, \boldsymbol{\phi}]\) and the observed outputs \(y_i\) over the \(I\) training pairs.
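The least squares loss is a one-liner in Python (function name is my choice):

```python
def least_squares_loss(phi0, phi1, xs, ys):
    """Sum of squared differences between model predictions and targets."""
    return sum((phi0 + phi1 * x - y) ** 2 for x, y in zip(xs, ys))

# A line that passes exactly through the data gives zero loss:
print(least_squares_loss(1.0, 2.0, [0.0, 1.0], [1.0, 3.0]))  # 0.0
```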
Example: 1D Linear regression training
Starting from an initial guess, repeatedly adjust the parameters downhill on the loss surface until the loss stops decreasing.
This technique is known as gradient descent
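A minimal gradient descent loop for this model, using the analytic gradients of the least squares loss (the learning rate and step count are arbitrary choices of mine, not values from the slides):

```python
def train(xs, ys, lr=0.01, steps=5000):
    """Minimize the least squares loss L[phi] by gradient descent."""
    phi0, phi1 = 0.0, 0.0  # initial parameter guess
    for _ in range(steps):
        # Gradients of L = sum_i (phi0 + phi1*x_i - y_i)^2
        g0 = sum(2 * (phi0 + phi1 * x - y) for x, y in zip(xs, ys))
        g1 = sum(2 * (phi0 + phi1 * x - y) * x for x, y in zip(xs, ys))
        # Step downhill against the gradient
        phi0 -= lr * g0
        phi1 -= lr * g1
    return phi0, phi1

# Noise-free data on the line y = 1 + 2x recovers the parameters:
phi0, phi1 = train([0.0, 0.5, 1.0], [1.0, 2.0, 3.0])
print(round(phi0, 3), round(phi1, 3))  # 1.0 2.0
```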
Possible objections
Example: 1D Linear regression testing
Where are we going?