COSI 115b - Lab 4
2/7/2025
Review
Perceptron
Perceptron Loss
L(ŷ, y) = (ŷ - y)·z, where z = w·x + b is the raw score; the gradient with respect to the weights is (ŷ - y)·x.
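A minimal sketch of the prediction and update in Python (the function names, the lr default, and folding the bias in as an always-on feature are assumptions for illustration, not from the lab):

import numpy as np

def predict(w, x):
    # raw score z = w . x, with the bias folded in as a feature that is always 1
    z = np.dot(w, x)
    return 1 if z > 0 else 0   # pos = 1, neg = 0

def perceptron_update(w, x, y, lr=0.1):
    # gradient of the perceptron loss with respect to the weights: (y_hat - y) * x
    y_hat = predict(w, x)
    grad = (y_hat - y) * x
    return w - lr * grad       # no change when y_hat == y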
Perceptron Example
Suppose pos = 1 and neg = 0.
We have feature weights:
“dog”: -0.5
“like”: 0.2
“tacos”: -0.3
“not”: -0.7
“coffee”: 0.4
bias: 0.1
We get a neg example: “The dog does not like fish”.
What is the gradient?
The gradient is (y_hat - y)x. First, what is y_hat?
z = w·x + b = -0.5 + 0.2 + (-0.7) + 0.1 = -0.9; since z < 0, y_hat = 0 (neg).
With feature order [dog, like, tacos, not, coffee, bias], x = [1, 1, 0, 1, 0, 1].
(y_hat - y)x = (0 - 0)[1, 1, 0, 1, 0, 1] = [0, 0, 0, 0, 0, 0], so the prediction is already correct.
How do we update the current weights?
[-0.5, 0.2, -0.3, -0.7, 0.4, 0.1] - (lr * [0, 0, 0, 0, 0, 0]); the gradient is zero, so the weights stay the same.
What would happen if the gold label (true class) were positive?
Then (y_hat - y) = (0 - 1) = -1, so the gradient is -[1, 1, 0, 1, 0, 1], and the update w - lr * gradient adds lr to the weights for “dog”, “like”, “not”, and the bias.
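A small sketch that runs this example end to end (the feature order and the lr value are assumptions for illustration):

import numpy as np

# feature order: [dog, like, tacos, not, coffee, bias]
w = np.array([-0.5, 0.2, -0.3, -0.7, 0.4, 0.1])
x = np.array([1, 1, 0, 1, 0, 1])    # "The dog does not like fish", bias feature always 1
y = 0                               # gold label: neg
lr = 0.1

z = np.dot(w, x)                    # -0.9
y_hat = 1 if z > 0 else 0           # 0: predicted neg
grad = (y_hat - y) * x              # [0, 0, 0, 0, 0, 0]: prediction is correct
w_new = w - lr * grad               # weights unchanged

# if the gold label had instead been positive (y = 1):
grad_pos = (y_hat - 1) * x          # -x
w_pos = w - lr * grad_pos           # adds lr to the dog, like, not, and bias weights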
POS Tagging:
The | dog | ate | tacos |
DT | NN | VBD | NNS |

NER:
Brandeis | University | is | in | Massachusetts |
B-ORG | I-ORG | O | O | B-LOC |
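POS tags label every token independently, while BIO NER tags have to be grouped into entity spans. A small sketch of that grouping (the function name bio_to_spans is just for illustration):

def bio_to_spans(tokens, tags):
    # collect (entity_text, entity_type) spans from BIO tags
    spans, current, etype = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append((" ".join(current), etype))
            current, etype = [tok], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(tok)
        else:                       # an "O" tag closes any open span
            if current:
                spans.append((" ".join(current), etype))
            current, etype = [], None
    if current:
        spans.append((" ".join(current), etype))
    return spans

tokens = ["Brandeis", "University", "is", "in", "Massachusetts"]
tags = ["B-ORG", "I-ORG", "O", "O", "B-LOC"]
print(bio_to_spans(tokens, tags))   # [('Brandeis University', 'ORG'), ('Massachusetts', 'LOC')]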