Introduction to Machine Learning -- help us improve the scribes
The scribes of the course have been carefully proofread; still, there are always typos left.
Please use this form to report them. Each student can claim up to 5 bonus points: each math typo is worth 1 pt, each English typo 0.5 pt.
Points will be awarded after verification by one of the lecturers or the TA.
Just *add a new row for every new discovery*; do not repeat typos already pointed out by others.
The scribes are listed below in order: first all lectures, then all recitations.
You can also point out typos in the slides, at 0.5 pt each (English or math); put each lecture's slide typos under the area of that lecture's scribe.

Lecture | Page | Line | What is written | What should be written | Name of first student to identify this | Student's email | Verified by a lecturer
1 (Introduction) | 3 | 4th from the end | "theme to clusters." | "them to clusters." | Saleet Klein | saleet.k@gmail.com | Verified and corrected
1 | 1 | 5th from the end | increase in they prediction | increase in their prediction | Orit Moskovich | orit.mosko@gmail.com | Verified and corrected
1 | 3 | 2nd from the end | "set of example" | "set of examples" | Orit Moskovich | orit.mosko@gmail.com | Verified and corrected
1 | 4 | 2 | in the sum: j is in S_i | j: x_j is in S_i (since S_i was defined as a set of points, not a set of indices, as we can see in the 'assign' part) | Iddan Golomb | igolomb@gmail.com | corrected
1 | 4 | 10 | in the sum in the update rule: j is in S_i | j: x_j is in S_i (same explanation as before) | Iddan Golomb | igolomb@gmail.com | looks fine now
1 | 4 | 2nd from the end | for the plan (d = 2) | for the plane (d = 2) | Saleet Klein | saleet.k@gmail.com | Verified and corrected
1 | 4 | 3rd from the end | Veronoi diagrams | Voronoi diagrams | Orit Moskovich | orit.mosko@gmail.com | Verified and corrected
1 | 4 | 5th from the end | The number of iteration | the number of iterations | Michal Faktor | faktorm1@post.tau.ac.il | corrected
1 | 1 | 7 | he | the | Oren Avram | orenavr2@mail.tau.ac.il | corrected
1 | 1-2 (in the recitation) | the end of page 1 and the start of page 2 | where L(a; b) = 0 iff a = b and otherwise L(a; b) = 0 | I am not sure, but it does not make sense that L(a, b) = 0 in both cases | dov danon | dov84d@gmail.com | corrected
2 (Bayesian Inference) | 18 | 1 | Lecture 1: October 13 | Maximum Likelihood | Roy Mitz | roymitz@gmail.com | corrected
2 | 1 | 2nd from the end | a = b and otherwise L(a; b) = 0 | Therefore | Roy Mitz | roymitz@gmail.com | corrected
2 | 1 | 12 | set of n point | set of n points | Saleet Klein | saleet.k@gmail.com | corrected
2 | 1 | 6th from the end | $||\mu_i-x_j||$ | $||\mu_i-x_j||^2$ (missing square in the minimization equation) | Orit Moskovich | orit.mosko@gmail.com | corrected
2 | 1 | 6th from the end | in the sum: j is in S_i | j: x_j is in S_i (since S_i was defined as a set of points, not a set of indices, as we can see in the 'assign' part) | Asaf Ezra | asaf244@gmail.com | corrected
2 | lecture slide 42 | last line | in the sum: j is in S_i | j: x_j is in S_i (same explanation as before) | Asaf Ezra | asaf244@gmail.com |
2 | 2 | 10 | namely | Namely (capital) | Saleet Klein | saleet.k@gmail.com | corrected
2 | 2 | 1 | what should be | What should be (capital) | Asaf Ezra | asaf244@gmail.com | corrected
2 | 2 | 6 | historical data and | Historical data and (capital) | Asaf Ezra | asaf244@gmail.com | corrected
2 | 4 | 1 (below the frame) | draws | drawn | Saleet Klein | saleet.k@gmail.com | corrected
2 | 4 | 3, 11 | L(\mu, \sigma); x1; x2; ...; xn) | L((\mu, \sigma); x1; x2; ...; xn) [missing bracket] | Orit Moskovich | orit.mosko@gmail.com | corrected
2 | 4 | 11 | the sum is 1 to n | change every m in the equation to n (in two places: m/2 and m*log(sigma)) | Saleet Klein | saleet.k@gmail.com | corrected
2 | 4 | 13 | derivative of L according to \mu | should be the derivative of l according to \mu | Asaf Ezra | asaf244@gmail.com | corrected
2 | 5 | 2 | derivative of L according to \sigma | should be the derivative of l according to \sigma | Asaf Ezra | asaf244@gmail.com | corrected
2 | 5 | 7 from the end | x_i < theta* | x_i <= theta* (since there can be a realization in which x_i is exactly the upper bound) | Iddan Golomb | igolomb@gmail.com | corrected
2 | 5 | 6 from the end | E[theta^_ML] < theta* | E[theta^_ML] <= theta* (same explanation as the previous line) | Iddan Golomb | igolomb@gmail.com | corrected
2 | 7 | 7th from the end | and therefor | and therefore | Orit Moskovich | orit.mosko@gmail.com | corrected
2 | 8 | 5 | (without the Lagrangian multiplies | (without the Lagrangian multiplies) | Saleet Klein | saleet.k@gmail.com | corrected
2 | 8 | 10 from the end | there | There | Orit Moskovich | orit.mosko@gmail.com | corrected
2 | 8 | 8 from the end | many | Many | Iddan Golomb | igolomb@gmail.com | corrected
2 | 10 | 1 | n+1 point | n+1 points | Iddan Golomb | igolomb@gmail.com | corrected
2 | 10 | 4 | derivative of F | should be multiplied by 1/2 | Saleet Klein | saleet.k@gmail.com | corrected
2 | 10 | 12 | viewed a | viewed as | Orit Moskovich | orit.mosko@gmail.com | corrected
2 | 10 | last equation | wrong equation with correct ending | (1) sigma is 1: it should be removed from the denominator; (2) e^{\mu}^2 should be e^{-0.5\mu}^2; (3) in the second line, -\mu^2/2 should be +\mu^2 [remove the "/2", and the minus should be a plus]; (4) in the third line inside the sum: should be (2 + x_i\mu_i + (n+1)\mu^2) | Saleet Klein | saleet.k@gmail.com | corrected. Thanks! Should be counted as 4 errors. YM
2 | 10 | last equation | wrong equation with correct ending | in the third line inside the sum, the X_i^2 part is missing; it comes back in the fourth line | Asaf Ezra | asaf244@gmail.com | not an error
2 | 11 | 6 | a family of distribution | a family of distributions | Iddan Golomb | igolomb@gmail.com | corrected
2 | 11 | 2nd line after "Beta Distribution" | what would be | What would be (capital letter starting a sentence) | Iddan Golomb | igolomb@gmail.com | corrected
2 | 11 | one before last | (m+a-1, n-m+b-1) | (m+a, n-m+b): this is a Beta distribution with (alpha, beta) params (m+a, n-m+b), not (m+a-1, n-m+b-1). Also, one can mention that the MAP is the mode of the posterior, not the mean as implied, and that the mode of Beta is (a-1)/(a+b-2) for a, b > 1. The MAP(s) shown take this into account, but it is not made explicit. | Shimi Salant | shimi.salant@gmail.com | corrected (see Note 2 after the table)
2 | 6 | after "For the maximum we have" | Variance computation of the ML estimator is not accurate. | The ML estimator is biased, so why is its expectation given by \theta when computing its variance? | Dean Doron | deandoron@mail.tau.ac.il | corrected: this is really the squared error
2 | 2 (recitation) | the first one in section 2.3 | Typo: "there" | their | Dean Doron | deandoron@mail.tau.ac.il | corrected
2 | 4 | 11 | l(\mu, \sigma); x1; x2; ...; xn) | l((\mu, \sigma); x1; x2; ...; xn) [missing left bracket, in addition to the comment above] | Oren Avram | orenavr2@mail.tau.ac.il | corrected
2 | 2 | 2 (in chapter 2.3) | a Bernoulli random variable | a Binomial random variable [Binomial is the sum of Bernoulli experiments] | Tomer Haimovich | tomer.ha@gmail.com | corrected
2 | 11 | 2 (in chapter 2.4.2) | a Bernoulli random variable | a Binomial random variable [same as the above] | Tomer Haimovich | tomer.ha@gmail.com | corrected
2 | 7 | 3rd equation | \forall k | It is not clear what the range of k is. | Tomer Haimovich | tomer.ha@gmail.com | corrected
2 | 7 | 6 | has to be a stationary point of the gradient of the Lagragian | has to be a stationary point of the Lagrangian [where the gradient is 0] | Tomer Haimovich | tomer.ha@gmail.com | corrected
3 (GMM & EM) | 1 | 8 | Maximum Lilkeihood | Maximum Likelihood | Roy Mitz | roymitz@gmail.com | corrected
3 | 1 | 4th from the end | HA is a normal distribution | HA is a normal distribution with different expectation but the same standard deviation (this is NOT an error, but it might be clearer this way) | Iddan Golomb | igolomb@gmail.com | corrected
3 | 6 | 4 | dustributions. | distributions. | Roy Mitz | roymitz@gmail.com | corrected
3 | 2 | 8 (third equation) | Pr[x|y=b] | Pr[x|y=1] (twice) | Saleet Klein | orit.mosko@gmail.com | corrected
3 | 3 | 5th from the end | gives a reasonable baseline results | gives reasonable baseline results | Iddan Golomb | igolomb@gmail.com | corrected
3 | 4 | 4 | when can the independence assumption can break | when the independence assumption can break | Iddan Golomb | igolomb@gmail.com | corrected
3 | 4 | 5 and 6 (same typo twice) | distribution | distributions | Iddan Golomb | igolomb@gmail.com | corrected
3 | 5 | 9 | in the sum: j is in S_i | j: x_j is in S_i (since S_i was defined as a set of points, not a set of indices, as we can see in the 'assign' part) | Asaf Ezra | asaf244@gmail.com | it's the indices; clarified
3 | 4 | 7 | with come | will come | Orit Moskovich | orit.mosko@gmail.com | corrected
3 |  | 9th from the end | is define | is defined | Orit Moskovich | orit.mosko@gmail.com | I think it's OK
3 | 5 | 9 | the objective function to minimize in K-means: the outer sigma is 1 to n, and inside the sum there is x_i | the outer sigma should be 1 to k, and inside the sum x_i should be x_j | Orit Moskovich | orit.mosko@gmail.com | corrected (see Note 1 after the table)
3 | 5 | 11 | The points is | The points in | Orit Moskovich | orit.mosko@gmail.com | corrected
3 | 6 | 1 | Figure 3.5 says "read" | should be "red" | Orit Moskovich | orit.mosko@gmail.com | corrected
3 | 7 | 5 | \sum_{i=1}^n\sum_{j=1}^n | \sum_{i=1}^n\sum_{j=1}^k | Orit Moskovich | orit.mosko@gmail.com | corrected
3 | 7 | 6 | E[F(x)] <= F(E[X]) | E[F(X)] <= F(E[X]) (X should be capitalized) | Asaf Ezra | asaf244@gmail.com | corrected
3 | 7 | 6th from the end | a_ij log(a_i,j) | a_ij log(a_ij) (without the comma) | Asaf Ezra | asaf244@gmail.com | corrected
3 | 7 | 2nd to last | sum over a_ij | sum over a_ij^(t+1) | Asaf Ezra | asaf244@gmail.com | corrected
3 | 8 | 3 and 4 | a_ij is written 4 times | should be a_ij^(t+1) in all | Asaf Ezra | asaf244@gmail.com | corrected
3 | 8 | 12 | We like to find | We would like to find | Adam Polyak | adampolyak@gmail.com | corrected
3 | slide 48 | last row | \sum_{i=1}^n\sum_{j=1}^n | \sum_{i=1}^n\sum_{j=1}^k (also appears in the last row of slide 49 and the first row of slide 51) | Iddan Golomb | igolomb@gmail.com | sent to eran
3 | slide 41 | last line, and in the formula | k | n | Ori Terner | oriterner@gmail.com | sent to eran
3 | 6 | 6 | S - 1 | S_1 | Ori Terner | oriterner@gmail.com | corrected
3 | 7 | 13 | It iteration | In iteration | Saleet Klein | saleet.k@gmail.com | corrected
3 | 3 | second and third line of the equation | x | x_i | Emmanuelle Muhlethaler | emmanuellem@mail.tau.ac.il | corrected
3 | 6 | 7 | S-1 | S_1 | Roy Mitz | roymitz@gmail.com | corrected
3 | 6 | 10 (the equation for f_j(x)) | \mu | \mu_j | Dean Doron | deandoron@mail.tau.ac.il | corrected
3 | 3 (recitation) | first equation | (p_1^t)^x_i | (p_1^t)^{x_i} (there are three such mistakes in the equation) | Dean Doron | deandoron@mail.tau.ac.il |
3 | 4 | 8 from the end | it to | is to | Oren Avram | orenavr2@mail.tau.ac.il | corrected
3 | 4 | 6 from the end | an dXd | a dXd | Ofir Lindenbaum | ofirlin@gmail.com |
3 | 8 | first equation | argmax_\mu,\sigma,p | argmax_\mu,\sigma | Oren Avram | orenavr2@mail.tau.ac.il | corrected
3 | 8 | second equation | a_i,j | a_ij [without the comma] | Oren Avram | orenavr2@mail.tau.ac.il | corrected
3 | 8 | 2 (below the formula) | where the maximization is form | where the maximization is from | Ori Terner | oriterner@gmail.com | corrected
3 | 8 | below the second formula | we consider as as constants in g. | we consider a as constants in g. | Ori Terner | oriterner@gmail.com | corrected
3 | 8 | inner sigma in the definition of the g function | \sum_{j=1}^n | \sum_{j=1}^k | Oren Avram | orenavr2@mail.tau.ac.il | corrected
3 | 8 | expression for sigma_j^(t+1) | the expression given | this is the expression for sigma_j^(t+1) *squared*; either take the sqrt of the RHS or let the LHS be (sigma_j^(t+1))^2 | Shimi Salant | shimi.salant@gmail.com | corrected
3 | 9 | 3 lines after the E-step | mixture if Gaussians | mixture of Gaussians | Oren Avram | orenavr2@mail.tau.ac.il | corrected
3 | 9 | first equation (2 lines after the M-step) | log(Pr(D|\theta)) = log(\sum_z Pr(D,z|\theta)) | log(Pr(D|\theta)) = log(\sum_z Pr(D,z|\theta)*Pr(z|\theta)) (according to the law of total probability; this is also true in the next parts of the equation, though the final result is correct) | Iddan Golomb | igolomb@gmail.com | wrong (see Note 3 after the table)
3 | 10 | first bullet | no to decrease | not to decrease | Oren Avram | orenavr2@mail.tau.ac.il | corrected
3 | 10 | first bullet | We are guarantee | We are guaranteed | Asaf Ezra | asaf244@gmail.com | corrected
3 | 4 | 4 | when can the independence assumption can break | when the independence assumption can break | Tomer Haimovich | tomer.ha@gmail.com | corrected
3 | 4 | 8 from the end | to first sample n normal... | to first sample d normal... | Tomer Haimovich | tomer.ha@gmail.com |
3 | 6 | 3 | for a univariate normal distribution | for an univariate... [an] | Tomer Haimovich | tomer.ha@gmail.com | wrong
3 | 6 | 4 | demonstrate the concepts, and latter we generalize | demonstrate the concepts, and later we will generalize [later, will] | Tomer Haimovich | tomer.ha@gmail.com | corrected
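
Notes on some of the math reports above. These are editorial sketches written in the scribes' apparent notation, for reference only; the exact equations in the scribes may differ.

Note 1 (on the repeated k-means index reports, "j: x_j is in S_i" and "the outer sigma should be 1 to k"): assuming clusters S_1, ..., S_k with centers \mu_1, ..., \mu_k, the corrected objective and center update presumably read

    \min_{S_1,\dots,S_k} \sum_{i=1}^{k} \sum_{j:\, x_j \in S_i} \lVert x_j - \mu_i \rVert^2, \qquad \mu_i = \frac{1}{|S_i|} \sum_{j:\, x_j \in S_i} x_j,

i.e. the outer sum runs over the k clusters and the inner sum over the points assigned to cluster S_i.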
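Note 2 (on the Beta/MAP report): assuming a Beta(a, b) prior on p and m ones among n Bernoulli samples, the posterior is

    \Pr(p \mid D) \propto p^{m}(1-p)^{n-m} \cdot p^{a-1}(1-p)^{b-1} = p^{m+a-1}(1-p)^{n-m+b-1},

i.e. a Beta distribution with parameters (m+a, n-m+b), as reported. The MAP estimate is the mode of this posterior, \hat{p}_{\mathrm{MAP}} = (m+a-1)/(n+a+b-2), defined when m+a > 1 and n-m+b > 1.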
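Note 3 (on the EM report marked "wrong"): \Pr(D|\theta) = \sum_z \Pr(D, z|\theta) is already the law of total probability, since the joint \Pr(D, z|\theta) contains the factor \Pr(z|\theta); no extra \Pr(z|\theta) term is needed. The Jensen step reported on page 7 (E[F(X)] <= F(E[X]) for concave F) enters when a distribution Q over z is introduced:

    \log \Pr(D \mid \theta) = \log \sum_z Q(z)\, \frac{\Pr(D, z \mid \theta)}{Q(z)} \;\ge\; \sum_z Q(z)\, \log \frac{\Pr(D, z \mid \theta)}{Q(z)},

which is the standard EM lower bound.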