Collaborative Filtering
In October 2006 [1], a little-known American entertainment company called Netflix announced the Netflix Prize, a competition offering US$1,000,000 to any member of the public who could improve the performance of Netflix's recommender system by at least 10%.

[1] Can you guess the year Netflix was founded? Here's a hint: it was 1997.

Netflix evaluated submissions based on RMSE (surprise!), which we covered in depth on Day Eighteen and which can be expressed as sqrt(mean((y - y_pred)^2)).
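
Expressed as code, the metric is a couple of NumPy calls. A minimal sketch (the ratings passed in at the end are made-up numbers, just to show the shape of the calculation):

```python
import numpy as np

def rmse(y, y_pred):
    """Root mean squared error between actual and predicted values."""
    y, y_pred = np.asarray(y, dtype=float), np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y - y_pred) ** 2)))

print(rmse([5, 1, 1], [3.6, 1.2, 0.9]))  # ~0.82 for these made-up ratings
```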

The Netflix algorithm was called Cinematch (a mid-2000s name if there ever was one) and its best performance at the time was an RMSE of 0.9525, so to win the prize a participant had to achieve an RMSE of 0.8572 or lower.

Here's how it all went down:

Year   RMSE     Improvement        Team
2007   0.8723   8.42%              KorBell (an alternate name for BellKor, the AT&T Labs team)
2008   0.8627   9.27%              BellKor in BigChaos (the previous year's leading team in a crossover episode with another team, BigChaos)
2009   0.8567   10.06% (winner!)   BellKor's Pragmatic Chaos (the previous year's leading collaboration joined by yet another team, PragmaticTheory)

The leaderboard and original contest rules are still accessible here.
Netflix reported over 5,000 teams making valid submissions over the course of the competition, and the three-person KorBell/BellKor team spent over 2,000 hours in year one coming up with the 107 algorithms that led to their 8.4% reduction in RMSE.

So despite the seemingly negligible differences in RMSE from year to year, a tremendous amount of time and energy was devoted to carving out each decimal-point gain.

KorBell/BellKor joining forces with other teams to form some kind of data science suicide squad wasn't all that unusual. In 2007, the second-placed team on the leaderboard was called Dinosaur Planet. Dinosaur Planet later joined forces with a team called Gravity to form When Gravity and Dinosaurs Unite.

Nor was the competition without its fair share of drama. A team called The Ensemble (a merger of the teams "Grand Prize Team" and "Opera Solutions and Vandelay United") matched KorBell/BellKor's final result with an RMSE of 0.8567, but the KorBell/BellKor collective submitted their results 20 minutes earlier.

Apart from being a neat historical detail, the story of the Netflix Prize serves as a useful introduction to the concept of collaborative filtering.

Collaborative filtering is a technique used by recommender systems: it predicts what a user will like based on the preferences of other, similar users. Let's get right to it.

Let's say we have a subset of Netflix shows, users, and the ratings users gave to each show.

user   moana   zootopia   jessica jones   daredevil
14     -       5          1               1
29     5       5          2               -
72     -       2          5               4
211    1       1          4               -

User 14 hasn't watched Moana... But they watched Zootopia and liked it, giving it a 5. They also watched Jessica Jones and Daredevil but didn't like them, giving both shows a 1.

We know User 14 hasn't watched Moana. Should we recommend that they watch it?

Putting aside the fact that Moana is one of the greatest movies of all time and the perfect way to teach your kids that the same organizational practices critical to the execution of an existing business model will inevitably lead to the disruption of that same organization, let's see what we can infer from the data.

Based on our very small dataset, we can see that Users 14 and 29 exhibit similar behavior - they both gave Zootopia a high score and Jessica Jones a low one. We know that User 29 gave Moana a high score. If we're working off the assumption that Users 14 and 29 have similar tastes, we can reasonably expect User 14 to give Moana a high score as well.
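
That intuition can be written down directly. Here's a minimal sketch (my own variable names, using the toy ratings from the table above) that measures user-to-user similarity with cosine similarity over the shows both users have rated, then borrows the most similar user's rating for Moana. Cosine similarity is just one reasonable choice here, not the only one:

```python
import numpy as np

shows = ["moana", "zootopia", "jessica jones", "daredevil"]
ratings = {                      # np.nan marks a show the user hasn't rated
    14:  [np.nan, 5, 1, 1],
    29:  [5,      5, 2, np.nan],
    72:  [np.nan, 2, 5, 4],
    211: [1,      1, 4, np.nan],
}

def similarity(a, b):
    """Cosine similarity between two users, using only shows both have rated."""
    a, b = np.array(a, dtype=float), np.array(b, dtype=float)
    both_rated = ~np.isnan(a) & ~np.isnan(b)
    if not both_rated.any():
        return 0.0
    a, b = a[both_rated], b[both_rated]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# How similar is User 14 to the others?
for other in (29, 72, 211):
    print(other, round(similarity(ratings[14], ratings[other]), 2))  # User 29 comes out on top

# A crude prediction: borrow the most similar user's Moana rating.
nearest = max((29, 72, 211), key=lambda u: similarity(ratings[14], ratings[u]))
print(ratings[nearest][shows.index("moana")])  # 5 -- recommend Moana to User 14
```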

Taking this one step further, let's assume that each show has a set of characteristics that affect how they are likely to be rated by different users, and that each user has a set of preferences that affect how they are likely to rate different shows.

How could we categorize the four shows (all owned by Disney)? We could describe Moana and Zootopia as cartoons, and Jessica Jones and Daredevil as superhero shows.

And user preferences could be as simple as "likes cartoons" and "likes superhero shows".

show characteristics           cartoon    cartoon     superhero show   superhero show
user preferences        user   moana      zootopia    jessica jones    daredevil
likes cartoons          14     -          5           1                1
likes cartoons          29     5          5           2                -
likes superhero shows   72     -          2           5                4
likes superhero shows   211    1          1           4                -

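In code, that simple categorization is just a pair of hand-picked vectors per show and per user, with a dot product to score the match. A minimal sketch (the 0/1 encodings and the function name are mine, purely for illustration):

```python
# Hand-picked show characteristics: [cartoon, superhero show]
show_traits = {
    "moana":         [1, 0],
    "zootopia":      [1, 0],
    "jessica jones": [0, 1],
    "daredevil":     [0, 1],
}

# Hand-picked user preferences: [likes cartoons, likes superhero shows]
user_prefs = {
    14:  [1, 0],
    29:  [1, 0],
    72:  [0, 1],
    211: [0, 1],
}

def match(user, show):
    """Dot product of a user's preferences with a show's characteristics."""
    return sum(p * t for p, t in zip(user_prefs[user], show_traits[show]))

print(match(14, "moana"))       # 1 -> Moana looks like a good recommendation for User 14
print(match(211, "daredevil"))  # 1 -> recommended here, but see the caveat that follows
```
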
But the world is richer and more complex than that. What if the reason User 211 rated Jessica Jones highly wasn't because they like superhero shows, but because they like shows with a strong female protagonist? Daredevil doesn't seem like an obvious recommendation to make in this case.

We need to make allowances for multiple user preferences and show characteristics, not all of which we might be able to articulate - or even be aware of.

Does that sound like a problem we can throw some randomly generated weights at? Because it sure sounds like a problem we can throw some randomly generated weights at.
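
Concretely, "throwing randomly generated weights at it" means initializing a show-characteristics matrix and a user-preferences matrix with random numbers and letting the data sort out what each characteristic ends up meaning. A sketch of that setup (the choice of five characteristics mirrors the tables below; nothing about these particular random numbers is meaningful yet):

```python
import numpy as np

rng = np.random.default_rng(42)   # arbitrary seed, just for reproducibility

n_users, n_shows, n_characteristics = 4, 4, 5

# How much of each characteristic each show has (5 x 4), drawn at random.
characteristics = rng.random((n_characteristics, n_shows))

# How much each user cares about each characteristic (4 x 5), drawn at random.
preferences = rng.random((n_users, n_characteristics))

# One predicted rating per (user, show) pair.
predicted_ratings = preferences @ characteristics
print(predicted_ratings.shape)    # (4, 4)
```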

show characteristics   moana   zootopia   jessica jones   daredevil
characteristic 1       0.71    0.92       0.68            0.83
characteristic 2       0.81    0.55       0.28            0.88
characteristic 3       0.74    0.86       0.53            0.33
characteristic 4       0.04    0.44       0.16            0.41
characteristic 5       0.04    0.80       0.94            0.24

user preferences   characteristic 1   characteristic 2   characteristic 3   characteristic 4   characteristic 5
user 14            0.19               0.63               0.31               0.44               0.51
user 29            0.25               0.83               0.71               0.96               0.59
user 72            0.30               0.44               0.19               0.00               0.72
user 211           0.02               0.72               0.69               0.35               0.25

predicted ratings (dot products; 0.0 where the user hasn't rated the show)
user       moana   zootopia   jessica jones   daredevil
user 14    0.0     1.4        1.0             1.1
user 29    1.4     2.2        1.5             0.0
user 72    0.0     1.3        1.1             0.9
user 211   1.1     1.4        0.9             0.0

actual ratings
user       moana   zootopia   jessica jones   daredevil
user 14    -       5          1               1
user 29    5       5          2               -
user 72    -       2          5               4
user 211   1       1          4               -

rmse   8.33
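
As a sanity check, the tables above can be reproduced with a few lines of NumPy. The sketch below assumes (consistent with the numbers shown) that each predicted rating is the dot product of a user's preference vector with a show's characteristic vector, that predictions are zeroed out where a user hasn't rated the show, and that the 8.33 figure is the square root of the sum of squared errors over the rated cells (the conventional RMSE would divide by the number of ratings before taking the root):

```python
import numpy as np

# Show characteristics from the table above (rows = characteristics 1-5, columns = shows).
characteristics = np.array([
    [0.71, 0.92, 0.68, 0.83],
    [0.81, 0.55, 0.28, 0.88],
    [0.74, 0.86, 0.53, 0.33],
    [0.04, 0.44, 0.16, 0.41],
    [0.04, 0.80, 0.94, 0.24],
])

# User preferences from the table above (rows = users 14, 29, 72, 211).
preferences = np.array([
    [0.19, 0.63, 0.31, 0.44, 0.51],
    [0.25, 0.83, 0.71, 0.96, 0.59],
    [0.30, 0.44, 0.19, 0.00, 0.72],
    [0.02, 0.72, 0.69, 0.35, 0.25],
])

# Actual ratings; np.nan marks shows a user hasn't rated.
ratings = np.array([
    [np.nan, 5, 1, 1],
    [5,      5, 2, np.nan],
    [np.nan, 2, 5, 4],
    [1,      1, 4, np.nan],
])

predictions = preferences @ characteristics            # one dot product per (user, show) pair
rated = ~np.isnan(ratings)                             # only score cells we actually have ratings for

print(np.round(np.where(rated, predictions, 0.0), 1))  # matches the predicted-ratings table

errors = ratings[rated] - predictions[rated]
print(round(float(np.sqrt(np.sum(errors ** 2))), 2))   # 8.33, the figure in the sheet
print(round(float(np.sqrt(np.mean(errors ** 2))), 2))  # 2.4, the conventional RMSE
```

With purely random weights the predictions are, unsurprisingly, poor.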