1 of 52

CS60050: Machine Learning

Sourangshu Bhattacharya

CSE, IIT Kharagpur

2 of 52

NAÏVE BAYES

3 of 52

Generative vs. Discriminative Classifiers

Discriminative classifiers (e.g. Logistic Regression)

  • Assume some functional form for P(Y|X) or for the decision boundary
  • Estimate parameters of P(Y|X) directly from training data

Generative classifiers (e.g. Naïve Bayes)

  • Assume some functional form for P(X,Y) (or P(X|Y) and P(Y))
  • Estimate parameters of P(X|Y), P(Y) directly from training data

arg max_Y P(Y|X) = arg max_Y P(X|Y) P(Y)   (since P(X) is the same for every Y)

4 of 52


A text classification task: Email spam filtering

From: ‘‘’’ <takworlld@hotmail.com>

Subject: real estate is the only way... gem oalvgkay

Anyone can buy real estate with no money down

Stop paying rent TODAY !

There is no need to spend hundreds or even thousands for similar courses

I am 22 years old and I have already purchased 6 properties using the

methods outlined in this truly INCREDIBLE ebook.

Change your life NOW !

=================================================

Click Below to order:

http://www.wholesaledaily.com/sales/nmd.htm

=================================================

How would you write a program that would automatically detect and delete this type of message?

5 of 52


Formal definition of TC: Training

Given:

    • A document set X
      • Documents are represented typically in some type of high-dimensional space.
    • A fixed set of classes C = {c1, c2, . . . , cJ}
      • The classes are human-defined for the needs of an application (e.g., relevant vs. nonrelevant).
    • A training set D of labeled documents with each labeled document <d, c> ∈ X × C

Using a learning method or learning algorithm, we then wish to learn a classifier γ that maps documents to classes:

γ : X → C

6 of 52


Formal definition of TC: Application/Testing

Given: a description d ∈ X of a document
Determine: γ(d) ∈ C, that is, the class that is most appropriate for d

7 of 52


Examples of how search engines use classification

    • Language identification (classes: English vs. French etc.)
    • The automatic detection of spam pages (spam vs. nonspam)
    • Topic-specific or vertical search – restrict search to a “vertical” like “related to health” (relevant to vertical vs. not)

8 of 52


Derivation of Naive Bayes rule

We want to find the class that is most likely given the document:

Apply Bayes rule

Drop denominator since P(d) is the same for all classes:
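
The three steps above correspond to the standard chain of equations (c_map denotes the class the classifier returns):

$$c_{map} = \arg\max_{c \in C} P(c \mid d) = \arg\max_{c \in C} \frac{P(d \mid c)\,P(c)}{P(d)} = \arg\max_{c \in C} P(d \mid c)\,P(c)$$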

9 of 52


Too many parameters / sparseness

    • There are too many parameters: one for each unique combination of a class and a sequence of words.
    • We would need a very, very large number of training examples to estimate that many parameters.
    • This is the problem of data sparseness.

10 of 52


Naive Bayes conditional independence assumption

To reduce the number of parameters to a manageable size, we make the Naive Bayes conditional independence assumption: we assume that the probability of observing the conjunction of attributes is equal to the product of the individual probabilities P(Xk = tk |c).
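
Written out, the assumption is (standard form, using the notation defined on the following slides):

$$P(d \mid c) = P(\langle t_1, \ldots, t_{n_d} \rangle \mid c) = \prod_{1 \le k \le n_d} P(X_k = t_k \mid c)$$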

11 of 52


The Naive Bayes classifier

      • The Naive Bayes classifier is a probabilistic classifier.
      • We compute the probability of a document d being in a class c as follows (see the formula below):

    • nd is the length of the document (number of tokens).
    • P(tk |c) is the conditional probability of term tk occurring in a document of class c.
    • P(tk |c) is a measure of how much evidence tk contributes that c is the correct class.
    • P(c) is the prior probability of c.
    • If a document’s terms do not provide clear evidence for one class vs. another, we choose the c with highest P(c).
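
The scoring formula referred to above, in its standard multinomial Naive Bayes form:

$$P(c \mid d) \propto P(c) \prod_{1 \le k \le n_d} P(t_k \mid c)$$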

12 of 52


Maximum a posteriori class

    • Our goal in Naive Bayes classification is to find the “best” class.
    • The best class is the most likely or maximum a posteriori (MAP) class cmap:
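
In symbols (hats mark parameters estimated from the training set, since the true values are unknown):

$$c_{map} = \arg\max_{c \in C} \hat{P}(c \mid d) = \arg\max_{c \in C} \hat{P}(c) \prod_{1 \le k \le n_d} \hat{P}(t_k \mid c)$$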

13 of 52


Taking the log

    • Multiplying lots of small probabilities can result in floating point underflow.
    • Since log(xy) = log(x) + log(y), we can sum log probabilities instead of multiplying probabilities.
    • Since log is a monotonic function, the class with the highest score does not change.
    • So what we usually compute in practice is:
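
That is, the decision rule actually evaluated is the log-domain version:

$$c_{map} = \arg\max_{c \in C} \Big[ \log \hat{P}(c) + \sum_{1 \le k \le n_d} \log \hat{P}(t_k \mid c) \Big]$$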

14 of 52


Naive Bayes classifier

    • Classification rule: choose cmap using the log-space arg max from the previous slide.

    • Simple interpretation:
      • Each conditional parameter log P(tk |c) is a weight that indicates how good an indicator tk is for c.
      • The prior log P(c) is a weight that indicates the relative frequency of c.
      • The sum of the log prior and the term weights is then a measure of how much evidence there is for the document being in the class.
      • We select the class with the most evidence.

15 of 52


Parameter estimation take 1: Maximum likelihood

    • Estimate the parameters P(c) and P(t |c) from the training data. How?
    • Prior:

    • Nc : number of docs in class c; N: total number of docs
    • Conditional probabilities:

    • Tct is the number of tokens of t in training documents from class c (includes multiple occurrences)
    • We’ve made a Naive Bayes independence assumption here:
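
The estimators these bullets refer to, in the standard notation; the last identity is the positional independence assumption implied by counting tokens irrespective of their position in the document:

$$\hat{P}(c) = \frac{N_c}{N}, \qquad \hat{P}(t \mid c) = \frac{T_{ct}}{\sum_{t' \in V} T_{ct'}}, \qquad \hat{P}(X_{k_1} = t \mid c) = \hat{P}(X_{k_2} = t \mid c)$$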

16 of 52


The problem with maximum likelihood estimates: Zeros

P(China|d) ∝ P(China) · P(BEIJING|China) · P(AND|China) · P(TAIPEI|China) · P(JOIN|China) · P(WTO|China)

      • If WTO never occurs in class China in the train set:

17 of 52


The problem with maximum likelihood estimates: Zeros (cont)

    • If there were no occurrences of WTO in documents in class China, we’d get a zero estimate:

    • → We will get P(China|d) = 0 for any document that contains WTO!
    • Zero probabilities cannot be conditioned away.
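
The zero estimate in question, using the maximum-likelihood estimator from the previous slides:

$$\hat{P}(\text{WTO} \mid \text{China}) = \frac{T_{\text{China},\text{WTO}}}{\sum_{t' \in V} T_{\text{China},t'}} = \frac{0}{\sum_{t' \in V} T_{\text{China},t'}} = 0$$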

18 of 52


To avoid zeros: Add-one smoothing

    • Before:

    • Now: Add one to each count to avoid zeros:

    • B is the number of different words (in this case the size of the vocabulary: |V | = B)
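
The two estimators being contrasted (standard add-one, or Laplace, smoothing):

$$\text{Before:}\quad \hat{P}(t \mid c) = \frac{T_{ct}}{\sum_{t' \in V} T_{ct'}} \qquad\qquad \text{Now:}\quad \hat{P}(t \mid c) = \frac{T_{ct} + 1}{\sum_{t' \in V} (T_{ct'} + 1)} = \frac{T_{ct} + 1}{\big(\sum_{t' \in V} T_{ct'}\big) + B}$$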

19 of 52


To avoid zeros: Add-one smoothing

    • Estimate parameters from the training corpus using add-one smoothing
    • For a new document, for each class, compute sum of (i) log of prior and (ii) logs of conditional probabilities of the terms
    • Assign the document to the class with the largest score
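
A minimal Python sketch of this procedure (illustrative only: train_nb and apply_nb are my own names, and documents are assumed to be pre-tokenized lists of terms):

import math
from collections import Counter

def train_nb(docs, labels):
    """Multinomial Naive Bayes training with add-one smoothing."""
    classes = sorted(set(labels))
    vocab = {t for doc in docs for t in doc}
    B = len(vocab)                                    # vocabulary size
    prior, cond = {}, {}
    for c in classes:
        class_docs = [d for d, y in zip(docs, labels) if y == c]
        prior[c] = len(class_docs) / len(docs)        # P(c) = Nc / N
        counts = Counter(t for d in class_docs for t in d)
        total = sum(counts.values())                  # number of tokens in class c
        # add-one smoothing: (Tct + 1) / (sum_t' Tct' + B)
        cond[c] = {t: (counts[t] + 1) / (total + B) for t in vocab}
    return prior, cond, vocab

def apply_nb(doc, prior, cond, vocab):
    """Assign doc to the class with the largest sum of log prior and log conditionals."""
    scores = {c: math.log(prior[c]) +
                 sum(math.log(cond[c][t]) for t in doc if t in vocab)
              for c in prior}
    return max(scores, key=scores.get)

Test terms that never occur in the training vocabulary are simply skipped in apply_nb, which keeps the log-sum well defined.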

20 of 52


Exercise

    • Estimate parameters of Naive Bayes classifier
    • Classify test document

21 of 52


Example: Parameter estimates

The denominators are (8 + 6) and (3 + 6) because the lengths of textc and textc̄ are 8 and 3, respectively, and because the constant B is 6, as the vocabulary consists of six terms.
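
With those counts, and assuming the standard textbook toy example that matches them (three training documents in class c totalling 8 tokens, one document in class c̄ with 3 tokens), the add-one estimates come out as:

$$\hat{P}(\text{Chinese} \mid c) = \tfrac{5+1}{8+6} = \tfrac{3}{7}, \qquad \hat{P}(\text{Tokyo} \mid c) = \hat{P}(\text{Japan} \mid c) = \tfrac{0+1}{8+6} = \tfrac{1}{14}$$
$$\hat{P}(\text{Chinese} \mid \bar{c}) = \tfrac{1+1}{3+6} = \tfrac{2}{9}, \qquad \hat{P}(\text{Tokyo} \mid \bar{c}) = \hat{P}(\text{Japan} \mid \bar{c}) = \tfrac{1+1}{3+6} = \tfrac{2}{9}$$

with priors $\hat{P}(c) = 3/4$ and $\hat{P}(\bar{c}) = 1/4$.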

22 of 52


Example: Classification

Thus, the classifier assigns the test document to c = China. The reason for this classification decision is that the three occurrences of the positive indicator CHINESE in d5 outweigh the occurrences of the two negative indicators JAPAN and TOKYO.
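
Under the same assumed toy example, the two scores being compared are approximately

$$\hat{P}(c \mid d_5) \propto \tfrac{3}{4} \cdot \big(\tfrac{3}{7}\big)^3 \cdot \tfrac{1}{14} \cdot \tfrac{1}{14} \approx 0.0003, \qquad \hat{P}(\bar{c} \mid d_5) \propto \tfrac{1}{4} \cdot \big(\tfrac{2}{9}\big)^3 \cdot \tfrac{2}{9} \cdot \tfrac{2}{9} \approx 0.0001,$$

so the class c = China wins.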

23 of 52

Class Conditional Probabilities

24 of 52


Generative model

    • Generate a class with probability P(c)
    • Generate each of the words (in their respective positions), conditional on the class, but independent of each other, with probability P(tk |c)
    • To classify docs, we “reengineer” this process and find the class that is most likely to have generated the doc.
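
In symbols, the generative story assigns a labeled document the probability $P(c) \prod_{1 \le k \le n_d} P(t_k \mid c)$; "reengineering" means taking the arg max of exactly this quantity over classes, which is the Naive Bayes rule from the earlier slides.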

25 of 52

On naïve Bayesian classifier

  • Advantages:
    • Easy to implement
    • Very efficient
    • Good results obtained in many applications
  • Disadvantages
    • Assumption: class conditional independence, therefore loss of accuracy when the assumption is seriously violated (e.g., on data sets with highly correlated features)

26 of 52

BAYESIAN LINEAR REGRESSION

27 of 52

Maximum Likelihood and Least Squares

  • Assume observations come from a deterministic function with added Gaussian noise; equivalently, the target t is Gaussian-distributed around y(x, w).

  • Given observed inputs, X = {x1, . . . , xN}, and targets, t = (t1, . . . , tN)ᵀ, we obtain the likelihood function given below.
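
The formulas referred to above, in the standard notation (a reconstruction; β is the noise precision and φ the vector of basis functions):

$$t = y(x, w) + \epsilon, \qquad p(t \mid x, w, \beta) = \mathcal{N}\big(t \mid y(x, w), \beta^{-1}\big)$$
$$p(\mathbf{t} \mid \mathbf{X}, w, \beta) = \prod_{n=1}^{N} \mathcal{N}\big(t_n \mid w^{T}\phi(x_n), \beta^{-1}\big), \qquad y(x, w) = w^{T}\phi(x)$$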

28 of 52

Maximum Likelihood and Least Squares

  • Taking the logarithm, we get the expression below, where ED(w) is the sum-of-squares error.
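
The log-likelihood and error function in question (standard forms):

$$\ln p(\mathbf{t} \mid w, \beta) = \frac{N}{2}\ln\beta - \frac{N}{2}\ln(2\pi) - \beta E_D(w), \qquad E_D(w) = \frac{1}{2}\sum_{n=1}^{N}\big(t_n - w^{T}\phi(x_n)\big)^2$$

so maximizing the likelihood with respect to w is equivalent to minimizing the sum-of-squares error.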

29 of 52

Bayesian Linear Regression (1)

  • Define a conjugate (Gaussian) prior over w.

  • Combining this with the likelihood function and using results for marginal and conditional Gaussian distributions gives a Gaussian posterior, whose mean and covariance are given below.
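
In the usual notation, with conjugate prior $p(w) = \mathcal{N}(w \mid m_0, S_0)$, the posterior is (standard result):

$$p(w \mid \mathbf{t}) = \mathcal{N}(w \mid m_N, S_N), \qquad m_N = S_N\big(S_0^{-1} m_0 + \beta\Phi^{T}\mathbf{t}\big), \qquad S_N^{-1} = S_0^{-1} + \beta\Phi^{T}\Phi$$

where Φ is the design matrix with rows φ(x_n)ᵀ.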

30 of 52

Bayesian Linear Regression (2)

  • A common choice for the prior is a zero-mean isotropic Gaussian,

  • for which the posterior mean and covariance take the simplified form given below.

  • Next we consider an example …
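
The simplified forms referred to above (standard result for the zero-mean isotropic Gaussian prior):

$$p(w \mid \alpha) = \mathcal{N}(w \mid 0, \alpha^{-1} I), \qquad m_N = \beta\, S_N \Phi^{T}\mathbf{t}, \qquad S_N^{-1} = \alpha I + \beta\Phi^{T}\Phi$$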

31 of 52

Bayesian Linear Regression (3)

0 data points observed

Prior

Data Space

32 of 52

Bayesian Linear Regression (4)

1 data point observed

Likelihood

Posterior

Data Space

33 of 52

Bayesian Linear Regression (5)

2 data points observed

Likelihood

Posterior

Data Space

34 of 52

Bayesian Linear Regression (6)

20 data points observed

Likelihood

Posterior

Data Space

35 of 52

Predictive Distribution (1)

  • Predict t for new values of x by integrating over w; the resulting predictive distribution and its variance are given below.
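
The predictive distribution being computed (standard result):

$$p(t \mid \mathbf{t}, \alpha, \beta) = \int p(t \mid w, \beta)\, p(w \mid \mathbf{t}, \alpha, \beta)\, dw = \mathcal{N}\big(t \mid m_N^{T}\phi(x),\, \sigma_N^2(x)\big), \qquad \sigma_N^2(x) = \frac{1}{\beta} + \phi(x)^{T} S_N \phi(x)$$

The first term of the variance is the noise on the data; the second reflects the remaining uncertainty about w, and it shrinks as more data points are observed.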

36 of 52

Predictive Distribution (2)

  • Example: Sinusoidal data, 9 Gaussian basis functions, 1 data point

37 of 52

Predictive Distribution (3)

  • Example: Sinusoidal data, 9 Gaussian basis functions, 2 data points

38 of 52

Predictive Distribution (4)

  • Example: Sinusoidal data, 9 Gaussian basis functions, 4 data points

39 of 52

Predictive Distribution (5)

  • Example: Sinusoidal data, 9 Gaussian basis functions, 25 data points

40 of 52

GAUSSIAN MIXTURE MODELS

41 of 52

Mixture of Gaussians

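
Assuming the standard setup and notation, a mixture of K Gaussians has density

$$p(x) = \sum_{k=1}^{K} \pi_k\, \mathcal{N}(x \mid \mu_k, \Sigma_k), \qquad 0 \le \pi_k \le 1, \quad \sum_{k=1}^{K}\pi_k = 1,$$

where the π_k are the mixing coefficients and μ_k, Σ_k are the component means and covariances.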

42 of 52

Mixture of Gaussians

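
Equivalently, introducing a K-dimensional one-hot latent variable z (the standard latent-variable formulation assumed here):

$$p(z) = \prod_{k=1}^{K}\pi_k^{z_k}, \qquad p(x \mid z_k = 1) = \mathcal{N}(x \mid \mu_k, \Sigma_k), \qquad p(x) = \sum_{z} p(z)\,p(x \mid z) = \sum_{k=1}^{K}\pi_k\,\mathcal{N}(x \mid \mu_k, \Sigma_k)$$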

43 of 52

Generative Procedure

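
Presumably the standard two-step sampling scheme for the mixture is meant:

  • First draw a component: sample z with p(z_k = 1) = π_k.
  • Then draw the observation from that component: x | z_k = 1 ~ N(μ_k, Σ_k).

Repeating the two steps N times yields an i.i.d. sample from p(x) = Σ_k π_k N(x | μ_k, Σ_k).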

44 of 52

Generative Procedure


45 of 52

Posterior distribution

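
The posterior over the latent component, usually called the responsibility, has the standard form

$$\gamma(z_k) \equiv p(z_k = 1 \mid x) = \frac{\pi_k\,\mathcal{N}(x \mid \mu_k, \Sigma_k)}{\sum_{j=1}^{K}\pi_j\,\mathcal{N}(x \mid \mu_j, \Sigma_j)}$$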

46 of 52

Example

47 of 52

Max-likelihood

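
The objective being maximized, for data X = {x1, . . . , xN}, is the standard mixture log-likelihood

$$\ln p(\mathbf{X} \mid \pi, \mu, \Sigma) = \sum_{n=1}^{N} \ln \Big\{ \sum_{k=1}^{K} \pi_k\, \mathcal{N}(x_n \mid \mu_k, \Sigma_k) \Big\}$$

subject to the constraints $\sum_k \pi_k = 1$ and $\pi_k \ge 0$, which is why Lagrangian/KKT conditions appear on the next slides.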

48 of 52

KKT conditions

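
Setting the derivatives of the Lagrangian to zero gives the standard stationary-point equations (with responsibilities γ(z_nk) as above and N_k = Σ_n γ(z_nk)):

$$\mu_k = \frac{1}{N_k}\sum_{n=1}^{N}\gamma(z_{nk})\,x_n, \qquad \Sigma_k = \frac{1}{N_k}\sum_{n=1}^{N}\gamma(z_{nk})\,(x_n - \mu_k)(x_n - \mu_k)^{T}, \qquad \pi_k = \frac{N_k}{N}$$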

49 of 52

KKT conditions


50 of 52

KKT conditions


51 of 52

(EM) Algorithm

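
A compact NumPy/SciPy sketch of the standard EM iteration for a Gaussian mixture (illustrative only: em_gmm and its arguments are my own naming, and the initialization and stopping rule are arbitrary choices):

import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, K, n_iter=100, tol=1e-6, seed=0):
    """EM for a K-component Gaussian mixture. X has shape (N, D)."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    mu = X[rng.choice(N, size=K, replace=False)]       # init means at random data points
    sigma = np.array([np.cov(X.T) + 1e-6 * np.eye(D) for _ in range(K)])
    pi = np.full(K, 1.0 / K)
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E-step: responsibilities gamma[n, k] = p(z_k = 1 | x_n)
        dens = np.column_stack([pi[k] * multivariate_normal.pdf(X, mu[k], sigma[k])
                                for k in range(K)])
        gamma = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate pi, mu, sigma from the responsibilities
        Nk = gamma.sum(axis=0)
        mu = (gamma.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - mu[k]
            sigma[k] = (gamma[:, k, None] * diff).T @ diff / Nk[k] + 1e-6 * np.eye(D)
        pi = Nk / N
        # Stop when the log-likelihood no longer improves appreciably
        ll = np.log(dens.sum(axis=1)).sum()
        if abs(ll - prev_ll) < tol:
            break
        prev_ll = ll
    return pi, mu, sigma, gamma

The small 1e-6 * np.eye(D) term keeps the covariance estimates positive definite when a component collapses onto very few points.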

52 of 52

Example