1 of 69

CS60050: Machine Learning

Sourangshu Bhattacharya

CSE, IIT Kharagpur

2 of 69

SUPPORT VECTOR MACHINES

3 of 69

Linear Classifiers

[Figure: 2-D training data; one marker denotes class +1, the other denotes class -1. A candidate separating line is drawn, and the classifier f maps an input x to an estimated label y_est.]

f(x, w, b) = sign(w · x - b)

How would you classify this data?

7 of 69

Linear Classifiers

[Figure: the same data with several different candidate separating lines drawn]

f(x, w, b) = sign(w · x - b)

Any of these would be fine..

..but which is best?

8 of 69

Classifier Margin

[Figure: a separating line together with its margin, shown as the width of the band around the line that contains no data points]

f(x, w, b) = sign(w · x - b)

Define the margin of a linear classifier as the width that the boundary could be increased by before hitting a datapoint.
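To make this concrete, here is a minimal NumPy sketch (the data and the choice of w and b are made up for illustration) that measures how wide the symmetric band around a given boundary can be made before it touches a training point:

import numpy as np

# Toy linearly separable data and a hand-picked boundary (illustrative only).
X = np.array([[2.0, 2.0], [3.0, 3.5], [4.0, 3.0],   # class +1
              [0.0, 0.5], [1.0, 0.0], [0.5, 1.5]])  # class -1
y = np.array([+1, +1, +1, -1, -1, -1])
w = np.array([1.0, 1.0])
b = -3.0

# Signed distance of each point to the line w.x + b = 0.
dist = (X @ w + b) / np.linalg.norm(w)

separates = np.all(np.sign(dist) == y)   # does this line classify all points correctly?
margin_width = 2 * np.min(y * dist)      # width of the symmetric band before it hits a point
print(separates, margin_width)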

10 of 69

Maximum Margin

[Figure: the maximum margin separating line; its margin extends until it touches the nearest data points of each class]

f(x, w, b) = sign(w · x - b)

The maximum margin linear classifier is the linear classifier with the maximum margin.

This is the simplest kind of SVM, called a linear SVM (LSVM).

Support vectors are the data points that the margin pushes up against.

11 of 69

Why Maximum Margin?

  1. Intuitively this feels safest.
  2. If we’ve made a small error in the location of the boundary (it’s been jolted in its perpendicular direction) this gives us least chance of causing a misclassification.
  3. LOOCV is easy since the model is immune to removal of any non-support-vector datapoints.
  4. There’s some theory (using VC dimension) that is related to (but not the same as) the proposition that this is a good thing.
  5. Empirically it works very very well.

12 of 69

Specifying a line and margin

How do we represent this mathematically?

…in m input dimensions?

[Figure: the plus-plane, the minus-plane, and the classifier boundary between them, separating the "Predict Class = +1" zone from the "Predict Class = -1" zone]

13 of 69

Specifying a line and margin

Plus-plane  = { x : w · x + b = +1 }
Minus-plane = { x : w · x + b = -1 }

[Figure: the plus-plane (w · x + b = +1), the classifier boundary (w · x + b = 0), and the minus-plane (w · x + b = -1)]

Classify as:
  +1                 if  w · x + b ≥ +1
  -1                 if  w · x + b ≤ -1
  Universe explodes  if  -1 < w · x + b < +1

14 of 69

Support vector machines

Let {x_1, ..., x_n} be our data set and let y_i ∈ {+1, -1} be the class label of x_i.

[Figure: Class 1 (y = +1) and Class 2 (y = -1) points separated by a band of width m]

For y_i = +1:  w^T x_i + b ≥ +1

For y_i = -1:  w^T x_i + b ≤ -1

So, for all i:  y_i (w^T x_i + b) ≥ 1

15 of 69

Large-margin Decision Boundary

The decision boundary should be as far away from the data of both classes as possible

We should maximize the margin, m

[Figure: Class 1 and Class 2 separated by a margin of width m]

16 of 69

Finding the Decision Boundary

The decision boundary should classify all points correctly

The decision boundary can be found by solving the following constrained optimization problem:

  minimize    (1/2) ||w||^2
  subject to  y_i (w^T x_i + b) ≥ 1   for all i

(Maximizing the margin m = 2 / ||w|| is equivalent to minimizing ||w||^2.)

This is a constrained optimization problem. Solving it requires the use of Lagrange multipliers.

17 of 69

KKT Conditions

At the optimum, the Karush-Kuhn-Tucker (KKT) conditions hold:

  Stationarity:             ∇_w L = 0  and  ∂L/∂b = 0
  Primal feasibility:       y_i (w^T x_i + b) - 1 ≥ 0
  Dual feasibility:         α_i ≥ 0
  Complementary slackness:  α_i [ y_i (w^T x_i + b) - 1 ] = 0

18 of 69

Finding the Decision Boundary

The Lagrangian is

  L(w, b, α) = (1/2) w^T w - Σ_i α_i [ y_i (w^T x_i + b) - 1 ],   with α_i ≥ 0

Note that ||w||^2 = w^T w.

19 of 69

The Dual Problem

Setting the gradient of L w.r.t. w and b to zero, we have

  ∂L/∂w = 0  ⇒  w = Σ_i α_i y_i x_i
  ∂L/∂b = 0  ⇒  Σ_i α_i y_i = 0

(n: number of examples, m: dimension of the input space)

20 of 69

The Dual Problem

If we substitute w = Σ_i α_i y_i x_i back into L(w, b, α), we have

  L = Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j (x_i^T x_j) - b Σ_i α_i y_i

Since Σ_i α_i y_i = 0, the last term vanishes.

This is a function of the α_i only.

21 of 69

The Dual Problem

The new objective function is in terms of αi only

It is known as the dual problem: if we know w, we know all αi; if we know all αi, we know w

The original problem is known as the primal problem

The objective function of the dual problem needs to be maximized (this follows from the KKT theory).

The dual problem is therefore:

  maximize    W(α) = Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j (x_i^T x_j)
  subject to  α_i ≥ 0,   Σ_i α_i y_i = 0

(α_i ≥ 0 is the condition on the Lagrange multipliers; Σ_i α_i y_i = 0 is the result of differentiating the original Lagrangian w.r.t. b.)

22 of 69

The Dual Problem

This is a quadratic programming (QP) problem

A global maximum over the α_i can always be found (the objective is concave).

w can be recovered by

  w = Σ_i α_i y_i x_i
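In practice the QP is rarely coded by hand. As an illustrative sketch (assuming scikit-learn is available; the toy data are made up, and a large C is used to approximate the hard-margin problem), a library QP-based solver exposes the support vectors, the values α_i y_i, and the recovered w:

import numpy as np
from sklearn.svm import SVC

# Hypothetical separable 2-D data.
X = np.array([[2.0, 2.0], [3.0, 3.5], [4.0, 3.0],
              [0.0, 0.5], [1.0, 0.0], [0.5, 1.5]])
y = np.array([+1, +1, +1, -1, -1, -1])

# A very large C approximates the hard-margin formulation.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

print(clf.support_)                 # indices of the support vectors (alpha_i > 0)
print(clf.dual_coef_)               # alpha_i * y_i for each support vector
print(clf.coef_, clf.intercept_)    # w = sum_i alpha_i y_i x_i, and the bias b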

23 of 69

Characteristics of the Solution

Many of the α_i are zero:

  w is a linear combination of a small number of data points.

  The x_i with α_i > 0 are the support vectors (SV); the decision boundary is determined only by the SVs.

  b can be recovered from any support vector x_k, using y_k (w^T x_k + b) = 1.

24 of 69

A Geometrical Interpretation

[Figure: Class 1 and Class 2 points with the maximum margin boundary; the support vectors carry α_1 = 0.8, α_6 = 1.4, α_8 = 0.6, and all other α_i = 0]

25 of 69

Characteristics of the Solution

For testing with a new data point z:

Compute

  f(z) = w^T z + b = Σ_{i ∈ SV} α_i y_i (x_i^T z) + b

and classify z as class 1 if the sum is positive, and class 2 otherwise.

Note: w need not be formed explicitly.
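A small sketch of this point (same hypothetical toy data as before, refit here so the snippet stands alone; scikit-learn assumed): the prediction is computed as a sum over the support vectors only, and w is never formed:

import numpy as np
from sklearn.svm import SVC

X = np.array([[2.0, 2.0], [3.0, 3.5], [4.0, 3.0],
              [0.0, 0.5], [1.0, 0.0], [0.5, 1.5]])
y = np.array([+1, +1, +1, -1, -1, -1])
clf = SVC(kernel="linear", C=1e6).fit(X, y)

alpha_y = clf.dual_coef_.ravel()     # alpha_i * y_i for the support vectors
sv = clf.support_vectors_
b = clf.intercept_[0]

z = np.array([2.5, 2.0])             # a new test point
f = np.sum(alpha_y * (sv @ z)) + b   # sum over SVs of alpha_i y_i (x_i . z) + b
print("class", +1 if f > 0 else -1)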

26 of 69

Non-linearly Separable Problems

We allow "errors" ξ_i in classification; they are based on the output of the discriminant function w^T x + b.

Σ_i ξ_i approximates the number of misclassified samples.

[Figure: Class 1 and Class 2 overlap; points on the wrong side of their margin have slack ξ_i > 0]

27 of 69

Soft Margin Hyperplane

The new conditions become

  w^T x_i + b ≥ +1 - ξ_i   for y_i = +1
  w^T x_i + b ≤ -1 + ξ_i   for y_i = -1
  ξ_i ≥ 0

The ξ_i are "slack variables" in the optimization.

Note that ξ_i = 0 if there is no error for x_i.

Σ_i ξ_i is an upper bound on the number of training errors.

We want to minimize

  (1/2) ||w||^2 + C Σ_i ξ_i

C: trade-off parameter between error and margin.

28 of 69

The Optimization Problem

The Lagrangian is

  L(w, b, ξ, α, μ) = (1/2) w^T w + C Σ_i ξ_i - Σ_i α_i [ y_i (w^T x_i + b) - 1 + ξ_i ] - Σ_i μ_i ξ_i

with the Lagrange multipliers α_i and μ_i POSITIVE (α_i ≥ 0, μ_i ≥ 0).

29 of 69

The Dual Problem

Setting the derivatives of L w.r.t. w, b and ξ_i to zero gives

  w = Σ_i α_i y_i x_i,    Σ_i α_i y_i = 0,    and    C - α_i - μ_i = 0

30 of 69

The Optimization Problem

The dual of this new constrained optimization problem is

  maximize    W(α) = Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j (x_i^T x_j)
  subject to  0 ≤ α_i ≤ C,   Σ_i α_i y_i = 0

The new constraint α_i ≤ C is derived from C - α_i - μ_i = 0, since both μ_i and α_i are positive.

w is recovered as

  w = Σ_i α_i y_i x_i

This is very similar to the optimization problem in the linearly separable case, except that there is now an upper bound C on α_i.

Once again, a QP solver can be used to find the α_i.

31 of 69

The algorithm tries to keep the ξ_i low while maximizing the margin.

It does not minimize the number of errors directly; instead, it minimizes the sum of the distances by which points violate their margin plane.

As C increases, the number of errors tends to decrease. In the limit of C tending to infinity, the solution tends to that of the hard-margin formulation, with zero errors (when the data are separable).
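An illustrative sketch of the C trade-off (hypothetical data with one planted outlier; scikit-learn assumed): a small C tolerates slack and keeps a wide margin, while a large C shrinks the margin to reduce margin violations. Because of the outlier the data are not separable, so some training error typically remains even for large C:

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=[2, 2], scale=0.5, size=(20, 2)),
               rng.normal(loc=[-2, -2], scale=0.5, size=(20, 2)),
               [[2.0, 2.0]]])                 # a planted outlier labeled -1
y = np.array([+1] * 20 + [-1] * 20 + [-1])

for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    margin = 2.0 / np.linalg.norm(clf.coef_)  # margin width 2/||w||
    errors = np.sum(clf.predict(X) != y)      # training errors
    print(f"C={C:7.2f}  margin={margin:.3f}  training errors={errors}")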

32 of 69

Soft margin is more robust to outliers


33 of 69

Extension to Non-linear Decision Boundary

So far, we have only considered large-margin classifier with a linear decision boundary

How to generalize it to become nonlinear?

Key idea: transform xi to a higher dimensional space to “make life easier”

Input space: the space where the points x_i are located.

Feature space: the space of the φ(x_i) after the transformation.

Why transform?

A linear operation in the feature space is equivalent to a non-linear operation in the input space.

Classification can become easier with a proper transformation. In the XOR problem, for example, adding a new feature x_1 x_2 makes the problem linearly separable.

35 of 69

XOR

  X  Y  | X⊕Y
  0  0  |  0
  0  1  |  1
  1  0  |  1
  1  1  |  0

Is not linearly separable.

  X  Y  XY | X⊕Y
  0  0  0  |  0
  0  1  0  |  1
  1  0  0  |  1
  1  1  1  |  0

Is linearly separable.
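A quick check of this claim (scikit-learn assumed; the use of a linear SVM as the linear classifier is just for illustration): the same linear classifier that fails on the raw XOR inputs fits them perfectly once the product feature XY is appended:

import numpy as np
from sklearn.svm import SVC

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])                   # XOR labels

# Original 2-D inputs: no linear classifier can fit XOR.
acc_2d = SVC(kernel="linear", C=1e4).fit(X, y).score(X, y)

# Add the product feature XY: the four points become linearly separable.
X3 = np.hstack([X, (X[:, 0] * X[:, 1]).reshape(-1, 1)])
acc_3d = SVC(kernel="linear", C=1e4).fit(X3, y).score(X3, y)

print(acc_2d, acc_3d)                        # e.g. 0.75 vs 1.0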

36 of 69

Find a feature space


37 of 69

Transforming the Data

Computation in the feature space can be costly because it is high dimensional

The feature space is typically infinite-dimensional!

The kernel trick comes to rescue

[Figure: the map φ(·) sends each data point from the input space to the feature space]

Note: in practice, the feature space is of higher dimension than the input space.

38 of 69

The Kernel Trick

Recall the SVM optimization problem

The data points only appear as inner products.

As long as we can calculate the inner product in the feature space, we do not need the mapping explicitly.

Many common geometric operations (angles, distances) can be expressed by inner products.

Define the kernel function K by

  K(x_i, x_j) = φ(x_i)^T φ(x_j)

39 of 69

An Example for φ(.) and K(.,.)

Suppose φ(.) is given as follows

An inner product in the feature space is

So, if we define the kernel function as follows, there is no need to carry out φ(.) explicitly

This use of kernel function to avoid carrying out φ(.) explicitly is known as the kernel trick


40 of 69

Kernels

Given a mapping

  φ : x → φ(x)

a kernel is represented as the inner product

  K(x, y) = φ(x)^T φ(y)

A kernel must satisfy Mercer's condition:

  ∫∫ K(x, y) g(x) g(y) dx dy ≥ 0   for every square-integrable function g

41 of 69

Modification Due to Kernel Function

Change all inner products to kernel functions

For training,

  Original:              maximize  W(α) = Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j (x_i^T x_j)
  With kernel function:  maximize  W(α) = Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j)

subject to 0 ≤ α_i ≤ C and Σ_i α_i y_i = 0 in both cases.

42 of 69

Modification Due to Kernel Function

For testing, the new data point z is classified as class 1 if f ≥ 0, and as class 2 if f < 0:

  Original:              f = Σ_{i ∈ SV} α_i y_i (x_i^T z) + b
  With kernel function:  f = Σ_{i ∈ SV} α_i y_i K(x_i, z) + b
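A minimal sketch of exactly this substitution (toy data and the RBF kernel choice are illustrative; scikit-learn assumed): the test score is a kernel-weighted sum over the support vectors, and it matches the library's own decision function:

import numpy as np
from sklearn.svm import SVC

X = np.array([[2.0, 2.0], [3.0, 3.5], [4.0, 3.0],
              [0.0, 0.5], [1.0, 0.0], [0.5, 1.5]])
y = np.array([+1, +1, +1, -1, -1, -1])

gamma = 0.5
K = lambda A, z: np.exp(-gamma * np.sum((A - z) ** 2, axis=-1))   # RBF kernel

clf = SVC(kernel="rbf", gamma=gamma, C=10.0).fit(X, y)

z = np.array([2.5, 2.0])
# f(z) = sum over support vectors of alpha_i y_i K(x_i, z) + b
f = np.sum(clf.dual_coef_.ravel() * K(clf.support_vectors_, z)) + clf.intercept_[0]

print(f, clf.decision_function(z.reshape(1, -1))[0])   # the two values agree
print("class 1" if f >= 0 else "class 2")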

43 of 69

More on Kernel Functions

Since the training of an SVM only requires the values K(x_i, x_j), there is no restriction on the form of x_i and x_j.

xi can be a sequence or a tree, instead of a feature vector

K(xi, xj) is just a similarity measure comparing xi and xj

For a test object z, the discriminant function essentially is a weighted sum of the similarity between z and a pre-selected set of objects (the support vectors)


44 of 69

Kernel Functions

In practical use of SVM, the user specifies the kernel function; the transformation φ(.) is not explicitly stated

Given a kernel function K(xi, xj), the transformation φ(.) is given by its eigenfunctions (a concept in functional analysis)

Eigenfunctions can be difficult to construct explicitly

This is why people only specify the kernel function without worrying about the exact transformation

Another view: the kernel function, being an inner product, is really a similarity measure between objects.

45 of 69

A kernel is associated with a transformation

Given a kernel, in principle one can recover the transformation into the feature space that generates it. For example,

  K(x, y) = (xy + 1)^2 = x^2 y^2 + 2xy + 1

corresponds to the transformation

  φ(x) = ( x^2, √2 x, 1 )

since φ(x)^T φ(y) = x^2 y^2 + 2xy + 1.
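A two-line numerical check of this correspondence (the scalars below are arbitrary):

import numpy as np

def K(x, y):                       # the kernel from the slide
    return (x * y + 1) ** 2

def phi(x):                        # the corresponding feature map
    return np.array([x ** 2, np.sqrt(2) * x, 1.0])

x, y = 1.7, -0.3
print(K(x, y), phi(x) @ phi(y))    # identical: x^2 y^2 + 2xy + 1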

46 of 69

Examples of Kernel Functions

Polynomial kernel of degree d:

  K(x, y) = (x^T y)^d

Polynomial kernel up to degree d:

  K(x, y) = (x^T y + 1)^d

Radial basis function (RBF) kernel with width σ:

  K(x, y) = exp( -||x - y||^2 / (2σ^2) )

  The corresponding feature space is infinite-dimensional.

Sigmoid kernel with parameters κ and θ:

  K(x, y) = tanh( κ x^T y + θ )

  It does not satisfy the Mercer condition for all κ and θ.
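Written as code, these kernels are one-liners (a sketch; the parameter values in the example call are arbitrary):

import numpy as np

def poly_kernel(x, y, d):                  # polynomial of degree d
    return (x @ y) ** d

def poly_kernel_up_to(x, y, d):            # polynomial up to degree d
    return (x @ y + 1) ** d

def rbf_kernel(x, y, sigma):               # Gaussian RBF with width sigma
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def sigmoid_kernel(x, y, kappa, theta):    # not Mercer for all kappa, theta
    return np.tanh(kappa * (x @ y) + theta)

x, y = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(poly_kernel(x, y, 2), rbf_kernel(x, y, 1.0), sigmoid_kernel(x, y, 0.1, -1.0))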

47 of 69

Building new kernels

If k1(x, y) and k2(x, y) are two valid kernels, then the following kernels are also valid:

  Linear combination:        k(x, y) = α k1(x, y) + β k2(x, y),  with α, β ≥ 0
  Exponential:               k(x, y) = exp( k1(x, y) )
  Product:                   k(x, y) = k1(x, y) k2(x, y)
  Polynomial transformation: k(x, y) = Q( k1(x, y) ),  Q a polynomial with non-negative coefficients
  Function product:          k(x, y) = f(x) k1(x, y) f(y),  f any function
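A sketch of these closure rules (the base kernels, coefficients, and the function f below are arbitrary choices for illustration):

import numpy as np

def k1(x, y):                                   # base kernel 1: linear
    return x @ y

def k2(x, y):                                   # base kernel 2: Gaussian RBF
    return np.exp(-0.5 * np.sum((x - y) ** 2))

lin_comb = lambda x, y: 2.0 * k1(x, y) + 3.0 * k2(x, y)         # alpha, beta >= 0
product  = lambda x, y: k1(x, y) * k2(x, y)
expo     = lambda x, y: np.exp(k1(x, y))
poly_q   = lambda x, y: 1.0 + 2.0 * k1(x, y) + k1(x, y) ** 2    # Q with non-negative coefficients
f        = lambda x: np.linalg.norm(x)
f_prod   = lambda x, y: f(x) * k1(x, y) * f(y)

x, y = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(lin_comb(x, y), product(x, y), expo(x, y), poly_q(x, y), f_prod(x, y))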

48 of 69

Polynomial kernel


49 of 69

Gaussian RBF kernel


50 of 69

BOOSTING

51 of 69

Boosting

Train classifiers (e.g. decision trees) in a sequence.

A new classifier should focus on those cases which were incorrectly classified in the last round.

Combine the classifiers by letting them vote on the final prediction (like bagging).

Each classifier is “weak” but the ensemble is “strong.”

AdaBoost is a specific boosting method.

52 of 69

Boosting Intuition

We adaptively weigh each data case.

Data cases which are wrongly classified get high weight (the algorithm will focus on them)

Each boosting round learns a new (simple) classifier on the weighted dataset.

These classifiers are weighted when combining them into a single powerful classifier.

Classifiers that obtain a low training error rate (on the weighted data) get a high weight.

We stop by monitoring a held-out set (cross-validation). A minimal sketch of this loop is given below.
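The sketch (not the slides' code; decision stumps from scikit-learn serve as the weak learners, and the bundled breast-cancer dataset is used only as an example):

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, rounds=10):
    """AdaBoost with decision stumps; y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                        # start with equal weights
    stumps, alphas = [], []
    for _ in range(rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        eps = max(np.sum(w[pred != y]), 1e-12)     # weighted error rate
        if eps >= 0.5:                             # no better than chance: stop
            break
        alpha = 0.5 * np.log((1 - eps) / eps)      # classifier weight
        stumps.append(stump)
        alphas.append(alpha)
        w *= np.exp(-alpha * y * pred)             # up-weight mistakes, down-weight correct cases
        w /= w.sum()
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    votes = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
    return np.sign(votes)                          # weighted vote of the weak classifiers

if __name__ == "__main__":
    from sklearn.datasets import load_breast_cancer
    data = load_breast_cancer()
    X, y = data.data, 2 * data.target - 1          # labels mapped to {-1, +1}
    stumps, alphas = adaboost_fit(X, y, rounds=20)
    print("training accuracy:", np.mean(adaboost_predict(stumps, alphas, X) == y))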

53 of 69

Boosting in a Picture

[Figure: across boosting rounds, correctly classified training cases get small weight, a misclassified example gets a large weight in the next round, and a decision tree with low weighted error gets a strong vote in the final combination]

54 of 69

Boosting

 

55 of 69

Adaboost

Given training data (x_1, y_1), ..., (x_n, y_n) with y_i ∈ {-1, +1}:

  1. Initialize weights w_i = 1/n.
  2. For t = 1, ..., T:
       Train a weak classifier h_t on the weighted data.
       Compute the weighted error ε_t = Σ_i w_i · 1[h_t(x_i) ≠ y_i].
       Set the classifier weight α_t = (1/2) ln( (1 - ε_t) / ε_t ).
       Update w_i ← w_i · exp( -α_t y_i h_t(x_i) ) and renormalize so that Σ_i w_i = 1.
  3. Output the final classifier H(x) = sign( Σ_t α_t h_t(x) ).

56 of 69

Adaboost (contd..)

57 of 69

And in animation

Original training set: equal weights to all training samples

Taken from “A Tutorial on Boosting” by Yoav Freund and Rob Schapire

58 of 69

AdaBoost example

ROUND 1

ε = weighted error rate of the classifier in this round

α = (1/2) ln( (1 - ε) / ε ) = weight of the classifier in the final vote

59 of 69

AdaBoost example


ROUND 2

60 of 69

AdaBoost example


ROUND 3

61 of 69

AdaBoost example


62 of 69

Adaboost illustration

63 of 69

Adaboost - Observations

 

64 of 69

Adaboost - derivation

 

65 of 69

Adaboost - derivation

 

66 of 69

Adaboost - derivation

 

67 of 69

Adaboost

68 of 69

Adaboost (contd..)

69 of 69

End of Slides