
Principal Component Analysis in the Light of Face Recognition

By: Parag Jain


Why and Where is PCA used?

  • Exploratory data analysis
  • Building predictive models (e.g., face recognition)
  • Revealing the internal structure of the data
  • Explaining as much of the variance in the data as possible


  • If a multivariate dataset (e.g., a set of images) is visualized as a set of coordinates in a high-dimensional data space,

  • then PCA can supply the user with a lower-dimensional picture: a “shadow” of this object when viewed from its (in some sense) most informative viewpoint.


What is PCA and its Relation to Face Recognition?

  • Principal Component Analysis (PCA) is a mathematical procedure that uses an orthogonal transformation to convert a set of M possibly correlated variables into a set of K uncorrelated variables called principal components.

  • The number of principal components is ALWAYS less than or equal to the number of original variables, i.e., K ≤ M (see the sketch below).
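
Here is a minimal sketch of that procedure in Python with NumPy (the data and all shapes are made up for illustration): center the data, eigendecompose its covariance matrix, and keep the top K directions.

    import numpy as np

    def pca(X, K):
        """X: (n_samples, M) data matrix; returns (n_samples, K) component scores."""
        X_centered = X - X.mean(axis=0)           # center each variable
        cov = np.cov(X_centered, rowvar=False)    # M x M covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)    # eigh handles symmetric matrices
        order = np.argsort(eigvals)[::-1]         # sort directions by variance, descending
        components = eigvecs[:, order[:K]]        # top-K principal directions
        return X_centered @ components            # K uncorrelated values per sample

    # Example: 100 samples of M = 5 correlated variables reduced to K = 2 values each.
    X = np.random.randn(100, 5) @ np.random.randn(5, 5)
    print(pca(X, K=2).shape)                      # (100, 2)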


  • In terms of face recognition, the same procedure converts a set of M possibly correlated face images into a set of K uncorrelated variables called Eigenfaces.

  • The number of Eigenfaces is ALWAYS less than or equal to the number of original face images, i.e., K ≤ M.


This transformation is defined in such a way that the first principal component captures the most dominant “direction”/“features” of the dataset, and each succeeding component in turn captures the next most dominant directions/features, under the constraint that it be uncorrelated with the preceding components.

To reduce the calculation needed to find these principal components, the eigenvectors are not computed from the full pixel-by-pixel covariance matrix directly; they are recovered from a much smaller matrix whose size depends only on the number of images.
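
A sketch of that shortcut as it appears in the classic eigenfaces method (all shapes and names here are illustrative): for M images of N pixels with M much smaller than N, the N x N covariance matrix A Aᵀ is enormous, but the M x M matrix Aᵀ A has the same nonzero eigenvalues, and its eigenvectors map back to eigenfaces in pixel space.

    import numpy as np

    M, N = 20, 10_000                        # hypothetical: 20 training images, 10,000 pixels each
    A = np.random.rand(N, M)                 # columns are flattened face images
    mean_face = A.mean(axis=1)               # the average face, length N
    A = A - mean_face[:, None]               # subtract it from every image

    small = A.T @ A                          # M x M instead of N x N
    eigvals, V = np.linalg.eigh(small)       # cheap eigendecomposition
    order = np.argsort(eigvals)[::-1][:10]   # keep the 10 most dominant components
    eigenfaces = A @ V[:, order]             # map back to pixel space: N x 10
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)   # unit-length eigenfaces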


Since the principal components capture the dominant “directions” of the data, and each succeeding component carries less signal and more “noise”, only the first few principal components (say K) are kept and the remaining components are discarded.

These K principal components can safely represent the whole original dataset because they capture the major features/directions that make up the dataset.
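
One common way to decide where to cut off K is sketched below (the 95% threshold and the eigenvalues are assumptions for illustration): keep the smallest K whose components explain a fixed fraction of the total variance.

    import numpy as np

    eigvals = np.array([9.1, 4.3, 1.2, 0.3, 0.1])    # hypothetical spectrum, sorted descending
    explained = np.cumsum(eigvals) / eigvals.sum()   # cumulative fraction of variance
    K = int(np.searchsorted(explained, 0.95)) + 1    # smallest K reaching 95%
    print(K, explained[K - 1])                       # 3 0.973...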


Eigenface Representation of an Image


Therefore, each variable (image) in the original dataset can be represented in terms of these K principal components, as sketched below.
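
A sketch of that representation, reusing the hypothetical eigenfaces and mean_face from the shortcut above: projecting an image onto the eigenfaces yields its K weights, and those weights approximately reconstruct the image.

    def project(image, eigenfaces, mean_face):
        """Return the K weights (component scores) of a flattened image."""
        return eigenfaces.T @ (image - mean_face)    # K values instead of N pixels

    def reconstruct(weights, eigenfaces, mean_face):
        """Approximate the original image from its K weights."""
        return mean_face + eigenfaces @ weights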


Eigenface Representation of an Image (Advantage)


Representing a data point this way (as a combination of K principal components) reduces the number of values (from M to K) needed to recognize it.

This makes the recognition process faster and less prone to errors caused by noise, because the discarded Eigenfaces are precisely the noisy ones. In short, we discarded most of the noise in the dataset.
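
A sketch of recognition with just the K weights (the rejection threshold, the stored training weights, and the labels are all illustrative assumptions): find the training face whose weight vector is closest to the probe's.

    import numpy as np

    def recognize(probe_weights, train_weights, labels, threshold=2500.0):
        """train_weights: (M, K) projected training faces; labels: length M."""
        dists = np.linalg.norm(train_weights - probe_weights, axis=1)
        best = int(np.argmin(dists))
        return labels[best] if dists[best] < threshold else "unknown"

    # Hypothetical usage, reusing project() from above:
    # train_weights = np.stack([project(img, eigenfaces, mean_face) for img in train_images])
    # print(recognize(project(probe, eigenfaces, mean_face), train_weights, labels))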


How is PCA done?


  • PCA can be done by eigenvalue decomposition of a data covariance matrix.

  • The results of PCA are usually discussed in terms of:
    • component scores (i.e., ‘how much’ of each of the K principal components makes up a given data point), and
    • loadings (the weight by which each standardized original variable should be multiplied to get the component score).
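
A small sketch separating the two terms on toy data (NumPy assumed; following the definition above, the loadings are the per-variable weights, and multiplying the standardized data by them gives the scores):

    import numpy as np

    X = np.random.randn(200, 4)                  # toy dataset with 4 original variables
    Z = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize each variable
    eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    loadings = eigvecs[:, ::-1]                  # columns sorted by decreasing variance
    scores = Z @ loadings                        # component scores of every data point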


Revisit PCA in the Light of Face Recognition

Let's just replace the following:

Principal components => Eigenfaces

Data point/variable => image (or ‘face image’)

Dataset => training set (of images)

And see if we NOW understand PCA in relation to face recognition or not (do it in your head if we are short on time) :)


Thank you.
