Eigenfaces

From Scholarpedia
Sheng Zhang and Matthew Turk (2008), Scholarpedia, 3(9):4244. doi:10.4249/scholarpedia.4244 revision #128015 [link to/cite this article]

Curator: Sheng Zhang

Figure 1: Examples of CMU PIE face images.
Figure 2: The leftmost image in the first row is the average face, and the other two are the eigenfaces with the two largest eigenvalues; the second row shows the three eigenfaces with the smallest eigenvalues.

Eigenfaces refers to an appearance-based approach to face recognition that seeks to capture the variation in a collection of face images and use this information to encode and compare images of individual faces in a holistic (as opposed to a parts-based or feature-based) manner. Specifically, the eigenfaces are the principal components of a distribution of faces, or equivalently, the eigenvectors of the covariance matrix of the set of face images, where an image with \(N\) pixels is considered a point (or vector) in \(N\)-dimensional space. The idea of using principal components to represent human faces was developed by Sirovich and Kirby (Sirovich and Kirby 1987) and used by Turk and Pentland (Turk and Pentland 1991) for face detection and recognition. The Eigenface approach is considered by many to be the first working facial recognition technology, and it served as the basis for one of the top commercial face recognition technology products. Since its initial development and publication, there have been many extensions to the original method and many new developments in automatic face recognition systems. Eigenfaces is still often considered as a baseline comparison method to demonstrate the minimum expected performance of such a system.

The motivation of Eigenfaces is twofold:

  • Extract the relevant facial information, which may or may not be directly related to human intuition of face features such as the eyes, nose, and lips. One way to do so is to capture the statistical variation between face images.
  • Represent face images efficiently. To reduce the computation and space complexity, each face image can be represented using a small number of parameters.

The eigenfaces may be considered as a set of features which characterize the global variation among face images. Then each face image is approximated using a subset of the eigenfaces, those associated with the largest eigenvalues. These features account for the most variance in the training set.

Computing the Eigenfaces

Before generating eigenfaces, face images are normalized to line up the eyes and mouths, and then resampled to a common pixel resolution. Eigenfaces are then extracted from the image data by means of principal component analysis (PCA) in the following manner:

  1. Given \(M\) face images with the size of \(h \times w\ ,\) each image is transformed into a vector of size \(D(=hw)\) and placed into the set\[\{\Gamma_1, \Gamma_2, \cdots, \Gamma_M\}\ .\] The face images should be appropriately scaled and aligned, and the backgrounds (and possibly non-face areas such as hair and neck) should be constant or removed.
  2. Each face differs from the average by the vector \(\Phi_i=\Gamma_i-\Psi\ ,\) where the average face is defined by \(\Psi=\frac{1}{M}\sum^M_{i=1}\Gamma_i\ .\)
  3. The covariance matrix \(\mathbf{C} \in \mathbb{R}^{D \times D}\) is defined as\[\mathbf{C} = \frac{1}{M}\sum^M_{i=1}\Phi_i \Phi^\top_i = \frac{1}{M}\mathbf{A}\mathbf{A}^\top, \] where \(\mathbf{A}=\{\Phi_1,\Phi_2,\cdots,\Phi_M\} \in \mathbb{R}^{D \times M}\ .\) Since the constant factor \(1/M\) does not change the eigenvectors, it is dropped below and \(\mathbf{C}\) is taken to be \(\mathbf{A}\mathbf{A}^\top\ .\)
  4. Determining the eigenvectors of \(\mathbf{C}\) directly is an intractable task for typical image sizes when \( D \gg M \ .\) However, to efficiently compute the eigenvectors of \(\mathbf{C}\ ,\) one may first compute the eigenvectors of the much smaller \(M \times M\) matrix \(\mathbf{A}^\top \mathbf{A}\ .\) The eigenvector and eigenvalue matrices of \(\mathbf{A}^\top \mathbf{A}\) are defined as\[\mathbf{V}=\{\mathbf{v}_1, \mathbf{v}_2, \cdots, \mathbf{v}_r\}\] and \(\Lambda=\mathrm{diag}\{\lambda_1, \lambda_2, \cdots, \lambda_r\}\ ,\) \(\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_r > 0\ ,\) where \(r\) is the rank of \(\mathbf{A}\ .\) Note that eigenvectors corresponding to eigenvalues of zero have been discarded.
  5. The eigenvalue and eigenvector matrices of \(\mathbf{C}\) are \(\Lambda\) and \(\mathbf{U}=\mathbf{AV}\Lambda^{-1/2}\ ,\) where \(\mathbf{U}=\{\mathbf{u}_i\}\) is the collection of eigenfaces.

Figure 1 shows some examples from the CMU PIE dataset (Sim et al. 2003), and Figure 2 shows the average face and eigenfaces derived from the dataset.
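The computation in Steps 1-5 can be summarized in a short NumPy sketch. This is a minimal illustration under assumed conventions (array layout, variable names, and the zero-eigenvalue tolerance are not from the original), not the original authors' code:

```python
import numpy as np

def compute_eigenfaces(images, num_components=None):
    """Compute the average face and eigenfaces from aligned face images.

    images: array of shape (M, h, w), one aligned, equally sized face per entry.
    Returns (Psi, U): the average face (length D) and a D x r matrix whose
    columns are unit-norm eigenfaces ordered by decreasing eigenvalue.
    """
    M = images.shape[0]
    A = images.reshape(M, -1).T.astype(float)   # D x M, one image per column (Step 1)
    Psi = A.mean(axis=1, keepdims=True)         # average face (Step 2)
    A = A - Psi                                 # columns are Phi_i = Gamma_i - Psi

    # Step 4: eigendecomposition of the small M x M matrix A^T A
    eigvals, V = np.linalg.eigh(A.T @ A)        # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]           # reorder by decreasing eigenvalue
    eigvals, V = eigvals[order], V[:, order]
    keep = eigvals > 1e-10                      # discard (numerically) zero eigenvalues
    eigvals, V = eigvals[keep], V[:, keep]

    # Step 5: map back to image space, U = A V Lambda^{-1/2}
    U = (A @ V) / np.sqrt(eigvals)

    if num_components is not None:
        U = U[:, :num_components]
    return Psi.ravel(), U
```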

PCA and SVD

This section discusses the close relationship between PCA and SVD. PCA seeks \(k\) "principal axes," which define an orthonormal coordinate system that captures most of the variance in the data. For example, given the data matrix \(\mathbf{A}\ ,\) the covariance matrix (outer product matrix) can be written as \(\mathbf{C}= \mathbf{AA}^\top\ ,\) and the principal components \(\mathbf{u}_i\) satisfy \[ \mathbf{AA}^\top \mathbf{u}_i = \lambda_i \mathbf{u}_i \] where \(\mathbf{u}_i\) are the eigenvectors of \(\mathbf{AA}^\top\) associated with the eigenvalues \(\lambda_i\ .\) When \(\mathbf{AA}^\top\) is too large to be decomposed efficiently, one can circumvent this by computing the inner product matrix \(\mathbf{A}^\top \mathbf{A}\) as in Step 4, \[ \mathbf{A}^\top \mathbf{A} \mathbf{v}_i = \lambda_i \mathbf{v}_i \] where \(\mathbf{v}_i\) are the eigenvectors of \(\mathbf{A}^\top \mathbf{A}\) associated with the eigenvalues \(\lambda_i\ .\) Note that the \(\lambda_i\) are nonnegative, and \(\mathbf{u}_i=\lambda^{-1/2}_i \mathbf{Av}_i\) for the nonzero eigenvalues. This is the relationship used in Step 5, written there in matrix form.

The SVD of \(\mathbf{A}\) is the factorization \[ \mathbf{A} = \mathbf{U} \Delta \mathbf{V}^\top = \sum_i \delta_i \mathbf{u}_i \mathbf{v}^\top_i \] where \(\mathbf{U}\) and \(\mathbf{V}\) are orthogonal matrices and the diagonal matrix \(\Delta\) contains the singular values \(\delta_i\ ,\) which are nonnegative. Connecting to PCA, \(\mathbf{U}\) and \(\mathbf{V}\) are the eigenvector matrices of \(\mathbf{AA}^\top\) and \(\mathbf{A}^\top\mathbf{A}\ ,\) respectively, and \(\delta^2_i = \lambda_i\) for every \(i\ .\)
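The correspondence between the two decompositions is easy to check numerically. The sketch below uses a random matrix as a stand-in for a centered data matrix and is purely illustrative:

```python
import numpy as np

A = np.random.randn(100, 8)            # stand-in for a D x M centered data matrix

# Eigendecomposition route (as in Step 4): small inner-product matrix A^T A
lam, V = np.linalg.eigh(A.T @ A)
lam, V = lam[::-1], V[:, ::-1]          # sort by decreasing eigenvalue
U_eig = (A @ V) / np.sqrt(lam)          # u_i = lambda_i^{-1/2} A v_i

# SVD route
U_svd, delta, Vt = np.linalg.svd(A, full_matrices=False)

print(np.allclose(delta**2, lam))       # delta_i^2 == lambda_i
# Corresponding columns may differ only in sign, so compare |<u_eig, u_svd>| to 1
print(np.allclose(np.abs(np.sum(U_eig * U_svd, axis=0)), 1.0))
```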

Using Eigenfaces in Face Processing

Choosing the subset of eigenvectors \(\widehat{\mathbf{U}}=\{\mathbf{u}_1, \cdots, \mathbf{u}_m\}\) associated with the \(m\) largest eigenvalues yields an \(m\)-dimensional subspace of the original image space. This subspace is the so-called face space, whose origin is the average face and whose axes are the eigenfaces (see Figure 3). To perform face detection or recognition, one may compute distances within or from the face space.

Figure 3: Visualization of a 2D face space, with the axes representing two Eigenfaces.
Figure 4: Original images (Row 1) and their projections into the face space (Row 2).

Face detection

Because the face space (the subspace spanned by the eigenfaces) defines the space of face images, face detection can be considered as detecting image patches that lie close to the face space. In other words, the projection distance \(\delta\) should be within some threshold \(\theta_{\delta}\ .\) The point-to-space distance \(\delta\) is the distance between the face image and its projection onto the face space, and it can be computed as \[ \delta = \|(\mathbf{I}-\widehat{\mathbf{U}}\widehat{\mathbf{U}}^\top)(\Gamma-\Psi)\| \] where \(\mathbf{I}\) is the identity matrix. As shown in Figure 4, the distance between an image (above) and its face space projection (below) is much smaller for a face than for a nonface (tree) image.
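A minimal sketch of this test, assuming the leading eigenfaces are stored as the columns of an array U_hat and that the average face and the candidate patch are flattened vectors (these names and shapes are assumptions for illustration):

```python
import numpy as np

def distance_from_face_space(gamma, Psi, U_hat):
    """Distance between an image patch and its projection onto the face space.

    gamma: flattened candidate patch (length D)
    Psi:   average face (length D)
    U_hat: D x m array whose columns are the m leading eigenfaces
    """
    phi = gamma - Psi
    residual = phi - U_hat @ (U_hat.T @ phi)    # (I - U U^T)(Gamma - Psi)
    return np.linalg.norm(residual)

# The patch is accepted as a face if the distance falls below a chosen threshold:
# is_face = distance_from_face_space(gamma, Psi, U_hat) < theta_delta
```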

Face recognition

A new face \(\Gamma\) is projected into the face space by \(\Omega=\widehat{\mathbf{U}}^\top (\Gamma-\Psi)\ ,\) where \(\widehat{\mathbf{U}}\) is the set of significant eigenvectors. Note that the weight vector \(\Omega\) is the representation of the new face in face space. One simple way to determine which face class \(\Gamma\) belongs to is to minimize the Euclidean distance \[ \epsilon_k = \|\Omega-\Omega_k\| \] where \(\Omega_k\) is the weight vector representing the \(k\)th face class. The face \(\Gamma\) is considered as belonging to class \(k\) if the minimum \(\epsilon_k\) is smaller than some predefined threshold \(\theta_{\epsilon}\ ;\) otherwise, it is classified as unknown. Figure 3 illustrates the projection and recognition by visualizing face space as a plane.
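A corresponding recognition sketch; here the class weight vectors Omega_k are assumed to have been obtained beforehand by projecting training images of each known person into the face space (the function and variable names are illustrative, not from the original):

```python
import numpy as np

def recognize(gamma, Psi, U_hat, class_weights, theta_eps):
    """Nearest-class matching in face space.

    class_weights: K x m array; row k is the weight vector Omega_k of class k.
    Returns the index of the matched class, or None if the best match is too far.
    """
    omega = U_hat.T @ (gamma - Psi)                       # projection Omega
    eps = np.linalg.norm(class_weights - omega, axis=1)   # Euclidean distances eps_k
    k = int(np.argmin(eps))
    return k if eps[k] < theta_eps else None
```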

Eigenface Extensions

The Eigenface approach was an important step towards appearance-based recognition in computer vision. However, the method is sensitive to variation in lighting, scale, pose, facial expression, and occlusion. To work well, the face must be presented in frontal view, at the appropriate scale, in a similar lighting condition, in a defined (typically neutral) expression, and unoccluded. To handle the variety of challenges faced by real face recognition systems, many modifications and extensions have been proposed since the original work.

The idea of View-Based Eigenfaces (Pentland et al. 1994) is to build a set of separate eigenspaces, each capturing the variation of face images in a common view. When classifying a new face \(\Gamma\ ,\) the first step is to determine its view, which can be done by finding the eigenspace \(l\) with the minimum distance \[ \delta_l = \| (\mathbf{I}-\widehat{\mathbf{U}}_l\widehat{\mathbf{U}}^\top_l)(\Gamma-\Psi_l) \| \] where \(\Psi_l\) and \(\widehat{\mathbf{U}}_l\) are the average face and eigenfaces of view \(l\ .\) Once the proper view is determined, the image is projected onto the corresponding eigenfaces and then recognized as described in the face recognition section above (see Figure 3).
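View selection reuses the same distance-from-face-space computation, once per view-specific eigenspace. A brief sketch, assuming one average face and one eigenface matrix per view (the list-based interface is an assumption for illustration):

```python
import numpy as np

def select_view(gamma, Psis, U_hats):
    """Pick the view whose eigenspace reconstructs the image best.

    Psis:   list of average faces, one per view
    U_hats: list of D x m eigenface matrices, one per view
    Returns the index of the view with the smallest reconstruction error delta_l.
    """
    deltas = []
    for Psi_l, U_l in zip(Psis, U_hats):
        phi = gamma - Psi_l
        deltas.append(np.linalg.norm(phi - U_l @ (U_l.T @ phi)))
    return int(np.argmin(deltas))
```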

Another variation of the eigenface approach, called Eigenfeatures, or Modular Eigenfaces, is to compute eigenspaces for facial features, yielding eigeneyes, eigennoses, and eigenmouths. One may wish to improve face recognition performance by incorporating the eigenfeatures with the eigenfaces. This can be viewed as representing face images in a layered approach, where a coarse representation (i.e., eigenface) of the whole face is augmented by a fine representation (i.e., eigenfeature) of facial features. The eigenface approach has also been used to represent local face patches, rather than the whole face, which are then combined in a multi-classifier approach.

The idea of Eigenfaces has been extended very widely, for example, to Fisherfaces (Belhumeur et al. 1997), Kernel Eigenfaces (Yang 2002), and others. Because PCA is optimal for pattern representation rather than for classification, Fisherfaces seek the linear discriminants of the face distribution based on the Fisher criterion. Since PCA captures only the second-order statistics (i.e., the covariance matrix) of face images, Kernel Eigenfaces were developed to capture higher-order information; this can be accomplished using Kernel Principal Component Analysis (KPCA). Finally, the Eigenfaces technique has even been applied to other areas, such as eigenvoice (Kuhn et al. 2000) and eigengait (Huang et al. 1999).

Commercial Use

The original Eigenfaces work was driven by an interest in automatic television audience monitoring on behalf of Arbitron Inc. The initial U.S. patents (#5,164,992 and #RE36,041) were licensed by several companies, most notably Viisage Technology (now part of L-1 Identity Solutions Inc.), which built its commercial face recognition products around the eigenfaces approach.


References

  • P. Belhumeur, J. Hespanha, and D. Kriegman, Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection, IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):711--720, 1997.
  • P. S. Huang, C. J. Harris, and M. S. Nixon, Recognising Humans by Gait via Parametric Canonical Space, Artificial Intelligence in Engineering, 13(4):359--366, October 1999.
  • R. Kuhn, J. C. Junqua, P. Nguyen, and N. Niedzielski, Rapid Speaker Adaptation in Eigenvoice Space, IEEE Transactions on Speech and Audio Processing, 8(6):695--707, 2000.
  • A. Pentland, B. Moghaddam, and T. Starner, View-based and Modular Eigenspaces for Face Recognition, IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, June 1994.
  • T. Sim, S. Baker, and M. Bsat, The CMU Pose, Illumination, and Expression Database, IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(12):1615--1618, December 2003.
  • L. Sirovich and M. Kirby, Low-dimensional Procedure for the Characterization of Human Faces, Journal of the Optical Society of America A, 4:519--524, 1987.
  • M. Turk and A. Pentland, Eigenfaces for Recognition, Journal of Cognitive Neuroscience, 3(1), 1991.
  • M. Turk and A. Pentland, Face Recognition using Eigenfaces, IEEE Conference on Computer Vision and Pattern Recognition, Maui, HI, June 1991.
  • M. H. Yang, Face Recognition Using Kernel Methods, Advances in Neural Information Processing Systems (NIPS), pages 215--220, 2002.

