Chapter 5: Eigenvalues and Eigenvectors
大葉大學 資訊工程系
黃鈴玲
Linear Algebra
5.1 Eigenvalues and Eigenvectors
Definition
Let A be an n × n matrix. A scalar λ is called an eigenvalue (特徵值,固有值) of A if there exists a nonzero vector x in Rn such that
Ax = λx.
The vector x is called an eigenvector corresponding to λ.
Computation of Eigenvalues and Eigenvectors
Let A be an n × n matrix with eigenvalue λ and corresponding eigenvector x. Thus Ax = λx. This equation may be written
Ax – λx = 0,
giving
(A – λIn)x = 0.
This homogeneous system has a nonzero solution x if and only if the matrix A – λIn is singular, that is, if and only if |A – λIn| = 0. Solving the equation |A – λIn| = 0 for λ leads to all the eigenvalues of A.
On expanding the determinant |A – λIn|, we get a polynomial in λ. This polynomial is called the characteristic polynomial of A.
The equation |A – λIn| = 0 is called the characteristic equation of A.
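As a computational cross-check (not part of the original slides), the following Python sketch finds the eigenvalues of an illustrative 2 × 2 matrix both as roots of the characteristic polynomial and directly with NumPy; the matrix is assumed for this example only.

import numpy as np

# Illustrative matrix (assumed; any square matrix works here).
A = np.array([[-4.0, -6.0],
              [3.0, 5.0]])

# Eigenvalues as roots of the characteristic polynomial |A - lambda*In| = 0.
char_poly = np.poly(A)              # coefficients of det(lambda*I - A)
print(np.roots(char_poly))          # [ 2. -1.]

# Eigenvalues and eigenvectors computed directly.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                  # [ 2. -1.]

# Verify Ax = lambda*x for each eigenpair (columns of 'eigenvectors').
for lam, x in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ x, lam * x))   # True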
Example 1
Find the eigenvalues and eigenvectors of the matrix
Solution
Let us first derive the characteristic polynomial of A. We get
We now solve the characteristic equation of A.
The eigenvalues of A are 2 and –1.
The corresponding eigenvectors are found by using these values of λ in the equation (A – λI2)x = 0. There are many eigenvectors corresponding to each eigenvalue.
We solve the equation (A – 2I2)x = 0 for x. The matrix (A – 2I2) is obtained by subtracting 2 from the diagonal elements of A. We get
This leads to the system of equations
giving x1 = –x2. The solutions to this system of equations are x1 = –r, x2 = r, where r is a scalar. Thus the eigenvectors of A corresponding to λ = 2 are nonzero vectors of the form
We solve the equation (A + 1I2)x = 0 for x. The matrix (A + 1I2) is obtained by adding 1 to the diagonal elements of A. We get
This leads to the system of equations
Thus x1 = –2x2. The solutions to this system of equations are x1 = –2s and x2 = s, where s is a scalar. Thus the eigenvectors of A corresponding to λ = –1 are nonzero vectors of the form
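A minimal sketch of this step, assuming an illustrative matrix (the matrix of Example 1 is not reproduced in this text): for each eigenvalue λ, the eigenvectors are the nonzero vectors in the null space of A – λI2.

import numpy as np
from scipy.linalg import null_space

# Illustrative matrix with eigenvalues 2 and -1 (assumed).
A = np.array([[-4.0, -6.0],
              [3.0, 5.0]])
I2 = np.eye(2)

# Eigenvectors for lambda = 2: a basis of the null space of A - 2*I2.
print(null_space(A - 2 * I2))   # one column, spanning the line x1 = -x2

# Eigenvectors for lambda = -1: a basis of the null space of A + I2.
print(null_space(A + I2))       # one column, spanning the line x1 = -2x2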
In-class exercise: 9(a) (skip finding the eigenspaces for now)
Theorem 5.1
Let A be an n × n matrix and λ an eigenvalue of A. The set of all eigenvectors corresponding to λ, together with the zero vector, is a subspace of Rn. This subspace is called the eigenspace of λ.
Proof
Let x1 and x2 be two vectors in the eigenspace of λ and let c be a scalar. Then Ax1 = λx1 and Ax2 = λx2. Hence,
A(x1 + x2) = Ax1 + Ax2 = λx1 + λx2 = λ(x1 + x2).
Thus x1 + x2 is a vector in the eigenspace of λ. The set is closed under addition.
Further, since Ax1 = λx1,
A(cx1) = cAx1 = cλx1 = λ(cx1).
Therefore cx1 is a vector in the eigenspace of λ. The set is closed under scalar multiplication.
Thus the set is a subspace of Rn.
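A small numerical illustration of this closure property (assumed example, not from the slides): the eigenvalue 3 of the diagonal matrix below has a two-dimensional eigenspace, and any linear combination of eigenvectors for 3 is again an eigenvector for 3.

import numpy as np

# Illustrative matrix whose eigenvalue 3 has a two-dimensional eigenspace.
A = np.diag([3.0, 3.0, 7.0])

x1 = np.array([1.0, 0.0, 0.0])   # eigenvector for lambda = 3
x2 = np.array([0.0, 1.0, 0.0])   # another eigenvector for lambda = 3

y = 2.5 * x1 - 4.0 * x2          # a linear combination of the two
print(np.allclose(A @ y, 3.0 * y))   # True: y is still an eigenvector for lambda = 3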
Example 2
Find the eigenvalues and eigenvectors of the matrix
Solution
The matrix A – λI3 is obtained by subtracting λ from the diagonal elements of A. Thus
The characteristic polynomial of A is |A – λI3|. Using row and column operations to simplify determinants, we get
We now solve the characteristic equation of A:
The eigenvalues of A are 10 and 1.
The corresponding eigenvectors are found by using these values of λ in the equation (A – λI3)x = 0.
Let λ = 10 in (A – λI3)x = 0.
We get
The solutions to this system of equations are x1 = 2r, x2 = 2r, and x3 = r, where r is a scalar. Thus the eigenspace of λ = 10 is the one-dimensional space of vectors of the form.
Let λ = 1 in (A – λI3)x = 0. We get
The solutions to this system of equations can be shown to be x1 = –s – t, x2 = s, and x3 = 2t, where s and t are scalars. Thus the eigenspace of λ = 1 is the space of vectors of the form.
Separating the parameters s and t, we can write
Thus the eigenspace of λ = 1 is a two-dimensional subspace of R3 with basis
If an eigenvalue occurs as a k-times repeated root of the characteristic equation, we say that it is of multiplicity k. Thus λ = 10 has multiplicity 1, while λ = 1 has multiplicity 2 in this example.
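A short SymPy sketch (assumed example) showing how multiplicities and eigenspace dimensions can be read off; the symmetric 3 × 3 matrix below is chosen only because it reproduces the eigenvalues 10 and 1 (multiplicity 2) discussed above, and is not necessarily the matrix of Example 2.

import sympy as sp

# Illustrative symmetric matrix with eigenvalues 10 (multiplicity 1) and 1 (multiplicity 2).
A = sp.Matrix([[5, 4, 2],
               [4, 5, 2],
               [2, 2, 2]])

print(A.charpoly())    # characteristic polynomial of A
# Each entry is (eigenvalue, multiplicity, basis of the eigenspace).
for eigenvalue, multiplicity, basis in A.eigenvects():
    print(eigenvalue, multiplicity, len(basis))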
In-class exercise: 10
Example 3
Let A be an n × n matrix with eigenvalues λ1, …, λn and corresponding eigenvectors X1, …, Xn. Prove that if c ≠ 0, then the eigenvalues of cA are cλ1, …, cλn with corresponding eigenvectors X1, …, Xn.
Solution
Let λi be one of the eigenvalues of A with corresponding eigenvector Xi. Then AXi = λiXi. Multiply both sides of this equation by c to get
cAXi = cλiXi
Thus cλi is an eigenvalue of cA with corresponding eigenvector Xi.
Further, since cA is an n × n matrix, its characteristic polynomial is of degree n. The characteristic equation has n roots, implying that cA has n eigenvalues. The eigenvalues of cA are therefore cλ1, …, cλn with corresponding eigenvectors X1, …, Xn.
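A quick numerical check of this result (not from the slides), using an arbitrary random matrix and c = 3:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))      # arbitrary illustrative matrix
c = 3.0

eig_A = np.sort_complex(np.linalg.eigvals(A))
eig_cA = np.sort_complex(np.linalg.eigvals(c * A))
print(np.allclose(eig_cA, c * eig_A))   # True: eigenvalues of cA are c times those of A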
In-class exercise: 28
Homework
Ex24: Prove that if A is a diagonal matrix, then its eigenvalues are the diagonal elements.
Ex26: Prove that A and At have the same eigenvalues.
Ex32: Prove that the constant term of the characteristic polynomial of a matrix A is |A|.
5.3 Diagonalization of Matrices
Definition
Let A and B be square matrices of the same size. B is said to be similar to A if there exists an invertible matrix C such that
B = C–1AC. The transformation of the matrix A into the matrix B in this manner is called a similarity transformation.
Example 1
Consider the following matrices A and C. C is invertible. Use the similarity transformation C–1AC to transform A into a matrix B.
Solution
In-class exercise: 1(b)
Theorem 5.3
Similar matrices have the same eigenvalues.
Proof
Let A and B be similar matrices. Hence there exists an invertible matrix C such that B = C–1AC. The characteristic polynomial of B is |B – λIn|. Substituting for B and using the multiplicative properties of determinants, we get
|B – λIn| = |C–1AC – λC–1InC| = |C–1(A – λIn)C| = |C–1| |A – λIn| |C| = |A – λIn|.
The characteristic polynomials of A and B are identical. This means that their eigenvalues are the same.
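As a numerical illustration (assumed example, not from the slides), the sketch below forms B = C–1AC for an arbitrary A and invertible C and confirms that A and B have the same eigenvalues.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))      # arbitrary illustrative matrix
C = rng.standard_normal((3, 3))      # a random matrix is almost surely invertible

B = np.linalg.inv(C) @ A @ C         # similarity transformation

eig_A = np.sort_complex(np.linalg.eigvals(A))
eig_B = np.sort_complex(np.linalg.eigvals(B))
print(np.allclose(eig_A, eig_B))     # True, up to floating-point error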
Definition
A square matrix A is said to be diagonalizable if there exists an invertible matrix C such that D = C–1AC is a diagonal matrix.
Theorem 5.4
Let A be an n × n matrix.
(a) If A has n linearly independent eigenvectors, then A is diagonalizable. The eigenvectors can be used as the columns of a matrix C that diagonalizes A, and the diagonal elements of D = C–1AC are the corresponding eigenvalues of A.
(b) Conversely, if A is diagonalizable, then A has n linearly independent eigenvectors.
Proof
(a) Let A have eigenvalues λ1, …, λn, with corresponding linearly independent eigenvectors v1, …, vn. Let C be the matrix having v1, …, vn as column vectors.
C = [v1 … vn]
Since Av1 = λ1v1, …, Avn = λnvn, matrix multiplication in terms of columns gives
AC = [Av1 … Avn] = [λ1v1 … λnvn] = CD,
where D is the diagonal matrix with diagonal elements λ1, …, λn.
Since the columns of C are linearly independent, C is nonsingular. Thus
C–1AC = D.
Therefore, if an n × n matrix A has n linearly independent eigenvectors, these eigenvectors can be used as the columns of a matrix C that diagonalizes A. The diagonal matrix has the eigenvalues of A as diagonal elements.
(b) The converse is proved by retracing the above steps. Commence with the assumption that C is a matrix [v1 … vn] that diagonalizes A. Thus, there exist scalars γ1, …, γn such that C–1AC is the diagonal matrix with diagonal elements γ1, …, γn.
Retracing the above steps, we arrive at the conclusion that
Av1 = γ1v1, …, Avn = γnvn
The v1, …, vn are eigenvectors of A. Since C is nonsingular, these vectors (column vectors of C) are linearly independent. Thus if an n × n matrix A is diagonalizable, it has n linearly independent eigenvectors.
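A minimal sketch of the construction in part (a), assuming an illustrative 2 × 2 matrix: the eigenvector matrix C returned by NumPy has the eigenvectors as columns, and C–1AC is the diagonal matrix of eigenvalues.

import numpy as np

# Illustrative diagonalizable matrix (assumed).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, C = np.linalg.eig(A)    # columns of C are eigenvectors of A
D = np.linalg.inv(C) @ A @ C         # should be diagonal

print(np.allclose(D, np.diag(eigenvalues)))   # True: C diagonalizes A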
Example 2
Solution
(a) Since A, a 2 × 2 matrix, has two linearly independent eigenvectors, it is diagonalizable.
(b) A is similar to the diagonal matrix D, which has diagonal elements λ1 = 2 and λ2 = –1. Thus
(c) Select two convenient linearly independent eigenvectors, say
Let these vectors be the column vectors of the diagonalizing matrix C.
We get
In-class exercise: 3(a)
Note
If A is similar to a diagonal matrix D under the transformation C–1AC, then it can be shown that Ak = CDkC–1.
This result can be used to compute Ak. Let us derive this result and then apply it. From D = C–1AC we get A = CDC–1. This leads to
Ak = (CDC–1)(CDC–1)…(CDC–1) = CDkC–1,
since each interior C–1C product is In.
Example 3
Compute A9 for the following matrix A.
Solution
A is the matrix of the previous example. Use the values of C and D from that example. We get
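Since the matrices of the previous example are not reproduced in this text, the sketch below applies the same idea, Ak = CDkC–1 with k = 9, to an assumed 2 × 2 matrix.

import numpy as np

# Illustrative diagonalizable matrix (assumed).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, C = np.linalg.eig(A)
D = np.diag(eigenvalues)

# A^9 = C D^9 C^{-1}; powering the diagonal matrix D is cheap.
A9 = C @ np.linalg.matrix_power(D, 9) @ np.linalg.inv(C)
print(np.allclose(A9, np.linalg.matrix_power(A, 9)))   # True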
In-class exercise: 9(a)
Example 4
Show that the following matrix A is not diagonalizable.
Solution
The characteristic equation is
There is a single eigenvalue, λ = 2. We find the corresponding eigenvectors. (A – 2I2)x = 0 gives
Thus x1 = r, x2 = r. The eigenvectors are nonzero vectors of the form
The eigenspace is a one-dimensional space. A is a 2 × 2 matrix, but it does not have two linearly independent eigenvectors. Thus A is not diagonalizable.
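A SymPy sketch of this situation, assuming an illustrative 2 × 2 matrix with a repeated eigenvalue and only a one-dimensional eigenspace (the slide's matrix is not reproduced in this text):

import sympy as sp

# Illustrative matrix: eigenvalue 2 has multiplicity 2 but only one independent eigenvector.
A = sp.Matrix([[3, -1],
               [1,  1]])

print(A.eigenvects())          # [(2, 2, [Matrix([[1], [1]])])]
print(A.is_diagonalizable())   # False: fewer than 2 independent eigenvectors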
In-class exercise: 3(c)
Theorem 5.5
Let A be an n × n symmetric matrix.
Definition
A square matrix A is said to be orthogonally diagonalizable if there exists an orthogonal matrix C such that D = C−1AC is a diagonal matrix.
Orthogonal Diagonalization
Theorem 5.6
Let A be a square matrix. A is orthogonally diagonalizable if and only if it is a symmetric matrix.
Example 5
Orthogonally diagonalize the following symmetric matrix A.
Solution
The eigenvalues and corresponding eigenspaces of this matrix are
Let us determine the transformation. The eigenspaces V1 and V2 are orthogonal. Use a unit vector in each eigenspace as columns of an orthogonal matrix C. We get
The orthogonal transformation that leads to D is
Since A is symmetric, it can be diagonalized to give
In-class exercise: 6(a)
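A short NumPy sketch of orthogonal diagonalization, assuming an illustrative symmetric matrix; for symmetric input, numpy.linalg.eigh returns real eigenvalues and orthonormal eigenvectors, so C–1 = CT.

import numpy as np

# Illustrative symmetric matrix (assumed; the slide's matrix is not reproduced here).
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

eigenvalues, C = np.linalg.eigh(A)   # orthonormal eigenvectors in the columns of C

print(np.allclose(C.T @ C, np.eye(2)))                  # True: C is orthogonal
print(np.allclose(C.T @ A @ C, np.diag(eigenvalues)))   # True: C^T A C = D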
Homework