1 of 123

CHAPTER 2: MATRICES

Elementary Linear Algebra

R. Larson (8th Edition)

2.1 Operations with Matrices

2.2 Properties of Matrix Operations

2.3 The Inverse of a Matrix

2.4 Elementary Matrices

2.5 Markov Chains

2.6 More Applications of Matrix Operations

Slide design and production: Professor 翁慶昌, Department of Electrical Engineering, Tamkang University (淡江大學)

2 of 123

CH 2 Linear Algebra Applied

Flight Crew Scheduling (p.47)

Information Retrieval (p.58)

Beam Deflection (p.64)

Computational Fluid Dynamics (p.79)

Data Encryption (p.94)

2/123

3 of 123

2.1 Operations with Matrices

  • Matrix:

(i, j)-th entry:

row: m

column: n

size: m×n

Elementary Linear Algebra: Section 2.1, p.40

3/123

4 of 123

  • i-th row vector:
  • j-th column vector:

row matrix

column matrix

  • Square matrix: m = n

Elementary Linear Algebra: Section 2.1, p.40

4/123

5 of 123

  • Diagonal matrix: a square matrix in which every entry off the main diagonal is zero.
  • Trace: the sum of the main diagonal entries of a square matrix, Tr(A) = a11 + a22 + ⋯ + ann.

Elementary Linear Algebra: Section 2.1, Addition

5/123

6 of 123

  • Ex:

Elementary Linear Algebra: Section 2.1, Addition

6/123

7 of 123

  • Equality of matrices: Two matrices A = [aij] and B = [bij] are equal when they have the same size m×n and aij = bij for 1 ≤ i ≤ m, 1 ≤ j ≤ n.
  • Ex 1: (Equality of matrices)

Elementary Linear Algebra: Section 2.1, p.40

7/123

8 of 123

  • Matrix addition: If A = [aij] and B = [bij] are both m×n matrices, then A + B = [aij + bij].
  • Ex 2: (Matrix addition)

Elementary Linear Algebra: Section 2.1, p.41

8/123

9 of 123

  • Matrix subtraction: A – B = A + (–1)B
  • Scalar multiplication: If A = [aij] is an m×n matrix and c is a scalar, then cA = [c·aij].
  • Ex 3: (Scalar multiplication and matrix subtraction)

Find (a) 3A, (b) –B, and (c) 3A – B

Elementary Linear Algebra: Section 2.1, p.41

9/123

10 of 123

(a)

(b)

(c)

Sol:

Elementary Linear Algebra: Section 2.1, p.41
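The arithmetic in Ex 3 can be checked with a few lines of NumPy. The matrices A and B below are illustrative stand-ins (the example's actual entries are not reproduced on the slide), but the operations are the same.

```python
# A minimal NumPy sketch of Ex 3's operations on illustrative matrices.
import numpy as np

A = np.array([[2, 2, 4],
              [-3, 0, -1],
              [2, 1, 2]])
B = np.array([[2, 0, 0],
              [1, -4, 3],
              [-1, 3, 2]])

print(3 * A)        # (a) the scalar multiple 3A
print(-B)           # (b) the additive inverse -B
print(3 * A - B)    # (c) the matrix subtraction 3A - B
```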

10/123

11 of 123

  • Matrix multiplication:

If A = [aij] is an m×n matrix and B = [bij] is an n×p matrix, then the product AB is the m×p matrix AB = C = [cij], where cij = ai1b1j + ai2b2j + ⋯ + ainbnj.

  • Notes: (1) A + B = B + A, (2) in general, AB ≠ BA.

Size of AB

Elementary Linear Algebra: Section 2.1, p.42 & p.44
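The entry formula above translates directly into code. The sketch below computes cij = ai1b1j + ⋯ + ainbnj with explicit loops and checks the result against NumPy's built-in product; A and B are arbitrary illustrative matrices.

```python
# Entry-wise matrix product c_ij = sum_k a_ik * b_kj, checked against NumPy.
import numpy as np

def matmul(A, B):
    m, p = A.shape
    p2, n = B.shape
    assert p == p2, "columns of A must equal rows of B"
    C = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(p))
    return C                                  # size of AB is m x n

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])                  # 2 x 3
B = np.array([[7., 8.],
              [9., 10.],
              [11., 12.]])                    # 3 x 2
print(matmul(A, B))                           # 2 x 2 product
print(np.allclose(matmul(A, B), A @ B))       # True
```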

11/123

12 of 123

  • Ex 4: (Find AB)

Sol:

Elementary Linear Algebra: Section 2.1, p.43

12/123

13 of 123

  • Matrix form of a system of linear equations:

=

=

=

A

x

b

Elementary Linear Algebra: Section 2.1, p.45
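In code, writing a system in the matrix form Ax = b reduces "solve the system" to a single call. The 2×2 system below is an illustrative one, not the slide's example.

```python
# Writing a linear system in matrix form Ax = b and solving it.
import numpy as np

# x1 + 2*x2 = 5
# 3*x1 + 4*x2 = 6
A = np.array([[1., 2.],
              [3., 4.]])
b = np.array([5., 6.])

x = np.linalg.solve(A, b)     # solves Ax = b
print(x)                      # [-4.   4.5]
print(np.allclose(A @ x, b))  # True
```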

13/123

14 of 123

  • Partitioned matrices:

submatrix

Elementary Linear Algebra: Section 2.1, Addition

14/123

15 of 123

Elementary Linear Algebra: Section 2.1, p.46

  • Linear combination of column vectors: Ax = x1a1 + x2a2 + ⋯ + xnan, where a1, a2, …, an are the column vectors of A.

15/123

16 of 123

  • Ex 7: (Solve a system of linear equations)

(infinitely many solutions)

Elementary Linear Algebra: Section 2.1, p.47

16/123

17 of 123

Key Learning in Section 2.1

  • Determine whether two matrices are equal.
  • Add and subtract matrices and multiply a matrix by a scalar.
  • Multiply two matrices.
  • Use matrices to solve a system of linear equations.
  • Partition a matrix and write a linear combination of column vectors.

17/123

18 of 123

Keywords in Section 2.1

  • row vector: 列向量
  • column vector: 行向量
  • diagonal matrix: 對角矩陣
  • trace: 跡數
  • equality of matrices: 相等矩陣
  • matrix addition: 矩陣相加
  • scalar multiplication: 純量乘法(純量積)
  • matrix subtraction: 矩陣相減
  • matrix multiplication: 矩陣乘法
  • partitioned matrix: 分割矩陣
  • linear combination: 線性組合

18/123

19 of 123

2.2 Properties of Matrix Operations

  • Three basic matrix operations:

(1) matrix addition

(2) scalar multiplication

(3) matrix multiplication

  • Zero matrix: 0m×n is the m×n matrix in which every entry is zero.
  • Identity matrix of order n: In is the n×n matrix with 1's on the main diagonal and 0's elsewhere.

Elementary Linear Algebra: Section 2.2, pp.52-55

19/123

20 of 123

  • Properties of matrix addition and scalar multiplication:

If A, B, and C are m×n matrices and c and d are scalars, then

(1) A + B = B + A

(2) A + (B + C) = (A + B) + C

(3) (cd)A = c(dA)

(4) 1A = A

(5) c(A + B) = cA + cB

(6) (c + d)A = cA + dA

Elementary Linear Algebra: Section 2.2, p.52

20/123

21 of 123

  • Notes:
  • 0m×n: the additive identity for the set of all m×n matrices
  • –A: the additive inverse of A
  • Properties of zero matrices:

Elementary Linear Algebra: Section 2.2, p.53

21/123

22 of 123

  • Properties of matrix multiplication:

If A, B, and C are matrices (with sizes such that the given matrix products are defined) and c is a scalar, then

(1) A(BC) = (AB)C

(2) A(B+C) = AB + AC

(3) (A+B)C = AC + BC

(4) c(AB) = (cA)B = A(cB)

  • Properties of the identity matrix:

If A is an m×n matrix, then A In = A and Im A = A.

Elementary Linear Algebra: Section 2.2, p.54 & p.56

22/123

23 of 123

  • Transpose of a matrix: The transpose A^T of an m×n matrix A = [aij] is the n×m matrix formed by writing the rows of A as columns, so (A^T)ij = aji.

Elementary Linear Algebra: Section 2.2, p.57

23/123

24 of 123

(b)

(c)

Sol:

(a)

(b)

(c)

(a)

  • Ex 8: (Find the transpose of the following matrix)

Elementary Linear Algebra: Section 2.2, p.57

24/123

25 of 123

  • Properties of transposes:

(1) (A^T)^T = A

(2) (A + B)^T = A^T + B^T

(3) (cA)^T = c(A^T)

(4) (AB)^T = B^T A^T

Elementary Linear Algebra: Section 2.2, p.57
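Two of these properties, (A + B)^T = A^T + B^T and (AB)^T = B^T A^T (note the reversed order), are easy to spot-check numerically on randomly generated matrices.

```python
# Numerical check of two transpose properties on random integer matrices.
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(3, 4)).astype(float)
B = rng.integers(-5, 5, size=(3, 4)).astype(float)
C = rng.integers(-5, 5, size=(4, 2)).astype(float)

print(np.allclose((A + B).T, A.T + B.T))   # True
print(np.allclose((A @ C).T, C.T @ A.T))   # True: the order reverses
```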

25/123

26 of 123

  • Symmetric matrix:

A square matrix A is symmetric when A = A^T.

  • Skew-symmetric matrix:

A square matrix A is skew-symmetric when A^T = –A.

  • Ex: Given that A is symmetric, find a, b, and c.

Sol:

Elementary Linear Algebra: Section 2.2, Addition

26/123

27 of 123

  • Ex: Given that A is skew-symmetric, find a, b, and c.

Sol:

  • Note: AA^T is symmetric.

Pf:

Elementary Linear Algebra: Section 2.2, Addition
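A quick numerical check of the symmetry facts used here: for any matrix A, the product AA^T is symmetric, for any square A the sum A + A^T is symmetric, and A − A^T is skew-symmetric. The matrix A below is an arbitrary example.

```python
# Check symmetry of A @ A.T and A + A.T, and skew-symmetry of A - A.T.
import numpy as np

A = np.array([[1., 2., 0.],
              [4., 3., 5.],
              [-2., 6., 7.]])

def is_symmetric(M):
    return np.allclose(M, M.T)

print(is_symmetric(A @ A.T))                  # True for any A
print(is_symmetric(A + A.T))                  # True for any square A
print(np.allclose((A - A.T).T, -(A - A.T)))   # A - A.T is skew-symmetric
```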

27/123

28 of 123

  • Real number: ab = ba (Commutative law for multiplication)

  • Matrix: in general, AB and BA are not equal. Three situations:

(Sizes are not the same)

(Sizes are the same, but matrices are not equal)

Elementary Linear Algebra: Section 2.2, Addition

28/123

29 of 123

Sol:

  • Note:
  • Ex 4:

Show that AB and BA are not equal for the matrices

and

Elementary Linear Algebra: Section 2.2, p.55

29/123

30 of 123

  • Real number: If ac = bc and c ≠ 0, then a = b. (Cancellation law)

  • Matrix: AC = BC does not, in general, imply A = B. (Cancellation is not valid)

(1) If C is invertible, then A = B

Elementary Linear Algebra: Section 2.2, p.55

30/123

31 of 123

Sol:

So

But

  • Ex 5: (An example in which cancellation is not valid)

Show that AC=BC

Elementary Linear Algebra: Section 2.2, p.55

31/123

32 of 123

Key Learning in Section 2.2

  • Use the properties of matrix addition, scalar multiplication, and zero matrices.
  • Use the properties of matrix multiplication and the identity matrix.
  • Find the transpose of a matrix.

32/123

33 of 123

Keywords in Section 2.2

  • zero matrix: 零矩陣
  • identity matrix: 單位矩陣
  • transpose matrix: 轉置矩陣
  • symmetric matrix: 對稱矩陣
  • skew-symmetric matrix: 反對稱矩陣

33/123

34 of 123

2.3 The Inverse of a Matrix

  • Inverse matrix:

Consider an n×n matrix A. If there exists an n×n matrix B such that AB = BA = In, then

(1) A is invertible (or nonsingular)

(2) B is the inverse of A

  • Note:

A matrix that does not have an inverse is called noninvertible (or singular).

Elementary Linear Algebra: Section 2.3, p.62

34/123

35 of 123

  • Thm 2.7: (The inverse of a matrix is unique)

If B and C are both inverses of the matrix A, then B = C.

Pf:

Consequently, the inverse of a matrix is unique.

  • Notes:

(1) The inverse of A is denoted by A^(-1).

Elementary Linear Algebra: Section 2.3, pp.62-63

35/123

36 of 123

  • Ex 2: (Find the inverse of the matrix)

Sol:

  • Find the inverse of a matrix by Gauss-Jordan Elimination:

Elementary Linear Algebra: Section 2.3, pp.63-64
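The Gauss-Jordan procedure for A^(-1) — adjoin I to A and row reduce [A | I] to [I | A^(-1)] — can be sketched as below. The 2×2 matrix used is an illustrative example, not necessarily the one in Ex 2.

```python
# Sketch of finding A^{-1} by Gauss-Jordan elimination on [A | I].
import numpy as np

def inverse_by_gauss_jordan(A):
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])   # form [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))  # pick a nonzero pivot
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("A is singular (cannot be row reduced to I)")
        M[[col, pivot]] = M[[pivot, col]]         # interchange two rows
        M[col] /= M[col, col]                     # scale the pivot row to 1
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]        # zero out the other entries
    return M[:, n:]                               # right half is A^{-1}

A = np.array([[1., 4.], [-1., -3.]])
A_inv = inverse_by_gauss_jordan(A)
print(A_inv)                                      # [[-3. -4.] [ 1.  1.]]
print(np.allclose(A @ A_inv, np.eye(2)))          # True
```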

36/123

37 of 123

Thus

Elementary Linear Algebra: Section 2.3, pp.63-64

37/123

38 of 123

If A can’t be row reduced to I, then A is singular.

  • Note:

Elementary Linear Algebra: Section 2.3, p.64

38/123

39 of 123

Sol:

  • Ex 3: (Find the inverse of the following matrix)

Elementary Linear Algebra: Section 2.3, p.65

39/123

40 of 123

So the matrix A is invertible, and its inverse is

  • Check:

Elementary Linear Algebra: Section 2.3, p.65

40/123

41 of 123

  • Power of a square matrix: A^1 = A, A^2 = AA, …, A^k = AA⋯A (k factors), with A^0 = In; moreover A^j A^k = A^(j+k) and (A^j)^k = A^(jk).

Elementary Linear Algebra: Section 2.3, Addition

41/123

42 of 123

  • Thm 2.8: (Properties of inverse matrices)

If A is an invertible matrix, k is a positive integer, and c is a scalar not equal to zero, then A^(-1), A^k, cA, and A^T are invertible and

(1) (A^(-1))^(-1) = A

(2) (A^k)^(-1) = (A^(-1))^k

(3) (cA)^(-1) = (1/c) A^(-1)

(4) (A^T)^(-1) = (A^(-1))^T

Elementary Linear Algebra: Section 2.3, p.67

42/123

43 of 123

  • Thm 2.9: (The inverse of a product)

If A and B are invertible matrices of size n, then AB is invertible and (AB)^(-1) = B^(-1) A^(-1).

Pf:

  • Note:

Elementary Linear Algebra: Section 2.3, p.68

43/123

44 of 123

  • Thm 2.10: (Cancellation properties)

If C is an invertible matrix, then the following properties hold:

(1) If AC=BC, then A=B (Right cancellation property)

(2) If CA=CB, then A=B (Left cancellation property)

Pf:

  • Note:

If C is not invertible, then cancellation is not valid.

Elementary Linear Algebra: Section 2.3, p.69

44/123

45 of 123

  • Thm 2.11: (Systems of equations with unique solutions)

If A is an invertible matrix, then the system of linear equations

Ax = b has a unique solution given by x = A^(-1) b.

Pf:

( A is nonsingular)

This solution is unique.

(Left cancellation property)

Elementary Linear Algebra: Section 2.3, p.70
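A short NumPy illustration of Thm 2.11: for an invertible A, the solution of Ax = b is x = A^(-1)b. The system below is illustrative; in practice np.linalg.solve is preferred over forming the inverse explicitly.

```python
# Thm 2.11 in code: x = A^{-1} b solves Ax = b when A is invertible.
import numpy as np

A = np.array([[2., 3., 1.],
              [3., 3., 1.],
              [2., 4., 1.]])
b = np.array([-1., 1., -2.])

x_via_inverse = np.linalg.inv(A) @ b     # x = A^{-1} b
x_via_solve = np.linalg.solve(A, b)      # numerically preferred route

print(x_via_inverse)
print(np.allclose(x_via_inverse, x_via_solve))   # True
print(np.allclose(A @ x_via_inverse, b))         # True
```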

45/123

46 of 123

  • Note:

Elementary Linear Algebra: Section 2.3, p.70

(A is an invertible matrix)

  • Note:

For square systems (those having the same number of equations as variables), Theorem 2.11 can be used to determine whether the system has a unique solution.

46/123

47 of 123

Key Learning in Section 2.3

  • Find the inverse of a matrix (if it exists).
  • Use properties of inverse matrices.
  • Use an inverse matrix to solve a system of linear equations.

47/123

48 of 123

Keywords in Section 2.3

  • inverse matrix: 反矩陣
  • invertible: 可逆
  • nonsingular: 非奇異
  • noninvertible: 不可逆
  • singular: 奇異
  • power: 冪次

48/123

49 of 123

2.4 Elementary Matrices

  • Row elementary matrix:

An n×n matrix is called an elementary matrix if it can be obtained

from the identity matrix In by a single elementary row operation.

  • Three row elementary matrices:

Interchange two rows.

Multiply a row by a nonzero constant.

Add a multiple of a row to another row.

  • Note:

Only do a single elementary row operation.

Elementary Linear Algebra: Section 2.4, p.74

49/123

50 of 123

  • Ex 1: (Elementary matrices and nonelementary matrices)

Elementary Linear Algebra: Section 2.4, p.74

50/123

51 of 123

  • Notes:
  • Thm 2.12: (Representing elementary row operations)

Let E be the elementary matrix obtained by performing an

elementary row operation on Im. If that same elementary row

operation is performed on an m×n matrix A, then the resulting

matrix is given by the product EA.

Elementary Linear Algebra: Section 2.4, p.75
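Thm 2.12 is easy to see numerically: build E by applying one elementary row operation to I, and EA applies the same operation to A. The matrices below are illustrative.

```python
# Thm 2.12 illustrated: the row operation "add 2*(row 1) to row 3".
import numpy as np

A = np.array([[1., 2., 3.],
              [0., 1., 4.],
              [5., 6., 0.]])

E = np.eye(3)
E[2, 0] = 2.0          # E = I_3 with the row operation applied to it

print(E @ A)           # row 3 of A becomes row3 + 2*row1 = [7, 10, 6]
```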

51/123

52 of 123

  • Ex 2: (Elementary matrices and elementary row operation)

Elementary Linear Algebra: Section 2.4, p.75

52/123

53 of 123

Sol:

  • Ex 3: (Using elementary matrices)

Find a sequence of elementary matrices that can be used to write

the matrix A in row-echelon form.

Elementary Linear Algebra: Section 2.4, p.76

53/123

54 of 123

row-echelon form

Elementary Linear Algebra: Section 2.4, p.76

54/123

55 of 123

Matrix B is row-equivalent to A when there exist a finite number of elementary matrices E1, E2, …, Ek such that B = Ek Ek–1 ⋯ E2 E1 A.

  • Row-equivalent:

Elementary Linear Algebra: Section 2.4, p.76

55/123

56 of 123

  • Thm 2.13: (Elementary matrices are invertible)

If E is an elementary matrix, then E^(-1) exists and E^(-1) is an elementary matrix.

  • Notes:

Elementary Linear Algebra: Section 2.4, p.77

56/123

57 of 123

  • Ex:

Elementary Matrix Inverse Matrix

Elementary Linear Algebra: Section 2.4, p.77

(Elementary Matrix)

(Elementary Matrix)

(Elementary Matrix)

57/123

58 of 123

  • Thm 2.14: (A property of invertible matrices)

A square matrix A is invertible if and only if it can be written as the product of elementary matrices.

Pf:

(1) Assume that A is the product of elementary matrices.

(a) Every elementary matrix is invertible.

(b) The product of invertible matrices is invertible.

Thus A is invertible.

(2) If A is invertible, then Ax = 0 has only the trivial solution (Thm. 2.11), so A can be row reduced to In; that is, Ek ⋯ E2 E1 A = In for some elementary matrices Ei. Thus A = E1^(-1) E2^(-1) ⋯ Ek^(-1), and since each Ei^(-1) is elementary (Thm. 2.13), A can be written as the product of elementary matrices.

Elementary Linear Algebra: Section 2.4, p.77

58/123

59 of 123

Sol:

  • Ex 4:

Find a sequence of elementary matrices whose product is

Elementary Linear Algebra: Section 2.4, p.78

59/123

60 of 123

  • Note:

If A is invertible, then the sequence of elementary row operations that reduces A to In also transforms In into A^(-1).

Elementary Linear Algebra: Section 2.4, p.78

60/123

61 of 123

  • Thm 2.15: (Equivalent conditions)

If A is an n×n matrix, then the following statements are equivalent.

(1) A is invertible.

(2) Ax = b has a unique solution for every n×1 column matrix b.

(3) Ax = 0 has only the trivial solution.

(4) A is row-equivalent to In.

(5) A can be written as the product of elementary matrices.

Elementary Linear Algebra: Section 2.4, p.78

61/123

62 of 123

  • LU-factorization:

If the n×n matrix A can be written as the product of a lower triangular matrix L and an upper triangular matrix U, then A = LU is an LU-factorization of A.

  • Note:

If a square matrix A can be row reduced to an upper triangular matrix U using only the row operation of adding a multiple of one row to another row below it, then it is easy to find an LU-factorization of A.

Elementary Linear Algebra: Section 2.4, p.79

62/123

63 of 123

Sol: (a)

  • Ex 5: (LU-factorization)

Elementary Linear Algebra: Section 2.4, pp.79-80

63/123

64 of 123

(b)

Elementary Linear Algebra: Section 2.4, pp.79-80

64/123

65 of 123

  • Solving Ax = b with an LU-factorization of A:

Two steps:

(1) Write y = Ux and solve Ly = b for y.

(2) Solve Ux = y for x.

Elementary Linear Algebra: Section 2.4, p.80
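The two-step procedure can be sketched directly as forward and back substitution. The factors L and U and the right-hand side b below are an illustrative LU-factorization, not the data of Ex 7.

```python
# Solving Ax = b from A = LU: forward-substitute Ly = b, then back-substitute Ux = y.
import numpy as np

L = np.array([[1., 0., 0.],
              [2., 1., 0.],
              [-1., 3., 1.]])      # unit lower triangular
U = np.array([[2., 1., 1.],
              [0., 3., -1.],
              [0., 0., 4.]])       # upper triangular
b = np.array([4., 11., 7.])

def forward_sub(L, b):
    y = np.zeros_like(b)
    for i in range(len(b)):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_sub(U, y):
    x = np.zeros_like(y)
    for i in reversed(range(len(y))):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

y = forward_sub(L, b)              # step (1): Ly = b
x = back_sub(U, y)                 # step (2): Ux = y
print(x)
print(np.allclose(L @ U @ x, b))   # x solves (LU)x = b
```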

65/123

66 of 123

Sol:

  • Ex 7: (Solving a linear system using LU-factorization)

Elementary Linear Algebra: Section 2.4, p.81

66/123

67 of 123

Thus, the solution is

So

Elementary Linear Algebra: Section 2.4, p.81

67/123

68 of 123

Key Learning in Section 2.4

  • Factor a matrix into a product of elementary matrices.
  • Find and use an LU-factorization of a matrix to solve a system of linear equations.

68/123

69 of 123

Keywords in Section 2.4

  • row elementary matrix: 列基本矩陣
  • row equivalent: 列等價
  • lower triangular matrix: 下三角矩陣
  • upper triangular matrix: 上三角矩陣
  • LU-factorization: LU-分解

69/123

70 of 123

2.5 Markov Chains

Elementary Linear Algebra: Section 2.5, p.84

  • Stochastic Matrices

{S1, S2, …, Sn} is a finite set of states of a given population.

P = [pij] is called the matrix of transition probabilities, where pij is the probability that a member of the population changes from the jth state to the ith state: pij = 0 means the change from the jth state to the ith state is certain not to occur, and pij = 1 means it is certain to occur.

The columns of P are indexed by the current state ("From": S1, S2, …, Sn) and the rows by the next state ("To": S1, S2, …, Sn). Each column of P sums to 1:

p1j + p2j + ⋯ + pnj = 1,  j = 1, 2, …, n.

70/123

71 of 123

Elementary Linear Algebra: Section 2.5

not stochastic

not stochastic

stochastic

stochastic

  • Ex 1: (Examples of Stochastic Matrices and Nonstochastic Matrices)

71/123

72 of 123

[Transition matrix P — From: A, B, None (columns); To: A, B, None (rows) — and the initial state matrix]

  • Ex 2: (A Consumer Preference Model)

72/123

73 of 123

[State matrices for A, B, and None after 1, 3, 5, and 10 years, and the steady state matrix]

  • Ex 3: (A Consumer Preference Model)

73/123

74 of 123

Elementary Linear Algebra: Section 2.5, p.87

  • Steady state matrix:

The matrix Xn eventually reaches a steady state. That is, as long as the matrix P does not change, the matrix product P^n X approaches a limit X̄. The limit X̄ is the steady state matrix.

  • Regular stochastic matrix:

A stochastic matrix P is regular when some power of P has only positive entries.

  • Note:

When P is a regular stochastic matrix, the corresponding regular Markov chain approaches a unique steady state matrix X̄.

74/123

75 of 123

  • Ex 4: (Regular Stochastic Matrices)

Elementary Linear Algebra: Section 2.5, p.87

(a) The stochastic matrix

is regular because P has only positive entries.

(b) The stochastic matrix

is regular because P^2 has only positive entries.

75/123

76 of 123

  • Ex 4: (Regular Stochastic Matrices)

Elementary Linear Algebra: Section 2.5, p.87

(c) The stochastic matrix

is not regular because every power of P has two zeros in its second column.

76/123

77 of 123

  • Ex 5: (Finding a Steady State Matrix)

Elementary Linear Algebra: Section 2.5, p.88

Find the steady state matrix X of the Markov chain whose matrix of transition probabilities is the regular matrix

Sol:

Let X̄ = [x1 x2 x3]^T. Then use the matrix equation P X̄ = X̄ to obtain

77/123

78 of 123

Elementary Linear Algebra: Section 2.5, p.88

or

Use these equations and the fact that x1 + x2 + x3 = 1 to write the system of linear equations below.

78/123

79 of 123

  • Ex 5: (Finding a Steady State Matrix)

Elementary Linear Algebra: Section 2.5, p.88

Use any appropriate method to verify that the solution of this system is

So the steady state matrix is

Check:

79/123

80 of 123

Elementary Linear Algebra: Section 2.5, p.87

  • Finding the Steady State Matrix of a Markov chain:

1. Check to see that the matrix of transition probabilities P

is a regular matrix.

2. Solve the system of linear equations obtained from the matrix equation P X̄ = X̄, along with the equation x1 + x2 + ⋯ + xn = 1.

3. Check the solution found in Step 2 in the matrix equation P X̄ = X̄ (a computational sketch follows below).
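A minimal computational sketch of this procedure: stack the equations (P − I)X̄ = 0 with x1 + ⋯ + xn = 1 and solve. The transition matrix P below is an illustrative regular stochastic matrix, not the one from Ex 5.

```python
# Steady state of a regular Markov chain: solve P x = x with sum(x) = 1.
import numpy as np

P = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.7, 0.3],
              [0.1, 0.1, 0.6]])     # columns sum to 1, all entries positive

n = P.shape[0]
# Stack (P - I)x = 0 with the extra equation sum(x) = 1, then solve by least squares.
A = np.vstack([P - np.eye(n), np.ones((1, n))])
b = np.concatenate([np.zeros(n), [1.0]])
x_bar, *_ = np.linalg.lstsq(A, b, rcond=None)

print(x_bar)                          # steady state matrix
print(np.allclose(P @ x_bar, x_bar))  # check: P x_bar = x_bar
```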

80/123

81 of 123

Elementary Linear Algebra: Section 2.5, p.89

  • Absorbing state:

Consider a Markov chain with n different states {S1, S2, . . . , Sn}. The ith state Si is an absorbing state when, in the matrix of transition probabilities P, pii = 1. That is, the entry on the main diagonal of P is 1 and all other entries in the ith column of P are 0.

  • Absorbing Markov chain:

An absorbing Markov chain has the two properties listed below.

1. The Markov chain has at least one absorbing state.

2. It is possible for a member of the population to move from any nonabsorbing state to an absorbing state in a finite number of transitions.

81/123

82 of 123

  • Ex 6: (Absorbing and Nonabsorbing Markov Chains)

Elementary Linear Algebra: Section 2.5, p.89

(a) For the matrix

[From: S1, S2, S3 (columns); To: S1, S2, S3 (rows)]

the second state, represented by the second column, is absorbing. Moreover, the corresponding Markov chain is also absorbing because it is possible to move from S1 to S2 in two transitions, and it is possible to move from S3 to S2 in one transition.

82/123

83 of 123

Elementary Linear Algebra: Section 2.5, p.89

(b) For the matrix

[From: S1, S2, S3, S4 (columns); To: S1, S2, S3, S4 (rows)]

the second state is absorbing. However, the corresponding Markov chain is not absorbing because there is no way to move from state S3 or state S4 to state S2.

83/123

84 of 123

  • Ex 6: (Absorbing and Nonabsorbing Markov Chains)

Elementary Linear Algebra: Section 2.5, p.89

(c) The matrix

[From: S1, S2, S3, S4 (columns); To: S1, S2, S3, S4 (rows)]

has two absorbing states: S2 and S4. Moreover, the corresponding Markov chain is also absorbing because it is possible to move from either of the nonabsorbing states, S1 or S3, to either of the absorbing states in one step.

84/123

85 of 123

Ex 7: (Finding Steady State Matrices of Absorbing Markov Chains)

Elementary Linear Algebra: Section 2.5, p.90

Use the matrix equation P X̄ = X̄, or

along with the equation x1 + x2 + x3 = 1 to write the system of linear equations

85/123

86 of 123

Elementary Linear Algebra: Section 2.5, p.90

The solution of this system is x1 = 0, x2 = 1, and x3 = 0, so the steady state matrix is X̄ = [0 1 0]^T. Note that X̄ coincides with the second column of the matrix of transition probabilities P.

86/123

87 of 123

Ex 7: (Finding Steady State Matrices of Absorbing Markov Chains)

Elementary Linear Algebra: Section 2.5, p.90

Use the matrix equation P X̄ = X̄, or

along with the equation x1 + x2 + x3 + x4 = 1 to write the system of linear equations

87/123

88 of 123

Elementary Linear Algebra: Section 2.5, p.90

The solution of this system is x1 = 0, x2 = 1 – t, x3 = 0, and x4 = t, where t is any real number such that 0 ≤ t ≤ 1. So, the steady state matrix is X̄ = [0  1–t  0  t]^T. The Markov chain has an infinite number of steady state matrices.

88/123

89 of 123

Key Learning in Section 2.5

  • Use a stochastic matrix to find the nth state matrix of a Markov chain.
  • Find the steady state matrix of a Markov chain.
  • Find the steady state matrix of an absorbing Markov chain.

89/123

90 of 123

Keywords in Section 2.5

  • matrix of transition probabilities: 轉移機率矩陣
  • stochastic: 隨機
  • stochastic matrix: 隨機矩陣
  • state matrix: 狀態矩陣
  • Markov chain: 馬可夫鏈
  • steady state: 穩定狀態
  • regular stochastic matrix: 正規隨機矩陣
  • regular Markov chain: 正規馬可夫鏈
  • steady state matrix: 穩定狀態矩陣
  • absorbing Markov chains: 吸收馬可夫鏈

90/123

91 of 123

2.6 More Applications of Matrix Operations

  • Cryptography: a method of using matrix multiplication to encode and decode messages. Assign a number to each letter of the alphabet (with 0 assigned to a blank space):

 0 = _    7 = G   14 = N   21 = U
 1 = A    8 = H   15 = O   22 = V
 2 = B    9 = I   16 = P   23 = W
 3 = C   10 = J   17 = Q   24 = X
 4 = D   11 = K   18 = R   25 = Y
 5 = E   12 = L   19 = S   26 = Z
 6 = F   13 = M   20 = T

Elementary Linear Algebra: Section 2.6, p.94

91/123

92 of 123

M E E T _ M E _ M O N D A Y _

  • Notes:

(1) A blank space is used to fill out the last uncoded row matrix.

(2) To encode a message, choose an n × n invertible matrix A

and multiply the uncoded row matrices (on the right) by A

to obtain coded row matrices.

  • Ex 1: (Forming Uncoded Row Matrices)

Elementary Linear Algebra: Section 2.6, p.94

92/123

93 of 123

Uncoded

Row Matrix

Encoding

Matrix A

Coded

Row Matrix

  • Ex 2: (Encoding a Message)

Elementary Linear Algebra: Section 2.6, p.95

93/123

94 of 123

Uncoded

Row Matrix

Encoding

Matrix A

Coded

Row Matrix

Elementary Linear Algebra: Section 2.6, p.95

94/123

95 of 123

If X is an uncoded 1 × n row matrix and Y = XA is the corresponding encoded matrix, then the receiver of the cryptogram can multiply each coded row matrix in the sequence of coded row matrices by A^(-1) (on the right) to obtain the uncoded row matrices.

Elementary Linear Algebra: Section 2.6, p.95
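An end-to-end sketch of the scheme: numbers are assigned as in the table on page 94, uncoded 1×3 row matrices are multiplied on the right by an invertible A to encode, and by A^(-1) to decode. The encoding matrix below is an illustrative invertible choice, not necessarily the one used in Ex 2 and Ex 3.

```python
# Encode and decode a message with an invertible matrix A (blank = 0, A = 1, ..., Z = 26).
import numpy as np

ALPHABET = " ABCDEFGHIJKLMNOPQRSTUVWXYZ"          # index = assigned number

A = np.array([[1, -2, 2],
              [-1, 1, 3],
              [1, -1, -4]], dtype=float)          # det(A) = 1, so A is invertible

def encode(message, A):
    nums = [ALPHABET.index(ch) for ch in message]
    while len(nums) % 3:                          # pad with blank spaces
        nums.append(0)
    X = np.array(nums, dtype=float).reshape(-1, 3)  # uncoded 1x3 row matrices
    return X @ A                                  # coded row matrices

def decode(coded, A):
    X = coded @ np.linalg.inv(A)                  # multiply by A^{-1} on the right
    nums = np.rint(X).astype(int).flatten()
    return "".join(ALPHABET[n] for n in nums)

coded = encode("MEET ME MONDAY", A)
print(coded)                                      # the cryptogram
print(decode(coded, A))                           # "MEET ME MONDAY " (padded)
```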

95/123

96 of 123

Gauss-Jordan elimination

the sequence of coded row matrices

  • Ex 3: (Decoding a Message)

Elementary Linear Algebra: Section 2.6, p.96

96/123

97 of 123

Coded

Row Matrix

Encoding Matrix A-1

Decoded

Row Matrix

Elementary Linear Algebra: Section 2.6, p.96

97/123

98 of 123

Coded

Row Matrix

Encoding

Matrix A-1

Decoded

Row Matrix

Elementary Linear Algebra: Section 2.6, p.96

98/123

99 of 123

M E E T _ M E _ M O N D A Y _

Coded

Row Matrix

Encoding

Matrix A-1

Decoded

Row Matrix

the sequence of decoded row matrices

the message

Elementary Linear Algebra: Section 2.6, p.96

99/123

100 of 123

  • Input-output matrix:

(1) dij is the amount of output the jth industry needs from the ith industry to produce one unit of output per year.

(2) The values of dij must satisfy 0 ≤ dij ≤ 1, and the sum of the entries in any column must be less than or equal to 1.

[Matrix D — User (Output): I1, I2, …, In (columns); Supplier (Input): I1, I2, …, In (rows)]

Elementary Linear Algebra: Section 2.6, p.97

100/123

101 of 123

  • Ex 4: (Forming an Input-Output Matrix)

Elementary Linear Algebra: Section 2.6, p.97

Consider a simple economic system consisting of three industries: electricity, water, and coal. Production, or output, of one unit of electricity requires 0.5 unit of itself, 0.25 unit of water, and 0.25 unit of coal. Production of one unit of water requires 0.1 unit of electricity, 0.6 unit of itself, and 0 units of coal. Production of one unit of coal requires 0.2 unit of electricity, 0.15 unit of water, and 0.5 unit of itself. Find the input-output matrix for this system.

Sol:

The column entries show the amounts each industry requires from the others, and from itself, to produce one unit of output.

101/123

102 of 123

  • Ex 4: (Forming an Input-Output Matrix)

Elementary Linear Algebra: Section 2.6, p.97

The row entries show the amounts each industry supplies to the others, and to itself, for that industry to produce one unit of output. For instance, the electricity industry supplies 0.5 unit to itself, 0.1 unit to water, and 0.2 unit to coal.

[Matrix D — User (Output): E, W, C (columns); Supplier (Input): E, W, C (rows)]

102/123

103 of 123

  • Leontief input-output model:

  • Closed system:

Let the total output of the ith industry be denoted by xi. If the economic system is closed (that is, the economic system sells its products only to industries within the system, as in the example above), then the total output of the ith industry is

xi = di1x1 + di2x2 + ⋯ + dinxn    (Closed System)

Elementary Linear Algebra: Section 2.6, p.97

103/123

104 of 123

  • Open system:

If the industries within the system sell products to nonproducing groups (such as governments or charitable organizations) outside the system, then the system is open and the total output of the ith industry is

xi = di1x1 + di2x2 + ⋯ + dinxn + ei    (Open System)

where ei represents the external demand for the ith industry's product. The system of n linear equations below represents the collection of total outputs for an open system.

Elementary Linear Algebra: Section 2.6, p.98

The matrix form of this system is X = DX + E, where X is the output matrix and E is the external demand matrix.
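Solving X = DX + E amounts to solving (I − D)X = E, as the next slide does by Gauss-Jordan elimination. The sketch below uses the electricity/water/coal matrix D described in Ex 4 together with a made-up external demand vector (Ex 5's data are not reproduced here).

```python
# Open Leontief model: solve (I - D) X = E for the output matrix X.
import numpy as np

#              E     W     C      (user / output)
D = np.array([[0.50, 0.10, 0.20],   # electricity supplied
              [0.25, 0.60, 0.15],   # water supplied
              [0.25, 0.00, 0.50]])  # coal supplied

E = np.array([5000., 2000., 8000.])       # hypothetical external demands

X = np.linalg.solve(np.eye(3) - D, E)     # total outputs X = (I - D)^{-1} E
print(X)
print(np.allclose(X, D @ X + E))          # X satisfies X = DX + E
```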

104/123

105 of 123

Ex 5: (Solving for the output Matrix of an open system)

Elementary Linear Algebra: Section 2.6, p.98

An economic system composed of three industries has the input-output matrix shown below.

[Matrix D — User (Output): A, B, C (columns); Supplier (Input): A, B, C (rows)]

Sol:

Letting I be the identity matrix, write the equation X = DX + E as IX – DX = E, which means that (I – D)X = E. Using the matrix D above produces

105/123

106 of 123

Ex 5: (Solving for the output Matrix of an open system)

Elementary Linear Algebra: Section 2.6, p.98

So, the output matrix is

Using Gauss-Jordan elimination,

A

B

C

106/123

107 of 123

Ex 5: (Solving for the output Matrix of an open system)

Elementary Linear Algebra: Section 2.6, p.98

To produce the given external demands, the outputs of the three industries must be approximately 46,750 units for industry A, 50,950 units for industry B, and 37,800 units for industry C.

107/123

108 of 123

  • Least Squares Regression Analysis

A procedure used in statistics to develop linear models; a method for approximating a line of best fit for a given set of data points.

  • Ex 6: (A Visual Straight-Line Approximation)

Determine a line that appears to best fit the points (1, 1), (2, 2), (3, 4), (4, 4), and (5, 6).

Elementary Linear Algebra: Section 2.6, p.99

108/123

109 of 123

Model 1: f(x) = 0.5 + x

  xi   yi   f(xi)   [yi - f(xi)]^2
   1    1    1.5       (-0.5)^2
   2    2    2.5       (-0.5)^2
   3    4    3.5       (+0.5)^2
   4    4    4.5       (-0.5)^2
   5    6    5.5       (+0.5)^2
            Sum of squared error: 1.25

Model 2: f(x) = 1.2x

  xi   yi   f(xi)   [yi - f(xi)]^2
   1    1    1.2       (-0.2)^2
   2    2    2.4       (-0.4)^2
   3    4    3.6       (+0.4)^2
   4    4    4.8       (-0.8)^2
   5    6    6.0       ( 0.0)^2
            Sum of squared error: 1.00

Elementary Linear Algebra: Section 2.6, p.99

109/123

110 of 123

  • Notes:

(1) The sums of squared errors confirm that the second model

fits the given points better than the first model.

(2) Of all possible linear models for a given set of points, the

model that has the best fit is defined to be the one that

minimizes the sum of squared error.

(3) This model is called the least squares regression line, and

the procedure for finding it is called the method of least

squares.

Elementary Linear Algebra: Section 2.6, p.100

110/123

111 of 123

  • Definition of the Least Squares Regression Line

For a set of points {(x1, y1), (x2, y2), …, (xn, yn)}, the least squares regression line f(x) = a0 + a1x is the linear model that minimizes the sum of squared error

[y1 − f(x1)]^2 + [y2 − f(x2)]^2 + ⋯ + [yn − f(xn)]^2.

Its coefficients a0 and a1 satisfy the system of linear equations

n a0 + (x1 + x2 + ⋯ + xn) a1 = y1 + y2 + ⋯ + yn
(x1 + x2 + ⋯ + xn) a0 + (x1^2 + x2^2 + ⋯ + xn^2) a1 = x1y1 + x2y2 + ⋯ + xnyn

Elementary Linear Algebra: Section 2.6, p.100

111/123

112 of 123

Writing the error as E, the system of linear equations for the coefficients can be expressed as the matrix equation Y = XA + E, where

Y = [y1 y2 ⋯ yn]^T,  X is the n×2 matrix whose ith row is [1  xi],  A = [a0 a1]^T,  E = [e1 e2 ⋯ en]^T.

Elementary Linear Algebra: Section 2.6, p.100

112/123

113 of 123

  • Matrix Form for Linear Regression

For the regression model Y = XA + E, the coefficients of the least squares regression line are given by the matrix equation

A = (X^T X)^(-1) X^T Y

and the sum of squared error is E^T E.

  • Notes:

(1) The matrix X has a column of 1's (corresponding to a0) and a column containing the xi's.

(2) This matrix equation can be used to determine the coefficients of the least squares regression line.

Elementary Linear Algebra: Section 2.6, pp.100-101
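For the five points of Ex 6 and Ex 7, the normal-equation form A = (X^T X)^(-1) X^T Y can be evaluated directly; the computation gives the line y = −0.2 + 1.2x with a sum of squared error of 0.8.

```python
# Least squares regression line for (1,1), (2,2), (3,4), (4,4), (5,6).
import numpy as np

x = np.array([1., 2., 3., 4., 5.])
y = np.array([1., 2., 4., 4., 6.])

X = np.column_stack([np.ones_like(x), x])   # rows [1, x_i]
A = np.linalg.solve(X.T @ X, X.T @ y)       # coefficients [a0, a1]
a0, a1 = A
print(a0, a1)                               # -0.2  1.2  ->  y = -0.2 + 1.2x

residuals = y - X @ A
print(np.sum(residuals**2))                 # sum of squared error = 0.8
```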

113/123

114 of 123

Find the least squares regression line for the points

(1, 1), (2, 2), (3, 4), (4, 4), and (5, 6).

Sol: Form the matrices X and Y from the five data points.

  • Ex 7: (Finding the Least Squares Regression Line)

Elementary Linear Algebra: Section 2.6, p.101

114/123

115 of 123

the coefficient matrix is A = (X^T X)^(-1) X^T Y = [-0.2  1.2]^T, so the least squares regression line is y = -0.2 + 1.2x

Elementary Linear Algebra: Section 2.6, p.101

115/123

116 of 123

Key Learning in Section 2.6

  • Use matrix multiplication to encode and decode messages.
  • Use matrix algebra to analyze an economic system (Leontief input-output model).
  • Find the least squares regression line for a set of data.

116/123

117 of 123

Keywords in Section 2.6

  • cryptogram: 密碼學
  • encode: 編碼
  • decode: 解碼
  • uncoded row matrices: 未編碼的列矩陣
  • coded row matrices: 已編碼的列矩陣
  • input: 輸入
  • output: 輸出
  • input-output matrix: 輸入-輸出矩陣
  • closed: 封閉的
  • open: 開放的
  • external demand matrix: 外部需求矩陣
  • sum of squared error: 誤差平方
  • least squares regression line: 最小平方回歸線
  • method of least squares: 最小平方法

117/123

118 of 123

  • Flight Crew Scheduling

Many real-life applications of linear systems involve enormous numbers of equations and variables. For example, a flight crew scheduling problem for American Airlines required the manipulation of matrices with 837 rows and more than 12,750,000 columns. To solve this application of linear programming, researchers partitioned the problem into smaller pieces and solved it on a computer.

2.1 Linear Algebra Applied

Elementary Linear Algebra: Section 2.1, p.47

118/123

119 of 123

  • Information Retrieval

Information retrieval systems such as Internet search engines make use of matrix theory and linear algebra to keep track of, for instance, keywords that occur in a database. To illustrate with a simplified example, suppose you wanted to perform a search on some of the m available keywords in a database of n documents. You could represent the occurrences of the m keywords in the n documents with A, an m × n matrix in which an entry is 1 if the keyword occurs in the document and 0 if it does not occur in the document. You could represent the search with the m×1 column matrix x in which a 1 entry represents a keyword you are searching and 0 represents a keyword you are not searching. Then, the n×1 matrix product A^T x would represent the number of keywords in your search that occur in each of the n documents. For a discussion on the PageRank algorithm that is used in Google's search engine, see Section 2.5 (page 86).
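A toy version of this keyword search in NumPy; the incidence matrix and query below are made-up illustrative data.

```python
# Keyword search as a matrix product: A is an m x n keyword-by-document
# incidence matrix, x flags searched keywords, A^T x counts matches per document.
import numpy as np

A = np.array([[1, 0, 1, 1],    # rows = keywords, columns = documents
              [0, 1, 1, 0],
              [1, 1, 0, 1]])

x = np.array([1, 0, 1])        # search for keywords 1 and 3

scores = A.T @ x               # searched keywords found in each document
print(scores)                  # [2 1 1 2]
```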

2.2 Linear Algebra Applied

Elementary Linear Algebra: Section 2.2, p.58

119/123

120 of 123

  • Beam Deflection

Recall Hooke’s law, which states that for relatively small deformations of an elastic object, the amount of deflection is directly proportional to the force causing the deformation. In a simply supported elastic beam subjected to multiple forces, deflection d is related to force w by the matrix equation

d = Fw

where F is a flexibility matrix whose entries depend on the material of the beam. The inverse of the flexibility matrix, F^(-1), is called the stiffness matrix. In Exercises 61 and 62, you are asked to find the stiffness matrix F^(-1) and the force matrix w for a given set of flexibility and deflection matrices.

2.3 Linear Algebra Applied

Elementary Linear Algebra: Section 2.3, p.64

120/123

121 of 123

  • Computational Fluid Dynamics

Computational fluid dynamics (CFD) is the computer-based analysis of such real-life phenomena as fluid flow, heat transfer, and chemical reactions. Solving the conservation of energy, mass, and momentum equations involved in a CFD analysis can involve large systems of linear equations. So, for efficiency in computing, CFD analyses often use matrix partitioning and LU-factorization in their algorithms. Aerospace companies such as Boeing and Airbus have used CFD analysis in aircraft design. For instance, engineers at Boeing used CFD analysis to simulate airflow around a virtual model of their 787 aircraft to help produce a faster and more efficient design than those of earlier Boeing aircraft.

2.4 Linear Algebra Applied

Elementary Linear Algebra: Section 2.4, p.79

121/123

122 of 123

Google’s PageRank algorithm makes use of Markov chains. For a search set that contains n web pages, define an n × n matrix A such that aij = 1 when page j references page i and aij = 0 otherwise. Adjust A to account for web pages without external references, scale each column of A so that A is stochastic, and call this matrix B. Then define

where p is the probability that a user follows a link on a page, 1 − p is the probability that the user goes to any page at random, and E is an n × n matrix whose entries are all 1. The Markov chain whose matrix of transition probabilities is M converges to a unique steady state matrix, which gives an estimate of page ranks. Section 10.3 discusses a method that can be used to estimate the steady state matrix.
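A small sketch of this construction on a made-up three-page link structure; the damping value p = 0.85 is a common choice in descriptions of PageRank, not something specified here.

```python
# PageRank-style construction on a tiny made-up link graph.
import numpy as np

n = 3
A = np.array([[0., 1., 1.],    # a_ij = 1 when page j links to page i
              [1., 0., 0.],
              [1., 0., 0.]])

col_sums = A.sum(axis=0)
col_sums[col_sums == 0] = 1.0             # guard pages with no out-links
B = A / col_sums                          # scale columns so B is stochastic

p = 0.85                                  # assumed damping probability
M = p * B + (1 - p) * np.ones((n, n)) / n # matrix of transition probabilities

x = np.full(n, 1.0 / n)
for _ in range(100):                      # iterate toward the steady state
    x = M @ x
print(x)                                  # estimated page ranks (sum to 1)
```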

2.5 Linear Algebra Applied

Elementary Linear Algebra: Section 2.5, p.86

122/123

123 of 123

  • Data Encryption

Information security is of the utmost importance when conducting business online. If a malicious party should receive confidential information such as passwords, personal identification numbers, credit card numbers, Social Security numbers, bank account details, or sensitive company information, then the effects can be damaging. To protect the confidentiality and integrity of such information, Internet security can include the use of data encryption, the process of encoding information so that the only way to decode it, apart from an “exhaustion attack,” is to use a key. Data encryption technology uses algorithms based on the material presented here, but on a much more sophisticated level, to prevent malicious parties from discovering the key.

2.6 Linear Algebra Applied

Elementary Linear Algebra: Section 2.6, p.94

123/123