Matrices: A Complete Course
by Luke S. Cole
Version 1.7, July 2000
Contents

1 Linear Equations in Linear Algebra
  1.1 Systems of Linear Equations
  1.2 Matrix Notation
  1.3 Solving a Linear System

2 Row Reduction and Echelon Forms
  2.1 Some Terms for Matrices
  2.2 Solutions to Linear Systems
  2.3 Back Substitution (Gaussian Elimination)

3 Vector Equations
  3.1 Addition of Vectors
  3.2 Subtraction of Vectors
  3.3 Linear Combinations

4 The Matrix Equation A~x = ~b
  4.1 Properties

5 Solution Sets of Linear Systems
  5.1 Parametric Vector Form
  5.2 Homogeneous Linear Systems
      5.2.1 Theorems

6 Linear Independence
  6.1 Linear Independence
      6.1.1 Definition
  6.2 Linear Dependence
      6.2.1 Definitions

7 Linear Transformations
  7.1 Properties
  7.2 Terms
      7.2.1 A Shear Transformation
      7.2.2 A Dilation Transformation
      7.2.3 A Reflected Transformation
      7.2.4 A Rotated Transformation
      7.2.5 A Projection Transformation
  7.3 One-to-one
      7.3.1 Properties

8 Matrix Operations
  8.1 Sums and Scalar Multiples
      8.1.1 Properties
  8.2 Multiplication
      8.2.1 Definition
      8.2.2 Properties
  8.3 Power of a Matrix
  8.4 The Transpose of a Matrix
      8.4.1 Properties

9 The Inverse of a Matrix
  9.1 Properties
  9.2 Elementary Matrices
      9.2.1 Properties
  9.3 An Algorithm for Finding A⁻¹
  9.4 Characterizations of Invertible Matrices
  9.5 Invertible Linear Transformations
      9.5.1 Theorem

10 Subspaces of ℝⁿ
  10.1 Definition
  10.2 Column Space of a Matrix
      10.2.1 Definition
  10.3 Null Space of a Matrix
      10.3.1 Definition
  10.4 Basis for a Subspace
      10.4.1 Definition
  10.5 The Dimension of a Subspace
      10.5.1 Definitions
      10.5.2 Theorems

11 Determinants
  11.1 Definition
  11.2 Theorems
  11.3 Properties
  11.4 Cramer's Rule
  11.5 A Formula for the Inverse of A
1 Linear Equations in Linear Algebra

1.1 Systems of Linear Equations
A linear equation in the variables x₁, ..., xₙ is an equation that can be written in the form:

a₁x₁ + a₂x₂ + ... + aₙxₙ = b    (1)

where the coefficients a₁, ..., aₙ and the constant b are real or complex numbers, and n is a positive integer.

The equations

4x₁ − 5x₂ + 2 = x₁   and   x₂ = 2(√6 − x₁) + x₃

are both linear because they can be rearranged algebraically into the form of equation (1). The equations

4x₁ − 5x₂ = x₁x₂   and   x₂ = 2√x₁ − 6

are both not linear because of the presence of x₁x₂ in the first equation and √x₁ in the second.

A system of linear equations (or a linear system) is a collection of one or more linear equations involving the same variables.
E.g. Two linear equations:

2x₁ − x₂ + 1.5x₃ = 8
x₁ − 4x₃ = −7

Or a single vector equation:

x₁ [ 2 ] + x₂ [ 2 ] = [ 4 ]
   [ 4 ]      [ 9 ]   [ 3 ]

Here we think of [2; 4] as a vector (a line segment joining (0,0) to (2,4) in a plane) and x₁ as a scalar.

A system of linear equations has either:

• no solution
• exactly one solution
• infinitely many solutions

A linear system is consistent if it has either one solution or infinitely many solutions; a system is inconsistent if it has no solution.
1.2 Matrix Notation
A matrix is simply the essential information of a linear system recorded compactly in a rectangular array. Given the system:

 x₁ − 2x₂ +  x₃ =  0
      2x₂ − 8x₃ =  8
−4x₁ + 5x₂ + 9x₃ = −9
There are two different forms of matrix that can be formed from the above system.

The Coefficient Matrix:

[ 1 −2  1 ]
[ 0  2 −8 ]
[−4  5  9 ]

The Augmented Matrix:

[ 1 −2  1 |  0 ]
[ 0  2 −8 |  8 ]
[−4  5  9 | −9 ]
1.3 Solving a Linear System
The basic strategy is to replace one system with an equivalent system that is easier to solve. Three basic operations are used to simplify a linear system:

1. Replace one equation by the sum of itself and a multiple of another equation.
2. Interchange two equations.
3. Multiply all the terms in an equation by a nonzero constant.

E.g.
Solve the following system:

[ 1 −2  1 |  0 ]
[ 0  2 −8 |  8 ]
[−4  5  9 | −9 ]

We want to keep x₁ in the first equation and eliminate it from the other equations. To do so, add 4 times equation 1 to equation 3:

[ 1 −2  1 |  0 ]
[ 0  2 −8 |  8 ]
[ 0 −3 13 | −9 ]

Next, multiply equation 2 by 1/2 in order to obtain 1 as the coefficient of x₂:

[ 1 −2  1 |  0 ]
[ 0  1 −4 |  4 ]
[ 0 −3 13 | −9 ]

Use the x₂ in equation 2 to eliminate the −3x₂ in equation 3 (add 3 times equation 2 to equation 3):

[ 1 −2  1 | 0 ]
[ 0  1 −4 | 4 ]
[ 0  0  1 | 3 ]

Eventually we want to eliminate the −2x₂ term from equation 1, but it is more efficient to use the x₃ in equation 3 first, to eliminate the −4x₃ and +x₃ terms in equations 2 and 1:

[ 1 −2  0 | −3 ]
[ 0  1  0 | 16 ]
[ 0  0  1 |  3 ]

Now, having cleared the column above the x₃ in equation 3, we move back to the x₂ in equation 2 and use it to eliminate the −2x₂ above it. Because of our previous work with x₃, there is now no arithmetic involving x₃ terms. Adding 2 times equation 2 to equation 1, we obtain the system:
[ 1 0 0 | 29 ]
[ 0 1 0 | 16 ]
[ 0 0 1 |  3 ]

Hence x₁ = 29, x₂ = 16 and x₃ = 3, i.e. the solution is (29, 16, 3).
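The same elimination can be cross-checked numerically. Below is a minimal Python sketch (assuming the third-party numpy library; the variable M is just an illustrative name for the augmented matrix, not part of the notes):

import numpy as np

# Augmented matrix of the example system
M = np.array([[ 1., -2.,  1.,  0.],
              [ 0.,  2., -8.,  8.],
              [-4.,  5.,  9., -9.]])

M[2] += 4 * M[0]   # add 4 x (equation 1) to equation 3
M[1] *= 0.5        # multiply equation 2 by 1/2
M[2] += 3 * M[1]   # add 3 x (equation 2) to equation 3
M[1] += 4 * M[2]   # clear the x3 term in equation 2
M[0] -= M[2]       # clear the x3 term in equation 1
M[0] += 2 * M[1]   # add 2 x (equation 2) to equation 1

print(M[:, 3])     # [29. 16.  3.]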
2 Row Reduction and Echelon Forms
A matrix is in Echelon Form if:

1. All nonzero rows are above any rows of all zeros.
2. Each leading entry (■) of a row is in a column to the right of the leading entry of the row above it.
3. All values in a column below a leading entry are zero.
[ ■ ∗ ∗ ∗ ]
[ 0 ■ ∗ ∗ ]
[ 0 0 0 0 ]

Note: ■ indicates a leading entry, ∗ indicates any value in ℝ.

A matrix is in Reduced Row Echelon Form if, additionally:

4. The leading entry in each nonzero row is 1.
5. Each leading 1 is the only nonzero entry in its column.

[ 1 0 ∗ ∗ ]
[ 0 1 ∗ ∗ ]
[ 0 0 0 0 ]

Note: ∗ indicates any value in ℝ.
2.1 Some Terms for Matrices

• When row reduction produces an echelon form, the process is called the forward phase.
• When row reduction takes an echelon form to the reduced row echelon form, the process is called the backward phase.
• The leading entry (or pivot) of a row is its leftmost nonzero value.
• Free variables are the parameters in a parametric description of the solution set.
• The solution is unique when the solution set has no free variables.
2.2 Solutions to Linear Systems
To find the general solution of a system from its matrix, we do as follows.

E.g. For the matrix:

[ 1 0 −5 | 1 ]
[ 0 1  1 | 4 ]
[ 0 0  0 | 0 ]

x₁ and x₂ are leading variables and x₃ is a free variable. So:

x₁ = 1 + 5t
x₂ = 4 − t
x₃ = t

or, in vector form:

[x₁]   [1]   [ 5]
[x₂] = [4] + [−1] t
[x₃]   [0]   [ 1]
2.3 Back Substitution (Gaussian Elimination)
This method is done as follows:

1. Solve the equations for the leading variables.
2. Starting with the last equation, substitute each equation into the one above it.
3. Treat the free variable(s), if any, as unconstrained parameters.
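As a sketch of steps 1 and 2, here is back substitution in Python for a system already in echelon form with no free variables (numpy assumed; the helper name back_substitute is illustrative, not from the notes):

import numpy as np

def back_substitute(U, b):
    """Solve U x = b where U is upper triangular with a nonzero diagonal.
    Free variables (step 3) are not handled in this sketch."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):               # start with the last equation
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[1., -2.,  1.],
              [0.,  1., -4.],
              [0.,  0.,  1.]])
b = np.array([0., 4., 3.])
print(back_substitute(U, b))                     # [29. 16.  3.]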
3 Vector Equations
When we see a vector such as:

     [2]
~p = [3]
     [1]

this is a matrix with only one column, called a column vector or just a vector, and each row represents a dimension. So, for example, a two-row vector is a vector in two dimensions, denoted ℝ². A three-row vector is a vector in three dimensions, denoted ℝ³.

The geometric description of a vector such as [1; 4] is shown in figure 1.

Figure 1: ℝ² vector representation
The geometric description of a vector such as [7; 10; 6] is shown in figure 2.

Figure 2: ℝ³ vector representation
3.1 Addition of Vectors
If we add vectors ~u and ~v, we simply add ~u to ~v (that is, we add the corresponding rows of each column vector), i.e.:

[ 7] + [4] = [ 7+4 ] = [11]
[10]   [5]   [10+5]   [15]

This is represented in figure 3.

Figure 3: Addition of two vectors in ℝ²
3.2 Subtraction of Vectors
If we subtract vectors ~u and ~v, we simply add ~u to −~v, i.e.:

[ 7] − [4] = [ 7−4 ] = [3]
[10]   [5]   [10−5]   [5]

This is represented in figure 4.

Figure 4: Subtraction of two vectors in ℝ²
3.3 Linear Combinations

A linear combination is expressed:

~w = Σᵢ₌₁ᵖ cᵢ~vᵢ    (2)

where ~v₁, ~v₂, ..., ~vₚ are vectors in ℝⁿ and c₁, c₂, ..., cₚ are scalars.
E.g. Estimate the linear combination of ~v₁ = [−1; 1] and ~v₂ = [2; 1] that generates the vector ~u.

Looking at the geometric description of ~v₁, ~v₂ and ~u, the parallelogram rule shows us that ~u is the sum of 3~v₁ and −2~v₂, i.e.:

~u = 3~v₁ − 2~v₂

E.g. If:

      [ 1]         [  5]        [−3]
~a₁ = [−2] , ~a₂ = [−13] , ~b = [ 8]
      [ 3]         [ −3]        [ 1]

then Span{~a₁, ~a₂} is a plane through the origin in ℝ³. Is ~b in that plane?
We know that if x₁~a₁ + x₂~a₂ = ~b has a solution, then ~b is in the plane. Now:

[ 1   5  −3 ]                 [ 1  5 −3 ]
[−2 −13   8 ]  row reduce →   [ 0 −3  2 ]
[ 3  −3   1 ]                 [ 0  0 −2 ]

The last row says 0 = −2, so the system has no solution. ∴ ~b is not in Span{~a₁, ~a₂}.
4 The Matrix Equation A~x = ~b
A fundamental idea in linear algebra is to view a linear combination of vectors as the product of a matrix and a vector. The following equations show this notation:

A~x = ~b    (3)

A~x = [~a₁ ~a₂ ⋯ ~aₙ] [x₁; x₂; ⋯; xₙ] = Σⱼ₌₁ⁿ xⱼ~aⱼ    (4)

where A is an m × n matrix with columns ~a₁, ~a₂, ..., ~aₙ and ~x is a column vector in ℝⁿ.
4.1 Properties
If A is an m × n matrix, ~u and ~v are vectors in ℝⁿ, and c is a scalar, then:

1. A(~u + ~v) = A~u + A~v
2. A(c~u) = c(A~u)
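These two properties are easy to spot-check numerically; the sketch below (numpy assumed, with a randomly chosen A, ~u and ~v) simply confirms them for one example:

import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, (3, 4)).astype(float)
u = rng.integers(-5, 5, 4).astype(float)
v = rng.integers(-5, 5, 4).astype(float)
c = 3.0

print(np.allclose(A @ (u + v), A @ u + A @ v))   # property 1: True
print(np.allclose(A @ (c * u), c * (A @ u)))     # property 2: True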
5 Solution Sets of Linear Systems
Solution sets of linear systems are important objects of study in linear algebra.
5.1 Parametric Vector Form

A solution set is said to be in parametric vector form if the solution is in the form:

~x = s~u + t~v    (5)

where ~x is a column vector in ℝⁿ, ~u and ~v are vectors in ℝⁿ, and s and t are scalar parameters.
5.2 Homogeneous Linear Systems

A system of linear equations is said to be homogeneous if it can be written in the form:

A~x = ~0    (6)

where A is an m × n matrix and ~x is a column vector in ℝⁿ. Such a system always has at least one solution, ~x = ~0, called the trivial solution; a nonzero solution is called a nontrivial solution.
5.2.1 Theorems
1. The homogeneous equation A~x = ~0 has a nontrivial solution iff the equation has at least one free variable.
2. Suppose the equation A~x = ~b is consistent for some given ~b, and let ~p be a solution. Then the solution set of A~x = ~b is the set of all vectors of the form ~w = ~p + ~vₕ, where ~vₕ is any solution of the homogeneous equation A~x = ~0.
6 Linear Independence

6.1 Linear Independence

6.1.1 Definition

Vectors {~v₁, ..., ~vₚ} in ℝⁿ are linearly independent if the vector equation:

x₁~v₁ + x₂~v₂ + ... + xₚ~vₚ = ~0

has only the trivial solution x₁ = x₂ = ... = xₚ = 0. Figure 5 shows a graphical representation of what it means for vectors to be linearly independent.
Figure 5: Graphical representation of what it means for vectors to be linearly independent.
6.2 Linear Dependence

6.2.1 Definitions
1. Vectors {~v₁, ..., ~vₚ} in ℝⁿ are linearly dependent if there exist weights c₁, ..., cₚ, not all zero, such that c₁~v₁ + ... + cₚ~vₚ = ~0.
2. A set of two vectors {~v₁, ~v₂} is linearly dependent iff one of the vectors is a multiple of the other.
3. Any set of vectors containing the zero vector is linearly dependent.
4. A set of vectors is automatically linearly dependent when it contains more vectors than each vector has entries, i.e. when n > m for a matrix A being m × n.
5. A set {~v₁, ~v₂, ..., ~vₖ} is linearly dependent iff some ~vⱼ is a linear combination of the preceding vectors ~v₁, ~v₂, ..., ~vⱼ₋₁.

Figure 6 shows a graphical representation of what it means for vectors to be linearly dependent.

Figure 6: Graphical representation of what it means for vectors to be linearly dependent.
7 Linear Transformations

Linear transformations describe what happens when we think of the matrix A as an object that acts on a vector ~x by multiplication to produce a new vector called A~x. So, an m × n matrix A defines a transformation T from ℝⁿ to ℝᵐ:

T : ℝⁿ → ℝᵐ,   ~x ↦ A~x,   i.e. T(~x) = A~x    (7)

E.g.

[ 4 −3 1 3 ] [1]   [5]
[ 2  0 5 1 ] [1] = [8]
             [1]
             [1]

7.1 Properties

If T maps ℝⁿ into ℝᵐ, then:

1. T(~u + ~v) = T(~u) + T(~v)   ∀ ~u, ~v ∈ ℝⁿ
2. T(c~u) = cT(~u)   ∀ ~u ∈ ℝⁿ and all scalars c
3. If T(~x) = A~x, then T(~x) = [T(~e₁) T(~e₂) ... T(~eₙ)]~x, where ~e₁, ~e₂, ..., ~eₙ are the columns of the n × n identity matrix.
7.2 Terms

7.2.1 A Shear Transformation

A shear transformation shifts each vector parallel to a fixed axis, by an amount proportional to its distance from that axis, as shown in figure 7.

Figure 7: A Shear Transformation.
7.2.2 A Dilation Transformation

A dilation transformation multiplies every vector by some constant in ℝ, as shown in figure 8.

Figure 8: A Dilation Transformation.
7.2.3 A Reflected Transformation

A reflected transformation reflects each vector through a line such as y = x or y = −x, or through one of the coordinate axes. See figure 9.

Figure 9: A Reflected Transformation.
7.2.4 A Rotated Transformation

A rotated transformation rotates each vector through some angle, as shown in figure 10.

Figure 10: A Rotated Transformation.
7.2.5 A Projection Transformation

A projection transformation projects each vector onto a coordinate axis or set of axes, discarding the remaining components. See figure 11.

Figure 11: A Projection Transformation.
7.3 One-to-one

A transformation T : ℝⁿ → ℝᵐ is one-to-one if each ~b in ℝᵐ is the image of at most one ~x in ℝⁿ.

7.3.1 Properties

1. T is one-to-one iff the equation T(~x) = ~0 has only the trivial solution.
2. T(~x) = A~x is one-to-one iff the columns of A are linearly independent.
8 Matrix Operations

8.1 Sums and Scalar Multiples

8.1.1 Properties
If A, B and C are matrices of the same size, and r and s are scalars, then:

1. A + B = B + A
2. (A + B) + C = A + (B + C)
3. A + 0 = A
4. r(A + B) = rA + rB
5. (r + s)A = rA + sA
6. r(sA) = (rs)A

E.g. If:

A = [ 4 0 5 ]    B = [ 1 1 1 ]    C = [ 2 −3 ]
    [−1 3 2 ]        [ 3 5 7 ]        [ 0  1 ]

then what is (a) A + B, (b) A + C?

(a) A + B = [ 5 1 6 ]
            [ 2 8 9 ]

(b) A + C is not defined because A and C have different sizes.

8.2 Multiplication

8.2.1 Definition
If A is an m × n matrix, and if B is an n × p matrix with columns ~b₁, ~b₂, ..., ~bₚ, then the product AB is the m × p matrix whose columns are A~b₁, A~b₂, ..., A~bₚ. That is:

AB = [A~b₁ A~b₂ ... A~bₚ]    (8)

8.2.2 Properties
If A is an m × n matrix and B and C are matrices of sizes for which the indicated sums and products are defined, then:

1. A(BC) = (AB)C
2. A(B + C) = AB + AC
3. (B + C)A = BA + CA
4. r(AB) = (rA)B = A(rB)   ∀ r ∈ ℝ
5. IₘA = A = AIₙ
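Equation (8) translates directly into code. This Python sketch (numpy assumed; matmul_by_columns is an illustrative name) builds AB one column at a time and checks the result against numpy's built-in product:

import numpy as np

def matmul_by_columns(A, B):
    """Compute AB column by column, per equation (8): the j-th column
    of AB is A times the j-th column of B."""
    return np.column_stack([A @ B[:, j] for j in range(B.shape[1])])

A = np.array([[2.,  3.], [1., -5.]])
B = np.array([[4.,  3., 6.], [1., -2., 3.]])
print(np.allclose(matmul_by_columns(A, B), A @ B))   # True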
8.3 Power of a Matrix

If A is an n × n matrix and k is a positive integer, then Aᵏ denotes the product of k copies of A:

Aᵏ = A A ⋯ A  (k factors)    (9)

Note: We interpret A⁰ as I.
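numpy exposes this operation directly as np.linalg.matrix_power; a quick sketch:

import numpy as np

A = np.array([[1., 1.],
              [0., 1.]])
print(np.linalg.matrix_power(A, 3))   # A A A
print(np.linalg.matrix_power(A, 0))   # the identity, matching A^0 = I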
8.4 The Transpose of a Matrix

Given an m × n matrix A, the transpose of A is the n × m matrix, denoted Aᵀ, whose columns are formed from the corresponding rows of A.

E.g. If:

A = [ a b ]
    [ c d ]

then the transpose of A is:

Aᵀ = [ a c ]
     [ b d ]
8.4.1 Properties

If A and B are matrices whose sizes are appropriate for the following sums and products, then:

1. (Aᵀ)ᵀ = A
2. (A + B)ᵀ = Aᵀ + Bᵀ
3. (rA)ᵀ = rAᵀ   ∀ r ∈ ℝ
4. (AB)ᵀ = BᵀAᵀ   Note: the reverse order

9 The Inverse of a Matrix
In this section we consider only square matrices, and we investigate the matrix analogue of the reciprocal, or multiplicative inverse, of a nonzero real number. If A is an n × n matrix, it often happens that there is another n × n matrix C such that AC = I and CA = I, where I is the n × n identity matrix. In this case, we say that A is invertible and we call C an inverse of A. If B were another inverse of A, then we would have B = BI = B(AC) = (BA)C = IC = C; thus when A is invertible, its inverse is unique. We denote it by A⁻¹, so that:

AA⁻¹ = I   and   A⁻¹A = I    (10)
A matrix that is not invertible is sometimes called a singular matrix, and an invertible matrix is called a nonsingular matrix.

An equation for the inverse of a 2 × 2 matrix is as follows. If:

A = [ a b ]
    [ c d ]

and ad − bc ≠ 0, then:

A⁻¹ = 1/(ad − bc) [  d −b ]    (11)
                  [ −c  a ]

If ad − bc = 0, then A is not invertible. The quantity ad − bc is called the determinant of A, and we write:

det A = ad − bc    (12)

One use of the inverse matrix: if A is invertible, then for each ~b in ℝⁿ the equation A~x = ~b has the unique solution ~x = A⁻¹~b.
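Equation (11) and the solution ~x = A⁻¹~b can be sketched as follows (numpy assumed; inverse_2x2 is an illustrative helper name):

import numpy as np

def inverse_2x2(A):
    """Equation (11): inverse of a 2x2 matrix, when det A = ad - bc != 0."""
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("A is singular (not invertible)")
    return (1 / det) * np.array([[d, -b], [-c, a]])

A = np.array([[3., 4.], [5., 6.]])
b = np.array([7., 7.])
x = inverse_2x2(A) @ b                 # the unique solution of A x = b
print(np.allclose(A @ x, b))           # True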
9.1 Properties
1. If A is an invertible matrix, then A⁻¹ is invertible and (A⁻¹)⁻¹ = A
2. (AB)⁻¹ = B⁻¹A⁻¹   Note: the reverse order
3. (Aᵀ)⁻¹ = (A⁻¹)ᵀ
9.2 Elementary Matrices

An elementary matrix is one that is obtained by performing a single elementary row operation on an identity matrix. The next example illustrates an elementary matrix.

E.g. Let:

E = [ 1 0 0 ]        A = [ a b c ]
    [ 0 1 0 ]  and       [ d e f ]
    [−4 0 1 ]            [ g h i ]

Compute EA and describe how the product can be obtained by elementary row operations on A.

EA = [ a      b      c    ]
     [ d      e      f    ]
     [ g−4a   h−4b   i−4c ]

Hence, addition of −4 times row 1 of A to row 3 produces EA. This is a row replacement operation.

9.2.1 Properties
1. If an elementary row operation is performed on an m × n matrix A, the resulting matrix can be written as EA, where the m × m matrix E is created by performing the same row operation on Im .
2. Each elementary matrix E is invertible. The inverse of E is the elementary matrix of the same type that transforms E back into I.
3. An n × n matrix A is invertible iff A is row equivalent to Iₙ, and in this case, any sequence of elementary row operations that reduces A to Iₙ also transforms Iₙ into A⁻¹.
9.3 An Algorithm for Finding A⁻¹
If we place A and I side-by-side to form an augmented matrix [A I], then row operations on this matrix produce identical operations on A and on I. Furthermore, either there are row operations that transform A to Iₙ, and Iₙ to A⁻¹, or else A is not invertible. The algorithm is to row reduce the augmented matrix [A I] to [I A⁻¹].

E.g. Find the inverse of the matrix

A = [ 0  1 2 ]
    [ 1  0 3 ]
    [ 4 −3 8 ]

if it exists.

[A I] = [ 0  1 2 | 1 0 0 ]     [ 1  0 3 | 0 1 0 ]
        [ 1  0 3 | 0 1 0 ]  ~  [ 0  1 2 | 1 0 0 ]
        [ 4 −3 8 | 0 0 1 ]     [ 4 −3 8 | 0 0 1 ]

   [ 1  0  3 | 0  1 0 ]     [ 1 0 3 | 0  1 0 ]
~  [ 0  1  2 | 1  0 0 ]  ~  [ 0 1 2 | 1  0 0 ]
   [ 0 −3 −4 | 0 −4 1 ]     [ 0 0 2 | 3 −4 1 ]

   [ 1 0 3 |  0   1   0  ]     [ 1 0 0 | −9/2  7  −3/2 ]
~  [ 0 1 2 |  1   0   0  ]  ~  [ 0 1 0 |  −2   4   −1  ] = [I A⁻¹]
   [ 0 0 1 | 3/2 −2  1/2 ]     [ 0 0 1 | 3/2  −2  1/2  ]

∴ A⁻¹ = [ −9/2  7  −3/2 ]
        [  −2   4   −1  ]
        [ 3/2  −2  1/2  ]
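numpy's built-in inverse should agree with the hand computation above, up to floating-point rounding:

import numpy as np

A = np.array([[0.,  1., 2.],
              [1.,  0., 3.],
              [4., -3., 8.]])

A_inv = np.linalg.inv(A)
print(A_inv)
# approximately [[-4.5  7.  -1.5]
#                [-2.   4.  -1. ]
#                [ 1.5 -2.   0.5]]
print(np.allclose(A @ A_inv, np.eye(3)))   # True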
9.4 Characterizations of Invertible Matrices

Let A be a square n × n matrix. Then the following statements are equivalent; that is, for a given A, they are either all true or all false.

1. A is an invertible matrix.
2. A is row equivalent to the n × n identity matrix.
3. A has n pivot positions.
4. The equation A~x = ~b has at least one solution for each ~b in ℝⁿ.
5. The equation A~x = ~0 has only the trivial solution.
6. The columns of A are linearly independent.
7. Aᵀ is an invertible matrix.
9.5 Invertible Linear Transformations

A linear transformation T : ℝⁿ → ℝⁿ is said to be invertible if there exists a function S : ℝⁿ → ℝⁿ such that:

S(T(~x)) = ~x   ∀ ~x ∈ ℝⁿ    (13)
T(S(~x)) = ~x   ∀ ~x ∈ ℝⁿ    (14)

The next theorem shows that if such an S exists, it is unique and must be a linear transformation. We call S the inverse of T and write it as T⁻¹.

9.5.1 Theorem
If we let T : ℝⁿ → ℝⁿ be a linear transformation and A be the standard matrix for T, then T is invertible iff A is an invertible matrix, and in that case T⁻¹(~x) = A⁻¹~x.
10 Subspaces of ℝⁿ

10.1 Definition

A subspace of ℝⁿ is any set H in ℝⁿ that has three properties:

1. The zero vector is in H.
2. For each ~u and ~v in H, the sum ~u + ~v is in H.
3. For each ~u in H and each scalar c, the vector c~u is in H.
10.2 Column Space of a Matrix

10.2.1 Definition
The column space of a matrix A is the set Col A of all linear combinations of the columns of A.

E.g. Let:

A = [  1 −3 −4 ]        [  3 ]
    [ −4  6 −2 ]  ~b =  [  3 ]
    [ −3  7  6 ]        [ −4 ]

Determine whether ~b is in the column space of A.

~b is in the column space of A iff the equation A~x = ~b is consistent. Now:

[  1 −3 −4 |  3 ]                 [ 1 −3  −4 |  3 ]
[ −4  6 −2 |  3 ]  row reduce →   [ 0 −6 −18 | 15 ]
[ −3  7  6 | −4 ]                 [ 0  0   0 |  0 ]

The system is consistent. ∴ ~b is in the column space of A.
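The consistency test can be automated with sympy's exact row reduction (sympy assumed). rref() returns the reduced matrix together with the pivot columns, and consistency means no pivot lands in the augmented column:

from sympy import Matrix

# Augmented matrix of A x = b from the example above
M = Matrix([[ 1, -3, -4,  3],
            [-4,  6, -2,  3],
            [-3,  7,  6, -4]])

R, pivots = M.rref()
print(pivots)            # (0, 1): no pivot in the last column
print(3 not in pivots)   # True -> consistent -> b is in Col A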
10.3 Null Space of a Matrix

10.3.1 Definition
The null space of a matrix A is the set Nul A of all solutions to the homogeneous equation A~x = ~0.
10.4 Basis for a Subspace

10.4.1 Definition
A basis for a subspace H of ℝⁿ is a linearly independent set in H that spans H.
E.g. Find bases for the null space and the column space of:

A = [ −3  6 −1 1 −7 ]
    [  1 −2  2 3 −1 ]
    [  2 −4  5 8 −4 ]

(a) Row reducing the augmented matrix [A 0] gives the system:

x₁ − 2x₂ − x₄ + 3x₅ = 0
x₃ + 2x₄ − 2x₅ = 0
0 = 0

Hence, the general solution is x₁ = 2x₂ + x₄ − 3x₅ and x₃ = −2x₄ + 2x₅, with x₂, x₄ and x₅ free:

[x₁]      [2]      [ 1]      [−3]
[x₂]      [1]      [ 0]      [ 0]
[x₃] = x₂ [0] + x₄ [−2] + x₅ [ 2]
[x₄]      [0]      [ 1]      [ 0]
[x₅]      [0]      [ 0]      [ 1]

So the three vectors above form the basis for Nul A.

(b) The pivot columns of A (columns 1 and 3) form the basis for Col A:

[ −3 ]   [ −1 ]
[  1 ] , [  2 ]
[  2 ]   [  5 ]
So we say that:

1. The pivot columns of a matrix A form a basis for the column space of A.
2. The vectors in the parametric vector form of the general solution to A~x = ~0 (one for each free variable) form a basis for the null space of A.
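sympy provides both computations directly (sympy assumed; the matrix A below is a small illustrative example, not the one above):

from sympy import Matrix

A = Matrix([[1, 0, -5],
            [0, 1,  1],
            [0, 0,  0]])

print(A.columnspace())   # pivot columns of A: a basis for Col A
print(A.nullspace())     # one vector per free variable: a basis for Nul A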
10.5 The Dimension of a Subspace

10.5.1 Definitions
1. The dimension of a nonzero subspace H, denoted by dim H, is the number of vectors in any basis for H. The dimension of the zero subspace {~0} is defined to be zero. (The zero subspace has no basis, because the zero vector by itself forms a linearly dependent set.)
2. The rank of a matrix A, denoted by rank A, is the dimension of the column space of A.

10.5.2 Theorems
1. If a matrix A has n columns, then:

rank A + dim Nul A = n    (15)

2. Let H be a p-dimensional subspace of ℝⁿ. Then any linearly independent set of exactly p elements in H is automatically a basis for H, and any set of p elements of H that spans H is automatically a basis for H.
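Theorem 1 (the rank theorem, equation (15)) can be spot-checked with sympy on any example matrix:

from sympy import Matrix

A = Matrix([[1, 0, -5, 1],
            [0, 1,  1, 4]])

rank = A.rank()
dim_nul = len(A.nullspace())      # dim Nul A = number of basis vectors
print(rank + dim_nul == A.cols)   # True: rank A + dim Nul A = n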
11 Determinants

Recall from equations (11) and (12) that a 2 × 2 matrix is invertible iff its determinant is nonzero. Here we extend this fact to matrices larger than 2 × 2.
11.1 Definition
For n ≥ 2, the determinant of an n × n matrix A = [aᵢⱼ] is the sum of n terms of the form ±a₁ⱼ det A₁ⱼ, with plus and minus signs alternating, where the entries a₁₁, a₁₂, ..., a₁ₙ are from the first row of A. i.e.:

det A = a₁₁ det A₁₁ − a₁₂ det A₁₂ + ... + (−1)¹⁺ⁿ a₁ₙ det A₁ₙ    (16)
      = Σⱼ₌₁ⁿ (−1)¹⁺ʲ a₁ⱼ det A₁ⱼ    (17)

where a₁ⱼ is the entry in the first row and jth column, and det A₁ⱼ is the determinant of the submatrix formed by deleting the first row and the jth column of A.

Since many matrices contain rows or columns that are mostly zeros, the more general cofactor expansion is introduced. If we let:

Cᵢⱼ = (−1)ⁱ⁺ʲ det Aᵢⱼ    (18)

then:

det A = a₁₁C₁₁ + a₁₂C₁₂ + ... + a₁ₙC₁ₙ    (19)
From this we get the following theorems.
11.2 Theorems
The determinant of an n × n matrix A can be computed by a cofactor expansion across any row or down any column.

1. The expansion across the ith row using the cofactors in equation (18) is:

det A = aᵢ₁Cᵢ₁ + aᵢ₂Cᵢ₂ + ... + aᵢₙCᵢₙ    (20)

2. The cofactor expansion down the jth column is:

det A = a₁ⱼC₁ⱼ + a₂ⱼC₂ⱼ + ... + aₙⱼCₙⱼ    (21)
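A direct (and exponentially slow) implementation of cofactor expansion across the first row, per equation (17), using plain Python lists (det_cofactor is an illustrative name, a sketch rather than a practical algorithm):

def det_cofactor(A):
    """Determinant by cofactor expansion across the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # delete row 1 and column j to form the submatrix A_{1j}
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_cofactor(minor)
    return total

print(det_cofactor([[1, 5, 0],
                    [2, 4, -1],
                    [0, -2, 0]]))   # -2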
11.3 Properties

Let A be a square matrix.

1. If a multiple of one row of A is added to another row to produce a matrix B, then: det B = det A
2. If two rows of A are interchanged to produce B, then: det B = −det A
3. If one row of A is multiplied by k to produce B, then: det B = k det A
4. If A is reduced to an echelon form U by row replacements and interchanges, then:

det A = (−1)ʳ · (product of the pivots in U)   when A is invertible
det A = 0                                      when A is not invertible

where r is the number of row interchanges.

5. A square matrix A is invertible iff det A ≠ 0
6. If A is an n × n matrix, then: det Aᵀ = det A
7. If A and B are n × n matrices, then: det AB = (det A)(det B)
8. In general, det (A + B) ≠ det A + det B
9. If A is an n × n matrix and E is an n × n elementary matrix, then: det EA = (det E)(det A), where:

det E = 1    if E is a row replacement
det E = −1   if E is an interchange
det E = r    if E is a scale by r

11.4 Cramer's Rule
Let A be an invertible n × n matrix. For any ~b in ℝⁿ, the unique solution ~x of A~x = ~b has entries given by:

xᵢ = det Aᵢ(~b) / det A    (22)

where i = 1, 2, ..., n and Aᵢ(~b) = [~a₁ ⋯ ~aᵢ₋₁ ~b ~aᵢ₊₁ ⋯ ~aₙ] is the matrix obtained from A by replacing column i with ~b.
E.g. Use Cramer's rule to solve the system:

3x₁ − 2x₂ = 6
−5x₁ + 4x₂ = 8

Here:

A = [  3 −2 ]   A₁(~b) = [ 6 −2 ]   A₂(~b) = [  3 6 ]
    [ −5  4 ]            [ 8  4 ]            [ −5 8 ]

Since det A = 2, the system has a unique solution:

x₁ = det A₁(~b) / det A = (24 + 16)/2 = 20
x₂ = det A₂(~b) / det A = (24 + 30)/2 = 27
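Cramer's rule as a Python sketch (numpy assumed; cramer_solve is an illustrative name, practical only for small invertible systems):

import numpy as np

def cramer_solve(A, b):
    """Equation (22): x_i = det A_i(b) / det A."""
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                    # replace column i of A with b
        x[i] = np.linalg.det(Ai) / det_A
    return x

A = np.array([[ 3., -2.],
              [-5.,  4.]])
b = np.array([6., 8.])
print(cramer_solve(A, b))               # [20. 27.]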
11.5 A Formula for the Inverse of A
From Cramer's rule we find that if we let A be an invertible n × n matrix, then:

(A⁻¹)ᵢⱼ = det Aᵢ(~eⱼ) / det A    (23)

where ~eⱼ is the jth column of the identity matrix.