AS3sol/MATH1111/YKL/08-09
THE UNIVERSITY OF HONG KONG
DEPARTMENT OF MATHEMATICS
MATH1111: Linear Algebra
Assignment 3 Suggested Solution
1. In lecture, we proved that W = Span(W) if and only if W is a subspace. Let me repeat the argument below.

[No matter whether or not W is a subspace, W ⊂ Span(W), since w ∈ W ⇒ 1 · w ∈ Span(W). "If" part: if W is a subspace, then any linear combination of elements of W belongs to W, i.e. Span(W) ⊂ W. ∴ W = Span(W). "Only if" part: clear by direct checking from the definition.]

Now we prove the result in Qn 1. For any nonempty subset S ⊂ V, Span(S) is a subspace. It remains to show: if S ⊂ W and W is a subspace, then Span(S) ⊂ W. This is also clear, since w1, w2, · · · , wr ∈ S ⇒ w1, w2, · · · , wr ∈ W. Thus a1 w1 + a2 w2 + · · · + ar wr ∈ W because W is a subspace (it is closed under addition and scalar multiplication). ∴ Span(S) ⊂ W.
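As a quick numerical illustration of the containment Span(S) ⊂ W: if the columns of a matrix B span a subspace W and every vector of S lies in W, then adjoining any linear combination of vectors of S to B leaves the rank unchanged. The NumPy sketch below checks this for one arbitrarily chosen example (the matrices B and S are ad hoc, not taken from the question).

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# W = column space of B, a 3-dimensional subspace of R^5.
B = rng.standard_normal((5, 3))

# S = a finite set of vectors lying in W (each one is B times a coefficient vector).
S = B @ rng.standard_normal((3, 4))

# v is an arbitrary linear combination of the vectors in S, i.e. v is in Span(S).
v = S @ rng.standard_normal(4)

rank_B  = np.linalg.matrix_rank(B)
rank_Bv = np.linalg.matrix_rank(np.column_stack([B, v]))

# v lies in W exactly when adjoining it does not raise the rank.
print(rank_B, rank_Bv)     # 3 3
assert rank_B == rank_Bv   # consistent with Span(S) being contained in W
\end{verbatim}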
(a) Since U + V is a subspace containing the sets U and V, by Qn 1 part (b), Span(U ∪ V) ⊂ U + V. Conversely, let x ∈ U + V. Then x = u + v where u ∈ U and v ∈ V, so x is a linear combination of vectors in U ∪ V, i.e. U + V ⊂ Span(U ∪ V). Hence Span(U ∪ V) = U + V.

(b) No. Consider U = Span((1, 1)^T), V = Span(e1), W = Span(e2), all of which are subspaces of R^2. Then U + (V ∩ W) = U + {0} = U, but (U + V) ∩ (U + W) = R^2 ∩ R^2 = R^2.

Remark. The inclusion U + (V ∩ W) ⊂ (U + V) ∩ (U + W) is always true.

(c) No. Use the counterexample in (b). Now U ∩ V = {0} and U ∩ W = {0}, so (U ∩ V) + (U ∩ W) = {0}; but since V + W = R^2, we have U ∩ (V + W) = U ≠ {0}.
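The two counterexamples are small enough to verify by machine. In the NumPy sketch below, a subspace is represented by an ad hoc list of generating vectors, list concatenation X + Y gives generators of the subspace sum, and the helper dim_cap computes dim(X ∩ Y) from the identity dim(X ∩ Y) = dim X + dim Y − dim(X + Y) proved later in this assignment.

\begin{verbatim}
import numpy as np

def dim_span(gens):
    """dim Span(gens) for a list of vectors in R^2."""
    return np.linalg.matrix_rank(np.column_stack(gens))

def dim_cap(X, Y):
    """dim(X ∩ Y) via  dim X + dim Y - dim(X + Y)."""
    return dim_span(X) + dim_span(Y) - dim_span(X + Y)

u, e1, e2 = np.array([1., 1.]), np.array([1., 0.]), np.array([0., 1.])
U, V, W = [u], [e1], [e2]          # generator lists; X + Y concatenates generators

# (b): V ∩ W = {0}, so U + (V ∩ W) = U has dimension 1 ...
print(dim_cap(V, W), dim_span(U))          # 0 1
# ... but (U + V) ∩ (U + W) = R^2 has dimension 2.
print(dim_cap(U + V, U + W))               # 2

# (c): U ∩ V = U ∩ W = {0}, yet U ∩ (V + W) = U is 1-dimensional.
print(dim_cap(U, V), dim_cap(U, W))        # 0 0
print(dim_cap(U, V + W))                   # 1
\end{verbatim}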
Let B = {u1, · · · , ur} be a basis for U ∩ V (i.e. dim(U ∩ V) = r). Then {u1, · · · , ur} is a set of linearly independent vectors in U. By Theorem 3.4.4 (ii), writing dim U = n, we can extend u1, · · · , ur by adding suitable vectors to form a basis for U; let {u1, · · · , ur, b1, · · · , b_{n−r}} be such a basis for U. (In case n = r, u1, · · · , ur already form a basis for U and we do not need to add anything.) Similarly, if dim V = m, we get a basis {u1, · · · , ur, b′_1, · · · , b′_{m−r}} for V.

Now we prove that u1, · · · , ur, b1, · · · , b_{n−r}, b′_1, · · · , b′_{m−r} form a basis for U + V.

1◦ {u1, · · · , ur, b1, · · · , b_{n−r}, b′_1, · · · , b′_{m−r}} is a spanning set for U + V.

Proof. Let w ∈ U + V. By definition, w = u + v where u ∈ U and v ∈ V. Here u is a linear combination of u1, · · · , ur, b1, · · · , b_{n−r}, and v is a linear combination of u1, · · · , ur, b′_1, · · · , b′_{m−r}. As a result, w = u + v is a linear combination of u1, · · · , ur, b1, · · · , b_{n−r}, b′_1, · · · , b′_{m−r}. This proves our assertion.

2◦ u1, · · · , ur, b1, · · · , b_{n−r}, b′_1, · · · , b′_{m−r} are linearly independent.

Proof. Suppose that for some scalars c1, · · · , c_{m+n−r},

\[
c_1 u_1 + \cdots + c_r u_r + c_{r+1} b_1 + \cdots + c_n b_{n-r} + c_{n+1} b'_1 + \cdots + c_{n+m-r} b'_{m-r} = 0. \tag{1}
\]

Rewrite equation (1) in the following two ways:

\[
c_1 u_1 + \cdots + c_r u_r + c_{r+1} b_1 + \cdots + c_n b_{n-r} = -c_{n+1} b'_1 - \cdots - c_{n+m-r} b'_{m-r},
\]

or

\[
c_1 u_1 + \cdots + c_r u_r + c_{n+1} b'_1 + \cdots + c_{n+m-r} b'_{m-r} = -c_{r+1} b_1 - \cdots - c_n b_{n-r}.
\]

In the first equation the left-hand side lies in U while the right-hand side lies in V, and in the second equation the left-hand side lies in V while the right-hand side lies in U. We therefore see that both

\[
-c_{r+1} b_1 - \cdots - c_n b_{n-r} \quad \text{and} \quad -c_{n+1} b'_1 - \cdots - c_{n+m-r} b'_{m-r}
\]

belong to U ∩ V. This implies that

\[
-c_{r+1} b_1 - \cdots - c_n b_{n-r} = \alpha_1 u_1 + \cdots + \alpha_r u_r,
\]
\[
-c_{n+1} b'_1 - \cdots - c_{n+m-r} b'_{m-r} = \beta_1 u_1 + \cdots + \beta_r u_r
\]

for some scalars α1, · · · , αr, β1, · · · , βr. Rewriting the first of these equations in the form α1 u1 + · · · + αr ur + c_{r+1} b1 + · · · + c_n b_{n−r} = 0, it follows that α1 = · · · = αr = c_{r+1} = · · · = c_n = 0 by the linear independence of u1, · · · , ur, b1, · · · , b_{n−r}. Similarly, β1 = · · · = βr = c_{n+1} = · · · = c_{n+m−r} = 0. Therefore (1) reduces to c1 u1 + · · · + cr ur = 0, and the linear independence of u1, · · · , ur forces c1 = · · · = cr = 0. To sum up, c1 = · · · = cr = c_{r+1} = · · · = c_n = c_{n+1} = · · · = c_{n+m−r} = 0.

As u1, · · · , ur, b1, · · · , b_{n−r}, b′_1, · · · , b′_{m−r} form a basis for U + V,

\[
\dim(U + V) = r + (n - r) + (m - r) = m + n - r = \dim U + \dim V - \dim(U \cap V).
\]

This completes the proof.
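The formula can also be checked numerically on randomly generated subspaces. In the NumPy sketch below (the sizes 8 × 5 and 8 × 4 are arbitrary), dim(U + V) is the rank of the concatenated generators, and vectors of U ∩ V are extracted from the null space of the block matrix (B_U  −B_V): whenever B_U c = B_V d, the vector B_U c lies in both subspaces.

\begin{verbatim}
import numpy as np

def null_space(M, tol=1e-10):
    """Orthonormal basis (as columns) of the null space of M, via the SVD."""
    _, s, vh = np.linalg.svd(M)
    rank = int((s > tol).sum())
    return vh[rank:].T

rng = np.random.default_rng(1)

# U and V are the column spaces of random 8 x 5 and 8 x 4 matrices.
BU = rng.standard_normal((8, 5))
BV = rng.standard_normal((8, 4))

dim_U   = np.linalg.matrix_rank(BU)
dim_V   = np.linalg.matrix_rank(BV)
dim_sum = np.linalg.matrix_rank(np.hstack([BU, BV]))      # dim(U + V)

# Vectors of U ∩ V come from solutions of BU c = BV d, i.e. (BU  -BV)(c; d) = 0.
N = null_space(np.hstack([BU, -BV]))
cap_vectors = BU @ N[:BU.shape[1], :]
dim_cap = np.linalg.matrix_rank(cap_vectors) if cap_vectors.size else 0

print(dim_sum, dim_U + dim_V - dim_cap)    # both equal 8 for this example
assert dim_sum == dim_U + dim_V - dim_cap
\end{verbatim}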
(a) Here Q = (I  A), where I is the m × m identity matrix and A is an m × r matrix, so Q has m + r columns.

(Only if part) Let x ∈ N(Q), i.e. Qx = 0. Write

\[
x = \begin{pmatrix} x' \\ x'' \end{pmatrix}, \qquad \text{where } x' = \begin{pmatrix} x_1 \\ \vdots \\ x_m \end{pmatrix} \in \mathbb{R}^m \ \text{ and } \ x'' = \begin{pmatrix} x_{m+1} \\ \vdots \\ x_{m+r} \end{pmatrix} \in \mathbb{R}^r.
\]

Observing that 0 = Qx = (I  A)x = Ix′ + Ax″, we have x′ = −Ax″.

(If part) Conversely, suppose x is partitioned as above, where x′ ∈ R^m and x″ ∈ R^r satisfy x′ = −Ax″. Then

\[
Qx = (I \;\; A)\begin{pmatrix} x' \\ x'' \end{pmatrix} = Ix' + Ax'' = 0
\]

as x′ = −Ax″, so x ∈ N(Q).

(b) Write

\[
\begin{pmatrix} -A \\ I_r \end{pmatrix} = (b_1 \;\; b_2 \;\; \cdots \;\; b_r),
\]

where b_j denotes the jth column of the matrix on the left. We shall prove the following two assertions.

1◦ Span(b1, · · · , br) = N(Q).

Proof. Observe that

\[
b_j = \begin{pmatrix} -a_j \\ e_j \end{pmatrix},
\]

where a_j is the jth column of A and e_j = (0, · · · , 1, · · · , 0)^T ∈ R^r has its single 1 in the jth position. Then

\[
Q b_j = (I \;\; A)\begin{pmatrix} -a_j \\ e_j \end{pmatrix} = -a_j + A e_j = -a_j + a_j = 0.
\]

∴ b_j ∈ N(Q) for all j = 1, · · · , r. As N(Q) is a vector space, all linear combinations of b1, · · · , br belong to N(Q).

∴ Span(b1, · · · , br) ⊂ N(Q).  (∗)

Next, let x ∈ N(Q). Then by part (a), x is of the form above with x′ = −Ax″. As x″ ∈ R^r, we can write x″ = x1 e1 + x2 e2 + · · · + xr er, where {e1, · · · , er} is the standard basis for R^r. With this expression for x″, we have x′ = −Ax″ = −x1 Ae1 − x2 Ae2 − · · · − xr Aer = x1(−a1) + x2(−a2) + · · · + xr(−ar). Consequently,

\[
x = \begin{pmatrix} x' \\ x'' \end{pmatrix} = x_1 \begin{pmatrix} -a_1 \\ e_1 \end{pmatrix} + x_2 \begin{pmatrix} -a_2 \\ e_2 \end{pmatrix} + \cdots + x_r \begin{pmatrix} -a_r \\ e_r \end{pmatrix} = x_1 b_1 + \cdots + x_r b_r.
\]
∴ N(Q) ⊂ Span(b1, · · · , br). Together with (∗), we conclude N(Q) = Span(b1, · · · , br).

2◦ b1, · · · , br are linearly independent.

Proof. It suffices to show that Xc = 0 has only the trivial solution, where

\[
X = (b_1 \;\; \cdots \;\; b_r) = \begin{pmatrix} -A \\ I_r \end{pmatrix}.
\]

Note that c ∈ R^r. Now,

\[
Xc = 0 \;\Rightarrow\; \begin{pmatrix} -A \\ I_r \end{pmatrix} c = 0 \;\Rightarrow\; \begin{pmatrix} -Ac \\ c \end{pmatrix} = 0.
\]

∴ c = 0. This completes the proof: by 1◦ and 2◦, b1, · · · , br form a basis for N(Q).
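The result of part (b) can be checked numerically: for an arbitrarily chosen A, the matrix Q = (I  A) annihilates every column of the stacked matrix with blocks −A and I_r, those columns are linearly independent, and their number r equals the nullity of Q, so they form a basis for N(Q). A minimal NumPy sketch:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
m, r = 4, 3

A = rng.standard_normal((m, r))
Q = np.hstack([np.eye(m), A])        # Q = (I  A), an m x (m + r) matrix
B = np.vstack([-A, np.eye(r)])       # columns b_1, ..., b_r

print(np.allclose(Q @ B, 0))                         # True: each b_j is in N(Q)
print(np.linalg.matrix_rank(B) == r)                 # True: b_1, ..., b_r independent
print((m + r) - np.linalg.matrix_rank(Q) == r)       # True: nullity of Q equals r
\end{verbatim}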
Let E1, · · · , Ek be elementary matrices such that

\[
U E_1 \cdots E_k = \begin{pmatrix} I_\ell & A \\ 0 & 0 \end{pmatrix}.
\]

Denote U′ = U E1 · · · Ek.

1◦ dim N(U′) = dim N(U).†

Proof. Exercise. [Hint: Let {v1, · · · , vp} be a basis for N(U′). Write E = E1 · · · Ek and show that Ev1, · · · , Evp form a basis for N(U) by proving the following claims.

Claim i: Ev1, · · · , Evp are linearly independent.
Proof. Suppose c1 Ev1 + · · · + cp Evp = 0. Then E(c1 v1 + · · · + cp vp) = 0 . . . . c1 v1 + · · · + cp vp = 0 . . . . (Fill in the details; you need to use the invertibility of E and the linear independence of v1, · · · , vp.)

Claim ii: Ev1, · · · , Evp span N(U).
Proof. Let x ∈ N(U). Then Ux = 0 . . . E^{−1} x ∈ N(U′) . . . . (To fill in the details, make use of the nonsingularity of E and the fact that v1, · · · , vp span N(U′).)

Finally, counting the number of elements in these two bases (for N(U′) and N(U) respectively), you can conclude dim N(U′) = dim N(U).]

Let Q = (I_ℓ  A).

2◦ N(U′) = N(Q). Proof. Exercise.

By part (b) of Question 5, the columns of

\[
\begin{pmatrix} -A \\ I_{n-\ell} \end{pmatrix}
\]

form a basis for N(Q), where n is the number of columns of U (and hence of U′ and Q). Thus dim N(Q) = n − ℓ. By 1◦ and 2◦, we conclude dim N(U) = n − ℓ.

† Note that N(U′) ≠ N(U) in general, but their dimensions are the same. Here is a counterexample: let

\[
U = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \quad \text{and} \quad E = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix};
\]

then N(U) = Span(e2) but N(UE) = Span(e1).
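Both the dimension count and the footnote's warning can be confirmed numerically. The NumPy sketch below checks dim N(U) = n − ℓ for an arbitrarily chosen 4 × 6 matrix of rank 2, using an SVD-based null space routine, and then reproduces the 2 × 2 counterexample from the footnote: N(U) and N(UE) are different lines of the same dimension.

\begin{verbatim}
import numpy as np

def null_space(M, tol=1e-10):
    """Orthonormal basis (as columns) of N(M), computed from the SVD."""
    _, s, vh = np.linalg.svd(M)
    rank = int((s > tol).sum())
    return vh[rank:].T

rng = np.random.default_rng(3)

# A random 4 x 6 matrix of rank 2 (a product of 4 x 2 and 2 x 6 factors).
U = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 6))
n, ell = U.shape[1], np.linalg.matrix_rank(U)
print(null_space(U).shape[1] == n - ell)     # True: dim N(U) = n - ell

# Footnote counterexample: N(U) and N(UE) differ, but their dimensions agree.
U2 = np.array([[1., 0.], [0., 0.]])
E  = np.array([[0., 1.], [1., 0.]])
print(null_space(U2).ravel())                # a multiple of e_2, so N(U2) = Span(e2)
print(null_space(U2 @ E).ravel())            # a multiple of e_1, so N(U2 E) = Span(e1)
print(null_space(U2).shape[1] == null_space(U2 @ E).shape[1])   # True: same dimension
\end{verbatim}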