
Solution Manual of "Elementary Linear Algebra": 3. Subspaces

This document is a solution manual for “Elementary Linear Algebra” by K. R. Matthews, University of Queensland.
The free printable PDF version of the lecture notes can be downloaded here.


Summary
  • A subset S of F^{n} is called a subspace of F^{n} if
    • 0\in S;
    • If u\in S and v\in S, then u+v\in S;
    • If u\in S and t\in F, then tu\in S.
  • Let X_1, \cdots, X_m\in F^{n}. Then the set consisting of all linear combinations x_1X_1+\cdots+x_mX_m, where x_1, \cdots, x_m\in F, is a subspace of F^{n}. This subspace is called the subspace spanned or generated by X_1, \cdots, X_m and is denoted by \big < X_1, \cdots, X_m\big > . We also call X_1, \cdots, X_m a spanning family for S=\big < X_1, \cdots, X_m\big > .
  • Let A\in M_{m\times n}(F). Then the set of vectors X\in F^{n} satisfying AX=0 is a subspace of F^{n} called the null space of A and is denoted by N(A).
  • If A\in M_{m\times n}(F), the subspace generated by the columns of A is a subspace of F^{m} called the column space of A, denoted C(A). Similarly, the subspace generated by the rows of A is a subspace of F^{n} called the row space of A, denoted R(A).
  • Suppose each of X_1, \cdots, X_r is a linear combination of Y_1, \cdots, Y_s. Then any linear combination of X_1, \cdots, X_r is a linear combination of Y_1, \cdots, Y_s. That is, \big < X_1, \cdots, X_r\big > \subseteq\big < Y_1, \cdots, Y_s\big > .
  • Subspaces \big < X_1, \cdots, X_r\big > and \big < Y_1, \cdots, Y_s\big > are equal if each of X_1, \cdots, X_r is a linear combination of Y_1, \cdots, Y_s and each of Y_1, \cdots, Y_s is a linear combination of X_1, \cdots, X_r.
  • \big < X_1,\cdots, X_r, Z_1, \cdots, Z_t\big > and \big < X_1,\cdots, X_r\big > are equal if each of Z_1, \cdots, Z_t is a linear combination of X_1,\cdots, X_r.
  • If A is row equivalent to B, then R(A)=R(B). (Note that it is not always true that C(A)=C(B))
  • Linear dependence and linear independence
    Vectors X_1, \cdots, X_m in F^{n} are said to be linearly dependent if there exist scalars x_1, \cdots, x_m, not all zero, such that x_1X_1+\cdots +x_mX_m=0. X_1, \cdots, X_m are called linearly independent if they are not linearly dependent. Hence X_1, \cdots, X_m are linearly independent if and only if the equation x_1X_1+\cdots +x_mX_m=0 has only the trivial solution x_1=0,\cdots, x_m=0.
  • A family of m vectors in F^{n} will be linearly dependent if m > n. Equivalently, any linearly independent family of m vectors in F^{n} must satisfy m \leq n.
  • A family of s vectors in \big < X_1, \cdots, X_r\big > will be linearly dependent if s > r. Equivalently, a linearly independent family of s vectors in \big < X_1, \cdots, X_r\big > must have s \leq r.
  • Left-to-right test
    Vectors X_1, \cdots, X_m in F^{n} are linearly independent if
    • X_1 \neq 0;
    • For each k with 1 < k \leq m, X_k is not a linear combination of X_1, \cdots, X_{k-1}.
  • Every subspace S of F^{n} can be represented in the form S=\big < X_1, \cdots, X_m\big > , where m \leq n.
  • Suppose that A is row equivalent to B and let c_1, \cdots, c_r be distinct integers satisfying 1 \leq c_i \leq n. Then
    • Columns A_{* c_1}, \cdots, A_{* c_r} of A are linearly dependent if and only if the corresponding columns of B are linearly dependent; indeed more is true: x_1A_{*c_1}+\cdots + x_rA_{*c_r}=0\Leftrightarrow x_1B_{*c_1}+\cdots + x_rB_{*c_r}=0
    • Columns A_{* c_1}, \cdots, A_{* c_r} of A are linearly independent if and only if the corresponding columns of B are linearly independent.
    • If 1 \leq c_{r+1} \leq n and c_{r+1} is distinct from c_1, \cdots, c_r, then A_{*c_{r+1}}= z_1A_{*c_{1}}+ \cdots + z_rA_{*c_{r}}\Leftrightarrow B_{*c_{r+1}}= z_1B_{*c_{1}}+ \cdots + z_rB_{*c_{r}}
  • Basis
    Vectors X_1, \cdots, X_m belonging to a subspace S are said to form a basis of S if
    • Every vector in S is a linear combination of X_1, \cdots, X_m;
    • X_1, \cdots, X_m are linearly independent.
  • A subspace of the form \big < X_1, \cdots, X_m\big > , where at least one of X_1, \cdots, X_m is non-zero, has a basis X_{c_1}, \cdots, X_{c_r}, where 1 \leq c_1 < \cdots < c_r \leq m.
  • Left-to-right algorithm
    • Given a subspace of the form \big < X_1, \cdots, X_m\big > , let c_1 be the least index k for which X_k is non-zero. If c_1=m or if all the vectors X_k with k > c_1 are linear combinations of X_{c_1}, terminate the algorithm and let r = 1. Otherwise let c_2 be the least integer k > c_1 such that X_k is not a linear combination of X_{c_1}.
    • If c_2=m or if all the vectors X_k with k > c_2 are linear combinations of X_{c_1} and X_{c_2}, terminate the algorithm and let r = 2. Eventually the algorithm will terminate at the rth stage, either because c_r=m, or because all vectors X_k with k > c_r are linear combinations of X_{c_1}, \cdots, X_{c_r}.
    • X_{c_1}, \cdots, X_{c_r} is called the left-to-right basis for the subspace \big < X_1, \cdots, X_m\big > . For example, 2X, -Y is the left-to-right basis of \big < 0, 2X, X, -Y, X+Y\big > . (A computational sketch of this algorithm appears after this summary.)
  • Dimension
    Any two bases for a subspace S must contain the same number of elements. This number is called the dimension of S and is written \dim S. Naturally we define \dim \{0\} = 0.
  • A linearly independent family of m vectors in a subspace S, with \dim S=m, must be a basis for S.
  • Rank and nullity
    • column rank A = \dim C(A);
    • row rank A = \dim R(A);
    • nullity A= \dim N(A).
  • Let A\in M_{m\times n}(F). Then
    • column rank A = row rank A;
    • column rank A + nullity A = n.
  • The common value of column rank A and row rank A is called the rank of A.
  • Every linearly independent family of vectors in a subspace S can be extended to a basis of S.
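
The left-to-right algorithm above is easy to mechanize. Below is a minimal Python sketch (using sympy, an assumption of this manual rather than anything in the text; the helper name left_to_right_basis is invented): a vector is kept exactly when appending it as a new column increases the rank, i.e. when it is not a linear combination of the vectors already kept.

```python
from sympy import Matrix

def left_to_right_basis(vectors):
    """Return the left-to-right basis of <X_1, ..., X_m> (a sketch)."""
    basis, rank = [], 0
    for v in vectors:
        # Keep v iff it is not a linear combination of the kept vectors,
        # i.e. iff appending it as a column raises the rank.
        candidate = Matrix.hstack(*basis, v) if basis else v
        if candidate.rank() > rank:
            basis.append(v)
            rank += 1
    return basis

# The summary's example: <0, 2X, X, -Y, X+Y> with X, Y the standard basis of R^2.
X, Y = Matrix([1, 0]), Matrix([0, 1])
print(left_to_right_basis([0*X, 2*X, X, -Y, X + Y]))  # the vectors 2X and -Y
```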

Problems 3.6

1. Which of the following subsets of \mathbb{R}^2 are subspaces? (a) [x, y] satisfying x=2y; (b) [x, y] satisfying x=2y and 2x=y; (c) [x, y] satisfying x=2y+1; (d) [x, y] satisfying xy=0; (e) [x, y] satisfying x \geq 0 and y \geq 0.  

Solution:
(a) (1) [0, 0]\in S; (2) Suppose [x_1, y_1]\in S and [x_2, y_2]\in S. Then x_1+x_2 = 2y_1+2y_2=2(y_1+y_2), so [x_1+x_2, y_1+y_2] = [x_1, y_1] + [x_2, y_2]\in S; (3) Suppose [x, y]\in S and t\in \mathbb{R}. Then tx = t\cdot2y=2(ty), so [tx, ty]=t[x, y]\in S. Hence this is a subspace.
(b) We have x = 2y=2(2x)=4x\Rightarrow 3x=0\Rightarrow x=y=0. That is, S=\{[0, 0]\}, which consists of the zero vector alone, and the zero subspace is always a subspace.
(c) [0, 0]\notin S, hence it is not a subspace.
(d) xy=0\Rightarrow x=0 or y=0. But S is not closed under addition: for example, [0, 1] + [1, 0]=[1, 1]\notin S. Hence it is not a subspace.
(e) S is not closed under scalar multiplication: for example, [1, 0] \in S and t=-1\in \mathbb{R}, but t[1, 0]=[-1, 0]\notin S. Hence it is not a subspace.
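
The counterexamples used in (c), (d) and (e) can also be checked numerically; a minimal Python sketch (the predicates are illustrative, not part of the original solution):

```python
# Each predicate encodes membership in one subset of R^2.
in_c = lambda x, y: x == 2*y + 1        # subset (c)
in_d = lambda x, y: x*y == 0            # subset (d)
in_e = lambda x, y: x >= 0 and y >= 0   # subset (e)

print(in_c(0, 0))            # False: (c) misses the zero vector
print(in_d(0 + 1, 1 + 0))    # False: [0,1] + [1,0] = [1,1] leaves (d)
print(in_e(-1 * 1, -1 * 0))  # False: (-1)*[1,0] = [-1,0] leaves (e)
```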

2. If X, Y, Z are vectors in \mathbb{R}^n, prove that \big < X, Y, Z\big > = \big < X+Y, X+Z, Y+Z \big >
Solution:
First, each of X+Y, X+Z, Y+Z is a linear combination of X, Y, Z, so \big < X+Y, X+Z, Y+Z \big > \subseteq \big < X, Y, Z\big > . On the other hand, \begin{cases}X={1\over2}(X+Y) + {1\over2}(X+Z) - {1\over2}(Y+Z)\\ Y={1\over2}(X+Y) - {1\over2}(X+Z) + {1\over2}(Y+Z)\\ Z=-{1\over2}(X+Y) + {1\over2}(X+Z) + {1\over2}(Y+Z) \end{cases} That is, each of X, Y, Z is a linear combination of X+Y, X+Z, Y+Z, so \big < X, Y, Z\big > \subseteq \big < X+Y, X+Z, Y+Z \big > . Thus, \big < X, Y, Z\big > = \big < X+Y, X+Z, Y+Z \big > .
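
Because the displayed identities are linear, they can be verified coefficient-wise with scalar symbols standing in for the vectors; a small sketch using sympy (an assumed tool here):

```python
from sympy import symbols, Rational, expand

X, Y, Z = symbols('X Y Z')  # scalar stand-ins for the vectors
h = Rational(1, 2)
assert expand(h*(X+Y) + h*(X+Z) - h*(Y+Z)) == X
assert expand(h*(X+Y) - h*(X+Z) + h*(Y+Z)) == Y
assert expand(-h*(X+Y) + h*(X+Z) + h*(Y+Z)) == Z
print("all three identities hold")
```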

3. Determine if X_1 = \begin{bmatrix}1\\0\\ 1\\ 2 \end{bmatrix}, X_2 = \begin{bmatrix}0\\1\\ 1\\ 2 \end{bmatrix}, and X_3 = \begin{bmatrix}1\\ 1\\ 1\\ 3 \end{bmatrix} are linearly independent in \mathbb{R}^4.

Solution:
Suppose we have xX_1+yX_2+zX_3=0, that is \begin{cases}x + z=0\\ y+z =0\\ x+y+z = 0\\ 2x+2y+3z = 0 \end{cases} The first two equations give x=-z and y=-z; substituting into the third gives -z=0, so x=y=z=0. Thus they are linearly independent.
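
A quick machine check of the same conclusion, assuming sympy is available:

```python
from sympy import Matrix

X1, X2, X3 = Matrix([1, 0, 1, 2]), Matrix([0, 1, 1, 2]), Matrix([1, 1, 1, 3])
A = Matrix.hstack(X1, X2, X3)
print(A.rank())       # 3: full column rank, hence linearly independent
print(A.nullspace())  # []: xX1 + yX2 + zX3 = 0 only for x = y = z = 0
```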

4. For which real number \lambda are the following vectors linearly independent in \mathbb{R}^3? X_1=\begin{bmatrix}\lambda\\ -1\\ -1 \end{bmatrix}, X_2=\begin{bmatrix}-1\\ \lambda\\ -1 \end{bmatrix}, X_3=\begin{bmatrix}-1\\ -1\\ \lambda \end{bmatrix}.
Solution:
Suppose there exist x, y, z satisfying xX_1+yX_2+zX_3= 0. That is \begin{cases}\lambda x -y -z=0\\ -x + \lambda y-z =0\\ -x -y +\lambda z = 0\end{cases} \Rightarrow \begin{bmatrix}\lambda & -1& -1\\ -1& \lambda& -1\\ -1& -1& \lambda \end{bmatrix} \Rightarrow\begin{cases}R_1 + \lambda R_3\\ R_2-R_3\\ -R_3 \end{cases}\begin{bmatrix}0 & -\lambda-1& \lambda^2-1\\ 0& \lambda+1& -1-\lambda\\ 1& 1 & -\lambda \end{bmatrix} If \lambda = -1, then \begin{bmatrix}0 & 0& 0\\ 0& 0& 0\\ 1& 1 & 1 \end{bmatrix}\Rightarrow x = -y-z, which means there are non-trivial solutions. Thus \lambda \neq -1, and we may divide by \lambda+1: \begin{bmatrix}0 & -\lambda-1& \lambda^2-1\\ 0& \lambda+1& -1-\lambda\\ 1& 1 & -\lambda \end{bmatrix} \Rightarrow\begin{cases}{1\over\lambda+1} R_1\\ {1\over\lambda+1} R_2 \end{cases} \begin{bmatrix}0 & -1& \lambda-1\\ 0& 1& -1\\ 1& 1 & -\lambda \end{bmatrix} \Rightarrow\begin{cases}R_1 + R_2\\ R_3-R_2 \end{cases} \begin{bmatrix}0 & 0& \lambda-2\\ 0& 1& -1\\ 1& 0 & -\lambda+1 \end{bmatrix} If \lambda = 2, then \begin{bmatrix}0 & 0& 0\\ 0& 1& -1\\ 1& 0 & -1 \end{bmatrix} \Rightarrow \begin{cases}x = z\\ y = z \end{cases} which means there are non-trivial solutions. Thus \lambda \neq 2, and we have \begin{bmatrix}0 & 0& \lambda-2\\ 0& 1& -1\\ 1& 0 & -\lambda+1 \end{bmatrix} \Rightarrow {1\over\lambda-2}R_1 \begin{bmatrix}0 & 0& 1\\ 0& 1& -1\\ 1& 0 & -\lambda+1 \end{bmatrix} \Rightarrow \begin{cases}R_2+R_1\\ R_3+(\lambda-1)R_1 \end{cases} \begin{bmatrix}0 & 0& 1\\ 0& 1& 0\\ 1& 0 & 0 \end{bmatrix} \Rightarrow x= y= z = 0 Therefore, X_1, X_2, X_3 are linearly independent precisely when \lambda\neq -1 and \lambda\neq 2.
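
An alternative check (the determinant test, not the row reduction used above): the three vectors are linearly independent exactly when the matrix they form is non-singular. A sympy sketch:

```python
from sympy import Matrix, symbols, factor

lam = symbols('lambda')
A = Matrix([[lam, -1, -1], [-1, lam, -1], [-1, -1, lam]])
print(factor(A.det()))  # (lambda - 2)*(lambda + 1)**2
# The determinant vanishes exactly at lambda = -1 and lambda = 2,
# matching the row reduction above.
```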

5. Find bases for the row, column and null spaces of the following matrix over \mathbb{Q}: A=\begin{bmatrix}1& 1& 2& 0& 1\\ 2& 2& 5& 0& 3\\ 0& 0& 0& 1& 3\\ 8& 11& 19& 0& 11 \end{bmatrix}
Solution:
A=\begin{bmatrix}1& 1& 2& 0& 1\\ 2& 2& 5& 0& 3\\ 0& 0& 0& 1& 3\\ 8& 11& 19& 0& 11 \end{bmatrix} \Rightarrow\begin{cases}R_2- 2R_1\\ R_4-8R_1\end{cases} \begin{bmatrix}1& 1& 2& 0& 1\\ 0& 0& 1& 0& 1\\ 0& 0& 0& 1& 3\\ 0& 3& 3& 0& 3 \end{bmatrix} \Rightarrow{1\over3} R_4 \begin{bmatrix}1& 1& 2& 0& 1\\ 0& 0& 1& 0& 1\\ 0& 0& 0& 1& 3\\ 0& 1& 1& 0& 1 \end{bmatrix} \Rightarrow \begin{cases}R_1-R_4\\ R_4-R_2\end{cases} \begin{bmatrix}1& 0& 1& 0& 0\\ 0& 0& 1& 0& 1\\ 0& 0& 0& 1& 3\\ 0& 1& 0& 0& 0 \end{bmatrix} \Rightarrow R_1-R_2 \begin{bmatrix}1& 0& 0& 0& -1\\ 0& 0& 1& 0& 1\\ 0& 0& 0& 1& 3\\ 0& 1& 0& 0& 0 \end{bmatrix} \Rightarrow \begin{bmatrix}1& 0& 0& 0& -1\\ 0& 1& 0& 0& 0\\ 0& 0& 1& 0& 1\\ 0& 0& 0& 1& 3 \end{bmatrix} = B Thus, the bases of R(A) and C(A) are: \begin{cases}R(A): & [1, 0, 0, 0, -1], [0, 1, 0, 0, 0], [0, 0, 1, 0, 1], [0, 0, 0, 1, 3]\\ C(A): & [1, 2, 0, 8]^{t}, [1, 2, 0, 11]^{t}, [2, 5, 0, 19]^{t}, [0, 0, 1, 0]^{t}\\ \end{cases} For N(A), we have AX=0 where X=[x_1, x_2, x_3, x_4, x_5]^{t}, that is \begin{cases}x_1=x_5\\ x_2=0\\x_3=-x_5\\ x_4=-3x_5 \end{cases}\Rightarrow X=\begin{bmatrix}x_5\\ 0\\ -x_5\\ -3x_5\\ x_5 \end{bmatrix} = x_5\begin{bmatrix}1\\ 0\\ -1\\ -3\\ 1 \end{bmatrix} Hence the basis of N(A) is [1, 0, -1, -3, 1]^{t}.
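
The computation can be confirmed with sympy's rref and nullspace (an assumed tool; the text works by hand):

```python
from sympy import Matrix

A = Matrix([[1, 1, 2, 0, 1],
            [2, 2, 5, 0, 3],
            [0, 0, 0, 1, 3],
            [8, 11, 19, 0, 11]])
B, pivots = A.rref()
print(B)              # the matrix B computed above
print(pivots)         # (0, 1, 2, 3): columns 1-4 of A form a basis of C(A)
print(A.nullspace())  # [Matrix([[1], [0], [-1], [-3], [1]])]
```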

6. Find bases for the row, column and null spaces of the following matrix over \mathbb{Z}_2: A=\begin{bmatrix}1& 0& 1& 0& 1\\ 0& 1& 0& 1& 1\\ 1& 1& 1& 1& 0\\ 0& 0& 1& 1& 0 \end{bmatrix}
Solution:
Recall that 1 + 1 = 0 \Rightarrow -1=1 in the field \mathbb{Z}_2.
A=\begin{bmatrix}1& 0& 1& 0& 1\\ 0& 1& 0& 1& 1\\ 1& 1& 1& 1& 0\\ 0& 0& 1& 1& 0 \end{bmatrix} \Rightarrow R_3-R_1 \begin{bmatrix}1& 0& 1& 0& 1\\ 0& 1& 0& 1& 1\\ 0& 1& 0& 1& 1\\ 0& 0& 1& 1& 0 \end{bmatrix} \Rightarrow \begin{cases}R_3-R_2\\ R_1-R_4\end{cases} \begin{bmatrix}1& 0& 0& 1& 1\\ 0& 1& 0& 1& 1\\ 0& 0& 0& 0& 0\\ 0& 0& 1& 1& 0 \end{bmatrix} \Rightarrow R_3\leftrightarrow R_4\begin{bmatrix}1& 0& 0& 1& 1\\ 0& 1& 0& 1& 1\\ 0& 0& 1& 1& 0 \\ 0& 0& 0& 0& 0 \end{bmatrix} The bases of R(A) and C(A) are: \begin{cases}R(A): & [1, 0, 0, 1, 1], [0, 1, 0, 1, 1], [0, 0, 1, 1, 0]\\ C(A): & [1, 0, 1, 0]^{t}, [0, 1, 1, 0]^{t}, [1, 0, 1, 1]^{t}\end{cases} For N(A) we have \begin{cases}x_1=-x_4-x_5=x_4+x_5\\ x_2=-x_4-x_5=x_4+x_5\\ x_3= -x_4=x_4\\ x_4=x_4\\x_5=x_5 \end{cases}\Rightarrow X=x_4\begin{bmatrix}1\\ 1\\ 1\\ 1\\ 0 \end{bmatrix} + x_5\begin{bmatrix}1\\ 1\\ 0\\ 0\\ 1 \end{bmatrix} Hence the basis of N(A) is [1, 1, 1, 1, 0]^{t}, [1, 1, 0, 0, 1]^{t}.
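
sympy's rref works over \mathbb{Q}, so for \mathbb{Z}_2 (and \mathbb{Z}_5 in the next problem) here is a small hand-rolled reduction modulo a prime p; a sketch, with the helper name rref_mod invented for these exercises:

```python
def rref_mod(rows, p):
    """Reduced row-echelon form over Z_p, p prime (a sketch).
    Returns (reduced rows, pivot column indices)."""
    M = [[x % p for x in row] for row in rows]
    m, n = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(n):
        i = next((i for i in range(r, m) if M[i][c]), None)  # pivot row
        if i is None:
            continue
        M[r], M[i] = M[i], M[r]
        inv = pow(M[r][c], p - 2, p)  # inverse via Fermat's little theorem
        M[r] = [(x * inv) % p for x in M[r]]
        for k in range(m):            # clear the pivot column elsewhere
            if k != r and M[k][c]:
                f = M[k][c]
                M[k] = [(a - f * b) % p for a, b in zip(M[k], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

B, piv = rref_mod([[1, 0, 1, 0, 1],
                   [0, 1, 0, 1, 1],
                   [1, 1, 1, 1, 0],
                   [0, 0, 1, 1, 0]], 2)
print(B)    # [[1,0,0,1,1], [0,1,0,1,1], [0,0,1,1,0], [0,0,0,0,0]]
print(piv)  # [0, 1, 2]: columns 1-3 of A form a basis of C(A)
```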

7. Find bases for the row, column and null spaces of the following matrix over \mathbb{Z}_5: A=\begin{bmatrix}1& 1& 2& 0& 1 & 3\\ 2& 1& 4& 0& 3 & 2\\ 0& 0& 0& 1& 3 &0\\ 3& 0& 2& 4& 3 & 2 \end{bmatrix}
Solution:
Recall that in \mathbb{Z}_5 every integer is identified with its remainder modulo 5, an element of \{0, 1, 2, 3, 4\}; for example, 8 \equiv 3 and -8 \equiv 2 \pmod 5. A=\begin{bmatrix}1& 1& 2& 0& 1 & 3\\ 2& 1& 4& 0& 3 & 2\\ 0& 0& 0& 1& 3 &0\\ 3& 0& 2& 4& 3 & 2 \end{bmatrix} \Rightarrow \begin{cases}R_2-2R_1\\ R_4-3R_1 \end{cases} \begin{bmatrix}1& 1& 2& 0& 1 & 3\\ 0& 4& 0& 0& 1 & 1\\ 0& 0& 0& 1& 3 &0\\ 0& 2& 1& 4& 0 & 3 \end{bmatrix} \Rightarrow\begin{cases}2R_1-R_4\\ R_2-2R_4 \end{cases} \begin{bmatrix}2& 0& 3& 1& 2 & 3\\ 0& 0& 3& 2& 1 & 0\\ 0& 0& 0& 1& 3 &0\\ 0& 2& 1& 4& 0 & 3 \end{bmatrix} \Rightarrow \begin{cases} R_1-R_2\\ 3R_4-R_2\end{cases} \begin{bmatrix}2& 0& 0& 4& 1 & 3\\ 0& 0& 3& 2& 1 & 0\\ 0& 0& 0& 1& 3 &0\\ 0& 1& 0& 0& 4 & 4 \end{bmatrix} \Rightarrow \begin{cases} R_1-4R_3\\ R_2-2R_3\end{cases} \begin{bmatrix}2& 0& 0& 0& 4 & 3\\ 0& 0& 3& 0& 0 & 0\\ 0& 0& 0& 1& 3 &0\\ 0& 1& 0& 0& 4 & 4 \end{bmatrix} = \begin{bmatrix}2& 0& 0& 0& 4 & 8\\ 0& 0& 3& 0& 0 & 0\\ 0& 0& 0& 1& 3 &0\\ 0& 1& 0& 0& 4 & 4 \end{bmatrix} \Rightarrow \begin{bmatrix}1& 0& 0& 0& 2 & 4\\ 0& 1& 0& 0& 4 & 4\\ 0& 0& 1& 0& 0 & 0\\ 0& 0& 0& 1& 3 &0\end{bmatrix} Thus the bases of R(A) and C(A) are: \begin{cases}R(A):& [1, 0, 0, 0, 2, 4], [0, 1, 0, 0, 4, 4], [0, 0, 1, 0, 0, 0], [0, 0, 0, 1, 3, 0] \\ C(A): & [1, 2, 0, 3]^{t}, [1, 1, 0, 0]^{t}, [2, 4, 0, 2]^{t}, [0, 0, 1, 4]^{t}\end{cases} For N(A), we have \begin{cases}x_1=-2x_5-4x_6=3x_5+x_6\\ x_2 = -4x_5 - 4x_6= x_5+x_6\\ x_3=0\\ x_4 = -3x_5=2x_5\\ x_5 = x_5\\ x_6 = x_6 \end{cases}\Rightarrow X=x_5\begin{bmatrix}3\\1\\0\\2\\1\\0 \end{bmatrix} + x_6 \begin{bmatrix}1\\1\\0\\0\\0\\1 \end{bmatrix} Hence the basis of N(A) is [3, 1, 0, 2, 1, 0]^{t}, [1, 1, 0, 0, 0, 1]^{t}.
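
Reusing the rref_mod helper from the sketch in Problem 6, now with p = 5:

```python
B, piv = rref_mod([[1, 1, 2, 0, 1, 3],
                   [2, 1, 4, 0, 3, 2],
                   [0, 0, 0, 1, 3, 0],
                   [3, 0, 2, 4, 3, 2]], 5)
print(B)    # [[1,0,0,0,2,4], [0,1,0,0,4,4], [0,0,1,0,0,0], [0,0,0,1,3,0]]
print(piv)  # [0, 1, 2, 3]: columns 1-4 of A form a basis of C(A)
```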

8. Find bases for the row, column and null spaces of the matrix A defined in section 1.6, Problem 17.

Solution:
Recall that in section 1.6 Problem 17 we have the following result:
A=\begin{bmatrix}1& a& b& a\\ a& b& b& 1\\ 1& 1& 1& a \end{bmatrix} \Rightarrow B=\begin{bmatrix}1& 0& 0& 0\\ 0& 1& 0& b\\0& 0& 1& 1 \end{bmatrix} where B is the reduced row-echelon form of A, computed with the addition and multiplication tables of the field given in that problem (the tables are not reproduced here).
Thus the bases of R(A) and C(A) are \begin{cases}R(A): &[1, 0, 0, 0], [0, 1, 0, b], [0, 0, 1, 1] \\ C(A): & [1, a, 1]^{t}, [a, b, 1]^{t}, [b, b, 1]^{t} \end{cases} For N(A), noting that -x=x in this field, we have \begin{cases}x_1=0\\ x_2 = -bx_4 = bx_4\\ x_3 = -x_4=x_4\\ x_4=x_4 \end{cases} \Rightarrow X=x_4\begin{bmatrix}0\\ b\\1\\1 \end{bmatrix} Hence the basis of N(A) is [0, b, 1, 1]^{t}.

9. If X_1, \cdots, X_m form a basis for a subspace S, prove that X_1, X_1+X_2, \cdots, X_1+\cdots+X_m also form a basis of S.

Solution:
The proof has two parts: first show that the new family is linearly independent, then show that every vector in S is a linear combination of it.
Suppose that there exist x_1, \cdots, x_m such that x_1X_1+x_2(X_1+X_2)+\cdots+x_m(X_1+\cdots+X_m)=0 \Rightarrow (x_1+x_2+\cdots+x_m)X_1+(x_2+\cdots+x_m)X_2+\cdots+x_mX_m = 0 Since X_1, \cdots, X_m are linearly independent, \begin{cases}x_1+x_2+\cdots+x_m = 0\\ x_2+\cdots+x_m =0\\ \vdots\\ x_m=0 \end{cases} \Rightarrow \begin{cases}x_1=0\\ x_2=0\\ \vdots\\ x_m=0\end{cases} which shows that X_1, X_1+X_2, \cdots, X_1+\cdots+X_m are linearly independent.
Next, since X_1, \cdots, X_m form a basis of S, take an arbitrary vector X\in S; then X = a_1X_1+\cdots+a_mX_m = x_1X_1+x_2(X_1+X_2)+\cdots+x_m(X_1+\cdots+X_m) = (x_1+x_2+\cdots+x_m)X_1+(x_2+\cdots+x_m)X_2+\cdots+x_mX_m \Rightarrow \begin{cases}a_1 = x_1+x_2+\cdots+x_m\\ a_2 = x_2+\cdots+x_m\\ \vdots\\ a_m = x_m \end{cases} \Rightarrow \begin{cases}x_1=a_1-a_2\\ x_2=a_2-a_3\\ \vdots\\ x_m=a_m\end{cases} which means an arbitrary vector X\in S can be expressed in terms of X_1, X_1+X_2, \cdots, X_1+\cdots+X_m.
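
In the language of Problem 19 below, the new family is obtained from X_1, \cdots, X_m via the lower-triangular matrix of ones, which is non-singular; its inverse encodes the coefficient formulas x_i = a_i - a_{i+1} found above. A sympy sketch (the size m = 4 is an arbitrary choice):

```python
from sympy import Matrix

m = 4
# Row i expresses X_1 + ... + X_{i+1} in terms of X_1, ..., X_m.
A = Matrix(m, m, lambda i, j: 1 if j <= i else 0)
print(A.det())  # 1: A is non-singular, so the new family is a basis (cf. Problem 19)
print(A.inv())  # 1 on the diagonal, -1 beside it, mirroring x_i = a_i - a_{i+1}
```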

10. Let A=\begin{bmatrix}a&b&c\\1&1&1\end{bmatrix}. Classify the values of a, b, c for which (a) rank A = 1; (b) rank A = 2.

Solution:
(a) \begin{bmatrix}a&b&c\\1&1&1\end{bmatrix}\Rightarrow R_1-aR_2 \begin{bmatrix}0&b-a&c-a\\1&1&1\end{bmatrix} Thus rank A=1 exactly when b-a=c-a=0, that is, when a=b=c.
(b) By (a), rank A=2 exactly when b-a \neq 0 or c-a\neq 0, that is, when a, b, c are not all equal.

11. Let S be a subspace of F^n with \dim S=m. If X_1, \cdots, X_m are vectors in S with the property that S=\big < X_1, \cdots, X_m\big > , prove that X_1, \cdots, X_m form a basis for S.

Solution:
Since S=\big < X_1, \cdots, X_m\big > is given, we only need to prove that X_1, \cdots, X_m are linearly independent.
Suppose, for a contradiction, that X_1, \cdots, X_m are linearly dependent. Then one of them is a linear combination of the others; say, after relabelling, X_m= x_1X_1+\cdots+x_{m-1}X_{m-1}. This equation shows that S is spanned by the m-1 vectors X_1, \cdots, X_{m-1}, so \dim S \leq m-1 < m, contradicting \dim S=m.

12. Find a basis for the subspace S of \mathbb{R}^3 defined by the equation x + 2y + 3z =0. Verify that Y_1=[-1, -1, 1]^{t}\in S and find a basis for S which includes Y_1.

Solution:
First, (-1)+2(-1)+3\cdot1=0\Rightarrow Y_1\in S. Then, we have
x + 2y + 3z =0\Rightarrow x=-2y-3z\Rightarrow \begin{bmatrix}x\\y\\z\end{bmatrix} = y\begin{bmatrix} -2\\ 1\\ 0 \end{bmatrix} + z\begin{bmatrix}-3\\ 0\\ 1 \end{bmatrix} Hence [-2, 1, 0]^{t} and [-3, 0, 1]^{t} form a basis for S, so \dim S=2. For a basis containing Y_1: since [-2, 1, 0]^{t} is not a scalar multiple of Y_1, the vectors Y_1, [-2, 1, 0]^{t} are linearly independent in S; as \dim S=2, they form a basis for S which includes Y_1.
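
A quick sympy check of the basis containing Y_1 (an assumed tool, not part of the original solution):

```python
from sympy import Matrix

Y1, X1 = Matrix([-1, -1, 1]), Matrix([-2, 1, 0])
in_S = lambda v: v[0] + 2*v[1] + 3*v[2] == 0
print(in_S(Y1), in_S(X1))            # True True: both vectors lie in S
print(Matrix.hstack(Y1, X1).rank())  # 2 = dim S, so Y1, X1 form a basis of S
```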

13. Let X_1, \cdots, X_m be vectors in F^n. If X_i = X_j, where i < j, prove that X_1, \cdots, X_m are linearly dependent.  

Solution:
Since X_i = X_j with i < j, we have the linear relation 1\cdot X_i+(-1)\cdot X_j+\sum_{k\neq i, j} 0\cdot X_k=X_i-X_j=0, in which not all coefficients are zero. Hence X_1, \cdots, X_m are linearly dependent.

14. Let X_1, \cdots, X_{m+1} be vectors in F^n. Prove that \dim \big < X_1, \cdots, X_{m+1}\big > = \dim \big < X_1, \cdots, X_{m} \big > if X_{m+1} is a linear combination of X_1, \cdots, X_{m}, but \dim \big < X_1, \cdots, X_{m+1}\big > = \dim \big < X_1, \cdots, X_{m} \big > +1 if X_{m+1} is not a linear combination of X_1, \cdots, X_{m}.
Deduce that the system of linear equations AX=B is consistent if and only if \text{rank}\ [A|B] = \text{rank}\ A.
Solution:
(1) Since X_{m+1} is a linear combination of X_1, \cdots, X_{m}, we have \big < X_1, \cdots, X_{m+1}\big > = \big < X_1, \cdots, X_{m} \big > \Rightarrow \dim \big < X_1, \cdots, X_{m+1}\big > = \dim \big < X_1, \cdots, X_{m} \big >
(2) Suppose X_{c_1}, \cdots, X_{c_r} is a basis of \big < X_1, \cdots, X_{m} \big > . Since X_{m+1} is not a linear combination of X_1, \cdots, X_{m}, we have \big < X_1, \cdots, X_{m+1}\big > = \big < X_{c_1}, \cdots, X_{c_r}, X_{m+1} \big > Moreover, X_{m+1} is not a linear combination of X_{c_1}, \cdots, X_{c_r} (otherwise it would be a linear combination of X_1, \cdots, X_m), so by the left-to-right test X_{c_1}, \cdots, X_{c_r}, X_{m+1} are linearly independent and form a basis. Hence \dim \big < X_1, \cdots, X_{m+1}\big > = r+1 = \dim \big < X_1, \cdots, X_{m} \big > +1
(3) The system AX=B is consistent if and only if there exists X=[x_1, \cdots, x_n]^{t} with B = x_1A_{*1}+\cdots+x_nA_{*n}, that is, if and only if B is a linear combination of the columns of A, i.e. B\in C(A). By parts (1) and (2), this holds if and only if \dim C([A|B]) = \dim C(A), that is, \text{rank}\ [A|B] = \text{rank}\ A.
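
The consistency criterion is easy to test mechanically; a sympy sketch with a made-up rank-1 matrix:

```python
from sympy import Matrix

A = Matrix([[1, 2], [2, 4], [3, 6]])  # rank 1; a hypothetical example
for B in (Matrix([1, 2, 3]), Matrix([1, 2, 4])):
    consistent = Matrix.hstack(A, B).rank() == A.rank()
    print(consistent)  # True when B is in C(A), False otherwise
```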

15. Let a_1, \cdots, a_n be elements of F, not all zero. Prove that the set of vectors [x_1, \cdots, x_n]^{t} where x_1, \cdots, x_n satisfy a_1x_1+\cdots+a_nx_n=0 is a subspace of F^{n} with dimension equal to n-1.

Solution:
Let S denote the set of such vectors [x_1, \cdots, x_n]^{t}. Then S = N(A), where A is the 1\times n matrix [a_1, \cdots, a_n]. Since A\neq0, we have rank A=1. Hence, by the rank-nullity relation, \dim S=\dim N(A)=n-\text{rank}\ A=n-1
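
A concrete instance, checked with sympy (the coefficients a_i are an arbitrary non-zero choice):

```python
from sympy import Matrix

A = Matrix([[1, 2, 0, 5]])  # n = 4, A the row [a_1, ..., a_4]
print(len(A.nullspace()))   # 3 = n - 1, the dimension proved above
```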

16. Prove the following theorems:
(1) Suppose each of X_1, \cdots, X_r is a linear combination of Y_1, \cdots, Y_s. Then any linear combination of X_1, \cdots, X_r is a linear combination of Y_1, \cdots, Y_s. That is, \big < X_1, \cdots, X_r\big > \subseteq\big < Y_1, \cdots, Y_s\big > .
(2) Subspaces \big < X_1, \cdots, X_r\big > and \big < Y_1, \cdots, Y_s\big > are equal if each of X_1, \cdots, X_r is a linear combination of Y_1, \cdots, Y_s and each of Y_1, \cdots, Y_s is a linear combination of X_1, \cdots, X_r.
(3) \big < X_1,\cdots, X_r, Z_1, \cdots, Z_t\big > and \big < X_1,\cdots, X_r\big > are equal if each of Z_1, \cdots, Z_t is a linear combination of X_1,\cdots, X_r.
(4) A family of s vectors in \big < X_1, \cdots, X_r\big > will be linearly dependent if s > r. Equivalently, a linearly independent family of s vectors in \big < X_1, \cdots, X_r\big > must have s \leq r.

Solution:
(1) Since each of X_1, \cdots, X_r is a linear combination of Y_1, \cdots, Y_s, we have \begin{cases}X_1=a_{11}Y_1+\cdots+a_{1s}Y_s\\ \vdots\\ X_r=a_{r1}Y_1+\cdots+a_{rs}Y_s\end{cases} Now let X be a linear combination of X_1, \cdots, X_r, that is X=x_1X_1+\cdots+x_rX_r =x_1(a_{11}Y_1 + \cdots + a_{1s}Y_s) + \cdots +x_r(a_{r1}Y_1+\cdots+a_{rs}Y_s) =y_1Y_1 + \cdots + y_sY_s where y_i = \displaystyle\sum_{j=1}^{r}x_{j}a_{ji} for i = 1, \cdots, s. This shows that X is also a linear combination of Y_1, \cdots, Y_s.

(2) According to part (1), we know that \big < X_1, \cdots, X_r\big > \subseteq\big < Y_1, \cdots, Y_s\big > and \big < Y_1, \cdots, Y_s\big > \subseteq \big < X_1, \cdots, X_r\big > . Thus \big < X_1, \cdots, X_r\big > = \big < Y_1, \cdots, Y_s\big > .

(3) Since each of Z_1, \cdots, Z_t is a linear combination of X_1,\cdots, X_r, each of X_1,\cdots, X_r, Z_1, \cdots, Z_t is a linear combination of X_1,\cdots, X_r.
On the other hand, each of X_1,\cdots, X_r is a linear combination of X_1,\cdots, X_r, Z_1, \cdots, Z_t. According to part (2), we know that \big < X_1,\cdots, X_r, Z_1, \cdots, Z_t\big > = \big < X_1,\cdots, X_r\big >
(4) Suppose Y_1, \cdots, Y_s are vectors in \big < X_1, \cdots, X_r\big > . We have \begin{cases}Y_1=a_{11}X_1+\cdots+a_{1r}X_r\\ \vdots\\ Y_s=a_{s1}X_1+\cdots+a_{sr}X_r\end{cases} For any coefficients y_1, \cdots, y_s, y_1Y_1+\cdots+y_sY_s = y_1(a_{11}X_1+\cdots+a_{1r}X_r) + \cdots + y_s(a_{s1}X_1 + \cdots + a_{sr}X_r) =x_1X_1 + \cdots + x_rX_r where x_i = \displaystyle\sum_{j=1}^{s}y_{j}a_{ji} for i = 1, \cdots, r. Now consider the homogeneous system \begin{cases}a_{11}y_1+\cdots+a_{s1}y_s = 0\\ \vdots \\ a_{1r}y_1+\cdots+a_{sr}y_s = 0\end{cases} of r equations in the s unknowns y_1, \cdots, y_s. If s > r, it has a non-trivial solution, and for such a solution y_1Y_1+\cdots+y_sY_s=0 with y_1, \cdots, y_s not all zero.
That is, Y_1, \cdots, Y_s are linearly dependent whenever s > r.

17. Let R and S be subspaces of F^n, with R \subseteq S. Prove that \dim R\leq \dim S and that equality implies R=S.

Solution:
Suppose X_1, \cdots, X_r form a basis of R, and Y_1, \cdots, Y_s form a basis of S. Since R\subseteq S, each X_i is a linear combination of Y_1, \cdots, Y_s, so we have \big < Y_1, \cdots, Y_s\big > = \big < X_1, \cdots, X_r, Y_1, \cdots, Y_s\big > Applying the left-to-right algorithm to the family on the right produces a basis of S that extends X_1, \cdots, X_r: none of X_1, \cdots, X_r is discarded, since they are linearly independent. Hence \dim R = r \leq s = \dim S. If r=s, then X_1, \cdots, X_r is a linearly independent family of s=\dim S vectors in S, hence a basis of S, so R = \big < X_1, \cdots, X_r\big > = S.

18. Let R and S be subspaces of F^{n}. If R\cup S is a subspace of F^{n}, prove that R\subseteq S or S\subseteq R.

Solution:
Suppose, for a contradiction, that R\nsubseteq S and S\nsubseteq R; that is, there exist u and v such that u\in R but u\notin S, and v\in S but v\notin R. We have u, v \in R\cup S \Rightarrow u+v\in R\cup S \Rightarrow u+v\in R\ \text{or}\ u+v\in S If u+v\in R, then since R is a subspace it is closed under scalar multiplication, so -u\in R, and v=(u+v)+(-u)\in R, contradicting v\notin R. Similarly, if u+v\in S then u=(u+v)+(-v)\in S, contradicting u\notin S. Thus, we can conclude that R\subseteq S or S\subseteq R.

19. Let X_1, \cdots, X_r be a basis for a subspace S. Prove that all bases for S are given by the family Y_1, \cdots, Y_r, where Y_i=\sum_{j=1}^{r}a_{ij}X_j, and where A=[a_{ij}]\in M_{r\times r}(F) is a non-singular matrix.

Solution:
First, we prove that Y_1, \cdots, Y_r form a basis for S if A is non-singular; then we prove that A is non-singular if Y_1, \cdots, Y_r form a basis for S.
(1) From the given condition, we have \begin{cases}Y_1 = a_{11}X_1 + \cdots + a_{1r}X_r\\ \vdots\\ Y_r=a_{r1}X_1+\cdots + a_{rr}X_r\end{cases} In matrix form this reads B=AX, where A=[a_{ij}] is non-singular and B=[Y_1, \cdots, Y_r]^{t}, X=[X_1, \cdots, X_r]^{t} stack the vectors as rows. Thus X=A^{-1}B, so each X_i is a linear combination of Y_1, \cdots, Y_r. By Problem 16 parts (1) and (2), S =\big < X_1, \cdots, X_r \big > = \big < Y_1, \cdots, Y_r \big > and hence, by Problem 11, Y_1, \cdots, Y_r form a basis for S.
(2) Conversely, suppose Y_1, \cdots, Y_r form a basis for S. We show that the rows of A are linearly independent, which implies that A is non-singular. Assume x_1[a_{11}, \cdots, a_{1r}]+ \cdots+ x_r[a_{r1}, \cdots, a_{rr}] = [0, \cdots, 0], that is, \begin{cases}a_{11}x_1+\cdots + a_{r1}x_r = 0 \\ \vdots \\ a_{1r}x_1+\cdots + a_{rr}x_r =0\end{cases} Then x_1Y_1+\cdots+x_rY_r = x_1(a_{11}X_1 + \cdots + a_{1r}X_r) + \cdots + x_r(a_{r1}X_1+\cdots + a_{rr}X_r) = (a_{11}x_1+\cdots + a_{r1}x_r)X_1 + \cdots + ( a_{1r}x_1+\cdots + a_{rr}x_r)X_r =0\cdot X_1+\cdots+0\cdot X_r = 0 Since Y_1, \cdots, Y_r are linearly independent (being a basis), x_1 = \cdots = x_r =0. That is, the rows of A are linearly independent, so A is non-singular.

