
Section PD Properties of Dimension

Once the dimension of a vector space is known, then the determination of whether or not a set of vectors is linearly independent, or if it spans the vector space, can often be much easier. In this section we will state a workhorse theorem and then apply it to the column space and row space of a matrix. It will also help us describe a super-basis for \(\complex{m}\text{.}\)

Subsection GT Goldilocks’ Theorem

We begin with a useful theorem that we will need later, and in the proof of the main theorem in this subsection. This theorem says that we can extend linearly independent sets, one vector at a time, by adding vectors from outside the span of the linearly independent set, all the while preserving the linear independence of the set.

Theorem ELIS. Extending Linearly Independent Sets.

Suppose \(V\) is a vector space and \(S\) is a linearly independent set of vectors from \(V\text{.}\) Suppose \(\vect{w}\) is a vector such that \(\vect{w}\not\in\spn{S}\text{.}\) Then the set \(S^\prime=S\cup\set{\vect{w}}\) is linearly independent.

Proof.

Suppose \(S=\set{\vectorlist{v}{m}}\) and begin with a relation of linear dependence on \(S^\prime\text{,}\)
\begin{equation*} \lincombo{a}{v}{m}+a_{m+1}\vect{w}=\zerovector\text{.} \end{equation*}
There are two cases to consider. First suppose that \(a_{m+1}=0\text{.}\) Then the relation of linear dependence on \(S^\prime\) becomes
\begin{equation*} \lincombo{a}{v}{m}=\zerovector \end{equation*}
and by the linear independence of the set \(S\text{,}\) we conclude that \(a_1=a_2=a_3=\cdots=a_m=0\text{.}\) So all of the scalars in the relation of linear dependence on \(S^\prime\) are zero.
In the second case, suppose that \(a_{m+1}\neq 0\text{.}\) Then the relation of linear dependence on \(S^\prime\) becomes
\begin{align*} a_{m+1}\vect{w}&=-a_1\vect{v}_1-a_2\vect{v}_2-a_3\vect{v}_3-\cdots-a_m\vect{v}_m\\ \vect{w}&=-\frac{a_1}{a_{m+1}}\vect{v}_1-\frac{a_2}{a_{m+1}}\vect{v}_2-\frac{a_3}{a_{m+1}}\vect{v}_3-\cdots-\frac{a_m}{a_{m+1}}\vect{v}_m\text{.} \end{align*}
This equation expresses \(\vect{w}\) as a linear combination of the vectors in \(S\text{,}\) contrary to the assumption that \(\vect{w}\not\in\spn{S}\text{,}\) so this case leads to a contradiction.
The first case yielded only a trivial relation of linear dependence on \(S^\prime\) and the second case led to a contradiction. So \(S^\prime\) is a linearly independent set since any relation of linear dependence is trivial.
In the story Goldilocks and the Three Bears, the young girl Goldilocks visits the empty house of the three bears while out walking in the woods. One bowl of porridge is too hot, one too cold, the third is just right. One chair is too hard, one too soft, the third is just right. So it is with sets of vectors — some are too big (linearly dependent), some are too small (they do not span), and some are just right (bases). Here is Goldilocks’ Theorem.

Theorem G. Goldilocks.

Suppose that \(V\) is a vector space of dimension \(t\text{.}\) Let \(S=\set{\vectorlist{v}{m}}\) be a set of vectors from \(V\text{.}\) Then
  1. If \(m\gt t\text{,}\) then \(S\) is linearly dependent.
  2. If \(m\lt t\text{,}\) then \(S\) does not span \(V\text{.}\)
  3. If \(m=t\) and \(S\) is linearly independent, then \(S\) spans \(V\text{.}\)
  4. If \(m=t\) and \(S\) spans \(V\text{,}\) then \(S\) is linearly independent.

Proof.

Let \(B\) be a basis of \(V\text{.}\) Since \(\dimension{V}=t\text{,}\) Definition B and Theorem BIS imply that \(B\) is a linearly independent set of \(t\) vectors that spans \(V\text{.}\)
  1. Suppose to the contrary that \(S\) is linearly independent. Then \(B\) is a smaller set of vectors that spans \(V\text{.}\) This contradicts Theorem SSLD.
  2. Suppose to the contrary that \(S\) does span \(V\text{.}\) Then \(B\) is a larger set of vectors that is linearly independent. This contradicts Theorem SSLD.
  3. Suppose to the contrary that \(S\) does not span \(V\text{.}\) Then we can choose a vector \(\vect{w}\) such that \(\vect{w}\in V\) and \(\vect{w}\not\in\spn{S}\text{.}\) By Theorem ELIS, the set \(S^\prime=S\cup\set{\vect{w}}\) is again linearly independent. Then \(S^\prime\) is a set of \(m+1=t+1\) vectors that are linearly independent, while \(B\) is a set of \(t\) vectors that span \(V\text{.}\) This contradicts Theorem SSLD.
  4. Suppose to the contrary that \(S\) is linearly dependent. Then by Theorem DLDS (which can be upgraded, with no changes in the proof, to the setting of a general vector space), there is a vector in \(S\text{,}\) say \(\vect{v}_k\text{,}\) that is equal to a linear combination of the other vectors in \(S\text{.}\) Let \(S^\prime=S\setminus\set{\vect{v}_k}\text{,}\) the set of “other” vectors in \(S\text{.}\) Then it is easy to show that \(V=\spn{S}=\spn{S^\prime}\text{,}\) since \(\vect{v}_k\) contributes nothing new to the span. So \(S^\prime\) is a set of \(m-1=t-1\) vectors that spans \(V\text{,}\) while \(B\) is a set of \(t\) linearly independent vectors in \(V\text{.}\) This contradicts Theorem SSLD.
There is a tension in the construction of a basis. Make a set too big and you will end up with relations of linear dependence among the vectors. Make a set too small and you will not have enough raw material to span the entire vector space. Make a set just the right size (the dimension) and you only need to have linear independence or spanning, and you get the other property for free. These roughly-stated ideas are made precise by Theorem G.
The structure and proof of this theorem also deserve comment. The hypotheses seem innocuous. We presume we know the dimension of the vector space in hand, then we mostly just look at the size of the set \(S\text{.}\) From this we get big conclusions about spanning and linear independence. Each of the four proofs relies on ultimately contradicting Theorem SSLD, so in a way we could think of this entire theorem as a corollary of Theorem SSLD. (See Proof Technique LC.) The proofs of the third and fourth parts parallel each other in style: introduce \(\vect{w}\) using Theorem ELIS or toss \(\vect{v}_k\) using Theorem DLDS. Then obtain a contradiction to Theorem SSLD.
Theorem G is useful both in concrete examples and as a tool in other proofs. We will use it often to bypass verifying linear independence or spanning.

Example BPR. Bases for \(P_n\text{,}\) reprised.

In Example BP we claimed that
\begin{align*} B&=\set{1,\,x,\,x^2,\,x^3,\,\ldots,\,x^n}\\ C&=\set{1,\,1+x,\,1+x+x^2,\,1+x+x^2+x^3,\,\ldots,\,1+x+x^2+x^3+\cdots+x^n} \end{align*}
were both bases for \(P_n\) (Example VSP). Suppose we had first verified that \(B\) was a basis, so we would then know that \(\dimension{P_n}=n+1\text{.}\) The size of \(C\) is \(n+1\text{,}\) the right size to be a basis. We could then verify that \(C\) is linearly independent. We would not have to make any special efforts to prove that \(C\) spans \(P_n\text{,}\) since Theorem G would allow us to conclude this property of \(C\) directly. Then we would be able to say that \(C\) is a basis of \(P_n\) also.
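As a machine check of this line of reasoning (a sketch, not part of the original example), we can coordinatize the polynomials of \(C\) relative to \(B\) for a small value of \(n\text{,}\) say \(n=3\text{,}\) and verify linear independence in Sage; Theorem G then supplies spanning.

    # Sketch for n = 3: coordinate vectors of C relative to B = {1, x, x^2, x^3}
    R.<x> = QQ[]
    C = [1, 1 + x, 1 + x + x^2, 1 + x + x^2 + x^3]
    M = matrix(QQ, [R(p).padded_list(4) for p in C])
    M.rank() == 4    # True: C is linearly independent, so by Theorem G it is a basis of P_3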

Example BDM22. Basis by dimension in \(M_{22}\).

In Example DSM22 we showed that
\begin{equation*} B= \set{ \begin{bmatrix}-2&1\\1&0\end{bmatrix},\, \begin{bmatrix}-2&0\\0&1\end{bmatrix} } \end{equation*}
is a basis for the subspace \(Z\) of \(M_{22}\) (Example VSM) given by
\begin{equation*} Z=\setparts{\begin{bmatrix}a&b\\c&d\end{bmatrix}}{2a+b+3c+4d=0,\,-a+3b-5c-d=0}\text{.} \end{equation*}
This tells us that \(\dimension{Z}=2\text{.}\) In this example we will find another basis. We can construct two new matrices in \(Z\) by forming linear combinations of the matrices in \(B\text{.}\)
\begin{align*} 2\begin{bmatrix}-2&1\\1&0\end{bmatrix}+ (-3)\begin{bmatrix}-2&0\\0&1\end{bmatrix}&= \begin{bmatrix}2&2\\2&-3\end{bmatrix}\\ 3\begin{bmatrix}-2&1\\1&0\end{bmatrix}+ 1\begin{bmatrix}-2&0\\0&1\end{bmatrix}&= \begin{bmatrix}-8&3\\3&1\end{bmatrix} \end{align*}
Then the set
\begin{equation*} C= \set{ \begin{bmatrix}2&2\\2&-3\end{bmatrix},\, \begin{bmatrix}-8&3\\3&1\end{bmatrix} } \end{equation*}
has the right size to be a basis of \(Z\text{.}\) Let us see if it is a linearly independent set. The relation of linear dependence
\begin{align*} a_1\begin{bmatrix}2&2\\2&-3\end{bmatrix}+ a_2\begin{bmatrix}-8&3\\3&1\end{bmatrix}&=\zeromatrix\\ \begin{bmatrix}2a_1-8a_2&2a_1+3a_2\\2a_1+3a_2&-3a_1+a_2\end{bmatrix}&= \begin{bmatrix}0&0\\0&0\end{bmatrix} \end{align*}
leads to the homogeneous system of equations whose coefficient matrix
\begin{equation*} \begin{bmatrix} 2&-8\\ 2&3\\ 2&3\\ -3&1 \end{bmatrix} \end{equation*}
row-reduces to
\begin{equation*} \begin{bmatrix} \leading{1}&0\\ 0&\leading{1}\\ 0&0\\ 0&0 \end{bmatrix}\text{.} \end{equation*}
So with \(a_1=a_2=0\) as the only solution, the set is linearly independent. Now we can apply Theorem G to see that \(C\) also spans \(Z\) and therefore is a second basis for \(Z\text{.}\)
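The row-reduction above is easy to confirm by machine; here is a brief Sage check (a sketch, not part of the original example).

    # Coefficient matrix from the relation of linear dependence on C
    M = matrix(QQ, [[2, -8], [2, 3], [2, 3], [-3, 1]])
    M.rref()         # two pivot columns
    M.rank() == 2    # True: only the trivial solution, so C is linearly independent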

Example SVP4. Sets of vectors in \(P_4\).

In Example BSP4 we showed that
\begin{equation*} B=\set{x-2,\,x^2-4x+4,\,x^3-6x^2+12x-8,\,x^4-8x^3+24x^2-32x+16} \end{equation*}
is a basis for \(W=\setparts{p(x)}{p\in P_4,\ p(2)=0}\text{.}\) So \(\dimension{W}=4\text{.}\)
The set
\begin{equation*} \set{3x^2-5x-2,\,2x^2-7x+6,\,x^3-2x^2+x-2} \end{equation*}
is a subset of \(W\) (check this) and it happens to be linearly independent (check this, too). However, by Theorem G it cannot span \(W\text{.}\)
The set
\begin{equation*} \set{3x^2-5x-2,\,2x^2-7x+6,\,x^3-2x^2+x-2,\,-x^4+2x^3+5x^2-10x,\,x^4-16} \end{equation*}
is another subset of \(W\) (check this) and Theorem G tells us that it must be linearly dependent.
The set
\begin{equation*} \set{x-2,\,x^2-2x,\,x^3-2x^2,\,x^4-2x^3} \end{equation*}
is a third subset of \(W\) (check this) and is linearly independent (check this). Since it has the right size to be a basis, and is linearly independent, Theorem G tells us that it also spans \(W\text{,}\) and therefore is a basis of \(W\text{.}\)
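Each “check this” above is a routine computation; for instance, the third set can be verified in Sage along the following lines (a sketch, using coordinate vectors relative to \(\set{1,\,x,\,x^2,\,x^3,\,x^4}\)).

    # Third subset of W from Example SVP4
    R.<x> = QQ[]
    T = [x - 2, x^2 - 2*x, x^3 - 2*x^2, x^4 - 2*x^3]
    all(p(2) == 0 for p in T)                      # True: each polynomial lies in W
    M = matrix(QQ, [p.padded_list(5) for p in T])
    M.rank() == 4                                  # True: linearly independent, hence a basis of W by Theorem G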
A simple consequence of Theorem G is the observation that a proper subspace has strictly smaller dimension than its parent vector space. This may seem intuitively obvious, but it still requires proof, and we will cite this result later.

Theorem PSSD. Proper Subspaces have Smaller Dimension.

Suppose that \(U\) and \(V\) are subspaces of a vector space \(W\text{,}\) such that \(U\subsetneq V\) and \(V\) has finite dimension. Then \(\dimension{U}\lt\dimension{V}\text{.}\)

Proof.

Suppose that \(\dimension{U}=m\) and \(\dimension{V}=t\text{.}\) Then \(U\) has a basis \(B\) of size \(m\text{.}\) If \(m\gt t\text{,}\) then by Theorem G, \(B\) is linearly dependent, which is a contradiction. If \(m=t\text{,}\) then by Theorem G, \(B\) spans \(V\text{.}\) Then \(U=\spn{B}=V\text{,}\) also a contradiction. All that remains is that \(m\lt t\text{,}\) which is the desired conclusion.
The final theorem of this subsection is an extremely powerful tool for establishing the equality of two sets that are subspaces. Notice that the hypotheses include the equality of two integers (dimensions) while the conclusion is the equality of two sets (subspaces). It is the extra “structure” of a vector space and its dimension that makes possible this huge leap from an integer equality to a set equality.

Theorem EDYES. Equal Dimensions Yields Equal Subspaces.

Suppose that \(U\) and \(V\) are subspaces of a vector space \(W\text{,}\) each of finite dimension, such that \(U\subseteq V\) and \(\dimension{U}=\dimension{V}\text{.}\) Then \(U=V\text{.}\)

Proof.

The hypothesis that \(U\subseteq V\) allows for two cases. Either \(U\subsetneq V\) or \(U=V\text{.}\)
If \(U\subsetneq V\text{,}\) then the hypotheses of Theorem PSSD are met and we have a contradiction to the hypothesis that \(\dimension{U}=\dimension{V}\text{.}\)
Since the first case is impossible, the second case must always occur, which is our desired conclusion.

Subsection RT Ranks and Transposes

We now prove one of the most surprising theorems about matrices. Notice the paucity of hypotheses compared to the precision of the conclusion.

Theorem RMRT. Rank of a Matrix is the Rank of the Transpose.

Suppose \(A\) is an \(m\times n\) matrix. Then \(\rank{A}=\rank{\transpose{A}}\text{.}\)

Proof.

Suppose we row-reduce \(A\) to the matrix \(B\) in reduced row-echelon form, and \(B\) has \(r\) nonzero rows. The quantity \(r\) tells us three things about \(B\text{:}\) the number of leading 1’s, the number of nonzero rows and the number of pivot columns. For this proof we will be interested in the latter two.
Theorem BRS and Theorem BCS each conclude by providing a basis, for the row space and the column space, respectively. In each case, these bases contain \(r\) vectors. This observation is what makes the following chain of equalities work.
\begin{align*} \rank{A}&=\dimension{\csp{A}}&& \text{Definition ROM}\\ &=r&& \text{Theorem BCS}\\ &=\dimension{\rsp{A}}&& \text{Theorem BRS}\\ &=\dimension{\csp{\transpose{A}}}&& \text{Theorem CSRST}\\ &=\rank{\transpose{A}}&& \text{Definition ROM} \end{align*}
Jacob Linenthal helped with this proof.
This says that the row space and the column space of a matrix have the same dimension, which should be very surprising. It does not say that the column space and the row space are identical. Indeed, if the matrix is not square, then the sizes (number of slots) of the vectors in each space are different, so the sets are not even comparable.
It is not hard to construct by yourself examples of matrices that illustrate Theorem RMRT, since it applies equally well to any matrix. Grab a matrix, row-reduce it, count the nonzero rows or the number of pivot columns. That is the rank. Transpose the matrix, row-reduce that, count the nonzero rows or the pivot columns. That is the rank of the transpose. The theorem says the two will be equal. Every time. Here is an example anyway.
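A throwaway check in Sage (an illustration with a made-up matrix, not one of the book's archetypes) might look like this.

    # rank of a matrix equals rank of its transpose (Theorem RMRT)
    A = matrix(QQ, [[1, 2, 0, -1],
                    [2, 4, 1,  3],
                    [3, 6, 1,  2]])
    A.rank() == A.transpose().rank()    # True; both ranks equal 2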

Example RRTI. Rank, rank of transpose, Archetype I.

Archetype I has a \(4\times 7\) coefficient matrix which row-reduces to
\begin{equation*} \begin{bmatrix} \leading{1} & 4 & 0 & 0 & 2 & 1 & -3\\ 0 & 0 & \leading{1} & 0 & 1 & -3 & 5\\ 0 & 0 & 0 & \leading{1} & 2 & -6 & 6\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \end{equation*}
so the rank is \(3\text{.}\) Row-reducing the transpose yields
\begin{equation*} \begin{bmatrix} \leading{1} & 0 & 0 & -\frac{31}{7}\\ 0 & \leading{1} & 0 & \frac{12}{7}\\ 0 & 0 & \leading{1} & \frac{13}{7}\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{bmatrix} \end{equation*}
demonstrating that the rank of the transpose is also \(3\text{.}\)

Subsection DFS Dimension of Four Subspaces

That the rank of a matrix equals the rank of its transpose is a fundamental and surprising result. Moreover, by applying Theorem FS we can easily determine the dimensions of all four fundamental subspaces associated with a matrix.

Theorem DFS. Dimensions of Four Subspaces.

Suppose that \(A\) is an \(m\times n\) matrix, and \(r=\rank{A}\text{.}\) Then
  1. \(\dimension{\nsp{A}}=n-r\)
  2. \(\dimension{\csp{A}}=r\)
  3. \(\dimension{\rsp{A}}=r\)
  4. \(\dimension{\lns{A}}=m-r\)

Proof.

If \(A\) row-reduces to a matrix in reduced row-echelon form with \(r\) nonzero rows, then the matrix \(C\) of extended echelon form (Definition EEF) will be an \(r\times n\) matrix in reduced row-echelon form with no zero rows and \(r\) pivot columns (Theorem PEEF). Similarly, the matrix \(L\) of extended echelon form (Definition EEF) will be an \((m-r)\times m\) matrix in reduced row-echelon form with no zero rows and \(m-r\) pivot columns (Theorem PEEF).
\begin{align*} \dimension{\nsp{A}} &=\dimension{\nsp{C}}&& \text{Theorem FS}\\ &=n-r&& \text{Theorem BNS}\\ \ \\ \dimension{\csp{A}} &=\dimension{\nsp{L}}&& \text{Theorem FS}\\ &=m-(m-r)&& \text{Theorem BNS}\\ &=r\\ \ \\ \dimension{\rsp{A}} &=\dimension{\rsp{C}}&& \text{Theorem FS}\\ &=r&& \text{Theorem BRS}\\ \ \\ \dimension{\lns{A}} &=\dimension{\rsp{L}}&& \text{Theorem FS}\\ &=m-r&& \text{Theorem BRS} \end{align*}
There are many different ways to state and prove this result, and indeed, the equality of the dimensions of the column space and row space is just a slight expansion of Theorem RMRT. However, we have restricted our techniques to applying Theorem FS and then determining dimensions with bases provided by Theorem BNS and Theorem BRS. This provides an appealing symmetry to the results and the proof.
In Section S we saw two general constructions (Theorem SIIS, Theorem SSIS) that combined two subspaces to create new subspaces. The four subspaces involved have dimensions related to each other, a result that can be very useful.

Theorem SID.

Suppose \(U\) and \(V\) are subspaces of a vector space \(W\text{,}\) each of finite dimension. Then \(\dimension{U+V}=\dimension{U}+\dimension{V}-\dimension{U\cap V}\text{.}\)

Proof.

Let \(A=\set{\vectorlist{x}{k}}\) be a basis of the subspace \(U\cap V\text{.}\) Because \(U\cap V\) is a subspace of both \(U\) and \(V\text{,}\) the set \(A\) is a linearly independent subset of each, and we can extend \(A\) via repeated applications of Theorem ELIS to form bases for \(U\) and for \(V\text{.}\) To wit, let \(\set{\vectorlist{u}{r}}\subseteq U\) be a set of vectors such that \(C=\set{\vectorlist{x}{k},\,\vectorlist{u}{r}}\) is a basis for \(U\text{.}\) Similarly, let \(\set{\vectorlist{v}{s}}\subseteq V\) be a set of vectors such that \(D=\set{\vectorlist{x}{k},\,\vectorlist{v}{s}}\) is a basis for \(V\text{.}\) Note that we have implicitly determined (by Definition D) that \(\dimension{U\cap V}=k\text{,}\) \(\dimension{U}=k+r\text{,}\) and \(\dimension{V}=k+s\text{.}\)
With this setup, we claim that
\begin{equation*} B=\set{\vectorlist{x}{k},\,\vectorlist{u}{r},\,\vectorlist{v}{s}} \end{equation*}
is a basis for the subspace \(U+V\text{.}\) We establish spanning first. Suppose that \(\vect{x}\in U+V\text{,}\) so \(\vect{x} = \vect{u}+\vect{v}\) where \(\vect{u}\in U\) and \(\vect{v}\in V\text{.}\) Since \(\vect{u}\in U\) we can express \(\vect{u}\) as a linear combination of the vectors in \(C\text{,}\) and since \(\vect{v}\in V\) we can express \(\vect{v}\) as a linear combination of the vectors in \(D\text{.}\) So
\begin{align*} \vect{x} &= \vect{u} + \vect{v}\\ &=\left(\lincombo{a}{x}{k}+\lincombo{c}{u}{r}\right) + \\ &\quad\quad\left(\lincombo{b}{x}{k}+\lincombo{d}{v}{s}\right)\\ &=\left(a_1+b_1\right)\vect{x_1} + \left(a_2+b_2\right)\vect{x_2} +\left(a_3+b_3\right)\vect{x_3} + \cdots + \left(a_k+b_k\right)\vect{x}_k +\\ &\quad\quad\lincombo{c}{u}{r} + \lincombo{d}{v}{s} \end{align*}
and we see that \(\vect{x}\) is a linear combination of the vectors of \(B\text{,}\) so \(B\) spans \(U+V\) by Definition SSVS.
To establish linear independence, we begin with a relation of linear dependence (Definition RLD) on \(B\text{,}\)
\begin{equation*} \lincombo{a}{x}{k}+\lincombo{c}{u}{r} + \lincombo{d}{v}{s}=\zerovector\text{.} \end{equation*}
We rearrange this equation, and give a name to the common vector that is the expression on either side of the equality.
\begin{align*} \vect{w} &= \lincombo{a}{x}{k}+\lincombo{c}{u}{r}\\ &= - \lincombo{d}{v}{s}\text{.} \end{align*}
From the first expression we see that \(\vect{w}\) is a linear combination of the basis \(C\) and so \(\vect{w}\in U\text{.}\) The second expression shows that \(\vect{w}\) is a linear combination of the basis \(D\) and so \(\vect{w}\in V\text{.}\) Thus, \(\vect{w}\in U\cap V\) and we can express \(\vect{w}\) as a linear combination of the vectors of \(A\text{.}\) We use this observation, and another expression for \(\vect{w}\) from just above, to form an equality that we then rearrange,
\begin{gather*} \vect{w} = \lincombo{e}{x}{k} = -\lincombo{d}{v}{s}\\ \zerovector = \lincombo{e}{x}{k} + \lincombo{d}{v}{s} \end{gather*}
Aha! Finally, we have a relation of linear dependence on a linearly independent set (\(D\)) and so by Definition LI we conclude that
\begin{equation*} e_1=e_2=e_3=\cdots=e_k=d_1=d_2=d_3=\cdots=d_s=0 \end{equation*}
and in particular, \(\vect{w}=\zerovector\text{.}\)
Now, if we return to the equation where we first introduced \(\vect{w}\text{,}\) it becomes
\begin{equation*} \zerovector = \vect{w} = \lincombo{a}{x}{k}+\lincombo{c}{u}{r}\text{.} \end{equation*}
This is a relation of linear dependence on a linearly independent set (\(C\)) and so by Definition LI we conclude that
\begin{equation*} a_1=a_2=a_3=\cdots=a_k=c_1=c_2=c_3=\cdots=c_r=0 \end{equation*}
and we have determined that all of the scalars in our original relation of linear dependence on \(B\) are zero, and so by Definition LI, we see that \(B\) is linearly independent.
Having established that \(B\) is a basis, we can say that \(\dimension{U+V}=k+r+s\text{.}\) So we have our result,
\begin{align*} \dimension{U+V} &= k + r + s\\ &= \left(k + r\right) + \left(k + s\right) - k\\ &= \dimension{U} + \dimension{V} - \dimension{U\cap V}\text{.} \end{align*}
An alternate version of the conclusion of Theorem SID is
\begin{equation*} \dimension{U+V} + \dimension{U\cap V} = \dimension{U} + \dimension{V} \end{equation*}
which has an appealing symmetry. Exercise PD.T60 is closely related to this theorem.
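Sums and intersections of subspaces of column vectors are available in Sage, so the identity can be illustrated directly (a sketch with made-up subspaces of \(\mathbb{Q}^5\text{,}\) not an example from the text).

    # dim(U + V) + dim(U meet V) == dim(U) + dim(V)   (Theorem SID)
    W = QQ^5
    U = W.subspace([[1, 0, 0, 0, 0], [0, 1, 0, 0, 0], [0, 0, 1, 1, 0]])
    V = W.subspace([[0, 0, 1, 1, 0], [0, 0, 0, 1, 1], [1, 1, 1, 1, 1]])
    lhs = (U + V).dimension() + U.intersection(V).dimension()
    rhs = U.dimension() + V.dimension()
    lhs == rhs    # True; here both sides equal 6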

Sage DMS. Dimensions of Matrix Subspaces.

The theorems in this section about the dimensions of various subspaces associated with matrices can be tested easily in Sage. For a large arbitrary matrix, we first verify Theorem RMRT, followed by the four conclusions of Theorem DFS.
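A sketch along those lines (not the book's original cell, using a random rational matrix) follows.

    # Check Theorem RMRT and the four conclusions of Theorem DFS
    m, n = 6, 9
    A = random_matrix(QQ, m, n)
    r = A.rank()
    A.rank() == A.transpose().rank()          # Theorem RMRT
    A.right_kernel().dimension() == n - r     # null space N(A)
    A.column_space().dimension() == r         # column space C(A)
    A.row_space().dimension() == r            # row space R(A)
    A.left_kernel().dimension() == m - r      # left null space L(A)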

Reading Questions PD Reading Questions

1. Title of Theorem G.

Why does Theorem G have the title it does?

2. Theorem RMRT surprising.

Why is Theorem RMRT so surprising?

3. Calculate dimensions of the 4 subspaces.

Row-reduce the matrix \(A\) to reduced row-echelon form. Without any further computations, determine the dimensions of the four subspaces (a) \(\nsp{A}\text{,}\) (b) \(\csp{A}\text{,}\) (c) \(\rsp{A}\text{,}\) and (d) \(\lns{A}\text{.}\)
\begin{equation*} A= \begin{bmatrix} 1 & -1 & 2 & 8 & 5 \\ 1 & 1 & 1 & 4 & -1 \\ 0 & 2 & -3 & -8 & -6 \\ 2 & 0 & 1 & 8 & 4 \end{bmatrix} \end{equation*}

Exercises PD Exercises

C10.

Example SVP4 leaves several details for the reader to check. Verify these five claims.

C40.

Determine if the set \(T=\set{x^2-x+5,\,4x^3-x^2+5x,\,3x+2}\) spans the vector space of polynomials with degree 4 or less, \(P_4\text{.}\) (Compare the solution to this exercise with Solution C40.1.)
Solution.
The vector space \(P_4\) has dimension 5 by Theorem DP. Since \(T\) contains only 3 vectors, and \(3\lt 5\text{,}\) Theorem G tells us that \(T\) does not span \(P_4\text{.}\)

T05.

Trivially, if \(U\) and \(V\) are two subspaces of \(W\) with \(U = V\text{,}\) then \(\dimension{U}=\dimension{V}\text{.}\) Combine this fact, Theorem PSSD, and Theorem EDYES all into one grand combined theorem. You might look to Theorem PIP for stylistic inspiration. (Notice this problem does not ask you to prove anything. It just asks you to roll up three theorems into one compact, logically equivalent statement.)

T10.

Prove the following theorem, which could be viewed as a reformulation of parts (3) and (4) of Theorem G, or more appropriately as a corollary of Theorem G (Proof Technique LC).
Suppose \(V\) is a vector space and \(S\) is a subset of \(V\) such that the number of vectors in \(S\) equals the dimension of \(V\text{.}\) Then \(S\) is linearly independent if and only if \(S\) spans \(V\text{.}\)

T15.

Suppose that \(A\) is an \(m\times n\) matrix and let \(\text{min}(m,\,n)\) denote the minimum of \(m\) and \(n\text{.}\) Prove that \(\rank{A}\leq \text{min}(m,\,n)\text{.}\) (If \(m\) and \(n\) are two numbers, then \(\text{min}(m,\,n)\) stands for the number that is the smaller of the two. For example \(\text{min}(4,\,6)=4\text{.}\))

T20.

Suppose that \(A\) is an \(m\times n\) matrix and \(\vect{b}\in\complex{m}\text{.}\) Prove that the linear system \(\linearsystem{A}{\vect{b}}\) is consistent if and only if \(\rank{A}=\rank{\augmented{A}{\vect{b}}}\text{.}\)
Solution.
Proof.
(⇒) 
Suppose first that \(\linearsystem{A}{\vect{b}}\) is consistent. Then by Theorem CSCS, \(\vect{b}\in\csp{A}\text{.}\) So the column \(\vect{b}\) adds nothing new to the span of the columns of \(A\text{,}\) which means that \(\csp{A}=\csp{\augmented{A}{\vect{b}}}\) and it follows that \(\rank{A}=\rank{\augmented{A}{\vect{b}}}\text{.}\)
(⇐) 
Adding a column to a matrix can never shrink its column space, so in all cases, \(\csp{A}\subseteq\csp{\augmented{A}{\vect{b}}}\text{.}\) However, if we assume that \(\rank{A}=\rank{\augmented{A}{\vect{b}}}\text{,}\) then by Theorem EDYES we conclude that \(\csp{A}=\csp{\augmented{A}{\vect{b}}}\text{.}\) Then \(\vect{b}\in\csp{\augmented{A}{\vect{b}}}=\csp{A}\) so by Theorem CSCS, \(\linearsystem{A}{\vect{b}}\) is consistent.
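As a small concrete illustration (a sketch with a made-up system, not part of the exercise), the rank test can be run in Sage.

    # consistency of LS(A, b) tracks rank(A) == rank([A | b])
    A = matrix(QQ, [[1, 2], [2, 4]])
    b = vector(QQ, [3, 6])    # three times the first column, so the system is consistent
    c = vector(QQ, [3, 7])    # not in the column space, so that system is inconsistent
    A.rank() == A.augment(b).rank()    # True
    A.rank() == A.augment(c).rank()    # False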

T25.

Suppose that \(V\) is a vector space with finite dimension. Let \(W\) be any subspace of \(V\text{.}\) Prove that \(W\) has finite dimension.

T33.

Part of Exercise B.T50 is the half of the proof where we assume the matrix \(A\) is nonsingular and prove that a set is a basis. In Solution T50.1 we proved directly that the set was both linearly independent and a spanning set. Shorten this part of the proof by applying Theorem G. Be careful, there is one subtlety.
Solution.
By Theorem DCM we know that \(\complex{n}\) has dimension \(n\text{.}\) So by Theorem G we need only establish that the set \(C\) is linearly independent or a spanning set. However, the hypotheses also require that \(C\) be of size \(n\text{.}\) We assumed that \(B=\set{\vectorlist{x}{n}}\) had size \(n\text{,}\) but there is no guarantee that \(C=\set{A\vect{x}_1,\,A\vect{x}_2,\,A\vect{x}_3,\,\dots,\,A\vect{x}_n}\) will have size \(n\text{.}\) There could be some “collapsing” or “collisions.”
Suppose we establish that \(C\) is linearly independent. Then \(C\) must have \(n\) distinct elements or else we could fashion a nontrivial relation of linear dependence involving duplicate elements.
If we instead choose to prove that \(C\) is a spanning set, then we could establish the uniqueness of the elements of \(C\) quite easily. Suppose that \(A\vect{x}_i=A\vect{x}_j\text{.}\) Then
\begin{align*} A(\vect{x}_i-\vect{x}_j)&=A\vect{x}_i - A\vect{x}_j=\zerovector\text{.} \end{align*}
Since \(A\) is nonsingular, we conclude that \(\vect{x}_i-\vect{x}_j=\zerovector\text{,}\) or \(\vect{x}_i=\vect{x}_j\text{,}\) contrary to our description of \(B\text{.}\)

T60.

Suppose that \(W\) is a vector space with dimension 5, and \(U\) and \(V\) are subspaces of \(W\text{,}\) each of dimension 3. Prove that \(U\cap V\) contains a nonzero vector. State a more general result.
Solution 1.
Let \(\set{\vect{u}_1,\,\vect{u}_2,\,\vect{u}_3}\) and \(\set{\vect{v}_1,\,\vect{v}_2,\,\vect{v}_3}\) be bases for \(U\) and \(V\) (respectively). Then, the set \(\set{\vect{u}_1,\,\vect{u}_2,\,\vect{u}_3,\,\vect{v}_1,\,\vect{v}_2,\,\vect{v}_3}\) is linearly dependent, since Theorem G says we cannot have 6 linearly independent vectors in a vector space of dimension 5. So we can assert that there is a nontrivial relation of linear dependence,
\begin{equation*} a_1\vect{u}_1+a_2\vect{u}_2+a_3\vect{u}_3+b_1\vect{v}_1+b_2\vect{v}_2+b_3\vect{v}_3=\zerovector \end{equation*}
where \(a_1,\,a_2,\,a_3\) and \(b_1,\,b_2,\,b_3\) are not all zero.
We can rearrange this equation as
\begin{equation*} a_1\vect{u}_1+a_2\vect{u}_2+a_3\vect{u}_3=-b_1\vect{v}_1-b_2\vect{v}_2-b_3\vect{v}_3 \end{equation*}
This is an equality of two vectors, so we can give this common vector a name, say \(\vect{w}\text{,}\)
\begin{equation*} \vect{w}=a_1\vect{u}_1+a_2\vect{u}_2+a_3\vect{u}_3=-b_1\vect{v}_1-b_2\vect{v}_2-b_3\vect{v}_3 \end{equation*}
This is the desired nonzero vector, as we will now show.
First, since \(\vect{w}=a_1\vect{u}_1+a_2\vect{u}_2+a_3\vect{u}_3\text{,}\) we can see that \(\vect{w}\in U\text{.}\) Similarly, \(\vect{w}=-b_1\vect{v}_1-b_2\vect{v}_2-b_3\vect{v}_3\text{,}\) so \(\vect{w}\in V\text{.}\) This establishes that \(\vect{w}\in U\cap V\) (Definition SI).
Is \(\vect{w}\neq\zerovector\text{?}\) Suppose not, in other words, suppose \(\vect{w}=\zerovector\text{.}\) Then
\begin{equation*} \zerovector=\vect{w}=a_1\vect{u}_1+a_2\vect{u}_2+a_3\vect{u}_3 \end{equation*}
Because \(\set{\vect{u}_1,\,\vect{u}_2,\,\vect{u}_3}\) is a basis for \(U\text{,}\) it is a linearly independent set and the relation of linear dependence above means we must conclude that \(a_1=a_2=a_3=0\text{.}\) By a similar process, we would conclude that \(b_1=b_2=b_3=0\text{.}\) But this is a contradiction since \(a_1,\,a_2,\,a_3,\,b_1,\,b_2,\,b_3\) were chosen so that some were nonzero. So \(\vect{w}\neq\zerovector\text{.}\)
How does this generalize? All we really needed was the original relation of linear dependence that resulted because we had “too many” vectors in \(W\text{.}\) A more general statement would be: Suppose that \(W\) is a vector space with dimension \(n\text{,}\) \(U\) is a subspace of dimension \(p\) and \(V\) is a subspace of dimension \(q\text{.}\) If \(p+q\gt n\text{,}\) then \(U\cap V\) contains a nonzero vector.
Solution 2.
The previous solution to this exercise is reminiscent of techniques from the proof of Theorem SID. If we assume the theorem, then a solution becomes much more direct. The key observation is that the subspace \(U+V\) is a subspace of \(W\) and so cannot have dimension greater than \(5\text{,}\) since a larger basis would violate Theorem G. So we have
\begin{align*} \dimension{U\cap V} &= \dimension{U} + \dimension{V} - \dimension{U+V}&&\text{Theorem SID}\\ &= 3 + 3 - \dimension{U+V}&&\text{Hypothesis}\\ &\geq 6 - 5 = 1&&\text{Theorem G} \end{align*}
So \(U\cap V\) is a nontrivial subspace and so contains a nonzero vector.