
Section LDS Linear Dependence and Spans

In any linearly dependent set there is always one vector that can be written as a linear combination of the others. This is the substance of the upcoming Theorem DLDS. Perhaps this will explain the use of the word “dependent.” In a linearly dependent set, at least one vector “depends” on the others (via a linear combination).
Indeed, because Theorem DLDS is an equivalence (Proof Technique E) some authors use this condition as a definition (Proof Technique D) of linear dependence. Then linear independence is defined as the logical opposite of linear dependence. Of course, we have chosen to take Definition LICV as our definition, and then follow with Theorem DLDS as a theorem.

Subsection LDSS Linearly Dependent Sets and Spans

If we use a linearly dependent set to construct a span, then we can always create the same infinite set with a starting set that is one vector smaller in size. We will illustrate this behavior in Example RSC5. However, this will not be possible if we build a span from a linearly independent set. So in a certain sense, using a linearly independent set to formulate a span is the best possible way — there are not any extra vectors being used to build up all the necessary linear combinations. OK, here is the theorem, and then the example.

Theorem DLDS. Dependency in Linearly Dependent Sets.

Suppose that \(S=\set{\vect{u}_1,\,\vect{u}_2,\,\vect{u}_3,\,\ldots,\,\vect{u}_n}\) is a set of vectors. Then \(S\) is a linearly dependent set if and only if there is an index \(t\text{,}\) \(1\leq t\leq n\text{,}\) such that \(\vect{u}_t\) is a linear combination of the vectors \(\vect{u}_1,\,\vect{u}_2,\,\ldots,\,\vect{u}_{t-1},\,\vect{u}_{t+1},\,\ldots,\,\vect{u}_n\text{.}\)

Proof.

(⇒) 
Suppose that \(S\) is linearly dependent, so there exists a nontrivial relation of linear dependence by Definition LICV. That is, there are scalars, \(\alpha_i\text{,}\) \(1\leq i\leq n\text{,}\) which are not all zero, such that
\begin{equation*} \lincombo{\alpha}{u}{n}=\zerovector\text{.} \end{equation*}
Since the \(\alpha_i\) cannot all be zero, choose one, say \(\alpha_t\text{,}\) that is nonzero. Then,
\begin{align*} \vect{u}_t &=\frac{-1}{\alpha_t}\left(-\alpha_t\vect{u}_t\right)&& \knowl{./knowl/xref/property-MICN.html}{\text{Property MICN}}\\ &= \frac{-1}{\alpha_t}\left( \alpha_1\vect{u}_1+ \cdots+ \alpha_{t-1}\vect{u}_{t-1}+ \alpha_{t+1}\vect{u}_{t+1}+ \cdots+ \alpha_n\vect{u}_n \right)&& \knowl{./knowl/xref/theorem-VSPCV.html}{\text{Theorem VSPCV}}\\ &= \frac{-\alpha_1}{\alpha_t}\vect{u}_1+ \cdots+ \frac{-\alpha_{t-1}}{\alpha_t}\vect{u}_{t-1}+ \frac{-\alpha_{t+1}}{\alpha_t}\vect{u}_{t+1}+ \cdots+ \frac{-\alpha_n}{\alpha_t}\vect{u}_n&& \knowl{./knowl/xref/theorem-VSPCV.html}{\text{Theorem VSPCV}} \end{align*}
Since the values of \(\frac{\alpha_i}{\alpha_t}\) are again scalars, we have expressed \(\vect{u}_t\) as a linear combination of the other elements of \(S\text{.}\)
(⇐) 
Assume that the vector \(\vect{u}_t\) is a linear combination of the other vectors in \(S\text{.}\) Write this linear combination, denoting the relevant scalars as \(\beta_1\text{,}\) \(\beta_2\text{,}\) …, \(\beta_{t-1}\text{,}\) \(\beta_{t+1}\text{,}\) …, \(\beta_n\text{,}\) as
\begin{align*} \vect{u_t} &= \beta_1\vect{u}_1+ \beta_2\vect{u}_2+ \cdots+ \beta_{t-1}\vect{u}_{t-1}+ \beta_{t+1}\vect{u}_{t+1}+ \cdots+ \beta_n\vect{u}_n \end{align*}
Then we have
\begin{align*} \beta_1\vect{u}_1 &+\cdots+ \beta_{t-1}\vect{u}_{t-1}+ (-1)\vect{u}_t+ \beta_{t+1}\vect{u}_{t+1}+ \cdots+ \beta_n\vect{u}_n\\ &=\vect{u}_t+(-1)\vect{u}_t&& \knowl{./knowl/xref/theorem-VSPCV.html}{\text{Theorem VSPCV}}\\ &=\left(1+\left(-1\right)\right)\vect{u}_t&& \knowl{./knowl/xref/property-DSAC.html}{\text{Property DSAC}}\\ &=0\vect{u}_t&& \knowl{./knowl/xref/property-AICN.html}{\text{Property AICN}}\\ &=\zerovector&& \knowl{./knowl/xref/definition-CVSM.html}{\text{Definition CVSM}} \end{align*}
So the scalars \(\beta_1,\,\beta_2,\,\beta_3,\,\ldots,\,\beta_{t-1},\,\beta_t=-1,\,\beta_{t+1},\,\ldots,\,\beta_n\) provide a nontrivial linear combination of the vectors in \(S\text{,}\) thus establishing that \(S\) is a linearly dependent set (Definition LICV).
This theorem can be used, sometimes repeatedly, to whittle down the size of a set of vectors used in a span construction. We have seen some of this already in Example SCAD, but in the next example we will detail some of the subtleties.

Example RSC5. Reducing a span in \(\complex{5}\).

Consider the set of \(n=4\) vectors from \(\complex{5}\text{,}\)
\begin{equation*} R=\set{\vect{v}_1,\,\vect{v}_2,\,\vect{v}_3,\,\vect{v}_4} = \set{ \colvector{1\\2\\-1\\3\\2},\, \colvector{2\\1\\3\\1\\2},\, \colvector{0\\-7\\6\\-11\\-2},\, \colvector{4\\1\\2\\1\\6} } \end{equation*}
and define \(V=\spn{R}\text{.}\)
To employ Theorem LIVHS, we form a \(5\times 4\) matrix, \(D\text{,}\) and row-reduce to understand solutions to the homogeneous system \(\homosystem{D}\text{,}\)
\begin{equation*} D= \begin{bmatrix} 1&2&0&4\\ 2&1&-7&1\\ -1&3&6&2\\ 3&1&-11&1\\ 2&2&-2&6 \end{bmatrix} \rref \begin{bmatrix} \leading{1}&0&0&4\\ 0&\leading{1}&0&0\\ 0&0&\leading{1}&1\\ 0&0&0&0\\ 0&0&0&0 \end{bmatrix} \end{equation*}
We can find infinitely many solutions to this system, most of them nontrivial, and we choose any one we like to build a relation of linear dependence on \(R\text{.}\) We begin with the choice of the free variable \(x_4=1\text{,}\) to build a solution \(\vect{x}\text{,}\) that we parlay into a nontrivial relation of linear dependence
\begin{align*} \vect{x}&=\colvector{-4\\0\\-1\\1} & (-4)\vect{v}_1+0\vect{v}_2+(-1)\vect{v}_3+1\vect{v}_4&=\zerovector\text{.} \end{align*}
Theorem DLDS guarantees that we can solve this relation of linear dependence for some vector in \(R\text{,}\) but the choice of which one is up to us. Notice however that \(\vect{v}_2\) has a zero coefficient. In this case, we cannot choose to solve for \(\vect{v}_2\text{.}\) Maybe some other relation of linear dependence would produce a nonzero coefficient for \(\vect{v}_2\) if we just had to solve for this vector. Unfortunately, this example has been engineered to always produce a zero coefficient here, as you can see from solving the homogeneous system. Every solution has \(x_2=0\text{!}\)
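To see this explicitly, apply Theorem VFSLS to the reduced row-echelon form above. With the single free variable \(x_4\text{,}\) every solution has the form
\begin{equation*} \vect{x}=x_4\colvector{-4\\0\\-1\\1} \end{equation*}
so the second entry, the prospective coefficient on \(\vect{v}_2\text{,}\) is zero no matter the choice of \(x_4\text{.}\)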
OK, if we are convinced that we cannot solve for \(\vect{v}_2\text{,}\) let us instead solve for \(\vect{v}_3\text{,}\)
\begin{equation*} \vect{v}_3=(-4)\vect{v}_1+0\vect{v}_2+1\vect{v}_4=(-4)\vect{v}_1+1\vect{v}_4\text{.} \end{equation*}
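As a quick arithmetic check of this expression,
\begin{equation*} (-4)\colvector{1\\2\\-1\\3\\2}+1\colvector{4\\1\\2\\1\\6} = \colvector{0\\-7\\6\\-11\\-2} = \vect{v}_3\text{.} \end{equation*}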
We now claim that this particular equation will allow us to write
\begin{equation*} V=\spn{R}= \spn{\set{\vect{v}_1,\,\vect{v}_2,\,\vect{v}_3,\,\vect{v}_4}}= \spn{\set{\vect{v}_1,\,\vect{v}_2,\,\vect{v}_4}} \end{equation*}
in essence declaring \(\vect{v}_3\) as surplus for the task of building \(V\) as a span. This claim is an equality of two sets, so we will use Definition SE to establish it carefully. Let \(R^\prime=\set{\vect{v}_1,\,\vect{v}_2,\,\vect{v}_4}\) and \(V^\prime=\spn{R^\prime}\text{.}\) We want to show that \(V=V^\prime\text{.}\)
First show that \(V^\prime\subseteq V\text{.}\) Since every vector of \(R^\prime\) is in \(R\text{,}\) any vector we can construct in \(V^\prime\) as a linear combination of vectors from \(R^\prime\) can also be constructed as a vector in \(V\) by the same linear combination of the same vectors in \(R\text{.}\) That was easy; now turn it around.
Next show that \(V\subseteq V^\prime\text{.}\) Choose any \(\vect{v}\) from \(V\text{.}\) So there are scalars \(\alpha_1,\,\alpha_2,\,\alpha_3,\,\alpha_4\) such that
\begin{align*} \vect{v}&= \alpha_1\vect{v}_1+\alpha_2\vect{v}_2+\alpha_3\vect{v}_3+\alpha_4\vect{v}_4\\ &=\alpha_1\vect{v}_1+\alpha_2\vect{v}_2+ \alpha_3\left((-4)\vect{v}_1+1\vect{v}_4\right)+ \alpha_4\vect{v}_4\\ &=\alpha_1\vect{v}_1+\alpha_2\vect{v}_2+ \left((-4\alpha_3)\vect{v}_1+\alpha_3\vect{v}_4\right)+ \alpha_4\vect{v}_4\\ &=\left(\alpha_1-4\alpha_3\right)\vect{v}_1+ \alpha_2\vect{v}_2+ \left(\alpha_3+\alpha_4\right)\vect{v}_4\text{.} \end{align*}
This equation says that \(\vect{v}\) can then be written as a linear combination of the vectors in \(R^\prime\) and hence qualifies for membership in \(V^\prime\text{.}\) So \(V\subseteq V^\prime\) and we have established that \(V=V^\prime\text{.}\)
If \(R^\prime\) was also linearly dependent (it is not), we could reduce the set even further. Notice that we could have chosen to eliminate any one of \(\vect{v}_1\text{,}\) \(\vect{v}_3\) or \(\vect{v}_4\text{,}\) but somehow \(\vect{v}_2\) is essential to the creation of \(V\) since it cannot be replaced by any linear combination of \(\vect{v}_1\text{,}\) \(\vect{v}_3\) or \(\vect{v}_4\text{.}\)

Sage RLD. Relations of Linear Dependence.

Example RSC5 turned on a nontrivial relation of linear dependence (Definition RLDCV) on the set \(\set{\vect{v}_1,\,\vect{v}_2,\,\vect{v}_3,\,\vect{v}_4}\text{.}\) Besides indicating linear independence, the Sage vector space method .linear_dependence() produces relations of linear dependence for linearly dependent sets. Here is how we would employ this method in Example RSC5. The optional argument zeros='right' will produce results consistent with our work here; you can also experiment with zeros='left' (which is the default).
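A sketch of this computation follows; the variable names are our own choices, and the relation Sage reports could differ from ours by a scalar multiple.

V = QQ^5
v1 = vector(QQ, [1, 2, -1, 3, 2])
v2 = vector(QQ, [2, 1, 3, 1, 2])
v3 = vector(QQ, [0, -7, 6, -11, -2])
v4 = vector(QQ, [4, 1, 2, 1, 6])
R = [v1, v2, v3, v4]
# relations of linear dependence on R, with zeros pushed to the right
L = V.linear_dependence(R, zeros='right')
L[0]
# the span is unchanged when v3 is removed
V.span(R) == V.span([v1, v2, v4])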
You can check that the list L has just one element (maybe with len(L)), but realize that any multiple of the vector L[0] is also a relation of linear dependence on R, most of which are nontrivial. Notice that we have verified the final conclusion of Example RSC5 with a comparison of two spans.
We will give the .linear_dependence() method a real workout in the next Sage subsection (Sage COV) — this is just a quick introduction.

Subsection COV Casting Out Vectors

In Example RSC5 we used four vectors to create a span. With a relation of linear dependence in hand, we were able to “toss out” one of these four vectors and create the same span from a subset of just three vectors from the original set of four. We did have to take some care as to just which vector we tossed out. In the next example, we will be more methodical about just how we choose to eliminate vectors from a linearly dependent set while preserving a span.

Example COV. Casting out vectors.

We begin with a set \(S\) containing seven vectors from \(\complex{4}\)
\begin{equation*} S=\set{ \colvector{1\\2\\0\\-1},\, \colvector{4\\8\\0\\-4},\, \colvector{0\\-1\\2\\2},\, \colvector{-1\\3\\-3\\4},\, \colvector{0\\9\\-4\\8},\, \colvector{7\\-13\\12\\-31},\, \colvector{-9\\7\\-8\\37} } \end{equation*}
and define \(W=\spn{S}\text{.}\)
The set \(S\) is obviously linearly dependent by Theorem MVSLD, since we have \(n=7\) vectors from \(\complex{4}\text{.}\) So we can slim down \(S\) some, and still create \(W\) as the span of a smaller set of vectors.
As a device for identifying relations of linear dependence among the vectors of \(S\text{,}\) we place the seven column vectors of \(S\) into a matrix as columns,
\begin{equation*} A=\matrixcolumns{A}{7}=\begin{bmatrix} 1 & 4 & 0 & -1 & 0 & 7 & -9 \\ 2 & 8 & -1 & 3 & 9 & -13 & 7\\ 0 & 0 & 2 & -3 & -4 & 12 & -8\\ -1 & -4 & 2 & 4 & 8 & -31 & 37 \end{bmatrix}\text{.} \end{equation*}
By Theorem SLSLC a nontrivial solution to \(\homosystem{A}\) will give us a nontrivial relation of linear dependence (Definition RLDCV) on the columns of \(A\) (which are the elements of the set \(S\)). The row-reduced form for \(A\) is the matrix
\begin{equation*} B=\begin{bmatrix} \leading{1} & 4 & 0 & 0 & 2 & 1 & -3\\ 0 & 0 & \leading{1} & 0 & 1 & -3 & 5\\ 0 & 0 & 0 & \leading{1} & 2 & -6 & 6\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \end{equation*}
so we can easily create solutions to the homogeneous system \(\homosystem{A}\) using the free variables \(x_2,\,x_5,\,x_6,\,x_7\text{.}\) Any such solution will provide a relation of linear dependence on the columns of \(A\text{.}\) These solutions will allow us to solve for one column vector as a linear combination of some others, in the spirit of Theorem DLDS, and remove that vector from the set. We will set about forming these linear combinations methodically.
Set the free variable \(x_2=1\text{,}\) and set the other free variables to zero. Then a solution to \(\linearsystem{A}{\zerovector}\) and the resulting nontrivial relation of linear dependence are
\begin{align*} \vect{x}&=\colvector{-4\\1\\0\\0\\0\\0\\0} & (-4)\vect{A}_1+ 1\vect{A}_2+ 0\vect{A}_3+ 0\vect{A}_4+ 0\vect{A}_5+ 0\vect{A}_6+ 0\vect{A}_7 &=\zerovector\text{.} \end{align*}
This can then be arranged and solved for \(\vect{A}_2\text{,}\) resulting in \(\vect{A}_2\) expressed as a linear combination of \(\set{\vect{A}_1,\,\vect{A}_3,\,\vect{A}_4}\text{,}\) \(\vect{A}_2= 4\vect{A}_1+ 0\vect{A}_3+ 0\vect{A}_4\text{.}\)
This means that \(\vect{A}_2\) is surplus, and we can create \(W\) just as well with a smaller set with this vector removed, \(W=\spn{\set{\vect{A}_1,\,\vect{A}_3,\,\vect{A}_4,\,\vect{A}_5,\,\vect{A}_6,\,\vect{A}_7}}\text{.}\)
Technically, this set equality for \(W\) requires a proof, in the spirit of Example RSC5, but we will bypass this requirement here, and in the next few paragraphs.
Now, set the free variable \(x_5=1\text{,}\) and set the other free variables to zero. Then a solution to \(\linearsystem{A}{\zerovector}\) and the resulting nontrivial relation of linear dependence are
\begin{align*} \vect{x}&=\colvector{-2\\0\\-1\\-2\\1\\0\\0} & (-2)\vect{A}_1+ 0\vect{A}_2+ (-1)\vect{A}_3+ (-2)\vect{A}_4+ 1\vect{A}_5+ 0\vect{A}_6+ 0\vect{A}_7 &=\zerovector\text{.} \end{align*}
This can then be arranged and solved for \(\vect{A}_5\text{,}\) resulting in \(\vect{A}_5\) expressed as a linear combination of \(\set{\vect{A}_1,\,\vect{A}_3,\,\vect{A}_4}\text{,}\) \(\vect{A}_5= 2\vect{A}_1+ 1\vect{A}_3+ 2\vect{A}_4\text{.}\)
This means that \(\vect{A}_5\) is surplus, and we can create \(W\) just as well with a smaller set with this vector removed, \(W=\spn{\left\{\vect{A}_1,\,\vect{A}_3,\,\vect{A}_4,\,\vect{A}_6,\,\vect{A}_7\right\}}\text{.}\)
Do it again, set the free variable \(x_6=1\text{,}\) and set the other free variables to zero. Then a solution to \(\linearsystem{A}{\zerovector}\) and the resulting nontrivial relation of linear dependence are
\begin{align*} \vect{x}&=\colvector{-1\\0\\3\\6\\0\\1\\0} & (-1)\vect{A}_1+ 0\vect{A}_2+ 3\vect{A}_3+ 6\vect{A}_4+ 0\vect{A}_5+ 1\vect{A}_6+ 0\vect{A}_7 &=\zerovector\text{.} \end{align*}
This can then be arranged and solved for \(\vect{A}_6\text{,}\) resulting in \(\vect{A}_6\) expressed as a linear combination of \(\set{\vect{A}_1,\,\vect{A}_3,\,\vect{A}_4}\text{,}\) \(\vect{A}_6=1\vect{A}_1+ (-3)\vect{A}_3+ (-6)\vect{A}_4\text{.}\)
This means that \(\vect{A}_6\) is surplus, and we can create \(W\) just as well with a smaller set with this vector removed, \(W=\spn{\set{\vect{A}_1,\,\vect{A}_3,\,\vect{A}_4,\,\vect{A}_7}}\text{.}\)
Set the free variable \(x_7=1\text{,}\) and set the other free variables to zero. Then a solution to \(\linearsystem{A}{\zerovector}\) and the resulting nontrivial relation of linear dependence are
\begin{align*} \vect{x}&=\colvector{3\\0\\-5\\-6\\0\\0\\1} & 3\vect{A}_1+ 0\vect{A}_2+ (-5)\vect{A}_3+ (-6)\vect{A}_4+ 0\vect{A}_5+ 0\vect{A}_6+ 1\vect{A}_7 &=\zerovector\text{.} \end{align*}
This can then be arranged and solved for \(\vect{A}_7\text{,}\) resulting in \(\vect{A}_7\) expressed as a linear combination of \(\set{\vect{A}_1,\,\vect{A}_3,\,\vect{A}_4}\text{,}\) \(\vect{A}_7= (-3)\vect{A}_1+ 5\vect{A}_3+ 6\vect{A}_4\text{.}\)
This means that \(\vect{A}_7\) is surplus, and we can create \(W\) just as well with a smaller set with this vector removed, \(W=\spn{\set{\vect{A}_1,\,\vect{A}_3,\,\vect{A}_4}}\text{.}\)
You might think we could keep this up, but we have run out of free variables. And not coincidentally, the set \(\set{\vect{A}_1,\,\vect{A}_3,\,\vect{A}_4}\) is linearly independent (check this!). It should be clear how each free variable was used to eliminate a column from the set used to span the column space, as this will be the essence of the proof of the next theorem. The column vectors in \(S\) were not chosen entirely at random; they are the columns of Archetype I. See if you can mimic this example using the columns of Archetype J. Go ahead, we’ll go grab a cup of coffee and be back before you finish up.
For extra credit, notice that the vector
\begin{equation*} \vect{b}=\colvector{3\\9\\1\\4} \end{equation*}
is the vector of constants in the definition of Archetype I. Since the system \(\linearsystem{A}{\vect{b}}\) is consistent, we know by Theorem SLSLC that \(\vect{b}\) is a linear combination of the columns of \(A\text{,}\) or stated equivalently, \(\vect{b}\in W\text{.}\) This means that \(\vect{b}\) must also be a linear combination of just the three columns \(\vect{A}_1,\,\vect{A}_3,\,\vect{A}_4\text{.}\) Can you find such a linear combination? Did you notice that there is just a single (unique) answer? Hmmmm.

Sage COV. Casting Out Vectors.

We will redo Example COV, though somewhat tersely, just producing the justification for each time we toss a vector (a specific relation of linear dependence), and then verifying that the resulting spans, each with one fewer vector, still produce the original span. We also introduce the .remove() method for lists. Ready? Here we go.
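Here is a sketch of the session we have in mind, with our own variable names and a span check after each removal; the exact relations Sage reports may be scaled differently than those in Example COV.

V = QQ^4
v1 = vector(QQ, [1, 2, 0, -1])
v2 = vector(QQ, [4, 8, 0, -4])
v3 = vector(QQ, [0, -1, 2, 2])
v4 = vector(QQ, [-1, 3, -3, 4])
v5 = vector(QQ, [0, 9, -4, 8])
v6 = vector(QQ, [7, -13, 12, -31])
v7 = vector(QQ, [-9, 7, -8, 37])
S = [v1, v2, v3, v4, v5, v6, v7]
W = V.span(S)
# relations of linear dependence on S, free variables to the right
D = V.linear_dependence(S, zeros='right')
D
# toss v2, then confirm the span is unchanged
S.remove(v2)
W == V.span(S)
# toss v5, confirm again
S.remove(v5)
W == V.span(S)
# toss v6
S.remove(v6)
W == V.span(S)
# toss v7
S.remove(v7)
W == V.span(S)
S
# bonus: the relations in D are themselves linearly independent (in QQ^7)
(QQ^7).linear_dependence(D) == []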
Notice that S begins with all seven original vectors, and slowly gets whittled down to just the list [v1, v3, v4]. If you experiment with the above commands, be sure to return to the start and work your way through in order, or the results will not be right.
As a bonus, notice that the set of relations of linear dependence provided by Sage, D, is itself a linearly independent set (but within a very different vector space). Is that too weird?
Now, can you answer the extra credit question from Example COV using Sage?
Example COV deserves your careful attention, since this important example motivates the following very fundamental theorem.

Theorem BS. Basis of a Span.

Suppose that \(S=\set{\vect{v}_1,\,\vect{v}_2,\,\vect{v}_3,\,\ldots,\,\vect{v}_n}\) is a set of column vectors. Define \(W=\spn{S}\) and let \(A\) be the matrix whose columns are the vectors from \(S\text{.}\) Let \(B\) be the reduced row-echelon form of \(A\text{,}\) with \(D=\set{\scalarlist{d}{r}}\) the set of indices for the pivot columns of \(B\text{.}\) Then \(T=\set{\vect{v}_{d_1},\,\vect{v}_{d_2},\,\vect{v}_{d_3},\,\ldots,\,\vect{v}_{d_r}}\) is a linearly independent set and \(W=\spn{T}\text{.}\)

Proof.

To prove that \(T\) is linearly independent, begin with a relation of linear dependence on \(T\text{,}\)
\begin{equation*} \alpha_1\vect{v}_{d_1}+\alpha_2\vect{v}_{d_2}+\alpha_3\vect{v}_{d_3}+\cdots+\alpha_r\vect{v}_{d_r}=\zerovector \end{equation*}
and we will conclude that the only possibility for the scalars \(\alpha_i\) is that they are all zero. Denote the non-pivot columns of \(B\) by \(F=\set{\scalarlist{f}{n-r}}\text{.}\) Then we can preserve the equality by adding a big fat zero to the linear combination,
\begin{equation*} \alpha_1\vect{v}_{d_1}+\alpha_2\vect{v}_{d_2}+\alpha_3\vect{v}_{d_3}+\cdots+\alpha_r\vect{v}_{d_r}+ 0\vect{v}_{f_1}+0\vect{v}_{f_2}+0\vect{v}_{f_3}+\cdots+0\vect{v}_{f_{n-r}}=\zerovector\text{.} \end{equation*}
By Theorem SLSLC, the scalars in this linear combination are a solution to the homogeneous system \(\homosystem{A}\text{,}\) when suitably reordered to match the order of the columns of \(A\text{.}\) But notice that this is the solution obtained by setting each free variable to zero. If we consider the description of a solution vector in the conclusion of Theorem VFSLS, in the case of a homogeneous system, then we see that if all the free variables are set to zero the resulting solution vector is the zero vector. So it must be that \(\alpha_i=0\text{,}\) \(1\leq i\leq r\text{.}\) This implies by Definition LICV that \(T\) is a linearly independent set.
The second conclusion of this theorem is an equality of sets (Definition SE). Since \(T\) is a subset of \(S\text{,}\) any linear combination of elements of the set \(T\) can also be viewed as a linear combination of elements of the set \(S\text{.}\) So \(\spn{T}\subseteq\spn{S}=W\text{.}\) It remains to prove that \(W=\spn{S}\subseteq\spn{T}\text{.}\)
Choose \(\vect{w}\in W\text{.}\) Then \(\vect{w}\) is a linear combination of the vectors in \(S\text{,}\) and thus by Theorem SLSLC the system \(\linearsystem{A}{\vect{w}}\) is consistent. The same sequence of row-operations that transforms \(A\) to \(B\) will transform the augmented matrix \(\augmented{A}{\vect{w}}\) into \(\augmented{B}{\vect{w}^\prime}\text{,}\) where \(\vect{w}^\prime\) is simply the entries of the final column that results from applying the sequence of row-operations to \(\vect{w}\text{.}\) The solution to this system that we obtain from Theorem VFSLS when all the free variables are set to zero is then the vector \(\vect{c}\) with
\begin{equation*} \vectorentry{\vect{c}}{i}= \begin{cases} 0&\text{if }i\in F\\ \vectorentry{\vect{w}^\prime}{k}&\text{if }i\in D,i=d_k \end{cases}\text{.} \end{equation*}
In other words, viewed as a solution to \(\linearsystem{A}{\vect{w}}\text{,}\) Theorem SLSLC tells us that \(\vect{w}\) is a linear combination of the columns of \(A\) where the non-pivot columns, \(\vect{v}_{f_1},\,\vect{v}_{f_2},\,\vect{v}_{f_3},\,\ldots,\,\vect{v}_{f_{n-r}}\text{,}\) all have zero coefficients in the linear combination. So \(\vect{w}\) is a linear combination of just the vectors in \(T\text{,}\) and so \(\vect{w}\in\spn{T}\text{,}\) as desired. Therefore, \(W=\spn{S}\subseteq\spn{T}\text{,}\) and by Definition SE, \(W=\spn{T}\text{.}\)
In Example COV, we tossed out vectors one at a time. But in each instance, we rewrote the offending vector as a linear combination of those vectors with the column indices of the pivot columns of the reduced row-echelon form of the matrix of columns. In the proof of Theorem BS, we accomplish this reduction in one big step. In Example COV we arrived at a linearly independent set at exactly the same moment that we ran out of free variables to exploit. This was not a coincidence; it is the substance of our conclusion of linear independence in Theorem BS.
Here is a straightforward application of Theorem BS.

Example RSC4. Reducing a span in \(\complex{4}\).

Begin with a set of five vectors from \(\complex{4}\text{,}\)
\begin{equation*} S=\set{ \colvector{ 1 \\ 1 \\ 2 \\ 1},\, \colvector{ 2 \\ 2 \\ 4 \\ 2},\, \colvector{ 2 \\ 0 \\ -1 \\ 1},\, \colvector{ 7 \\ 1 \\ -1 \\ 4},\, \colvector{ 0 \\ 2 \\ 5 \\ 1} } \end{equation*}
and let \(W=\spn{S}\text{.}\) To arrive at a (smaller) linearly independent set, follow the procedure described in Theorem BS. Place the vectors from \(S\) into a matrix as columns, and row-reduce,
\begin{equation*} \begin{bmatrix} 1 & 2 & 2 & 7 & 0 \\ 1 & 2 & 0 & 1 & 2 \\ 2 & 4 & -1 & -1 & 5 \\ 1 & 2 & 1 & 4 & 1 \end{bmatrix} \rref \begin{bmatrix} \leading{1} & 2 & 0 & 1 & 2 \\ 0 & 0 & \leading{1} & 3 & -1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}\text{.} \end{equation*}
Columns 1 and 3 are the pivot columns (\(D=\set{1,\,3}\)) so the set
\begin{equation*} T=\set{ \colvector{ 1 \\ 1 \\ 2 \\ 1},\, \colvector{ 2 \\ 0 \\ -1 \\ 1} } \end{equation*}
is linearly independent and \(\spn{T}=\spn{S}=W\text{.}\) Boom!
Since the reduced row-echelon form of a matrix is unique (Theorem RREFU), the procedure of Theorem BS leads us to a unique set \(T\text{.}\) However, there is a wide variety of possibilities for sets \(T\) that are linearly independent and which can be employed in a span to create \(W\text{.}\) Without proof, we list two other possibilities
\begin{align*} T^{\prime}&=\set{ \colvector{ 2 \\ 2 \\ 4 \\ 2},\, \colvector{ 2 \\ 0 \\ -1 \\ 1} } & T^{*}&=\set{ \colvector{3 \\ 1 \\ 1 \\ 2},\, \colvector{-1 \\ 1 \\ 3 \\ 0} }\text{.} \end{align*}
Can you prove that \(T^{\prime}\) and \(T^{*}\) are linearly independent sets and \(W=\spn{S}=\spn{T^{\prime}}=\spn{T^{*}}\text{?}\) You would get an early exposure to several key ideas by doing so.

Sage RS. Reducing a Span.

Theorem BS allows us to construct a reduced spanning set for a span. As with the theorem, employing Sage we begin by constructing a matrix with the vectors of the spanning set as columns. Here is a do-over of Example RSC4, illustrating the use of Theorem BS in Sage.
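A sketch of this computation, with our own variable names:

V = QQ^4
v1 = vector(QQ, [1, 1, 2, 1])
v2 = vector(QQ, [2, 2, 4, 2])
v3 = vector(QQ, [2, 0, -1, 1])
v4 = vector(QQ, [7, 1, -1, 4])
v5 = vector(QQ, [0, 2, 5, 1])
S = [v1, v2, v3, v4, v5]
# the vectors of the spanning set become columns of a matrix
A = matrix(S).transpose()
# keep only the vectors indexed by the pivot columns, as in Theorem BS
T = [S[i] for i in A.pivots()]
T
# verify both conclusions of Theorem BS
V.linear_dependence(T) == []
V.span(T) == V.span(S)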
Notice how we compute T with the single line that mirrors the construction of the set \(T=\set{\vect{v}_{d_1},\,\vect{v}_{d_2},\,\vect{v}_{d_3},\,\ldots,\,\vect{v}_{d_r}}\) in the statement of Theorem BS. Again, the row-reducing is hidden in the use of the .pivots() matrix method, which necessarily must compute the reduced row-echelon form. The final two compute cells verify both conclusions of the theorem.

Example RES. Reworking elements of a span.

Begin with a set of five vectors from \(\complex{4}\text{,}\)
\begin{equation*} R=\set{ \colvector{ 2 \\ 1 \\ 3 \\ 2 },\, \colvector{ -1 \\ 1 \\ 0 \\ 1 },\, \colvector{ -8 \\ -1 \\ -9 \\ -4 },\, \colvector{ 3 \\ 1 \\ -1 \\ -2 },\, \colvector{ -10 \\ -1 \\ -1 \\ 4} }\text{.} \end{equation*}
It is easy to create elements of \(X=\spn{R}\) — we will create one at random,
\begin{equation*} \vect{y}= 6\colvector{ 2 \\ 1 \\ 3 \\ 2 }+ (-7)\colvector{ -1 \\ 1 \\ 0 \\ 1 }+ 1\colvector{ -8 \\ -1 \\ -9 \\ -4 }+ 6\colvector{ 3 \\ 1 \\ -1 \\ -2 }+ 2\colvector{ -10 \\ -1 \\ -1 \\ 4} = \colvector{9\\2\\1\\-3}\text{.} \end{equation*}
We know we can replace \(R\) by a smaller set (since it is obviously linearly dependent by Theorem MVSLD) that will create the same span. Here goes,
\begin{equation*} \begin{bmatrix} 2 & -1 & -8 & 3 & -10 \\ 1 & 1 & -1 & 1 & -1 \\ 3 & 0 & -9 & -1 & -1 \\ 2 & 1 & -4 & -2 & 4 \end{bmatrix} \rref \begin{bmatrix} \leading{1} & 0 & -3 & 0 & -1 \\ 0 & \leading{1} & 2 & 0 & 2 \\ 0 & 0 & 0 & \leading{1} & -2 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}\text{.} \end{equation*}
So, if we collect the first, second and fourth vectors from \(R\text{,}\)
\begin{equation*} P=\set{ \colvector{ 2 \\ 1 \\ 3 \\ 2 },\, \colvector{ -1 \\ 1 \\ 0 \\ 1 },\, \colvector{ 3 \\ 1 \\ -1 \\ -2 } } \end{equation*}
then \(P\) is linearly independent and \(\spn{P}=\spn{R}=X\) by Theorem BS. Since we built \(\vect{y}\) as an element of \(\spn{R}\) it must also be an element of \(\spn{P}\text{.}\) Can we write \(\vect{y}\) as a linear combination of just the three vectors in \(P\text{?}\) The answer is, of course, yes. But let us compute an explicit linear combination just for fun. By Theorem SLSLC we can get such a linear combination by solving a system of equations with the column vectors of \(P\) as the columns of a coefficient matrix, and \(\vect{y}\) as the vector of constants.
Employing an augmented matrix to solve this system,
\begin{equation*} \begin{bmatrix} 2 & -1 & 3 & 9 \\ 1 & 1 & 1 & 2 \\ 3 & 0 & -1 & 1 \\ 2 & 1 & -2 & -3 \end{bmatrix} \rref \begin{bmatrix} \leading{1} & 0 & 0 & 1 \\ 0 & \leading{1} & 0 & -1 \\ 0 & 0 & \leading{1} & 2 \\ 0 & 0 & 0 & 0 \end{bmatrix}\text{.} \end{equation*}
So we see, as expected, that
\begin{equation*} 1\colvector{ 2 \\ 1 \\ 3 \\ 2 }+ (-1)\colvector{ -1 \\ 1 \\ 0 \\ 1 }+ 2\colvector{ 3 \\ 1 \\ -1 \\ -2 } =\colvector{9 \\ 2 \\ 1 \\ -3} =\vect{y}\text{.} \end{equation*}
A key feature of this example is that the linear combination that expresses \(\vect{y}\) as a linear combination of the vectors in \(P\) is unique. This is a consequence of the linear independence of \(P\text{.}\) The linearly independent set \(P\) is smaller than \(R\text{,}\) but still just (barely) big enough to create elements of the set \(X=\spn{R}\text{.}\) There are many, many ways to write \(\vect{y}\) as a linear combination of the five vectors in \(R\) (the appropriate system of equations to verify this claim yields two free variables in the description of the solution set), yet there is precisely one way to write \(\vect{y}\) as a linear combination of the three vectors in \(P\text{.}\)

Sage RES. Reworking a Span.

As another demonstration of using Sage to help us understand spans, linear combinations, linear independence and reduced row-echelon form, we will recreate parts of Example RES. Most of this should be familiar, but see the comments following.
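A sketch of the relevant computations, with our own variable names (except coeff, which is referenced below):

V = QQ^4
v1 = vector(QQ, [2, 1, 3, 2])
v2 = vector(QQ, [-1, 1, 0, 1])
v3 = vector(QQ, [-8, -1, -9, -4])
v4 = vector(QQ, [3, 1, -1, -2])
v5 = vector(QQ, [-10, -1, -1, 4])
R = [v1, v2, v3, v4, v5]
# an element of X = <R>, built as in Example RES
y = 6*v1 + (-7)*v2 + 1*v3 + 6*v4 + 2*v5
y
X = V.span(R)
X
y in X
# the reduced spanning set from Theorem BS
P = [v1, v2, v4]
X == V.span(P)
# solve for y as a linear combination of the vectors in P
coeff = matrix(P).transpose()
coeff.solve_right(y)
# uniqueness: trivial null space, and P is linearly independent
coeff.right_kernel()
V.linear_dependence(P) == []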
The final two results — a trivial null space for coeff and the linear independence of P — both individually imply that the solution to the system of equations (just prior) is unique. Sage produces its own linearly independent spanning set for each span, as we see whenever we inquire about a span.
Can you extract the three vectors that Sage uses to span X and solve the appropriate system of equations to see how to write y as a linear combination of these three vectors? Once you have done that, check your answer by hand and think about how using Sage could have been overkill for this question.

Reading Questions LDS Reading Questions

1. Calculate linear combination from dependent set.

Let \(S\) be the linearly dependent set of three vectors below.
\begin{equation*} S=\set{\colvector{1\\10\\100\\1000},\,\colvector{1\\1\\1\\1},\,\colvector{5\\23\\203\\2003}} \end{equation*}
Write one vector from \(S\) as a linear combination of the other two and include this vector equality in your response. (You should be able to do this on sight, rather than doing some computations.) Convert this expression into a nontrivial relation of linear dependence on \(S\text{.}\)

2. "Dependent" in definition of linear dependence.

Explain why the word “dependent” is used in the definition of linear dependence.

3. Prefer linearly dependent spanning set?

Suppose that \(Y=\spn{P}=\spn{Q}\text{,}\) where \(P\) is a linearly dependent set and \(Q\) is linearly independent. Would you rather use \(P\) or \(Q\) to describe \(Y\text{?}\) Why?

Exercises LDS Exercises

C20.

Let \(T\) be the set of columns of the matrix \(B\) below. Define \(W=\spn{T}\text{.}\) Find a set \(R\) so that (1) \(R\) has 3 vectors, (2) \(R\) is a subset of \(T\text{,}\) and (3) \(W=\spn{R}\text{.}\)
\begin{equation*} B= \begin{bmatrix} -3 & 1 & -2 & 7\\ -1 & 2 & 1 & 4\\ 1 & 1 & 2 & -1 \end{bmatrix} \end{equation*}
Solution.
Let \(T=\set{\vect{w}_1,\,\vect{w}_2,\,\vect{w}_3,\,\vect{w}_4}\text{.}\) The vector \(\colvector{2\\-1\\0\\1}\) is a solution to the homogeneous system with the matrix \(B\) as the coefficient matrix (check this!). By Theorem SLSLC it provides the scalars for a linear combination of the columns of \(B\) (the vectors in \(T\)) that equals the zero vector, a relation of linear dependence on \(T\)
\begin{equation*} 2\vect{w}_1+(-1)\vect{w}_2+(1)\vect{w}_4=\zerovector\text{.} \end{equation*}
We can rearrange this equation by solving for \(\vect{w}_4\)
\begin{equation*} \vect{w}_4=(-2)\vect{w}_1+\vect{w}_2\text{.} \end{equation*}
This equation tells us that the vector \(\vect{w}_4\) is superfluous in the span construction that creates \(W\text{.}\) So \(W=\spn{\set{\vect{w}_1,\,\vect{w}_2,\,\vect{w}_3}}\text{.}\) The requested set is \(R=\set{\vect{w}_1,\,\vect{w}_2,\,\vect{w}_3}\text{.}\)

C40.

Verify that the set \(R^\prime=\set{\vect{v}_1,\,\vect{v}_2,\,\vect{v}_4}\) at the end of Example RSC5 is linearly independent.

C50.

Consider the set of vectors from \(\complex{3}\text{,}\) \(W\text{,}\) given below. Find a linearly independent set \(T\) that contains three vectors from \(W\) and such that \(\spn{W}=\spn{T}\text{.}\)
\begin{equation*} W= \set{\vect{v}_1,\,\vect{v}_2,\,\vect{v}_3,\,\vect{v}_4,\,\vect{v}_5} =\set{ \colvector{2\\1\\1},\, \colvector{-1\\-1\\1},\, \colvector{1\\2\\3},\, \colvector{3\\1\\3},\, \colvector{0\\1\\-3} }\text{.} \end{equation*}
Solution.
To apply Theorem BS, we formulate a matrix \(A\) whose columns are \(\vect{v}_1,\,\vect{v}_2,\,\vect{v}_3,\,\vect{v}_4,\,\vect{v}_5\text{.}\) Then we row-reduce \(A\text{.}\) After row-reducing, we obtain
\begin{equation*} \begin{bmatrix} \leading{1} & 0 & 0 & 2 & -1\\ 0 & \leading{1} & 0 & 1 & -2\\ 0 & 0 & \leading{1} & 0 & 0 \end{bmatrix}\text{.} \end{equation*}
From this we see that the pivot columns are \(D=\set{1,\,2,\,3}\text{.}\) Thus
\begin{equation*} T=\set{\vect{v}_1,\,\vect{v}_2,\,\vect{v}_3}=\set{\colvector{2\\1\\1},\,\colvector{-1\\-1\\1},\,\colvector{1\\2\\3}} \end{equation*}
is a linearly independent set and \(\spn{T}=W\text{.}\) Compare this problem with Exercise LI.M50.

C51.

Given the set \(S\) below, find a linearly independent set \(T\) so that \(\spn{T}=\spn{S}\text{.}\)
\begin{equation*} S=\set{ \colvector{2\\-1\\2},\, \colvector{3\\0\\1},\, \colvector{1\\1\\-1},\, \colvector{5\\-1\\3} }\text{.} \end{equation*}
Solution.
Theorem BS says we can make a matrix with these four vectors as columns, row-reduce, and just keep the columns with indices in the set \(D\text{.}\) Here we go, forming the relevant matrix and row-reducing,
\begin{equation*} \begin{bmatrix} 2 & 3 & 1 & 5 \\ -1 & 0 & 1 & -1 \\ 2 & 1 & -1 & 3 \end{bmatrix} \rref \begin{bmatrix} \leading{1} & 0 & -1 & 1 \\ 0 & \leading{1} & 1 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}\text{.} \end{equation*}
Analyzing the row-reduced version of this matrix, we see that the first two columns are pivot columns, so \(D=\set{1,2}\text{.}\) Theorem BS says we need only “keep” the first two columns to create a set with the requisite properties
\begin{equation*} T=\set{ \colvector{2\\-1\\2},\, \colvector{3\\0\\1} }\text{.} \end{equation*}

C52.

Let \(W\) be the span of the set of vectors \(S\) below, \(W=\spn{S}\text{.}\) Find a set \(T\) so that (1) the span of \(T\) is \(W\text{,}\) \(\spn{T}=W\text{,}\) (2) \(T\) is a linearly independent set, and (3) \(T\) is a subset of \(S\text{.}\)
\begin{align*} S&= \set{ \colvector{1 \\ 2 \\ -1},\, \colvector{2 \\ -3 \\ 1},\, \colvector{4 \\ 1 \\ -1},\, \colvector{3 \\ 1 \\ 1},\, \colvector{3 \\ -1 \\ 0} }\text{.} \end{align*}
Solution.
This is a straight setup for the conclusion of Theorem BS. The hypotheses of this theorem tell us to pack the vectors of \(S\) into the columns of a matrix and row-reduce
\begin{align*} \begin{bmatrix} 1 & 2 & 4 & 3 & 3 \\ 2 & -3 & 1 & 1 & -1 \\ -1 & 1 & -1 & 1 & 0 \end{bmatrix} &\rref \begin{bmatrix} \leading{1} & 0 & 2 & 0 & 1 \\ 0 & \leading{1} & 1 & 0 & 1 \\ 0 & 0 & 0 & \leading{1} & 0 \end{bmatrix}\text{.} \end{align*}
Pivot columns have indices \(D=\set{1,\,2,\,4}\text{.}\) Theorem BS tells us to form \(T\) with columns \(1,\,2\) and \(4\) of \(S\)
\begin{align*} T&= \set{ \colvector{1 \\ 2 \\ -1},\, \colvector{2 \\ -3 \\ 1},\, \colvector{3 \\ 1 \\ 1} }\text{.} \end{align*}

C55.

Let \(T\) be the set of vectors
\begin{equation*} T=\set{ \colvector{1 \\ -1 \\ 2},\, \colvector{3 \\ 0 \\ 1},\, \colvector{4 \\ 2 \\ 3},\, \colvector{3 \\ 0 \\ 6} }\text{.} \end{equation*}
Find two different subsets of \(T\text{,}\) named \(R\) and \(S\text{,}\) so that \(R\) and \(S\) each contain three vectors, and so that \(\spn{R}=\spn{T}\) and \(\spn{S}=\spn{T}\text{.}\) Prove that both \(R\) and \(S\) are linearly independent.
Solution.
Let \(A\) be the matrix whose columns are the vectors in \(T\text{.}\) Then row-reduce \(A\)
\begin{equation*} A\rref B= \begin{bmatrix} \leading{1} & 0 & 0 & 2 \\ 0 & \leading{1} & 0 & -1 \\ 0 & 0 & \leading{1} & 1 \end{bmatrix}\text{.} \end{equation*}
From Theorem BS we can form \(R\) by choosing the columns of \(A\) that have the same indices as the pivot columns of \(B\text{.}\) Theorem BS also guarantees that \(R\) will be linearly independent.
\begin{equation*} R=\set{ \colvector{1 \\ -1 \\ 2},\, \colvector{3 \\ 0 \\ 1},\, \colvector{4 \\ 2 \\ 3} }\text{.} \end{equation*}
That was easy. To find \(S\) will require a bit more work. From \(B\) we can obtain a solution to \(\homosystem{A}\text{,}\) which by Theorem SLSLC will provide a nontrivial relation of linear dependence on the columns of \(A\text{,}\) which are the vectors in \(T\text{.}\) To wit, choose the free variable \(x_4\) to be 1, then \(x_1=-2\text{,}\) \(x_2=1\text{,}\) \(x_3=-1\text{,}\) and so
\begin{equation*} (-2)\colvector{1 \\ -1 \\ 2}+ (1)\colvector{3 \\ 0 \\ 1}+ (-1)\colvector{4 \\ 2 \\ 3}+ (1)\colvector{3 \\ 0 \\ 6} = \colvector{0\\0\\0}\text{.} \end{equation*}
This equation can be rewritten with the second vector staying put, and the other three moving to the other side of the equality
\begin{equation*} \colvector{3 \\ 0 \\ 1} = (2)\colvector{1 \\ -1 \\ 2}+ (1)\colvector{4 \\ 2 \\ 3}+ (-1)\colvector{3 \\ 0 \\ 6}\text{.} \end{equation*}
We could have chosen other vectors to stay put, but may have then needed to divide by a nonzero scalar. This equation is enough to conclude that the second vector in \(T\) is “surplus” and can be replaced (see the careful argument in Example RSC5). So set
\begin{equation*} S=\set{ \colvector{1 \\ -1 \\ 2},\, \colvector{4 \\ 2 \\ 3},\, \colvector{3 \\ 0 \\ 6} } \end{equation*}
and then \(\spn{S}=\spn{T}\text{.}\) \(S\) is also a linearly independent set, which we can show directly. Make a matrix \(C\) whose columns are the vectors in \(S\text{.}\) Row-reduce \(C\) and you will obtain the identity matrix \(I_3\text{.}\) By Theorem LIVRN, the set \(S\) is linearly independent.
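For reference, this row-reduction is
\begin{equation*} C= \begin{bmatrix} 1 & 4 & 3 \\ -1 & 2 & 0 \\ 2 & 3 & 6 \end{bmatrix} \rref \begin{bmatrix} \leading{1} & 0 & 0 \\ 0 & \leading{1} & 0 \\ 0 & 0 & \leading{1} \end{bmatrix}\text{.} \end{equation*}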

C70.

Reprise Example RES by creating a new version of the vector \(\vect{y}\text{.}\) In other words, form a new, different linear combination of the vectors in \(R\) to create a new vector \(\vect{y}\) (but do not simplify the problem too much by choosing any of the five new scalars to be zero). Then express this new \(\vect{y}\) as a combination of the vectors in \(P\text{.}\)

M10.

At the conclusion of Example RSC4 two alternative solutions, sets \(T^{\prime}\) and \(T^{*}\text{,}\) are proposed. Verify these claims by proving that \(\spn{T}=\spn{T^{\prime}}\) and \(\spn{T}=\spn{T^{*}}\text{.}\)

T40.

Suppose that \(\vect{v}_1\) and \(\vect{v}_2\) are any two vectors from \(\complex{m}\text{.}\) Prove the following set equality.
\begin{equation*} \spn{\set{\vect{v}_1,\,\vect{v}_2}} = \spn{\set{\vect{v}_1+\vect{v}_2,\,\vect{v}_1-\vect{v}_2}}\text{.} \end{equation*}
Solution.
This is an equality of sets, so Definition SE applies.
The “easy” half first. Show that
\begin{equation*} X=\spn{\set{\vect{v}_1+\vect{v}_2,\,\vect{v}_1-\vect{v}_2}}\subseteq \spn{\set{\vect{v}_1,\,\vect{v}_2}}=Y\text{.} \end{equation*}
Choose \(\vect{x}\in X\text{.}\) Then \(\vect{x}=a_1(\vect{v}_1+\vect{v}_2)+a_2(\vect{v}_1-\vect{v}_2)\) for some scalars \(a_1\) and \(a_2\text{.}\) Then
\begin{align*} \vect{x}&=a_1(\vect{v}_1+\vect{v}_2)+a_2(\vect{v}_1-\vect{v}_2)\\ &=a_1\vect{v}_1+a_1\vect{v}_2+a_2\vect{v}_1+(-a_2)\vect{v}_2\\ &=(a_1+a_2)\vect{v}_1+(a_1-a_2)\vect{v}_2 \end{align*}
which qualifies \(\vect{x}\) for membership in \(Y\text{,}\) as it is a linear combination of \(\vect{v}_1,\,\vect{v}_2\text{.}\)
Now show the opposite inclusion, \(Y=\spn{\set{\vect{v}_1,\,\vect{v}_2}}\subseteq\spn{\set{\vect{v}_1+\vect{v}_2,\,\vect{v}_1-\vect{v}_2}}=X\text{.}\)
Choose \(\vect{y}\in Y\text{.}\) Then there are scalars \(b_1,\,b_2\) such that \(\vect{y}=b_1\vect{v}_1+b_2\vect{v}_2 \text{.}\) Rearranging, we obtain
\begin{align*} \vect{y}&=b_1\vect{v}_1+b_2\vect{v}_2\\ &=\frac{b_1}{2}\left[\left(\vect{v}_1+\vect{v}_2\right)+\left(\vect{v}_1-\vect{v}_2\right)\right] + \frac{b_2}{2}\left[\left(\vect{v}_1+\vect{v}_2\right)-\left(\vect{v}_1-\vect{v}_2\right)\right]\\ &=\frac{b_1+b_2}{2}\left(\vect{v}_1+\vect{v}_2\right)+\frac{b_1-b_2}{2}\left(\vect{v}_1-\vect{v}_2\right) \end{align*}
This is an expression for \(\vect{y}\) as a linear combination of \(\vect{v}_1+\vect{v}_2\) and \(\vect{v}_1-\vect{v}_2\text{,}\) earning \(\vect{y}\) membership in \(X\text{.}\) Since \(X\) is a subset of \(Y\text{,}\) and vice versa, we see that \(X=Y\text{,}\) as desired.