
Section 1.6 Linear Independence

Recall that in Example 1.3.4, we had to take care to insist that the vectors spanning our plane were not parallel. Otherwise, what we thought was a plane would, in fact, be only a line. Similarly, we said that a line is given by the set of all vectors of the form \(t\vv\text{,}\) where \(t\) is a scalar, and \(\vv\) is not the zero vector. Otherwise, if \(\vv=\zer\text{,}\) we would have \(t\vv=\zer\) for all \(t\in\R\text{,}\) and our “line” would be the trivial subspace.
When we define a subspace as the span of a set of vectors, we want to have an idea of the size (or perhaps complexity) of the subspace. Certainly the number of vectors we use to generate the span gives a measure of this, but it is not the whole story: we also need to know how many of these vectors “depend” on other vectors in the generating set. As Theorem 1.4.10 tells us, when one of the vectors in our generating set can be written as a linear combination of the others, we can remove it as a generator without changing the span.
Given a set of vectors \(S=\{\vv_1,\vv_2,\ldots, \vv_k\}\text{,}\) an important question is therefore: can any of these vectors be written as a linear combination of other vectors in the set? If the answer is no, we say that \(S\) is linearly independent. This is a difficult condition to check, however: first, we would have to show that \(\vv_1\) cannot be written as a linear combination of \(\vv_2,\ldots, \vv_k\text{.}\) Then, that \(\vv_2\) cannot be written in terms of \(\vv_1,\vv_3,\ldots, \vv_k\text{,}\) and so on.
This could amount to solving \(k\) different systems of equations in \(k-1\) variables! But the systems are not all unrelated. The equation \(\vv_1=c_2\vv_2+\cdots+c_k\vv_k\) can be rewritten as \(c_1\vv_1-c_2\vv_2-\cdots -c_k\vv_k=\zer\text{,}\) where we happen to have set \(c_1=1\text{.}\)
In fact, we can do the same thing for each of these systems, and in each case we end up with the same thing: a single homogeneous system with one extra variable. (We get back each of the systems we started with by setting one of the variables equal to \(1\text{.}\)) Not only is this far more efficient, it also changes the question: it is no longer a question of existence of solutions to a collection of non-homogeneous systems, but a question of uniqueness for the solution of a single homogeneous system.

Definition 1.6.1.

Let \(\{\vv_1,\ldots,\vv_k\}\) be a set of vectors in a vector space \(V\text{.}\) We say that this set is linearly independent if, for scalars \(c_1,\ldots, c_k\text{,}\) the equation
\begin{equation*} c_1\vv_1+\cdots + c_k\vv_k = \zer \end{equation*}
implies that \(c_1=0, c_2=0,\ldots, c_k=0\text{.}\)
A set of vectors that is not linearly independent is called linearly dependent.

Exercise 1.6.2.

    True or false: if \(c_1\vv_1+\cdots +c_k\vv_k=\zer\text{,}\) where \(c_1=0,\ldots, c_k=0\text{,}\) then \(\{\vv_1,\ldots, \vv_k\}\) is linearly independent.
  • True.

  • The definition of independence is a conditional statement: if \(c_1\vv_1+\cdots + c_k\vv_k = \zer\text{,}\) then \(c_1=0,\ldots, c_k=0\text{.}\) It is important to get the order of the logic correct here, as the converse is always true.
  • False.

  • The definition of independence is a conditional statement: if \(c_1\vv_1+\cdots + c_k\vv_k = \zer\text{,}\) then \(c_1=0,\ldots, c_k=0\text{.}\) It is important to get the order of the logic correct here, as the converse is always true.
Note that the definition of independence asserts that there can be no “non-trivial” linear combinations that add up to the zero vector. Indeed, if even one scalar can be nonzero, then we can solve for the corresponding vector. Say, for example, that we have a solution to \(c_1\vv_1+c_2\vv_2+\cdots + c_k\vv_k = \zer\) with \(c_1\neq 0\text{.}\) Then we can move all other vectors to the right-hand side, and multiply both sides by \(1/c_1\) to give
\begin{equation*} \vv_1 = -\frac{c_2}{c_1}\vv_2-\cdots - \frac{c_k}{c_1}\vv_k\text{.} \end{equation*}

Remark 1.6.3. Proofs involving linear independence.

Note that the definition of linear independence is a conditional statement: if \(c_1\vv_1+\cdots +c_k\vv_k=\zer\) for some \(c_1,\ldots, c_k\text{,}\) then \(c_1=0,\ldots, c_k=0\text{.}\)
When we want to conclude that a set of vectors is linearly independent, we should assume that \(c_1\vv_1+\cdots +c_k\vv_k=\zer\) for some \(c_1,\ldots, c_k\text{,}\) and then try to show that the scalars must be zero. It’s important that we do not assume anything about the scalars to begin with.
If the hypothesis of a statement includes the assumption that a set of vectors is independent, we know that if we can get a linear combination of those vectors equal to the zero vector, then the scalars in that linear combination are automatically zero.

Exercise 1.6.4.

    Which of the following are equivalent to the statement, “The set of vectors \(\{\vv_1,\ldots, \vv_k\}\) is linearly independent.”?
  • If \(c_1\vv_1+\cdots +c_k\vv_k=\zer\text{,}\) then \(c_1=0,\ldots, c_k=0\text{.}\)
  • Yes! This is essentially the definition.
  • If \(c_1=0,\ldots, c_k=0\text{,}\) then \(c_1\vv_1+\cdots + c_k\vv_k=\zer\text{.}\)
  • Remember that a conditional statement is not equivalent to its converse. This statement is true for any set of vectors.
  • The only scalars \(c_1,\ldots, c_k\) for which \(c_1\vv_1+\cdots + c_k\vv_k=\zer\) are \(c_1=0,\ldots, c_k=0\text{.}\)
  • Correct!
  • For all scalars \(c_1,\ldots, c_k\text{,}\) \(c_1\vv_1+\cdots + c_k\vv_k=\zer\text{.}\)
  • The only way this can be true is if all the vectors in the set are the zero vector!
  • For some scalars \(c_1,\ldots, c_k\text{,}\) \(c_1\vv_1+\cdots + c_k\vv_k=\zer\text{.}\)
  • Such scalars always exist, because we can choose them to be zero. Independence means that this is the only possible choice.
When looking for vectors that span a subspace, it is useful to find a spanning set that is also linearly independent. Otherwise, as Theorem 1.4.10 tells us, we will have some “redundant” vectors, in the sense that removing them as generators does not change the span.

Exercise 1.6.5.

Prove that a set \(\{\vv\}\) containing a single nonzero vector \(\vv\) is linearly independent, and that any set of vectors containing the zero vector is linearly dependent.

Strategy.

This time, we will outline the strategy, and leave the execution to you. Both parts are about linear combinations. What does independence look like for a single vector? We would need to show that if \(c\vv=\zer\) for some scalar \(c\text{,}\) then \(c=0\text{.}\) Now recall that in Exercise 1.2.4, we showed that if \(c\vv=\zer\text{,}\) either \(c=0\) or \(\vv=\zer\text{.}\) We’re assuming \(\vv\neq\zer\text{,}\) so what does that tell you about \(c\text{?}\)
In the second part, if we have a linear combination involving the zero vector, does the value of the scalar in front of \(\zer\) matter? (Can it change the value of the linear combination?) If not, is there any reason that scalar would have to be zero?
The definition of linear independence tells us that if \(\{\vv_1,\ldots, \vv_k\}\) is an independent set of vectors, then there is only one way to write \(\zer\) as a linear combination of these vectors; namely,
\begin{equation*} \zer = 0\vv_1+0\vv_2+\cdots +0\vv_k\text{.} \end{equation*}
In fact, more is true: any vector in the span of a linearly independent set can be written in only one way as a linear combination of those vectors.
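To see why, suppose some vector \(\ww\) in the span could be written in two ways, say
\begin{equation*} \ww = a_1\vv_1+\cdots +a_k\vv_k = b_1\vv_1+\cdots +b_k\vv_k\text{.} \end{equation*}
Subtracting one expression from the other gives \((a_1-b_1)\vv_1+\cdots +(a_k-b_k)\vv_k=\zer\text{,}\) and independence then forces \(a_1=b_1,\ldots, a_k=b_k\text{.}\)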

Remark 1.6.6.

Computationally, questions about linear independence are just questions about homogeneous systems of linear equations. For example, suppose we want to know if the vectors
\begin{equation*} \uu=\bbm 1\\-1\\4\ebm, \vv=\bbm 0\\2\\-3\ebm, \ww=\bbm 4\\0\\-3\ebm \end{equation*}
are linearly independent in \(\mathbb{R}^3\text{.}\) This question leads to the vector equation
\begin{equation*} x\uu+y\vv+z\ww=\zer\text{,} \end{equation*}
which becomes the matrix equation
\begin{equation*} \bbm 1\amp0\amp4\\-1\amp2\amp0\\4\amp-3\amp-3\ebm\bbm x\\y\\z\ebm = \bbm 0\\0\\0\ebm\text{.} \end{equation*}
We now apply some basic theory from linear algebra. A unique (and therefore, trivial) solution to this system is guaranteed if the matrix \(A = \bbm 1\amp0\amp4\\-1\amp2\amp0\\4\amp-3\amp-3\ebm\) is invertible, since in that case we have \(\bbm x\\y\\z\ebm = A^{-1}\zer = \zer\text{.}\)
The approach in Remark 1.6.6 is of limited use, however: the invertibility criterion only applies when the matrix is square, so it won't work if we have 2 vectors, or 4. In general, we should look at the reduced row-echelon form. A unique solution corresponds to having a leading 1 in each column of \(A\text{.}\) Let's check this condition.
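One way to carry out the check is with a short Sage computation along the following lines (a minimal sketch; the columns of the matrix are the vectors \(\uu\text{,}\) \(\vv\text{,}\) \(\ww\) above):

    A = matrix(QQ, [[1, 0, 4], [-1, 2, 0], [4, -3, -3]])  # columns are u, v, w
    A.rref()  # reduced row-echelon form of A

If the output is the identity matrix, there is a leading 1 in each column, so the only solution to \(x\uu+y\vv+z\ww=\zer\) is the trivial one, and the vectors are linearly independent.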
A couple of observations are useful here, and will lead to a better understanding of independence. First, it would be impossible to have 4 or more linearly independent vectors in \(\mathbb{R}^3\text{.}\) Why? (How many leading ones can you have in a \(3\times 4\) matrix?) Second, having two or fewer vectors makes it more likely that the set is independent.
The largest set of linearly independent vectors possible in \(\mathbb{R}^3\) contains three vectors. You might have also observed that the smallest number of vectors needed to span \(\mathbb{R}^3\) is 3. Hmm. Seems like there’s something interesting going on here. But first, some more computation. (For the first two exercises, once you’ve tried it yourself, you can find a solution using a Sage cell for computation at the end of the book.)

Exercise 1.6.7.

Determine whether the set \(\left\{\bbm 1\\2\\0\ebm, \bbm -1\\0\\3\ebm,\bbm -1\\4\\9\ebm\right\}\) is linearly independent in \(\R^3\text{.}\)
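As in Remark 1.6.6, one possible Sage setup for this check is the following sketch, where the columns of the matrix are the three given vectors:

    A = matrix(QQ, [[1, -1, -1], [2, 0, 4], [0, 3, 9]])  # columns are the three given vectors
    A.rref()  # a leading 1 in every column means the set is independent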

Exercise 1.6.8.

Which of the following subsets of \(P_2(\mathbb{R})\) are independent?
\begin{gather*} \text{(a) } S_1 = \{x^2+1, x+1, x\}\\ \text{(b) } S_2 = \{x^2-x+3, 2x^2+x+5, x^2+5x+1\} \end{gather*}
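As in Remark 1.6.6, these questions reduce to homogeneous systems: setting \(c_1(x^2+1)+c_2(x+1)+c_3x=0\) and comparing coefficients of \(1\text{,}\) \(x\text{,}\) and \(x^2\) gives a system whose coefficient matrix has the coefficient vectors of the polynomials as its columns. A possible Sage sketch for \(S_1\) (with \(S_2\) handled the same way):

    # columns are the coefficient vectors (constant, x, x^2) of x^2+1, x+1, x
    A = matrix(QQ, [[1, 1, 0], [0, 1, 1], [1, 0, 0]])
    A.rref()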

Exercise 1.6.9.

Determine whether or not the set
\begin{equation*} \left\{\bbm -1\amp 0\\0\amp -1\ebm, \bbm 1\amp -1\\ -1\amp 1\ebm, \bbm 1\amp 1\\1\amp 1\ebm, \bbm 0\amp -1\\-1\amp 0\ebm\right\} \end{equation*}
is linearly independent in \(M_2(\mathbb{R})\text{.}\)
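Here too the question reduces to a homogeneous system: writing \(A_1, A_2, A_3, A_4\) for the four given matrices (in the order listed) and comparing the four entries in \(c_1A_1+c_2A_2+c_3A_3+c_4A_4=\zer\) gives four equations in four unknowns. A possible Sage sketch:

    # columns are the entries (read row by row) of the four given matrices
    A = matrix(QQ, [[-1, 1, 1, 0], [0, -1, 1, -1], [0, -1, 1, -1], [-1, 1, 1, 0]])
    A.rref()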
We end with one last exercise, which provides a result that often comes in handy.

Exercise 1.6.10.

Prove that any nonempty subset of a linearly independent set is linearly independent.
Hint.
Start by assigning labels: let the larger set be \(\{\vv_1,\vv_2,\ldots, \vv_n\}\text{,}\) and let the smaller set be \(\{\vv_1, \ldots, \vv_m\}\text{,}\) where \(m\leq n\text{.}\) What happens if the smaller set is not independent?

Exercises

1.

    Let \(\{\vv_1,\vv_2,\vv_3,\vv_4\}\) be a linearly independent set of vectors. Select the best statement.
  • \(\{\vv_1,\vv_2,\vv_3\}\) is never a linearly independent set of vectors.
  • The independence of the set \(\{\vv_1,\vv_2,\vv_3\}\) depends on the vectors chosen.
  • \(\{\vv_1,\vv_2,\vv_3\}\) is always a linearly independent set of vectors.

2.

    Let \(\vv_4\) be a linear combination of \(\{\vv_1,\vv_2,\vv_3\}\text{.}\) Select the best statement.
  • \(\{\vv_1,\vv_2,\vv_3,\vv_4\}\) is never a linearly independent set of vectors.
  • \(\{\vv_1,\vv_2,\vv_3,\vv_4\}\) is always a linearly independent set of vectors.
  • We can’t conclude whether or not \(\{\vv_1,\vv_2,\vv_3,\vv_4\}\) is a linearly independent set of vectors.
  • The set \(\{\vv_1,\vv_2,\vv_3\}\) must be a linearly independent set of vectors.
  • The set \(\{\vv_1,\vv_2,\vv_3\}\) cannot be a linearly independent set of vectors.

3.

    Assume \(\vv_4\) is not a linear combination of the vectors \(\vv_1,\vv_2,\vv_3\text{.}\) Select the best statement.
  • The set \(\{\vv_1,\vv_2,\vv_3,\vv_4\}\) is always linearly independent.
  • The set \(\{\vv_1,\vv_2,\vv_3,\vv_4\}\) is never linearly independent.
  • The set \(\{\vv_1,\vv_2,\vv_3,\vv_4\}\) is linearly independent provided that \(\{\vv_1,\vv_2,\vv_3\}\) is linearly independent.

4.

Are the vectors \(\vec{u}= {\left[\begin{array}{c} -4\cr -3\cr -3 \end{array}\right]}\text{,}\) \(\vec{v} = {\left[\begin{array}{c} 3\cr -1\cr -4 \end{array}\right]}\) and \(\vec{w} = {\left[\begin{array}{c} -7\cr -15\cr -24 \end{array}\right]}\) linearly independent?
If they are linearly dependent, find scalars that are not all zero such that the equation below is true.
If they are linearly independent, find the only scalars that will make the equation below true.
\(\underline{\qquad}\;\vec{u} + \underline{\qquad}\;\vec{v} + \underline{\qquad}\;\vec{w} = \vec{0}\text{.}\)

5.

Are the vectors \(\vec{u} = {\left[\begin{array}{ccc} -4 \amp -3 \amp -3 \end{array}\right]}\text{,}\) \(\vec{v} = {\left[\begin{array}{ccc} 3 \amp -1 \amp -4 \end{array}\right]}\) and \(\vec{w} = {\left[\begin{array}{ccc} -7 \amp -15 \amp -24 \end{array}\right]}\) linearly independent?
If they are linearly dependent, find scalars that are not all zero such that the equation below is true.
If they are linearly independent, find the only scalars that will make the equation below true.
\(\underline{\qquad}\;\vec{u} + \underline{\qquad}\;\vec{v} + \underline{\qquad}\;\vec{w} = \vec{0}\text{.}\)

6.

Are the vectors \(p(x) = {5x-4+3x^{2}}, q(x) = {7x-6+4x^{2}}\) and \(r(x) = {1-2x-x^{2}}\) linearly independent?
If the vectors are independent, enter zero in every answer blank below, since zeros are the only values that make the equation below true.
If they are dependent, find numbers, not all zero, that make the equation below true. You should be able to explain and justify your answer.
\(0 = \underline{\qquad}\; p(x) + \underline{\qquad}\; q(x) + \underline{\qquad}\; r(x)\)

7.

Are the vectors \(p(x) = {3x-3-9x^{2}}, q(x) = {4+12x-8x^{2}}\) and \(r(x) = {-5-7x}\) linearly independent?
If the vectors are independent, enter zero in every answer blank, since zeros are the only values that make the equation below true.
If they are dependent, find numbers, not all zero, that make the equation below true. You should be able to explain and justify your answer.
\(0 = \underline{\qquad}\; p(x) + \underline{\qquad}\; q(x) + \underline{\qquad}\; r(x)\)

8.

Determine whether or not the following sets \(S\) of \(2\times 2\) matrices are linearly independent.
  1. \(\displaystyle S= \left\{ \begin{pmatrix} 0\amp -4\cr 1\amp -3 \end{pmatrix},\, \begin{pmatrix} 0\amp 12\cr -3\amp 9 \end{pmatrix} \right\}\)
  2. \(\displaystyle S= \left\{ \begin{pmatrix} 0\amp -4\cr 1\amp -3 \end{pmatrix},\, \begin{pmatrix} 0\amp 9\cr -15\amp 9 \end{pmatrix} \right\}\)
  3. \(S= \left\{ \begin{pmatrix} 0\amp -4\cr 1\amp -3 \end{pmatrix}, \, \begin{pmatrix} 0\amp 9\cr -15\amp 9 \end{pmatrix} ,\, \begin{pmatrix} 1\amp -3\cr 9\amp 10 \end{pmatrix} ,\,\right.\) \(\left. \begin{pmatrix} -4\amp 0\cr 12\amp -3 \end{pmatrix} ,\, \begin{pmatrix} 17\amp -31\cr \pi \amp e^2 \end{pmatrix} \right\}\)
  4. \(\displaystyle S= \left\{ \begin{pmatrix} 3\amp 2\cr -4\amp -1 \end{pmatrix},\, \begin{pmatrix} 2\amp 3\cr 3\amp -4 \end{pmatrix},\, \begin{pmatrix} 2\amp -4\cr 3\amp 0 \end{pmatrix} \right\}\)