
Section 1.6 Linear Independence

Recall that in Example 1.3.4, we had to take care to insist that the vectors spanning our plane were not parallel. Otherwise, what we thought was a plane would, in fact, be only a line. Similarly, we said that a line is given by the set of all vectors of the form $tv$, where $t$ is a scalar, and $v$ is not the zero vector. Otherwise, if $v = 0$, we would have $tv = 0$ for all $t \in \mathbb{R}$, and our “line” would be the trivial subspace.
When we define a subspace as the span of a set of vectors, we want to have an idea of the size (or perhaps complexity) of the subspace. Certainly the number of vectors we use to generate the span gives a measure of this, but it is not the whole story: we also need to know how many of these vectors “depend” on other vectors in the generating set. As Theorem 1.4.10 tells us, when one of the vectors in our generating set can be written as a linear combination of the others, we can remove it as a generator without changing the span.
Given a set of vectors $S = \{v_1, v_2, \dots, v_k\}$, an important question is therefore: can any of these vectors be written as a linear combination of other vectors in the set? If the answer is no, we say that $S$ is linearly independent. This is a difficult condition to check, however: first, we would have to show that $v_1$ cannot be written as a linear combination of $v_2, \dots, v_k$. Then, that $v_2$ cannot be written in terms of $v_1, v_3, \dots, v_k$, and so on.
This could amount to solving $k$ different systems of equations in $k - 1$ variables! But the systems are not all unrelated. The equation $v_1 = c_2v_2 + \cdots + c_kv_k$ can be rewritten as $c_1v_1 - c_2v_2 - \cdots - c_kv_k = 0$, where we happen to have set $c_1 = 1$.
In fact, we can do the same thing for each of these systems, and in each case we end up with the same thing: a single homogeneous system with one extra variable. (We get back each of the systems we started with by setting one of the variables equal to 1.) Not only is this far more efficient, it also changes the question: it is no longer a question of the existence of solutions to a collection of non-homogeneous systems, but a question of the uniqueness of the solution to a single homogeneous system.
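To make the pattern concrete, here is the corresponding rewrite for $v_2$ (an illustrative case; the other vectors are handled identically):
$$v_2 = c_1v_1 + c_3v_3 + \cdots + c_kv_k \quad\Longleftrightarrow\quad c_1v_1 - v_2 + c_3v_3 + \cdots + c_kv_k = 0,$$
which is again the single homogeneous equation $c_1v_1 + c_2v_2 + \cdots + c_kv_k = 0$, this time with $c_2 = -1$.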

Definition 1.6.1.

Let $\{v_1, \dots, v_k\}$ be a set of vectors in a vector space $V$. We say that this set is linearly independent if, for scalars $c_1, \dots, c_k$, the equation
$$c_1v_1 + \cdots + c_kv_k = 0$$
implies that $c_1 = 0, c_2 = 0, \dots, c_k = 0$.
A set of vectors that is not linearly independent is called linearly dependent.
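For a quick illustrative example (ours, not the text's): in $\mathbb{R}^2$, the set $\{(1,2), (2,4)\}$ is linearly dependent, since
$$2(1,2) + (-1)(2,4) = (0,0)$$
is a non-trivial linear combination equal to the zero vector. On the other hand, $\{(1,0), (0,1)\}$ is linearly independent, since $c_1(1,0) + c_2(0,1) = (c_1, c_2)$ equals $(0,0)$ only when $c_1 = c_2 = 0$.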

Exercise 1.6.2.

    True or false: if $c_1v_1 + \cdots + c_kv_k = 0$, where $c_1 = 0, \dots, c_k = 0$, then $\{v_1, \dots, v_k\}$ is linearly independent.
  • True.

  • The definition of independence is a conditional statement: if $c_1v_1 + \cdots + c_kv_k = 0$, then $c_1 = 0, \dots, c_k = 0$. It is important to get the order of the logic correct here, as the converse is always true.
  • False.

  • The definition of independence is a conditional statement: if $c_1v_1 + \cdots + c_kv_k = 0$, then $c_1 = 0, \dots, c_k = 0$. It is important to get the order of the logic correct here, as the converse is always true.
Note that the definition of independence asserts that there can be no “non-trivial” linear combinations that add up to the zero vector. Indeed, if even one scalar can be nonzero, then we can solve for the corresponding vector. Say, for example, that we have a solution to $c_1v_1 + c_2v_2 + \cdots + c_kv_k = 0$ with $c_1 \neq 0$. Then we can move all other vectors to the right-hand side, and multiply both sides by $1/c_1$ to give
$$v_1 = -\frac{c_2}{c_1}v_2 - \cdots - \frac{c_k}{c_1}v_k.$$

Remark 1.6.3. Proofs involving linear independence.

Note that the definition of linear independence is a conditional statement: if $c_1v_1 + \cdots + c_kv_k = 0$ for some $c_1, \dots, c_k$, then $c_1 = 0, \dots, c_k = 0$.
When we want to conclude that a set of vectors is linearly independent, we should assume that $c_1v_1 + \cdots + c_kv_k = 0$ for some $c_1, \dots, c_k$, and then try to show that the scalars must be zero. It’s important that we do not assume anything about the scalars to begin with.
If the hypothesis of a statement includes the assumption that a set of vectors is independent, we know that if we can get a linear combination of those vectors equal to the zero vector, then the scalars in that linear combination are automatically zero.
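In practice, a proof that a set is independent follows a fixed template; here is an illustrative skeleton (ours, not from the text) in the notation above:

```latex
\begin{proof}
  Suppose that $c_1v_1 + \cdots + c_kv_k = 0$ for some scalars $c_1, \dots, c_k$.
  % ... use the hypotheses at hand to deduce conditions on the scalars ...
  It follows that $c_1 = c_2 = \cdots = c_k = 0$,
  so the set $\{v_1, \dots, v_k\}$ is linearly independent.
\end{proof}
```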

Exercise 1.6.4.

    Which of the following are equivalent to the statement, “The set of vectors $\{v_1, \dots, v_k\}$ is linearly independent”?
  • If $c_1v_1 + \cdots + c_kv_k = 0$, then $c_1 = 0, \dots, c_k = 0$.
  • Yes! This is essentially the definition.
  • If $c_1 = 0, \dots, c_k = 0$, then $c_1v_1 + \cdots + c_kv_k = 0$.
  • Remember that a conditional statement is not equivalent to its converse. This statement is true for any set of vectors.
  • The only scalars $c_1, \dots, c_k$ for which $c_1v_1 + \cdots + c_kv_k = 0$ are $c_1 = 0, \dots, c_k = 0$.
  • Correct!
  • For all scalars $c_1, \dots, c_k$, $c_1v_1 + \cdots + c_kv_k = 0$.
  • The only way this can be true is if all the vectors in the set are the zero vector!
  • For some scalars $c_1, \dots, c_k$, $c_1v_1 + \cdots + c_kv_k = 0$.
  • Such scalars always exist, because we can choose them to be zero. Independence means that this is the only possible choice.
When looking for vectors that span a subspace, it is useful to find a spanning set that is also linearly independent. Otherwise, as Theorem 1.4.10 tells us, we will have some “redundant” vectors, in the sense that removing them as generators does not change the span.

Theorem 1.6.5.

  1. A set $\{v\}$ containing a single vector $v$ is linearly independent if and only if $v \neq 0$.
  2. Any set of vectors that contains the zero vector is linearly dependent.

Strategy.

This time, we will outline the strategy, and leave the execution to you. Both parts are about linear combinations. What does independence look like for a single vector? We would need to show that if $cv = 0$ for some scalar $c$, then $c = 0$. Now recall that in Exercise 1.2.4, we showed that if $cv = 0$, then either $c = 0$ or $v = 0$. We’re assuming $v \neq 0$, so what does that tell you about $c$?
In the second part, if we have a linear combination involving the zero vector, does the value of the scalar in front of $0$ matter? (Can it change the value of the linear combination?) If not, is there any reason that scalar would have to be zero?
The definition of linear independence tells us that if $\{v_1, \dots, v_k\}$ is an independent set of vectors, then there is only one way to write $0$ as a linear combination of these vectors; namely,
$$0 = 0v_1 + 0v_2 + \cdots + 0v_k.$$
In fact, more is true: any vector in the span of a linearly independent set can be written in only one way as a linear combination of those vectors.
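The argument is worth writing out: suppose a vector $v$ in the span has two representations,
$$v = a_1v_1 + \cdots + a_kv_k = b_1v_1 + \cdots + b_kv_k.$$
Subtracting one from the other gives
$$(a_1 - b_1)v_1 + \cdots + (a_k - b_k)v_k = 0,$$
and linear independence forces $a_i - b_i = 0$, that is, $a_i = b_i$ for each $i$.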

Remark 1.6.6.

Computationally, questions about linear independence are just questions about homogeneous systems of linear equations. For example, suppose we want to know if the vectors
$$u = \begin{bmatrix} 1 \\ 1 \\ 4 \end{bmatrix}, \quad v = \begin{bmatrix} 0 \\ 2 \\ 3 \end{bmatrix}, \quad w = \begin{bmatrix} 4 \\ 0 \\ 3 \end{bmatrix}$$
are linearly independent in $\mathbb{R}^3$. This question leads to the vector equation
$$xu + yv + zw = 0,$$
which becomes the matrix equation
$$\begin{bmatrix} 1 & 0 & 4 \\ 1 & 2 & 0 \\ 4 & 3 & 3 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.$$
We now apply some basic theory from linear algebra. A unique (and therefore, trivial) solution to this system is guaranteed if the matrix $A = \begin{bmatrix} 1 & 0 & 4 \\ 1 & 2 & 0 \\ 4 & 3 & 3 \end{bmatrix}$ is invertible, since in that case we have $\begin{bmatrix} x \\ y \\ z \end{bmatrix} = A^{-1}0 = 0$.
The approach in Remark 1.6.6 is problematic, however, since it only applies when the matrix is square: it won't work if we have 2 vectors, or 4. In general, we should look at the reduced row-echelon form of $A$. A unique solution corresponds to having a leading 1 in each column of $A$. Let's check this condition.
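As a sketch of that check (done here with SymPy; the book's Sage cells accept essentially the same commands), we can row-reduce the matrix from Remark 1.6.6 and count the pivot columns:

```python
from sympy import Matrix

# Coefficient matrix from Remark 1.6.6: its columns are the vectors u, v, w.
A = Matrix([[1, 0, 4],
            [1, 2, 0],
            [4, 3, 3]])

# rref() returns the reduced row-echelon form and the pivot column indices.
R, pivots = A.rref()
print(R)       # the 3x3 identity matrix here
print(pivots)  # (0, 1, 2): a leading 1 in every column

# Only the trivial solution exists exactly when every column has a pivot.
if len(pivots) == A.cols:
    print("The vectors are linearly independent.")
else:
    print("The vectors are linearly dependent.")
```

Because the row-reduction works for any number of rows and columns, this same check applies to 2 vectors, or 4, where invertibility does not.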
Two observations are useful here, and will lead to a better understanding of independence. First, it would be impossible to have 4 or more linearly independent vectors in $\mathbb{R}^3$. Why? (How many leading ones can you have in a $3 \times 4$ matrix?) Second, having two or fewer vectors makes it more likely that the set is independent.
The largest possible set of linearly independent vectors in $\mathbb{R}^3$ contains three vectors. You might have also observed that the smallest number of vectors needed to span $\mathbb{R}^3$ is 3. Hmm. Seems like there’s something interesting going on here. But first, some more computation. (For the first two exercises, once you’ve tried it yourself, you can find a solution using a Sage cell for computation at the end of the book.)

Exercise 1.6.7.

Determine whether the set $\left\{\begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix}, \begin{bmatrix} 1 \\ 4 \\ 9 \end{bmatrix}\right\}$ is linearly independent in $\mathbb{R}^3$.

Exercise 1.6.8.

Which of the following subsets of $P_2(\mathbb{R})$ are independent?
(a) $S_1 = \{x^2 + 1, x + 1, x\}$
(b) $S_2 = \{x^2 - x + 3, 2x^2 + x + 5, x^2 + 5x + 1\}$
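As a hint at the computational route for part (a) (a SymPy sketch; the ordered basis $\{1, x, x^2\}$ is our choice), record each polynomial by its coefficient vector and row-reduce:

```python
from sympy import Matrix

# Coefficient vectors of the polynomials in S1 = {x^2 + 1, x + 1, x},
# written with respect to the ordered basis {1, x, x^2}.
p1 = [1, 0, 1]  # x^2 + 1
p2 = [1, 1, 0]  # x + 1
p3 = [0, 1, 0]  # x

# The polynomials are independent in P2(R) exactly when their
# coefficient vectors are independent in R^3.
A = Matrix([p1, p2, p3]).T  # coefficient vectors as columns
R, pivots = A.rref()
print(len(pivots) == A.cols)  # True: S1 is linearly independent
```

The same recipe, with the coefficient vectors swapped out, handles part (b).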

Exercise 1.6.9.

Determine whether or not the set
$$\left\{\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}, \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\right\}$$
is linearly independent in $M_2(\mathbb{R})$.
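Exercise 1.6.9 can be checked the same way as the earlier ones: flatten each $2 \times 2$ matrix into a vector in $\mathbb{R}^4$ and row-reduce. Here is a SymPy sketch with illustrative matrices of our own (not the exercise's, so as not to spoil it):

```python
from sympy import Matrix

# Illustrative 2x2 matrices (not the ones from Exercise 1.6.9):
# the identity, the all-ones matrix, and a matrix with a single 1.
mats = [
    Matrix([[1, 0], [0, 1]]),
    Matrix([[1, 1], [1, 1]]),
    Matrix([[1, 0], [0, 0]]),
]

# Flatten each matrix into a length-4 coefficient vector (entry order is
# our choice); independence in M2(R) is equivalent to independence of
# these vectors in R^4, so row-reduce the matrix with them as columns.
A = Matrix([[m[i, j] for m in mats] for i in range(2) for j in range(2)])
R, pivots = A.rref()
print(len(pivots) == len(mats))  # True: these three matrices are independent
```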
We end with one last exercise, which provides a result that often comes in handy.

Exercise 1.6.10.

Prove that any nonempty subset of a linearly independent set is linearly independent.
Hint.
Start by assigning labels: let the larger set be $\{v_1, v_2, \dots, v_n\}$, and let the smaller set be $\{v_1, \dots, v_m\}$, where $m \leq n$. What happens if the smaller set is not independent?

Exercises

1.

    Let $\{v_1, v_2, v_3, v_4\}$ be a linearly independent set of vectors. Select the best statement.
  • $\{v_1, v_2, v_3\}$ is never a linearly independent set of vectors.
  • The independence of the set $\{v_1, v_2, v_3\}$ depends on the vectors chosen.
  • $\{v_1, v_2, v_3\}$ is always a linearly independent set of vectors.

2.

    Let $v_4$ be a linear combination of $\{v_1, v_2, v_3\}$. Select the best statement.
  • $\{v_1, v_2, v_3, v_4\}$ is never a linearly independent set of vectors.
  • $\{v_1, v_2, v_3, v_4\}$ is always a linearly independent set of vectors.
  • We can’t conclude whether or not $\{v_1, v_2, v_3, v_4\}$ is a linearly independent set of vectors.
  • The set $\{v_1, v_2, v_3\}$ must be a linearly independent set of vectors.
  • The set $\{v_1, v_2, v_3\}$ cannot be a linearly independent set of vectors.

3.

    Assume $v_4$ is not a linear combination of the vectors $v_1, v_2, v_3$. Select the best statement.
  • The set $\{v_1, v_2, v_3, v_4\}$ is always linearly independent.
  • The set $\{v_1, v_2, v_3, v_4\}$ is never linearly independent.
  • The set $\{v_1, v_2, v_3, v_4\}$ is linearly independent provided that $\{v_1, v_2, v_3\}$ is linearly independent.

4.

Are the vectors $u = \begin{bmatrix} 4 \\ 3 \\ 3 \end{bmatrix}$, $v = \begin{bmatrix} 3 \\ 1 \\ 4 \end{bmatrix}$ and $w = \begin{bmatrix} 7 \\ 15 \\ 24 \end{bmatrix}$ linearly independent?
If they are linearly dependent, find scalars that are not all zero such that the equation below is true.
If they are linearly independent, find the only scalars that will make the equation below true.
$\_\_\_\, u + \_\_\_\, v + \_\_\_\, w = 0.$

5.

Are the vectors $u = \begin{bmatrix} 4 \\ 3 \\ 3 \end{bmatrix}$, $v = \begin{bmatrix} 3 \\ 1 \\ 4 \end{bmatrix}$ and $w = \begin{bmatrix} 7 \\ 15 \\ 24 \end{bmatrix}$ linearly independent?
If they are linearly dependent, find scalars that are not all zero such that the equation below is true.
If they are linearly independent, find the only scalars that will make the equation below true.
$\_\_\_\, u + \_\_\_\, v + \_\_\_\, w = 0.$

6.

Are the vectors p(x)=5x4+3x2,q(x)=7x6+4x2 and r(x)=12xx2 linearly independent?
If the vectors are independent, enter zero in every answer blank below, since zeros are the only values that make the equation below true.
If they are dependent, find numbers, not all zero, that make the equation below true. You should be able to explain and justify your answer.
$0 = \_\_\_\, p(x) + \_\_\_\, q(x) + \_\_\_\, r(x)$

7.

Are the vectors p(x)=3x39x2,q(x)=4+12x8x2 and r(x)=57x linearly independent?
If the vectors are independent, enter zero in every answer blank below, since zeros are the only values that make the equation below true.
If they are dependent, find numbers, not all zero, that make the equation below true. You should be able to explain and justify your answer.
$0 = \_\_\_\, p(x) + \_\_\_\, q(x) + \_\_\_\, r(x)$

8.

Determine whether or not the following sets S of 2×2 matrices are linearly independent.
  1. $S = \left\{\begin{pmatrix} 0 & 4 \\ 1 & 3 \end{pmatrix}, \begin{pmatrix} 0 & 12 \\ 3 & 9 \end{pmatrix}\right\}$
  2. $S = \left\{\begin{pmatrix} 0 & 4 \\ 1 & 3 \end{pmatrix}, \begin{pmatrix} 0 & 9 \\ 15 & 9 \end{pmatrix}\right\}$
  3. $S = \left\{\begin{pmatrix} 0 & 4 \\ 1 & 3 \end{pmatrix}, \begin{pmatrix} 0 & 9 \\ 15 & 9 \end{pmatrix}, \begin{pmatrix} 1 & 3 \\ 9 & 10 \end{pmatrix}, \begin{pmatrix} 4 & 0 \\ 12 & 3 \end{pmatrix}, \begin{pmatrix} 17 & 31 \\ \pi & e^2 \end{pmatrix}\right\}$
  4. $S = \left\{\begin{pmatrix} 3 & 2 \\ 4 & 1 \end{pmatrix}, \begin{pmatrix} 2 & 3 \\ 3 & 4 \end{pmatrix}, \begin{pmatrix} 2 & 4 \\ 3 & 0 \end{pmatrix}\right\}$