
Section 1.8 New subspaces from old

Let \(V\) be a finite-dimensional vector space, and let \(U,W\) be subspaces of \(V\text{.}\) In what ways can we combine \(U\) and \(W\) to obtain new subspaces?
At first, we might try set operations: union, intersection, and difference. The set difference we can rule out right away: since \(U\) and \(W\) must both contain the zero vector, \(U\setminus W\) cannot.
What about the union, \(U\cup W\text{?}\) Before trying to understand this in general, let’s try a concrete example: take \(V=\R^2\text{,}\) and let \(U=\{(x,0)\,|\,x\in \R\}\) (the \(x\) axis, essentially), and \(W=\{(0,y)\,|\,y\in\R\}\) (the \(y\) axis). Is their union a subspace?

Exercise 1.8.1.

    True or false: the union of the “\(x\) axis” and “\(y\) axis” in \(\R^2\) is a subspace of \(\R^2\text{.}\)
  • True.

  • False.

The statement is false: any subspace has to be closed under addition. If we add the vector \((1,0)\) (which lies along the \(x\) axis) to the vector \((0,1)\) (which lies along the \(y\) axis), we get the vector \((1,1)\text{,}\) which does not lie along either axis.
With a motivating example under our belts, we can try to tackle the general result: the union \(U\cup W\) is a subspace of \(V\) if and only if \(U\subseteq W\) or \(W\subseteq U\text{.}\) (Note that this result remains true even if \(V\) is infinite-dimensional!)

Strategy.

We have an “if and only if” statement, which means we have to prove two directions:
  1. If \(U\subseteq W\) or \(W\subseteq U\text{,}\) then \(U\cup W\) is a subspace.
  2. If \(U\cup W\) is a subspace, then \(U\subseteq W\) or \(W\subseteq U\text{.}\)
The first direction is the easy one: if \(U\subseteq W\text{,}\) what can you say about \(U\cup W\text{?}\)
For the other direction, it’s not clear how to get started with our hypothesis. When a direct proof seems difficult, remember that we can also try proving the contrapositive: If \(U\not\subseteq W\) and \(W\not\subseteq U\text{,}\) then \(U\cup W\) is not a subspace.
Now we have more to work with: negation turns the “or” into an “and”, and proving that something is not a subspace is easier: we just have to show that one part of the subspace test fails. As our motivating example suggests, we should expect closure under addition to be the condition that fails.
To get started, we need to answer one more question: if \(U\) is not a subset of \(W\text{,}\) what does that tell us?
An important point to keep in mind with this proof: closure under addition means that if a subspace contains \(\uu\) and \(\ww\text{,}\) then it must contain \(\uu+\ww\text{.}\) But if a subspace contains \(\uu+\ww\text{,}\) that does not mean it has to contain \(\uu\) and \(\ww\text{.}\) As an example, consider the subspace \(\{(x,x)\,|\,x\in\R\}\) of \(\R^2\text{.}\) It contains the vector \((1,1)=(1,0)+(0,1)\text{,}\) but it does not contain \((1,0)\) or \((0,1)\text{.}\)

Proof.

Suppose \(U\subseteq W\) or \(W\subseteq U\text{.}\) In the first case, \(U\cup W=W\text{,}\) and in the second case, \(U\cup W=U\text{.}\) Since both \(U\) and \(W\) are subspaces, \(U\cup W\) is a subspace.
Now, suppose that \(U\not\subseteq W\text{,}\) and \(W\not\subseteq U\text{.}\) Since \(U\not\subseteq W\text{,}\) there must be some element \(\uu\in U\) such that \(\uu\notin W\text{.}\) Since \(W\not\subseteq U\text{,}\) there must be some element \(\ww\in W\) such that \(\ww\notin U\text{.}\) We know that \(\uu,\ww\in U\cup W\text{,}\) so we consider the sum, \(\uu+\ww\text{.}\)
If \(\uu+\ww\in U\cup W\text{,}\) then \(\uu+\ww\in U\text{,}\) or \(\uu+\ww\in W\text{.}\) Suppose \(\uu+\ww\in U\text{.}\) Since \(\uu \in U\) and \(U\) is a subspace, \(-\uu\in U\text{.}\) Since \(-\uu, \uu+\ww\in U\) and \(U\) is a subspace,
\begin{equation*} -\uu+(\uu+\ww)=(-\uu+\uu)+\ww=\zer+\ww=\ww \in U\text{.} \end{equation*}
But we assumed that \(\ww\notin U\text{,}\) so it must be that \(\uu+\ww\notin U\text{.}\)
By a similar argument, if \(\uu+\ww\in W\text{,}\) we can conclude that \(\uu\in W\text{,}\) contradicting the assumption that \(\uu\notin W\text{.}\) So \(\uu+\ww\) does not belong to \(U\) or \(W\text{,}\) so it cannot belong to \(U\cup W\text{.}\) Since \(U\cup W\) is not closed under addition, it is not a subspace.
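This is exactly what we saw in the motivating example: with \(U\) the \(x\) axis and \(W\) the \(y\) axis, we have \((1,0)\in U\) but \((1,0)\notin W\text{,}\) and \((0,1)\in W\) but \((0,1)\notin U\text{,}\) and the sum
\begin{equation*} (1,0)+(0,1)=(1,1) \end{equation*}
lies in neither subspace, so it is not in \(U\cup W\text{.}\)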
This leaves us with intersection. Will it fail as well? Fortunately, the answer is no: this operation actually gives us a subspace.

Strategy.

The key here is that the intersection contains only those vectors that belong to both subspaces. So any operation (addition, scalar multiplication) that we do in \(U\cap W\) can be viewed as taking place in either \(U\) or \(W\text{,}\) and we know that these are subspaces. After this observation, the rest is the Subspace Test.

Proof.

Let \(U\) and \(W\) be subspaces of \(V\text{.}\) Since \(\zer\in U\) and \(\zer \in W\text{,}\) we have \(\zer\in U\cap W\text{.}\) Now, suppose \(\xx,\yy\in U\cap W\text{.}\) Then \(\xx,\yy\in U\text{,}\) and \(\xx,\yy\in W\text{.}\) Since \(\xx,\yy\in U\) and \(U\) is a subspace, \(\xx+\yy\in U\text{.}\) Similarly, \(\xx+\yy\in W\text{,}\) so \(\xx+\yy\in U\cap W\text{.}\) If \(c\) is any scalar, then \(c\xx\) is in both \(U\) and \(W\text{,}\) since both sets are subspaces, and therefore, \(c\xx\in U\cap W\text{.}\) By the Subspace Test, \(U\cap W\) is a subspace.
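To see the intersection in a concrete case, take \(V=\R^2\text{,}\) let \(U=\{(x,0)\,|\,x\in\R\}\) be the \(x\) axis as before, and let \(W'=\{(t,t)\,|\,t\in\R\}\) be the line through the origin with slope \(1\text{.}\) A vector lies in \(U\cap W'\) only if its second entry is zero (so that it belongs to \(U\)) and its two entries are equal (so that it belongs to \(W'\)); the only such vector is \(\zer\text{,}\) so \(U\cap W'=\{\zer\}\text{.}\)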
The intersection of two subspaces gives us a subspace, but it is a smaller subspace, contained in the two subspaces we’re intersecting. Given subspaces \(U\) and \(W\text{,}\) is there a way to construct a larger subspace that contains them? We know that \(U\cup W\) doesn’t work in general, because it isn’t closed under addition. But what if we started with \(U\cup W\text{,}\) and threw in all the missing sums? This leads to a definition:

Definition 1.8.4.

Let \(U\) and \(W\) be subspaces of a vector space \(V\text{.}\) We define the sum \(U+W\) of these subspaces by
\begin{equation*} U+W = \{\uu+\ww \,|\, \uu\in U \text{ and } \ww\in W\}\text{.} \end{equation*}
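For example, if \(U=\{(x,0)\,|\,x\in\R\}\) and \(W=\{(0,y)\,|\,y\in\R\}\) are the two axes from our motivating example, then
\begin{equation*} U+W=\{(x,0)+(0,y)\,|\,x,y\in\R\}=\{(x,y)\,|\,x,y\in\R\}=\R^2\text{,} \end{equation*}
so the sum picks up vectors like \((1,1)=(1,0)+(0,1)\) that the union was missing.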
It turns out that this works! Not only is \(U+W\) a subspace of \(V\text{,}\) it is the smallest subspace containing both \(U\) and \(W\text{.}\)

Strategy.

The key to working with \(U+W\) is to understand how to work with the definition. If we say that \(\xx\in U+W\text{,}\) then we are saying there exist vectors \(\uu\in U\) and \(\ww\in W\) such that \(\uu+\ww=\xx\text{.}\)
We prove that \(U+W\) is a subspace using this observation and the subspace test.
To prove the second part, we assume that \(U\subseteq X\) and \(W\subseteq X\text{.}\) We then choose an element \(\xx\in U+W\text{,}\) and using the idea above, show that \(\xx\in X\text{.}\)

Proof.

Let \(U,W\) be subspaces. Since \(\zer=\zer+\zer\text{,}\) with \(\zer\in U\) and \(\zer \in W\text{,}\) we see that \(\zer\in U+W\text{.}\)
Suppose that \(\xx,\yy\in U+W\text{.}\) Then there exist \(\uu_1,\uu_2\in U\text{,}\) and \(\ww_1,\ww_2\in W\text{,}\) with \(\uu_1+\ww_1=\xx\text{,}\) and \(\uu_2+\ww_2=\yy\text{.}\) Then
\begin{equation*} \xx+\yy = (\uu_1+\ww_1)+(\uu_2+\ww_2)=(\uu_1+\uu_2)+(\ww_1+\ww_2)\text{,} \end{equation*}
and we know that \(\uu_1+\uu_2\in U\text{,}\) and \(\ww_1+\ww_2\in W\text{,}\) since \(U\) and \(W\) are subspaces. Since \(\xx+\yy\) can be written as the sum of an element of \(U\) and an element of \(W\text{,}\) we have \(\xx+\yy\in U+W\text{.}\)
If \(c\) is any scalar, then
\begin{equation*} c\xx=c(\uu_1+\ww_1)=c\uu_1+c\ww_1\in U+W\text{,} \end{equation*}
since \(c\uu_1\in U\) and \(c\ww_1\in W\text{.}\)
Since \(U+W\) contains \(\zer\text{,}\) and is closed under both addition and scalar multiplication, it is a subspace.
Now, suppose \(X\) is a subspace of \(V\) such that \(U\subseteq X\) and \(W\subseteq X\text{.}\) Let \(\xx\in U+W\text{.}\) Then \(\xx=\uu+\ww\) for some \(\uu\in U\) and \(\ww\in W\text{.}\) Since \(\uu\in U\) and \(U\subseteq X\text{,}\) \(\uu\in X\text{.}\) Similarly, \(\ww\in X\text{.}\) Since \(X\) is a subspace, it is closed under addition, so \(\uu+\ww=\xx\in X\text{.}\) Therefore, \(U+W\subseteq X\text{.}\)
By choosing bases for two subspaces \(U\) and \(W\) of a finite-dimensional vector space, we can obtain the following cool dimension-counting result (Theorem 1.8.6):
\begin{equation*} \dim(U+W)=\dim U+\dim W-\dim(U\cap W)\text{.} \end{equation*}

Strategy.

This is a proof that would be difficult (if not impossible) without using a basis. Your first thought might be to choose bases for the subspaces \(U\) and \(W\text{,}\) but this runs into trouble: some of the basis vectors for \(U\) might be in \(W\text{,}\) and vice-versa.
Of course, those vectors will be in \(U\cap W\text{,}\) but it gets hard to keep track: without more information (and we have none, since we want to be completely general), how do we tell which basis vectors are in the intersection, and how many?
Instead, we start with a basis for \(U\cap W\text{.}\) This is useful, because \(U\cap W\) is a subspace of both \(U\) and \(W\text{.}\) So any basis for \(U\cap W\) can be extended to a basis of \(U\text{,}\) and it can also be extended to a basis of \(W\text{.}\)
The rest of the proof relies on making sure that neither of these extensions have any vectors in common, and that putting everything together gives a basis for \(U+W\text{.}\) (This amounts to going back to the definition of a basis: we need to show that it’s linearly independent, and that it spans \(U+W\text{.}\))

Proof.

Let \(B_1 = \{\xx_1,\ldots, \xx_k\}\) be a basis for \(U\cap W\text{.}\) Extend \(B_1\) to a basis \(B_2=\{\xx_1,\ldots, \xx_k,\uu_1,\ldots,\uu_m\}\) of \(U\text{,}\) and to a basis \(B_3=\{\xx_1,\ldots, \xx_k,\ww_1,\ldots, \ww_n\}\) of \(W\text{.}\) Note that we have \(\dim (U\cap W)=k\text{,}\) \(\dim U = k+m\text{,}\) and \(\dim W=k+n\text{.}\)
Now, consider the set \(B=\{\xx_1,\ldots, \xx_k,\uu_1,\ldots, \uu_m,\ww_1,\ldots, \ww_n\}\text{.}\) We claim that \(B\) is a basis for \(U+W\text{.}\) We know that \(B_2\) is linearly independent, since it’s a basis for \(U\text{,}\) and that \(B=B_2\cup\{\ww_1,\ldots, \ww_n\}\text{.}\) It remains to show that none of the \(\ww_i\) are in the span of \(B_2\text{;}\) if so, then \(B\) is independent by Lemma 1.7.11.
Since \(\spn B_2=U\text{,}\) it suffices to show that none of the \(\ww_i\) belong to \(U\text{.}\) But we know that \(\ww_i\in W\text{,}\) so if \(\ww_i\in U\text{,}\) then \(\ww_i\in U\cap W\text{.}\) But if \(\ww_i\in U\cap W\text{,}\) then \(\ww_i\in \spn B_1\text{,}\) which would imply that \(B_3\) is linearly dependent, and since \(B_3\) is a basis, this is impossible.
Next, we need to show that \(\spn B = U+W\text{.}\) Let \(\vv\in U+W\text{;}\) then \(\vv=\uu+\ww\) for some \(\uu\in U\) and \(\ww\in W\text{.}\) Since \(\uu\in U\text{,}\) there exist scalars \(a_1,\ldots, a_k, b_1,\ldots, b_m\) such that
\begin{equation*} \uu=a_1\xx_1+\cdots + a_k\xx_k+b_1\uu_1+\cdots+b_m\uu_m\text{,} \end{equation*}
and since \(\ww\in W\text{,}\) there exist scalars \(c_1,\ldots, c_k,d_1,\ldots, d_n\) such that
\begin{equation*} \ww=c_1\xx_1+\cdots +c_k\xx_k+d_1\ww_1+\cdots +d_n\ww_n\text{.} \end{equation*}
Thus,
\begin{equation*} \vv = \uu+\ww = (a_1+c_1)\xx_1+\cdots+(a_k+c_k)\xx_k+b_1\uu_1+\cdots +b_m\uu_m+d_1\ww_1+\cdots + d_n\ww_n\text{,} \end{equation*}
which shows that \(\vv\in \spn B\text{.}\)
Finally, we check that this gives the dimension as claimed. We have
\begin{equation*} \dim U + \dim W - \dim (U\cap W) = (k+m)+(k+n)-k=k+m+n=\dim(U+W)\text{,} \end{equation*}
since there are \(k\) vectors in \(B_1\text{,}\) \(k+m\) vectors in \(B_2\text{,}\) \(k+n\) vectors in \(B_3\text{,}\) and \(k+m+n\) vectors in \(B\text{.}\)
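As a quick check of this formula, let \(U=\spn\{(1,0)\}\) and \(W=\spn\{(1,1)\}\) in \(\R^2\text{.}\) These are two distinct lines through the origin, so \(U\cap W=\{\zer\}\text{,}\) while \(U+W=\R^2\text{,}\) since the two spanning vectors are linearly independent. The formula then reads
\begin{equation*} \dim U+\dim W-\dim(U\cap W)=1+1-0=2=\dim(U+W)\text{,} \end{equation*}
as expected.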
Notice how a vector \(\vv\in U+W\) can be written as a sum of a vector in \(U\) and a vector in \(W\text{,}\) but not uniquely, in general: in the above proof, we can change the values of the coefficients \(a_i\) and \(c_i\text{,}\) as long as each sum \(a_i+c_i\) remains unchanged. Note that these are the coefficients of the basis vectors for \(U\cap W\text{,}\) so we can avoid this ambiguity if \(U\) and \(W\) have no nonzero vectors in common.

Exercise 1.8.7.

Let \(V=\R^3\text{,}\) and let \(U=\{(x,y,0)\,|\,x,y\in\R\}, W=\{(0,y,z)\,|\,y,z\in\R\}\) be two subspaces.

(a)

Determine the intersection \(U\cap W\text{.}\)

(b)

Write the vector \(\vv=(1,1,1)\) in the form \(\vv=\uu+\ww\text{,}\) where \(\uu\in U\) and \(\ww\in W\text{,}\) in at least two different ways.

Definition 1.8.8.

Let \(U\) and \(W\) be subspaces of a vector space \(V\text{.}\) If \(U\cap W =\{\zer\}\text{,}\) we say that the sum \(U+W\) is a direct sum, which we denote by \(U\oplus W\text{.}\)
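For example, in \(\R^3\) the \(xy\) plane \(U=\{(x,y,0)\,|\,x,y\in\R\}\) and the \(z\) axis \(W=\{(0,0,z)\,|\,z\in\R\}\) satisfy \(U\cap W=\{\zer\}\text{,}\) so their sum is direct, and in fact \(U\oplus W=\R^3\text{.}\) By contrast, the sum of the two planes in Exercise 1.8.7 is not direct, since their intersection contains nonzero vectors.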
If the sum is direct, then we have simply \(\dim(U\oplus W) = \dim U + \dim W\text{.}\) The other reason why direct sums are preferable is that any \(\vv\in U\oplus W\) can be written uniquely as \(\vv=\uu+\ww\) where \(\uu\in U\) and \(\ww\in W\text{,}\) since we no longer have the ambiguity resulting from the basis vectors in \(U\cap W\text{.}\) In fact, this uniqueness property characterizes direct sums: the sum \(U+W\) is direct if and only if every \(\vv\in U+W\) can be written in this form in exactly one way, as the following proof shows.

Proof.

Suppose that \(U\cap W = \{\zer\}\text{,}\) and suppose that we have \(\vv = \uu_1+\ww_1 = \uu_2+\ww_2\text{,}\) for \(\uu_1,\uu_2\in U,\ww_1,\ww_2\in W\text{.}\) Then \(\zer=(\uu_1-\uu_2)+(\ww_1-\ww_2)\text{,}\) which implies that
\begin{equation*} \ww_1-\ww_2 = -(\uu_1-\uu_2)\text{.} \end{equation*}
Now, \(\uu=\uu_1-\uu_2\in U\text{,}\) since \(U\) is a subspace, and similarly, \(\ww=\ww_1-\ww_2\in W\text{.}\) But we also have \(\ww=-\uu\text{,}\) which implies that \(\ww\in U\) (since \(-\uu\in U\text{,}\) and \(\ww\) is the same vector as \(-\uu\)). Therefore, \(\ww\in U\cap W\text{,}\) which implies that \(\ww=\zer\text{,}\) so \(\ww_1=\ww_2\text{.}\) But we must also then have \(\uu=\zer\text{,}\) so \(\uu_1=\uu_2\text{.}\)
Conversely, suppose that every \(\vv\in U+W\) can be written uniquely as \(\vv=\uu+\ww\text{,}\) with \(\uu\in U\) and \(\ww\in W\text{.}\) Suppose that \(\mathbf{a}\in U\cap W\text{.}\) Then \(\mathbf{a}\in U\) and \(\mathbf{a}\in W\text{,}\) so we also have \(-\mathbf{a}\in W\text{,}\) since \(W\) is a subspace. But then \(\zer=\mathbf{a}+(-\mathbf{a})\text{,}\) where \(\mathbf{a}\in U\) and \(-\mathbf{a}\in W\text{.}\) On the other hand, \(\zer=\zer+\zer\text{,}\) and \(\zer\) belongs to both \(U\) and \(W\text{.}\) By the uniqueness of such decompositions, it follows that \(\mathbf{a}=\zer\text{.}\) Since \(\mathbf{a}\) was arbitrary, \(U\cap W = \{\zer\}\text{.}\)
We end with one last application of the theory we’ve developed on the existence of a basis for a finite-dimensional vector space. As we continue on to later topics, we’ll find that it is often useful to be able to decompose a vector space into a direct sum of subspaces. Using bases, we can show that this is always possible: for any subspace \(U\) of a finite-dimensional vector space \(V\text{,}\) there is a subspace \(W\) of \(V\) such that \(V=U\oplus W\text{.}\)

Proof.

Let \(\{\uu_1,\ldots, \uu_m\}\) be a basis of \(U\text{.}\) Since \(U\subseteq V\text{,}\) the set \(\{\uu_1,\ldots, \uu_m\}\) is a linearly independent subset of \(V\text{.}\) Since any linearly independent set can be extended to a basis of \(V\text{,}\) there exist vectors \(\ww_1,\ldots,\ww_n\) such that
\begin{equation*} \{\uu_1,\ldots, \uu_m,\ww_1,\ldots, \ww_n\} \end{equation*}
is a basis of \(V\text{.}\)
Now, let \(W = \spn\{\ww_1,\ldots, \ww_n\}\text{.}\) Then \(W\) is a subspace, and \(\{\ww_1,\ldots, \ww_n\}\) is a basis for \(W\text{.}\) (It spans, and must be independent since it’s a subset of an independent set.)
Clearly, \(U+W=V\text{,}\) since \(U+W\) contains the basis for \(V\) we’ve constructed. To show the sum is direct, it suffices to show that \(U\cap W = \{\zer\}\text{.}\) To that end, suppose that \(\vv\in U\cap W\text{.}\) Since \(\vv\in U\text{,}\) we have
\begin{equation*} \vv=a_1\uu_1+\cdots +a_m\uu_m \end{equation*}
for scalars \(a_1,\ldots, a_m\text{.}\) Since \(\vv\in W\text{,}\) we can write
\begin{equation*} \vv=b_1\ww_1+\cdots + b_n\ww_n \end{equation*}
for scalars \(b_1,\ldots, b_n\text{.}\) But then
\begin{equation*} \zer=\vv-\vv=a_1\uu_1+\cdots +a_m\uu_m-b_1\ww_1-\cdots -b_n\ww_n\text{.} \end{equation*}
Since \(\{\uu_1,\ldots, \uu_m,\ww_1,\ldots, \ww_n\}\) is a basis for \(V\text{,}\) it’s independent, and therefore, all of the \(a_i,b_j\) must be zero, and therefore, \(\vv=\zer\text{.}\)
The subspace \(W\) constructed in the theorem above is called a complement of \(U\text{.}\) It is not unique; indeed, it depends on the choice of basis vectors. For example, if \(U\) is a one-dimensional subspace of \(\R^2\text{;}\) that is, a line, then any other non-parallel line through the origin provides a complement of \(U\text{.}\) Later we will see that an especially useful choice of complement is the orthogonal complement.

Definition 1.8.11.

Let \(U\) be a subspace of a vector space \(V\text{.}\) We say that a subspace \(W\) of \(V\) is a complement of \(U\) if \(U\oplus W=V\text{.}\)
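For example, if \(U=\spn\{(1,0)\}\) in \(\R^2\text{,}\) then both \(W_1=\spn\{(0,1)\}\) and \(W_2=\spn\{(1,1)\}\) are complements of \(U\text{:}\) in each case the intersection with \(U\) is \(\{\zer\}\) and the sum is all of \(\R^2\text{,}\) illustrating again that a complement is generally far from unique.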

Exercises

1.

Let \(U\) be the subspace of \(P_3(\R)\) consisting of all polynomials \(p(x)\) with \(p(1)=0\text{.}\)
(a)
Determine a basis for \(U\text{.}\)
Hint.
Use the factor theorem.
(b)
Find a complement of \(U\text{.}\)
Hint.
What is the dimension of \(U\text{?}\) (So what must be the dimension of its complement?) What condition ensures that a polynomial does not belong to \(U\text{?}\)

2.

Let \(U\) be the subspace of \(\R^5\) defined by
\begin{equation*} U = \{(x_1,x_2,x_3,x_4,x_5)\,|\, x_1=3x_3, \text{ and } 3x_2-5x_4=x_5\}\text{.} \end{equation*}
(a)
Determine a basis for \(U\text{.}\)
Hint.
Try plugging in the given conditions, and then decomposing the vector into pieces with one variable each.
(b)
Find a complement of \(U\text{.}\)
Hint.
One way to solve this is to ask yourself, what vectors are not in the span of the basis you found above? You can do this by solving an appropriate system of equations.

3.

    Suppose \(U\) and \(W\) are 4-dimensional subspaces of \(\R^6\text{.}\) What are all possible dimensions of \(U\cap W\text{?}\)
  • \(1\)
  • What would Theorem 1.8.6 say about \(\dim (U+W)\) in this case? Why is that not possible?
  • \(2\)
  • Good job! If \(\dim (U+W) = 6\) (the largest it can possibly be), then \(\dim (U\cap W) = \dim U+\dim W-\dim (U+W) = 4+4-6=2\text{.}\)
  • \(3\)
  • Yes! This will be the case if \(\dim (U+W) = 5\text{.}\)
  • \(4\)
  • Correct! If \(U\subseteq W\text{,}\) then \(U=W=U+W=U\cap W\text{,}\) all with dimension \(4\text{.}\)
  • \(5\)
  • Since \(U+W\) contains both \(U\) and \(W\text{,}\) its dimension cannot be less than \(4\text{,}\) so \(\dim (U\cap W) = \dim U + \dim W - \dim(U+W)\) cannot be more than \(4+4-4=4\text{.}\)

4.

Let \(U=\spn\{{2x^{2}-1},{2x-4x^{2}}\}\) and \(W=\spn\{{4x^{3}-2x^{2}},{24x^{2}-8x-4}\}\) be subspaces of the vector space \(V=P_3(\R)\text{.}\)
(a)
Is \(\{{2x^{2}-1},{2x-4x^{2}},{4x^{3}-2x^{2}},{24x^{2}-8x-4}\}\) a basis for \(V\text{?}\)
(b)
What is the dimension of \(U+W\text{?}\)
(c)
What is the dimension of \(U\cap W\text{?}\)