
Section S Subspaces

A subspace is a vector space that is contained within another vector space. So every subspace is a vector space in its own right, but it is also defined relative to some other (larger) vector space. We will discover shortly that we are already familiar with a wide variety of subspaces from previous sections.

Subsection S Subspaces

Here is the principal definition for this section.

Definition S. Subspace.

Suppose that \(V\) and \(W\) are two vector spaces that have identical definitions of vector addition and scalar multiplication, and suppose that \(W\) is a subset of \(V\text{,}\) \(W\subseteq V\text{.}\) Then \(W\) is a subspace of \(V\text{.}\)
Let us look at an example of a vector space inside another vector space.

Example SC3. A subspace of \(\complex{3}\).

We know that \(\complex{3}\) is a vector space (Example VSCV). Consider the subset,
\begin{equation*} W=\setparts{\colvector{x_1\\x_2\\x_3}}{2x_1-5x_2+7x_3=0} \end{equation*}
It is clear that \(W\subseteq\complex{3}\text{,}\) since the objects in \(W\) are column vectors of size 3. But is \(W\) a vector space? Does it satisfy the ten properties of Definition VS when we use the same operations? That is the main question.
Suppose we have two vectors from \(W\text{,}\)
\begin{align*} \vect{x}&=\colvector{x_1\\x_2\\x_3}\in W & \vect{y}&=\colvector{y_1\\y_2\\y_3}\in W\text{.} \end{align*}
Then we know that these vectors cannot be totally arbitrary; they must have gained membership in \(W\) by virtue of meeting the membership test. For example, we know that \(\vect{x}\) must satisfy \(2x_1-5x_2+7x_3=0\) while \(\vect{y}\) must satisfy \(2y_1-5y_2+7y_3=0\text{.}\) Our first property (Property AC) asks the question, is \(\vect{x}+\vect{y}\in W\text{?}\) When our set of vectors was \(\complex{3}\text{,}\) this was an easy question to answer. Now it is not so obvious. Notice first that
\begin{equation*} \vect{x}+\vect{y}= \colvector{x_1\\x_2\\x_3}+\colvector{y_1\\y_2\\y_3}= \colvector{x_1+y_1\\x_2+y_2\\x_3+y_3} \end{equation*}
and we can test this vector for membership in \(W\) as follows. Because \(\vect{x}\in W\) we know \(2x_1-5x_2+7x_3=0\) and because \(\vect{y}\in W\) we know \(2y_1-5y_2+7y_3=0\text{.}\) Therefore,
\begin{align*} 2(x_1+y_1)-5(x_2+y_2)+7(x_3+y_3) &=2x_1+2y_1-5x_2-5y_2+7x_3+7y_3\\ &=(2x_1-5x_2+7x_3)+(2y_1-5y_2+7y_3)\\ &=0 + 0\\ &=0 \end{align*}
and by this computation we see that \(\vect{x}+\vect{y}\in W\text{.}\) One property down, nine to go.
If \(\alpha\) is a scalar and \(\vect{x}\in W\text{,}\) is it always true that \(\alpha\vect{x}\in W\text{?}\) This is what we need to establish Property SC. Again, the answer is not as obvious as it was when our set of vectors was all of \(\complex{3}\text{.}\) Let us see. We have
\begin{equation*} \alpha\vect{x}=\alpha\colvector{x_1\\x_2\\x_3}=\colvector{\alpha x_1\\\alpha x_2\\\alpha x_3} \end{equation*}
and we can test this vector for membership in \(W\text{.}\) First, note that because \(\vect{x}\in W\) we know \(2x_1-5x_2+7x_3=0\text{.}\) Therefore,
\begin{align*} 2(\alpha x_1)-5(\alpha x_2)+7(\alpha x_3) &=\alpha(2x_1-5x_2+7x_3)\\ &=\alpha 0\\ &=0 \end{align*}
and we see that indeed \(\alpha\vect{x}\in W\text{.}\) Always.
If \(W\) has a zero vector, it will be unique (Theorem ZVU). The zero vector for \(\complex{3}\) should also perform the required duties when added to elements of \(W\text{.}\) So the likely candidate for a zero vector in \(W\) is the same zero vector that we know \(\complex{3}\) has. You can check that \(\zerovector=\colvector{0\\0\\0}\) is a zero vector in \(W\) too (Property Z).
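Explicitly, the required check is the single computation
\begin{equation*} 2(0)-5(0)+7(0)=0\text{.} \end{equation*}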
With a zero vector, we can now ask about additive inverses (Property AI). As you might suspect, the natural candidate for an additive inverse in \(W\) is the same as the additive inverse from \(\complex{3}\text{.}\) However, we must ensure that these additive inverses actually are elements of \(W\text{.}\) Given \(\vect{x}\in W\text{,}\) is \(\vect{-x}\in W\text{?}\)
\begin{equation*} \vect{-x}=\colvector{-x_1\\-x_2\\-x_3} \end{equation*}
and we can test this vector for membership in \(W\text{.}\) As before, because \(\vect{x}\in W\) we know \(2x_1-5x_2+7x_3=0\text{.}\)
\begin{align*} 2(-x_1)-5(-x_2)+7(-x_3) &=-(2x_1-5x_2+7x_3)\\ &=-0\\ &=0 \end{align*}
and we now believe that \(\vect{-x}\in W\text{.}\)
Is the vector addition in \(W\) commutative (Property C)? Is \(\vect{x}+\vect{y}=\vect{y}+\vect{x}\text{?}\) Of course! Nothing about restricting the scope of our set of vectors will prevent the operation from still being commutative. Indeed, the remaining five properties are unaffected by the transition to a smaller set of vectors, and so remain true. That was convenient.
So \(W\) satisfies all ten properties, is therefore a vector space, and thus earns the title of being a subspace of \(\complex{3}\text{.}\)
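For readers following along in Sage, here is a minimal sketch of this example (the construction via a kernel anticipates Theorem NSMS below, and the variable names are our own):

W = matrix(QQ, [[2, -5, 7]]).right_kernel()  # vectors with 2*x1 - 5*x2 + 7*x3 == 0
vector(QQ, [1, 3, 13/7]) in W                # True, since 2(1) - 5(3) + 7(13/7) = 0
vector(QQ, [1, 1, 1]) in W                   # False, since 2 - 5 + 7 = 4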

Subsection TS Testing Subspaces

In Example SC3 we proceeded through all ten of the vector space properties before believing that a subset was a subspace. But six of the properties were easy to prove, and we can lean on some of the properties of the vector space (the superset) to make the other four easier. Here is a theorem that will make it easier to test if a subset is a vector space. A shortcut if there ever was one.

Theorem TSS. Testing Subsets for Subspaces.

Suppose that \(V\) is a vector space and \(W\) is a subset of \(V\text{,}\) \(W\subseteq V\text{.}\) Endow \(W\) with the same operations as \(V\text{.}\) Then \(W\) is a subspace if and only if three conditions are met:

1. \(W\) is nonempty, \(W\neq\emptyset\text{.}\)
2. If \(\vect{x}\in W\) and \(\vect{y}\in W\text{,}\) then \(\vect{x}+\vect{y}\in W\text{.}\)
3. If \(\alpha\in\complexes\) and \(\vect{x}\in W\text{,}\) then \(\alpha\vect{x}\in W\text{.}\)

Proof.

(⇒) 
We have the hypothesis that \(W\) is a subspace, so by Property Z we know that \(W\) contains a zero vector. This is enough to show that \(W\neq\emptyset\text{.}\) Also, since \(W\) is a vector space it satisfies the additive and scalar multiplication closure properties (Property AC, Property SC), and so exactly meets the second and third conditions. If that was easy, the other direction might require a bit more work.
(⇐) 
We have three properties for our hypothesis, and from this we should conclude that \(W\) has the ten defining properties of a vector space. The second and third conditions of our hypothesis are exactly Property AC and Property SC. Our hypothesis that \(V\) is a vector space implies that Property C, Property AA, Property SMA, Property DVA, Property DSA and Property O all hold. They continue to be true for vectors from \(W\) since passing to a subset, and keeping the operation the same, leaves their statements unchanged. Eight down, two to go.
Suppose \(\vect{x}\in W\text{.}\) Then by the third part of our hypothesis (scalar closure), we know that \((-1)\vect{x}\in W\text{.}\) By Theorem AISM \((-1)\vect{x}=\vect{-x}\text{,}\) so together these statements show us that \(\vect{-x}\in W\text{.}\) \(\vect{-x}\) is the additive inverse of \(\vect{x}\) in \(V\text{,}\) but will continue in this role when viewed as an element of the subset \(W\text{.}\) So every element of \(W\) has an additive inverse that is an element of \(W\) and Property AI is completed. Just one property left.
While we have implicitly discussed the zero vector in the previous paragraph, we need to be certain that the zero vector (of \(V\)) really lives in \(W\text{.}\) Since \(W\) is nonempty, we can choose some vector \(\vect{z}\in W\text{.}\) Then by the argument in the previous paragraph, we know \(\vect{-z}\in W\text{.}\) Now by Property AI for \(V\) and then by the second part of our hypothesis (additive closure) we see that
\begin{equation*} \zerovector=\vect{z}+(\vect{-z})\in W \end{equation*}
So \(W\) contains the zero vector from \(V\text{.}\) Since this vector performs the required duties of a zero vector in \(V\text{,}\) it will continue in that role as an element of \(W\text{.}\) This gives us Property Z, the final property of the ten required. (Sarah Fellez contributed to this proof.)
So just three conditions, plus being a subset of a known vector space, gets us all ten properties. Fabulous! This theorem can be paraphrased by saying that a subspace is “a nonempty subset (of a vector space) that is closed under vector addition and scalar multiplication.”
You might want to go back and rework Example SC3 in light of this result, perhaps seeing where we can now economize or where the work done in the example mirrored the proof and where it did not. We will press on and apply this theorem in a slightly more abstract setting.

Example SP4. A subspace of \(P_4\).

\(P_4\) is the vector space of polynomials with degree at most \(4\) (Example VSP). Define a subset \(W\) as
\begin{equation*} W=\setparts{p(x)}{p\in P_4,\ p(2)=0} \end{equation*}
so \(W\) is the collection of those polynomials (with degree 4 or less) whose graphs cross the \(x\)-axis at \(x=2\text{.}\) Whenever we encounter a new set it is a good idea to gain a better understanding of the set by finding a few elements in the set, and a few outside it. For example \(x^2-x-2\in W\text{,}\) while \(x^4+x^3-7\not\in W\text{.}\)
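A quick Sage sketch can automate such spot checks (the ring name R is our own choice); we simply evaluate each candidate at \(x=2\text{:}\)

R.<x> = QQ[]          # polynomials with rational coefficients
p = x^2 - x - 2
q = x^4 + x^3 - 7
p(2), q(2)            # returns (0, 17), so p is in W and q is not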
Is \(W\) nonempty? Yes, \(x-2\in W\text{.}\)
Additive closure? Suppose \(p\in W\) and \(q\in W\text{.}\) Is \(p+q\in W\text{?}\) \(p\) and \(q\) are not totally arbitrary; we know that \(p(2)=0\) and \(q(2)=0\text{.}\) Then we can check \(p+q\) for membership in \(W\text{,}\)
\begin{align*} (p+q)(2)&=p(2)+q(2)&&\text{Addition in }P_4\\ &=0+0&&p\in W,\,q\in W\\ &=0&&\text{Property ZCN} \end{align*}
so we see that \(p+q\) qualifies for membership in \(W\text{.}\)
Scalar multiplication closure? Suppose that \(\alpha\in\complexes\) and \(p\in W\text{.}\) Then we know that \(p(2)=0\text{.}\) Testing \(\alpha p\) for membership,
\begin{align*} (\alpha p)(2)&=\alpha p(2)&&\text{Scalar multiplication in }P_4\\ &=\alpha 0&&p\in W\\ &=0&&\text{Theorem ZPCN} \end{align*}
so \(\alpha p\in W\text{.}\)
We have shown that \(W\) meets the three conditions of Theorem TSS and so qualifies as a subspace of \(P_4\text{.}\) Notice that by Definition S we now know that \(W\) is also a vector space. So all the properties of a vector space (Definition VS) and the theorems of Section VS apply in full.
Much of the power of Theorem TSS is that we can easily establish new vector spaces if we can locate them as subsets of other vector spaces, such as the vector spaces presented in Subsection VS.EVS.
It can be as instructive to consider some subsets that are not subspaces. Since Theorem TSS is an equivalence (see Proof Technique E) we can be assured that a subset is not a subspace if it violates one of the three conditions, and in any example of interest this will not be the “nonempty” condition. However, since a subspace has to be a vector space in its own right, we can also search for a violation of any one of the ten defining properties in Definition VS or any inherent property of a vector space, such as those given by the basic theorems of Subsection VS.VSP. Notice also that a violation need only be for a specific vector or pair of vectors.

Example NSC2Z. A non-subspace in \(\complex{2}\text{,}\) zero vector.

Consider the subset \(W\) below as a candidate for being a subspace of \(\complex{2}\)
\begin{equation*} W=\setparts{\colvector{x_1\\x_2}}{3x_1-5x_2=12} \end{equation*}
The zero vector of \(\complex{2}\text{,}\) \(\zerovector=\colvector{0\\0}\) will need to be the zero vector in \(W\) also. However, \(\zerovector\not\in W\) since \(3(0)-5(0)=0\neq 12\text{.}\) So \(W\) has no zero vector and fails Property Z of Definition VS. This subset also fails to be closed under addition and scalar multiplication. Can you find examples of this?

Example NSC2A. A non-subspace in \(\complex{2}\text{,}\) additive closure.

Consider the subset \(X\) below as a candidate for being a subspace of \(\complex{2}\)
\begin{equation*} X=\setparts{\colvector{x_1\\x_2}}{x_1x_2=0} \end{equation*}
You can check that \(\zerovector\in X\text{,}\) so the approach of the last example will not get us anywhere. However, notice that \(\vect{x}=\colvector{1\\0}\in X\) and \(\vect{y}=\colvector{0\\1}\in X\text{.}\) Yet
\begin{equation*} \vect{x}+\vect{y}=\colvector{1\\0}+\colvector{0\\1}=\colvector{1\\1}\not\in X \end{equation*}
So \(X\) fails the additive closure requirement of either Property AC or Theorem TSS, and is therefore not a subspace.

Example NSC2S. A non-subspace in \(\complex{2}\text{,}\) scalar multiplication closure.

Consider the subset \(Y\) below as a candidate for being a subspace of \(\complex{2}\)
\begin{equation*} Y=\setparts{\colvector{x_1\\x_2}}{x_1\in{\mathbb Z},\,x_2\in{\mathbb Z}} \end{equation*}
\({\mathbb Z}\) is the set of integers, so we are only allowing “whole numbers” as the constituents of our vectors. Now, \(\zerovector\in Y\text{,}\) and additive closure also holds (can you prove these claims?). So we will have to try something different. Note that \(\alpha = \frac{1}{2}\in\complexes\) and \(\vect{x}=\colvector{2\\3}\in Y\text{,}\) but
\begin{equation*} \alpha\vect{x}=\frac{1}{2}\colvector{2\\3}=\colvector{1\\\frac{3}{2}}\not\in Y \end{equation*}
So \(Y\) fails the scalar multiplication closure requirement of either Property SC or Theorem TSS, and is therefore not a subspace.
There are two examples of subspaces that are trivial. Suppose that \(V\) is any vector space. Then \(V\) is a subset of itself and is a vector space. By Definition S, \(V\) qualifies as a subspace of itself. The set containing just the zero vector \(Z=\set{\zerovector}\) is also a subspace as can be seen by applying Theorem TSS or by simple modifications of the techniques hinted at in Example VSS. Since these subspaces are so obvious (and therefore not too interesting) we will refer to them as being trivial.

Definition TS. Trivial Subspaces.

Given the vector space \(V\text{,}\) the subspaces \(V\) and \(\set{\zerovector}\) are each called a trivial subspace.
We can also use Theorem TSS to prove more general statements about subspaces, as illustrated in the next theorem.

Theorem NSMS. Null Space of a Matrix is a Subspace.

Suppose that \(A\) is an \(m\times n\) matrix. Then the null space of \(A\text{,}\) \(\nsp{A}\text{,}\) is a subspace of \(\complex{n}\text{.}\)

Proof.

We will examine the three requirements of Theorem TSS. Recall that Definition NSM can be formulated as \(\nsp{A}=\setparts{\vect{x}\in\complex{n}}{A\vect{x}=\zerovector}\text{.}\)
First, \(\zerovector\in\nsp{A}\text{,}\) which can be inferred as a consequence of Theorem HSC. So \(\nsp{A}\neq\emptyset\text{.}\)
Second, check additive closure by supposing that \(\vect{x}\in\nsp{A}\) and \(\vect{y}\in\nsp{A}\text{.}\) So we know a little something about \(\vect{x}\) and \(\vect{y}\text{:}\) \(A\vect{x}=\zerovector\) and \(A\vect{y}=\zerovector\text{,}\) and that is all we know. Question: Is \(\vect{x}+\vect{y}\in\nsp{A}\text{?}\) Let us check.
\begin{align*} A(\vect{x}+\vect{y})&=A\vect{x}+A\vect{y}&& \text{Theorem MMDAA}\\ &=\zerovector+\zerovector&&\vect{x}\in\nsp{A},\ \vect{y}\in\nsp{A}\\ &=\zerovector&& \text{Theorem VSPCV} \end{align*}
So, yes, \(\vect{x}+\vect{y}\) qualifies for membership in \(\nsp{A}\text{.}\)
Third, check scalar multiplication closure by supposing that \(\alpha\in\complexes\) and \(\vect{x}\in\nsp{A}\text{.}\) So we know a little something about \(\vect{x}\text{:}\) \(A\vect{x}=\zerovector\text{,}\) and that is all we know. Question: Is \(\alpha\vect{x}\in\nsp{A}\text{?}\) Let us check.
\begin{align*} A(\alpha\vect{x})&=\alpha(A\vect{x})&& \text{Theorem MMSMM}\\ &=\alpha\zerovector&&\vect{x}\in\nsp{A}\\ &=\zerovector&& \text{Theorem ZVSM} \end{align*}
So, yes, \(\alpha\vect{x}\) qualifies for membership in \(\nsp{A}\text{.}\)
Having met the three conditions in Theorem TSS we can now say that the null space of a matrix is a subspace (and hence a vector space in its own right!).
Here is an example where we can exercise Theorem NSMS.

Example RSNS. Recasting a subspace as a null space.

Consider the subset of \(\complex{5}\) defined as
\begin{equation*} W =\setparts{\colvector{x_1\\x_2\\x_3\\x_4\\x_5}}{ \begin{array}{l} 3x_1+x_2-5x_3+7x_4+x_5=0,\\ 4x_1+6x_2+3x_3-6x_4-5x_5=0,\\ -2x_1+4x_2+7x_4+x_5=0 \end{array} }\text{.} \end{equation*}
It is possible to show that \(W\) is a subspace of \(\complex{5}\) by checking the three conditions of Theorem TSS directly, but it will get tedious rather quickly. Instead, give \(W\) a fresh look and notice that it is a set of solutions to a homogeneous system of equations. Define the matrix
\begin{equation*} A=\begin{bmatrix} 3&1&-5&7&1\\ 4&6&3&-6&-5\\ -2&4&0&7&1 \end{bmatrix} \end{equation*}
and then recognize that \(W=\nsp{A}\text{.}\) By Theorem NSMS we can immediately see that \(W\) is a subspace. Boom!
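As a hedged Sage aside (the matrix name is ours), this recasting is a one-liner:

A = matrix(QQ, [[3, 1, -5, 7, 1], [4, 6, 3, -6, -5], [-2, 4, 0, 7, 1]])
A.right_kernel()   # W = N(A), reported by Sage as a vector space of dimension 2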

Subsection TSS The Span of a Set

The span of a set of column vectors got a heavy workout in Chapter V and Chapter M. The definition of the span depended only on being able to formulate linear combinations. In any of our more general vector spaces we always have a definition of vector addition and of scalar multiplication. So we can build linear combinations and manufacture spans. This subsection contains two definitions that are just mild variants of definitions we have seen earlier for column vectors. If you have not already, compare them with Definition LCCV and Definition SSCV.

Definition LC. Linear Combination.

Suppose that \(V\) is a vector space. Given \(n\) vectors \(\vectorlist{u}{n}\) and \(n\) scalars \(\alpha_1,\,\alpha_2,\,\alpha_3,\,\ldots,\,\alpha_n\text{,}\) their linear combination is the vector
\begin{equation*} \lincombo{\alpha}{u}{n}\text{.} \end{equation*}

Example LCM. A linear combination of matrices.

In the vector space \(M_{23}\) of \(2\times 3\) matrices, we have the vectors
\begin{align*} \vect{x}&= \begin{bmatrix} 1&3&-2\\ 2&0&7 \end{bmatrix} & \vect{y}&= \begin{bmatrix} 3&-1&2\\ 5&5&1 \end{bmatrix} & \vect{z}&= \begin{bmatrix} 4&2&-4\\ 1&1&1 \end{bmatrix} \end{align*}
and we can form linear combinations such as
\begin{align*} 2\vect{x}+4\vect{y}+(-1)\vect{z}&= 2 \begin{bmatrix} 1&3&-2\\ 2&0&7 \end{bmatrix} +4 \begin{bmatrix} 3&-1&2\\ 5&5&1 \end{bmatrix} +(-1) \begin{bmatrix} 4&2&-4\\ 1&1&1 \end{bmatrix}\\ &= \begin{bmatrix} 2&6&-4\\ 4&0&14 \end{bmatrix} + \begin{bmatrix} 12&-4&8\\ 20&20&4 \end{bmatrix} + \begin{bmatrix} -4&-2&4\\ -1&-1&-1 \end{bmatrix} = \begin{bmatrix} 10&0&8\\ 23&19&17 \end{bmatrix} \end{align*}
or,
\begin{align*} 4\vect{x}-2\vect{y}+3\vect{z}&= 4 \begin{bmatrix} 1&3&-2\\ 2&0&7 \end{bmatrix} -2 \begin{bmatrix} 3&-1&2\\ 5&5&1 \end{bmatrix} +3 \begin{bmatrix} 4&2&-4\\ 1&1&1 \end{bmatrix}\\ &= \begin{bmatrix} 4&12&-8\\ 8&0&28 \end{bmatrix} + \begin{bmatrix} -6&2&-4\\ -10&-10&-2 \end{bmatrix} + \begin{bmatrix} 12&6&-12\\ 3&3&3 \end{bmatrix} = \begin{bmatrix} 10&20&-24\\ 1&-7&29 \end{bmatrix}\text{.} \end{align*}
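These computations are easy to reproduce in Sage, since matrices add and scale with the ordinary operators (a minimal sketch; the variable names are ours):

x = matrix(QQ, [[1, 3, -2], [2, 0, 7]])
y = matrix(QQ, [[3, -1, 2], [5, 5, 1]])
z = matrix(QQ, [[4, 2, -4], [1, 1, 1]])
2*x + 4*y + (-1)*z   # [10  0  8] over [23 19 17]
4*x - 2*y + 3*z      # [10 20 -24] over [ 1 -7 29]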
When we realize that we can form linear combinations in any vector space, then it is natural to revisit our definition of the span of a set, since it is the set of all possible linear combinations of a set of vectors.

Definition SS. Span of a Set.

Suppose that \(V\) is a vector space. Given a set of vectors \(S=\{\vectorlist{u}{t}\}\text{,}\) their span, \(\spn{S}\text{,}\) is the set of all possible linear combinations of \(\vectorlist{u}{t}\text{.}\) Symbolically,
\begin{align*} \spn{S}&=\setparts{\lincombo{\alpha}{u}{t}}{\alpha_i\in\complexes,\,1\leq i\leq t}\\ &=\setparts{\sum_{i=1}^{t}\alpha_i\vect{u}_i}{\alpha_i\in\complexes,\,1\leq i\leq t}\text{.} \end{align*}

Theorem SSS. Span of a Set is a Subspace.

Suppose that \(V\) is a vector space. Given a set of vectors \(S=\set{\vectorlist{u}{t}}\subseteq V\text{,}\) their span, \(\spn{S}\text{,}\) is a subspace of \(V\text{.}\)

Proof.

By Definition SS, the span contains linear combinations of vectors from the vector space \(V\text{,}\) so by repeated use of the closure properties, Property AC and Property SC, \(\spn{S}\) can be seen to be a subset of \(V\text{.}\)
We will then verify the three conditions of Theorem TSS. First,
\begin{align*} \zerovector &=\zerovector+\zerovector+\zerovector+\cdots+\zerovector&& \text{Property Z}\\ &=0\vect{u}_1+0\vect{u}_2+0\vect{u}_3+\cdots+0\vect{u}_t&& \text{Theorem ZSSM}\text{.} \end{align*}
So we have written \(\zerovector\) as a linear combination of the vectors in \(S\) and by Definition SS, \(\zerovector\in\spn{S}\) and therefore \(\spn{S}\neq\emptyset\text{.}\)
Second, suppose \(\vect{x}\in\spn{S}\) and \(\vect{y}\in\spn{S}\text{.}\) Can we conclude that \(\vect{x}+\vect{y}\in\spn{S}\text{?}\) What do we know about \(\vect{x}\) and \(\vect{y}\) by virtue of their membership in \(\spn{S}\text{?}\) There must be scalars from \(\complexes\text{,}\) \(\alpha_1,\,\alpha_2,\,\alpha_3,\,\ldots,\,\alpha_t\) and \(\beta_1,\,\beta_2,\,\beta_3,\,\ldots,\,\beta_t\) so that
\begin{align*} \vect{x}&=\lincombo{\alpha}{u}{t}\\ \vect{y}&=\lincombo{\beta}{u}{t} \end{align*}
Then
\begin{align*} \vect{x}+\vect{y}&=\lincombo{\alpha}{u}{t}\\ &\quad\quad+\lincombo{\beta}{u}{t}\\ &=\alpha_1\vect{u}_1+\beta_1\vect{u}_1+\alpha_2\vect{u}_2+\beta_2\vect{u}_2\\ &\quad\quad+\alpha_3\vect{u}_3+\beta_3\vect{u}_3+\cdots+\alpha_t\vect{u}_t+\beta_t\vect{u}_t&& \text{Property AA}, \text{Property C}\\ &=(\alpha_1+\beta_1)\vect{u}_1+(\alpha_2+\beta_2)\vect{u}_2\\ &\quad\quad+(\alpha_3+\beta_3)\vect{u}_3+\cdots+(\alpha_t+\beta_t)\vect{u}_t&& \text{Property DSA}\text{.} \end{align*}
Since each \(\alpha_i+\beta_i\) is again a scalar from \(\complexes\) we have expressed the vector sum \(\vect{x}+\vect{y}\) as a linear combination of the vectors from \(S\text{,}\) and therefore by Definition SS we can say that \(\vect{x}+\vect{y}\in\spn{S}\text{.}\)
Third, suppose \(\alpha\in\complexes\) and \(\vect{x}\in\spn{S}\text{.}\) Can we conclude that \(\alpha\vect{x}\in\spn{S}\text{?}\) What do we know about \(\vect{x}\) by virtue of its membership in \(\spn{S}\text{?}\) There must be scalars from \(\complexes\text{,}\) \(\alpha_1,\,\alpha_2,\,\alpha_3,\,\ldots,\,\alpha_t\) so that
\begin{align*} \vect{x}&=\lincombo{\alpha}{u}{t} \end{align*}
Then
\begin{align*} \alpha\vect{x}&=\alpha\left(\lincombo{\alpha}{u}{t}\right)\\ &=\alpha(\alpha_1\vect{u}_1)+\alpha(\alpha_2\vect{u}_2)+\alpha(\alpha_3\vect{u}_3)+\cdots+\alpha(\alpha_t\vect{u}_t)&& \text{Property DVA}\\ &=(\alpha\alpha_1)\vect{u}_1+(\alpha\alpha_2)\vect{u}_2+(\alpha\alpha_3)\vect{u}_3+\cdots+(\alpha\alpha_t)\vect{u}_t&& \text{Property SMA}\text{.} \end{align*}
Since each \(\alpha\alpha_i\) is again a scalar from \(\complexes\) we have expressed the scalar multiple \(\alpha\vect{x}\) as a linear combination of the vectors from \(S\text{,}\) and therefore by Definition SS we can say that \(\alpha\vect{x}\in\spn{S}\text{.}\)
With the three conditions of Theorem TSS met, we can say that \(\spn{S}\) is a subspace (and so is also a vector space, Definition VS). (See Exercise SS.T20, Exercise SS.T21, Exercise SS.T22.)

Example SSP. Span of a set of polynomials.

In Example SP4 we proved that
\begin{equation*} W=\setparts{p(x)}{p\in P_4,\ p(2)=0} \end{equation*}
is a subspace of \(P_4\text{,}\) the vector space of polynomials of degree at most 4. Since \(W\) is a vector space itself, let us construct a span within \(W\text{.}\) First let
\begin{equation*} S=\set{x^4-4x^3+5x^2-x-2,\,2x^4-3x^3-6x^2+6x+4} \end{equation*}
and verify that \(S\) is a subset of \(W\) by checking that each of these two polynomials has \(x=2\) as a root. Now, if we define \(U=\spn{S}\text{,}\) then Theorem SSS tells us that \(U\) is a subspace of \(W\text{.}\) So quite quickly we have built a chain of subspaces, \(U\) inside \(W\text{,}\) and \(W\) inside \(P_4\text{.}\)
Rather than dwell on how quickly we can build subspaces, let us try to gain a better understanding of just how the span construction creates subspaces, in the context of this example. We can quickly build representative elements of \(U\text{,}\)
\begin{equation*} 3(x^4-4x^3+5x^2-x-2)+5(2x^4-3x^3-6x^2+6x+4)=13x^4-27x^3-15x^2+27x+14 \end{equation*}
and
\begin{equation*} (-2)(x^4-4x^3+5x^2-x-2)+8(2x^4-3x^3-6x^2+6x+4)=14x^4-16x^3-58x^2+50x+36 \end{equation*}
and each of these polynomials must be in \(W\) since \(W\) is closed under addition and scalar multiplication. But you might check for yourself that both of these polynomials have \(x=2\) as a root.
I can tell you that \(\vect{y}=3x^4-7x^3-x^2+7x-2\) is not in \(U\text{,}\) but would you believe me? A first check shows that \(\vect{y}\) does have \(x=2\) as a root, but that only shows that \(\vect{y}\in W\text{.}\) What does \(\vect{y}\) have to do to gain membership in \(U=\spn{S}\text{?}\) It must be a linear combination of the vectors in \(S\text{,}\) \(x^4-4x^3+5x^2-x-2\) and \(2x^4-3x^3-6x^2+6x+4\text{.}\) So let us suppose that \(\vect{y}\) is such a linear combination,
\begin{align*} \vect{y} &=3x^4-7x^3-x^2+7x-2\\ &=\alpha_1(x^4-4x^3+5x^2-x-2)+\alpha_2(2x^4-3x^3-6x^2+6x+4)\\ &= (\alpha_1+2\alpha_2)x^4+ (-4\alpha_1-3\alpha_2)x^3+ (5\alpha_1-6\alpha_2)x^2\\ &\quad\quad+ (-\alpha_1+6\alpha_2)x+ (-2\alpha_1+4\alpha_2)\text{.} \end{align*}
Notice that operations above are done in accordance with the definition of the vector space of polynomials (Example VSP). Now, if we equate coefficients, which is the definition of equality for polynomials, then we obtain the system of five linear equations in two variables
\begin{align*} \alpha_1+2\alpha_2&=3\\ -4\alpha_1-3\alpha_2&=-7\\ 5\alpha_1-6\alpha_2&=-1\\ -\alpha_1+6\alpha_2&=7\\ -2\alpha_1+4\alpha_2&=-2\text{.} \end{align*}
Build an augmented matrix from the system and row-reduce,
\begin{equation*} \begin{bmatrix} 1 & 2 & 3\\ -4 & -3 & -7\\ 5 & -6 & -1\\ -1 & 6 & 7\\ -2 & 4 & -2 \end{bmatrix} \rref \begin{bmatrix} \leading{1} & 0 & 0\\ 0 & \leading{1} & 0\\ 0 & 0 & \leading{1}\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{bmatrix}\text{.} \end{equation*}
Since the final column of the row-reduced augmented matrix is a pivot column, Theorem RCLS tells us the system of equations is inconsistent. Therefore, there are no scalars, \(\alpha_1\) and \(\alpha_2\text{,}\) to establish \(\vect{y}\) as a linear combination of the elements of \(S\text{.}\) So \(\vect{y}\not\in U\text{.}\)
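As a Sage cross-check of this conclusion (a sketch, with our own matrix name), row-reduce the augmented matrix and look for a pivot in the final column:

M = matrix(QQ, [[1, 2, 3], [-4, -3, -7], [5, -6, -1], [-1, 6, 7], [-2, 4, -2]])
M.rref()   # the final column is a pivot column, confirming inconsistency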
Let us again examine membership in a span.

Example SM32. A subspace of \(M_{32}\).

The set of all \(3\times 2\) matrices forms a vector space when we use the operations of matrix addition (Definition MA) and scalar matrix multiplication (Definition MSM), as was shown in Example VSM. Consider the subset
\begin{equation*} S=\set{ \begin{bmatrix} 3 & 1 \\ 4 & 2 \\ 5 & -5 \end{bmatrix},\, \begin{bmatrix} 1 & 1 \\ 2 &-1 \\ 14 & -1 \end{bmatrix},\, \begin{bmatrix} 3 & -1 \\ -1&2 \\ -19 & -11 \end{bmatrix},\, \begin{bmatrix} 4 & 2 \\ 1 & -2 \\ 14 & -2 \end{bmatrix},\, \begin{bmatrix} 3 & 1 \\ -4 & 0 \\ -17 & 7 \end{bmatrix} } \end{equation*}
and define a new subset of vectors \(W\) in \(M_{32}\) using the span (Definition SS), \(W=\spn{S}\text{.}\) So by Theorem SSS we know that \(W\) is a subspace of \(M_{32}\text{.}\) While \(W\) is an infinite set, and this is a precise description, it would still be worthwhile to investigate whether or not \(W\) contains certain elements.
First, is
\begin{equation*} \vect{y}=\begin{bmatrix} 9 & 3 \\ 7 & 3 \\ 10 & -11 \end{bmatrix} \end{equation*}
in \(W\text{?}\) To answer this, we want to determine if \(\vect{y}\) can be written as a linear combination of the five matrices in \(S\text{.}\) Can we find scalars, \(\alpha_1,\,\alpha_2,\,\alpha_3,\,\alpha_4,\,\alpha_5\) so that
\begin{align*} &\begin{bmatrix} 9 & 3 \\ 7&3 \\ 10 & -11 \end{bmatrix}\\ &= \alpha_1 \begin{bmatrix} 3 & 1 \\ 4 & 2 \\ 5 & -5 \end{bmatrix} +\alpha_2 \begin{bmatrix} 1 & 1 \\ 2 & -1 \\ 14 & -1 \end{bmatrix} +\alpha_3 \begin{bmatrix} 3 & -1 \\ -1 & 2 \\ -19 & -11 \end{bmatrix} +\alpha_4 \begin{bmatrix} 4 & 2 \\ 1 & -2 \\ 14 & -2 \end{bmatrix} +\alpha_5 \begin{bmatrix} 3 & 1 \\ -4 & 0 \\ -17 & 7 \end{bmatrix}\\ &= \begin{bmatrix} 3\alpha_1 +\alpha_2 +3\alpha_3 +4\alpha_4 +3\alpha_5 & \alpha_1 +\alpha_2 -\alpha_3 +2\alpha_4 +\alpha_5\\ 4\alpha_1 +2\alpha_2 -\alpha_3 +\alpha_4 -4\alpha_5& 2\alpha_1 -\alpha_2 +2\alpha_3 -2\alpha_4 \\ 5\alpha_1 +14\alpha_2 -19\alpha_3 +14\alpha_4 -17\alpha_5& -5\alpha_1 -\alpha_2 -11\alpha_3 -2\alpha_4 +7\alpha_5 \end{bmatrix}\text{.} \end{align*}
Using our definition of matrix equality (Definition ME) we can translate this statement into six equations in the five unknowns,
\begin{align*} 3\alpha_1 +\alpha_2 +3\alpha_3 +4\alpha_4 +3\alpha_5& =9\\ \alpha_1 +\alpha_2 -\alpha_3 +2\alpha_4 +\alpha_5& =3\\ 4\alpha_1 +2\alpha_2 -\alpha_3 +\alpha_4 -4\alpha_5& =7\\ 2\alpha_1 -\alpha_2 +2\alpha_3 -2\alpha_4 & =3\\ 5\alpha_1 +14\alpha_2 -19\alpha_3 +14\alpha_4 -17\alpha_5& =10\\ -5\alpha_1 -\alpha_2 -11\alpha_3 -2\alpha_4 +7\alpha_5&=-11\text{.} \end{align*}
This is a linear system of equations, which we can represent with an augmented matrix and row-reduce in search of solutions. The matrix that is row-equivalent to the augmented matrix is
\begin{equation*} \begin{bmatrix} \leading{1} & 0 & 0 & 0 & \frac{5}{8} & 2\\ 0 & \leading{1} & 0 & 0 & \frac{-19}{4} & -1\\ 0 & 0 & \leading{1} & 0 & \frac{-7}{8} & 0\\ 0 & 0 & 0 & \leading{1} & \frac{17}{8} & 1\\ 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}\text{.} \end{equation*}
So we recognize that the system is consistent since the final column is not a pivot column (Theorem RCLS), and compute \(n-r=5-4=1\) free variables (Theorem FVCS). While there are infinitely many solutions, we are only in pursuit of a single solution, so let us choose the free variable \(\alpha_5=0\) for simplicity’s sake. Then we easily see that \(\alpha_1=2\text{,}\) \(\alpha_2=-1\text{,}\) \(\alpha_3=0\text{,}\) \(\alpha_4=1\text{.}\) So the scalars \(\alpha_1=2\text{,}\) \(\alpha_2=-1\text{,}\) \(\alpha_3=0\text{,}\) \(\alpha_4=1\text{,}\) \(\alpha_5=0\) will provide a linear combination of the elements of \(S\) that equals \(\vect{y}\text{,}\) as we can verify by checking,
\begin{align*} \begin{bmatrix} 9 & 3 \\ 7 & 3 \\ 10 & -11 \end{bmatrix} = 2 \begin{bmatrix} 3 & 1 \\ 4 & 2 \\ 5 & -5 \end{bmatrix} +(-1) \begin{bmatrix} 1 & 1 \\ 2 & -1 \\ 14 & -1 \end{bmatrix} +(1) \begin{bmatrix} 4 & 2 \\ 1 & -2 \\ 14 & -2 \end{bmatrix} \end{align*}
So with one particular linear combination in hand, we are convinced that \(\vect{y}\) deserves to be a member of \(W=\spn{S}\text{.}\)
Second, is
\begin{equation*} \vect{x}=\begin{bmatrix} 2 & 1 \\ 3 & 1 \\ 4 & -2 \end{bmatrix} \end{equation*}
in \(W\text{?}\) To answer this, we want to determine if \(\vect{x}\) can be written as a linear combination of the five matrices in \(S\text{.}\) Can we find scalars, \(\alpha_1,\,\alpha_2,\,\alpha_3,\,\alpha_4,\,\alpha_5\) so that
\begin{align*} &\begin{bmatrix} 2 & 1 \\ 3 & 1 \\ 4 & -2 \end{bmatrix}\\ &= \alpha_1 \begin{bmatrix} 3 & 1 \\ 4 & 2 \\ 5 & -5 \end{bmatrix} +\alpha_2 \begin{bmatrix} 1 & 1 \\ 2 & -1 \\ 14 & -1 \end{bmatrix} +\alpha_3 \begin{bmatrix} 3 & -1 \\ -1 & 2 \\ -19 & -11 \end{bmatrix} +\alpha_4 \begin{bmatrix} 4 & 2 \\ 1 & -2 \\ 14 & -2 \end{bmatrix} +\alpha_5 \begin{bmatrix} 3 & 1 \\ -4 & 0 \\ -17 & 7 \end{bmatrix}\\ &= \begin{bmatrix} 3\alpha_1 +\alpha_2 +3\alpha_3 +4\alpha_4 +3\alpha_5 & \alpha_1 +\alpha_2 -\alpha_3 +2\alpha_4 +\alpha_5\\ 4\alpha_1 +2\alpha_2 -\alpha_3 +\alpha_4 -4\alpha_5& 2\alpha_1 -\alpha_2 +2\alpha_3 -2\alpha_4 \\ 5\alpha_1 +14\alpha_2 -19\alpha_3 +14\alpha_4 -17\alpha_5& -5\alpha_1 -\alpha_2 -11\alpha_3 -2\alpha_4 +7\alpha_5 \end{bmatrix}\text{.} \end{align*}
Using our definition of matrix equality (Definition ME) we can translate this statement into six equations in the five unknowns,
\begin{align*} 3\alpha_1 +\alpha_2 +3\alpha_3 +4\alpha_4 +3\alpha_5& =2\\ \alpha_1 +\alpha_2 -\alpha_3 +2\alpha_4 +\alpha_5& =1\\ 4\alpha_1 +2\alpha_2 -\alpha_3 +\alpha_4 -4\alpha_5& =3\\ 2\alpha_1 -\alpha_2 +2\alpha_3 -2\alpha_4 & =1\\ 5\alpha_1 +14\alpha_2 -19\alpha_3 +14\alpha_4 -17\alpha_5& =4\\ -5\alpha_1 -\alpha_2 -11\alpha_3 -2\alpha_4 +7\alpha_5&=-2\text{.} \end{align*}
This is a linear system of equations, which we can represent with an augmented matrix and row-reduce in search of solutions. The matrix that is row-equivalent to the augmented matrix is
\begin{equation*} \begin{bmatrix} \leading{1} & 0 & 0 & 0 & \frac{5}{8} & 0\\ 0 & \leading{1} & 0 & 0 & -\frac{19}{4} & 0\\ 0 & 0 & \leading{1} & 0 & -\frac{7}{8} & 0\\ 0 & 0 & 0 & \leading{1} & \frac{17}{8} & 0\\ 0 & 0 & 0 & 0 & 0 & \leading{1}\\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}\text{.} \end{equation*}
Since the last column is a pivot column, Theorem RCLS tells us that the system is inconsistent. Therefore, there are no values for the scalars that will place \(\vect{x}\) in \(W\text{,}\) and so we conclude that \(\vect{x}\not\in W\text{.}\)
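Both membership questions use the same coefficient matrix, so in Sage (a sketch; the names are ours) we can pose them with solve_right, which returns a particular solution when one exists and raises ValueError otherwise:

A = matrix(QQ, [[3, 1, 3, 4, 3], [1, 1, -1, 2, 1], [4, 2, -1, 1, -4],
                [2, -1, 2, -2, 0], [5, 14, -19, 14, -17], [-5, -1, -11, -2, 7]])
A.solve_right(vector(QQ, [9, 3, 7, 3, 10, -11]))   # a solution exists, so y is in W
A.solve_right(vector(QQ, [2, 1, 3, 1, 4, -2]))     # raises ValueError, so x is not in W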
Notice how Example SSP and Example SM32 contained questions about membership in a span, but these questions quickly became questions about solutions to a system of linear equations. This will be a common theme going forward.

Subsection SC Subspace Constructions

Several of the subsets of vector spaces that we worked with in Chapter M are also subspaces; they are closed under vector addition and scalar multiplication in \(\complex{m}\text{.}\)

Theorem CSMS. Column Space of a Matrix is a Subspace.

Suppose that \(A\) is an \(m\times n\) matrix. Then the column space of \(A\text{,}\) \(\csp{A}\text{,}\) is a subspace of \(\complex{m}\text{.}\)

Proof.

Definition CSM shows us that \(\csp{A}\) is a subset of \(\complex{m}\text{,}\) and that it is defined as the span of a set of vectors from \(\complex{m}\) (the columns of the matrix). Since \(\csp{A}\) is a span, Theorem SSS says it is a subspace.
That was easy! Notice that we could have used this same approach to prove that the null space is a subspace, since Theorem SSNS provided a description of the null space of a matrix as the span of a set of vectors. However, I much prefer the current proof of Theorem NSMS. Speaking of easy, here is a very easy theorem that exposes another of our constructions as creating subspaces.

Theorem RSMS. Row Space of a Matrix is a Subspace.

Suppose that \(A\) is an \(m\times n\) matrix. Then the row space of \(A\text{,}\) \(\rsp{A}\text{,}\) is a subspace of \(\complex{n}\text{.}\)

Proof.

Definition RSM says \(\rsp{A}=\csp{\transpose{A}}\text{,}\) so the row space of a matrix is a column space, and every column space is a subspace by Theorem CSMS. That’s enough.
One more.

Theorem LNSMS. Left Null Space of a Matrix is a Subspace.

Suppose that \(A\) is an \(m\times n\) matrix. Then the left null space of \(A\text{,}\) \(\lns{A}\text{,}\) is a subspace of \(\complex{m}\text{.}\)

Proof.

Definition LNS says \(\lns{A}=\nsp{\transpose{A}}\text{,}\) so the left null space is a null space, and every null space is a subspace by Theorem NSMS. Done.
So the span of a set of vectors, and the null space, column space, row space and left null space of a matrix are all subspaces, and hence are all vector spaces, meaning they have all the properties detailed in Definition VS and in the basic theorems presented in Section VS. We have worked with these objects as just sets in Chapter V and Chapter M, but now we understand that they have much more structure. In particular, being closed under vector addition and scalar multiplication means a subspace is also closed under linear combinations.
We can combine two arbitrary subspaces, in two different ways, to make new subspaces. We first look at the intersection (Definition SI) of two subspaces.

Theorem SIIS. Subspace Intersection is a Subspace.

Suppose that \(U\) and \(V\) are subspaces of the vector space \(W\text{.}\) Then \(U\cap V\) is a subspace of \(W\text{.}\)

Proof.

We appeal to the three-part test of Theorem TSS. First, since \(U\) and \(V\) are subspaces of \(W\text{,}\) they each contain the zero vector of \(W\text{,}\) and so by Definition SI, \(U\cap V\) also contains the zero vector of \(W\text{.}\)
Second, choose \(\vect{x},\,\vect{y}\in U\cap V\text{.}\) Since \(\vect{x},\,\vect{y}\in U\text{,}\) Property AC says that \(\vect{x}+\vect{y}\in U\text{.}\) Similarly, since \(\vect{x},\,\vect{y}\in V\text{,}\) Property AC says that \(\vect{x}+\vect{y}\in V\text{.}\) And therefore, by Definition SI, \(\vect{x}+\vect{y}\in U\cap V\text{,}\) providing additive closure.
Third, choose \(\alpha\in\complexes\text{,}\) \(\vect{x}\in U\cap V\text{.}\) Since \(\vect{x}\in U\text{,}\) Property SC says that \(\alpha\vect{x}\in U\text{.}\) Similarly, since \(\vect{x}\in V\text{,}\) Property SC says that \(\alpha\vect{x}\in V\text{.}\) And therefore, by Definition SI, \(\alpha\vect{x}\in U\cap V\text{,}\) providing scalar closure.
Take note of the generality of Theorem SIIS. We have made no assumptions about the specific operations used for the vector space \(W\text{,}\) nor have we assumed any specifics about the subspaces \(U\) and \(V\text{.}\) So the result applies to a wide variety of general situations.
While a set intersection results in a smaller set, a set union (Definition SU) results in a larger set. Unfortunately, the union of two subspaces is not always a subspace (see Exercise S.M40). Instead, we define a somewhat similar construction.

Definition SOS. Sum of Subspaces.

Suppose that \(U\) and \(V\) are subspaces of the vector space \(W\text{.}\) Then
\begin{equation*} U+V=\setparts{\vect{u}+\vect{v}}{\vect{u}\in U,\,\vect{v}\in V} \end{equation*}
is the sum of \(U\) and \(V\text{.}\)
Notice that the “+” operation has been given yet another meaning, which might only be clear from its context. By choosing \(\vect{v}\) in the definition to be the zero vector, we can see that \(U\subseteq U+V\text{.}\) Similarly, \(V\subseteq U+V\text{.}\) So \(U+V\) contains \(U\cup V\text{.}\) Furthermore, the sum of two subspaces is again a subspace.

Proof.

We appeal to the three-part test of Theorem TSS. First, since \(U\) and \(V\) are subspaces of \(W\text{,}\) they each contain the zero vector of \(W\text{,}\) and so
\begin{equation*} \zerovector = \zerovector + \zerovector \in U+V\text{.} \end{equation*}
Second, choose \(\vect{x},\,\vect{y}\in U+V\text{.}\) By Definition SOS, there are vectors \(\vect{u}_1,\,\vect{u}_2\in U\) and \(\vect{v}_1,\,\vect{v}_2\in V\) so that \(\vect{x} = \vect{u}_1 + \vect{v}_1\) and \(\vect{y} = \vect{u}_2 + \vect{v}_2\text{.}\) Then, applying Property AA we have
\begin{align*} \vect{x} + \vect{y} &= \left(\vect{u}_1 + \vect{v}_1\right) + \left(\vect{u}_2 + \vect{v}_2\right)\\ &= \left(\vect{u}_1 + \vect{u}_2\right) + \left(\vect{v}_1 + \vect{v}_2\right)\text{.} \end{align*}
Since \(U\) and \(V\) are both closed under vector addition (Property AC) this final expression is an element of \(U+V\) according to Definition SOS, and so \(\vect{x}+\vect{y}\in U+V\text{.}\)
Third, choose \(\alpha\in\complexes\text{,}\) \(\vect{x}\in U+V\text{.}\) By Definition SOS, there are vectors \(\vect{u}\in U\) and \(\vect{v}\in V\) so that \(\vect{x} = \vect{u} + \vect{v}\text{.}\) Then
\begin{equation*} \alpha\vect{x} = \alpha\left(\vect{u} + \vect{v}\right) = \alpha\vect{u} + \alpha\vect{v}\text{.} \end{equation*}
Since \(U\) and \(V\) are both closed under scalar multiplication (Property SC) this final expression is an element of \(U+V\) according to Definition SOS, and so \(\alpha\vect{x}\in U+V\text{.}\)
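Sage implements both constructions for subspaces of column vectors, so we can experiment (a sketch; the two subspaces are arbitrary choices of ours):

V = QQ^3
U = V.span([vector(QQ, [1, 0, 0])])
W = V.span([vector(QQ, [1, 1, 0])])
U.intersection(W)   # just the zero subspace here, dimension 0
U + W               # the sum, a subspace of dimension 2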

Sage VS. Vector Spaces.

Our conception of a vector space has become much broader with the introduction of abstract vector spaces — those whose elements (“vectors”) are not just column vectors, but polynomials, matrices, sequences, functions, etc. Sage is able to perform computations using many different abstract and advanced ideas (such as derivatives of functions), but in the case of linear algebra, Sage will primarily stay with vector spaces of column vectors. Chapter R, and specifically, Section VR and Sage SUTH2 will show us that this is not as much of a limitation as it might first appear.
While limited to vector spaces of column vectors, Sage has an impressive range of capabilities for vector spaces, which we will detail throughout this chapter. You may have already noticed that many questions about abstract vector spaces can be translated into questions about column vectors. This theme will continue, and Sage commands we already know will often be helpful in answering these questions.
Theorem SSS, Theorem NSMS, Theorem CSMS, Theorem RSMS and Theorem LNSMS each tells us that a certain set is a subspace. The first is the abstract version of creating a subspace via the span of a set of vectors, but still applies to column vectors as a special case. The remaining four all begin with a matrix and create a subspace of column vectors. We have created these spaces many times already, but notice now that the description Sage outputs explicitly says they are vector spaces, and that there are still some parts of the output that we need to explain. Here are two reminders, first a span, and then a vector space created from a matrix.
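The interactive cells do not survive in this copy, but a minimal sketch of those two reminders (with arbitrarily chosen entries of our own) might look like:

V = QQ^4
W = V.span([vector(QQ, [1, 2, 0, 1]), vector(QQ, [0, 1, -1, 2])])
W                  # described by Sage as a vector space of degree 4, dimension 2

A = matrix(QQ, [[1, -1, 2, 1], [2, 1, 1, 0]])
A.right_kernel()   # the null space, a vector space created from a matrix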

Reading Questions S Reading Questions

1. Subspace test.

Summarize the three conditions that allow us to quickly test if a set is a subspace.

2. Apply the subspace test.

Consider the set of vectors
\begin{equation*} W=\setparts{\colvector{a\\b\\c}}{3a-2b+c=5}\text{.} \end{equation*}
Is the set \(W\) a subspace of \(\complex{3}\text{?}\) Explain your answer.

3. Name five subspaces.

Name five general constructions of sets of column vectors (subsets of \(\complex{m}\)) that we now know as subspaces.

Exercises S Exercises

C15.

Working within the vector space \(\complex{3}\text{,}\) determine if \(\vect{b} = \colvector{4\\3\\1}\) is in the subspace \(W\text{,}\)
\begin{equation*} W = \spn{\set{ \colvector{3\\2\\3}, \colvector{1\\0\\3}, \colvector{1\\1\\0}, \colvector{2\\1\\3} }}\text{.} \end{equation*}
Solution.
For \(\vect{b}\) to be an element of \(W=\spn{S}\) there must be a linear combination of the vectors in \(S\) that equals \(\vect{b}\) (Definition SSCV). The existence of such scalars is equivalent to the linear system \(\linearsystem{A}{\vect{b}}\) being consistent, where \(A\) is the matrix whose columns are the vectors from \(S\) (Theorem SLSLC).
\begin{align*} \begin{bmatrix} 3 & 1 & 1 & 2 & 4\\ 2 & 0 & 1 & 1 & 3\\ 3 & 3 & 0 & 3 & 1 \end{bmatrix} &\rref \begin{bmatrix} \leading{1} & 0 & 1/2 & 1/2 & 0\\ 0 & \leading{1} & -1/2 & 1/2 & 0\\ 0 & 0 & 0 & 0 & \leading{1} \end{bmatrix}\text{.} \end{align*}
So by Theorem RCLS the system is inconsistent, which indicates that \(\vect{b}\) is not an element of the subspace \(W\text{.}\)

C16.

Working within the vector space \(\complex{4}\text{,}\) determine if \(\vect{b} = \colvector{1\\1\\0\\1}\) is in the subspace \(W\text{,}\)
\begin{equation*} W =\spn{\set{ \colvector{1\\2\\-1\\1}, \colvector{1\\0\\3\\1}, \colvector{2\\1\\1\\2} }}\text{.} \end{equation*}
Solution.
For \(\vect{b}\) to be an element of \(W=\spn{S}\) there must be a linear combination of the vectors in \(S\) that equals \(\vect{b}\) (Definition SSCV). The existence of such scalars is equivalent to the linear system \(\linearsystem{A}{\vect{b}}\) being consistent, where \(A\) is the matrix whose columns are the vectors from \(S\) (Theorem SLSLC).
\begin{align*} \begin{bmatrix} 1 & 1 & 2 & 1\\ 2 & 0 & 1 & 1\\ -1 & 3 & 1 & 0\\ 1 & 1 & 2 & 1 \end{bmatrix} &\rref \begin{bmatrix} \leading{1} & 0 & 0 & 1/3\\ 0 & \leading{1} & 0 & 0 \\ 0 & 0 & \leading{1} & 1/3 \\ 0 & 0 & 0& 0 \end{bmatrix}\text{.} \end{align*}
So by Theorem RCLS the system is consistent, which indicates that \(\vect{b}\) is in the subspace \(W\text{.}\)

C17.

Working within the vector space \(\complex{4}\text{,}\) determine if \(\vect{b} = \colvector{2\\1\\2\\1}\) is in the subspace \(W\text{,}\)
\begin{equation*} W = \spn{\set{ \colvector{1\\2\\0\\2}, \colvector{1\\0\\3\\1}, \colvector{0\\1\\0\\2}, \colvector{1\\1\\2\\0} }}\text{.} \end{equation*}
Solution.
For \(\vect{b}\) to be an element of \(W=\spn{S}\) there must be a linear combination of the vectors in \(S\) that equals \(\vect{b}\) (Definition SSCV). The existence of such scalars is equivalent to the linear system \(\linearsystem{A}{\vect{b}}\) being consistent, where \(A\) is the matrix whose columns are the vectors from \(S\) (Theorem SLSLC).
\begin{align*} \begin{bmatrix} 1 & 1 & 0 & 1 & 2\\ 2 & 0 & 1 & 1 & 1\\ 0 & 3 & 0 & 2 & 2\\ 2 & 1 & 2 & 0 & 1 \end{bmatrix} &\rref \begin{bmatrix} \leading{1} & 0 & 0 & 0 & 3/2\\ 0 & \leading{1} & 0 & 0 & 1\\ 0 & 0 & \leading{1} & 0 & -3/2 \\ 0 & 0 & 0& \leading{1} & -1/2 \end{bmatrix}\text{.} \end{align*}
So by Theorem RCLS the system is consistent, which indicates that \(\vect{b}\) is in the subspace \(W\text{.}\)

C20.

Working within the vector space \(P_3\) of polynomials of degree 3 or less, determine if \(p(x)=x^3+6x+4\) is in the subspace \(W\) below.
\begin{equation*} W=\spn{\set{x^3+x^2+x,\,x^3+2x-6,\,x^2-5}} \end{equation*}
Solution.
The question is if \(p\) can be written as a linear combination of the vectors in \(W\text{.}\) To check this, we set \(p\) equal to a linear combination and massage with the definitions of vector addition and scalar multiplication that we get with \(P_3\) (Example VSP).
\begin{align*} p(x)&=a_1(x^3+x^2+x)+a_2(x^3+2x-6)+a_3(x^2-5)\\ x^3+6x+4&=(a_1+a_2)x^3+(a_1+a_3)x^2+(a_1+2a_2)x+(-6a_2-5a_3) \end{align*}
Equating coefficients of equal powers of \(x\text{,}\) we get the system of equations,
\begin{align*} a_1+a_2&=1\\ a_1+a_3&=0\\ a_1+2a_2&=6\\ -6a_2-5a_3&=4\text{.} \end{align*}
The augmented matrix of this system of equations row-reduces to
\begin{equation*} \begin{bmatrix} \leading{1} & 0 & 0 & 0\\ 0 & \leading{1} & 0 & 0\\ 0 & 0 & \leading{1} & 0\\ 0 & 0 & 0 & \leading{1} \end{bmatrix}\text{.} \end{equation*}
Since the last column is a pivot column, Theorem RCLS implies that the system is inconsistent. So there is no way for \(p\) to gain membership in \(W\text{,}\) so \(p\not\in W\text{.}\)

C21.

Consider the subspace
\begin{equation*} W=\spn{\set{ \begin{bmatrix} 2 & 1\\3 & -1 \end{bmatrix} ,\, \begin{bmatrix} 4 & 0\\2 & 3 \end{bmatrix} ,\, \begin{bmatrix} -3 & 1\\2 & 1 \end{bmatrix} }} \end{equation*}
of the vector space of \(2\times 2\) matrices, \(M_{22}\text{.}\) Is
\begin{equation*} C=\begin{bmatrix} -3 & 3\\6 & -4 \end{bmatrix} \end{equation*}
an element of \(W\text{?}\)
Solution.
In order to belong to \(W\text{,}\) we must be able to express \(C\) as a linear combination of the elements in the spanning set of \(W\text{.}\) So we begin with such an expression, using the unknowns \(a,\,b,\,c\) for the scalars in the linear combination.
\begin{equation*} C= \begin{bmatrix} -3 & 3\\6 & -4 \end{bmatrix} = a \begin{bmatrix} 2 & 1\\3 & -1 \end{bmatrix} +b \begin{bmatrix} 4 & 0\\2 & 3 \end{bmatrix} +c \begin{bmatrix} -3 & 1\\2 & 1 \end{bmatrix} \end{equation*}
Massaging the right-hand side, according to the definition of the vector space operations in \(M_{22}\) (Example VSM), we find the matrix equality,
\begin{equation*} \begin{bmatrix} -3 & 3\\6 & -4 \end{bmatrix} = \begin{bmatrix} 2a+4b-3c & a+c\\ 3a+2b+2c & -a+3b+c \end{bmatrix} \end{equation*}
Matrix equality allows us to form a system of four equations in three variables, whose augmented matrix row-reduces as follows,
\begin{equation*} \begin{bmatrix} 2 & 4 & -3 & -3 \\ 1 & 0 & 1 & 3 \\ 3 & 2 & 2 & 6 \\ -1 & 3 & 1 & -4 \end{bmatrix} \rref \begin{bmatrix} \leading{1} & 0 & 0 & 2 \\ 0 & \leading{1} & 0 & -1 \\ 0 & 0 & \leading{1} & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}\text{.} \end{equation*}
Since this system of equations is consistent (Theorem RCLS), a solution will provide values for \(a,\,b\) and \(c\) that allow us to recognize \(C\) as an element of \(W\text{.}\)

C26.

Show that the set \(Y=\setparts{\colvector{x_1\\x_2}}{x_1\in{\mathbb Z},\,x_2\in{\mathbb Z}}\) from Example NSC2S has Property AC.

M20.

In \(\complex{3}\text{,}\) the vector space of column vectors of size 3, prove that the set \(Z\) is a subspace.
\begin{equation*} Z=\setparts{\colvector{x_1\\x_2\\x_3}}{4x_1-x_2+5x_3=0} \end{equation*}
Solution.
The membership criterion for \(Z\) is a single linear equation, which constitutes a homogeneous system of equations. As such, we can recognize \(Z\) as the set of solutions to this system, and therefore \(Z\) is a null space. Specifically, \(Z=\nsp{\begin{bmatrix}4&-1&5\end{bmatrix}}\text{.}\) Every null space is a subspace by Theorem NSMS.
A less direct solution appeals to Theorem TSS.
First, we want to be certain \(Z\) is nonempty. The zero vector of \(\complex{3}\text{,}\) \(\zerovector=\colvector{0\\0\\0}\text{,}\) is a good candidate, since if it fails to be in \(Z\text{,}\) we will know that \(Z\) is not a vector space. Check that
\begin{equation*} 4(0)-(0)+5(0)=0 \end{equation*}
so that \(\zerovector\in Z\text{.}\)
Suppose \(\vect{x}=\colvector{x_1\\x_2\\x_3}\) and \(\vect{y}=\colvector{y_1\\y_2\\y_3}\) are vectors from \(Z\text{.}\) Then we know that these vectors cannot be totally arbitrary; they must have gained membership in \(Z\) by virtue of meeting the membership test. For example, we know that \(\vect{x}\) must satisfy \(4x_1-x_2+5x_3=0\) while \(\vect{y}\) must satisfy \(4y_1-y_2+5y_3=0\text{.}\) Our second criterion asks the question, is \(\vect{x}+\vect{y}\in Z\text{?}\) Notice first that
\begin{equation*} \vect{x}+\vect{y}= \colvector{x_1\\x_2\\x_3}+\colvector{y_1\\y_2\\y_3}= \colvector{x_1+y_1\\x_2+y_2\\x_3+y_3} \end{equation*}
and we can test this vector for membership in \(Z\) as follows,
\begin{align*} &\ 4(x_1+y_1)-1(x_2+y_2)+5(x_3+y_3)\\ &=4x_1+4y_1-x_2-y_2+5x_3+5y_3\\ &=(4x_1-x_2+5x_3)+(4y_1-y_2+5y_3)\\ &=0 + 0&&\vect{x}\in Z,\ \vect{y}\in Z\\ &=0 \end{align*}
and by this computation we see that \(\vect{x}+\vect{y}\in Z\text{.}\)
If \(\alpha\in\complexes\) is a scalar and \(\vect{x}\in Z\text{,}\) is it always true that \(\alpha\vect{x}\in Z\text{?}\) To check our third criteria, we examine
\begin{equation*} \alpha\vect{x}=\alpha\colvector{x_1\\x_2\\x_3}=\colvector{\alpha x_1\\\alpha x_2\\\alpha x_3} \end{equation*}
and we can test this vector for membership in \(Z\) with
\begin{align*} &4(\alpha x_1)-(\alpha x_2)+5(\alpha x_3)\\ &\quad\quad=\alpha(4x_1-x_2+5x_3)\\ &\quad\quad=\alpha 0&&\vect{x}\in Z\\ &\quad\quad=0 \end{align*}
and we see that indeed \(\alpha\vect{x}\in Z\text{.}\) With the three conditions of Theorem TSS fulfilled, we can conclude that \(Z\) is a subspace of \(\complex{3}\text{.}\)

M40.

This section claims that the union of two subspaces is not always a subspace. Construct an example of two subspaces where you can prove that the set union is not a subspace. Once you have such an example, see if you can create another that is as “small” and as “simple” as possible.
Solution.
Create two subspaces of \(\complex{2}\text{,}\) each as the span of a single nonzero vector. Simply ensure that the two vectors chosen are not scalar multiples of each other. Then the sum of these two vectors is not contained in the union of the two subspaces (this needs an argument), and so the set union fails Property AC and is not a subspace.
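One minimal instance: take \(U=\spn{\set{\colvector{1\\0}}}\) and \(V=\spn{\set{\colvector{0\\1}}}\text{.}\) Every vector of \(U\) has a zero second entry and every vector of \(V\) has a zero first entry, so \(\colvector{1\\0}+\colvector{0\\1}=\colvector{1\\1}\) lies in neither subspace, and hence not in \(U\cup V\text{.}\)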

T20.

A square matrix \(A\) of size \(n\) is upper triangular if \(\matrixentry{A}{ij}=0\) whenever \(i\gt j\) (see Definition UTM). Let \(UT_n\) be the set of all upper triangular matrices of size \(n\text{.}\) Prove that \(UT_n\) is a subspace of the vector space of all square matrices of size \(n\text{,}\) \(M_{nn}\text{.}\)
Solution.
First, the zero vector of \(M_{nn}\) is the zero matrix, \(\zeromatrix\text{,}\) whose entries are all zero (Definition ZM). This matrix then meets the condition that \(\matrixentry{\zeromatrix}{ij}=0\) for \(i\gt j\) and so is an element of \(UT_n\text{.}\)
Suppose \(A,B\in UT_n\text{.}\) Is \(A+B\in UT_n\text{?}\) We examine the entries of \(A+B\) “below” the diagonal. That is, in the following, assume that \(i\gt j\text{.}\)
\begin{align*} \matrixentry{A+B}{ij} &=\matrixentry{A}{ij}+\matrixentry{B}{ij}&& \text{Definition MA}\\ &=0 + 0&& A,B\in UT_n\\ &=0 \end{align*}
which qualifies \(A+B\) for membership in \(UT_n\text{.}\)
Suppose \(\alpha\in\complexes\) and \(A\in UT_n\text{.}\) Is \(\alpha A\in UT_n\text{?}\) We examine the entries of \(\alpha A\) “below” the diagonal. That is, in the following, assume that \(i\gt j\text{.}\)
\begin{align*} \matrixentry{\alpha A}{ij} &=\alpha\matrixentry{A}{ij}&& \text{Definition MSM}\\ &=\alpha 0&& A\in UT_n\\ &=0 \end{align*}
which qualifies \(\alpha A\) for membership in \(UT_n\text{.}\)
Having fulfilled the three conditions of Theorem TSS we see that \(UT_n\) is a subspace of \(M_{nn}\text{.}\)

T30.

Let \(P\) be the set of all polynomials, of any degree. The set \(P\) is a vector space. Let \(E\) be the subset of \(P\) consisting of all polynomials with only terms of even degree. Prove or disprove: the set \(E\) is a subspace of \(P\text{.}\)
Solution.
Proof: Let \(E\) be the subset of \(P\) consisting of all polynomials with only terms of even degree. The set \(E\) is nonempty, since the zero polynomial \(z(x) = 0\) has no terms of odd degree and so belongs to \(E\text{.}\) Let \(p(x)\) and \(q(x)\) be arbitrary elements of \(E\text{.}\) Then there exist nonnegative integers \(m\) and \(n\) so that
\begin{align*} p(x) &= a_0 + a_2 x^2 + a_4 x^4 + \cdots + a_{2n}x^{2n}\\ q(x) &= b_0 + b_2 x^2 + b_4 x^4 + \cdots + b_{2m}x^{2m} \end{align*}
for some constants \(a_0, a_2, \ldots, a_{2n}\) and \(b_0, b_2, \ldots, b_{2m}\text{.}\) Without loss of generality, we can assume that \(m \le n\text{.}\) Thus, we have
\begin{align*} p(x) + q(x) &= (a_0 + b_0) + (a_2 + b_2)x^2 + \cdots + (a_{2m} + b_{2m})x^{2m} + a_{2m +2} x^{2m+2} + \cdots + a_{2n} x^{2n} \end{align*}
so \(p(x) + q(x)\) has all even terms, and thus \(p(x) + q(x) \in E\text{.}\) Similarly, let \(\alpha\) be a scalar. Then
\begin{align*} \alpha p(x) &= \alpha (a_0 + a_2 x^2 + a_4 x^4 + \cdots + a_{2n}x^{2n}) \\ &= \alpha a_0 + (\alpha a_2) x^2 + (\alpha a_4) x^4 + \cdots + (\alpha a_{2n})x^{2n} \end{align*}
so that \(\alpha p(x)\) also has only terms of even degree, and \(\alpha p(x) \in E\text{.}\) Thus, \(E\) is a subspace of \(P\text{.}\)

T31.

Let \(P\) be the set of all polynomials, of any degree. The set \(P\) is a vector space. Let \(F\) be the subset of \(P\) consisting of all polynomials of odd degree. Prove or disprove: the set \(F\) is a subspace of \(P\text{.}\)
Solution.
This conjecture is false. The polynomials \(p(x) = x^3 + x^2\) and \(q(x) = -x^3 + x^2\) each have odd degree \(3\text{,}\) so both are elements of \(F\text{,}\) yet their sum \(p(x) + q(x) = 2x^2\) has even degree. So \(F\) fails the additive closure requirement of Property AC and is not a subspace.
There is another very technical reason. Constant polynomials have degree zero, with one exception. The zero polynomial either does not have a degree, or is defined to have degree \(-\infty\text{.}\) The latter definition will work best with our definition of \(P\text{,}\) the vector space of polynomials of any degree (or for \(P_n\)). One justification of this definition is that we want various properties of arithmetic with polynomials to hold when the zero polynomial is involved. For example, the degree of the sum of two polynomials should be less than, or equal to, the maximum of the degrees of the polynomials. So the zero polynomial does not have odd degree, and hence is not an element of \(F\text{.}\)