Some linear transformations possess one, or both, of two key properties, which go by the names injective and surjective. We will see that they are closely related to ideas like linear independence and spanning, and subspaces like the null space and the column space. In this section we will define an injective linear transformation and analyze the resulting consequences. The next section will do the same for the surjective property. In the final section of this chapter we will see what happens when we have the two properties simultaneously.
Subsection ILT Injective Linear Transformations
As usual, we lead with a definition.
Definition ILT. Injective Linear Transformation.
Suppose \(\ltdefn{T}{U}{V}\) is a linear transformation. Then \(T\) is injective if whenever \(\lteval{T}{\vect{x}}=\lteval{T}{\vect{y}}\text{,}\) then \(\vect{x}=\vect{y}\text{.}\)
Given an arbitrary function, it is possible for two different inputs to yield the same output (think about the function \(f(x)=x^2\) and the inputs \(x=3\) and \(x=-3\)). For an injective function, this never happens. If we have equal outputs (\(\lteval{T}{\vect{x}}=\lteval{T}{\vect{y}}\)) then we must have achieved those equal outputs by employing equal inputs (\(\vect{x}=\vect{y}\)). Some authors prefer the term one-to-one where we use injective, and we will sometimes refer to an injective linear transformation as an injection.
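As a quick Sage check of this idea (using the squaring function just mentioned, which is of course not a linear transformation, and names introduced only for illustration):
    f = lambda x: x^2
    f(3) == f(-3) and 3 != -3    # True: equal outputs arising from unequal inputs, so f is not injective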
Subsection EILT Examples of Injective Linear Transformations
It is perhaps most instructive to examine a linear transformation that is not injective first.
So we have two vectors from the domain, \(\vect{x}\neq\vect{y}\text{,}\) yet \(\lteval{T}{\vect{x}}=\lteval{T}{\vect{y}}\text{,}\) in violation of Definition ILT. This is another example where you should not concern yourself with how \(\vect{x}\) and \(\vect{y}\) were selected, as this will be explained shortly. However, do understand why these two vectors provide enough evidence to conclude that \(T\) is not injective.
Here is a cartoon of a non-injective linear transformation. Notice that the central feature of this cartoon is that \(\lteval{T}{\vect{u}}=\vect{v}=\lteval{T}{\vect{w}}\text{.}\) Even though this happens again with some unnamed vectors, it only takes one occurrence to destroy the possibility of injectivity. Note also that the two vectors displayed in the bottom of \(V\) have no bearing, either way, on the injectivity of \(T\text{.}\)
To show that a linear transformation is not injective, it is enough to find a single pair of inputs that get sent to the identical output, as in Example NIAQ. However, to show that a linear transformation is injective we must establish that this coincidence of outputs never occurs. Here is an example that shows how to establish this.
To establish that \(T\) is injective we must begin with the assumption that \(\lteval{T}{\vect{x}}=\lteval{T}{\vect{y}}\) and somehow arrive at the conclusion that \(\vect{x}=\vect{y}\text{.}\) Here we go,
Now we recognize that we have a homogeneous system of 5 equations in 5 variables (the terms \(x_i-y_i\) are the variables), so we row-reduce the coefficient matrix and find that it row-reduces to the identity matrix. The system therefore has only the trivial solution, \(x_i-y_i=0\) for each \(i\text{,}\) and we conclude that indeed \(\vect{x}=\vect{y}\text{.}\) By Definition ILT, \(T\) is injective.
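The same style of argument can be sketched briefly in Sage with a stand-in nonsingular \(5\times 5\) matrix (not Archetype R's actual matrix): a coefficient matrix that row-reduces to the identity forces the trivial solution, and hence injectivity.
    A = matrix(QQ, 5, 5, lambda i, j: 1 if j >= i else 0)     # upper triangular matrix of ones, so nonsingular
    A.rref() == identity_matrix(QQ, 5)                        # True: the coefficient matrix row-reduces to the identity
    linear_transformation(A, side='right').is_injective()     # True: only the trivial solution, so the map is injective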
Here is the cartoon for an injective linear transformation. It is meant to suggest that we never have two inputs associated with a single output. Again, the two lonely vectors at the bottom of \(V\) have no bearing either way on the injectivity of \(T\text{.}\)
Let us now examine an injective linear transformation between abstract vector spaces.
so the two inputs must be equal polynomials. By Definition ILT, \(T\) is injective.
Subsection KLT Kernel of a Linear Transformation
For a linear transformation \(\ltdefn{T}{U}{V}\text{,}\) the kernel is a subset of the domain \(U\text{.}\) Informally, it is the set of all inputs that the transformation sends to the zero vector of the codomain. It will have some natural connections with the null space of a matrix, so we will keep the same notation, and if you think about your objects, then there should be little confusion. Here is the careful definition.
Definition KLT. Kernel of a Linear Transformation.
Suppose \(\ltdefn{T}{U}{V}\) is a linear transformation. Then the kernel of \(T\) is the set \(\krn{T}=\setparts{\vect{u}\in U}{\lteval{T}{\vect{u}}=\zerovector}\text{.}\)
To determine the elements of \(\complex{3}\) in \(\krn{T}\text{,}\) we find those vectors \(\vect{u}\) such that \(\lteval{T}{\vect{u}}=\zerovector\text{.}\) This amounts to solving a homogeneous system of equations, and the solution set can be expressed as the span of a single nonzero vector.
We know that the span of a set of vectors is always a subspace (Theorem SSS), so the kernel computed in Example NKAO is also a subspace. This is no accident; the kernel of a linear transformation is always a subspace.
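Here is a minimal Sage sketch of this fact, using a hypothetical matrix (not one of the archetypes) to build a linear transformation and confirm that its kernel sits inside the domain as a subspace.
    A = matrix(QQ, [[1, 2, 3], [2, 4, 6]])          # dependent columns, so the kernel is nontrivial
    T = linear_transformation(A, side='right')      # T: QQ^3 -> QQ^2, x |-> A*x
    K = T.kernel()
    K.is_subspace(T.domain())                       # True: the kernel is a subspace of the domain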
Theorem KLTS. Kernel of a Linear Transformation is a Subspace.
Suppose that \(\ltdefn{T}{U}{V}\) is a linear transformation. Then the kernel of \(T\text{,}\) \(\krn{T}\text{,}\) is a subspace of \(U\text{.}\)
Proof.
We can apply the three-part test of Theorem TSS. First \(\lteval{T}{\zerovector_U}=\zerovector_V\) by Theorem LTTZZ, so \(\zerovector_U\in\krn{T}\) and we know that the kernel is nonempty.
Next, suppose that \(\vect{x},\,\vect{y}\in\krn{T}\text{.}\) Is \(\vect{x}+\vect{y}\in\krn{T}\text{?}\) We have \(\lteval{T}{\vect{x}+\vect{y}}=\lteval{T}{\vect{x}}+\lteval{T}{\vect{y}}=\zerovector+\zerovector=\zerovector\text{,}\) so \(\vect{x}+\vect{y}\in\krn{T}\) and we have additive closure. Finally, suppose that \(\alpha\in\complexes\) and \(\vect{x}\in\krn{T}\text{.}\) Then \(\lteval{T}{\alpha\vect{x}}=\alpha\lteval{T}{\vect{x}}=\alpha\zerovector=\zerovector\text{.}\) This qualifies \(\alpha\vect{x}\) for membership in \(\krn{T}\text{.}\) So we have scalar closure and Theorem TSS tells us that \(\krn{T}\) is a subspace of \(U\text{.}\)
Let us compute another kernel, now that we know in advance that it will be a subspace.
To determine the elements of \(\complex{3}\) in \(\krn{T}\text{,}\) we find those vectors \(\vect{u}\) such that \(\lteval{T}{\vect{u}}=\zerovector\text{,}\) which again amounts to solving a homogeneous system of equations. The kernel of \(T\) is the set of solutions to this homogeneous system, which is simply the trivial solution \(\vect{u}=\zerovector\text{,}\) so \(\krn{T}=\set{\zerovector}\text{.}\)
Our next theorem says that if a preimage is a nonempty set then we can construct it by picking any one element and adding on elements of the kernel.
Theorem KPI. Kernel and Preimage.
Suppose \(\ltdefn{T}{U}{V}\) is a linear transformation and \(\vect{v}\in V\text{.}\) If the preimage \(\preimage{T}{\vect{v}}\) is nonempty, and \(\vect{u}\in\preimage{T}{\vect{v}}\text{,}\) then \(\preimage{T}{\vect{v}}=\setparts{\vect{u}+\vect{z}}{\vect{z}\in\krn{T}}\text{.}\)
Proof.
Let \(M=\setparts{\vect{u}+\vect{z}}{\vect{z}\in\krn{T}}\text{.}\) First, we show that \(M\subseteq\preimage{T}{\vect{v}}\text{.}\) Suppose that \(\vect{w}\in M\text{,}\) so \(\vect{w}\) has the form \(\vect{w}=\vect{u}+\vect{z}\text{,}\) where \(\vect{z}\in\krn{T}\text{.}\) Then \(\lteval{T}{\vect{w}}=\lteval{T}{\vect{u}+\vect{z}}=\lteval{T}{\vect{u}}+\lteval{T}{\vect{z}}=\vect{v}+\zerovector=\vect{v}\text{,}\) so \(\vect{w}\in\preimage{T}{\vect{v}}\) and \(M\subseteq\preimage{T}{\vect{v}}\text{.}\)
For the opposite inclusion, suppose \(\vect{x}\in\preimage{T}{\vect{v}}\text{,}\) so \(\lteval{T}{\vect{x}}=\vect{v}\text{.}\) Then \(\lteval{T}{\vect{x}-\vect{u}}=\lteval{T}{\vect{x}}-\lteval{T}{\vect{u}}=\vect{v}-\vect{v}=\zerovector\text{.}\) This qualifies \(\vect{x}-\vect{u}\) for membership in the kernel of \(T\text{,}\) \(\krn{T}\text{.}\) So there is a vector \(\vect{z}\in\krn{T}\) such that \(\vect{x}-\vect{u}=\vect{z}\text{.}\) Rearranging this equation gives \(\vect{x}=\vect{u}+\vect{z}\) and so \(\vect{x}\in M\text{.}\) So \(\preimage{T}{\vect{v}}\subseteq M\) and we see that \(M=\preimage{T}{\vect{v}}\text{,}\) as desired.
This theorem, and its proof, should remind you very much of Theorem PSPHS. Additionally, you might go back and review Example SPIAS. Can you now tell which preimage is the only one that is a subspace?
Here is the cartoon which describes the “many-to-one” behavior of a typical linear transformation. Presume that \(\lteval{T}{\vect{u}_i}=\vect{v}_i\text{,}\) for \(i=1,2,3\text{,}\) and as guaranteed by Theorem LTTZZ, \(\lteval{T}{\zerovector_U}=\zerovector_V\text{.}\) Then four preimages are depicted, each labeled slightly differently. \(\preimage{T}{\vect{v}_2}\) is the most general, employing Theorem KPI to provide two equal descriptions of the set. The most unusual is \(\preimage{T}{\zerovector_V}\) which is equal to the kernel, \(\krn{T}\text{,}\) and hence is a subspace (by Theorem KLTS). The subdivisions of the domain, \(U\text{,}\) are meant to suggest the partitioning of the domain by the collection of preimages. It also suggests that each preimage is of similar size or structure, since each is a “shifted” copy of the kernel. Notice that we cannot speak of the dimension of a preimage, since it is almost never a subspace. Also notice that \(\vect{x},\,\vect{y}\in V\) are elements of the codomain with empty preimages.
The next theorem is one we will cite frequently, as it characterizes injections by the size of the kernel.
Theorem KILT. Kernel of an Injective Linear Transformation.
Suppose that \(\ltdefn{T}{U}{V}\) is a linear transformation. Then \(T\) is injective if and only if the kernel of \(T\) is trivial, \(\krn{T}=\set{\zerovector}\text{.}\)
Proof.
(⇒)
We assume \(T\) is injective and we need to establish that two sets are equal (Definition SE). Since the kernel is a subspace (Theorem KLTS), \(\set{\zerovector}\subseteq\krn{T}\text{.}\) To establish the opposite inclusion, suppose \(\vect{x}\in\krn{T}\text{.}\) We have \(\lteval{T}{\vect{x}}=\zerovector=\lteval{T}{\zerovector}\text{,}\) using Definition KLT and Theorem LTTZZ.
We can apply Definition ILT to conclude that \(\vect{x}=\zerovector\text{.}\) Therefore \(\krn{T}\subseteq\set{\zerovector}\) and by Definition SE, \(\krn{T}=\set{\zerovector}\text{.}\)
(⇐)
To establish that \(T\) is injective, appeal to Definition ILT and begin with the assumption that \(\lteval{T}{\vect{x}}=\lteval{T}{\vect{y}}\text{.}\) Then \(\lteval{T}{\vect{x}-\vect{y}}=\lteval{T}{\vect{x}}-\lteval{T}{\vect{y}}=\zerovector\text{.}\) So \(\vect{x}-\vect{y}\in\krn{T}\) by Definition KLT and with the hypothesis that the kernel is trivial we conclude that \(\vect{x}-\vect{y}=\zerovector\text{.}\) Then \(\vect{x}=\vect{y}\text{,}\) thus establishing that \(T\) is injective by Definition ILT.
You might begin to think about how Figure KPI would change if the linear transformation is injective, which would make the kernel trivial by Theorem KILT.
Example NIAQR. Not injective, Archetype Q, revisited.
We are now in a position to revisit our first example in this section, Example NIAQ. In that example, we showed that Archetype Q is not injective by constructing two vectors, which when used to evaluate the linear transformation provided the same output, thus violating Definition ILT. Just where did those two vectors come from?
The difference of the two vectors, \(\vect{z}=\vect{y}-\vect{x}\text{,}\) is the key; you can check that \(\vect{z}\) is an element of \(\krn{T}\) for Archetype Q. To manufacture such a pair, choose a vector \(\vect{x}\) at random, and then compute \(\vect{y}=\vect{x}+\vect{z}\) (verify this computation back in Example NIAQ). Then \(\lteval{T}{\vect{y}}=\lteval{T}{\vect{x}+\vect{z}}=\lteval{T}{\vect{x}}+\lteval{T}{\vect{z}}=\lteval{T}{\vect{x}}+\zerovector=\lteval{T}{\vect{x}}\text{,}\) so the two different inputs yield identical outputs.
Whenever the kernel of a linear transformation is nontrivial, we can employ this device and conclude that the linear transformation is not injective. This is another way of viewing Theorem KILT. For an injective linear transformation, the kernel is trivial and our only choice for \(\vect{z}\) is the zero vector, which will not help us create two different inputs for \(T\) that yield identical outputs. For every one of the archetypes that is not injective, there is an example presented of exactly this form.
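Here is a sketch of this device in Sage, with a hypothetical non-injective transformation; the names \(\vect{z}\text{,}\) \(\vect{x}\text{,}\) \(\vect{y}\) mirror the discussion above.
    A = matrix(QQ, [[1, 0, 1], [0, 1, 1], [1, 1, 2]])   # third column is the sum of the first two, so the kernel is nontrivial
    T = linear_transformation(A, side='right')
    z = T.kernel().random_element()                      # an element of the kernel
    x = T.domain().random_element()                      # any input whatsoever
    y = x + z
    T(x) == T(y)                                         # True: identical outputs, though x and y differ whenever z is nonzero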
Example NIAO. Not injective, Archetype O.
In Example NKAO the kernel of Archetype O was determined to be
a subspace of \(\complex{3}\) with dimension 1. Since the kernel is not trivial, Theorem KILT tells us that \(T\) is not injective.
Example IAP. Injective, Archetype P.
In Example TKAP it was shown that the linear transformation in Archetype P has a trivial kernel. So by Theorem KILT, \(T\) is injective.
Sage ILT. Injective Linear Transformations.
By now, you have probably already figured out how to determine if a linear transformation is injective, and what its kernel is. You may also now begin to understand why Sage calls the null space of a matrix a kernel. Here are two examples, first a reprise of Example NKAO.
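The first compute cell can be sketched with a stand-in \(5\times 3\) matrix having a one-dimensional kernel (not Archetype O's actual matrix), which is enough to illustrate the relevant commands.
    A = matrix(QQ, [[1, 0, 1], [0, 1, 1], [1, 1, 2], [2, 1, 3], [1, 2, 3]])   # third column = first + second
    T = linear_transformation(A, side='right')    # T: QQ^3 -> QQ^5, x |-> A*x
    T.kernel()                                    # one-dimensional, so nontrivial
    T.is_injective()                              # False, as Theorem KILT predicts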
So we have a concrete demonstration of one half of Theorem KILT. Here is the second example, a do-over for Example TKAP, but renamed as S.
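Again with a stand-in matrix, this time one whose columns are linearly independent (standing in for Archetype P).
    B = matrix(QQ, [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1], [1, 2, 3]])   # independent columns
    S = linear_transformation(B, side='right')    # S: QQ^3 -> QQ^5
    S.kernel()                                    # the zero subspace of QQ^3
    S.is_injective()                              # True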
And so we have a concrete demonstration of the other half of Theorem KILT.
Now that we have Theorem KPI, we can return to our discussion from Sage PI. The .preimage_representative() method of a linear transformation will give us a single element of the preimage, with no other guarantee about the nature of that element. That is fine, since this is all Theorem KPI requires (in addition to the kernel). Remember that not every element of the codomain may have a nonempty preimage (as indicated in the hypotheses of Theorem KPI). Here is an example using T from above, with a choice of a codomain element that has a nonempty preimage.
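Continuing with the stand-in T above, here is a sketch; the codomain element v is deliberately chosen in the image so that its preimage is nonempty.
    v = T(vector(QQ, [1, 0, 0]))       # an output of T, so its preimage is certainly nonempty
    u = T.preimage_representative(v)   # a single element of the preimage, all that Theorem KPI requires
    T(u) == v                          # True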
Now the following will create random elements of the preimage of v, which can be verified by the test always returning True. Use the compute cell just below if you are curious what p looks like.
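A sketch of that computation, continuing from the cell above; p is built exactly as Theorem KPI describes.
    p = u + T.kernel().random_element()   # the representative plus a random kernel element
    T(p) == v                             # True on every evaluation; display p itself if you are curious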
As suggested, some choices of v can lead to empty preimages, in which case Theorem KPI does not even apply.
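With the stand-in T, a membership test in the image detects such a choice (w below is a hypothetical codomain element chosen outside the image).
    w = vector(QQ, [1, 0, 0, 0, 0])
    w in T.image()                     # False, so the preimage of w is empty and Theorem KPI does not apply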
The situation is less interesting for an injective linear transformation. Still, preimages may be empty, but when they are nonempty, they are just singletons (a single element) since the kernel is empty. So a repeat of the above example, with S rather than T, would not be very informative.
Subsection ILTLI Injective Linear Transformations and Linear Independence
There is a connection between injective linear transformations and linearly independent sets that we will make precise in the next two theorems. However, more informally, we can get a feel for this connection when we think about how each property is defined. A set of vectors is linearly independent if the only relation of linear dependence is the trivial one. A linear transformation is injective if the only way two input vectors can produce the same output is in the trivial way, when both input vectors are equal.
Theorem ILTLI. Injective Linear Transformations and Linear Independence.
Suppose that \(\ltdefn{T}{U}{V}\) is an injective linear transformation and \(S=\set{\vect{u}_1,\,\vect{u}_2,\,\vect{u}_3,\,\dots,\,\vect{u}_t}\) is a linearly independent subset of \(U\text{.}\) Then \(R=\set{\lteval{T}{\vect{u}_1},\,\lteval{T}{\vect{u}_2},\,\lteval{T}{\vect{u}_3},\,\dots,\,\lteval{T}{\vect{u}_t}}\) is a linearly independent subset of \(V\text{.}\)
Theorem ILTB. Injective Linear Transformations and Bases.
Suppose that \(\ltdefn{T}{U}{V}\) is a linear transformation and \(B=\set{\vect{u}_1,\,\vect{u}_2,\,\vect{u}_3,\,\dots,\,\vect{u}_m}\) is a basis of \(U\text{.}\) Then \(T\) is injective if and only if \(C=\set{\lteval{T}{\vect{u}_1},\,\lteval{T}{\vect{u}_2},\,\lteval{T}{\vect{u}_3},\,\dots,\,\lteval{T}{\vect{u}_m}}\) is a linearly independent subset of \(V\text{.}\)
Proof.
(⇒)
Assume \(T\) is injective. Since \(B\) is a basis, we know \(B\) is linearly independent (Definition B). Then Theorem ILTLI says that \(C\) is a linearly independent subset of \(V\text{.}\)
(⇐)
Assume that \(C\) is linearly independent. To establish that \(T\) is injective, we will show that the kernel of \(T\) is trivial (Theorem KILT). Suppose that \(\vect{u}\in\krn{T}\text{.}\) As an element of \(U\text{,}\) we can write \(\vect{u}\) (uniquely) as a linear combination of the basis vectors in \(B\text{.}\) So there are scalars, \(\scalarlist{a}{m}\text{,}\) such that \(\vect{u}=a_1\vect{u}_1+a_2\vect{u}_2+\cdots+a_m\vect{u}_m\text{.}\) Applying \(T\) and using \(\vect{u}\in\krn{T}\) gives \(\zerovector=\lteval{T}{\vect{u}}=a_1\lteval{T}{\vect{u}_1}+a_2\lteval{T}{\vect{u}_2}+\cdots+a_m\lteval{T}{\vect{u}_m}\text{.}\) This is a relation of linear dependence (Definition RLD) on the linearly independent set \(C\text{,}\) so the scalars are all zero: \(a_1=a_2=a_3=\cdots=a_m=0\text{.}\) Then \(\vect{u}=0\vect{u}_1+0\vect{u}_2+\cdots+0\vect{u}_m=\zerovector\text{.}\)
Since \(\vect{u}\) was chosen as an arbitrary vector from \(\krn{T}\text{,}\) we have \(\krn{T}=\set{\zerovector}\) and Theorem KILT tells us that \(T\) is injective.
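A minimal Sage sketch of the forward direction, with a hypothetical injective transformation rather than one of the archetypes: the images of a basis of the domain should form a linearly independent set.
    A = matrix(QQ, [[1, 0], [1, 1], [0, 2]])                        # independent columns, so this map is injective
    T = linear_transformation(A, side='right')
    C = [T(b) for b in T.domain().basis()]                          # images of a basis of the domain
    T.is_injective(), T.codomain().span(C).dimension() == len(C)    # (True, True): C is linearly independent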
Subsection ILTD Injective Linear Transformations and Dimension
Theorem ILTD. Injective Linear Transformations and Dimension.
Suppose that \(\ltdefn{T}{U}{V}\) is an injective linear transformation. Then \(\dimension{U}\leq\dimension{V}\text{.}\)
Proof.
Suppose to the contrary that \(m=\dimension{U}\gt\dimension{V}=t\text{.}\) Let \(B\) be a basis of \(U\text{,}\) which will then contain \(m\) vectors. Apply \(T\) to each element of \(B\) to form a set \(C\) that is a subset of \(V\text{.}\) By Theorem ILTB, \(C\) is linearly independent and therefore must contain \(m\) distinct vectors. So we have found a set of \(m\) linearly independent vectors in \(V\text{,}\) a vector space of dimension \(t\text{,}\) with \(m\gt t\text{.}\) However, this contradicts Theorem G, so our assumption is false and \(\dimension{U}\leq\dimension{V}\text{.}\)
Example NIDAU. Not injective by dimension, Archetype U.
Since \(\dimension{M_{23}}=6\gt 4=\dimension{\complex{4}}\text{,}\) \(T\) cannot be injective, for then \(T\) would violate Theorem ILTD.
Notice that the previous example made no use of the actual formula defining the function. Merely a comparison of the dimensions of the domain and codomain is enough to conclude that the linear transformation is not injective. Archetype M and Archetype N are two more examples of linear transformations that have “big” domains and “small” codomains, resulting in “collisions” of outputs and thus are non-injective linear transformations.
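A minimal sketch of this dimension comparison in Sage, with a hypothetical random matrix rather than Archetype M or Archetype N:
    A = random_matrix(QQ, 4, 6)                   # a "big" domain QQ^6 and a "small" codomain QQ^4
    T = linear_transformation(A, side='right')
    T.domain().dimension() > T.codomain().dimension(), T.is_injective()   # (True, False), as Theorem ILTD guarantees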
Subsection CILT Composition of Injective Linear Transformations
In Subsection LT.NLTFO we saw how to combine linear transformations to build new linear transformations, specifically, how to build the composition of two linear transformations (Definition LTC). It will be useful later to know that the composition of injective linear transformations is again injective, so we prove that here.
Theorem CILTI. Composition of Injective Linear Transformations is Injective.
Suppose that \(\ltdefn{T}{U}{V}\) and \(\ltdefn{S}{V}{W}\) are injective linear transformations. Then \(\ltdefn{(\compose{S}{T})}{U}{W}\) is an injective linear transformation.
Proof.
That the composition is a linear transformation was established in Theorem CLTLT, so we need only establish that the composition is injective. Applying Definition ILT, choose \(\vect{x}\text{,}\) \(\vect{y}\) from \(U\text{.}\) Then if \(\lteval{\left(\compose{S}{T}\right)}{\vect{x}}=\lteval{\left(\compose{S}{T}\right)}{\vect{y}}\text{,}\)
\begin{align*}
&\Rightarrow&\lteval{S}{\lteval{T}{\vect{x}}}&=\lteval{S}{\lteval{T}{\vect{y}}}&&
\knowl{./knowl/xref/definition-LTC.html}{\text{Definition LTC}}\\
&\Rightarrow&\lteval{T}{\vect{x}}&=\lteval{T}{\vect{y}}&&
\knowl{./knowl/xref/definition-ILT.html}{\text{Definition ILT}}\text{ for }S\\
&\Rightarrow&\vect{x}&=\vect{y}&&
\knowl{./knowl/xref/definition-ILT.html}{\text{Definition ILT}}\text{ for }T\text{.}
\end{align*}
Sage CILT. Composition of Injective Linear Transformations.
One way to use Sage is to construct examples of theorems and verify the conclusions. Sometimes you will get this wrong: you might build an example that does not satisfy the hypotheses, or your example may not satisfy the conclusions. This may be because you are not using Sage properly, or because you do not understand a definition or a theorem, or in very limited cases you may have uncovered a bug in Sage (which is always the preferred explanation!). But in the process of trying to understand a discrepancy or unexpected result, you will learn much more, both about linear algebra and about Sage. And Sage is incredibly patient — it will stay up with you all night to help you through a rough patch.
Let us illustrate the above in the context of Theorem CILTI. The hypotheses indicate we need two injective linear transformations. Where will we get two such linear transformations? Well, the contrapositive of Theorem ILTD tells us that if the dimension of the domain exceeds the dimension of the codomain, the linear transformation can never be injective. So we should at a minimum avoid this scenario. We can build two linear transformations from matrices created randomly, and just hope that they lead to injective linear transformations. Here is how we can create examples like this. The random matrix has single-digit entries, and almost always will lead to an injective linear transformation, though we cannot be absolutely certain. Evaluate this cell repeatedly to see how rarely the result is not injective.
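A sketch of such a cell (the names and sizes below are hypothetical, chosen so the domain dimension does not exceed the codomain dimension):
    A = matrix(QQ, random_matrix(ZZ, 3, 2, x=-9, y=10))   # random single-digit integer entries
    T = linear_transformation(A, side='right')            # T: QQ^2 -> QQ^3
    T.is_injective()                                       # almost always True; re-evaluate to see an occasional False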
Our concrete example below was created this way, so here we go.
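The concrete cell itself is not reproduced, so here is a hypothetical stand-in built the same way, with small fixed matrices whose columns are linearly independent. The composition is formed from the matrix product, since with the side='right' convention \(\lteval{(\compose{S}{T})}{\vect{x}}=B(A\vect{x})=(BA)\vect{x}\text{.}\)
    A = matrix(QQ, [[1, 2], [3, 1], [0, 1]])                       # T: QQ^2 -> QQ^3, injective (independent columns)
    B = matrix(QQ, [[1, 0, 2], [2, 1, 1], [0, 1, 3], [1, 1, 1]])   # S: QQ^3 -> QQ^4, injective
    T = linear_transformation(A, side='right')
    S = linear_transformation(B, side='right')
    ST = linear_transformation(B * A, side='right')                # the composition S o T
    T.is_injective(), S.is_injective(), ST.is_injective()          # (True, True, True), as Theorem CILTI guarantees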
Reading Questions
1. Why Not Injective?
Suppose \(\ltdefn{T}{\complex{8}}{\complex{5}}\) is a linear transformation. Why is \(T\) not injective?
2. Kernel of an Injective Linear Transformation.
Describe the kernel of an injective linear transformation.
Exercises
Each archetype below is a linear transformation. Compute the kernel for each.
Archetype M, Archetype N, Archetype O, Archetype P, Archetype Q, Archetype R, Archetype S, Archetype T, Archetype U, Archetype V, Archetype W, Archetype X
C20.
The linear transformation \(\ltdefn{T}{\complex{4}}{\complex{3}}\) is not injective. Find two inputs \(\vect{x},\,\vect{y}\in\complex{4}\) that yield the same output (that is \(\lteval{T}{\vect{x}}=\lteval{T}{\vect{y}}\)).
Solution.
A linear transformation that is not injective will have a nontrivial kernel (Theorem KILT), and this is the key to finding the desired inputs. We need one nontrivial element of the kernel, so suppose that \(\vect{z}\in\complex{4}\) is a nonzero element of the kernel. Then for any choice of \(\vect{x}\text{,}\) the vectors \(\vect{x}\) and \(\vect{y}=\vect{x}+\vect{z}\) satisfy \(\lteval{T}{\vect{x}}=\lteval{T}{\vect{y}}\text{.}\)
A quicker solution is to take two elements of the kernel (in this case, scalar multiples of \(\vect{z}\)) which both get sent to \(\zerovector\) by \(T\text{.}\) Quicker yet, take \(\zerovector\) and \(\vect{z}\) as \(\vect{x}\) and \(\vect{y}\text{,}\) which also both get sent to \(\zerovector\) by \(T\text{.}\)
Let \(\ltdefn{T}{\complex{5}}{\complex{4}}\) be given by \(\lteval{T}{\vect{x}}=A\vect{x}\) for a \(4\times 5\) matrix \(A\text{.}\) Is \(T\) injective? (Hint: No calculation is required.)
Solution.
By Theorem ILTD, if a linear transformation \(\ltdefn{T}{U}{V}\) is injective, then \(\dim(U)\le\dim(V)\text{.}\) In this case, \(\ltdefn{T}{\complex{5}}{\complex{4}}\text{,}\) and \(5=\dimension{\complex{5}}\gt\dimension{\complex{4}}=4\text{.}\) Thus, \(T\) cannot possibly be injective.
C27.
Let \(\ltdefn{T}{\complex{3}}{\complex{3}}\) be given by \(\lteval{T}{\colvector{x\\y\\z}} = \colvector{2x + y + z\\ x - y + 2z\\ x + 2y - z}\text{.}\) Find \(\krn{T}\text{.}\) Is \(T\) injective?
Solution.
If \(\lteval{T}{\colvector{x\\y\\z}} = \zerovector\text{,}\) then \(\colvector{2x + y + z\\x - y + 2z\\x + 2y - z} = \zerovector\text{.}\) Thus, we have the system
\begin{align*}
2x + y + z &= 0\\
x - y + 2z &= 0\\
x + 2y - z &= 0\text{.}
\end{align*}
Thus, we are looking for the null space of the matrix \(A = \begin{bmatrix} 2 & 1 & 1\\ 1 & -1 & 2\\ 1 & 2 & -1 \end{bmatrix}\text{.}\) Row-reducing \(A\) shows that its null space is spanned by a single vector. Thus, a basis for the null space of \(A\) is \(\set{\colvector{-1\\1\\1}}\text{,}\) and the kernel is \(\krn{T} = \spn{\set{\colvector{-1\\1\\1}}}\text{.}\) Since the kernel is nontrivial, this linear transformation is not injective.
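As an optional check, the same kernel can be computed in Sage from the matrix \(A\) above.
    A = matrix(QQ, [[2, 1, 1], [1, -1, 2], [1, 2, -1]])
    T = linear_transformation(A, side='right')
    T.kernel(), T.is_injective()    # a one-dimensional kernel spanned by a multiple of (-1, 1, 1), and False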
C30.
Let \(T : M_{22} \rightarrow P_2\) be given by \(T\left(\begin{bmatrix} a & b \\ c & d \end{bmatrix}\right) = (a + b) + (a + c)x + (a + d)x^2\text{.}\) Is \(T\) injective? Find \(\krn{T}\text{.}\)
Solution.
We can see without computing that \(T\) is not injective, since the dimension of \(M_{22}\) is larger than the dimension of \(P_2\text{.}\) However, that does not address the question of the kernel of \(T\text{.}\) We need to find all matrices \(\begin{bmatrix} a & b \\ c & d \end{bmatrix}\) so that \((a + b) + (a + c)x + (a + d)x^2 = 0\text{.}\) This means \(a + b = 0\text{,}\) \(a + c = 0\text{,}\) and \(a + d = 0\text{,}\) or equivalently, \(b = c = d = -a\text{.}\) Thus, the kernel is a one-dimensional subspace of \(M_{22}\) spanned by \(\begin{bmatrix} 1 & -1\\-1&-1 \end{bmatrix}\text{.}\) Symbolically, we have \(\krn{T} = \spn{\set{\begin{bmatrix} 1 & -1\\-1&-1 \end{bmatrix}}}\text{.}\)
C31.
Given that the linear transformation \(\ltdefn{T}{\complex{3}}{\complex{3}}\text{,}\) \(\lteval{T}{\colvector{x\\y\\z}} = \colvector{2x + y\\2y + z\\x + 2z}\) is injective, show directly that \(\set{\lteval{T}{\vect{e}_1},\,\lteval{T}{\vect{e}_2},\,\lteval{T}{\vect{e}_3}}\) is a linearly independent set.
Solution.
We have \(\lteval{T}{\vect{e}_1} = \colvector{2\\0\\1}\text{,}\) \(\lteval{T}{\vect{e}_2} = \colvector{1\\2\\0}\text{,}\) and \(\lteval{T}{\vect{e}_3} = \colvector{0\\1\\2}\text{.}\) Putting these into a matrix as columns and row-reducing yields the \(3\times 3\) identity matrix, so the set of vectors \(\set{\lteval{T}{\vect{e}_1},\, \lteval{T}{\vect{e}_2},\,\lteval{T}{\vect{e}_3}}\) is linearly independent.
C32.
Given that the linear transformation \(\ltdefn{T}{\complex{2}}{\complex{3}}\text{,}\) \(\lteval{T}{\colvector{x\\y}} = \colvector{x+y\\2x + y\\x + 2y}\) is injective, show directly that \(\set{\lteval{T}{\vect{e}_1},\,\lteval{T}{\vect{e}_2}}\) is a linearly independent set.
Solution.
We have \(\lteval{T}{\vect{e}_1} = \colvector{1\\2\\1}\) and \(\lteval{T}{\vect{e}_2} = \colvector{1\\1\\2}\text{.}\) Putting these into a matrix as columns and row-reducing, we find two pivot columns, so since \(r = 2 = n\text{,}\) the set of vectors \(\set{\lteval{T}{\vect{e}_1},\,\lteval{T}{\vect{e}_2}}\) is linearly independent.
C40.
Show that the linear transformation \(R\) is not injective by finding two different elements of the domain, \(\vect{x}\) and \(\vect{y}\text{,}\) such that \(\lteval{R}{\vect{x}}=\lteval{R}{\vect{y}}\text{.}\) (\(S_{22}\) is the vector space of symmetric \(2\times 2\) matrices.)
Solution.
We choose \(\vect{x}\) to be any vector we like. A particularly cocky choice would be to choose \(\vect{x}=\zerovector\text{,}\) but we will instead choose
Then \(\lteval{R}{\vect{x}}=9+9x\text{.}\) Now compute the kernel of \(R\text{,}\) which by Theorem KILT we expect to be nontrivial. Setting \(\lteval{R}{\begin{bmatrix}a&b\\b&c\end{bmatrix}}\) equal to the zero vector, \(\zerovector=0+0x\text{,}\) and equating coefficients leads to a homogeneous system of equations. Row-reducing the coefficient matrix of this system will allow us to determine the values of \(a\text{,}\) \(b\text{,}\) and \(c\) that create elements of the null space of \(R\text{.}\)
We only need a single element of the null space of this coefficient matrix, so we will not compute a precise description of the whole null space. Instead, choose the free variable \(c=2\) to obtain a specific nonzero element \(\vect{z}\) of the kernel, and set \(\vect{y}=\vect{x}+\vect{z}\text{.}\)
Then check that \(\lteval{R}{\vect{y}}=9+9x\text{.}\)
M60.
Suppose \(U\) and \(V\) are vector spaces. Define the function \(\ltdefn{Z}{U}{V}\) by \(\lteval{Z}{\vect{u}}=\zerovector_{V}\) for every \(\vect{u}\in U\text{.}\) Then by Exercise LT.M60, \(Z\) is a linear transformation. Formulate a condition on \(U\) that is equivalent to \(Z\) being an injective linear transformation. In other words, fill in the blank to complete the following statement (and then give a proof): \(Z\) is injective if and only if \(U\) is ________. (See Exercise SLT.M60, Exercise IVLT.M60.)
T10.
Suppose \(\ltdefn{T}{U}{V}\) is a linear transformation. For which vectors \(\vect{v}\in V\) is \(\preimage{T}{\vect{v}}\) a subspace of \(U\text{?}\)
Solution.
Suppose that \(\preimage{T}{\vect{v}}\) is a subspace of \(U\text{.}\) Then \(\zerovector\in\preimage{T}{\vect{v}}\) by Property Z, which means \(\lteval{T}{\zerovector}=\vect{v}\text{.}\) We know from Theorem LTTZZ that \(\lteval{T}{\zerovector}=\zerovector\text{.}\) Putting these together we have \(\vect{v}=\lteval{T}{\zerovector}=\zerovector\text{.}\)
So our hypothesis that the preimage is a subspace has led to the conclusion that \(\vect{v}\) could only be one vector, the zero vector. We still need to verify that \(\preimage{T}{\zerovector}\) is indeed a subspace, but since \(\preimage{T}{\zerovector}=\krn{T}\) this is just Theorem KLTS.
T15.
Suppose that \(\ltdefn{T}{U}{V}\) and \(\ltdefn{S}{V}{W}\) are linear transformations. Prove the following relationship between kernels: \(\krn{T}\subseteq\krn{\compose{S}{T}}\text{.}\)
Solution.
We are asked to prove that \(\krn{T}\) is a subset of \(\krn{\compose{S}{T}}\text{.}\) Employing Definition SSET, choose \(\vect{x}\in\krn{T}\text{.}\) Then we know that \(\lteval{T}{\vect{x}}=\zerovector\text{.}\) So \(\lteval{\left(\compose{S}{T}\right)}{\vect{x}}=\lteval{S}{\lteval{T}{\vect{x}}}=\lteval{S}{\zerovector}=\zerovector\text{,}\) using Theorem LTTZZ. Thus \(\vect{x}\in\krn{\compose{S}{T}}\) by Definition KLT, as desired.