Some linear transformations possess one, or both, of two key properties, which go by the names injective and surjective. We will see that they are closely related to ideas like linear independence and spanning, and subspaces like the null space and the column space. In this section we will define an injective linear transformation and analyze the resulting consequences. The next section will do the same for the surjective property. In the final section of this chapter we will see what happens when we have the two properties simultaneously.
Suppose \(\ltdefn{T}{U}{V}\) is a linear transformation. Then \(T\) is injective if whenever \(\lteval{T}{\vect{x}}=\lteval{T}{\vect{y}}\text{,}\) then \(\vect{x}=\vect{y}\text{.}\)
Given an arbitrary function, it is possible for two different inputs to yield the same output (think about the function \(f(x)=x^2\) and the inputs \(x=3\) and \(x=-3\)). For an injective function, this never happens. If we have equal outputs (\(\lteval{T}{\vect{x}}=\lteval{T}{\vect{y}}\)) then we must have achieved those equal outputs by employing equal inputs (\(\vect{x}=\vect{y}\)). Some authors prefer the term one-to-one where we use injective, and we will sometimes refer to an injective linear transformation as an injection.
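As a quick computational illustration (not part of the formal development), here is a sketch in Python of a brute-force injectivity check on a finite sample of inputs. Such a check can only disprove injectivity by finding a collision; it can never prove injectivity on an infinite domain.

```python
# Brute-force check of injectivity on a finite sample of inputs:
# if two different inputs collide on the same output, the function
# is not injective on that sample.

def is_injective_on(f, inputs):
    seen = {}
    for x in inputs:
        y = f(x)
        if y in seen and seen[y] != x:
            return False  # two different inputs, same output
        seen[y] = x
    return True

# f(x) = x^2 is not injective: f(3) == f(-3) == 9
print(is_injective_on(lambda x: x * x, range(-5, 6)))      # False
# f(x) = 2x + 1 never collides on this sample
print(is_injective_on(lambda x: 2 * x + 1, range(-5, 6)))  # True
```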
So we have two vectors from the domain, \(\vect{x}\neq\vect{y}\text{,}\) yet \(\lteval{T}{\vect{x}}=\lteval{T}{\vect{y}}\text{,}\) in violation of Definition ILT. This is another example where you should not concern yourself with how \(\vect{x}\) and \(\vect{y}\) were selected, as this will be explained shortly. However, do understand why these two vectors provide enough evidence to conclude that \(T\) is not injective.
Here is a cartoon of a non-injective linear transformation. Notice that the central feature of this cartoon is that \(\lteval{T}{\vect{u}}=\vect{v}=\lteval{T}{\vect{w}}\text{.}\) Even though this happens again with some unnamed vectors, it only takes one occurrence to destroy the possibility of injectivity. Note also that the two vectors displayed in the bottom of \(V\) have no bearing, either way, on the injectivity of \(T\text{.}\)
To show that a linear transformation is not injective, it is enough to find a single pair of inputs that get sent to the identical output, as in Example NIAQ. However, to show that a linear transformation is injective we must establish that this coincidence of outputs never occurs. Here is an example that shows how to establish this.
To establish that \(R\) is injective we must begin with the assumption that \(\lteval{R}{\vect{x}}=\lteval{R}{\vect{y}}\) and somehow arrive at the conclusion that \(\vect{x}=\vect{y}\text{.}\) Here we go,
Now we recognize that we have a homogeneous system of 5 equations in 5 variables (the terms \(x_i-y_i\) are the variables), so we row-reduce the coefficient matrix to
Here is the cartoon for an injective linear transformation. It is meant to suggest that we never have two inputs associated with a single output. Again, the two lonely vectors at the bottom of \(V\) have no bearing either way on the injectivity of \(T\text{.}\)
For a linear transformation \(\ltdefn{T}{U}{V}\text{,}\) the kernel is a subset of the domain \(U\text{.}\) Informally, it is the set of all inputs that the transformation sends to the zero vector of the codomain. It will have some natural connections with the null space of a matrix, so we will keep the same notation, and if you think about your objects, then there should be little confusion. Here is the careful definition.
To determine the elements of \(\complex{3}\) in \(\krn{T}\text{,}\) find those vectors \(\vect{u}\) such that \(\lteval{T}{\vect{u}}=\zerovector\text{,}\) that is,
We know that the span of a set of vectors is always a subspace (Theorem SSS), so the kernel computed in Example NKAO is also a subspace. This is no accident, the kernel of a linear transformation is always a subspace.
We can apply the three-part test of Theorem TSS. First \(\lteval{T}{\zerovector_U}=\zerovector_V\) by Theorem LTTZZ, so \(\zerovector_U\in\krn{T}\) and we know that the kernel is nonempty.
This qualifies \(\alpha\vect{x}\) for membership in \(\krn{T}\text{.}\) So we have scalar closure and Theorem TSS tells us that \(\krn{T}\) is a subspace of \(U\text{.}\)
To determine the elements of \(\complex{3}\) in \(\krn{T}\text{,}\) find those vectors \(\vect{u}\) such that \(\lteval{T}{\vect{u}}=\zerovector\text{,}\) that is,
The kernel of \(T\) is the set of solutions to this homogeneous system of equations, which is simply the trivial solution \(\vect{u}=\zerovector\text{,}\) so
Suppose \(\ltdefn{T}{U}{V}\) is a linear transformation and \(\vect{v}\in V\text{.}\) If the preimage \(\preimage{T}{\vect{v}}\) is nonempty, and \(\vect{u}\in\preimage{T}{\vect{v}}\) then
Let \(M=\setparts{\vect{u}+\vect{z}}{\vect{z}\in\krn{T}}\text{.}\) First, we show that \(M\subseteq\preimage{T}{\vect{v}}\text{.}\) Suppose that \(\vect{w}\in M\text{,}\) so \(\vect{w}\) has the form \(\vect{w}=\vect{u}+\vect{z}\text{,}\) where \(\vect{z}\in\krn{T}\text{.}\) Then
This qualifies \(\vect{x}-\vect{u}\) for membership in the kernel of \(T\text{,}\)\(\krn{T}\text{.}\) So there is a vector \(\vect{z}\in\krn{T}\) such that \(\vect{x}-\vect{u}=\vect{z}\text{.}\) Rearranging this equation gives \(\vect{x}=\vect{u}+\vect{z}\) and so \(\vect{x}\in M\text{.}\) So \(\preimage{T}{\vect{v}}\subseteq M\) and we see that \(M=\preimage{T}{\vect{v}}\text{,}\) as desired.
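The content of this theorem can be illustrated with a small numerical sketch, using a hypothetical \(2\times 2\) matrix (not taken from the text) whose kernel is spanned by \((1,-1)\text{:}\) every shift of a particular preimage element by a kernel element lands on the same output.

```python
# A numerical sketch of Theorem KPI, assuming a hypothetical 2x2 matrix
# (not from the text) whose kernel is spanned by (1, -1).

A = [[1, 1],
     [2, 2]]

def T(v):
    # matrix-vector product A*v
    return [sum(row[j] * v[j] for j in range(2)) for row in A]

u = [3, 0]        # one particular element of a preimage
v = T(u)          # v = [3, 6]

# Theorem KPI: the whole preimage of v is u + ker(T); every shift of u
# by a kernel element (t, -t) maps to the same v.
for t in range(-3, 4):
    w = [u[0] + t, u[1] - t]
    assert T(w) == v

print("every u + z with z in ker(T) maps to", v)
```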
This theorem, and its proof, should remind you very much of Theorem PSPHS. Additionally, you might go back and review Example SPIAS. Can you tell now which is the only preimage to be a subspace?
Here is the cartoon which describes the “many-to-one” behavior of a typical linear transformation. Presume that \(\lteval{T}{\vect{u}_i}=\vect{v}_i\text{,}\) for \(i=1,2,3\text{,}\) and as guaranteed by Theorem LTTZZ, \(\lteval{T}{\zerovector_U}=\zerovector_V\text{.}\) Then four preimages are depicted, each labeled slightly differently. \(\preimage{T}{\vect{v}_2}\) is the most general, employing Theorem KPI to provide two equal descriptions of the set. The most unusual is \(\preimage{T}{\zerovector_V}\) which is equal to the kernel, \(\krn{T}\text{,}\) and hence is a subspace (by Theorem KLTS). The subdivisions of the domain, \(U\text{,}\) are meant to suggest the partitioning of the domain by the collection of preimages. It also suggests that each preimage is of similar size or structure, since each is a “shifted” copy of the kernel. Notice that we cannot speak of the dimension of a preimage, since it is almost never a subspace. Also notice that \(\vect{x},\,\vect{y}\in V\) are elements of the codomain with empty preimages.
Theorem KILT. Kernel of an Injective Linear Transformation.
Suppose that \(\ltdefn{T}{U}{V}\) is a linear transformation. Then \(T\) is injective if and only if the kernel of \(T\) is trivial, \(\krn{T}=\set{\zerovector}\text{.}\)
We assume \(T\) is injective and we need to establish that two sets are equal (Definition SE). Since the kernel is a subspace (Theorem KLTS), \(\set{\zerovector}\subseteq\krn{T}\text{.}\) To establish the opposite inclusion, suppose \(\vect{x}\in\krn{T}\text{.}\) We have
We can apply Definition ILT to conclude that \(\vect{x}=\zerovector\text{.}\) Therefore \(\krn{T}\subseteq\set{\zerovector}\) and by Definition SE, \(\krn{T}=\set{\zerovector}\text{.}\)
To establish that \(T\) is injective, appeal to Definition ILT and begin with the assumption that \(\lteval{T}{\vect{x}}=\lteval{T}{\vect{y}}\text{.}\) Then
So \(\vect{x}-\vect{y}\in\krn{T}\) by Definition KLT and with the hypothesis that the kernel is trivial we conclude that \(\vect{x}-\vect{y}=\zerovector\text{.}\) Then
You might begin to think about how Figure KPI would change if the linear transformation is injective, which would make the kernel trivial by Theorem KILT.
Example NIAQR. Not injective, Archetype Q, revisited.
We are now in a position to revisit our first example in this section, Example NIAQ. In that example, we showed that Archetype Q is not injective by constructing two vectors, which when used to evaluate the linear transformation provided the same output, thus violating Definition ILT. Just where did those two vectors come from?
which you can check is an element of \(\krn{T}\) for Archetype Q. Choose a vector \(\vect{x}\) at random, and then compute \(\vect{y}=\vect{x}+\vect{z}\) (verify this computation back in Example NIAQ). Then
Whenever the kernel of a linear transformation is nontrivial, we can employ this device and conclude that the linear transformation is not injective. This is another way of viewing Theorem KILT. For an injective linear transformation, the kernel is trivial and our only choice for \(\vect{z}\) is the zero vector, which will not help us create two different inputs for \(T\) that yield identical outputs. For every one of the archetypes that is not injective, there is an example presented of exactly this form.
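Here is a small Python sketch of this device, using a hypothetical rank-1 matrix in place of Archetype Q: a nonzero kernel element \(\vect{z}\) turns any input \(\vect{x}\) into a second, different input \(\vect{y}=\vect{x}+\vect{z}\) with the identical output.

```python
# Sketch of the device described above, using a small hypothetical
# matrix A (not Archetype Q) whose kernel is visibly nontrivial.

A = [[1, 2, 3],
     [2, 4, 6]]          # second row is twice the first, so rank 1

def T(v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in A]

z = [2, -1, 0]           # a nonzero kernel element: T(z) = [0, 0]
x = [5, 7, -3]           # any vector at all
y = [x[i] + z[i] for i in range(3)]

print(T(z))              # [0, 0]
print(x != y and T(x) == T(y))   # True: different inputs, same output
```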
By now, you have probably already figured out how to determine if a linear transformation is injective, and what its kernel is. You may also now begin to understand why Sage calls the null space of a matrix a kernel. Here are two examples, first a reprise of Example NKAO.
Now that we have Theorem KPI, we can return to our discussion from Sage PI. The .preimage_representative() method of a linear transformation will give us a single element of the preimage, with no other guarantee about the nature of that element. That is fine, since this is all Theorem KPI requires (in addition to the kernel). Remember that not every element of the codomain may have a nonempty preimage (as indicated in the hypotheses of Theorem KPI). Here is an example using T from above, with a choice of a codomain element that has a nonempty preimage.
Now the following will create random elements of the preimage of v, which can be verified by the test always returning True. Use the compute cell just below if you are curious what p looks like.
The situation is less interesting for an injective linear transformation. Preimages may still be empty, but when they are nonempty they are singletons (a single element), since the kernel is trivial. So a repeat of the above example, with S rather than T, would not be very informative.
Subsection ILTLI. Injective Linear Transformations and Linear Independence
There is a connection between injective linear transformations and linearly independent sets that we will make precise in the next two theorems. However, more informally, we can get a feel for this connection when we think about how each property is defined. A set of vectors is linearly independent if the only relation of linear dependence is the trivial one. A linear transformation is injective if the only way two input vectors can produce the same output is in the trivial way, when both input vectors are equal.
Assume \(T\) is injective. Since \(B\) is a basis, we know \(B\) is linearly independent (Definition B). Then Theorem ILTLI says that \(C\) is a linearly independent subset of \(V\text{.}\)
Assume that \(C\) is linearly independent. To establish that \(T\) is injective, we will show that the kernel of \(T\) is trivial (Theorem KILT). Suppose that \(\vect{u}\in\krn{T}\text{.}\) As an element of \(U\text{,}\) we can write \(\vect{u}\) as a linear combination of the basis vectors in \(B\) (uniquely). So there are scalars, \(\scalarlist{a}{m}\text{,}\) such that
This is a relation of linear dependence (Definition RLD) on the linearly independent set \(C\text{,}\) so the scalars are all zero: \(a_1=a_2=a_3=\cdots=a_m=0\text{.}\) Then
Since \(\vect{u}\) was chosen as an arbitrary vector from \(\krn{T}\text{,}\) we have \(\krn{T}=\set{\zerovector}\) and Theorem KILT tells us that \(T\) is injective.
Suppose to the contrary that \(m=\dimension{U}\gt\dimension{V}=t\text{.}\) Let \(B\) be a basis of \(U\text{,}\) which will then contain \(m\) vectors. Apply \(T\) to each element of \(B\) to form a set \(C\) that is a subset of \(V\text{.}\) By Theorem ILTB, \(C\) is linearly independent and therefore must contain \(m\) distinct vectors. So we have found a set of \(m\) linearly independent vectors in \(V\text{,}\) a vector space of dimension \(t\text{,}\) with \(m\gt t\text{.}\) However, this contradicts Theorem G, so our assumption is false and \(\dimension{U}\leq\dimension{V}\text{.}\)
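A small sketch of this dimension argument, using a hypothetical \(2\times 3\) matrix: a linear map from a 3-dimensional space to a 2-dimensional one can never be injective, so a nonzero kernel vector must exist, and for this particular matrix one can be read off directly.

```python
# Theorem ILTD in action: a linear map from C^3 to C^2 (given here by a
# hypothetical 2x3 matrix) cannot be injective, so a nonzero kernel
# vector must exist.

A = [[1, 0, 2],
     [0, 1, 3]]

def T(v):
    return [sum(row[j] * v[j] for j in range(3)) for row in A]

# Since the first two columns are the standard basis of C^2, the vector
# z = (-2, -3, 1) expresses the third column in terms of the first two.
z = [-2, -3, 1]
print(T(z))              # [0, 0] -- a nonzero vector in the kernel
```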
Notice that the previous example made no use of the actual formula defining the function. Merely a comparison of the dimensions of the domain and codomain is enough to conclude that the linear transformation is not injective. Archetype M and Archetype N are two more examples of linear transformations that have “big” domains and “small” codomains, resulting in “collisions” of outputs and thus are non-injective linear transformations.
Subsection CILT. Composition of Injective Linear Transformations
In Subsection LT.NLTFO we saw how to combine linear transformations to build new linear transformations, specifically, how to build the composition of two linear transformations (Definition LTC). It will be useful later to know that the composition of injective linear transformations is again injective, so we prove that here.
Theorem CILTI. Composition of Injective Linear Transformations is Injective.
Suppose that \(\ltdefn{T}{U}{V}\) and \(\ltdefn{S}{V}{W}\) are injective linear transformations. Then \(\ltdefn{(\compose{S}{T})}{U}{W}\) is an injective linear transformation.
That the composition is a linear transformation was established in Theorem CLTLT, so we need only establish that the composition is injective. Applying Definition ILT, choose \(\vect{x}\text{,}\)\(\vect{y}\) from \(U\text{.}\) Then if \(\lteval{\left(\compose{S}{T}\right)}{\vect{x}}=\lteval{\left(\compose{S}{T}\right)}{\vect{y}}\text{,}\)
\begin{align*}
&\Rightarrow&\lteval{S}{\lteval{T}{\vect{x}}}&=\lteval{S}{\lteval{T}{\vect{y}}}&&
\knowl{./knowl/xref/definition-LTC.html}{\text{Definition LTC}}\\
&\Rightarrow&\lteval{T}{\vect{x}}&=\lteval{T}{\vect{y}}&&
\knowl{./knowl/xref/definition-ILT.html}{\text{Definition ILT}}\text{ for }S\\
&\Rightarrow&\vect{x}&=\vect{y}&&
\knowl{./knowl/xref/definition-ILT.html}{\text{Definition ILT}}\text{ for }T\text{.}
\end{align*}
Sage CILT. Composition of Injective Linear Transformations.
One way to use Sage is to construct examples of theorems and verify the conclusions. Sometimes you will get this wrong: you might build an example that does not satisfy the hypotheses, or your example may not satisfy the conclusions. This may be because you are not using Sage properly, or because you do not understand a definition or a theorem, or in very limited cases you may have uncovered a bug in Sage (which is always the preferred explanation!). But in the process of trying to understand a discrepancy or unexpected result, you will learn much more, both about linear algebra and about Sage. And Sage is incredibly patient — it will stay up with you all night to help you through a rough patch.
Let us illustrate the above in the context of Theorem CILTI. The hypotheses indicate we need two injective linear transformations. Where will we get two such linear transformations? Well, the contrapositive of Theorem ILTD tells us that if the dimension of the domain exceeds the dimension of the codomain, we will never be injective. So we should at a minimum avoid this scenario. We can build two linear transformations from matrices created randomly, and just hope that they lead to injective linear transformations. Here is an example of how we create examples like this. The random matrix has single-digit entries, and almost always will lead to an injective linear transformation, though we cannot be absolutely certain. Evaluate this cell repeatedly, to see how rarely the result is not injective.
Archetype M, Archetype N, Archetype O, Archetype P, Archetype Q, Archetype R, Archetype S, Archetype T, Archetype U, Archetype V, Archetype W, Archetype X
The linear transformation \(\ltdefn{T}{\complex{4}}{\complex{3}}\) is not injective. Find two inputs \(\vect{x},\,\vect{y}\in\complex{4}\) that yield the same output (that is \(\lteval{T}{\vect{x}}=\lteval{T}{\vect{y}}\)).
A linear transformation that is not injective will have a nontrivial kernel (Theorem KILT), and this is the key to finding the desired inputs. We need one nontrivial element of the kernel, so suppose that \(\vect{z}\in\complex{4}\) is an element of the kernel,
A quicker solution is to take two elements of the kernel (in this case, scalar multiples of \(\vect{z}\)) which both get sent to \(\zerovector\) by \(T\text{.}\) Quicker yet, take \(\zerovector\) and \(\vect{z}\) as \(\vect{x}\) and \(\vect{y}\text{,}\) which also both get sent to \(\zerovector\) by \(T\text{.}\)
and let \(\ltdefn{T}{\complex{5}}{\complex{4}}\) be given by \(\lteval{T}{\vect{x}}=A\vect{x}\text{.}\) Is \(T\) injective? (Hint: No calculation is required.)
By Theorem ILTD, if a linear transformation \(\ltdefn{T}{U}{V}\) is injective, then \(\dimension{U}\leq\dimension{V}\text{.}\) In this case, \(\ltdefn{T}{\complex{5}}{\complex{4}}\text{,}\) and \(5=\dimension{\complex{5}}\gt\dimension{\complex{4}}=4\text{.}\) Thus, \(T\) cannot possibly be injective.
Let \(\ltdefn{T}{\complex{3}}{\complex{3}}\) be given by \(\lteval{T}{\colvector{x\\y\\z}} = \colvector{2x + y + z\\ x - y + 2z\\ x + 2y - z}\text{.}\) Find \(\krn{T}\text{.}\) Is \(T\) injective?
If \(\lteval{T}{\colvector{x\\y\\z}} = \zerovector\text{,}\) then \(\colvector{2x + y + z\\x - y + 2z\\x + 2y - z} = \zerovector\text{.}\) Thus, we have the system
\begin{align*}
2x + y + z &= 0\\
x - y + 2z &= 0\\
x + 2y - z &= 0\text{.}
\end{align*}
Thus, we are looking for the null space of the matrix
Thus, a basis for the null space of \(A\) is \(\set{\colvector{-1\\1\\1}}\text{,}\) and the kernel is \(\krn{T} = \spn{\set{\colvector{-1\\1\\1}}}\text{.}\) Since the kernel is nontrivial, this linear transformation is not injective.
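This kernel computation can be double-checked with exact rational arithmetic. The sketch below (plain Python, using `fractions`) verifies that the kernel vector \((-1,1,1)\) obtained from the system above is sent to the zero vector, and that the coefficient matrix has rank 2, so the kernel is indeed 1-dimensional.

```python
from fractions import Fraction

# Verify the kernel computation for T(x,y,z) = (2x+y+z, x-y+2z, x+2y-z).

A = [[2, 1, 1],
     [1, -1, 2],
     [1, 2, -1]]

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def rank(M):
    # exact Gaussian elimination over the rationals
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

z = [-1, 1, 1]                  # the kernel basis vector
print(matvec(A, z))             # [0, 0, 0]
print(rank(A))                  # 2, so dim ker(T) = 3 - 2 = 1
```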
Let \(T : M_{22} \rightarrow P_2\) be given by \(T\left(\begin{bmatrix} a & b \\ c & d \end{bmatrix}\right) = (a + b) + (a + c)x + (a + d)x^2\text{.}\) Is \(T\) injective? Find \(\krn{T}\text{.}\)
We can see without computing that \(T\) is not injective, since the dimension of \(M_{22}\) is larger than the dimension of \(P_2\text{.}\) However, that does not address the question of the kernel of \(T\text{.}\) We need to find all matrices \(\begin{bmatrix} a & b \\ c & d \end{bmatrix}\) so that \((a + b) + (a + c)x + (a + d)x^2 = 0\text{.}\) This means \(a + b = 0\text{,}\)\(a + c = 0\text{,}\) and \(a + d = 0\text{,}\) or equivalently, \(b = d = c = -a\text{.}\) Thus, the kernel is a one-dimensional subspace of \(M_{22}\) spanned by \(\begin{bmatrix} 1 & -1\\-1&-1 \end{bmatrix}\text{.}\) Symbolically, we have \(\krn{T} = \spn{\set{\begin{bmatrix} 1 & -1\\-1&-1 \end{bmatrix}}}\text{.}\)
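A quick check of this kernel computation in Python, identifying a polynomial in \(P_2\) with its triple of coefficients: the matrix with \(a=1\text{,}\) \(b=c=d=-1\text{,}\) and every scalar multiple of it, maps to the zero polynomial.

```python
# Verify the kernel element found above for
# T([[a,b],[c,d]]) = (a+b) + (a+c)x + (a+d)x^2,
# representing the output polynomial by its coefficient triple.

def T(a, b, c, d):
    return (a + b, a + c, a + d)   # coefficients of 1, x, x^2

print(T(1, -1, -1, -1))            # (0, 0, 0): the zero polynomial
# scalar multiples of a kernel element stay in the kernel
print(T(4, -4, -4, -4))            # (0, 0, 0)
```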
Given that the linear transformation \(\ltdefn{T}{\complex{3}}{\complex{3}}\text{,}\)\(\lteval{T}{\colvector{x\\y\\z}} = \colvector{2x + y\\2y + z\\x + 2z}\) is injective, show directly that
Given that the linear transformation \(\ltdefn{T}{\complex{2}}{\complex{3}}\text{,}\)\(\lteval{T}{\colvector{x\\y}} = \colvector{x+y\\2x + y\\x + 2y}\) is injective, show directly that
We have \(\lteval{T}{\vect{e}_1} = \colvector{1\\2\\1}\) and \(\lteval{T}{\vect{e}_2} = \colvector{1\\1\\2}\text{.}\) Putting these into a matrix as columns and row-reducing, we have
so since \(r = 2 = n\text{,}\) the set of vectors \(\set{\lteval{T}{\vect{e}_1},\,\lteval{T}{\vect{e}_2}}\) is linearly independent.
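For a set of just two vectors, linear independence can also be confirmed by exhibiting a nonzero \(2\times 2\) minor of the matrix whose columns are the two output vectors, as in this short Python check.

```python
# The two outputs T(e1) = (1,2,1) and T(e2) = (1,1,2) are linearly
# independent if some 2x2 submatrix of [T(e1) | T(e2)] has nonzero
# determinant.

c1 = [1, 2, 1]
c2 = [1, 1, 2]

# determinant of the top 2x2 block
det = c1[0] * c2[1] - c1[1] * c2[0]   # 1*1 - 2*1 = -1
print(det != 0)                        # True: the set is independent
```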
Show that the linear transformation \(R\) is not injective by finding two different elements of the domain, \(\vect{x}\) and \(\vect{y}\text{,}\) such that \(\lteval{R}{\vect{x}}=\lteval{R}{\vect{y}}\text{.}\) (\(S_{22}\) is the vector space of symmetric \(2\times 2\) matrices.)
We choose \(\vect{x}\) to be any vector we like. A particularly cocky choice would be to choose \(\vect{x}=\zerovector\text{,}\) but we will instead choose
Then \(\lteval{R}{\vect{x}}=9+9x\text{.}\) Now compute the kernel of \(R\text{,}\) which by Theorem KILT we expect to be nontrivial. Setting \(\lteval{R}{\begin{bmatrix}a&b\\b&c\end{bmatrix}}\) equal to the zero vector, \(\zerovector=0+0x\text{,}\) and equating coefficients leads to a homogeneous system of equations. Row-reducing the coefficient matrix of this system will allow us to determine the values of \(a\text{,}\)\(b\) and \(c\) that create elements of the null space of \(R\text{,}\)
We only need a single element of the null space of this coefficient matrix, so we will not compute a precise description of the whole null space. Instead, choose the free variable \(c=2\text{.}\) Then
Suppose \(U\) and \(V\) are vector spaces. Define the function \(\ltdefn{Z}{U}{V}\) by \(\lteval{Z}{\vect{u}}=\zerovector_{V}\) for every \(\vect{u}\in U\text{.}\) Then by Exercise LT.M60, \(Z\) is a linear transformation. Formulate a condition on \(U\) that is equivalent to \(Z\) being an injective linear transformation. In other words, fill in the blank to complete the following statement (and then give a proof): \(Z\) is injective if and only if \(U\) is . (See Exercise SLT.M60, Exercise IVLT.M60.)
Suppose that \(\preimage{T}{\vect{v}}\) is a subspace of \(U\text{.}\) Then \(\zerovector\in\preimage{T}{\vect{v}}\) by Property Z, which we can rearrange to say \(\lteval{T}{\zerovector}=\vect{v}\text{.}\) We know from Theorem LTTZZ that \(\lteval{T}{\zerovector}=\zerovector\text{.}\) Putting these together we have
So our hypothesis that the preimage is a subspace has led to the conclusion that \(\vect{v}\) could only be one vector, the zero vector. We still need to verify that \(\preimage{T}{\zerovector}\) is indeed a subspace, but since \(\preimage{T}{\zerovector}=\krn{T}\) this is just Theorem KLTS.
We are asked to prove that \(\krn{T}\) is a subset of \(\krn{\compose{S}{T}}\text{.}\) Employing Definition SSET, choose \(\vect{x}\in\krn{T}\text{.}\) Then we know that \(\lteval{T}{\vect{x}}=\zerovector\text{.}\) So