
Section CB Change of Basis

We have seen in Section MR that a linear transformation can be represented by a matrix, once we pick bases for the domain and codomain. How does the matrix representation change if we choose different bases? Which bases lead to especially nice representations? From the infinite possibilities, what is the best possible representation? This section will begin to answer these questions. But first we need to define eigenvalues for linear transformations and the change-of-basis matrix.

Subsection EELT Eigenvalues and Eigenvectors of Linear Transformations

We now define the notion of an eigenvalue and eigenvector of a linear transformation. It should not be too surprising, especially if you remind yourself of the close relationship between matrices and linear transformations.

Definition EELT. Eigenvalue and Eigenvector of a Linear Transformation.

Suppose that \(\ltdefn{T}{V}{V}\) is a linear transformation. Then a nonzero vector \(\vect{v}\in V\) is an eigenvector of \(T\) for the eigenvalue \(\lambda\) if \(\lteval{T}{\vect{v}}=\lambda\vect{v}\text{.}\)
We will see shortly the best method for computing the eigenvalues and eigenvectors of a linear transformation, but for now, here are some examples to verify that such things really do exist.

Example ELTBM. Eigenvectors of linear transformation between matrices.

Consider the linear transformation \(\ltdefn{T}{M_{22}}{M_{22}}\) defined by
\begin{equation*} \lteval{T}{\begin{bmatrix}a&b\\c&d\end{bmatrix}} = \begin{bmatrix} -17a+11b+8c-11d & -57a+35b+24c-33d \\ -14a+10b+6c-10d & -41a+25b+16c-23d \end{bmatrix} \end{equation*}
and the vectors
\begin{align*} \vect{x}_1 &= \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix} & \vect{x}_2 &= \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} & \vect{x}_3 &= \begin{bmatrix} 1 & 3 \\ 2 & 3 \end{bmatrix} & \vect{x}_4 &= \begin{bmatrix} 2 & 6 \\ 1 & 4 \end{bmatrix}\text{.} \end{align*}
Then compute
\begin{align*} \lteval{T}{\vect{x}_1} &= \lteval{T}{\begin{bmatrix} 0 & 1 \\ 0 & 1\end{bmatrix}} = \begin{bmatrix} 0 & 2 \\ 0 & 2 \end{bmatrix} = 2\vect{x}_1\\ \lteval{T}{\vect{x}_2} &= \lteval{T}{\begin{bmatrix} 1 & 1 \\ 1 & 0\end{bmatrix}} = \begin{bmatrix} 2 & 2 \\ 2 & 0 \end{bmatrix} = 2\vect{x}_2\\ \lteval{T}{\vect{x}_3} &= \lteval{T}{\begin{bmatrix} 1 & 3 \\ 2 & 3\end{bmatrix}} = \begin{bmatrix} -1 & -3 \\ -2 & -3 \end{bmatrix} = (-1)\vect{x}_3\\ \lteval{T}{\vect{x}_4} &= \lteval{T}{\begin{bmatrix} 2 & 6 \\ 1 & 4\end{bmatrix}} = \begin{bmatrix} -4 & -12 \\ -2 & -8 \end{bmatrix} = (-2)\vect{x}_4\text{.} \end{align*}
So \(\vect{x}_1\text{,}\) \(\vect{x}_2\text{,}\) \(\vect{x}_3\text{,}\) \(\vect{x}_4\) are eigenvectors of \(T\) with eigenvalues (respectively) \(\lambda_1=2\text{,}\) \(\lambda_2=2\text{,}\) \(\lambda_3=-1\text{,}\) \(\lambda_4=-2\text{.}\)
Here is another.

Example ELTBP. Eigenvectors of linear transformation between polynomials.

Consider the linear transformation \(\ltdefn{R}{P_2}{P_2}\) defined by
\begin{equation*} \lteval{R}{a+bx+cx^2}= (15a+8b-4c)+(-12a-6b+3c)x+(24a+14b-7c)x^2 \end{equation*}
and the vectors
\begin{align*} \vect{w}_1 &=1-x+x^2 & \vect{w}_2 &=x+2x^2 & \vect{w}_3 &=1+4x^2\text{.} \end{align*}
Then compute
\begin{align*} \lteval{R}{\vect{w}_1} &= \lteval{R}{1-x+x^2} = 3-3x+3x^2 =3\vect{w}_1\\ \lteval{R}{\vect{w}_2} &= \lteval{R}{x+2x^2} = 0+0x+0x^2 =0\vect{w}_2\\ \lteval{R}{\vect{w}_3} &= \lteval{R}{1+4x^2} = -1-4x^2 =(-1)\vect{w}_3\text{.} \end{align*}
So \(\vect{w}_1\text{,}\) \(\vect{w}_2\text{,}\) \(\vect{w}_3\) are eigenvectors of \(R\) with eigenvalues (respectively) \(\lambda_1=3\text{,}\) \(\lambda_2=0\text{,}\) \(\lambda_3=-1\text{.}\) Notice how the eigenvalue \(\lambda_2=0\) indicates that the eigenvector \(\vect{w}_2\) is a nontrivial element of the kernel of \(R\text{,}\) and therefore \(R\) is not injective (Exercise CB.T15).
Of course, these examples are meant only to illustrate the definition of eigenvectors and eigenvalues for linear transformations, and therefore naturally raise the question, “How would I find eigenvectors?” We will have an answer before we finish this section. We need one more construction first.

Sage ENDO. Endomorphisms.

An endomorphism is an “operation-preserving” function (a “morphism”) whose domain and codomain are equal. Sage takes this definition one step further for linear transformations and requires that the domain and codomain have the same bases (either a default echelonized basis or the same user basis). When a linear transformation meets this extra requirement, several natural methods become available.
Principally, we can compute the eigenvalues promised by Definition EELT. We also get a natural notion of a characteristic polynomial.
Now the question of eigenvalues being elements of the set of scalars used for the vector space becomes even more obvious. If we define an endomorphism on a vector space whose scalars are the rational numbers, should we “allow” irrational or complex eigenvalues? You will now recognize our use of the complex numbers in the text for the gross convenience that it is.
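The interactive cells of the original are not reproduced here, but a minimal sketch conveys the idea. The matrix A below is an arbitrary choice of our own, not an example from the text.

A = matrix(QQ, [[3, 1], [0, 2]])
T = linear_transformation(QQ^2, QQ^2, A, side='right')
T.is_endomorphism()       # True: equal domain, codomain, and bases
T.charpoly('x')           # x^2 - 5*x + 6
T.eigenvalues()           # the rational eigenvalues 2 and 3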

Subsection CBM Change-of-Basis Matrix

Given a vector space, we know we can usually find many different bases for the vector space, some nice, some nasty. If we choose a single vector from this vector space, we can build many different representations of the vector by constructing the representations relative to different bases. How are these different representations related to each other? A change-of-basis matrix answers this question.

Definition CBM. Change-of-Basis Matrix.

Suppose that \(V\) is a vector space, and \(\ltdefn{I_V}{V}{V}\) is the identity linear transformation on \(V\text{.}\) Let \(B=\set{\vectorlist{v}{n}}\) and \(C\) be two bases of \(V\text{.}\) Then the change-of-basis matrix from \(B\) to \(C\) is the matrix representation of \(I_V\) relative to \(B\) and \(C\text{,}\)
\begin{align*} \cbm{B}{C}&=\matrixrep{I_V}{B}{C}\\ &=\matrixrepcolumns{I_V}{C}{v}{n}\\ &=\left\lbrack \left.\vectrep{C}{\vect{v}_1}\right| \left.\vectrep{C}{\vect{v}_2}\right| \left.\vectrep{C}{\vect{v}_3}\right| \ldots \left|\vectrep{C}{\vect{v}_n}\right. \right\rbrack\text{.} \end{align*}
Notice that this definition is primarily about a single vector space (\(V\)) and two bases of \(V\) (\(B\text{,}\) \(C\)). The linear transformation (\(I_V\)) is necessary but not critical. As you might expect, this matrix has something to do with changing bases. Here is the theorem that gives the matrix its name (not the other way around).

Theorem CB. Change-of-Basis.

Suppose that \(\vect{v}\) is a vector in the vector space \(V\) and \(B\) and \(C\) are bases of \(V\text{.}\) Then
\begin{equation*} \vectrep{C}{\vect{v}}=\cbm{B}{C}\vectrep{B}{\vect{v}}\text{.} \end{equation*}

Proof.

We have
\begin{align*} \vectrep{C}{\vect{v}} &=\vectrep{C}{\lteval{I_V}{\vect{v}}}&& \knowl{./knowl/xref/definition-IDLT.html}{\text{Definition IDLT}}\\ &=\matrixrep{I_V}{B}{C}\vectrep{B}{\vect{v}}&& \knowl{./knowl/xref/theorem-FTMR.html}{\text{Theorem FTMR}}\\ &=\cbm{B}{C}\vectrep{B}{\vect{v}}&& \knowl{./knowl/xref/definition-CBM.html}{\text{Definition CBM}}\text{.} \end{align*}
So the change-of-basis matrix can be used with matrix multiplication to convert a vector representation of a vector (\(\vect{v}\)) relative to one basis (\(\vectrep{B}{\vect{v}}\)) to a representation of the same vector relative to a second basis (\(\vectrep{C}{\vect{v}}\)).

Theorem ICBM. Inverse of Change-of-Basis Matrix.

Suppose that \(V\) is a vector space, and \(B\) and \(C\) are bases of \(V\text{.}\) Then the change-of-basis matrix \(\cbm{B}{C}\) is nonsingular and
\begin{equation*} \inverse{\cbm{B}{C}}=\cbm{C}{B}\text{.} \end{equation*}

Proof.

The linear transformation \(\ltdefn{I_V}{V}{V}\) is invertible, and its inverse is itself, \(I_V\) (check this!). So by Theorem IMR, the matrix \(\matrixrep{I_V}{B}{C}=\cbm{B}{C}\) is invertible. Theorem NI says an invertible matrix is nonsingular.
Then
\begin{align*} \inverse{\cbm{B}{C}} &=\inverse{\left(\matrixrep{I_V}{B}{C}\right)}&& \knowl{./knowl/xref/definition-CBM.html}{\text{Definition CBM}}\\ &=\matrixrep{\ltinverse{I_V}}{C}{B}&& \knowl{./knowl/xref/theorem-IMR.html}{\text{Theorem IMR}}\\ &=\matrixrep{I_V}{C}{B}&& \knowl{./knowl/xref/definition-IDLT.html}{\text{Definition IDLT}}\\ &=\cbm{C}{B}&& \knowl{./knowl/xref/definition-CBM.html}{\text{Definition CBM}}\text{.} \end{align*}

Example CBP. Change of basis with polynomials.

The vector space \(P_4\) (Example VSP) has two nice bases (Example BP),
\begin{align*} B&=\set{1,x,x^2,x^3,x^4}\\ C&=\set{1,1+x,1+x+x^2,1+x+x^2+x^3,1+x+x^2+x^3+x^4}\text{.} \end{align*}
To build the change-of-basis matrix between \(B\) and \(C\text{,}\) we must first build a vector representation of each vector in \(B\) relative to \(C\text{,}\)
\begin{align*} \vectrep{C}{1} &=\vectrep{C}{(1)\left(1\right)} =\colvector{1\\0\\0\\0\\0}\\ \vectrep{C}{x} &=\vectrep{C}{(-1)\left(1\right)+(1)\left(1+x\right)} =\colvector{-1\\1\\0\\0\\0}\\ \vectrep{C}{x^2} &=\vectrep{C}{(-1)\left(1+x\right)+(1)\left(1+x+x^2\right)} =\colvector{0\\-1\\1\\0\\0}\\ \vectrep{C}{x^3} &=\vectrep{C}{(-1)\left(1+x+x^2\right)+(1)\left(1+x+x^2+x^3\right)} =\colvector{0\\0\\-1\\1\\0}\\ \vectrep{C}{x^4} &=\vectrep{C}{(-1)\left(1+x+x^2+x^3\right)+(1)\left(1+x+x^2+x^3+x^4\right)} =\colvector{0\\0\\0\\-1\\1}\text{.} \end{align*}
Then we package up these vectors as the columns of a matrix,
\begin{equation*} \cbm{B}{C}= \begin{bmatrix} 1 &-1 & 0 & 0 & 0\\ 0 & 1 &-1 & 0 & 0\\ 0 & 0 & 1 &-1 & 0\\ 0 & 0 & 0 & 1 &-1\\ 0 & 0 & 0 & 0 & 1\\ \end{bmatrix}\text{.} \end{equation*}
Now, to illustrate Theorem CB, consider the vector \(\vect{u}=5-3x+2x^2+8x^3-3x^4\text{.}\) We can build the representation of \(\vect{u}\) relative to \(B\) easily,
\begin{equation*} \vectrep{B}{\vect{u}}= \vectrep{B}{5-3x+2x^2+8x^3-3x^4}= \colvector{5\\-3\\2\\8\\-3}\text{.} \end{equation*}
Applying Theorem CB, we obtain a second representation of \(\vect{u}\text{,}\) but now relative to \(C\text{,}\)
\begin{align*} \vectrep{C}{\vect{u}} &=\cbm{B}{C}\vectrep{B}{\vect{u}}&& \knowl{./knowl/xref/theorem-CB.html}{\text{Theorem CB}}\\ &= \begin{bmatrix} 1 &-1 & 0 & 0 & 0\\ 0 & 1 &-1 & 0 & 0\\ 0 & 0 & 1 &-1 & 0\\ 0 & 0 & 0 & 1 &-1\\ 0 & 0 & 0 & 0 & 1\\ \end{bmatrix} \colvector{5\\-3\\2\\8\\-3}\\ &=\colvector{8\\-5\\-6\\11\\-3}&& \knowl{./knowl/xref/definition-MVP.html}{\text{Definition MVP}}\text{.} \end{align*}
We can check our work by unraveling this second representation,
\begin{align*} \vect{u} &=\vectrepinv{C}{\vectrep{C}{\vect{u}}}&& \knowl{./knowl/xref/definition-IVLT.html}{\text{Definition IVLT}}\\ &=\vectrepinv{C}{\colvector{8\\-5\\-6\\11\\-3}}\\ &=8(1)+(-5)(1+x)+(-6)(1+x+x^2)\\ &\quad\quad+(11)(1+x+x^2+x^3)+(-3)(1+x+x^2+x^3+x^4)&& \knowl{./knowl/xref/definition-VR.html}{\text{Definition VR}}\\ &=5-3x+2x^2+8x^3-3x^4\text{.} \end{align*}
The change-of-basis matrix from \(C\) to \(B\) is actually easier to build. Grab each vector in the basis \(C\) and form its representation relative to \(B\)
\begin{align*} \vectrep{B}{1} &=\vectrep{B}{(1)1} =\colvector{1\\0\\0\\0\\0}\\ \vectrep{B}{1+x} &=\vectrep{B}{(1)1+(1)x} =\colvector{1\\1\\0\\0\\0}\\ \vectrep{B}{1+x+x^2} &=\vectrep{B}{(1)1+(1)x+(1)x^2} =\colvector{1\\1\\1\\0\\0}\\ \vectrep{B}{1+x+x^2+x^3} &=\vectrep{B}{(1)1+(1)x+(1)x^2+(1)x^3} =\colvector{1\\1\\1\\1\\0}\\ \vectrep{B}{1+x+x^2+x^3+x^4} &=\vectrep{B}{(1)1+(1)x+(1)x^2+(1)x^3+(1)x^4} =\colvector{1\\1\\1\\1\\1}\text{.} \end{align*}
Then we package up these vectors as the columns of a matrix,
\begin{equation*} \cbm{C}{B}= \begin{bmatrix} 1 & 1 & 1 & 1 & 1\\ 0 & 1 & 1 & 1 & 1\\ 0 & 0 & 1 & 1 & 1\\ 0 & 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 0 & 1\\ \end{bmatrix}\text{.} \end{equation*}
We formed two representations of the vector \(\vect{u}\) above, so we can again provide a check on our computations by converting from the representation of \(\vect{u}\) relative to \(C\) to the representation of \(\vect{u}\) relative to \(B\text{,}\)
\begin{align*} \vectrep{B}{\vect{u}} &=\cbm{C}{B}\vectrep{C}{\vect{u}}&& \knowl{./knowl/xref/theorem-CB.html}{\text{Theorem CB}}\\ &= \begin{bmatrix} 1 & 1 & 1 & 1 & 1\\ 0 & 1 & 1 & 1 & 1\\ 0 & 0 & 1 & 1 & 1\\ 0 & 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 0 & 1\\ \end{bmatrix} \colvector{8\\-5\\-6\\11\\-3}\\ &=\colvector{5\\-3\\2\\8\\-3}&& \knowl{./knowl/xref/definition-MVP.html}{\text{Definition MVP}}\text{.} \end{align*}
One more computation that is either a check on our work, or an illustration of a theorem. The two change-of-basis matrices, \(\cbm{B}{C}\) and \(\cbm{C}{B}\text{,}\) should be inverses of each other, according to Theorem ICBM. Here we go,
\begin{equation*} \cbm{B}{C}\cbm{C}{B}= \begin{bmatrix} 1 &-1 & 0 & 0 & 0\\ 0 & 1 &-1 & 0 & 0\\ 0 & 0 & 1 &-1 & 0\\ 0 & 0 & 0 & 1 &-1\\ 0 & 0 & 0 & 0 & 1\\ \end{bmatrix} \begin{bmatrix} 1 & 1 & 1 & 1 & 1\\ 0 & 1 & 1 & 1 & 1\\ 0 & 0 & 1 & 1 & 1\\ 0 & 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 0 & 1\\ \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 1\\ \end{bmatrix}\text{.} \end{equation*}
The computations of the previous example are not meant to present any labor-saving devices, but instead are meant to illustrate the utility of the change-of-basis matrix. However, you might have noticed that \(\cbm{C}{B}\) was easier to compute than \(\cbm{B}{C}\text{.}\) If you needed \(\cbm{B}{C}\text{,}\) then you could first compute \(\cbm{C}{B}\) and then compute its inverse, which by Theorem ICBM, would equal \(\cbm{B}{C}\text{.}\)
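If you would like a machine check of the computation above, a short Sage sketch (entering the two matrices exactly as displayed) verifies Theorem ICBM.

CBBC = matrix(QQ, [[1,-1,0,0,0], [0,1,-1,0,0], [0,0,1,-1,0], [0,0,0,1,-1], [0,0,0,0,1]])
CBCB = matrix(QQ, [[1,1,1,1,1], [0,1,1,1,1], [0,0,1,1,1], [0,0,0,1,1], [0,0,0,0,1]])
CBBC.inverse() == CBCB    # True, as Theorem ICBM promises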
Here is another illustrative example. We have been concentrating on working with abstract vector spaces, but all of our theorems and techniques apply just as well to \(\complex{m}\text{,}\) the vector space of column vectors. We only need to use more complicated bases than the standard unit vectors (Theorem SUVB) to make things interesting.

Example CBCV. Change of basis with column vectors.

For the vector space \(\complex{4}\) we have the two bases,
\begin{align*} B&=\set{ \colvector{1 \\ -2 \\ 1 \\ -2},\, \colvector{-1 \\ 3 \\ 1 \\ 1},\, \colvector{2 \\ -3 \\ 3 \\ -4},\, \colvector{-1 \\ 3 \\ 3 \\ 0} } & C&=\set{ \colvector{1 \\ -6 \\ -4 \\ -1},\, \colvector{-4 \\ 8 \\ -5 \\ 8},\, \colvector{-5 \\ 13 \\ -2 \\ 9},\, \colvector{3 \\ -7 \\ 3 \\ -6} }\text{.} \end{align*}
The change-of-basis matrix from \(B\) to \(C\) requires writing each vector of \(B\) as a linear combination of the vectors in \(C\text{,}\)
\begin{align*} \vectrep{C}{\colvector{1 \\ -2 \\ 1 \\ -2}} &=\vectrep{C}{ (1)\colvector{1 \\ -6 \\ -4 \\ -1}+ (-2)\colvector{-4 \\ 8 \\ -5 \\ 8}+ (1)\colvector{-5 \\ 13 \\ -2 \\ 9}+ (-1)\colvector{3 \\ -7 \\ 3 \\ -6} } =\colvector{1\\-2\\1\\-1}\\ \vectrep{C}{\colvector{-1 \\ 3 \\ 1 \\ 1}} &=\vectrep{C}{ (2)\colvector{1 \\ -6 \\ -4 \\ -1}+ (-3)\colvector{-4 \\ 8 \\ -5 \\ 8}+ (3)\colvector{-5 \\ 13 \\ -2 \\ 9}+ (0)\colvector{3 \\ -7 \\ 3 \\ -6} } =\colvector{2\\-3\\3\\0}\\ \vectrep{C}{\colvector{2 \\ -3 \\ 3 \\ -4}} &=\vectrep{C}{ (1)\colvector{1 \\ -6 \\ -4 \\ -1}+ (-3)\colvector{-4 \\ 8 \\ -5 \\ 8}+ (1)\colvector{-5 \\ 13 \\ -2 \\ 9}+ (-2)\colvector{3 \\ -7 \\ 3 \\ -6} } =\colvector{1\\-3\\1\\-2}\\ \vectrep{C}{\colvector{-1 \\ 3 \\ 3 \\ 0}} &=\vectrep{C}{ (2)\colvector{1 \\ -6 \\ -4 \\ -1}+ (-2)\colvector{-4 \\ 8 \\ -5 \\ 8}+ (4)\colvector{-5 \\ 13 \\ -2 \\ 9}+ (3)\colvector{3 \\ -7 \\ 3 \\ -6} } =\colvector{2\\-2\\4\\3}\text{.} \end{align*}
Then we package these vectors up as the change-of-basis matrix,
\begin{equation*} \cbm{B}{C}= \begin{bmatrix} 1 & 2 & 1 & 2 \\ -2 & -3 & -3 & -2 \\ 1 & 3 & 1 & 4 \\ -1 & 0 & -2 & 3 \end{bmatrix}\text{.} \end{equation*}
Now consider a single (arbitrary) vector \(\vect{y}=\colvector{2\\6\\-3\\4}\text{.}\) First, build the vector representation of \(\vect{y}\) relative to \(B\text{.}\) This will require writing \(\vect{y}\) as a linear combination of the vectors in \(B\text{,}\)
\begin{align*} \vectrep{B}{\vect{y}} &=\vectrep{B}{\colvector{2\\6\\-3\\4}}\\ &=\vectrep{B}{ (-21)\colvector{1 \\ -2 \\ 1 \\ -2}+ (6)\colvector{-1 \\ 3 \\ 1 \\ 1}+ (11)\colvector{2 \\ -3 \\ 3 \\ -4}+ (-7)\colvector{-1 \\ 3 \\ 3 \\ 0} }\\ &=\colvector{-21\\6\\11\\-7}\text{.} \end{align*}
Now, applying Theorem CB we can convert the representation of \(\vect{y}\) relative to \(B\) into a representation relative to \(C\text{,}\)
\begin{align*} \vectrep{C}{\vect{y}} &=\cbm{B}{C}\vectrep{B}{\vect{y}}&& \knowl{./knowl/xref/theorem-CB.html}{\text{Theorem CB}}\\ &= \begin{bmatrix} 1 & 2 & 1 & 2 \\ -2 & -3 & -3 & -2 \\ 1 & 3 & 1 & 4 \\ -1 & 0 & -2 & 3 \end{bmatrix} \colvector{-21\\6\\11\\-7}\\ &=\colvector{-12\\5\\-20\\-22}&& \knowl{./knowl/xref/definition-MVP.html}{\text{Definition MVP}}\text{.} \end{align*}
We could continue further with this example, perhaps by computing the representation of \(\vect{y}\) relative to the basis \(C\) directly as a check on our work (Exercise CB.C20). Or we could choose another vector to play the role of \(\vect{y}\) and compute two different representations of this vector relative to the two bases \(B\) and \(C\text{.}\)

Sage CBM. Change-of-Basis Matrix.

To create a change-of-basis matrix, it is enough to construct an identity linear transformation relative to a domain and codomain with the specified user bases, which is simply a straight application of Definition CBM. Here we go with two arbitrary bases.
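The original cell is not reproduced here; a sketch along the same lines, with two bases of QQ^3 of our own invention, might read:

b = [vector(QQ, [1,1,0]), vector(QQ, [0,1,1]), vector(QQ, [1,0,1])]
c = [vector(QQ, [1,0,0]), vector(QQ, [1,1,0]), vector(QQ, [1,1,1])]
U = (QQ^3).subspace_with_basis(b)                # domain with user basis B
W = (QQ^3).subspace_with_basis(c)                # codomain with user basis C
id3 = linear_transformation(U, W, lambda v: v)   # the identity transformation
CB = id3.matrix(side='right')                    # the change-of-basis matrix
CB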
We can demonstrate that CB is indeed the change-of-basis matrix from B to C, converting vector representations relative to B into vector representations relative to C. We choose an arbitrary vector, x, to experiment with (you could experiment with other possibilities). We use the Sage conveniences to create vector representations relative to the two bases, and then verify Theorem CB. Recognize that x, u and v are all the same vector.
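Continuing the sketch:

x = vector(QQ, [3, -1, 4])    # an arbitrary vector to experiment with
u = U.coordinate_vector(x)    # representation of x relative to B
v = W.coordinate_vector(x)    # representation of x relative to C
v == CB * u                   # Theorem CB: True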
We can also verify the construction above by building the change-of-basis matrix directly (i.e., without constructing a linear transformation).
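In the sketch, the direct construction is a single line, straight from Definition CBM.

CB2 = column_matrix([W.coordinate_vector(bvec) for bvec in b])
CB2 == CB                     # True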

Subsection MRS Matrix Representations and Similarity

Here is the main theorem of this section. It looks a bit involved at first glance, but the proof should make you realize it is not all that complicated. In any event, we are more interested in a special case.

Theorem MRCB. Matrix Representation and Change of Basis.

Suppose that \(\ltdefn{T}{U}{V}\) is a linear transformation, \(B\) and \(C\) are bases for \(U\text{,}\) and \(D\) and \(E\) are bases for \(V\text{.}\) Then
\begin{equation*} \matrixrep{T}{B}{D}=\cbm{E}{D}\matrixrep{T}{C}{E}\cbm{B}{C}\text{.} \end{equation*}

Proof.

We have
\begin{align*} \cbm{E}{D}\matrixrep{T}{C}{E}\cbm{B}{C} &=\matrixrep{I_V}{E}{D}\matrixrep{T}{C}{E}\matrixrep{I_U}{B}{C}&& \knowl{./knowl/xref/definition-CBM.html}{\text{Definition CBM}}\\ &=\matrixrep{I_V}{E}{D}\matrixrep{\compose{T}{I_U}}{B}{E}&& \knowl{./knowl/xref/theorem-MRCLT.html}{\text{Theorem MRCLT}}\\ &=\matrixrep{I_V}{E}{D}\matrixrep{T}{B}{E}&& \knowl{./knowl/xref/definition-IDLT.html}{\text{Definition IDLT}}\\ &=\matrixrep{\compose{I_V}{T}}{B}{D}&& \knowl{./knowl/xref/theorem-MRCLT.html}{\text{Theorem MRCLT}}\\ &=\matrixrep{T}{B}{D}&& \knowl{./knowl/xref/definition-IDLT.html}{\text{Definition IDLT}}\text{.} \end{align*}
We will be most interested in a special case of this theorem (Theorem SCB), but here is an example that illustrates the full generality of Theorem MRCB.

Example MRCM. Matrix representations and change-of-basis matrices.

Begin with two vector spaces, \(S_2\text{,}\) the subspace of \(M_{22}\) containing all \(2\times 2\) symmetric matrices, and \(P_3\) (Example VSP), the vector space of all polynomials of degree 3 or less. Then define the linear transformation \(\ltdefn{Q}{S_2}{P_3}\) by
\begin{equation*} \lteval{Q}{\begin{bmatrix}a&b\\b&c\end{bmatrix}} = (5a-2b+6c)+(3a-b+2c)x+(a+3b-c)x^2+(-4a+2b+c)x^3\text{.} \end{equation*}
Here are two bases for each vector space, one nice, one nasty. First for \(S_2\text{,}\)
\begin{align*} B&= \set{ \begin{bmatrix}5&-3\\-3&-2\end{bmatrix},\, \begin{bmatrix}2&-3\\-3&0\end{bmatrix},\, \begin{bmatrix}1&2\\2&4\end{bmatrix} } & C&= \set{ \begin{bmatrix}1&0\\0&0\end{bmatrix},\, \begin{bmatrix}0&1\\1&0\end{bmatrix},\, \begin{bmatrix}0&0\\0&1\end{bmatrix} } \end{align*}
and then for \(P_3\text{,}\)
\begin{align*} D&=\set{ 2+x-2x^2+3x^3,\, -1-2x^2+3x^3,\, -3-x+x^3,\, -x^2+x^3 }\\ E&=\set{1,\,x,\,x^2,\,x^3}\text{.} \end{align*}
We will begin with a matrix representation of \(Q\) relative to \(C\) and \(E\text{.}\) We first find vector representations of the elements of \(C\) relative to \(E\text{,}\)
\begin{align*} \vectrep{E}{\lteval{Q}{\begin{bmatrix}1&0\\0&0\end{bmatrix}}} &=\vectrep{E}{5+3x+x^2-4x^3}=\colvector{5\\3\\1\\-4}\\ \vectrep{E}{\lteval{Q}{\begin{bmatrix}0&1\\1&0\end{bmatrix}}} &=\vectrep{E}{-2-x+3x^2+2x^3}=\colvector{-2\\-1\\3\\2}\\ \vectrep{E}{\lteval{Q}{\begin{bmatrix}0&0\\0&1\end{bmatrix}}} &=\vectrep{E}{6+2x-x^2+x^3}=\colvector{6\\2\\-1\\1}\text{.} \end{align*}
So
\begin{align*} \matrixrep{Q}{C}{E} = \begin{bmatrix} 5 & -2 & 6\\ 3 & -1 & 2\\ 1 & 3 & -1\\ -4 & 2 & 1 \end{bmatrix}\text{.} \end{align*}
Now we construct two change-of-basis matrices. First, \(\cbm{B}{C}\) requires vector representations of the elements of \(B\text{,}\) relative to \(C\text{.}\) Since \(C\) is a nice basis, this is straightforward,
\begin{align*} \vectrep{C}{\begin{bmatrix}5&-3\\-3&-2\end{bmatrix}} &=\vectrep{C}{ (5)\begin{bmatrix}1&0\\0&0\end{bmatrix}+ (-3)\begin{bmatrix}0&1\\1&0\end{bmatrix}+ (-2)\begin{bmatrix}0&0\\0&1\end{bmatrix} } =\colvector{5\\-3\\-2}\\ \vectrep{C}{\begin{bmatrix}2&-3\\-3&0\end{bmatrix}} &=\vectrep{C}{ (2)\begin{bmatrix}1&0\\0&0\end{bmatrix}+ (-3)\begin{bmatrix}0&1\\1&0\end{bmatrix}+ (0)\begin{bmatrix}0&0\\0&1\end{bmatrix} } =\colvector{2\\-3\\0}\\ \vectrep{C}{\begin{bmatrix}1&2\\2&4\end{bmatrix}} &=\vectrep{C}{ (1)\begin{bmatrix}1&0\\0&0\end{bmatrix}+ (2)\begin{bmatrix}0&1\\1&0\end{bmatrix}+ (4)\begin{bmatrix}0&0\\0&1\end{bmatrix} } =\colvector{1\\2\\4}\text{.} \end{align*}
So
\begin{align*} \cbm{B}{C}&= \begin{bmatrix} 5 & 2 & 1\\ -3 & -3 & 2\\ -2 & 0 & 4 \end{bmatrix}\text{.} \end{align*}
The other change-of-basis matrix we will compute is \(\cbm{E}{D}\text{.}\) However, since \(E\) is a nice basis (and \(D\) is not) we will turn it around and instead compute \(\cbm{D}{E}\) and apply Theorem ICBM to use an inverse to compute \(\cbm{E}{D}\text{.}\) We have
\begin{align*} \vectrep{E}{2+x-2x^2+3x^3} &=\vectrep{E}{(2)1+(1)x+(-2)x^2+(3)x^3} =\colvector{2\\1\\-2\\3}\\ \vectrep{E}{-1-2x^2+3x^3} &=\vectrep{E}{(-1)1+(0)x+(-2)x^2+(3)x^3} =\colvector{-1\\0\\-2\\3}\\ \vectrep{E}{-3-x+x^3} &=\vectrep{E}{(-3)1+(-1)x+(0)x^2+(1)x^3} =\colvector{-3\\-1\\0\\1}\\ \vectrep{E}{-x^2+x^3} &=\vectrep{E}{(0)1+(0)x+(-1)x^2+(1)x^3} =\colvector{0\\0\\-1\\1}\text{.} \end{align*}
So, we can package these column vectors up as a matrix to obtain \(\cbm{D}{E}\) and then with an application of Theorem ICBM,
\begin{equation*} \cbm{E}{D} =\inverse{\left(\cbm{D}{E}\right)} =\inverse{ \begin{bmatrix} 2 & -1 & -3 & 0 \\ 1 & 0 & -1 & 0 \\ -2 & -2 & 0 & -1 \\ 3 & 3 & 1 & 1 \end{bmatrix} } = \begin{bmatrix} 1 & -2 & 1 & 1 \\ -2 & 5 & -1 & -1 \\ 1 & -3 & 1 & 1 \\ 2 & -6 & -1 & 0 \end{bmatrix}\text{.} \end{equation*}
We are now in a position to apply Theorem MRCB. The matrix representation of \(Q\) relative to \(B\) and \(D\) can be obtained as follows,
\begin{align*} \matrixrep{Q}{B}{D} &=\cbm{E}{D}\matrixrep{Q}{C}{E}\cbm{B}{C}\\ &= \begin{bmatrix} 1 & -2 & 1 & 1 \\ -2 & 5 & -1 & -1 \\ 1 & -3 & 1 & 1 \\ 2 & -6 & -1 & 0 \end{bmatrix} \begin{bmatrix} 5 & -2 & 6\\ 3 & -1 & 2\\ 1 & 3 & -1\\ -4 & 2 & 1 \end{bmatrix} \begin{bmatrix} 5 & 2 & 1\\ -3 & -3 & 2\\ -2 & 0 & 4 \end{bmatrix}\\ &= \begin{bmatrix} 1 & -2 & 1 & 1 \\ -2 & 5 & -1 & -1 \\ 1 & -3 & 1 & 1 \\ 2 & -6 & -1 & 0 \end{bmatrix} \begin{bmatrix} 19 & 16 & 25 \\ 14 & 9 & 9 \\ -2 & -7 & 3 \\ -28 & -14 & 4 \end{bmatrix}\\ &= \begin{bmatrix} -39 & -23 & 14 \\ 62 & 34 & -12 \\ -53 & -32 & 5 \\ -44 & -15 & -7 \end{bmatrix}\text{.} \end{align*}
Now check our work by computing \(\matrixrep{Q}{B}{D}\) directly (Exercise CB.C21).
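Alternatively, a quick Sage check of the product above, entering the three matrices verbatim:

CBED = matrix(QQ, [[1,-2,1,1], [-2,5,-1,-1], [1,-3,1,1], [2,-6,-1,0]])
MQCE = matrix(QQ, [[5,-2,6], [3,-1,2], [1,3,-1], [-4,2,1]])
CBBC = matrix(QQ, [[5,2,1], [-3,-3,2], [-2,0,4]])
CBED * MQCE * CBBC            # reproduces the matrix representation above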
Here is a special case of the previous theorem, where we choose \(U\) and \(V\) to be the same vector space, so the matrix representations and the change-of-basis matrices are all square of the same size.

Theorem SCB. Similarity and Change of Basis.

Suppose that \(\ltdefn{T}{V}{V}\) is a linear transformation and \(B\) and \(C\) are bases of \(V\text{.}\) Then
\begin{equation*} \matrixrep{T}{B}{B}=\inverse{\cbm{B}{C}}\matrixrep{T}{C}{C}\cbm{B}{C}\text{.} \end{equation*}

Proof.

In the conclusion of Theorem MRCB, replace \(D\) by \(B\text{,}\) and replace \(E\) by \(C\text{,}\)
\begin{align*} \matrixrep{T}{B}{B} &=\cbm{C}{B}\matrixrep{T}{C}{C}\cbm{B}{C}&& \knowl{./knowl/xref/theorem-MRCB.html}{\text{Theorem MRCB}}\\ &=\inverse{\cbm{B}{C}}\matrixrep{T}{C}{C}\cbm{B}{C}&& \knowl{./knowl/xref/theorem-ICBM.html}{\text{Theorem ICBM}}\text{.} \end{align*}
This is the third surprise of this chapter. Theorem SCB considers the special case where a linear transformation has the same vector space for the domain and codomain (\(V\)). We build a matrix representation of \(T\) using the basis \(B\) simultaneously for both the domain and codomain (\(\matrixrep{T}{B}{B}\)), and then we build a second matrix representation of \(T\text{,}\) now using the basis \(C\) for both the domain and codomain (\(\matrixrep{T}{C}{C}\)). Then these two representations are related via a similarity transformation (Definition SIM) using a change-of-basis matrix (\(\cbm{B}{C}\))!

Example MRBE. Matrix representation with basis of eigenvectors.

We return to the linear transformation \(\ltdefn{T}{M_{22}}{M_{22}}\) of Example ELTBM defined by
\begin{equation*} \lteval{T}{\begin{bmatrix}a&b\\c&d\end{bmatrix}} = \begin{bmatrix} -17a+11b+8c-11d & -57a+35b+24c-33d \\ -14a+10b+6c-10d & -41a+25b+16c-23d \end{bmatrix}\text{.} \end{equation*}
In Example ELTBM we showcased four eigenvectors of \(T\text{.}\) We will now put these four vectors in a set,
\begin{equation*} B=\set{\vect{x}_1,\,\vect{x}_2,\,\vect{x}_3,\,\vect{x}_4} =\set{ \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix} ,\, \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} ,\, \begin{bmatrix} 1 & 3 \\ 2 & 3 \end{bmatrix} ,\, \begin{bmatrix} 2 & 6 \\ 1 & 4 \end{bmatrix} }\text{.} \end{equation*}
Check that \(B\) is a basis of \(M_{22}\) by first establishing the linear independence of \(B\) and then employing Theorem G to get the spanning property easily. Here is a second set of \(2\times 2\) matrices, which also forms a basis of \(M_{22}\) (Example BM),
\begin{equation*} C=\set{\vect{y}_1,\,\vect{y}_2,\,\vect{y}_3,\,\vect{y}_4} =\set{ \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} ,\, \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} ,\, \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} ,\, \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} }\text{.} \end{equation*}
We can build two matrix representations of \(T\text{,}\) one relative to \(B\) and one relative to \(C\text{.}\) Each is easy, but for wildly different reasons. In our computation of the matrix representation relative to \(B\) we borrow some of our work in Example ELTBM. Here are the representations, then the explanation. We have
\begin{align*} \vectrep{B}{\lteval{T}{\vect{x}_1}} &= \vectrep{B}{2\vect{x}_1} =\vectrep{B}{2\vect{x}_1+0\vect{x}_2+0\vect{x}_3+0\vect{x}_4} =\colvector{2\\0\\0\\0}\\ \vectrep{B}{\lteval{T}{\vect{x}_2}} &= \vectrep{B}{2\vect{x}_2} =\vectrep{B}{0\vect{x}_1+2\vect{x}_2+0\vect{x}_3+0\vect{x}_4} =\colvector{0\\2\\0\\0}\\ \vectrep{B}{\lteval{T}{\vect{x}_3}} &= \vectrep{B}{(-1)\vect{x}_3} =\vectrep{B}{0\vect{x}_1+0\vect{x}_2+(-1)\vect{x}_3+0\vect{x}_4} =\colvector{0\\0\\-1\\0}\\ \vectrep{B}{\lteval{T}{\vect{x}_4}} &= \vectrep{B}{(-2)\vect{x}_4} =\vectrep{B}{0\vect{x}_1+0\vect{x}_2+0\vect{x}_3+(-2)\vect{x}_4} =\colvector{0\\0\\0\\-2}\text{.} \end{align*}
So the resulting representation is
\begin{align*} \matrixrep{T}{B}{B} = \begin{bmatrix} 2 & 0 & 0 & 0\\ 0 & 2 & 0 & 0\\ 0 & 0 & -1 & 0\\ 0 & 0 & 0 & -2\\ \end{bmatrix}\text{.} \end{align*}
Very pretty.
Now, for the matrix representation relative to \(C\text{,}\) first compute,
\begin{align*} &\vectrep{C}{\lteval{T}{\vect{y}_1}} =\vectrep{C}{\begin{bmatrix}-17&-57\\-14&-41\end{bmatrix}}\\ &=\vectrep{C}{ (-17)\begin{bmatrix}1&0\\0&0\end{bmatrix}+ (-57)\begin{bmatrix}0&1\\0&0\end{bmatrix}+ (-14)\begin{bmatrix}0&0\\1&0\end{bmatrix}+ (-41)\begin{bmatrix}0&0\\0&1\end{bmatrix} } =\colvector{-17\\-57\\-14\\-41}\\ &\vectrep{C}{\lteval{T}{\vect{y}_2}} =\vectrep{C}{\begin{bmatrix}11&35\\10&25\end{bmatrix}}\\ &=\vectrep{C}{ 11\begin{bmatrix}1&0\\0&0\end{bmatrix}+ 35\begin{bmatrix}0&1\\0&0\end{bmatrix}+ 10\begin{bmatrix}0&0\\1&0\end{bmatrix}+ 25\begin{bmatrix}0&0\\0&1\end{bmatrix} } =\colvector{11\\35\\10\\25}\\ &\vectrep{C}{\lteval{T}{\vect{y}_3}} =\vectrep{C}{\begin{bmatrix}8&24\\6&16\end{bmatrix}}\\ &=\vectrep{C}{ 8\begin{bmatrix}1&0\\0&0\end{bmatrix}+ 24\begin{bmatrix}0&1\\0&0\end{bmatrix}+ 6\begin{bmatrix}0&0\\1&0\end{bmatrix}+ 16\begin{bmatrix}0&0\\0&1\end{bmatrix} } =\colvector{8\\24\\6\\16}\\ &\vectrep{C}{\lteval{T}{\vect{y}_4}} =\vectrep{C}{\begin{bmatrix}-11&-33\\-10&-23\end{bmatrix}}\\ &=\vectrep{C}{ (-11)\begin{bmatrix}1&0\\0&0\end{bmatrix}+ (-33)\begin{bmatrix}0&1\\0&0\end{bmatrix}+ (-10)\begin{bmatrix}0&0\\1&0\end{bmatrix}+ (-23)\begin{bmatrix}0&0\\0&1\end{bmatrix} } =\colvector{-11\\-33\\-10\\-23}\text{.} \end{align*}
So the resulting representation is
\begin{align*} \matrixrep{T}{C}{C} = \begin{bmatrix} -17 & 11 & 8 & -11 \\ -57 & 35 & 24 & -33 \\ -14 & 10 & 6 & -10 \\ -41 & 25 & 16 & -23 \end{bmatrix}\text{.} \end{align*}
Not quite as pretty.
The purpose of this example is to illustrate Theorem SCB. This theorem says that the two matrix representations, \(\matrixrep{T}{B}{B}\) and \(\matrixrep{T}{C}{C}\text{,}\) of the one linear transformation, \(T\text{,}\) are related by a similarity transformation using the change-of-basis matrix \(\cbm{B}{C}\text{.}\) Let us compute this change-of-basis matrix. Notice that since \(C\) is such a nice basis, this is fairly straightforward,
\begin{align*} \vectrep{C}{\vect{x}_1} &=\vectrep{C}{\begin{bmatrix}0 & 1 \\ 0 & 1\end{bmatrix}} =\vectrep{C}{ 0\begin{bmatrix}1&0\\0&0\end{bmatrix}+ 1\begin{bmatrix}0&1\\0&0\end{bmatrix}+ 0\begin{bmatrix}0&0\\1&0\end{bmatrix}+ 1\begin{bmatrix}0&0\\0&1\end{bmatrix} } =\colvector{0\\1\\0\\1}\\ \vectrep{C}{\vect{x}_2} &=\vectrep{C}{\begin{bmatrix}1 & 1 \\ 1 & 0\end{bmatrix}} =\vectrep{C}{ 1\begin{bmatrix}1&0\\0&0\end{bmatrix}+ 1\begin{bmatrix}0&1\\0&0\end{bmatrix}+ 1\begin{bmatrix}0&0\\1&0\end{bmatrix}+ 0\begin{bmatrix}0&0\\0&1\end{bmatrix} } =\colvector{1\\1\\1\\0}\\ \vectrep{C}{\vect{x}_3} &=\vectrep{C}{\begin{bmatrix}1 & 3 \\ 2 & 3\end{bmatrix}} =\vectrep{C}{ 1\begin{bmatrix}1&0\\0&0\end{bmatrix}+ 3\begin{bmatrix}0&1\\0&0\end{bmatrix}+ 2\begin{bmatrix}0&0\\1&0\end{bmatrix}+ 3\begin{bmatrix}0&0\\0&1\end{bmatrix} } =\colvector{1\\3\\2\\3}\\ \vectrep{C}{\vect{x}_4} &=\vectrep{C}{\begin{bmatrix}2 & 6 \\ 1 & 4\end{bmatrix}} =\vectrep{C}{ 2\begin{bmatrix}1&0\\0&0\end{bmatrix}+ 6\begin{bmatrix}0&1\\0&0\end{bmatrix}+ 1\begin{bmatrix}0&0\\1&0\end{bmatrix}+ 4\begin{bmatrix}0&0\\0&1\end{bmatrix} } =\colvector{2\\6\\1\\4}\text{.} \end{align*}
So we have,
\begin{equation*} \cbm{B}{C} = \begin{bmatrix} 0 & 1 & 1 & 2 \\ 1 & 1 & 3 & 6 \\ 0 & 1 & 2 & 1 \\ 1 & 0 & 3 & 4 \end{bmatrix}\text{.} \end{equation*}
Now, according to Theorem SCB we can write,
\begin{align*} \matrixrep{T}{B}{B}&=\inverse{\cbm{B}{C}}\matrixrep{T}{C}{C}\cbm{B}{C}\\ \begin{bmatrix} 2 & 0 & 0 & 0\\ 0 & 2 & 0 & 0\\ 0 & 0 & -1 & 0\\ 0 & 0 & 0 & -2\\ \end{bmatrix} &= \inverse{ \begin{bmatrix} 0 & 1 & 1 & 2 \\ 1 & 1 & 3 & 6 \\ 0 & 1 & 2 & 1 \\ 1 & 0 & 3 & 4 \end{bmatrix} } \begin{bmatrix} -17 & 11 & 8 & -11 \\ -57 & 35 & 24 & -33 \\ -14 & 10 & 6 & -10 \\ -41 & 25 & 16 & -23 \end{bmatrix} \begin{bmatrix} 0 & 1 & 1 & 2 \\ 1 & 1 & 3 & 6 \\ 0 & 1 & 2 & 1 \\ 1 & 0 & 3 & 4 \end{bmatrix}\text{.} \end{align*}
This should look and feel exactly like the process for diagonalizing a matrix, as was described in Section SD. And it is.
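As a quick Sage check, with the two matrices copied from above:

CB = matrix(QQ, [[0,1,1,2], [1,1,3,6], [0,1,2,1], [1,0,3,4]])
MTCC = matrix(QQ, [[-17,11,8,-11], [-57,35,24,-33], [-14,10,6,-10], [-41,25,16,-23]])
CB.inverse() * MTCC * CB      # the diagonal matrix with entries 2, 2, -1, -2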

Sage MRCB. Matrix Representation and Change-of-Basis.

In Sage MR we built two matrix representations of one linear transformation, relative to two different pairs of bases. We now understand how these two matrix representations are related — Theorem MRCB gives the precise relationship with change-of-basis matrices, one converting vector representations on the domain, the other converting vector representations on the codomain. Here is the demonstration. We use MT as the prefix of names for matrix representations, CB as the prefix for change-of-basis matrices, and numerals to distinguish the two domain-codomain pairs.
This is as far as we could go back in Section MR. These two matrices represent the same linear transformation (namely T_symbolic), but the question now is “how are these representations related?” We need two change-of-basis matrices. Notice that with different dimensions for the domain and codomain, we get square matrices of different sizes.
Finally, here is Theorem MRCB, relating the two matrix representations via the change-of-basis matrices.
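The cells from Sage MR are not reproduced here, so the following is a self-contained stand-in: the transformation T and all four bases below are small inventions of our own, with names following the conventions just described.

T = lambda v: vector(QQ, [v[0] + 2*v[1], 3*v[2]])   # a hypothetical linear function
D = [vector(QQ, [1,0,0]), vector(QQ, [1,1,0]), vector(QQ, [1,1,1])]
B = [vector(QQ, [1,1,0]), vector(QQ, [0,1,1]), vector(QQ, [1,0,1])]
C = [vector(QQ, [1,1]), vector(QQ, [1,2])]
E = [vector(QQ, [1,0]), vector(QQ, [1,1])]
UD = (QQ^3).subspace_with_basis(D)    # domain with basis D
UB = (QQ^3).subspace_with_basis(B)    # domain with basis B
VC = (QQ^2).subspace_with_basis(C)    # codomain with basis C
VE = (QQ^2).subspace_with_basis(E)    # codomain with basis E
MTDE = linear_transformation(UD, VE, T).matrix(side='right')
MTBC = linear_transformation(UB, VC, T).matrix(side='right')
CBDB = linear_transformation(UD, UB, lambda v: v).matrix(side='right')
CBCE = linear_transformation(VC, VE, lambda v: v).matrix(side='right')
MTDE == CBCE * MTBC * CBDB            # Theorem MRCB: True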
We can walk through this theorem just a bit more carefully, step-by-step. We will compute three matrix-vector products, using three vector representations, to demonstrate the equality above. To prepare, we choose the vector x arbitrarily, and we compute its value when evaluated by T_symbolic, and then verify the vector and matrix representations relative to D and E.
So far this is not really new; we have just verified the representation MTDE in the case of one input vector (x). But now we will use the alternate version of this matrix representation, CBCE * MTBC * CBDB, in steps.
First, convert the input vector from a representation relative to D to a representation relative to B.
Now apply the matrix representation, which expects “input” coordinatized relative to B and produces “output” coordinatized relative to C.
Now convert the output vector from a representation relative to C to a representation relative to E.
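Consolidating the three steps in our stand-in sketch:

x = vector(QQ, [2, 0, 1])             # an arbitrarily chosen input vector
u_D = UD.coordinate_vector(x)         # input coordinatized relative to D
u_B = CBDB * u_D                      # step 1: convert to a representation relative to B
v_C = MTBC * u_B                      # step 2: apply the matrix representation
v_E = CBCE * v_C                      # step 3: convert to a representation relative to E
v_E == VE.coordinate_vector(T(x))     # agrees with the direct computation: True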
It is no surprise that this version of v_E equals the previous one, since we have checked the equality of the matrices earlier. But it may be instructive to see the input converted by change-of-basis matrices before and after being hit by the linear transformation (as a matrix representation).
Now we will perform another example, but this time using Sage endomorphisms, linear transformations with equal bases for the domain and codomain. This will allow us to illustrate Theorem SCB. Just for fun, we will do something large. Notice the labor-saving device for manufacturing many symbolic variables at once.
Not very interesting, and perhaps even transparent, with a definition from a matrix and with the standard basis attached to V1 == QQ^11. Let us use a different basis to obtain a more interesting representation. We will input the basis compactly as the columns of a nonsingular matrix.
Well, now that is interesting! What a nice representation. Of course, it is all due to the choice of the basis (which we have not explained). To explain the relationship between the two matrix representations, we need a change-of-basis matrix, and its inverse. Theorem SCB says we need the matrix that converts vector representations relative to B into vector representations relative to C.
OK, all set.
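The eleven-dimensional cells are not reproduced here; a small stand-in of our own (a 2 x 2 matrix MS and a basis of its eigenvectors) illustrates the same computation.

MS = matrix(QQ, [[4, 1], [2, 3]])                   # "messy", relative to the standard basis
evecs = [vector(QQ, [1, 1]), vector(QQ, [1, -2])]   # eigenvectors for 5 and 2
V2 = (QQ^2).subspace_with_basis(evecs)              # the nicer basis B
MB = linear_transformation(V2, V2, lambda v: MS*v).matrix(side='right')
CB = linear_transformation(V2, QQ^2, lambda v: v).matrix(side='right')
CB.inverse() * MS * CB                              # diagonal, equal to MB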
Which is MB. So the conversion from a “messy” matrix representation relative to a standard basis to a “clean” representation relative to some other basis is just a similarity transformation by a change-of-basis matrix. Oh, I almost forgot. Where did that basis come from? Hint: find a description of “Jordan Canonical Form”, perhaps in our Second Course in Linear Algebra.
We can now return to the question of computing an eigenvalue or eigenvector of a linear transformation. For a linear transformation of the form \(\ltdefn{T}{V}{V}\text{,}\) we know that representations relative to different bases are similar matrices. We also know that similar matrices have equal characteristic polynomials by Theorem SMEE. We will now show that eigenvalues of a linear transformation \(T\) are precisely the eigenvalues of any matrix representation of \(T\text{.}\) Since the choice of a different matrix representation leads to a similar matrix, there will be no “new” eigenvalues obtained from this second representation. Similarly, the change-of-basis matrix can be used to show that eigenvectors obtained from one matrix representation will be precisely those obtained from any other representation. So we can determine the eigenvalues and eigenvectors of a linear transformation by forming one matrix representation, using any basis we please, and analyzing the matrix in the manner of Chapter E.

Theorem EER. Eigenvalues, Eigenvectors, Representations.

Suppose that \(\ltdefn{T}{V}{V}\) is a linear transformation and \(B\) is a basis of \(V\text{.}\) Then \(\vect{v}\in V\) is an eigenvector of \(T\) for the eigenvalue \(\lambda\) if and only if \(\vectrep{B}{\vect{v}}\) is an eigenvector of \(\matrixrep{T}{B}{B}\) for the eigenvalue \(\lambda\text{.}\)

Proof.

(⇒) 
Assume that \(\vect{v}\in V\) is an eigenvector of \(T\) for the eigenvalue \(\lambda\text{.}\) Then
\begin{align*} \matrixrep{T}{B}{B}\vectrep{B}{\vect{v}} &=\vectrep{B}{\lteval{T}{\vect{v}}}&& \knowl{./knowl/xref/theorem-FTMR.html}{\text{Theorem FTMR}}\\ &=\vectrep{B}{\lambda\vect{v}}&& \knowl{./knowl/xref/definition-EELT.html}{\text{Definition EELT}}\\ &=\lambda\vectrep{B}{\vect{v}}&& \knowl{./knowl/xref/theorem-VRLT.html}{\text{Theorem VRLT}} \end{align*}
which by Definition EEM says that \(\vectrep{B}{\vect{v}}\) is an eigenvector of the matrix \(\matrixrep{T}{B}{B}\) for the eigenvalue \(\lambda\text{.}\)
(⇐) 
Assume that \(\vectrep{B}{\vect{v}}\) is an eigenvector of \(\matrixrep{T}{B}{B}\) for the eigenvalue \(\lambda\text{.}\) Then
\begin{align*} \lteval{T}{\vect{v}} &=\vectrepinv{B}{\vectrep{B}{\lteval{T}{\vect{v}}}}&& \knowl{./knowl/xref/definition-IVLT.html}{\text{Definition IVLT}}\\ &=\vectrepinv{B}{\matrixrep{T}{B}{B}\vectrep{B}{\vect{v}}}&& \knowl{./knowl/xref/theorem-FTMR.html}{\text{Theorem FTMR}}\\ &=\vectrepinv{B}{\lambda\vectrep{B}{\vect{v}}}&& \knowl{./knowl/xref/definition-EEM.html}{\text{Definition EEM}}\\ &=\lambda\vectrepinv{B}{\vectrep{B}{\vect{v}}}&& \knowl{./knowl/xref/theorem-ILTLT.html}{\text{Theorem ILTLT}}\\ &=\lambda\vect{v}&& \knowl{./knowl/xref/definition-IVLT.html}{\text{Definition IVLT}} \end{align*}
which by Definition EELT says \(\vect{v}\) is an eigenvector of \(T\) for the eigenvalue \(\lambda\text{.}\)

Subsection CELT Computing Eigenvectors of Linear Transformations

Theorem EER tells us that the eigenvalues of a linear transformation are the eigenvalues of any representation, no matter what the choice of the basis \(B\) might be. So we could now unambiguously define items such as the characteristic polynomial of a linear transformation, which we would define as the characteristic polynomial of any matrix representation. We will say that again — eigenvalues, eigenvectors, and characteristic polynomials are intrinsic properties of a linear transformation, independent of the choice of a basis used to construct a matrix representation.
As a practical matter, how does one compute the eigenvalues and eigenvectors of a linear transformation of the form \(\ltdefn{T}{V}{V}\text{?}\) Choose a nice basis \(B\) for \(V\text{,}\) one where the vector representations of the values of the linear transformation necessary for the matrix representation are easy to compute. Construct the matrix representation relative to this basis, and find the eigenvalues and eigenvectors of this matrix using the techniques of Chapter E. The resulting eigenvalues of the matrix are precisely the eigenvalues of the linear transformation. The eigenvectors of the matrix are column vectors that need to be converted to vectors in \(V\) through application of \(\ltinverse{\vectrepname{B}}\) (this is part of the content of Theorem EER).
Now consider the case where the matrix representation of a linear transformation is diagonalizable. The \(n\) linearly independent eigenvectors that must exist for the matrix (Theorem DC) can be converted (via \(\ltinverse{\vectrepname{B}}\)) into eigenvectors of the linear transformation. A matrix representation of the linear transformation relative to a basis of eigenvectors will be a diagonal matrix — an especially nice representation! Though we did not know it at the time, the diagonalizations of Section SD were really about finding especially pleasing matrix representations of linear transformations.
Here are some examples.

Example ELTT. Eigenvectors of a linear transformation, twice.

Consider the linear transformation \(\ltdefn{S}{M_{22}}{M_{22}}\) defined by
\begin{equation*} \lteval{S}{\begin{bmatrix}a&b\\c&d\end{bmatrix}}= \begin{bmatrix} -b - c - 3d & -14a - 15b - 13c + d\\ 18a + 21b + 19c + 3d & -6a - 7b - 7c - 3d \end{bmatrix}\text{.} \end{equation*}
To find the eigenvalues and eigenvectors of \(S\) we will build a matrix representation and analyze the matrix. Since Theorem EER places no restriction on the choice of the basis \(B\text{,}\) we may as well use a basis that is easy to work with. So set
\begin{equation*} B=\set{\vect{x}_1,\,\vect{x}_2,\,\vect{x}_3,\,\vect{x}_4} =\set{ \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} ,\, \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} ,\, \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} ,\, \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} }\text{.} \end{equation*}
Then to build the matrix representation of \(S\) relative to \(B\) compute,
\begin{align*} \vectrep{B}{\lteval{S}{\vect{x}_1}}&= \vectrep{B}{\begin{bmatrix}0 & -14 \\ 18 & -6\end{bmatrix}}\\ &=\vectrep{B}{0\vect{x}_1+(-14)\vect{x}_2+18\vect{x}_3+(-6)\vect{x}_4}= \colvector{0\\-14\\18\\-6}\\ \vectrep{B}{\lteval{S}{\vect{x}_2}}&= \vectrep{B}{\begin{bmatrix}-1 & -15\\21 & -7\end{bmatrix}}\\ &=\vectrep{B}{(-1)\vect{x}_1+(-15)\vect{x}_2+21\vect{x}_3+(-7)\vect{x}_4}= \colvector{-1\\-15\\21\\-7}\\ \vectrep{B}{\lteval{S}{\vect{x}_3}}&= \vectrep{B}{\begin{bmatrix}-1 & -13\\19 & -7\end{bmatrix}}\\ &=\vectrep{B}{(-1)\vect{x}_1+(-13)\vect{x}_2+19\vect{x}_3+(-7)\vect{x}_4}= \colvector{-1\\-13\\19\\-7}\\ \vectrep{B}{\lteval{S}{\vect{x}_4}}&= \vectrep{B}{\begin{bmatrix}-3 & 1\\3 & -3\end{bmatrix}}\\ &=\vectrep{B}{(-3)\vect{x}_1+1\vect{x}_2+3\vect{x}_3+(-3)\vect{x}_4}= \colvector{-3\\1\\3\\-3}\text{.} \end{align*}
So by Definition MR we have
\begin{equation*} M=\matrixrep{S}{B}{B}= \begin{bmatrix} 0 & -1 & -1 & -3 \\ -14 & -15 & -13 & 1 \\ 18 & 21 & 19 & 3 \\ -6 & -7 & -7 & -3 \end{bmatrix}\text{.} \end{equation*}
Now compute eigenvalues and eigenvectors of the matrix representation \(M\) with the techniques of Section EE. First the characteristic polynomial,
\begin{equation*} \charpoly{M}{x}=\detname{M-xI_4}=x^4-x^3-10 x^2+4 x+24=(x-3) (x-2) (x+2)^2\text{.} \end{equation*}
We could now make statements about the eigenvalues of \(M\text{,}\) but in light of Theorem EER we can refer to the eigenvalues of \(S\) and mildly abuse (or extend) our notation for multiplicities to write
\begin{align*} \algmult{S}{3}&=1 & \algmult{S}{2}&=1 & \algmult{S}{-2}&=2\text{.} \end{align*}
Now compute the eigenvectors of \(M\text{,}\)
\begin{align*} \lambda&=3&M-3I_4&= \begin{bmatrix} -3 & -1 & -1 & -3 \\ -14 & -18 & -13 & 1 \\ 18 & 21 & 16 & 3 \\ -6 & -7 & -7 & -6 \end{bmatrix} \rref \begin{bmatrix} \leading{1} & 0 & 0 & 1 \\ 0 & \leading{1} & 0 & -3 \\ 0 & 0 & \leading{1} & 3 \\ 0 & 0 & 0 & 0 \end{bmatrix}\\ &&\eigenspace{M}{3}&=\nsp{M-3I_4} =\spn{\set{\colvector{-1\\3\\-3\\1}}} \end{align*}
\begin{align*} \lambda&=2&M-2I_4&= \begin{bmatrix} -2 & -1 & -1 & -3 \\ -14 & -17 & -13 & 1 \\ 18 & 21 & 17 & 3 \\ -6 & -7 & -7 & -5 \end{bmatrix} \rref \begin{bmatrix} \leading{1} & 0 & 0 & 2 \\ 0 & \leading{1} & 0 & -4 \\ 0 & 0 & \leading{1} & 3 \\ 0 & 0 & 0 & 0 \end{bmatrix}\\ &&\eigenspace{M}{2}&=\nsp{M-2I_4} =\spn{\set{\colvector{-2\\4\\-3\\1}}} \end{align*}
\begin{align*} \lambda&=-2&M-(-2)I_4&= \begin{bmatrix} 2 & -1 & -1 & -3 \\ -14 & -13 & -13 & 1 \\ 18 & 21 & 21 & 3 \\ -6 & -7 & -7 & -1 \end{bmatrix} \rref \begin{bmatrix} \leading{1} & 0 & 0 & -1 \\ 0 & \leading{1} & 1 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}\\ &&\eigenspace{M}{-2}&=\nsp{M-(-2)I_4} =\spn{\set{\colvector{0\\-1\\1\\0},\,\colvector{1\\-1\\0\\1}}}\text{.} \end{align*}
According to Theorem EER the eigenvectors just listed as basis vectors for the eigenspaces of \(M\) are vector representations (relative to \(B\)) of eigenvectors for \(S\text{.}\) So the application of the inverse function \(\vectrepinvname{B}\) will convert these column vectors into elements of the vector space \(M_{22}\) (\(2\times 2\) matrices) that are eigenvectors of \(S\text{.}\) Since \(\vectrepname{B}\) is an isomorphism (Theorem VRILT), so is \(\vectrepinvname{B}\text{.}\) Applying the inverse function will then preserve linear independence and spanning properties, so with a sweeping application of The Coordinatization Principle and some extensions of our previous notation for eigenspaces and geometric multiplicities, we can write,
\begin{align*} \vectrepinv{B}{\colvector{-1\\3\\-3\\1}} &= (-1)\vect{x}_1+3\vect{x}_2+(-3)\vect{x}_3+1\vect{x}_4= \begin{bmatrix}-1 & 3\\-3 & 1\end{bmatrix}\\ \vectrepinv{B}{\colvector{-2\\4\\-3\\1}} &= (-2)\vect{x}_1+4\vect{x}_2+(-3)\vect{x}_3+1\vect{x}_4= \begin{bmatrix}-2 & 4\\-3 & 1\end{bmatrix}\\ \vectrepinv{B}{\colvector{0\\-1\\1\\0}} &= 0\vect{x}_1+(-1)\vect{x}_2+1\vect{x}_3+0\vect{x}_4= \begin{bmatrix}0 & -1\\1 & 0\end{bmatrix}\\ \vectrepinv{B}{\colvector{1\\-1\\0\\1}} &= 1\vect{x}_1+(-1)\vect{x}_2+0\vect{x}_3+1\vect{x}_4= \begin{bmatrix}1 & -1\\0 & 1\end{bmatrix}\text{.} \end{align*}
So
\begin{align*} \eigenspace{S}{3}&= \spn{\set{\begin{bmatrix}-1 & 3\\-3 & 1\end{bmatrix}}}\\ \eigenspace{S}{2}&= \spn{\set{\begin{bmatrix}-2 & 4\\-3 & 1\end{bmatrix}}}\\ \eigenspace{S}{-2}&= \spn{\set{\begin{bmatrix}0 & -1\\1 & 0\end{bmatrix},\,\begin{bmatrix}1 & -1\\0 & 1\end{bmatrix}}} \end{align*}
with geometric multiplicities given by
\begin{align*} \geomult{S}{3}&=1 & \geomult{S}{2}&=1 & \geomult{S}{-2}&=2\text{.} \end{align*}
Suppose we now decided to build another matrix representation of \(S\text{,}\) only now relative to a linearly independent set of eigenvectors of \(S\text{,}\) such as
\begin{equation*} C= \set{ \begin{bmatrix}-1 & 3\\-3 & 1\end{bmatrix},\, \begin{bmatrix}-2 & 4\\-3 & 1\end{bmatrix},\, \begin{bmatrix}0 & -1\\1 & 0\end{bmatrix},\, \begin{bmatrix}1 & -1\\0 & 1\end{bmatrix} }\text{.} \end{equation*}
At this point you should have computed enough matrix representations to predict that the result of representing \(S\) relative to \(C\) will be a diagonal matrix. Computing this representation is an example of how Theorem SCB generalizes the diagonalizations from Section SD. For the record, here is the diagonal representation,
\begin{equation*} \matrixrep{S}{C}{C} = \begin{bmatrix} 3 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & -2 & 0 \\ 0 & 0 & 0 & -2 \end{bmatrix}\text{.} \end{equation*}
Our interest in this example is not necessarily building nice representations, but instead we want to demonstrate how eigenvalues and eigenvectors are an intrinsic property of a linear transformation, independent of any particular representation. To this end, we will repeat the foregoing, but replace \(B\) by another basis. We will make this basis different, but not extremely so,
\begin{equation*} D=\set{\vect{y}_1,\,\vect{y}_2,\,\vect{y}_3,\,\vect{y}_4} =\set{ \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} ,\, \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix} ,\, \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} ,\, \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} }\text{.} \end{equation*}
Then to build the matrix representation of \(S\) relative to \(D\) compute,
\begin{align*} \vectrep{D}{\lteval{S}{\vect{y}_1}}&= \vectrep{D}{\begin{bmatrix}0 & -14\\18 & -6\end{bmatrix}}\\ &=\vectrep{D}{14\vect{y}_1+(-32)\vect{y}_2+24\vect{y}_3+(-6)\vect{y}_4}= \colvector{14\\-32\\24\\-6}\\ \vectrep{D}{\lteval{S}{\vect{y}_2}}&= \vectrep{D}{\begin{bmatrix}-1 & -29 \\ 39 & -13\end{bmatrix}}\\ &=\vectrep{D}{28\vect{y}_1+(-68)\vect{y}_2+52\vect{y}_3+(-13)\vect{y}_4}= \colvector{28\\-68\\52\\-13}\\ \vectrep{D}{\lteval{S}{\vect{y}_3}}&= \vectrep{D}{\begin{bmatrix}-2 & -42 \\ 58 & -20\end{bmatrix}}\\ &=\vectrep{D}{40\vect{y}_1+(-100)\vect{y}_2+78\vect{y}_3+(-20)\vect{y}_4}= \colvector{40\\-100\\78\\-20}\\ \vectrep{D}{\lteval{S}{\vect{y}_4}}&= \vectrep{D}{\begin{bmatrix}-5 & -41 \\ 61 & -23\end{bmatrix}}\\ &=\vectrep{D}{36\vect{y}_1+(-102)\vect{y}_2+84\vect{y}_3+(-23)\vect{y}_4}= \colvector{36\\-102\\84\\-23}\text{.} \end{align*}
So by Definition MR we have
\begin{equation*} N=\matrixrep{S}{D}{D}= \begin{bmatrix} 14 & 28 & 40 & 36 \\ -32 & -68 & -100 & -102 \\ 24 & 52 & 78 & 84 \\ -6 & -13 & -20 & -23 \end{bmatrix}\text{.} \end{equation*}
Now compute eigenvalues and eigenvectors of the matrix representation \(N\) with the techniques of Section EE. First the characteristic polynomial,
\begin{equation*} \charpoly{N}{x}=\detname{N-xI_4}=x^4-x^3-10 x^2+4 x+24=(x-3) (x-2) (x+2)^2\text{.} \end{equation*}
Of course this is not news. We now know that \(M=\matrixrep{S}{B}{B}\) and \(N=\matrixrep{S}{D}{D}\) are similar matrices (Theorem SCB). But Theorem SMEE told us long ago that similar matrices have identical characteristic polynomials. Now compute eigenvectors for the matrix representation, which will be different than what we found for \(M\text{,}\)
\begin{align*} \lambda&=3&N-3I_4&= \begin{bmatrix} 11 & 28 & 40 & 36 \\ -32 & -71 & -100 & -102 \\ 24 & 52 & 75 & 84 \\ -6 & -13 & -20 & -26 \end{bmatrix} \rref \begin{bmatrix} \leading{1} & 0 & 0 & 4 \\ 0 & \leading{1} & 0 & -6 \\ 0 & 0 & \leading{1} & 4 \\ 0 & 0 & 0 & 0 \end{bmatrix}\\ &&\eigenspace{N}{3}&=\nsp{N-3I_4} =\spn{\set{\colvector{-4\\6\\-4\\1}}} \end{align*}
\begin{align*} \lambda&=2&N-2I_4&= \begin{bmatrix} 12 & 28 & 40 & 36 \\ -32 & -70 & -100 & -102 \\ 24 & 52 & 76 & 84 \\ -6 & -13 & -20 & -25 \end{bmatrix} \rref \begin{bmatrix} \leading{1} & 0 & 0 & 6 \\ 0 & \leading{1} & 0 & -7 \\ 0 & 0 & \leading{1} & 4 \\ 0 & 0 & 0 & 0 \end{bmatrix}\\ &&\eigenspace{N}{2}&=\nsp{N-2I_4} =\spn{\set{\colvector{-6\\7\\-4\\1}}} \end{align*}
\begin{align*} \lambda&=-2&N-(-2)I_4&= \begin{bmatrix} 16 & 28 & 40 & 36 \\ -32 & -66 & -100 & -102 \\ 24 & 52 & 80 & 84 \\ -6 & -13 & -20 & -21 \end{bmatrix} \rref \begin{bmatrix} \leading{1} & 0 & -1 & -3 \\ 0 & \leading{1} & 2 & 3 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}\\ &&\eigenspace{N}{-2}&=\nsp{N-(-2)I_4} =\spn{\set{\colvector{1\\-2\\1\\0},\,\colvector{3\\-3\\0\\1}}}\text{.} \end{align*}
Employing Theorem EER we can apply \(\vectrepinvname{D}\) to each of the basis vectors of the eigenspaces of \(N\) to obtain eigenvectors for \(S\) that also form bases for eigenspaces of \(S\text{,}\)
\begin{align*} \vectrepinv{D}{\colvector{-4\\6\\-4\\1}} &= (-4)\vect{y}_1+6\vect{y}_2+(-4)\vect{y}_3+1\vect{y}_4= \begin{bmatrix}-1 & 3\\-3 & 1\end{bmatrix}\\ \vectrepinv{D}{\colvector{-6\\7\\-4\\1}} &= (-6)\vect{y}_1+7\vect{y}_2+(-4)\vect{y}_3+1\vect{y}_4= \begin{bmatrix}-2 & 4\\-3 & 1\end{bmatrix}\\ \vectrepinv{D}{\colvector{1\\-2\\1\\0}} &= 1\vect{y}_1+(-2)\vect{y}_2+1\vect{y}_3+0\vect{y}_4= \begin{bmatrix}0 & -1\\1 & 0\end{bmatrix}\\ \vectrepinv{D}{\colvector{3\\-3\\0\\1}} &= 3\vect{y}_1+(-3)\vect{y}_2+0\vect{y}_3+1\vect{y}_4= \begin{bmatrix}1 & -2\\1 & 1\end{bmatrix}\text{.} \end{align*}
The eigenspaces for the eigenvalues of algebraic multiplicity 1 are exactly as before,
\begin{align*} \eigenspace{S}{3}&= \spn{\set{\begin{bmatrix}-1 & 3\\-3 & 1\end{bmatrix}}}\\ \eigenspace{S}{2}&= \spn{\set{\begin{bmatrix}-2 & 4\\-3 & 1\end{bmatrix}}}\text{.} \end{align*}
However, the eigenspace for \(\lambda=-2\) would at first glance appear to be different. Here are the two eigenspaces for \(\lambda=-2\text{,}\) first the eigenspace obtained from \(M=\matrixrep{S}{B}{B}\text{,}\) then followed by the eigenspace obtained from \(N=\matrixrep{S}{D}{D}\text{.}\) We have
\begin{align*} \eigenspace{S}{-2}&= \spn{\set{\begin{bmatrix}0 & -1\\1 & 0\end{bmatrix},\,\begin{bmatrix}1 & -1\\0 & 1\end{bmatrix}}} & \eigenspace{S}{-2}&= \spn{\set{\begin{bmatrix}0 & -1\\1 & 0\end{bmatrix},\,\begin{bmatrix}1 & -2\\1 & 1\end{bmatrix}}}\text{.} \end{align*}
Subspaces generally have many bases, and that is the situation here. With a careful proof of set equality, you can show that these two eigenspaces are equal sets. The key observation to make such a proof go is that
\begin{equation*} \begin{bmatrix}1 & -2\\1 & 1\end{bmatrix} = \begin{bmatrix}0 & -1\\1 & 0\end{bmatrix}+\begin{bmatrix}1 & -1\\0 & 1\end{bmatrix} \end{equation*}
which will establish that the second set is a subset of the first. With equal dimensions, Theorem EDYES will finish the task.
So the eigenvalues of a linear transformation are independent of the matrix representation employed to compute them!
Another example, this time a bit larger and with complex eigenvalues.

Example CELT. Complex eigenvectors of a linear transformation.

Consider the linear transformation \(\ltdefn{Q}{P_4}{P_4}\) defined by
\begin{align*} &\lteval{Q}{a+bx+cx^2+dx^3+ex^4}\\ &=(-46a-22b+13c+5d+e)+(117a+57b-32c-15d-4e) x+\\ &\quad\quad (-69a-29b+21c-7e)x^2+(159a+73b-44c-13d+2e)x^3+\\ &\quad\quad (-195a-87b+55c+10d-13e)x^4\text{.} \end{align*}
Choose a simple basis to compute with, say
\begin{equation*} B=\set{1,\,x,\,x^2,\,x^3,\,x^4}\text{.} \end{equation*}
Then it should be apparent that the matrix representation of \(Q\) relative to \(B\) is
\begin{equation*} M=\matrixrep{Q}{B}{B}= \begin{bmatrix} -46 & -22 & 13 & 5 & 1 \\ 117 & 57 & -32 & -15 & -4 \\ -69 & -29 & 21 & 0 & -7 \\ 159 & 73 & -44 & -13 & 2 \\ -195 & -87 & 55 & 10 & -13 \end{bmatrix}\text{.} \end{equation*}
Compute the characteristic polynomial, eigenvalues and eigenvectors according to the techniques of Section EE,
\begin{align*} \charpoly{Q}{x} &=-x^5+6 x^4-x^3-88 x^2+252 x-208\\ &=-(x-2)^2 (x+4) \left(x^2-6x+13\right)\\ &=-(x-2)^2 (x+4) \left(x-(3+2i)\right) \left(x-(3-2i)\right) \end{align*}
\begin{align*} \algmult{Q}{2}&=2 & \algmult{Q}{-4}&=1 & \algmult{Q}{3+2i}&=1 & \algmult{Q}{3-2i}&=1 \end{align*}
\begin{align*} \lambda&=2\\ M-(2)I_5&= \begin{bmatrix} -48 & -22 & 13 & 5 & 1 \\ 117 & 55 & -32 & -15 & -4 \\ -69 & -29 & 19 & 0 & -7 \\ 159 & 73 & -44 & -15 & 2 \\ -195 & -87 & 55 & 10 & -15 \end{bmatrix} \rref \begin{bmatrix} 1 & 0 & 0 & \frac{1}{2} & -\frac{1}{2} \\ 0 & 1 & 0 & -\frac{5}{2} & -\frac{5}{2} \\ 0 & 0 & 1 & -2 & -6 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}\\ \eigenspace{M}{2}&=\nsp{M-(2)I_5} =\spn{\set{ \colvector{-\frac{1}{2}\\\frac{5}{2}\\2\\1\\0},\, \colvector{\frac{1}{2}\\\frac{5}{2}\\6\\0\\1} }} =\spn{\set{ \colvector{-1\\5\\4\\2\\0},\, \colvector{1\\5\\12\\0\\2} }} \end{align*}
\begin{align*} \lambda&=-4\\ M-(-4)I_5&= \begin{bmatrix} -42 & -22 & 13 & 5 & 1 \\ 117 & 61 & -32 & -15 & -4 \\ -69 & -29 & 25 & 0 & -7 \\ 159 & 73 & -44 & -9 & 2 \\ -195 & -87 & 55 & 10 & -9 \end{bmatrix} \rref \begin{bmatrix} 1 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & -3 \\ 0 & 0 & 1 & 0 & -1 \\ 0 & 0 & 0 & 1 & -2 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}\\ \eigenspace{M}{-4}&=\nsp{M-(-4)I_5} =\spn{\set{\colvector{-1\\3\\1\\2\\1}}} \end{align*}
\begin{align*} \lambda&=3+2i\\ M-(3+2i)I_5&= \begin{bmatrix} -49-2 i & -22 & 13 & 5 & 1 \\ 117 & 54-2 i & -32 & -15 & -4\\ -69 & -29 & 18-2 i & 0 & -7 \\ 159 & 73 & -44 & -16-2 i & 2 \\ -195 & -87 & 55 & 10 & -16-2 i \end{bmatrix}\\ &\quad\quad\rref \begin{bmatrix} 1 & 0 & 0 & 0 & -\frac{3}{4}+\frac{i}{4} \\ 0 & 1 & 0 & 0 & \frac{7}{4}-\frac{i}{4} \\ 0 & 0 & 1 & 0 & -\frac{1}{2}+\frac{i}{2} \\ 0 & 0 & 0 & 1 & \frac{7}{4}-\frac{i}{4} \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}\\ \eigenspace{M}{3+2i}&=\nsp{M-(3+2i)I_5} =\spn{\set{\colvector{\frac{3}{4}-\frac{i}{4} \\ -\frac{7}{4}+\frac{i}{4} \\ \frac{1}{2}-\frac{i}{2} \\ -\frac{7}{4}+\frac{i}{4} \\ 1}}} =\spn{\set{\colvector{3-i\\-7+i\\2-2i\\-7+i\\4}}} \end{align*}
\begin{align*} \lambda&=3-2i\\ M-(3-2i)I_5&= \begin{bmatrix} -49+2 i & -22 & 13 & 5 & 1 \\ 117 & 54+2 i & -32 & -15 & -4 \\ -69 & -29 & 18+2 i & 0 & -7 \\ 159 & 73 & -44 & -16+2 i & 2 \\ -195 & -87 & 55 & 10 & -16+2 i \end{bmatrix}\\ &\quad\quad\rref \begin{bmatrix} 1 & 0 & 0 & 0 & -\frac{3}{4}-\frac{i}{4} \\ 0 & 1 & 0 & 0 & \frac{7}{4}+\frac{i}{4} \\ 0 & 0 & 1 & 0 & -\frac{1}{2}-\frac{i}{2} \\ 0 & 0 & 0 & 1 & \frac{7}{4}+\frac{i}{4} \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}\\ \eigenspace{M}{3-2i}&=\nsp{M-(3-2i)I_5} =\spn{\set{\colvector{\frac{3}{4}+\frac{i}{4} \\ -\frac{7}{4}-\frac{i}{4} \\ \frac{1}{2}+\frac{i}{2} \\ -\frac{7}{4}-\frac{i}{4} \\ 1}}} =\spn{\set{\colvector{3+i\\-7-i\\2+2i\\-7-i\\4}}}\text{.} \end{align*}
It is straightforward to convert each of these basis vectors for eigenspaces of \(M\) back to elements of \(P_4\) by applying the isomorphism \(\vectrepinvname{B}\text{,}\)
\begin{align*} \vectrepinv{B}{\colvector{-1\\5\\4\\2\\0}}&=-1+5x+4x^2+2x^3\\ \vectrepinv{B}{\colvector{1\\5\\12\\0\\2}}&=1+5x+12x^2+2x^4\\ \vectrepinv{B}{\colvector{-1\\3\\1\\2\\1}}&=-1+3x+x^2+2x^3+x^4\\ \vectrepinv{B}{\colvector{3-i\\-7+i\\2-2i\\-7+i\\4}}&=(3-i)+(-7+i)x+(2-2i)x^2+(-7+i)x^3+4x^4\\ \vectrepinv{B}{\colvector{3+i\\-7-i\\2+2i\\-7-i\\4}}&=(3+i)+(-7-i)x+(2+2i)x^2+(-7-i)x^3+4x^4\text{.} \end{align*}
So we apply Theorem EER and The Coordinatization Principle to get the eigenspaces for \(Q\text{,}\)
\begin{align*} \eigenspace{Q}{2}&=\spn{\set{-1+5x+4x^2+2x^3,\,1+5x+12x^2+2x^4}}\\ \eigenspace{Q}{-4}&=\spn{\set{-1+3x+x^2+2x^3+x^4}}\\ \eigenspace{Q}{3+2i}&=\spn{\set{(3-i)+(-7+i)x+(2-2i)x^2+(-7+i)x^3+4x^4}}\\ \eigenspace{Q}{3-2i}&=\spn{\set{(3+i)+(-7-i)x+(2+2i)x^2+(-7-i)x^3+4x^4}} \end{align*}
with geometric multiplicities
\begin{align*} \geomult{Q}{2}&=2 & \geomult{Q}{-4}&=1 & \geomult{Q}{3+2i}&=1 & \geomult{Q}{3-2i}&=1\text{.} \end{align*}

Sage CELT. Designing Matrix Representations.

How do we find the eigenvectors of a linear transformation? How do we find pleasing (or computationally simple) matrix representations of linear transformations? Theorem EER and Theorem SCB applied in the context of Theorem DC can answer both questions. Here is an example.
Now we compute the eigenvalues and eigenvectors of M1. Since M1 is diagonalizable, we can find a basis of eigenvectors for use as the basis for a new representation.
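The original cells are not shown here; a compact stand-in, with a diagonalizable matrix of our own choosing, proceeds like this.

A = matrix(QQ, [[1, 4], [2, 3]])
T1 = linear_transformation(QQ^2, QQ^2, A, side='right')
M1 = T1.matrix(side='right')
M1.eigenvalues()                                     # 5 and -1, so M1 is diagonalizable
evecs = [vector(QQ, [1, 1]), vector(QQ, [2, -1])]    # eigenvectors for 5 and -1
B = (QQ^2).subspace_with_basis(evecs)                # basis of eigenvectors
T2 = linear_transformation(B, B, lambda v: A*v)
M2 = T2.matrix(side='right')                         # diagonal, entries 5 and -1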
The eigenvectors that are the basis elements in B are the eigenvectors of the linear transformation, relative to the standard basis. For different representations the eigenvectors take different forms, relative to other bases. What are the eigenvectors of the matrix representation M2?
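In the stand-in:

M2.eigenvectors_right()                               # eigenvectors are the standard unit vectors
sorted(T1.eigenvalues()) == sorted(T2.eigenvalues())  # True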
Notice that the eigenvalues of the linear transformation are totally independent of the representation. So in a sense, they are an inherent property of the linear transformation.
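A one-line confirmation in the sketch: similar matrices have equal characteristic polynomials, so the two representations share their eigenvalues.

M1.charpoly() == M2.charpoly()   # True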
You should be able to use these same techniques with linear transformations on abstract vector spaces: just coordinatize, mentally converting elements of the abstract vector space back and forth with column vectors of the right size.

Sage SUTH4. Sage Under The Hood, Round 4.

We finally have enough theorems to understand how Sage creates and manages linear transformations. With a choice of bases for the domain and codomain, a linear transformation can be represented by a matrix. Every interesting property of the linear transformation can be computed from the matrix representation, and we can convert between representations (of vectors and linear transformations) with change-of-basis matrices, similarity and matrix multiplication.
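As a small, self-contained illustration (with an arbitrary matrix): a linear transformation in Sage is essentially a stored matrix, and evaluation is matrix-vector multiplication.

A = matrix(QQ, 2, 3, [1, 2, 3, 4, 5, 6])
T = linear_transformation(QQ^3, QQ^2, A, side='right')
T(vector(QQ, [1, 1, 1]))       # equals A*vector([1, 1, 1]), i.e. (6, 15)
T.matrix(side='right') == A    # True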
So we can understand the theory of linear algebra better by experimenting with the assistance of Sage, and the theory of linear algebra helps us understand how Sage is designed and functions. A virtuous cycle, if there ever was one. Keep it going.

Reading Questions CB Reading Questions

1.

The change-of-basis matrix is a matrix representation of which linear transformation?

2.

Find the change-of-basis matrix, \(\cbm{B}{C}\text{,}\) for the two bases of \(\complex{2}\)
\begin{align*} B&=\set{\colvector{2\\3},\,\colvector{-1\\2}}& C&=\set{\colvector{1\\0},\,\colvector{1\\1}}\text{.} \end{align*}
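One way to check your answer with Sage (a sketch): place the basis vectors in the columns of matrices. Since the columns of \(\cbm{B}{C}\) are the representations of the vectors of \(B\) relative to \(C\text{,}\) the change-of-basis matrix is the product of the inverse of the matrix for \(C\) with the matrix for \(B\text{.}\)

Bmat = column_matrix(QQ, [[2, 3], [-1, 2]])
Cmat = column_matrix(QQ, [[1, 0], [1, 1]])
Cmat.inverse() * Bmat    # the change-of-basis matrix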

3.

What is the third “surprise,” and why is it surprising?

Exercises CB Exercises

C20.

In Example CBCV we computed the vector representation of \(\vect{y}\) relative to \(C\text{,}\) \(\vectrep{C}{\vect{y}}\text{,}\) as an example of Theorem CB. Compute this same representation directly. In other words, apply Definition VR rather than Theorem CB.

C21.

Perform a check on Example MRCM by computing \(\matrixrep{Q}{B}{D}\) directly. In other words, apply Definition MR rather than Theorem MRCB.
Solution.
\begin{align*} &\vectrep{D}{\lteval{Q}{\begin{bmatrix}5&-3\\-3&-2\end{bmatrix}}} =\vectrep{D}{19+14x-2x^2-28x^3}\\ &=\vectrep{D}{(-39)(2+x-2x^2+3x^3)+62(-1-2x^2+3x^3)+(-53)(-3-x+x^3)+(-44)(-x^2+x^3)}\\ &=\colvector{-39\\62\\-53\\-44}\\ &\vectrep{D}{\lteval{Q}{\begin{bmatrix}2&-3\\-3&0\end{bmatrix}}} =\vectrep{D}{16+9x-7x^2-14x^3}\\ &=\vectrep{D}{(-23)(2+x-2x^2+3x^3)+(34)(-1-2x^2+3x^3)+(-32)(-3-x+x^3)+(-15)(-x^2+x^3)}\\ &=\colvector{-23\\34\\-32\\-15}\\ &\vectrep{D}{\lteval{Q}{\begin{bmatrix}1&2\\2&4\end{bmatrix}}} =\vectrep{D}{25+9x+3x^2+4x^3}\\ &=\vectrep{D}{(14)(2+x-2x^2+3x^3)+(-12)(-1-2x^2+3x^3)+5(-3-x+x^3)+(-7)(-x^2+x^3)}\\ &=\colvector{14\\-12\\5\\-7}\text{.} \end{align*}
These three vectors are the columns of the matrix representation,
\begin{equation*} \matrixrep{Q}{B}{D}= \begin{bmatrix} -39 & -23 & 14 \\ 62 & 34 & -12 \\ -53 & -32 & 5 \\ -44 & -15 & -7 \end{bmatrix} \end{equation*}
which coincides with the result obtained in Example MRCM.
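Each coordinatization above is just a linear system solve. Here is a Sage sketch checking the first column, with the coefficients of the four basis polynomials of \(D\) as the columns of Dmat:

Dmat = column_matrix(QQ, [[2, 1, -2, 3],
                          [-1, 0, -2, 3],
                          [-3, -1, 0, 1],
                          [0, 0, -1, 1]])
Dmat.solve_right(vector(QQ, [19, 14, -2, -28]))   # (-39, 62, -53, -44)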

C30.

Find a basis for the vector space \(P_3\) composed of eigenvectors of the linear transformation \(T\text{.}\) Then find a matrix representation of \(T\) relative to this basis.
\begin{equation*} \ltdefn{T}{P_3}{P_3},\quad\lteval{T}{a+bx+cx^2+dx^3}= (a+c+d)+(b+c+d)x+(a+b+c)x^2+(a+b+d)x^3\text{.} \end{equation*}
Solution.
With the domain and codomain being identical, we will build a matrix representation using the same basis for both the domain and codomain. The eigenvalues of the matrix representation will be the eigenvalues of the linear transformation, and we can obtain the eigenvectors of the linear transformation by un-coordinatizing (Theorem EER). Since the method does not depend on which basis we choose, we can choose a natural basis for ease of computation, say,
\begin{equation*} B=\set{1,\,x,\,x^2,\,x^3}\text{.} \end{equation*}
The matrix representation is then,
\begin{equation*} \matrixrep{T}{B}{B}= \begin{bmatrix} 1 & 0 & 1 & 1\\ 0 & 1 & 1 & 1\\ 1 & 1 & 1 & 0\\ 1 & 1 & 0 & 1 \end{bmatrix}\text{.} \end{equation*}
The eigenvalues and eigenvectors of this matrix were computed in Example ESMS4. A basis for \(\complex{4}\) composed of eigenvectors of the matrix representation is,
\begin{equation*} C=\set{ \colvector{1\\1\\1\\1},\, \colvector{-1\\1\\0\\0},\, \colvector{0\\0\\-1\\1},\, \colvector{-1\\-1\\1\\1} }\text{.} \end{equation*}
Applying \(\vectrepinvname{B}\) to each vector of this set yields a basis of \(P_3\) composed of eigenvectors of \(T\text{,}\)
\begin{equation*} D=\set{1+x+x^2+x^3,\,-1+x,\,-x^2+x^3,\,-1-x+x^2+x^3}\text{.} \end{equation*}
The matrix representation of \(T\) relative to the basis \(D\) will be a diagonal matrix with the corresponding eigenvalues along the diagonal, so in this case we get
\begin{equation*} \matrixrep{T}{D}{D}= \begin{bmatrix} 3 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & -1 \end{bmatrix}\text{.} \end{equation*}
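A quick Sage check of this diagonalization (a sketch), with the eigenvectors of \(C\) as the columns of S:

A = matrix(QQ, [[1, 0, 1, 1],
                [0, 1, 1, 1],
                [1, 1, 1, 0],
                [1, 1, 0, 1]])
S = column_matrix(QQ, [[1, 1, 1, 1],
                       [-1, 1, 0, 0],
                       [0, 0, -1, 1],
                       [-1, -1, 1, 1]])
S.inverse() * A * S   # diagonal, with entries 3, 1, 1, -1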

C40.

Let \(S_{22}\) be the vector space of \(2\times 2\) symmetric matrices. Find a basis \(C\) for \(S_{22}\) that yields a diagonal matrix representation of the linear transformation \(R\text{.}\)
\begin{align*} \ltdefn{R}{S_{22}}{S_{22}},\quad \lteval{R}{\begin{bmatrix}a&b\\b&c\end{bmatrix}}= \begin{bmatrix} -5a + 2b - 3c & -12a + 5b - 6c\\ -12a + 5b - 6c & 6a - 2b + 4c \end{bmatrix}\text{.} \end{align*}
Solution.
Begin with a matrix representation of \(R\text{,}\) any matrix representation, but use the same basis for both instances of \(S_{22}\text{.}\) We will choose a basis that makes it easy to compute vector representations in \(S_{22}\text{.}\)
\begin{equation*} B=\set{ \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix},\, \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix},\, \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} }\text{.} \end{equation*}
Then the resulting matrix representation of \(R\) (Definition MR) is
\begin{equation*} \matrixrep{R}{B}{B}= \begin{bmatrix} -5 & 2 & -3 \\ -12 & 5 & -6 \\ 6 & -2 & 4 \end{bmatrix}\text{.} \end{equation*}
Now, compute the eigenvalues and eigenvectors of this matrix, with the goal of diagonalizing the matrix (Theorem DC),
\begin{align*} \lambda&=2 & \eigenspace{\matrixrep{R}{B}{B}}{2}&=\spn{\set{\colvector{-1\\-2\\1}}}\\ \lambda&=1 & \eigenspace{\matrixrep{R}{B}{B}}{1}&=\spn{\set{\colvector{-1\\0\\2},\,\colvector{1\\3\\0}}}\text{.} \end{align*}
The three vectors that occur as basis elements for these eigenspaces will together form a linearly independent set (check this!). So these column vectors may be employed in a matrix that will diagonalize the matrix representation. If we “un-coordinatize” these three column vectors relative to the basis \(B\text{,}\) we will find three linearly independent elements of \(S_{22}\) that are eigenvectors of the linear transformation \(R\) (Theorem EER). A matrix representation relative to this basis of eigenvectors will be diagonal, with the eigenvalues (\(\lambda=2,\,1\)) as the diagonal elements. Here we go,
\begin{align*} \vectrepinv{B}{\colvector{-1\\-2\\1}}&= (-1)\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}+ (-2)\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}+ 1\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} -1 & -2 \\-2 & 1 \end{bmatrix}\\ \vectrepinv{B}{\colvector{-1\\0\\2}}&= (-1)\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}+ 0\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}+ 2\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & 2 \end{bmatrix}\\ \vectrepinv{B}{\colvector{1\\3\\0}}&= 1\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}+ 3\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}+ 0\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 3 \\ 3 & 0 \end{bmatrix}\text{.} \end{align*}
So the requested basis of \(S_{22}\text{,}\) yielding a diagonal matrix representation of \(R\text{,}\) is
\begin{align*} C = \set{ \begin{bmatrix} -1 & -2 \\ -2 & 1 \end{bmatrix},\, \begin{bmatrix} -1 & 0 \\ 0 & 2 \end{bmatrix},\, \begin{bmatrix} 1 & 3 \\ 3 & 0 \end{bmatrix} }\text{.} \end{align*}
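A Sage sketch confirming the diagonal representation, with the three eigenvectors as the columns of S:

A = matrix(QQ, [[-5, 2, -3],
                [-12, 5, -6],
                [6, -2, 4]])
S = column_matrix(QQ, [[-1, -2, 1],
                       [-1, 0, 2],
                       [1, 3, 0]])
S.inverse() * A * S   # diagonal, with entries 2, 1, 1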

C41.

Let \(S_{22}\) be the vector space of \(2\times 2\) symmetric matrices. Find a basis for \(S_{22}\) composed of eigenvectors of the linear transformation \(\ltdefn{Q}{S_{22}}{S_{22}}\text{.}\)
\begin{equation*} \lteval{Q}{ \begin{bmatrix} a & b\\ b & c \end{bmatrix} } = \begin{bmatrix} 25a + 18b + 30c & -16a - 11b - 20c\\ -16a - 11b - 20c & -11a - 9b - 12c \end{bmatrix}\text{.} \end{equation*}
Solution.
Use a single basis for both the domain and codomain, since they are equal.
\begin{equation*} B=\set{ \begin{bmatrix}1 & 0 \\ 0 & 0\end{bmatrix},\, \begin{bmatrix}0 & 1 \\ 1 & 0\end{bmatrix},\, \begin{bmatrix}0 & 0 \\ 0 & 1\end{bmatrix} }\text{.} \end{equation*}
The matrix representation of \(Q\) relative to \(B\) is
\begin{equation*} M= \matrixrep{Q}{B}{B} = \begin{bmatrix} 25 & 18 & 30 \\ -16 & -11 & -20 \\ -11 & -9 & -12 \end{bmatrix}\text{.} \end{equation*}
We can analyze this matrix with the techniques of Section EE and then apply Theorem EER. The eigenvalues of this matrix are \(\lambda=-2,\,1,\,3\) with eigenspaces
\begin{align*} \eigenspace{M}{-2}&=\spn{\set{\colvector{-6\\4\\3}}} & \eigenspace{M}{1}&=\spn{\set{\colvector{-2\\1\\1}}} & \eigenspace{M}{3}&=\spn{\set{\colvector{-3\\2\\1}}}\text{.} \end{align*}
Because the three eigenvalues are distinct, the three basis vectors from the three eigenspaces form a linearly independent set (Theorem EDELI). Theorem EER says we can un-coordinatize these eigenvectors to obtain eigenvectors of \(Q\text{.}\) By Theorem ILTLI the resulting set will remain linearly independent. Set
\begin{equation*} C=\set{ \vectrepinv{B}{\colvector{-6\\4\\3}},\, \vectrepinv{B}{\colvector{-2\\1\\1}},\, \vectrepinv{B}{\colvector{-3\\2\\1}} } = \set{ \begin{bmatrix}-6 & 4 \\ 4 & 3\end{bmatrix},\, \begin{bmatrix}-2 & 1 \\ 1 & 1\end{bmatrix},\, \begin{bmatrix}-3 & 2 \\ 2 & 1\end{bmatrix} }\text{.} \end{equation*}
Then \(C\) is a linearly independent set of size 3 in the vector space \(S_{22}\text{,}\) which has dimension 3 as well. By Theorem G, \(C\) is a basis of \(S_{22}\text{.}\)
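Again, a Sage sketch confirms that these three column vectors diagonalize the matrix representation:

M = matrix(QQ, [[25, 18, 30],
                [-16, -11, -20],
                [-11, -9, -12]])
S = column_matrix(QQ, [[-6, 4, 3],
                       [-2, 1, 1],
                       [-3, 2, 1]])
S.inverse() * M * S   # diagonal, with entries -2, 1, 3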

T10.

Suppose that \(\ltdefn{T}{V}{V}\) is an invertible linear transformation with a nonzero eigenvalue \(\lambda\text{.}\) Prove that \(\displaystyle\frac{1}{\lambda}\) is an eigenvalue of \(\ltinverse{T}\text{.}\)
Solution.
Let \(\vect{v}\) be an eigenvector of \(T\) for the eigenvalue \(\lambda\text{.}\) Then,
\begin{align*} \lteval{\ltinverse{T}}{\vect{v}}&= \frac{1}{\lambda}\lambda\lteval{\ltinverse{T}}{\vect{v}}&& \lambda\neq 0\\ &=\frac{1}{\lambda}\lteval{\ltinverse{T}}{\lambda\vect{v}}&& \text{Theorem ILTLT}\\ &=\frac{1}{\lambda}\lteval{\ltinverse{T}}{\lteval{T}{\vect{v}}}&& \vect{v}\text{ eigenvector of }T\\ &=\frac{1}{\lambda}\lteval{I_V}{\vect{v}}&& \text{Definition IVLT}\\ &=\frac{1}{\lambda}\vect{v}&& \text{Definition IDLT} \end{align*}
which says that \(\displaystyle\frac{1}{\lambda}\) is an eigenvalue of \(\ltinverse{T}\) with eigenvector \(\vect{v}\text{.}\) Note that it is possible to prove that an eigenvalue of an invertible linear transformation is never zero, so the hypothesis that \(\lambda\) be nonzero is just a convenience for this problem.

T15.

Suppose that \(V\) is a vector space and \(\ltdefn{T}{V}{V}\) is a linear transformation. Prove that \(T\) is injective if and only if \(\lambda=0\) is not an eigenvalue of \(T\text{.}\)