Consider the linear transformation \(\ltdefn{S}{M_{22}}{M_{22}}\) defined by
\begin{equation*}
\lteval{S}{\begin{bmatrix}a&b\\c&d\end{bmatrix}}=
\begin{bmatrix}
-b - c - 3d & -14a - 15b - 13c + d\\
18a + 21b + 19c + 3d & -6a - 7b - 7c - 3d
\end{bmatrix}\text{.}
\end{equation*}
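As a computational aside, we can spot-check this definition with software. Here is a minimal sketch, assuming Python with the SymPy library (neither is prescribed by this text; the function name \verb|S| is ours, chosen to mirror the notation above).
\begin{verbatim}
from sympy import Matrix

# The linear transformation S on 2x2 matrices, as defined above
def S(X):
    a, b, c, d = X[0, 0], X[0, 1], X[1, 0], X[1, 1]
    return Matrix([[-b - c - 3*d, -14*a - 15*b - 13*c + d],
                   [18*a + 21*b + 19*c + 3*d, -6*a - 7*b - 7*c - 3*d]])

# Image of a sample matrix
print(S(Matrix([[1, 0], [0, 0]])))  # Matrix([[0, -14], [18, -6]])
\end{verbatim}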
To find the eigenvalues and eigenvectors of
\(S\) we will build a matrix representation and analyze the matrix. Since
Theorem EER places no restriction on the choice of the basis
\(B\text{,}\) we may as well use a basis that is easy to work with. So set
\begin{equation*}
B=\set{\vect{x}_1,\,\vect{x}_2,\,\vect{x}_3,\,\vect{x}_4}
=\set{
\begin{bmatrix}
1 & 0 \\ 0 & 0
\end{bmatrix}
,\,
\begin{bmatrix}
0 & 1 \\ 0 & 0
\end{bmatrix}
,\,
\begin{bmatrix}
0 & 0 \\ 1 & 0
\end{bmatrix}
,\,
\begin{bmatrix}
0 & 0 \\ 0 & 1
\end{bmatrix}
}\text{.}
\end{equation*}
Then, to build the matrix representation of \(S\) relative to \(B\text{,}\) compute,
\begin{align*}
\vectrep{B}{\lteval{S}{\vect{x}_1}}&=
\vectrep{B}{\begin{bmatrix}0 & -14 \\ 18 & -6\end{bmatrix}}\\
&=\vectrep{B}{0\vect{x}_1+(-14)\vect{x}_2+18\vect{x}_3+(-6)\vect{x}_4}=
\colvector{0\\-14\\18\\-6}\\
\vectrep{B}{\lteval{S}{\vect{x}_2}}&=
\vectrep{B}{\begin{bmatrix}-1 & -15\\21 & -7\end{bmatrix}}\\
&=\vectrep{B}{(-1)\vect{x}_1+(-15)\vect{x}_2+21\vect{x}_3+(-7)\vect{x}_4}=
\colvector{-1\\-15\\21\\-7}\\
\vectrep{B}{\lteval{S}{\vect{x}_3}}&=
\vectrep{B}{\begin{bmatrix}-1 & -13\\19 & -7\end{bmatrix}}\\
&=\vectrep{B}{(-1)\vect{x}_1+(-13)\vect{x}_2+19\vect{x}_3+(-7)\vect{x}_4}=
\colvector{-1\\-13\\19\\-7}\\
\vectrep{B}{\lteval{S}{\vect{x}_4}}&=
\vectrep{B}{\begin{bmatrix}-3 & 1\\3 & -3\end{bmatrix}}\\
&=\vectrep{B}{(-3)\vect{x}_1+1\vect{x}_2+3\vect{x}_3+(-3)\vect{x}_4}=
\colvector{-3\\1\\3\\-3}\text{.}
\end{align*}
So
\begin{equation*}
M=\matrixrep{S}{B}{B}=
\begin{bmatrix}
0 & -1 & -1 & -3 \\
-14 & -15 & -13 & 1 \\
18 & 21 & 19 & 3 \\
-6 & -7 & -7 & -3
\end{bmatrix}\text{.}
\end{equation*}
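Since the \(B\)-coordinatization of a \(2\times 2\) matrix simply reads its entries row by row, the columns of \(M\) can be assembled mechanically. A sketch, again assuming SymPy:
\begin{verbatim}
from sympy import Matrix

def S(X):
    a, b, c, d = X[0, 0], X[0, 1], X[1, 0], X[1, 1]
    return Matrix([[-b - c - 3*d, -14*a - 15*b - 13*c + d],
                   [18*a + 21*b + 19*c + 3*d, -6*a - 7*b - 7*c - 3*d]])

# The basis B; a B-coordinate vector is the entries read row by row
B = [Matrix([[1, 0], [0, 0]]), Matrix([[0, 1], [0, 0]]),
     Matrix([[0, 0], [1, 0]]), Matrix([[0, 0], [0, 1]])]

# Columns of M are the B-coordinatizations of the outputs of S
M = Matrix.hstack(*[S(X).reshape(4, 1) for X in B])
print(M)  # matches the matrix representation above
\end{verbatim}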
Now compute the eigenvalues and eigenvectors of the matrix representation \(M\) with the techniques of
Section EE. First, the characteristic polynomial,
\begin{equation*}
\charpoly{M}{x}=\detname{M-xI_4}=x^4-x^3-10 x^2+4 x+24=(x-3) (x-2) (x+2)^2\text{.}
\end{equation*}
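This factorization is easy to verify with software; a short check, assuming SymPy:
\begin{verbatim}
from sympy import Matrix, symbols, factor, eye

x = symbols('x')
M = Matrix([[0, -1, -1, -3], [-14, -15, -13, 1],
            [18, 21, 19, 3], [-6, -7, -7, -3]])
# Characteristic polynomial with the convention det(M - x I)
p = (M - x*eye(4)).det()
print(factor(p))  # (x - 3)*(x - 2)*(x + 2)**2
\end{verbatim}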
We could now make statements about the eigenvalues of
\(M\text{,}\) but in light of
Theorem EER we can refer to the eigenvalues of
\(S\) and mildly abuse (or extend) our notation for multiplicities to write
\begin{align*}
\algmult{S}{3}&=1
&
\algmult{S}{2}&=1
&
\algmult{S}{-2}&=2\text{.}
\end{align*}
Now compute the eigenvectors of \(M\text{,}\)
\begin{align*}
\lambda&=3&M-3I_4&=
\begin{bmatrix}
-3 & -1 & -1 & -3 \\
-14 & -18 & -13 & 1 \\
18 & 21 & 16 & 3 \\
-6 & -7 & -7 & -6
\end{bmatrix}
\rref
\begin{bmatrix}
\leading{1} & 0 & 0 & 1 \\
0 & \leading{1} & 0 & -3 \\
0 & 0 & \leading{1} & 3 \\
0 & 0 & 0 & 0
\end{bmatrix}\\
&&\eigenspace{M}{3}&=\nsp{M-3I_4}
=\spn{\set{\colvector{-1\\3\\-3\\1}}}
\end{align*}
\begin{align*}
\lambda&=2&M-2I_4&=
\begin{bmatrix}
-2 & -1 & -1 & -3 \\
-14 & -17 & -13 & 1 \\
18 & 21 & 17 & 3 \\
-6 & -7 & -7 & -5
\end{bmatrix}
\rref
\begin{bmatrix}
\leading{1} & 0 & 0 & 2 \\
0 & \leading{1} & 0 & -4 \\
0 & 0 & \leading{1} & 3 \\
0 & 0 & 0 & 0
\end{bmatrix}\\
&&\eigenspace{M}{2}&=\nsp{M-2I_4}
=\spn{\set{\colvector{-2\\4\\-3\\1}}}
\end{align*}
\begin{align*}
\lambda&=-2&M-(-2)I_4&=
\begin{bmatrix}
2 & -1 & -1 & -3 \\
-14 & -13 & -13 & 1 \\
18 & 21 & 21 & 3 \\
-6 & -7 & -7 & -1
\end{bmatrix}
\rref
\begin{bmatrix}
\leading{1} & 0 & 0 & -1 \\
0 & \leading{1} & 1 & 1 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix}\\
&&\eigenspace{M}{-2}&=\nsp{M-(-2)I_4}
=\spn{\set{\colvector{0\\-1\\1\\0},\,\colvector{1\\-1\\0\\1}}}\text{.}
\end{align*}
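These three null space computations can also be bundled into a single software call. A sketch, assuming SymPy, with the caveat that the routine may return different basis vectors for the two-dimensional eigenspace; the spans are what must agree:
\begin{verbatim}
from sympy import Matrix

M = Matrix([[0, -1, -1, -3], [-14, -15, -13, 1],
            [18, 21, 19, 3], [-6, -7, -7, -3]])
# Each entry: (eigenvalue, algebraic multiplicity, eigenspace basis)
for val, mult, vecs in M.eigenvects():
    print(val, mult, [list(v) for v in vecs])
\end{verbatim}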
According to
Theorem EER the eigenvectors just listed as basis vectors for the eigenspaces of
\(M\) are vector representations (relative to
\(B\)) of eigenvectors for
\(S\text{.}\) So the application of the inverse function
\(\vectrepinvname{B}\) will convert these column vectors into elements of the vector space
\(M_{22}\) (
\(2\times 2\) matrices) that are eigenvectors of
\(S\text{.}\) Since
\(\vectrepname{B}\) is an isomorphism (
Theorem VRILT), so is
\(\vectrepinvname{B}\text{.}\) Applying the inverse function will then preserve linear independence and spanning properties, so with a sweeping application of the Coordinatization Principle and some extensions of our previous notation for eigenspaces and geometric multiplicities, we can write,
\begin{align*}
\vectrepinv{B}{\colvector{-1\\3\\-3\\1}}
&=
(-1)\vect{x}_1+3\vect{x}_2+(-3)\vect{x}_3+1\vect{x}_4=
\begin{bmatrix}-1 & 3\\-3 & 1\end{bmatrix}\\
\vectrepinv{B}{\colvector{-2\\4\\-3\\1}}
&=
(-2)\vect{x}_1+4\vect{x}_2+(-3)\vect{x}_3+1\vect{x}_4=
\begin{bmatrix}-2 & 4\\-3 & 1\end{bmatrix}\\
\vectrepinv{B}{\colvector{0\\-1\\1\\0}}
&=
0\vect{x}_1+(-1)\vect{x}_2+1\vect{x}_3+0\vect{x}_4=
\begin{bmatrix}0 & -1\\1 & 0\end{bmatrix}\\
\vectrepinv{B}{\colvector{1\\-1\\0\\1}}
&=
1\vect{x}_1+(-1)\vect{x}_2+0\vect{x}_3+1\vect{x}_4=
\begin{bmatrix}1 & -1\\0 & 1\end{bmatrix}\text{.}
\end{align*}
So
\begin{align*}
\eigenspace{S}{3}&=
\spn{\set{\begin{bmatrix}-1 & 3\\-3 & 1\end{bmatrix}}}\\
\eigenspace{S}{2}&=
\spn{\set{\begin{bmatrix}-2 & 4\\-3 & 1\end{bmatrix}}}\\
\eigenspace{S}{-2}&=
\spn{\set{\begin{bmatrix}0 & -1\\1 & 0\end{bmatrix},\,\begin{bmatrix}1 & -1\\0 & 1\end{bmatrix}}}
\end{align*}
with geometric multiplicities given by
\begin{align*}
\geomult{S}{3}&=1
&
\geomult{S}{2}&=1
&
\geomult{S}{-2}&=2\text{.}
\end{align*}
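As a representation-free check, apply the definition of \(S\) directly to one of these matrices. For example,
\begin{equation*}
\lteval{S}{\begin{bmatrix}0 & -1\\1 & 0\end{bmatrix}}
=\begin{bmatrix}0 & 2\\-2 & 0\end{bmatrix}
=(-2)\begin{bmatrix}0 & -1\\1 & 0\end{bmatrix}
\end{equation*}
confirming directly that \(\begin{bmatrix}0 & -1\\1 & 0\end{bmatrix}\) is an eigenvector of \(S\) for \(\lambda=-2\text{,}\) with no matrix representation in sight.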
Suppose we now decide to build another matrix representation of \(S\text{,}\) this time relative to a linearly independent set of eigenvectors of \(S\text{,}\) such as
\begin{equation*}
C=
\set{
\begin{bmatrix}-1 & 3\\-3 & 1\end{bmatrix},\,
\begin{bmatrix}-2 & 4\\-3 & 1\end{bmatrix},\,
\begin{bmatrix}0 & -1\\1 & 0\end{bmatrix},\,
\begin{bmatrix}1 & -1\\0 & 1\end{bmatrix}
}\text{.}
\end{equation*}
At this point you should have computed enough matrix representations to predict that the result of representing
\(S\) relative to
\(C\) will be a diagonal matrix. Computing this representation is an example of how
Theorem SCB generalizes the diagonalizations from
Section SD. For the record, here is the diagonal representation,
\begin{equation*}
\matrixrep{S}{C}{C}
=
\begin{bmatrix}
3 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 \\
0 & 0 & -2 & 0 \\
0 & 0 & 0 & -2
\end{bmatrix}\text{.}
\end{equation*}
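We can confirm this diagonal representation without coordinatizing relative to \(C\) by hand: if \(P\) is the matrix whose columns are the \(B\)-coordinatizations of the four matrices in \(C\) (a change-of-basis matrix in the spirit of Theorem SCB), then the representation relative to \(C\) is \(P^{-1}MP\text{.}\) A sketch, assuming SymPy:
\begin{verbatim}
from sympy import Matrix

M = Matrix([[0, -1, -1, -3], [-14, -15, -13, 1],
            [18, 21, 19, 3], [-6, -7, -7, -3]])
# Columns: B-coordinatizations of the four matrices in C
P = Matrix([[-1, -2, 0, 1], [3, 4, -1, -1],
            [-3, -3, 1, 0], [1, 1, 0, 1]])
print(P.inv() * M * P)  # diag(3, 2, -2, -2)
\end{verbatim}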
Our interest in this example is not so much in building nice representations as in demonstrating that eigenvalues and eigenvectors are an intrinsic property of a linear transformation, independent of any particular representation. To this end, we will repeat the foregoing, but replace \(B\) by another basis. We will make this basis different, but not radically so,
\begin{equation*}
D=\set{\vect{y}_1,\,\vect{y}_2,\,\vect{y}_3,\,\vect{y}_4}
=\set{
\begin{bmatrix}
1 & 0 \\ 0 & 0
\end{bmatrix}
,\,
\begin{bmatrix}
1 & 1 \\ 0 & 0
\end{bmatrix}
,\,
\begin{bmatrix}
1 & 1 \\ 1 & 0
\end{bmatrix}
,\,
\begin{bmatrix}
1 & 1 \\ 1 & 1
\end{bmatrix}
}\text{.}
\end{equation*}
Then, to build the matrix representation of \(S\) relative to \(D\text{,}\) first note that
\begin{equation*}
\begin{bmatrix}p & q\\r & s\end{bmatrix}
=(p-q)\vect{y}_1+(q-r)\vect{y}_2+(r-s)\vect{y}_3+s\vect{y}_4\text{,}
\end{equation*}
which supplies each of the coordinatizations below. Now compute,
\begin{align*}
\vectrep{D}{\lteval{S}{\vect{y}_1}}&=
\vectrep{D}{\begin{bmatrix}0 & -14\\18 & -6\end{bmatrix}}\\
&=\vectrep{D}{14\vect{y}_1+(-32)\vect{y}_2+24\vect{y}_3+(-6)\vect{y}_4}=
\colvector{14\\-32\\24\\-6}\\
\vectrep{D}{\lteval{S}{\vect{y}_2}}&=
\vectrep{D}{\begin{bmatrix}-1 & -29 \\ 39 & -13\end{bmatrix}}\\
&=\vectrep{D}{28\vect{y}_1+(-68)\vect{y}_2+52\vect{y}_3+(-13)\vect{y}_4}=
\colvector{28\\-68\\52\\-13}\\
\vectrep{D}{\lteval{S}{\vect{y}_3}}&=
\vectrep{D}{\begin{bmatrix}-2 & -42 \\ 58 & -20\end{bmatrix}}\\
&=\vectrep{D}{40\vect{y}_1+(-100)\vect{y}_2+78\vect{y}_3+(-20)\vect{y}_4}=
\colvector{40\\-100\\78\\-20}\\
\vectrep{D}{\lteval{S}{\vect{y}_4}}&=
\vectrep{D}{\begin{bmatrix}-5 & -41 \\ 61 & -23\end{bmatrix}}\\
&=\vectrep{D}{36\vect{y}_1+(-102)\vect{y}_2+84\vect{y}_3+(-23)\vect{y}_4}=
\colvector{36\\-102\\84\\-23}\text{.}
\end{align*}
So
\begin{equation*}
N=\matrixrep{S}{D}{D}=
\begin{bmatrix}
14 & 28 & 40 & 36 \\
-32 & -68 & -100 & -102 \\
24 & 52 & 78 & 84 \\
-6 & -13 & -20 & -23
\end{bmatrix}\text{.}
\end{equation*}
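Rather than coordinatizing by hand, we could also have obtained \(N\) from \(M\) by a change of basis: if \(Q\) is the matrix whose columns are the \(B\)-coordinatizations of the four matrices in \(D\text{,}\) then \(N=Q^{-1}MQ\) by Theorem SCB. A sketch, assuming SymPy:
\begin{verbatim}
from sympy import Matrix

M = Matrix([[0, -1, -1, -3], [-14, -15, -13, 1],
            [18, 21, 19, 3], [-6, -7, -7, -3]])
# Columns: B-coordinatizations of the four matrices in D
Q = Matrix([[1, 1, 1, 1], [0, 1, 1, 1],
            [0, 0, 1, 1], [0, 0, 0, 1]])
print(Q.inv() * M * Q)  # reproduces N above
\end{verbatim}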
Now compute the eigenvalues and eigenvectors of the matrix representation \(N\) with the techniques of
Section EE. First, the characteristic polynomial,
\begin{equation*}
\charpoly{N}{x}=\detname{N-xI_4}=x^4-x^3-10 x^2+4 x+24=(x-3) (x-2) (x+2)^2\text{.}
\end{equation*}
Of course this is not news. We now know that
\(M=\matrixrep{S}{B}{B}\) and
\(N=\matrixrep{S}{D}{D}\) are similar matrices (
Theorem SCB). But
Theorem SMEE told us long ago that similar matrices have identical characteristic polynomials. Now compute the eigenvectors of \(N\text{,}\) which will be different from those we found for
\(M\text{,}\)
\begin{align*}
\lambda&=3&N-3I_4&=
\begin{bmatrix}
11 & 28 & 40 & 36 \\
-32 & -71 & -100 & -102 \\
24 & 52 & 75 & 84 \\
-6 & -13 & -20 & -26
\end{bmatrix}
\rref
\begin{bmatrix}
\leading{1} & 0 & 0 & 4 \\
0 & \leading{1} & 0 & -6 \\
0 & 0 & \leading{1} & 4 \\
0 & 0 & 0 & 0
\end{bmatrix}\\
&&\eigenspace{N}{3}&=\nsp{N-3I_4}
=\spn{\set{\colvector{-4\\6\\-4\\1}}}
\end{align*}
\begin{align*}
\lambda&=2&N-2I_4&=
\begin{bmatrix}
12 & 28 & 40 & 36 \\
-32 & -70 & -100 & -102 \\
24 & 52 & 76 & 84 \\
-6 & -13 & -20 & -25
\end{bmatrix}
\rref
\begin{bmatrix}
\leading{1} & 0 & 0 & 6 \\
0 & \leading{1} & 0 & -7 \\
0 & 0 & \leading{1} & 4 \\
0 & 0 & 0 & 0
\end{bmatrix}\\
&&\eigenspace{N}{2}&=\nsp{N-2I_4}
=\spn{\set{\colvector{-6\\7\\-4\\1}}}
\end{align*}
\begin{align*}
\lambda&=-2&N-(-2)I_4&=
\begin{bmatrix}
16 & 28 & 40 & 36 \\
-32 & -66 & -100 & -102 \\
24 & 52 & 80 & 84 \\
-6 & -13 & -20 & -21
\end{bmatrix}
\rref
\begin{bmatrix}
\leading{1} & 0 & -1 & -3 \\
0 & \leading{1} & 2 & 3 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix}\\
&&\eigenspace{N}{-2}&=\nsp{N-(-2)I_4}
=\spn{\set{\colvector{1\\-2\\1\\0},\,\colvector{3\\-3\\0\\1}}}\text{.}
\end{align*}
Employing
Theorem EER we can apply
\(\vectrepinvname{D}\) to each of the basis vectors of the eigenspaces of
\(N\) to obtain eigenvectors for
\(S\) that also form bases for eigenspaces of
\(S\text{,}\)
\begin{align*}
\vectrepinv{D}{\colvector{-4\\6\\-4\\1}}
&=
(-4)\vect{y}_1+6\vect{y}_2+(-4)\vect{y}_3+1\vect{y}_4=
\begin{bmatrix}-1 & 3\\-3 & 1\end{bmatrix}\\
\vectrepinv{D}{\colvector{-6\\7\\-4\\1}}
&=
(-6)\vect{y}_1+7\vect{y}_2+(-4)\vect{y}_3+1\vect{y}_4=
\begin{bmatrix}-2 & 4\\-3 & 1\end{bmatrix}\\
\vectrepinv{D}{\colvector{1\\-2\\1\\0}}
&=
1\vect{y}_1+(-2)\vect{y}_2+1\vect{y}_3+0\vect{y}_4=
\begin{bmatrix}0 & -1\\1 & 0\end{bmatrix}\\
\vectrepinv{D}{\colvector{3\\-3\\0\\1}}
&=
3\vect{y}_1+(-3)\vect{y}_2+0\vect{y}_3+1\vect{y}_4=
\begin{bmatrix}1 & -2\\1 & 1\end{bmatrix}\text{.}
\end{align*}
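Before comparing eigenspaces, we can verify in one stroke that each recovered matrix really is an eigenvector of \(S\text{.}\) A sketch, assuming SymPy (the pairing of coordinate vectors with eigenvalues is read off the computations above):
\begin{verbatim}
from sympy import Matrix

def S(X):
    a, b, c, d = X[0, 0], X[0, 1], X[1, 0], X[1, 1]
    return Matrix([[-b - c - 3*d, -14*a - 15*b - 13*c + d],
                   [18*a + 21*b + 19*c + 3*d, -6*a - 7*b - 7*c - 3*d]])

# The basis D, and D-coordinate eigenvectors paired with eigenvalues
D = [Matrix([[1, 0], [0, 0]]), Matrix([[1, 1], [0, 0]]),
     Matrix([[1, 1], [1, 0]]), Matrix([[1, 1], [1, 1]])]
pairs = [((-4, 6, -4, 1), 3), ((-6, 7, -4, 1), 2),
         ((1, -2, 1, 0), -2), ((3, -3, 0, 1), -2)]
for v, lam in pairs:
    X = sum((v[i] * D[i] for i in range(4)), Matrix.zeros(2, 2))
    assert S(X) == lam * X  # an eigenvector of S, as claimed
\end{verbatim}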
The eigenspaces for the eigenvalues of algebraic multiplicity 1 are exactly as before,
\begin{align*}
\eigenspace{S}{3}&=
\spn{\set{\begin{bmatrix}-1 & 3\\-3 & 1\end{bmatrix}}}\\
\eigenspace{S}{2}&=
\spn{\set{\begin{bmatrix}-2 & 4\\-3 & 1\end{bmatrix}}}\text{.}
\end{align*}
However, the eigenspace for \(\lambda=-2\) would at first glance appear to be different. Here are the two eigenspaces for \(\lambda=-2\text{,}\) first the eigenspace obtained from \(M=\matrixrep{S}{B}{B}\text{,}\) followed by the eigenspace obtained from \(N=\matrixrep{S}{D}{D}\text{.}\) We have
\begin{align*}
\eigenspace{S}{-2}&=
\spn{\set{\begin{bmatrix}0 & -1\\1 & 0\end{bmatrix},\,\begin{bmatrix}1 & -1\\0 & 1\end{bmatrix}}}
&
\eigenspace{S}{-2}&=
\spn{\set{\begin{bmatrix}0 & -1\\1 & 0\end{bmatrix},\,\begin{bmatrix}1 & -2\\1 & 1\end{bmatrix}}}\text{.}
\end{align*}
Subspaces generally have many bases, and that is the situation here. With a careful proof of set equality, you can show that these two eigenspaces are equal sets. The key observation for such a proof is that
\begin{equation*}
\begin{bmatrix}1 & -2\\1 & 1\end{bmatrix}
=
\begin{bmatrix}0 & -1\\1 & 0\end{bmatrix}+\begin{bmatrix}1 & -1\\0 & 1\end{bmatrix}
\end{equation*}
which will establish that the second set is a subset of the first. With equal dimensions,
Theorem EDYES will finish the task.
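Alternatively, a quick software check: row-reduce the \(B\)-coordinatizations of the two proposed bases. Identical reduced row-echelon forms mean identical row spaces, hence equal eigenspaces. A sketch, assuming SymPy:
\begin{verbatim}
from sympy import Matrix

# Rows: B-coordinatizations of the two bases for the eigenspace
first = Matrix([[0, -1, 1, 0], [1, -1, 0, 1]])
second = Matrix([[0, -1, 1, 0], [1, -2, 1, 1]])
print(first.rref()[0] == second.rref()[0])  # True
\end{verbatim}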
So the eigenvalues of a linear transformation are independent of the matrix representation employed to compute them!