This example illustrates the proof of
Theorem EMHE, and so will employ the same notation as the proof; look there for full explanations. It is
not meant to be a final example of a reasonable computational approach to finding eigenvalues and eigenvectors. OK, warnings in place, here we go.
Consider the matrix \(A\text{,}\) and choose the vector \(\vect{x}\text{,}\)
\begin{align*}
A&=\begin{bmatrix}
-7 & -1 & 11 & 0 & -4\\
4 & 1 & 0 & 2 & 0\\
-10 & -1 & 14 & 0 & -4\\
8 & 2 & -15 & -1 & 5\\
-10 & -1 & 16 & 0 & -6
\end{bmatrix}
&
\vect{x}&=\colvector{3\\0\\3\\-5\\4}\text{.}
\end{align*}
It is important to notice that the choice of
\(\vect{x}\) could be
anything, so long as it is
not the zero vector. We have not chosen
\(\vect{x}\) totally at random, but so as to make our illustration of the theorem as general as possible. You could replicate this example with your own choice and the computations are guaranteed to be reasonable, provided you have a computational tool that will factor a fifth-degree polynomial for you.
The set
\begin{align*}
S&=\set{\vect{x},\,A\vect{x},\,A^2\vect{x},\,A^3\vect{x},\,A^4\vect{x},\,A^5\vect{x}}\\
&=
\set{
\colvector{3\\0\\3\\-5\\4},\,
\colvector{-4\\2\\-4\\4\\-6},\,
\colvector{6\\-6\\6\\-2\\10},\,
\colvector{-10\\14\\-10\\-2\\-18},\,
\colvector{18\\-30\\18\\10\\34},\,
\colvector{-34\\62\\-34\\-26\\-66}
}
\end{align*}
is guaranteed to be linearly dependent, as it has six vectors from
\(\complex{5}\) (
Theorem MVSLD).
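If you would like to follow along with software, here is a minimal sketch in Python with NumPy (our own code, not part of the original text) that builds the six vectors of \(S\text{:}\)
\begin{verbatim}
import numpy as np

# The matrix A and the chosen vector x from above.
A = np.array([[ -7, -1,  11,  0, -4],
              [  4,  1,   0,  2,  0],
              [-10, -1,  14,  0, -4],
              [  8,  2, -15, -1,  5],
              [-10, -1,  16,  0, -6]])
x = np.array([3, 0, 3, -5, 4])

# Build the six vectors x, Ax, A^2x, ..., A^5x of the set S.
S = [x]
for i in range(5):
    S.append(A @ S[-1])
for v in S:
    print(v)
\end{verbatim}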
We will search for a nontrivial relation of linear dependence by using row operations to solve a homogeneous system of equations whose coefficient matrix has the vectors of \(S\) as columns,
\begin{equation*}
\begin{bmatrix}
3 & -4 & 6 & -10 & 18 & -34\\
0 & 2 & -6 & 14 & -30 & 62\\
3 & -4 & 6 & -10 & 18 & -34\\
-5 & 4 & -2 & -2 & 10 & -26\\
4 & -6 & 10 & -18 & 34 & -66
\end{bmatrix}
\rref
\begin{bmatrix}
\leading{1} & 0 & -2 & 6 & -14 & 30\\
0 & \leading{1} & -3 & 7 & -15 & 31\\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}\text{.}
\end{equation*}
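If you are following along computationally, SymPy's rref method will reproduce this row reduction; a minimal sketch (the variable names are ours):
\begin{verbatim}
import sympy as sp

# Columns are the six vectors of S, in order.
M = sp.Matrix([[ 3, -4,   6, -10,  18, -34],
               [ 0,  2,  -6,  14, -30,  62],
               [ 3, -4,   6, -10,  18, -34],
               [-5,  4,  -2,  -2,  10, -26],
               [ 4, -6,  10, -18,  34, -66]])
R, pivots = M.rref()
print(R)       # the reduced row-echelon form displayed above
print(pivots)  # (0, 1): the first two columns are pivot columns
\end{verbatim}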
There are four free variables for describing solutions to this homogeneous system, so we have our pick of solutions. The most expedient choice would be to set
\(x_3=1\) and
\(x_4=x_5=x_6=0\text{.}\) However, we will again opt to maximize the generality of our illustration of
Theorem EMHE and choose
\(x_3=-8\text{,}\) \(x_4=-3\text{,}\) \(x_5=1\) and
\(x_6=0\text{.}\) This leads to a solution with
\(x_1=16\) and
\(x_2=12\text{.}\)
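From the reduced row-echelon form, the dependent variables satisfy \(x_1=2x_3-6x_4+14x_5-30x_6\) and \(x_2=3x_3-7x_4+15x_5-31x_6\text{,}\) so these values follow by back-substitution; a quick check (our code, continuing in Python):
\begin{verbatim}
# Back-substitution for the free-variable choices above.
x3, x4, x5, x6 = -8, -3, 1, 0
x1 = 2*x3 - 6*x4 + 14*x5 - 30*x6
x2 = 3*x3 - 7*x4 + 15*x5 - 31*x6
print(x1, x2)  # 16 12
\end{verbatim}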
This relation of linear dependence then says that
\begin{align*}
\zerovector&=16\vect{x}+12A\vect{x}-8A^2\vect{x}-3A^3\vect{x}+A^4\vect{x}+0A^5\vect{x}\\
\zerovector&=\left(16+12A-8A^2-3A^3+A^4\right)\vect{x}\text{.}
\end{align*}
So we define
\(p(x)=16+12x-8x^2-3x^3+x^4\text{,}\) and as advertised in the proof of
Theorem EMHE, we have a polynomial of degree
\(m=4\geq 1\) such that
\(p(A)\vect{x}=\zerovector\text{.}\) Now we need to factor
\(p(x)\) over
\(\complexes\text{.}\) If you made your own choice of
\(\vect{x}\) at the start, this is where you might have a fifth-degree polynomial, and where you might need to use a computational tool to find roots and factors. We have
\begin{equation*}
p(x)=16+12x-8x^2-3x^3+x^4=(x-4)(x+2)(x-2)(x+1)\text{.}
\end{equation*}
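This is one place a computer algebra system earns its keep; for instance, SymPy's factor function recovers the factorization (a sketch, not the only way to do it):
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
p = 16 + 12*t - 8*t**2 - 3*t**3 + t**4
print(sp.factor(p))  # (t - 4)*(t - 2)*(t + 1)*(t + 2), up to ordering
\end{verbatim}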
So we know that
\begin{equation*}
\zerovector=p(A)\vect{x}=(A-4I_5)(A+2I_5)(A-2I_5)(A+1I_5)\vect{x}\text{.}
\end{equation*}
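Continuing the NumPy sketch from earlier (reusing A and x, and introducing the name I5 for the identity matrix), we can confirm that \(p(A)\vect{x}\) really is the zero vector:
\begin{verbatim}
I5 = np.eye(5, dtype=int)
mp = np.linalg.matrix_power  # abbreviation for matrix powers

# p(A) = 16 I_5 + 12 A - 8 A^2 - 3 A^3 + A^4, applied to x.
pA = 16*I5 + 12*A - 8*mp(A, 2) - 3*mp(A, 3) + mp(A, 4)
print(pA @ x)  # [0 0 0 0 0]
\end{verbatim}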
We apply one factor at a time until we get the zero vector, so as to determine the value of \(k\) described in the proof of Theorem EMHE,
\begin{align*}
(A+1I_5)\vect{x}&=
\begin{bmatrix}
-6 & -1 & 11 & 0 & -4\\
4 & 2 & 0 & 2 & 0\\
-10 & -1 & 15 & 0 & -4\\
8 & 2 & -15 & 0 & 5\\
-10 & -1 & 16 & 0 & -5
\end{bmatrix}
\colvector{3\\0\\3\\-5\\4}
=
\colvector{-1\\2\\-1\\-1\\-2}\\
(A-2I_5)(A+1I_5)\vect{x}&=
\begin{bmatrix}
-9 & -1 & 11 & 0 & -4\\
4 & -1 & 0 & 2 & 0\\
-10 & -1 & 12 & 0 & -4\\
8 & 2 & -15 & -3 & 5\\
-10 & -1 & 16 & 0 & -8
\end{bmatrix}
\colvector{-1\\2\\-1\\-1\\-2}
=
\colvector{4\\-8\\4\\4\\8}\\
(A+2I_5)(A-2I_5)(A+1I_5)\vect{x}&=
\begin{bmatrix}
-5 & -1 & 11 & 0 & -4\\
4 & 3 & 0 & 2 & 0\\
-10 & -1 & 16 & 0 & -4\\
8 & 2 & -15 & 1 & 5\\
-10 & -1 & 16 & 0 & -4
\end{bmatrix}
\colvector{4\\-8\\4\\4\\8}
=
\colvector{0\\0\\0\\0\\0}\text{.}
\end{align*}
So \(k=3\) and
\begin{equation*}
\vect{z}=(A-2I_5)(A+1I_5)\vect{x}=\colvector{4\\-8\\4\\4\\8}
\end{equation*}
is an eigenvector of \(A\) for the eigenvalue \(\lambda=-2\text{,}\) as you can check by doing the computation \(A\vect{z}\text{.}\)
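Continuing the same sketch, this chain of computations is just repeated matrix-vector products, and the final check that \(A\vect{z}=-2\vect{z}\) comes for free:
\begin{verbatim}
v1 = (A + 1*I5) @ x   # not the zero vector
v2 = (A - 2*I5) @ v1  # not the zero vector; this is z
v3 = (A + 2*I5) @ v2  # the zero vector, so k = 3
z = v2
print(A @ z)   # [-8 16 -8 -8 -16], which is -2*z
\end{verbatim}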
Using the same choice of \(\vect{x}\) will lead to the same polynomial \(p(x)\text{,}\) but we can work with the factorization in a different order.
\begin{align*}
(A+2I_5)\vect{x}&=
\begin{bmatrix}
-5 & -1 & 11 & 0 & -4\\
4 & 3 & 0 & 2 & 0\\
-10 & -1 & 16 & 0 & -4\\
8 & 2 & -15 & 1 & 5\\
-10 & -1 & 16 & 0 & -4
\end{bmatrix}
\colvector{3\\0\\3\\-5\\4}
=
\colvector{2\\2\\2\\-6\\2}\\
(A+1I_5)(A+2I_5)\vect{x}&=
\begin{bmatrix}
-6 & -1 & 11 & 0 & -4\\
4 & 2 & 0 & 2 & 0\\
-10 & -1 & 15 & 0 & -4\\
8 & 2 & -15 & 0 & 5\\
-10 & -1 & 16 & 0 & -5
\end{bmatrix}
\colvector{2\\2\\2\\-6\\2}
=
\colvector{0\\0\\0\\0\\0}\text{.}
\end{align*}
Now \(k=2\) and
\begin{equation*}
\vect{z}=(A+2I_5)\vect{x}=\colvector{2\\2\\2\\-6\\2}
\end{equation*}
is an eigenvector of \(A\) for the eigenvalue \(\lambda=-1\text{,}\) as you can check by doing the computation \(A\vect{z}\text{.}\)
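And the alternative ordering is just as quick to verify (again reusing A, x, and I5 from the sketch above):
\begin{verbatim}
w1 = (A + 2*I5) @ x   # not the zero vector; this is z
w2 = (A + 1*I5) @ w1  # the zero vector, so k = 2
z = w1
print(A @ z)   # [-2 -2 -2 6 -2], which is -1*z
\end{verbatim}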
Theorem EMHE guarantees the
existence of an eigenvalue, and the
constructive proof (see
Proof Technique C) provides a procedure for finding one, provided we can factor a polynomial partway through. But it does not seem to offer much control. We choose (guess?) any vector
\(\vect{x}\text{,}\) and the set of linearly dependent vectors created via powers of the matrix could admit many different choices (guesses?) of a relation of linear dependence, leading to many different polynomials. In the extreme you could be unlucky (or lucky?) and end up with a degree 1 polynomial. For example, choose
\begin{equation*}
\vect{x}=\colvector{2\\1\\3\\-3\\3}
\end{equation*}
and observe that the sequence of vectors \(A^i\vect{x}\) begins with
\begin{align*}
A^0\vect{x} &= \colvector{2\\1\\3\\-3\\3} & A^1\vect{x} &= \colvector{6\\3\\9\\-9\\9}
\end{align*}
which is already a linearly dependent set. So a possible polynomial is \(p(x)=x-3\text{,}\) and we recognize that \(\vect{x}\) happens to be an eigenvector of \(A\text{!}\) If we want to find some of the other eigenvalues of \(A\) we need to start over with another choice of \(\vect{x}\text{.}\) (Why? Compute a few more \(A^i\vect{x}\) to see what happens.)
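This lucky outcome is a one-line check in the running sketch (the variable name y is ours):
\begin{verbatim}
y = np.array([2, 1, 3, -3, 3])
print(A @ y)  # [6 3 9 -9 9], which is 3*y, so y is an eigenvector
\end{verbatim}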
So the goal of this chapter is to make eigenvalues less mysterious, but knowing that they actually exist is a good start.
If you work through this example with your own choice of the vector
\(\vect{x}\) (strongly recommended) then the eigenvalue you will find may be different, but will be in the set
\(\set{3,\,0,\,1,\,-1,\,-2}\text{.}\) See
Exercise EE.M60 for a suggested starting vector.