Section 5.6 Generalized eigenspaces
Example 5.4.9 showed us that if $V = U \oplus W$, where $U$ and $W$ are $T$-invariant, then the matrix of $T$ has block diagonal form
\[
M_B(T) = \begin{bmatrix} M_{B_1}(T|_U) & 0 \\ 0 & M_{B_2}(T|_W) \end{bmatrix},
\]
as long as the basis $B = B_1 \cup B_2$ is the union of bases $B_1$ of $U$ and $B_2$ of $W$.
We want to take this idea further. If $V = V_1 \oplus V_2 \oplus \cdots \oplus V_k$, where each subspace $V_j$ is $T$-invariant, then with respect to a basis $B$ consisting of basis vectors for each subspace, we will have
\[
M_B(T) = \begin{bmatrix} A_1 & 0 & \cdots & 0 \\ 0 & A_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_k \end{bmatrix},
\]
where each block $A_j$ is the matrix of the restriction of $T$ to $V_j$.
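As an illustration (a sketch using SymPy, with made-up matrices, not an example from the text): if we build an operator out of a block diagonal matrix and a basis adapted to the invariant subspaces, then changing coordinates back to that adapted basis recovers the block form.

```python
import sympy as sp

# Blocks for two hypothetical T-invariant subspaces:
# a 2x2 block acting on U, and a 1x1 block acting on W
B = sp.Matrix([[1, 2, 0],
               [3, 4, 0],
               [0, 0, 5]])

# Columns of P form a basis adapted to V = U + W:
# the first two columns span U, the last column spans W
P = sp.Matrix([[1, 1, 0],
               [0, 1, 1],
               [1, 0, 1]])

# The same operator, written in the standard basis, no longer looks block diagonal
A = P * B * P.inv()

# Changing back to the adapted basis restores the block diagonal form
print(P.inv() * A * P == B)  # True
```

The point of the demonstration is that block diagonal structure is a property of the operator together with a well-chosen basis, not of any particular matrix representing it.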
Our goal moving forward is twofold: one, to make the blocks $A_j$ as small as possible, so that $M_B(T)$ is as close to diagonal as possible, and two, to make the blocks as simple as possible. Of course, if $T$ is diagonalizable, then we can get all blocks down to size $1\times 1$, but this is not always possible.
Recall from Section 4.1 that if the characteristic polynomial of $T$ (or equivalently, of any matrix representation of $T$) is
\[
c_T(x) = (x - \lambda_1)^{m_1}(x - \lambda_2)^{m_2}\cdots(x - \lambda_k)^{m_k},
\]
then $\dim E_{\lambda_i}(T) \leq m_i$ for each $i$, and $T$ is diagonalizable if and only if we have equality for each $i$. (This guarantees that we have sufficiently many independent eigenvectors to form a basis of $V$.)
Since eigenspaces are $T$-invariant, we see that being able to diagonalize $T$ is equivalent to having the direct sum decomposition
\[
V = E_{\lambda_1}(T) \oplus E_{\lambda_2}(T) \oplus \cdots \oplus E_{\lambda_k}(T).
\]
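As a quick check of this criterion (a SymPy sketch with a made-up matrix), we can compare the dimension of each eigenspace against the algebraic multiplicity coming from the characteristic polynomial:

```python
import sympy as sp

A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])  # characteristic polynomial (x - 2)^2 (x - 3)

# eigenvects() returns (eigenvalue, algebraic multiplicity, eigenspace basis)
for lam, mult, basis in A.eigenvects():
    print(lam, mult, len(basis))  # dim of the eigenspace never exceeds mult

# Diagonalizable iff dim E_lambda = mult for every eigenvalue;
# here dim E_2 = 1 < 2, so the eigenvectors come up short
print(A.is_diagonalizable())  # False
```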
If $T$ cannot be diagonalized, it's because we came up short on the number of eigenvectors, and the direct sum of all eigenspaces only produces some subspace of $V$ of lower dimension. We now consider how one might enlarge a set of independent eigenvectors in some standard, and ideally optimal, way.
First, we note that for any operator $T$, the restriction of $T - \lambda I$ to $E_\lambda(T)$ is the zero operator, since by definition, $Tv = \lambda v$ for all $v \in E_\lambda(T)$. Since we define $E_\lambda(T) = \ker(T - \lambda I)$, it follows that $T - \lambda I$ restricts to the zero operator on the eigenspace $E_\lambda(T)$. The idea is to relax the condition "identically zero" to something that will allow us to potentially enlarge some of our eigenspaces, so that we end up with enough vectors to span $V$.
It turns out that the correct replacement for "identically zero" is "nilpotent". What we would like to find is some subspace $W$ such that the restriction of $T - \lambda I$ to $W$ will be nilpotent. (Recall that this means $(T - \lambda I)^k = 0$ for some integer $k$, when restricted to $W$.) The only problem is that we don't (yet) know what this subspace should be. To figure it out, we rely on some ideas you may have explored in your last assignment.
In other words, for any operator $T$, the kernels of successive powers of $T$ can get bigger, but the moment the kernel doesn't change for the next highest power, it stops changing for all further powers of $T$. That is, we have a sequence of kernels
\[
\ker T \subseteq \ker T^2 \subseteq \ker T^3 \subseteq \cdots
\]
of strictly greater dimension until we reach a maximum, at which point the kernels stop growing. And of course, the maximum dimension cannot be more than the dimension of $V$.
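We can watch this stabilization happen with SymPy (the nilpotent matrix below is our own illustrative example):

```python
import sympy as sp

# N is nilpotent: a single 3x3 Jordan block with eigenvalue 0
N = sp.Matrix([[0, 1, 0],
               [0, 0, 1],
               [0, 0, 0]])

# Dimensions of ker(N^k) for k = 1, 2, 3, 4:
# strictly increasing until the maximum, then constant forever
dims = [len((N**k).nullspace()) for k in range(1, 5)]
print(dims)  # [1, 2, 3, 3]
```

Once the kernel fills up all of the space (here, at $k = 3 = \dim V$), taking further powers changes nothing.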
Definition 5.6.2.
Let $T: V \to V$ be a linear operator on a finite-dimensional vector space $V$, and let $\lambda$ be an eigenvalue of $T$. The generalized eigenspace of $T$ associated to $\lambda$ is the subspace
\[
G_\lambda(T) = \ker\left((T - \lambda I)^n\right), \quad \text{where } n = \dim V.
\]
Some remarks are in order. First, we can actually define $G_\lambda(T)$ for any scalar $\lambda$. But this space will be trivial if $\lambda$ is not an eigenvalue. Second, it is possible to show (although we will not do so here) that if $\lambda$ is an eigenvalue with multiplicity $m$, then $G_\lambda(T) = \ker\left((T - \lambda I)^m\right)$. (The kernel will usually have stopped growing well before we hit $n = \dim V$, but we know they're all eventually equal, so using $n$ guarantees we have everything.)
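To make the definition concrete, here is a small SymPy computation (the matrix is our own made-up example) comparing the ordinary eigenspace $E_\lambda(T)$ with the generalized eigenspace $G_\lambda(T) = \ker\left((T - \lambda I)^n\right)$:

```python
import sympy as sp

A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])
n = A.rows   # n = dim V = 3
lam = 2      # eigenvalue of algebraic multiplicity 2

E = (A - lam * sp.eye(n)).nullspace()        # ordinary eigenspace E_2
G = ((A - lam * sp.eye(n))**n).nullspace()   # generalized eigenspace G_2
print(len(E), len(G))  # 1 2
```

Here $\dim E_2 = 1$ falls short of the multiplicity $2$, but the generalized eigenspace $G_2$ recovers the full multiplicity.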
We will not prove it here (see Nicholson, or Axler), but the advantage of using generalized eigenspaces is that they're just big enough to cover all of $V$.
Theorem 5.6.3.
Let $T: V \to V$ be a linear operator on a finite-dimensional vector space $V$ whose characteristic polynomial factors completely, with distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_k$. Then
\[
V = G_{\lambda_1}(T) \oplus G_{\lambda_2}(T) \oplus \cdots \oplus G_{\lambda_k}(T).
\]
For each eigenvalue $\lambda_i$ of $T$, let $l_i$ denote the smallest integer power such that $G_{\lambda_i}(T) = \ker\left((T - \lambda_i I)^{l_i}\right)$. Then certainly we have $l_i \leq m_i$ for each $i$. (Note also that if $l_i = 1$, then $G_{\lambda_i}(T) = E_{\lambda_i}(T)$.)
The polynomial $m_T(x) = (x - \lambda_1)^{l_1}(x - \lambda_2)^{l_2}\cdots(x - \lambda_k)^{l_k}$ is the polynomial of smallest degree such that $m_T(T) = 0$. The polynomial $m_T(x)$ is called the minimal polynomial of $T$. Note that $T$ is diagonalizable if and only if the minimal polynomial of $T$ has no repeated roots.
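Using a small made-up matrix, we can find each exponent $l_i$ in SymPy by watching where the kernels stabilize, and then verify that plugging the matrix into the resulting polynomial gives zero:

```python
import sympy as sp

A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])
n = A.rows

# For each eigenvalue, find the smallest l with
# ker((A - lam I)^l) = ker((A - lam I)^(l+1))
exponents = {}
for lam, mult, _ in A.eigenvects():
    l = 1
    while (len(((A - lam * sp.eye(n))**l).nullspace())
           < len(((A - lam * sp.eye(n))**(l + 1)).nullspace())):
        l += 1
    exponents[lam] = l
# lam = 2 needs l = 2, lam = 3 needs l = 1, so m_T(x) = (x - 2)^2 (x - 3)

# m_T has the repeated root 2, so A is not diagonalizable,
# but m_T(A) is the zero matrix
m_of_A = (A - 2 * sp.eye(n))**2 * (A - 3 * sp.eye(n))
print(m_of_A == sp.zeros(n, n))  # True
```

Note that the characteristic polynomial here is $(x-2)^2(x-3)$, which coincides with $m_T(x)$ for this particular matrix; in general the minimal polynomial may have strictly smaller exponents.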
In Section 5.7, we’ll explore a systematic method for determining the generalized eigenspaces of a matrix, and in particular, for computing a basis for each generalized eigenspace, with respect to which the corresponding block in the block-diagonal form is especially simple.