

Section 12.5 Some Applications

A large and varied number of applications involve computations of powers of matrices. These applications can be found in science, the social sciences, economics, engineering, and many other areas where mathematics is used. Here we consider a few diverse examples and the mathematics behind them.

Subsection 12.5.1 Diagonalization

We begin by developing a helpful technique for computing $A^m$, $m > 1$. If $A$ can be diagonalized, then there is a matrix $P$ such that $P^{-1}AP = D$, where $D$ is a diagonal matrix, and
\begin{equation}
A^m = P D^m P^{-1} \quad\text{for all } m \geq 1 \tag{12.5.1}
\end{equation}
The proof of this identity was an exercise in Section 5.4. The condition that $D$ be a diagonal matrix is not necessary, but when it is, the calculation on the right side is particularly easy to perform. Although the formal proof is done by induction, the reason why it is true is easily seen by writing out an example such as $m = 3$:
\begin{align*}
A^3 &= \left(P D P^{-1}\right)^3\\
 &= \left(P D P^{-1}\right)\left(P D P^{-1}\right)\left(P D P^{-1}\right)\\
 &= P D \left(P^{-1} P\right) D \left(P^{-1} P\right) D P^{-1} && \text{by associativity of matrix multiplication}\\
 &= P D\, I\, D\, I\, D P^{-1}\\
 &= P D D D P^{-1}\\
 &= P D^3 P^{-1}
\end{align*}
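This computation is easy to reproduce with a computer algebra system. The following sketch uses Python with SymPy (rather than the Sage mentioned later in this section) and the matrix $\begin{pmatrix} 2 & 1 \\ 2 & 3 \end{pmatrix}$ that reappears in the matrix-exponential example below; it simply checks the identity (12.5.1) for one value of $m$.

```python
from sympy import Matrix

# The matrix (2 1; 2 3) is the one diagonalized again later in this section.
A = Matrix([[2, 1],
            [2, 3]])

# diagonalize() returns P and D with A == P * D * P.inv().
P, D = A.diagonalize()

m = 10
Am = P * D**m * P.inv()   # powers of the diagonal matrix D are cheap to compute

assert Am == A**m         # confirms A^m = P D^m P^(-1)
print(Am)
```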

Example 12.5.1. Application to Recursion: Matrix Computation of the Fibonacci Sequence.

Consider the computation of terms of the Fibonacci Sequence. Recall that $F_0 = 1$, $F_1 = 1$ and $F_k = F_{k-1} + F_{k-2}$ for $k \geq 2$.
In order to formulate the calculation in matrix form, we introduce the “dummy equation” $F_{k-1} = F_{k-1}$ so that now we have two equations:
\begin{align*}
F_k &= F_{k-1} + F_{k-2}\\
F_{k-1} &= F_{k-1}
\end{align*}
If $A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$, these two equations can be expressed in matrix form as
\begin{align*}
\begin{pmatrix} F_k \\ F_{k-1} \end{pmatrix} &= \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} F_{k-1} \\ F_{k-2} \end{pmatrix} \quad\text{if } k \geq 2\\
 &= A \begin{pmatrix} F_{k-1} \\ F_{k-2} \end{pmatrix}\\
 &= A^2 \begin{pmatrix} F_{k-2} \\ F_{k-3} \end{pmatrix} \quad\text{if } k \geq 3\\
 &\phantom{=}\ \text{etc.}
\end{align*}
We can use induction to prove that if $k \geq 2$,
\[
\begin{pmatrix} F_k \\ F_{k-1} \end{pmatrix} = A^{k-1}\begin{pmatrix} 1 \\ 1 \end{pmatrix}
\]
Next, by diagonalizing $A$ and using the fact that $A^m = P D^m P^{-1}$, we can show that
\[
F_k = \frac{1}{\sqrt{5}}\left(\left(\frac{1+\sqrt{5}}{2}\right)^{k+1} - \left(\frac{1-\sqrt{5}}{2}\right)^{k+1}\right)
\]
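As a quick check of both the matrix formulation and the closed formula, here is a sketch in Python with SymPy (not the text's Sage); the helper name fib_matrix is ours, introduced only for illustration.

```python
from sympy import Matrix, sqrt, simplify

A = Matrix([[1, 1],
            [1, 0]])

def fib_matrix(k):
    """Return F_k (with F_0 = F_1 = 1) using (F_k, F_{k-1})^T = A^(k-1) (1, 1)^T."""
    if k < 2:
        return 1
    return (A**(k - 1) * Matrix([1, 1]))[0]

phi = (1 + sqrt(5)) / 2
psi = (1 - sqrt(5)) / 2

k = 10
closed_form = simplify((phi**(k + 1) - psi**(k + 1)) / sqrt(5))
print(fib_matrix(k), closed_form)   # both give 89
```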
Some comments on this example:
  1. An equation of the form $F_k = a F_{k-1} + b F_{k-2}$, where $a$ and $b$ are given constants, is referred to as a linear homogeneous second-order difference equation. The conditions $F_0 = c_0$ and $F_1 = c_1$, where $c_0$ and $c_1$ are constants, are called initial conditions. Those of you who are familiar with differential equations may recognize that this language parallels what is used in differential equations. Difference (aka recurrence) equations move forward discretely; that is, in a finite number of positive steps. On the other hand, a differential equation moves continuously; that is, takes an infinite number of infinitesimal steps.
  2. A recurrence relationship of the form $S_k = a S_{k-1} + b$, where $a$ and $b$ are constants, is called a first-order difference equation. In order to write out the sequence, we need to know one initial condition. Equations of this type can be solved similarly to the method outlined in the example by introducing the superfluous equation $1 = 0 \cdot S_{k-1} + 1$ to obtain the matrix equation
    \[
    \begin{pmatrix} S_k \\ 1 \end{pmatrix} = \begin{pmatrix} a & b \\ 0 & 1 \end{pmatrix}\begin{pmatrix} S_{k-1} \\ 1 \end{pmatrix}
    \quad\Longrightarrow\quad
    \begin{pmatrix} S_k \\ 1 \end{pmatrix} = \begin{pmatrix} a & b \\ 0 & 1 \end{pmatrix}^{k}\begin{pmatrix} S_0 \\ 1 \end{pmatrix}
    \]
    A short computational sketch of this first-order technique appears after this list.
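The following minimal sketch in Python with SymPy illustrates the first-order technique just described; the sample values $a = 3$, $b = 2$, $S_0 = 1$ are ours, chosen only for illustration.

```python
from sympy import Matrix

a, b, S0 = 3, 2, 1           # illustrative constants, not taken from the text
M = Matrix([[a, b],
            [0, 1]])

def S(k):
    """S_k for S_k = a*S_{k-1} + b with S_0 = S0, computed via matrix powers."""
    return (M**k * Matrix([S0, 1]))[0]

# Compare with direct iteration of the recurrence.
s, values = S0, [S0]
for _ in range(5):
    s = a * s + b
    values.append(s)

print([S(k) for k in range(6)])   # [1, 5, 17, 53, 161, 485]
print(values)                     # identical list
```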

Subsection 12.5.2 Path Counting

In the next example, we apply the following theorem, which can be proven by induction.

Theorem 12.5.2.

If $A$ is the adjacency matrix of a graph with vertices $v_1, v_2, \ldots, v_n$, then the entry in row $i$, column $j$ of $A^k$ (computed with ordinary arithmetic) is the number of paths of length $k$ from $v_i$ to $v_j$.

Example 12.5.3. Counting Paths with Diagonalization.

Consider the graph in Figure 12.5.4.
Figure 12.5.4. Counting Numbers of Paths
As we saw in Section 6.4, the adjacency matrix of this graph is $A = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix}$.
Recall that $A^k$ is the adjacency matrix of the relation $r^k$, where $r$ is the relation $\{(a,a), (a,b), (b,a), (b,c), (c,b), (c,c)\}$ of the above graph. Also recall that in computing $A^k$, we used Boolean arithmetic. What happens if we use “regular” arithmetic? If we square $A$ we get
\[
A^2 = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}
\]
How can we interpret this? We note that $\left(A^2\right)_{33} = 2$ and that there are two paths of length two from $c$ (the third node) to $c$. Also, $\left(A^2\right)_{13} = 1$, and there is one path of length 2 from $a$ to $c$. The reader should verify these claims by examining the graph.
How do we compute $A^k$ for possibly large values of $k$? From the discussion at the beginning of this section, we know that $A^k = P D^k P^{-1}$ if $A$ is diagonalizable. We leave it to the reader to show that $\lambda = 1$, $2$, and $-1$ are eigenvalues of $A$ with eigenvectors
\[
\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}, \quad \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \quad\text{and}\quad \begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix}
\]
Then
\[
A^k = P \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2^k & 0 \\ 0 & 0 & (-1)^k \end{pmatrix} P^{-1}
\]
where $P = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & -2 \\ -1 & 1 & 1 \end{pmatrix}$ and $P^{-1} = \begin{pmatrix} \frac{1}{2} & 0 & -\frac{1}{2} \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\ \frac{1}{6} & -\frac{1}{3} & \frac{1}{6} \end{pmatrix}$
See Exercise 12.5.5.5 of this section for the completion of this example.
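A quick way to check these claims is to let a computer algebra system diagonalize $A$ and raise it to a power. This sketch uses Python with SymPy (not the text's Sage):

```python
from sympy import Matrix

# Adjacency matrix of the graph in Figure 12.5.4.
A = Matrix([[1, 1, 0],
            [1, 0, 1],
            [0, 1, 1]])

# diagonalize() returns P and D with A == P * D * P.inv();
# the eigenvalues 1, 2, and -1 appear on the diagonal of D.
P, D = A.diagonalize()
print(D)

k = 4
# Entry (i, j) of A^k counts the paths of length k from vertex i to vertex j.
print(P * D**k * P.inv())   # equals A**k, a matrix of path counts
```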

Subsection 12.5.3 Matrix Calculus

Example 12.5.5. Matrix Calculus - Exponentials.

Those who have studied calculus recall that the Maclaurin series is a useful way of expressing many common functions. For example, $e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!}$. Indeed, calculators and computers use these series for calculations. Given a polynomial $f(x)$, we defined the matrix-polynomial $f(A)$ for square matrices in Chapter 5. Hence, we are in a position to describe $e^A$ for an $n \times n$ matrix $A$ as a limit of polynomials, the partial sums of the series. Formally, we write
\[
e^A = I + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \cdots = \sum_{k=0}^{\infty} \frac{A^k}{k!}
\]
Again we encounter the need to compute high powers of a matrix. Let $A$ be an $n \times n$ diagonalizable matrix. Then there exists an invertible $n \times n$ matrix $P$ such that $P^{-1} A P = D$, a diagonal matrix, so that
\[
e^A = e^{P D P^{-1}} = \sum_{k=0}^{\infty} \frac{\left(P D P^{-1}\right)^k}{k!} = P \left(\sum_{k=0}^{\infty} \frac{D^k}{k!}\right) P^{-1}
\]
The infinite sum in the middle of this final expression can be easily evaluated if $D$ is diagonal. All entries of powers off the diagonal are zero and the $i$th entry of the diagonal is
\[
\left(\sum_{k=0}^{\infty} \frac{D^k}{k!}\right)_{ii} = \sum_{k=0}^{\infty} \frac{D_{ii}^{\,k}}{k!} = e^{D_{ii}}
\]
For example, if $A = \begin{pmatrix} 2 & 1 \\ 2 & 3 \end{pmatrix}$, the first matrix we diagonalized in Section 12.3, we found that $P = \begin{pmatrix} 1 & 1 \\ -1 & 2 \end{pmatrix}$ and $D = \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix}$.
Therefore,
\[
e^A = \begin{pmatrix} 1 & 1 \\ -1 & 2 \end{pmatrix}\begin{pmatrix} e & 0 \\ 0 & e^4 \end{pmatrix}\begin{pmatrix} \frac{2}{3} & -\frac{1}{3} \\ \frac{1}{3} & \frac{1}{3} \end{pmatrix} = \begin{pmatrix} \frac{2e}{3} + \frac{e^4}{3} & -\frac{e}{3} + \frac{e^4}{3} \\ -\frac{2e}{3} + \frac{2e^4}{3} & \frac{e}{3} + \frac{2e^4}{3} \end{pmatrix} \approx \begin{pmatrix} 20.0116 & 17.2933 \\ 34.5866 & 37.3049 \end{pmatrix}
\]
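The same result can be obtained symbolically or numerically. Here is a sketch in Python with SymPy; the text's own tool is Sage, whose exp method is mentioned below.

```python
from sympy import Matrix

A = Matrix([[2, 1],
            [2, 3]])

expA = A.exp()           # symbolic matrix exponential, built from the eigenvalues 1 and 4
print(expA)              # exact entries involving E and exp(4)
print(expA.evalf(6))     # approximately [[20.0116, 17.2933], [34.5866, 37.3049]]
```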

Remark 12.5.6.

Many of the ideas of calculus can be developed using matrices. For example, if
\[
A(t) = \begin{pmatrix} t^3 & 3t^2 + 8t \\ e^t & 2 \end{pmatrix} \quad\text{then}\quad \frac{dA(t)}{dt} = \begin{pmatrix} 3t^2 & 6t + 8 \\ e^t & 0 \end{pmatrix}
\]
Many of the basic formulas in calculus are true in matrix calculus. For example,
\[
\frac{d(A(t) + B(t))}{dt} = \frac{dA(t)}{dt} + \frac{dB(t)}{dt}
\]
and if $A$ is a constant matrix,
\[
\frac{d\, e^{A t}}{dt} = A e^{A t}
\]
Matrix calculus can be used to solve systems of differential equations in a similar manner to the procedure used in ordinary differential equations.
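As a small illustration of entrywise matrix differentiation, the following sketch in Python with SymPy reproduces the derivative of $A(t)$ shown in the remark above.

```python
from sympy import Matrix, exp, symbols

t = symbols('t')

A = Matrix([[t**3, 3*t**2 + 8*t],
            [exp(t), 2]])

# Matrix.diff differentiates each entry with respect to t.
print(A.diff(t))   # Matrix([[3*t**2, 6*t + 8], [exp(t), 0]])
```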

Subsection 12.5.4 SageMath Note - Matrix Exponential

Sage’s matrix exponential method is exp.

Exercises 12.5.5 Exercises

1.

  1. Write out all the details of Example 12.5.1 to show that the formula for $F_k$ given in the text is correct.
  2. Use induction to prove the assertion made in the example that $\begin{pmatrix} F_k \\ F_{k-1} \end{pmatrix} = A^{k-1} \begin{pmatrix} 1 \\ 1 \end{pmatrix}$.

2.

  1. Do Example 8.3.14 using the method outlined in Example 12.5.1. Note that the terminology characteristic equation, characteristic polynomial, and so on, introduced in Chapter 8, comes from the language of matrix algebra.
  2. What is the significance of Algorithm 8.3.12, part c, with respect to this section?

3.

Solve $S(k) = 5S(k-1) + 4$, with $S(0) = 0$, using the method of this section.

4.

How many paths are there of length 6 from vertex 1 to vertex 3 in Figure 12.5.7? How many paths from vertex 2 to vertex 2 of length 6 are there?
Figure 12.5.7. Graph for exercise 4
Hint.
The characteristic polynomial of the adjacency matrix is $\lambda^4 - 4\lambda^2$.

5.

  1. Use matrices to determine the number of paths of length 1 that exist from vertex $a$ to each of the vertices in the given graph. Verify using the graph. Do the same for vertices $b$ and $c$.
  2. Verify all the details provided in the example.
  3. Use matrices to determine the number of paths of length 4 between each pair of nodes in the graph. Verify your results using the graph.
Answer.
  1. Since $A = A^1 = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix}$, there are 0 paths of length 1 from: node $c$ to node $a$, node $b$ to node $b$, and node $a$ to node $c$; and there is 1 path of length 1 for every other pair of nodes.
  2. The characteristic polynomial is
    \[
    \lvert A - cI\rvert = \begin{vmatrix} 1-c & 1 & 0 \\ 1 & -c & 1 \\ 0 & 1 & 1-c \end{vmatrix} = -c^3 + 2c^2 + c - 2
    \]
    Solving the characteristic equation $-c^3 + 2c^2 + c - 2 = 0$ we find solutions 1, 2, and $-1$.
    If $c = 1$, we find the associated eigenvector by finding a nonzero solution to $\begin{pmatrix} 0 & 1 & 0 \\ 1 & -1 & 1 \\ 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$. One of these, which will be the first column of $P$, is $\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}$.
    If $c = 2$, the system $\begin{pmatrix} -1 & 1 & 0 \\ 1 & -2 & 1 \\ 0 & 1 & -1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$ yields eigenvectors, including $\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$, which will be the second column of $P$.
    If $c = -1$, then the system determining the eigenvectors is $\begin{pmatrix} 2 & 1 & 0 \\ 1 & 1 & 1 \\ 0 & 1 & 2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$ and we can select $\begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix}$, although any nonzero multiple of this vector could be the third column of $P$.
  3. Assembling the results of part (b) we have $P = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & -2 \\ -1 & 1 & 1 \end{pmatrix}$.
    \[
    A^4 = P \begin{pmatrix} 1^4 & 0 & 0 \\ 0 & 2^4 & 0 \\ 0 & 0 & (-1)^4 \end{pmatrix} P^{-1} = P \begin{pmatrix} 1 & 0 & 0 \\ 0 & 16 & 0 \\ 0 & 0 & 1 \end{pmatrix} P^{-1} = \begin{pmatrix} 1 & 16 & 1 \\ 0 & 16 & -2 \\ -1 & 16 & 1 \end{pmatrix}\begin{pmatrix} \frac{1}{2} & 0 & -\frac{1}{2} \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\ \frac{1}{6} & -\frac{1}{3} & \frac{1}{6} \end{pmatrix} = \begin{pmatrix} 6 & 5 & 5 \\ 5 & 6 & 5 \\ 5 & 5 & 6 \end{pmatrix}
    \]
    Hence there are five different paths of length 4 between distinct vertices, and six different paths that start and end at the same vertex. The reader can verify these facts from Figure 12.5.4.

6.

Let $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$.
  1. Find $e^A$.
  2. Recall that $\sin x = \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k+1}}{(2k+1)!}$ and compute $\sin A$.
  3. Formulate a reasonable definition of the natural logarithm of a matrix and compute $\ln A$.

7.

We noted in Chapter 5 that since matrix algebra is not commutative under multiplication, certain difficulties arise. Let $A = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}$ and $B = \begin{pmatrix} 0 & 0 \\ 0 & 2 \end{pmatrix}$.
  1. Compute $e^A$, $e^B$, and $e^{A+B}$. Compare $e^A e^B$, $e^B e^A$, and $e^{A+B}$.
  2. Show that if $\mathbf{0}$ is the $2 \times 2$ zero matrix, then $e^{\mathbf{0}} = I$.
  3. Prove that if $A$ and $B$ are two matrices that do commute, then $e^{A+B} = e^A e^B$, thereby proving that $e^A$ and $e^B$ commute.
  4. Prove that for any matrix $A$, $\left(e^A\right)^{-1} = e^{-A}$.
Answer.
  1. $e^A = \begin{pmatrix} e & e-1 \\ 0 & 1 \end{pmatrix}$, $e^B = \begin{pmatrix} 1 & 0 \\ 0 & e^2 \end{pmatrix}$, and $e^{A+B} = \begin{pmatrix} e & e^2 - e \\ 0 & e^2 \end{pmatrix}$
  2. Let $\mathbf{0}$ be the zero matrix; $e^{\mathbf{0}} = I + \mathbf{0} + \frac{\mathbf{0}^2}{2} + \frac{\mathbf{0}^3}{6} + \cdots = I$.
  3. Assume that $A$ and $B$ commute. We will examine the first few terms in the product $e^A e^B$. The pattern that is established does continue in general. In what follows, it is important that $AB = BA$. For example, in the last step, $(A+B)^2$ expands to $A^2 + AB + BA + B^2$, not $A^2 + 2AB + B^2$, if we can't assume commutativity.
    \begin{align*}
    e^A e^B &= \left(\sum_{k=0}^{\infty} \frac{A^k}{k!}\right)\left(\sum_{k=0}^{\infty} \frac{B^k}{k!}\right)\\
     &= \left(I + A + \frac{A^2}{2} + \frac{A^3}{6} + \cdots\right)\left(I + B + \frac{B^2}{2} + \frac{B^3}{6} + \cdots\right)\\
     &= I + A + B + \frac{A^2}{2} + AB + \frac{B^2}{2} + \frac{A^3}{6} + \frac{A^2 B}{2} + \frac{A B^2}{2} + \frac{B^3}{6} + \cdots\\
     &= I + (A+B) + \frac{1}{2}\left(A^2 + 2AB + B^2\right) + \frac{1}{6}\left(A^3 + 3A^2 B + 3 A B^2 + B^3\right) + \cdots\\
     &= I + (A+B) + \frac{1}{2}(A+B)^2 + \frac{1}{6}(A+B)^3 + \cdots\\
     &= e^{A+B}
    \end{align*}
  4. Since $A$ and $-A$ commute, we can apply part (c):
    \[
    e^A e^{-A} = e^{A + (-A)} = e^{\mathbf{0}} = I
    \]

8.

Another observation for adjacency matrices: For the matrix in Example 12.5.3, note that the sum of the elements in the row corresponding to the node a (that is, the first row) gives the outdegree of a. Similarly, the sum of the elements in any given column gives the indegree of the node corresponding to that column.
Figure 12.5.8. Graph for exercise 8
  1. Using the matrix $A$ of Example 12.5.3, find the outdegree and the indegree of each node. Verify using the graph.
  2. Repeat part (a) for the directed graphs in Figure 12.5.8.