In Chapter 5 we defined the inverse of an $n\times n$ matrix. We noted that not all matrices have inverses, but when the inverse of a matrix exists, it is unique. This enables us to define the inverse of an $n\times n$ matrix $A$ as the unique matrix $A^{-1}$ such that $A A^{-1} = A^{-1} A = I$, where $I$ is the $n\times n$ identity matrix. In order to get some practical experience, we developed a formula that allowed us to determine the inverse of invertible $2\times 2$ matrices. We will now use the Gauss-Jordan procedure for solving systems of linear equations to compute the inverses, when they exist, of $n\times n$ matrices, $n \geq 3$. The following procedure for a $3\times 3$ matrix can be generalized for $n\times n$ matrices, $n \geq 3$.
Given the matrix $A$, we want to find its inverse, the matrix $B = A^{-1}$, if it exists, such that $AB = I$ and $BA = I$. We will concentrate on finding a matrix $B$ that satisfies the first equation and then verify that $B$ also satisfies the second equation.
By definition of equality of matrices, this gives us three systems of equations to solve. The augmented matrix of one of these systems, the one equating the first columns of the two matrices, is:
The critical thing to note here is that the coefficient matrix in (12.2.2) is the same as the matrix $A$ in (12.2.1); hence the sequence of row operations used in the row reduction is the same in both cases.
to obtain its solution. Here again it is important to note that the sequence of row operations used to solve this system is exactly the same as the one we used in the first system. Why not save ourselves a considerable amount of time and effort and solve all three systems simultaneously? We can do this by augmenting the coefficient matrix $A$ with the identity matrix $I$. We then have, by applying the same sequence of row operations as above,
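The simultaneous procedure described above can be sketched in code: form the augmented matrix $[A \mid I]$, row-reduce until the left half becomes $I$, and read $A^{-1}$ off the right half. This is an illustrative sketch, not part of the text; the function name and the sample matrix are our own, and exact rational arithmetic is used so the row operations behave as they do on paper.

```python
from fractions import Fraction

def gauss_jordan_inverse(A):
    """Row-reduce [A | I]; if the left half becomes I, the right half
    is A^{-1}.  Returns None when A is not invertible."""
    n = len(A)
    # Build the augmented matrix [A | I] with exact rational entries.
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    for col in range(n):
        # Find a row at or below the diagonal with a nonzero pivot.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None          # no pivot available: A is not invertible
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Eliminate the pivot column from every other row.
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    # The right half of M is now A^{-1}.
    return [row[n:] for row in M]

# Example with a made-up invertible matrix:
B = gauss_jordan_inverse([[1, 2], [3, 4]])
print(B)
```

Because every row operation is applied to the whole augmented row, all $n$ systems are solved at once, exactly as the text suggests.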
As the following theorem indicates, the verification that $BA = I$ is not necessary. The proof of the theorem is beyond the scope of this text. The interested reader can find it in most linear algebra texts.
Let $A$ be an $n\times n$ matrix. If a matrix $B$ can be found such that $AB = BA = I$, then $A^{-1}$ exists, so that $A^{-1} = B$. In fact, to find $A^{-1}$, we need only find a matrix $B$ that satisfies one of the two conditions $AB = I$ or $BA = I$.
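The theorem's one-sided condition can be checked numerically. The sketch below uses a made-up $3\times 3$ matrix and its claimed inverse (both our own choices, not from the text): verifying $AB = I$ suffices, and $BA = I$ comes along for free.

```python
def matmul(X, Y):
    """Plain matrix multiplication for square matrices."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 1, 0], [0, 1, 1], [0, 0, 1]]
B = [[1, -1, 1], [0, 1, -1], [0, 0, 1]]   # claimed inverse of A
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

print(matmul(A, B) == I)  # True: the one condition we check
print(matmul(B, A) == I)  # True: guaranteed by the theorem
```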
It is clear from Chapter 5 and our discussions in this chapter that not all matrices have inverses. How do we determine whether a matrix has an inverse using this method? The answer is quite simple: the technique we developed to compute inverses is a matrix approach to solving several systems of equations simultaneously.
Example 12.2.2. Recognition of a non-invertible matrix.
The reader can verify that for this matrix $A$, the augmented matrix reduces to
(12.2.4)
Although this matrix can be row-reduced further, it is not necessary to do so since, in equation form, we have:
Table 12.2.3.
Clearly, there are no solutions to the first two systems; therefore, $A^{-1}$ does not exist. From this discussion it should be obvious to the reader that the zero row of the coefficient matrix, together with the nonzero entry in the fourth column of that row in matrix (12.2.4), tells us that $A^{-1}$ does not exist.
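This failure mode can be made visible in code. The sketch below (our own illustration, using a made-up singular matrix whose third row is the sum of the first two) row-reduces $[A \mid I]$ and exposes the telltale pattern: a zero row in the left block sitting next to nonzero entries in the right block.

```python
from fractions import Fraction

def reduce_augmented(A):
    """Row-reduce [A | I] as far as possible and return the whole tableau,
    so a zero row in the left block can be inspected directly."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    row = 0
    for col in range(n):
        piv = next((r for r in range(row, n) if M[r][col] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        M[row], M[piv] = M[piv], M[row]
        p = M[row][col]
        M[row] = [x / p for x in M[row]]
        for r in range(n):
            if r != row:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[row])]
        row += 1
    return M

# Singular by construction: third row = first row + second row.
A = [[1, 2, 3], [2, 5, 7], [3, 7, 10]]
M = reduce_augmented(A)
print(M[2][:3])   # zero row in the left (coefficient) block
print(M[2][3:])   # nonzero entries beside it: no inverse exists
```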
Use the method of this section to find the inverses of the following matrices whenever possible. If an inverse does not exist, explain why.
Answer.
The inverse does not exist. When the augmented matrix is row-reduced (see below), the last row of its left half cannot be manipulated to match the corresponding row of the identity matrix.
Express each system of equations in Exercise 12.1.7.1 in the form $AX = B$. When possible, solve each system by first finding the inverse of the matrix of coefficients.
Answer.
The solutions are in the solution section of Section 12.1, Exercise 1. We illustrate with the outline of the solution to part (c). The matrix version of the system is
We compute the inverse of the matrix of coefficients and get
and
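The system from part (c) is not reproduced above, so the sketch below uses a hypothetical coefficient matrix of our own whose inverse is known, to show the mechanics of solving $AX = B$ once $A^{-1}$ is in hand: the solution is simply $X = A^{-1}B$.

```python
def matvec(M, v):
    """Multiply a square matrix by a column vector."""
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

# Hypothetical data: A_inv is the inverse of [[1,1,0],[0,1,1],[0,0,1]].
A_inv = [[1, -1, 1], [0, 1, -1], [0, 0, 1]]
b = [3, 5, 2]

x = matvec(A_inv, b)   # X = A^{-1} B solves AX = B
print(x)               # prints [0, 3, 2]
```

One payoff of this approach, as the exercise suggests: once $A^{-1}$ is computed, any right-hand side $B$ can be solved with a single matrix-vector multiplication.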