
Section MINM Matrix Inverses and Nonsingular Matrices

We saw in Theorem CINM that if a square matrix \(A\) is nonsingular, then there is a matrix \(B\) so that \(AB=I_n\text{.}\) In other words, \(B\) is halfway to being an inverse of \(A\text{.}\) We will see in this section that \(B\) automatically fulfills the second condition (\(BA=I_n\)). Example MWIAA showed us that the coefficient matrix from Archetype A had no inverse. Not coincidentally, this coefficient matrix is singular. We will make all these connections precise now. Not many examples or definitions in this section, just theorems.

Subsection NMI Nonsingular Matrices are Invertible

We need a couple of technical results for starters. Some books would call these minor, but essential, results lemmas. We’ll just call ’em theorems. See Proof Technique LC for more on the distinction.
The first of these technical results is interesting in that the hypothesis says something about a product of two square matrices and the conclusion then says the same thing about each individual matrix in the product. This result has an analogy in the algebra of complex numbers: suppose \(\alpha,\,\beta\in\complexes\text{,}\) then \(\alpha\beta\neq 0\) if and only if \(\alpha\neq 0\) and \(\beta\neq 0\) (see Theorem ZPZF). We can view this result as suggesting that the term “nonsingular” for matrices is like the term “nonzero” for scalars. Consider too that we know singular matrices, as coefficient matrices for systems of equations, will sometimes lead to systems with no solutions, or systems with infinitely many solutions (Theorem NMUS). What do linear equations with zero look like? Consider \(0x=5\text{,}\) which has no solution, and \(0x=0\text{,}\) which has infinitely many solutions. In the algebra of scalars, zero is exceptional (meaning different, not better), and in the algebra of matrices, singular matrices are also the exception. While there is only one zero scalar, and there are infinitely many singular matrices, we will see that singular matrices are a distinct minority. You will recall that at the end of Section NM we remarked that nonsingular seemed an odd word for matrices that led to systems with a single solution. Now, instead of thinking of “nonsingular” this way, rather think of it as being more like how “nonzero” is used for scalars.

Proof.

(⇒) 
For this portion of the proof we will form the logically-equivalent contrapositive and prove that statement using two cases. “\(AB\) is nonsingular implies \(A\) and \(B\) are both nonsingular” becomes “\(A\) or \(B\) is singular implies \(AB\) is singular.” (Be sure to understand why the “and” became an “or”; see Proof Technique CP.)
Case 1. \(B\) is singular.
Because \(B\) is singular there must be a nonzero vector \(\vect{z}\) that is a solution to \(\homosystem{B}\text{.}\) So
\begin{align*} (AB)\vect{z}&=A(B\vect{z})&&\knowl{./knowl/xref/theorem-MMA.html}{\text{Theorem MMA}}\\ &=A\zerovector&&\knowl{./knowl/xref/theorem-SLEMM.html}{\text{Theorem SLEMM}}\\ &=\zerovector&&\knowl{./knowl/xref/theorem-MMZM.html}{\text{Theorem MMZM}}\text{.} \end{align*}
With Theorem SLEMM we can translate this vector equality to the statement that \(\vect{z}\) is a nonzero solution to \(\homosystem{AB}\text{.}\) Thus \(AB\) is singular (Definition NM), as desired.
Case 2. \(B\) is nonsingular.
Since \(B\) is nonsingular, our hypothesis implies that \(A\) must be singular. Because \(A\) is singular, there is a nonzero vector \(\vect{y}\) that is a solution to \(\homosystem{A}\text{.}\) Now consider the linear system \(\linearsystem{B}{\vect{y}}\text{.}\) Since \(B\) is nonsingular, the system has a unique solution (Theorem NMUS), which we will denote as \(\vect{w}\text{.}\) We first claim that \(\vect{w}\) is not the zero vector either. Suppose to the contrary that \(\vect{w}=\zerovector\) (Proof Technique CD). Then
\begin{align*} \vect{y}&=B\vect{w}&&\knowl{./knowl/xref/theorem-SLEMM.html}{\text{Theorem SLEMM}}\\ &=B\zerovector&&\text{Hypothesis}\\ &=\zerovector&&\knowl{./knowl/xref/theorem-MMZM.html}{\text{Theorem MMZM}} \end{align*}
contrary to \(\vect{y}\) being nonzero. So \(\vect{w}\neq\zerovector\text{.}\)
The pieces are in place, so here we go,
\begin{align*} (AB)\vect{w}&=A(B\vect{w})&&\knowl{./knowl/xref/theorem-MMA.html}{\text{Theorem MMA}}\\ &=A\vect{y}&&\knowl{./knowl/xref/theorem-SLEMM.html}{\text{Theorem SLEMM}}\\ &=\zerovector&&\knowl{./knowl/xref/theorem-SLEMM.html}{\text{Theorem SLEMM}}\text{.} \end{align*}
With Theorem SLEMM we can translate this vector equality to the statement that \(\vect{w}\) is a nonzero solution to \(\homosystem{AB}\text{.}\) Thus \(AB\) is singular (Definition NM), as desired. And this conclusion holds for both cases.
(⇐) 
Now assume that both \(A\) and \(B\) are nonsingular. Suppose that \(\vect{x}\in\complex{n}\) is a solution to \(\homosystem{AB}\text{.}\) Then
\begin{align*} \zerovector&=\left(AB\right)\vect{x}&&\knowl{./knowl/xref/theorem-SLEMM.html}{\text{Theorem SLEMM}}\\ &=A\left(B\vect{x}\right)&&\knowl{./knowl/xref/theorem-MMA.html}{\text{Theorem MMA}}\text{.} \end{align*}
By Theorem SLEMM, \(B\vect{x}\) is a solution to \(\homosystem{A}\text{,}\) and by the definition of a nonsingular matrix (Definition NM), we conclude that \(B\vect{x}=\zerovector\text{.}\) Now, by an entirely similar argument, the nonsingularity of \(B\) forces us to conclude that \(\vect{x}=\zerovector\text{.}\) So the only solution to \(\homosystem{AB}\) is the zero vector and we conclude that \(AB\) is nonsingular by Definition NM.
This is a powerful result in the “forward” direction, because it allows us to begin with a hypothesis that something complicated (the matrix product \(AB\)) has the property of being nonsingular, and we can then conclude that the simpler constituents (\(A\) and \(B\) individually) also have the property of being nonsingular. If we had thought that the matrix product was an artificial construction, results like this would make us begin to think twice.
The contrapositive of this entire result is equally interesting. It says that \(A\) or \(B\) (or both) is a singular matrix if and only if the product \(AB\) is singular. (See Proof Technique CP.)
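For a concrete illustration of Case 1 of the proof, take the small matrices below (chosen purely for illustration). Here \(B\) is singular with \(B\vect{z}=\zerovector\) for the nonzero vector \(\vect{z}\text{,}\) so \(\vect{z}\) is also a nonzero solution to \(\homosystem{AB}\) and \(AB\) is singular, whatever \(A\) happens to be.
\begin{align*} A&=\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} & B&=\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} & \vect{z}&=\colvector{1\\-1} & (AB)\vect{z}&=A\left(B\vect{z}\right)=A\zerovector=\zerovector\text{.} \end{align*}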

Proof.

The matrix \(I_n\) is nonsingular (since it row-reduces easily to \(I_n\text{,}\) Theorem NMRRI). Since \(AB=I_n\) by hypothesis, Theorem NPNF tells us that \(A\) and \(B\) are nonsingular, so in particular \(B\) is nonsingular. We can therefore apply Theorem CINM to assert the existence of a matrix \(C\) so that \(BC=I_n\text{.}\) This application of Theorem CINM could be a bit confusing, mostly because of the names of the matrices involved. \(B\) is nonsingular, so there must be a “right-inverse” for \(B\text{,}\) and we are calling it \(C\text{.}\)
Now
\begin{align*} BA &=(BA)I_n&& \knowl{./knowl/xref/theorem-MMIM.html}{\text{Theorem MMIM}}\\ &=(BA)(BC)&& \knowl{./knowl/xref/theorem-CINM.html}{\text{Theorem CINM}}\\ &=B(AB)C&& \knowl{./knowl/xref/theorem-MMA.html}{\text{Theorem MMA}}\\ &=BI_nC&&\text{Hypothesis}\\ &=BC&& \knowl{./knowl/xref/theorem-MMIM.html}{\text{Theorem MMIM}}\\ &=I_n&& \knowl{./knowl/xref/theorem-CINM.html}{\text{Theorem CINM}} \end{align*}
which is the desired conclusion.
So Theorem OSIS tells us that if \(A\) is nonsingular, then the matrix \(B\) guaranteed by Theorem CINM will be both a “right-inverse” and a “left-inverse” for \(A\text{,}\) so \(A\) is invertible and \(\inverse{A}=B\text{.}\)
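Here is a small worked instance with a matrix of our own choosing. Following the procedure of Theorem CINM, we augment \(A\) with \(I_2\) and row-reduce; the last two columns give \(B\text{,}\) and a direct check confirms that \(AB=I_2\) and \(BA=I_2\text{,}\) just as Theorem OSIS guarantees.
\begin{align*} A&=\begin{bmatrix} 1 & 2 \\ 1 & 3 \end{bmatrix} & \augmented{A}{I_2}&=\begin{bmatrix} 1 & 2 & 1 & 0 \\ 1 & 3 & 0 & 1 \end{bmatrix} \rref \begin{bmatrix} \leading{1} & 0 & 3 & -2 \\ 0 & \leading{1} & -1 & 1 \end{bmatrix} & B&=\begin{bmatrix} 3 & -2 \\ -1 & 1 \end{bmatrix}\text{.} \end{align*}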
So if you have a nonsingular matrix, \(A\text{,}\) you can use the procedure described in Theorem CINM to find an inverse for \(A\text{.}\) If \(A\) is singular, then the procedure in Theorem CINM will fail as the first \(n\) columns of \(M\) will not row-reduce to the identity matrix. However, we can say a bit more. When \(A\) is singular, then \(A\) does not have an inverse (which is very different from saying that the procedure in Theorem CINM fails to find an inverse). This may feel like we are splitting hairs, but it is important that we do not make unfounded assumptions. These observations motivate the next theorem.

Proof.

(⇐) 
Since \(A\) is invertible, we can write \(I_n=A\inverse{A}\) (Definition MI). Notice that \(I_n\) is nonsingular (Theorem NMRRI) so Theorem NPNF implies that \(A\) (and \(\inverse{A}\)) is nonsingular.
(⇒) 
Suppose now that \(A\) is nonsingular. By Theorem CINM we find \(B\) so that \(AB=I_n\text{.}\) Then Theorem OSIS tells us that \(BA=I_n\text{.}\) So \(B\) is \(A\)’s inverse, and by construction, \(A\) is invertible.
So for a square matrix, the properties of having an inverse and of having a trivial null space are one and the same. Cannot have one without the other.

Proof.

We can update our list of equivalences for nonsingular matrices (Theorem NME2) with the equivalent condition from Theorem NI.
In the case that \(A\) is a nonsingular coefficient matrix of a system of equations, the inverse allows us to very quickly compute the unique solution, for any vector of constants.

Proof.

By Theorem NMUS we know already that \(\linearsystem{A}{\vect{b}}\) has a unique solution for every choice of \(\vect{b}\text{.}\) We need to show that the expression stated is indeed a solution (the solution). That is easy, just “plug it in” to the vector equation representation of the system (Theorem SLEMM)
\begin{align*} A\left(\inverse{A}\vect{b}\right) &=\left(A\inverse{A}\right)\vect{b}&& \knowl{./knowl/xref/theorem-MMA.html}{\text{Theorem MMA}}\\ &=I_n\vect{b}&& \knowl{./knowl/xref/definition-MI.html}{\text{Definition MI}}\\ &=\vect{b}&& \knowl{./knowl/xref/theorem-MMIM.html}{\text{Theorem MMIM}}\text{.} \end{align*}
Since \(A\vect{x}=\vect{b}\) is true when we substitute \(\inverse{A}\vect{b}\) for \(\vect{x}\text{,}\) \(\inverse{A}\vect{b}\) is a (the!) solution to \(\linearsystem{A}{\vect{b}}\text{.}\) See the solution to Exercise MM.T38 for an alternate approach to the uniqueness of the system’s solution.
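As a quick illustration with a \(2\times 2\) system of our own choosing, suppose
\begin{align*} A&=\begin{bmatrix} 1 & 2 \\ 1 & 3 \end{bmatrix} & \inverse{A}&=\begin{bmatrix} 3 & -2 \\ -1 & 1 \end{bmatrix} & \vect{b}&=\colvector{5\\7}\text{.} \end{align*}
Then the unique solution to \(\linearsystem{A}{\vect{b}}\) is \(\inverse{A}\vect{b}=\colvector{1\\2}\text{,}\) and indeed \(A\colvector{1\\2}=\colvector{5\\7}=\vect{b}\text{.}\)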
The inverse of a triangular matrix is triangular, of the same type. We will need this result for the proof of Theorem OBUTR.

Proof.

We give the proof for the case when \(A\) is lower triangular of size \(n\text{,}\) and leave the case when \(A\) is upper triangular for you. Consider the process for computing the inverse of a matrix that is outlined in the proof of Theorem CINM. We augment \(A\) with the size \(n\) identity matrix, \(I_n\text{,}\) and row-reduce the \(n\times 2n\) matrix to reduced row-echelon form via the algorithm in Theorem REMEF. The proof involves tracking the peculiarities of this process in the case of a lower triangular matrix. Let \(M=\augmented{A}{I_n}\text{.}\)
First, by Theorem NTM, none of the diagonal elements of \(A\) are zero. We follow the procedure of Theorem REMEF in a slightly different order, first creating a row-equivalent matrix \(M^\prime\text{.}\) For each \(1\leq i\leq n\text{,}\) multiply row \(i\) by the nonzero scalar \(\matrixentry{A}{ii}^{-1}\text{.}\) This sets \(\matrixentry{M^\prime}{ii}=1\) and \(\matrixentry{M^\prime}{i,n+i}=\matrixentry{A}{ii}^{-1}\text{,}\) and leaves every zero entry of \(M\) unchanged.
Let \(M_j\) denote the matrix obtained from \(M^\prime\) after converting column \(j\) to a pivot column. We can convert column \(j\) of \(M_{j-1}\) into a pivot column with a set of \(n-j\) row operations of the form \(\rowopadd{\alpha}{j}{k}\) with \(j+1\leq k\leq n\text{.}\) The key observation here is that we add multiples of row \(j\) only to higher-numbered rows. This means that none of the entries in rows \(1\) through \(j-1\) is changed, and since row \(j\) has zeros in columns \(j+1\) through \(n\text{,}\) none of the entries in rows \(j+1\) through \(n\) is changed in columns \(j+1\) through \(n\text{.}\) The first \(n\) columns of \(M^\prime\) form a lower triangular matrix with 1’s on the diagonal. In its conversion to the identity matrix through this sequence of row operations, it remains lower triangular with 1’s on the diagonal.
What happens in columns \(n+1\) through \(2n\) of \(M^\prime\text{?}\) These columns began in \(M\) as the identity matrix, and in \(M^\prime\) each diagonal entry was scaled to a reciprocal of the corresponding diagonal entry of \(A\text{.}\) Notice that trivially, these final \(n\) columns of \(M^\prime\) form a lower triangular matrix. Just as we argued for the first \(n\) columns, the row operations that convert \(M_{j-1}\) into \(M_j\) will preserve the lower triangular form in the final \(n\) columns and preserve the exact values of the diagonal entries. By Theorem CINM, the final \(n\) columns of \(M_n\) form the inverse of \(A\text{,}\) and this matrix has the necessary properties advertised in the conclusion of this theorem.
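Here is a small instance with a lower triangular matrix of our own choosing. The inverse is again lower triangular, and its diagonal entries are the reciprocals of the diagonal entries of \(A\text{,}\) just as the proof describes.
\begin{align*} A&=\begin{bmatrix} 2 & 0 \\ 3 & 4 \end{bmatrix} & \inverse{A}&=\begin{bmatrix} \frac{1}{2} & 0 \\ -\frac{3}{8} & \frac{1}{4} \end{bmatrix}\text{.} \end{align*}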

Sage MI. Matrix Inverse.

Now we know that invertibility is equivalent to nonsingularity, and that the procedure outlined in Theorem CINM will always yield an inverse for a nonsingular matrix. But rather than using that procedure, Sage implements a .inverse() method. In the following, we compute the inverse of a \(3\times 3\) matrix, and then purposely convert it to a singular matrix by replacing the last column by a linear combination of the first two.
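Here is a sketch of how such a cell might look, with a \(3\times 3\) matrix of our own choosing (not necessarily the matrix used in the original cell).

A = matrix(QQ, [[1, 2, 1],
                [0, 1, 1],
                [1, 0, 2]])
A.inverse()

C = copy(A)
C.set_column(2, A.column(0) + 2*A.column(1))   # last column is now a linear combination of the first two
C.is_singular()                                # True
C.inverse()                                    # raises ZeroDivisionError: the matrix must be nonsingular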
Notice how the failure to invert C is explained by the matrix being singular.
Systems with nonsingular coefficient matrices can be solved easily with the matrix inverse. We will recycle A as a coefficient matrix, so be sure to execute the code above.
If you find it more convenient, you can use the same notation as the text for a matrix inverse.
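Here is a sketch of both computations, reusing the matrix A above and a vector of constants chosen purely for illustration; the final line uses the caret notation for the inverse, which Sage also accepts.

b = vector(QQ, [3, 1, 4])
x = A.inverse() * b
A*x == b        # True
A^-1 * b == x   # True, the same computation in the text's notation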

Sage NME3. Nonsingular Matrix Equivalences, Round 3.

For square matrices, Sage has the methods .is_singular() and .is_invertible(). By Theorem NI we know these two functions to be logical opposites. One way to express this is that these two methods will always return different values. Here we demonstrate with a nonsingular matrix and a singular matrix. The comparison != is “not equal.”
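Here is a sketch of the demonstration with two small matrices of our own choosing, one nonsingular and one singular.

A = matrix(QQ, [[1, 2],
                [3, 4]])   # nonsingular, nonzero determinant
C = matrix(QQ, [[1, 2],
                [2, 4]])   # singular, second row is twice the first
A.is_singular() != A.is_invertible()   # True
C.is_singular() != C.is_invertible()   # True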
We could test other properties of the matrix inverse, such as Theorem SS.

Subsection UM Unitary Matrices

Recall that the adjoint of a matrix is \(\adjoint{A}=\transpose{\left(\conjugate{A}\right)}\) (Definition A).

Definition UM. Unitary Matrices.

Suppose that \(U\) is a square matrix of size \(n\) such that \(\adjoint{U}U=I_n\text{.}\) Then we say \(U\) is unitary.
This condition may seem rather far-fetched at first glance. Would there be any matrix that behaved this way? Well, yes, here is one.

Example UM3. Unitary matrix of size 3.

Let
\begin{equation*} U= \begin{bmatrix} \frac{1 + i }{{\sqrt{5}}} & \frac{3 + 2\,i }{{\sqrt{55}}} & \frac{2+2i}{\sqrt{22}} \\ \frac{1 - i }{{\sqrt{5}}} & \frac{2 + 2\,i }{{\sqrt{55}}} & \frac{-3 + i }{{\sqrt{22}}} \\ \frac{i }{{\sqrt{5}}} & \frac{3 - 5\,i }{{\sqrt{55}}} & -\frac{2}{\sqrt{22}} \end{bmatrix}\text{.} \end{equation*}
The computations get a bit tiresome, but if you work your way through the computation of \(\adjoint{U}U\text{,}\) you will arrive at the \(3\times 3\) identity matrix \(I_3\text{.}\)
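For instance, the entry of \(\adjoint{U}U\) in row 1 and column 1 works out to
\begin{equation*} \conjugate{\left(\frac{1+i}{\sqrt{5}}\right)}\frac{1+i}{\sqrt{5}}+ \conjugate{\left(\frac{1-i}{\sqrt{5}}\right)}\frac{1-i}{\sqrt{5}}+ \conjugate{\left(\frac{i}{\sqrt{5}}\right)}\frac{i}{\sqrt{5}} =\frac{2}{5}+\frac{2}{5}+\frac{1}{5}=1 \end{equation*}
and the remaining eight entries are computed similarly.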
Unitary matrices do not have to look quite so gruesome. Here is a larger one that is a bit more pleasing.

Example UPM. Unitary permutation matrix.

The matrix
\begin{equation*} P= \begin{bmatrix} 0&1&0&0&0\\ 0&0&0&1&0\\ 1&0&0&0&0\\ 0&0&0&0&1\\ 0&0&1&0&0 \end{bmatrix} \end{equation*}
is unitary as can be easily checked. Notice that it is just a rearrangement of the columns of the \(5\times 5\) identity matrix, \(I_5\) (Definition IM).
An interesting exercise is to build another \(5\times 5\) unitary matrix, \(R\text{,}\) using a different rearrangement of the columns of \(I_5\text{.}\) Then form the product \(PR\text{.}\) This will be another unitary matrix (Exercise MINM.T10). If you were to build all \(5!=5\times 4\times 3\times 2\times 1=120\) matrices of this type you would have a set that remains closed under matrix multiplication. It is an example of another algebraic structure known as a group, since the set together with the one operation (matrix multiplication here) is closed, is associative, has an identity (\(I_5\)), and has inverses (Theorem UMI). Notice though that the operation in this group is not commutative!
If a matrix \(A\) has only real number entries (we say it is a real matrix) then the defining property of being unitary simplifies to \(\transpose{A}A=I_n\text{.}\) In this case we, and everybody else, call the matrix orthogonal, so you may often encounter this term in your other reading when the complex numbers are not under consideration.
Unitary matrices have easily computed inverses. They also have columns that form orthonormal sets. Here are the theorems that show us that unitary matrices are not as strange as they might initially appear.

Proof.

By Definition UM, we know that \(\adjoint{U}U=I_n\text{.}\) The matrix \(I_n\) is nonsingular (since it row-reduces easily to \(I_n\text{,}\) Theorem NMRRI). So by Theorem NPNF, \(U\) and \(\adjoint{U}\) are both nonsingular matrices.
The equation \(\adjoint{U}U=I_n\) gets us halfway to an inverse of \(U\text{,}\) and Theorem OSIS tells us that then \(U\adjoint{U}=I_n\) also. So \(U\) and \(\adjoint{U}\) are inverses of each other (Definition MI).

Proof.

The proof revolves around recognizing that a typical entry of the product \(\adjoint{A}A\) is an inner product of columns of \(A\text{.}\) Here are the details to support this claim.
\begin{align*} \matrixentry{\adjoint{A}A}{ij} &=\sum_{k=1}^{n}\matrixentry{\adjoint{A}}{ik}\matrixentry{A}{kj}&& \knowl{./knowl/xref/theorem-EMP.html}{\text{Theorem EMP}}\\ &=\sum_{k=1}^{n}\matrixentry{\transpose{\conjugate{A}}}{ik}\matrixentry{A}{kj}&& \knowl{./knowl/xref/definition-A.html}{\text{Definition A}}\\ &=\sum_{k=1}^{n}\matrixentry{\,\conjugate{A}\,}{ki}\matrixentry{A}{kj}&& \knowl{./knowl/xref/definition-TM.html}{\text{Definition TM}}\\ &=\sum_{k=1}^{n}\conjugate{\matrixentry{A}{ki}}\matrixentry{A}{kj}&& \knowl{./knowl/xref/definition-CCM.html}{\text{Definition CCM}}\\ &=\sum_{k=1}^{n}\conjugate{\vectorentry{\vect{A}_i}{k}}\vectorentry{\vect{A}_j}{k}\\ &=\innerproduct{\vect{A}_i}{\vect{A}_j}&& \knowl{./knowl/xref/definition-IP.html}{\text{Definition IP}} \end{align*}
We now employ this equality in a chain of equivalences,
\begin{align*} &S=\set{\vectorlist{A}{n}}\text{ is an orthonormal set}\\ &\iff \innerproduct{\vect{A}_i}{\vect{A}_j}= \begin{cases} 0 &\text{if }i\neq j\\ 1 &\text{if }i=j \end{cases}&& \knowl{./knowl/xref/definition-ONS.html}{\text{Definition ONS}}\\ &\iff \matrixentry{\adjoint{A}A}{ij}= \begin{cases} 0 &\text{if }i\neq j\\ 1 &\text{if }i=j \end{cases}\\ &\iff \matrixentry{\adjoint{A}A}{ij}=\matrixentry{I_n}{ij},\ 1\leq i\leq n,\ 1\leq j\leq n&& \knowl{./knowl/xref/definition-IM.html}{\text{Definition IM}}\\ &\iff \adjoint{A}A=I_n&& \knowl{./knowl/xref/definition-ME.html}{\text{Definition ME}}\\ &\iff A\text{ is a unitary matrix}&& \knowl{./knowl/xref/definition-UM.html}{\text{Definition UM}} \end{align*}

Example OSMC. Orthonormal set from matrix columns.

The matrix
\begin{equation*} U= \begin{bmatrix} \frac{1 + i }{{\sqrt{5}}} & \frac{3 + 2\,i }{{\sqrt{55}}} & \frac{2+2i}{\sqrt{22}} \\ \frac{1 - i }{{\sqrt{5}}} & \frac{2 + 2\,i }{{\sqrt{55}}} & \frac{-3 + i }{{\sqrt{22}}} \\ \frac{i }{{\sqrt{5}}} & \frac{3 - 5\,i }{{\sqrt{55}}} & -\frac{2}{\sqrt{22}} \end{bmatrix} \end{equation*}
from Example UM3 is a unitary matrix. By Theorem CUMOS, its columns
\begin{equation*} \set{ \colvector{ \frac{1 + i }{{\sqrt{5}}}\\ \frac{1 - i }{{\sqrt{5}}}\\ \frac{i }{{\sqrt{5}}} },\, \colvector{ \frac{3 + 2\,i }{{\sqrt{55}}}\\ \frac{2 + 2\,i }{{\sqrt{55}}}\\ \frac{3 - 5\,i }{{\sqrt{55}}} },\, \colvector{ \frac{2+2i}{\sqrt{22}}\\ \frac{-3 + i }{{\sqrt{22}}}\\ -\frac{2}{\sqrt{22}} } } \end{equation*}
form an orthonormal set. You might find checking the six inner products of pairs of these vectors easier than doing the matrix product \(\adjoint{U}U\text{.}\) Or, because the inner product is anti-commutative (Theorem IPAC), you only need check three inner products (see Exercise MINM.T12).
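As a sample of one of these checks, the inner product of the first and second columns is
\begin{align*} \conjugate{\left(\frac{1+i}{\sqrt{5}}\right)}\frac{3+2i}{\sqrt{55}}+ \conjugate{\left(\frac{1-i}{\sqrt{5}}\right)}\frac{2+2i}{\sqrt{55}}+ \conjugate{\left(\frac{i}{\sqrt{5}}\right)}\frac{3-5i}{\sqrt{55}} &=\frac{(1-i)(3+2i)+(1+i)(2+2i)+(-i)(3-5i)}{\sqrt{275}}\\ &=\frac{(5-i)+4i+(-5-3i)}{\sqrt{275}}=0\text{,} \end{align*}
exactly as orthogonality requires.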
When using vectors and matrices that only have real number entries, orthogonal matrices are those matrices with inverses that equal their transpose. Similarly, the inner product is the familiar dot product. Keep this special case in mind as you read the next theorem.

Proof.

\begin{align*} \innerproduct{U\vect{u}}{U\vect{v}} &=\transpose{\left(\conjugate{U\vect{u}}\right)}U\vect{v}&& \knowl{./knowl/xref/theorem-MMIP.html}{\text{Theorem MMIP}}\\ &=\transpose{\left(\conjugate{U}\conjugate{\vect{u}}\right)}U\vect{v}&& \knowl{./knowl/xref/theorem-MMCC.html}{\text{Theorem MMCC}}\\ &=\transpose{\conjugate{\vect{u}}}\transpose{\conjugate{U}}U\vect{v}&& \knowl{./knowl/xref/theorem-MMT.html}{\text{Theorem MMT}}\\ &=\transpose{\conjugate{\vect{u}}}\adjoint{U}U\vect{v}&& \knowl{./knowl/xref/definition-A.html}{\text{Definition A}}\\ &=\transpose{\conjugate{\vect{u}}}I_n\vect{v}&& \knowl{./knowl/xref/definition-UM.html}{\text{Definition UM}}\\ &=\transpose{\conjugate{\vect{u}}}\vect{v}&& \knowl{./knowl/xref/theorem-MMIM.html}{\text{Theorem MMIM}}\\ &=\innerproduct{\vect{u}}{\vect{v}}&& \knowl{./knowl/xref/theorem-MMIP.html}{\text{Theorem MMIP}} \end{align*}
The second conclusion is just a specialization of the first conclusion.
\begin{align*} \norm{U\vect{v}} &=\sqrt{\norm{U\vect{v}}^2}\\ &=\sqrt{\innerproduct{U\vect{v}}{U\vect{v}}}&& \knowl{./knowl/xref/theorem-IPN.html}{\text{Theorem IPN}}\\ &=\sqrt{\innerproduct{\vect{v}}{\vect{v}}}\\ &=\sqrt{\norm{\vect{v}}^2}&& \knowl{./knowl/xref/theorem-IPN.html}{\text{Theorem IPN}}\\ &=\norm{\vect{v}}&& \end{align*}
Aside from the inherent interest in this theorem, it makes a bigger statement about unitary matrices. When we view vectors geometrically as directions or forces, then the norm equates to a notion of length. If we transform a vector by multiplication with a unitary matrix, then the length (norm) of that vector stays the same. If we consider column vectors with two or three slots containing only real numbers, then the inner product of two such vectors is just the dot product, and this quantity can be used to compute the angle between two vectors. When two vectors are multiplied (transformed) by the same unitary matrix, their dot product is unchanged and their individual lengths are unchanged. This results in the angle between the two vectors remaining unchanged.
A unitary transformation (a matrix-vector product with a unitary matrix) thus preserves geometrical relationships among vectors representing directions, forces, or other physical quantities. In the case of a two-slot vector with real entries, this is simply a rotation. These sorts of computations are exceedingly important in computer graphics such as games and real-time simulations, especially when increased realism is achieved by performing many such computations quickly. We will see unitary matrices again in subsequent sections (especially Theorem OD) and in each instance, consider the interpretation of the unitary matrix as a sort of geometry-preserving transformation. Some authors use the term isometry to highlight this behavior. We will speak loosely of a unitary matrix as being a sort of generalized rotation.
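In the real \(2\times 2\) case this is easy to see directly. For any angle \(\theta\text{,}\) the rotation matrix \(Q\) below is orthogonal (and hence unitary), since \(\cos^2\theta+\sin^2\theta=1\) gives
\begin{equation*} Q=\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \qquad \transpose{Q}Q=\begin{bmatrix} \cos^2\theta+\sin^2\theta & 0 \\ 0 & \sin^2\theta+\cos^2\theta \end{bmatrix}=I_2\text{.} \end{equation*}
Multiplication by \(Q\) rotates a vector through the angle \(\theta\) while leaving its length unchanged.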

Sage UM. Unitary Matrices.

No surprise about how we check if a matrix is unitary. Here is Example UM3,
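Here is a sketch of the check; entering the matrix over QQbar lets Sage verify the condition exactly.

U = matrix(QQbar, [
    [(1+I)/sqrt(5), (3+2*I)/sqrt(55), (2+2*I)/sqrt(22)],
    [(1-I)/sqrt(5), (2+2*I)/sqrt(55),  (-3+I)/sqrt(22)],
    [    I/sqrt(5), (3-5*I)/sqrt(55),     -2/sqrt(22)]
])
U.is_unitary()                                      # True
U.conjugate_transpose() * U == identity_matrix(3)   # True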
We can verify Theorem UMPIP, where the vectors u and v are created randomly. Try evaluating this compute cell with your own choices.
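Here is a sketch using the matrix U above. The original cell chooses u and v randomly; we instead fix two complex vectors of our own choosing so the computation is reproducible, and use hermitian_inner_product() for the inner product.

u = vector(QQbar, [2, -1 + I, 3*I])
v = vector(QQbar, [1 + I, 4, -2])
(U*u).hermitian_inner_product(U*v) == u.hermitian_inner_product(v)   # True
(U*u).norm() == u.norm()                                             # True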
If you want to experiment with permutation matrices, Sage has these too. We can create a permutation matrix from a list that indicates for each column the row with a one in it. Notice that the product here of two permutation matrices is again a permutation matrix.
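Here is a sketch with two permutations of our own choosing; Permutation(...).to_matrix() is one way to build a matrix of the type just described.

P = Permutation([3, 1, 4, 5, 2]).to_matrix()
Q = Permutation([2, 5, 3, 1, 4]).to_matrix()
P.is_unitary() and Q.is_unitary()   # True
P*Q                                 # again a permutation matrix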
A final reminder: the terms “dot product,” “symmetric matrix” and “orthogonal matrix” used in reference to vectors or matrices with real number entries are special cases of the terms “inner product,” “Hermitian matrix” and “unitary matrix” that we use for vectors or matrices with complex number entries, so keep that in mind as you read elsewhere.

Reading Questions MINM Reading Questions

1. Calculate inverse, solve system.

Compute the inverse of the coefficient matrix of the system of equations below and use the inverse to solve the system.
\begin{align*} 4x_1 + 10x_2 &= 12\\ 2x_1 + 6x_2 &= 4 \end{align*}

2. Inverse doesn’t exist.

In the reading questions for Section MISLE you were asked to find the inverse of the \(3\times 3\) matrix below.
\begin{equation*} \begin{bmatrix} 2 & 3 & 1\\ 1 & -2 & -3\\ -2 & 4 & 6 \end{bmatrix} \end{equation*}
Because the matrix was not nonsingular, you had no theorems at that point that would allow you to compute the inverse. Explain why you now know that the inverse does not exist (which is different than not being able to compute it) by quoting the relevant theorem’s acronym.

3. Unitary or not.

Is the matrix \(A\) unitary? Why?
\begin{equation*} A=\begin{bmatrix} \frac{1}{\sqrt{22}}\left(4+2i\right) & \frac{1}{\sqrt{374}}\left(5+3i\right) \\ \frac{1}{\sqrt{22}}\left(-1-i\right) & \frac{1}{\sqrt{374}}\left(12+14i\right) \\ \end{bmatrix} \end{equation*}

Exercises MINM Exercises

C20.

Verify that \(AB\) is nonsingular.
\begin{align*} A&= \begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 1\\ 1 & 0 & 2 \end{bmatrix} & B &= \begin{bmatrix} -1 & 1 & 0\\ 1 & 2 & 1\\ 0 & 1 & 1 \end{bmatrix} \end{align*}

C40.

Solve the system of equations below using the inverse of a matrix.
\begin{align*} x_1+x_2+3x_3+x_4&=5\\ -2x_1-x_2-4x_3-x_4&=-7\\ x_1+4x_2+10x_3+2x_4&=9\\ -2x_1-4x_3+5x_4&=9 \end{align*}
Solution.
The coefficient matrix and vector of constants for the system are
\begin{align*} A=\begin{bmatrix} 1 & 1 & 3 & 1\\ -2 & -1 & -4 & -1\\ 1 & 4 & 10 & 2\\ -2 & 0 & -4 & 5 \end{bmatrix}&& \vect{b}=\colvector{5\\-7\\9\\9}\text{.} \end{align*}
\(\inverse{A}\) can be computed by using a calculator, or by the method of Theorem CINM. Then Theorem SNCM says the unique solution is
\begin{equation*} \inverse{A}\vect{b}= \begin{bmatrix} 38 & 18 & -5 & -2\\ 96 & 47 & -12 & -5\\ -39 & -19 & 5 & 2\\ -16 & -8 & 2 & 1 \end{bmatrix} \colvector{5\\-7\\9\\9} = \colvector{1\\-2\\1\\3}\text{.} \end{equation*}

M10.

Find values of \(x\text{,}\) \(y\text{,}\) \(z\) so that matrix \(A\) is invertible.
\begin{equation*} A = \begin{bmatrix} 1 & 2 & x\\ 3 & 0 & y \\ 1 & 1 & z \end{bmatrix} \end{equation*}
Solution.
There are an infinite number of possible answers. We want to find a vector \(\colvector{x \\ y \\ z}\) so that the set
\begin{gather*} S = \set{ \begin{bmatrix} 1 \\ 3 \\ 1 \end{bmatrix}, \begin{bmatrix} 2 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} x \\ y \\ z \end{bmatrix} } \end{gather*}
is a linearly independent set. We need a vector not in the span of the first two columns, which geometrically means that we need it to not be in the same plane as the first two columns of \(A\text{.}\) We can choose any values we want for \(x\) and \(y\text{,}\) and then choose a value of \(z\) that makes the three vectors independent.
I will (arbitrarily) choose \(x = 1\text{,}\) \(y = 1\text{.}\) Then, we have
\begin{align*} A = \begin{bmatrix} 1 & 2 & 1\\ 3 & 0 & 1 \\ 1 & 1 & z \end{bmatrix} &\rref \begin{bmatrix} \leading{1} & 0 & 2z-1 \\ 0 & \leading{1} & 1-z \\ 0 & 0 & 4 - 6z \end{bmatrix} \end{align*}
which is invertible if and only if \(4-6z \ne 0\text{.}\) Thus, we can choose any value as long as \(z \ne \frac{2}{3}\text{,}\) so we choose \(z = 0\text{,}\) and we have found a matrix \(A = \begin{bmatrix} 1 & 2 & 1\\ 3 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix}\) that is invertible.

M11.

Find values of \(x\text{,}\) \(y\text{,}\) \(z\) so that matrix \(A\) is singular.
\begin{equation*} A = \begin{bmatrix} 1 & x & 1\\ 1 & y & 4 \\ 0 & z & 5 \end{bmatrix} \end{equation*}
Solution.
There are an infinite number of possible answers. We need the set of vectors
\begin{align*} S &= \set{ \begin{bmatrix}1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix}x \\ y \\ z \end{bmatrix}, \begin{bmatrix} 1 \\ 4 \\ 5 \end{bmatrix} } \end{align*}
to be linearly dependent. One way to do this by inspection is to have \(\colvector{x \\ y \\ z} =\colvector{1 \\ 4 \\ 5}\text{.}\) Thus, if we let \(x = 1\text{,}\) \(y = 4\text{,}\) \(z = 5\text{,}\) then the matrix \(A = \begin{bmatrix} 1 & 1 & 1\\ 1 & 4 & 4 \\ 0 & 5 & 5 \end{bmatrix}\) is singular.

M15.

If \(A\) and \(B\) are \(n \times n\) matrices, \(A\) is nonsingular, and \(B\) is singular, show directly that \(AB\) is singular, without using Theorem NPNF.
Solution.
If \(B\) is singular, then there exists a vector \(\vect{x}\ne\zerovector\) so that \(\vect{x}\in \nsp{B}\text{.}\) Thus, \(B\vect{x} = \vect{0}\text{,}\) so \(A(B\vect{x}) = (AB)\vect{x} = \zerovector\text{,}\) so \(\vect{x}\in\nsp{AB}\text{.}\) Since the null space of \(AB\) is not trivial, \(AB\) is a singular matrix.

M20.

Construct an example of a \(4\times 4\) unitary matrix.
Solution.
The \(4\times 4\) identity matrix, \(I_4\text{,}\) would be one example (Definition IM). Any of the 23 other rearrangements of the columns of \(I_4\) would be a simple, but less trivial, example. See Example UPM.

M80.

Matrix multiplication interacts nicely with many operations. But not always with transforming a matrix to reduced row-echelon form. Suppose that \(A\) is an \(m\times n\) matrix and \(B\) is an \(n\times p\) matrix. Let \(P\) be a matrix that is row-equivalent to \(A\) and in reduced row-echelon form, \(Q\) be a matrix that is row-equivalent to \(B\) and in reduced row-echelon form, and let \(R\) be a matrix that is row-equivalent to \(AB\) and in reduced row-echelon form. Is \(PQ=R\text{?}\) (In other words, with nonstandard notation, is \(\text{rref}(A)\text{rref}(B)=\text{rref}(AB)\text{?}\))
Construct a counterexample to show that, in general, this statement is false. Then find a large class of matrices where if \(A\) and \(B\) are in the class, then the statement is true.
Solution.
Take
\begin{align*} A&= \begin{bmatrix} 1&0\\0&0 \end{bmatrix} & B&= \begin{bmatrix} 0&0\\1&0 \end{bmatrix}\text{.} \end{align*}
Then \(A\) is already in reduced row-echelon form, and by swapping rows, \(B\) row-reduces to \(A\text{.}\) So the product of the reduced row-echelon forms of \(A\) and \(B\) is \(AA=A\neq\zeromatrix\text{.}\) However, the product \(AB\) is the \(2\times 2\) zero matrix, which is in reduced row-echelon form, and not equal to \(AA\text{.}\) When you get there, Theorem PEEF or Theorem EMDRO might shed some light on why we would not expect this statement to be true in general.
If \(A\) and \(B\) are nonsingular, then \(AB\) is nonsingular (Theorem NPNF), and all three matrices \(A\text{,}\) \(B\) and \(AB\) row-reduce to the identity matrix (Theorem NMRRI). By Theorem MMIM, the desired relationship is true.

T10.

Suppose that \(Q\) and \(P\) are unitary matrices of size \(n\text{.}\) Prove that \(QP\) is a unitary matrix.

T11.

Prove that Hermitian matrices (Definition HM) have real entries on the diagonal. More precisely, suppose that \(A\) is a Hermitian matrix of size \(n\text{.}\) Then \(\matrixentry{A}{ii}\in\reals\text{,}\) \(1\leq i\leq n\text{.}\)

T12.

Suppose that we are checking if a square matrix of size \(n\) is unitary. Show that a straightforward application of Theorem CUMOS requires the computation of \(n^2\) inner products when the matrix is unitary, and fewer when the matrix is not unitary. Then show that this maximum number of inner products can be reduced to \(\frac{1}{2}n(n+1)\) in light of Theorem IPAC.

T25.

The notation \(A^k\) means a repeated matrix product between \(k\) copies of the square matrix \(A\text{.}\)
  1. Assume \(A\) is an \(n\times n\) matrix where \(A^2=\zeromatrix\) (which does not imply that \(A=\zeromatrix\)). Prove that \(I_n-A\) is invertible by showing that \(I_n+A\) is an inverse of \(I_n-A\text{.}\)
  2. Assume that \(A\) is an \(n\times n\) matrix where \(A^3=\zeromatrix\text{.}\) Prove that \(I_n-A\) is invertible.
  3. Form a general theorem based on your observations from parts (1) and (2) and provide a proof.

T50.

This exercise asks you to construct a new proof of a result similar to Theorem NPNF by adding extra assumptions and using less powerful results than we did in this section. Add the hypothesis that \(A\) and \(B\) are both upper triangular matrices. Establish the same equivalence as in Theorem NPNF, but make use of Theorem NTM, Theorem PTMT, and Exercise MM.T28. In particular, you should not need any results that occur after Section MM.