We need to begin with no assumptions about any relationships between $B$ and $C$, other than they are both in reduced row-echelon form, and they are both row-equivalent to $A$.

If $B$ and $C$ are both row-equivalent to $A$, then they are row-equivalent to each other. Repeated row operations on a matrix combine the rows with each other using operations that are linear, and are identical in each column. A key observation for this proof is that each individual row of $B$ is linearly related to the rows of $C$. This relationship is different for each row of $B$, but once we fix a row, the relationship is the same across columns. More precisely, there are scalars $\delta_{ik}$, $1\le i\le m$, $1\le k\le m$, such that for any $1\le i\le m$, $1\le j\le n$,
\[
[B]_{ij}=\sum_{k=1}^{m}\delta_{ik}[C]_{kj}
\]
You should read this as saying that an entry of row $i$ of $B$ (in column $j$) is a linear function of the entries of all the rows of $C$ that are also in column $j$, and the scalars ($\delta_{ik}$) depend on which row of $B$ we are considering (the $i$ subscript on $\delta_{ik}$), but are the same for every column (no dependence on $j$ in $\delta_{ik}$). This idea may be complicated now, but will feel more familiar once we discuss “linear combinations” (Definition LCCV) and more so when we discuss “row spaces” (Definition RSM). For now, spend some time carefully working Exercise RREF.M40, which is designed to illustrate the origins of this expression. This completes our exploitation of the row-equivalence of $B$ and $C$.
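The linear relationship above can be made concrete with a short computation. In this sketch (the matrix and the sequence of row operations are invented for illustration, not taken from the text), we apply row operations to a matrix $C$ and, in parallel, to an identity matrix; the tracked matrix then holds scalars playing the role of the $\delta_{ik}$, expressing each row of the result as a fixed linear combination of the rows of $C$.

```python
from fractions import Fraction

F = Fraction

def swap(m, i, k):
    m[i], m[k] = m[k], m[i]

def scale(m, i, a):
    m[i] = [a * x for x in m[i]]

def combine(m, a, k, i):
    # add a times row k to row i
    m[i] = [x + a * y for x, y in zip(m[i], m[k])]

# illustrative matrix, not from the text
C = [[F(2), F(0), F(1), F(3)],
     [F(1), F(1), F(0), F(2)],
     [F(0), F(4), F(1), F(1)]]

B = [row[:] for row in C]
M = [[F(1) if i == k else F(0) for k in range(3)] for i in range(3)]

# apply an arbitrary sequence of row operations to B and, in parallel, to M
for op in (lambda m: swap(m, 0, 2),
           lambda m: scale(m, 1, F(5)),
           lambda m: combine(m, F(-2), 0, 1)):
    op(B)
    op(M)

# every entry of B is the predicted combination of the rows of C,
# with the scalars M[i][k] playing the role of delta_ik
ok = all(B[i][j] == sum(M[i][k] * C[k][j] for k in range(3))
         for i in range(3) for j in range(4))
print(ok)   # True
```

Note that the scalars $M[i][k]$ do not depend on the column $j$; this is exactly the “same across columns” property used throughout the proof.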
We now repeatedly exploit the fact that $B$ and $C$ are in reduced row-echelon form. Recall that a pivot column is all zeros, except a single one. More carefully, if $R$ is a matrix in reduced row-echelon form, and $d_\ell$ is the index of a pivot column, then $[R]_{kd_\ell}=1$ precisely when $k=\ell$ and is otherwise zero. Notice also that any entry of $R$ that is both below the entry in row $\ell$ and to the left of column $d_\ell$ is also zero (with below and left understood to include equality). In other words, look at examples of matrices in reduced row-echelon form and choose a leading 1 (with a box around it). The rest of the column is also zeros, and the lower-left “quadrant” of the matrix that begins here is totally zeros.
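As a concrete illustration (this example matrix is ours, not from the text), consider
\[
R=\begin{bmatrix}
1 & 3 & 0 & 0 & 5\\
0 & 0 & \fbox{1} & 0 & 2\\
0 & 0 & 0 & 1 & 7\\
0 & 0 & 0 & 0 & 0
\end{bmatrix}
\]
Choosing the boxed leading 1 in row $\ell=2$, column $d_\ell=3$: the rest of column 3 is zero, and every entry in rows 2 through 4 and columns 1 through 3 is zero, apart from the leading 1 itself.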
Assuming no relationship about the form of $B$ and $C$, let $B$ have $r$ nonzero rows and denote the pivot columns as $D=\{d_1,\,d_2,\,d_3,\,\ldots,\,d_r\}$. For $C$ let $r'$ denote the number of nonzero rows and denote the pivot columns as $D'=\{d_1',\,d_2',\,d_3',\,\ldots,\,d_{r'}'\}$ (Definition RREF). There are four steps in the proof, and the first three are about showing that $B$ and $C$ have the same number of pivot columns, in the same places. In other words, the “primed” symbols are a necessary fiction.
First Step. Suppose that $d_1<d_1'$. Then
\[
1=[B]_{1d_1}=\sum_{k=1}^{m}\delta_{1k}[C]_{kd_1}=\sum_{k=1}^{m}\delta_{1k}(0)=0
\]
The entries of $C$ in column $d_1$ are all zero, since they are left of and below the leading 1 in row 1 and column $d_1'$ of $C$. This is a contradiction, so we know that $d_1\ge d_1'$. By an entirely similar argument, reversing the roles of $B$ and $C$, we could conclude that $d_1\le d_1'$. Together this means that $d_1=d_1'$.
Second Step. Suppose that we have determined that $d_1=d_1'$, $d_2=d_2'$, \ldots, $d_p=d_p'$. Let us now show that $d_{p+1}=d_{p+1}'$. Working towards a contradiction, suppose that $d_{p+1}<d_{p+1}'$. For $1\le\ell\le p$,
\[
0=[B]_{p+1,d_\ell}=\sum_{k=1}^{m}\delta_{p+1,k}[C]_{kd_\ell}
=\delta_{p+1,\ell}(1)+\sum_{\substack{k=1\\k\ne\ell}}^{m}\delta_{p+1,k}(0)
=\delta_{p+1,\ell}
\]
Now,
\[
1=[B]_{p+1,d_{p+1}}=\sum_{k=1}^{m}\delta_{p+1,k}[C]_{kd_{p+1}}
=\sum_{k=1}^{p}(0)[C]_{kd_{p+1}}+\sum_{k=p+1}^{m}\delta_{p+1,k}(0)=0
\]
This contradiction shows that $d_{p+1}\ge d_{p+1}'$. By an entirely similar argument, we could conclude that $d_{p+1}\le d_{p+1}'$, and therefore $d_{p+1}=d_{p+1}'$.
Third Step. Now we establish that $r=r'$. Suppose that $r'<r$. By the arguments above, we know that $d_1=d_1'$, $d_2=d_2'$, \ldots, $d_{r'}=d_{r'}'$. For $1\le\ell\le r'$,
\[
0=[B]_{rd_\ell}=\sum_{k=1}^{m}\delta_{rk}[C]_{kd_\ell}
=\delta_{r\ell}(1)+\sum_{\substack{k=1\\k\ne\ell}}^{m}\delta_{rk}(0)=\delta_{r\ell}
\]
Now examine the entries of row $r$ of $B$,
\[
[B]_{rj}=\sum_{k=1}^{m}\delta_{rk}[C]_{kj}
=\sum_{k=1}^{r'}(0)[C]_{kj}+\sum_{k=r'+1}^{m}\delta_{rk}(0)=0
\]
So row $r$ is a totally zero row, contradicting that this should be the bottommost nonzero row of $B$. So $r'\ge r$. By an entirely similar argument, reversing the roles of $B$ and $C$, we would conclude that $r'\le r$, and therefore $r=r'$. Thus, combining the first three steps we can say that $D=D'$. In other words, $B$ and $C$ have the same pivot columns, in the same locations.
Fourth Step. In this final step, we will not argue by contradiction. Our intent is to determine the values of the $\delta_{ij}$. Notice that we can use the values of the $d_i$ interchangeably for $B$ and $C$. Here we go, for $1\le i\le r$,
\[
1=[B]_{id_i}=\sum_{k=1}^{m}\delta_{ik}[C]_{kd_i}
=\delta_{ii}(1)+\sum_{\substack{k=1\\k\ne i}}^{m}\delta_{ik}(0)=\delta_{ii}
\]
and for $1\le\ell\le r$, $\ell\ne i$,
\[
0=[B]_{id_\ell}=\sum_{k=1}^{m}\delta_{ik}[C]_{kd_\ell}
=\delta_{i\ell}(1)+\sum_{\substack{k=1\\k\ne\ell}}^{m}\delta_{ik}(0)=\delta_{i\ell}
\]
Finally, having determined the values of the $\delta_{ij}$, we can show that $B=C$. For $1\le i\le r$, $1\le j\le n$,
\[
[B]_{ij}=\sum_{k=1}^{m}\delta_{ik}[C]_{kj}
=\delta_{ii}[C]_{ij}+\sum_{\substack{k=1\\k\ne i}}^{r}(0)[C]_{kj}+\sum_{k=r+1}^{m}\delta_{ik}(0)=[C]_{ij}
\]
while for $r<i\le m$, row $i$ of $B$ and row $i$ of $C$ are both zero rows. So $B$ and $C$ have equal values in every entry, and so are the same matrix.
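The theorem can also be checked numerically on examples. The sketch below (an illustrative implementation of ours, not code from the text) row-reduces a matrix and a deliberately scrambled row-equivalent copy of it, using exact rational arithmetic, and confirms that both arrive at the same reduced row-echelon form.

```python
from fractions import Fraction

def rref(mat):
    """Reduced row-echelon form of mat (list of lists of Fractions), computed
    with exact arithmetic so pivots are never blurred by rounding."""
    m = [row[:] for row in mat]            # work on a copy
    rows, cols = len(m), len(m[0])
    lead = 0                               # next candidate pivot column
    for r in range(rows):
        pivot = None
        while lead < cols:
            pivot = next((i for i in range(r, rows) if m[i][lead] != 0), None)
            if pivot is not None:
                break
            lead += 1                      # column is all zero below row r
        if pivot is None:
            break                          # no pivots remain
        m[r], m[pivot] = m[pivot], m[r]    # move pivot row into position
        m[r] = [x / m[r][lead] for x in m[r]]   # leading entry becomes 1
        for i in range(rows):              # zero out the rest of the column
            if i != r and m[i][lead] != 0:
                f = m[i][lead]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        lead += 1
    return m

F = Fraction
# illustrative matrix, not from the text
A = [[F(1), F(2), F(3), F(4)],
     [F(2), F(4), F(6), F(8)],
     [F(1), F(1), F(1), F(1)]]

# a row-equivalent scramble of A: swap rows, scale a row, add a multiple of a row
A2 = [A[2][:],
      [3 * x for x in A[0]],
      [a + 5 * b for a, b in zip(A[1], A[2])]]

print(rref(A) == rref(A2))   # True: both reach the same unique RREF
```

Using `Fraction` rather than floating point is deliberate: with inexact arithmetic a tiny rounding residue could be mistaken for a nonzero pivot, and the two reductions might disagree spuriously.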