
Section RREF Reduced Row-Echelon Form

After solving a few systems of equations, you will recognize that it does not matter so much what we call our variables as what numbers act as their coefficients. A system in the variables \(x_1,\,x_2,\,x_3\) would behave the same if we changed the names of the variables to \(a,\,b,\,c\) and kept all the constants the same and in the same places. In this section, we will isolate the key bits of information about a system of equations into something called a matrix, and then use this matrix to systematically solve the equations. Along the way we will obtain one of our most important and useful computational tools.

Subsection MVNSE Matrix and Vector Notation for Systems of Equations

Definition M. Matrix.

An \(m\times n\) matrix is a rectangular layout of numbers from \(\complexes\) having \(m\) rows and \(n\) columns. We will use upper-case Latin letters from the start of the alphabet (\(A,\,B,\,C,\dotsc\)) to denote matrices and squared-off brackets to delimit the layout. Many use large parentheses instead of brackets — the distinction is not important. Rows of a matrix will be referenced starting at the top and working down (i.e. row 1 is at the top) and columns will be referenced starting from the left (i.e. column 1 is at the left). For a matrix \(A\text{,}\) the notation \(\matrixentry{A}{ij}\) will refer to the complex number in row \(i\) and column \(j\) of \(A\text{.}\)
Be careful with this notation for individual entries, since it is easy to think that \(\matrixentry{A}{ij}\) refers to the whole matrix. It does not. It is just a number, but is a convenient way to talk about the individual entries simultaneously. This notation will get a heavy workout once we get to Chapter M.

Example AM. A matrix.

\begin{equation*} B=\begin{bmatrix} -1&2&5&3\\ 1&0&-6&1\\ -4&2&2&-2 \end{bmatrix} \end{equation*}
is a matrix with \(m=3\) rows and \(n=4\) columns. We can say that \(\matrixentry{B}{2,3}=-6\) while \(\matrixentry{B}{3,4}=-2\text{.}\)

Sage M. Matrices.

Matrices are fundamental objects in linear algebra and in Sage, so there are a variety of ways to construct a matrix in Sage. Generally, you need to specify what types of entries the matrix contains (more on that in a minute), the number of rows and columns, and the entries themselves. First, let us dissect an example:
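A cell along the following lines is the kind of example to dissect (Sage input; reconstructed here to match the description that follows):

```
A = matrix(QQ, 2, 3, [[1, 2, 3], [4, 5, 6]])
A
```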
QQ is the set of all rational numbers (fractions with an integer numerator and denominator), 2 is the number of rows, 3 is the number of columns. Sage understands a list of items as delimited by brackets ([,]) and the items in the list can again be lists themselves. So [[1, 2, 3], [4, 5, 6]] is a list of lists, and in this context the inner lists are rows of the matrix.
There are various shortcuts you can employ when creating a matrix. For example, Sage is able to infer the size of the matrix from the lists of entries.
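For instance, a cell such as the following omits the row and column counts and lets Sage infer them from the entries:

```
B = matrix(QQ, [[1, 2, 3], [4, 5, 6]])
B
```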
Or you can specify how many rows the matrix will have and provide one big grand list of entries, which will get chopped up, row by row, if you prefer.
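A cell along these lines does just that, giving the number of rows and one flat list of entries:

```
C = matrix(QQ, 2, [1, 2, 3, 4, 5, 6])
C
```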
It is also possible to skip specifying the type of numbers used for the entries of a matrix; however, this is fraught with peril, as Sage will make an informed guess about your intent. Is this what you want? Consider what happens when you enter the single character “2” into a computer program like Sage. Is this the integer \(2\text{,}\) the rational number \(\frac{2}{1}\text{,}\) the real number \(2.00000\text{,}\) the complex number \(2 + 0i\text{,}\) or the polynomial \(p(x)=2\text{?}\) In context, we humans can usually figure it out, but a literal-minded computer is not so smart. It happens that the operations we can perform, and how they behave, are influenced by the type of the entries in a matrix. So it is important to get this right, and our advice is to be explicit: be in the habit of always specifying the type of the entries of any matrix you create.
Mathematical objects in Sage often come from sets of similar objects. This set is called the “parent” of the element. We can use this to learn how Sage deduces the type of entries in a matrix. Execute the following three compute cells, and notice how the three matrices are constructed to have entries from the integers, the rationals and the reals.
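Cells along the following lines illustrate the idea; the particular entries are chosen only to force integer, rational, and real guesses, respectively:

```
A = matrix(2, 3, [[1, 2, 3], [4, 5, 6]])
A.parent()
```

```
B = matrix(2, 3, [[1, 2/3, 3], [4, 5, 6]])
B.parent()
```

```
C = matrix(2, 3, [[1, 2.0, 3], [4, 5, 6]])
C.parent()
```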
Sage knows a wide variety of sets of numbers. These are known formally as rings or fields, but we will refer to them generically as number systems here. Examples include: ZZ is the integers, QQ is the rationals, RR is the real numbers with reasonable precision, and CC is the complex numbers with reasonable precision. We will present the theory of linear algebra over the complex numbers. However, in any computer system, there will always be complications surrounding the inability of digital arithmetic to accurately represent all complex numbers. In contrast, Sage can represent rational numbers exactly as the quotient of two (perhaps very large) integers. So our Sage examples will begin by using QQ as our number system and we can concentrate on understanding the key concepts.
Once we have constructed a matrix, we can learn a lot about it (such as its parent). Sage is largely object-oriented, which means many commands apply to an object by using the “dot” notation. A.parent() is an example of this syntax, while the constructor matrix([[1, 2, 3], [4, 5, 6]]) is an exception. Here are a few examples, followed by some explanation:
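A cell exercising a few such methods might read as follows (Sage input):

```
A = matrix(QQ, [[1, 2, 3], [4, 5, 6]])
print(A.nrows(), A.ncols())
print(A.base_ring())
print(A.parent())
print(A[1, 2])
```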
The number of rows and the number of columns should be apparent; .base_ring() gives the number system for the entries, which is also included in the information provided by .parent().
Computer scientists and computer languages prefer to begin counting from zero, while mathematicians and written mathematics prefer to begin counting at one. Sage and this text are no exception. It takes some getting used to, but the reasons for counting from zero in computer programs soon become very obvious. Counting from one in mathematics is historical, and unlikely to change anytime soon. So above, the two rows of A are numbered 0 and 1, while the columns are numbered 0, 1 and 2. So A[1,2] refers to the entry of A in the second row and the third column, i.e. 6.
There is much more to say about how Sage works with matrices, but this is already a lot to digest. Use the space below to create some matrices (different ways) and examine them and their properties (size, entries, number system, parent).
When we do equation operations on a system of equations, the names of the variables really are not very important. Whether we use \(x_1\text{,}\) \(x_2\text{,}\) \(x_3\text{,}\) or \(a\text{,}\) \(b\text{,}\) \(c\text{,}\) or \(x\text{,}\) \(y\text{,}\) \(z\text{,}\) really does not matter. In this subsection we will describe some notation that will make it easier to describe linear systems, solve the systems, and describe their solution sets. Here is a list of definitions, laden with notation.

Definition CV. Column Vector.

A column vector of size \(m\) is an ordered list of \(m\) numbers, which is written in order vertically, starting at the top and proceeding to the bottom. At times, we will refer to a column vector as simply a vector. Column vectors will be written in bold, usually with lower case Latin letters from the end of the alphabet such as \(\vect{u}\text{,}\) \(\vect{v}\text{,}\) \(\vect{w}\text{,}\) \(\vect{x}\text{,}\) \(\vect{y}\text{,}\) \(\vect{z}\text{.}\) Some books like to write vectors with arrows, such as \(\vec{u}\text{.}\) Writing by hand, some like to put arrows on top of the symbol, or a tilde underneath the symbol, as in \(\underset{\sim}{\textstyle u}\text{.}\) To refer to the entry or component of vector \(\vect{v}\) in location \(i\) of the list, we write \(\vectorentry{\vect{v}}{i}\text{.}\)
Be careful with this notation. While the symbols \(\vectorentry{\vect{v}}{i}\) might look somewhat substantial, as an object this represents just one entry of a vector, which is just a single complex number.

Definition ZCV. Zero Column Vector.

The zero vector of size \(m\) is the column vector of size \(m\) where each entry is the number zero,
\begin{gather*} \zerovector=\colvector{0\\0\\0\\\vdots\\0} \end{gather*}
or defined much more compactly, \(\vectorentry{\zerovector}{i}=0\) for \(1\leq i\leq m\text{.}\)

Sage V. Vectors.

Vectors in Sage are built, manipulated and interrogated in much the same way as matrices (see Sage M). However, as simple lists (“one-dimensional,” not “two-dimensional” such as matrices that look more tabular) they are simpler to construct and manipulate. Sage will print a vector across the screen, even if we wish to interpret it as a column vector. It will be delimited by parentheses ((,)) which allows us to distinguish a vector from a matrix with just one row, if we look carefully. The number of “slots” in a vector is not referred to in Sage as rows or columns, but rather by “degree.” Here are some examples (remember to start counting from zero):
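A cell such as the following illustrates this (Sage input; the entries are just for illustration):

```
v = vector(QQ, [1, 2, 3, 4])
print(v)
print(v.degree())
print(v[3])    # the final slot, since counting starts at zero
```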
Notice that if you use commands like .parent() you will sometimes see references to “free modules.” This is a technical generalization of the notion of a vector, which is beyond the scope of this course, so just mentally convert to vectors when you see this term.
The zero vector is super easy to build, but be sure to specify what number system your zero is from.
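For instance (Sage input):

```
z = zero_vector(QQ, 5)
z
```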
Notice that while using Sage, we try to remain consistent with our mathematical notation conventions. This is not required, as you can give items in Sage very long names if you wish. For example, the following is perfectly legitimate, as you can see.
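For example, a cell along these lines is perfectly legal Sage, using a deliberately verbose (and entirely hypothetical) name:

```
a_matrix_of_rational_numbers = matrix(QQ, [[1, 2], [3, 4]])
a_matrix_of_rational_numbers
```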
In fact, our use of capital letters for matrices actually contradicts some of the conventions for naming objects in Sage, so there are good reasons for not mirroring our mathematical notation.

Definition CM. Coefficient Matrix.

For a system of linear equations,
\begin{align*} a_{11}x_1+a_{12}x_2+a_{13}x_3+\dots+a_{1n}x_n&=b_1\\ a_{21}x_1+a_{22}x_2+a_{23}x_3+\dots+a_{2n}x_n&=b_2\\ a_{31}x_1+a_{32}x_2+a_{33}x_3+\dots+a_{3n}x_n&=b_3\\ \vdots&\\ a_{m1}x_1+a_{m2}x_2+a_{m3}x_3+\dots+a_{mn}x_n&=b_m \end{align*}
the coefficient matrix is the \(m\times n\) matrix
\begin{equation*} A= \begin{bmatrix} a_{11}&a_{12}&a_{13}&\dots&a_{1n}\\ a_{21}&a_{22}&a_{23}&\dots&a_{2n}\\ a_{31}&a_{32}&a_{33}&\dots&a_{3n}\\ \vdots&\vdots&\vdots& &\vdots\\ a_{m1}&a_{m2}&a_{m3}&\dots&a_{mn}\\ \end{bmatrix}\text{.} \end{equation*}

Definition VOC. Vector of Constants.

For a system of linear equations,
\begin{align*} a_{11}x_1+a_{12}x_2+a_{13}x_3+\dots+a_{1n}x_n&=b_1\\ a_{21}x_1+a_{22}x_2+a_{23}x_3+\dots+a_{2n}x_n&=b_2\\ a_{31}x_1+a_{32}x_2+a_{33}x_3+\dots+a_{3n}x_n&=b_3\\ \vdots&\\ a_{m1}x_1+a_{m2}x_2+a_{m3}x_3+\dots+a_{mn}x_n&=b_m \end{align*}
the vector of constants is the column vector of size \(m\)
\begin{equation*} \vect{b}= \begin{bmatrix} b_1\\ b_2\\ b_3\\ \vdots\\ b_m\\ \end{bmatrix}\text{.} \end{equation*}

Definition SOLV. Solution Vector.

For a system of linear equations,
\begin{align*} a_{11}x_1+a_{12}x_2+a_{13}x_3+\dots+a_{1n}x_n&=b_1\\ a_{21}x_1+a_{22}x_2+a_{23}x_3+\dots+a_{2n}x_n&=b_2\\ a_{31}x_1+a_{32}x_2+a_{33}x_3+\dots+a_{3n}x_n&=b_3\\ \vdots&\\ a_{m1}x_1+a_{m2}x_2+a_{m3}x_3+\dots+a_{mn}x_n&=b_m \end{align*}
the solution vector is the column vector of size \(n\)
\begin{equation*} \vect{x}= \begin{bmatrix} x_1\\ x_2\\ x_3\\ \vdots\\ x_n\\ \end{bmatrix}\text{.} \end{equation*}
The solution vector may do double-duty on occasion. It might refer to a list of variable quantities at one point, and subsequently refer to values of those variables that actually form a particular solution to that system.

Definition MRLS. Matrix Representation of a Linear System.

If \(A\) is the coefficient matrix of a system of linear equations and \(\vect{b}\) is the vector of constants, then we will write \(\linearsystem{A}{\vect{b}}\) as a shorthand expression for the system of linear equations, which we will refer to as the matrix representation of the linear system.

Example NSLE. Notation for systems of linear equations.

The system of linear equations
\begin{align*} 2x_1+4x_2-3x_3+5x_4+x_5&=9\\ 3x_1+x_2+\quad\quad x_4-3x_5&=0\\ -2x_1+7x_2-5x_3+2x_4+2x_5&=-3 \end{align*}
has coefficient matrix
\begin{equation*} A= \begin{bmatrix} 2 & 4 & -3 & 5 & 1\\ 3 & 1 & 0 & 1 & -3\\ -2 & 7 & -5 & 2 & 2 \end{bmatrix} \end{equation*}
and vector of constants
\begin{equation*} \vect{b}=\colvector{9\\0\\-3} \end{equation*}
and so will be referenced as \(\linearsystem{A}{\vect{b}}\text{.}\)

Definition AM. Augmented Matrix.

Suppose we have a system of \(m\) equations in \(n\) variables, with coefficient matrix \(A\) and vector of constants \(\vect{b}\text{.}\) Then the augmented matrix of the system of equations is the \(m\times(n+1)\) matrix whose first \(n\) columns are the columns of \(A\) and whose last column (\(n+1\)) is the column vector \(\vect{b}\text{.}\) When described symbolically, this matrix will be written as \(\augmented{A}{\vect{b}}\text{.}\)
The augmented matrix represents all the important information in the system of equations, since the names of the variables have been ignored, and the only connection with the variables is the location of their coefficients in the matrix. It is important to realize that the augmented matrix is just that, a matrix, and not a system of equations. In particular, the augmented matrix does not have any “solutions,” though it will be useful for finding solutions to the system of equations that it is associated with. (Think about your objects, and review Proof Technique L.) However, notice that an augmented matrix always belongs to some system of equations, and vice versa, so it is tempting to try and blur the distinction between the two. Note that we use the symbolic expression, \(\augmented{A}{\vect{b}}\text{,}\) with the vertical bar and we would do the same in our concrete examples but have not yet done so due to technical considerations. Here is a quick example.

Example AMAA. Augmented matrix for Archetype A.

Archetype A is the following system of 3 equations in 3 variables.
\begin{align*} x_1 -x_2 +2x_3 & =1\\ 2x_1+ x_2 + x_3 & =8\\ x_1 + x_2 & =5 \end{align*}
Here is its augmented matrix.
\begin{align*} \begin{bmatrix} 1 & -1 & 2 & 1\\ 2 & 1 & 1 & 8\\ 1 & 1 & 0 & 5 \end{bmatrix} \end{align*}

Sage AM. Augmented Matrix.

Sage has a matrix method, .augment(), that will join two matrices, side-by-side provided they both have the same number of rows. The same method will allow you to augment a matrix with a column vector, as described in Definition AM, provided the number of entries in the vector matches the number of rows for the matrix. Here we reprise the construction in Example AMAA. We will now format our matrices as input across several lines, a practice you may use in your own worksheets, or not.
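The reprise might look like the following (Sage input, with the matrix entries spread across several lines as just described):

```
A = matrix(QQ, [[1, -1, 2],
                [2,  1, 1],
                [1,  1, 0]])
b = vector(QQ, [1, 8, 5])
M = A.augment(b)
M
```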
Notice that the matrix method .augment() needs some input, in the above case, the vector b. This will explain the need for the parentheses on the end of the “dot” commands, even if the particular command does not expect input.
Some methods allow optional input, typically using keywords. Matrices can track subdivisions, making breaks between rows and/or columns. When augmenting, you can ask for the subdivision to be included. Evaluate the compute cell above if you have not already, so that A and b are defined, and then evaluate:
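```
A.augment(b, subdivide=True)
```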
As a partial demonstration of manipulating subdivisions of matrices we can reset the subdivisions of M with the .subdivide() method. We provide a list of rows to subdivide before, then a list of columns to subdivide before, where we remember that counting begins at zero.
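For instance, a cell such as the following (the particular breaks are illustrative) places subdivisions before rows 1 and 2 and before column 3:

```
M.subdivide([1, 2], [3])
M
```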

Subsection RO Row Operations

An augmented matrix for a system of equations will save us the tedium of continually writing down the names of the variables as we solve the system. It will also release us from any dependence on the actual names of the variables. We have seen how certain operations we can perform on equations (Definition EO) will preserve their solutions (Theorem EOPSS). The next two definitions and the following theorem carry over these ideas to augmented matrices.

Definition RO. Row Operations.

The following three operations will transform an \(m\times n\) matrix into a different matrix of the same size, and each is known as a row operation.
  1. Swap the locations of two rows.
  2. Multiply each entry of a single row by a nonzero quantity.
  3. Multiply each entry of one row by some quantity, and add these values to the entries in the same columns of a second row. Leave the first row the same after this operation, but replace the second row by the new values.
We will use a symbolic shorthand to describe these row operations.
  1. \(\rowopswap{i}{j}\text{:}\) Swap the location of rows \(i\) and \(j\text{.}\)
  2. \(\rowopmult{\alpha}{i}\text{:}\) Multiply row \(i\) by the nonzero scalar \(\alpha\text{.}\)
  3. \(\rowopadd{\alpha}{i}{j}\text{:}\) Multiply row \(i\) by the scalar \(\alpha\) and add to row \(j\text{.}\)
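As a plain-Python sketch (not Sage), the three row operations can be coded directly. Here a matrix is stored as a list of rows, exact arithmetic comes from the standard fractions module, and row indices follow the text by starting at 1:

```python
from fractions import Fraction

def row_swap(A, i, j):
    """R_i <-> R_j: swap the locations of rows i and j."""
    A[i - 1], A[j - 1] = A[j - 1], A[i - 1]

def row_scale(A, alpha, i):
    """alpha R_i: multiply each entry of row i by the nonzero scalar alpha."""
    assert alpha != 0
    A[i - 1] = [Fraction(alpha) * x for x in A[i - 1]]

def row_add(A, alpha, i, j):
    """alpha R_i + R_j: add alpha times row i to row j, leaving row i unchanged."""
    A[j - 1] = [y + Fraction(alpha) * x
                for x, y in zip(A[i - 1], A[j - 1])]
```

Applying row_swap(A, 1, 3) and then row_add(A, -2, 1, 2) to the matrix \(A\) of Example TREM reproduces the matrix \(B\) there.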

Definition REM. Row-Equivalent Matrices.

Two matrices, \(A\) and \(B\text{,}\) are row-equivalent if one can be obtained from the other by a sequence of row operations.

Example TREM. Two row-equivalent matrices.

The matrices
\begin{align*} A=\begin{bmatrix} 2&-1&3&4\\ 5&2&-2&3\\ 1&1&0&6 \end{bmatrix} && B=\begin{bmatrix} 1&1&0&6\\ 3&0&-2&-9\\ 2&-1&3&4 \end{bmatrix} \end{align*}
are row-equivalent as can be seen from
\begin{align*} \begin{bmatrix} 2&-1&3&4\\ 5&2&-2&3\\ 1&1&0&6 \end{bmatrix} \xrightarrow{\rowopswap{1}{3}} \begin{bmatrix} 1&1&0&6\\ 5&2&-2&3\\ 2&-1&3&4 \end{bmatrix} & \xrightarrow{\rowopadd{-2}{1}{2}} \begin{bmatrix} 1&1&0&6\\ 3&0&-2&-9\\ 2&-1&3&4 \end{bmatrix}\text{.} \end{align*}
We can also say that any pair of these three matrices are row-equivalent.
Notice that each of the three row operations is reversible (Exercise RREF.T10), so we do not have to be careful about the distinction between “\(A\) is row-equivalent to \(B\)” and “\(B\) is row-equivalent to \(A\text{.}\)” (Exercise RREF.T11)
The preceding definitions are designed to make the following theorem possible. It says that row-equivalent matrices represent systems of linear equations that have identical solution sets.

Theorem REMES. Row-Equivalent Matrices represent Equivalent Systems.

Suppose that \(A\) and \(B\) are row-equivalent augmented matrices. Then the systems of linear equations that they represent are equivalent systems.

Proof.

If we perform a single row operation on an augmented matrix, it will have the same effect as if we did the analogous equation operation on the system of equations the matrix represents. By exactly the same methods as we used in the proof of Theorem EOPSS we can see that each of these row operations will preserve the set of solutions for the system of equations the matrix represents.
So at this point, our strategy is to begin with a system of equations, represent the system by an augmented matrix, perform row operations (which will preserve solutions for the system) to get a “simpler” augmented matrix, convert back to a “simpler” system of equations and then solve that system, knowing that its solutions are those of the original system. Here is a rehash of Example US as an exercise in using our new tools.

Example USR. Three equations, one solution, reprised.

We solve the following system using augmented matrices and row operations. This is the same system of equations solved in Example US using equation operations.
\begin{align*} x_1+2x_2+2x_3&=4\\ x_1+3x_2+3x_3&=5\\ 2x_1+6x_2+5x_3&=6 \end{align*}
Form the augmented matrix,
\begin{align*} A=\begin{bmatrix} 1&2&2&4\\ 1&3&3&5\\ 2&6&5&6 \end{bmatrix} \end{align*}
and apply row operations,
\begin{align*} \xrightarrow{\rowopadd{-1}{1}{2}} & \begin{bmatrix} 1&2&2&4\\ 0&1&1&1\\ 2&6&5&6 \end{bmatrix} \xrightarrow{\rowopadd{-2}{1}{3}} \begin{bmatrix} 1&2&2&4\\ 0&1&1&1\\ 0&2&1&-2 \end{bmatrix}\\ \xrightarrow{\rowopadd{-2}{2}{3}} & \begin{bmatrix} 1&2&2&4\\ 0&1&1&1\\ 0&0&-1&-4 \end{bmatrix} \xrightarrow{\rowopmult{-1}{3}} \begin{bmatrix} 1&2&2&4\\ 0&1& 1&1\\ 0&0&1&4 \end{bmatrix}\text{.} \end{align*}
So the matrix
\begin{equation*} B=\begin{bmatrix} 1&2&2&4\\ 0&1& 1&1\\ 0&0&1&4 \end{bmatrix} \end{equation*}
is row-equivalent to \(A\) and by Theorem REMES the system of equations below has the same solution set as the original system of equations.
\begin{align*} x_1+2x_2+2x_3&=4\\ x_2+ x_3&=1\\ x_3&=4\text{.} \end{align*}
Solving this “simpler” system is straightforward and is identical to the process in Example US.

Sage RO. Row Operations.

Sage will perform individual row operations on a matrix. This can get a bit tedious, but it is better than doing the computations (wrong, perhaps) by hand, and it can be useful when building up more complicated procedures for a matrix.
For each row operation, there are two similar methods. One changes the matrix “in-place” while the other creates a new matrix that is a modified version of the original. This is an important distinction that you should understand for every new Sage command you learn that might change a matrix or vector.
Consider the first row operation, which swaps two rows. There are two matrix methods to do this, a “with” version that will create a new, changed matrix, which you will likely want to save, and a plain version that will change the matrix it operates on “in-place.” The copy() function, which is a general-purpose command, is a way to make a copy of a matrix before you make changes to it. Study the example below carefully, and then read the explanation following. (Remember that counting begins with zero.)
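The example would read something like the following (Sage input; reconstructed to match the explanation that follows):

```
A = matrix(QQ, [[1, 2, 3], [4, 5, 6]])
B = A
C = copy(A)
D = A.with_swapped_rows(0, 1)
A.swap_rows(0, 1)
A[1, 2] = -6
D[0, 0] = -1
A, B, C, D
```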
Here is how each of these four matrices comes into existence.
  1. A is our original matrix, created from a list of entries.
  2. B is not a new copy of A; it is just a new name for referencing the exact same matrix internally.
  3. C is a brand new matrix, stored internally separate from A, but with identical contents.
  4. D is also a new matrix, which is created by swapping the rows of A.
And here is how each matrix is affected by the commands.
  1. A is changed twice “in-place”. First, its rows are swapped using a “plain” matrix method. Then its entry in the lower-right corner is set to -6.
  2. B is just another name for A. So whatever changes are made to A will be evident when we ask for the matrix by the name B. And vice-versa.
  3. C is a copy of the original A and does not change, since no subsequent commands act on C.
  4. D is a new copy of A, created by swapping the rows of A. Once created from A, it has a life of its own, as illustrated by the change in its entry in the upper-left corner to -1.
An interesting experiment is to rearrange some of the lines above (or add new ones) and predict the result.
Just as with the first row operation, swapping rows, Sage supports the other two row operations with natural sounding commands, with both “in-place” versions and new-matrix versions.
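A cell exercising these commands might look like the following (Sage input; the scalars are illustrative):

```
A = matrix(QQ, [[1, 2, 3], [4, 5, 6]])
B = A.with_rescaled_row(0, 3)                # new matrix: 3 times row 0
A.rescale_row(0, 3)                          # in-place version
C = A.with_added_multiple_of_row(1, 0, -2)   # new matrix: row 1 plus -2 times row 0
A.add_multiple_of_row(1, 0, -2)              # in-place version
A, B, C
```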
Notice that the order of the arguments might feel a bit odd, compared to how we write and think about row operations. Also note how the “with” versions leave a trail of matrices for each step while the plain versions just keep changing A.

Subsection RREF Reduced Row-Echelon Form

The preceding example amply illustrates the definitions and theorems we have seen so far. But it still leaves two questions unanswered. Exactly what is this “simpler” form for a matrix, and just how do we get it? Here is the answer to the first question, a definition of reduced row-echelon form.

Definition RREF. Reduced Row-Echelon Form.

A matrix is in reduced row-echelon form if it meets all of the following conditions:
  1. If there is a row where every entry is zero, then this row lies below any other row that contains a nonzero entry.
  2. The leftmost nonzero entry of a row is equal to 1.
  3. The leftmost nonzero entry of a row is the only nonzero entry in its column.
  4. Consider any two different leftmost nonzero entries, one located in row \(i\text{,}\) column \(j\) and the other located in row \(s\text{,}\) column \(t\text{.}\) If \(s\gt i\text{,}\) then \(t\gt j\text{.}\)
A row of only zero entries is called a zero row and the leftmost nonzero entry of a nonzero row is a leading 1. A column containing a leading 1 will be called a pivot column. The number of nonzero rows will be denoted by \(r\text{,}\) which is also equal to the number of leading 1’s and the number of pivot columns.
The set of column indices for the pivot columns will be denoted by \(D=\set{d_1,\,d_2,\,d_3,\,\ldots,\,d_r}\) where \(d_1\lt d_2\lt d_3\lt\cdots\lt d_r\text{,}\) while the columns that are not pivot columns will be denoted as \(F=\set{f_1,\,f_2,\,f_3,\,\ldots,\,f_{n-r}}\) where \(f_1\lt f_2\lt f_3\lt\cdots\lt f_{n-r}\text{.}\)
The principal feature of reduced row-echelon form is the pattern of leading 1’s guaranteed by conditions (2) and (4), reminiscent of a flight of geese, or steps in a staircase, or water cascading down a mountain stream.
There are a number of new terms and notation introduced in this definition, which should make you suspect that this is an important definition. Given all there is to digest here, we will mostly save the use of \(D\) and \(F\) until Section TSS. However, one important point to make here is that all of these terms and notation apply to a matrix. Sometimes we will employ these terms and sets for an augmented matrix, and other times it might be a coefficient matrix. So always give some thought to exactly which type of matrix you are analyzing.
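The four conditions are mechanical enough to check by program. Here is a minimal plain-Python sketch (a matrix is a list of rows) that tests each condition of Definition RREF in turn; the matrices \(C\) and \(E\) of the two examples below pass and fail it, respectively.

```python
def is_rref(A):
    """Return True if the matrix A (a list of rows) satisfies the
    four conditions of reduced row-echelon form."""
    m = len(A)
    leads = []                 # column index of each leading 1, top to bottom
    seen_zero_row = False
    for i, row in enumerate(A):
        j = next((c for c, x in enumerate(row) if x != 0), None)
        if j is None:
            seen_zero_row = True
            continue
        if seen_zero_row:
            return False       # condition 1: zero rows must lie at the bottom
        if row[j] != 1:
            return False       # condition 2: leftmost nonzero entry is a 1
        if any(A[k][j] != 0 for k in range(m) if k != i):
            return False       # condition 3: a leading 1 is alone in its column
        leads.append(j)
    # condition 4: leading 1s move strictly right as we move down the rows
    return all(a < b for a, b in zip(leads, leads[1:]))
```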

Example RREF. A matrix in reduced row-echelon form.

The matrix \(C\) is in reduced row-echelon form.
\begin{align*} C&= \begin{bmatrix} 1&-3&0&6&0&0&-5&9\\ 0&0&0&0&1&0&3&-7\\ 0&0&0&0&0&1&7&3\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0 \end{bmatrix} \end{align*}
This matrix has two zero rows and three pivot columns. So \(r=3\text{.}\) Columns 1, 5, and 6 are the three pivot columns, so \(D=\set{1,\,5,\,6}\) and then \(F=\set{2,\,3,\,4,\,7,\,8}\text{.}\)

Example NRREF. A matrix not in reduced row-echelon form.

The matrix \(E\) is not in reduced row-echelon form, as it fails each of the four requirements once.
\begin{align*} E&= \begin{bmatrix} 1&0&-3&0&6&0&7&-5&9\\ 0&0&0&5&0&1&0&3&-7\\ 0&0&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&-4&2\\ 0&0&0&0&0&0&1&7&3\\ 0&0&0&0&0&0&0&0&0 \end{bmatrix} \end{align*}
Our next theorem has a “constructive” proof. Learn about the meaning of this term in Proof Technique C.

Theorem REMEF. Row-Equivalent Matrix in Echelon Form.

Suppose \(A\) is a matrix. Then there is a matrix \(B\) so that
  1. \(A\) and \(B\) are row-equivalent.
  2. \(B\) is in reduced row-echelon form.

Proof.

Suppose that \(A\) has \(m\) rows and \(n\) columns. We will describe a process for converting \(A\) into \(B\) via row operations. This procedure is known as Gauss-Jordan elimination. Tracing through this procedure will be easier if you recognize that \(i\) refers to a row that is being converted, \(j\) refers to a column that is being converted, and \(r\) keeps track of the number of nonzero rows. Here we go.
  1. Set \(j=0\) and \(r=0\text{.}\)
  2. Increase \(j\) by 1. If \(j\) now equals \(n+1\text{,}\) then stop.
  3. Examine the entries of \(A\) in column \(j\) located in rows \(r+1\) through \(m\text{.}\) If all of these entries are zero, then go to Step 2.
  4. Choose a row from rows \(r+1\) through \(m\) with a nonzero entry in column \(j\text{.}\) Let \(i\) denote the index for this row.
  5. Increase \(r\) by 1.
  6. Use the first row operation to swap rows \(i\) and \(r\text{.}\)
  7. Use the second row operation to convert the entry in row \(r\) and column \(j\) to a 1.
  8. Use the third row operation with row \(r\) to convert every other entry of column \(j\) to zero.
  9. Go to Step 2.
The result of this procedure is that the matrix \(A\) is converted to a matrix in reduced row-echelon form, which we will refer to as \(B\text{.}\) We need to now prove this claim by showing that the converted matrix has the requisite properties of Definition RREF. First, the matrix is only converted through row operations (Steps 6, 7, 8), so \(A\) and \(B\) are row-equivalent (Definition REM).
It is a bit more work to be certain that \(B\) is in reduced row-echelon form. We claim that as we begin Step 2, the first \(j\) columns of the matrix are in reduced row-echelon form with \(r\) nonzero rows. Certainly this is true at the start when \(j=0\text{,}\) since the matrix has no columns and so vacuously meets the conditions of Definition RREF with \(r=0\) nonzero rows.
In Step 2 we increase \(j\) by 1 and begin to work with the next column. There are two possible outcomes for Step 3. Suppose that every entry of column \(j\) in rows \(r+1\) through \(m\) is zero. Then with no changes we recognize that the first \(j\) columns of the matrix have their first \(r\) rows still in reduced row-echelon form, with the final \(m-r\) rows still all zero.
Suppose instead that the entry in row \(i\) of column \(j\) is nonzero. Notice that since \(r+1\leq i\leq m\text{,}\) we know the first \(j-1\) entries of this row are all zero. Now, in Step 5 we increase \(r\) by 1, and then embark on building a new nonzero row. In Step 6 we swap row \(r\) and row \(i\text{.}\) In the first \(j\) columns, the first \(r-1\) rows remain in reduced row-echelon form after the swap. In Step 7 we multiply row \(r\) by a nonzero scalar, creating a 1 in the entry in column \(j\) of row \(r\text{,}\) and not changing any other rows. This new leading 1 is the first nonzero entry in its row, and is located to the right of all the leading 1’s in the preceding \(r-1\) rows. With Step 8 we ensure that every other entry in the column with this new leading 1 is now zero, as required for reduced row-echelon form, and so column \(j\) is a pivot column. Also, rows \(r+1\) through \(m\) are now all zeros in the first \(j\) columns, so we now only have one new nonzero row, consistent with our increase of \(r\) by one. Furthermore, since the first \(j-1\) entries of row \(r\) are zero, the employment of the third row operation does not destroy any of the necessary features of rows \(1\) through \(r-1\) and rows \(r+1\) through \(m\text{,}\) in columns \(1\) through \(j-1\text{.}\)
So at this stage, the first \(j\) columns of the matrix are in reduced row-echelon form. When Step 2 finally increases \(j\) to \(n+1\text{,}\) then the procedure is completed and the full \(n\) columns of the matrix are in reduced row-echelon form, with the value of \(r\) correctly recording the number of nonzero rows.
The procedure given in the proof of Theorem REMEF can be more precisely described using a pseudo-code version of a computer program. Single-letter variables, like m, n, i, j, r have the same meanings as above. := is assignment of the value on the right to the variable on the left, A[i,j] is the equivalent of the matrix entry \(\matrixentry{A}{ij}\text{,}\) while == is an equality test and <> is a “not equals” test.
input m, n and A
r := 0
for j := 1 to n
   i := r+1
   while i <= m and A[i,j] == 0
       i := i+1
   if i < m+1
       r := r+1
       swap rows i and r of A (row op 1)
       scale A[r,j] to a leading 1 (row op 2)
       for k := 1 to m, k <> r
           make A[k,j] zero (row op 3, employing row r)
output r and A
Notice that as a practical matter the “and” used in the conditional statement of the while statement should be of the “short-circuit” variety so that the array access that follows is not out-of-bounds.
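The pseudocode can also be rendered as ordinary Python (a sketch, not Sage's built-in row reduction), with a matrix stored as a list of rows and the standard-library fractions.Fraction standing in for exact arithmetic:

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan elimination: return the reduced row-echelon form
    of a matrix given as a list of rows."""
    A = [[Fraction(x) for x in row] for row in rows]
    m, n = len(A), len(A[0])
    r = 0                                    # nonzero rows found so far
    for j in range(n):                       # Step 2: move to the next column
        # Steps 3-4: find a row at position r or below with a nonzero entry in column j
        i = next((k for k in range(r, m) if A[k][j] != 0), None)
        if i is None:
            continue                         # column is all zero below row r
        A[r], A[i] = A[i], A[r]              # Step 6: row op 1 (swap)
        pivot = A[r][j]
        A[r] = [x / pivot for x in A[r]]     # Step 7: row op 2 (scale to a leading 1)
        for k in range(m):                   # Step 8: row op 3 (clear the column)
            if k != r and A[k][j] != 0:
                A[k] = [a - A[k][j] * b for a, b in zip(A[k], A[r])]
        r += 1                               # Step 5, folded in: one more nonzero row
    return A
```

Applied to the augmented matrix \(A\) of Example USR, it returns the matrix with rows \((1,0,0,2)\text{,}\) \((0,1,0,-3)\text{,}\) \((0,0,1,4)\text{,}\) from which the solution \(x_1=2\text{,}\) \(x_2=-3\text{,}\) \(x_3=4\) can be read off directly.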
So now we can put it all together. Begin with a system of linear equations (Definition SLE), and represent the system by its augmented matrix (Definition AM). Use row operations (Definition RO) to convert this matrix into reduced row-echelon form (Definition RREF), using the procedure outlined in the proof of Theorem REMEF. Theorem REMEF also tells us we can always accomplish this, and that the result is row-equivalent (Definition REM) to the original augmented matrix. Since the matrix in reduced row-echelon form represents a system with the same solution set, we can analyze the row-reduced version instead of the original matrix, viewing it as the augmented matrix of a different system of equations. The beauty of augmented matrices in reduced row-echelon form is that the solution sets to the systems they represent can be easily determined, as we will see in the next few examples and in the next section.
We will see through the course that almost every interesting property of a matrix can be discerned by looking at a row-equivalent matrix in reduced row-echelon form. For this reason it is important to know that the matrix \(B\) guaranteed to exist by Theorem REMEF is also unique.
Two proof techniques are applicable to the proof of the next theorem, so you may want to study Proof Technique CD and Proof Technique U before reading it.

Theorem RREFU. Reduced Row-Echelon Form is Unique.

Suppose that \(A\) is an \(m\times n\) matrix and that \(B\) and \(C\) are \(m\times n\) matrices that are row-equivalent to \(A\) and in reduced row-echelon form. Then \(B=C\text{.}\)

Proof.

We need to begin with no assumptions about any relationships between \(B\) and \(C\text{,}\) other than they are both in reduced row-echelon form, and they are both row-equivalent to \(A\text{.}\)
If \(B\) and \(C\) are both row-equivalent to \(A\text{,}\) then they are row-equivalent to each other. Repeated row operations on a matrix combine the rows with each other using operations that are linear, and are identical in each column. A key observation for this proof is that each individual row of \(B\) is linearly related to the rows of \(C\text{.}\) This relationship is different for each row of \(B\text{,}\) but once we fix a row, the relationship is the same across columns. More precisely, there are scalars \(\delta_{ik}\text{,}\) \(1\leq i,k\leq m\) such that for any \(1\leq i\leq m\text{,}\) \(1\leq j\leq n\text{,}\)
\begin{align*} \matrixentry{B}{ij}&=\sum_{k=1}^{m}\delta_{ik}\matrixentry{C}{kj} \end{align*}
You should read this as saying that an entry of row \(i\) of \(B\) (in column \(j\)) is a linear function of the entries of all the rows of \(C\) that are also in column \(j\text{,}\) and the scalars (\(\delta_{ik}\)) depend on which row of \(B\) we are considering (the \(i\) subscript on \(\delta_{ik}\)), but are the same for every column (no dependence on \(j\) in \(\delta_{ik}\)). This idea may be complicated now, but will feel more familiar once we discuss “linear combinations” (Definition LCCV) and more so when we discuss “row spaces” (Definition RSM). For now, spend some time carefully working Exercise RREF.M40, which is designed to illustrate the origins of this expression. This completes our exploitation of the row-equivalence of \(B\) and \(C\text{.}\)
We now repeatedly exploit the fact that \(B\) and \(C\) are in reduced row-echelon form. Recall that a pivot column is all zeros, except for a single 1. More carefully, if \(R\) is a matrix in reduced row-echelon form, and \(d_\ell\) is the index of a pivot column, then \(\matrixentry{R}{kd_\ell}=1\) precisely when \(k=\ell\) and is otherwise zero. Notice also that any entry of \(R\) that is both below the entry in row \(\ell\) and to the left of column \(d_\ell\) is also zero (with below and left understood to include equality). In other words, look at examples of matrices in reduced row-echelon form and choose a leading 1 (with a box around it). The rest of the column is also zeros, and the lower left “quadrant” of the matrix that begins here is totally zeros.
Making no assumptions about any relationship between the forms of \(B\) and \(C\text{,}\) let \(B\) have \(r\) nonzero rows and denote the pivot columns as \(D=\set{\scalarlist{d}{r}}\text{.}\) For \(C\) let \(r^\prime\) denote the number of nonzero rows and denote the pivot columns as \(D^\prime=\set{\,\scalarlist{d^\prime}{r^\prime}}\) (Definition RREF). There are four steps in the proof, and the first three are about showing that \(B\) and \(C\) have the same number of pivot columns, in the same places. In other words, the “primed” symbols are a necessary fiction.
First Step. Suppose that \(d_1\lt d^\prime_1\text{.}\) Then
\begin{align*} 1 &=\matrixentry{B}{1d_1} &&\knowl{./knowl/xref/definition-RREF.html}{\text{Definition RREF}}\\ &=\sum_{k=1}^{m}\delta_{1k}\matrixentry{C}{kd_1}\\ &=\sum_{k=1}^{m}\delta_{1k}(0)&& d_1\lt d^\prime_1\\ &=0 \end{align*}
The entries of \(C\) used in the sum are all zero since they are below (in the inclusive sense above) and to the left of the leading 1 in row 1 and column \(d^\prime_1\) of \(C\text{.}\) This is a contradiction, so we know that \(d_1\geq d^\prime_1\text{.}\) By an entirely similar argument, reversing the roles of \(B\) and \(C\text{,}\) we could conclude that \(d_1\leq d^\prime_1\text{.}\) Together this means that \(d_1=d^\prime_1\text{.}\)
Second Step. Suppose that we have determined that \(d_1=d^\prime_1\text{,}\) \(d_2=d^\prime_2\text{,}\) \(d_3=d^\prime_3\text{,}\) …, \(d_p=d^\prime_p\text{.}\) Let us now show that \(d_{p+1}=d^\prime_{p+1}\text{.}\) Working towards a contradiction, suppose that \(d_{p+1}\lt d^\prime_{p+1}\text{.}\) For \(1\leq\ell\leq p\text{,}\)
\begin{align*} 0 &=\matrixentry{B}{p+1,d_\ell} &&\knowl{./knowl/xref/definition-RREF.html}{\text{Definition RREF}}\\ &=\sum_{k=1}^{m}\delta_{p+1,k}\matrixentry{C}{kd_\ell}\\ &=\sum_{k=1}^{m}\delta_{p+1,k}\matrixentry{C}{kd^\prime_\ell}\\ &= \delta_{p+1,\ell}\matrixentry{C}{\ell d^\prime_\ell}+ \sum_{\substack{k=1\\k\neq\ell}}^{m}\delta_{p+1,k}\matrixentry{C}{kd^\prime_\ell} &&\knowl{./knowl/xref/property-CACN.html}{\text{Property CACN}}\\ &= \delta_{p+1,\ell}(1)+ \sum_{\substack{k=1\\k\neq\ell}}^{m}\delta_{p+1,k}(0) &&\knowl{./knowl/xref/definition-RREF.html}{\text{Definition RREF}}\\ &=\delta_{p+1,\ell} \end{align*}
Now,
\begin{align*} 1 &=\matrixentry{B}{p+1,d_{p+1}} &&\knowl{./knowl/xref/definition-RREF.html}{\text{Definition RREF}}\\ &=\sum_{k=1}^{m}\delta_{p+1,k}\matrixentry{C}{kd_{p+1}}\\ &= \sum_{k=1}^{p}\delta_{p+1,k}\matrixentry{C}{kd_{p+1}}+ \sum_{k=p+1}^{m}\delta_{p+1,k}\matrixentry{C}{kd_{p+1}} &&\knowl{./knowl/xref/property-AACN.html}{\text{Property AACN}}\\ &= \sum_{k=1}^{p}(0)\matrixentry{C}{kd_{p+1}}+ \sum_{k=p+1}^{m}\delta_{p+1,k}\matrixentry{C}{kd_{p+1}}\\ &=\sum_{k=p+1}^{m}\delta_{p+1,k}\matrixentry{C}{kd_{p+1}}\\ &=\sum_{k=p+1}^{m}\delta_{p+1,k}(0) &&d_{p+1}\lt d^\prime_{p+1}\\ &=0 \end{align*}
This contradiction shows that \(d_{p+1}\geq d^\prime_{p+1}\text{.}\) By an entirely similar argument, we could conclude that \(d_{p+1}\leq d^\prime_{p+1}\text{,}\) and therefore \(d_{p+1}=d^\prime_{p+1}\text{.}\)
Third Step. Now we establish that \(r=r^\prime\text{.}\) Suppose that \(r^\prime\lt r\text{.}\) By the arguments above, we know that \(d_1=d^\prime_1\text{,}\) \(d_2=d^\prime_2\text{,}\) \(d_3=d^\prime_3\text{,}\) …, \(d_{r^\prime}=d^\prime_{r^\prime}\text{.}\) For \(1\leq\ell\leq r^\prime\lt r\text{,}\)
\begin{align*} 0 &=\matrixentry{B}{rd_\ell} &&\knowl{./knowl/xref/definition-RREF.html}{\text{Definition RREF}}\\ &=\sum_{k=1}^{m}\delta_{rk}\matrixentry{C}{kd_\ell}\\ &=\sum_{k=1}^{r^\prime}\delta_{rk}\matrixentry{C}{kd_\ell} + \sum_{k=r^\prime+1}^{m}\delta_{rk}\matrixentry{C}{kd_\ell} &&\knowl{./knowl/xref/property-AACN.html}{\text{Property AACN}}\\ &=\sum_{k=1}^{r^\prime}\delta_{rk}\matrixentry{C}{kd_\ell} + \sum_{k=r^\prime+1}^{m}\delta_{rk}(0) &&\knowl{./knowl/xref/definition-RREF.html}{\text{Definition RREF}}\\ &=\sum_{k=1}^{r^\prime}\delta_{rk}\matrixentry{C}{kd_\ell}\\ &=\sum_{k=1}^{r^\prime}\delta_{rk}\matrixentry{C}{kd^\prime_\ell}\\ &=\delta_{r\ell}\matrixentry{C}{\ell d^\prime_\ell} + \sum_{\substack{k=1\\k\neq\ell}}^{r^\prime}\delta_{rk}\matrixentry{C}{kd^\prime_\ell} &&\knowl{./knowl/xref/property-CACN.html}{\text{Property CACN}}\\ &= \delta_{r\ell}(1) + \sum_{\substack{k=1\\k\neq\ell}}^{r^\prime}\delta_{rk}(0) &&\knowl{./knowl/xref/definition-RREF.html}{\text{Definition RREF}}\\ &=\delta_{r\ell} \end{align*}
Now examine the entries of row \(r\) of \(B\text{,}\)
\begin{align*} \matrixentry{B}{rj} &=\sum_{k=1}^{m}\delta_{rk}\matrixentry{C}{kj}\\ &= \sum_{k=1}^{r^\prime}\delta_{rk}\matrixentry{C}{kj} + \sum_{k=r^\prime+1}^{m}\delta_{rk}\matrixentry{C}{kj} &&\knowl{./knowl/xref/property-AACN.html}{\text{Property AACN}}\\ &=\sum_{k=1}^{r^\prime}\delta_{rk}\matrixentry{C}{kj} + \sum_{k=r^\prime+1}^{m}\delta_{rk}(0) &&\knowl{./knowl/xref/definition-RREF.html}{\text{Definition RREF}}\\ &=\sum_{k=1}^{r^\prime}\delta_{rk}\matrixentry{C}{kj}\\ &=\sum_{k=1}^{r^\prime}(0)\matrixentry{C}{kj}\\ &=0 \end{align*}
So row \(r\) is a totally zero row, contradicting that this should be the bottommost nonzero row of \(B\text{.}\) So \(r^\prime\geq r\text{.}\) By an entirely similar argument, reversing the roles of \(B\) and \(C\text{,}\) we would conclude that \(r^\prime\leq r\) and therefore \(r=r^\prime\text{.}\) Thus, combining the first three steps we can say that \(D=D^\prime\text{.}\) In other words, \(B\) and \(C\) have the same pivot columns, in the same locations.
Fourth Step. In this final step, we will not argue by contradiction. Our intent is to determine the values of the \(\delta_{ij}\text{.}\) Notice that we can use the values of the \(d_i\) interchangeably for \(B\) and \(C\text{.}\) Here we go,
\begin{align*} 1 &=\matrixentry{B}{id_i} &&\knowl{./knowl/xref/definition-RREF.html}{\text{Definition RREF}}\\ &=\sum_{k=1}^{m}\delta_{ik}\matrixentry{C}{kd_i}\\ &= \delta_{ii}\matrixentry{C}{id_i} + \sum_{\substack{k=1\\k\neq i}}^{m}\delta_{ik}\matrixentry{C}{kd_i} &&\knowl{./knowl/xref/property-CACN.html}{\text{Property CACN}}\\ &= \delta_{ii}(1) + \sum_{\substack{k=1\\k\neq i}}^{m}\delta_{ik}(0) &&\knowl{./knowl/xref/definition-RREF.html}{\text{Definition RREF}}\\ &=\delta_{ii} \end{align*}
and for \(\ell\neq i\)
\begin{align*} 0 &=\matrixentry{B}{id_\ell} &&\knowl{./knowl/xref/definition-RREF.html}{\text{Definition RREF}}\\ &=\sum_{k=1}^{m}\delta_{ik}\matrixentry{C}{kd_\ell}\\ &= \delta_{i\ell}\matrixentry{C}{\ell d_\ell} + \sum_{\substack{k=1\\k\neq\ell}}^{m}\delta_{ik}\matrixentry{C}{kd_\ell} &&\knowl{./knowl/xref/property-CACN.html}{\text{Property CACN}}\\ &= \delta_{i\ell}(1) + \sum_{\substack{k=1\\k\neq\ell}}^{m}\delta_{ik}(0) &&\knowl{./knowl/xref/definition-RREF.html}{\text{Definition RREF}}\\ &=\delta_{i\ell} \end{align*}
Finally, having determined the values of the \(\delta_{ij}\text{,}\) we can show that \(B=C\text{.}\) For \(1\leq i\leq m\text{,}\) \(1\leq j\leq n\text{,}\)
\begin{align*} \matrixentry{B}{ij} &=\sum_{k=1}^{m}\delta_{ik}\matrixentry{C}{kj}\\ &= \delta_{ii}\matrixentry{C}{ij} + \sum_{\substack{k=1\\k\neq i}}^{m}\delta_{ik}\matrixentry{C}{kj} &&\knowl{./knowl/xref/property-CACN.html}{\text{Property CACN}}\\ &= (1)\matrixentry{C}{ij} + \sum_{\substack{k=1\\k\neq i}}^{m}(0)\matrixentry{C}{kj}\\ &=\matrixentry{C}{ij} \end{align*}
So \(B\) and \(C\) have equal values in every entry, and so are the same matrix.
We will now run through some examples of using these definitions and theorems to solve some systems of equations. From now on, when we have a matrix in reduced row-echelon form, we will mark the leading 1’s with a small box. This will help you count, and identify, the pivot columns. In your work, you can box ’em, circle ’em or write ’em in a different color — just identify ’em somehow. This device will prove very useful later and is a very good habit to start developing right now.

Example SAB. Solutions for Archetype B.

Let us find the solutions to the following system of equations.
\begin{align*} -7x_1 -6 x_2 - 12x_3 &=-33\\ 5x_1 + 5x_2 + 7x_3 &=24\\ x_1 +4x_3 &=5 \end{align*}
First, form the augmented matrix,
\begin{align*} \begin{bmatrix} -7&-6&-12&-33\\ 5&5&7&24\\ 1&0&4&5 \end{bmatrix} \end{align*}
and work to reduced row-echelon form, first with \(j=1\text{,}\)
\begin{align*} \xrightarrow{\rowopswap{1}{3}} & \begin{bmatrix} 1&0&4&5\\ 5&5&7&24\\ -7&-6&-12&-33 \end{bmatrix} \xrightarrow{\rowopadd{-5}{1}{2}} \begin{bmatrix} 1&0&4&5\\ 0&5&-13&-1\\ -7&-6&-12&-33 \end{bmatrix}\\ \xrightarrow{\rowopadd{7}{1}{3}} & \begin{bmatrix} \leading{1}&0&4&5\\ 0&5&-13&-1\\ 0&-6&16&2 \end{bmatrix}\\ \end{align*}
Now, with \(j=2\text{,}\)
\begin{align*} \xrightarrow{\rowopmult{\frac{1}{5}}{2}} & \begin{bmatrix} \leading{1}&0&4&5\\ 0&1&\frac{-13}{5}&\frac{-1}{5}\\ 0&-6&16&2 \end{bmatrix} \xrightarrow{\rowopadd{6}{2}{3}} \begin{bmatrix} \leading{1}&0&4&5\\ 0&\leading{1}&\frac{-13}{5}&\frac{-1}{5}\\ 0&0&\frac{2}{5}&\frac{4}{5} \end{bmatrix}\\ \end{align*}
And finally, with \(j=3\text{,}\)
\begin{align*} \xrightarrow{\rowopmult{\frac{5}{2}}{3}} & \begin{bmatrix} \leading{1}&0&4&5\\ 0&\leading{1}&\frac{-13}{5}&\frac{-1}{5}\\ 0&0&1&2 \end{bmatrix} \xrightarrow{\rowopadd{\frac{13}{5}}{3}{2}} \begin{bmatrix} \leading{1}&0&4&5\\ 0&\leading{1}&0&5\\ 0&0&1&2 \end{bmatrix}\\ \xrightarrow{\rowopadd{-4}{3}{1}} &\begin{bmatrix} \leading{1}&0&0&-3\\ 0&\leading{1}&0&5\\ 0&0&\leading{1}&2 \end{bmatrix}\text{.} \end{align*}
This is now the augmented matrix of a very simple system of equations, namely \(x_1=-3\text{,}\) \(x_2=5\text{,}\) \(x_3=2\text{,}\) which has an obvious solution. Furthermore, we can see that this is the only solution to this system, so we have determined the entire solution set
\begin{align*} S&=\set{\colvector{-3\\5\\2}}\text{.} \end{align*}
You might compare this example with the procedure we used in Example US.
Archetypes A and B are meant to contrast each other in many respects. So let us solve Archetype A now.

Example SAA. Solutions for Archetype A.

Let us find the solutions to the following system of equations.
\begin{align*} x_1 -x_2 +2x_3 & =1\\ 2x_1+ x_2 + x_3 & =8\\ x_1 + x_2 & =5 \end{align*}
First, form the augmented matrix,
\begin{align*} \begin{bmatrix} 1 & -1 & 2 & 1\\ 2 & 1 & 1 & 8\\ 1 & 1 & 0 & 5 \end{bmatrix} \end{align*}
and work to reduced row-echelon form, first with \(j=1\text{,}\)
\begin{align*} \xrightarrow{\rowopadd{-2}{1}{2}} & \begin{bmatrix} 1 & -1 & 2 & 1\\ 0 & 3 & -3 & 6\\ 1 & 1 & 0 & 5 \end{bmatrix} \xrightarrow{\rowopadd{-1}{1}{3}} \begin{bmatrix} \leading{1} & -1 & 2 & 1\\ 0 & 3 & -3 & 6\\ 0 & 2 & -2 & 4 \end{bmatrix}\\ \end{align*}
Now, with \(j=2\text{,}\)
\begin{align*} \xrightarrow{\rowopmult{\frac{1}{3}}{2}} & \begin{bmatrix} \leading{1} & -1 & 2 & 1\\ 0 & 1 & -1 & 2\\ 0 & 2 & -2 & 4 \end{bmatrix} \xrightarrow{\rowopadd{1}{2}{1}} \begin{bmatrix} \leading{1} & 0 & 1 & 3\\ 0 & 1 & -1 & 2\\ 0 & 2 & -2 & 4 \end{bmatrix}\\ \xrightarrow{\rowopadd{-2}{2}{3}} & \begin{bmatrix} \leading{1} & 0 & 1 & 3\\ 0 & \leading{1} & -1 & 2\\ 0 & 0 & 0 & 0 \end{bmatrix}\text{.} \end{align*}
The system of equations represented by this augmented matrix needs to be considered a bit differently than that for Archetype B. First, the last row of the matrix is the equation \(0=0\text{,}\) which is always true, so it imposes no restrictions on our possible solutions and therefore we can safely ignore it as we analyze the other two equations. These equations are,
\begin{align*} x_1+x_3&=3\\ x_2-x_3&=2\text{.} \end{align*}
While this system is fairly easy to solve, it also appears to have a multitude of solutions. For example, choose \(x_3=1\) and see that then \(x_1=2\) and \(x_2=3\) will together form a solution. Or choose \(x_3=0\text{,}\) and then discover that \(x_1=3\) and \(x_2=2\) lead to a solution. Try it yourself: pick any value of \(x_3\) you please, and figure out what \(x_1\) and \(x_2\) should be to make the first and second equations (respectively) true. We’ll wait while you do that. Because of this behavior, we say that \(x_3\) is a free or independent variable. But why do we vary \(x_3\) and not some other variable? For now, notice that the third column of the augmented matrix is not a pivot column. With this idea, we can rearrange the two equations, solving each for the variable whose index is the same as the column index of a pivot column.
\begin{align*} x_1&=3-x_3\\ x_2&=2+x_3 \end{align*}
To write the set of solution vectors in set notation, we have
\begin{align*} S&=\setparts{\colvector{3-x_3\\2+x_3\\x_3}}{x_3\in\complexes} \end{align*}
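To see concretely that every choice of \(x_3\) really does produce a solution, we can check the parametrization against the original three equations. This is a quick plain-Python sanity check of our own, not part of the text's development; the function name is a hypothetical label.

```python
def satisfies_archetype_a(x1, x2, x3):
    # the three original equations of this example
    return (x1 - x2 + 2 * x3 == 1
            and 2 * x1 + x2 + x3 == 8
            and x1 + x2 == 5)

# any value of the free variable x3 yields a solution
for x3 in range(-10, 11):
    assert satisfies_archetype_a(3 - x3, 2 + x3, x3)
```

Substituting the parametrized values symbolically gives the same conclusion: each equation reduces to a true identity regardless of \(x_3\text{.}\)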
We will learn more in the next section about systems with infinitely many solutions and how to express their solution sets. Right now, you might look back at Example IS.

Example SAE. Solutions for Archetype E.

Let us find the solutions to the following system of equations,
\begin{align*} 2x_1 + x_2 + 7x_3 - 7x_4 &= 2 \\ -3x_1 + 4x_2 -5x_3 - 6x_4 &= 3 \\ x_1 +x_2 + 4x_3 - 5x_4 &= 2 \end{align*}
First, form the augmented matrix,
\begin{align*} \begin{bmatrix} 2 & 1 & 7 & -7 & 2\\ -3 & 4 & -5 & -6 & 3\\ 1 & 1 & 4 & -5 & 2 \end{bmatrix} \end{align*}
and work to reduced row-echelon form, first with \(j=1\text{,}\)
\begin{align*} \xrightarrow{\rowopswap{1}{3}} & \begin{bmatrix} 1 & 1 & 4 & -5 & 2\\ -3 & 4 & -5 & -6 & 3\\ 2 & 1 & 7 & -7 & 2 \end{bmatrix} \xrightarrow{\rowopadd{3}{1}{2}} \begin{bmatrix} 1 & 1 & 4 & -5 & 2\\ 0 & 7 & 7 & -21 & 9\\ 2 & 1 & 7 & -7 & 2 \end{bmatrix}\\ \xrightarrow{\rowopadd{-2}{1}{3}} & \begin{bmatrix} \leading{1} & 1 & 4 & -5 & 2\\ 0 & 7 & 7 & -21 & 9\\ 0 & -1 & -1 & 3 & -2 \end{bmatrix}\\ \end{align*}
Now, with \(j=2\text{,}\)
\begin{align*} \xrightarrow{\rowopswap{2}{3}} & \begin{bmatrix} \leading{1} & 1 & 4 & -5 & 2\\ 0 & -1 & -1 & 3 & -2\\ 0 & 7 & 7 & -21 & 9 \end{bmatrix} \xrightarrow{\rowopmult{-1}{2}} \begin{bmatrix} \leading{1} & 1 & 4 & -5 & 2\\ 0 & 1 & 1 & -3 & 2\\ 0 & 7 & 7 & -21 & 9 \end{bmatrix}\\ \xrightarrow{\rowopadd{-1}{2}{1}} & \begin{bmatrix} \leading{1} & 0 & 3 & -2 & 0\\ 0 & 1 & 1 & -3 & 2\\ 0 & 7 & 7 & -21 & 9 \end{bmatrix} \xrightarrow{\rowopadd{-7}{2}{3}} \begin{bmatrix} \leading{1} & 0 & 3 & -2 & 0\\ 0 & \leading{1} & 1 & -3 & 2\\ 0 & 0 & 0 & 0 & -5 \end{bmatrix}\\ \end{align*}
And finally, with \(j=5\text{,}\)
\begin{align*} \xrightarrow{\rowopmult{-\frac{1}{5}}{3}} & \begin{bmatrix} \leading{1} & 0 & 3 & -2 & 0\\ 0 & \leading{1} & 1 & -3 & 2\\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} \xrightarrow{\rowopadd{-2}{3}{2}} \begin{bmatrix} \leading{1} & 0 & 3 & -2 & 0\\ 0 & \leading{1} & 1 & -3 & 0\\ 0 & 0 & 0 & 0 & \leading{1} \end{bmatrix}\text{.} \end{align*}
Let us analyze the equations in the system represented by this augmented matrix. The third equation will read \(0=1\text{.}\) This is patently false, all the time. No choice of values for our variables will ever make it true. We are done. Since we cannot even make the last equation true, we have no hope of making all of the equations simultaneously true. So this system has no solutions, and its solution set is the empty set, \(\emptyset=\set{\ }\) (Definition ES).
Notice that we could have reached this conclusion sooner. After performing the row operation \(\rowopadd{-7}{2}{3}\text{,}\) we can see that the third equation reads \(0=-5\text{,}\) a false statement. Since the system represented by this matrix has no solutions, none of the systems represented by the row-equivalent matrices in this chain has any solutions. However, for this example, we have chosen to bring the matrix all the way to reduced row-echelon form as practice.
These three examples (Example SAB, Example SAA, Example SAE) illustrate the full range of possibilities for a system of linear equations — no solutions, one solution, or infinitely many solutions. In the next section we will examine these three scenarios more closely.
We (and everybody else) will often speak of “row-reducing” a matrix. This is an informal way of saying we begin with a matrix \(A\) and then analyze the matrix \(B\) that is row-equivalent to \(A\) and in reduced row-echelon form. So the term row-reduce is used as a verb, but describes something a bit more complicated, since we do not really change \(A\text{.}\) Theorem REMEF tells us that this process will always be successful and Theorem RREFU tells us that \(B\) will be unambiguous. Typically, an investigation of \(A\) will proceed by analyzing \(B\) and applying theorems whose hypotheses include the row-equivalence of \(A\) and \(B\text{,}\) and usually the hypothesis that \(B\) is in reduced row-echelon form.

Sage RREF. Reduced Row-Echelon Form.

There has been a lot of information about using Sage with vectors and matrices in this section. But we can now construct the coefficient matrix of a system of equations and the vector of constants. From these pieces we can easily construct the augmented matrix, which we could subject to a series of row operations. Computers are supposed to make routine tasks easy so we can concentrate on bigger ideas. No exception here: Sage can bring a matrix (augmented or not) to reduced row-echelon form with no problem. Let us redo Example SAB with Sage.
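A Sage cell for this computation might look like the following. This is a sketch using the system from Example SAB; `matrix`, `vector`, `.augment()`, and `.rref()` are standard Sage constructions.

```
coeff = matrix(QQ, [[-7, -6, -12],
                    [5, 5, 7],
                    [1, 0, 4]])
const = vector(QQ, [-33, 24, 5])
aug = coeff.augment(const)
aug.rref()
```

The output is the reduced row-echelon form computed by hand in Example SAB, with final column \((-3,\,5,\,2)\text{.}\)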
And the solution \(x_1=-3\text{,}\) \(x_2=5\text{,}\) \(x_3=2\) is now obvious. Beautiful.
You may notice that Sage has some commands with the word “echelon” in them. For now, these should be avoided like the plague, as there are some subtleties in how they work. The matrix method .rref() will be sufficient for our purposes for a long, long time — so stick with it.

Reading Questions RREF Reading Questions

1. Matrix in RREF?

Is the matrix below in reduced row-echelon form? Why or why not?
\begin{equation*} \begin{bmatrix} 1 & 5 & 0 & 6 & 8\\ 0 & 0 & 1 & 2 & 0\\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} \end{equation*}

2. Calculate RREF of a matrix.

Use row operations to convert the matrix below to reduced row-echelon form and report the final matrix.
\begin{equation*} \begin{bmatrix} 2 & 1 & 8\\ -1 & 1 & -1\\ -2 & 5 & 4 \end{bmatrix} \end{equation*}

3. Calculate system’s solution set, report RREF.

Find all the solutions to the system below by using an augmented matrix and row operations. Report your final matrix in reduced row-echelon form and the set of solutions.
\begin{align*} 2x_1 + 3x_2 - x_3&= 0\\ x_1 + 2x_2 + x_3&= 3\\ x_1 + 3x_2 + 3x_3&= 7 \end{align*}

Exercises RREF Exercises

C05.

Each archetype below is a system of equations. Form the augmented matrix of the system of equations, convert the matrix to reduced row-echelon form by using row operations and then describe the solution set of the original system of equations.
Archetype A, Archetype B, Archetype C, Archetype D, Archetype E, Archetype F, Archetype G, Archetype H, Archetype I, Archetype J

Exercise Group.

For problems C10–C19, find all solutions to the system of linear equations. Use your favorite computing device to row-reduce the augmented matrices for the systems, and write the solutions as a set, using correct set notation.
C10.
\begin{align*} 2x_1-3x_2+x_3+7x_4&=14\\ 2x_1+8x_2-4x_3+5x_4&=-1\\ x_1+3x_2-3x_3&=4 \\ -5x_1+2x_2+3x_3+4x_4&=-19 \end{align*}
Solution.
The augmented matrix row-reduces to
\begin{equation*} \begin{bmatrix} \leading{1} & 0 & 0 & 0 & 1\\ 0 & \leading{1} & 0 & 0 & -3\\ 0 & 0 & \leading{1} & 0 & -4\\ 0 & 0 & 0 & \leading{1} & 1 \end{bmatrix}\text{.} \end{equation*}
This augmented matrix represents the linear system \(x_1=1\text{,}\) \(x_2=-3\text{,}\) \(x_3=-4\text{,}\) \(x_4=1\text{,}\) which clearly has only one possible solution. We can write this solution set then as
\begin{align*} S&=\set{\colvector{1\\-3\\-4\\1}}\text{.} \end{align*}
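Substituting the claimed solution back into the original equations is a good habit. A quick plain-Python check (our own addition, not part of the exercise) might read:

```python
# the unique solution reported above
x1, x2, x3, x4 = 1, -3, -4, 1

# verify each equation of the original system
assert 2*x1 - 3*x2 + x3 + 7*x4 == 14
assert 2*x1 + 8*x2 - 4*x3 + 5*x4 == -1
assert x1 + 3*x2 - 3*x3 == 4
assert -5*x1 + 2*x2 + 3*x3 + 4*x4 == -19
```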
C11.
\begin{align*} 3x_1+4x_2-x_3+2x_4&=6\\ x_1-2x_2+3x_3+x_4&=2\\ 10x_2-10x_3-x_4&=1 \end{align*}
Solution.
The augmented matrix row-reduces to
\begin{equation*} \begin{bmatrix} \leading{1} & 0 & 1 & 4/5 & 0\\ 0 & \leading{1} & -1 & -1/10 & 0\\ 0 & 0 & 0 & 0 & \leading{1} \end{bmatrix}\text{.} \end{equation*}
Row 3 represents the equation \(0=1\text{,}\) which is patently false, so the original system has no solutions. We can express the solution set as the empty set, \(\emptyset=\set{\ }\text{.}\)
C12.
\begin{align*} 2x_1+4x_2+5x_3+7x_4&=-26\\ x_1+2x_2+x_3-x_4&=-4\\ -2x_1-4x_2+x_3+11x_4&=-10 \end{align*}
Solution.
The augmented matrix row-reduces to
\begin{align*} \begin{bmatrix} \leading{1} & 2 & 0 & -4 & 2\\ 0 & 0 & \leading{1} & 3 & -6\\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}\text{.} \end{align*}
In the spirit of Example SAA, we can express the infinitely many solutions of this system compactly with set notation. The key is to express certain variables in terms of others. More specifically, each pivot column number is the index of a variable that can be written in terms of the variables whose indices are non-pivot columns. Or saying the same thing: for each \(i\) in \(D\text{,}\) we can find an expression for \(x_i\) in terms of the variables without their index in \(D\text{.}\) Here \(D=\set{1,\,3}\text{,}\) so
\begin{align*} x_1&=2-2x_2+4x_4\\ x_3&=-6-3x_4\text{.} \end{align*}
As a set, we write the solutions precisely as
\begin{gather*} \setparts{\colvector{2-2x_2+4x_4\\x_2\\-6-3x_4\\x_4}}{x_2,\,x_4\in\complexes} \end{gather*}
C13.
\begin{align*} x_1+2x_2+8x_3-7x_4&=-2\\ 3x_1+2x_2+12x_3-5x_4&=6\\ -x_1+x_2+x_3-5x_4&=-10 \end{align*}
Solution.
The augmented matrix of the system of equations is
\begin{equation*} \begin{bmatrix} 1 & 2 & 8 & -7 & -2\\ 3 & 2 & 12 & -5 & 6\\ -1 & 1 & 1 & -5 & -10 \end{bmatrix} \end{equation*}
which row-reduces to
\begin{equation*} \begin{bmatrix} \leading{1} & 0 & 2 & 1 & 0\\ 0 & \leading{1} & 3 & -4 & 0\\ 0 & 0 & 0 & 0 & \leading{1} \end{bmatrix}\text{.} \end{equation*}
Row 3 represents the equation \(0=1\text{,}\) which is patently false, so the original system has no solutions. We can express the solution set as the empty set, \(\emptyset=\set{\ }\text{.}\)
C14.
\begin{align*} 2x_1+x_2+7x_3-2x_4&=4\\ 3x_1-2x_2+11x_4&=13\\ x_1+x_2+5x_3-3x_4&=1 \end{align*}
Solution.
The augmented matrix of the system of equations is
\begin{equation*} \begin{bmatrix} 2 & 1 & 7 & -2 & 4\\ 3 & -2 & 0 & 11 & 13\\ 1 & 1 & 5 & -3 & 1 \end{bmatrix} \end{equation*}
which row-reduces to
\begin{equation*} \begin{bmatrix} \leading{1}& 0 & 2 & 1 & 3\\ 0 & \leading{1} & 3 & -4 & -2\\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}\text{.} \end{equation*}
In the spirit of Example SAA, we can express the infinitely many solutions of this system compactly with set notation. The key is to express certain variables in terms of others. More specifically, each pivot column number is the index of a variable that can be written in terms of the variables whose indices are non-pivot columns. Or saying the same thing: for each \(i\) in \(D\text{,}\) we can find an expression for \(x_i\) in terms of the variables without their index in \(D\text{.}\) Here \(D=\set{1,\,2}\text{,}\) so rearranging the equations represented by the two nonzero rows to gain expressions for the variables \(x_1\) and \(x_2\) yields the solution set
\begin{equation*} S=\setparts{\colvector{3-2x_3-x_4\\-2-3x_3+4x_4\\x_3\\x_4}}{x_3,\,x_4\in\complexes}\text{.} \end{equation*}
C15.
\begin{align*} 2x_1+ 3x_2-x_3 -9x_4&=-16\\ x_1+ 2x_2+ x_3&=0\\ -x_1+ 2x_2+ 3x_3+ 4x_4&=8 \end{align*}
Solution.
The augmented matrix of the system of equations is
\begin{equation*} \begin{bmatrix} 2 & 3 & -1 & -9 & -16 \\ 1 & 2 & 1 & 0 & 0 \\ -1 & 2 & 3 & 4 & 8 \end{bmatrix} \end{equation*}
which row-reduces to
\begin{equation*} \begin{bmatrix} \leading{1} & 0 & 0 & 2 & 3 \\ 0 & \leading{1} & 0 & -3 & -5 \\ 0 & 0 & \leading{1} & 4 & 7 \end{bmatrix}\text{.} \end{equation*}
In the spirit of Example SAA, we can express the infinitely many solutions of this system compactly with set notation. The key is to express certain variables in terms of others. More specifically, each pivot column number is the index of a variable that can be written in terms of the variables whose indices are non-pivot columns. Or saying the same thing: for each \(i\) in \(D\text{,}\) we can find an expression for \(x_i\) in terms of the variables without their index in \(D\text{.}\) Here \(D=\set{1,\,2,\,3}\text{,}\) so rearranging the equations represented by the three nonzero rows to gain expressions for the variables \(x_1\text{,}\) \(x_2\) and \(x_3\) yields the solution set
\begin{equation*} S=\setparts{\colvector{3-2x_4\\-5+3x_4\\7-4x_4\\x_4}}{x_4\in\complexes}\text{.} \end{equation*}
C16.
\begin{align*} 2x_1+ 3x_2+19x_3 -4x_4&=2\\ x_1+ 2x_2+ 12x_3-3x_4&=1\\ -x_1+ 2x_2+ 8x_3-5x_4&=1 \end{align*}
Solution.
The augmented matrix of the system of equations is
\begin{equation*} \begin{bmatrix} 2 & 3 & 19 & -4 & 2 \\ 1 & 2 & 12 & -3 & 1 \\ -1 & 2 & 8 & -5 & 1 \end{bmatrix} \end{equation*}
which row-reduces to
\begin{equation*} \begin{bmatrix} \leading{1} & 0 & 2 & 1 & 0 \\ 0 & \leading{1} & 5 & -2 & 0 \\ 0 & 0 & 0 & 0 & \leading{1} \end{bmatrix}\text{.} \end{equation*}
Row 3 represents the equation \(0=1\text{,}\) which is patently false, so the original system has no solutions. We can express the solution set as the empty set, \(\emptyset=\set{\ }\text{.}\)
C17.
\begin{align*} -x_1+5x_2&=-8\\ -2x_1+5x_2+5x_3+2x_4&=9\\ -3x_1-x_2+3x_3+x_4&=3\\ 7x_1+6x_2+5x_3+x_4&=30 \end{align*}
Solution.
We row-reduce the augmented matrix of the system of equations,
\begin{align*} \begin{bmatrix} -1 & 5 & 0 & 0 & -8 \\ -2 & 5 & 5 & 2 & 9 \\ -3 & -1 & 3 & 1 & 3 \\ 7 & 6 & 5 & 1 & 30 \end{bmatrix} &\rref \begin{bmatrix} \leading{1} & 0 & 0 & 0 & 3 \\ 0 & \leading{1} & 0 & 0 & -1 \\ 0 & 0 & \leading{1} & 0 & 2 \\ 0 & 0 & 0 & \leading{1} & 5 \end{bmatrix}\text{.} \end{align*}
This augmented matrix represents the linear system \(x_1=3\text{,}\) \(x_2=-1\text{,}\) \(x_3=2\text{,}\) \(x_4=5\text{,}\) which clearly has only one possible solution. We can write this solution set then as
\begin{align*} S&=\set{\colvector{3\\-1\\2\\5}}\text{.} \end{align*}
C18.
\begin{align*} x_1+2x_2-4x_3-x_4&=32\\ x_1+3x_2-7x_3-x_5&=33\\ x_1+2x_3-2x_4+3x_5&=22 \end{align*}
Solution.
We row-reduce the augmented matrix of the system of equations,
\begin{align*} \begin{bmatrix} 1 & 2 & -4 & -1 & 0 & 32 \\ 1 & 3 & -7 & 0 & -1 & 33 \\ 1 & 0 & 2 & -2 & 3 & 22 \end{bmatrix} &\rref \begin{bmatrix} \leading{1} & 0 & 2 & 0 & 5 & 6 \\ 0 & \leading{1} & -3 & 0 & -2 & 9 \\ 0 & 0 & 0 & \leading{1} & 1 & -8 \end{bmatrix}\text{.} \end{align*}
In the spirit of Example SAA, we can express the infinitely many solutions of this system compactly with set notation. The key is to express certain variables in terms of others. More specifically, each pivot column number is the index of a variable that can be written in terms of the variables whose indices are non-pivot columns. Or saying the same thing: for each \(i\) in \(D\text{,}\) we can find an expression for \(x_i\) in terms of the variables without their index in \(D\text{.}\) Here \(D=\set{1,\,2,\,4}\text{,}\) so we form expressions for each dependent variable.
\begin{align*} x_1+2x_3+5x_5=6 \quad&\rightarrow\quad x_1=6-2x_3-5x_5\\ x_2-3x_3-2x_5=9 \quad&\rightarrow\quad x_2=9+3x_3+2x_5\\ x_4+x_5=-8\quad&\rightarrow\quad x_4=-8-x_5\text{.} \end{align*}
As a set, we write the solutions precisely as
\begin{align*} S&=\setparts{\colvector{6-2x_3-5x_5\\9+3x_3+2x_5\\x_3\\-8-x_5\\x_5}}{x_3,\,x_5\in\complexes}\text{.} \end{align*}
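With two free variables, every independent pair of choices for \(x_3\) and \(x_5\) should yield a solution. A plain-Python check of the parametrization over a small grid of values (our own sketch, with a hypothetical function name) might read:

```python
def satisfies_c18(x1, x2, x3, x4, x5):
    # the three original equations of C18
    return (x1 + 2*x2 - 4*x3 - x4 == 32
            and x1 + 3*x2 - 7*x3 - x5 == 33
            and x1 + 2*x3 - 2*x4 + 3*x5 == 22)

# both free variables may be chosen independently
for x3 in range(-3, 4):
    for x5 in range(-3, 4):
        assert satisfies_c18(6 - 2*x3 - 5*x5, 9 + 3*x3 + 2*x5,
                             x3, -8 - x5, x5)
```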
C19.
\begin{align*} 2x_1 + x_2 &= 6 \\ -x_1 - x_2 &= -2 \\ 3x_1 + 4x_2 &= 4 \\ 3x_1 + 5x_2 &= 2 \end{align*}
Solution.
We form the augmented matrix of the system,
\begin{align*} &\begin{bmatrix} 2 & 1 & 6 \\ -1 & -1 & -2 \\ 3 & 4 & 4 \\ 3 & 5 & 2 \end{bmatrix} \end{align*}
which row-reduces to
\begin{align*} &\begin{bmatrix} \leading{1} & 0 & 4 \\ 0 & \leading{1} & -2 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}\text{.} \end{align*}
This augmented matrix represents the linear system \(x_1=4\text{,}\) \(x_2=-2\text{,}\) \(0=0\text{,}\) \(0=0\text{,}\) which clearly has only one possible solution. We can write this solution set then as
\begin{align*} S&=\set{\colvector{4\\-2}}\text{.} \end{align*}

Exercise Group.

For problems C30–C33, row-reduce the matrix without the aid of a calculator, indicating the row operations you are using at each step using the notation of Definition RO.
C30.
\begin{align*} \begin{bmatrix} 2 & 1 & 5 & 10\\ 1 & -3 & -1 & -2\\ 4 & -2 & 6 & 12 \end{bmatrix} \end{align*}
Solution.
\begin{align*} & \begin{bmatrix} 2 & 1 & 5 & 10\\ 1 & -3 & -1 & -2\\ 4 & -2 & 6 & 12 \end{bmatrix} \xrightarrow{\rowopswap{1}{2}} \begin{bmatrix} 1 & -3 & -1 & -2\\ 2 & 1 & 5 & 10\\ 4 & -2 & 6 & 12 \end{bmatrix}\\ \xrightarrow{\rowopadd{-2}{1}{2}} & \begin{bmatrix} 1 & -3 & -1 & -2\\ 0 & 7 & 7 & 14\\ 4 & -2 & 6 & 12 \end{bmatrix} \xrightarrow{\rowopadd{-4}{1}{3}} \begin{bmatrix} 1 & -3 & -1 & -2\\ 0 & 7 & 7 & 14\\ 0 & 10 & 10 & 20 \end{bmatrix}\\ \xrightarrow{\rowopmult{\frac{1}{7}}{2}} & \begin{bmatrix} 1 & -3 & -1 & -2\\ 0 & 1 & 1 & 2\\ 0 & 10 & 10 & 20 \end{bmatrix} \xrightarrow{\rowopadd{3}{2}{1}} \begin{bmatrix} 1 & 0 & 2 & 4\\ 0 & 1 & 1 & 2\\ 0 & 10 & 10 & 20 \end{bmatrix}\\ \xrightarrow{\rowopadd{-10}{2}{3}} & \begin{bmatrix} \leading{1} & 0 & 2 & 4\\ 0 & \leading{1} & 1 & 2\\ 0 & 0 & 0 & 0 \end{bmatrix} \end{align*}
C31.
\begin{align*} \begin{bmatrix} 1 & 2 & -4 \\ -3 & -1 & -3 \\ -2 & 1 & -7 \end{bmatrix} \end{align*}
Solution.
\begin{align*} & \begin{bmatrix} 1 & 2 & -4 \\ -3 & -1 & -3 \\ -2 & 1 & -7 \end{bmatrix} \xrightarrow{\rowopadd{3}{1}{2}} \begin{bmatrix} 1 & 2 & -4 \\ 0 & 5 & -15 \\ -2 & 1 & -7 \end{bmatrix}\\ \xrightarrow{\rowopadd{2}{1}{3}} & \begin{bmatrix} 1 & 2 & -4 \\ 0 & 5 & -15 \\ 0 & 5 & -15 \end{bmatrix} \xrightarrow{\rowopmult{\frac{1}{5}}{2}} \begin{bmatrix} 1 & 2 & -4 \\ 0 & 1 & -3 \\ 0 & 5 & -15 \end{bmatrix}\\ \xrightarrow{\rowopadd{-2}{2}{1}} & \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & -3 \\ 0 & 5 & -15 \end{bmatrix} \xrightarrow{\rowopadd{-5}{2}{3}} \begin{bmatrix} \leading{1} & 0 & 2 \\ 0 & \leading{1} & -3 \\ 0 & 0 & 0 \end{bmatrix} \end{align*}
C32.
\begin{align*} \begin{bmatrix} 1 & 1 & 1 \\ -4 & -3 & -2 \\ 3 & 2 & 1 \end{bmatrix} \end{align*}
Solution.
Following the algorithm of Theorem REMEF, and working to create pivot columns from left to right, we have
\begin{align*} \begin{bmatrix} 1 & 1 & 1 \\ -4 & -3 & -2 \\ 3 & 2 & 1 \end{bmatrix} \xrightarrow{\rowopadd{4}{1}{2}} & \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 2 \\ 3 & 2 & 1 \end{bmatrix} \xrightarrow{\rowopadd{-3}{1}{3}} \begin{bmatrix} \leading{1} & 1 & 1 \\ 0 & 1 & 2 \\ 0 & -1 & -2 \end{bmatrix}\\ \xrightarrow{\rowopadd{-1}{2}{1}} & \begin{bmatrix} \leading{1} & 0 & -1 \\ 0 & 1 & 2 \\ 0 & -1 & -2 \end{bmatrix} \xrightarrow{\rowopadd{1}{2}{3}} \begin{bmatrix} \leading{1} & 0 & -1 \\ 0 & \leading{1} & 2 \\ 0 & 0 & 0 \end{bmatrix}\text{.} \end{align*}
C33.
\begin{align*} \begin{bmatrix} 1 & 2 & -1 & -1 \\ 2 & 4 & -1 & 4 \\ -1 & -2 & 3 & 5 \end{bmatrix} \end{align*}
Solution.
Following the algorithm of Theorem REMEF, and working to create pivot columns from left to right, we have
\begin{align*} & \begin{bmatrix} 1 & 2 & -1 & -1 \\ 2 & 4 & -1 & 4 \\ -1 & -2 & 3 & 5 \end{bmatrix} \xrightarrow{\rowopadd{-2}{1}{2}} \begin{bmatrix} 1 & 2 & -1 & -1 \\ 0 & 0 & 1 & 6 \\ -1 & -2 & 3 & 5 \end{bmatrix}\\ \xrightarrow{\rowopadd{1}{1}{3}} & \begin{bmatrix} \leading{1} & 2 & -1 & -1 \\ 0 & 0 & 1 & 6 \\ 0 & 0 & 2 & 4 \end{bmatrix} \xrightarrow{\rowopadd{1}{2}{1}} \begin{bmatrix} \leading{1} & 2 & 0 & 5 \\ 0 & 0 & 1 & 6 \\ 0 & 0 & 2 & 4 \end{bmatrix}\\ \xrightarrow{\rowopadd{-2}{2}{3}} & \begin{bmatrix} \leading{1} & 2 & 0 & 5 \\ 0 & 0 & \leading{1} & 6 \\ 0 & 0 & 0 & -8 \end{bmatrix} \xrightarrow{\rowopmult{-\frac{1}{8}}{3}} \begin{bmatrix} \leading{1} & 2 & 0 & 5 \\ 0 & 0 & \leading{1} & 6 \\ 0 & 0 & 0 & 1 \end{bmatrix}\\ \xrightarrow{\rowopadd{-6}{3}{2}} & \begin{bmatrix} \leading{1} & 2 & 0 & 5 \\ 0 & 0 & \leading{1} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \xrightarrow{\rowopadd{-5}{3}{1}} \begin{bmatrix} \leading{1} & 2 & 0 & 0 \\ 0 & 0 & \leading{1} & 0 \\ 0 & 0 & 0 & \leading{1} \end{bmatrix}\text{.} \end{align*}
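Alternatively, the individual steps of this solution can be replayed literally. The helpers below are our own names (with 0-indexed rows, unlike the text's 1-indexed notation); they implement the three operations of Definition RO and apply the same sequence used above for C33:

```python
from fractions import Fraction

# The three row operations of Definition RO, acting in place.
def swap(M, i, j):            # swap rows i and j
    M[i], M[j] = M[j], M[i]

def scale(M, a, i):           # multiply row i by a (a nonzero)
    M[i] = [a * x for x in M[i]]

def add(M, a, i, j):          # replace row j with a*(row i) + (row j)
    M[j] = [a * x + y for x, y in zip(M[i], M[j])]

A = [[Fraction(x) for x in row] for row in
     [[1, 2, -1, -1], [2, 4, -1, 4], [-1, -2, 3, 5]]]

# replay the solution's steps (rows shifted down by one for 0-indexing)
add(A, -2, 0, 1)
add(A, 1, 0, 2)
add(A, 1, 1, 0)
add(A, -2, 1, 2)
scale(A, Fraction(-1, 8), 2)
add(A, -6, 2, 1)
add(A, -5, 2, 0)
print(A == [[1, 2, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])  # → True
```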

M40.

Consider the two \(3\times 4\) matrices below
\begin{align*} B&= \begin{bmatrix} 1 & 3 & -2 & 2 \\ -1 & -2 & -1 & -1 \\ -1 & -5 & 8 & -3 \end{bmatrix} & C&= \begin{bmatrix} 1 & 2 & 1 & 2 \\ 1 & 1 & 4 & 0 \\ -1 & -1 & -4 & 1 \end{bmatrix} \end{align*}
  1. Row-reduce each matrix and determine that the reduced row-echelon forms of \(B\) and \(C\) are identical. From this argue that \(B\) and \(C\) are row-equivalent.
  2. In the proof of Theorem RREFU, we begin by arguing that entries of row-equivalent matrices are related by way of certain scalars and sums. In this example, we would write that entries of \(B\) from row \(i\) that are in column \(j\) are linearly related to the entries of \(C\) in column \(j\) from all three rows
    \begin{align*} \matrixentry{B}{ij} &= \delta_{i1}\matrixentry{C}{1j}+ \delta_{i2}\matrixentry{C}{2j}+ \delta_{i3}\matrixentry{C}{3j} & 1&\leq j\leq 4 \end{align*}
    For each \(1\leq i\leq 3\) find the corresponding three scalars in this relationship. So your answer will be nine scalars, determined three at a time.
Solution.
  1. Let \(R\) be the common reduced row-echelon form of \(B\) and \(C\text{.}\) A sequence of row operations converts \(B\) to \(R\) and a second sequence of row operations converts \(C\) to \(R\text{.}\) If we “reverse” the second sequence’s order, and reverse each individual row operation (see Exercise RREF.T10) then we can begin with \(B\text{,}\) convert to \(R\) with the first sequence, and then convert to \(C\) with the reversed sequence. Satisfying Definition REM we can say \(B\) and \(C\) are row-equivalent matrices.
  2. We will work this carefully for the first row of \(B\) and just give the solution for the next two rows. For row 1 of \(B\) take \(i=1\) and we have
    \begin{align*} \matrixentry{B}{1j} &= \delta_{11}\matrixentry{C}{1j}+ \delta_{12}\matrixentry{C}{2j}+ \delta_{13}\matrixentry{C}{3j} & 1&\leq j\leq 4\text{.} \end{align*}
    If we substitute the four values for \(j\) we arrive at four linear equations in the three unknowns \(\delta_{11}, \delta_{12}, \delta_{13}\text{,}\)
    \begin{align*} (j=1)& & \matrixentry{B}{11} &= \delta_{11}\matrixentry{C}{11}+ \delta_{12}\matrixentry{C}{21}+ \delta_{13}\matrixentry{C}{31} & &\Rightarrow & 1 &= \delta_{11}(1)+ \delta_{12}(1)+ \delta_{13}(-1)\\ (j=2)& & \matrixentry{B}{12} &= \delta_{11}\matrixentry{C}{12}+ \delta_{12}\matrixentry{C}{22}+ \delta_{13}\matrixentry{C}{32} & &\Rightarrow & 3 &= \delta_{11}(2)+ \delta_{12}(1)+ \delta_{13}(-1)\\ (j=3)& & \matrixentry{B}{13} &= \delta_{11}\matrixentry{C}{13}+ \delta_{12}\matrixentry{C}{23}+ \delta_{13}\matrixentry{C}{33} & &\Rightarrow & -2 &= \delta_{11}(1)+ \delta_{12}(4)+ \delta_{13}(-4)\\ (j=4)& & \matrixentry{B}{14} &= \delta_{11}\matrixentry{C}{14}+ \delta_{12}\matrixentry{C}{24}+ \delta_{13}\matrixentry{C}{34} & &\Rightarrow & 2 &= \delta_{11}(2)+ \delta_{12}(0)+ \delta_{13}(1)\text{.} \end{align*}
    We form the augmented matrix of this system and row-reduce to find the solutions,
    \begin{align*} \begin{bmatrix} 1 & 1 & -1 & 1 \\ 2 & 1 & -1 & 3 \\ 1 & 4 & -4 & -2 \\ 2 & 0 & 1 & 2 \end{bmatrix} &\rref \begin{bmatrix} \leading{1} & 0 & 0 & 2 \\ 0 & \leading{1} & 0 & -3 \\ 0 & 0 & \leading{1} & -2 \\ 0 & 0 & 0 & 0 \end{bmatrix}\text{.} \end{align*}
    So the unique solution is \(\delta_{11}=2\text{,}\) \(\delta_{12}=-3\text{,}\) \(\delta_{13}=-2\text{.}\) Entirely similar work will lead you to
\begin{align*} \delta_{21}&=-1 & \delta_{22}&=1 & \delta_{23}&=1\\ \delta_{31}&=-4 & \delta_{32}&=8 & \delta_{33}&=5\text{.} \end{align*}
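The nine scalars admit a quick check: collect them into a matrix \(D\) (our own name, borrowing the matrix-product idea from later chapters) and confirm that the entry-by-entry relationship \(\matrixentry{B}{ij}=\sum_{k}\matrixentry{D}{ik}\matrixentry{C}{kj}\) holds:

```python
# D holds the scalars delta_ij found above; C and B are from the exercise.
D = [[2, -3, -2], [-1, 1, 1], [-4, 8, 5]]
C = [[1, 2, 1, 2], [1, 1, 4, 0], [-1, -1, -4, 1]]
B = [[1, 3, -2, 2], [-1, -2, -1, -1], [-1, -5, 8, -3]]

# compute each entry of the product D*C and compare with B
product = [[sum(D[i][k] * C[k][j] for k in range(3)) for j in range(4)]
           for i in range(3)]
print(product == B)  # → True
```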

M45.

You keep a number of lizards, mice and peacocks as pets. There are a total of 108 legs and 30 tails in your menagerie. You have twice as many mice as lizards. How many of each creature do you have?
Solution.
Let \(l, m, p\) denote the number of lizards, mice and peacocks, respectively. Lizards and mice each have four legs, peacocks have two, and each animal has one tail, so the statements from the problem yield the equations:
\begin{align*} 4l+4m+2p&=108\\ l+m+p &= 30\\ 2l-m&=0 \end{align*}
We form the augmented matrix for this system and row-reduce
\begin{align*} \begin{bmatrix} 4 & 4 & 2 & 108\\ 1 & 1 & 1 & 30\\ 2 & -1 & 0 & 0 \end{bmatrix} &\rref \begin{bmatrix} \leading{1} & 0 & 0 & 8\\ 0 & \leading{1} & 0 & 16\\ 0 & 0 & \leading{1} & 6 \end{bmatrix}\text{.} \end{align*}
From the row-reduced matrix, we see that we have an equivalent system \(l=8\text{,}\) \(m=16\text{,}\) and \(p=6\text{,}\) which means that you have 8 lizards, 16 mice and 6 peacocks.
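Since the unknowns are small nonnegative integers, a brute-force search offers a sanity check on the row reduction (our own sketch, not the text's method):

```python
# Search all nonnegative counts up to the tail total of 30 for triples
# satisfying the leg, tail, and mice-vs-lizards conditions.
solutions = [(l, m, p)
             for l in range(31) for m in range(31) for p in range(31)
             if 4*l + 4*m + 2*p == 108 and l + m + p == 30 and m == 2*l]
print(solutions)  # → [(8, 16, 6)]
```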

M50.

A parking lot has 66 vehicles (cars, trucks, motorcycles and bicycles) in it. There are four times as many cars as trucks. The total number of tires (4 per car or truck, 2 per motorcycle or bicycle) is 252. How many cars are there? How many bicycles?
Solution.
Let \(c,\,t,\,m,\,b\) denote the number of cars, trucks, motorcycles, and bicycles. Then the statements from the problem yield the equations
\begin{align*} c+t+m+b&=66\\ c-4t&=0\\ 4c+4t+2m+2b&=252\text{.} \end{align*}
We form the augmented matrix for this system and row-reduce.
\begin{align*} \begin{bmatrix} 1 & 1 & 1 & 1 & 66\\ 1 & -4 & 0 & 0 & 0\\ 4 & 4 & 2 & 2 & 252 \end{bmatrix} &\rref \begin{bmatrix} \leading{1} & 0 & 0 & 0 & 48\\ 0 & \leading{1} & 0 & 0 & 12\\ 0 & 0 & \leading{1} & 1 & 6 \end{bmatrix} \end{align*}
The first row of the matrix represents the equation \(c=48\text{,}\) so there are 48 cars. The second row of the matrix represents the equation \(t=12\text{,}\) so there are 12 trucks. The third row of the matrix represents the equation \(m+b=6\text{,}\) so there are anywhere from 0 to 6 bicycles. We can also say that \(b\) is a free variable, but the context of the problem limits it to the 7 integer values \(0,1,\dotsc,6\text{,}\) since you cannot have a negative number of motorcycles or bicycles.
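Enumerating the free variable makes the solution set concrete. The sketch below (our own illustration) lists the seven consistent vehicle counts \((c,t,m,b)\) and checks each against the original conditions:

```python
# c and t are determined; m + b = 6 with the free variable b in 0..6.
c, t = 48, 12
solutions = [(c, t, 6 - b, b) for b in range(7)]
for c_, t_, m, b in solutions:
    assert c_ + t_ + m + b == 66            # 66 vehicles in the lot
    assert c_ == 4 * t_                     # four times as many cars as trucks
    assert 4*c_ + 4*t_ + 2*m + 2*b == 252   # 252 tires in total
print(len(solutions))  # → 7
```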

T10.

Prove that each of the three row operations (Definition RO) is reversible. More precisely, if the matrix \(B\) is obtained from \(A\) by application of a single row operation, show that there is a single row operation that will transform \(B\) back into \(A\text{.}\)
Solution.
If we can reverse each row operation individually, then we can reverse a sequence of row operations. The operations that reverse each operation are listed below, using our shorthand notation. Notice how requiring the scalar \(\alpha\) to be nonzero makes the second operation reversible.
\begin{align*} \rowopswap{i}{j}&\quad\quad\rowopswap{i}{j}\\ \rowopmult{\alpha}{i},\,\alpha\neq 0&\quad\quad\rowopmult{\frac{1}{\alpha}}{i}\\ \rowopadd{\alpha}{i}{j}&\quad\quad\rowopadd{-\alpha}{i}{j} \end{align*}
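The three reversals can be tested concretely. The sketch below (our own, with 0-indexed rows) applies each operation and then its claimed inverse to a sample matrix, and confirms the original returns exactly:

```python
from fractions import Fraction

# a sample matrix with exact rational entries
A0 = [[Fraction(x) for x in row] for row in [[2, -1, 3], [5, 0, -4]]]

# swap rows 1 and 2, twice: the swap undoes itself
A = [row[:] for row in A0]
A[0], A[1] = A[1], A[0]
A[0], A[1] = A[1], A[0]
assert A == A0

# scale row 1 by a, then by 1/a (a nonzero)
a = Fraction(3, 7)
A = [row[:] for row in A0]
A[0] = [a * x for x in A[0]]
A[0] = [x / a for x in A[0]]
assert A == A0

# add a*(row 1) to row 2, then add -a*(row 1) to row 2
A = [row[:] for row in A0]
A[1] = [a * x + y for x, y in zip(A[0], A[1])]
A[1] = [-a * x + y for x, y in zip(A[0], A[1])]
assert A == A0

print("all three operations reversed")  # runs with no assertion errors
```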

T11.

Suppose that \(A\text{,}\) \(B\) and \(C\) are \(m\times n\) matrices. Use the definition of row-equivalence (Definition REM) to prove the following three facts.
  1. \(A\) is row-equivalent to \(A\text{.}\)
  2. If \(A\) is row-equivalent to \(B\text{,}\) then \(B\) is row-equivalent to \(A\text{.}\)
  3. If \(A\) is row-equivalent to \(B\text{,}\) and \(B\) is row-equivalent to \(C\text{,}\) then \(A\) is row-equivalent to \(C\text{.}\)
A relationship that satisfies these three properties is known as an equivalence relation, an important idea in the study of various algebras. This is a formal way of saying that a relationship behaves like equality, without requiring the relationship to be as strict as equality itself. We will see it again in Theorem SER.

T12.

Suppose that \(B\) is an \(m\times n\) matrix in reduced row-echelon form. Build a new, likely smaller, \(k\times\ell\) matrix \(C\) as follows. Keep any collection of \(k\) adjacent rows, \(k\leq m\text{.}\) From these rows, keep columns \(1\) through \(\ell\text{,}\) \(\ell\leq n\text{.}\) Prove that \(C\) is in reduced row-echelon form.

T13.

Generalize Exercise RREF.T12 by just keeping any \(k\) rows, and not requiring the rows to be adjacent. Prove that any such matrix \(C\) is in reduced row-echelon form.