It might be worthwhile for Sage to create a column space using actual columns of the matrix as a spanning set. But we can do it ourselves fairly easily. A discussion follows the example.
We see that A has four pivot columns, numbered 0, 1, 2, 4. The matrix B is just a convenience to hold the pivot columns of A. However, the column spaces of A and B should be equal, as Sage verifies. Also, B will row-reduce to the same 0-1 pivot columns that appear in the reduced row-echelon form of the full matrix A. So it is no accident that the reduced row-echelon form of B is a full identity matrix, followed by sufficiently many zero rows to give the matrix the correct size.
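This claim can be checked outside Sage with a short pure-Python sketch. The matrix A below is hypothetical, chosen only so that its pivot columns come out as 0, 1, 2, 4 like the example, and the rref helper is our own exact-arithmetic routine, not a library function:

```python
from fractions import Fraction

def rref(M):
    """Return (reduced row-echelon form of M, list of pivot column indices)."""
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(ncols):
        # Find a row at or below r with a nonzero entry in column c.
        pr = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        pv = M[r][c]
        M[r] = [x / pv for x in M[r]]        # scale pivot row to a leading 1
        for i in range(nrows):               # clear the rest of column c
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
        if r == nrows:
            break
    return M, pivots

# A hypothetical 5x5 matrix whose pivot columns are 0, 1, 2, 4
# (column 3 is a linear combination of columns 0, 1, 2).
A = [[1, 0, 0, 2, 0],
     [1, 1, 0, 3, 0],
     [0, 1, 1, 4, 0],
     [0, 0, 1, 3, 1],
     [1, 1, 1, 6, 1]]

_, pivots = rref(A)
B = [[row[j] for j in pivots] for row in A]  # pivot columns only
R, _ = rref(B)
# R is a 4x4 identity matrix sitting atop a single zero row.
```

The pivot columns are linearly independent, so row-reducing B must produce a leading 1 in every column, which forces the identity-plus-zero-rows shape.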
The vector space method .span_of_basis() is new to us. It creates a span of a set of vectors, as before, but now we are responsible for supplying a linearly independent set of vectors, which we have done. We know this because Theorem BCS guarantees the set we provided is linearly independent (and spans the column space), while Sage would have given us an error if we had provided a linearly dependent set. In return, Sage will carry this linearly independent spanning set along with the vector space, something Sage calls a “user basis.”
Notice how cs has two linearly independent spanning sets now. Our set of “original columns” is obtained via the standard vector space method .basis(), and we can obtain a linearly independent spanning set that looks more familiar with the vector space method .echelonized_basis(). For a vector space created with a simple .span() construction these two commands would yield identical results; it is only when we supply a linearly independent spanning set with the .span_of_basis() method that a “user basis” becomes relevant.
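In Sage syntax, the construction just described looks roughly like the following sketch. The matrix A here is hypothetical (any matrix would do); the methods named are the ones discussed in the text:

```
A = matrix(QQ, [[1, 0, 0, 2, 0],
                [1, 1, 0, 3, 0],
                [0, 1, 1, 4, 0],
                [0, 0, 1, 3, 1],
                [1, 1, 1, 6, 1]])
B = A.matrix_from_columns(A.pivots())
cs = (QQ^5).span_of_basis(B.columns())
cs.basis()              # user basis: the original pivot columns of A
cs.echelonized_basis()  # the more familiar echelonized spanning set
```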
Finally, we check that cs is indeed the column space of A (we knew it would be) and then we provide a one-line, totally general construction of the column space using original columns.
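One possible form of such a one-line construction is sketched below; the exact cell in the example may differ, but the Sage methods used here are genuine:

```
cs = (QQ^A.nrows()).span_of_basis(A.matrix_from_columns(A.pivots()).columns())
```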
This is an opportunity to make an interesting observation, which could be used to substantiate several theorems. When we take the original columns that we recognize as pivot columns, and use them alone to form a matrix, this new matrix will always row-reduce to an identity matrix followed by zero rows. This is basically a consequence of reduced row-echelon form. Evaluate the compute cell below repeatedly. The number of pivot columns could in theory change, though this is unlikely, since the columns of a random matrix are rarely linearly dependent. In any event, the form of the result will always be an identity matrix followed by some zero rows.
With more columns than rows, we know by Theorem MVSLD that not every column can be a pivot column. Here we will almost always see an identity matrix as the result, though we could get a smaller identity matrix followed by zero rows.
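A quick version of this experiment in pure Python, with a hypothetical exact-arithmetic rref helper standing in for Sage's row reduction: generate random wide matrices, extract the pivot columns, and row-reduce just those columns.

```python
from fractions import Fraction
from random import randint, seed

def rref(M):
    """Return (reduced row-echelon form of M, list of pivot column indices)."""
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(ncols):
        pr = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        pv = M[r][c]
        M[r] = [x / pv for x in M[r]]
        for i in range(nrows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
        if r == nrows:
            break
    return M, pivots

seed(42)  # reproducible trials
m, n = 4, 7                          # more columns than rows
for _ in range(20):
    A = [[randint(-3, 3) for _ in range(n)] for _ in range(m)]
    _, piv = rref(A)
    assert len(piv) <= m < n         # Theorem MVSLD: not every column pivots
    B = [[row[j] for j in piv] for row in A]
    R, _ = rref(B)
    r = len(piv)
    ident = [[1 if i == j else 0 for j in range(r)] for i in range(r)]
    assert R[:r] == ident                             # identity on top...
    assert all(x == 0 for row in R[r:] for x in row)  # ...zero rows below
```

Every trial exhibits the predicted shape: an identity matrix whose size equals the number of pivot columns, followed by zero rows (usually none at all, since a random wide matrix almost always has full row rank).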