Section O Orthogonality

In this section we define a couple more operations with vectors, and prove a few theorems. At first blush these definitions and results will not appear central to what follows, but we will make use of them at key points in the remainder of the course (such as Section MINM, Section OD). Because we have chosen to use \(\complexes\) as our set of scalars, this subsection is a bit more, uh, … complex than it would be for the real numbers. We will explain as we go along how things get easier for the real numbers \({\mathbb R}\text{.}\) If you have not already, now would be a good time to review some of the basic properties of arithmetic with complex numbers described in Section CNO. With that done, we can extend the basics of complex number arithmetic to our study of vectors in \(\complex{m}\text{.}\)

Subsection CAV Complex Arithmetic and Vectors

We know how the addition and multiplication of complex numbers is employed in defining the operations for vectors in \(\complex{m}\) (Definition CVA and Definition CVSM). We can also extend the idea of the conjugate to vectors.

Definition CCCV. Complex Conjugate of a Column Vector.

Suppose that \(\vect{u}\) is a vector from \(\complex{m}\text{.}\) Then the conjugate of the vector, \(\conjugate{\vect{u}}\text{,}\) is defined by
\begin{align*} \vectorentry{\conjugate{\vect{u}}}{i} &=\conjugate{\vectorentry{\vect{u}}{i}} &&1\leq i\leq m\text{.} \end{align*}
With this definition we can show that the conjugate of a column vector behaves as we would expect with regard to vector addition and scalar multiplication.

Theorem CRVA. Conjugation Respects Vector Addition.

Suppose \(\vect{x}\) and \(\vect{y}\) are two vectors from \(\complex{m}\text{.}\) Then \(\conjugate{\vect{x}+\vect{y}}=\conjugate{\vect{x}}+\conjugate{\vect{y}}\text{.}\)

Proof.

For each \(1\leq i\leq m\)
\begin{align*} \vectorentry{\conjugate{\vect{x}+\vect{y}}}{i} &=\conjugate{\vectorentry{\vect{x}+\vect{y}}{i}}&& \knowl{./knowl/xref/definition-CCCV.html}{\text{Definition CCCV}}\\ &=\conjugate{\vectorentry{\vect{x}}{i}+\vectorentry{\vect{y}}{i}}&& \knowl{./knowl/xref/definition-CVA.html}{\text{Definition CVA}}\\ &=\conjugate{\vectorentry{\vect{x}}{i}}+\conjugate{\vectorentry{\vect{y}}{i}}&& \knowl{./knowl/xref/theorem-CCRA.html}{\text{Theorem CCRA}}\\ &=\vectorentry{\conjugate{\vect{x}}}{i}+\vectorentry{\conjugate{\vect{y}}}{i}&& \knowl{./knowl/xref/definition-CCCV.html}{\text{Definition CCCV}}\\ &=\vectorentry{\conjugate{\vect{x}}+\conjugate{\vect{y}}}{i}&& \knowl{./knowl/xref/definition-CVA.html}{\text{Definition CVA}}\text{.} \end{align*}
Then by Definition CVE we have \(\conjugate{\vect{x}+\vect{y}}=\conjugate{\vect{x}}+\conjugate{\vect{y}}\text{.}\)

Theorem CRSM. Conjugation Respects Vector Scalar Multiplication.

Suppose \(\vect{x}\) is a vector from \(\complex{m}\) and \(\alpha\in\complexes\) is a scalar. Then \(\conjugate{\alpha\vect{x}}=\conjugate{\alpha}\,\conjugate{\vect{x}}\text{.}\)

Proof.

For \(1\leq i\leq m\)
\begin{align*} \vectorentry{\conjugate{\alpha\vect{x}}}{i} &=\conjugate{\vectorentry{\alpha\vect{x}}{i}}&& \knowl{./knowl/xref/definition-CCCV.html}{\text{Definition CCCV}}\\ &=\conjugate{\alpha\vectorentry{\vect{x}}{i}}&& \knowl{./knowl/xref/definition-CVSM.html}{\text{Definition CVSM}}\\ &=\conjugate{\alpha}\,\conjugate{\vectorentry{\vect{x}}{i}}&& \knowl{./knowl/xref/theorem-CCRM.html}{\text{Theorem CCRM}}\\ &=\conjugate{\alpha}\,\vectorentry{\conjugate{\vect{x}}}{i}&& \knowl{./knowl/xref/definition-CCCV.html}{\text{Definition CCCV}}\\ &=\vectorentry{\conjugate{\alpha}\,\conjugate{\vect{x}}}{i}&& \knowl{./knowl/xref/definition-CVSM.html}{\text{Definition CVSM}}\text{.} \end{align*}
Then by Definition CVE we have \(\conjugate{\alpha\vect{x}}=\conjugate{\alpha}\,\conjugate{\vect{x}}\text{.}\)
These two theorems together tell us how we can “push” complex conjugation through linear combinations.

Subsection IP Inner products

Definition IP. Inner Product.

Given the vectors \(\vect{u},\,\vect{v}\in\complex{m}\) the inner product of \(\vect{u}\) and \(\vect{v}\) is the scalar quantity in \(\complexes\)
\begin{equation*} \innerproduct{\vect{u}}{\vect{v}}= \conjugate{\vectorentry{\vect{u}}{1}}\vectorentry{\vect{v}}{1}+ \conjugate{\vectorentry{\vect{u}}{2}}\vectorentry{\vect{v}}{2}+ \conjugate{\vectorentry{\vect{u}}{3}}\vectorentry{\vect{v}}{3}+ \cdots+ \conjugate{\vectorentry{\vect{u}}{m}}\vectorentry{\vect{v}}{m} = \sum_{i=1}^{m}\conjugate{\vectorentry{\vect{u}}{i}}\vectorentry{\vect{v}}{i}\text{.} \end{equation*}
This operation is a bit different in that we begin with two vectors but produce a scalar. Computing one is straightforward.

Example CSIP. Computing some inner products.

The inner product of
\begin{align*} \vect{u}=\colvector{2+3i\\5+2i\\-3+i}&&\text{and}&&\vect{v}=\colvector{1+2i\\-4+5i\\0+5i} \end{align*}
is
\begin{align*} \innerproduct{\vect{u}}{\vect{v}} &=(\conjugate{2+3i})(1+2i)+(\conjugate{5+2i})(-4+5i)+(\conjugate{-3+i})(0+5i)\\ &=(2-3i)(1+2i)+(5-2i)(-4+5i)+(-3-i)(0+5i)\\ &=(8+i)+(-10+33i)+(5-15i)\\ &=3+19i\text{.} \end{align*}
The inner product of
\begin{align*} \vect{w}=\colvector{2\\4\\-3\\2\\8}&&\text{and}&& \vect{x}=\colvector{3\\1\\0\\-1\\-2} \end{align*}
is
\begin{align*} \innerproduct{\vect{w}}{\vect{x}}&= (\conjugate{2})3+(\conjugate{4})1+(\conjugate{-3})0+(\conjugate{2})(-1)+(\conjugate{8})(-2)\\ &=2(3)+4(1)+(-3)0+2(-1)+8(-2)=-8\text{.} \end{align*}
In the case where the entries of our vectors are all real numbers (as in the second part of Example CSIP), the computation of the inner product may look familiar and be known to you as a dot product or scalar product. So you can view the inner product as a generalization of the scalar product to vectors from \(\complex{m}\) (rather than \({\mathbb R}^m\)).
Note that we have chosen to conjugate the entries of the first vector listed in the inner product, while it is almost equally feasible to conjugate entries from the second vector instead. In particular, prior to Version 2.90, we did use the latter definition, and this has now changed to the former, with resulting adjustments propagated up through Section CB (only). However, conjugating the first vector leads to much nicer formulas for certain matrix decompositions and also shortens some proofs.
There are several quick theorems we can now prove, and they will each be useful later.

Theorem IPVA. Inner Product and Vector Addition.

Suppose \(\vect{u},\,\vect{v},\,\vect{w}\in\complex{m}\text{.}\) Then

1. \(\innerproduct{\vect{u}+\vect{v}}{\vect{w}}=\innerproduct{\vect{u}}{\vect{w}}+\innerproduct{\vect{v}}{\vect{w}}\)

2. \(\innerproduct{\vect{u}}{\vect{v}+\vect{w}}=\innerproduct{\vect{u}}{\vect{v}}+\innerproduct{\vect{u}}{\vect{w}}\)

Proof.

The proofs of the two parts are very similar, with the second one requiring just a bit more effort due to the conjugation that occurs. We will prove part 1 and you can prove part 2 (Exercise O.T10).
\begin{align*} \innerproduct{\vect{u}+\vect{v}}{\vect{w}} &=\sum_{i=1}^{m}\conjugate{\vectorentry{\vect{u}+\vect{v}}{i}}\vectorentry{\vect{w}}{i}&& \knowl{./knowl/xref/definition-IP.html}{\text{Definition IP}}\\ &=\sum_{i=1}^{m}\left(\conjugate{\vectorentry{\vect{u}}{i}+ \vectorentry{\vect{v}}{i}}\right)\vectorentry{\vect{w}}{i}&& \knowl{./knowl/xref/definition-CVA.html}{\text{Definition CVA}}\\ &=\sum_{i=1}^{m}\left(\conjugate{\vectorentry{\vect{u}}{i}}+ \conjugate{\vectorentry{\vect{v}}{i}}\right)\vectorentry{\vect{w}}{i}&& \knowl{./knowl/xref/theorem-CCRA.html}{\text{Theorem CCRA}}\\ &=\sum_{i=1}^{m}\conjugate{\vectorentry{\vect{u}}{i}}\vectorentry{\vect{w}}{i} + \conjugate{\vectorentry{\vect{v}}{i}}\vectorentry{\vect{w}}{i}&& \knowl{./knowl/xref/property-DCN.html}{\text{Property DCN}}\\ &=\sum_{i=1}^{m}\conjugate{\vectorentry{\vect{u}}{i}}\vectorentry{\vect{w}}{i} + \sum_{i=1}^{m}\conjugate{\vectorentry{\vect{v}}{i}}\vectorentry{\vect{w}}{i}&& \knowl{./knowl/xref/property-CACN.html}{\text{Property CACN}}\\ &=\innerproduct{\vect{u}}{\vect{w}}+\innerproduct{\vect{v}}{\vect{w}}&& \knowl{./knowl/xref/definition-IP.html}{\text{Definition IP}} \end{align*}

Theorem IPSM. Inner Product and Scalar Multiplication.

Suppose \(\vect{u},\,\vect{v}\in\complex{m}\) and \(\alpha\in\complexes\text{.}\) Then

1. \(\innerproduct{\alpha\vect{u}}{\vect{v}}=\conjugate{\alpha}\innerproduct{\vect{u}}{\vect{v}}\)

2. \(\innerproduct{\vect{u}}{\alpha\vect{v}}=\alpha\innerproduct{\vect{u}}{\vect{v}}\)

Proof.

The proofs of the two parts are very similar, with the second one requiring just a bit more effort due to the conjugation that occurs. We will prove part 1 and you can prove part 2 (Exercise O.T11).
\begin{align*} \innerproduct{\alpha\vect{u}}{\vect{v}}&=\sum_{i=1}^{m}\conjugate{\vectorentry{\alpha\vect{u}}{i}}\vectorentry{\vect{v}}{i}&& \knowl{./knowl/xref/definition-IP.html}{\text{Definition IP}}\\ &=\sum_{i=1}^{m}\conjugate{\alpha\vectorentry{\vect{u}}{i}}\vectorentry{\vect{v}}{i}&& \knowl{./knowl/xref/definition-CVSM.html}{\text{Definition CVSM}}\\ &=\sum_{i=1}^{m}\conjugate{\alpha}\,\conjugate{\vectorentry{\vect{u}}{i}}\vectorentry{\vect{v}}{i}&& \knowl{./knowl/xref/theorem-CCRM.html}{\text{Theorem CCRM}}\\ &=\conjugate{\alpha}\sum_{i=1}^{m}\conjugate{\vectorentry{\vect{u}}{i}}\vectorentry{\vect{v}}{i}&& \knowl{./knowl/xref/property-DCN.html}{\text{Property DCN}}\\ &=\conjugate{\alpha}\innerproduct{\vect{u}}{\vect{v}}&& \knowl{./knowl/xref/definition-IP.html}{\text{Definition IP}} \end{align*}

Theorem IPAC. Inner Product is Anti-Commutative.

Suppose that \(\vect{u}\) and \(\vect{v}\) are vectors in \(\complex{m}\text{.}\) Then \(\innerproduct{\vect{u}}{\vect{v}}=\conjugate{\innerproduct{\vect{v}}{\vect{u}}}\text{.}\)

Proof.

\begin{align*} \innerproduct{\vect{u}}{\vect{v}} &=\sum_{i=1}^{m}\conjugate{\vectorentry{\vect{u}}{i}}\vectorentry{\vect{v}}{i}&& \knowl{./knowl/xref/definition-IP.html}{\text{Definition IP}}\\ &=\sum_{i=1}^{m}\conjugate{\vectorentry{\vect{u}}{i}}\,\conjugate{\conjugate{\vectorentry{\vect{v}}{i}}}&& \knowl{./knowl/xref/theorem-CCT.html}{\text{Theorem CCT}}\\ &=\sum_{i=1}^{m}\conjugate{\vectorentry{\vect{u}}{i}\conjugate{\vectorentry{\vect{v}}{i}}}&& \knowl{./knowl/xref/theorem-CCRM.html}{\text{Theorem CCRM}}\\ &=\conjugate{\left(\sum_{i=1}^{m}\vectorentry{\vect{u}}{i}\conjugate{\vectorentry{\vect{v}}{i}}\right)}&& \knowl{./knowl/xref/theorem-CCRA.html}{\text{Theorem CCRA}}\\ &=\conjugate{\left(\sum_{i=1}^{m}\conjugate{\vectorentry{\vect{v}}{i}}\vectorentry{\vect{u}}{i}\right)}&& \knowl{./knowl/xref/property-CMCN.html}{\text{Property CMCN}}\\ &=\conjugate{\innerproduct{\vect{v}}{\vect{u}}}&& \knowl{./knowl/xref/definition-IP.html}{\text{Definition IP}} \end{align*}

Subsection N Norm

When treating linear algebra in a more geometric fashion, the length of a vector occurs naturally, and is what you would expect from its name. For vectors with complex entries, we will define a similar function. Recall that if \(c\) is a complex number, then \(\modulus{c}\) denotes its modulus (Definition MCN).

Definition NV. Norm of a Vector.

The norm of the vector \(\vect{u}\) is the scalar quantity in \(\complexes\)
\begin{equation*} \norm{\vect{u}}= \sqrt{ \modulus{\vectorentry{\vect{u}}{1}}^2+ \modulus{\vectorentry{\vect{u}}{2}}^2+ \modulus{\vectorentry{\vect{u}}{3}}^2+ \cdots+ \modulus{\vectorentry{\vect{u}}{m}}^2 } = \sqrt{\sum_{i=1}^{m}\modulus{\vectorentry{\vect{u}}{i}}^2}\text{.} \end{equation*}
Computing a norm is also easy to do.

Example CNSV. Computing the norm of some vectors.

The norm of
\begin{equation*} \vect{u}=\colvector{3+2i\\1-6i\\2+4i\\2+i} \end{equation*}
is
\begin{align*} \norm{\vect{u}}&= \sqrt{\modulus{3+2i}^2+\modulus{1-6i}^2+\modulus{2+4i}^2+\modulus{2+i}^2}\\ &=\sqrt{13+37+20+5}=\sqrt{75}=5\sqrt{3}\text{.} \end{align*}
The norm of
\begin{equation*} \vect{v}=\colvector{3\\-1\\2\\4\\-3} \end{equation*}
is
\begin{equation*} \norm{\vect{v}}= \sqrt{\modulus{3}^2+\modulus{-1}^2+\modulus{2}^2+\modulus{4}^2+\modulus{-3}^2} =\sqrt{3^2+1^2+2^2+4^2+3^2}=\sqrt{39}\text{.} \end{equation*}
Notice how the norm of a vector with real number entries is just the length of the vector. Inner products and norms are related by the following theorem.

Theorem IPN. Inner Products and Norms.

Suppose that \(\vect{u}\) is a vector in \(\complex{m}\text{.}\) Then \(\norm{\vect{u}}^2=\innerproduct{\vect{u}}{\vect{u}}\text{.}\)

Proof.

\begin{align*} \norm{\vect{u}}^2&=\left(\sqrt{\sum_{i=1}^{m}\modulus{\vectorentry{\vect{u}}{i}}^2}\right)^2&& \knowl{./knowl/xref/definition-NV.html}{\text{Definition NV}}\\ &=\sum_{i=1}^{m}\modulus{\vectorentry{\vect{u}}{i}}^2&&\text{Inverse functions}\\ &=\sum_{i=1}^{m}\conjugate{\vectorentry{\vect{u}}{i}}\vectorentry{\vect{u}}{i}&& \knowl{./knowl/xref/definition-MCN.html}{\text{Definition MCN}}\\ &=\innerproduct{\vect{u}}{\vect{u}}&& \knowl{./knowl/xref/definition-IP.html}{\text{Definition IP}} \end{align*}
When our vectors have entries only from the real numbers Theorem IPN says that the dot product of a vector with itself is equal to the length of the vector squared.

Theorem PIP. Positive Inner Products.

Suppose that \(\vect{u}\) is a vector in \(\complex{m}\text{.}\) Then \(\innerproduct{\vect{u}}{\vect{u}}\geq 0\) with equality if and only if \(\vect{u}=\zerovector\text{.}\)

Proof.

From the proof of Theorem IPN we see that
\begin{equation*} \innerproduct{\vect{u}}{\vect{u}} = \modulus{\vectorentry{\vect{u}}{1}}^2+ \modulus{\vectorentry{\vect{u}}{2}}^2+ \modulus{\vectorentry{\vect{u}}{3}}^2+ \cdots+ \modulus{\vectorentry{\vect{u}}{m}}^2\text{.} \end{equation*}
Since each modulus is squared, every term is nonnegative, and the sum must also be nonnegative. (Notice that in general the inner product is a complex number and cannot be compared with zero, but in the special case of \(\innerproduct{\vect{u}}{\vect{u}}\) the result is a real number.)
The phrase, “with equality if and only if” means that we want to show that the statement \(\innerproduct{\vect{u}}{\vect{u}}= 0\) (i.e. with equality) is equivalent (“if and only if”) to the statement \(\vect{u}=\zerovector\text{.}\)
If \(\vect{u}=\zerovector\text{,}\) then it is a straightforward computation to see that \(\innerproduct{\vect{u}}{\vect{u}}= 0\text{.}\) In the other direction, assume that \(\innerproduct{\vect{u}}{\vect{u}}= 0\text{.}\) As before, \(\innerproduct{\vect{u}}{\vect{u}}\) is a sum of moduli. So we have
\begin{equation*} 0=\innerproduct{\vect{u}}{\vect{u}}= \modulus{\vectorentry{\vect{u}}{1}}^2+ \modulus{\vectorentry{\vect{u}}{2}}^2+ \modulus{\vectorentry{\vect{u}}{3}}^2+ \cdots+ \modulus{\vectorentry{\vect{u}}{m}}^2 \end{equation*}
Now we have a sum of squares equaling zero, so each term must be zero. Then by similar logic, \(\modulus{\vectorentry{\vect{u}}{i}}=0\) will imply that \(\vectorentry{\vect{u}}{i}=0\text{,}\) since \(0+0i\) is the only complex number with zero modulus. Thus every entry of \(\vect{u}\) is zero and so \(\vect{u}=\zerovector\text{,}\) as desired.
Notice that Theorem PIP contains three implications
\begin{align*} \vect{u}\in\complex{m}&\Rightarrow\innerproduct{\vect{u}}{\vect{u}}\geq 0 & \vect{u}=\zerovector&\Rightarrow\innerproduct{\vect{u}}{\vect{u}}=0 & \innerproduct{\vect{u}}{\vect{u}}=0&\Rightarrow\vect{u}=\zerovector\text{.} \end{align*}
The results contained in Theorem PIP are summarized by saying “the inner product is positive definite.”

Sage EVIC. Exact Versus Inexact Computations.

We are now at a crossroads in our use of Sage. So far our computations have involved rational numbers: fractions of two integers. Sage is able to work with integers of seemingly unlimited size, and then can work with rational numbers exactly. So all of our computations have been exactly correct so far. In practice, many computations, especially those that originate with data, are not so precise. Then we represent real numbers by floating point numbers. Since the real numbers are infinite, finite computers must fake it with an extremely large, but still finite, collection of numbers. The price we pay is that some computations will be just slightly imprecise when there is no number available that represents the exact answer.
You should now appreciate two problems that occur. If we were to row-reduce a matrix with floating point numbers, there are potentially many computations, and if a small amount of imprecision arises in each one, these errors can accumulate and lead to wildly incorrect answers. When we row-reduce a matrix, whether or not an entry is zero is critically important in the decisions we make about which row operation to perform. If we have an extremely small number, such as \(10^{-16}\text{,}\) how can we be sure if it is zero or not?
Why discuss this now? What is \(\alpha=\sqrt{7/3}\text{?}\) Hard to say exactly, but it is definitely not a rational number. Norms of vectors will feature prominently in all our discussions about orthogonal vectors, so we now have to recognize the need to work with square roots properly. We have two strategies in Sage.
The number system QQbar, also known as the field of algebraic numbers, is a truly amazing feature of Sage. It contains the rational numbers, plus every root of every polynomial with coefficients that are rational numbers. For example, notice that \(\alpha\) above is one solution to the polynomial equation \(3x^2-7=0\) and thus is a number in QQbar, so Sage can work with it exactly. These numbers are called “algebraic numbers” and you can recognize them since they print with a question mark near the end to remind you that when printed as a decimal they are approximations of numbers that Sage carries internally as exact quantities. For example \(\alpha\) can be created with QQbar(sqrt(7/3)) and will print as 1.527525231651947?. Notice that complex numbers begin with the introduction of the imaginary number \(i\text{,}\) which is a root of the polynomial equation \(x^2+1=0\text{,}\) so the field of algebraic numbers contains many complex numbers. The downside of QQbar is that computations are slow (relatively speaking), so this number system is most useful for examples and demonstrations.
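As a rough sketch of the kind of compute cell intended here (the printed form of algebraic numbers can vary slightly between Sage versions), we can build \(\alpha\) in QQbar and confirm that the arithmetic is exact.
alpha = QQbar(sqrt(7/3))     # an exact algebraic number, prints with a trailing question mark
alpha^2 == 7/3               # True: squaring recovers 7/3 exactly
3*alpha^2 - 7 == 0           # True: alpha is a root of 3x^2 - 7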
The other strategy is to work strictly with approximate numbers, cognizant of the potential for inaccuracies. Sage has two such number systems: RDF and CDF, which consist of double precision floating point numbers, first limited to just the reals, then expanded to the complexes. Double precision refers to the use of 64 bits to store the sign, mantissa and exponent in the representation of a real number. This gives 53 bits of precision. Do not confuse these fields with RR and CC, which are similar in appearance but very different in implementation. Sage has implementations of several computations designed exclusively for RDF and CDF, such as the norm. And they are very, very fast. But some computations, like echelon form, can be wildly unreliable with these approximate numbers. We will have more to say about this as we go. In practice, you can use CDF, since RDF is a subset and only different in very limited cases.
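A small, hypothetical illustration of the trade-off: the same kind of computation done with double precision numbers and with rational numbers.
RDF(0.1) + RDF(0.2) == RDF(0.3)    # False: double precision roundoff in the last bits
QQ(1/10) + QQ(2/10) == QQ(3/10)    # True: rational arithmetic is exact
CDF(2 + 3*I).abs()                 # roughly 3.6055512754..., fast but only approximate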
In summary, QQbar is an extension of QQ which allows exact computations, but can be slow for large examples. RDF and CDF are fast, with special algorithms to control much of the imprecision in some, but not all, computations. So we need to be vigilant and skeptical when we work with these approximate numbers. We will use both strategies, as appropriate.

Sage CNIP. Conjugates, Norms and Inner Products.

Conjugates, of complex numbers and of vectors, are straightforward, in QQbar or in CDF.
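A sketch of such a cell, assuming the vectors are entered over QQbar or CDF; the conjugate is taken entry by entry.
u = vector(QQbar, [2+3*I, 5+2*I, -3+I])
u.conjugate()                      # the vector with entries 2-3i, 5-2i, -3-i, computed exactly
v = vector(CDF, [2+3*I, 5+2*I, -3+I])
v.conjugate()                      # the same conjugate, as double precision complex numbers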
The term inner product means slightly different things to different people. For some, it is the dot product that you may have seen in a calculus or physics course. Our inner product could be called the Hermitian inner product to emphasize the use of vectors over the complex numbers and conjugating some of the entries. So Sage has a .dot_product(), .inner_product(), and .hermitian_inner_product() — we want to use the last one.
From now on, when we mention an inner product in the context of using Sage, we will mean .hermitian_inner_product(). We will redo the first part of Example CSIP. Notice that the syntax is a bit asymmetric.
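A sketch of the computation, assuming (as in Sage's documentation) that the conjugation is applied to the entries of the vector whose method is called, which is the source of the asymmetry.
u = vector(QQbar, [2+3*I, 5+2*I, -3+I])
v = vector(QQbar, [1+2*I, -4+5*I, 5*I])
u.hermitian_inner_product(v)       # should be 3 + 19*I, as in Example CSIP
v.hermitian_inner_product(u)       # should be 3 - 19*I, the conjugate (Theorem IPAC)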
Norms are as easy as conjugates. Easier maybe. It might be useful to realize that Sage uses entirely distinct code to compute an exact norm over QQbar versus an approximate norm over CDF, though that is totally transparent as you issue commands. Here is Example CNSV reprised.
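A hypothetical version of the missing cell, producing the three values discussed next: an exact algebraic norm, a double precision norm, and a 30-digit decimal approximation.
u = vector(QQbar, [3+2*I, 1-6*I, 2+4*I, 2+I])
u.norm()                                 # exact, the algebraic number 5*sqrt(3)
w = vector(CDF, [3+2*I, 1-6*I, 2+4*I, 2+I])
w.norm()                                 # approximately 8.6602540378
numerical_approx(5*sqrt(3), digits=30)   # a 30-digit approximation of the answer in the text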
We have three different numerical approximations, the latter 30-digit number being an approximation to the answer in the text. But there is no inconsistency between them. The first, an algebraic number, is represented internally as \(5*a\) where \(a\) is a root of the polynomial equation \(x^2-3=0\text{,}\) in other words it is \(5\sqrt{3}\text{.}\) The CDF value prints with a few digits less than what is carried internally. Notice that our different definitions of the inner product make no difference in the computation of a norm.
One warning now that we are working with complex numbers. It is easy to “clobber” the symbol I used for the imaginary number \(i\text{.}\) In other words, Sage will allow you to assign it to something else, rendering it useless. An identity matrix is a likely reassignment. If you run the next compute cell, be sure to evaluate the compute cell afterward to restore I to its usual role.
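The two cells described above might look like the following sketch; restore() is assumed here as the mechanism that returns I to its predefined meaning.
I = identity_matrix(2)       # clobbers the imaginary unit
2 + 3*I                      # now a 2 x 2 matrix, not a complex number
restore('I')                 # put the predefined I back
2 + 3*I                      # a complex number once again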
We will finish with a verification of Theorem IPN. To test equality it is best if we work with entries from QQbar.
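A minimal sketch of that verification, done exactly in QQbar so the equality test is meaningful.
u = vector(QQbar, [3+2*I, 1-6*I, 2+4*I, 2+I])
u.norm()^2 == u.hermitian_inner_product(u)    # should be True, exactly as Theorem IPN predicts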

Subsection OV Orthogonal Vectors

Orthogonal is a generalization of perpendicular. You may have used mutually perpendicular vectors in a physics class, or you may recall from a calculus class that perpendicular vectors have a zero dot product. We will now extend these ideas into the realm of higher dimensions and complex scalars.

Definition OV. Orthogonal Vectors.

A pair of vectors, \(\vect{u}\) and \(\vect{v}\text{,}\) from \(\complex{m}\) are orthogonal if their inner product is zero, that is, \(\innerproduct{\vect{u}}{\vect{v}}=0\text{.}\)

Example TOV. Two orthogonal vectors.

The vectors
\begin{align*} \vect{u}&=\colvector{2 + 3i\\4 - 2i\\1 + i\\1 + i} & \vect{v}&=\colvector{1 - i\\2 + 3i\\4 - 6i\\1} \end{align*}
are orthogonal since
\begin{align*} \innerproduct{\vect{u}}{\vect{v}} &=(2-3i)(1-i)+(4+2i)(2+3i)+(1-i)(4-6i)+(1-i)(1)\\ &=(-1-5i)+(2+16i)+(-2-10i)+(1-i)\\ &=0+0i\text{.} \end{align*}
We extend this definition to whole sets by requiring the vectors to be pairwise orthogonal. Although we use the same word in both situations, careful thought about which objects you are working with will eliminate any confusion.

Definition OSV. Orthogonal Set of Vectors.

Suppose that \(S=\set{\vectorlist{u}{n}}\) is a set of vectors from \(\complex{m}\text{.}\) Then \(S\) is an orthogonal set if every pair of different vectors from \(S\) is orthogonal, that is \(\innerproduct{\vect{u}_i}{\vect{u}_j}=0\) whenever \(i\neq j\text{.}\)
We now define the prototypical orthogonal set, which we will reference repeatedly.

Definition SUV. Standard Unit Vectors.

Let \(\vect{e}_j\in\complex{m}\text{,}\) \(1\leq j\leq m\) denote the column vectors defined by
\begin{equation*} \vectorentry{\vect{e}_j}{i}= \begin{cases} 1 & \text{if }i=j\\ 0 & \text{if }i\neq j \end{cases}\text{.} \end{equation*}
Then the set
\begin{align*} \set{\vectorlist{e}{m}}&=\setparts{\vect{e}_j}{1\leq j\leq m} \end{align*}
is the set of standard unit vectors in \(\complex{m}\text{.}\)
Notice that \(\vect{e}_j\) is identical to column \(j\) of the \(m\times m\) identity matrix \(I_m\) (Definition IM) and is a pivot column for \(I_m\text{,}\) since the identity matrix is in reduced row-echelon form. These observations will often be useful. We will reserve the notation \(\vect{e}_i\) for these vectors. It is not hard to see that the set of standard unit vectors is an orthogonal set.

Example SUVOS. Standard Unit Vectors are an Orthogonal Set.

Compute the inner product of two distinct vectors from the set of standard unit vectors (Definition SUV), say \(\vect{e}_i\text{,}\) \(\vect{e}_j\text{,}\) where \(i\neq j\)
\begin{align*} \innerproduct{\vect{e}_i}{\vect{e}_j}&= \conjugate{0}0+ \conjugate{0}0+\cdots+ \conjugate{1}0+\cdots+ \conjugate{0}0+\cdots+ \conjugate{0}1+\cdots+ \conjugate{0}0+ \conjugate{0}0\\ &=0(0)+0(0)+\cdots+1(0)+\cdots+0(1)+\cdots+0(0)+0(0)\\ &=0\text{.} \end{align*}
So the set \(\set{\vectorlist{e}{m}}\) is an orthogonal set.

Example AOS. An orthogonal set.

The set
\begin{equation*} \set{\vect{x}_1,\,\vect{x}_2,\,\vect{x}_3,\,\vect{x}_4}= \set{ \colvector{1+i\\1\\1-i\\i},\, \colvector{1+5i\\6+5i\\-7-i\\1-6i},\, \colvector{-7+34i\\-8-23i\\-10+22i\\30+13i},\, \colvector{-2-4i\\6+i\\4+3i\\6-i} } \end{equation*}
is an orthogonal set.
Since the inner product is anti-commutative (Theorem IPAC) we can test pairs of different vectors in any order. If the result is zero, then it will also be zero if the inner product is computed in the opposite order. This means there are six different pairs of vectors to use in an inner product computation. We will do two and you can practice your inner products on the other four.
\begin{align*} \innerproduct{\vect{x}_1}{\vect{x}_3}&= (1-i)(-7+34i)+(1)(-8-23i)+(1+i)(-10+22i)+(-i)(30+13i)\\ &=(27+41i)+(-8-23i)+(-32+12i)+(13-30i)\\ &=0+0i\\ \end{align*}
and
\begin{align*} \innerproduct{\vect{x}_2}{\vect{x}_4}&= (1-5i)(-2-4i)+(6-5i)(6+i)+(-7+i)(4+3i)+(1+6i)(6-i)\\ &=(-22+6i)+(41-24i)+(-31-17i)+(12+35i)\\ &=0+0i\text{.} \end{align*}
So far, this section has seen lots of definitions, and lots of theorems establishing unsurprising consequences of those definitions. But here is our first theorem that suggests that inner products and orthogonal vectors have some utility. It is also one of our first illustrations of how to arrive at linear independence as the conclusion of a theorem.

Theorem OSLI. Orthogonal Sets are Linearly Independent.

Suppose that \(S=\set{\vectorlist{u}{n}}\) is an orthogonal set of nonzero vectors from \(\complex{m}\text{.}\) Then \(S\) is linearly independent.

Proof.

Let \(S=\set{\vectorlist{u}{n}}\) be an orthogonal set of nonzero vectors. To prove the linear independence of \(S\text{,}\) we can appeal to the definition (Definition LICV) and begin with an arbitrary relation of linear dependence (Definition RLDCV)
\begin{equation*} \lincombo{\alpha}{u}{n}=\zerovector\text{.} \end{equation*}
Then, for every \(1\leq i\leq n\text{,}\) we have
\begin{align*} &\alpha_i\innerproduct{\vect{u}_i}{\vect{u}_i}\\ &\quad\quad=\alpha_1(0)+\alpha_2(0)+\cdots+\alpha_i\innerproduct{\vect{u}_i}{\vect{u}_i}+\cdots+\alpha_n(0)&& \knowl{./knowl/xref/property-ZCN.html}{\text{Property ZCN}}\\ &\quad\quad= \alpha_1\innerproduct{\vect{u}_i}{\vect{u}_1}+ \cdots+ \alpha_i\innerproduct{\vect{u}_i}{\vect{u}_i}+ \cdots+ \alpha_n\innerproduct{\vect{u}_i}{\vect{u}_n}&& \knowl{./knowl/xref/definition-OSV.html}{\text{Definition OSV}}\\ &\quad\quad= \innerproduct{\vect{u}_i}{\alpha_1\vect{u}_1}+ \innerproduct{\vect{u}_i}{\alpha_2\vect{u}_2}+ \cdots+ \innerproduct{\vect{u}_i}{\alpha_n\vect{u}_n}&& \knowl{./knowl/xref/theorem-IPSM.html}{\text{Theorem IPSM}}\\ &\quad\quad=\innerproduct{\vect{u}_i}{\lincombo{\alpha}{u}{n}}&& \knowl{./knowl/xref/theorem-IPVA.html}{\text{Theorem IPVA}}\\ &\quad\quad=\innerproduct{\vect{u}_i}{\zerovector}&& \knowl{./knowl/xref/definition-RLDCV.html}{\text{Definition RLDCV}}\\ &\quad\quad=0&& \knowl{./knowl/xref/definition-IP.html}{\text{Definition IP}}\text{.} \end{align*}
Because \(\vect{u}_i\) was assumed to be nonzero, Theorem PIP says \(\innerproduct{\vect{u}_i}{\vect{u}_i}\) is nonzero and thus \(\alpha_i\) must be zero. So we conclude that \(\alpha_i=0\) for all \(1\leq i\leq n\) in any relation of linear dependence on \(S\text{.}\) But this says that \(S\) is a linearly independent set since the only way to form a relation of linear dependence is the trivial way (Definition LICV). Boom!

Subsection GSP Gram-Schmidt Procedure

The Gram-Schmidt Procedure is really a theorem. It says that if we begin with a linearly independent set of \(p\) vectors, \(S\text{,}\) then we can do a number of calculations with these vectors and produce an orthogonal set of \(p\) vectors, \(T\text{,}\) so that \(\spn{S}=\spn{T}\text{.}\) Given the large number of computations involved, it is indeed a procedure, and it is best employed on a computer. However, it also has value in proofs where we may on occasion wish to replace a linearly independent set by an orthogonal set.
This is our first occasion to use the technique of mathematical induction for a proof, a technique we will see again several times, especially in Chapter D. So study the simple example described in Proof Technique I first.

Theorem GSP. Gram-Schmidt Procedure.

Suppose that \(S=\set{\vectorlist{v}{p}}\) is a linearly independent set of vectors in \(\complex{m}\text{.}\) Define the vectors \(\vect{u}_i\text{,}\) \(1\leq i\leq p\text{,}\) by
\begin{equation*} \vect{u}_i=\vect{v}_i -\frac{\innerproduct{\vect{u}_1}{\vect{v}_i}}{\innerproduct{\vect{u}_1}{\vect{u}_1}}\vect{u}_1 -\frac{\innerproduct{\vect{u}_2}{\vect{v}_i}}{\innerproduct{\vect{u}_2}{\vect{u}_2}}\vect{u}_2 -\cdots -\frac{\innerproduct{\vect{u}_{i-1}}{\vect{v}_i}}{\innerproduct{\vect{u}_{i-1}}{\vect{u}_{i-1}}}\vect{u}_{i-1}\text{.} \end{equation*}
Let \(T=\set{\vectorlist{u}{p}}\text{.}\) Then \(T\) is an orthogonal set of nonzero vectors, and \(\spn{T}=\spn{S}\text{.}\)

Proof.

We will prove the result by using induction on \(p\) (Proof Technique I). To begin, we prove that \(T\) has the desired properties when \(p=1\text{.}\) In this case \(\vect{u}_1=\vect{v}_1\) and \(T=\set{\vect{u}_1}=\set{\vect{v}_1}=S\text{.}\) Because \(S\) and \(T\) are equal, \(\spn{S}=\spn{T}\text{.}\) Equally trivial, \(T\) is an orthogonal set. If \(\vect{u}_1=\zerovector\text{,}\) then \(S\) would be a linearly dependent set, a contradiction, so \(\vect{u}_1\) is nonzero.
Suppose that the theorem is true for any set of \(p-1\) linearly independent vectors. Let \(S=\set{\vectorlist{v}{p}}\) be a linearly independent set of \(p\) vectors. Then \(S^\prime=\set{\vectorlist{v}{p-1}}\) is also linearly independent. So we can apply the theorem to \(S^\prime\) and construct the vectors \(T^\prime=\set{\vectorlist{u}{p-1}}\text{.}\) \(T^\prime\) is therefore an orthogonal set of nonzero vectors and \(\spn{S^\prime}=\spn{T^\prime}\text{.}\) Define
\begin{equation*} \vect{u}_p=\vect{v}_p -\frac{\innerproduct{\vect{u}_1}{\vect{v}_p}}{\innerproduct{\vect{u}_1}{\vect{u}_1}}\vect{u}_1 -\frac{\innerproduct{\vect{u}_2}{\vect{v}_p}}{\innerproduct{\vect{u}_2}{\vect{u}_2}}\vect{u}_2 -\frac{\innerproduct{\vect{u}_3}{\vect{v}_p}}{\innerproduct{\vect{u}_3}{\vect{u}_3}}\vect{u}_3 -\cdots -\frac{\innerproduct{\vect{u}_{p-1}}{\vect{v}_p}}{\innerproduct{\vect{u}_{p-1}}{\vect{u}_{p-1}}}\vect{u}_{p-1} \end{equation*}
and let \(T=T^\prime\cup\set{\vect{u}_p}\text{.}\) We now need to show that \(T\) has several properties by building on what we know about \(T^\prime\text{.}\) But first notice that the above equation has no problems with the denominators (\(\innerproduct{\vect{u}_i}{\vect{u}_i}\)) being zero, since the \(\vect{u}_i\) are from \(T^\prime\text{,}\) which is composed of nonzero vectors.
We show that \(\spn{T}=\spn{S}\text{,}\) by first establishing that \(\spn{T}\subseteq\spn{S}\text{.}\) Suppose \(\vect{x}\in\spn{T}\text{,}\) so
\begin{equation*} \vect{x}=\lincombo{a}{u}{p}\text{.} \end{equation*}
The term \(a_p\vect{u}_p\) is a linear combination of vectors from \(T^\prime\) and the vector \(\vect{v}_p\text{,}\) while the remaining terms are a linear combination of vectors from \(T^\prime\text{.}\) Since \(\spn{T^\prime}=\spn{S^\prime}\text{,}\) any term that is a multiple of a vector from \(T^\prime\) can be rewritten as a linear combination of vectors from \(S^\prime\text{.}\) The remaining term \(a_p\vect{v}_p\) is a multiple of a vector in \(S\text{.}\) So we see that \(\vect{x}\) can be rewritten as a linear combination of vectors from \(S\text{,}\) i.e. \(\vect{x}\in\spn{S}\text{.}\)
To show that \(\spn{S}\subseteq\spn{T}\text{,}\) begin with \(\vect{y}\in\spn{S}\text{,}\) so
\begin{equation*} \vect{y}=\lincombo{a}{v}{p}\text{.} \end{equation*}
Rearrange our defining equation for \(\vect{u}_p\) by solving for \(\vect{v}_p\text{.}\) Then the term \(a_p\vect{v}_p\) is a multiple of a linear combination of elements of \(T\text{.}\) The remaining terms are a linear combination of \(\vectorlist{v}{p-1}\text{,}\) hence an element of \(\spn{S^\prime}=\spn{T^\prime}\text{.}\) Thus these remaining terms can be written as a linear combination of the vectors in \(T^\prime\text{.}\) So \(\vect{y}\) is a linear combination of vectors from \(T\text{,}\) i.e. \(\vect{y}\in\spn{T}\text{.}\)
The elements of \(T^\prime\) are nonzero, but what about \(\vect{u}_p\text{?}\) Suppose to the contrary that \(\vect{u}_p=\zerovector\)
\begin{align*} \zerovector&=\vect{u}_p=\vect{v}_p -\frac{\innerproduct{\vect{u}_1}{\vect{v}_p}}{\innerproduct{\vect{u}_1}{\vect{u}_1}}\vect{u}_1 -\frac{\innerproduct{\vect{u}_2}{\vect{v}_p}}{\innerproduct{\vect{u}_2}{\vect{u}_2}}\vect{u}_2 -\frac{\innerproduct{\vect{u}_3}{\vect{v}_p}}{\innerproduct{\vect{u}_3}{\vect{u}_3}}\vect{u}_3 -\cdots -\frac{\innerproduct{\vect{u}_{p-1}}{\vect{v}_p}}{\innerproduct{\vect{u}_{p-1}}{\vect{u}_{p-1}}}\vect{u}_{p-1}\\ &\vect{v}_p= \frac{\innerproduct{\vect{u}_1}{\vect{v}_p}}{\innerproduct{\vect{u}_1}{\vect{u}_1}}\vect{u}_1 +\frac{\innerproduct{\vect{u}_2}{\vect{v}_p}}{\innerproduct{\vect{u}_2}{\vect{u}_2}}\vect{u}_2 +\frac{\innerproduct{\vect{u}_3}{\vect{v}_p}}{\innerproduct{\vect{u}_3}{\vect{u}_3}}\vect{u}_3 +\cdots +\frac{\innerproduct{\vect{u}_{p-1}}{\vect{v}_p}}{\innerproduct{\vect{u}_{p-1}}{\vect{u}_{p-1}}}\vect{u}_{p-1}\text{.} \end{align*}
Since \(\spn{S^\prime}=\spn{T^\prime}\) we can write the vectors \(\vectorlist{u}{p-1}\) on the right side of this equation in terms of the vectors \(\vectorlist{v}{p-1}\) and we then have the vector \(\vect{v}_p\) expressed as a linear combination of the other \(p-1\) vectors in \(S\text{,}\) implying that \(S\) is a linearly dependent set (Theorem DLDS), contrary to our lone hypothesis about \(S\text{.}\)
Finally, it is a simple matter to establish that \(T\) is an orthogonal set, though it will not appear so simple looking. Think about your objects as you work through the following — what is a vector and what is a scalar. Since \(T^\prime\) is an orthogonal set by induction, most pairs of elements in \(T\) are already known to be orthogonal. We just need to test “new” inner products, between \(\vect{u}_p\) and \(\vect{u}_i\text{,}\) for \(1\leq i\leq p-1\text{.}\) Here we go, using summation notation
\begin{align*} \innerproduct{\vect{u}_i}{\vect{u}_p}&= \innerproduct{\vect{u}_i}{ \vect{v}_p-\sum_{k=1}^{p-1}\frac{\innerproduct{\vect{u}_k}{\vect{v}_p}}{\innerproduct{\vect{u}_k}{\vect{u}_k}}\vect{u}_k }\\ &= \innerproduct{\vect{u}_i}{\vect{v}_p} - \innerproduct{\vect{u}_i}{ \sum_{k=1}^{p-1}\frac{\innerproduct{\vect{u}_k}{\vect{v}_p}}{\innerproduct{\vect{u}_k}{\vect{u}_k}}\vect{u}_k }&& \knowl{./knowl/xref/theorem-IPVA.html}{\text{Theorem IPVA}}\\ &= \innerproduct{\vect{u}_i}{\vect{v}_p} - \sum_{k=1}^{p-1}\innerproduct{\vect{u}_i}{ \frac{\innerproduct{\vect{u}_k}{\vect{v}_p}}{\innerproduct{\vect{u}_k}{\vect{u}_k}}\vect{u}_k }&& \knowl{./knowl/xref/theorem-IPVA.html}{\text{Theorem IPVA}}\\ &= \innerproduct{\vect{u}_i}{\vect{v}_p} - \sum_{k=1}^{p-1}\frac{\innerproduct{\vect{u}_k}{\vect{v}_p}}{\innerproduct{\vect{u}_k}{\vect{u}_k}}\innerproduct{\vect{u}_i}{\vect{u}_k}&& \knowl{./knowl/xref/theorem-IPSM.html}{\text{Theorem IPSM}}\\ &= \innerproduct{\vect{u}_i}{\vect{v}_p} - \frac{\innerproduct{\vect{u}_i}{\vect{v}_p}}{\innerproduct{\vect{u}_i}{\vect{u}_i}}\innerproduct{\vect{u}_i}{\vect{u}_i} - \sum_{k\neq i}\frac{\innerproduct{\vect{u}_k}{\vect{v}_p}}{\innerproduct{\vect{u}_k}{\vect{u}_k}}(0)&& \text{Induction Hypothesis}\\ &= \innerproduct{\vect{u}_i}{\vect{v}_p} - \innerproduct{\vect{u}_i}{\vect{v}_p} - \sum_{k\neq i}0\\ &=0\text{.} \end{align*}

Example GSTV. Gram-Schmidt of three vectors.

We will illustrate the Gram-Schmidt process with three vectors. Begin with the linearly independent (check this!) set
\begin{equation*} S=\set{\vect{v}_1,\,\vect{v}_2,\,\vect{v}_3}=\set{ \colvector{1\\1+i\\1},\, \colvector{-i\\1\\1+i},\, \colvector{0\\i\\i} }\text{.} \end{equation*}
Then
\begin{align*} \vect{u}_1&=\vect{v}_1=\colvector{1\\1+i\\1} \qquad \vect{u}_2=\vect{v}_2 -\frac{\innerproduct{\vect{u}_1}{\vect{v}_2}}{\innerproduct{\vect{u}_1}{\vect{u}_1}}\vect{u}_1 =\frac{1}{4}\colvector{-2-3i\\1-i\\2+5i}\\ \vect{u}_3&=\vect{v}_3 -\frac{\innerproduct{\vect{u}_1}{\vect{v}_3}}{\innerproduct{\vect{u}_1}{\vect{u}_1}}\vect{u}_1 -\frac{\innerproduct{\vect{u}_2}{\vect{v}_3}}{\innerproduct{\vect{u}_2}{\vect{u}_2}}\vect{u}_2 =\frac{1}{11}\colvector{-3-i\\1+3i\\-1-i} \end{align*}
and
\begin{equation*} T=\set{\vect{u}_1,\,\vect{u}_2,\,\vect{u}_3} =\set{ \colvector{1\\1+i\\1},\, \frac{1}{4}\colvector{-2-3i\\1-i\\2+5i},\, \frac{1}{11}\colvector{-3-i\\1+3i\\-1-i} } \end{equation*}
is an orthogonal set (which you can check) of nonzero vectors and \(\spn{T}=\spn{S}\) (all by Theorem GSP). Of course, as a by-product of orthogonality, the set \(T\) is also linearly independent (Theorem OSLI).
One final definition related to orthogonal vectors.

Definition ONS. OrthoNormal Set.

Suppose \(S=\set{\vectorlist{u}{n}}\) is an orthogonal set of vectors such that \(\norm{\vect{u}_i}=1\) for all \(1\leq i\leq n\text{.}\) Then \(S\) is an orthonormal set of vectors.
Once you have an orthogonal set, it is easy to convert it to an orthonormal set — multiply each vector by the reciprocal of its norm, and the resulting vector will have norm 1. This scaling of each vector will not affect the orthogonality properties (apply Theorem IPSM).

Example ONTV. Orthonormal set, three vectors.

The set
\begin{equation*} T=\set{\vect{u}_1,\,\vect{u}_2,\,\vect{u}_3} =\set{ \colvector{1\\1+i\\1},\, \frac{1}{4}\colvector{-2-3i\\1-i\\2+5i},\, \frac{1}{11}\colvector{-3-i\\1+3i\\-1-i} } \end{equation*}
from Example GSTV is an orthogonal set.
We compute the norm of each vector
\begin{align*} \norm{\vect{u}_1}=2&& \norm{\vect{u}_2}=\frac{1}{2}\sqrt{11}&& \norm{\vect{u}_3}=\frac{\sqrt{2}}{\sqrt{11}} \end{align*}
Converting each vector to a norm of \(1\) yields an orthonormal set
\begin{align*} \vect{w}_1&=\frac{1}{2}\colvector{1\\1+i\\1} \qquad \vect{w}_2=\frac{1}{\frac{1}{2}\sqrt{11}}\frac{1}{4}\colvector{-2-3i\\1-i\\2+5i}=\frac{1}{2\sqrt{11}}\colvector{-2-3i\\1-i\\2+5i}\\ \vect{w}_3&=\frac{1}{\frac{\sqrt{2}}{\sqrt{11}}}\frac{1}{11}\colvector{-3-i\\1+3i\\-1-i}=\frac{1}{\sqrt{22}}\colvector{-3-i\\1+3i\\-1-i}\text{.} \end{align*}

Example ONFV. Orthonormal set, four vectors.

As an exercise convert the linearly independent set
\begin{equation*} S=\set{ \colvector{1+i\\1\\1-i\\i},\, \colvector{i\\1+i\\-1\\-i},\, \colvector{i\\-i\\ -1+i\\1},\, \colvector{-1-i\\i\\1\\-1} } \end{equation*}
to an orthogonal set via the Gram-Schmidt Process (Theorem GSP) and then scale the vectors to norm 1 to create an orthonormal set. You should get the same set you would if you scaled the orthogonal set of Example AOS to become an orthonormal set.
We will see orthonormal sets again in Subsection MINM.UM. They are intimately related to unitary matrices (Definition UM) through Theorem CUMOS. Some of the utility of orthonormal sets is captured by Theorem COB in Subsection B.OBC. Orthonormal sets appear once again in Section OD where they are key in orthonormal diagonalization.

Sage OGS. Orthogonality and Gram-Schmidt.

It is easy enough to check a pair of vectors for orthogonality (is the inner product zero?). To check that a set is orthogonal, we just need to do this repeatedly. This is a redo of Example AOS.
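A sketch of the check, assuming the four vectors are entered over QQbar so the inner products are exact.
x1 = vector(QQbar, [1+I, 1, 1-I, I])
x2 = vector(QQbar, [1+5*I, 6+5*I, -7-I, 1-6*I])
x3 = vector(QQbar, [-7+34*I, -8-23*I, -10+22*I, 30+13*I])
x4 = vector(QQbar, [-2-4*I, 6+I, 4+3*I, 6-I])
S = [x1, x2, x3, x4]
ips = [S[i].hermitian_inner_product(S[j]) for i in range(3) for j in range(i+1, 4)]
all(ip == 0 for ip in ips)    # should be True: all six distinct pairs are orthogonal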
Notice how the list comprehension computes each pair just once, and never checks the inner product of a vector with itself. If we wanted to check that a set is orthonormal, the “normal” part is less involved. We will check the set above, even though we can clearly see that the four vectors are not even close to being unit vectors. Be sure to run the above definitions of S before running the next compute cell.
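Continuing the sketch with the same list S, the additional check is simply that every norm equals 1.
all(v.norm() == 1 for v in S)    # should be False: orthogonal, but far from unit vectors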
Applying the Gram-Schmidt procedure to a set of vectors is the type of computation that a program like Sage is perfect for. Gram-Schmidt is implemented as a method for matrices, where we interpret the rows of the matrix as the vectors in the original set. The result is two matrices, where the first has rows that are the orthogonal vectors. The second matrix has rows that provide linear combinations of the orthogonal vectors that equal the original vectors. The original vectors do not need to form a linearly independent set, and when the set is linearly dependent, any zero vectors produced are not part of the returned set.
Over CDF the set is automatically orthonormal, and since a different algorithm is used (to help control the imprecisions), the results will look different than what would result from Theorem GSP. We will illustrate with the vectors from Example GSTV.
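A sketch of the CDF computation; the rounding to 5 digits mentioned below is assumed to be done with the matrix round() method.
v1 = vector(CDF, [1, 1+I, 1])
v2 = vector(CDF, [-I, 1, 1+I])
v3 = vector(CDF, [0, I, I])
A = matrix([v1, v2, v3])      # rows are the original vectors of Example GSTV
G, M = A.gram_schmidt()       # over CDF the rows of G come back orthonormal
G.round(5)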
We formed the matrix A with the three vectors as rows, and of the two outputs we are interested in the first one, whose rows form the orthonormal set. We round the numbers to 5 digits, just to make the result fit nicely on your screen. Let us do it again, now exactly over QQbar. We will output the entries of the matrix as a list, working across rows first, so it fits nicely.
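An exact version might look like this sketch, with orthonormal rows requested and the entries flattened by list().
v1 = vector(QQbar, [1, 1+I, 1])
v2 = vector(QQbar, [-I, 1, 1+I])
v3 = vector(QQbar, [0, I, I])
A = matrix([v1, v2, v3])
G, M = A.gram_schmidt(orthonormal=True)   # exact computation in QQbar
G.list()                                  # entries row by row, the vectors of Example ONTV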
Notice that we asked for orthonormal output, so the rows of G are the vectors \(\set{\vect{w}_1,\,\vect{w}_2,\,\vect{w}_3}\) in Example ONTV. Exactly. We can restrict ourselves to QQ and forego the “normality” to obtain just the orthogonal set \(\set{\vect{u}_1,\,\vect{u}_2,\,\vect{u}_3}\) of Example GSTV.
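Since the entries of this particular example are complex, one hypothetical way to see the unnormalized orthogonal vectors is to continue with the matrix A above, stay in QQbar, and simply omit orthonormal=True (the default is False); the remark about QQ applies when the entries are rational.
G2, M2 = A.gram_schmidt()     # default: orthogonal rows, no normalization
[G2.row(i).hermitian_inner_product(G2.row(j)) for i in range(2) for j in range(i+1, 3)]
# each inner product above should be 0, confirming the rows are pairwise orthogonal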
Notice that it is an error to ask for an orthonormal set over QQ since you cannot expect to take square roots of rationals and stick with rationals.

Reading Questions O Reading Questions

1. Given set orthogonal?

Is the set
\begin{equation*} \set{\colvector{1\\-1\\2},\,\colvector{5\\3\\-1},\,\colvector{8\\4\\-2}} \end{equation*}
an orthogonal set? Why?

2. Orthogonal vs orthonormal.

What is the distinction between an orthogonal set and an orthonormal set?

3. Output of Gram-Schmidt process.

What is nice about the output of the Gram-Schmidt process?

Exercises O Exercises

C20.

Complete Example AOS by verifying that the four remaining inner products are zero.

C21.

Verify that the set \(T\) created in Example GSTV by the Gram-Schmidt Procedure is an orthogonal set.

M60.

Suppose that \(\set{\vect{u},\,\vect{v},\,\vect{w}}\subseteq\complex{n}\) is an orthonormal set. Prove that \(\vect{u}+\vect{v}\) is not orthogonal to \(\vect{v}+\vect{w}\text{.}\)

T20.

Suppose that \(\vect{u},\,\vect{v},\,\vect{w}\in\complex{n}\text{,}\) \(\alpha,\,\beta\in\complexes\) and \(\vect{u}\) is orthogonal to both \(\vect{v}\) and \(\vect{w}\text{.}\) Prove that \(\vect{u}\) is orthogonal to \(\alpha\vect{v}+\beta\vect{w}\text{.}\)
Solution.
Vectors are orthogonal if their inner product is zero (Definition OV), so we compute
\begin{align*} \innerproduct{\vect{u}}{\alpha\vect{v}+\beta\vect{w}} &= \innerproduct{\vect{u}}{\alpha\vect{v}}+ \innerproduct{\vect{u}}{\beta\vect{w}} &&\knowl{./knowl/xref/theorem-IPVA.html}{\text{Theorem IPVA}}\\ &= \alpha\innerproduct{\vect{u}}{\vect{v}}+ \beta\innerproduct{\vect{u}}{\vect{w}} &&\knowl{./knowl/xref/theorem-IPSM.html}{\text{Theorem IPSM}}\\ &= \alpha\left(0\right)+\beta\left(0\right) &&\knowl{./knowl/xref/definition-OV.html}{\text{Definition OV}}\\ &=0\text{.} \end{align*}
So by Definition OV, \(\vect{u}\) and \(\alpha\vect{v}+\beta\vect{w}\) are an orthogonal pair of vectors.

T21.

Suppose that \(\vect{u},\,\vect{v}\in\complex{m}\) are orthogonal vectors with equal norms. Prove that \(\vect{u}+\vect{v}\) and \(\vect{u}-\vect{v}\) are orthogonal vectors.
Solution.
Vectors are orthogonal if their inner product is zero (Definition OV), so we compute
\begin{align*} \innerproduct{\vect{u}+\vect{v}}{\vect{u}-\vect{v}} &= \innerproduct{\vect{u}}{\vect{u}-\vect{v}}+ \innerproduct{\vect{v}}{\vect{u}-\vect{v}} &&\knowl{./knowl/xref/theorem-IPVA.html}{\text{Theorem IPVA}}\\ &= \innerproduct{\vect{u}}{\vect{u}}+ \innerproduct{\vect{u}}{-\vect{v}}+ \innerproduct{\vect{v}}{\vect{u}}+ \innerproduct{\vect{v}}{-\vect{v}} &&\knowl{./knowl/xref/theorem-IPVA.html}{\text{Theorem IPVA}}\\ &= \innerproduct{\vect{u}}{\vect{u}}- \innerproduct{\vect{u}}{\vect{v}}+ \innerproduct{\vect{v}}{\vect{u}}- \innerproduct{\vect{v}}{\vect{v}} &&\knowl{./knowl/xref/theorem-IPVA.html}{\text{Theorem IPVA}}\\ &= \innerproduct{\vect{u}}{\vect{u}}- 0+ 0- \innerproduct{\vect{v}}{\vect{v}} &&\knowl{./knowl/xref/definition-OV.html}{\text{Definition OV}}\\ &= \norm{\vect{u}}^2 - \norm{\vect{v}}^2 &&\knowl{./knowl/xref/theorem-IPN.html}{\text{Theorem IPN}}\\ &=0&&\text{Hypothesis}\text{.} \end{align*}
So by Definition OV, \(\vect{u}+\vect{v}\) and \(\vect{u}-\vect{v}\) are an orthogonal pair of vectors. Notice how this proof uses theorems about vectors, and never considers individual entries of those vectors.

T30.

Suppose that the set \(S\) in the hypothesis of Theorem GSP is not just linearly independent, but is also orthogonal. Prove that the set \(T\) created by the Gram-Schmidt procedure is equal to \(S\text{.}\) (Note that we are getting a stronger conclusion than \(\spn{T}=\spn{S}\) — the conclusion is that \(T=S\text{.}\)) In other words, it is pointless to apply the Gram-Schmidt procedure to a set that is already orthogonal.

T31.

Suppose that the set \(S\) is linearly independent. Apply the Gram-Schmidt procedure (Theorem GSP) twice, creating first the linearly independent set \(T_1\) from \(S\text{,}\) and then creating \(T_2\) from \(T_1\text{.}\) As a consequence of Exercise O.T30, prove that \(T_1=T_2\text{.}\) In other words, it is pointless to apply the Gram-Schmidt procedure twice.