
Worksheet 3.4: Dual basis

Let $V$ be a vector space over $\mathbb{R}$. (That is, scalars are real numbers, rather than, say, complex.) A linear transformation $\phi:V\to\mathbb{R}$ is called a linear functional.
Here are some examples of linear functionals:
  • The map $\phi:\mathbb{R}^3\to\mathbb{R}$ given by $\phi(x,y,z)=3x-2y+5z$.
  • The evaluation map $\operatorname{ev}_a:P_n(\mathbb{R})\to\mathbb{R}$ given by $\operatorname{ev}_a(p)=p(a)$. (For example, $\operatorname{ev}_2(2-4x+5x^2)=2-4(2)+5(2^2)=14$.)
  • The map $\phi:C[a,b]\to\mathbb{R}$ given by $\phi(f)=\int_a^b f(x)\,dx$, where $C[a,b]$ denotes the space of all continuous functions on $[a,b]$.
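If you like to check things numerically, here is a quick Python sketch of the evaluation functional from the example above. (The coefficient-list representation of polynomials is just a convenience for this sketch, not part of the worksheet.)

```python
# The evaluation functional ev_a on P_n(R), with polynomials represented
# as coefficient lists [c0, c1, c2, ...] (an illustrative choice).

def ev(a, coeffs):
    """Evaluate the polynomial c0 + c1*x + c2*x^2 + ... at x = a."""
    return sum(c * a**k for k, c in enumerate(coeffs))

p = [2, -4, 5]           # p(x) = 2 - 4x + 5x^2
print(ev(2, p))          # ev_2(p) = 2 - 4(2) + 5(2^2) = 14

# Linearity check: ev_a(p + q) = ev_a(p) + ev_a(q)
q = [1, 0, -3]           # q(x) = 1 - 3x^2
pq = [c + d for c, d in zip(p, q)]
print(ev(2, pq) == ev(2, p) + ev(2, q))   # True
```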
Note that for any vector spaces $V,W$, the set $L(V,W)$ of linear transformations from $V$ to $W$ is itself a vector space, if we define
$$(S+T)(v)=S(v)+T(v), \quad\text{and}\quad (kT)(v)=k(T(v)).$$
In particular, given a vector space $V$, we denote the set of all linear functionals on $V$ by $V^*=L(V,\mathbb{R})$, and call this the dual space of $V$.
We make the following observations:
  • If $\dim V=n$ and $\dim W=m$, then $L(V,W)$ is isomorphic to the space $M_{m\times n}$ of $m\times n$ matrices, so it has dimension $mn$.
  • Since $\dim\mathbb{R}=1$, if $V$ is finite-dimensional, then $V^*=L(V,\mathbb{R})$ has dimension $1\cdot n=n$.
  • Since $\dim V^*=\dim V$, $V$ and $V^*$ are isomorphic.
Here is a basic example that is intended as a guide to your intuition regarding dual spaces. Take $V=\mathbb{R}^3$. Given any $v\in V$, define a map $\phi_v:V\to\mathbb{R}$ by $\phi_v(w)=v\cdot w$ (the usual dot product).
One way to think about this: if we write $v\in V$ as a column vector $\begin{bmatrix}v_1\\ v_2\\ v_3\end{bmatrix}$, then we can identify $\phi_v$ with $v^T$, where the action is via multiplication:
$$\phi_v(w)=\begin{bmatrix}v_1 & v_2 & v_3\end{bmatrix}\begin{bmatrix}w_1\\ w_2\\ w_3\end{bmatrix}=v_1w_1+v_2w_2+v_3w_3.$$
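The row-vector picture above is easy to test in Python: acting by $v^T$ via matrix multiplication gives the same number as the dot product. (The particular vectors below are illustrative choices.)

```python
import numpy as np

# phi_v(w) = v . w, realized as row-vector-times-column-vector.
v = np.array([1.0, -2.0, 3.0])   # example vector (illustrative)
w = np.array([4.0, 0.0, 5.0])

print(v @ w)            # v^T acting on w: 1*4 + (-2)*0 + 3*5 = 19
print(np.dot(v, w))     # the dot product gives the same number: 19
```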
It turns out that this example can be generalized, but the definition of $\phi_v$ involves the dot product, which is particular to $\mathbb{R}^n$.
There is a generalization of the dot product, known as an inner product. (See Chapter 10 of Nicholson, for example.) On any inner product space, we can associate each vector $v\in V$ to a linear functional $\phi_v$ using the procedure above.
Another way to work concretely with dual vectors (without the need for inner products) is to define things in terms of a basis.
Given a basis $\{v_1,v_2,\ldots,v_n\}$ of $V$, we define the corresponding dual basis $\{\phi_1,\phi_2,\ldots,\phi_n\}$ of $V^*$ by
$$\phi_i(v_j)=\begin{cases}1, & \text{if } i=j\\ 0, & \text{if } i\neq j.\end{cases}$$
Note that each $\phi_i$ is well-defined, since any linear transformation can be defined by giving its values on a basis.
For the standard basis on $\mathbb{R}^n$, note that the corresponding dual basis functionals are given by
$$\phi_j(x_1,x_2,\ldots,x_n)=x_j.$$
That is, these are the coordinate functions on $\mathbb{R}^n$.
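A small sketch makes the defining property concrete: the coordinate functionals evaluated on the standard basis vectors reproduce the $\phi_i(v_j)$ pattern above. (Indexing below is 0-based, a Python convention rather than the worksheet's.)

```python
# The dual basis of the standard basis on R^n: phi_j picks out the j-th coordinate.

def dual_basis(n):
    """Return the coordinate functionals [phi_0, ..., phi_{n-1}] on R^n."""
    return [lambda x, j=j: x[j] for j in range(n)]  # j=j freezes each index

# Standard basis of R^3 as lists
e = [[1 if i == j else 0 for i in range(3)] for j in range(3)]
phis = dual_basis(3)

# phi_i(e_j) = 1 if i == j else 0 -- the defining property of the dual basis
print([[phi(ej) for ej in e] for phi in phis])  # identity-matrix pattern
```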

1.

Show that the dual basis is indeed a basis for $V^*$.
Next, let $V$ and $W$ be vector spaces, and let $T:V\to W$ be a linear transformation. For any such $T$, we can define the dual map $T^*:W^*\to V^*$ by $T^*(\phi)=\phi\circ T$ for each $\phi\in W^*$.
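Here is a concrete numerical sketch of a dual map, with $T:\mathbb{R}^2\to\mathbb{R}^3$ given by a matrix and the functional $\phi$ represented by a row vector as earlier in the worksheet. (All specific numbers are illustrative choices.)

```python
import numpy as np

# T: R^2 -> R^3 given by a matrix A (illustrative choice)
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 0.0]])
T = lambda x: A @ x

# phi in W* = (R^3)*, represented by a row vector: phi(w) = w1 - w2 + 2 w3
phi_row = np.array([1.0, -1.0, 2.0])
phi = lambda w: phi_row @ w

# The dual map: T*(phi) = phi o T, a functional on R^2
Tstar_phi = lambda x: phi(T(x))

x = np.array([1.0, 1.0])
print(Tstar_phi(x))              # phi(A x)
print((phi_row @ A) @ x)         # same number: the row vector phi_row @ A represents T*(phi)
```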

2.

Confirm that (a) $T^*(\phi)$ does indeed define an element of $V^*$ (that is, a linear map from $V$ to $\mathbb{R}$), and (b) $T^*$ is linear.

3.

Let $V=P(\mathbb{R})$ be the space of all polynomials, and let $D:V\to V$ be the derivative transformation $D(p(x))=p'(x)$. Let $\phi:V\to\mathbb{R}$ be the linear functional defined by $\phi(p(x))=\int_0^1 p(x)\,dx$.
What is the linear functional $D^*(\phi)$?

4.

Show that dual maps satisfy the following properties: for any $S,T\in L(V,W)$ and $k\in\mathbb{R}$,
  1. $(S+T)^* = S^* + T^*$
  2. $(kS)^* = kS^*$
  3. $(ST)^* = T^*S^*$
In item 3, assume $S\in L(V,W)$ and $T\in L(U,V)$. (Reminder: the notation $ST$ is sometimes referred to as the “product” of $S$ and $T$, in analogy with matrices, but actually represents the composition $S\circ T$.)
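Since dual maps correspond to transposes of matrices, item 3 mirrors the matrix identity $(AB)^T=B^TA^T$, which is easy to check numerically. (The matrices below are arbitrary illustrative choices.)

```python
import numpy as np

# Matrix analogue of (S o T)* = T* o S*: the transpose reverses products.
A = np.array([[1, 2, 0],
              [0, 1, 3]])        # S: R^3 -> R^2 (illustrative)
B = np.array([[2, 1],
              [0, 1],
              [1, 0]])           # T: R^2 -> R^3 (illustrative)

# S o T corresponds to AB, so (S o T)* corresponds to (AB)^T = B^T A^T
print(np.array_equal((A @ B).T, B.T @ A.T))   # True
```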
We have one topic remaining in relation to dual spaces: determining the kernel and image of a dual map $T^*$ (in terms of the kernel and image of $T$). Let $V$ be a vector space, and let $U$ be a subspace of $V$. Any such subspace determines an important subspace of $V^*$: the annihilator of $U$, denoted by $U^0$ and defined by
$$U^0=\{\phi\in V^* \mid \phi(u)=0 \text{ for all } u\in U\}.$$

5.

Determine a basis (in terms of the standard dual basis for $(\mathbb{R}^4)^*$) for the annihilator $U^0$ of the subspace $U\subseteq\mathbb{R}^4$ given by
$$U=\{(2a+b,\,3b,\,a,\,a-2b) \mid a,b\in\mathbb{R}\}.$$
Here is a fun theorem about annihilators that I won’t ask you to prove: if $V$ is finite-dimensional and $U$ is a subspace of $V$, then $\dim U + \dim U^0 = \dim V$.
Here’s an outline of the proof. For any subspace $U\subseteq V$, we can define the inclusion map $i:U\to V$, given by $i(u)=u$. (This is not the identity on $V$, since it’s only defined on $U$. In particular, it is not onto unless $U=V$, although it is clearly one-to-one.)
Then $i^*$ is a map from $V^*$ to $U^*$. Moreover, note that for any $\phi\in V^*$, $i^*(\phi)\in U^*$ satisfies, for any $u\in U$,
$$i^*(\phi)(u)=\phi(i(u))=\phi(u).$$
Thus, $\phi\in\ker i^*$ if and only if $i^*(\phi)=0$, which is if and only if $\phi(u)=0$ for all $u\in U$, which is if and only if $\phi\in U^0$. Therefore, $\ker i^*=U^0$.
By the dimension theorem, we have:
$$\dim V^* = \dim\ker i^* + \dim\operatorname{im} i^*.$$
With a bit of work, one can show that $\operatorname{im} i^*=U^*$, and we get the result from the fact that $\dim V^*=\dim V$ and $\dim U^*=\dim U$.
There are a number of interesting results of this flavour. For example, one can show that a map $T$ is injective if and only if $T^*$ is surjective, and vice-versa.
One final, optional task: return to the example of $\mathbb{R}^n$, viewed as column vectors, and consider a matrix transformation $T_A:\mathbb{R}^n\to\mathbb{R}^m$ given by $T_A(x)=Ax$ as usual. Viewing $(\mathbb{R}^n)^*$ as row vectors, convince yourself that $(T_A)^*=T_{A^T}$; that is, that what we’ve really been talking about all along is just the transpose of a matrix!
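To help convince yourself, here is a numerical check on one example: for $\phi_v\in(\mathbb{R}^m)^*$, the pulled-back functional $(T_A)^*(\phi_v)$ agrees with $\phi_{A^Tv}$ on $\mathbb{R}^n$. (The matrix and vectors are illustrative choices, not from the worksheet.)

```python
import numpy as np

# T_A: R^3 -> R^2 (illustrative matrix)
A = np.array([[1.0, 0.0, 2.0],
              [-1.0, 3.0, 1.0]])
v = np.array([2.0, -1.0])        # phi_v in (R^2)*
x = np.array([1.0, 4.0, -2.0])   # test vector in R^3

lhs = v @ (A @ x)                # (T_A)*(phi_v)(x) = phi_v(A x)
rhs = (A.T @ v) @ x              # phi_{A^T v}(x)
print(lhs == rhs)                # True: the dual map is multiplication by A^T
```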