
Worksheet 3.4: Dual basis

Let V be a vector space over R. (That is, scalars are real numbers, rather than, say, complex.) A linear transformation ϕ:V→R is called a linear functional.
Here are some examples of linear functionals:
  • The map ϕ:R3→R given by ϕ(x,y,z)=3x−2y+5z.
  • The evaluation map eva:Pn(R)→R given by eva(p)=p(a). (For example, ev2(3−4x+5x2)=3−4(2)+5(22)=15.)
  • The map ϕ:C[a,b]→R given by ϕ(f)=∫abf(x)dx, where C[a,b] denotes the space of all continuous functions on [a,b].
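As a quick sanity check (not part of the worksheet), here is a NumPy sketch of the first two examples; the names phi and ev are my own:

```python
import numpy as np

def phi(v):
    """The functional phi(x, y, z) = 3x - 2y + 5z from the first example."""
    x, y, z = v
    return 3*x - 2*y + 5*z

def ev(a, coeffs):
    """The evaluation functional ev_a, with p given by its coefficient list
    [c0, c1, ..., cn], i.e. p(x) = c0 + c1*x + ... + cn*x^n."""
    return sum(c * a**k for k, c in enumerate(coeffs))

# Linearity check: phi(u + k*w) should equal phi(u) + k*phi(w).
u, w, k = np.array([1.0, 2.0, 3.0]), np.array([4.0, -1.0, 0.0]), 2.5
assert np.isclose(phi(u + k*w), phi(u) + k*phi(w))
```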
Note that for any vector spaces V,W, the set L(V,W) of linear transformations from V to W is itself a vector space, if we define
(S+T)(v)=S(v)+T(v), and (kT)(v)=k(T(v)).
In particular, given a vector space V, we denote the set of all linear functionals on V by V∗=L(V,R), and call this the dual space of V.
We make the following observations:
  • If dim V=n and dim W=m, then L(V,W) is isomorphic to the space Mmn of m×n matrices, so it has dimension mn.
  • Since dim R=1, if V is finite-dimensional with dim V=n, then V∗=L(V,R) has dimension 1⋅n=n.
  • Since dim V∗=dim V, V and V∗ are isomorphic.
Here is a basic example that is intended as a guide to your intuition regarding dual spaces. Take V=R3. Given any v∈V, define a map ϕv:V→R by ϕv(w)=v⋅w (the usual dot product).
One way to think about this: if we write v∈V as a column vector with entries v1,v2,v3, then we can identify ϕv with the row vector vT=[v1 v2 v3], where the action is via matrix multiplication:
ϕv(w)=[v1 v2 v3][w1 w2 w3]T=v1w1+v2w2+v3w3.
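This identification of ϕv with a row vector can be checked numerically; the following NumPy sketch (my own, not from the worksheet) compares the row-times-column product with the dot product:

```python
import numpy as np

v = np.array([1.0, -2.0, 3.0])
w = np.array([4.0, 0.0, 5.0])

# phi_v(w) computed as the 1x3 row vector v^T times the 3x1 column vector w:
row_action = v.reshape(1, 3) @ w.reshape(3, 1)   # a 1x1 matrix

# ...agrees with the usual dot product v . w:
assert np.isclose(row_action.item(), np.dot(v, w))
```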
It turns out that this example can be generalized, but the definition of ϕv involves the dot product, which is particular to Rn.
There is a generalization of the dot product, known as an inner product. (See Chapter 10 of Nicholson, for example.) On any inner product space, we can associate each vector v∈V to a linear functional ϕv using the procedure above.
Another way to work concretely with dual vectors (without the need for inner products) is to define things in terms of a basis.
Given a basis {v1,v2,…,vn} of V, we define the corresponding dual basis {ϕ1,ϕ2,…,ϕn} of V∗ by
ϕi(vj) = 1 if i=j, and ϕi(vj) = 0 if i≠j.
Note that each ϕi is well-defined, since any linear transformation can be defined by giving its values on a basis.
For the standard basis on Rn, note that the corresponding dual basis functionals are given by
ϕj(x1,x2,…,xn)=xj.
That is, these are the coordinate functions on Rn.
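For a non-standard basis, one concrete way to compute the dual basis is with a matrix inverse: if the basis vectors are the columns of a matrix B, then the identity B⁻¹B=I says exactly that the rows of B⁻¹ are the coefficient vectors of the dual basis functionals. A NumPy sketch, with an assumed example basis of R3:

```python
import numpy as np

# An assumed example basis of R^3: v_j is the j-th COLUMN of B.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# The i-th ROW of B^{-1} is the coefficient vector of the dual functional phi_i.
Phi = np.linalg.inv(B)

# Check the defining property: phi_i(v_j) = 1 if i == j, else 0.
for i in range(3):
    for j in range(3):
        expected = 1.0 if i == j else 0.0
        assert np.isclose(Phi[i] @ B[:, j], expected)
```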
Next, let V and W be vector spaces, and let T:V→W be a linear transformation. For any such T, we can define the dual map T∗:W∗→V∗ by T∗(ϕ)=ϕ∘T for each ϕ∈W∗.
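In code, T∗ is literally "precompose with T". A small NumPy sketch (the names and the matrix are my own) with T given by a matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

def T(v):
    """T : R^2 -> R^3 given by T(v) = A v."""
    return A @ v

def T_star(phi):
    """The dual map T* : (R^3)* -> (R^2)*, T*(phi) = phi composed with T."""
    return lambda v: phi(T(v))

phi = lambda w: w[0] - w[2]   # a functional on the codomain R^3
psi = T_star(phi)             # a functional on the domain R^2

v = np.array([1.0, 1.0])
assert np.isclose(psi(v), phi(A @ v))
```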

2.

Confirm (a) that T∗(ϕ) does indeed define an element of V∗ (that is, a linear map from V to R), and (b) that T∗ itself is linear.

3.

Let V=P(R) be the space of all polynomials, and let D:V→V be the derivative transformation D(p(x))=p′(x). Let ϕ:V→R be the linear functional defined by ϕ(p(x))=∫01p(x)dx.
What is the linear functional D∗(ϕ)?

4.

Show that dual maps satisfy the following properties: for any S,T∈L(V,W) and k∈R,
  1. (S+T)∗=S∗+T∗
  2. (kS)∗=kS∗
  3. (ST)∗=T∗S∗
In Item 3.4.4.c, assume S∈L(V,W) and T∈L(U,V). (Reminder: the notation ST is sometimes referred to as the “product” of S and T, in analogy with matrices, but it actually represents the composition S∘T.)
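These identities can be spot-checked numerically before proving them. A NumPy sketch of the third identity, with random matrices standing in for S and T and a random functional ϕ (all names and dimensions are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((4, 3))   # S in L(V, W), with V = R^3, W = R^4
T = rng.standard_normal((3, 2))   # T in L(U, V), with U = R^2
c = rng.standard_normal(4)        # phi(w) = c . w, a functional on W

phi = lambda w: c @ w
S_star = lambda f: (lambda v: f(S @ v))   # S* : W* -> V*
T_star = lambda f: (lambda u: f(T @ u))   # T* : V* -> U*

u = rng.standard_normal(2)
# (ST)*(phi) agrees with T*(S*(phi)) on u:
assert np.isclose(phi(S @ (T @ u)), T_star(S_star(phi))(u))
```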
We have one topic remaining in relation to dual spaces: determining the kernel and image of a dual map T∗ (in terms of the kernel and image of T). Let V be a vector space, and let U be a subspace of V. Any such subspace determines an important subspace of V∗: the annihilator of U, denoted by U0 and defined by
U0={ϕ∈V∗ | ϕ(u)=0 for all u∈U}.
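To make the definition concrete, here is a small NumPy check (with an assumed subspace, different from the exercise below) that a candidate functional lies in U0. Since ϕ is linear, it suffices to check that it vanishes on a spanning set of U:

```python
import numpy as np

# An assumed example: U = span{(1, 0, 1), (0, 1, 0)} in R^3.
u1 = np.array([1.0, 0.0, 1.0])
u2 = np.array([0.0, 1.0, 0.0])

# Candidate element of U^0: phi(x, y, z) = x - z.
phi = lambda v: v[0] - v[2]

# phi vanishes on every linear combination of u1 and u2, so phi is in U^0.
for a in (-2.0, 0.0, 1.5):
    for b in (-1.0, 3.0):
        assert np.isclose(phi(a*u1 + b*u2), 0.0)
```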

5.

Determine a basis (in terms of the standard dual basis for (R4)∗) for the annihilator U0 of the subspace U⊆R4 given by
U={(2a+b,3b,a,a−2b) | a,b∈R}.
Here is a fun theorem about annihilators that I won’t ask you to prove: if V is finite-dimensional and U⊆V is a subspace, then dim U + dim U0 = dim V.
Here’s an outline of the proof. For any subspace U⊆V, we can define the inclusion map i:U→V, given by i(u)=u. (This is not the identity on V since it’s only defined on U. In particular, it is not onto unless U=V, although it is clearly one-to-one.)
Then i∗ is a map from V∗ to U∗. Moreover, note that for any ϕ∈V∗, i∗(ϕ)∈U∗ satisfies, for any u∈U,
i∗(ϕ)(u)=ϕ(i(u))=ϕ(u).
Thus, ϕ∈ker i∗ if and only if i∗(ϕ)=0, which holds if and only if ϕ(u)=0 for all u∈U, which holds if and only if ϕ∈U0. Therefore, ker i∗=U0.
By the dimension theorem, we have:
dim V∗ = dim ker i∗ + dim im i∗.
With a bit of work, one can show that im i∗=U∗, and we get the result from the fact that dim V∗=dim V and dim U∗=dim U.
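The resulting dimension count, dim U + dim U0 = dim V, can also be verified numerically: identifying each functional on Rn with its coefficient vector, U0 corresponds to the null space of any matrix whose rows span U, so rank-nullity gives the count. A NumPy sketch with an assumed subspace of R4:

```python
import numpy as np

# An assumed subspace U of R^4, spanned by the ROWS of M.
M = np.array([[1.0, 0.0, 1.0,  2.0],
              [0.0, 1.0, 0.0, -1.0]])

# Identify phi in (R^4)* with its coefficient vector c (so phi(v) = c . v).
# Then phi is in U^0 exactly when M @ c = 0, i.e. c is in the null space of M.
dim_U  = np.linalg.matrix_rank(M)
dim_U0 = M.shape[1] - dim_U        # rank-nullity applied to M

assert dim_U + dim_U0 == 4         # dim U + dim U^0 = dim V
```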
There are a number of interesting results of this flavour. For example, one can show that a linear map T is injective if and only if T∗ is surjective, and vice-versa.
One final, optional task: return to the example of Rn, viewed as column vectors, and consider a matrix transformation TA:Rn→Rm given by TA(x)=Ax as usual. Viewing (Rn)∗ as row vectors, convince yourself that (TA)∗=TAT; that is, that what we’ve really been talking about all along is just the transpose of a matrix!
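In NumPy terms, this is the familiar identity (Ax)⋅w = x⋅(ATw): precomposing the functional "dot with w" by TA gives the functional identified with ATw. A quick sketch with assumed random data:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))   # T_A : R^4 -> R^3
w = rng.standard_normal(3)        # identifies a functional phi_w on R^3
x = rng.standard_normal(4)

# (T_A)*(phi_w) applied to x is phi_w(A x) = w . (A x), and this
# equals (A^T w) . x, i.e. the functional identified with A^T w.
assert np.isclose(w @ (A @ x), (A.T @ w) @ x)
```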