124

A very important theorem in linear algebra that is rarely taught is:

A vector space has the same dimension as its dual if and only if it is finite dimensional.

I have seen a total of one proof of this claim, in Jacobson's "Lectures in Abstract Algebra II: Linear Algebra". The proof is fairly difficult and requires some really messy arguments about cardinality using, if I remember correctly, infinite sequences to represent $\mathbb{N}\times\mathbb{N}$ matrices. Has anyone come up with a better argument in the 57 years since Jacobson's book was published, or is the noted proof still the only way to prove this fact?

Edit: For reference, the proof is on pages 244-248 of Jacobson's

Lectures in Abstract Algebra: II. Linear Algebra.

Harry Gindi
  • 19,374
  • I've never seen any of the proofs of this, including the one you mention, but given that dimension is - I presume - the cardinality of a Hamel basis, it doesn't seem surprising to me that a proof in full generality requires some mess as opposed to a slick proof. I am tempted to be rash and claim that some kind of diagonal argument should come into play, but that's not based on any serious thought or intuition. – Yemon Choi Jan 29 '10 at 03:11
  • The only one I was able to find was this proof in Jacobson. This result is actually really important. It came to mind earlier tonight, when I used this fact to prove that a smooth representation of a locally profinite group is admissible if and only if the representation is isomorphic to its (smooth) double contragredient. – Harry Gindi Jan 29 '10 at 03:23
  • 6
    I do recall learning this result in my undergraduate days, so it is not so rarely taught. But you're right that this is one of these "well known" things that a lot of people seem not to know. I think part of the problem is that the proof is -- very unusually for a linear algebra fact -- pretty hard! – Pete L. Clark Jan 29 '10 at 06:47
  • 55
    "very important theorem" - I disagree. – darij grinberg Jan 29 '10 at 12:48
  • 81
    @darij: You're entitled to disagree, but saying so, without any justification whatsoever, is not a positive contribution. – Pete L. Clark Jan 29 '10 at 12:58
  • 56
    This theorem proves that we cannot ever under any circumstances extend the results of finite dimensional linear algebra without considering topology. I think that's pretty important. =p – Harry Gindi Jan 29 '10 at 13:23
  • 7
    Yes, but this is basically all it says. I consider a theorem important if it is useful in some proofs, not if it just stands there like a "wrong-way" sign. I don't know what an admissible representation is, but I doubt that the non-topological dual of a vector space is used anywhere in profinite group theory. – darij grinberg Jan 29 '10 at 16:30
  • 20
    Is for instance Gödel's incompleteness theorem not important? – Dan Petersen Jan 29 '10 at 16:48
  • 2
    It's an exception since it is more or less 1000 negative results in one theorem. Whenever anything has the answer "no" in logic, it is most likely proven by reduction to the halting problem / Gödel incompleteness. But I still consider the positive results (such as Gödel's own completeness theorem, though it is much simpler and less nontrivial than the incompleteness one) way more important. – darij grinberg Jan 29 '10 at 18:01
  • 15
    My comment was mainly a reply to at least 2 people here overestimating the impact of the original question. If it wouldn't have been called a "very important theorem" by the author or a "'well known' thing that a lot of people seem not to know", I wouldn't have objected. It's not like I wouldn't value actual mathematics over discussions about importance. – darij grinberg Jan 29 '10 at 19:43
  • 37
    Your comments seem disrespectful to Harry and especially to Mariano and Andrea, who have taken time to write out very interesting and instructive answers. Please refrain from making purely negative remarks. – Pete L. Clark Jan 29 '10 at 21:18
  • 34
+1 for the question and for the answers. Here is an example for an application that I will give in a class, which shows that the theorem in question is not purely a no-go result: Poincaré duality in de Rham theory states that $H^k (M) \cong (H^{n-k}_{cpt} )^{\ast}$ for an oriented manifold. If $M$ is compact, then $H^k (M) \cong (H^k (M))^{\ast \ast}$, so $H^k (M)$ is finite-dimensional. – Johannes Ebert Dec 09 '12 at 18:52
  • 1
I must be misunderstanding the question, because for $1 < p < \infty$, $L^p$ is a Banach space whose dual is $L^q$ (where $1/p + 1/q = 1$), and these have the same dimensions. –  Nov 11 '14 at 16:05
  • 16
    The question is about the dual in the sense of abstract vector spaces (i.e., the space of all functionals with no continuity requirement), not the continuous dual. – Jeremy Rickard Nov 11 '14 at 16:17
  • 6
    The proof of this is also Exercise 5 of Section 11.3 of Abstract Algebra, Dummit and Foote. – Leon Avery Feb 15 '15 at 19:12
  • @darijgrinberg: Perhaps the issue with "very important theorem" is the context. Nobody would argue that the plumbing infrastructure in a home isn't very important. But when you talk about plumbing you've specifically narrowed to an essentially civil-engineering context. This theorem is "important" in that sense. It's a statement that is true, and has to do with basic objects people use in mathematics. It perhaps has more of a relationship to technical issues of computation than anything else. But computability is certainly of some importance. "Important" is a bit of a loaded word. – Ryan Budney Apr 16 '22 at 18:38

4 Answers

141

Here is a simple proof I thought of; tell me if anything is wrong.

First claim. Let $k$ be a field and $V$ an infinite-dimensional vector space whose dimension is at least the cardinality of $k$. Then $\operatorname{dim}V^{*} >\operatorname{dim}V$.

Indeed let $E$ be a basis for $V$. Elements of $V^*$ correspond bijectively to functions from $E$ to $k$, while elements of $V$ correspond to such functions with finite support. So the cardinality of $V^{*}$ is that of $k^E$, while that of $V$ is, if I'm not wrong, equal to that of $E$ (in this first step I am assuming $\operatorname{card} k \le \operatorname{card} E$).

Indeed $V$ is a union, parametrized by $\mathbb{N}$, of sets of cardinality equal to that of $E$ (the vectors supported on at most $n$ basis elements). In particular $\operatorname{card} V < \operatorname{card} V^{*}$, so the same inequality holds for the dimensions.
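Under the running assumptions of the first claim ($E$ an infinite basis of $V$ and $\operatorname{card} k \le \operatorname{card} E$), the counting can be condensed into one chain of (in)equalities; this is just a sketch of the bookkeeping:

```latex
% Cardinality bookkeeping for the first claim.
% Assumptions: E is an infinite basis of V and card(k) <= card(E).
\operatorname{card} V
   \;=\; \aleph_0 \cdot \operatorname{card} E   % countable union of sets of size card(E)
   \;=\; \operatorname{card} E
   \;<\; 2^{\operatorname{card} E}              % Cantor's theorem
   \;\le\; \operatorname{card}\bigl(k^{E}\bigr)
   \;=\; \operatorname{card} V^{*}.
```

A strict inequality of cardinalities forces the strict inequality of dimensions: if $\operatorname{dim} V^* \le \operatorname{dim} V$ held, then $V^*$ would be isomorphic to a subspace of $V$, contradicting $\operatorname{card} V < \operatorname{card} V^{*}$.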

Second claim. Let $h \subset k$ be two fields. If the thesis holds for vector spaces over $h$, then it holds for vector spaces over $k$.

Indeed let $V$ be a vector space over $k$, and $E$ a basis. Functions with finite support from $E$ to $h$ form a vector space $W$ over $h$ such that $V$ is isomorphic to the extension of scalars of $W$, i.e. to $W\otimes_h k$. Every functional from $W$ to $h$ extends to a functional from $V$ to $k$, hence

$$\operatorname{dim}_k V = \operatorname{dim}_h W < \operatorname{dim}_h W^* \leq \operatorname{dim}_k V^*.$$

Putting the two claims together and using the fact that every field contains an at most denumerable subfield (its prime field) yields the thesis.

Mizar
  • 3,096
Andrea Ferretti
  • 14,454
  • 1
    It looks correct to me, and if so, is a fantastic proof. – Pete L. Clark Jan 29 '10 at 12:55
  • 1
Well, if you look at the equivalent statement dim V* <= dim V => card V* <= card V, it should be obvious, since the hypothesis implies that V* is isomorphic to a subspace of V. – Andrea Ferretti Jan 29 '10 at 13:38
  • @Harry Gindi An interesting related problem: http://www.math.leidenuniv.nl/~naw/home/ps/pdf/2009-3.pdf (Problem C). – Ady Jan 29 '10 at 15:42
  • "Every functional from W to h extends to a functional from V to k": this should be obvious but I'm not managing to fill in the details. Any hint? – Mark Meckes Jan 29 '10 at 16:55
  • W and V have the same basis E, but over different fields of definition. A functional on W corresponds to a function from E to h, which is in particular a function from E to k. This gives your desired extension. – Andrea Ferretti Jan 29 '10 at 17:26
  • Of course, as I thought I was just being dense. Thanks. – Mark Meckes Jan 29 '10 at 18:07
  • 5
    You can extend functionals, but you must also show that if they were linearly independent over the base then they remain linearly independent after the extension. Once you fill in that detail, I think you have a full proof. – Pace Nielsen Jan 29 '10 at 18:40
  • 6
Take a linear combination = 0 with coefficients in k. That involves finitely many functionals which are independent over h. Take elements of W which are dual to these functionals. Now evaluate the linear combination at each of these vectors to find that all coefficients are zero. – Andrea Ferretti Jan 29 '10 at 19:20
  • 46
    I nominate this answer for a hypothetical future "Best of MO" collection. I find it almost magical, and with a moral -- don't just stick with the field you're given! -- that I find very appealing. – Pete L. Clark Jan 29 '10 at 21:45
  • 2
    @Andrew: Sorry to resurrect an old answer you probably haven't thought about since January; I was looking for an easy proof of precisely the statement in the title. This looks very nice, but I'm having trouble seeing how to justify the existence in W of elements that are dual to a given (finite) set of functionals. I assume you mean that given $f_1,\ldots,f_n$ distinct elements of $h^E$ with finite support, you will find elements $v_j$ in $W$ (or $V$) with $f_i(v_j)=\delta_{ij}$. But I don't see how to justify that such elements exist. – Arturo Magidin Sep 24 '10 at 20:41
@Arturo: As the functionals have finite (joint) support, you can reduce to the case of a finite-dimensional vector space, where the result is well-known from standard linear algebra (namely, it follows from the fact that $W \cong W^{**}$ in the finite-dimensional case). By the way, my name is Andrea, not Andrew. – Andrea Ferretti Sep 25 '10 at 18:11
  • 6
@Andrea, @Arturo: functionals in $W^*$ need not have finite support. I would argue instead as follows. Given $h$-lin. indep. $f_1,...,f_n$ in $W^*$, consider the $h$-linear map $W \rightarrow h^n$ where $w \mapsto (f_1(w),...,f_n(w))$. To show this is onto we show the only vector in $h^n$ orthogonal to the image is $(0,...,0)$. If $(c_1,...,c_n)$ in $h^n$ is orthogonal to the image then for all $w$ in $W$, $c_1f_1(w) + ... + c_nf_n(w) = 0$. Thus $c_1f_1 + ... + c_nf_n$ is 0 in $W^*$, so each $c_i$ is 0 by $h$-lin. indep. of the $f_i$'s. Now use $w_j$ giving image $(0,...,1,...,0)$ with 1 in the j-th slot. – KConrad May 08 '11 at 23:42
  • @KConrad: Since $h^n$ isn't an inner product space, how does orthogonality work that way? – Joshua P. Swanson Aug 31 '11 at 14:06
  • To reply to my previous comment, the full inner product space assumptions are unnecessary. The usual dot product on $h^n$ is a non-degenerate bilinear form, and the sum of the dimensions of the orthogonal complement of a subspace and of that subspace is still the dimension of the ambient space in this setting, as detailed at http://www.maths.bris.ac.uk/~maxmr/la2/notes_5.pdf (see Proposition 5.9).

    Everything works out then!

    – Joshua P. Swanson Sep 01 '11 at 06:36
  • Unfortunately this proof does not include the case when $\aleph_0\leq \dim V<|F|$. Here is a proof that includes this case http://math.stackexchange.com/a/35863/10976. – Shay Ben Moshe Nov 19 '13 at 21:46
  • 1
    @ShayBenMoshe I think the second claim in the answer shows that your case is taken care of. – user71815 Oct 17 '14 at 15:13
  • @KConrad: I'm sure I've got this wrong, but it seems to me there is still a hole in the argument. $f_i$ is a homomorphism from $W$ to $h$, but its extension to $V$ and $k$ may not be a homomorphism. For instance, $\mathbb{C}$ is a 2D vector space over $\mathbb{R}$, and $f:a+bi\mapsto a$ is a perfectly fine linear functional on this vector space. However, it doesn't extend to $\mathbb{C}$, since, for instance, $i=i\cdot f(1)\ne f(i\cdot 1)=0$. In fact, the extension of $\mathbb{C}$ over $\mathbb{R}$ to $\mathbb{C}$ over $\mathbb{C}$ is 1D. Where am I going wrong? – Leon Avery Feb 15 '15 at 18:35
  • 1
@LeonAvery, you are using $h = \mathbf R$, $k = \mathbf C$, and $V = \mathbf C$. The space $W$ is defined to be the finitely supported functions from a $k$-basis of $V$ to $h$. Here $V$ has a basis of size one, say $E = \{z_0\}$ with $z_0 \not= 0$. Then $W$ is all functions $E \rightarrow \mathbf R$, which is canonically $\mathbf R$ as a real vector space by identifying such a function with its value at $z_0$. Then $W^*$ (real dual space) has dim. 1 over $\mathbf R$ and any $f \in W^*$ extends to $V^*$ (cpx. dual space) by $z \mapsto (z/z_0)f(z_0) = (f(z_0)/z_0)z$. That's $\mathbf C$-linear. – KConrad Feb 15 '15 at 22:16
  • @LeonAvery, your error is that you are not constructing $W$ correctly. The example you give, where $f$ is the real part function on $\mathbf C$, is not an element of $W$. – KConrad Feb 15 '15 at 22:19
  • @KConrad which part of this argument cannot be generalized to projective, or at least free modules over a noncommutative ring? – Exterior Nov 17 '15 at 16:44
  • @AndreaFerretti It seems to me that the cardinality of $V\cong k^{(E)}$ (the bracket means "finitely supported") is that of $k$ (if $E$ is infinite and $card(E)<card(k)$), no ? – Duchamp Gérard H. E. Mar 17 '16 at 08:02
@DuchampGérardH.E. In the first step I am assuming $\operatorname{card} E \geq \operatorname{card} k$ – Andrea Ferretti Mar 17 '16 at 19:26
  • Nice proof, btw +1. – Duchamp Gérard H. E. Mar 17 '16 at 21:46
30

It is clearly enough to show that an infinite dimensional vector space $V$ has smaller dimension than its dual $V^*$.

Let $B$ be a basis of $V$, let $\mathcal P(B)$ be the set of its subsets, and for each $A\in\mathcal P(B)$ let $\chi_A\in V^*$ be the unique functional on $V$ such that the restriction $\chi_A|_B$ is the characteristic function of $A$. This gives us a map $\chi:A\in\mathcal P(B)\mapsto\chi_A\in V^*$.

Now a complete infinite boolean algebra $\mathcal B$ contains an independent subset $X$ such that $|X|=|\mathcal B|$---here, that $X$ be independent means that whenever $n,m\geq0$ and $x_1,\dots,x_n,y_1,\dots,y_m\in X$ are distinct we have $x_1\cdots x_n\overline y_1\cdots\overline y_m\neq0$. (This is true in this generality according to [Balcar, B.; Franěk, F. Independent families in complete Boolean algebras. Trans. Amer. Math. Soc. 274 (1982), no. 2, 607--618. MR0675069], but when $\mathcal B=\mathcal P(Z)$ is the algebra of subsets of an infinite set $Z$, this is a classical theorem of [Fichtenholz, G. M.; Kantorovich, L. V. Sur les opérations linéaires dans l'espace des fonctions bornées. Studia Math. 5 (1934) 69--98.] and [Hausdorff, F. Über zwei Sätze von G. Fichtenholz und L. Kantorovich. Studia Math. 6 (1936) 18--19].)

If $X$ is such an independent subset of $\mathcal P(B)$ (which is a complete infinite boolean algebra), then $\chi(X)$ is a linearly independent subset of $V^*$, as one can easily check. It follows that the dimension of $V^*$ is at least $|X|=|\mathcal P(B)|$, which is strictly larger than $|B|$.
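The check that $\chi(X)$ is linearly independent can be spelled out; the following sketch simply unwinds the definition of independence:

```latex
% Why chi(X) is linearly independent in V*.
% Take distinct A_1, ..., A_n in X and suppose
%     c_1 \chi_{A_1} + \dots + c_n \chi_{A_n} = 0   in V*.
% For each fixed i, independence of X gives
%     A_i \cap \overline{A_1} \cap \dots \cap \overline{A_n} \neq \emptyset
% (the intersection omitting \overline{A_i}), so there is a basis vector
% b_i lying in A_i but in none of the other A_j.
% Evaluating the dependence relation at b_i kills every term but one:
0 \;=\; \sum_{j=1}^{n} c_j\,\chi_{A_j}(b_i) \;=\; c_i .
```

So every coefficient vanishes, and any finite subfamily of $\chi(X)$ is linearly independent.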

Later: The proof of the existence of an independent subset is not hard; it is given, for example, in these notes by J. D. Monk as Theorem 8.9. In any case, I think this proof is pretty because it captures precisely the intuition (or, rather, my intuition) of why this is true. I have not seen the paper by Fichtenholz and Kantorovich (I'd love to get a copy!) but judging from its title one sees that they were doing similar things...

  • Is it easier to prove the relevant results about boolean algebras, or does that also rely on a proof like Jacobson's? – Harry Gindi Jan 29 '10 at 04:26
  • 2
    The paper by G. Fichtenholz and L. Kantorovitch can be found here : http://matwbn.icm.edu.pl/ksiazki/sm/sm5/sm519.pdf . – Ady Jan 29 '10 at 11:58
30

I know a fairly elementary proof in the case when the field is countable.

First, you prove that $\operatorname{Hom}(\bigoplus_{i\in I}A_{i},B)\cong \prod_{i\in I}\operatorname{Hom}(A_{i},B)$, where all terms are $R$-modules. (This should be fairly intuitive. A homomorphism from a direct sum is determined by its actions on each piece individually.)

Second, specialize $A_{i}$ and $B$ to equal your field. So the direct product is over a bunch of pieces (all isomorphic to your field).

Third, use the standard cardinality argument to show that a direct product over an infinite index set $I$ of pieces, each with at least two elements, has cardinality at least $2^{|I|}$, which is strictly greater than $|I|$.

This argument doesn't quite work when your field has large cardinality, but I still think it is nice. (Basically, this is thinking about the first part of Andrea's proof a little differently.)
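The three steps combine as follows; this is only a sketch, writing $B$ for a basis of $V$ (so $V\cong \bigoplus_{b\in B}k$) and assuming $k$ countable:

```latex
% Steps 1 and 2: dualizing a direct sum gives a direct product.
V^{*} \;=\; \operatorname{Hom}_{k}\Bigl(\textstyle\bigoplus_{b\in B} k,\; k\Bigr)
      \;\cong\; \prod_{b\in B} \operatorname{Hom}_{k}(k,k)
      \;\cong\; k^{B}.
% Step 3: for countable k and infinite B one has card(V) = card(B), while
\operatorname{card}\bigl(k^{B}\bigr) \;\ge\; 2^{\operatorname{card} B}
      \;>\; \operatorname{card} B .
% So card(V*) > card(V), and hence dim(V*) > dim(V).
```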

Pace Nielsen
  • 18,047
4

Here is a proof for $\dim(V^*)>\dim(V)$ for every infinite dimensional vector space $V$ over a field $k$. More precisely, we prove that $$\mathrm{dim}(V^*)=|V^*|=|k|^{\dim(V)}\geq 2^{\dim(V)}>\dim(V).$$

We first show that $\dim(V^*)\geq |k|$. Fix a basis $B$ for $V$ and choose a countably infinite subset $v_0,v_1,v_2,\ldots$ of vectors of $B$. Given $\alpha\in k$, we define $f_\alpha\in V^*$ to be the unique functional defined on the basis elements by setting $f_\alpha(v_n):=\alpha^n$ for all $n=0,1,2,\ldots$ and $f_\alpha(v)=0$ for all other elements $v\in B$. Notice that $(f_\alpha)_{\alpha\in k}$ is a family of pairwise distinct functionals on $V$.

We claim that the family $(f_\alpha)_{\alpha\in k}$ is linearly independent. Indeed, if $\alpha_1,\ldots,\alpha_n\in k$ are distinct elements and $$x_1f_{\alpha_1}+\ldots+x_nf_{\alpha_n}=0$$ for certain scalars $x_i\in k$, then evaluating at $v_0, v_1,\ldots, v_{n-1}$ we get a system of $n$ linear equations of the form $$x_1\alpha_1^i+\ldots+x_n\alpha_n^i=0,\quad i=0,1,\ldots,n-1$$ in the variables $x_1,\ldots,x_n$. The $n\times n$ matrix of this system is $$\left(\begin{array}{cccc} 1& 1 &\ldots & 1\\ \alpha_1& \alpha_2 &\ldots & \alpha_n\\ \alpha_1^2& \alpha_2^2 &\ldots & \alpha_n^2\\ \vdots & \vdots & &\vdots \\ \alpha_1^{n-1} &\alpha_2^{n-1} &\ldots &\alpha_n^{n-1} \end{array}\right).$$ This is a (transpose of a) Vandermonde matrix, which is therefore invertible, so that $x_1=x_2=\ldots=x_n=0$, as desired.
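For completeness, the invertibility step can be recorded via the classical determinant formula, which holds over any field and in any characteristic:

```latex
% Vandermonde determinant (transposed form used above):
\det\left(\begin{array}{cccc}
1 & 1 & \ldots & 1\\
\alpha_1 & \alpha_2 & \ldots & \alpha_n\\
\vdots & \vdots & & \vdots\\
\alpha_1^{n-1} & \alpha_2^{n-1} & \ldots & \alpha_n^{n-1}
\end{array}\right)
\;=\; \prod_{1\le i<j\le n} (\alpha_j-\alpha_i)
\;\neq\; 0,
% nonzero precisely because the alpha_i are pairwise distinct.
```

Only distinctness of the $\alpha_i$ is used, so no adjustment is needed in positive characteristic.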

It follows that $$|V^*|=\max\{|k|,\dim(V^*)\}=\dim(V^*).$$ Since there is an isomorphism of vector spaces $V^*\cong k^B$ (where the right-hand side denotes the space of all functions $B\to k$), it also follows that $$\dim(V^*)=|k^B|=|k|^{\dim(V)}.$$

  • This is a restatement of Pace Nielsen's argument. – Ryan Budney Apr 16 '22 at 19:09
@Ryan Budney, I didn't pay too much attention to Pace Nielsen's argument in the first place. They are based partially on the same ideas. My point was to give a possibly elementary proof of the fact that $|V^*|\geq |k|$. Pace Nielsen adds at the end a comment that the argument does not work for fields of large cardinality. And my argument actually works for every field, of arbitrary cardinality. – Alcides Buss Apr 17 '22 at 23:11
I meant $\dim(V^*)\geq |k|$ above, proving this inequality was the main point of my argument. And then it follows that $\dim(V^*)=|V^*|$. – Alcides Buss Apr 18 '22 at 14:16
  • Nice proof when the field has characteristic $0$. But may need adjustment in characteristic $p$, as $\alpha^p$ could be the same as $\alpha$ and the Vandermonde matrix may not be invertible. – PatrickR Jan 21 '23 at 06:38
@PatrickR, I didn't understand your comment. The Vandermonde matrix is always invertible, over any field, regardless of its characteristic. – Alcides Buss Jan 31 '23 at 12:57
Yes, you are right. I mistakenly thought that if all $\alpha^p=\alpha$, the $p$-th row would be the same as the first row (starting to count at row 0). But in that case all these $\alpha$ must be in the prime subfield and there won't be a $p$-th row. – PatrickR Jan 31 '23 at 20:33