
Here the definitions of the complete homogeneous symmetric polynomial $h_{k}$ and the elementary symmetric polynomial $e_{k}$ are:

$$ e_{k}=\sum_{1\le i_1<\cdots <i_k\le n}x_{i_1}\cdots x_{i_k}, \qquad h_{k}=\sum_{1\le i_1\le \cdots \le i_k\le n}x_{i_1}\cdots x_{i_k} $$
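For concreteness, here is a minimal sketch in Python (assuming SymPy is available) that builds $e_k$ and $h_k$ directly from these definitions in a small number of variables; the names `n`, `e`, and `h` are just my own choices for illustration.

```python
# Sketch: e_k sums over strictly increasing index tuples,
# h_k over weakly increasing ones (combinations with repetition).
from itertools import combinations, combinations_with_replacement
from sympy import symbols, Mul, expand

n = 3
x = symbols(f"x1:{n + 1}")  # x1, x2, x3

def e(k):
    # elementary: 1 <= i_1 < ... < i_k <= n
    return expand(sum(Mul(*t) for t in combinations(x, k)))

def h(k):
    # complete homogeneous: 1 <= i_1 <= ... <= i_k <= n
    return expand(sum(Mul(*t) for t in combinations_with_replacement(x, k)))

print(e(2))  # x1*x2 + x1*x3 + x2*x3
print(h(2))  # x1**2 + x1*x2 + x1*x3 + x2**2 + x2*x3 + x3**2
```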

I know that they are "dual" to each other in the ring of symmetric functions $\Lambda$, since the map $e_{k}\mapsto h_{k}$ extends to an involution of the ring. But this does not explain the other beautiful dual relationships to me.

For example, in Pieri's rule we have
$$ s_{\lambda}e_{k}=\sum_{\mu\in \lambda\otimes 1^{k}}s_{\mu}, \qquad s_{\lambda}h_{k}=\sum_{\mu\in \lambda\otimes k}s_{\mu}. $$
In the other direction, using Kostka numbers we have
$$ h_{\mu}=\sum_{\lambda} K_{\lambda \mu}s_{\lambda}, \qquad e_{\mu}=\sum_{\lambda}K_{\lambda \mu}s_{\lambda^{*}}. $$
The Jacobi-Trudi formula states that for $\ell(\lambda)\le n$ we have
$$ s_{\lambda}=\det(h_{\lambda_i-i+j})_{1\le i,j\le n}, \qquad s_{\lambda^{*}}=\det(e_{\lambda_i-i+j})_{1\le i,j\le n}. $$
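As a hedged sanity check (a sketch only, assuming SymPy; the helper names `schur_bialternant` and `jt` and the choice $\lambda=(2,1)$ in three variables are mine), one can compute $s_\lambda$ independently via the bialternant formula $s_\lambda = \det(x_j^{\lambda_i+n-i})/\det(x_j^{n-i})$ and compare it with the Jacobi-Trudi determinant:

```python
# Sketch: cross-check the Jacobi-Trudi determinant det(h_{lambda_i - i + j})
# against the bialternant (ratio of alternants) formula for s_lambda.
from itertools import combinations_with_replacement
from sympy import symbols, Matrix, Mul, expand, cancel

n = 3
x = symbols(f"x1:{n + 1}")

def h(k):
    if k < 0:
        return 0  # h_k = 0 for negative k, by convention
    return sum(Mul(*t) for t in combinations_with_replacement(x, k))

def schur_bialternant(lam):
    # s_lambda = det(x_j^(lambda_i + n - i)) / det(x_j^(n - i)), lambda padded to n parts
    lam = list(lam) + [0] * (n - len(lam))
    num = Matrix(n, n, lambda i, j: x[j] ** (lam[i] + n - 1 - i))
    den = Matrix(n, n, lambda i, j: x[j] ** (n - 1 - i))  # Vandermonde determinant
    return cancel(num.det() / den.det())

lam = (2, 1)
jt = Matrix(len(lam), len(lam), lambda i, j: h(lam[i] - i + j)).det()
assert expand(jt - schur_bialternant(lam)) == 0  # both give s_(2,1) in 3 variables
```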

To me, all of this suggests that there is some deeper relation underlying these dualities. The formulas are so strikingly beautiful that they can hardly be a mere computational coincidence. In particular, if we consider the action of $e_k$ and $h_k$ on Schur polynomials in terms of tableaux, we can visualize the dual relationship. So I want to ask: is there any deep reason behind these dual relationships? The definitions themselves seem to reveal very little, and I am puzzled by the unexpected beauty.

Bombyx mori
    All the formulae you give follow easily from the involution (using that it sends $s_\lambda$ to $s_{\lambda^\star}$ where $\lambda^\star$ is the conjugate partition to $\lambda$). There are many combinatorial proofs of the Pieri rules: see e.g. Loehr's article Abacus proofs of Schur function identities. Moving to the symmetric group, $s_\lambda$ is sent to $\chi^\lambda$ and the involution becomes multiplication by the sign character of $S_n$. For me this makes some dual identities more intuitive (especially when they involve plethysm) but it's just a change of language. – Mark Wildon May 31 '14 at 12:31
  • @MarkWildon: I see. I need some time to digest this, as I learned all these formulas only a few days ago. – Bombyx mori May 31 '14 at 12:49
  • @MarkWildon: I managed to find the paper. Give me some time to read it. – Bombyx mori May 31 '14 at 13:12

1 Answer


I think of all of the duality statements you wrote as consequences of the fact that there is a ring involution of $\Lambda$ sending $e_k$ to $h_k$, so let me give one manifestation of that. First, the $e_k$ and the $h_k$ are each sets of algebraically independent generators, so the mere existence of an automorphism given by $e_k \mapsto h_k$ or by $h_k \mapsto e_k$ is not that interesting; the important fact is that these two maps are inverses of one another. That can be derived from the relation

$\displaystyle \sum_{i+j=n} (-1)^i e_i h_j = \delta_{0,n}$.
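To see the relation in action, here is a small sketch (assuming SymPy; the number of variables and the degree bound are arbitrary small choices of mine). The identity holds exactly in any finite number of variables, since $\bigl(\sum_i e_i(-t)^i\bigr)\bigl(\sum_j h_j t^j\bigr)=\prod_i(1-x_it)\cdot\prod_i\frac{1}{1-x_it}=1$.

```python
# Sketch: verify sum_{i+j=n} (-1)^i e_i h_j = delta_{0,n} for small n,
# in a small number of variables.
from itertools import combinations, combinations_with_replacement
from sympy import symbols, Mul, expand

nvars = 4
x = symbols(f"x1:{nvars + 1}")

def e(k):
    return sum(Mul(*t) for t in combinations(x, k))  # 0 once k > nvars

def h(k):
    return sum(Mul(*t) for t in combinations_with_replacement(x, k))

for n in range(6):
    total = expand(sum((-1) ** i * e(i) * h(n - i) for i in range(n + 1)))
    assert total == (1 if n == 0 else 0)
```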

I think of this as coming from the Koszul complex: $e_i$ is the character of the $i$th exterior power functor and $h_j$ is the character of the $j$th symmetric power functor, and the Koszul complex is

$\operatorname{Sym}(V) \gets V \otimes \operatorname{Sym}(V) \gets \wedge^2(V) \otimes \operatorname{Sym}(V) \gets \cdots$

which is graded and exact in positive degrees. The Euler characteristic of the degree $n$ piece is the identity I stated. This identity can be proven more directly without the Koszul complex, but hopefully this perspective is useful for you.
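To make the Euler characteristic statement concrete at the level of dimensions (a sketch, assuming $V$ is finite dimensional of dimension $d$, so that $\dim \wedge^i V = \binom{d}{i}$ and $\dim \operatorname{Sym}^j V = \binom{d+j-1}{j}$; the function name `euler_char` is mine):

```python
# Sketch: Euler characteristic of the degree-n piece of the Koszul complex,
# computed purely from dimensions; it is 1 for n = 0 and 0 for n > 0.
from math import comb

def euler_char(d, n):
    # dim wedge^i V = C(d, i), dim Sym^j V = C(d + j - 1, j)
    return sum((-1) ** i * comb(d, i) * comb(d + n - i - 1, n - i)
               for i in range(n + 1))

for d in range(1, 6):
    for n in range(6):
        assert euler_char(d, n) == (1 if n == 0 else 0)
```

Specializing the character identity at $x_1=\cdots=x_d=1$, where $e_i$ becomes $\binom{d}{i}$ and $h_j$ becomes $\binom{d+j-1}{j}$, recovers exactly this dimension count.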

Steven Sam
  • What exactly is meant by "character of a functor"? – mlbaker Jun 09 '14 at 07:15
  • I mean: evaluate the functor on a vector space and take the trace of the induced action of a diagonalizable operator, viewed as a symmetric function of its eigenvalues. To get symmetric functions (i.e., infinitely many variables), take a vector space with a countably infinite basis. – Steven Sam Jun 09 '14 at 14:41
  • I'm familiar with the identity you state, Newton identities relating the symmetric polynomials, relation to determinants, etc., but not with Koszul complexes. Can you give me a reference at the most elementary level you are aware of that elaborates on the Koszul complex and its relation to the reciprocal identity? – Tom Copeland Jan 21 '15 at 21:47
  • Tom: It's probably best to learn the Koszul complex first and then learn about characters and piece the two together (I don't know any single reference that explains both carefully). The Koszul complex is explained in many places like Weibel's book Homological Algebra (Section 4.5) or Eisenbud's book Commutative Algebra (Chapter 17). Characters are explained in many places too -- one reference is Fulton and Harris' book Representation Theory (Chapter 6). – Steven Sam Jan 22 '15 at 04:08
  • Thanks, but I'm simply looking for how the concept of Koszulness could inform me of the combinatorics/geometry of the reciprocal identity (as in, say, OEIS A133314). These inverse relations (multiplicative, compositional, umbral compositional) always seem to contain a lot of underlying combinatorics and often geometry. – Tom Copeland Jan 23 '15 at 08:28
  • I'm not quite sure what you're asking for. It might make sense for you to post this as a new question so that you could provide some more details. – Steven Sam Jan 23 '15 at 18:51