
I have two questions on Grassmann numbers, in particular on their naive representation using "huge matrices".

Apparently, the anticommutation relations $\{ \xi_i, \xi_j \} = 0$, $\{ \xi_i, \xi^{*}_j \} = 0$ hold for Grassmann numbers, in contrast to $\{ c_i, c_j \} = 0$ but $\{ c_i, c^{\dagger}_j \} = \delta_{ij}$ for fermionic ladder operators. I suppose the difference is related to Grassmann numbers being "the classical analogue" of fermionic modes.

It is mentioned e.g. in Wikipedia that you can represent Grassmann numbers with fermionic raising (or, why not, lowering) operators using the Jordan-Wigner mapping $$ c^{\dagger}_i = \prod_{k=1}^{i-1} (-\sigma^{z}_{k}) \, \sigma^{+}_i . $$ Naively, you would represent their complex conjugates as lowering (raising) operators, and end up with the (apparently) wrong anticommutation relations. 1. Is it actually correct, then, to represent $N$ Grassmann modes using the raising operators of $2N$ fermionic modes, i.e. $2^{2 N} \times 2^{2 N}$ matrices (modes $1$ to $N$ representing $\xi_i$, and modes $N+1$ to $2N$ representing $\xi^{*}_i$)? But then Hermitian conjugation does not work. There is an inconsistency, hence something in the above is incorrect. I'm starting to be suspicious about the relation $\{ \xi_i, \xi^{*}_j \} = 0$ (especially since I cannot quickly recover a believable reference now). But if $\{ \xi_i, \xi^{*}_j \}$ were instead equal to $\delta_{ij}$, then what would be the point of Grassmann numbers in the first place, since they would literally be equal to the fermion ladder operators in every way?

2. Is there a way to numerically perform the phase space integral, of the form $$ \int \prod_i \mathrm{d} \xi^{*}_i \mathrm{d} \xi_i \, f(\xi_1,...,\xi_N,\xi^{*}_1,...,\xi^{*}_N) $$ by utilizing the matrix representation? The first hint is that the differentials behave much like the numbers. I think there are some hints in Creutz's sketch here, but the idea hasn't been fleshed out too rigorously.

  • Just curious: where does the phrase "king's way" come from? Since Physics SE is supposed to produce a searchable database of questions and answers, a better title might be "Can Grassmann numbers be represented by matrices?" It's less poetic, but it's more likely to be found by other people who are struggling with the same question. – Chiral Anomaly Aug 24 '19 at 14:14
  • Thank you for the comment! You make a fair point. I noticed that "related questions" changed as I typed the question, i.e. the search algorithm considers the whole question. Searching "representation of Grassmann numbers" finds this. [I just googled and] it's slightly misbacktranslated from Euclid's "royal road" quote, which I've heard in my native and translated back freely. (Matrix representation is the shortcut requiring in principle less trickery. Rich people can have better computers for brute-forcing things... Yeah I was going for catchy/funny for increased attention) – fermionic Aug 24 '19 at 14:36
  • Consider the case $N=1$, so we don't need subscripts. Let $\xi$ be a matrix, and $\xi^{*}$ its adjoint, such that $\xi^2=0$ and $\xi\xi^{*}+\xi^{*}\xi=0$. Multiply the second equation by $\xi^{*}\xi$ to get $(\xi^{*}\xi)^2=0$, which implies $\xi^{*}\xi=0$, which implies $\xi=0$. So a matrix representation cannot be faithful: the matrices must all be zero. The related questions https://physics.stackexchange.com/q/95259 and https://physics.stackexchange.com/q/29345 get a non-zero representation because they don't require that $\xi^{*}$ be the adjoint of $\xi$. Does this address most of the question? – Chiral Anomaly Aug 24 '19 at 14:46
  • Thank you for the helpful comment! I think that indeed answers most of question 1. If you assign independent modes for $\xi^{*}_i$ the way I suggest and introduce a formal conjugation rule which doesn't hold on the level of matrices, I wonder if something breaks down i.e. if this renders the representation useless? (I'm also still working to understand an answer to question 2.) – fermionic Aug 24 '19 at 14:54
  • If the matrices representing $\xi_j$ and $\xi_j^*$ are not required to be each other's adjoints, then a non-zero representation exists for every $N$. Whether or not this is useful may depend on the motive for using a matrix representation. It's not clear to me that a matrix representation would help evaluate integrals like the one shown in question 2, but maybe somebody else will step in and address that part. – Chiral Anomaly Aug 24 '19 at 15:42

1 Answer


Please still feel welcome to contribute.

Before moving on to question 2, let's discuss question 1.

Let us have $N$ fermionic modes. We represent $\xi_i \, \hat{=} \, c_i$ for $i=1,\dots,N$, where $c_i$ is the matrix given by the Jordan-Wigner mapping (see question), and $\xi^{*}_i \, \hat{=} \, c_{N+i}$. As pointed out by Chiral Anomaly, the representation is not faithful, in that the representation of $\xi^{*}_i$ is not the Hermitian conjugate of the representation of $\xi_i$. Various sources at least briefly mention that the $\xi^{*}_i$ need to be handled as modes independent of the $\xi_i$, which is consistent with this. One can formally define a complex-conjugation rule which maps $c_i$, $i\leq N$, to $c_{N+i}$; I don't know whether a rule like this creates mayhem, though. This also, strangely, means that you need $2^{2N} \times 2^{2 N}$ matrices, instead of the $2^{N} \times 2^{N}$ matrices that suffice for real-valued Grassmann variables and for quantum fermions! I don't think I understand the reason for this, and I want to underline that I am not claiming that this is a useful way to perform the integrals for large-scale systems.
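As a concreteness check, here is a minimal NumPy sketch of this representation (my own, not from any of the sources above; the helper name `jw_lowering` and the single-site basis conventions are my choices, and the sign of the Jordan-Wigner string does not affect the anticommutators):

```python
import numpy as np
from functools import reduce

def jw_lowering(i, n_modes):
    """Jordan-Wigner lowering operator for mode i (0-based) on n_modes modes."""
    a = np.array([[0.0, 1.0], [0.0, 0.0]])  # on-site annihilator; |empty> = e_0
    Z = np.diag([1.0, -1.0])                # sigma^z string factor
    return reduce(np.kron, [Z] * i + [a] + [np.eye(2)] * (n_modes - i - 1))

N = 2
c = [jw_lowering(i, 2 * N) for i in range(2 * N)]
xi = c[:N]    # xi_i  -> c_i        (i = 1..N)
xis = c[N:]   # xi*_i -> c_{N+i}

# Grassmann relations: every pair of represented generators anticommutes
for x in c:
    for y in c:
        assert np.allclose(x @ y + y @ x, 0.0)

# ...but the representation is not faithful to conjugation:
# the matrix representing xi*_1 is not the adjoint of the one representing xi_1
assert not np.allclose(xis[0], xi[0].conj().T)

# contrast: a genuine ladder operator obeys {c_1, c_1^dag} = 1
assert np.allclose(xi[0] @ xi[0].conj().T + xi[0].conj().T @ xi[0],
                   np.eye(2 ** (2 * N)))
```

The asserts pass for any $N$; they only restate the relations discussed above in matrix form.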

2. Creutz's sketched idea (see question) seems to work. Let the function of Grassmann variables be of the form $f(\boldsymbol{\xi},\boldsymbol{\xi^{*}}) = e^{-\sum_i \xi^{*}_i \xi_i} g(\boldsymbol{\xi},\boldsymbol{\xi^{*}})$, $\boldsymbol{\xi} = (\xi_1,...,\xi_N)$, where $e^{-\sum_i \xi^{*}_i \xi_i}$ is the normalization factor related to Bargmann coherent states (which you can consider to be part of the measure), and $g$ is a general function of Grassmann variables.

Using the above conventions, I believe the integral can be represented as \begin{align} I &= \int \prod_i \mathrm{d} \xi^{*}_i \, \mathrm{d} \xi_i \, e^{- \sum_j \xi^{*}_j \xi_j} g(\boldsymbol{\xi},\boldsymbol{\xi^{*}}) \\ &\ \hat{=}\ \langle 0 | \, (-1)^{N(N-1)/2} \, e^{- \sum_j c_{N+j} c_j} \, g(c_1,...,c_{2N}) \, c^{\dagger}_{2N} \cdots c^{\dagger}_1 | 0 \rangle. \end{align} Note that $e^{- \sum_j c_{N+j} c_j} = \prod_j e^{- c_{N+j} c_j} = \prod_j (1 - c_{N+j} c_j)$, which looks like a projector. Everything on the rhs can now be plugged into a computer.
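For what it's worth, the right-hand side can be sketched in NumPy as follows. This is my own sketch under assumed conventions (the names `jw_ops` and `grassmann_integral`, and the single-site basis choice, are mine, not Creutz's); the matrices are real, so the adjoint is just the transpose:

```python
import numpy as np
from functools import reduce

def jw_ops(n_modes):
    """Jordan-Wigner lowering operators c_1..c_{n_modes} (0-indexed list)."""
    a = np.array([[0.0, 1.0], [0.0, 0.0]])  # on-site annihilator; |empty> = e_0
    Z = np.diag([1.0, -1.0])                # sigma^z string factor
    return [reduce(np.kron, [Z] * i + [a] + [np.eye(2)] * (n_modes - i - 1))
            for i in range(n_modes)]

def grassmann_integral(g_matrix_fn, N):
    """Evaluate <0| (-1)^{N(N-1)/2} prod_j (1 - c_{N+j} c_j) g(c)
    c^dag_{2N} ... c^dag_1 |0>, with xi_i -> c_i and xi*_i -> c_{N+i}."""
    c = jw_ops(2 * N)
    dim = 2 ** (2 * N)
    vac = np.zeros(dim)
    vac[0] = 1.0
    ket = vac
    for j in range(2 * N):          # c^dag_1 acts first, c^dag_{2N} last
        ket = c[j].T @ ket          # real matrices: adjoint = transpose
    weight = np.eye(dim)            # prod_j (1 - c_{N+j} c_j); factors commute
    for j in range(N):
        weight = weight @ (np.eye(dim) - c[N + j] @ c[j])
    phase = (-1) ** (N * (N - 1) // 2)
    return phase * (vac @ weight @ g_matrix_fn(c) @ ket)

# sanity check: for g = 1 the Gaussian (Bargmann) integral should give 1
for n in (1, 2, 3):
    assert abs(grassmann_integral(lambda ops: np.eye(len(ops[0])), n) - 1.0) < 1e-12
```

Any polynomial $g$ can be passed in as a function of the matrix list, e.g. `lambda ops: ops[N] @ ops[0]` for $g = \xi^{*}_1 \xi_1$ (with my 0-based indexing).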

We should probably try to motivate the result. Let us motivate the phase factor $(-1)^{N(N-1)/2}$, whose sign depends on $N \bmod 4$, since it was glossed over by Creutz. To this end, let's look at the constant function $g=1$, and evaluate $I_{1} = \langle 0|\prod_{k=1}^{N}(1 - c_{N+k} c_k) \, c^{\dagger}_{2N} \cdots c^{\dagger}_{1} |0\rangle$. We know that the non-zero contributions come from terms where the operators on the left annihilate each mode exactly once. Thus, we can simplify $$ \langle 0 | \cdots (1 - c_{N+k} c_k) c^{\dagger}_{2N} \cdots c^{\dagger}_{1} |0\rangle \to -\langle 0 | \cdots (c_{N+k} c_k) c^{\dagger}_{2N} \cdots c^{\dagger}_{1} |0\rangle, $$ since the term proportional to $1$ in $1-c_{N+k} c_k$ vanishes: nowhere in the corresponding amplitude are modes $N+k$ and $k$ annihilated, leaving a mismatch with the fully occupied ket.

For $k=N$, we swap $c_k$ over the $N$ raising operators $c^{\dagger}_{2N},\dots,c^{\dagger}_{N+1}$, and employ $c_k c^{\dagger}_k = -c^{\dagger}_k c_k + 1$. Only the latter term survives, as can be seen by swapping $c_k$ all the way to the right to annihilate the vacuum. We then perform the same trick for $c_{N+k} = c_{2N}$, which is already adjacent to $c^{\dagger}_{2N}$. Thus far we have $$ I_{1} = \langle 0| (-1)(-1)^{N} \prod_{k=1}^{N-1}(1 - c_{N+k} c_k) \, c^{\dagger}_{2N-1} \cdots c^{\dagger}_{N+1} c^{\dagger}_{N-1} \cdots c^{\dagger}_{1} |0\rangle. $$ Notice that for $k=N-1$, $c_k$ only has to be swapped $N-1$ times. Plowing through the whole product, each $k$ contributes one minus sign and $k$ swaps, so in total we gain the phase factor $(-1)^{N}(-1)^{\sum_{l=0}^{N-1} (N - l)} = (-1)^{N + N(N+1)/2} = (-1)^{N(N-1)/2}$ (the exponents differ by $2N$), which cancels the prefactor and gives $I_1 = 1$.

This is not a proof for an arbitrary integrand, but I believe the result remains unchanged. You first reduce the contributions from the function, which gives you some phase $(-1)^{a(i,N)}$, where $a(i,N)$ is some function. You then reduce the Bargmann norm terms $k=N,\dots,N-(i-1)$ and $k=i-1,\dots,1$ as before, which gives phases that cancel the $i$-dependent part of $(-1)^{a(i,N)}$ and leave just $(-1)^{N(N-1)/2}$ as before. To be fair, I don't have a general proof; I have just tested some cases to see how it works.

Note that I have numerically tested the above expression for $I$ for $N = 1,\dots,8$, up to $5$th-order terms, and obtained the same results as with pencil and paper.