
The Wigner-Eckart theorem gives you the matrix element of a tensor transforming according to a representation of $\mathfrak{su}(2)$, when sandwiched between vectors transforming according to another (possibly different) representation of $\mathfrak{su}(2)$. Can this be generalised to other Lie algebras?

In its standard form, Wigner-Eckart allows you to compute $$ \langle j_1|T_{j_2}|j_3\rangle $$ where $j_1,j_2,j_3$ are three (possibly different) representations of $\mathfrak{su}(2)$. Can we do the same thing for matrix elements of the form $$ \langle R_1|T_{R_2}|R_3\rangle $$ where $R_1,R_2,R_3$ are three representations of three (possibly different) Lie algebras $\mathfrak g_1,\mathfrak g_2,\mathfrak g_3$?

I guess a necessary condition for the matrix element to be non-zero is that $R_1\otimes R_2\otimes R_3$ contains a copy of the trivial representation. My gut tells me that we also need $\mathfrak g_2\subseteq \mathfrak g_1=\mathfrak g_3$, for otherwise the matrix element is meaningless. Is this correct at all? Can we be any more precise than this?

AccidentalFourierTransform

1 Answer


Yes. One can define tensor operators quite generally (usually transforming by a finite-dimensional representation), and one does have $$ \langle \lambda_1;\alpha_1\vert T_{\lambda_2;\alpha_2}\vert \lambda_3;\alpha_3\rangle = \langle \lambda_1 \Vert T_{\lambda_2}\Vert \lambda_3\rangle \times \hbox{(something)} $$ with $\langle \lambda_1 \Vert T_{\lambda_2}\Vert \lambda_3\rangle$ depending only on the representation labels $\lambda_1,\lambda_2,\lambda_3$ and not on the "internal labels" $\alpha_k$. The "something" is usually proportional to a Clebsch-Gordan (CG) coefficient and, in the case of compact groups, also contains a dimensionality factor. (The interesting non-compact case of $su(1,1)$ is discussed in Ui, Haruo. "SU(1,1) quasi-spin formalism of the many-boson system in a spherical field." Annals of Physics 49.1 (1968): 69-92.)
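For concreteness, here is a minimal $su(2)$ sanity check of this factorization with sympy (a sketch only: it takes $T$ to be the rank-$1$ spherical tensor built from the spin-$1$ operators themselves, and uses the convention $\langle j'm'\vert T^k_q\vert jm\rangle=\langle j\,m;k\,q\vert j'\,m'\rangle\,\langle j'\Vert T^k\Vert j\rangle$, with any dimensionality factor absorbed into the reduced matrix element):

```python
# Minimal check that <1 m'| T^1_q |1 m> / <1 m; 1 q | 1 m'> is independent of
# m, m', q, i.e. that a single reduced matrix element governs all components.
from sympy import Matrix, sqrt, simplify
from sympy.physics.quantum.cg import CG

# Spin-1 operators in the basis |1,+1>, |1,0>, |1,-1>
Jz = Matrix([[1, 0, 0], [0, 0, 0], [0, 0, -1]])
Jp = Matrix([[0, sqrt(2), 0], [0, 0, sqrt(2)], [0, 0, 0]])  # J_+
Jm = Jp.T                                                   # J_-

# Spherical (rank k=1) tensor components built from J
T = {+1: -Jp / sqrt(2), 0: Jz, -1: Jm / sqrt(2)}

ms = [1, 0, -1]
ratios = set()
for q, Tq in T.items():
    for row, mp in enumerate(ms):      # bra <1, m'|
        for col, m in enumerate(ms):   # ket |1, m>
            me = Tq[row, col]                    # <1, m'| T^1_q |1, m>
            cg = CG(1, m, 1, q, 1, mp).doit()    # <1 m; 1 q | 1 m'>
            if cg != 0:
                ratios.add(simplify(me / cg))

print(ratios)  # a single value (here sqrt(2)): the reduced matrix element
```

The cases with vanishing CG coefficient are skipped; the corresponding matrix elements vanish too, which is the selection-rule content of the theorem.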

Generators (as tensors) have $\lambda_1=\lambda_3$ by definition (since they cannot change representation labels), but in general you do not need $\lambda_1=\lambda_3$. For instance, $x+iy$ is a component of an $L=1$ tensor for $su(2)$ and can certainly change $j$ between initial and final states.

In $su(2)$ all irreps are self-conjugate; more generally, the condition is that $\lambda_1^*\otimes\lambda_2\otimes\lambda_3$ contain the trivial representation.
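To see why the conjugate matters, a quick worked example using the standard $su(3)$ Clebsch-Gordan series: for an octet tensor between triplet states, $$ \mathbf 1\subset\bar{\mathbf 3}\otimes\mathbf 8\otimes\mathbf 3 \quad\Longleftrightarrow\quad \mathbf 3\subset\mathbf 8\otimes\mathbf 3=\mathbf{15}\oplus\bar{\mathbf 6}\oplus\mathbf 3\, , $$ so $\langle\mathbf 3|T_{\mathbf 8}|\mathbf 3\rangle$ is not forced to vanish, whereas $\mathbf 3\otimes\mathbf 8\otimes\mathbf 3$ contains no singlet at all.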

There is a lot of literature on tensor operators for $SU(n)$, as this kind of technique was used (and is still in use) quite a bit in nuclear physics, particularly in the context of $SU(3)$ models and the so-called IBM (interacting boson model) of Arima and Iachello. More generally, L. C. Biedenharn spent part of his career studying this, in particular the so-called shift tensors that help with the construction of CG coefficients. A representative paper is this one.

There are also tensor operators and formulations of the Wigner-Eckart theorem for finite groups.

You might want to check these:

  1. Agrawala, Vishnu K. "Wigner–Eckart theorem for an arbitrary group or Lie algebra." Journal of Mathematical Physics 21.7 (1980): 1562-1565.
  2. Jeevanjee, Nadir. An introduction to tensors and group theory for physicists, Birkhäuser, 2016, Sec. 6.2.
  3. Rowe, D. J., and J. Repka. "Induced shift tensors in vector coherent state theory." Journal of Mathematical Physics 36.4 (1995): 2008-2029.

Edited to add:

For the reduced matrix element $\langle \lambda_1\Vert T_{\lambda_2}\Vert \lambda_3\rangle$ to be non-zero, not much can be said beyond the requirement that $\lambda_1$ be contained in the decomposition of $\lambda_2\otimes\lambda_3$. Even when this is the case, there is no guarantee that $\langle \lambda_1\Vert T_{\lambda_2}\Vert \lambda_3\rangle\ne 0$, as this may depend on the specific tensor and also on how the $\lambda_k$'s are constructed.

In general, the tensor product decomposes as $$ \lambda_2\otimes\lambda_3=\sum_k \gamma_k\lambda_k\, , $$ with $\gamma_k$ the number of times $\lambda_k$ occurs in the decomposition of $\lambda_2\otimes\lambda_3$.

In $SU(2)$, $\gamma_k$ is always either $0$ or $1$, but this is not the case in general. Even tensoring the adjoint of $SU(3)$ with itself - i.e. $(1,1)\otimes (1,1)$ or $\boldsymbol{8}\otimes\boldsymbol{8}$ depending on your notation - produces two copies of $(1,1)$ in the decomposition, as can be checked in a variety of ways, such as Young tableaux.
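As a quick piece of bookkeeping, here is a tiny sketch that takes this standard decomposition, $\boldsymbol 8\otimes\boldsymbol 8=\boldsymbol{27}\oplus\boldsymbol{10}\oplus\overline{\boldsymbol{10}}\oplus\boldsymbol 8\oplus\boldsymbol 8\oplus\boldsymbol 1$, as given and only verifies the dimension count with the Weyl dimension formula for $SU(3)$; it does not derive the decomposition itself:

```python
# Check the dimension bookkeeping of
#   (1,1) x (1,1) = (2,2) + (3,0) + (0,3) + 2*(1,1) + (0,0),
# i.e. 8 x 8 = 27 + 10 + 10bar + 8 + 8 + 1, with the SU(3) dimension formula.

def dim_su3(p, q):
    """Dimension of the SU(3) irrep with Dynkin labels (p, q)."""
    return (p + 1) * (q + 1) * (p + q + 2) // 2

# {Dynkin labels: multiplicity} in the decomposition of (1,1) x (1,1)
decomposition = {(2, 2): 1, (3, 0): 1, (0, 3): 1, (1, 1): 2, (0, 0): 1}

total = sum(mult * dim_su3(*labels) for labels, mult in decomposition.items())
assert total == dim_su3(1, 1) ** 2 == 64

print("multiplicity of the adjoint (1,1) in 8 x 8:", decomposition[(1, 1)])  # -> 2
```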

The problem is that there is no easy algorithm to construct basis elements for the multiple copies of $\lambda_k$, which means you are up the creek without a paddle when it comes to saying anything about zeros of the reduced matrix elements, except on a case-by-case basis. After all, the multiple copies of $\lambda_k$ are mathematically equivalent, so it is perfectly possible to take (the same global) linear combinations of basis elements in two (or more) copies of $\lambda_k$ as another legitimate basis. (This is quite analogous to degenerate perturbation theory, where no specific linear combinations of basis states are singled out in the degenerate subspace.) If $\rho$ distinguishes the occurrences of $\lambda_k$, there might be choices of bases in which more zeros appear in $\langle \lambda_1;\rho_1\Vert T_{\lambda_2;\rho_2}\Vert \lambda_3;\rho_3\rangle$ for one or more specific tensors, but that is really case by case.

For completeness, I can add that the better-known examples of procedures for constructing multiple copies of irreps come from $su(3)$. Of course this only directly addresses the construction of basis states, not the computation of reduced matrix elements, but it relies heavily on the WE theorem for "special" shift tensors.

The most famous algorithm is

  1. Draayer, J. P., and Yoshimi Akiyama. "Wigner and Racah coefficients for SU(3)." Journal of Mathematical Physics 14.12 (1973): 1904-1912.

and the follow up

  1. Bahri, C., and J. P. Draayer. "SU(3) reduced matrix element package." Computer Physics Communications 83.1 (1994): 59-94. This is completely numerical.

Some early analytical work can be found in

  1. Hecht, K. T. "SU3 recoupling and fractional parentage in the 2s-1d shell." Nuclear Physics 62.1 (1965): 1-36; it illustrates the complexity of the task.

The most elegant procedure uses vector coherent state theory:

  1. Rowe, D. J., and J. Repka. "An algebraic algorithm for calculating Clebsch–Gordan coefficients; application to SU (2) and SU (3)." Journal of Mathematical Physics 38.8 (1997): 4363-4388.

and was implemented numerically in

  1. Bahri, C., D. J. Rowe, and J. P. Draayer. "Programs for generating Clebsch–Gordan coefficients of SU(3) in SU(2) and SO(3) bases." Computer Physics Communications 159.2 (2004): 121-143.

This is also largely numerical, as it requires the diagonalization of some matrices when repeated irreps occur in the decomposition, but some sense can be made of the construction of states in repeated irreps in the large-representation limit.

ZeroTheHero
  • I just realized I may have misinterpreted your question. You really mean different algebras? How would you deal with the tensor product of irreps from three different algebras? – ZeroTheHero Apr 20 '18 at 15:56
  • Nice. Do you by any chance know how this problem is called in the mathematical literature? Do they use "Wigner-Eckart" to refer to arbitrary Lie groups? or only for $\mathrm{SU}(2)$? – AccidentalFourierTransform Apr 20 '18 at 15:56
  • @AccidentalFourierTransform They don't usually use "Wigner-Eckart" theorem. I know who to ask and I'll track it down. – ZeroTheHero Apr 20 '18 at 15:57
  • (Two comments up) I guess that's partially the reason I thought that the question only makes sense for $\mathfrak g_2\subseteq \mathfrak g_1=\mathfrak g_3$. In such case, you are tensoring three representations of, say, $\mathfrak g_1$. – AccidentalFourierTransform Apr 20 '18 at 15:58
  • @AccidentalFourierTransform I asked a colleague full prof. in Math with specialization in Lie representation theory. Question: "What do people in math call the "Wigner Eckart theorem" in the math literature? Is there a mathematically-oriented review somewhere?". Answer: "I don't think it really has a name in math. People think of it as just a version of Schur's Lemma, which of course it is. I am not aware of a suitable review." – ZeroTheHero Apr 22 '18 at 23:07
  • Thanks! I read the first article you mentioned in the comment section, and it is quite nice; and, indeed, it's all about Schur's Lemma. Really cool, useful stuff. Perhaps you could add the articles you mentioned in the answer itself, just for reference. Cheers! – AccidentalFourierTransform Apr 22 '18 at 23:26
  • Excellent answer (have a bounty!). Can you comment on what are necessary and sufficient conditions for the matrix element to be nonzero? – Emilio Pisanty Apr 23 '18 at 10:06
  • @EmilioPisanty added some material to partially address your comment. – ZeroTheHero Apr 23 '18 at 13:14