8

In a text I am using (Introduction to Quantum Mechanics by Griffiths) it is stated, without motivation, that spin angular momentum obeys the same commutation relations as orbital angular momentum (these relations, together with the ladder operators, were used to find the eigenvalue equations of orbital angular momentum). These are the spin angular momentum commutation relations: $$[\hat{S}_x, \hat{S}_y] = i \hbar \hat{S}_z,~~~[\hat{S}_y, \hat{S}_z] = i \hbar \hat{S}_x, ~~~ [\hat{S}_z, \hat{S}_x] = i \hbar \hat{S}_y$$ It then follows that spin angular momentum has the same eigenvalue equations as orbital angular momentum: $$\hat{S}^2 |s m \rangle = \hbar^2 s (s+1)|s m \rangle;~~~\hat{S}_z |s m \rangle = \hbar m |s m \rangle.$$

Further on, the text considers a system of two spin-$\frac{1}{2}$ particles (for example the electron and proton of a hydrogen atom in the ground state), where the total spin operator is defined as $$\hat{S} := \hat{S}^{(1)} + \hat{S}^{(2)}.$$ It then states that, in order to confirm that a vector is an eigenvector of this operator, we have to check that the eigenvalue equations above are satisfied. I just want to know whether the commutation relations hold for any spin operator, even for multi-particle systems, and whether that is why the same eigenvalue equations arise. If this is true, should we then also be able to apply the ladder operators in each case, in much the same way as for orbital angular momentum, to derive the eigenvectors of a given spin operator?
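For concreteness, here is a quick numerical sanity check of these relations for a single spin-$\frac{1}{2}$, where $\hat{S}_i = \frac{\hbar}{2}\sigma_i$ (my own check, not from the text):

```python
import numpy as np

hbar = 1.0  # work in units where hbar = 1

# Pauli matrices and the spin-1/2 operators S_i = (hbar/2) * sigma_i
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Sx, Sy, Sz = (hbar / 2) * sx, (hbar / 2) * sy, (hbar / 2) * sz

def comm(a, b):
    return a @ b - b @ a

# [Sx, Sy] = i*hbar*Sz and cyclic permutations
print(np.allclose(comm(Sx, Sy), 1j * hbar * Sz))  # True
print(np.allclose(comm(Sy, Sz), 1j * hbar * Sx))  # True
print(np.allclose(comm(Sz, Sx), 1j * hbar * Sy))  # True

# S^2 = s(s+1) hbar^2 * I with s = 1/2
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz
print(np.allclose(S2, 0.5 * 1.5 * hbar**2 * np.eye(2)))  # True
```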

Alex
  • You might want to take a look at Clebsch-Gordan coefficients. – Faser Nov 28 '16 at 23:33
  • I recently tried to explain on Wikipedia why the commutation relations are the same for spin, orbital, and total angular momentum: https://en.wikipedia.org/wiki/Angular_momentum_operator - the section "Angular momentum as the generator of rotations" – Steve Byrnes Dec 02 '16 at 20:34
  • Spin is some deep thing; this might help: https://books.google.co.in/books/about/The_Dirac_Equation.html?id=LANBAQAAIAAJ&redir_esc=y&hl=en You might also want to learn about Clifford algebras and Dirac operators to understand spin. –  Dec 05 '16 at 07:50

5 Answers

6

Mathematically, the orbital angular momentum operators and the spin angular momentum operators are really two sides of the same coin. In group-theoretic language, we say that these operators arise from two different representations of the rotation group $SO(3)$ (to be precise, in quantum mechanics we are interested in projective representations, because two state vectors that differ by a phase describe the same physical state; this requires a representation of the double cover of $SO(3)$, which is $SU(2)$). The group encodes information about the symmetries of the system, and a representation of the group on a particular space gives us a way to realize these symmetries as operators on our space of states.

The difference between the spin operators and the orbital angular momentum operators is really just the type of vector space they act on. However, the group has a certain structure associated with it that carries through to its representations (this is related to the structure of the Lie algebra of $SO(3)$, $\mathfrak{so}(3) \cong \mathfrak{su}(2)$, which I can elaborate on later if you would like). Hence, any representation of the group $SO(3)$ will have the same commutation relations. This includes extensions to multi-particle states. If we denote the Hilbert space of a single particle by $\mathcal{H}_1$ and the space of a second by $\mathcal{H}_2$, then the total space describing the two particles together is denoted $\mathcal{H}_1 \otimes \mathcal{H}_2$. This is nothing but a new vector space on which we may represent $SO(3)$!

Thus, when we want to talk about the spin of a two-particle system, we are just talking about a different representation of $SO(3)$. There are some subtleties involved in this procedure, because the representation on the space $\mathcal{H}_1 \otimes \mathcal{H}_2$ is not irreducible. However, the Clebsch-Gordan decomposition gives us a way of decomposing this representation into a direct sum of irreducible representations. This procedure gives the Clebsch-Gordan coefficients that arise when talking about multi-particle systems.
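If it helps to make this concrete, here is a small numerical sketch (my own illustration, in units with $\hbar=1$): the total spin components $S_i = S_i^{(1)}\otimes I + I\otimes S_i^{(2)}$ on $\mathcal{H}_1 \otimes \mathcal{H}_2$ obey exactly the same commutation relations, and the reducibility shows up in the eigenvalues of $S^2$:

```python
import numpy as np

# Single spin-1/2 operators (hbar = 1): S_i = sigma_i / 2
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2)

# Operators on H_1 (x) H_2: act on one factor, identity on the other
def on_first(op):
    return np.kron(op, I2)

def on_second(op):
    return np.kron(I2, op)

# Total spin components S_i = S_i^(1) (x) I + I (x) S_i^(2)
Sx = on_first(sx) + on_second(sx)
Sy = on_first(sy) + on_second(sy)
Sz = on_first(sz) + on_second(sz)

comm = lambda a, b: a @ b - b @ a

# Same su(2) commutation relations on the 4-dimensional product space
print(np.allclose(comm(Sx, Sy), 1j * Sz))  # True
print(np.allclose(comm(Sy, Sz), 1j * Sx))  # True
print(np.allclose(comm(Sz, Sx), 1j * Sy))  # True

# The representation is reducible: S^2 has eigenvalues s(s+1) with s = 0 and s = 1
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz
print(np.round(np.linalg.eigvalsh(S2), 6))  # [0. 2. 2. 2.]
```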

  • Spinors are not a representation of the SO(3) group, only of the Lie algebra. We consider representations of SU(2) (including the vector ones) because we are really looking for projective representations in QM. – G. Bergeron Dec 05 '16 at 21:45
  • I know, but I thought that the idea of a covering group and projective reps. would be beyond the scope of OP's question. – Jackson Burzynski Dec 05 '16 at 23:15
  • It may be so, but I was myself frustrated when learning QM by basically all the textbooks sweeping this point under the rug. They study the Lie algebra and end up with all the representations, including the projective ones. Strictly speaking, these "extra" representations should be ignored were it not for the fact that physical states are rays in the Hilbert space. This leads to people saying that spin is a fundamentally relativistic phenomenon, which is false. The same goes for defining operators on composite systems... – G. Bergeron Dec 06 '16 at 09:48
  • @G.Bergeron I learned that spin was a fundamentally relativistic phenomenon. How can I correct this mistake and learn it properly? Can you give me references? Thank you. – QuantumBrick Dec 06 '16 at 17:45
  • @QuantumBrick This is the second time I have been asked for this on SE. I know my advisor is working on lecture notes, to become a book covering those ideas, since they are not widely covered by existing material. In the meantime, I suggest you read about Galilean invariance in QM. The states in QM being vectors (really rays) of a Hilbert space implies that you are looking for the projective representations of the Galilean group. From there, the SO(3) part of this group essentially leads you to searching for representations of SU(2), which include the spin states. – G. Bergeron Dec 07 '16 at 10:18
5

Coupling two non-interacting quantum systems $\:\alpha,\beta\:$ with angular momenta $\:j_{\alpha},j_{\beta}\:$ (no matter whether orbital or spin), we arrive at the following expression for the angular momentum of the composite system $\:f\:$

\begin{equation} J_{\boldsymbol{n}}=\bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{n}}\boldsymbol{\otimes}\mathbb{I}_{\boldsymbol {\beta}}\bigr)+ \left(\mathbb{I}_{\boldsymbol {\alpha}} \boldsymbol{\otimes}J^{\boldsymbol{\beta}}_{\boldsymbol{n}}\right) \tag{A-01} \end{equation} which, to make the $\:\mathbf{n}\:$-components clearer, can be expressed as

\begin{equation} \mathbf{n}\boldsymbol{\cdot}\mathbf{J}=\Bigl[\bigl(\mathbf{n}\boldsymbol{\cdot}\mathbf{J}^{\boldsymbol{\alpha}} \bigr) \boldsymbol{\otimes}\mathbb{I}_{\boldsymbol {\beta}}+ \mathbb{I}_{\boldsymbol{\alpha}} \boldsymbol{\otimes} \bigl(\mathbf{n}\boldsymbol{\cdot}\mathbf{J}^{\boldsymbol{\beta}} \bigr)\Bigr] \tag{A-02} \end{equation} In the above equations the symbol $''\boldsymbol{\otimes}''$ is used for the tensor product of state vectors, spaces or operators. The vector $\:\mathbf{n}=\left(n_{1},n_{2},n_{3}\right) \:$ is of unit norm. The operators $\:\mathbf{J}^{\boldsymbol{\alpha}},\, J^{\boldsymbol{\alpha}}_{\boldsymbol{n}},\,\mathbb{I}_{\boldsymbol {\alpha}}\:$ act on the $(2j_{\alpha}+1)$-dimensional Hilbert space $\: \mathsf{H}_{\alpha}\:$ of system $\:\alpha\:$, and on the same footing the operators $\:\mathbf{J}^{\boldsymbol{\beta}},\, J^{\boldsymbol{\beta}}_{\boldsymbol{n}},\,\mathbb{I}_{\boldsymbol {\beta}}\:$ act on the $(2j_{\beta}+1)$-dimensional Hilbert space $\: \mathsf{H}_{\beta}\:$ of system $\:\beta\:$, the symbol $\:\mathbb{I}\:$ being used for the identity. Finally, the operators $\:\mathbf{J},\, J_{\boldsymbol{n}}\:$ act on the $(2j_{\alpha}+1)\cdot (2j_{\beta}+1)$-dimensional Hilbert space $\: \mathsf{H}_{f}=\mathsf{H}_{\alpha}\boldsymbol{\otimes}\mathsf{H}_{\beta}\:$ of the composite system $\:f\:$. Here and in what follows we work in units with $\hbar=1$.

We write equation (A-02) for the three axes of a coordinate system \begin{align} J_{\boldsymbol{1}}=\Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{1}}\boldsymbol{\otimes}\mathbb{I}_{\boldsymbol {\beta}}\Bigr)+ \left(\mathbb{I}_{\boldsymbol {\alpha}} \boldsymbol{\otimes}J^{\boldsymbol{\beta}}_{\boldsymbol{1}}\right) \tag{A-03a}\\ J_{\boldsymbol{2}}=\Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{2}}\boldsymbol{\otimes}\mathbb{I}_{\boldsymbol {\beta}}\Bigr)+ \left(\mathbb{I}_{\boldsymbol {\alpha}} \boldsymbol{\otimes}J^{\boldsymbol{\beta}}_{\boldsymbol{2}}\right) \tag{A-03b}\\ J_{\boldsymbol{3}}=\Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{3}}\boldsymbol{\otimes}\mathbb{I}_{\boldsymbol {\beta}}\Bigr)+ \left(\mathbb{I}_{\boldsymbol {\alpha}} \boldsymbol{\otimes}J^{\boldsymbol{\beta}}_{\boldsymbol{3}}\right) \tag{A-03c} \end{align} These three component equations can be expressed symbolically in one vector equation \begin{equation} \mathbf{J}=\bigl(\mathbf{J}^{\boldsymbol{\alpha}} \boldsymbol{\otimes}\mathbb{I}_{\boldsymbol {\beta}}\bigr)+\left(\mathbb{I}_{\boldsymbol{\alpha}} \boldsymbol{\otimes} \mathbf{J}^{\boldsymbol{\beta}} \right) \tag{A-04} \end{equation} Now we must check whether this so-constructed quantity $\:\mathbf{J}=\left({J}_{1},{J}_{2},{J}_{3}\right) \:$ of the composite system is a consistent angular momentum; the criterion for this is the validity of the equation \begin{equation} \mathbf{J}\boldsymbol{\times}\mathbf{J}= i \, \mathbf{J} \tag{A-05} \end{equation} or, by components, \begin{align} J_{\boldsymbol{1}}J_{\boldsymbol{2}}-J_{\boldsymbol{2}}J_{\boldsymbol{1}}= i \, J_{\boldsymbol{3}} \tag{A-06a}\\ J_{\boldsymbol{2}}J_{\boldsymbol{3}}-J_{\boldsymbol{3}}J_{\boldsymbol{2}}= i \, J_{\boldsymbol{1}} \tag{A-06b}\\ J_{\boldsymbol{3}}J_{\boldsymbol{1}}-J_{\boldsymbol{1}}J_{\boldsymbol{3}}= i \, J_{\boldsymbol{2}} \tag{A-06c} \end{align} To prove equations (A-06), let us find a general expression for $\:J_{\boldsymbol{n}}J_{\boldsymbol{k}}\:$, where $\:J_{\boldsymbol{n}},\,J_{\boldsymbol{k}}\:$ are the components of $\:\mathbf{J}\:$ parallel to the unit vectors $\:\mathbf{n}\:$ and $\:\mathbf{k}\:$ respectively.
From equation (A-01) and the following multiplication rule \begin{equation} \left(\mathrm{A}_{2} \boldsymbol{\otimes} \mathrm{B}_{2}\right)\left(\mathrm{A}_{1} \boldsymbol{\otimes} \mathrm{B}_{1}\right)= \left( \mathrm{A}_{2}\mathrm{A}_{1}\right) \boldsymbol{\otimes} \left( \mathrm{B}_{2}\mathrm{B}_{1}\right) \tag{A-07} \end{equation} we have \begin{align} J_{\boldsymbol{n}}J_{\boldsymbol{k}} & = \Bigl[\Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{n}}\boldsymbol{\otimes}\mathbb{I}_{\boldsymbol {\beta}}\Bigr)+ \Bigl(\mathbb{I}_{\boldsymbol {\alpha}} \boldsymbol{\otimes}J^{\boldsymbol{\beta}}_{\boldsymbol{n}}\Bigr)\Bigr] \left[\Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{k}}\boldsymbol{\otimes}\mathbb{I}_{\boldsymbol {\beta}}\Bigr)+ \left(\mathbb{I}_{\boldsymbol {\alpha}} \boldsymbol{\otimes}J^{\boldsymbol{\beta}}_{\boldsymbol{k}}\right)\right] \nonumber\\ & =\Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{n}}\boldsymbol{\otimes}\mathbb{I}_{\boldsymbol {\beta}}\Bigr)\Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{k}}\boldsymbol{\otimes}\mathbb{I}_{\boldsymbol {\beta}}\Bigr)+\Bigl(\mathbb{I}_{\boldsymbol {\alpha}} \boldsymbol{\otimes}J^{\boldsymbol{\beta}}_{\boldsymbol{n}}\Bigr)\left(\mathbb{I}_{\boldsymbol {\alpha}} \boldsymbol{\otimes}J^{\boldsymbol{\beta}}_{\boldsymbol{k}}\right) \nonumber\\ & + \Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{n}}\boldsymbol{\otimes}\mathbb{I}_{\boldsymbol {\beta}}\Bigr)\left(\mathbb{I}_{\boldsymbol {\alpha}} \boldsymbol{\otimes}J^{\boldsymbol{\beta}}_{\boldsymbol{k}}\right) +\Bigl(\mathbb{I}_{\boldsymbol {\alpha}} \boldsymbol{\otimes}J^{\boldsymbol{\beta}}_{\boldsymbol{n}}\Bigr)\Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{k}}\boldsymbol{\otimes}\mathbb{I}_{\boldsymbol {\beta}}\Bigr) \tag{A-08} \end{align} so \begin{equation} J_{\boldsymbol{n}}J_{\boldsymbol{k}} = \Bigl[\Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{n}}J^{\boldsymbol{\alpha}}_{\boldsymbol{k}}\Bigr)\boldsymbol{\otimes}\mathbb{I}_{\boldsymbol {\beta}}\Bigr]+\Bigl[\mathbb{I}_{\boldsymbol {\alpha}}\boldsymbol{\otimes}\Bigl( J^{\boldsymbol{\beta}}_{\boldsymbol{n}}J^{\boldsymbol{\beta}}_{\boldsymbol{k}}\Bigr)\Bigr] +\Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{n}}\boldsymbol{\otimes}J^{\boldsymbol{\beta}}_{\boldsymbol{k}}\Bigr) +\Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{k}}\boldsymbol{\otimes}J^{\boldsymbol{\beta}}_{\boldsymbol{n}}\Bigr) \tag{A-09} \end{equation} Permutation of $\:n\:$ and $\:k\:$ yields \begin{equation} J_{\boldsymbol{k}}J_{\boldsymbol{n}} = \Bigl[\Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{k}}J^{\boldsymbol{\alpha}}_{\boldsymbol{n}}\Bigr)\boldsymbol{\otimes}\mathbb{I}_{\boldsymbol {\beta}}\Bigr]+\Bigl[\mathbb{I}_{\boldsymbol {\alpha}}\boldsymbol{\otimes}\Bigl( J^{\boldsymbol{\beta}}_{\boldsymbol{k}}J^{\boldsymbol{\beta}}_{\boldsymbol{n}}\Bigr)\Bigr] +\Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{k}}\boldsymbol{\otimes}J^{\boldsymbol{\beta}}_{\boldsymbol{n}}\Bigr) +\Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{n}}\boldsymbol{\otimes}J^{\boldsymbol{\beta}}_{\boldsymbol{k}}\Bigr) \tag{A-10} \end{equation} Subtracting (A-10) from (A-09)

\begin{equation} J_{\boldsymbol{n}}J_{\boldsymbol{k}}-J_{\boldsymbol{k}}J_{\boldsymbol{n}}= \Bigl[\Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{n}}J^{\boldsymbol{\alpha}}_{\boldsymbol{k}}-J^{\boldsymbol{\alpha}}_{\boldsymbol{k}}J^{\boldsymbol{\alpha}}_{\boldsymbol{n}}\Bigr)\boldsymbol{\otimes}\mathbb{I}_{\boldsymbol {\beta}}\Bigr]+\Bigl[\mathbb{I}_{\boldsymbol {\alpha}}\boldsymbol{\otimes}\Bigl(J^{\boldsymbol{\beta}}_{\boldsymbol{n}}J^{\boldsymbol{\beta}}_{\boldsymbol{k}}- J^{\boldsymbol{\beta}}_{\boldsymbol{k}}J^{\boldsymbol{\beta}}_{\boldsymbol{n}}\Bigr)\Bigr] \tag{A-11} \end{equation} For $\:n=1\:$ and $\:k=2\:$ above equation (A-11) gives \begin{align} J_{\boldsymbol{1}}J_{\boldsymbol{2}}-J_{\boldsymbol{2}}J_{\boldsymbol{1}} & = \Bigl[\overbrace{\Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{1}}J^{\boldsymbol{\alpha}}_{\boldsymbol{2}}-J^{\boldsymbol{\alpha}}_{\boldsymbol{2}}J^{\boldsymbol{\alpha}}_{\boldsymbol{1}}\Bigr)}^{i \,J^{\boldsymbol{\alpha}}_{\boldsymbol{3}} }\boldsymbol{\otimes}\mathbb{I}_{\boldsymbol {\beta}}\Bigr]+\Bigl[\mathbb{I}_{\boldsymbol {\alpha}}\boldsymbol{\otimes} \overbrace{\Bigl(J^{\boldsymbol{\beta}}_{\boldsymbol{1}}J^{\boldsymbol{\beta}}_{\boldsymbol{2}}- J^{\boldsymbol{\beta}}_{\boldsymbol{2}}J^{\boldsymbol{\beta}}_{\boldsymbol{1}}\Bigr)}^{i \,J^{\boldsymbol{\beta}}_{\boldsymbol{3}}}\Bigr] \nonumber\\ & = \Bigl[\Bigl(i \,J^{\boldsymbol{\alpha}}_{\boldsymbol{3}}\Bigr)\boldsymbol{\otimes}\mathbb{I}_{\boldsymbol{\beta}}\Bigr] +\Bigl[\mathbb{I}_{\boldsymbol{\alpha}}\boldsymbol{\otimes}\Bigl(i\,J^{\boldsymbol{\beta}}_{\boldsymbol{3}}\Bigr)\Bigr] \nonumber\\ & = i\,\Bigl[\Bigl(J^{\boldsymbol{\alpha}}_{\boldsymbol{3}}\boldsymbol{\otimes}\mathbb{I}_{\boldsymbol{\beta}}\Bigr) +\Bigl(\mathbb{I}_{\boldsymbol{\alpha}}\boldsymbol{\otimes}J^{\boldsymbol{\beta}}_{\boldsymbol{3}}\Bigr)\Bigr] \nonumber\\ & = i \, J_{\boldsymbol{3}} \tag{A-12} \end{align} so proving (A-06a). By cyclic permutation (A-06b) and (A-06c) are proved also.

For the treatment of the angular momentum we make use of equation (A-03c), repeated here for convenience: \begin{equation} J_{\boldsymbol{3}}=\Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{3}}\boldsymbol{\otimes}\mathbb{I}_{\boldsymbol {\beta}}\Bigr)+ \left(\mathbb{I}_{\boldsymbol {\alpha}} \boldsymbol{\otimes}J^{\boldsymbol{\beta}}_{\boldsymbol{3}}\right) \tag{A-03c} \end{equation} This relation has the advantage that if the matrices representing the components $\:J^{\boldsymbol{\alpha}}_{\boldsymbol{3}}\:$ and $\:J^{\boldsymbol{\beta}}_{\boldsymbol{3}}\:$ of the component systems are diagonal, then the matrix representing the component $\:J_{\boldsymbol{3}}\:$ of the composite system is diagonal too (1). But for the full treatment of the angular momentum we also need the matrix representing the quantity $\:\mathbf{J}^{\boldsymbol{2}}=J^{\boldsymbol{2}}_{\boldsymbol{1}}+J^{\boldsymbol{2}}_{\boldsymbol{2}}+J^{\boldsymbol{2}}_{\boldsymbol{3}}\:$. We'll find an expression for $\:\mathbf{J}^{\boldsymbol{2}}\:$ convenient for the determination of its matrix, which, unlike that of $\:J_{\boldsymbol{3}}\:$, is not diagonal from the start. So, inserting in equation (A-09) the pairs of values $\:(n,k)=(1,1)\:$, $\:(n,k)=(2,2)\:$ and $\:(n,k)=(3,3)\:$ we obtain, respectively,

\begin{align} J_{\boldsymbol{1}}^{\boldsymbol{2}} = \Bigl[\bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{1}}\bigr)^{\boldsymbol{2}}\boldsymbol{\otimes}\mathbb{I}_{\boldsymbol {\beta}}\Bigr]+\Bigl[\mathbb{I}_{\boldsymbol {\alpha}}\boldsymbol{\otimes}\bigl( J^{\boldsymbol{\beta}}_{\boldsymbol{1}}\bigr)^{\boldsymbol{2}}\Bigr] +2\Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{1}}\boldsymbol{\otimes}J^{\boldsymbol{\beta}}_{\boldsymbol{1}}\Bigr) \tag{A-13a}\\ J_{\boldsymbol{2}}^{\boldsymbol{2}} = \Bigl[\bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{2}}\bigr)^{\boldsymbol{2}}\boldsymbol{\otimes}\mathbb{I}_{\boldsymbol {\beta}}\Bigr]+\Bigl[\mathbb{I}_{\boldsymbol {\alpha}}\boldsymbol{\otimes}\bigl( J^{\boldsymbol{\beta}}_{\boldsymbol{2}}\bigr)^{\boldsymbol{2}}\Bigr] +2\Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{2}}\boldsymbol{\otimes}J^{\boldsymbol{\beta}}_{\boldsymbol{2}}\Bigr) \tag{A-13b}\\ J_{\boldsymbol{3}}^{\boldsymbol{2}} = \Bigl[\bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{3}}\bigr)^{\boldsymbol{2}}\boldsymbol{\otimes}\mathbb{I}_{\boldsymbol {\beta}}\Bigr]+\Bigl[\mathbb{I}_{\boldsymbol {\alpha}}\boldsymbol{\otimes}\bigl( J^{\boldsymbol{\beta}}_{\boldsymbol{3}}\bigr)^{\boldsymbol{2}}\Bigr] +2\Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{3}}\boldsymbol{\otimes}J^{\boldsymbol{\beta}}_{\boldsymbol{3}}\Bigr) \tag{A-13c} \end{align} Having in mind that \begin{align} \bigl(\mathbf{J}^{\boldsymbol{\alpha}}\bigr)^{\boldsymbol{2}} & =\bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{1}}\bigr)^{\boldsymbol{2}} +\bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{2}}\bigr)^{\boldsymbol{2}}+\bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{3}}\bigr)^{\boldsymbol{2}} = j_{\alpha}(j_{\alpha}+1)\mathbb{I}_{\alpha} \tag{A-14}\\ \bigl( \mathbf{J}^{\boldsymbol{\beta}}\bigr)^{\boldsymbol{2}} &=\bigl( J^{\boldsymbol{\beta}}_{\boldsymbol{1}}\bigr)^{\boldsymbol{2}} +\bigl( J^{\boldsymbol{\beta}}_{\boldsymbol{2}}\bigr)^{\boldsymbol{2}}+\bigl( J^{\boldsymbol{\beta}}_{\boldsymbol{3}}\bigr)^{\boldsymbol{2}} = j_{\beta}(j_{\beta}+1) \mathbb{I}_{\beta} \tag{A-15}\\ \mathbb{I}_{\alpha} \boldsymbol{\otimes}\mathbb{I}_{\beta} & \equiv \mathbb{I}_{f}=\text{identity in } \mathsf{H}_{f}=\mathsf{H}_{\alpha}\boldsymbol{\otimes}\mathsf{H}_{\beta} \tag{A-16} \end{align} addition of equations (A-13) yields \begin{equation} \mathbf{J}^{\boldsymbol{2}} =\bigl[ j_{\alpha}(j_{\alpha}+1)+ j_{\beta}(j_{\beta}+1) \bigr] \mathbb{I}_{f} +2\sum_{q=1}^{q=3}\Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{q}}\boldsymbol{\otimes}J^{\boldsymbol{\beta}}_{\boldsymbol{q}}\Bigr) \tag{A-17} \end{equation}
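As a quick numerical cross-check of (A-17) (a sketch of mine, in units $\hbar=1$, using the standard spin-$j$ matrices in the basis ordering of (foot-01)), take $\:j_{\alpha}=\tfrac{1}{2}\:$, $\:j_{\beta}=1\:$ and verify that the eigenvalues of $\:\mathbf{J}^{\boldsymbol{2}}\:$ are $j(j+1)$ with $j=\tfrac{1}{2},\tfrac{3}{2}$:

```python
import numpy as np

def spin_matrices(j):
    """Jx, Jy, Jz for spin j, basis ordered m = j, j-1, ..., -j (units hbar = 1)."""
    dim = int(round(2 * j + 1))
    m = j - np.arange(dim)                       # diagonal of Jz, as in (foot-01)
    Jz = np.diag(m).astype(complex)
    Jp = np.zeros((dim, dim), dtype=complex)     # raising operator J+
    for k in range(1, dim):
        Jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    Jm = Jp.conj().T
    return (Jp + Jm) / 2, (Jp - Jm) / (2 * 1j), Jz

ja, jb = 0.5, 1.0
Ja, Jb = spin_matrices(ja), spin_matrices(jb)
If = np.kron(np.eye(int(2 * ja + 1)), np.eye(int(2 * jb + 1)))

# J^2 built from equation (A-17)
J2 = (ja * (ja + 1) + jb * (jb + 1)) * If \
     + 2 * sum(np.kron(Ja[q], Jb[q]) for q in range(3))

# Eigenvalues are j(j+1) with j = |ja - jb|, ..., ja + jb (here 1/2 and 3/2)
print(np.round(np.linalg.eigvalsh(J2), 6))   # [0.75 0.75 3.75 3.75 3.75 3.75]
```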


(1) More precisely : from the definition of the product of operators and given that $\:J^{\boldsymbol{\alpha}}_{\boldsymbol{3}}\:$ is represented by the $(2j_{\alpha}+1)$-square matrix \begin{equation} J^{\alpha}_{3} = \begin{bmatrix} j_{\alpha} & 0 & \cdots & 0 \\ 0 & j_{\alpha}-1 & \cdots & 0 \\ \vdots & \vdots & m_{\alpha} & \vdots \\ 0 & 0 & \cdots & -j_{\alpha} \end{bmatrix} \tag{foot-01} \end{equation} and $\:J^{\boldsymbol{\beta}}_{\boldsymbol{3}}\:$ is represented by the $(2j_{\beta}+1)$-square matrix \begin{equation} J^{\beta}_{3} = \begin{bmatrix} j_{\beta} & 0 & \cdots & 0 \\ 0 & j_{\beta}-1 & \cdots & 0 \\ \vdots & \vdots & m_{\beta} & \vdots \\ 0 & 0 & \cdots & -j_{\beta} \end{bmatrix} \tag{foot-02} \end{equation} equation (A-03c) gives that $\:J_{\boldsymbol{3}}\:$ is represented by the following $(2j_{\alpha}+1)\cdot (2j_{\beta}+1)$-square diagonal matrix \begin{equation} J_{\boldsymbol{3}}=\Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{3}}\boldsymbol{\otimes}\mathbb{I}_{\boldsymbol {\beta}}\Bigr)+ \left(\mathbb{I}_{\boldsymbol {\alpha}} \boldsymbol{\otimes}J^{\boldsymbol{\beta}}_{\boldsymbol{3}}\right)= \nonumber\\ \end{equation} \begin{equation} \begin{bmatrix} \begin{matrix} j_{\alpha}+ j_{\beta} & 0 & \cdots & 0 \\ 0 & j_{\alpha}+ j_{\beta}-1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & j_{\alpha} -j_{\beta} \end{matrix} & & & \\ & \begin{matrix} j_{\alpha}-1+ j_{\beta} & 0 & \cdots & 0 \\ 0 & j_{\alpha}-1+ j_{\beta}-1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & j_{\alpha}-1-j_{\beta} \end{matrix} & & \\ & &\ddots & \\ & & & -j_{\alpha}-j_{\beta} \end{bmatrix} \end{equation} \begin{equation} \tag{foot-03} \end{equation} Example : for $\:j_{\alpha}=\tfrac{1}{2}\:$ and $\:j_{\beta}=1\:$ \begin{equation} J^{\alpha}_{3} = \begin{bmatrix} \begin{array}{cc} +\frac{1}{2}&0\\ &\\ 0&-\frac{1}{2} \end{array} \end{bmatrix} \:,\qquad J^{\beta}_{3} = \begin{bmatrix} \begin{array}{ccc} +1&0&0\\ 0&0&0\\ 0&0&-1 \end{array} \end{bmatrix} \tag{foot-04} \end{equation}
so \begin{equation} \Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{3}}\boldsymbol{\otimes}\mathbb{I}_{\boldsymbol {\beta}}\Bigr)= \begin{bmatrix} \begin{array}{cc} +\frac{1}{2}\cdot\mathbb{I}_{\boldsymbol {\beta}}&0\cdot\mathbb{I}_{\boldsymbol {\beta}}\\ &\\ 0\cdot\mathbb{I}_{\boldsymbol {\beta}}&-\frac{1}{2}\cdot\mathbb{I}_{\boldsymbol {\beta}} \end{array} \end{bmatrix} = \begin{bmatrix} \begin{array}{cccccc} +\frac{1}{2}&0&0&0&0&0\\ 0&+\frac{1}{2}&0&0&0&0\\ 0&0&+\frac{1}{2}&0&0&0\\ 0&0&0&-\frac{1}{2}&0&0\\ 0&0&0&0&-\frac{1}{2}&0\\ 0&0&0&0&0&-\frac{1}{2} \end{array} \end{bmatrix} \tag{foot-05} \end{equation} \begin{equation} \left(\mathbb{I}_{\boldsymbol {\alpha}} \boldsymbol{\otimes}J^{\boldsymbol{\beta}}_{\boldsymbol{3}}\right)= \begin{bmatrix} \begin{array}{cc} 1\cdot J^{\boldsymbol{\beta}}_{\boldsymbol{3}}&0\cdot J^{\boldsymbol{\beta}}_{\boldsymbol{3}}\\ &\\ 0\cdot J^{\boldsymbol{\beta}}_{\boldsymbol{3}}&1\cdot J^{\boldsymbol{\beta}}_{\boldsymbol{3}} \end{array} \end{bmatrix} = \begin{bmatrix} \begin{array}{cccccc} +1&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&-1&0&0&0\\ 0&0&0&+1&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&-1 \end{array} \end{bmatrix} \tag{foot-06} \end{equation} Adding (foot-05), (foot-06) we have \begin{equation} J_{\boldsymbol{3}} =\Bigl( J^{\boldsymbol{\alpha}}_{\boldsymbol{3}}\boldsymbol{\otimes}\mathbb{I}_{\boldsymbol {\beta}}\Bigr)+ \left(\mathbb{I}_{\boldsymbol {\alpha}} \boldsymbol{\otimes}J^{\boldsymbol{\beta}}_{\boldsymbol{3}}\right)= \begin{bmatrix} \begin{array}{cccccc} +\frac{3}{2}&0&0&0&0&0\\ 0&+\frac{1}{2}&0&0&0&0\\ 0&0&-\frac{1}{2}&0&0&0\\ 0&0&0&+\frac{1}{2}&0&0\\ 0&0&0&0&-\frac{1}{2}&0\\ 0&0&0&0&0&-\frac{3}{2} \end{array} \end{bmatrix} \tag{foot-07} \end{equation}
which after rearrangement of rows and columns becomes \begin{equation} \widehat{J}_{\boldsymbol{3}} = \begin{bmatrix} \begin{array}{cccccc} +\frac{3}{2}&0&0&0&0&0\\ 0&+\frac{1}{2}&0&0&0&0\\ 0&0&-\frac{1}{2}&0&0&0\\ 0&0&0&-\frac{3}{2}&0&0\\ 0&0&0&0&+\frac{1}{2}&0\\ 0&0&0&0&0&-\frac{1}{2} \end{array} \end{bmatrix} \tag{foot-08} \end{equation} recognized later on as the direct sum of $\:j_{1}=\tfrac{1}{2}\:$ and $\:j_{2}=\tfrac{3}{2}\:$ \begin{equation} \boldsymbol{2}\boldsymbol{\otimes}\boldsymbol{3}=\boldsymbol{2}\boldsymbol{\oplus}\boldsymbol{4} \tag{foot-09} \end{equation} a special case of the more general expression of the product space as the direct sum of mutually orthogonal and invariant under SU(2) subspaces \begin{equation} (2j_{\alpha}+1)\boldsymbol{\otimes}(2j_{\beta}+1)=\bigoplus_{j=\vert j_{\beta}-j_{\alpha} \vert }^{j=\left(j_{\alpha}+j_{\beta}\right)}(2j+1) \tag{foot-10} \end{equation}
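For completeness, the footnote's example (foot-04)-(foot-07) can be reproduced in a few lines (a sketch, $\hbar=1$):

```python
import numpy as np

# Diagonal matrices J3 for j_alpha = 1/2 and j_beta = 1, as in (foot-04)
J3a = np.diag([0.5, -0.5])
J3b = np.diag([1.0, 0.0, -1.0])

# (foot-05) + (foot-06): J3 on the 6-dimensional product space
J3 = np.kron(J3a, np.eye(3)) + np.kron(np.eye(2), J3b)
print(np.diag(J3))   # [ 1.5  0.5 -0.5  0.5 -0.5 -1.5]  -> matches (foot-07)

# The m-values regroup into the multiplets j = 3/2 and j = 1/2,
# i.e. 2 (x) 3 = 2 (+) 4 as in (foot-09)
```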


For a more detailed treatment see my answers here : Total spin of two spin-1/2 particles.

Frobenius
4

Both orbital angular momentum and spin are related to rotations in three dimensions. Their commutation relations can be derived from the properties of the group of rotations alone, so they should be equal.

The group of rotations of three-dimensional space is known as $SO(3)$. Quantum states are vectors in a space $V$ on which this group has a (projective) representation. This means that for each rotation $R$ there is an $n\times n$ matrix $U(R)$ ($n$ is the dimension of $V$) such that every quantum state $\left|\psi\right>$ changes to $U(R)\left|\psi\right>$ when the system is rotated by $R$.

You may know that a rotation in two dimensions (the complex plane) is given by multiplication by $e^{-i\theta}$, where $\theta$ is the only parameter that characterizes a rotation of two-dimensional space: the angle of the rotation. In three dimensions, a rotation can be parametrized by the angle $\theta$ and a unit vector $\hat{u}$ giving the axis of rotation. This is equivalent to a single vector $\vec{u}=\theta\hat{u}$. Proceeding in the same way as in two dimensions, a rotation can be written as \begin{equation} U(\vec{u})=e^{-i\vec{u}\cdot \vec{J}} \end{equation} where now we need three objects $J_x$, $J_y$ and $J_z$ (the components of $\vec{J}$), one to multiply each component of $\vec{u}$. They must be $n\times n$ matrices, so that $U(R)$ is also such a matrix (the exponential of a matrix can be defined by its power series).
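For example (a small numerical sketch of this formula, with $\hbar=1$), in the spin-$\frac{1}{2}$ representation $\vec{J}=\vec{\sigma}/2$ the exponential can be evaluated directly; note that a rotation by $2\pi$ gives $-1$, the hallmark of a projective representation:

```python
import numpy as np
from scipy.linalg import expm

# Spin-1/2 generators J_i = sigma_i / 2 (hbar = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
J = np.array([sx, sy, sz]) / 2

def U(theta, axis):
    """Rotation by angle theta about the unit vector `axis`: U = exp(-i theta axis.J)."""
    axis = np.asarray(axis, dtype=float)
    return expm(-1j * theta * np.einsum('i,ijk->jk', axis, J))

# A rotation by 2*pi returns minus the identity on spin-1/2 ...
print(np.round(U(2 * np.pi, [0, 0, 1]), 6))
# ... and a rotation by 4*pi returns plus the identity
print(np.round(U(4 * np.pi, [0, 0, 1]), 6))
```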

Notice that the derivative, with respect to the rotation angle, of a rotating 3d vector is orthogonal both to the axis of rotation and to the vector itself, and is proportional to the vector's magnitude, so it must be $\hat{u}\times \vec{v}$. You can imagine $\vec{v}$ as a point on a sphere with radius $|\vec{v}|$, and $\hat{u}\times \vec{v}$ as an arrow starting at that point and pointing in the direction in which it moves when it is rotated.

On the other hand $\frac{d}{d\theta}U(\theta\hat{u})\big|_{\theta=0}= -i\hat{u}\cdot\vec{J}$, so, writing the vectors of the three-dimensional representation as $\vec{v}$ instead of $\left|\psi\right>$, we have the equation $\hat{u}\times \vec{v}=(-i\hat{u}\cdot\vec{J})\vec{v}$. Now we wish to compute the commutator $[J_x,J_y]$: \begin{align} [J_x,J_y]\vec{v}=J_xJ_y\vec{v}-J_yJ_x\vec{v}= -\hat{x}\times(\hat{y}\times\vec{v}) + \hat{y}\times(\hat{x}\times\vec{v}) = -(\hat{x}\times\hat{y})\times\vec{v} = -\hat{z}\times\vec{v} = iJ_z\vec{v} \end{align} where I have used the properties of the triple cross product. We have just derived one of the commutation relations: $[J_x,J_y]=iJ_z$. The others follow in the same manner.
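This calculation can be checked explicitly in the three-dimensional (vector) representation, whose generators have matrix elements $(J_k)_{lm}=-i\,\epsilon_{klm}$ (a numerical sketch, $\hbar=1$):

```python
import numpy as np

# Levi-Civita symbol epsilon_{ijk}
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

# Generators of rotations on 3D vectors: (J_k)_{lm} = -i * epsilon_{klm},
# so that (-i u.J) v = u x v, matching the derivative of a rotation above
Jx, Jy, Jz = (-1j * eps[k] for k in range(3))

comm = lambda a, b: a @ b - b @ a
print(np.allclose(comm(Jx, Jy), 1j * Jz))   # True: [Jx, Jy] = i Jz
print(np.allclose(comm(Jy, Jz), 1j * Jx))   # True
print(np.allclose(comm(Jz, Jx), 1j * Jy))   # True
```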

The spin operators $S_x$, $S_y$, $S_z$ and the orbital angular momentum operators $L_x$, $L_y$, $L_z$ are both just the generators $J_x$, $J_y$, $J_z$ of three-dimensional rotations.

The only difference between them is that the name spin (and the notation $S_i$) refers to the representations under $SO(3)$ for states of a single particle without movement in space, the "internal" rotations, whereas the name orbital angular momentum (and the symbols $L_i$) is commonly used for the representations under $SO(3)$ of states of systems that have some extension or some movement in space.

The combination of representations of rotations is again a representation of rotations, so it will still have the same generators with the same commutation relations. This is true for any combination, such as the combined spins for the electron and the proton, the combination of the orbital angular momentum and the spin of a particle or angular momenta for multi-particle systems.

You're right about derivations with ladder operators. You can use the approach that you already know in every case, because it is derived purely from the commutation relations.
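As a sketch of that last point (my own, with $\hbar=1$): build the lowering operator $S_-=S_x-iS_y$ for two spin-$\frac{1}{2}$ particles exactly as you would for a single spin, and step down from $\left|\uparrow\uparrow\right>$ through the triplet:

```python
import numpy as np

# Two spin-1/2 particles, hbar = 1; product basis |uu>, |ud>, |du>, |dd>
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[0, 0], [0, 0]], dtype=complex)  # placeholder, overwritten below
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2)

# Total spin components on the product space
Sx = np.kron(sx, I2) + np.kron(I2, sx)
Sy = np.kron(sy, I2) + np.kron(I2, sy)
Sz = np.kron(sz, I2) + np.kron(I2, sz)
S_minus = Sx - 1j * Sy          # lowering operator, built exactly as for a single spin

state = np.array([1, 0, 0, 0], dtype=complex)   # |uu> = |s=1, m=1>
for m in (1, 0, -1):
    print('m =', m, ':', np.round(state, 3))    # the three (normalized) triplet states
    state = S_minus @ state
    if np.linalg.norm(state) > 0:
        state /= np.linalg.norm(state)
print('S_- |1,-1> =', np.round(state, 3))       # the zero vector: the ladder terminates
```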

coconut
  • Spinors are not a representation of the rotation group SO(3)! – G. Bergeron Dec 05 '16 at 21:43
  • Well, they are a projective representation. I didn't want to get into too much mathematical detail (maybe going to the double cover $SU(2)$, etc.). I will do a little edit in that part, just so as not to say something incorrect – coconut Dec 05 '16 at 23:23
3

Spin operators do have the same commutation relations as the orbital angular momentum operators. The precise reason is a little bit subtle. The notions of spin and angular momentum are related to how wavefunctions transform under rotations. In fact, the angular momentum operators can be defined as the generators of rotations.

The rotations of 3D space form the group $SO(3)$. To be able to speak of rotations of a quantum state, we need to be able to act with $SO(3)$ on it in such a way that the group structure is preserved (a group homomorphism). Since states are vectors in a Hilbert space, we are really asking for a representation of the group $SO(3)$ on the Hilbert space. The different ways the rotations can act on the Hilbert space correspond to different representations of the $SO(3)$ group. From Lie theory, we know that finding representations of $SO(3)$ boils down to finding representations of the Lie algebra $\mathfrak{so}(3)$. This Lie algebra contains the infinitesimal generators of the transformations in the $SO(3)$ group.

Here, a subtlety arises. We are looking at the different ways a quantum state (a vector of the Hilbert space) can transform under rotations, but the physics is really contained in the squared amplitudes of the states. These are really the quantities we want to know how rotations act on. Effectively, this means we are looking for projective (up to a phase) representations of $SO(3)$. It so happens that these are exactly the representations of $SU(2)$. It also happens that the Lie algebra $\mathfrak{su}(2)$ of the group $SU(2)$ is isomorphic to $\mathfrak{so}(3)$. This is why a lot of textbooks just construct representations of the Lie algebra and spin appears magically. The real reason is that we are really looking for projective representations of $SO(3)$, which include the spin representations. This is also why spin appears in non-relativistic QM as well: the Galilean invariance group includes the rotation group. Spin is not a relativistic phenomenon!

In any case, since these two groups have the same Lie algebra, the commutation relations for their infinitesimal generators will be the same, enabling what is done in your textbook.
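To make this concrete, here is a small numerical check (my own sketch, $\hbar=1$): the spin-$\frac{1}{2}$ and the vector (3-dimensional) representations have different dimensions, but extracting their structure constants gives the same $\epsilon_{abc}$ in both cases:

```python
import numpy as np

# Generators of su(2) in the spin-1/2 representation: sigma_i / 2
pauli = [np.array(m, dtype=complex) / 2 for m in
         ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]

# Generators of so(3) acting on 3D vectors: (L_k)_{lm} = -i eps_{klm}
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
vector = [-1j * eps[k] for k in range(3)]

def structure_constants(gens):
    """Extract f_{abc} from [T_a, T_b] = i f_{abc} T_c (generators assumed trace-orthogonal)."""
    f = np.zeros((3, 3, 3), dtype=complex)
    norms = [np.trace(T @ T) for T in gens]
    for a in range(3):
        for b in range(3):
            C = gens[a] @ gens[b] - gens[b] @ gens[a]
            for c in range(3):
                f[a, b, c] = np.trace(C @ gens[c]) / (1j * norms[c])
    return f.real

# Both representations share the structure constants of su(2) ~ so(3): f_{abc} = eps_{abc}
print(np.allclose(structure_constants(pauli), eps))    # True
print(np.allclose(structure_constants(vector), eps))   # True
```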

Regarding your second question, it is a bit more complicated. Since the Hilbert space of a composite system is given by the tensor product of the subsystems' Hilbert spaces, we now have to consider the tensor product of representations. Per se, this is not a representation, and there is a priori no way to act on this space with a spin operator. However, we want composite systems to also be projective representations of the rotation group $SO(3)$. What we are really asking for is that $\mathfrak{su}(2)$ can be made into a bialgebra. Since it naturally is a bialgebra, there is an operation (really a homomorphism), called the coproduct, $\Delta$, that maps:

$\Delta:\mathfrak{su}(2) \longrightarrow \mathfrak{su}(2)\otimes\mathfrak{su}(2),$

allowing us to view tensor products of representations as a representation. Since this map is a homomorphism, it preserves the algebraic structure and thus the commutation relations. This is the precise reason behind the uniqueness of the commutation relations for any spin operators, and consequently why the same eigenvalue equations arise. The ladder operator approach relies only on the algebraic structure of the spin operators and, as such, is equally valid when used for composite systems.

The reason we apply the operators to one element and, separately, to the other is not just a definition or physical intuition. It is because this coproduct acts on primitive elements and is untwisted, that is, $\forall X \in \mathfrak{su}(2)$:

$\Delta(X)=X\otimes 1 + 1 \otimes X,$

leading to your $S = S_1 + S_2$. There is a deep connection between this form of the coproduct and the statistics of the particles in question. The above simple form is related to the symmetry under permutations of identical particles. In 3+1 dimensions, every composite system can be described in terms of bosons and fermions, obeying the two standard statistics. As such, in most cases we expect this form of the coproduct. However, in confined systems in 2 or 1+1 dimensions, more exotic statistics are possible. In these exotic cases the coproduct is not always of this form (for example: anyons, parabosons/parafermions), and relying on intuition alone can lead one astray when considering composite systems.
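Concretely (a small numerical sketch, not required for the argument), the untwisted coproduct above is a Lie algebra homomorphism, $[\Delta(X),\Delta(Y)]=\Delta([X,Y])$, which is exactly why the commutation relations survive on the composite space:

```python
import numpy as np

# su(2) generators in the spin-1/2 representation (hbar = 1)
gens = [np.array(m, dtype=complex) / 2 for m in
        ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
I2 = np.eye(2)

def coproduct(X):
    """Untwisted coproduct Delta(X) = X (x) 1 + 1 (x) X."""
    return np.kron(X, I2) + np.kron(I2, X)

comm = lambda a, b: a @ b - b @ a

# Delta preserves the bracket (the cross terms [X (x) 1, 1 (x) Y] vanish):
ok = all(np.allclose(comm(coproduct(X), coproduct(Y)), coproduct(comm(X, Y)))
         for X in gens for Y in gens)
print(ok)   # True
```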

A final remark concerning the different bases of a composite spin space is interesting. A basis of the composite space $H_1\otimes H_2$ can be given either by specifying eigenvectors on both tensor factors (this amounts to specifying the spin state of each particle in the composite system), or, viewing the whole as a representation, by specifying eigenvectors of the total spin (this corresponds to specifying the total spin of the composite system). The matrix elements between these two bases are called the Clebsch-Gordan coefficients and are often used when dealing with composite systems.

2

You're reading Griffiths, so I will try to stay within his vocabulary, but to answer your question I have to introduce some formalism that Griffiths perhaps doesn't.

In general, this is the story. Mathematically, we construct an $N$-particle system of noninteracting particles by taking the tensor product of the Hilbert spaces $\mathcal{H}_i$ as $i$ goes from 1 to $N$ (one Hilbert space for each particle). Each of these spaces is completely independent of the others, and the spin operators satisfy the familiar commutation relations \begin{align} [S_{k}^{(i)},S_{l}^{(j)}]=i\hbar\,\epsilon_{klm}\,S_{m}^{(i)}\,\delta_{ij} \end{align} where the $i$ multiplying $\hbar$ is the imaginary unit. In short, each particle has its own Hilbert space, on which the usual angular momentum commutation relations hold and the usual ladder operators are defined.
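As a minimal sketch of this statement (my own, for two spin-$\frac{1}{2}$ particles and $\hbar=1$): operators belonging to different particles commute, while within one particle the usual relations hold:

```python
import numpy as np

# Two spin-1/2 particles (hbar = 1); particle 1 acts as S (x) I, particle 2 as I (x) S
s = [np.array(m, dtype=complex) / 2 for m in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
I2 = np.eye(2)
S1 = [np.kron(op, I2) for op in s]     # S_k^(1)
S2 = [np.kron(I2, op) for op in s]     # S_k^(2)

comm = lambda a, b: a @ b - b @ a

# Operators of different particles commute (the delta_ij in the relation above)
print(all(np.allclose(comm(S1[k], S2[l]), 0) for k in range(3) for l in range(3)))  # True

# while within one particle the usual su(2) relations hold, e.g. [S_x^(1), S_y^(1)] = i S_z^(1)
print(np.allclose(comm(S1[0], S1[1]), 1j * S1[2]))  # True
```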