Is there a way in which one can use the BCH relation to find the equivalent angle and the axis for two rotations? I am aware that one can do it in a precise way using Euler Angles but I was wondering whether we can use just the algebra of the rotation group to perform the same computation?
-
If you like this question you may also enjoy reading this answer. – Qmechanic Sep 04 '13 at 14:27
-
@QMechanic: I liked the question - it is short, concise and to the point - but I didn't much like the linked answer - which was long and rambling and appears to be everything that someone knows about Lie Theory. – Mozibur Ullah May 11 '22 at 08:06
2 Answers
The Baker-Campbell-Hausdorff (BCH) formula for the 3-dimensional rotations can indeed be summed up. Here we will just state the result in the notation of Ref. 1.
Three-dimensional rotations are described by the Lie group $SO(3)$. The corresponding Lie algebra $so(3)$ is $$ \begin{align} [L_j, L_k] ~=~& i\sum_{\ell=1}^3\epsilon_{jk\ell} L_{\ell}, \cr j,k,\ell~\in~& \{1,2,3\},\cr \epsilon_{123}~=~&1, \cr i^2~=~&-1.\end{align}\tag{1} $$ In the adjoint representation, the three Lie algebra generators $iL_{\ell}\in{\rm Mat}_{3\times 3}(\mathbb{R})$, $\ell\in\{1,2,3\}$, are $3\times 3$ real antisymmetric matrices, $$ \begin{align}i(L_j)_{k\ell} ~=~& \epsilon_{jk\ell} ,\cr j,k,\ell~\in~& \{1,2,3\}.\end{align} \tag{2} $$
A rotation matrix $$ R(\vec{\alpha})~\in~ SO(3)~ \subseteq ~{\rm Mat}_{3\times 3}(\mathbb{R}) \tag{3}$$ can be specified by a rotation axis and a rotation angle. Here we will use a 3-vector $$ \vec{\alpha}~=~\alpha \vec{n}_{\alpha}~\in~ \mathbb{R}^3, \tag{4}$$ where $\vec{n}_{\alpha}\in\mathbb{R}^3$ is a unit vector parallel to the rotation axis, $\vec{n}_{\alpha} \cdot \vec{n}_{\alpha}=1$; and $\alpha\in \mathbb{R}$ (without an arrow on top) is the angle of rotation.
The formula for the rotation matrix in terms of $\vec{\alpha}$ reads $$\begin{align} R(\vec{\alpha}) ~=~&e^{i \vec{\alpha}\cdot \vec{L}}\cr ~=~& {\bf 1}_{3\times 3} - (1-\cos\alpha) (\vec{n}_{\alpha}\cdot \vec{L})^2 + i\sin\alpha ~\vec{n}_{\alpha}\cdot \vec{L}.\end{align} \tag{5}$$
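Eq. (5) is straightforward to verify numerically against the matrix exponential. The sketch below is illustrative only (`numpy`/`scipy`, with the adjoint-representation generators of eq. (2); the helper name `R` is mine):

```python
import numpy as np
from scipy.linalg import expm

# Levi-Civita symbol eps_{jkl}; the adjoint generators of eq. (2) are
# (iL_j)_{kl} = eps_{jkl}, real and antisymmetric.
eps = np.zeros((3, 3, 3))
for j, k, l in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[j, k, l], eps[j, l, k] = 1.0, -1.0
iL = eps  # iL[j] is the 3x3 matrix iL_j

def R(alpha_vec):
    """Closed form of eq. (5); no infinite series needed."""
    alpha = np.linalg.norm(alpha_vec)
    N = np.einsum('j,jkl->kl', alpha_vec / alpha, iL)  # n . (iL)
    return np.eye(3) + np.sin(alpha) * N + (1 - np.cos(alpha)) * (N @ N)

alpha_vec = np.array([0.3, -0.4, 0.5])
R_closed = R(alpha_vec)
R_exp = expm(np.einsum('j,jkl->kl', alpha_vec, iL))  # e^{i alpha . L}
assert np.allclose(R_closed, R_exp)
assert np.allclose(R_closed @ R_closed.T, np.eye(3))  # orthogonality
```

The closed form works because the unit-axis generator $N$ satisfies $N^3 = -N$, which collapses the exponential series into the sine and cosine terms of eq. (5).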
The composition of two rotations is again a rotation $$ R(\vec{\gamma})~=~R(\vec{\alpha})R(\vec{\beta}).\tag{6}$$ If we introduce the shorthand notation $$ c_{\alpha} ~:=~\cos\frac{\alpha}{2} ~\in~ \mathbb{R},\tag{7} $$ $$\vec{s}_{\alpha}~:=~\vec{n}_{\alpha} \sin\frac{\alpha}{2} ~\in~ \mathbb{R}^3,\tag{8} $$ $$ \vec{t}_{\alpha}~:=~\vec{n}_{\alpha} \tan\frac{\alpha}{2} ~\in~ \mathbb{R}^3,\tag{9} $$ the "addition formula" for the corresponding $3$-vectors can be neatly written as $$ \vec{t}_{\gamma} ~=~\frac{\vec{t}_{\alpha}+\vec{t}_{\beta}-\vec{t}_{\alpha}\times\vec{t}_{\beta} }{1-\vec{t}_{\alpha}\cdot \vec{t}_{\beta}}.\tag{10} $$
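Eq. (10) can likewise be checked numerically, staying entirely inside the conventions of eqs. (2)-(9). A sketch in plain `numpy` (the helper names `t` and `t_from_matrix` are mine): build the two rotation matrices from eq. (5), read $\vec{n}\sin\alpha$ and $\cos\alpha$ off the composed matrix, and compare with the "addition formula":

```python
import numpy as np

# Conventions of eqs. (2)-(5): (iL_j)_{kl} = eps_{jkl}.
eps = np.zeros((3, 3, 3))
for j, k, l in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[j, k, l], eps[j, l, k] = 1.0, -1.0

def R(v):
    """Rotation matrix of eq. (5) for the 3-vector v = alpha * n."""
    alpha = np.linalg.norm(v)
    N = np.einsum('j,jkl->kl', v / alpha, eps)
    return np.eye(3) + np.sin(alpha) * N + (1 - np.cos(alpha)) * (N @ N)

def t(v):
    """Gibbs vector t = n tan(alpha/2) of eq. (9)."""
    alpha = np.linalg.norm(v)
    return (v / alpha) * np.tan(alpha / 2)

def t_from_matrix(Rm):
    """Read n sin(alpha) and cos(alpha) off eq. (5), then form n tan(alpha/2)."""
    A = (Rm - Rm.T) / 2                        # = sin(alpha) n.(iL)
    s = np.array([A[1, 2], A[2, 0], A[0, 1]])  # = n sin(alpha)
    c = (np.trace(Rm) - 1) / 2                 # = cos(alpha)
    return s / (1 + c)                         # half-angle identity

a = np.array([0.3, -0.2, 0.5])
b = np.array([-0.1, 0.4, 0.2])
ta, tb = t(a), t(b)
t_gamma = (ta + tb - np.cross(ta, tb)) / (1 - ta @ tb)  # eq. (10)
assert np.allclose(t_gamma, t_from_matrix(R(a) @ R(b)))
```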
The derivation of eq. (10) simplifies if one uses the fact that $SU(2)\cong U(1,\mathbb{H})$ is the double cover of $SO(3)$. An $SU(2)$-matrix $$ X(\vec{\alpha})~\in~ SU(2)~ \subseteq ~{\rm Mat}_{2\times 2}(\mathbb{C})\tag{11}$$ can be written in terms of the Pauli matrices as $$\begin{align} X(\vec{\alpha}) ~=~&e^{i \vec{\alpha}\cdot \vec{\sigma}/2}\cr ~=~& c_{\alpha}{\bf 1}_{2\times 2} + i\vec{s}_{\alpha}\cdot \vec{\sigma}.\end{align} \tag{12}$$ The composition of two $SU(2)$-matrices is governed by the same BCH-summed group law $$ X(\vec{\gamma})~=~X(\vec{\alpha})X(\vec{\beta}).\tag{13}$$
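The same check on the $SU(2)$ side, again a sketch in `numpy` (the helpers `X` and `t` are mine): build $X(\vec{\alpha})$ from eq. (12), multiply per eq. (13), and extract $c_{\gamma}$ and $\vec{s}_{\gamma}$ via Pauli-matrix traces, $c_{\gamma}=\tfrac12{\rm tr}\,X$ and $\vec{s}_{\gamma}=\tfrac{1}{2i}{\rm tr}(X\vec{\sigma})$:

```python
import numpy as np

sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])  # Pauli matrices

def X(v):
    """X = cos(a/2) 1 + i sin(a/2) n.sigma, eq. (12), for v = alpha * n."""
    a = np.linalg.norm(v)
    return (np.cos(a / 2) * np.eye(2)
            + 1j * np.sin(a / 2) * np.einsum('j,jkl->kl', v / a, sigma))

def t(v):
    """Gibbs vector t = n tan(alpha/2), eq. (9)."""
    a = np.linalg.norm(v)
    return (v / a) * np.tan(a / 2)

a_vec = np.array([0.3, -0.2, 0.5])
b_vec = np.array([-0.1, 0.4, 0.2])
Xg = X(a_vec) @ X(b_vec)                       # eq. (13)

c_g = np.real(np.trace(Xg)) / 2                # cos(gamma/2), eq. (7)
s_g = np.real(np.array([np.trace(Xg @ sigma[k]) for k in range(3)]) / 2j)
# s_g = n sin(gamma/2), eq. (8); t_gamma = s_g / c_g satisfies eq. (10):
ta, tb = t(a_vec), t(b_vec)
assert np.allclose(s_g / c_g, (ta + tb - np.cross(ta, tb)) / (1 - ta @ tb))
```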
References:
G. 't Hooft, Introduction to Lie Groups in Physics, lecture notes, chapter 3. The pdf file is available here.
S. Weigert, J. Phys. A30 (1997) 8739, arXiv:quant-ph/9710024.
K. Engø, On the BCH-formula in so(3), BIT Num. Math. 41 (2001) 629. (Hat tip: WetSavannaAnimal aka Rod Vance.)

- 201,751
-
J. Willard Gibbs (1884), Elements of Vector Analysis, New Haven, p. 67, and, in our days, in Wikipedia, of course: [Rodrigues parameters and Gibbs representation](http://en.wikipedia.org/wiki/Rotation_formalisms_in_three_dimensions#Rodrigues_parameters_and_Gibbs_representation) – Cosmas Zachos Feb 08 '16 at 13:25
-
Notes for later: To generalize BCH from $SU(2)$ to $SL(2,\mathbb{C})$ define $\vec{\alpha}=\alpha \vec{n}_{\alpha}\in\mathbb{C}^3$ where $\vec{n}_{\alpha}\in\mathbb{C}^3$ has bilinear inner product $=1$ and $\alpha\in\mathbb{C}$. Then eqs. (10), (12) & (13) are still valid. – Qmechanic May 10 '22 at 21:39
-
We can remove $i/2$ and go to hyperbolic functions: $$ c_{\alpha} ~:=~\cosh\alpha, \quad \vec{s}_{\alpha}~:=~\vec{n}_{\alpha} \sinh\alpha, \quad \vec{t}_{\alpha}~:=~\vec{n}_{\alpha} \tanh\alpha,\tag{7'+8'+9'}$$ $$\vec{t}_{\gamma} ~=~\frac{\vec{t}_{\alpha}+\vec{t}_{\beta}+i\vec{t}_{\alpha}\times\vec{t}_{\beta} }{1+\vec{t}_{\alpha}\cdot \vec{t}_{\beta}},\tag{10'} $$ $$ X(\vec{\alpha}) ~=~e^{\vec{\alpha}\cdot \vec{\sigma}} ~=~ c_{\alpha}{\bf 1}_{2\times 2} + \vec{s}_{\alpha}\cdot \vec{\sigma}. \tag{12'}$$ Fails for null directions. – Qmechanic May 10 '22 at 23:05
As $\mathrm{SO}(3)$ is a compact connected group, $\exp(\mathsf{L}(\mathrm{SO}(3))) = \mathrm{SO}(3)$, and hence this should – in theory – work. Let us work in the fundamental representation of $\mathrm{SO}(3)$, that is, orthogonal $3\times 3$ matrices with unit determinant.
Assume you have a rotation $B$ acting first and a second rotation $A$; the resulting rotation is then given by $AB \equiv C \in \mathrm{SO}(3)$. Furthermore, we can express $A$, $B$ and $C$ as $\exp(a)$, $\exp(b)$ and $\exp(c)$ for $a,b,c \in \mathsf{L}(\mathrm{SO}(3))$. We then have
$$ \exp(a) \exp(b) = AB = C = \exp(c) = \exp\left(a + b + \frac{1}{2}[a,b] + \frac{1}{12} [ a, [a,b]] - \frac{1}{12}[b,[a,b]]+ \ldots\right)\quad.$$
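For small angles the truncated series can be checked numerically. A sketch (using `scipy`; `Lx`, `Ly` and `comm` are my names): compare the series through third order against the exact matrix logarithm of $\exp(a)\exp(b)$.

```python
import numpy as np
from scipy.linalg import expm, logm

def comm(x, y):
    return x @ y - y @ x

# Generators of L(SO(3)) in the fundamental representation:
Lx = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
Ly = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])

theta, phi = 0.05, 0.05
a, b = theta * Lx, phi * Ly

# BCH series truncated after the third-order terms:
c = (a + b + comm(a, b) / 2
     + comm(a, comm(a, b)) / 12 - comm(b, comm(a, b)) / 12)

c_exact = np.real(logm(expm(a) @ expm(b)))
assert np.allclose(c, c_exact, atol=1e-7)  # omitted terms are of higher order
```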
Now, the problem with verifying this by an example is that these commutators are rather ugly. I shall do two examples:
First example: Two rotations about the $x$ axis
Take $A$ to rotate about $(1,0,0)$ by $\theta$ and $B$ to rotate about the same axis by $\phi$. We then have
$$ A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos(\theta) & -\sin(\theta) \\ 0 & \sin(\theta) & \cos(\theta) \end{pmatrix}$$
and similarly for $B$. The associated $a$ is then simply:
$$ a = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -\theta \\ 0 & \theta & 0 \end{pmatrix}\quad $$
and again similarly for $b$ with $\theta \to \phi$. You can check easily that $\exp(a)$ gives you indeed $A$. Now since $a$ and $b$ commute, we have $[a,b] = 0$ and hence $c = a + b$ - which is
$$ c = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -\theta -\phi \\ 0 & \theta + \phi & 0 \end{pmatrix}\quad.$$
This shows, perhaps more transparently than the product $AB$ itself, that two rotations about the same axis are equivalent to a single rotation by the sum of the angles. You can again check that $\exp(c)$ gives you $C$.
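A quick numerical confirmation of this commuting case (a `numpy`/`scipy` sketch; `Lx` is my name for the $x$-axis generator):

```python
import numpy as np
from scipy.linalg import expm

# Generator of rotations about the x axis, matching a/theta above:
Lx = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])

theta, phi = 0.7, 1.1
A, B = expm(theta * Lx), expm(phi * Lx)
# [a, b] = 0, so c = a + b and the angles simply add:
assert np.allclose(A @ B, expm((theta + phi) * Lx))
```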
Second Example: One rotation about $y$, a second about $x$.
This one is more difficult, as we will have to calculate annoying commutators. The result presented here will hence only be approximate, not exact.
Take
$$ A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos(\theta) & -\sin(\theta) \\ 0 & \sin(\theta) & \cos(\theta) \end{pmatrix} \qquad B = \begin{pmatrix} \cos(\phi) & 0 & \sin(\phi) \\ 0 & 1 & 0 \\ -\sin(\phi) & 0 & \cos(\phi) \end{pmatrix} \quad .$$
You can calculate that
$$ AB = C = \begin{pmatrix} \cos(\phi) & 0 & \sin(\phi) \\ \sin(\theta) \sin(\phi) & \cos(\theta) & -\sin(\theta) \cos(\phi) \\ -\sin(\phi)\cos(\theta) & \sin(\theta) & \cos(\phi)\cos(\theta) \end{pmatrix} \quad .$$
Similarly to the above, we have
$$ a = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -\theta \\ 0 & \theta & 0 \end{pmatrix} \qquad b = \begin{pmatrix} 0 & 0 & \phi \\ 0 & 0 & 0 \\ -\phi & 0 & 0 \end{pmatrix} \quad.$$
Now the tricky part is to calculate
$$ c = a + b + \frac{1}{2} [ a,b] + \frac{1}{12} [ a, [a,b]] - \frac{1}{12} [b,[a,b]] + \ldots $$
to such a precision that $\exp(c)$ gives remotely sensible results. This requires some care; keeping everything up to third order in the angles, here is what I got:
$$ \frac{1}{2} [ a,b] = \frac{1}{2} \begin{pmatrix} 0 & -\theta\phi & 0 \\ \theta\phi & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}\quad,$$
which is just $\theta\phi/2$ times the Lie algebra basis element generating rotations about the $z$ axis; unfortunately it is not linear in either $a$ or $b$, so the series does not collapse (something linear in either $a$ or $b$ would have been nice…). I then went on to calculate $[a,[a,b]]$ and $[b,[a,b]]$ and arrived at
$$ c \approx \begin{pmatrix} 0 & -\frac{1}{2}\theta\phi & \phi - \frac{1}{12} \theta^2 \phi \\ \frac{1}{2} \theta \phi & 0 & -\theta + \frac{1}{12} \theta\phi^2 \\ -\phi + \frac{1}{12} \theta^2 \phi & \theta - \frac{1}{12} \theta \phi^2 & 0 \end{pmatrix} \quad . $$
The nice thing here is that this is still an antisymmetric matrix and hence an element of $\mathsf{L}(\mathrm{SO}(3))$. In order to now compare this to anything, we have to approximate $C$ as well. Recall the expression from above. As a first approximation, I will set $\cos(x) \approx 1 - \frac{1}{2}x^2$, $\sin(x) \approx x - \frac{1}{6} x^3$. I then get
$$ C \approx \begin{pmatrix} 1 - \frac{\phi^2}{2} & 0 & \phi - \frac{\phi^3}{6} \\ \left(\theta - \frac{\theta^3}{6}\right) \left(\phi - \frac{\phi^3}{6}\right) & 1 - \frac{\theta^2}{2} & -\left(1-\frac{\phi^2}{2}\right)\left(\theta - \frac{\theta^3}{6}\right) \\ -\left(1-\frac{\theta^2}{2}\right)\left(\phi-\frac{\phi^3}{6}\right) & \theta - \frac{\theta^3}{6} & \left(1 - \frac{\theta^2}{2}\right)\left(1 - \frac{\phi^2}{2}\right) \end{pmatrix} \quad ,$$
expanding out the brackets and throwing away anything of order four or higher, I arrive at
$$ C \approx \begin{pmatrix} 1 - \frac{\phi^2}{2} & 0 & \phi - \frac{\phi^3}{6} \\ \theta\phi & 1 - \frac{\theta^2}{2} & -\theta + \frac{\theta^3}{6} + \frac{\theta\phi^2}{2} \\ -\phi + \frac{\phi^3}{6} + \frac{\theta^2\phi}{2} & \theta - \frac{\theta^3}{6} & 1 - \frac{\theta^2}{2} - \frac{\phi^2}{2} \end{pmatrix}\quad.$$
This expression should be roughly equal to
$$ 1_3 + c + \frac{1}{2} c^2 + \frac{1}{6} c^3 \quad,$$
which is the expansion of $\exp(c)$ to the relevant order. After again throwing away everything of order four or higher, we arrive at
$$ \exp(c) \approx \begin{pmatrix} 1-\frac{\phi^2}{2} & 0 & \phi - \frac{\phi^3}{6} \\ \theta\phi & 1 - \frac{\theta^2}{2} & -\theta+\frac{\theta^3}{6} +\frac{\theta\phi^2}{2}\\ -\phi + \frac{\phi^3}{6} + \frac{\theta^2 \phi}{2} & \theta - \frac{\theta^3}{6} & 1 - \frac{\theta^2}{2} - \frac{\phi^2}{2} \end{pmatrix} \quad .$$
Carried through consistently, the two third-order expansions agree entry by entry, as they must: the BCH series truncated at a given order reproduces $C$ up to that order, and discrepancies only appear at fourth order and beyond.
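The third-order bookkeeping can be delegated to a computer algebra system. A sketch with `sympy` (the helper `drop4`, my name, truncates at combined order four) confirms that the truncated BCH result reproduces $AB$ through third order:

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
comm = lambda x, y: x * y - y * x

# a, b and the BCH series for c, truncated after third order:
a = sp.Matrix([[0, 0, 0], [0, 0, -th], [0, th, 0]])
b = sp.Matrix([[0, 0, ph], [0, 0, 0], [-ph, 0, 0]])
c = a + b + comm(a, b) / 2 + comm(a, comm(a, b)) / 12 - comm(b, comm(a, b)) / 12

def drop4(M):
    """Discard all monomials of combined order >= 4 in theta, phi."""
    out = sp.zeros(3, 3)
    for i in range(3):
        for j in range(3):
            poly = sp.Poly(sp.expand(M[i, j]), th, ph)
            out[i, j] = sum(cf * th**m * ph**n
                            for (m, n), cf in poly.terms() if m + n < 4)
    return out

# exp(c) and A*B, both truncated at third order:
exp_c = drop4(sp.eye(3) + c + c * c / 2 + c * c * c / 6)
cos3 = lambda x: 1 - x**2 / 2      # cos and sin to third order
sin3 = lambda x: x - x**3 / 6
A = sp.Matrix([[1, 0, 0], [0, cos3(th), -sin3(th)], [0, sin3(th), cos3(th)]])
B = sp.Matrix([[cos3(ph), 0, sin3(ph)], [0, 1, 0], [-sin3(ph), 0, cos3(ph)]])
assert sp.expand(exp_c - drop4(A * B)) == sp.zeros(3, 3)
```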
Conclusion
The main problem with the BCH formula is really that, in general, $[a,b] \neq 0$, and hence you most often do not even get an exact expression for $c$ – from which one could otherwise deduce the angle and axis of rotation without evaluating that pesky exponential. Without an exact expression for $c$, however, all is lost, since truncated expressions rely on the fact that for infinitesimal angles of rotation, all rotations commute.
I would love to hear other opinions, though, especially regarding the ‘theoretical’ part what one could do with $c$, if it was known exactly.

- 2,309
-
This is right; the BCH formula gives the right answer, but the expansion doesn't terminate, so it's only so useful. The easiest way I can think of to quickly extract axis and angle is to use quaternions. – Muphrid Dec 05 '12 at 21:09
-
It does not terminate only if it is mishandled! In effect, of course, it sums to the standard Gibbs formula: [Pauli matrices](https://en.wikipedia.org/wiki/Pauli_matrices#The_group_composition_law_of_SU.282.29) ... Further see above comments to Qmechanic's answer. – Cosmas Zachos Feb 08 '16 at 14:54