
Consider the geometric algebra defined by

$$ \frac{1}{2}(e_\mu e_\nu +e_\nu e_\mu)=g_{\mu\nu} $$

where the $e_\mu$ are generators of the algebra and the $g_{\mu\nu}$ are real numbers. I am struggling to find a matrix representation of this algebra.

In the case of $Cl_{3,0}$ it is well known that the matrix representation is given by the Pauli matrices, and in the case of $Cl_{3,1}$ by the Dirac matrices. However, these two Clifford algebras do not describe curved spaces: their generators form an orthonormal basis and satisfy $$ \frac{1}{2}(\gamma_\mu \gamma_\nu + \gamma_\nu \gamma_\mu)= \eta_{\mu\nu}. $$

I am interested in the matrix representation of the geometric algebra of curved space. For simplicity let us assume 2D space. Then, the constraints are:

$$ e_x e_x = g_{xx}, \qquad e_y e_y = g_{yy}, \qquad \tfrac{1}{2}(e_x e_y + e_y e_x) = g_{xy} = g_{yx}. $$

The closest I was able to get to a correct matrix representation gives me the freedom to set $g_{xx}$ and $g_{yy}$ to any real values (but not the cross-term $g_{yx}$):

$$ e_x= \pmatrix{-\sqrt{g_{xx}} & 0 \\ 0 & \sqrt{g_{xx}}}, \qquad e_y= \pmatrix{0 & \sqrt{g_{yy}} \\ \sqrt{g_{yy}} & 0}. $$

Then, with these matrices I get

$$ e_x e_x=g_{xx}, \qquad e_ye_y=g_{yy}, \qquad e_xe_y+e_ye_x=0. $$

What matrices will give me the full set of relations, including $\tfrac{1}{2}(e_xe_y+e_ye_x)=g_{xy}=g_{yx}$?

Anon21
  • In curved space, $e_\mu$ are just linear combinations of Clifford algebra $\gamma_a$ vectors, as $e_\mu = e^a_\mu(x)\gamma_a$. See here https://physics.stackexchange.com/q/514592/ – MadMax Nov 21 '19 at 20:44

1 Answer


In curved space the coordinate basis will not be orthonormal, but an orthonormal basis still exists, and can be used to construct an arbitrary basis.

So...

Start with $\gamma_\mu$ as a matrix representation of an orthonormal basis in curved space. Then you can express any arbitrary basis $e_\mu$ as a linear combination of them. To get a useful basis this way, you should be able to write an orthonormal basis in terms of the coordinate basis, then invert the transformation. Then you have $e_\mu \cdot e_\nu = g_{\mu\nu}$ by bilinearity of the dot product.
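
In index notation this is just the vielbein (frame-field) construction mentioned in the comment above: writing $e_\mu = e_\mu{}^a \gamma_a$ with real coefficients $e_\mu{}^a$,

$$ \tfrac{1}{2}(e_\mu e_\nu + e_\nu e_\mu) = e_\mu{}^a e_\nu{}^b \, \tfrac{1}{2}(\gamma_a \gamma_b + \gamma_b \gamma_a) = e_\mu{}^a e_\nu{}^b \, \eta_{ab} = g_{\mu\nu}, $$

so any metric of the form $g_{\mu\nu} = e_\mu{}^a e_\nu{}^b \, \eta_{ab}$ is realized automatically.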

Although, it's probably easier to use GA in curved space by forgetting the matrix representation altogether.

EDIT: As requested in comments, here's an example. This is in 2d using just $\sigma_x,\sigma_y$.

Let $$ e_k = a_k \, \sigma_x +b_k \, \sigma_y = \left(\begin{array}{cc} 0 & a_k - i b_k \\ a_k + i b_k & 0 \end{array}\right) = \left(\begin{array}{cc} 0 & c_k^* \\ c_k & 0 \end{array}\right) $$

Then $$ (e_1)^2 = |c_1|^2 \left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right), \qquad (e_2)^2 = |c_2|^2 \left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right), $$ $$ \qquad \tfrac{1}{2} (e_1 e_2 + e_2 e_1) = \textrm{Re}(c_1^* c_2) \left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right). $$ Thus $$ \tfrac{1}{2} (e_i e_j + e_j e_i) = g_{ij} \left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right) $$ with $$ g_{ij} = \left(\begin{array}{cc} |c_1|^2 & \textrm{Re}(c_1^* c_2) \\ \textrm{Re}(c_1^* c_2) & |c_2|^2 \end{array}\right). $$

An arbitrary metric can be explicitly realized by choosing $c_k = \sqrt{g_{kk}} \; e^{i \phi_k}$ with $\Delta \phi = \phi_2 - \phi_1$ obeying

$$ \cos(\Delta \phi) = \frac{g_{12}}{\sqrt{g_{11}} \sqrt{g_{22}}} $$

where the magnitude of the RHS is less than 1 by the Cauchy-Schwarz inequality. Note that the "arbitrary" metric under consideration still must be positive-definite if $\sigma_i$ are to provide a representation, so the C-S inequality must hold.
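
As a sanity check, here is a minimal numerical sketch of the 2d construction above (assuming numpy; the example metric values are arbitrary positive-definite numbers chosen for illustration, not anything from the question):

```python
import numpy as np

# Pauli matrices sigma_x and sigma_y
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

# An arbitrary positive-definite 2d metric, chosen for illustration
g = np.array([[1.0, 0.5],
              [0.5, 2.0]])

# Realize it via c_k = sqrt(g_kk) exp(i phi_k) with
# cos(phi_2 - phi_1) = g_12 / sqrt(g_11 g_22)
phi1 = 0.0
phi2 = np.arccos(g[0, 1] / np.sqrt(g[0, 0] * g[1, 1]))
c = [np.sqrt(g[0, 0]) * np.exp(1j * phi1),
     np.sqrt(g[1, 1]) * np.exp(1j * phi2)]

# e_k = a_k sigma_x + b_k sigma_y, where c_k = a_k + i b_k
e = [ck.real * sx + ck.imag * sy for ck in c]

# Verify (1/2)(e_i e_j + e_j e_i) = g_ij * I
for i in range(2):
    for j in range(2):
        anticomm = 0.5 * (e[i] @ e[j] + e[j] @ e[i])
        assert np.allclose(anticomm, g[i, j] * np.eye(2))

print("metric reproduced:\n", g)
```

Feeding it the non-positive-definite values from the comments below ($g_{00}=1$, $g_{11}=2$, $g_{01}=200$) makes `np.arccos` return `nan`, which is the Cauchy-Schwarz failure in numerical form.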

Note that when you extend this to 3d by including $\sigma_z$, the matrix representations involved will still be 2x2, but $g_{ij}$ will be 3x3. Good luck!
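
For what it's worth, here is a sketch of why the 3d version works (this paraphrases the comment discussion below; it is not spelled out explicitly above). Writing $e_i = \vec{v}_i \cdot \vec{\sigma}$ with real vectors $\vec{v}_i \in \mathbb{R}^3$, the Pauli identity $(\vec{a}\cdot\vec{\sigma})(\vec{b}\cdot\vec{\sigma}) = (\vec{a}\cdot\vec{b})\,I + i(\vec{a}\times\vec{b})\cdot\vec{\sigma}$ gives

$$ \tfrac{1}{2}(e_i e_j + e_j e_i) = (\vec{v}_i \cdot \vec{v}_j)\, I, \qquad \text{i.e.} \qquad g_{ij} = \vec{v}_i \cdot \vec{v}_j, $$

so $g_{ij}$ is the Gram matrix of three real vectors, which is exactly a symmetric positive semi-definite matrix.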

But again, using matrix representations is not in the spirit of geometric algebra --- for most purposes it's better to just use the formal rules and geometric interpretations.

  • For instance, using the Pauli matrices as the orthonormal basis, I should be able to define three basis elements $e_x = a_x \sigma_x + a_y \sigma_y + a_z\sigma_z$, $e_y = b_x \sigma_x + b_y \sigma_y + b_z\sigma_z$ and $e_z = c_x \sigma_x + c_y \sigma_y + c_z\sigma_z$, and those three elements $\{e_x,e_y,e_z\}$ would have the desired properties? – Anon21 Nov 21 '19 at 19:03
  • Yes, that should work – Joe Schindler Nov 21 '19 at 21:49
  • I find myself struggling with this. Is there any chance you would be so kind as to add an explicit example to your answer? – Anon21 Nov 22 '19 at 02:55
  • For instance, if I set my basis as follows: $$ e_x=a\sigma_x+b\sigma_y+c\sigma_z, \qquad e_y=d\sigma_x+e\sigma_y+f\sigma_z, $$ then $$ e_xe_x=(a^2+b^2+c^2) I, \qquad e_ye_y=(d^2+e^2+f^2)I, \qquad e_xe_y+e_ye_x=2(ad+be+cf)I. $$ To my naked eye, it seems that the terms $2(ad+be+cf)$, $(a^2+b^2+c^2)$ and $(d^2+e^2+f^2)$ are not independent. But I expected $g_{xx}$, $g_{yy}$, $g_{yx}$ to be independent. – Anon21 Nov 22 '19 at 03:31
  • I think your confusion comes from conflating the matrix $g_{\mu\nu}$ with the matrix of the representation. I'll edit the answer to show a quick example. – Joe Schindler Nov 22 '19 at 03:35
  • Looking at your work I now see that your issue was different from what I implied. But actually, the terms you have there are indeed independent. Think of it this way: as you have it now, you have 6 free parameters and only 3 constraints. Plenty of freedom! Although if you use $\sigma_z$ you should also include $e_z$. Then you'll have 9 parameters and 6 constraints, still plenty of freedom. If you still doubt, note that the metric can be explicitly realized in my example above. I'll edit one more time to show how. – Joe Schindler Nov 22 '19 at 19:06
  • @JoeSchindler, the number of unconstrained parameters is the number of rotation generators, due to global rotation/Lorentz symmetry ($g$ and $e$ are not in bijection). For 3D it is 3 (as in the example), while for 4D it is 6. – MadMax Nov 22 '19 at 21:00
  • @MadMax Thanks, good point! – Joe Schindler Nov 22 '19 at 21:06
  • Can I ask you to provide explicit values for $c_1$ and $c_2$ under the constraints that $g_{00}=1$, $g_{11}=2$ and $g_{01}=200$? – Anon21 Nov 23 '19 at 03:29
  • No you can't. By using the $\sigma_i$ you have represented a metric of Euclidean signature, which means that the metric must be positive definite, so all vectors have non-negative norm. Since the metric is a positive definite inner product it must obey the Cauchy-Schwarz inequality, as I stated above. In the metric you are trying to give, some vectors have a negative norm (for example $|e_1-e_2|^2 = -397$). You must specify a positive definite matrix (i.e. all positive eigenvalues). Otherwise you can't use the $\sigma_i$ as a representation. – Joe Schindler Nov 23 '19 at 03:48