41

It's common for students to be introduced to tensors as "things that transform like tensors" - that is, their components must transform in a certain way when we change coordinates. However, we can do better by defining a tensor as a multilinear map from $ V\times...\times V\times V^\ast\times ...\times V^\ast\to \mathbb{F} $, where $V$ is a vector space over $\mathbb{F}$ (often taken to be a tangent space). The transformation law then follows.
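
For example, for a rank-$(0,2)$ tensor $T:V\times V\to\mathbb{F}$, the components in a basis $\{e_i\}$ are just $T_{ij}:=T(e_i,e_j)$, and for a new basis $e'_i = A^j{}_i e_j$ multilinearity alone gives
$$T'_{ij} = T(e'_i, e'_j) = A^k{}_i A^l{}_j\, T(e_k, e_l) = A^k{}_i A^l{}_j\, T_{kl},$$
which is the usual transformation law.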

My current understanding of spinors feels like the first, dissatisfying definition: they're just "things that transform like spinors" - that is, they're elements of a vector space which transform according to a projective representation of $SO(n)$ which is genuinely multi-valued (i.e. it's not just a true representation of $SO(n)$). We could call this the "spinor transformation law". Note that this is something we've put in "by hand": the way a spinor transforms is not a property of some underlying object, but is built into our definition.
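
(To make the "genuinely multi-valued" point concrete, here is a minimal numerical check, sketched in Python with NumPy for the rotation group in three dimensions: a $2\pi$ rotation acts as the identity on vectors but as $-\mathbb{1}$ on spin-$\frac{1}{2}$ spinors.)

```python
import numpy as np

# Pauli matrix sigma_z
sz = np.array([[1, 0], [0, -1]], dtype=complex)

theta = 2 * np.pi

# SO(3): rotating a vector about z by 2*pi gives the identity
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0,              0,             1]])
print(np.round(Rz, 12))   # identity matrix (up to rounding)

# SU(2): the "same" rotation acting on a spinor, exp(-i theta sigma_z / 2)
Uz = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sz
print(np.round(Uz, 12))   # minus the identity
```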

My question is: can we define spinors without reference to the way they transform, just as we did for tensors? Is there some object "underlying" the definition of spinors in terms of transformations, just as tensors are "really" multilinear maps?

\begin{align} \text{Tensor Transformation Law}&\to \text{Tensors as multilinear maps}\\ \text{Spinor Transformation Law}&\to \text{??? } \end{align}

Qmechanic
  • 201,751
  • 4
    Just to let you know, there's a really cool, and small, book by Cartan called The Theory of Spinors, maybe you'll find what you need there. – Davide Morgante May 23 '20 at 17:47
  • What bothers you about the spinor transformation laws? I find it much more difficult to imagine the fact that the projections of the spin are discrete whatever the spin value is - integer or half-integer. – Vladimir Kalitvianski May 23 '20 at 18:00
  • 1
    It is a property of the object. The tensor transformation map that you propose as better than "things that transform like tensors" doesn't make sense to me. It's pure math. How do you make the association to things that exist in our world? Start with your map, and then say "things that transform like spinors". – garyp May 23 '20 at 18:08
  • Good question +1. I also always felt uncomfortable with the physicist's definition. In time, though, I learned to appreciate it a bit more. – lcv May 23 '20 at 18:51
  • I wonder why many people seem satisfied with "Tensors as multilinear maps", because in the end, these maps also have to transform correctly. –  May 23 '20 at 19:14
  • 4
    Saying a tensor is a multilinear map doesn't actually answer anything. That just means a tensor transforms like a multilinear map on vectors, which depends on how vectors transform. You can just as well say that a vector is a bilinear map on spinors, but I bet that wouldn't satisfy you. – knzhou May 23 '20 at 19:28
  • I feel any intuitive understanding of spinors must include watching the videos in the Introduction section of the Wikipedia page on spinors, and at least one Candle Dance which has the dancers lying on the ground at some point. – Cort Ammon May 23 '20 at 21:08
  • 3
    Seeking a spinor analog of "tensors as multilinear maps" might not be the path that leads most physicists to become comfortable with spinors. The path might be more like this: Quantum physics is expressed in terms of observables. If we only require the pattern of observables to be Poincaré-symmetric, without requiring that the scaffolding we use to construct them be Poincaré-symmetric, then we are led to consider scaffolding that uses the covering group of the Poincaré group -- spinor fields. (This perspective generalizes to curved spacetime, even though I expressed it in flat spacetime.) – Chiral Anomaly May 24 '20 at 01:41
  • 1
    @ChiralAnomaly I agree with this but IMO the question, seen as one of pure curiosity (probably more suitable for Math SE), still stands: Is there a basis independent, purely geometric, formulation of spinors in which the transformation properties come out as theorems? –  May 24 '20 at 01:46
  • @DvijD.C. I agree. The question asks "I'm not happy with X, can you give me Y instead?" My comment is checking to see if the first part ("I'm not happy with X") is more important to the OP than the second part ("can you give me Y instead?"). It's sort of a request for clarification, but without requesting any reply if the answer is no. – Chiral Anomaly May 24 '20 at 02:05
  • 6
    @knzhou I don't think I agree with that. Vectors are just maps $C^\infty (\mathcal{M}) \to \mathbb{F}$ satisfying linearity and Leibniz. No reference needed to transformations. This gives us a "foundation" on which to build tensors, again without reference to transformation properties. Indeed, you could do most differential geometry without ever even knowing how the components of these tensors transform. Spinors do not enjoy this property. – Jacob Drori May 24 '20 at 09:59
  • 1
    A few levels below, one could ask "What is a vector?" - and there is no other answer possible than "an element of a vector space". So "behaves like" is really the best there is. – Hagen von Eitzen May 24 '20 at 10:37
  • I have to agree with the "it's pure math" crowd. There are, of course, visualizations of vectors in the form of arrows, but an arrow is not "a vector object". It's just a different level of abstraction, one that happens to agree with our (rather limited) visual system. The real problem is that vectors do not behave the same in all cases. Vectors in one dimension are scalars, vectors in 3 and 7 dimensions have a cross product etc... which means that saying "vector" alone is meaningless. What matters is the geometry of the space they live in. Same for tensors and spinors. – FlatterMann Apr 17 '23 at 18:44

6 Answers

22

The proper analogous formalization of spinors is not to view them as some sort of different functions from tensors on the same underlying vector space $V$, but instead to expand our idea of the underlying geometry: Where tensors are multilinear functions on vector spaces, tensors with "spinor" and "vector" parts are multilinear functions on super vector spaces $V = V_0\oplus V_1$ where the odd part $V_1$ is a spinorial representation of $\mathrm{Spin}(V_0)$. (nlab calls these spaces super-Minkowski spacetimes).

Via the dual representation, the linear functions on $V_1$ inherit a representation of the spin group. The (multi)linear functions also inherit the super-grading (a linear function that is zero on the odd part is even, and a linear function that is zero on the even part is odd), and purely even such functions are just ordinary tensors, while purely odd functions are pure spinors.
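
(As a concrete toy version of the "dual representation" statement, here is a short Python sketch with $V_1 = \mathbb{C}^2$ carrying the spin-$\frac{1}{2}$ representation of $\mathrm{Spin}(3)\cong SU(2)$; the names are illustrative only. A linear function on $V_1$ transforms with the inverse-transpose, so its value on a spinor is unchanged.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Pauli matrices, used to build a random SU(2) element S = exp(-i n.sigma theta/2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
n = rng.normal(size=3); n /= np.linalg.norm(n)
theta = 1.3
S = (np.cos(theta / 2) * np.eye(2)
     - 1j * np.sin(theta / 2) * (n[0] * sx + n[1] * sy + n[2] * sz))

psi = rng.normal(size=2) + 1j * rng.normal(size=2)   # an element of V_1 (a spinor)
f   = rng.normal(size=2) + 1j * rng.normal(size=2)   # components of a linear function on V_1

psi_new = S @ psi                   # the spinor transforms with S
f_new   = np.linalg.inv(S).T @ f    # the dual representation: f -> f o S^{-1}

print(np.allclose(f @ psi, f_new @ psi_new))   # True: f(psi) is invariant
```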

Note that we still put in the spin representation $V_1$ by hand - the choice is not determined by the base space $V_0$. This is, in some way, not surprising - a notion of "spin" and spinor is genuinely more than just having a vector space: All (pseudo-Riemannian) manifolds (modeled on the vector spaces $\mathbb{R}^n$) have a notion of tensors built on tensor products of the (co)tangent spaces, but not all manifolds have spinors, i.e. the possibility to associate consistently a spinorial representation to every point of the manifold. For simple vector spaces the choice of a notion of spin is not obstructed, but it is still a choice.

That the supergeometric approach is nevertheless the "correct" (or at least a useful) one is seen when we turn to field theory, where one must represent fermionic/spinorial degrees of freedom by anti-commuting variables, and the $\mathbb{Z}/2$-grading of the underlying vector space then allows us to do this simply by declaring that the odd components anti-commute.
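
(If it helps, here is a toy Grassmann algebra sketched in Python; the class and its dictionary representation are purely illustrative, not any standard library, but they are enough to check that odd generators anti-commute and square to zero.)

```python
from itertools import product

class Grassmann:
    """Toy Grassmann-algebra element: dict {frozenset(generator indices): coefficient}."""
    def __init__(self, terms):
        self.terms = {k: v for k, v in terms.items() if v != 0}

    def __mul__(self, other):
        out = {}
        for (a, ca), (b, cb) in product(self.terms.items(), other.terms.items()):
            if a & b:                      # a repeated generator kills the term
                continue
            seq = sorted(a) + sorted(b)
            # sign = parity of the permutation that sorts the concatenation
            sign, s = 1, list(seq)
            for i in range(len(s)):
                for j in range(len(s) - 1 - i):
                    if s[j] > s[j + 1]:
                        s[j], s[j + 1] = s[j + 1], s[j]
                        sign = -sign
            key = frozenset(seq)
            out[key] = out.get(key, 0) + sign * ca * cb
        return Grassmann(out)

    def __repr__(self):
        return repr(self.terms) if self.terms else "0"

theta1 = Grassmann({frozenset({1}): 1})
theta2 = Grassmann({frozenset({2}): 1})

print(theta1 * theta2)   # {frozenset({1, 2}): 1}
print(theta2 * theta1)   # {frozenset({1, 2}): -1}  -> the generators anti-commute
print(theta1 * theta1)   # 0                        -> odd variables square to zero
```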

ACuriousMind
  • 124,833
  • Any hints on understanding how a spinor can be (ab)used as a coordinate? I know this is pretty popular but I just don't get it. –  May 23 '20 at 18:36
  • 3
    This answer seems like exactly what I was looking for. Is it standard for physicists to treat spacetime as this suitably enlarged manifold ("super-Minkowski spacetime"), or is this only used in certain theories (e.g. supersymmetric theories)? – Jacob Drori May 23 '20 at 18:47
  • @JacobDrori It is very common in supersymmetric theories, but you will see this more or less frequently also in all contexts that worry about quantization - you need a way to have "classical" fermionic variables in order to define a quantization process on them, and so they are usually added to the spacetime, phase space or whatever else the relevant configurational space in the respective context is. – ACuriousMind May 23 '20 at 18:51
  • 1
    @JacobDrori Sorry to interrupt, but he explicitly states that $V_1$ is a spinorial representation. How is it now suddenly OK to use representations? –  May 23 '20 at 18:51
  • @DoctorNuu I have just now added a paragraph discussing that. – ACuriousMind May 23 '20 at 18:52
  • @DoctorNuu You're right, re-reading it I can see the answer isn't exactly what I was hoping for. But the added paragraph shows that what I was hoping for isn't possible. It seems that we really do have to presuppose a transformation rule - I may have misunderstood though. – Jacob Drori May 23 '20 at 19:02
  • This answer seems to say nothing more than "a spinor is a spinor". If you define spinors by transformation properties you can at least try to work out what sort of geometric object would transform that way. From this construction you seemingly can't learn anything, since the spinors you get out are the ones you put in. If you could motivate super vector spaces in some way that constrains what can be in the direct sum, that might help. – benrg Apr 17 '23 at 21:35
18

I think you're asking for intuition in the wrong direction here.

Suppose that somebody is already familiar with vectors, and wants to understand tensors. That's possible, because tensor representations are built out of vectors, i.e. the rank $2$ tensor representation is just the product of two vector representations, or equivalently a rank $2$ tensor is a bilinear map on two vectors.

But it's precisely the opposite with spinors. Spinors aren't built out of vectors, instead vectors are built out of spinors! Spinors are the simplest possible representations of the Lorentz group, and the vector representation is the product of a left-handed spinor and a right-handed spinor (or equivalently, a vector is a bilinear map on two spinors).
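
(A quick numerical illustration of "a vector is a bilinear map on two spinors", sketched in Python for ordinary rotations rather than the full Lorentz group, where strictly one needs both chiralities: the object $v_i = \chi^\dagger\sigma_i\psi$ picks up exactly the $SO(3)$ rotation matrix when both spinors are rotated with the corresponding $SU(2)$ element.)

```python
import numpy as np

# Pauli matrices
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

rng = np.random.default_rng(1)
n = rng.normal(size=3); n /= np.linalg.norm(n)   # rotation axis
theta = 0.7                                      # rotation angle

# SU(2) element and the SO(3) rotation it covers (Rodrigues formula)
U = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sum(n[i] * sig[i] for i in range(3))
K = np.array([[0, -n[2], n[1]], [n[2], 0, -n[0]], [-n[1], n[0], 0]])
R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

chi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)

def vec(chi, psi):
    # the "vector built out of two spinors": v_i = chi^dagger sigma_i psi
    return np.array([chi.conj() @ s @ psi for s in sig])

v  = vec(chi, psi)
v2 = vec(U @ chi, U @ psi)

print(np.allclose(v2, R @ v))   # True: rotating the spinors rotates the vector
```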

In other words, asking what underlies spinors is the wrong question. Spinors are the structure underlying everything you already knew. You need to rebuild your understanding with the spinors at the bottom, not at the top.

This happens a lot in physics: you can't ask for an intuitive derivation of a fundamental thing from a composite thing. What you're asking for is analogous to asking which atoms a proton is made of, or how many protons are inside a quark, or how to build a vector out of tensors. (Incidentally, learning fancier mathematics, as suggested in the other answers, never answers such questions, because these questions inherently don't have answers. What really happens is that in the process of learning the math, you get familiar with these new elementary objects. Once you can work with them fluently, you stop worrying about explaining them in terms of things you knew before, because you understand them on their own terms.)

knzhou
  • 101,976
  • 1
    That's basically what I was trying to say (but apparently failed). Taken as building blocks, spinors are most fundamental. A point to note, however, is that when starting from a manifold, directions seem to come first and naturally, while for spinors one needs to consider the Geometric/Clifford algebra generated by those directions and find spinors in there. –  May 23 '20 at 20:00
  • 5
    I think I get the gist of what you're saying. The thing that is making it hard to accept is the fact that vectors really do seem to be the more "basic" objects on a manifold, as DoctorNuu said above. They're just maps on the tangent space, and we don't really care about how they transform unless we want to choose some coordinates to do calculations. But with spinors, it seems there is no definition which doesn't involve their transformation properties. Hence vectors seem genuinely more "natural". – Jacob Drori May 23 '20 at 20:16
  • 1
    Yes, but we can talk about vector spaces axiomatically, with no mention of transformations, and then later derive the transformation properties from the axioms. Can we do the same thing with "spinor spaces"? – The_Sympathizer May 26 '20 at 02:53
  • @The_Sympathizer But that's not what we do for vectors. At the level of all vector spaces (whether they describe a spinor, vector, tensor, heatmap, inventory of a warehouse, stock price timeseries, or array in a C program), we can always derive the transformation rule for the components under an arbitrary change of basis, completely axiomatically. – knzhou May 26 '20 at 03:02
  • @The_Sympathizer In order to derive the transformation rule for the components under a physical rotation, you need to define the words "physical rotation". A priori the formalism of vector spaces knows nothing about physical space, which is why they can describe both warehouse inventories (where physical rotations are meaningless) and tensors. You put in what a rotation means by hand to get how a vector rotates, you can do the same for a spinor. – knzhou May 26 '20 at 03:03
  • @knzhou : ??? Um, yes, that's exactly what I just said. So then can we do that for spinors, i.e. write down a list of axioms for a "spinor space", then derive transformation rules from that analogously to the derivation for vectors from the axioms for a vector space. – The_Sympathizer May 26 '20 at 03:03
  • @knzhou : Yes, so then what sort of axiomatic object can we then do that when we "plug in by hand" that rotation process in a special case, out pops the transformation rules for spinors? – The_Sympathizer May 26 '20 at 03:04
  • @knzhou : thanks, yes I just saw it. – The_Sympathizer May 26 '20 at 03:05
  • FWIW, the behavior of spinors sounds interestingly like the behavior of the complex square root "function": when you rotate the argument once around zero, the output becomes its negative, then you rotate again and you get back the original. Does this mean one can consider the pair (input, output) as being like a spinor, or part of/related to one, provided we define "negation" to be negation of the output part only? – The_Sympathizer May 26 '20 at 03:07
  • @The_Sympathizer In general, to define which changes of basis correspond to "physical rotations", we demand that some tensor is left invariant. It's easiest to talk about Lorentz transformations here: for vectors the invariant tensor is the Minkowski metric $\eta_{\mu\nu}$, for Weyl spinors it is the Levi-Civita symbol $\epsilon_{ab}$. So actually the analogy between the cases is very tight. – knzhou May 26 '20 at 03:11
  • @The_Sympathizer I'm not really sure! Certainly spinors are related to the double-valuedness of the square root function, but I'm not sure how to make it any more precise than that. – knzhou May 26 '20 at 03:13
11

Yes. Spinors are elements of representation spaces of objects known as Clifford algebras.

A Clifford algebra is basically a vector space turned into an algebra via the product rule

$$ v\cdot w + w\cdot v = 2g(v,w)\Bbb{1} $$

where $g$ is some metric on the vector space itself. The most famous Clifford algebra is the Dirac algebra, i.e. the algebra of the Dirac matrices (for which the vector space is $\Bbb{R}^{4}$ and the metric is the Minkowski metric). If instead you use $\Bbb{R}^{3}$ as the base vector space, with the Euclidean metric, you obtain the Pauli algebra.

Once you have a Clifford algebra you can look for its representations (or "modules", as they are known in the literature). The elements of these representations are the spinors. The spinors corresponding to $\Bbb{R}^{4}$ with the Minkowski metric are the Dirac spinors, whereas those corresponding to $\Bbb{R}^{3}$ with the Euclidean metric are the spinors of $SO(3)$/$SU(2)$.
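
(A concrete sanity check, sketched in Python: the Pauli matrices represent the Clifford algebra of $\Bbb{R}^{3}$ with the Euclidean metric, in the sense that $\sigma_i\sigma_j + \sigma_j\sigma_i = 2\delta_{ij}\Bbb{1}$, and the column vectors in $\mathbb{C}^2$ they act on are the corresponding spinors.)

```python
import numpy as np
from itertools import product

# Pauli matrices: a representation of the Clifford algebra of R^3
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

# Clifford relation: sigma_i sigma_j + sigma_j sigma_i = 2 delta_ij * identity
for i, j in product(range(3), repeat=2):
    anticomm = sig[i] @ sig[j] + sig[j] @ sig[i]
    assert np.allclose(anticomm, 2 * (i == j) * np.eye(2))

print("Pauli matrices represent Cl(3); the spinors are the C^2 vectors they act on.")
```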

  • 1
    There we have it again. The old race between physics and maths. Physics was there first, but maths gets all the credit ;-) –  May 23 '20 at 18:23
  • 1
    All this is correct but I'm not sure it answers OP's question - they seem to be asking for a notion of spinors that is not just "spinors are elements of the following vector space". – ACuriousMind May 23 '20 at 18:27
  • 2
    Thanks for the explanation. I'm new to stackexchange and am pretty amazed at how quickly people respond. ACuriousMind is right, though: I was hoping for a definition which doesn't involve any "new" vector spaces other than what the base space already gives us. Defining spinors as elements of a representation space seems to be essentially defining them by their transformation properties, which is what I was trying to avoid. Apologies for my rather ambiguous question, and thanks for the answer regardless. – Jacob Drori May 23 '20 at 18:35
  • Good answer +1, this is what I would have said. But how does this relate to tensors (as multilinear objects...)? – lcv May 23 '20 at 18:55
  • 2
    @DoctorNuu Actually, as far as I know, Clifford algebras were invented by Clifford himself. Who died in 1879. So let's give the mathematicians some credit! – Giorgio Comitini May 23 '20 at 19:14
  • @JacobDrori This is the only way I know of for defining spinors in an abstract (and simple) setting. I see what you're saying, but maybe you should be willing to accept that spinors simply cannot be defined as maps on the underlying space. ACuriousMind's answer in my opinion makes things more complicated, by unnecessarily extending the underlying space by adding to it precisely the same space I used in my definition (i.e. $V_{1}$: Spin($V_{0}$) is just a subgroup of the Clifford group, which is contained in the Clifford algebra) and then defining linear maps onto it. – Giorgio Comitini May 23 '20 at 19:29
  • 1
    Also I think spinors were first invented by Cartan (what a surprise /s) in ~1913 independently of physics and in great generality, but the term "spinor" was first used by physicists who rediscovered it in 3 and 4 dimensions as a part of quantum mechanics, then later Cartan also used the term. So giving mathematicians the credit here is absolutely proper – Bence Racskó Jun 19 '20 at 21:26
10

Well, you should look at (irreducible) representations of the Lorentz group. Basically you want all of your ingredients to have correct and consistent transformations under the Lorentz group.

The Weyl and Dirac spinors are the most basic objects satisfying that requirement.

Starting from those you can build vectors as (multiplicative) combos of two spinors. That's why in old texts you sometimes see spinors referred to as 'half-vectors'. Also, in this context, they use only 'half' of the transformation of a vector, i.e. one-sided vs two-sided.

In that sense it's Spinors -> Vectors -> Tensors.

If you feel fancy, you can also look at things in the context of Geometric Algebra or Spacetime Algebra going back to David Hestenes. Here you can have spinors free of any matrix representation.

Two other references with different perspectives also come to mind: Spinors and Space-Time (Penrose) and Gravitation (Misner, Thorne, Wheeler).

The common theme of all approaches is, however, the special, fundamental transformation properties spinors have. You can't get around that.

1

I'm taking the Clifford algebra route as pointed out by non-user38741 and Giorgio Comitini, but I'll try to justify intuitively how to end up there and how the spinor transformation law appears inevitable. So I start with geometric algebra, which is simply another name for Clifford algebra when used in a physics context, and the vectors are taken to be elements of the algebra themselves (i.e. we're not imposing a separate matrix algebra). So take $\mathbb{R}^{n, m}$ with inner product $\langle\cdot,\cdot\rangle$, and define the geometric algebra $\mathcal{G}(\mathbb{R}^{n, m})$ as the freest associative algebra over $\mathbb{R}^{n, m}$ which satisfies

\begin{equation} v^2 = \langle v, v\rangle, \end{equation} where the square is, of course, the algebra multiplication. We will call the multiplication in this algebra the geometric product.

Admittedly, this does introduce another space, but that is an extremely natural one: the elements of the geometric algebra can be interpreted to consist of the scalars, the vectors of $\mathbb{R}^{n, m}$, the bivectors $u\wedge v$ where $u$ and $v$ are vectors and $u\wedge v := \frac{1}{2}(uv - vu)$, the 3-vectors $u\wedge v\wedge w$ and so on, up to $(n+m)$-vectors. The $k$-vectors can be interpreted as directed area/volume/$k$-volume elements. For a whimsical introduction, see "Imaginary numbers are not real", or as a thorough introduction either Hestenes' Clifford Algebra to Geometric Calculus or Doran and Lasenby's Geometric Algebra for Physicists.

Now, it turns out that a rotation of the vector $v$ in the plane defined by a simple bivector $\omega$ by $2|\omega|$ radians (where the absolute value is $\sqrt{-\omega^2}$, since the square of $\omega$ is negative) can be expressed in geometric algebra (GA) as

\begin{equation} v \mapsto \exp(\omega) v \exp(-\omega), \end{equation} where the exponential is defined by the usual power series, with the multiplication being the geometric product, and a simple bivector is a bivector that can be written as the wedge product $a \wedge b$ for some vectors $a, b$. A general rotation is then given by the same formula, but with the $\omega$ not being necessarily simple (i.e. it may need to be a sum of several simple bivectors). The result of the exponential is then in the even subalgebra, i.e. built out of objects that can be expressed as a sum of products of an even number of vector factors. We call the result of the exponentiation a rotor, and often denote $R = \exp(\omega)$. Then the object on the right hand side of the transformation can also be written as $\tilde{R}$, where the tilde denotes reversion, which simply means taking each factor in a geometric product and reversing their order. Further, $R \tilde{R} = 1$ when $R$ is a rotor.
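
(A minimal numeric sketch of the rotor formula, using the Pauli matrices as a faithful matrix representation of the geometric algebra of $\mathbb{R}^3$ rather than a dedicated GA library: a vector is $\sum_i v_i e_i$, a bivector is a product of two orthogonal basis vectors, and $R\,v\,\tilde R$ with $R=\exp(\omega)$ rotates $v$ in the plane of $\omega$ by $2|\omega|$.)

```python
import numpy as np
from scipy.linalg import expm

# Matrix representation of the geometric algebra of R^3:
# the basis vectors e1, e2, e3 are represented by the Pauli matrices.
e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]])
e3 = np.array([[1, 0], [0, -1]], dtype=complex)

theta = 0.9
omega = -(theta / 2) * (e1 @ e2)   # bivector e1 e2 (= e1 ^ e2, since e1 and e2 are orthogonal); |omega| = theta/2

R = expm(omega)                    # rotor R = exp(omega), via the usual power series
R_rev = R.conj().T                 # reversion ~R is Hermitian conjugation in this representation

v = 2.0 * e1 + 0.5 * e3            # a vector, written as v_i e_i
v_rot = R @ v @ R_rev              # the two-sided rotation law

# expected: (2, 0, 0.5) rotated by theta in the e1-e2 plane, e3 component untouched
expected = (2.0 * np.cos(theta)) * e1 + (2.0 * np.sin(theta)) * e2 + 0.5 * e3
print(np.allclose(v_rot, expected))   # True
```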

Here the first glimpse of a spinor-like transformation law appears: in general, we can rotate all elements of the algebra by the two-sided rotation law given above. However, if we represent rotations by the rotor $\exp(\omega)$, then the composition of rotations is given by the one-sided product $\exp(\omega_1) \exp(\omega_2)$, which is also a rotor.

Now, let us stick specifically to $\mathbb{R}^{1, 3}$. Then we can write the free Dirac equation as \begin{equation} \nabla \psi I_3 + m \psi = 0, \end{equation} where $\nabla$ is the vector derivative $\nabla = e^\mu \partial_\mu$, and the $e^\mu$ are basis vectors acting via the geometric product (so that $\nabla$ itself is algebraically a vector). The Dirac field $\psi$ takes values in the even subalgebra of the geometric algebra. $I_3$ is a three-vector which appears to pick a preferred slice of spacetime, and therefore break Lorentz invariance. However, consider another choice given by $I'_3 = R I_3 \tilde{R}$. Then the corresponding new Dirac equation is

\begin{equation} \nabla \psi' R I_3 \tilde{R}+ m \psi' = 0. \end{equation} Now if $\psi$ solves the original Dirac equation, then clearly $\psi' = \psi \tilde{R}$ solves this new equation with $I_3'$. In other words, when the object $I_3$ transforms like a (three)-vector under rotations, then $\psi$ transforms like a spinor, and the transformation law has appeared.

Then note that the physical predictions of the theory only depend on the Dirac bilinears, which in this language can be written analogously to \begin{equation} \psi I_3 \tilde{\psi}, \end{equation} and that when $I_3$ transforms as a three-vector and $\psi$ as a spinor, the physical predictions remain unchanged. In other words, the spinor transformation law is required here to keep the physical predictions of the theory independent of the choice of the directed volume element $I_3$.

Indeed, there is a natural interpretation of the object $\psi$ as a product of a rotor, scaling and a transform between scalars and pseudoscalars in $\mathbb{R}^{1,3}$. In this way, the spinor transformation law appears naturally as the composition of rotors (or rotor-like objects). Of course, since there is no treatment of quantum field theory in the geometric algebra language, it's not clear how far or seriously this can be taken as an interpretation of the physical Dirac equation, but nonetheless it at least provides an example where spinors appear naturally, without manually imposing the transformation law. Rather, it comes from transformations of the solutions of the Dirac equation when the choice of the constant $I_3$ transforms by rotations.

I'm sure that this flash-introduction to the subject leaves many questions unanswered and it may be a bit confusing, but if I piqued your interest I suggest you follow some of the links here and continue further that way.

Timo
  • 141
  • I would have written something along these lines if I had more time/patience. The $I_3$ makes it a little confusing, however. There are several other ways to write the Dirac equation in this framework, which I prefer. The nice part (in all of them) is that spinors have natural one-sided transformations, while vectors (like the derivative or A-fields after 'gauging') have two-sided transformation with the same $R$, no need for the usual awkward translations between representations. –  May 25 '20 at 13:35
  • @non-user38741 what's your preferred way of defining the Dirac eq in this framework? – Timo May 25 '20 at 14:02
  • My own, similar to older papers from Doran, Lasenby. The curious thing is that it never quite matches in D=1+3 because Cl(1,3) or Cl(3,1) has representations on $\mathbb{R}(4)$ or $\mathbb{H}(2)$. –  May 26 '20 at 19:38
-1

I was wrong here {

One can generalize spinors and create infinitely many kinds of objects by 'mimicking' a tensor. It is known that Metvv=v^2. So it would be logical to do va^2=a^2. If a spinor is the square root in some sense, one then can do va^2=v2;

So one must define the map vaa->v, which can be recovered through QM, or whatever the situation is. So spinors are likely not the most fundamental, because this can be applied as many times as one wants. It is seen that the square of a vector is a scalar due to the metric which is a vector squared. If the square of a spinor is a vector, then a vector times two spinors which is a spinor squared which is a vector. So this kind of context where baa=b2=a2^2 supports the existence of a spinor when going this route (and infinitely many other things). A spinor keeps the train of thought rolling past the assignment of real numbers in the metric, to create such objects. This is a bunch of careful choices to implement the square root of an object. But it is consistent in the result, and it works. Time evolution should also be determined. If Uvaa=v2, (where U defines the type of object such as a spinor) then d^n(Uvaa)=d^n(v2) determines the time evolution and an even larger whole new set of objects to explore.

In Short:So one does Uvss=v (s=spinor like object, v=vector).

For a general nth root one does Uva^n=v;

Another way to define a spinor is to 'slow down' the generators of the Lie algebra by half.

Then one can restrict the values of U, or use that idea as another whole set of objects.

A spinor seems to act like both of these. The generators are slowed down and two of those objects can change a vector. } Doing more research and attempting to make it abelian to abelian, I found that {,}^(-1)(2*Met)=>Tensor_Product_Power(Met,1/2);

Simply put, it maps the undoing of the averaging (divide by 2) anticommutator binary operation (typically with the metric being the result of the anti-commutator of each object) to the half tensor product of that object (typically the metric (again)).

It goes like this: If {a,b}=2*Met, then A2XB2=Met;

A2 is the transformation of A, and a is in A, same deal with b,B,B2. The map is consistent (both ends are abelian so no need to worry yet).

The dimension for these items a and b is typically 2^k, while the dimensions for A2 and B2 is likely k.

I first thought today, that It maps the anti...-anti-commutator (look above to see what I mean, it's crazier than just a commutator. I didn't stutter nor am I using double negatives to make either positive or negative clauses. ), to the anti-product (gets a set).

That is obviously wrong, since one is abelian the other is not.

So I found this. I need more prefixes than the english language offers to make this sentence. Anti2-Anti-Average-Like-Commutator of the Metric goes to Anti2-Tensor-Product-Square of the Metric. That is the way to remember it (At least for me). The gamma matrices are the anti2-anti-commutator of the metric. I put a 2 there to warn you that this means undoing a binary operation to recover a set of items, rather than the anti as in anti-commutator kind of anti.

The best way to write it I found is,

Make_Spinor({,}^(-1)(Met))=(X)^(1/2)(Met).

The dimensions are likely either a matter of combinatorics, or quantum mechanics.

Another way to think about it quantum mechanically, is SMAT_{universal}(sqrt((...)a^2+(...)b^2+...),a,b,...) maps to MetX^(1/2).

What I mean by that, is solve a constant matrix for some square root of the sum of squares (or another combination via the metric) and map it to the vector that it's square tensor product is the metric. So that becomes even simpler. One does SMAT(sqrt(Met(s,s)),s)->sqrt_{Tensor_Product)(Met);

That becomes SMAT(sqrt O Met*s^2,...)->sqrt(SMAT(ds,v^2)).

That becomes SMATsqrt(...)->sqrtSMAT(...) (the simplest it can be).

SMAT(OUT,IN) is basically 'solve matrix' (like neural networking). I find that is the best way to notate it. Partial derivatives aren't always clear on what it doesn't include, while a matrix problem is 100% clear and one can execute it the same way every time.

My notation is different (I develop my own notation like Newton,.... (long list)), so beware. But it is well suited to all of these concepts.

Hopefully I simplified it.

The main result (in very short form) is: sqrt*SMAT(...)<->SMAT * sqrt(..)...;

That is the main rule that recovers it all. To me it is currently the most important. I don't know about others (they might just call me crazy), but the work on spinors cares in itself.

Misha
  • 1
  • Hello Misha and welcome to Physics SE. I suggest you use MathJax to type your symbols and equations because it will make your question a lot clearer. To me, it is not clear what each equation means (although I have to note that I am not very familiar with the context of your question). – ZaellixA Apr 17 '23 at 20:47
  • Sorry about that. I will format it when I am 1000% sure. I am currently 100% sure, but formatting requires 1000% certainty. I am 100% sure, but for me to format it, I need to quantum mechanically measure ten-fold. Klein and Dirac equations are often -10% sure (If you know what I mean. So, I can be 1000% sure soon.). I call myself an ASCII-junky. Mega-Future-Time-Translation => { Simplification of math <=> Simplification of binary}=> Not Prepared for MathJax Yet. – Misha Jun 01 '23 at 23:05
  • I'll see how you like it first. – Misha Jun 01 '23 at 23:05
  • I am also busy on my own 'localization procedure' that converts lattices to arbitrary graphs, and I need to envision its insane impacts on spacetime. I don't know which lattice to choose. The lattice might go from discrete chessboard, to strong nuclear force wavefunction to GR black hole before applying the conversion. I am speechless of what I created. Literally a GR black hole spacetime can be plugged in as an object that would originally be an abstract chessboard to the theory. 10 years of work and missing life from exhaustion and forgetting to eat is finally paying off. – Misha Jun 01 '23 at 23:11
  • I am glad your efforts bear fruits but I am not sure how all this is related to MathJax. MathJax is just "a way" to format mathematical forms to look better (unlike program code). – ZaellixA Jun 02 '23 at 09:32
  • The way one writes math is a state of consciousness. I could use MathJax but it'll 'lose' that specific state that is required to prepare it's own self understanding. I could make it look 'better', BUT it won't be the same 'hill' that leads one to the same result. I am an ascii-junky. Destroying the hill destroys the 8-bit junky houses built upon it. Tis' the reason for the season. – Misha Jun 03 '23 at 19:29
  • I am not sure exactly how this is related to formatting but this may be because I am not any kind of "computer junky" so I am unable to feel it. The final choice of the format is yours but I would expect more people to read your answer if it was formatted in such a way as to be easier for them to do so. – ZaellixA Jun 03 '23 at 20:28