I'm having a hard time making sense of an expression like $$\left[\Psi(x), \bar{\Psi}(y)\right].$$ Up until now I imagined a spinor operator to be something like a column vector of operators, along the lines of $$\Psi = \begin{pmatrix} \Psi_1 \\ \Psi_2 \\ \Psi_3 \\ \Psi_4 \end{pmatrix},$$ and, similarly, the Dirac adjoint to be the row vector $$\bar{\Psi} = \begin{pmatrix} \bar{\Psi}_1 & \bar{\Psi}_2 & \bar{\Psi}_3 & \bar{\Psi}_4 \end{pmatrix}.$$ However, whenever I see terms containing such operators, or commutators of them, the operators are just written one after another, with no indication of what this means in terms of their matrix structure. For operators with a single "component" this works fine, since one can define a multiplication without running into trouble. But in a commutator $$\left[\Psi, \bar{\Psi}\right] = \Psi \bar{\Psi} - \bar{\Psi} \Psi,$$ if I take the product of two consecutive operators to be matrix multiplication, the first term yields a $4 \times 4$ matrix, while the second yields just a single component.
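Just to make the shape mismatch concrete, here is a quick NumPy check, where I naively replace the operator-valued spinor by an ordinary 4-component column of numbers (only the shapes matter here, not the values):

```python
import numpy as np

# Naive stand-in for the spinor: a 4-component column vector,
# and its "Dirac adjoint" as the conjugate-transposed row vector.
psi = np.arange(1, 5).reshape(4, 1)   # shape (4, 1)
psi_bar = psi.conj().T                # shape (1, 4)

outer = psi @ psi_bar   # column times row -> 4x4 matrix
inner = psi_bar @ psi   # row times column -> 1x1 "scalar"

print(outer.shape)  # (4, 4)
print(inner.shape)  # (1, 1)
```

So under plain matrix multiplication the two terms of the commutator really do have incompatible shapes, which is exactly what confuses me.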
How should I think of something written down like that? Or does this mean that any equation involving a spinor is just meant to be interpreted component-wise?
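To spell out the component-wise reading I have in mind: the expression would be shorthand for one commutator per pair of spinor indices, $$\left[\Psi(x), \bar{\Psi}(y)\right]_{ab} = \Psi_a(x)\,\bar{\Psi}_b(y) - \bar{\Psi}_b(y)\,\Psi_a(x),$$ i.e. a $4 \times 4$ array of operator equations rather than a single matrix product. Is this the intended interpretation?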
I ran into this question while reading the section in Peskin & Schroeder where they try to use the commutator for quantisation (and afterwards see that it doesn't work).
TBH I don't see a problem here. The question doesn't get better or worse if I ask about the commutator instead of the anticommutator, although the anticommutator is used far more often.
– Quantumwhisp Jul 21 '21 at 16:47