
In special relativity, we write contravariant and covariant vectors respectively as $$A^\mu=(A^0, A^1, A^2, A^3), \quad A_\mu=(A_0, A_1, A_2, A_3).$$ Since $A_\mu=\eta_{\mu\nu}A^\nu$, we read off the relationship between the components, choosing the signature $(+,-,-,-)$: $$A_0=A^0, \quad A_1=-A^1, \quad A_2=-A^2, \quad A_3=-A^3,$$ and thus $A_\mu=(A^0, -A^1, -A^2, -A^3)$ in terms of the contravariant components. But in order to have $A^\mu A_\mu=(A^0)^2-(\mathbf A)^2$ as we should, with this notation where the metric $\eta_{\mu\nu}$ has been 'absorbed' into one of the vectors, we need to use the standard Euclidean product: $$A^\mu\eta_{\mu\nu}A^\nu=A^\mu A_\mu= (A^0, A^1, A^2, A^3)(A^0, -A^1, -A^2, -A^3)^T=(A^0)^2-(\mathbf A)^2.$$ Does this mean that going into Minkowski space corresponds to staying in Euclidean space, but inverting the spatial coordinates of one of the vectors?
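For concreteness, here is a minimal numerical sketch of the bookkeeping above (the component values are arbitrary, chosen only for illustration):

```python
import numpy as np

# Minkowski metric with signature (+, -, -, -)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# arbitrary contravariant components A^mu = (A^0, A^1, A^2, A^3)
A_up = np.array([5.0, 1.0, 2.0, 3.0])

# lower the index: A_mu = eta_{mu nu} A^nu  ->  (A^0, -A^1, -A^2, -A^3)
A_down = eta @ A_up

# with the metric 'absorbed' into A_mu, the contraction is an ordinary
# component-wise (Euclidean-looking) sum
invariant = np.dot(A_up, A_down)

# agrees with (A^0)^2 - |A|^2 computed directly
print(invariant, A_up[0]**2 - np.sum(A_up[1:]**2))  # 11.0 11.0
```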

  • You can write the metric as a matrix, in which case the inner product becomes $\tilde A\eta \vec A$ – Charlie Apr 05 '21 at 16:22
  • This way of thinking has similarities with the “ict” trick to avoid the Minkowski (and Lorentz-signature) metrics. It might work in some special cases, but I think you’ll miss a lot of the physics. See https://physics.stackexchange.com/a/327516/148184 – robphy Apr 05 '21 at 23:01

1 Answer


we need to use the standard Euclidean product

I wouldn't call it a Euclidean product but rather the usual Einstein convention for summation over the repeated indices:

$$A_\mu A^\mu = \sum_{\mu=0}^3A_\mu A^\mu = A_0 A^0 + A_1 A^1 + A_2 A^2 + A_3 A^3$$

This expression is the same in both Euclidean and Minkowski space. The distinction between them comes when you want to relate $A_\mu$ to $A^\mu$. As you correctly pointed out, in Minkowski space this is done with the help of the Minkowski metric, and as a result you get

$$A_\mu A^\mu = (A^0)^2 - (\mathbf{A})^2$$

How you choose to interpret it is up to you: saying that

going into Minkowski space corresponds to staying in Euclidean space, but inverting the spatial coordinates of one of the vectors

is equivalent to just using the Minkowski metric. I am not sure that this interpretation is going to help you gain physical intuition, though. It's better to just get accustomed to using the Minkowski metric.
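
As an illustration of that equivalence (a sketch with arbitrary numbers, not part of the original answer), the index-free form $A^T\eta A$ from the comments, the explicit contraction with $\eta_{\mu\nu}$, and the 'lower one index first, then sum' picture all give the same invariant:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # signature (+, -, -, -)
A = np.array([5.0, 1.0, 2.0, 3.0])       # contravariant components A^mu

s1 = A @ eta @ A                          # index-free A^T eta A
s2 = np.einsum('m,mn,n->', A, eta, A)     # explicit A^mu eta_{mu nu} A^nu
s3 = np.dot(A, eta @ A)                   # lower one index, then plain sum

print(s1, s2, s3)  # all print 11.0
```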

– Viking
  • I think a potential source of confusion is the fact that $A^\mu$ can be used to mean both the four-vector and a specific component. In my post, I wrote $A_\mu A^\mu$ as the dot product of a row vector and a column vector. Once you write the vector $A_\mu$ in terms of the components $A^\mu$, then the summation convention and the standard Euclidean product are basically indistinguishable, right? Then I agree with you that it makes more sense to just use the standard conventions in any practical situation. – ForgiveMyNoobness Apr 05 '21 at 16:41
  • The dot product of a row vector and a column vector is yet another way of representing the summation convention. The Euclidean product is superficially given by the same formula because the Euclidean metric is just the unit matrix. There might indeed be confusion between vectors and components. Either you use index-free notation and write the inner product as $A^T\eta A$ (as Charlie pointed out in the comment to your post), with $A$ denoting the vector itself; or you use $A^\mu$ and $A_\mu$ to denote the components of the contravariant and covariant vectors respectively, and the inner product is $A^\mu A_\mu$ with summation. – Viking Apr 05 '21 at 21:06