
If everything you are working with is in Euclidean 3-space (or $n$-space) equipped with the dot product, is there any reason to bother with distinguishing between 1-forms and vectors? or between covariant and contravariant tensor components? I'm fairly certain that if you do not, then none of your calculations or relations will be numerically wrong, but are they "mathematically wrong"?

Example: I'll write some basic tensor relations without distinguishing between vectors and 1-forms/dual vectors or between covariant/contravariant components. All indices will be subscripts. Tell me if any of the following is wrong:

I would often describe a rigid transformation of an orthonormal basis (in Euclidean 3-space), ${\hat{\mathbf{e}}_i}$, to some new orthonormal basis ${\hat{\mathbf{e}}'_i}$ as $$ \hat{\mathbf{e}}'_i\;=\; \mathbf{R}\cdot\hat{\mathbf{e}}_i \;=\; R_{ji}\hat{\mathbf{e}}_j \qquad\qquad (i=1,2,3) $$ for some proper orthogonal 2-tensor ${\mathbf{R}\in SO(3)}$ (or whatever the tensor equivalent of $SO(3)$ is, if that's a thing). It's then pretty straightforward to show that the components, $R_{ij}$, of ${\mathbf{R}}$ are the same in both bases, and that $\mathbf{R}$ itself is given in terms of ${\hat{\mathbf{e}}_i}$ and ${\hat{\mathbf{e}}'_i}$ by

$$ \mathbf{R}=R_{ij}\hat{\mathbf{e}}_i\otimes\hat{\mathbf{e}}_j = R_{ij}\hat{\mathbf{e}}'_i\otimes\hat{\mathbf{e}}'_j = \hat{\mathbf{e}}'_i\otimes\hat{\mathbf{e}}_i \qquad,\qquad\quad R_{ij}=R'_{ij}=\hat{\mathbf{e}}_i\cdot\hat{\mathbf{e}}'_j $$

Then, given the basis transformation in the first equation, the components of some vector $\vec{\mathbf{u}}=u_i\hat{\mathbf{e}}_i=u'_i\hat{\mathbf{e}}'_i$ and some 2-tensor $\mathbf{T}=T_{ij}\hat{\mathbf{e}}_i\otimes \hat{\mathbf{e}}_j = T'_{ij}\hat{\mathbf{e}}'_i\otimes \hat{\mathbf{e}}'_j$ would transform as

$$ u'_i = R_{ji}u_j \qquad \text{matrix form: } \qquad [u]'= [R]^{\top}[u] \\ T'_{ij} = R_{ki}R_{sj}T_{ks} \qquad \text{matrix form: } \qquad [T]' = [R]^{\top}[T][R] $$ and for some $p$-tensor we would have $$S'_{j_1j_2\dots j_p} \;=\; \big( R_{ i_1j_1}R_{ i_2j_2} \dots R_{ i_pj_p} \big) S_{ i_1 i_2\dots i_p} $$
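These transformation rules are easy to sanity-check numerically. A minimal NumPy sketch, using a made-up rotation and arbitrary components (everything here is illustrative, not from any particular problem):

```python
import numpy as np

# A hypothetical rotation: 30 degrees about the z-axis (both bases orthonormal).
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
R = np.array([[c,  -s,  0.0],
              [s,   c,  0.0],
              [0.0, 0.0, 1.0]])

u = np.array([1.0, 2.0, 3.0])      # components u_i in the old basis
T = np.arange(9.0).reshape(3, 3)   # components T_ij in the old basis

u_new = R.T @ u                    # u'_i = R_ji u_j
T_new = R.T @ T @ R                # T'_ij = R_ki R_sj T_ks

# Invariants are unchanged -- with orthonormal bases no metric factors appear,
# so "all indices down" never produces a numerically wrong answer.
assert np.isclose(u @ u, u_new @ u_new)
assert np.isclose(np.trace(T), np.trace(T_new))
```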

and if ${\hat{\mathbf{e}}_i}$ is an inertial basis and ${\hat{\mathbf{e}}'_i}$ is some rotating basis, then the skew-symmetric angular velocity 2-tensor of the ${\hat{\mathbf{e}}'_i}$ basis relative to ${\hat{\mathbf{e}}_i}$ is given by

$$ \boldsymbol{\Omega} \;=\; \dot{\mathbf{R}}\cdot\mathbf{R}^{\top} \qquad,\qquad \text{components in } \hat{\mathbf{e}}_i \;: \qquad \Omega_{ij} = \dot{R}_{ik}R_{jk} $$

Or, in matrix form (in the $\hat{\mathbf{e}}_i$ basis) the above would be $[\Omega]=[\dot{R}][R]^{\top}$. The third equation can be used to convert to the $\hat{\mathbf{e}}'_i$ basis. The familiar angular velocity (pseudo)vector is then given by

$$ \vec{\boldsymbol{\omega}}= -\tfrac{1}{2}\epsilon_{ijk}(\hat{\mathbf{e}}_j\cdot \boldsymbol{\Omega}\cdot \hat{\mathbf{e}}_k)\hat{\mathbf{e}}_i \qquad,\qquad \text{components in } \hat{\mathbf{e}}_i \;: \qquad \omega_i = -\tfrac{1}{2}\epsilon_{ijk}\Omega_{jk} $$

where $\epsilon_{ijk}$ are the components of the Levi-Civita 3-(pseudo)tensor, $\pmb{\epsilon}$, which itself may be written in any right-handed orthonormal bases as

$$ \pmb{\epsilon} = \epsilon_{ijk}\hat{\mathbf{e}}_i\otimes\hat{\mathbf{e}}_j \otimes \hat{\mathbf{e}}_k = \tfrac{1}{3!}\epsilon_{ijk}\hat{\mathbf{e}}_i\wedge\hat{\mathbf{e}}_j \wedge \hat{\mathbf{e}}_k = \hat{\mathbf{e}}_1\wedge\hat{\mathbf{e}}_2 \wedge \hat{\mathbf{e}}_3 \quad,\quad \epsilon_{123}=1 $$

The time-derivative of some vector $\vec{\mathbf{u}}=u_i\hat{\mathbf{e}}_i=u'_i\hat{\mathbf{e}}'_i$ would then be given in terms of the components in the inertial and rotating bases by the familiar kinematic transport equation $$ \dot{\vec{\mathbf{u}}} = \dot{u}_i\hat{\mathbf{e}}_i = \dot{u}'_i\hat{\mathbf{e}}'_i + \boldsymbol{\Omega}\cdot\vec{\mathbf{u}} \;=\; (\dot{u}'_i + \Omega'_{ij}u'_j )\hat{\mathbf{e}}'_i $$ where $\boldsymbol{\Omega}\cdot\vec{\mathbf{u}} = \vec{\boldsymbol{\omega}}\times\vec{\mathbf{u}}$.
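The whole chain — $\boldsymbol{\Omega}=\dot{\mathbf{R}}\cdot\mathbf{R}^{\top}$, $\omega_i=-\tfrac{1}{2}\epsilon_{ijk}\Omega_{jk}$, and $\boldsymbol{\Omega}\cdot\vec{\mathbf{u}}=\vec{\boldsymbol{\omega}}\times\vec{\mathbf{u}}$ — can also be checked numerically. A sketch assuming a constant-rate rotation about the z-axis (the rate $w_0$ and test vector are made up), with $\dot{\mathbf{R}}$ approximated by a central finite difference:

```python
import numpy as np

# Levi-Civita symbol, eps[0,1,2] = 1
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

w0, t, dt = 0.7, 1.3, 1e-6
Rdot = (Rz(w0 * (t + dt)) - Rz(w0 * (t - dt))) / (2 * dt)  # finite-difference Rdot
Omega = Rdot @ Rz(w0 * t).T                                # Omega_ij = Rdot_ik R_jk

omega = -0.5 * np.einsum('ijk,jk->i', eps, Omega)          # omega_i = -1/2 eps_ijk Omega_jk

u = np.array([0.3, -1.1, 2.0])
assert np.allclose(Omega, -Omega.T, atol=1e-6)             # skew-symmetric
assert np.allclose(omega, [0.0, 0.0, w0], atol=1e-5)       # rate w0 about z
assert np.allclose(Omega @ u, np.cross(omega, u), atol=1e-5)
```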
end example

question: So, I'm pretty sure that none of the above would give me numerically incorrect relations. But I called everything either a vector, 2-tensor, or 3-tensor. Nothing about forms, (1,1)-tensors, (0,2)-tensors, dual vectors, etc. Is the above formulation mathematically ''improper''? For instance, do I need to write ${\mathbf{R}}$ as a (1,1)-tensor, ${\mathbf{R}}=R^{i}_{\,j}\hat{\mathbf{e}}_i\otimes\hat{\boldsymbol{\sigma}}^j$, using the basis 1-forms $\hat{\boldsymbol{\sigma}}^j$? Does the angular velocity tensor need to be written as a 2-form or (0,2)-tensor?

context: My BS is in physics and I am currently a PhD student in engineering. Aside from a graduate relativity course I took in the physics department, I have never once seen raised indices or any mention of dual vectors/1-forms in any class I have ever taken or in any academic paper I have ever read. That was until I recently started teaching myself some differential geometry in hopes of eventually understanding Hamiltonian mechanics from the geometric view. So far, I have mostly only succeeded in destroying my confidence in my knowledge of the basic tensor algebra involved in classical dynamics.

3 Answers


As long as you restrict yourself to orthonormal bases, then that's fine. The reason for this is that indices are "raised" or "lowered" via the metric, and in an orthonormal basis the metric components are $g_{ij}=\delta_{ij}$.

As soon as your basis is non-orthonormal, however, this goes out the window. There are many good reasons to use non-orthonormal bases in various circumstances, but since you've explicitly stated that you'd ultimately like to understand Hamiltonian mechanics from a geometrical standpoint, I'll highlight the most glaring problem: in Hamiltonian mechanics on a symplectic manifold, there is no metric, and so the entire concept of orthonormality goes out the window.
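To see the issue concretely before getting to the symplectic case: in a non-orthonormal basis the metric components are no longer $\delta_{ij}$, and a vector's covariant and contravariant components genuinely differ. A minimal NumPy sketch with a made-up skewed basis for Euclidean 2-space:

```python
import numpy as np

# A hypothetical skewed (non-orthonormal) basis, written as columns of
# components in the standard orthonormal basis:
e1 = np.array([1.0, 0.0])
e2 = np.array([1.0, 1.0])
E = np.column_stack([e1, e2])

g = E.T @ E                  # metric components g_ij = e_i . e_j  (not delta_ij)

v = np.array([2.0, 3.0])     # contravariant components: v = 2 e1 + 3 e2
v_cov = g @ v                # covariant components v_i = g_ij v^j

# The two sets of components now disagree, and the dot product needs the metric:
assert not np.allclose(v, v_cov)
v_cart = E @ v                                   # standard-basis components
assert np.isclose(v_cart @ v_cart, v @ v_cov)    # v.v = v^i v_i = g_ij v^i v^j
```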

It is still useful to define an isomorphism between tangent vectors and their duals on a symplectic manifold, but we need something other than a metric to do so. The structure we use is the symplectic form $\Omega$, which is by definition antisymmetric; this immediately implies that $\Omega_{ij}=\delta_{ij}$ is ruled out as a possibility in any coordinate system. As a result, vectors and their duals always have different components, and distinguishing between them and their transformation behaviors is crucial.
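A minimal sketch of this in coordinates, using the canonical symplectic form on $\mathbb{R}^4$ (phase space for two degrees of freedom, block form $\Omega = \begin{pmatrix} 0 & I \\ -I & 0\end{pmatrix}$; the test vector is arbitrary):

```python
import numpy as np

# Canonical symplectic form in coordinates (q1, q2, p1, p2).
I2 = np.eye(2)
Omega = np.block([[np.zeros((2, 2)), I2],
                  [-I2, np.zeros((2, 2))]])

v = np.array([1.0, 2.0, 3.0, 4.0])   # a tangent vector
v_flat = Omega @ v                   # its symplectic dual (a covector)

assert np.allclose(Omega, -Omega.T)  # antisymmetric, so never delta_ij
assert not np.allclose(v_flat, v)    # vector and dual components always differ
# Unlike a metric, Omega pairs every vector with itself to zero:
assert np.isclose(v @ Omega @ v, 0.0)
```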

J. Murray
  • ahhh. So if we stay in Euclidean 3-space as an example, but for some reason choose some $\vec{\mathbf{e}}_i$ basis that is neither orthogonal nor orthonormal but instead $\vec{\mathbf{e}}_i\cdot\vec{\mathbf{e}}_j=g_{ij}$, then I can see why a reciprocal basis $\vec{\mathbf{k}}_i$ satisfying $\vec{\mathbf{k}}_i\cdot\vec{\mathbf{k}}_j=(g^{-1})_{ij}$ and $\vec{\mathbf{k}}_i\cdot\vec{\mathbf{e}}_j=\delta_{ij}$ would be useful (maybe necessary?). But these $\vec{\mathbf{k}}_i$ would still be vectors, wouldn't they? They wouldn't be 1-forms (members of the dual space). Or would they? – J Peterson Jul 11 '22 at 22:15
  • 7
    The old-school approach to differential geometry took the route you describe, in which we would talk about contravariant and covariant components of a vector which corresponded to expanding in the original and reciprocal basis, respectively. The more modern treatment makes a structural distinction between vectors and covectors, with the latter being elements of the dual space. In my opinion, this ends up being far cleaner, but it has other benefits as well. In particular, it is important to note that vectors, covectors, and tensors are all perfectly well-defined on a smooth manifold [...] – J. Murray Jul 11 '22 at 22:22
  • Also, I'm getting ahead of myself here, but am I correct in thinking that in the geometric view of Lagrangian mechanics, the configuration manifold does have a metric for its tangent bundle? Why do we lose this metric (rather than transform it) in going to the Hamiltonian formulation? This question might be way too involved to answer in a comment. – J Peterson Jul 11 '22 at 22:23
  • 2
    @JPeterson [...] with no additional structure (in particular, in the absence of a metric/inner product). Making the presence of a metric an intrinsic part of your approach to differential geometry becomes self-limiting as soon as you want to go beyond the study of metric manifolds, which is your end goal anyway. – J. Murray Jul 11 '22 at 22:26
  • 2
    @JPeterson That's a good question. The answer is that the configuration space $Q$ does typically have a metric, but the tangent bundle to the configuration space $TQ$ (which is a manifold in its own right!) does not. To be clear, it is possible to define one, but in this context we'd be doing so purely to blur the lines between vectors and covectors, which is (to me) wildly counterproductive. – J. Murray Jul 11 '22 at 22:33
  • 1
    @JPeterson You may be interested in this answer to a related question and the comments therein. – J. Murray Jul 11 '22 at 22:56

No, it's not mathematically wrong. To show it mathematically wrong, you'd need to show that it leads to a contradiction.

As for what you should use in physics, that depends on the problem, how your intuition works, how it helps with the formulation, whether it illuminates or obfuscates what's going on, ...

John Doty
  • I suppose the only contradiction would be in the labeling/classification of objects? The math texts I've been trying to read (Abraham & Marsden, Boothby, Lee, Nakahara), seem to require using the dual space in order to do anything with vectors (I may be misinterpreting this). If I write $\vec{\mathbf{u}}\cdot\vec{\mathbf{u}}=u_i u_i$, then it seems like, by definition, one of the $u_i$ terms must be the components of a 1-form, not a vector. It's just that in an orthonormal basis in flat space, these components happen to be the same. – J Peterson Jul 11 '22 at 22:39
  • As for obfuscation, it seems to me that involving the dual space and k-forms in flat Euclidean space unnecessarily complicates otherwise simple things (assuming orthonormal bases). But that may be because it's all new to me. Regardless, I'm mostly using flat 3-space as a "playground" to learn this material. – J Peterson Jul 11 '22 at 22:43
  • @JPeterson In math, you make up any rules you wish, as long as they lead to consistent results. However, the concept of "distinction without a difference" may apply. – John Doty Jul 11 '22 at 22:46
  • @JPeterson You're taking the most common path into the material. If it starts to matter, use more sophisticated concepts. But there's no virtue to unnecessary abstraction in physics. – John Doty Jul 11 '22 at 22:49

It is very much wrong in my opinion. If you don't make the contravariant/covariant distinction, then the transformation rules of derivative quantities like the gradient, curl, divergence, etc. don't make sense. Many basic vector calculus books derive these operators in each coordinate system through geometry, and tell the reader to find the conversion rules between coordinate systems using geometry.

But it may be unclear to a student how it all relates algebraically. To get the right algebraic conversion rules for these expressions between coordinate systems, we cannot do without covariant and contravariant indices.
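As a concrete illustration of the claim (my own sketch, not from any particular book): take $f(x,y)=x^2+y$ and compute its gradient in polar coordinates. The partial derivatives $\partial f/\partial q^i$ are covariant components of $df$, and the inverse metric $g^{ij}=\operatorname{diag}(1,1/r^2)$ is needed to raise the index before pushing back to Cartesian components:

```python
import numpy as np

# Evaluate at an arbitrary sample point.
r, th = 2.0, np.pi / 3
x, y = r * np.cos(th), r * np.sin(th)

# Partial derivatives of f(r, th) = r^2 cos^2(th) + r sin(th)
# (covariant components of df in polar coordinates):
df_dr = 2 * r * np.cos(th) ** 2 + np.sin(th)
df_dth = -2 * r ** 2 * np.cos(th) * np.sin(th) + r * np.cos(th)

# Raise the index with g^ij = diag(1, 1/r^2) to get vector components
# in the polar coordinate basis:
grad_polar = np.array([df_dr, df_dth / r ** 2])

# Push to Cartesian components with the Jacobian d(x,y)/d(r,th):
J = np.array([[np.cos(th), -r * np.sin(th)],
              [np.sin(th),  r * np.cos(th)]])
# Cartesian gradient of f is (2x, 1); only the metric-raised components match it.
assert np.allclose(J @ grad_polar, [2 * x, 1.0])
assert not np.allclose(J @ np.array([df_dr, df_dth]), [2 * x, 1.0])
```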

  • Interesting. Could someone explain why they think my answer is erroneous – tryst with freedom Jul 12 '22 at 19:33
  • You contradict yourself. "Many vector calculus books" do it the wrong way according to you, but, of course, there's nothing wrong with the math in those books. What I see is that unnecessary abstraction gets in the way of understanding the physics, and, of course, that's what matters here, not abstract algebra. – John Doty Jul 13 '22 at 00:21
  • Interesting, I challenge you to explain the algebraic transformation rules from one coordinate system to another of the gradient, div etc from what is taught. Furthermore, Contravariant, covariant also have geometric basis to them. It's not like they are abstract for the sake of being abstract... @JohnDoty – tryst with freedom Jul 13 '22 at 00:28
  • As you say, the books do it. Does understanding the algebraic transformation rules illuminate the physics? Sometimes yes, sometimes, no. If you're a mathematician, of course, the algebraic transformation rules are central, but this is a physics group. – John Doty Jul 13 '22 at 00:32
  • Personally, I find tensor representation more intuitive for angular velocity but vector representation more intuitive for magnetic fields. Similar mathematical objects, but different physical intuitions for different sorts of problems. – John Doty Jul 13 '22 at 00:42