6

In introductory courses, vectors are defined as objects with direction and magnitude. I guess everyone has arrows in mind when talking about vectors and that's probably the most intuitive description, but I was wondering if mathematical physicists have found more well-founded ways of describing vectors. I am not talking about arbitrary vectors (elements of a vector space), but about three dimensional vector quantities like velocity, current density, electric field and so on.

I am aware of two concepts that seem promising: The translation space of an affine space (1.) and the tangent spaces of a manifold (2.). But there are probably other concepts I don't even know of.

My thoughts:

  1. Once we have chosen a system (not necessarily a frame) of reference, we can model space as a three-dimensional affine space. So one guess would be that vectors are elements of the translation space.

  2. Since maps like the electric or magnetic field (velocity/force fields) are called vector fields in introductory courses, one might expect a connection to the vector fields of differential geometry. The $3$-dim. affine space mentioned above has an associated $3$-dim. manifold, so it seems plausible that these quantities can be modeled as vector fields on that manifold.$^1$

These are really just some ideas - I haven't found a reference yet. If this goes in the right direction, I'd be happy about a confirmation/elaboration.

Any thoughts and comments are much appreciated!

$^1$As far as I know, the electric field can't be modeled as a vector field on spacetime, because it is only defined with respect to a specific frame of reference. (I am aware of the formulation of electrodynamics using the EM field tensor $F\colon M\to \Lambda^2T^*M$.)

Filippo
  • 1,781
  • Vectors have a very precise mathematical definition, see here. – Charlie Dec 04 '20 at 16:34
  • A comment to your second footnote. $F$ should take values in the space of 2-forms (antisymmetric subspace of $T^*M\otimes T^*M$), or even better, it is a connection on a principal $U(1)$-bundle on the spacetime manifold. – NDewolf Dec 04 '20 at 19:44
  • @NDewolf Thank you for the comment - you think that writing $F^{\sharp\sharp}\colon M\to TM\otimes TM$ would have been better, right? – Filippo Dec 04 '20 at 20:01
  • I guess you could do that, but that doesn't make it a lot clearer. – NDewolf Dec 05 '20 at 15:43
  • 1
    @NDewolf I just wanted to make sure we're talking about the same thing - I see that your notation is better since we want to write $dF=0$ and $d{\star}F=4\pi J/c$ and not $dF^{\flat\flat}=0$ and $d{\star}F^{\flat\flat}=4\pi J/c$. Thanks for pointing that out. – Filippo Dec 05 '20 at 16:44
  • Your comment about tangent spaces (and cotangent spaces and so on) seems to precisely answer your question. I'm not sure what additionally you are interested in. A confirmation? An intuitive explanation? A generalization? Perhaps you can elaborate on what direction you would want an answer to go. – Maximal Ideal Dec 15 '20 at 08:26
  • 1
    @MaximalIdeal Thank you for your comment :) First of all, my "thoughts" are really nothing more than some ideas I had (I have not found a reference yet) - so yes, if this goes in the right direction, I'd be interested in a confirmation/elaboration/further references. My second motivation was that there might be concepts I haven't even heard of (e.g. the concept of vector flows in Richard Myers answer). – Filippo Dec 15 '20 at 08:46
  • @NDewolf I have been studying fiber bundles and connections and am finally able to understand your comment. It's not the first time that I read that electrodynamics can be formulated using a $U(1)$-bundle, however, it always remains unspecified what the total space $P$ is. Since we are talking about a bundle, there must be a projection $\pi\colon P\to M$, where $M$ is probably spacetime, but what about $P$? – Filippo Jun 22 '21 at 17:12
  • 1
    @Filippo So that is indeed a good question, $P$ is often not explicitly specified. But that is not a problem. Given a cover of $M$, a Lie group $G$ and a cocycle/set of transition functions, one can recover the total space $P$. These data are the ones that are being used in physics all the time. And since physicists prefer to work locally, they do not care about an explicit description of the total space. – NDewolf Jun 23 '21 at 08:32
  • @NDewolf Thank you! "Given a cover of M, a Lie group G and a cocycle/set of transition functions, one can recover the total space P" Could you please elaborate or give a reference? If you think that's a good idea, I can post a new question. – Filippo Jun 23 '21 at 08:48
  • 1
    @Filippo This is simply the "fibre bundle (re)construction theorem": https://en.wikipedia.org/wiki/Fiber_bundle_construction_theorem. Any fibre bundle is determined by exactly that data. – NDewolf Jun 23 '21 at 09:23
  • @NDewolf Thank you very much! – Filippo Jun 23 '21 at 09:32
  • @NDewolf If we only consider classical electrodynamics, we cannot motivate the postulation of a cocycle, can we? – Filippo Jul 22 '21 at 15:27
  • Cocycles as in the functions that relate different patches of a fibre bundle? – NDewolf Jul 23 '21 at 09:31
  • @NDewolf I think the answer is yes. From what I've heard, it is possible to motivate the idea that the electromagnetic field is given by a connection $1$-form on a $U(1)$-bundle in the context of QED. But what about classical electrodynamics (I am thinking about the Maxwell equations and special relativity)? Is there some naturally occurring cocycle such that we can refer to the fiber bundle construction theorem? – Filippo Jul 23 '21 at 11:51
  • You don't need QED, you just need the gauge theory interpretation which already holds classically. – NDewolf Jul 23 '21 at 12:16
  • @NDewolf I see how the electromagnetic field can be identified with a $2$-form $F\in\Omega^2(M)$ and I understand that there exists a (not uniquely defined) $A\in\Omega^1(M)$ such that $F=\mathrm{d}A$, but I don't see how there naturally occurs a cocycle such that we can regard $A$ as a local connection associated to a connection $\omega\in\Omega^1(P)$ on a principal bundle. – Filippo Jul 23 '21 at 12:44
  • @NDewolf I have started a bounty on this closely related question, maybe you are willing to have a look. – Filippo Jul 23 '21 at 13:01
  • When looking at change of coordinates, the transformation rule is different between a connection and an ordinary differential form (at least on topologically nontrivial spaces). This should also be the case for EM fields – NDewolf Jul 24 '21 at 17:36
  • @NDewolf Are you talking about the transformation rule for local gauges, $A_{s'}=\mathrm{Ad}_{g^{-1}}A_s+g^*\mu$ with $A_s:=s^*A$? – Filippo Jul 24 '21 at 18:12
  • Exactly, this is not how an ordinary form transforms. – NDewolf Jul 24 '21 at 20:13
  • @NDewolf As you know, a connection one-form on a principal bundle defines a family $I\ni i\mapsto A_i\in\Omega^1(U_i)\otimes\mathfrak{g}$ with $A_j=\mathrm{Ad}_{g_{ij}^{-1}}A_i+g_{ij}{}^*\mu$. Can it be shown that the converse is also true, that is, that such a family defines a connection? I'm asking, because this result together with the fiber bundle construction theorem might show that both approaches to classical ED are equivalent (the classical approach and the modern approach with the Lagrangian). – Filippo Jul 25 '21 at 07:29
  • Yes, as far as i know this can be shown. There are, i think, three equivalent approaches. Through distributions, through the local patching of one-forms and through parallel transport. – NDewolf Jul 25 '21 at 09:20
  • @NDewolf Just to make sure I understand: Not only can we prove what I asked for, but we can prove it in three different ways? – Filippo Jul 25 '21 at 09:35

5 Answers

5

The notion of vector (not just in the sense of a point in a vector space) can be and is generalized in countless different ways. Here I just mention one that I personally find very intriguing, especially in relation to electromagnetism, and give some personal thoughts. References are given throughout the answer and at the end.

I think I heard of theories that use differential manifolds (eg to model spacetime) with a bundle of affine spaces, rather than vector spaces. Curtis & Miller mention such bundles in chap. 16:

Personally I see affine space more like a particular case of a differential manifold: one in which there is a flat affine connection. From this viewpoint the notion of vector on an affine space is just the same as that on a differential manifold (tangent vector), with the only difference that there's a "friendlier" (but less interesting) parallel transport in affine space.

At the same time, there's also an alternative way of seeing vectors (and their generalizations discussed below) in an affine space. It appears in the geometric calculus of Grassmann and Peano:

In geometric calculus, vectors aren't introduced as translations (even though they can also be used that way there), but as sort of points at infinity. This leads to a framework in which one works simultaneously on affine space, projective space, and (multi)vector space, yet keeping them somewhat distinct. I think it has great pedagogic potential for introducing and teaching these kinds of space.

There are a couple of brilliant and clear articles by Goldman that explain this "dialogue" between these three spaces in geometric calculus (which can only be glimpsed in Peano, although it's there on closer inspection). Goldman uses it for computer graphics:

I think it would be great if this viewpoint could be brought to differential geometry, but don't know of any works that have done that.


Ideas and objects extremely close to those of geometric calculus can also be constructed on a vector space, however, and from there they're naturally brought to differential geometry. This is an old and well-developed framework, albeit not yet as widely known as it deserves to be. The basic idea is very intuitive, and it's the same as in the geometric calculus:

A vector is usually identified by: (1) a direction, in the sense of a specific straight line; (2) an orientation on that line, with two possible choices; (3) a magnitude on that line, in the sense of a chosen segment there. All three of these characteristics can be generalized, in an ambient space of dimension $D$:

  1. Instead of a line we can take any $k$-dimensional flat subset with $k\le D$ (plane, hyperplane, etc.).

  2. The orientation can be chosen on the $k$-dimensional flat subset itself (two choices), or on its complement, that is, the equivalence class of all subsets parallel to the specific subset. Figuratively, this corresponds to taking the orientation "around" the subset, rather than on it.

  3. Also the magnitude ($k$-volume) can be taken on the flat subset, or on its complement. (This notion of magnitude doesn't require a metric; e.g. see Affine and convex spaces: blending the analytic and geometric viewpoints for an explanation.)

The freedom in these three choices, which can be combined, leads to "generalized vectors" that also have intuitive visual representations in 2D and 3D. Here are the drawings from Schouten's book, for the case of a 3D space:

[Figures from Schouten's book: drawings of the generalized vectors in a 3D space]

The first kind of generalization leads to multivectors, which represent oriented areas, volumes, and so on (in differential geometry, tangent areas, volumes, and so on). The second kind leads to twisted vectors, which can represent notions such as rotational momentum or the direction of flux through an area. The third kind leads to (multi)covectors (forms, in differential geometry): objects that can act as basic measurement operations or, in differential geometry, that are meant to be integrated.

So what we obtain are none other than the multivectors and multicovectors (that is, elements of the dual space) of linear algebra, with so-called "twisted" or "straight" orientations ("odd" and "even", as de Rham called them). They also correspond to completely antisymmetric, fully contravariant or fully covariant tensors. And they also correspond to the objects treated by the geometric calculus of Grassmann and Peano.

These objects represent very well many physical quantities, such as densities, forces, and especially the objects in electromagnetism. They fully express the symmetries and operational meaning of those objects.

Just two examples:
– Charge density is something that's meant to be integrated over a region of space, to give the total charge in that region. Such an integration must not depend on distances or on a metric, since it's a purely topological notion: we choose a closed surface in space, and we say what the total charge inside is. It turns out that this notion and operational meaning are fully expressed by one of the objects above: a twisted $3$-covector.
– The electric field corresponds to a straight $1$-covector, and again this stems from its operational meaning: integrated along a curve (again, this is all topological), it yields a potential difference.
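In symbols, these two operational meanings can be sketched as follows (notation assumed here: $\varrho$ the charge-density twisted $3$-covector field, $E$ the electric-field straight $1$-covector field):

$$Q(V)=\int_V \varrho,\qquad U(\gamma)=\int_\gamma E,$$

where $Q(V)$ is the total charge in a region $V$ and $U(\gamma)$ the potential difference along a curve $\gamma$; neither integral requires a metric, only the orientation data carried by the integrands.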

...And many other physical notions: all of electromagnetism and a large part of mechanics. The most fascinating aspect is that these notions allow you to interpret the electromagnetic (Faraday) field as the motion of a continuum of strings (1D objects) in spacetime, just as classical mechanics treats a continuum of particles (0D objects).

One object that still has some mystery from this point of view is the stress-energy-momentum tensor.

I recommend the works of Bossavit and of Hehl & al. below to see what the various electromagnetic fields correspond to.

There is a very extensive literature on these notions and their applications. Here are some further references.

To get acquainted with them and their use in electromagnetism:

For their geometric properties and application to physics in general:

Some remarks about the "natural" multivector form of the stress-energy-momentum tensor are given here:


This is just the tip of the iceberg, and it's a huge iceberg. Some forward- and backward-searches from these references will reveal many many other interesting ones.

pglpm
  • 3,721
  • 1
    This is a great answer, thank you very much. The references are much appreciated. – Filippo Dec 15 '20 at 08:54
  • @Filippo You're welcome! Please do read them, especially Bossavit and Burke, because they're a fun, intuitive read, and freely available. As an ante, let me quote the epigraph in Burke's book: "To all those who, like me, have wondered how in hell you can change $\dot{q}$ without changing $q$" :) – pglpm Dec 15 '20 at 09:28
  • @Filippo By the way, if you read Italian, this is a great reading: https://archive.org/details/calcologeometric00gpea – pglpm Dec 15 '20 at 12:30
  • I do :) Didn't expect a book from 1888 ^^ – Filippo Dec 15 '20 at 12:48
  • @Filippo "A procedure which tums out to be illuminating, but is nonetheless seldom followed, is to read the classics." (G. Astarita) :) – pglpm Dec 15 '20 at 14:46
  • 1
    This is a great answer.... which very much aligns with my way of thinking and visualizing, especially with regards to references by Burke, Schouten, van Dantzig, Hehl, and Bossavit. Schouten's 1924 Der Ricci-Kalkul had earlier versions of some his drawings of "directed quantities" (as in his Tensor Analysis for Physicists), which I first saw in Misner-Thorne-Wheeler's Gravitation and then in Burke's Applied Differential Geometry. Bossavit applies these ideas to computational electromagnetism. In addition to electromagnetism, this can be used for thermodynamics, mechanics, and relativity. – robphy Dec 15 '20 at 18:35
2

I will note that these two notions of a vector field are actually related by a concept known as flow. Essentially, a vector field generates a set of "translations" on a manifold, and conversely such a set of "translations" defines a vector field.

Flows are extremely important both in geometry generally and physics specifically. For example, these give an exceptional framework within which the canonical formalism of classical mechanics (and large swaths of quantum mechanics) can be understood. This Wiki page might get you started on the topic if you're unfamiliar, though any reasonable book covering differential geometry will discuss flows. For example, Nakahara's book has a section dedicated to them.
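Concretely, the flow of a vector field $X$ on a manifold $M$ is the one-parameter family of maps $\varphi_t\colon M\to M$ obtained by integrating $X$:

$$\frac{\mathrm{d}}{\mathrm{d}t}\varphi_t(p)=X\bigl(\varphi_t(p)\bigr),\qquad\varphi_0=\mathrm{id}_M.$$

Each $\varphi_t$ is a "translation" along the integral curves of $X$, and conversely $X$ is recovered by differentiating: $X(p)=\frac{\mathrm{d}}{\mathrm{d}t}\big|_{t=0}\varphi_t(p)$.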

As for your thoughts on E&M specifically, let me say that you are correct that there are issues defining the $E$ and $B$ fields in a Lorentz-invariant way. The standard way to do electrodynamics Lorentz-invariantly is to use the vector potential: not the 3-component vector, but the 4-vector potential (the zeroth component is the scalar potential, while the other three components form the 3-vector potential). For example, see this Wiki page.

The vector potential is, in essentially all respects, the correct quantity to work with if you want Lorentz invariance. For instance, it actually transforms correctly as a 4-vector under Lorentz transformations whereas the $E$ and $B$ fields do not (look up their transformations in any E&M book, it's not pretty). The $E$ and $B$ fields should instead be understood as the components of the field strength tensor, $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$ and transform as such under Lorentz transformations.
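For concreteness, in one common convention (metric signature $(+,-,-,-)$, units with $c=1$) the identification of the components reads

$$F_{0i}=E_i,\qquad F_{ij}=-\epsilon_{ijk}B_k\qquad(i,j,k\in\{1,2,3\}),$$

so a Lorentz transformation acts on the whole tensor, $F'_{\mu\nu}=\Lambda_\mu{}^\rho\Lambda_\nu{}^\sigma F_{\rho\sigma}$, mixing the $E$ and $B$ components into each other.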

The vector potential is also needed to write down the Lagrangian of Maxwell's equations coupled to currents/densities.

So in the end, you're right that there are issues with choosing reference frames, but as is almost always the case in physics, there's a very neat and tidy way to circumvent all those issues at the cost of dealing instead with a slightly different object (in this case, the vector potential).

1

I would carefully separate the concept of a vector from the more complex notion of a vector field. The latter presupposes the former.

I would say that there is only one definition of a vector: a member of a set whose elements satisfy all the axioms defining a vector space. See, for instance, the corresponding Wikipedia page.

Once that abstract definition is clear, one can check that the set of arrows with coinciding tails is an easy-to-visualize example of a vector space. However, one should not forget that the set of arrows is just one simple geometric example of a vector space. Physical vectors are not arrows.

Moreover, the arrows example may suggest the false definition of vectors in 3D as anything having direction and magnitude. Even though this definition is often stated, it is wrong. The composition laws are an equally important part of the concept of a vector. If the composition law does not satisfy the vector space axioms, even an object equipped with magnitude and direction cannot be considered a vector. A well-known example is the case of rotations in space. It is undoubtedly possible to assign a direction to a rotation around an axis (the axis direction) and a magnitude (the angle of rotation). However, suppose we define the sum of two rotations as the resulting final rotation. In that case, we realize that this composition law is not commutative. Take a book, rotate it by $\pi/2$ around the x-axis and then by the same amount around the y-axis; then restart from the initial orientation, performing the y-axis rotation first and the x-axis rotation second. The final results will differ.
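The book experiment can be checked numerically. Here is a minimal sketch using rotation matrices (the `rot_x`/`rot_y` helper names are just illustrative); composing rotations is matrix multiplication, so the order matters:

```python
import numpy as np

def rot_x(theta):
    """Rotation by angle theta about the x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

def rot_y(theta):
    """Rotation by angle theta about the y-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

half_pi = np.pi / 2
xy = rot_y(half_pi) @ rot_x(half_pi)  # x-rotation applied first, then y
yx = rot_x(half_pi) @ rot_y(half_pi)  # y-rotation applied first, then x
print(np.allclose(xy, yx))  # False: the two orders give different orientations
```

Both products are still perfectly good rotations (orthogonal matrices); they are simply different rotations, which is exactly why this "sum" fails the commutativity axiom of a vector space.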

On the other hand, every particular example (vectors in an affine space, vectors in the tangent space of a differentiable manifold, elements of a Hilbert space of functions, etc.) may hide the underlying unity of the concept of a vector. In my opinion, even at the elementary level, it could be possible to start with a few examples of vector spaces, including vectors quite far from the idea of an arrow (for instance, function spaces or polynomial spaces).

  • Thank you for the answer :) I know the definition of vector spaces in mathematics, but I'd like to limit the discussion to physical vectors that live in three dimensions (like the ones mentioned in the title). I like the part about rotations though, I wasn't aware that one can think about them as "objects with direction and magnitude" :) – Filippo Dec 14 '20 at 20:03
  • They all have in common that they can be represented by a triple of real numbers (the components with respect to some basis), but in my physics lecture we never talked about the vector space the basis lives in. – Filippo Dec 14 '20 at 20:06
  • @Filippo the basis of a vector space "lives" in the vector space. It is just a set of independent vectors. – GiorgioP-DoomsdayClockIsAt-90 Dec 14 '20 at 21:01
  • Yes, but what vector space? – Filippo Dec 14 '20 at 21:19
  • @Filippo if your vectors are forces, the vector space of forces on a body, if it is a vector space of velocities, the vector space of velocities. Being a vector is an intrinsic property of a physical quantity. It does not depend on some "external" space. But maybe I have not understood your question. – GiorgioP-DoomsdayClockIsAt-90 Dec 14 '20 at 22:57
  • Well, it's not exactly an answer I expected, but it adds interesting information :) Does your answer imply that SO(3) is a group, not a vector space? – Filippo Dec 15 '20 at 09:12
  • @Filippo it is not a vector space if the composition of two rotations is the usual one. It is certainly a group and it may be equipped with multiplication by a scalar, but non-abelian groups cannot be used as a group to build a vector space. – GiorgioP-DoomsdayClockIsAt-90 Dec 15 '20 at 09:31
  • @Filippo SO(3) is a group that is also a differentiable manifold (a Lie group), meaning that its tangent space is a vector space locally. The meaningful tangent space is at the identity of the group, where the vector operation is the Lie bracket. – PeaBrane Dec 17 '20 at 07:54
1

As Charlie mentioned in the comments, the mathematical definition of a vector is very precise. You begin with a set whose elements satisfy certain axioms of operations (see the Wiki page), and that set is referred to as a vector space. Elements of that set are referred to as vectors.

In fact, the vector operation doesn't need to be ordinary addition, nor does the field over which the vector space is defined need to be continuous. For instance, in graph theory, the set of cycles of a graph is a vector space over a discrete (two-element) field, with the operation being symmetric difference. All the usual concepts, such as dimensionality, still apply, albeit in a more abstract form.
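A minimal sketch of that graph-theory example (the specific graph and edge labels are made up for illustration): encode each cycle as its set of edges; "vector addition" over the two-element field is then the symmetric difference of edge sets.

```python
# Cycle space over GF(2): vectors are edge sets, addition is symmetric difference.
# Example graph: the square 1-2-3-4 with the diagonal 1-3.
cycle_a = frozenset({(1, 2), (2, 3), (1, 3)})  # triangle 1-2-3
cycle_b = frozenset({(1, 3), (3, 4), (1, 4)})  # triangle 1-3-4

# The shared diagonal (1, 3) cancels, since 1 + 1 = 0 in GF(2):
cycle_sum = cycle_a ^ cycle_b
print(sorted(cycle_sum))  # [(1, 2), (1, 4), (2, 3), (3, 4)] -- the outer square

# Every element is its own additive inverse, as in any vector space over GF(2):
print(cycle_a ^ cycle_a == frozenset())  # True
```

The sum of two triangles sharing an edge is the outer square, itself a cycle, so the set really is closed under this addition.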

For physics (or differential geometry), it's generally assumed that whenever we talk about a "vector space", it's the tangent space of some manifold. And when the manifold is transformed via some diffeomorphism, the elements of the tangent space transform, as the name implies, "like vectors", which essentially means the transformation is linear.
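In coordinates, this is the pushforward: if $\varphi$ is a diffeomorphism with local expression $y=\varphi(x)$, a tangent vector $v$ at a point is mapped to

$$(\varphi_*v)^i=\frac{\partial y^i}{\partial x^j}\,v^j,$$

which is linear in $v$: the Jacobian of $\varphi$ acts as a linear map between the tangent spaces.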

PeaBrane
  • 715
0

To have a vector field, you need a vector space at each point of a manifold (spatial manifold in nonrelativistic physics, or spacetime manifold in relativity). In general this structure is called a vector bundle. Most commonly in physics, one considers vectors in the tangent bundle (i.e. the bundle of tangent vectors to a manifold). Vectors in the tangent bundle relate to infinitesimal motions in the manifold (like velocity vectors), unlike vectors in an arbitrary bundle.

The electric field and magnetic field can be considered a tangent vector field and a tangent pseudovector field (respectively) on a spatial manifold in some reference frame. Together they combine into the electromagnetic field tensor, an antisymmetric rank-$2$ tensor field on spacetime.

  • Thank you for your answer :) Unfortunately, I don't understand what you mean by tangent vectors relating to infinitesimal motions. – Filippo Dec 14 '20 at 20:10
  • Let's say our manifold is 3d space. An arbitrary vector bundle might assign a 100d vector space to each point--the vectors don't relate to space. The tangent vector space is 3d, just like the space is. The connection to "infinitesimal motions" is somewhat loose language. But you can think of tangent vectors as velocity vectors tangent to parameterized curves in the manifold. This is easier to see for embedded manifolds in Euclidean space, but in general it comes from how the tangent space is defined. – Joe Schindler Dec 16 '20 at 03:20
  • Maybe it's better to make clear that a vector field requires an assignment of a vector at each point in the base space. – PeaBrane Dec 17 '20 at 07:26