
Related: Covariant vs contravariant vectors

Note: by "vector" I mean the physical entity, not the list of its components.

From the Wikipedia page and other sources:

$$ \mathbf{v} = q_i \mathbf{e^i} = q^i \mathbf{e_i} $$

Question 1: does every vector belong to both the contravariant and the covariant space? After all, it can be written as a linear combination of either basis, $\{\mathbf{e}^i\}$ or $\{\mathbf{e}_i\}$, and the metric allows the components to be converted from contravariant to covariant.

Question 2: if the answer to the previous question is "yes", does it make sense to talk about "contravariant/covariant vectors"? Should we instead say "contravariant/covariant descriptions of a vector"?

This question is essentially the same as the linked one. However, that one has two answers that seem contradictory and not focused on the question itself. For this reason, I rephrase it here.

It is easy to find texts on how the transformation of components under a change of basis relates to the concept of contravariant/covariant components, and on how the invariance of the scalar product relates to the definition of covariant components and the metric tensor. But even taking these developments into account, the question above seems open.

Question 3: if a vector is a linear combination of contravariant vectors, does this imply that it is itself contravariant?

If the answer to the previous question is "yes", we reach a curious contradiction when we take into account another common expression:

$$ \mathbf{e_i} = g_{ji} \mathbf{e^j} $$

Since the $\mathbf{e}_i$ vectors can be written as linear combinations of the contravariant vectors $\mathbf{e}^j$, they are contravariant. Since any vector in the cotangent space can be written as a linear combination of the $\mathbf{e}_i$, all of them are contravariant. Conclusion: all covariant vectors are contravariant vectors.
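
For concreteness, here is a minimal numpy sketch of this premise in $\mathbb{R}^2$ (the basis is an arbitrary assumption); note that representing both bases as plain arrays already builds in the very identification the question is about:

```python
import numpy as np

# A non-orthonormal basis of R^2, stored as columns: e_1, e_2
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Metric components g_ij = e_i . e_j
g = E.T @ E

# Reciprocal (dual) basis e^j, chosen so that e^j . e_i = delta^j_i;
# its vectors are the rows of E^{-1}
E_dual = np.linalg.inv(E)          # row j is e^j

# Check e_i = g_ji e^j for each i
for i in range(2):
    recombined = sum(g[j, i] * E_dual[j] for j in range(2))
    assert np.allclose(recombined, E[:, i])
```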


2 Answers


I only managed to sort these things out in my head after reading Halmos and Greub.

Let's talk about abstract vectors first. Let's define the contra-variant vectors as the 'usual' vectors:

$$ \mathbf{V}=V^i\,\mathbf{e}_i\in \mathcal{V} $$

where $\mathbf{V}$ is a vector, $V^i$ are the components, and $\{\mathbf{e}_i\}_{i=1\dots N}$ is a basis for this $N$-dimensional vector space $\mathcal{V}$ (we only deal with finite-dimensional vector spaces here).

Once you have this structure, you will find that we can rarely apply it to the real world directly; the reason is that we normally measure scalars, not vectors. Measuring several scalars can result in a vector, but this is not a single-step process.

So what we need are ways of reducing vectors to scalars (real-valued for now). These we call functionals:

$$ \mathbf{u}:\mathcal{V}\to\mathbb{R} $$

A special class amongst these functionals is the linear homogeneous functionals. Let's call the space of such functionals $\mathcal{V}^*$. So if $\mathbf{u}\in \mathcal{V}^*$, $\mathbf{v},\,\mathbf{w}\in \mathcal{V}$, and $\alpha,\beta\in \mathbb{R}$, then:

$$ \begin{align} \mathbf{u}:\mathcal{V}\to\mathbb{R}\\ \mathbf{u}\left(\alpha\mathbf{v}+\beta\mathbf{w}\right)=\alpha\cdot\mathbf{u}\left(\mathbf{v}\right)+\beta\cdot\mathbf{u}\left(\mathbf{w}\right) \\ \mathbf{u}\left(\mathbf{0}\right)=0 \end{align} $$

An example of such a functional would be a $\mathbf{u}$ that simply returns the 'x-component' of any vector given to it.

We can then ask how to systematically investigate the possible members of $\mathcal{V}^*$. We find that the only thing that matters is which numbers a functional assigns to the basis vectors of $\mathcal{V}$. So we define the following set of functionals:

$$ \mathbf{q}^j\left(\mathbf{e}_i\right)=\begin{cases} 1, & i=j\\ 0, & \text{otherwise} \end{cases}=\delta^j_i $$

And then any $\mathbf{u}\in\mathcal{V}^*$ can be expressed as:

$$ \mathbf{u}=u_i\mathbf{q}^i $$

So that for any $\mathbf{v}=v^i\mathbf{e}_i$ in $\mathcal{V}$:

$$ \mathbf{u}\left(\mathbf{v}\right)=u_j \mathbf{q}^j\left(v^i\mathbf{e}_i\right)=u_jv^i \mathbf{q}^j\left(\mathbf{e}_i\right)=u_iv^i $$
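
A minimal numpy sketch of this construction, with vectors of $\mathcal{V}$ represented as columns in $\mathbb{R}^2$ and functionals as rows (the particular basis matrix is an arbitrary assumption):

```python
import numpy as np

# Basis vectors e_1, e_2 of V as the columns of E
E = np.array([[2.0, 1.0],
              [0.0, 1.0]])

# The dual-basis functionals q^j are the rows of E^{-1},
# so that q^j(e_i) = delta^j_i by construction
Q = np.linalg.inv(E)
assert np.allclose(Q @ E, np.eye(2))

# A functional u = u_i q^i and a vector v = v^i e_i
u_comp = np.array([3.0, -1.0])     # components u_i
v_comp = np.array([1.0, 2.0])      # components v^i

# Apply the functional to the vector directly...
applied = (u_comp @ Q) @ (E @ v_comp)

# ...and compare with the component formula u(v) = u_i v^i
assert np.isclose(applied, u_comp @ v_comp)   # both give 1.0
```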

Essentially, $\mathcal{V}^*$ is itself a vector space, with a basis $\{\mathbf{q}^j\}_{j=1\dots N}$ induced by that of $\mathcal{V}$. This we call the dual space. Those are your co-variant vectors.

So that's why co-variant and contra-variant vectors are different: the former are linear functionals of the latter (and vice versa).

A non-GR example where the distinction between co- and contra-variant vectors becomes important is crystallography. One usually aligns the basis vectors with the crystalline axes, and co-variant vectors then live in the reciprocal space.
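
For instance, the reciprocal basis of crystallography is exactly the dual basis, up to the conventional factor of $2\pi$; a small sketch with made-up lattice vectors:

```python
import numpy as np

# Hypothetical lattice vectors a_1, a_2, a_3 as the rows of A
A = np.array([[3.0, 0.0, 0.0],
              [0.5, 4.0, 0.0],
              [0.0, 0.0, 5.0]])

# Reciprocal lattice vectors b^j, defined by a_i . b^j = 2*pi*delta_i^j;
# with the a_i as rows of A, the b^j are the rows of 2*pi*(A^{-1})^T
B = 2 * np.pi * np.linalg.inv(A).T

assert np.allclose(A @ B.T, 2 * np.pi * np.eye(3))
```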

We can often pretend that the dual space is the same as the original vector space because the two are isomorphic (for finite-dimensional vector spaces); that's where the confusion comes from.

Question 1: No. A vector belongs to the contra-variant vector space; if an object belongs to the co-variant vector space, it is a linear functional, which is a different kind of object. Having said that, things like direct sums and tensor products can be used to build new vector spaces, $\mathcal{V}\oplus\mathcal{V}^*$ and $\mathcal{V}\otimes\mathcal{V}^*$, and there things get more complicated.

Question 3: Yes. This is part of the definition of a vector space: any linear combination of vectors in the space belongs to the space.

Finally, you talk about an object $g_{ij}$. Given suitable properties, it establishes a map $\mathcal{V}\to\mathcal{V}^*$. Whilst I have never worked with vector spaces where such a map cannot be defined, I see no reason for it always to be present, and no reason for it to be unique. So treat $g_{ij}$ as an add-on. Hence there is no contradiction: $\mathcal{V}$ and $\mathcal{V}^*$ are isomorphic but distinct for finite-dimensional vector spaces, and the metric is one way to define an isomorphism $g:\mathcal{V}\to\mathcal{V}^*$ between them.
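
To make this concrete, here is a sketch of such a map in components, using the Minkowski metric as one assumed choice of $g_{ij}$:

```python
import numpy as np

# Minkowski metric: one concrete choice of the add-on structure g_ij
g = np.diag([1.0, -1.0, -1.0, -1.0])

# Contravariant components v^i of a four-vector
v_up = np.array([2.0, 1.0, 0.0, 3.0])

# The map V -> V* in components: lower the index, v_i = g_ij v^j
v_down = g @ v_up                  # [2., -1., -0., -3.]

# Because g is invertible, the map is an isomorphism: we can go back
assert np.allclose(np.linalg.inv(g) @ v_down, v_up)
```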

PS: When it comes to manifolds, one constructs a vector space at each point of the manifold out of partial derivatives, so vectors are defined as $V=v^i\partial_i$. This is the tangent space. Its dual is the space of differential one-forms, again defined at each point of the manifold.
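
A small sympy sketch of this 'vectors as differential operators' picture (the components and the test function are arbitrary assumptions):

```python
import sympy as sp

x, y = sp.symbols('x y')
v1, v2 = 2, -1                     # made-up components v^i

def V(f):
    """Apply the derivation V = v^i partial_i to a scalar function f."""
    return v1 * sp.diff(f, x) + v2 * sp.diff(f, y)

f = x**2 * y                       # an arbitrary test function
print(V(f))                        # -x**2 + 4*x*y
```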

Cryo

$$ \mathbf{v} = q_i \mathbf{e^i} = q^i \mathbf{e_i} $$

Ouch! The basis vectors $\mathbf{e_i}$ and $\mathbf{e^i}$ belong to different vector spaces, namely a vector space and its dual. The use of an equals sign in this context is simply wrong! A vector space and its dual are mathematically different objects, but an equals sign asserts that they are mathematically the same, which is wrong. I did say in my answer to the linked question that

we tend to think of the contravariant and covariant vectors as different descriptions of the same vector

but formally, in mathematics, they are different objects. So, it depends. Do you want an informal, intuitive notion, fine for most practical purposes, in which case you can think of contravariant and covariant vectors as different descriptions of the same object? Or are you looking for a mathematically rigorous and precise statement, one which will never lead you to the sort of contradiction you arrive at? If the latter, then you must treat the vector space as distinct from its dual, and stick with mathematically rigorous language.
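
One way to internalise the rigorous view is to give the two spaces distinct types, so that the natural pairing is allowed while cross-space equality is not; a minimal Python sketch (all names here are hypothetical):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Vector:
    """An element of V, kept as its own type."""
    components: Tuple[float, ...]

@dataclass(frozen=True)
class Covector:
    """An element of V*: a linear functional on V."""
    components: Tuple[float, ...]

    def __call__(self, v: Vector) -> float:
        # The natural pairing u(v) = u_i v^i needs no metric
        return sum(u * w for u, w in zip(self.components, v.components))

v = Vector((1.0, 2.0))
u = Covector((3.0, -1.0))

print(u(v))    # 1.0 -- the pairing is well defined
print(u == v)  # False -- objects of different types are never equal
```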

Charles Francis