
I have trouble interpreting the expression:

$$ \mathbf{e}^i = g^{ij} \mathbf{e}_j$$

which can be found, for example, in this wiki chapter, and also here.

My erroneous logic, step by step:

  1. The elements of the expression are: $\mathbf{e}_i$, a vector of the basis of the tangent space; $\mathbf{e}^i$, a vector of the basis of the cotangent space; and the metric tensor $g$.
  2. Since $\mathbf{e}_i$ is a vector of the basis of the tangent space, it is a contravariant vector.
  3. Since $\mathbf{e}_i$ is a contravariant vector, it can be expressed in index notation as $e^\alpha_{\;i}$.
  4. By the usual raising of an index, $g^{ij}e^\alpha_{\;j} = e^{\alpha\,i}$.
  5. By parallelism between the initial expression $\mathbf{e}^i = g^{ij} \mathbf{e}_j$ and the previous one $e^{\alpha\,i}=g^{ij}e^\alpha_{\;j}$, I can say that $\mathbf{e}^i$ corresponds to $e^{\alpha\,i}$.
  6. Since the vector $\mathbf{e}^i$ is expressed as $e^{\alpha\,i}$, it is a contravariant vector.
  7. But $\mathbf{e}^i$ cannot be contravariant, because it is a vector of the basis of the cotangent space. Contradiction.

I cannot find where the error is in the previous sequence; all the steps seem basic and true. (A small numerical sketch of the expression under discussion is given below.)
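
To make the objects concrete, here is a minimal numerical sketch of $\mathbf{e}^i = g^{ij} \mathbf{e}_j$ with a toy 2D basis chosen arbitrarily for illustration (the matrix names below are my own, not from any reference):

```python
import numpy as np

# Columns of E are the basis vectors e_1, e_2, written in an auxiliary
# orthonormal frame, so E[alpha, i] plays the role of the components e^alpha_i.
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])

g = E.T @ E                   # g_ij = e_i . e_j  ->  [[1, 1], [1, 2]]
g_inv = np.linalg.inv(g)      # g^{ij}

E_dual = E @ g_inv            # columns are e^i = g^{ij} e_j (g_inv is symmetric)

print(E_dual)                                # columns (1, -1) and (0, 1)
print(np.allclose(E_dual.T @ E, np.eye(2)))  # e^i . e_j = delta^i_j -> True
```

The last line checks the defining property $\mathbf{e}^i \cdot \mathbf{e}_j = \delta^i_j$ of the reciprocal basis.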

Addendum:

Another way to reach the same contradiction:

1b. The set of all vectors that form the basis of the tangent space, $\{\mathbf{e}_1,\mathbf{e}_2,\dots\}$, is expressed in index form as $e^\alpha_{\;i}$.

2b. $e^\alpha_{\;i}$ expresses the whole set of basis vectors of the tangent space. $e^\alpha_{\;i}$ is a tensor with two indices: $\alpha$ contravariant (related to the space components) and $i$ covariant (related to the index in the basis set).

3b. $g^{ij}$ is a tensor that, given two covariant tensors, produces a scalar. In other words, given a covariant vector/tensor, it produces a contravariant vector/tensor. Or, more generally, it maps an (n+m)-tensor with n contravariant indices and m covariant ones to another (n+m)-tensor with (n+1) contravariant indices and (m-1) covariant ones.

4b. Applying $g^{ij}$ to $e^\alpha_{\;j}$, we map the covariant index $j$ of $e$ to a contravariant one, obtaining a twice contravariant tensor $e^{\alpha\,i}$.

5b. Since $e^{\alpha\,i}$ has two contravariant indices, it cannot be the set of basis vectors of the cotangent space; the basis of the cotangent space is expected in the form $e_\alpha^{\;i}$. (The index computation of step 4b is sketched numerically below.)
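
For illustration only, step 4b can be carried out with the same toy basis as in the earlier sketch, treating $e^\alpha_{\;i}$ as a two-index array and raising the basis index $j$ with $g^{ij}$ (the names and numbers are assumptions of the example, not taken from any reference):

```python
import numpy as np

# E[alpha, i] = e^alpha_i: alpha labels the component, i labels the basis vector.
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])
g_inv = np.linalg.inv(E.T @ E)               # g^{ij}, with g_ij = e_i . e_j

# e^{alpha i} = g^{ij} e^alpha_j
E_raised = np.einsum('ij,aj->ai', g_inv, E)

print(E_raised)   # same array as the reciprocal-basis columns in the first sketch
```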

  • $g^{ij}$ is a map from the space of vectors to the space of functionals. Since the two spaces are usually isomorphic, for finite dimensional vector spaces, one can have isomorphic maps. So no contradiction, see https://physics.stackexchange.com/q/603251/ – Cryo Dec 27 '20 at 10:30
  • Having said that, the notation you start with is a bit misleading, yes – Cryo Dec 27 '20 at 10:38
  • @Cryo: thanks for your comments. In the list of steps, could you say which ones are erroneous? – pasaba por aqui Dec 27 '20 at 11:09
  • I wouldn't call it erroneous, but 4. Raising/lowering of indices is not a trivial summation; it is a mapping between two vector spaces. The mapping is not unique, and may not even be defined (singular metric). Perhaps if you gave an example of a real calculation you want to do, it would be easier to see how one can do it in a more careful way. I would define the metric as a scalar product between vectors or co-vectors and then use it to induce a map between the two spaces – Cryo Dec 27 '20 at 11:20
  • I meant 4-5, more so 5 – Cryo Dec 27 '20 at 11:32
  • I don't seem to understand how you can justify number 3 as this is basically writing e as a tensor with covariant alpha and contravariant i – nemo Dec 27 '20 at 12:47
  • You keep trying to reduce basis vectors to components, but you cannot do this, the components are scalars, the basis consists of vectors, and to reduce vectors to scalars you need functionals, by definition. You proceed to implicitly invoke functionals, and then arrive at a 'contradiction'. – Cryo Dec 27 '20 at 13:21
  • @IronicalCoffee: "i" is the index already present in the expression under discussion, $\mathbf{e}_i$, and $\alpha$ is the index used to identify the space component (say $\alpha \in \{t, x, y, z\}$, for example). $\alpha$ must be contravariant because these vectors are the basis of the tangent space. Thus, the set of all vectors of the tangent basis becomes a 2-tensor $e^{\alpha}_i$ with one contravariant index and one covariant one. – pasaba por aqui Dec 27 '20 at 13:27
  • @Cryo: $\mathbf{e}_i$ is a vector (note the bold "e"). As a vector, it has components $\mathbf{e}_i=(e^t_i,e^x_i,e^y_i,e^z_i)$ (for example). I can express that in index notation as $e^{\alpha}_i$. There is no reduction of vectors to scalars, just the usual index notation. – pasaba por aqui Dec 27 '20 at 13:38
  • @pasabaporaqui As a general rule, don't trust Wikipedia too much. Remember that what's written there may not be the result of agreement, but just of persistence of some particular user – who may be completely wrong. I've found several erroneous statements in Wikipedia. It's good to get a general idea maybe, but at the very least check its references. – pglpm Dec 27 '20 at 14:15
  • @pasabaporaqui That said, the initial expression you write is not a raising of indices. Raising/lowering of indices acts on tensor indices, but the "$i$" in $\pmb{e}_i$ is just a label; as Schouten would put it, it's "part of the typographical symbol $\pmb{e}$". To be honest I don't like that expression at all, it's very misleading. No metric is needed in defining a reciprocal basis. If we develop that formula in coordinates we'd see that "$g$" disappears altogether. Regarding your point 6., $\pmb{e}^i$ are covariant vectors, with components ${e^i}_\alpha$. – pglpm Dec 27 '20 at 14:25
  • @pglpm: "i" usually comes from $\mathbf{e}_i=\frac{\partial\varphi(\dots)}{\partial x^i}$. I think this expression gives "i" its character as a covariant index. – pasaba por aqui Dec 27 '20 at 16:25

1 Answer


I think your confusion comes from the fact that there are several different ways of looking at covariance and contravariance.

The old-school way to treat this issue is to say that given a basis $\mathbf e_i$ for a vector space $V$, we can define a dual basis for $V$ which we write as $\mathbf e^i = g^{ij}\mathbf e_j$. In this framework, both $\mathbf e_i$ and $\mathbf e^i$ belong to $V$. Correspondingly, a vector $\mathbf v\in V$ can be expanded in terms of the original basis or the dual basis, i.e. $\mathbf v = v^i\mathbf e_i = v_i \mathbf e^i$. The $v^i$ are called the contravariant components of $\mathbf v$, while the $v_i$ are the covariant components of $\mathbf v$. In order for this equality to hold, we must have that $v_i = g_{ij} v^j$, where $g_{ij}$ and $g^{ij}$ are matrix inverses of one another.
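
As a small numerical check of this (the 2D basis below is an arbitrary example, chosen only for illustration), the contravariant and covariant components do expand the same vector, once in the original basis and once in the dual basis:

```python
import numpy as np

E = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # columns: an example basis e_1, e_2
g = E.T @ E                       # g_ij = e_i . e_j
E_dual = E @ np.linalg.inv(g)     # columns: dual basis e^i = g^{ij} e_j

v = np.array([2.0, 3.0])          # some vector, written in the orthonormal frame
v_up = np.linalg.solve(E, v)      # contravariant components v^i
v_dn = g @ v_up                   # covariant components v_i = g_ij v^j

# v = v^i e_i = v_i e^i
print(np.allclose(E @ v_up, v), np.allclose(E_dual @ v_dn, v))   # True True
```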

The inner product between vectors is given by $\mathbf e_i \cdot \mathbf e_j = g_{ij}$. As a result, $\mathbf e^i \cdot \mathbf e_j = g^{ik}\mathbf e_k \cdot \mathbf e_j = g^{ik}g_{kj} = \delta^i_j$. Therefore, we can write the inner product between two vectors in any of the following equivalent ways:

$$\mathbf v\cdot \mathbf w = v^i w^j\mathbf e_i \cdot \mathbf e_j = v_i w^i = v^i w^j g_{ij}$$
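
A quick numerical verification of these equalities, again with an arbitrary example basis (assumed purely for illustration):

```python
import numpy as np

E = np.array([[1.0, 1.0],
              [0.0, 1.0]])                   # columns: example basis e_1, e_2
g = E.T @ E                                  # g_ij = e_i . e_j

v = np.array([2.0, 3.0])                     # v in the orthonormal frame
w = np.array([1.0, -1.0])                    # w in the orthonormal frame
v_up, w_up = np.linalg.solve(E, v), np.linalg.solve(E, w)   # v^i, w^i
v_dn = g @ v_up                                             # v_i

print(np.isclose(v @ w, v_up @ g @ w_up),    # v . w = v^i w^j g_ij
      np.isclose(v @ w, v_dn @ w_up))        # v . w = v_i w^i
```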

Note that at no point have we left the vector space $V$. There is no notion of a dual space here; everything takes place in a single vector space, and the contravariance and covariance of vector or tensor components are purely a property of which basis you elect to expand the vector or tensor in. This convention is still in use in fields like crystallography, where $\mathbf e_i$ might represent the lattice vectors of some crystal and the $\mathbf e^i$ are the reciprocal lattice vectors.
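
To illustrate the crystallography remark, here is a sketch with invented lattice vectors (and assuming the convention without the factor of $2\pi$): the metric-raised dual basis coincides with the reciprocal lattice vectors built from cross products.

```python
import numpy as np

A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.3],
              [0.0, 0.0, 1.0]])             # columns: example lattice vectors a_1, a_2, a_3

g = A.T @ A                                 # g_ij = a_i . a_j
B_metric = A @ np.linalg.inv(g)             # columns: e^i = g^{ij} e_j

vol = np.dot(A[:, 0], np.cross(A[:, 1], A[:, 2]))
B_cross = np.column_stack([np.cross(A[:, 1], A[:, 2]),
                           np.cross(A[:, 2], A[:, 0]),
                           np.cross(A[:, 0], A[:, 1])]) / vol

print(np.allclose(B_metric, B_cross))       # True: b^i . a_j = delta^i_j either way
```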


The more modern treatment is to say that given a vector space $ V$ and a basis $\mathbf e_i$, we can define a basis $\boldsymbol \epsilon^i$ for the (algebraic) dual space $V^*$ by the condition that $\boldsymbol \epsilon^i(\mathbf e_j) = \delta^i_j$. Any non-degenerate bilinear form (such as a metric) defines an isomorphism between $V$ and $V^*$. Any vector $\mathbf v\in V$ has a covector partner $\mathbf v^\flat\in V^*$ given by $$\mathbf v^\flat = \mathbf g(\mathbf v,\bullet)$$ whose action on a vector $\mathbf w\in V$ is then $$\mathbf v^\flat(\mathbf w) = \mathbf g(\mathbf v,\mathbf w) = g_{ij} v^i w^j$$
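
A minimal sketch of the flat map in components, assuming an arbitrary non-degenerate example metric (the numbers below are invented for illustration):

```python
import numpy as np

g = np.array([[1.0, 1.0],
              [1.0, 2.0]])        # example metric components g_ij in the basis e_i

def flat(v):
    """Components of v^flat = g(v, .) in the dual basis eps^i: (v^flat)_i = g_ij v^j."""
    return g @ v

v = np.array([2.0, 3.0])          # components v^i
w = np.array([1.0, -1.0])         # components w^j

# v^flat(w) = g(v, w) = g_ij v^i w^j
print(np.isclose(flat(v) @ w, v @ g @ w))   # True
```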

This approach is ultimately much cleaner in my opinion. Vectors and covectors become clearly different geometrical objects with different transformation properties, and the differences can be manifested in clearly basis-independent ways. However, it should be noted that the older and newer perspectives are ultimately equivalent.

J. Murray
  • Great answer, much better and cleaner than mine. Thanks – Cryo Dec 27 '20 at 17:24
  • Great answer, I didn't know about this older point of view. Can you suggest some references? Another advantage of the separation between vector and covector spaces is that it allows us to introduce many notions (fluxes, differential, parallel connection, and so on) relying only on (differential) topological notions, without invoking a metric structure. – pglpm Dec 27 '20 at 17:56
  • @pglpm For examples of the older convention, you can consult basically any intro text in solid state physics which discusses crystal structures. David Tong has some solid state notes online (see p. 52 of these notes), or you could find a copy of one of the standard texts like Ashcroft and Mermin or Kittel. I'm definitely in agreement with you; I think the main impediment to adopting this perspective in crystallography is that the field universally works in $\mathbb R^n$ for $n=1,2,3$ so the power and flexibility of a more general [...] – J. Murray Dec 27 '20 at 18:01
  • [...] manifold treatment probably isn't worth the hassle. – J. Murray Dec 27 '20 at 18:02
  • Thank you. If you happen to have time, please do add some of your remarks and references in that Wikipedia section, I think that'd be helpful for many people who stumble there. – pglpm Dec 27 '20 at 18:13
  • @pglpm I'll take a look if I get a moment. I would also like to add that even some (usually older) established relativity texts use this convention. For example, Weinberg II.5 includes the phrase "Although any vector can be written in a contravariant or a covariant form [...]." This book (and others like it) work entirely in index notation, do not distinguish between a tensor and its components, and define vectors by their transformation rules; not my preferred style, but one which exists. – J. Murray Dec 27 '20 at 21:20
  • Indeed the metric tensor appears everywhere in those older-style texts (it was even called "world tensor" sometimes). A bit surprising, seeing that Schouten in the 1950s had the different geometric meanings quite clear (and Cartan even before). I find the "modern" presentations also more likely to lead to new ideas. They make clear, for example, that charge- or magnetic flux conservation have nothing to do with metric tensors. Stackexchange is also a good medium to make this viewpoint more widely known. – pglpm Dec 27 '20 at 21:45
  • I have a doubt here. If we have two different vector spaces, then a vector $V$ can live either in one space or in its dual (different) space. Why then do books on GR say that this vector $V$ is the only geometrical object, and that its components in one space are contravariant while those in the dual space are called covariant? The vector which has covariant components lives in the other space, so it's another geometrical entity. Why do we assign both contravariant and covariant components to the same geometrical vector when the covariant components belong to a different geometrical entity? – Shashaank Apr 07 '21 at 12:24
  • @Shashaank I'm not sure how to answer that in a way that differs appreciably from the full answer I wrote. It is possible to formulate differential geometry without talking about the dual space as a distinct space, in which contravariant and covariant components refer to the same object expressed in two different bases. However, the more modern approach which distinguishes between the vector space and its dual space is far cleaner in my opinion, which is why it's the formulation I almost universally refer to. – J. Murray Apr 07 '21 at 13:05
  • @J.Murray Thanks. Could you please let me know whether the kind of treatment presented in Carroll is suggestive of treating contravariant and covariant vectors as two completely different geometrical entities or not. Also, is there any basic book you would like to suggest that explains this issue of contravariant and covariant vectors as different geometrical objects in depth? – Shashaank Apr 07 '21 at 13:10
  • @Shashaank Carroll adopts the modern perspective, and is the best introductory text for that purpose which I know of. MTW also adopts this perspective while giving extensive geometrical pictures for covectors (he uses the terminology "one-form" in anticipation of developing the framework of differential forms). Wald is a third such resource (my favorite GR text), but it tends toward mathematical precision more than the others. For an old-school counterexample, see Weinberg. – J. Murray Apr 07 '21 at 13:29
  • @J.Murray Many thanks. Just for clarification, Weinberg uses that other approach, saying that a vector has both contravariant and covariant components, right? I am basically reading just Carroll, and am a bit afraid to pick up Wald and MTW (they are very heavy and lengthy). But I was basically asking about a pure mathematical text which deals with the mathematics of GR, differential geometry, etc. I feel there are gaps in my mathematical knowledge at certain points. I was thinking of picking up a pure mathematical book. Do you suggest that, or should I pick up Wald instead? – Shashaank Apr 07 '21 at 13:45
  • @Shashaank I doubt a pure math text about differential geometry would be easier to get through than Wald or MTW, so I would suggest one of those if you wanted more rigor. – J. Murray Apr 07 '21 at 13:54