I'm trying to study tensors. Given a coordinate transformation from Cartesian to coordinates $u_i$: $$ u_1 = u_1 (x,y,z) \qquad u_2 = u_2 (x,y,z) \qquad u_3 = u_3 (x,y,z) $$ I can write a vector $\mathbf{A}= A_x \mathbf{i} + A_y \mathbf{j} + A_z \mathbf{k}$ in at least two different ways, $$ C_1 \frac{\partial \mathbf{r}}{\partial u_1} + C_2 \frac{\partial \mathbf{r}}{\partial u_2} + C_3 \frac{\partial \mathbf{r}}{\partial u_3} = c_1 \nabla u_1 + c_2 \nabla u_2 + c_3 \nabla u_3 $$ (where $C_i$ and $c_i$ are appropriate coefficients), and show that the corresponding components in a different coordinate system $\bar{u}_i$ are related to these by (using the summation convention) $$ \bar{C}_p = {C}_q \frac{\partial \bar{u}_p}{\partial u_q} \qquad \bar{c}_p = {c}_q \frac{\partial {u}_q}{\partial \bar{u}_p} $$ I understood the proofs up to here. If it helps in writing an answer: I have also understood the meaning of the metric tensor (it lets you compute scalar products quickly when you know the contravariant description of the vectors). I can't yet see how all this is useful, but on trust I'm going on studying, hoping that one day it will all become clear.

But first of all, I don't understand the point at which they start speaking about contravariant and covariant vectors: is a vector (a displacement, a velocity, an electric field, etc.) contravariant if I use the $\frac{\partial \mathbf{r}}{\partial u_i}$ basis and covariant if I use the $\nabla u_i$ basis to describe it? So being co- or contravariant is not a property of the vector itself, but rather a consequence of the choice we make in describing it?

In addition, things become very confusing when they start speaking about tensors of rank greater than one. I ran into $$ \bar{A}^{pr} = \frac{\partial \bar{x}^p}{\partial x^q} \frac{\partial \bar{x}^r}{\partial x^s} A^{qs} $$ which is said to be a contravariant tensor of rank two. I suppose this is somehow related to the fact that they used the $\frac{\partial \mathbf{r}}{\partial u_i}$ basis to describe a matrix with some physical meaning (say an inertia tensor, for example), but I can't see how this happens: a matrix is not a linear combination of some basis. When the tensor rank becomes greater than one, things get harder and, paradoxically, books start going faster.

How can I prove the last equation I wrote? "This is the definition of a tensor of second rank" shouldn't be an answer. It should be possible to show that if I claim this entity is the same in both coordinate systems, and if I use the $\frac{\partial \mathbf{r}}{\partial u_i}$ basis to describe it, then its components transform in this way. Maybe, to be concrete and therefore clearer, I should argue on physical grounds. But anyway, I can't do that.
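To at least see what the rank-two rule says, here is a small sympy check I put together (the polar coordinates and the particular vectors are entirely my own arbitrary choices, not from the book): if $A^{qs}$ is built as an outer product $B^q C^s$ of two contravariant vectors, then applying the rank-one rule to each factor gives the same result as applying the rank-two rule to $A$.

```python
# Sketch: check that the rank-2 rule agrees with applying the rank-1 rule twice,
# for a tensor built as an outer product of two contravariant vectors.
import sympy as sp

x, y = sp.symbols('x y', positive=True)
u = sp.Matrix([x, y])                          # "old" coordinates u^q = (x, y)
ubar = sp.Matrix([sp.sqrt(x**2 + y**2),        # "new" coordinates: polar r, phi
                  sp.atan2(y, x)])
J = ubar.jacobian(u)                           # J[p, q] = d ubar^p / d u^q

# two arbitrary contravariant vectors (components in the old system)
B = sp.Matrix([x*y, x + 2*y])
C = sp.Matrix([sp.sin(x), y**2])

A = B * C.T                                    # A^{qs} = B^q C^s

Bbar = J * B                                   # rank-1 rule applied to each vector
Cbar = J * C
Abar_from_vectors = Bbar * Cbar.T              # outer product of the transformed vectors

Abar_rank2_rule = J * A * J.T                  # the rank-2 rule, in matrix form

print(sp.simplify(Abar_from_vectors - Abar_rank2_rule))   # -> zero matrix
```

Of course this only checks the rule for tensors of the special form $B^q C^s$, which is part of why I'm asking how the general statement is justified.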
Edit
The reasoning behind $ \bar{C}_p = {C}_q \frac{\partial \bar{u}_p}{\partial u_q} $ is $$ C_1 \frac{\partial \mathbf{r}}{\partial u_1} + C_2 \frac{\partial \mathbf{r}}{\partial u_2} + C_3 \frac{\partial \mathbf{r}}{\partial u_3} = \bar{C}_1 \frac{\partial \mathbf{r}}{\partial \bar{u}_1} + \bar{C}_2 \frac{\partial \mathbf{r}}{\partial \bar{u}_2} + \bar{C}_3 \frac{\partial \mathbf{r}}{\partial \bar{u}_3} $$ but I can't see the reasoning behind $\bar{A}^{pr} = \frac{\partial \bar{x}^p}{\partial x^q} \frac{\partial \bar{x}^r}{\partial x^s} A^{qs}$.
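To convince myself that this equality really goes together with the rule $\bar{C}_p = C_q \frac{\partial \bar{u}_p}{\partial u_q}$, I ran a small sympy check (taking Cartesian as the unbarred system and polar as the barred one is just my own choice of example):

```python
# Sketch: the same vector expanded in two tangent bases, with components related
# by the contravariant rule, really is the same vector.
import sympy as sp

r = sp.symbols('r', positive=True)
phi, C1, C2 = sp.symbols('phi C1 C2', real=True)

# position vector in Cartesian components, written in terms of the barred
# coordinates (r, phi); the unbarred coordinates are simply u = (x, y)
pos = sp.Matrix([r*sp.cos(phi), r*sp.sin(phi)])

# tangent basis of the barred system: d r / d ubar_p
e_r   = pos.diff(r)       # (cos phi, sin phi)
e_phi = pos.diff(phi)     # (-r sin phi, r cos phi)

# Jacobian d ubar_p / d u_q with u = (x, y), expressed in terms of r, phi
x, y = sp.symbols('x y', real=True)
ubar = sp.Matrix([sp.sqrt(x**2 + y**2), sp.atan2(y, x)])
J = sp.simplify(ubar.jacobian(sp.Matrix([x, y]))
                    .subs({x: r*sp.cos(phi), y: r*sp.sin(phi)}))

# a vector with Cartesian tangent-basis components (C1, C2); since the Cartesian
# tangent basis is (1,0), (0,1), the vector itself is just (C1, C2)
C = sp.Matrix([C1, C2])
Cbar = J * C                           # barred components via the contravariant rule

# rebuild the vector from the barred expansion and compare with the original
V_from_bar = Cbar[0]*e_r + Cbar[1]*e_phi
print(sp.simplify(V_from_bar - C))     # -> zero vector
```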
Edit 2
The first part of my question finds an excellent answer in Fleisch's statement "It's not the vector itself that is contravariant or covariant, it's the set of components that you form through its parallel or perpendicular projections" (see the link given by Void). But I am in trouble with the second part.
I'm not sure whether it is better to ask this here or on the math site. I understood the rules for general coordinate transformations of vectors, but I'm trying to make sense of the definition of a second-rank tensor. I imagine the simplest way is to start by considering the metric tensor (in a second step I'll try to understand why what works for $\mathsf{g}$ should work for every matrix with some physical meaning).

Let's consider two bases $\{\mathbf{e}_i\}$ and $\{\mathbf{\bar{e}}_i\}$ in $\mathbb{R}^N$. If $a_{ij}$ is the $j$-th component of the vector $\mathbf{\bar{e}}_i$ in the unbarred basis, we have $$ \bar{g}_{ij} \equiv \mathbf{\bar{e}}_i \cdot \mathbf{\bar{e}}_j = (a_{i1} \mathbf{e}_1 + \dots + a_{iN} \mathbf{e}_N ) \cdot (a_{j1} \mathbf{e}_1 + \dots + a_{jN} \mathbf{e}_N ) $$ which leads to (using the summation convention) $$ \bar{g}_{ij} = a_{ip} a_{jq} g_{pq} $$ where $ {g}_{pq} \equiv \mathbf{{e}}_p \cdot \mathbf{{e}}_q$.

Now, if $\mathbf{e}_i$ and $\mathbf{\bar{e}}_i$ are the tangent bases $\frac{\partial \mathbf{r}}{\partial u_i}$ and $\frac{\partial \mathbf{r}}{\partial \bar{u}_i}$ for some coordinate change, components with indices $p$ and $q$ transform like this (I use $m$ and $n$ as dummy indices) $$ {C}_p = \bar{C}_m \frac{\partial {u}_p}{\partial \bar{u}_m} \qquad {C}_q = \bar{C}_n \frac{\partial {u}_q}{\partial \bar{u}_n} $$ so $$ a_{ip} = \bar{a}_{im} \frac{\partial {u}_p}{\partial \bar{u}_m} \qquad a_{jq} = \bar{a}_{jn} \frac{\partial {u}_q}{\partial \bar{u}_n} $$ and then $$ \bar{g}_{ij} = \bar{a}_{im} \bar{a}_{jn} \frac{\partial {u}_p}{\partial \bar{u}_m} \frac{\partial {u}_q}{\partial \bar{u}_n} g_{pq} $$ But $\bar{a}_{im}$ is the $m$-th component of the vector $\mathbf{\bar{e}}_i$ in the barred basis, so it is zero if $i \neq m$ and $1$ if $i=m$; in other words $\bar{a}_{im} = \delta_{im}$. The same goes for $\bar{a}_{jn}$, so we have $$ \bar{g}_{ij} = \delta_{im} \delta_{jn} \frac{\partial {u}_p}{\partial \bar{u}_m} \frac{\partial {u}_q}{\partial \bar{u}_n} g_{pq} $$ and then $$ \bar{g}_{ij} = \frac{\partial {u}_p}{\partial \bar{u}_i} \frac{\partial {u}_q}{\partial \bar{u}_j} g_{pq} $$

Having used the tangent bases $\frac{\partial \mathbf{r}}{\partial u_i}$ and $\frac{\partial \mathbf{r}}{\partial \bar{u}_i}$, I would have expected a completely different transformation rule (wouldn't I?), the one that books call the transformation of a contravariant tensor: $$ \bar{g}_{ij} = \frac{\partial \bar{u}_i}{\partial {u}_p} \frac{\partial \bar{u}_j}{\partial {u}_q} g_{pq} $$ The bars in the partial derivatives are swapped. What went wrong?
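As a concrete check on which of the two rules the metric actually obeys, here is a short sympy computation (the Cartesian-to-polar example is my own choice): starting from the Cartesian metric $g_{pq} = \delta_{pq}$, the rule with $\frac{\partial u}{\partial \bar{u}}$ that I derived above does reproduce the familiar polar metric, which only deepens my confusion about the rule quoted in books.

```python
# Sketch: transform the Cartesian metric to polar coordinates using the rule
# gbar_{ij} = (d u_p / d ubar_i)(d u_q / d ubar_j) g_{pq}.
import sympy as sp

r, phi = sp.symbols('r phi', positive=True)

# u = (x, y) Cartesian, ubar = (r, phi) polar; K[p, i] = d u_p / d ubar_i
u = sp.Matrix([r*sp.cos(phi), r*sp.sin(phi)])
K = u.jacobian(sp.Matrix([r, phi]))

g = sp.eye(2)                     # Cartesian metric g_{pq} = delta_{pq}

gbar = sp.simplify(K.T * g * K)   # index form gbar_{ij} = K[p,i] K[q,j] g[p,q]
print(gbar)                       # -> diag(1, r**2), the familiar polar metric
```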
Yes, it is. For example, $$ \begin{pmatrix}a & b \\ c & d\end{pmatrix} = a \begin{pmatrix}1 & 0 \\ 0 & 0\end{pmatrix} + b \begin{pmatrix}0 & 1 \\ 0 & 0\end{pmatrix} + c \begin{pmatrix}0 & 0 \\ 1 & 0\end{pmatrix} + d \begin{pmatrix}0 & 0 \\ 0 & 1\end{pmatrix} $$ so $\left\{ \begin{pmatrix}1 & 0 \\ 0 & 0\end{pmatrix}, \ \begin{pmatrix}0 & 1 \\ 0 & 0\end{pmatrix}, \ \begin{pmatrix}0 & 0 \\ 1 & 0\end{pmatrix}, \ \begin{pmatrix}0 & 0 \\ 0 & 1\end{pmatrix} \right\}$ is a basis.
– md2perpe Jan 04 '19 at 18:05
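md2perpe's decomposition is easy to check numerically; here is a tiny numpy sketch with an arbitrary matrix of my own choosing:

```python
# Check that a 2x2 matrix is the linear combination of the four basis matrices
# with its own entries as coefficients.
import numpy as np

E = [np.array([[1, 0], [0, 0]]), np.array([[0, 1], [0, 0]]),
     np.array([[0, 0], [1, 0]]), np.array([[0, 0], [0, 1]])]

M = np.array([[3.0, -1.0], [2.5, 7.0]])
coeffs = [M[0, 0], M[0, 1], M[1, 0], M[1, 1]]        # a, b, c, d

reconstructed = sum(c * e for c, e in zip(coeffs, E))
print(np.array_equal(reconstructed, M))              # True
```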