
First, I basically just want to know: what exactly is the difference between tensors and matrices?

Second, I'm looking for clarification on the following two (possibly confused) thoughts of my own on this, which follow here.

(1) I was basically taught in school that a tensor is any object that transforms as:

$T'_{\alpha \beta} \rightarrow \frac{\partial x'_{\alpha}}{\partial x_{\alpha}} \frac{\partial x'_{\beta}}{\partial x_{\beta}} T_{\alpha \beta}$

However, is it correct that not all matrices follow this transformation law? That is, every rank-2 tensor can be represented as a matrix, but not every matrix is necessarily a tensor? (Hence, THIS is the difference between matrices and tensors?)

However, if that is true, then it seems a little loose to say that matrices are rank-2 tensors, since not all matrices actually transform as tensors, and therefore not all matrices are tensors?
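For concreteness, here is a minimal numerical sketch of such a transformation law (my illustration: a linear change of coordinates $x' = Lx$ is assumed, so the Jacobian is just $L$, and the law is written for upper indices so the primed Jacobian appears as in the formula above; all matrices are arbitrary choices):

```python
# A minimal sketch of the rank-2 transformation law under x' = L x.
# For a contravariant rank-2 tensor the law reads, in matrix form,
#   T' = L T L^T,
# while covector components transform with the inverse, a' = (L^-1)^T a.
import numpy as np

L = np.array([[2.0, 1.0],
              [0.0, 3.0]])            # an invertible coordinate change
Linv = np.linalg.inv(L)

T = np.array([[1.0, 2.0],
              [2.0, 5.0]])            # tensor components in the old frame
a = np.array([1.0, -2.0])             # two covectors to contract with
b = np.array([3.0, 0.5])

T_p = L @ T @ L.T                     # tensor transformation law
a_p, b_p = Linv.T @ a, Linv.T @ b     # covector transformation law

# The fully contracted scalar T^{mn} a_m b_n is the same in both frames;
# that frame-independence is exactly what the transformation law buys you.
print(a @ T @ b)
print(a_p @ T_p @ b_p)
```

A generic array of numbers comes with no such transformation rule at all, which is the sense in which not every matrix is a tensor.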

(2) Is it correct that one of the things the law of general covariance says is that if a tensor equation is true in one frame then it is true in all frames? Secondly, is this made possible by the tensor transformation law?

EthanT
  • Please ask one focused question per post. This post asks two questions: 1) What's the difference between a matrix and a tensor, and 2) Do matrices follow the same transformation laws as tensors. The first question is probably too vague anyway, so maybe get rid of it and just ask the second one. Also, I strongly recommend reading our FAQ on good question titles and improving the title of this post. – DanielSank Jul 03 '17 at 17:11
  • Possible duplicates: https://physics.stackexchange.com/q/20437/2451 and links therein. – Qmechanic Jul 03 '17 at 18:56

4 Answers


However, if that is true, then it seems a little loose to say that matrices are rank-2 tensors

Where did you read that?

A matrix is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns. For example, the dimensions of a 2 × 3 matrix are two rows and three columns.

Tensors are geometric objects that describe linear relations between geometric vectors, scalars, and other tensors. Examples of such relations include the dot product, the cross product, and linear maps.

The definitions above are from Wikipedia.

If you look at Einstein's Equation:

$${\displaystyle R_{\mu \nu }-{1 \over 2}{Rg_{\mu \nu }}+\Lambda g_{\mu \nu }=8\pi {G \over c^{4}}T_{\mu \nu }} $$

Because this equation is written in tensor form, if it is true in one frame, it is true in all frames.

Is it correct that one of the things the law of general covariance says is that if a tensor equation is true in one frame then it is true in all frames? Secondly, is this made possible by the tensor transformation law?

Yes and yes. Tensors, because of their transformation properties, are essential in writing GR-related equations.

In comparison, a matrix is basically just a bookkeeping exercise.

This same question is covered in Matrices and Tensors on MathSE.

This extract from Tensors by James Rowland is a better description than I can give. It is longer than I would like to quote, but informative, imo. My tablet, for some reason, will not copy the link, but it is easy to find. My apologies for this.

Notice how in my example for a rank 2 tensor I specified a basis to work in. At this point you might think that a matrix is a rank 2 tensor, or that a vector is a rank 1 tensor. This is not quite right. These are representations of tensors in some basis. The tensor is a more general thing that doesn’t care about the basis you work in.

If you have the representation of a rank 1 tensor in some basis (a vector), you can obtain the representation in another basis by coordinate transformation. When you change bases, you must change the representation of your tensor v → P.v, and both of these vectors represent the same rank 1 tensor in some basis. A matrix A doesn’t change when you change bases. It is silly to say “the matrix $P^{-1}AP$ is the matrix A in some basis”, unless P is the identity matrix. However, if A represents the rank 2 tensor B in some basis, then the matrix $P^{-1}AP$ represents B in another basis. The key here is to note the difference between a tensor and its representation.

When you specify a direction and distance to your house by pointing, notice that you do not refer to a coordinate system. If you decided your coordinates are North, East, and away from the center of the Earth, then you can specify a vector which represents the direction your arm is pointing. Now if you change to a coordinate system based on South, West, and towards the center of the Earth, the coordinates would all be flipped, but your arm still points in the same direction.

A direction does not depend on a coordinate system; its coordinates do. [My emphasis]
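To make the extract's point concrete, here is a minimal sketch (the matrices are arbitrary choices of mine): $A$ and $P^{-1}AP$ are different arrays of numbers, yet they represent the same linear map, i.e. the same rank-(1,1) tensor, in two different bases.

```python
# A and P^-1 A P are distinct matrices, but the same tensor in two bases.
import numpy as np

P = np.array([[1.0, 1.0],
              [0.0, 2.0]])          # columns = new basis in old coordinates
A = np.array([[4.0, -1.0],
              [2.0,  3.0]])         # the linear map, old-basis representation

A_new = np.linalg.inv(P) @ A @ P    # the same map, new-basis representation

v_old = np.array([1.0, 5.0])        # a vector in old coordinates
v_new = np.linalg.inv(P) @ v_old    # the same vector, new coordinates

# Apply the map in each basis, then compare in old coordinates:
print(A @ v_old)                    # result computed in the old basis
print(P @ (A_new @ v_new))          # identical, mapped back from the new basis
```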

  • This answer is correct, but it would be a lot more meaningful to show why a matrix is a useful representation of a 1-1 tensor. – DanielSank Jul 03 '17 at 18:34
  • Thanks, this was the most helpful post, especially the link to the math site. The second description down helped me get the best feel for the differences. Still a bit uneasy about it, but at least I'm headed in the right direction. – EthanT Jul 04 '17 at 20:26
  • You are more than welcome. One book I would highly recommend to see tensors in action is Relativity Demystified, by McMahon. It's not big blocks of text; it is worked example after worked example. It's still basic, but I keep referring back to it to remember how to solve problems. If you think about it, mass, energy, and EM potentials (and more) are all invariants, so we need a coordinate-independent system to describe them, and tensors do that; they "rotate" to give everyone the same view, if that makes sense. –  Jul 04 '17 at 20:35

I don't know where you read anyone implying all matrices are tensors, but you're right to define tensors by their transformation law. (As a very simple example of a matrix that's not a tensor in general relativity, take $\sqrt{\left|g\right|}g_{\mu\nu}$, a tensor density of weight $1$.) You're also right to think that tensorial equations are invariant under arbitrary changes in the choice of coordinates; this is what the "general" in general relativity refers to.
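For concreteness, here is a small numerical sketch of why $\sqrt{\left|g\right|}$ picks up a weight factor (my illustration; the Jacobian and metric below are arbitrary choices): under a change of coordinates with Jacobian $J$, the metric transforms tensorially as $g' = J^{T} g J$, so $\det g$ picks up a factor $(\det J)^2$ and $\sqrt{|g|}$ picks up $|\det J|$ — the weight-1 factor that disqualifies $\sqrt{|g|}\,g_{\mu\nu}$ as a tensor.

```python
# sqrt(|det g|) is a scalar density of weight 1, not an invariant scalar.
import numpy as np

J = np.array([[1.0, 2.0],
              [0.0, 0.5]])              # Jacobian dx/dx' of the change
g = np.array([[2.0, 0.0],
              [0.0, 1.0]])              # metric components in x coordinates

g_p = J.T @ g @ J                       # tensorial transformation of g

print(np.sqrt(abs(np.linalg.det(g_p))))                        # sqrt|g'|
print(abs(np.linalg.det(J)) * np.sqrt(abs(np.linalg.det(g))))  # |det J| * sqrt|g|
# The two agree: sqrt|g| is rescaled by |det J|, so it fails the tensor law.
```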

J.G.

1) A matrix is nothing other than a map $\mathbb{N}_m\times\mathbb{N}_n\rightarrow R$, where $\mathbb{N}_m$ is a subset of $\mathbb{N}$ that contains $m$ distinct elements, and $R$ is a commutative ring (you can also take $R$ to be noncommutative, as is the case when using differential-form-valued matrices, for example).

If you are being more general about it, it doesn't have to be a binary map; it then produces "arrays" that are indexed by more than one or two natural numbers. An example of such a matrix that isn't a tensor is the array of connection coefficients/Christoffel symbols $\Gamma^\sigma_{\mu\nu}$. You can view this as a more general matrix, or as a collection of usual 'binary' matrices, one for every value of $\mu$, but these components do not transform tensorially.
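To make this concrete, here is a small sympy sketch (an illustration of mine, assuming the flat plane written in polar coordinates): the Christoffel symbols all vanish in Cartesian coordinates, yet some are nonzero in polar coordinates — something the components of a tensor could never do, since a tensor that is zero in one frame is zero in all.

```python
# Christoffel symbols of the flat plane in polar coordinates (r, theta).
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])       # flat metric in polar coordinates
g_inv = g.inv()

# Gamma^s_mn = (1/2) g^{sl} (d_m g_{ln} + d_n g_{lm} - d_l g_{mn})
Gamma = [[[sp.simplify(
            sum(g_inv[s, l] * (sp.diff(g[l, n], coords[m])
                               + sp.diff(g[l, m], coords[n])
                               - sp.diff(g[m, n], coords[l]))
                for l in range(2)) / 2)
           for n in range(2)]
          for m in range(2)]
         for s in range(2)]

print(Gamma[0][1][1])   # Gamma^r_{theta theta} = -r  (nonzero, yet flat space)
print(Gamma[1][0][1])   # Gamma^theta_{r theta} = 1/r
```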

A tensor, depending on where you are coming from, has inherently more geometric or algebraic content than a matrix. From the geometric point of view, tensors may be identified with matrices, but only if you gauge-fix a frame, and they satisfy certain transformation rules, one of which you have stated in your post. From the algebraic point of view, the space of tensors has to satisfy what is called the universal factorization property, stated as

Universal factorization property: Let $(V\otimes W,p)$ be the tensor product of the vector spaces $V$ and $W$. Then for any multilinear map $A:V\times W\rightarrow X$, there exists a unique linear map $A^\otimes:V\otimes W\rightarrow X$ such that $A=A^\otimes\circ p$ .
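As a concrete instance (my illustration, not part of the original answer): take $V=W=\mathbb{R}^2$ and the bilinear dot product $A(v,w)=v_1w_1+v_2w_2$. It factors as $A=A^\otimes\circ p$, where $p(v,w)=v\otimes w$ and $A^\otimes$ is the linear map fixed on basis elements by

$$A^\otimes(e_i\otimes e_j)=\delta_{ij},$$

extended linearly to all of $\mathbb{R}^2\otimes\mathbb{R}^2$; the bilinear map on $V\times W$ has been traded for a linear map on $V\otimes W$.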

2) I find it impossibly hard to explain this without the use of principal fiber bundles, but the concept of "covariance" is not so simple. Here are several statements:

  • An equation between two objects given by components is always frame-independent if the two objects transform the same way under a change of frame. If two spinors are equal in some frame, they are equal in other frames as well. If two densities are equal in some frame, they are equal in other frames as well. If two connections are equal in some frame, they are equal in other frames as well.

  • Because tensors transform homogeneously, if a tensor is zero in some frame, then it is zero in other frames as well. But this is not unique to tensors: densities, for example, also transform homogeneously, so this is true for them as well.

  • The previous statement implies that a tensor equation reduced to zero ($S=T\Leftrightarrow S-T=0$) stays that way under a change of frame. Note that this is once again not unique to tensors. Connections do not transform homogeneously, so $\Gamma^\sigma_{\ \mu\nu}=0$ is not a frame-independent equation, but if $\Gamma$ and $\omega$ are connections, then $\Gamma^\sigma_{\ \mu\nu}-\omega^\sigma_{\ \mu\nu}=0$ is frame-independent, because the difference of two connections transforms homogeneously (tensorially, in fact, but homogeneous would be enough).

  • The statements above show that the usual shizzle about "general covariance" is not enough to single out tensors as the objects of interest. The thing about tensors is that their components have (multi)linear dependence on directions. This is something neither densities nor connections possess (spinors kind of do, but that is a very different matter).

    For this reason tensors are used to represent physical quantities that depend linearly on one or more directions. These are the quantities that are 1) frame-independent and 2) possible to measure pointwise. By contrast, densities are used to represent physical quantities for which only integrals are frame-independent, and connections are used to represent physical quantities that are inherently frame-dependent (the gravitational field, for example).

Now, using more advanced terminology, I would say that a class of fields is covariant with respect to a Lie group $G$ if there exists a principal fiber bundle $(P,\pi,M,G)$ such that $G$ admits a representation $\rho:G\rightarrow GL(k)$ ($GL(k)$ may be the real or complex general linear group) and such that the fields in question are sections of an associated vector bundle $(P\times_\rho \mathbb{F}^k,\pi,M,\mathbb{F}^k,\rho(G))$.

For $P=F(M)$ the frame bundle, $G=GL(n,\mathbb{R})$, and $\rho$ the contragredient and tensor product representations of the fundamental representation, this produces tensors; and for $\rho:G\rightarrow GL(1,\mathbb{R})$ with $\rho(A)$ acting as multiplication by $|\det(A)|$, it produces scalar densities of weight 1. So these objects are all "covariant" with respect to $GL(n,\mathbb{R})$, but connections, for example, sit outside this framework.
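As a tiny sanity check of the density example (my illustration, not part of the original answer), $\rho(A)=|\det(A)|$ really is a one-dimensional representation of $GL(n,\mathbb{R})$, i.e. it respects the group law, which is what the associated-bundle construction requires:

```python
# Verify rho(A B) = rho(A) rho(B) for rho(M) = |det M| on random matrices.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

def rho(M):
    """The 1-dimensional representation: multiplication by |det M|."""
    return abs(np.linalg.det(M))

print(rho(A @ B))        # equal to the line below,
print(rho(A) * rho(B))   # up to floating-point rounding
```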

Bence Racskó
  • Exactly the same comment as above: thanks very much. +1 I can't speak for the OP, but I found your answer very helpful as a transition between the "just use them" idea of tensors I adopt, and the more formal way of treating them when a deeper understanding is required. –  Jul 04 '17 at 20:41

A tensor field with 2 indices down is a smoothly varying field $B(-,-)$ on a smooth manifold $M$ which, at a point $x \in M$, is a bilinear map:

$B_x(-,-): T_x \times T_x \to \mathbb{R}$

where $T_x$ is the tangent space of $M$ at $x$. For example, a metric $g$ is such a tensor, and a symplectic form is another.

Similarly, a tensor field with 1 index up and 1 index down is a smoothly varying field $A(-)$ which, at a point $x \in M$, is:

$A_x(-): T_x \to T_x$.

So a tensor field with 2 indices down is a field of bilinear maps on each tangent space, while a tensor field with 1 index up and 1 index down is a field of linear endomorphisms on each tangent space.

More generally, tensor fields are fields of elements in some tensor product of a number of copies of the tangent space with a number of copies of its dual. Intuitively, as the manifold curves, the tensor fields have to curve accordingly. This is the intuitive meaning of the transformation laws.

A single matrix with constant entries can be thought of as a field on a single point (if one wants to think about it that way), but the tangent space of a single point is just 0. So a single matrix is not a tensor field.

However, given a tensor field of one of the two types above, and given any point $x \in M$, one can represent the value of the tensor at $x$ by a single matrix with respect to some basis of $T_x$. So in this sense, such a tensor field can be thought of in local coordinates as a field of $n \times n$ matrices, where $n = \dim M$.
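To illustrate (my example, taking the round metric on the unit sphere as the assumed tensor field): in $(\theta, \phi)$ coordinates, the metric is the field of $2 \times 2$ matrices $\mathrm{diag}(1, \sin^2\theta)$, and the bilinear map $B_x$ of the answer is evaluated by sandwiching the matrix between component vectors.

```python
# The round metric on the unit sphere as a field of 2 x 2 matrices.
import numpy as np

def g(theta, phi):
    """Matrix of the round metric at the point (theta, phi)."""
    return np.array([[1.0, 0.0],
                     [0.0, np.sin(theta)**2]])

theta, phi = 1.0, 0.3
u = np.array([0.5, 2.0])    # two tangent vectors at (theta, phi),
v = np.array([1.0, -1.0])   # in the coordinate basis (d_theta, d_phi)

# The bilinear map B_x(u, v), evaluated via the matrix representation:
print(u @ g(theta, phi) @ v)
```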

I thought I'd provide a more geometric answer, though I completely understand if the OP may prefer an answer that is phrased in a more physical language. It may provide an alternative point of view though.

Malkoun
  • +1 I can't speak for the OP, but I found your answer very helpful as a transition between the "just use them" idea of tensors I adopt, and the more formal way of treating them when a deeper understanding is required. –  Jul 04 '17 at 20:39
  • @Countto10 Thank you! This is why I think Mathematicians and Physicists should communicate more! – Malkoun Jul 04 '17 at 21:24