22

I was going through the definition of stress and came to know that stress is a tensor quantity. When I searched the net for 'What is a tensor quantity?', I came to know that it is neither a scalar quantity nor a vector quantity. So how can a quantity be neither a vector nor a scalar?

Physicists on this site, please explain your answers in simple ways too, so that they are easy enough for others (school students, etc.) to understand. You can do so by putting the heading "SIMPLER WAY" at the top of your answers.

Student
  • 347

11 Answers

44

The best way to settle all the problems with vectors, tensors, and other related concepts is to forget about the sloppy definitions quite widespread among physicists and start with mathematically clean definitions. Only after this first step can one make sense of most of the specialized (and sometimes confusing) physical language.

I'll try to provide some reasons for this introductory statement. You start with the question "How can quantities be neither vectors nor scalars?". To provide a direct answer, it would be useful to start with the definition of a vector.

The mathematical definition is that a (real) vector is a member of a set of objects such that a binary composition (sum) is defined for every pair of vectors. Moreover, for each vector, its multiple by a real number is defined and is again an element of the set. The sum of vectors and multiplication by a real number must obey a set of rules (see, for instance, the definition section on the Wikipedia page for vector spaces). In brief, the sum must be commutative, associative, equipped with a neutral element (the zero vector), and for each vector, an (additive) inverse vector must exist. Multiplication by a scalar (any real number) must also be associative, multiplication by $1$ must be the identity, and multiplication must be distributive with respect to the sum of vectors and scalars.

This definition looks much more complicated than the usual physical definition of a vector as a quantity characterized by magnitude and direction. Still, it is more complete (it explicitly contains the relevant information about how to combine vectors), and it is more general (it applies even to vectors for which the concept of direction would be unclear, like the case of functions).

Therefore a first partial answer to the original question appears: yes, there are physical quantities that are neither scalars nor vectors. Interestingly, this situation is not confined to tensors. The state of a fluid at some point, as described by the local velocity, density, and internal energy, is a tuple of physical quantities which is neither a scalar nor a vector, but instead the combination of one vector and two scalars. Even more important, finite rotations in three dimensions, even though they are characterized by magnitude and direction, fail to be vectors because finite rotations do not commute, whereas vector addition must.
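Here is a quick numerical check of that last point (a minimal NumPy sketch; the quarter-turn angles are just an arbitrary choice): composing two quarter-turns in different orders gives different results, which is precisely why finite rotations cannot be combined the way vectors are.

```python
import numpy as np

def rot_x(theta):
    """Rotation by angle theta (radians) about the x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0],
                     [0, c, -s],
                     [0, s,  c]])

def rot_z(theta):
    """Rotation by angle theta (radians) about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]])

quarter = np.pi / 2
# "x-rotation then z-rotation" differs from "z-rotation then x-rotation":
# finite rotations do not commute, so they cannot be added like vectors.
print(rot_z(quarter) @ rot_x(quarter))
print(rot_x(quarter) @ rot_z(quarter))
```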

Let's go to the tensors. In linear algebra, they emerge naturally from a way of combining vectors that is different from the sum. A tensor space is a vector space built out of two or more starting vector spaces (their tensor product). As such, a tensor is a vector belonging to the space built from the original vector spaces. Therefore, a tensor shares with any other vector all the basic axioms: tensors of the same tensor space can be summed and multiplied by a scalar.

The meaning of the sentence "tensor is not a vector" is that a tensor does not belong to any of the vector spaces used to build the tensor product. It is a vector of a different vector space.

To add complexity to this picture, tensor spaces can be used to describe linear transformations of vector spaces into vector spaces. That is the case for the stress and strain tensors. The stress tensor maps a vector (describing the orientation of a surface element) into another vector (belonging to a different vector space): the force on that surface element. The statement that the stress tensor is not a vector refers uniquely to the fact that it is neither a direction nor a force. However, the set of all possible stress tensors can be equipped with a sum and a scalar multiplication obeying all the axioms of a vector space.

At first glance, this may sound confusing, but some investment of time in understanding the mathematical definition of a vector space is highly rewarding in terms of physical understanding.

  • 16
    This is not helpful at all. Yes, a physicist’s tensor is a special case of what a mathematician would call a vector. But it’s still a geometrically distinct object from a physicist’s vector, and you haven’t addressed this. – knzhou May 31 '21 at 17:09
  • 3
    In addition, the physicist’s definitions of scalars, vectors, and tensors are not unrigorous at all. They correspond to objects which transform in given representations of $SO(3)$. – knzhou May 31 '21 at 17:12
  • 14
    @knzhou Therefore, you think that understanding the definition of a vector space is not useful? (its usefulness was a strong point in my main message). If you think it is not useful, please, have a look at Deschele Schindler's answer. Your comment about the representations of $SO(3)$ is even less understandable. The whole representation theory of groups hinges on the concept of vector space and certainly, you know that you may have different representations on spaces of different dimensionality. But... how do you define the dimensionality of the space? – GiorgioP-DoomsdayClockIsAt-90 May 31 '21 at 17:57
  • The second-to-last paragraph is confusing, because engineers historically wrote the six independent components of symmetric tensors like stress and strain as so-called "vectors with 6 components" which do not behave anything like genuine vectors. Thankfully, modern continuum mechanics ignores that notational aberration and uses "genuine" tensors throughout. Most of the confusion about stress and strain at high-school level comes from using individual components of the stress and strain tensors and pretending they are scalars. – alephzero Jun 01 '21 at 21:36
  • 1
    Hi all -- I tried to clear out the discussion comments but leave ones related to the content specifically... If somebody feels I didn't represent the criticisms and responses appropriately, let me know and I can try again. – tpg2114 Jun 02 '21 at 11:13
20

There's a popular misconception that physicists define vectors as quantities with magnitude and direction. The only people I know who define vectors that way are schoolteachers, and they hope nobody asks if $\pm$ are directions in $\Bbb R$. @GiorgioP's answer has already explained what vector spaces and tensor spaces are, so vectors and tensors, respectively, live in those. But I want to talk about another notion of both of those terms in (more than just) physics.

For my present purposes, a tensor is more than just a multidimensional array of numbers; it's a quantity whose components transform a certain way under a change of coordinates. A rank-$(p,\,q)$ tensor $T$ of components $T^{a_1\cdots a_p}_{b_1\cdots b_q}$ in coordinates $x^c$ transforms, in coordinates $y^C$, to $T^{A_1\cdots A_p}_{B_1\cdots B_q}$ satisfying $$T^{a_1\cdots a_p}_{b_1\cdots b_q}=\sum_{A,\,B}\prod_{i=1}^p\frac{\partial x^{a_i}}{\partial y^{A_i}}\prod_{j=1}^q\frac{\partial y^{B_j}}{\partial x^{b_j}}T^{A_1\cdots A_p}_{B_1\cdots B_q}.$$ Such a tensor has total rank $p+q$. Let's consider four special cases:

  • $p=q=0$ so $T$ has one unlabelled component, invariant under the change in coordinates. We call $T$ a scalar. The other cases I'll discuss have a nonzero but fixed number of indices at each height, so I'll presently introduce a shorthand.
  • $p=0,\,q=1$ so $T_b=\sum_B\frac{\partial y^B}{\partial x^b}T_B$. We know exactly how many indices we're summing over; so, as promised, let's write $T_b=\frac{\partial y^B}{\partial x^b}T_B$. We say $T$ is a covariant vector. The choice $T_b:=\frac{\partial f}{\partial x^b}$ with $f$ a function of the coordinates satisfies $df=T_bdx^b=\nabla f\cdot dx$, so the gradient of a function is a covariant vector. In particular, we can drop the $f$ from $\frac{\partial f}{\partial x^b}=\frac{\partial y^B}{\partial x^b}\frac{\partial f}{\partial y^B}$ to state this behaviour as an operator equation, the chain rule $\frac{\partial}{\partial x^b}=\frac{\partial y^B}{\partial x^b}\frac{\partial}{\partial y^B}$.
  • $p=1,\,q=0$ so $T^a=\frac{\partial x^a}{\partial y^A}T^A$. We say $T$ is a contravariant vector. In $df=f_adx^a$ with $f_a:=(\nabla f)_a$, $dx$ is contravariant.
  • $p=q=1$ so $T^a_b=\frac{\partial x^a}{\partial y^A}\frac{\partial y^B}{\partial x^b}T^A_B$. We say $T$ is a mixed tensor of rank $2$. One can raise or lower an index so it won't be mixed. In fact, the most important examples of rank-$2$ tensors that aren't mixed are the metric tensors used to do this, viz. $T^a_b=g^{ac}T_{cb}=g_{bd}T^{ad}$.

As you can see, a vector is just a rank-$1$ tensor.
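To make the transformation rule concrete, here is a minimal NumPy sketch assuming a linear change of coordinates $y = Ax$, so the Jacobian $\partial y^B/\partial x^b$ is just the constant matrix $A$ (the matrix and the vector $c$ are arbitrary made-up values); it checks that the gradient of $f(x)=c\cdot x$ transforms covariantly.

```python
import numpy as np

rng = np.random.default_rng(0)

A = rng.normal(size=(3, 3))        # linear change of coordinates y = A x, so dy^B/dx^b = A[B, b]
c = rng.normal(size=3)             # f(x) = c . x, whose gradient in x-coordinates is just c

grad_x = c                         # components T_b in the x coordinates
grad_y = np.linalg.inv(A).T @ c    # components T_B in the y coordinates (since f = c . A^{-1} y)

# Covariant transformation rule: T_b = sum_B (dy^B / dx^b) T_B
assert np.allclose(grad_x, A.T @ grad_y)
print("the gradient components transform covariantly")
```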

Where possible, physicists like to write an equation governing a system in the form "tensor is zero" or "tensor equals a tensor". This simultaneously implies the invariance of the equation's validity under changes of coordinates and tells us how (if at all) the components transform. For example, $\partial_aF^{ab}=\mu_0j^b$ is a truth, regardless of coordinate system, about electromagnetism that equates two contravariant vectors.

One of Einstein's most important contributions to physics was realizing which coordinate transforms should preserve laws such as these. Before him, Galilean invariance was attempted instead of Lorentz invariance. Newtonian physics is an approximation that uses the former. For example, $f^i=\frac{d}{dt}p^i$ is Newton's second law. (In Cartesian coordinates, Galilean invariance amounts to taking the metric tensors to be identity matrices, so the height of indices is irrelevant.) The $3$-vectors of Newtonian mechanics, while perhaps more familiar to ordinary experience than $4$-vectors (although both are measurable), behave as vectors only under Galilean transforms, so from a modern perspective they're "not true vectors" because they don't transform the right way. But in the terms I've presented here, they would have been considered true vectors before such discoveries.

J.G.
  • 24,837
12

In simplest terms: tensors encode more information than vectors (= rank 1 tensors), which encode more information than scalars (= rank 0 tensors).

For example:

  • If you have a metal bar with a certain temperature distribution, not necessarily the same everywhere within the bar, for each point you only need to know one number to describe the temperature there.
  • If you put the bar next to a powerful magnet, it will be subject to a force, again, not the same everywhere. This time, one number is not enough to describe the force: if you look at a certain point, you need to know both the magnitude and the direction, or alternatively, the $x$, $y$ and $z$ components of the force. This is a 3-dimensional quantity.
  • If the bar ends up deformed, now you need even more information. One way to describe a deformation is to imagine a grid inside the bar before the deformation and then to look at how it deforms in terms of “If I go along the $i$-th dimension of the grid, by how much does the deformation cause it to veer in the direction of the $j$-th dimension?”—this set of 9 questions requires a 9-dimensional quantity as the answer, or a rank 2 tensor in the 3D space.

It doesn’t stop there, for example, the Christoffel symbol in general relativity tells us that if we go a small distance along the $i$-th basis vector, then the curvature of spacetime will cause the $j$-th basis vector to increase by this much in the direction of the $k$-th basis vector, making this a rank 3 tensor. And so on.
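To make the counting concrete, here is a minimal sketch in Python/NumPy (the numbers are invented purely for illustration): a scalar field stores one number per point, a vector field three, and a rank-2 tensor field nine.

```python
import numpy as np

temperature = 21.5                           # scalar: one number per point
force = np.array([0.0, -9.8, 0.0])           # vector: three numbers per point
deformation = np.array([[1.01, 0.02, 0.00],  # rank-2 tensor: nine numbers per point,
                        [0.02, 0.99, 0.00],  # entry (i, j) answering the "veer"
                        [0.00, 0.00, 1.00]]) # question from the bullet above
print(np.shape(temperature), force.shape, deformation.shape)  # () (3,) (3, 3)
```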

  • 1
    Good answer, except that the Christoffel symbols are not a tensor. You could write a similar sentence about the Riemann tensor though. – knzhou Jun 01 '21 at 15:48
  • Christoffel symbols are examples of a connection, if I remember correctly. – Duncan W Jun 01 '21 at 16:05
  • What would be a better example of a rank 3 tensor with a simple but rigorous explanation of what it does to 3 vectors? – Roman Odaisky Jun 01 '21 at 16:53
5

Imagine a vector as you know it. Sticking with the stress example, say it is the restoring force inside a rubber band.

So if we displace the object along a direction $\hat r$ by a magnitude $dr$, the force developed is given by: $$\vec F = k \times d\vec r$$

Here $k$ is a scalar: it scales the vector $d\vec r$ to a new magnitude. Note that multiplying by a scalar cannot change the direction.

But now imagine that instead of a rubber band we have a rubber sheet. And let's say (this is important) the sheet is such that it is easier to stretch along one direction and harder along the other; let's call these directions $\hat{x'}$ and $\hat{y'}$, and let the proportionality constants along them be $k_1$ and $k_2$. So if we want to find the restoring force developed inside the sheet when we stretch it in some arbitrary direction $d\vec r$,

we resolve our displacement vector $d\vec r$ along $\hat{x'}$ and $\hat{y'}$ as:
$d\vec r = dx'\hat{x'} + dy'\hat{y'}$

[Figure: the displacement $d\vec r$ resolved into components $dx'\hat{x'}$ and $dy'\hat{y'}$]

Now the restoring forces for each of these component displacements are:

$F_{x'} = k_1 dx'$ along $\hat{x'}$

$F_{y'} = k_2 dy'$ along $\hat{y'}$

So net restoring force developed is : $$\vec F = k_1 dx' \ \mathbf{\hat{x'}}+ k_2 dy' \ \mathbf{\hat{y'}}$$

Notice that this cannot be written as $\vec F = k\, d\vec r$ for some scalar $k$ (unless $k_1 = k_2$). That means $\vec F$ is not in the same direction as $d\vec r$. So a scalar cannot describe how force and displacement are related.

But we can write the relation as : $$\pmatrix{F_{x'} \\ F_{y'} } = \pmatrix{k_1 & 0 \\ 0 & k_2} \pmatrix{dx' \\ dy' }$$

This 2×2 matrix, which is neither a scalar nor a vector, is the thing you are searching for. It is something that not only scales a vector but can also change its direction. This is the kind of object a stress tensor is; in a given basis it is represented by a matrix.

The same happens in many cases: electric polarisation, moment of inertia, etc. are also described not by scalars but by matrices, because some substances polarise differently, have different moments of inertia, and so on, along different directions.

In this case, I chose my coordinate axes along the two special directions $\hat{x'}$ and $\hat{y'}$. If our axes are not chosen that way, the off-diagonal elements of the $\mathbf k$ matrix will also be non-zero. And if our sheet has edges that are not perpendicular to the direction of the force, we may have to resolve the forces along the normal vectors as well... So the matrix gets more and more complex, but I hope I have conveyed how something can be neither a scalar nor a vector.
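Here is a minimal NumPy sketch of the rubber-sheet example above (the stiffness values and angles are made up): it shows that the force is not parallel to the displacement, and that rotating the axes produces non-zero off-diagonal entries in the $\mathbf k$ matrix.

```python
import numpy as np

k1, k2 = 3.0, 1.0                       # stiff along x', soft along y' (made-up values)
K = np.diag([k1, k2])                   # the "k matrix" in the primed axes

dr = np.array([1.0, 1.0]) / np.sqrt(2)  # stretch at 45 degrees to the special axes
F = K @ dr                              # restoring force

# F is not parallel to dr, so no single scalar k can relate them.
print(F, dr[0] * F[1] - dr[1] * F[0])   # nonzero "2D cross product" => directions differ

# Re-express the same physics in axes rotated by 30 degrees:
t = np.radians(30)
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
K_rotated = R @ K @ R.T                 # off-diagonal entries appear
print(K_rotated)
```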

4

Both vectors and scalars are special cases of tensors. A zero-order tensor is a scalar. A first-order tensor is a vector.

user256872
  • 6,601
  • 2
  • 12
  • 33
  • 4
    So do you mean that tensor is like a larger group that constitutes both vectors and scalars? In that case, force - a vector quantity is a tensor(1st order), and even mass, a scalar quantity is also a tensor (zero-order). Have I gotten you correct?? – Student May 30 '21 at 11:28
  • Yes, exactly. Furthermore, the same way that the entries in a column vector are scalars, you can construct a second-order tensor with vectors. E.g. let $\mathbf v = \pmatrix{1 \\ 1 \\ 0}$ and $\mathbf u = \pmatrix{0 \\ 1 \\ 1}$, then some tensor $\mathbf T$ can be expressed as $\mathbf T = \pmatrix{1 & 0 \\ 1 & 1 \\ 0 & 1}$ using vectors as $\bf T = \pmatrix{\bf v & \bf u}$ – user256872 May 30 '21 at 19:17
  • 1
    Can't really downvote because it is quite common in physics and engineering to see vectors and scalars as special cases of tensors, but I disagree with this sentiment. Mathematically, it's much more sensible to not define tensors by themselves at all, but instead talk about tensor products of vector spaces. – leftaroundabout May 31 '21 at 08:03
  • Formally, the number 3 and the vector [3] are actually different things. Practically, everyone outside of a theoretical mathematics paper or proof ignores that and just conflates them, but technically they are different. – RBarryYoung May 31 '21 at 21:49
4

In the case of the stress tensor, it relates two entities that can be represented as arrows in space: the normal direction to a surface, and the distributed force on this surface.

If the normal direction is the unit vector $\mathbf {\hat n}$ and $\mathbf T$ is the distributed force: $\sigma \mathbf {\hat n} = \mathbf T$

The entries of $\sigma$ change if the coordinates are rotated, and the rule of change is $\sigma' = R\sigma R^T$, where $R$ is the rotation matrix. In the case of $\mathbf {\hat n}$ (and the same for $\mathbf T$), the rule is $\mathbf {\hat n'} = R\mathbf {\hat n}$.

It is those rules of change that make a square matrix a tensor, or a column matrix a vector.
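A minimal NumPy sketch of those rules (the stress values, normal, and rotation angle are arbitrary made-up numbers): it checks that if $\sigma' = R\sigma R^T$ and $\mathbf{\hat n}' = R\mathbf{\hat n}$, then $\sigma'\mathbf{\hat n}'$ is just the rotated traction $R\mathbf T$, i.e. the same physical relation expressed in the new coordinates.

```python
import numpy as np

sigma = np.array([[2.0, 0.5, 0.0],   # a made-up symmetric stress tensor
                  [0.5, 1.0, 0.0],
                  [0.0, 0.0, 3.0]])
n = np.array([0.0, 0.0, 1.0])        # unit normal of a surface element
T = sigma @ n                        # traction on that surface

t = np.radians(40)                   # rotate coordinates about the x-axis
R = np.array([[1, 0, 0],
              [0, np.cos(t), -np.sin(t)],
              [0, np.sin(t),  np.cos(t)]])

sigma_p = R @ sigma @ R.T            # sigma' = R sigma R^T
n_p = R @ n                          # n'     = R n

# sigma' n' equals R T: the physics is unchanged, only the components
# were re-expressed in the rotated coordinates.
assert np.allclose(sigma_p @ n_p, R @ T)
```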

2

Much like @GiorgioP, I'm of the opinion that mathematical objects ought to be described in mathematical terms. However, appealing to and subsequently glossing over the specific, technical definition of the tensor product is only likely to lead to confusion if anyone attempts to read further.

As such, I'll summarize/explain the definition of a mathematician's "tensor" as appearing in the appendix of Lee's Introduction to Riemannian Manifolds; this is what was helpful for me in understanding tensors without tensor products, which subsequently helped me understand the tensor product itself. After all, this definition is what the corresponding product's obtuse, categorical definition was created to be consistent with. My physics is 3-6 years behind my mathematics, so I'm not certain this definition conforms to the physicist's understanding. It should, though.

Vector Spaces

An abstract, mathematical vector space over a field of scalars (for simplicity, $\mathbb{C}$) is a set of objects (not necessarily lists of numbers as one might be used to) that we can do a few things with:

  • add objects to each other to get a new object, in analogy with the "usual" vector addition (e.g. it is commutative and associative, there is a $\vec{0}$, and so on)
  • multiply objects by scalars, once again in analogy with the "usual" scalar multiplication.

Vector Space Product

Since a vector space is a set of objects, we can take Cartesian products of the underlying sets of (not necessarily) different vector spaces. The Cartesian product of two sets $S$ and $T$ is written $S \times T$ and corresponds to the set of all ordered pairs of objects from $S$ and $T$, e.g. the $x$-$y$ plane $\mathbb{R}\times \mathbb{R}$.

It turns out we can put an addition and a scalar multiplication on this new set of objects that are "usual" in just the same way as the old one; this gives rise to a new vector space called the product space. We denote the product space by $\times$, just like in the case for ordinary sets.

Dual/Bra Space

Associated with any abstract vector space $V$ is a dual space $V'$ consisting of all linear functionals over the vector space. A functional is a function $f: V \to \mathbb{C}$, and a linear functional is one with the property that $f(av+w) = af(v)+f(w)$ for all $a\in \mathbb{C}$ and $v,w\in V$.

This is an enormously useful construction in mathematics, often having "nicer" properties than the original space. However, it is arguably more important in physics: we represent all possible states a quantum system can be in by such a vector space, and linear functionals represent measurements of the system. These correspond to bras in Dirac notation, as it turns out that in an inner product space (a vector space with a generalized dot product) any linear functional over $V$ can be represented as $w \mapsto \langle v, w\rangle $ for some $v$, so we write an arbitrary functional in physics as $\langle v|$ for an arbitrary $v$.

Tensors

A natural extension of the linear functional is the multilinear functional, a function $F: V_1\times...\times V_k \to \mathbb{C}$ that is linear in each argument: $F(av + w,...)= aF(v,...) + F(w,...)$, $F(u, av+w,...) = aF(u,v,...) + F(u,w,...)$ and so on. This could be useful, for instance, if one is concerned with simultaneous measurements of multiple quantum systems.

Now, a covariant tensor of rank $k$ over $V$ is a multilinear functional on $V\times...\times V$, where there are $k$ products.

Similarly, a contravariant tensor of rank $l$ over $V$ is a multilinear functional on $V'\times...\times V'$, where there are $l$ products.

Mixed tensors are multilinear functionals that act on spaces formed not purely from the products of $V$ or $V'$, and are called $k$-covariant, $l$-contravariant tensors.

These derive their use in physics primarily from Riemannian/pseudo-Riemannian geometry: a metric tensor on a surface (picture a complex, curved 2D sheet in a 3D space) takes in two tangent vectors $v,w$ to the surface at a point and outputs a scalar, and as such is a rank-2 covariant tensor over the tangent space to the surface. This is then used to compute a notion of the intrinsic curvature of the surface at a point, which you'll likely know is closely related in GR to the gravitational field.
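As a small illustration of "a rank-2 covariant tensor eats two vectors and returns a scalar", here is a minimal NumPy sketch with a made-up 2×2 metric at a single point; it also checks multilinearity in the first argument.

```python
import numpy as np

# A made-up symmetric positive-definite metric at one point of a surface.
g = np.array([[2.0, 0.3],
              [0.3, 1.0]])

def metric(v, w):
    """Rank-2 covariant tensor: eats two tangent vectors, returns a scalar."""
    return v @ g @ w

v = np.array([1.0, 0.0])
w = np.array([0.5, 2.0])
u = np.array([-1.0, 1.0])
a = 3.0

# Multilinearity in the first argument: g(a v + w, u) = a g(v, u) + g(w, u)
assert np.isclose(metric(a * v + w, u), a * metric(v, u) + metric(w, u))
print(metric(v, w))
```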

The Cauchy stress tensor can be understood to be a 2-covariant tensor, since it maps two vectors in $\mathbb{R}^3$, the normal surface vector and the force vector, to the corresponding stress value. I'm not sure what exactly Gio's getting at with saying the tensor "maps the normal vector to the force vector;" that seems inconsistent with this definition of a tensor. I'm sure he's more familiar with the specific tensor than me--perhaps there's a tensor product map involved being confused/alternatively defined with the mathematical tensor?

Tensor Spaces

For complete correspondence with Gio's answer, the formal construction of tensor spaces is simply as the space of all tensors of a given type, for example:

  • $T^k(V)$, the space of all $k$-covariant tensors over $V$,
  • $T^l(V')$, the space of all $l$-contravariant tensors over $V$,
  • $T^{k,l}(V)$, the space of all $k$-covariant, $l$-contravariant tensors over $V$

There's about as many more topics in this subject as I've already explained, like tensor bundles, tensor fields, tensor products, and Lie derivatives. But this should suffice to answer your question, hopefully a little better or more mathematically.

Oh and it just occurred to me to add: one often hears that "a vector is a rank-one tensor." It's more accurate to say that a vector describes a rank-one tensor, since there's little relationship between the original vector and the resulting functional. This fact results from functionals being able to be constructed as a linear combination of basis elements: if a vector $v$ decomposes as $a_1v_1+...+a_nv_n$, then $a_1v_1'+...+a_nv_n'$ is a functional, where $v_1',...,v_n'$ are elements of some dual basis, each functionals themselves. Similarly, scalars are associated with the "tensor" $a\mapsto a$, the identity on the scalar field.

Consequently, the relationship between vectors and scalars and tensors is not terribly useful, as scalars are entirely useless as "tensors" and vectors are almost entirely so.
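The construction two paragraphs above can be sketched numerically; here is a minimal, hypothetical example assuming the standard basis of $\mathbb{R}^3$ and its dual basis (so the dual basis functional $v_i'$ just reads off the $i$-th component).

```python
import numpy as np

# Coefficients a_i of a vector v in the standard basis of R^3 (made-up values).
a = np.array([2.0, -1.0, 0.5])

def functional(w):
    """The rank-1 covariant tensor built from v's coefficients: w -> sum_i a_i w_i."""
    return a @ w

w = np.array([1.0, 1.0, 1.0])
print(functional(w))  # a linear map R^3 -> C (here R), not an arrow in R^3
```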

Duncan W
  • 288
  • Wow! Thanks. I wanted something like that too so that I could understand all the important terms otherwise I would have to research a lot and spend a lot of time. I would request everyone to just go through these and like if there is any edit required, please do so. Thanks a lot :) – Student Jun 01 '21 at 16:53
  • Actually, if all the complex terms that may not be understood are also included, it would be awesome! Not only for me but also for other people I believe. – Student Jun 01 '21 at 16:54
  • Two comments: 1) manifolds are not necessary to introduce vectors and tensors. I think that from a logical and pedagogical point of view, it is better to start with vector spaces and tensor spaces before adding further mathematical structures like vector or tensor fields on manifolds; ii) the natural way to introduce Cauchy's stress tensor is as a set of 9 coefficients in the linear relation between a unitary vector defining a direction and the force (per unit area) on a surface element in a continuum medium. – GiorgioP-DoomsdayClockIsAt-90 Jun 01 '21 at 17:38
  • @GiorgioP 1) I don't think I introduced manifolds before vectors and tensors. I agree with what you said, I just don't think I did it differently. My only pedagogical point was that tensor products aren't necessary to understand tensors; it's the reverse. ii) This is probably the case; I haven't much experience with this particular tensor. I just wanted to share the mathematical perspective I've found helpful to understand tensors generally. – Duncan W Jun 02 '21 at 08:29
1

The key to this question is: what is a tensor, such that it is neither a vector nor a scalar (and yet we care about it)? It is an object that transforms as a representation of $SO(3)$ (the group of rotations in $\mathbb{R}^3$... so I'm talking about 3-vectors; 4-vectors are similar).

The reason we use scalars, vectors, and tensors is that they are objects that respect certain symmetries of space in a mathematically precise way. They can be grouped into sub-sets (irreducible subspaces) that do not inter-mix under rotations. Moreover, they can be put into a basis of eigentensors of rotations: they transform into themselves, scaled by a factor. That is a very special property.

At any given order $l\in(0,1,2,\ldots)$, there are $2l+1$ basis tensors, $T^{(l,m)}$, that are eigentensors of rotations $R_{\theta}$ about an arbitrary axis (chosen here to be $z$):

$$ R_{\theta}(T^{(l,m)})= e^{im\theta}T^{(l,m)}$$

where $m\in(-l,-l+1,\ldots,l-1,l)$.

Scalars are the trivial representation ($m=0 \implies e^{im\theta}=1$). They are unchanged by rotations. Physics examples of scalars are pressure, density, and energy density. (There are also pseudoscalars like $\vec E \cdot \vec B$, which are rotationally invariant but change sign under reflection.) Since there is only one basis scalar, this representation, or anything that transforms like it, is called a "singlet" and is written as ${\bf 1}$.

Vectors in the fundamental representation are:

$${\bf \hat{e}}^{\pm} = \mp({\bf \hat{x}} \pm i {\bf \hat{y}})/\sqrt{2}$$ $${\bf \hat{e}}^0={\bf \hat{z}}$$

You can verify that they are closed under rotations, and they are eigenvectors of $z$-rotations. This representation is labeled ${\bf 3}$.
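A minimal NumPy check of that claim (note the sign of the phase, $e^{+im\theta}$ versus $e^{-im\theta}$, depends on whether $R_\theta$ is taken to rotate the object or the axes, so treat the sign in the printout as convention-dependent):

```python
import numpy as np

theta = 0.7
c, s = np.cos(theta), np.sin(theta)
Rz = np.array([[c, -s, 0],
               [s,  c, 0],
               [0,  0, 1]])

x, y, z = np.eye(3)
e_plus  = -(x + 1j * y) / np.sqrt(2)
e_minus =  (x - 1j * y) / np.sqrt(2)
e_zero  = z + 0j

for name, e, m in [("e+", e_plus, +1), ("e-", e_minus, -1), ("e0", e_zero, 0)]:
    rotated = Rz @ e
    idx = np.argmax(np.abs(e))
    ratio = rotated[idx] / e[idx]
    # Each basis vector is only rescaled by a unit phase, i.e. it is an eigenvector.
    assert np.allclose(rotated, ratio * e)
    print(name, ratio, np.exp(-1j * m * theta))  # exponent sign flips in the
                                                 # opposite (passive) convention
```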

At this point, if you encounter a physical object with different transformation properties, scalars and vectors will be insufficient to describe it. The first step is to create a tensor product space, whose basis is the 9 dyads: $({\bf\hat{x}}{\bf\hat{x}},{\bf\hat{x}}{\bf\hat{y}}, \cdots, {\bf\hat{z}}{\bf\hat{z}})$. Alternatively, one could take dyads of the various ${\bf\hat{e}}^m$, such as:

$$ {\bf\hat{e}}^{(2,\pm 2)}\equiv {\bf\hat{e}}^{\pm}{\bf\hat{e}}^{\pm} = \frac 1 2\left[{\bf\hat{x}}{\bf\hat{x}}-{\bf\hat{y}}{\bf\hat{y}}\pm i ({\bf\hat{x}}{\bf\hat{y}}+{\bf\hat{y}}{\bf\hat{x}})\right]$$

which are also eigentensors of $z$-rotations (with eigenvalue $e^{\pm 2i\theta}$). The trouble is that, in general, a dyad of two (or more) basis vectors does not sit in a single irreducible multiplet. Nevertheless, that can be sorted out with Clebsch-Gordan coefficients, giving:

$$ {\bf\hat{e}}^{(2,\pm 1)}\equiv ({\bf\hat{e}}^{\pm}{\bf\hat{e}}^{0}+{\bf\hat{e}}^{0}{\bf\hat{e}}^{\pm})/\sqrt 2 = \frac 1 2[\mp({\bf\hat{x}}{\bf\hat{z}}+{\bf\hat{z}}{\bf\hat{x}}) -i({\bf\hat{y}}{\bf\hat{z}}+{\bf\hat{z}}{\bf\hat{y}}) ]$$

$$ {\bf\hat{e}}^{(2,0)}\equiv ({\bf\hat{e}}^{+}{\bf\hat{e}}^{-}+{\bf\hat{e}}^{-}{\bf\hat{e}}^{+} +2{\bf\hat{e}}^{0}{\bf\hat{e}}^{0})/\sqrt 6 = (-{\bf\hat{x}}{\bf\hat{x}}-{\bf\hat{y}}{\bf\hat{y}}+2{\bf\hat{z}}{\bf\hat{z}})/\sqrt 6$$

The ${\bf \hat{e}}^{(2,m)}$ are the 5 basis tensors for the pure rank-2 part. Under an arbitrary rotation, they transform among themselves.

You've probably noticed that $5<9$...we're missing some degrees-of-freedom. This is solved by representation theory, via:

$${\bf 3}\otimes{\bf 3}={\bf 5}\oplus{\bf 3}\oplus{\bf 1}$$

which says that a tensor product of triplets can be decomposed into a tensor sum of the quintet, another triplet, and another singlet.

The singlet is the isotropic part of a rank-2 tensor. It is invariant under rotations:

$$ {\bf\hat{e}}^{(0,0)}\equiv ({\bf\hat{e}}^{+}{\bf\hat{e}}^{-}+{\bf\hat{e}}^{-}{\bf\hat{e}}^{+} -{\bf\hat{e}}^{0}{\bf\hat{e}}^{0})/\sqrt 3 \propto ({\bf\hat{x}}{\bf\hat{x}}+{\bf\hat{y}}{\bf\hat{y}}+{\bf\hat{z}}{\bf\hat{z}})$$

It is proportional to:

$$ \delta_{ij}=\left(\begin{array}{ccc} 1 & 0 & 0\\ 0 &1&0\\ 0&0&1 \end{array}\right)$$

(note that $ \delta_{ij}$ is not a matrix, even though I was able to represent it as one in a Cartesian basis). As part of the Cauchy stress tensor, it represents the sum of pressure and velocity divergence. The remaining part is called the deviator:

$$s_{ij}=\sigma_{ij}- \frac 1 3 {\rm Tr}(\sigma)\delta_{ij}$$

which has the same 5 independent components as the ${\bf{\hat e}}^{(2,m)}$, which in a fluid can be associated with quadrupole shape changes of a fiducial volume. Basically, these are the only allowed geometric shapes for a rank-2 process, just as a vector can only be associated with one direction.

The remaining triplet is the antisymmetric part of the tensor:

$$ {\bf\hat{e}}^{(1,\pm 1)}\equiv ({\bf\hat{e}}^{\pm}{\bf\hat{e}}^{0}-{\bf\hat{e}}^{0}{\bf\hat{e}}^{\pm})/\sqrt 2 = \mp\frac 1 2[({\bf\hat{x}}{\bf\hat{z}}-{\bf\hat{z}}{\bf\hat{x}}) \pm i({\bf\hat{y}}{\bf\hat{z}}-{\bf\hat{z}}{\bf\hat{y}}) ]$$

$$ {\bf\hat{e}}^{(1,0)}\equiv ({\bf\hat{x}}{\bf\hat{y}}-{\bf\hat{y}}{\bf\hat{x}})/\sqrt 2$$

The last formula is equivalent to the Cartesian form: it is the cross-product, and describes axial-vector objects like torque, angular momentum, magnetic field, etc.
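In Cartesian components, the ${\bf 5}\oplus{\bf 3}\oplus{\bf 1}$ split described above is easy to sketch numerically (a NumPy sketch with a randomly generated rank-2 tensor): isotropic part, antisymmetric part, and symmetric traceless part, with the pieces staying separated under a rotation.

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.normal(size=(3, 3))               # a generic rank-2 tensor (9 numbers)

singlet = np.trace(T) / 3 * np.eye(3)     # isotropic part       (1 component)
triplet = (T - T.T) / 2                   # antisymmetric part   (3 components)
quintet = (T + T.T) / 2 - singlet         # symmetric traceless  (5 components)

assert np.allclose(T, singlet + triplet + quintet)
assert np.isclose(np.trace(quintet), 0.0)

# Each piece transforms only into itself under rotations; e.g. for a z-rotation
# the quintet stays symmetric and traceless:
t = 0.3
R = np.array([[np.cos(t), -np.sin(t), 0],
              [np.sin(t),  np.cos(t), 0],
              [0, 0, 1]])
Tq = R @ quintet @ R.T
assert np.isclose(np.trace(Tq), 0.0)      # still traceless
assert np.allclose(Tq, Tq.T)              # still symmetric
```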

At higher ranks, the algebra becomes more tedious (cf. combining three spin-1 particles in quantum mechanics). You will find that the 27 components break down as follows:

$${\bf 3}\otimes{\bf 3}\otimes{\bf 3}={\bf 7}\oplus {\bf 5}\oplus{\bf 5}\oplus{\bf 3}\oplus{\bf 3}\oplus{\bf 3}\oplus{\bf 1}$$

For example, you can verify the following rank-3 tensors (part of ${\bf 7}$) are eigentensors of $z$-rotations:

$${\bf\hat{e}}^{(3,0)}\propto\left(\begin{array}{ccc} 0 & 0 & -1\\ 0 &0&0\\ -1&0&0 \end{array}\right) \left(\begin{array}{ccc} 0 & 0 & 0\\ 0 &0& -1\\ 0&-1&0 \end{array}\right) \left(\begin{array}{ccc} -1 & 0 & 0\\ 0 &-1&0\\ 0&0&2 \end{array}\right) $$

$${\bf \hat{e}}^{(3, -3)}\propto\left(\begin{array}{ccc} 1 & i & 0\\ i &-1&0\\ 0&0&0 \end{array}\right) \left(\begin{array}{ccc} i & -1 & 0\\ -1 &-i& 0\\ 0&0&0 \end{array}\right) \left(\begin{array}{ccc} 0 & 0 & 0\\ 0 &0&0\\ 0&0&0 \end{array}\right) $$

At rank-4, one encounters the elasticity tensor relating stress and strain (both rank-2 tensors):

$$ \epsilon_{ij}=c_{ijkl}\sigma_{kl}$$

which is the most general linear form of Hooke's Law. Because of the symmetries of stress and strain, only 21 of the 81 entries in $c_{ijkl}$ are independent. A breakdown into spherical components would be involved, but it would further elucidate the geometric nature of tensors as the irreducible objects that are representations of the rotation group.
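As a minimal sketch of a rank-4 tensor in action (using only the isotropic special case, built from a made-up Young's modulus and Poisson's ratio, not anything taken from the answer above): applying $c_{ijkl}$ to a stress state with np.einsum reproduces the familiar isotropic Hooke's law.

```python
import numpy as np

# Isotropic special case of the rank-4 compliance tensor c_ijkl, built from
# Young's modulus E and Poisson's ratio nu (illustrative values only).
E, nu = 200e9, 0.3
d = np.eye(3)
sym = np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)
c = ((1 + nu) / (2 * E)) * sym - (nu / E) * np.einsum('ij,kl->ijkl', d, d)

sigma = np.array([[1e6, 2e5, 0.0],        # a made-up symmetric stress state (Pa)
                  [2e5, 5e5, 0.0],
                  [0.0, 0.0, 0.0]])

eps = np.einsum('ijkl,kl->ij', c, sigma)  # epsilon_ij = c_ijkl sigma_kl

# Same result as the textbook isotropic form eps = ((1+nu) sigma - nu tr(sigma) I)/E
assert np.allclose(eps, ((1 + nu) * sigma - nu * np.trace(sigma) * d) / E)
```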

In general relativity, the Riemann curvature tensor is also rank-4. There it is clearly a "geometric object"; moreover, the curved nature of the metric mandates consideration of covariance and contravariance.

Meanwhile, in special relativity, it is the fundamental representations of the Poincaré group that give us the allowed particle/field types: right and left spinors, 4-scalars, 4-vectors, and so on.

Finally, if you change your rotation group from $SO(3)$ to $SU(2)$, the irreducible reps are parameterized by half-integers, and the fundamental representation becomes the 2-component spinor, which can be considered the most basic "rotatable" object. Moreover, the tensor product space of two spinors becomes the sum of a vector and a scalar space, per:

$${\bf 2}\otimes{\bf 2}={\bf 3}\oplus{\bf 1}$$

JEB
  • 33,420
1

I'm wondering just how you came across a stress tensor without having taken a linear algebra course first. Okay, never mind, I'll try to give you some hint of an answer, in accessible terms only.

Basically, a tensor is an even more intricate thingie than a vector.

(That's not entirely accurate, actually scalars and vectors are a special case of tensors, but you won't really hear people refer to them that way.)

Where a scalar has one component and a vector in your usual Euclidean 3D space has three (x, y, z), most tensors you come across will have nine (even if probably not all of them are independent). When writing these down, such a tensor will be represented by a 3×3 matrix of components:

$$\begin{pmatrix} xx & xy & xz \\ yx & yy & yz \\ zx & zy & zz \end{pmatrix}$$

There are even more monstrous kinds of tensors than this, but you don't find a rank 3 or higher tensor all that often.

Divizna
  • 11
  • Uh huh, when u are on a search to find the answer of everything lol! – Student Jun 14 '21 at 17:54
  • Actually the concept of stress strain is taught in lower grades like 7th grade but enclosed in a do you know box....so now I am in 10 th grade and I need to know it ig – Student Jun 14 '21 at 17:56
0

For another specific example, you might consider gauge transformations. They are given by applying some local symmetry at each point, so such a transformation is neither a scalar nor a vector: it is an operator.
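For instance, a local $U(1)$ phase rotation of a complex field can be sketched as follows (a minimal NumPy example; the field and the phase profile are made up): the transformation acts point by point as an operator, not as a single scalar or vector attached to the whole field.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 5)                # a few points of "space"
psi = np.exp(2j * np.pi * x)                # a made-up complex field

alpha = 3.0 * x**2                          # a different phase angle at every point
psi_transformed = np.exp(1j * alpha) * psi  # local U(1) gauge transformation

# |psi| is unchanged pointwise, but the transformation itself is an operation
# applied at each point, not a scalar or vector quantity.
assert np.allclose(np.abs(psi_transformed), np.abs(psi))
```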

-3

To put it in simple terms.
Scalars have a magnitude, and their only "direction" is a sign, plus or minus, as in one-dimensional space. Vectors have a direction and a length. In three-dimensional space, a vector is represented by three scalars, which tell you how big the vector is along each direction.
Tensors have more directions but no single length. A vector (instead of a scalar, as in the case of vectors) is assigned to each direction. When a vector property of a material is different along different directions, you can use a tensor to express this.
For example, if the force I have to apply to compress a material in one direction is different from the force I need to compress it in a different direction, then you have to use a tensor. You can't use a vector in this case, because each of its components is a scalar, while here each component must itself be a vector (the components of the tensor). You can't say (to stay within the example) that you have to apply a scalar of force in each direction. Of course, you define a direction by choosing a basis (onto which the components of a vector can be projected), but along each of these directions you have to apply a vector. You can say that the components of the tensor (vectors) are projected onto the basis. That means that compressibility (defined as the force you need to compress a material in a certain direction) is a tensor quantity.
This can be generalized to spaces of higher dimensions. A third-rank tensor in ten-dimensional space, for example, has as its components ten-dimensional tensors of second order (a second-order tensor is the kind I described above for compressibility). Each $n$-th order tensor has $(n-1)$-th order tensors as components. A second-order tensor has vectors (first-order tensors) as components. A vector has scalars (zeroth-order tensors) as components.