8

I'll try to make this as short and concise as possible. For equilibrium systems in statistical mechanics, we have Liouville's theorem, which says that the volume in phase space is conserved when the system evolves in time. Formally, one could rephrase this as: the time evolution of equilibrium/stationary systems is a measure-invariant transformation (where the measure here is the volume in phase space).

Now for stationary systems, we have a density function $f(\mathbf{q},\mathbf{p})$ that satisfies Liouville's theorem, with which we can write the ensemble average of a phase space function $A$ as follows: $$ \langle A \rangle = \int f(\mathbf{q},\mathbf{p})A(\mathbf{q},\mathbf{p})\, d\mathbf{q}\, d\mathbf{p} \tag{1} $$ Similarly, the time average of the same function $A$ is defined by: $$ \langle A \rangle_\mathrm{time} = \lim_{T\to \infty} \frac{1}{T}\int_0^T A(t)\,dt \tag{2} $$ The most important, physically relevant, statement of the ergodic theorem is that (1) and (2) are equal, i.e. the ensemble average and time average of phase space functions are the same. This leads to an important interpretation of the time evolution of an ergodic system: all regions of the accessible part of phase space (i.e. the part consistent with the system's energy) are visited by the system for almost every initial condition at $t=0$, and the system spends equal times in regions of equal phase-space volume. This also intuitively explains why the averages (1) and (2) ought to be equal.
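To make the statement concrete, here is a minimal numerical sketch (a toy example of my own choosing, not part of the argument above): an irrational rotation of the circle is ergodic with respect to the uniform measure, so the discrete-time analogue of (2) along a single orbit should reproduce the ensemble average (1).

```python
import numpy as np

# Toy illustration of the ergodic theorem: rotation of the circle by an
# irrational angle alpha is ergodic with respect to the uniform (Lebesgue)
# measure, so the time average of a nice observable along almost every
# orbit equals its phase-space ("ensemble") average.
alpha = (np.sqrt(5) - 1) / 2             # irrational rotation number
A = lambda x: np.cos(2 * np.pi * x)**2   # observable A on the circle [0, 1)

# Time average along one orbit, the discrete analogue of eq. (2)
x0, n_steps = 0.1234, 200_000            # arbitrary initial condition
orbit = (x0 + alpha * np.arange(n_steps)) % 1.0
time_avg = A(orbit).mean()

# Ensemble average over the invariant (uniform) measure, eq. (1)
ensemble_avg = np.mean(A(np.linspace(0.0, 1.0, 100_001)))

print(time_avg, ensemble_avg)            # both approach 0.5
```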

From a mathematical point of view, we know that the time average (2) converges for almost all trajectories taken by the system, since we started with the equilibrium assumption, which allowed us to treat the time evolution as a measure-preserving transformation. The question is: how is the ergodic theorem generalized to also account for non-equilibrium systems, i.e. systems with dissipative dynamics and with sources/sinks of particles?

For one thing, we no longer have a volume-preserving transformation: Liouville's theorem is violated and the flow in phase space is no longer incompressible. Thus establishing the convergence of (2), let alone its equality with (1), becomes non-trivial. Admittedly, the mathematical challenges aside, I am more interested in the physical aspect of the generalization of ergodic theory. Does this concept even extend to non-equilibrium? If yes, how is it interpreted?

  • "we have the Liouville's theorem which says that the volume in phase space is conserved when the system evolves in time" There is no "the volume". The theorem states that probability distribution is conserved along any trajectory of the system in the phase space that is solution to Hamilton's equations of motion. – Ján Lalinský Dec 03 '15 at 18:58
  • Liouville's theorem is valid irrespective of the assumption of equilibrium. The Hamiltonian evolution is sufficient. – Ján Lalinský Dec 03 '15 at 19:04
  • @JánLalinský : I think that he's interested in "open" Hamiltonian systems, for which Liouville's theorem indeed does not hold. – Yvan Velenik Dec 04 '15 at 14:39
  • I am not at all an expert in these topics, but you might have a look at the book Mathematical Theory of Nonequilibrium Steady States, Lecture Notes in Mathematics, Volume 1833, 2004, and references therein. If you want something shorter, maybe look at this review by Ruelle. – Yvan Velenik Dec 04 '15 at 14:44
  • @YvanVelenik Thanks for the references, I will look into them asap. – user929304 Dec 04 '15 at 14:59
  • I wrote an answer at: http://physics.stackexchange.com/a/177972/59023. The applicable part is in relation to when $df/dt = 0$ in my answer. – honeste_vivere Dec 07 '15 at 14:03
  • @honeste_vivere Thanks for the link, very interesting read. Unfortunately in that discussion you don't touch on the ergodicity aspect which is of interest here, would you be interested to take a stab at this question? – user929304 Dec 07 '15 at 14:26
  • @user929304 - If I had more time, yes. In lieu of that, I would recommend a great review paper by Oliver Penrose from 1979 entitled "Foundations of statistical mechanics" in Rep. Prog. Phys. The first section (18 pages long) focuses almost entirely on the answer to your question. It's a good read and he provides physically intuitive explanations, not just math, which is very helpful. – honeste_vivere Dec 07 '15 at 14:33
  • @user929304 - After reading your question and many of the comments posted here in more detail, I strongly suggest you take a look at Penrose's 1979 review paper. He spends an entire section on non-equilibrium systems and he explains, quite clearly, the difference between the ensemble average and the time-average (they are only equal under rather limited circumstances and should not be confused). – honeste_vivere Dec 09 '15 at 00:39

3 Answers

9

Non-equilibrium systems are most often considered in the approximation where local equilibrium is valid, yielding a hydrodynamic or elasticity description. Local equilibrium means that equilibrium is assumed to hold on a scale large compared to the microscopic scale but small compared with the scale where observations are made. In this case, one considers a partition of the macroscopic system into cells of this intermediate scale and assumes that each of these cells is in equilibrium, but with possibly different values of the thermodynamic variables.

From a macroscopic point of view, these cells are still infinitesimally small - in the sense that a continuum limit can be taken that disregards the discrete nature of the cells, without introducing too much error. Therefore the thermodynamic variables that vary from cell to cell become fields, tractable with the techniques of continuum mechanics.

On the other hand, from a microscopic point of view, these cells are already infinitely large - in the sense that the ideal thermodynamic limit, which strictly speaking requires an infinite volume, already holds to a sufficient approximation. (The errors in bulk scale with $N^{-1/2}$ for $N$ particles, which is small already for macroscopically very tiny cells.) Thus one can apply all arguments from statistical mechanics to the cells.
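As a rough numerical illustration of the $N^{-1/2}$ scaling (the exponential, Boltzmann-like, single-particle energy distribution below is only a convenient stand-in, not part of the argument above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Relative fluctuation of the mean energy of a cell containing N particles,
# with single-particle energies drawn from an exponential (Boltzmann-like)
# distribution with unit mean. The relative fluctuation shrinks like N**-0.5.
for N in [10**2, 10**3, 10**4]:
    samples = rng.exponential(scale=1.0, size=(500, N))  # 500 cells of N particles each
    cell_means = samples.mean(axis=1)
    rel_fluct = cell_means.std() / cell_means.mean()
    print(N, rel_fluct, N**-0.5)       # rel_fluct tracks 1/sqrt(N)
```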

To the extent that one believes that an ergodic argument applies to a cell, it will justify (subjectively) the statistical mechanics approximation. However, the ergodic argument is theoretically supported in only a few situations, and should be regarded more as a pedagogical aid for one's intuition than as a valid tool for deriving results.

  • I gather the O.P. is not actually asking about non-equilibrium closed systems, but about open systems. The question used incorrect terminology. An open system is (almost never) ergodic and Liouville's theorem does not apply either. – joseph f. johnson Dec 08 '15 at 17:04
  • @josephf.johnson: If an open or closed system is in local equilibrium, ergodic arguments can (with caution) be applied to the individual cells. This is a way - and the only way - his question can be made sense of. He asked for ''[...] ergodic theory. Does this concept even extend to non-equilibrium? If yes, how is it interpreted?'', and my answer gives the only feasible interpretation. – Arnold Neumaier Dec 08 '15 at 17:16
  • @user929304: (i) For local equilibrium, one bins space-time (a short time average is also needed to account for contributions of very high frequencies). One can also bin in (macroscopic) phase space, but then gets Boltzmann-like kinetic equations rather than hydrodynamic equations, as all the fields then depend on position and momentum. Each cell has its own set of thermodynamic variables defining its macrostate. (ii) roughly, yes. – Arnold Neumaier Dec 09 '15 at 08:51
  • @user929304: In principle, the cell size is a free parameter on which the coarse-grained model depends. But for a macroscopic system, the result is nearly independent of it, if it is far away from both microscopic and macroscopic scales. In practice, its choice is therefore not critical, taking the geometric mean of microscopic and macroscopic scales works well. In phase space, things are more delicate as there is no canonical metric on it. – Arnold Neumaier Dec 09 '15 at 11:14
  • ''did you mean that a turbulent flow would stay turbulent indefinitely?'' It depends on the boundary condition. Running water can well be turbulent indefinitely. A closed and isolated turbulent system with enough friction at the container will ultimately settle in an equilibrium state. Dissipation is represented by the parabolic terms in the Navier-Stokes equations. The corresponding conservative system would satisfy instead the Euler equations. – Arnold Neumaier Dec 09 '15 at 11:17
  • Thanks a lot for your patience with me. If there are references you'd recommend for further reading (on these binning discussions of system and its phase space and ergodicity) I would be most interested of course. By the way you may also be interested in this discussion on mixing and ergodicity that arose recently in this post. – user929304 Dec 09 '15 at 11:34
  • @user929304: I have no useful comment on that. – Arnold Neumaier Nov 09 '16 at 18:01
5

There are quite a few conceptual confusions in this question.

A system is either closed or open. A system is not "equilibrium" or "non-equilibrium". Also, a system is either conservative or dissipative. The ergodic theorem applies neither to open systems nor to dissipative systems, since these typically tend towards a fixed point or something like that.

A state of a system can be an equilibrium state, or not, depending on whether it is invariant under the passing of time. There are two concepts of "state" for a system: the one relevant for statistical mechanics is a macrostate, which means not a point in phase space, but a probability distribution on phase space. Usually the energy is fixed. If the Liouville measure were a probability distribution, which it is not, it would be an equilibrium state, since it is invariant under the passing of time. If a fixed-energy surface has finite volume, which it usually does, then restricting the Liouville measure to that surface yields an equilibrium state. This can be done for any closed, conservative Hamiltonian system. This has nothing to do with ergodicity.

The ergodic theorem does not apply to every dynamical system, yet the above remarks do. A dynamical system could have equilibrium states whether or not the system is ergodic.

Very few dynamical systems are known to be ergodic. Even if a system is ergodic, it is a fallacy to conclude that the paths nearly always enter into every region. That is the concept of "mixing", which is even harder to prove, and rarer. Ergodic just means that the time averages are almost always equal to the phase averages, nothing more and nothing less. And this has to be true for non-equilibrium states too, so it has nothing to do with equilibrium.

The case of open systems cannot make use of Liouville's theorem, since it is false for open or dissipative systems. And so is the ergodic theorem.

Interestingly, if a system is composed of a very large number of similar components which interact somewhat weakly, one can prove that some of the important time averages are approximately equal to their phase averages even though the ergodic theorem is not applicable to that system. (Khinchin, The Mathematical Foundations of Statistical Mechanics.)

A good reference, a little old but easy to understand, for non-equilibrium statistical mechanics, is the book by de Groot and Mazur, recently reprinted by Dover. It studies fluctuations near equilibrium, which can be related to the amount of dissipation present in the system.

  • Thanks for your answer, I really appreciate that you took the time to also clarify some of the poorly expressed (or potentially confused) parts, very helpful. I think I grasped the core of your answer, namely that if out of equilibrium, the system tends to evolve towards some fixed point, so ergodicity becomes ill-defined there. But unless I've misunderstood your answer, which is very likely, there's a potential contradiction: you say that "ergodic just means time... And this has to be true for non-equilibrium states too..." But didn't we just say it is ill-defined out of equilibrium? – user929304 Dec 06 '15 at 00:56
  • Let us make precise your meaning: when you said "non-equilibrium system" did you mean an open system? Or a dissipative system? (This was suggested by one of the people who commented on your question). – joseph f. johnson Dec 08 '15 at 04:07
  • To be precise I meant "open"(particle sinks, absorbing boundaries, heat exchange and so on) in general. But now that you put it like this I'm confused as to why dissipative also doesn't imply openness! – user929304 Dec 08 '15 at 09:01
  • It does in a way. But dissipative includes the idea of describing a system in a reduced way, ignoring internal degrees of freedom: e.g., energy gets dissipated into internal heat, not necessarily to an outside system across a boundary. Still, with an incomplete description, it's not a closed system in the sense of classical mechanics. – joseph f. johnson Dec 08 '15 at 13:51
  • A dissipative system doesn't need to tend to a fixed point. An important example is the case of turbulence in fluids satisfying the Navier-Stokes equations. – Arnold Neumaier Dec 08 '15 at 17:19
4

While I know this question has already been answered, I felt obligated to get around to writing a formal answer. I will not go into detail about irreversible/dissipative systems, as Arnold Neumaier's answer already addressed that issue. Rather, my answer will focus on the mathematics behind ergodicity and mixing.

Note: Most of what follows is taken from Penrose [1979].

Background

First let us define $\boldsymbol{\Gamma}$ to be the whole of phase space, described by the position and momentum coordinates $\mathbf{q}$ and $\mathbf{p}$, respectively. We then define the phase space probability density $\rho\left( \mathbf{q}, \mathbf{p} \right)$ as that which satisfies: $$ \int_{\boldsymbol{\Gamma}} \ d^{n}q \ d^{n}p \ \rho\left( \mathbf{q}, \mathbf{p} \right) = 1 \tag{1} $$ where $n$ is the number of degrees of freedom.

Now if we use a generic variable, $G\left( \mathbf{q}, \mathbf{p} \right)$, to describe any dynamical variable (e.g., energy), then the ensemble average of $G$ is denoted by: $$ \langle G \rangle = \int_{\boldsymbol{\Gamma}} \ d^{n}q \ d^{n}p \ G\left( \mathbf{q}, \mathbf{p} \right) \ \rho\left( \mathbf{q}, \mathbf{p} \right) \tag{2} $$

Three Principles

However, there is an issue to be aware of at this point [i.e., page 1940 in Penrose, 1979]:

The fundamental problem of statistical mechanics is what ensemble – that is, what phase-space probability density $\rho$ – corresponds to a given physical situation... It is, however, possible to state three principles that the phase-space density should satisfy; and it turns out, rather surprisingly, that these principles when combined with a study of the dynamics of our mechanical models give enough information to answer the fundamental problem satisfactorily in some important cases.

1st Principle
The first of the three principles is just Liouville's theorem, i.e., $d \rho/dt = 0$. It is another way of saying that the Hamiltonian of the system does not explicitly depend upon time, which is how we define the system to be isolated.
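As a numerical aside (my own illustration, not from Penrose), the content of Liouville's theorem can be checked directly for a toy system: the Jacobian determinant of the Hamiltonian time-$t$ flow map equals 1, i.e., phase-space volume is conserved. The pendulum Hamiltonian and the leapfrog (symplectic) integrator below are my own choices for the sketch.

```python
import numpy as np

# Pendulum Hamiltonian H = p**2/2 - cos(q). Leapfrog is symplectic, so the
# numerical time-t map preserves phase-space area; the Jacobian determinant
# of the map (q0, p0) -> (q(t), p(t)) should come out equal to 1.
def flow(q, p, t=5.0, dt=1e-3):
    for _ in range(int(t / dt)):
        p -= 0.5 * dt * np.sin(q)   # half kick
        q += dt * p                 # drift
        p -= 0.5 * dt * np.sin(q)   # half kick
    return q, p

def jacobian_det(q0, p0, eps=1e-6):
    # Finite-difference Jacobian of the time-t flow map at (q0, p0)
    qp_plus, qp_minus = flow(q0 + eps, p0), flow(q0 - eps, p0)
    pp_plus, pp_minus = flow(q0, p0 + eps), flow(q0, p0 - eps)
    dq_dq0 = (qp_plus[0] - qp_minus[0]) / (2 * eps)
    dp_dq0 = (qp_plus[1] - qp_minus[1]) / (2 * eps)
    dq_dp0 = (pp_plus[0] - pp_minus[0]) / (2 * eps)
    dp_dp0 = (pp_plus[1] - pp_minus[1]) / (2 * eps)
    return dq_dq0 * dp_dp0 - dq_dp0 * dp_dq0

print(jacobian_det(0.3, 0.7))   # ~1.0: phase-space volume is conserved
```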

2nd Principle
The second principle is stated as [i.e., page 1941 in Penrose, 1979]:

The second of the three principles is more general, since it does not require the system to be isolated... The principle, which I shall call the principle of causality, is simply that the phase-space density at any time is completely determined by what happened to the system before that time, and is unaffected by what will happen to the system in the future.

3rd Principle
Finally, the third principle is stated as [i.e., page 1941 in Penrose, 1979]:

The last of the three principles is that the probabilities in the ensemble really can be described by a phase-space density $\rho$ with $\rho$ a well-behaved (say, piecewise continuous) function, rather than some more general measure.

The last principle is often overlooked, but it is essential. If we require it, we must exclude systems such as a gas of hard spheres in a cubical box in which all the spheres bounce between the same two faces for eternity (i.e., the spheres move along only one dimension). For such a system, a time-average will not be the same as an ensemble average (see explanation below). Let us define any system like this as an exceptional system, for brevity.

As an aside, the problems with time-averages in classical electricity and magnetism are well known and it is now known that spatial ensemble averages are the correct operations for converting between the micro- and macroscopic forms of Maxwell's equations [e.g., see pages 248-258 in Jackson, 1999 for a detailed discussion].

Ergodicity and Mixing

Ergodicity

If $G$ is a dynamical variable, then we can define its ensemble average at time $t$ as: $$ \langle G \rangle_{t} = \int_{\boldsymbol{\Gamma}} \ d^{n}q \ d^{n}p \ G\left( \mathbf{q}, \mathbf{p} \right) \ \rho_{t}\left( \mathbf{q}, \mathbf{p} \right) \tag{3} $$ where $\rho_{t}$ is obtained by evolving the initial density under the assumption that Liouville's theorem holds (i.e., $d \rho/dt = 0$).

Note that when the long-time limit of $\langle G \rangle_{t}$ exists, we can define it as an equilibrium value of $G$. However, this limit does not necessarily exist, as in the case of any oscillating system without damping (e.g., the simple harmonic oscillator): $\lim_{t \rightarrow \infty} \langle G \rangle_{t}$ will not approach a single value, it will oscillate indefinitely.

The time-average, however, always exists, and one can avoid calculating a nonexistent value by redefining the equilibrium value of $G$ as: $$ \langle G \rangle_{eq} \equiv \lim_{T \rightarrow \infty} \ \frac{1}{T} \int_{0}^{T} \ dt \ \langle G \rangle_{t} \tag{4} $$ which is equal to $\lim_{t \rightarrow \infty} \langle G \rangle_{t}$ whenever that limit exists.
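A minimal sketch of the last two paragraphs (my own toy ensemble, not from Penrose: undamped unit-frequency oscillators all released from rest at amplitude 1, with $G = x^{2}$): the instantaneous ensemble average $\langle G \rangle_{t}$ never settles, while the running time average in Equation 4 converges.

```python
import numpy as np

# Ensemble of undamped harmonic oscillators (omega = 1), all released from
# rest at amplitude 1, observable G = x**2. The instantaneous ensemble
# average <G>_t = cos(t)**2 never settles, but the running time average
# (eq. 4) converges to the equilibrium value 1/2.
t = np.linspace(0.0, 200.0, 20_001)
G_t = np.cos(t)**2                                  # <G>_t for this ensemble

dt = t[1] - t[0]
running_time_avg = np.cumsum(G_t) * dt / (t + dt)   # (1/T) * integral_0^T <G>_t dt

print(G_t[-1], running_time_avg[-1])                # still oscillating vs. ~0.5
```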

If we define the time-average of $\rho$ as $\bar{\rho}$, we can write this as: $$ \bar{\rho}\left( \mathbf{q}, \mathbf{p} \right) = \lim_{T \rightarrow \infty} \ \frac{1}{T} \int_{0}^{T} \ dt \ \rho_{t}\left( \mathbf{q}, \mathbf{p} \right) \tag{5} $$ which allows us to redefine $\langle G \rangle_{eq}$ as: $$ \langle G \rangle_{eq} = \int_{\boldsymbol{\Gamma}} \ d^{n}q \ d^{n}p \ G\left( \mathbf{q}, \mathbf{p} \right) \ \bar{\rho}\left( \mathbf{q}, \mathbf{p} \right) \tag{6} $$

It is important to note some properties of ergodic theory here [e.g., page 1949 in Penrose, 1979]:

It follows from the ergodic theorem of Birkhoff (1931) that $\bar{\rho}$ is well-defined at almost all phase points... consequently the integral in (1.16) is well-defined... Birkhoff's theorem also shows that $\bar{\rho}$ is constant on the trajectories in phase space...

where the integral (1.16) in the quote refers to the version of $\langle G \rangle_{eq}$ in Equation 6. The last statement, namely that $\bar{\rho}$ is an invariant, is crucial here. Were it not an invariant, it "...would require us to solve the equations of motion for $10^{23}$-odd particles..." [e.g., page 1945 of Penrose, 1979].

Important Side Note: Recall again that Equation 6 given above for $\langle G \rangle_{eq}$ does not always hold, as in the trivial case of an undamped simple harmonic oscillator because the integral on the right-hand side oscillates forever.

Now suppose the time-averaged density can be written as a function of the Hamiltonian alone, $\bar{\rho}\left( \mathbf{q}, \mathbf{p} \right) = \phi\left( H\left( \mathbf{q}, \mathbf{p} \right) \right)$, where $\phi$ is a function of only one variable. If this holds for every $\bar{\rho}$ the system can produce, then the system is said to be ergodic. Another way of stating this is that, in an ergodic system, almost every trajectory covers all parts of its energy manifold if given enough time.

Mixing

Let us define the microcanonical average over energy of $G$ as: $$ \langle G \rangle_{E} = \frac{ \int_{\boldsymbol{\Gamma}} \ d^{n}q \ d^{n}p \ G\left( \mathbf{q}, \mathbf{p} \right) \ \delta\left( H\left( \mathbf{q}, \mathbf{p} \right) - E \right) }{ \int_{\boldsymbol{\Gamma}} \ d^{n}q \ d^{n}p \ \delta\left( H\left( \mathbf{q}, \mathbf{p} \right) - E \right) } \tag{7} $$ where $\delta()$ is the Dirac delta function, $H\left( \mathbf{q}, \mathbf{p} \right)$ is the Hamiltonian, and $E$ labels an energy manifold (i.e., the set of phase points with energy $E$).

Thus, we can redefine $\langle G \rangle_{eq}$ as: $$ \begin{align} \langle G \rangle_{eq} & = \int_{\boldsymbol{\Gamma}} \ d^{n}q \ d^{n}p \ G\left( \mathbf{q}, \mathbf{p} \right) \ \bar{\rho}\left( \mathbf{q}, \mathbf{p} \right) \tag{8a} \\ & = \int_{\boldsymbol{\Gamma}} \ d^{n}q \ d^{n}p \ G\left( \mathbf{q}, \mathbf{p} \right) \ \phi\left( H \right) \tag{8b} \\ & = \int_{\boldsymbol{\Gamma}} \ d^{n}q \ d^{n}p \ G\left( \mathbf{q}, \mathbf{p} \right) \ \left[ \int_{-\infty}^{\infty} \ dE \ \phi\left( E \right) \ \delta\left( E - H\left( \mathbf{q}, \mathbf{p} \right) \right) \right] \tag{8c} \\ & = \int_{-\infty}^{\infty} \ dE \ P\left( E \right) \ \langle G \rangle_{E} \tag{8d} \end{align} $$ where $P\left( E \right)$ is given by: $$ P\left( E \right) = \int_{\boldsymbol{\Gamma}} \ d^{n}q \ d^{n}p \ \bar{\rho}\left( \mathbf{q}, \mathbf{p} \right) \ \delta\left( E - H\left( \mathbf{q}, \mathbf{p} \right) \right) \tag{9} $$ Note that $P\left( E \right)$ is just the probability density of $H$ in the time-averaged ensemble.

Now to define mixing we consider whether the following holds: $$ \lim_{t \rightarrow \infty} \ \langle \rho_{0}\left( \mathbf{q}, \mathbf{p} \right) \ G_{t}\left( \mathbf{q}, \mathbf{p} \right) \rangle_{E} = \langle \rho_{0}\left( \mathbf{q}, \mathbf{p} \right) \rangle_{E} \ \langle G\left( \mathbf{q}, \mathbf{p} \right) \rangle_{E} \tag{10} $$ where $\rho_{0}$ is just the initial value of $\rho_{t}$ and $G_{t}\left( \mathbf{q}, \mathbf{p} \right) \equiv G\left( \mathbf{q}(t), \mathbf{p}(t) \right)$ is $G$ evaluated along the trajectory that starts at $\left( \mathbf{q}, \mathbf{p} \right)$.

If the system, for every $E$ and functions $\rho_{0}$ and $G$, satisfies the above relationship, the system is said to be mixing [i.e., pages 1948-1949 in Penrose, 1979]:

Mixing can easily be shown to imply ergodicity (e.g. Arnold and Avez (1968, p20); the equivalence of our definition of mixing and theirs follows from their theorem 9.8), but is not implied by it; for example, as mentioned earlier, the harmonic oscillator is ergodic but not mixing... The precise definition of mixing is... 'whether an ensemble of isolated systems has any tendency in the course of time toward a state of statistical equilibrium'...
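To see the distinction numerically, consider two toy maps of my own choosing (not from Penrose): the doubling map $x \mapsto 2x \bmod 1$ is mixing with respect to the uniform measure, so correlations of the type appearing in Equation 10 decay, whereas an irrational rotation of the circle is ergodic but not mixing and its correlations oscillate without decaying.

```python
import numpy as np

rng = np.random.default_rng(1)
x0 = rng.random(200_000)                 # sample from the invariant (uniform) measure
f = lambda x: np.cos(2 * np.pi * x)      # zero-mean observable, so <f><f> = 0

def correlation(step, n):
    """Monte Carlo estimate of <f(x) f(T^n x)> - <f(x)><f(x)> under the uniform measure."""
    x = x0.copy()
    for _ in range(n):
        x = step(x)
    return np.mean(f(x0) * f(x)) - np.mean(f(x0)) * np.mean(f(x))

doubling = lambda x: (2.0 * x) % 1.0                    # mixing
rotation = lambda x: (x + (np.sqrt(5) - 1) / 2) % 1.0   # ergodic but not mixing

for n in range(6):
    print(n, correlation(doubling, n), correlation(rotation, n))
# Doubling-map correlations drop to ~0 for n >= 1; rotation correlations
# keep oscillating without decay (they equal cos(2*pi*n*alpha)/2).
```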

Note that mixing is not sufficient to imply a system will approach equilibrium [i.e., page 1949 in Penrose, 1979]:

Mixing tells us that the average $\langle G \rangle_{t}$ of a dynamical variable $G$, taken over the appropriate ensemble, approaches an equilibrium value $\langle G \rangle_{eq}$; it does not tell us anything about the time variation of $G$ in any of the individual systems comprised in that ensemble. To make useful predictions about the behaviour of G in any individual system we must show that the individual values of G are likely to be close to $\langle G \rangle$, i.e. that the fluctuations of $G$ are small, and to do this we have to use the large size of the system as well as its mixing property...


References

  • Evans, D.J. "On the entropy of nonequilibrium states," J. Statistical Phys. 57, pp. 745-758, doi:10.1007/BF01022830, 1989.
  • Evans, D.J., and G. Morriss Statistical Mechanics of Nonequilibrium Liquids, 1st edition, Academic Press, London, 1990.
  • Evans, D.J., E.G.D. Cohen, and G.P. Morriss "Viscosity of a simple fluid from its maximal Lyapunov exponents," Phys. Rev. A 42, pp. 5990–5997, doi:10.1103/PhysRevA.42.5990, 1990.
  • Evans, D.J., and D.J. Searles "Equilibrium microstates which generate second law violating steady states," Phys. Rev. E 50, pp. 1645–1648, doi:10.1103/PhysRevE.50.1645, 1994.
  • Gressman, P.T., and R.M. Strain "Global classical solutions of the Boltzmann equation with long-range interactions," Proc. Nat. Acad. Sci. USA 107, pp. 5744–5749, doi:10.1073/pnas.1001185107, 2010.
  • Hoover, W. (Ed.) Molecular Dynamics, Lecture Notes in Physics, Berlin Springer Verlag, Vol. 258, 1986.
  • J.D. Jackson, Classical Electrodynamics, Third Edition, John Wiley & Sons, Inc., New York, NY, 1999.
  • O. Penrose, "Foundations of statistical mechanics," Rep. Prog. Phys. 42, pp. 1937-2006, 1979.