
The following statement, and similar extensions of it to various other statistical ensembles, forms a basis for many computations in statistical physics:

If we construct a large number of identical copies of a system, and if the state of each copy is prepared by bringing it into thermal contact with a heat bath at temperature $T$ and waiting sufficiently long (i.e., for "thermalization"), then measuring the energy of each copy will yield a distribution of energies given by the Boltzmann distribution.
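
For concreteness, by "Boltzmann" I mean that if the accessible energies of the system are $E_i$ with degeneracies $g(E_i)$, then the probability of finding a given copy at energy $E_i$ is

$$P(E_i) = \frac{g(E_i)\, e^{-E_i/k_B T}}{Z}, \qquad Z = \sum_j g(E_j)\, e^{-E_j/k_B T},$$

with the obvious continuum analogue in terms of the density of states.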

Applying this statement to microscopic models of systems correctly predicts well-known empirical, thermodynamic facts such as equations of state. In addition, the general theoretical and empirical success of the statistical mechanical apparatus built on the statement above is itself pretty convincing evidence in its favor. However, I would personally find a more "elementary, direct" test most convincing.

Question. Has an experiment of the following structure or something morally equivalent ever been performed in the laboratory?

  1. Construct a reasonably large number of nearly identical systems.
  2. Bring each into contact with a heat bath at a certain temperature.
  3. Wait for a while.
  4. Measure the energy of each system.
  5. Construct a histogram of energy frequencies.
  6. Determine if the histogram is consistent with the Boltzmann distribution (a toy numerical sketch of this comparison is given below).
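
Purely as an illustration of what such a comparison might look like, and not as a substitute for the laboratory test I'm asking about, here is a minimal numerical sketch: each "system" is a single degree of freedom with equally spaced levels, and "contact with a heat bath" is modeled by Metropolis dynamics at temperature $T$. The level spacing, temperature, number of copies, and waiting time are all arbitrary choices made only for illustration.

```python
import numpy as np

# Toy version of steps 1-6 (a simulation, not an experiment): each "system" is a
# single degree of freedom with levels E_n = n * eps, and the "heat bath" is
# modeled by Metropolis dynamics at temperature T. All parameters are arbitrary.

rng = np.random.default_rng(0)

n_copies = 10_000   # step 1: number of nearly identical systems
n_levels = 20       # energy levels E_n = n * eps, n = 0 .. n_levels - 1
eps = 1.0           # level spacing
kT = 2.0            # bath temperature (k_B * T), in the same units as eps
n_steps = 2_000     # step 3: "wait for a while"

levels = np.zeros(n_copies, dtype=int)   # all copies start in the ground state

for _ in range(n_steps):                 # steps 2-3: thermalize every copy
    proposal = np.clip(levels + rng.choice([-1, 1], size=n_copies), 0, n_levels - 1)
    accept = rng.random(n_copies) < np.exp(-(proposal - levels) * eps / kT)
    levels = np.where(accept, proposal, levels)

# steps 4-5: "measure" the energy of each copy and histogram the results
measured = np.bincount(levels, minlength=n_levels) / n_copies

# step 6: compare with the Boltzmann distribution at the bath temperature
boltzmann = np.exp(-np.arange(n_levels) * eps / kT)
boltzmann /= boltzmann.sum()

for n in range(n_levels):
    print(f"E = {n * eps:5.1f}   measured {measured[n]:.4f}   Boltzmann {boltzmann[n]:.4f}")
```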

References appreciated.

joshphysics
  • Would you count confirmations of Planck's law or the Maxwell-Boltzmann distribution? – knzhou Jun 22 '17 at 17:04
  • @knzhou It depends on the form the experiments took. Do you have references describing experiments you have in mind that one could inspect? – joshphysics Jun 22 '17 at 17:06
  • Related: Slow thermal equilibrium. Does this count as an answer here? =P – Emilio Pisanty Jun 22 '17 at 17:08
  • When I took stat mech in college my professor (a theorist) offered us a reference (from the 1970s) for the measurement of the temperature fluctuations in a small system which is—perhaps—almost what you are asking for. He called the result "a tour-de-force in measurement". Alas I can't recall what the paper might be, but perhaps that offers a place to start looking. – dmckee --- ex-moderator kitten Jun 22 '17 at 17:09
  • If you believe the ergodic hypothesis, then the two processes are equivalent in the limit $N \to \infty$: 1) Create $N$ copies of your system and measure the energy of each copy; 2) Perform $N$ measurements on the same system. – valerio Jun 22 '17 at 17:11
  • @dmckee I would love to see that paper even though it's not precisely what I'm looking for. Please let me know if you remember any other details. I'll try to search for it in the meantime. – joshphysics Jun 22 '17 at 17:14
  • @valerio92 I'm hesitant to invoke ergodic hypotheses here because (1) I want to avoid invoking high-powered theoretical statements that link time sampling and ensemble sampling, since I'm looking for a direct measurement verifying the validity of the ensemble picture, and (2) I'm not sufficiently confident in attempts to invoke ergodic hypotheses in anything but the simplest of classical systems for which mathematical results are known, and those cases don't seem directly applicable to "real" systems. – joshphysics Jun 22 '17 at 17:17
  • I agree with josh - it's probably best to keep the ergodic hypothesis out of this one. – Emilio Pisanty Jun 22 '17 at 17:46
  • Some thermal fluctuation references (but not the one I was talking about earlier): https://journals.aps.org/pr/abstract/10.1103/PhysRev.120.1551 https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.104.040602 (AKA https://arxiv.org/abs/0908.3227). I don't think that you need the full-blown ergodic hypothesis to relate thermal fluctuations to the ensemble picture; you just need to believe that thermal equilibrium is a state that has forgotten the system's history (which is also a pretty big ask as a postulate, to be sure). – dmckee --- ex-moderator kitten Jun 22 '17 at 18:00
  • Whenever we speak of probability in the frequentist sense, there appears the problem of ensembles. I, who do experiments on turbulent flows, face exactly the same problem in their statistical description. But experiments are usually (very) expensive, not to mention the time and effort involved, so I don't think anyone would spend their money and lifespan doing an ensemble of experiments of one particular kind. It is unlikely that you will find what you are seeking. That's why I love E.T. Jaynes' way of looking at probability, as a function of information, doing away with ensembles. – Deep Jun 23 '17 at 04:39
  • @Deep I agree that there is something quite appealing about the information theoretic viewpoint in this case and in particular about the interpretation of assignments of probabilities as indications of one's degree of ignorance inherent in the state preparation procedure. However, one problem I have with that picture is that it's not clear to me how "fluctuations" fit into it. Do you happen to know of references that address that point from the Jaynesian perspective? – joshphysics Jun 23 '17 at 04:57
  • In Jaynes' view, probability is a function of the information that we possess. Physical "fluctuations" are not part of this viewpoint. For example, if there is a body of unknown mass, then based on our (limited) information we can draw up a probability distribution over the possible values of its mass. But the mass of the body is a constant (i.e. there are no "fluctuations"), albeit unknown. The dispersion of the probability distribution represents our own uncertainty rather than anything physical. Jaynes' book and papers should be relevant reading. – Deep Jun 24 '17 at 06:09

1 Answer


It might not quite fit the bill, but a recent experiment in ultracold gases out of the Greiner group does something like this. I think I already wrote about this paper for some other similar question, but I can't find it.

To summarize: the authors take an isolated quantum "many-body" system of six particles, initialize it in a definite non-equilibrium state, and then allow it to thermalize, with each individual particle seeing the other five as a bath. Looking at the occupation statistics at each site, they see these evolve from the initial condition of one particle per site to the canonical-ensemble distribution, with a temperature determined by the initial energy density. Here's the relevant plot:

[Figure from the paper: single-site occupation statistics evolving from the initial state toward the thermal prediction]

The red points in the plot are the canonical ensemble prediction.

As is evident, despite the small system size it does thermalize to a good approximation, at least when looking at single sites. They repeat this many times, with a new copy of this system each time, to get statistics.

Although not directly relevant to your question, it is worth noting that the authors are additionally able to verify directly that the many-body quantum state remains pure even as the subsystems become thermal mixed states, so they also test the picture of quantum thermalization as arising from the development of entanglement.

So the differences between this and your desired experiment are that it uses rather small systems, there is no heat bath held at a fixed temperature, and they cannot measure the energy distribution directly. However, it does show the evolution of an observable towards a canonical ensemble distribution, and as a bonus shows that this happens everywhere in the system even though it is isolated.
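
As an aside on how "consistent with the canonical prediction" could be quantified, for histograms like these or for the energy histogram in step 6 of your question, a standard goodness-of-fit test is enough. Below is a minimal sketch; the counts and model probabilities are invented purely for illustration and are not taken from the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical comparison of an observed occupation-number histogram with a
# model (e.g. canonical-ensemble) prediction. All numbers below are invented.
observed_counts = np.array([180, 310, 290, 150, 55, 15])      # occupations n = 0..5
model_probs = np.array([0.17, 0.31, 0.28, 0.15, 0.07, 0.02])  # predicted probabilities

# Pearson chi-squared test against expected counts with the same total
expected_counts = model_probs / model_probs.sum() * observed_counts.sum()
chi2, p_value = stats.chisquare(observed_counts, f_exp=expected_counts)
print(f"chi^2 = {chi2:.2f}, p-value = {p_value:.3f}")
```

A p-value that is not small would indicate that the observed histogram is statistically consistent with the prediction at the given sample size.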

Rococo
  • +1 thanks. This is indeed interesting, related, and empirical. I'll definitely be reading that paper. In some sense my question is ultimately about the extent to which thermalization has been observed in the lab, seeing as how one might define a system achieving a canonical distribution as the process of thermalization. – joshphysics Jun 23 '17 at 05:00