
There is something I really don't get about entropy.

Let's consider a classical system (no quantum mechanics here).

We can compute the entropy of a system via the formula $$S=-\sum_l P_l \ln(P_l),$$ where $P_l$ is the probability of finding the studied system in the configuration $l$.

If we work at equilibrium with a thermostat, we can have, for example, $$P_l=e^{-\beta E_l}/\mathcal{Z}.$$
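(For concreteness, substituting this distribution into the entropy formula above already connects these probabilities to standard thermodynamic quantities:
$$S=-\sum_l P_l\ln P_l=\sum_l P_l\left(\beta E_l+\ln\mathcal{Z}\right)=\beta\langle E\rangle+\ln\mathcal{Z},$$
which rearranges to the familiar free-energy relation $-k_B T\ln\mathcal{Z}=\langle E\rangle-k_B T S$, with the dimensionless $S$ used here.)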

But the thing is that those probabilities of finding the system in a given state are purely subjective. It is because the equations of motion are "too hard" to solve that we use a probabilistic approach.

But from this subjective view we are able to make a link with objective quantities: indeed, if I assume a reversible transformation I would have $Q=k_B T\Delta S$, the heat received by the system if its entropy changes by $\Delta S$.
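(Strictly speaking, for a general reversible path the relation is differential; the isothermal case gives the formula above:
$$\mathrm{d}S=\frac{\delta Q_{\text{rev}}}{k_B T}\quad\Longrightarrow\quad Q=k_B T\,\Delta S\ \text{ at constant }T,$$
with $k_B$ appearing because $S$ is taken dimensionless here.)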

So in summary: how is it possible that a subjective notion such as entropy leads to objective conclusions such as heat transfer?

StarBucK
  • I asked a similar question here. – knzhou Apr 27 '18 at 22:11
  • One can calculate the entropy of a system also for a computer simulation, where everything is known. –  Apr 27 '18 at 22:27
  • I don't get how doing things probabilistically means that they are subjective. As long as you can express your system as a Hamiltonian, you can apply Liouville's Theorem to get the probability density objectively: https://physics.stackexchange.com/questions/76028/probability-density-in-hamiltonian-mechanics – probably_someone Apr 28 '18 at 02:55
  • @probably_someone You still have to make a choice for the initial probability distribution though. Liouville's Theorem just lets you evolve it in time. – Dominic Else Apr 28 '18 at 03:02
  • @DominicElse That can also be done objectively: just let it be a delta function at your initial conditions (or, if your initial conditions have some uncertainty, then smear accordingly). – probably_someone Apr 28 '18 at 03:04
  • @probably_someone Well the initial conditions always have uncertainty for macroscopic systems (we certainly don't know the positions and velocities of every particle). But how to assign a probability distribution to characterize that uncertainty is the potentially subjective part. – Dominic Else Apr 28 '18 at 03:10
  • @DominicElse In that case, is there any part of physics that isn't subjective? Characterizing uncertainty is always, by your definition, subjective, and every measurement we make has uncertainty. Theoretical physics relies on these subjective results, so it can't be objective either. So in what way is objective vs. subjective a meaningful distinction at that point? – probably_someone Apr 28 '18 at 03:13
  • @probably_someone My answer outlines in what sense the thermodynamic entropy (not the same as information entropy) is objective. That is the one that gives experimental predictions, not information entropy. – Dominic Else Apr 28 '18 at 03:14
  • Statistical mechanics does NOT rely on our own subjectivity to provide the relevant distribution, it relies on the ergodic hypothesis! – G. Bergeron Apr 28 '18 at 03:15
  • @DominicElse Ah, so something is "objective" if it gives experimental predictions. Then in that case, the objectively correct initial distribution to pick is the one that best predicts your measurements. – probably_someone Apr 28 '18 at 03:17
  • @DominicElse Also, information entropy can give experimental predictions. Verifying the Nyquist rate, which relies on information entropy for its proof, is a way that information entropy makes experimental predictions. – probably_someone Apr 28 '18 at 03:22
  • You may be interested in my answer to a related question: https://physics.stackexchange.com/questions/145795/what-is-the-entropy-of-a-pure-state/145851#145851 – Rococo May 01 '18 at 16:08

2 Answers


Information entropy is subjective, but thermodynamic entropy is not. This is important to emphasize because the two concepts are often confused (and indeed they are closely related, but not the same).

The thermodynamic entropy is defined to be the highest possible information entropy over all probability distributions that are consistent with the information that is accessible to us (which is to say, information about macroscopic quantities such as temperature, pressure...). Thus, it is a measure of the "missing information" associated with the degrees of freedom not accessible to us.
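Schematically, one standard way to write this definition (with the mean energy standing in for whatever macroscopic constraints actually apply):
$$S_{\text{thermo}}=\max_{\{P_l\}}\Big(-\sum_l P_l\ln P_l\Big)\quad\text{subject to}\quad\sum_l P_l=1,\qquad\sum_l P_l E_l=\langle E\rangle,\ \ldots$$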

Generally speaking, the statistical ensembles that we typically use (microcanonical, canonical, etc.) have the property that they maximize the information entropy subject to the constraints of the information accessible to us. So for those ensembles, and only those ensembles, the information entropy is equal to the thermodynamic entropy. However, if we wanted to, we could try to guess the approximate positions/velocities of all the particles, and thus assign a much more peaked probability distribution than the conventional one. In that case, the information entropy would not be equal to the thermodynamic entropy, but the latter would be unchanged.

A better example would be if we happened to know something about the initial state -- for example, that all the particles in a gas in a box were originally in one side of the box. Then, at least in principle, we could evolve the probability distribution in time to find the final distribution, which would not be the same as the canonical ensemble. Nevertheless, even if we could do this (it is very hard), we generally do not choose to make use of this information, so for making experimental predictions we can still treat it as "information that isn't accessible to us".
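As an illustrative numerical sketch (the four-level spectrum, temperature, and "peaked guess" below are made-up toy values, not taken from any particular system): any sharper distribution roughly consistent with the same mean energy has a lower information entropy than the canonical ensemble, while the thermodynamic entropy -- the maximum -- stays put.

```python
import numpy as np

def shannon_entropy(p):
    # Dimensionless Gibbs/Shannon entropy -sum_l p_l ln p_l,
    # skipping zero-probability states.
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Toy four-level spectrum and inverse temperature (illustrative values only)
E = np.array([0.0, 1.0, 2.0, 3.0])
beta = 1.0

# Canonical ensemble: maximizes entropy given only the mean energy
p_canonical = np.exp(-beta * E)
p_canonical /= p_canonical.sum()

# A much more peaked "informed guess", with roughly the same mean energy
p_peaked = np.array([0.80, 0.05, 0.02, 0.13])

print("mean energies:", E @ p_canonical, E @ p_peaked)     # ~0.51 vs 0.48
print("canonical entropy:", shannon_entropy(p_canonical))  # ~0.95
print("peaked entropy:   ", shannon_entropy(p_peaked))     # ~0.67, strictly smaller
```

Any distribution consistent with the macroscopic constraints can only have an information entropy less than or equal to the canonical value, which is what makes the thermodynamic (maximum) entropy well-defined independently of anyone's guess.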

Note: there is still some amount of freedom in the definition of thermodynamic entropy, associated with what we mean by "information accessible to us". It's probably not right to call this "subjectivity", but Jaynes [1] did come up with some fascinating thought experiments where the definition of entropy depends on whether or not we are able to access the internal degrees of freedom of some hypothetical kind of atom. If we have the ability to access these internal degrees of freedom, but choose not to, then there are several definitions of entropy we are allowed to use. It's interesting to think about how this is consistent with statements like

if I assume a reversible transformation I would have $Q=k_B T\Delta S$, the heat received by the system if its entropy changes by $\Delta S$.

The answer is that "reversible" is defined to mean a process in which the total entropy of the universe is conserved -- hence which processes we call reversible actually depends on the definition of the entropy.

[1] http://www.damtp.cam.ac.uk/user/tong/statphys/jaynes.pdf

For more reading on this viewpoint on the thermodynamic entropy, see https://aapt.scitation.org/doi/10.1119/1.1971557 ("thermodynamic entropy" is referred to as "experimental entropy" in that paper).

Dominic Else

Let me answer your question by recalling the following points, because the many aspects that your question amalgamates must be carefully distinguished.

(1) One should be cautious not to interpret the entropy of an arbitrary reversible thermodynamic system as information entropy, or vice versa, without a reasonable and well-argued justification. It is neither mathematically nor physically justified, as a general rule, to apply the Shannon information entropy $H$ or its gain or loss $dH$ to thermodynamic systems as a substitute for $dS$, or vice versa.

(2) One should be extremely careful not to extend the Boltzmann entropy, which has a very specific and well-bounded definition in terms of reversibility, linearity, boundaries, and distribution form, to every thermodynamic system and process, such as an irreversible, non-linear, or open one. The stochastic model must match the system.

(3) We must distinguish between entropy (or information) itself and the maximum entropy principle (maximum information principle), whose application again depends very much on the system under observation.

(4) Particularly when we elaborate on objectivity, we must define clearly what objectivity means in our context, to avoid confusing it with cognitive or other aspects of objectivity, which could for instance be inherent to the measurement apparatus.

As far as I understand, you are trying to define objectivity in the context of information gain, and particularly of maximum information gain. At the same time, however, you go back and forth between thermodynamic entropy and information gain.

In an "ideal" thermodynamic system coupled to a thermostat, there is no subjective influence, because noise and interfering effects from our measurement apparatus are excluded: the system is ideally closed (I assume your thermostat is not semi-closed) and first-order in its dynamics. With this in mind, we may interpret the inherent entropy change of the system as a kind of information gain or loss, depending on the viewpoint. The whole system tends to follow the maximum information principle under the given constraints, which means we cannot gain more information than the maximum information. The latter is determined stochastically by the stochastic properties of the ensemble's microstates. Hence we can conclude that the objectivity you are talking about corresponds to this maximum possible information gain and has nothing to do with the observer: the system simply gives us no more information than this maximum, as long as we are positioned as observers at the macroscopic level.

The maximum information principle is a well-established and straightforward formalism; you can find it on Wikipedia or in the standard literature. Its discrete form uses the Shannon information entropy, while the continuous form uses Jaynes's formulation.
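For reference, the discrete form is a short constrained-maximization exercise (standard textbook material, not specific to this answer): maximize $H=-\sum_l p_l\ln p_l$ subject to normalization and a fixed mean energy,
$$\mathcal{L}=-\sum_l p_l\ln p_l-\lambda\Big(\sum_l p_l-1\Big)-\beta\Big(\sum_l p_l E_l-\langle E\rangle\Big),\qquad\frac{\partial\mathcal{L}}{\partial p_l}=0\ \Longrightarrow\ p_l=\frac{e^{-\beta E_l}}{\mathcal{Z}},$$
which recovers exactly the canonical distribution $P_l$ from the question.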