20

Consider a magnet with temperature $T$. We can observe its net magnetization $M$, so we say that a value of $M$ specifies a macrostate. Statistical mechanics tells us which macrostate the magnet is in. To do this, we compute the free energy $F(M)$ and minimize it. The free energy is defined by $$F = U - TS$$ and hence depends on the entropy $S$. This entropy is determined by the number of microstates that could be compatible with the given macrostate, i.e. the number of spin states that lead to the magnetization we observe.
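To make the procedure concrete, here is a minimal numerical sketch for a toy model of my own choosing (none of these specifics are in the question): $N$ non-interacting spins $s_i = \pm 1$ in a field $H$, with $k_B = 1$, so $U(M) = -HM$ and $S(M) = \ln W(M)$ with $W(M) = \binom{N}{(N+M)/2}$.

```python
import numpy as np
from math import lgamma

# Toy illustration: a macrostate is the net magnetization M; its
# microstate count is W(M) = C(N, n_up) with n_up = (N + M)/2.
def free_energy(M, N, H, T):
    n_up = (N + M) / 2
    # ln C(N, n_up) via log-gamma, to avoid overflow for large N
    lnW = lgamma(N + 1) - lgamma(n_up + 1) - lgamma(N - n_up + 1)
    return -H * M - T * lnW      # F = U - T*S, with U = -H*M and k_B = 1

N, H, T = 1000, 0.5, 1.0
Ms = np.arange(-N, N + 1, 2)     # allowed magnetizations
F = np.array([free_energy(m, N, H, T) for m in Ms])
M_star = Ms[np.argmin(F)]

# The minimizer agrees with the exact paramagnet result M = N tanh(H/T)
print(M_star, N * np.tanh(H / T))
```

Minimizing $F$ over $M$ reproduces the standard paramagnet magnetization, which is the sense in which the free-energy prescription "tells us which macrostate the magnet is in."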

This procedure has always felt sketchy to me because it seems to rely on some subjective notion of knowledge. The reason that there are many allowed microstates is because we've postulated that we know nothing about the system besides the net magnetization; if we did know something else, it would decrease the number of consistent microstates and change the entropy, which changes $F$.

As such, it looks like the result of the calculation depends on the set of macrostates we use! For example, suppose I somehow attached a measuring device to every single spin. Then in principle, I could specify my macrostates with a long list containing the state of every spin; then there is only one microstate corresponding to each macrostate. In this case, $S = 0$ for each macrostate, $F = U$, and the minimum free energy is attained for minimum $U$.

Then I conclude that the system is always in the ground state!

What's wrong with this reasoning? Is it somehow illegal to make this macrostate choice? Could acquiring all of this information about the spins necessarily change how the magnet behaves, e.g. from something like Landauer's principle? In general, can changing macrostate choice ever change the predictions of statistical mechanics?

knzhou
  • 101,976
  • 8
    Are you familiar with the work of Jaynes? He's recommended reading for this sort of philosophical question. (PS. the answer to your question is emphatically yes, as your reasoning already shows.) – Mark Mitchison Dec 02 '16 at 03:54
  • 1
    @MarkMitchison I'm not familiar with that work, but I don't think I'm asking a philosophy question. It's a question about a physically observable output of a mathematically well-specified theory. – knzhou Dec 02 '16 at 04:00
  • 7
    Check out "Information theory and statistical mechanics", by ET Jaynes (published in Physical Review in case you're worried about my allusions to philosophy). It's a 20th century classic. Enjoy :) – Mark Mitchison Dec 02 '16 at 04:06
  • 2
    By the way, I meant it's philosophical in the sense that only the statistical mechanics description changes. If you redefine the macrostates (or equivalently, the accessible observables), the predictive power of statistical mechanics increases, but of course the physically observed behaviour is the same. – Mark Mitchison Dec 02 '16 at 04:24
  • 1
    If you discover that you can change the physical behavior of a system by making a bookkeeping change entirely in your head, then your model of the system is flawed. Reality doesn't care about what you think. – rob Dec 05 '16 at 00:13
  • @rob Sure, but then, what exactly is wrong with my reasoning? Is it a flaw in statistical mechanics itself? – knzhou Dec 05 '16 at 00:16
  • 1
    Entropy is indeed a measure of your ignorance of the system, and it does change depending on how much you know about the system, i.e. how you define a macrostate. Thermodynamics and statistical mechanics don't exist without this ignorance. – Reid Hayes Dec 05 '16 at 02:37
    It's a very interesting topic, and I don't think I could give a complete answer; I look forward to the answers – Reid Hayes Dec 05 '16 at 02:39
    Note that if you reduce the number of available microstates by a factor of $10^{1000}$, your entropy changes by only about $10^{-20}$ joules per kelvin. So entropy is a lot more stable under your scheming than you might think. This is why we don't worry about fixing the linear or angular momentum of a system in the microcanonical ensemble. Whereas energy is positive definite, and therefore has a huge impact on the number of available states, momenta are not, and therefore any discrepancy from the fixed value can be made up for by altering the momentum of the last particle. – Reid Hayes Dec 05 '16 at 02:58
  • @ReidHayes: The linear and angular momentum is fixed at zero since the magnet is considered in its rest frame. Otherwise the free energy would have corresponding contributions. – Arnold Neumaier Dec 06 '16 at 16:29
  • 1
    @knzhou think of it this way: if you did measure all of those spins and processed dynamics with a perfect computer, a lot of the things that we call thermal noise, you could model explicitly without calling them "noisy" per se; you could see each thing continuously evolve to where it's going. However the coarse-graining of the phase space that makes stat-mech work would still be the property of classes of questions, "hey, if I start from one of these (big range) of models, where will the system go?" and you evolve a bunch of those starting-states and come up with a big distribution. – CR Drost Dec 06 '16 at 18:16
  • Stat-mech just lets you get a leg up on what that big distribution is, without actually doing all of that computation, just by the idea that our uncertainties expand: I know $a$ perfectly but $b$ only $\pm \sigma$, now I let them interact, now in principle I can figure out everything that could have happened to my range of $b$ but suddenly $a$ has some new uncertainty. – CR Drost Dec 06 '16 at 18:18
  • @ArnoldNeumaier linear momentum yes, but angular momentum no: not in an inertial reference frame. – Reid Hayes Dec 06 '16 at 23:16
  • @ArnoldNeumaier also, total linear momentum being zero is still a constraint – Reid Hayes Dec 07 '16 at 01:45
  • See: https://physics.stackexchange.com/a/755636/24066 – al-Hwarizmi Mar 17 '23 at 07:41
  • Can you tell how can you know spin of each sites accurately? I think that the uncertainty in the knowledge of spin directions would be quantum mechanical and therefore entropy would always be a positive quantity. – Aman pawar Jan 21 '24 at 17:11

10 Answers

15

This was too long for a comment, so I am posting it as an answer.

I second @MarkMitchison's advice to read E.T. Jaynes' work. His point is exactly the one you have made. If I have understood him correctly, entropy (in statistical mechanics) is a tool for statistical inference: it equips you to make the least biased estimate of various macroscopic parameters based only on the information you have (that information being your knowledge of the macrostate) and nothing more. But just because you made a statistical inference doesn't imply that nature should conform to it. Whether your inference is correct or not is to be verified by experiment. As far as I am aware, in ordinary cases statistical inference based on maximizing entropy works excellently, but a priori it need not have. If you find that experimental results do not validate your inference, then it means that the information you had was inadequate, irrelevant, or incorrect.

When entropy is interpreted this way, it becomes much more general. Let me give an example from my own research. I have used the entropy-maximization procedure to find the equilibrium diameter distribution of droplets in turbulent flow experiments, based only on knowledge of the mean volume of droplets (just as you would find the velocity distribution of molecules given the mean energy). In some cases it gives a good fit; in some cases it doesn't. Where it doesn't fit, it indicates that factors other than the mean droplet volume dictate the size distribution, and I have to introduce additional hypotheses to account for them.
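To illustrate the mechanics of this kind of inference (a sketch of my own, not the actual analysis from that research; the size grid and mean value are hypothetical): maximizing entropy subject only to a fixed mean of a positive quantity yields a Gibbs/exponential form $p_i \propto e^{-\lambda x_i}$, with the Lagrange multiplier $\lambda$ fixed by the constraint.

```python
import numpy as np

# Hypothetical grid of droplet "sizes" (arbitrary units) and target mean
x = np.linspace(0.01, 10.0, 1000)
target_mean = 2.0

def mean_for(lam):
    # Max-ent solution under a mean constraint: p_i ∝ exp(-lam * x_i)
    w = np.exp(-lam * x)
    p = w / w.sum()
    return (p * x).sum()

# The mean decreases monotonically as lam increases, so solve the
# constraint mean_for(lam) = target_mean by bisection.
lo, hi = 1e-6, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_for(mid) > target_mean:
        lo = mid      # lam too small: distribution too spread out
    else:
        hi = mid

lam = 0.5 * (lo + hi)
print(lam, mean_for(lam))   # the imposed mean is reproduced
```

Whether this exponential shape actually fits the measured droplet distribution is then an empirical question, exactly as the answer says.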

Deep
  • 6,504
4

For a system in thermal equilibrium, the only admissible macrostates are those of the form $\rho=e^{-S/k_B}$, where $S$ is a linear combination of additively conserved quantum numbers. This severely limits the possibilities to ensembles like the canonical and grand canonical ensemble, and excludes your choice.

Outside of equilibrium, the admissible macrostates are still of the form $\rho=e^{-S/k_B}$, but the choices for $S$ are more varied. See, e.g., Chapter 10 of my online book ''Classical and Quantum Mechanics via Lie algebras''. This chapter also contains a discussion of the relation between entropy and information.

3

Let's take the Ising model for simplicity (as you have done): I think that choosing a single microstate among all the possible microstates corresponding to the macrostate described by the magnetization $M$ is changing the rules of the game.

The point is that the formalism of equilibrium statistical mechanics is derived under the assumption that the system is ergodic, i.e. that for large enough times every microstate corresponding to your macrostate will be visited with equal probability.

To say this another way: if your system is in thermodynamic equilibrium in the microstate $S$ with magnetization $M$, then if you wait long enough there will always be a thermal fluctuation large enough to take the system into a state $S'$ with magnetization still equal to $M$.

Since you want your results to be valid for any time $t$ (we are working with equilibrium statistical mechanics after all), you have to take into account the fact that there will be thermal fluctuations which will change the microstate of your system if you wait a long enough time.

Renouncing the ergodic hypothesis would mean giving up most of the results valid in equilibrium statistical mechanics: in fact, treating non-ergodic systems like glasses or gels (or spin glasses, in our case) is much more complicated than treating ergodic systems.

valerio
  • 16,231
2

What's wrong with this reasoning?

First of all let me say what is right: you are right that the definition of macrostates is a choice. In your example, we could split the paramagnet into two equal sections (in our minds) and describe the macrostate in terms of the magnetizations $M_1$ and $M_2$ of each section. If we keep subdividing in this way eventually we end up in the situation you describe, where we consider the magnetization of each spin separately.

What's wrong is your handling of the thermodynamic limit. The statement 'the physical macrostate minimizes free energy' is only true in this limit. In any finite-size system, statistical mechanics gives you a probability distribution over the macrostates of the system, and while the one with minimum free energy has maximum likelihood, there is no reason for the distribution to be sharp. In particular, if you consider each spin separately then the probability distribution is simply given by the Boltzmann factor, $P(\vec{s}) \sim \exp[\mu \vec{s}\cdot \vec{H}/k T]$.

Now, in order for the distribution to become sharp (and thus the conclusion that the physical macrostate minimizes free energy be valid), what is needed is that the number of microstates per macrostate becomes large. If I consider the macrostate to be described by the total magnetization, $M$, or the magnetizations $(M_1,M_2,\ldots)$ of a fixed finite number of different sections, then as the number of spins $N$ becomes large the number of microstates per macrostate also grows (exponentially in $N$), so the thermodynamic limit works fine. However, if I consider the macrostate as specifying the magnetization of every spin then the number of microstates per macrostate is a constant (equal to one), and we have a problem.
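A quick numerical check of this sharpening (my own sketch, with $k_B = \mu = 1$ and a hypothetical field $h$): the macrostate probability is multiplicity times Boltzmann weight, $P(M) \propto W(M)\,e^{hM/T}$, and its relative width shrinks like $1/\sqrt{N}$.

```python
import numpy as np
from math import lgamma

def rel_width(N, h=0.3, T=1.0):
    # P(M) ∝ C(N, (N+M)/2) * exp(h*M/T): microstate count times
    # Boltzmann factor for N non-interacting spins in a field h.
    M = np.arange(-N, N + 1, 2)
    n = (N + M) // 2
    lnP = np.array([lgamma(N + 1) - lgamma(k + 1) - lgamma(N - k + 1)
                    for k in n]) + h * M / T
    P = np.exp(lnP - lnP.max())   # subtract max for numerical stability
    P /= P.sum()
    mean = (P * M).sum()
    std = np.sqrt((P * (M - mean) ** 2).sum())
    return std / abs(mean)        # relative width of P(M)

# Increasing N by 100x shrinks the relative width by ~10x, i.e. ~1/sqrt(N)
print(rel_width(100), rel_width(10000))
```

With single-spin "macrostates" there is no multiplicity factor $W$ at all, so this sharpening never happens, which is exactly the failure described above.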

In short, your argument fails because you cannot assume the free energy is minimized as you have defined it.

Is it somehow illegal to make this macrostate choice?

No-one is going to arrest you, but in order for the thermodynamic limit to work, the number of microstates per macrostate must grow (exponentially) with $N$.

Could acquiring all of this information about the spins necessarily change how the magnet behaves, e.g. from something like Landauer's principle?

As I understand it, Landauer's principle implies that there is a minimum entropy cost to acquiring information, but says nothing about where this excess entropy must be held. If you are considering the paramagnet in equilibrium with a thermal bath, nothing will change. Of course, in a real paramagnet continuous measurement of each spin certainly would affect how the system behaves.

In general, can changing macrostate choice ever change the predictions of statistical mechanics?

It changes what your model can predict, yes. For example, if I define the macrostate in terms of the two magnetizations ($M_1,M_2$) then I get more information (in principle) than if I define macrostate in terms of the magnetization of the whole system, $M$. There are some cases where this might be significant, for example if the external field has spatial variation. However, the predictions must be compatible with each other, in the sense that $M = M_1 + M_2$ (in the thermodynamic limit) or for a finite system $P(M) = \sum_{M_1+M_2=M}P(M_1,M_2)$.
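This compatibility is easy to verify numerically for non-interacting spins (my own sketch, with $k_B = \mu = 1$ and a hypothetical field $h$): the distribution of $M$ for the whole system equals the marginal of the joint distribution over two halves, which for independent halves is a discrete convolution.

```python
import numpy as np
from math import lgamma

def P_of_M(N, h=0.3):
    # P(M) ∝ C(N, (N+M)/2) * exp(h*M): multiplicity times Boltzmann weight
    M = np.arange(-N, N + 1, 2)
    n = (N + M) // 2
    lnP = np.array([lgamma(N + 1) - lgamma(k + 1) - lgamma(N - k + 1)
                    for k in n]) + h * M
    P = np.exp(lnP - lnP.max())
    return P / P.sum()

N = 40
P_full = P_of_M(N)          # macrostate = total magnetization M
P_half = P_of_M(N // 2)     # macrostate = magnetization of one half

# Marginal: P(M) = sum over M1 + M2 = M of P(M1) * P(M2). For
# independent halves this sum is exactly a convolution.
P_marg = np.convolve(P_half, P_half)
print(np.abs(P_marg - P_full).max())   # agreement to machine precision
```

The agreement is exact here because the halves are independent; with interactions the finer description would carry genuinely more information, but the marginal consistency condition still has to hold.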

Mark A
  • 1,692
  • 13
  • 21
0

A macrostate is determined solely by the values of the macroscopic parameters of the system, i.e. thermodynamic quantities like pressure, temperature, etc. When you attach a measurement device to each of the spins, you are talking about a microstate of the system. In equilibrium thermodynamics and statistical physics it is assumed that the macroscopic degrees of freedom are unique for the bulk of the system. I mean you should only assign an overall, net magnetization to the system (instead of a spin configuration, as you've mentioned).

So the answer to your first two questions is that you haven't chosen a macrostate at all.

The answer to your third question, if I've understood it correctly, is yes: in this case you're actually performing a measurement on all the degrees of freedom of the system, and many quantum-measurement-related issues can occur in this situation.

If by changing macrostate choice you mean that by changing the values of macroscopic quantities the predictions of statistical mechanics change, then the answer is yes (for example, by decreasing temperature, phase transitions occur in systems, and each phase usually has completely different behavior).

If by changing macrostate choice you mean choosing a different group of independent macroscopic parameters as the macroscopic degrees of freedom of the system and putting constraints on them, then the answer is yes; in fact, changing the constraints on the macroscopic degrees of freedom usually results in a different statistical ensemble.

Hossein
  • 1,397
  • For your first paragraph, where exactly is it assumed that you can't take microscopic degrees of freedom to specify a "macrostate"? I know what the usual rules are, but what rules out my weird choice of macrostates? – knzhou Dec 05 '16 at 00:08
  • As for the rest of your answer: does macrostate choice affect the results even in classical statistical mechanics? Because classically, measurement doesn't have to affect the system, so it looks like it shouldn't have any effect on the results. – knzhou Dec 05 '16 at 00:09
    The point is that there are many microstates corresponding to the same macrostate. So the statement "the system is in the macrostate A" is more general than "the system is in the microstate alpha which corresponds to the macrostate A". In the former case all of the consistent microstates have the same weight in the probability distribution of the system, while in the latter they don't. – Hossein Dec 05 '16 at 00:23
  • Maybe we can discuss more clearly if you clarify your statement about macrostate choice. – Hossein Dec 05 '16 at 00:29
0

In statistical mechanics we assume that an isolated system in equilibrium has an equal probability of being in any of its accessible microstates. This allows us to do calculations on the basis of statistics, and the method clearly works. However, the assumption of equal probabilities can easily be shown to be false. E.g. consider doing a free-expansion experiment in a hypothetical totally isolated system, such that the quantum state of the system does not decohere due to interactions with the environment. In that case, the number of distinguishable physical states after the expansion must be the same as the original number of states, due to unitary time evolution.

However, there can be no doubt that statistical mechanics will not break down in this experiment. So, pretending that the larger set of states (which includes states compatible with the macrostate that we know the system actually cannot be in, since they don't evolve back into the smaller volume under time reversal) are all equally probable alongside the states the system can actually be in, leads to the same predictions for the properties of the gas.

Clearly, then, what is going on here is that the equal-probability postulate is irrelevant; what matters is that there exists a large set of states that are statistically representative of the states the system can really be in, and this allows you to use that large set for statistical computations. But this does mean that the foundations of statistical mechanics as taught in almost all textbooks are misleading (they are not even wrong). Explaining why statistical mechanics works is still an active topic of research; ideas such as eigenstate thermalization have been developed recently.

With this in mind, and considering the example in the question, it's clear that as you narrow down the number of states more and more, statistical reasoning will break down more and more (even within the paradigm of statistical mechanics, where you then take into account larger fluctuations due to the smaller number of degrees of freedom), and the actual dynamics of the system will start to become more and more important.

Count Iblis
  • 10,114
0

I think a pretty much equivalent question is dealt with in the Wikipedia discussion on the mixing paradox. The entropy of a physical system does indeed depend on your choice of macrostate, but the internal energy $U$ also depends on your choice of macrostate in such a way that their difference $F$ is always minimized at the same value of $M$, which is the physically observed value.

tparker
  • 47,418
0

I have thought about this a bit and thought I ought to have a go.

So, to think that physical reality changes with constructs in our heads, or with our ability to measure, is absurd. I would agree with the other answers there.

However, your mathematical description does depend on the assumptions you make. For example, if I assume that the Ising ferromagnet is sitting in a thermal bath, then at finite temperature all microstates are accessible to the system, with differing probabilities. The probabilities depend only on the energy of the system, and so clearly the only macroparameter that the entropy depends on is the energy.

Getting to your question: the free-energy minimization is a prescription used to find the most probable state of the system (this is justified in this SE post: In thermodynamic systems why must the free energy of the system be minimized?). This works for canonical ensembles, and hence there is a tacit assumption of finite temperature. Hence, all microstates are accessible to the system. So your statement that the microstates depend on the chosen macroparameter is incorrect.

What this procedure is really doing is finding the most probable energy by finding the most probable magnetisation. Luckily for us, a fixed magnetisation implies that all states with that magnetisation are equally likely amongst themselves. This is a property of the macroparameter we call magnetisation. It does not mean that other magnetisations are not possible.

The assumption we are making in this case is not that of constant magnetisation but of finite, constant temperature. The procedure has helped identify the most probable magnetisation. The entropy of the system itself has nothing to do with the most probable magnetisation. Fluctuations away from this value of the magnetisation vanish in the thermodynamic limit.

Anonjohn
  • 734
0

Could this be related to the ensemble you are choosing to describe your physical system?

Microcanonical Ensemble:

  • Fixed variables: $N,E,V$;
  • Microscopic features: $W$ (number of microstates);
  • Macroscopic function: Boltzmann entropy $\Rightarrow \color{red}{S} = k_B\ln W$

Canonical Ensemble:

  • Fixed variables: $N,T,V$;
  • Microscopic features: $Z = \sum_i e^{-E_i/k_BT}$ (partition function);
  • Macroscopic function: Helmholtz free energy $\Rightarrow \color{red}{F} = -k_BT\ln Z$

Grand Canonical Ensemble:

  • Fixed variables: $\mu,T,V$;
  • Microscopic features: $\mathcal{Z} = \sum_i e^{-(E_i-\mu N_i)/k_B T}$ (grand partition function);
  • Macroscopic function: Grand potential $\Rightarrow \color{red}{\Omega} = -k_B T\ln \mathcal{Z}$

For more details, see Statistical ensembles on Wikipedia.
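As a minimal worked example of the canonical-ensemble bookkeeping above (my own two-level toy system with energies $0$ and $\varepsilon$, and $k_B = 1$; not part of the answer): compute $Z$, then $F = -k_B T \ln Z$, and check that it agrees with $U - TS$ using the Gibbs entropy.

```python
import numpy as np

kB = 1.0
T, eps = 1.0, 2.0
E = np.array([0.0, eps])                 # two-level system

Z = np.exp(-E / (kB * T)).sum()          # canonical partition function
F = -kB * T * np.log(Z)                  # Helmholtz free energy
p = np.exp(-E / (kB * T)) / Z            # Boltzmann probabilities
U = (p * E).sum()                        # mean energy
S = -kB * (p * np.log(p)).sum()          # Gibbs entropy

print(F, U - T * S)                      # the two agree: F = U - T*S
```

This is the same $F = U - TS$ that the question minimizes; here it simply falls out of the partition function.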

Jack
  • 1,727
0

Statistical Mechanics is the bridge between Thermodynamics (which deals only with macroscopic quantities) and the study of the micro-interactions (which deals only with microscopic quantities).

From a conceptual point of view, indeed you could monitor the internal micro-degrees of freedom (the spin orientation of every site), but you cannot control them. In other words: they are stochastic variables.

Knowing the instantaneous microstate of the system is possible (have you ever seen a 2D Ising model applet?), but this doesn't change your entropy, since entropy is proportional to the logarithm of the number of microstates compatible with the macrostate.

If you were able to control the internal micro-degrees of freedom of your system (e.g. the spin orientation at certain/all sites), you wouldn't need Statistical Mechanics any more, would you? It would be a sort of cheating!

To clarify further: entropy $S$ is just a tool to bypass all the difficulties linked with detailed knowledge of the internal micro-degrees of freedom of your system. You shift from a purely classical, "integrable" approach to a statistical approach because, practically always, it's the only way you have. The beauty of Statistical Mechanics is that you obtain (NOT fix) the most probable MACRO equilibrium configuration, because the microscopic configurations that correspond to it are numerous.

AndreaPaco
  • 1,232
  • 9
  • 24