14

The second law of thermodynamics states that

the entropy in an isolated system cannot decrease

This seems intuitive when considering a low entropy system transitioning to a higher entropy state, but very counterintuitive when considering a system that is currently at the greatest possible entropy, because the system can only transition to another maximum entropy state by first passing through a lower entropy state. Consider, for example, the system shown in "Entropy: Why the 2nd Law of Thermodynamics is a fundamental law of physics" by Eugene Khutoryansky. This system starts with $500$ balls in the left container, and intuitively we can understand that these balls will spread evenly between the two containers. But what happens once the balls are distributed evenly: $250$ in the left container and $250$ in the right container?

Does the second law of thermodynamics prohibit any ball from moving to another container because that would shift the system into a lower entropy configuration?

EDIT: I believe (although the answers seem to indicate that this belief is incorrect) that the intermediate state has lower entropy because $$\Omega_1 = \binom{500}{250} > \binom{500}{251} = \Omega_2$$

Poseidaan
  • 546

5 Answers

38

The second law of thermodynamics does not prohibit any ball from moving to another container because that would shift the system into a lower entropy configuration.

The question originates from a widespread misconception: in statistical mechanics there is no such thing as the entropy of a single configuration. Entropy is a property of the macrostate. Therefore, it is a collective property of all the microscopic configurations consistent with the macroscopic variables uniquely identifying the equilibrium state. The physical system visits all the accessible microstates as a consequence of its microscopic dynamics. Among these states there are states with an unbalanced number of particles in the two containers; people refer to such states as fluctuations around the average, equally distributed case. It is an effect of the macroscopic size of thermodynamic systems that the overwhelming majority of the microscopic states do not show large fluctuations.
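For a concrete picture of such fluctuations, here is a minimal sketch (assuming a simple Ehrenfest-urn dynamics, in which one randomly chosen ball hops to the other container at each step; the model and numbers are illustrative). Starting from all $500$ balls on the left, the occupation of the left container quickly settles near $250$ and then keeps fluctuating around it: unbalanced microstates are visited all the time, but large imbalances essentially never recur.

```python
import random

N = 500            # total number of balls, as in the question
steps = 200_000    # number of single-ball moves
left = N           # start with all balls in the left container

history = []
for _ in range(steps):
    # pick one ball uniformly at random and move it to the other container
    if random.random() < left / N:
        left -= 1
    else:
        left += 1
    history.append(left)

# after the initial transient, the count just fluctuates around N/2 = 250
tail = history[steps // 2:]
print("average left occupation:", sum(tail) / len(tail))
print("largest deviation from 250:", max(abs(c - 250) for c in tail))
```

Typical runs give an average very close to $250$ and deviations of at most a few tens of balls, i.e. of order $\sqrt{N}$, and nothing remotely like a spontaneous return to the initial $500/0$ state.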

10

This is a much-discussed question in thermodynamics.

The short answer is that the 2nd law of thermodynamics does not imply that balls cannot move between the containers when thermal equilibrium is established (at some non-zero temperature), nor does it rule out the possibility of fluctuations which move balls between the chambers temporarily. The question is, therefore, how to understand the concept of entropy when such fluctuations can take place.

Consider an isolated system, so it has fixed energy, volume, particle number $U,V,N$. Suppose this system also has some internal parameter $x$. In your example $x$ is the number of balls in one of the two containers. Let $S(x;U,V,N)$ be the entropy which the system would have if $x$ were constrained to take some particular value.

Such a system, in thermal equilibrium at some given temperature, will exhibit thermal fluctuations such that the probability that the system is found, at any time, in a state whose entropy would be $S(x;U,V,N)$ if $x$ were constrained is given by $$ P \propto e^{S(x;U,V,N)/ k_{\rm B}} $$

There is some particular value of $x$ at which this $S$ is maximised for the given $U,V,N$; that is therefore the most likely value of $x$. Let us call it $x_0$. It would be $N/2$ in your example.
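As a rough numerical illustration (assuming the two-container example, where the multiplicity of the macrostate with $x$ balls on the left is the binomial coefficient, so $S(x) = k_{\rm B}\ln\binom{N}{x}$ and hence $P(x) \propto \binom{N}{x}$), one can tabulate this distribution directly:

```python
from math import comb

# Sketch: P(x) ~ exp(S(x)/kB) ~ C(N, x) for N balls split between two containers
N = 500
weights = [comb(N, x) for x in range(N + 1)]
total = sum(weights)                      # = 2**N
P = [w / total for w in weights]

x0 = max(range(N + 1), key=lambda x: P[x])
print("most likely x:", x0)               # 250 = N/2
print("P(x = 250):", round(P[250], 4))    # ~ 0.036
# probability of a fluctuation of more than 10% of N away from N/2
p_big = sum(P[x] for x in range(N + 1) if abs(x - 250) > 50)
print("P(|x - 250| > 50):", p_big)        # of order 1e-5
```

The distribution is sharply peaked at $x_0 = N/2$, with a width of order $\sqrt{N}$: states with $x \ne x_0$ are visited constantly, but only within a narrow band around $N/2$.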

Now, as the question rightly points out, during the fluctuations $S(x;U,V,N)$ reaches values less than $S(x_0;U,V,N)$, and the question is, doesn't this break the 2nd law? The muddle has arisen because of two different uses of the word "entropy". The entropy of the system under consideration---the system with no constraint on $x$---is $$ S(U,V,N) \simeq S(x_0; U,V,N). $$ where the quantity on the left is the entropy for the system with no constraint on $x$. This is very closely approximated by $S(x_0; U,V,N)$ in the limit of large systems. $S(x_0; U,V,N)$ is the entropy of the largest set of microstates in the collection of those which contribute to $S(U,V,N)$ and this largest set contributes almost all of the total. This $S(U,V,N)$ is the quantity which the second law is concerned with.
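To see how good the approximation $S(U,V,N) \simeq S(x_0; U,V,N)$ becomes for large systems, one can compare, in the same illustrative two-container model as above, the logarithm of the total number of microstates, $\ln 2^N = N\ln 2$, with the logarithm of the multiplicity of the single largest macrostate, $\ln\binom{N}{N/2}$:

```python
from math import lgamma, log

def ln_binom(n, k):
    # ln C(n, k) via log-gamma, to avoid huge integers
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

for N in (50, 500, 50_000):
    S_total = N * log(2)            # ln of the total microstate count, sum_x C(N, x)
    S_peak = ln_binom(N, N // 2)    # ln of the multiplicity of the x = N/2 macrostate
    print(f"N = {N:6d}   relative difference = {(S_total - S_peak) / S_total:.4f}")
```

The relative difference is already below $1\%$ for $N = 500$ and shrinks roughly like $\ln N / N$, which is why, towards the thermodynamic limit, the entropy of the unconstrained system can be identified with $S(x_0; U,V,N)$.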

The quantity $$ S(x \ne x_0; U,V,N) $$ is not the entropy of the system. It is what the entropy of the system would be if $x$ were also constrained and had some given value.

During thermal fluctuations the system explores a range of states with different values of the parameter $x$, and these states can be macroscopically different in the statistical mechanics sense of the word. This makes one think of the system as moving around the macroscopic state space, with entropy going up and down accordingly, but that would be something of a mistake. One has to get used to the fact that such fluctuations are an intrinsic aspect of the situation of thermal equilibrium. The very concept of "thermal equilibrium" has to be understood as embracing this range of states. In this understanding we should ascribe the term "equilibrium entropy" not to $S(x \ne x_0; U,V,N)$ but to $S(x_0; U,V,N)$. It is this second entropy which the second law says cannot fall in an isolated system.

For a proof that indeed the 2nd law does hold in such cases, you can examine devices such as the Feynman-Smoluchowski ratchet which try to exploit thermal fluctuations in order to extract work from heat at one temperature. Such devices only get work when there is a temperature difference and they involve a loss of heat exactly as the 2nd law requires.

[All the above has assumed conditions towards the thermodynamic limit, where the distinction between average value and maximum value can be neglected, and probability distribution functions can be taken to be normal (Gaussian). A reference for further information is my thermodynamics book, section 27.1.1. Pippard also has interesting remarks.]

Andrew Steane
  • 58,183
  • You are discussing thermodynamic fluctuations, corresponding to an average number in the two containers different from the equilibrium value. However, the question was whether the second principle prohibits any ball from moving to another container because that would shift the system into a lower entropy configuration. This is a stronger condition and a different question. – GiorgioP-DoomsdayClockIsAt-90 Jun 23 '21 at 13:08
  • here is the same idea as quoted from Pippard https://physics.stackexchange.com/questions/534173/why-does-the-minimum-energy-principle-work/534565#534565 – hyportnex Jun 23 '21 at 20:23
  • It feels arbitrary to say $S(x \ne x_0; U,V,N)$ is not the entropy of the system. Suppose I had a gate between the two flasks, initially closed. The system has entropy $S(500, U, V, 500)$. When I open the gate, does the system immediately attain entropy $S(250, U, V, 500)$, because that's the equilibrium it will eventually reach? How can entropy increase gradually if that's the only meaningful entropy? Similarly, does randomly closing the gate after equilibrium suddenly change the entropy, e.g. to $S(241; U,V,500)$, since it's unlikely we're exactly equal at that moment? – JounceCracklePop Jun 23 '21 at 21:41
  • In fact, based on this answer it would seem that I can lower the equilibrium entropy of the system simply by closing the gate, since that imposes a constraint on $x$ and $x$ is unlikely to be exactly equal to $x_0$. – JounceCracklePop Jun 23 '21 at 21:46
  • 1
    @JounceCracklePop thanks for your question which prompted me to make an edit which clarifies this issue. I acknowledge $S(x_0;U,V,N)$ is not precisely $S(U,V,N)$ but is a good estimate of it; better for example than $S(x \ne x_0;U,V,N)$, and the main concept of my answer is unchanged. Closing a gate would lower the entropy if it reduced the number of microstates consistent with the given macrostate. You thus enter into the discussion of the ratchet and the Maxwell daemon. – Andrew Steane Jun 24 '21 at 11:11
  • @JounceCracklePop Regarding your first question: during any fluctuation the system can reach a set of microstates which is the very set it would be restricted to if there were a further restriction in place. But entropy was always about the total set of microstates it can explore. Therefore we should not say the entropy has fallen when the system happens to be in some part of state space if it is not restricted to stay there. The entropy measures the total set of states it can explore while remaining under given external constraints. – Andrew Steane Jun 24 '21 at 11:21
8

Let me start by saying that, in my opinion, this is a truly enlightening question to think about. It appears to stem from a misunderstanding of the definition of entropy.

Suppose we are dealing with a system $A$: it will have a macroscopic configuration and a microscopic configuration.

For example, if the system is your bedroom, its macroscopic configuration could be "the room is tidy"; the macroscopic configuration is simply a collection of its macroscopic properties: the books are all on the shelves, there is nothing on the floor, etc.
Its microscopic configuration is, on the contrary, the collection of its microscopic properties: the exact positions of the books, the exact positions of all the other objects, etc.
Keep in mind the following crucial observation: the macroscopic configuration of the system limits the possible microscopic configurations. If the books are in order (macroscopic configuration), then only a small subset of the possible microconfigurations of book positions remains: the books can only be on the shelf. Of course there isn't a unique arrangement of the books, but they must certainly be on the shelf.

Wonderful. With all this in mind, let's look at the definition of entropy. Take our system $A$: it will have a number of possible microconfigurations; let's call this number $\Gamma$. The entropy of the macroscopic system is then $$S=k_B\ln\Gamma$$ Keep in mind that the entropy is defined for the system as a whole, i.e. for the macroscopic configuration! There is no useful way to define entropy for a single microscopic configuration.
Now we can see that if $\Gamma$ becomes bigger then the entropy $S$ becomes bigger, and the second law of thermodynamics states that entropy can only grow or stay the same, at least for an isolated system. So $\Gamma$ must grow or stay the same. But what influences the value of $\Gamma$? The macroscopic configuration! So we can now understand that the second law of thermodynamics really states:

A macroscopic system in the macroscopic configuration $C_1$ cannot evolve into a macroscopic configuration $C_2$ that allows fewer microscopic states.

And now we can finally talk about your apparent paradox. We can now understand that the second law of thermodynamics tells us nothing about the microscopic configuration of the system, so a ball can certainly switch place and still obey the second law. What the second law deems impossible is the evolution of the macroscopic state into a state with fewer possibilities for the positions of the balls: according to the second law, our system cannot evolve into a state with the property "all the balls are contained in the left side".
Again: keep in mind that the second law of thermodynamics is about macroscopic systems composed of microscopic parts, so it would not apply to a situation with only, let's say, three balls; the premises for the law would simply not hold, because the microstates are not truly "microscopic" with three balls. This, essentially, is why the second law of thermodynamics stems from statistical considerations: for it to hold you need a system of statistically relevant size, with a sufficiently high number of microstates, so that statistically improbable macroscopic configurations become effectively impossible ones.
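A small sketch makes this concrete (assuming, purely for illustration, that $\Gamma$ for a macrostate of the two-container system is the number of ways of assigning the $500$ balls consistent with that macrostate):

```python
from math import comb, log

N = 500

# Macrostate C1: "all balls in the left container" -> exactly one way to assign them
gamma_all_left = comb(N, N)        # = 1

# Macrostate C2: "250 balls in each container"
gamma_even = comb(N, 250)          # ~ 10^149

print("S/kB for 'all on the left':", log(gamma_all_left))         # 0.0
print("S/kB for '250/250 split':  ", round(log(gamma_even), 1))   # ~ 343.2
```

The even split allows about $10^{149}$ times as many microstates as the all-on-the-left macrostate, which is what turns "statistically improbable" into "effectively impossible" for the reverse evolution.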

Lastly, note another wonderful thing: the second law of thermodynamics, seemingly paradoxically, ensures that a ball will move if we are in a configuration with a perfect 50/50 split. Otherwise the system would have evolved into a state with the macroscopic property "the balls are divided exactly evenly between the two sides", and this is a macroscopic configuration that drastically reduces the possible microscopic configurations: $\Gamma$ would become smaller, and this is not allowed.

Noumeno
  • 4,474
  • Can't fix a typo in edit. I assume you meant "the system as a whole", rather than a hole... – João Mendes Jun 24 '21 at 07:40
  • Somehow I'm missing something here. I assume that the general premise of the OP is universally accepted: The beginning state of the OP's experiment (all balls on one side, let's call it C1) shows the system in a lower state of entropy than the end state (all balls more or less distributed between the two sides, C2). Neither configuration allows fewer microstates though -- there are paths to all of them starting at either configuration. According to your definition, that means they have equal entropy. That seems to contradict how we commonly understand entropy though (where e(C2) > e(C1)). – Peter - Reinstate Monica Jun 25 '21 at 11:13
  • State C1, all balls on one side, limits the number of possible combinations of positions for the balls: the balls must be only on the left side. On the contrary the macrostate C2, the balls are present in both sides, allows all the possible positions inside the structure, so $\Gamma$ is larger. This means that the macrostate C2 has an higher value of entropy. The two macrostates do not have the same entropy. – Noumeno Jun 25 '21 at 11:33
2

Does the second law of thermodynamics prohibit any ball from moving to another container because that would shift the system into a lower entropy configuration?

No, it does not prevent any motion of any individual element of the system; it is a matter of probability.

Serge Hulne
  • 1,744
1

In statistical mechanics: Entropy $(H)$ is a measure of the amount of information $(I)$ which is unavailable about the many-particle system.

For the OP’s system on the left, initially:

$$H_{\mathrm{conditional}} = H_{\mathrm{joint}} - I$$

This is why the initial entropy is said to be small, because we know quite a lot $(I)$ about the system. How? Because if I know the position of just one of those particles to the left, I can make very good predictions of the other particles, because they are in such a confined space to begin with.

Once the particles start drifting out into the larger space (two bottles), the information $(I)$ that I, or any other observer has, decreases. As a consequence, the conditional entropy $(H_{conditional})$ increases. It does so until $(I=0)$ and the maximum entropy state is achieved.

This is what the second law is really about - how information turns into entropy.

At this equilibrium, the Boltzmann entropy equals the Shannon entropy (as the microstates are now equiprobable). Boltzmann's entropy is a constant: it counts how many microstates can be taken on by a system at fixed energy. Boltzmann's entropy equation cannot be used in non-equilibrium systems; there we have to use Shannon's entropy equation.
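A short sketch of this relation (with a made-up distribution over $\Gamma = 1000$ microstates, purely for illustration): the Shannon entropy $H = -\sum_i p_i \ln p_i$ equals the Boltzmann value $\ln\Gamma$ only when all microstates are equiprobable, and any information that makes the distribution non-uniform lowers it.

```python
from math import log

def shannon_entropy(p):
    # H = -sum p_i ln p_i, in natural units (k_B = 1)
    return -sum(pi * log(pi) for pi in p if pi > 0)

gamma = 1000   # number of accessible microstates (illustrative)

# at equilibrium all microstates are equiprobable: Shannon = Boltzmann = ln(Gamma)
uniform = [1 / gamma] * gamma
print(shannon_entropy(uniform), log(gamma))      # both ~ 6.908

# out of equilibrium we hold some information, e.g. 100 microstates are
# ten times as likely as the other 900: the Shannon entropy is smaller
skewed = [10 / 1900] * 100 + [1 / 1900] * 900
print(shannon_entropy(skewed))                   # ~ 6.34 < ln(Gamma)
```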

Then, getting to the OP's question: once we are at the maximum entropy state, even knowing the position of one particle in the two bottles tells me nothing about the rest. What I, or anyone else, observes is individual particles moving randomly around the space of the entire two bottles. If that were not the case, we would suddenly have information again, and the conditional entropy would decrease, breaking the second law.

So, the second law, as Adami said, should really be written as:

When an isolated system approaches equilibrium from a non-equilibrium state, its conditional entropy almost always increases

Mr Anderson
  • 1,399