
I am studying a bit of statistical mechanics for a course at my university. My book first introduces the Boltzmann entropy $S = k_B \ln W$, and then the Gibbs one, $S = -k_B \sum_i p_i \ln p_i$, in what looks like a derivation from the previous formula. I was confused about the meaning of this last expression; in particular, in the last step of the derivation my book divides by the number of systems in the ensemble, talking about the mean entropy of the ensemble.
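
For concreteness, I understand the derivation to go roughly as follows (this is my own reconstruction, so the details may differ from the book's): one imagines $\mathcal{N}$ mental copies of the system, with $n_i$ of them in microstate $i$, so the multiplicity of this distribution over the ensemble is
$$
W_{\text{ens}} = \frac{\mathcal{N}!}{\prod_i n_i!},
\qquad
S_{\text{ens}} = k_B \ln W_{\text{ens}} \approx -k_B\, \mathcal{N} \sum_i \frac{n_i}{\mathcal{N}} \ln\frac{n_i}{\mathcal{N}}
$$
by Stirling's approximation, and then dividing by $\mathcal{N}$ and writing $p_i = n_i/\mathcal{N}$ gives $S = -k_B \sum_i p_i \ln p_i$ per system.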

This didn't make a lot of physical sense to me, so I started searching online and found the paper A formal derivation of the Gibbs entropy for classical systems following the Schrödinger quantum mechanical approach, available for free on ResearchGate, which makes a similar derivation. The paper proceeds not in a discrete fashion like my book, but considers a continuous phase space divided into many cells, obtains the expression for the multiplicity of the macrostate of the ensemble, and then uses Boltzmann's formula. At the end, in the step that involves dividing by the number of copies in the ensemble, it reads: "The entropy S of every system in the ensemble can be obtained by taking into account that the entropy is additive and that all the systems in the ensemble are macroscopically identical, which means that they all have the same entropy."

While the first claim is fine (the systems in the ensemble do not interact with each other), the second one is not: if, for example, we are considering a canonical ensemble, there are microstates with different energies, so the systems are not macroscopically identical and we can't assume their entropies are all equal!

What am I missing?

  • See e.g. Jaynes' papers, e.g. Stat. Mech. and Information Theory I and Gibbs and Boltzmann entropies (I don't know the exact titles by heart, but you should find something), to get an understanding of the Gibbs/Shannon entropy in stat. mech – Tobias Fünke Jan 12 '24 at 10:28
  • I found the paper but I don't like a derivation based on information theory – forgetfuled Jan 12 '24 at 10:52
  • @forgetfuled There is nothing in Jaynes' derivation that cannot be applied to Gibbs' entropy. Therefore, you can ignore the word information and you get a clean derivation within Gibbs' definition of entropy. – GiorgioP-DoomsdayClockIsAt-90 Jan 12 '24 at 16:48

2 Answers

0

The different ensembles give you the same macrostate, provided the parameters are properly tuned. I can see why you would be confused by this statement because, as you say, the canonical ensemble does include microstates with different energies, which seems to be in direct contradiction with what I just said.

In short, for any sane system, the probability distribution of the energy density $E/N$ in the canonical ensemble is well approximated by a Gaussian that becomes sharper and sharper as you increase the system size $N$. This means that in the thermodynamic limit the distribution essentially becomes a delta peak at some energy value; we can call that value $e(\beta)$ and write $P_{\mathrm{can}}(E/N;\beta)\simeq\delta(E/N-e(\beta))$. So, in this sense, the microstates that dominate the canonical distribution all have the same energy density, i.e. they are macroscopically the same. Basically, the "trick" (or really, the tricky part) is that while the different microstates indeed have different energies, the typical deviations from the mean energy are only of order $O(\sqrt{N})$ (since $\mathrm{Var}(E)=k_B T^2 C_V\propto N$), i.e. sub-extensive. Any configuration whose energy actually differs from the mean by $O(N)$, which is what it would take to be viewed as "macroscopically different", has a vanishingly small probability, $\sim e^{-O(N)}$ for the Gaussian above, according to the canonical ensemble.
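
To make this sharpening concrete, here is a minimal numerical sketch (my own toy illustration, not part of the answer above), assuming $N$ independent two-level systems with level spacing $\epsilon$ at inverse temperature $\beta$:

```python
# Toy illustration: N independent two-level systems with energies 0 and eps
# at inverse temperature beta. The canonical distribution of the energy
# density E/N sharpens as N grows.
import numpy as np

rng = np.random.default_rng(0)
beta, eps = 1.0, 1.0
# Canonical probability that a single two-level system is in the excited state.
p_exc = np.exp(-beta * eps) / (1.0 + np.exp(-beta * eps))

for N in (100, 10_000, 1_000_000):
    # Each sample is one microstate of the whole system; its energy is eps
    # times the number of excited sites, which is Binomial(N, p_exc).
    e_density = rng.binomial(N, p_exc, size=5000) * eps / N
    print(f"N = {N:>9}:  mean(E/N) = {e_density.mean():.4f},  std(E/N) = {e_density.std():.5f}")
```

The printed standard deviation of $E/N$ shrinks roughly like $1/\sqrt{N}$, which is the sub-extensive fluctuation scale described above.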

So, that's the core of the confusion I think, and I guess if you can grasp that part, you can follow their claim. Hope this helps.

0

There exist different definitions of entropy (e.g., see this answer), which are related but used in different ways and in different situations. Since in thermodynamic equilibrium all the states are assumed to be occupied with equal probabilities, $p_i=1/W$, the Gibbs entropy reduces to the Boltzmann one: $$S = -k_B\sum_i \frac{1}{W}\ln\left(\frac{1}{W}\right) = k_B\, W\times\frac{1}{W}\ln W = k_B \ln W$$

Roger V.