12

A microcanonical ensemble is one that represents an isolated system with fixed number of particles, volume, and energy. In other words, it's an $(N,V,E)$ ensemble.

If the energy is fixed, the number of microstates is fixed as well: the total number of microstates has no variation with respect to energy, since the energy is constant. And if $\Omega(N, V, E)$ is a constant, so is the entropy, $S = k_B \ln \Omega$.

What does it mean to say that a microcanonical ensemble is one that maximizes entropy? With respect to what variable?

Edit: Responding to the answer by @flaudemus, which gives two well-known arguments in support of the claim. In my view both arguments are completely wrong and do not work (this is not a criticism of that answer specifically; that is how the microcanonical ensemble is taught in the majority of classes, in my opinion).

Wrong argument 1: The total number of microstates $\Omega(E)$ is a function of energy $E$ and also varies with energy.

Reasoning why it's wrong: $S$ varies with $E$ across different ensembles, not within the specific $(N,V,E)$ ensemble under study. Consider two microcanonical ensembles, $(N,V,E_1)$ and $(N,V,E_2)$. These are two completely different ensembles, studied independently; they have no relation to each other. It is meaningless to say that if the second ensemble has higher entropy than the first, entropy increased with energy: it increased because you changed which ensemble you are studying, i.e., you changed the problem. Alternatively, if you change the energy within the same problem, it is no longer a microcanonical ensemble, since the energy is no longer fixed.

Wrong argument 2: In order to see, how entropy is maximized in such a system, consider splitting your system of total energy $E$ into two parts ${\cal S}$ and ${\cal S}'$, and let the energy of ${\cal S}$ be $\epsilon$, and that of ${\cal S}'$ be $E-\epsilon$. Then the total entropy is $$ S(\epsilon) = S_{\cal S}(\epsilon) + S_{{\cal S}'}(E-\epsilon).$$ Maximum entropy means $$ \frac{dS(\epsilon)}{d\epsilon} = \frac{dS_{\cal S}(\epsilon)}{d\epsilon} + \frac{dS_{{\cal S}'}(E-\epsilon)}{d\epsilon} = \frac{dS_{\cal S}(\epsilon)}{d\epsilon} - \frac{dS_{{\cal S}'}(E-\epsilon)}{d(E-\epsilon)} = 0.$$ According to our definition of temperature, this is equivalent to $$ \frac{1}{T_{\cal S}} = \frac{1}{T_{{\cal S}'}},$$ i.e., maximizing entropy means that the two subsystems have the same temperature. In other words, the temperatures of all thermodynamic subsystems of your NVE-system are the same.

Reasoning why it's wrong: All this derivation establishes is the maximization of the entropy of a subsystem of the microcanonical ensemble, not of the system itself. The system as a whole was always, and will always be, at the fixed entropy $S$ corresponding to the fixed total energy $E$. (Edit: this is not entirely correct; the maximization is of the conditional entropy, as explained in Edit 3 below.)

Edit 2: Would it be correct to say the following:

  • A microcanonical ensemble does not maximize the total entropy of the system, since the total energy and total entropy are fixed.
  • However, if we consider a large subsystem of the total system, that subsystem would have its entropy maximized with respect to its energy, which fluctuates and is not a constant.
  • By large subsystem, I mean the subsystem could be one-half or one-third of the total system, but not one-hundredth or one-thousandth of it. If that subsystem is too small, it becomes a canonical ensemble, with the rest of the system becoming the heat bath, and then entropy might not be maximized. (Would this be correct reasoning?)

Edit 3: I have read edits by @flaudemus, as well as some texts. I agree that we're dealing with two kinds of total entropies: $S(N, V, E)$ and $S_{c,\epsilon}(N, V, E, \epsilon)$, where (edit: correct expression in Edit 5):

$$\require{enclose} \enclose{horizontalstrike}{S(N, V, E) = \int_0^E S_{c,\epsilon}(N,V,E,\epsilon)\,d\epsilon}$$

$$S_{c,\epsilon}(N, V, E, \epsilon) = S_{\mathcal{S}}(\epsilon) + S_{\mathcal{S}'}(E-\epsilon)$$

However, there is still a serious problem: no stat-mech text clarifies that it is not the first kind of total entropy that gets maximized, but the second kind (let's call it the conditional entropy: the total entropy of the system at total energy $E$, given that part of the system has energy $\epsilon$).

Not only that, it raises other issues, like why can't we condition the total entropy on something else, like part of the system having $n$ out of $N$ particles, or part of the system having $v$ out of $V$ volume.

But then if $S(N, V, E) \neq S_{c,\epsilon}(N, V, E,\epsilon)$, what about the following (at equilibrium):

$$S_{c,\epsilon}(N, V, E,\epsilon) \stackrel{?}{=} S_{c,n}(N, V, E, n)$$

$$S_{c,\epsilon}(N, V, E,\epsilon) \stackrel{?}{=} S_{c,v}(N, V, E, v)$$

If they are not equal, there is nothing special about $S_{c,\epsilon}(N, V, E,\epsilon)$ compared to other conditional entropies.

Very unfortunate how such critical details are glossed over in 99% of classes and texts.

Finally, that still means the total (unconditional) entropy of a microcanonical ensemble stays constant, but the total conditional entropy (conditioned on subsystem energy) is the one that gets maximized.

Edit 4: Some texts I've looked at:

  • Schroeder, An Introduction to Thermal Physics
  • Kardar, Statistical Physics of Particles
  • Sethna, Statistical Mechanics: Entropy, Order Parameters, and Complexity

I only went through short sections, mainly looking for discussion of the microcanonical ensemble. As far as I can tell, all of them fall short of clarifying this issue. However, I was able to come up with a relationship between the two total entropies while looking at Fig. 3.1 in Schroeder.

Edit 5: I'm sorry, my expression in Edit 3 is incorrect, as pointed out by @flaudemus (whose corrected integral runs over $d\epsilon$, not $dE$). The correct integral relation applies to microstates, not entropies:

$$\Omega(N, V, E) = \int_0^E \Omega_{c,\epsilon}(N,V,E,\epsilon)\,d\epsilon$$

This gives the following relation between entropies:

$$S(N, V, E) = k_B \ln \int_0^E \exp \frac{S_{c,\epsilon}(N,V,E,\epsilon)}{k_B}\,d\epsilon$$
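
To convince myself of these relations, here is a minimal numerical sketch in Python (my own toy check, not taken from any of the texts above), using two Einstein solids in the spirit of Schroeder's Fig. 3.1. The oscillator counts and the number of energy quanta are arbitrary choices, and since energy is discrete here, the integrals become sums:

```python
from math import comb, exp, log

def omega(n_osc, q):
    # Multiplicity of an Einstein solid: number of ways to distribute
    # q energy quanta among n_osc oscillators, C(q + n_osc - 1, q)
    return comb(q + n_osc - 1, q)

N_S, N_Sp = 300, 200   # oscillators in subsystems S and S' (arbitrary)
E = 100                # total energy, in units of one quantum

# Conditional multiplicity of the whole system, given S holds eps quanta
omega_c = [omega(N_S, eps) * omega(N_Sp, E - eps) for eps in range(E + 1)]

# Unconditional multiplicity of the isolated system at total energy E
omega_total = omega(N_S + N_Sp, E)

# Discrete analogue of Omega(N,V,E) = integral_0^E Omega_c(eps) d(eps);
# exact here, by Vandermonde's identity
assert sum(omega_c) == omega_total

# Entropy version (k_B = 1): S = ln(sum of exp(S_c)), NOT the sum of S_c
S_c = [log(w) for w in omega_c]
print(log(omega_total), log(sum(exp(s) for s in S_c)))  # the two agree
```

The assert is exact (it is just Vandermonde's identity), and the last line shows why the struck-out expression in Edit 3 fails: multiplicities add, entropies do not.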

  • When you imply that once E has been fixed it stays fixed, would you apply the same reasoning to V? Then how can you ever define pressure? – lalala Mar 04 '19 at 17:56
  • +1, because I really enjoy your question and the answers/discussion it provokes. – flaudemus Mar 04 '19 at 22:38
  • @lalala: have a look at my answer. I give reasons, why I think it is meaningful to define a temperature, even in a microcanonical ensemble with fixed $E$. In a similar way, one could argue that a meaningful pressure can be defined. However, since the system is idealized to exchange neither energy nor volume nor particles with anything else in the universe, these definitions of $T$, $p$, and also of $\mu$ remain without any practical relevance, I guess. – flaudemus Mar 04 '19 at 22:43
  • You should refer to the answer by naming who posted it rather than just saying "first answer". The ordering of answers depends on user-specific settings. – BioPhysicist Mar 05 '19 at 00:26
  • You say that you read "some texts". Can you please add, which texts you read, and where you see their strengths/weaknesses if relevant? – flaudemus Mar 06 '19 at 22:07
  • Added an edit, listing the texts I looked at. – user101043 Mar 06 '19 at 22:22
  • I wrote responses to your edits. Curious what you think. – flaudemus Mar 07 '19 at 09:36
  • A lot of statements in this question sound rather confused. Can you explain what you think the definition of the word "ensemble" is? – knzhou Mar 07 '19 at 11:12
  • Thanks for pointing out the typo in my integral. I corrected $dE$ to $d\epsilon$. – flaudemus Mar 07 '19 at 22:06

5 Answers

2

The total number of microstates has no variation with respect to energy, since the energy is constant.

Edit in response to the comments: A thermodynamic system with state variables $E$, $V$, $N$, in which these variables are constant, has a constant physical entropy. I consider the statement correct that the microcanonical ensemble cannot maximize total entropy by varying any of the state variables $E$, $V$, $N$, because they are fixed by definition.

The entropy $S$ varies between different microcanonical ensembles, which cannot be distinguished in any way other than by the distinct values of their state variable $E$. This is the basis for defining in general a number of microstates $\Omega(E)$, which is a function of energy (note that I omit the variables $N$ and $V$ for the sake of less clumsy notation).


The total number of microstates $\Omega(E)$ is then a function of energy $E$ and varies with energy, if different systems as specified above are considered. Since entropy is $$ S(E) = k_\mathrm{B}\ln\Omega(E), $$ there is a finite derivative $$ \frac{1}{T} = \frac{dS}{dE}, $$ which is the inverse of temperature $T$. Even if the system has constant energy, the derivative of $\Omega(E)$ at this very energy might be finite, and a system temperature can be defined.

I therefore think it is necessary to ask how equilibrium can be defined in such an ensemble. While it is clear what we mean by equilibrium between two or more systems, it is not so clear what it means for a single system.

Here is the way I answer this question to myself:

In order to see how internal equilibrium is established in an NVE-system, consider splitting the system of total energy $E$ into two parts ${\cal S}$ and ${\cal S}'$, and let the energy of ${\cal S}$ be $\epsilon$, and that of ${\cal S}'$ be $E-\epsilon$.


Note added in response to the OP's criticism: This procedure introduces, in addition to the original state variables $E$, $V$, $N$, which we keep constant, the new internal state variable $\epsilon$. Doing this, we have added new information about the system: we claim to know how the energy splits between the two subsystems ${\cal S}$ and ${\cal S}'$. As a result, the total entropy of the system is now $S(E,\epsilon)$, i.e., also a function of $\epsilon$. This entropy differs from the original system entropy $S(E)$. The additional state variable $\epsilon$ allows us to maximize this new entropy $S(E,\epsilon)$ with respect to $\epsilon$, and thereby define the internal equilibrium, as I illustrate below. This does not mean, however, that we maximize $S(E)$, and it also does not mean that we maximize the entropy of a subsystem of the microcanonical ensemble, as is claimed in the edit of the question. It is still the total system entropy $S(E,\epsilon)$ that is maximized.


Then the total entropy is $$ S(\epsilon) = S_{\cal S}(\epsilon) + S_{{\cal S}'}(E-\epsilon).$$ Maximum entropy means $$ \frac{dS(\epsilon)}{d\epsilon} = \frac{dS_{\cal S}(\epsilon)}{d\epsilon} + \frac{dS_{{\cal S}'}(E-\epsilon)}{d\epsilon} = \frac{dS_{\cal S}(\epsilon)}{d\epsilon} - \frac{dS_{{\cal S}'}(E-\epsilon)}{d(E-\epsilon)} = 0.$$ According to our definition of temperature, this is equivalent to $$ \frac{1}{T_{\cal S}} = \frac{1}{T_{{\cal S}'}},$$ i.e., maximizing entropy means that the two subsystems have the same temperature. In other words, the temperatures of all thermodynamic subsystems of your NVE-system are the same.
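
To make this maximization concrete, here is a small numerical sketch (an illustration added here, with a pair of Einstein solids as a hypothetical stand-in; the derivation above does not depend on this choice of model):

```python
from math import comb, log

def S(n_osc, q):
    # Entropy (in units of k_B) of an Einstein solid holding q quanta
    # distributed among n_osc oscillators
    return log(comb(q + n_osc - 1, q))

N_S, N_Sp, E = 300, 200, 100   # arbitrary subsystem sizes; E in quanta

# Total entropy S(eps) = S_S(eps) + S_S'(E - eps) for every split
S_tot = [S(N_S, eps) + S(N_Sp, E - eps) for eps in range(E + 1)]
eps_star = max(range(E + 1), key=lambda eps: S_tot[eps])

# Discrete version of 1/T = dS/dE for each subsystem at the maximum
beta_S  = S(N_S, eps_star + 1) - S(N_S, eps_star)
beta_Sp = S(N_Sp, E - eps_star + 1) - S(N_Sp, E - eps_star)
print(eps_star, beta_S, beta_Sp)   # beta_S ~ beta_Sp: equal temperatures
```

The two printed inverse temperatures agree up to the discreteness of the energy, which is exactly the statement $1/T_{\cal S} = 1/T_{{\cal S}'}$.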


Edit in response to Edit 2 in the question:

  • As clarified above, I do agree with the first statement

    A microcanonical ensemble with fixed energy $E$, volume $V$, and particle number $N$ does not maximize total entropy of the system, since for this system the entropy is a constant number.

  • Concerning the second point in Edit 2, I do not agree for the reasons explained above. If we consider a subsystem of the total system, we still have to maximize the total system entropy $S(E,\epsilon)$, if we are to describe the state of internal equilibrium. This entropy is, however, different from the entropy $S(E)$ considered originally.

  • The last point in Edit 2 deals with the question of what we can admit as a subsystem in the division of the total system. I think there are no constraints on the size of the subsystem in the case considered here; the maximization of the total entropy $S(E,\epsilon)$ would still be valid. In contrast, the canonical ensemble is based on the idea of a heat bath characterized by a temperature $T_\mathrm{bath}$ rather than an energy $E-\epsilon$. Strictly speaking, it is not based on the idea of being a subsystem of a microcanonical ensemble. If we still base it on the idea of being a large subsystem of the total system, we have to take its volume, particle number, and energy to extremely large values (which is incompatible with the idea of keeping $E$, $V$, and $N$ fixed). The question of how precisely we define a heat bath is, however, not of great practical relevance: if we stick to a specific NVE-system with one subsystem much bigger than the other, the bigger one acts effectively like a heat bath, because its temperature will not change significantly when a bit of heat is absorbed from or given to the small subsystem.


Edit in response to Edit 3 in the question:

I agree that we're dealing with two kinds of total entropies: $S(N, V, E)$ and $S_{c,\epsilon}(N, V, E, \epsilon)$, where: $$S(N, V, E) = \int_0^E S_{c,\epsilon}(N,V,E,\epsilon)\,d\epsilon$$

I do not see how this integral expression can be correct. Consider the number $\Omega(E,V,N)$ of microstates of the microcanonical ensemble under consideration. By the fundamental postulate of statistical mechanics, each microstate $\mu$ has the same probability $p_\mu = 1/\Omega(E,V,N)$. The additional constraint that a subsystem $\cal S$ has energy $\epsilon$ leads to a new number of microstates $\Omega_{c,\epsilon}(E,V,N,\epsilon)$ that are compatible with the additional constraint. The probability for the total system to be in a microstate compatible with this additional constraint is, according to the elementary rules of statistics, given by $p(\epsilon)=\Omega_{c,\epsilon}(E,V,N,\epsilon)/\Omega(E,V,N)$. Since the distribution $p(\epsilon)$ is normalized, $$ \Omega(E,V,N) = \int_0^E d\epsilon\, \Omega_{c,\epsilon}(E,V,N,\epsilon)$$ should be the correct relation.

Not only that, it raises other issues, like why can't we condition the total entropy on something else, like part of the system having $n$ out of $N$ particles, or part of the system having $v$ out of $V$ volume.

Yes, we can do exactly that. Conditioning the entropy on energy $\epsilon$ of subsystem ${\cal S}$ leads us to the definition of temperature. Conditioning the entropy on volume $v$ of subsystem ${\cal S}$ leads us to the definition of pressure (${\cal S}$ and ${\cal S}'$ are in thermodynamic equilibrium, if they have the same pressure). Conditioning the entropy on particle number $n$ of subsystem ${\cal S}$ leads us to the definition of chemical potential (${\cal S}$ and ${\cal S}'$ are in thermodynamic equilibrium, if they have the same chemical potential). So indeed, there is nothing special about $S_{c,\epsilon}(N,V,E,\epsilon)$ compared to other conditional entropies.
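
As a sketch of conditioning on particle number (with a toy lattice-gas model chosen only for this illustration; the argument itself is model-independent):

```python
from math import comb

# Toy lattice gas: Omega(M, N) = C(M, N) ways to place N particles on M sites
M_S, M_Sp = 60, 40   # sites in subsystems S and S' (arbitrary)
N = 30               # total number of particles, fixed

# Condition on subsystem S holding n of the N particles
omega_c_n = [comb(M_S, n) * comb(M_Sp, N - n) for n in range(N + 1)]

# Summing the conditional multiplicities recovers the unconditional one,
# exactly as with the energy constraint (Vandermonde's identity again)
assert sum(omega_c_n) == comb(M_S + M_Sp, N)

# The most probable n equalizes the discrete d(ln Omega)/dn of the two
# parts, i.e., the chemical potentials of S and S' match
n_star = max(range(N + 1), key=lambda n: omega_c_n[n])
print(n_star)   # close to N * M_S / (M_S + M_Sp)
```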

On the pedagogical side: You may consider the statement that each microstate $\mu$ of the microcanonical ensemble has the same probability $p_\mu = 1/\Omega(E,V,N)$ in thermodynamic equilibrium as the basic postulate of equilibrium statistical mechanics.$^*$ If you accept this postulate, you can define entropies, temperatures, pressures, chemical potentials using the division of the total system into subsystems, and you find the equilibrium conditions between these subsystems to be calculated by maximizing the total system entropy as I did above. As with all theories deduced from postulates, also this theory is only valid, if it is in agreement with experimental evidence, which it turns out to be.


$^*$: as was said in the answer of Ján Lalinský, you may see this postulate as the result of maximizing information entropy (or Gibbs entropy) with the only constraint being a normalized probability distribution $p_\mu$. This gives the result that all microstates have the same probability. However, this just replaces one postulate by another.


Edit in response to Edit 4 in the question:

Perhaps you find the book by Frederick Reif, Fundamentals of Statistical and Thermal Physics useful. In particular, have a look at chapter 2.

flaudemus
  • 2,179
  • I'm sorry I have to disagree with both your points (and I have already seen them before). For (1), once you have decided your system is going to be at fixed E, you fixed your entropy S. A different E would correspond to a different ensemble, not this ensemble that I'm working with. For (2), the derivation, all it does is claim that a part of the system will have maximum entropy, maybe one-half, maybe one-third. But not the system itself, since the system has fixed total S and it has no variation. – user101043 Mar 04 '19 at 15:50
  • @user101043: thanks for your feedback, perhaps I did not see the exact point you are after. I agree that fixing $E$ fixes the entropy to a number given by $\Omega(E)$, so you do not see any degree of freedom for maximizing it. I think, seen this way, maximum entropy makes no sense for an $NVE$-system. However, the system can be at equilibrium. What we mean by that is, that the parts of the system are in equilibrium with each other. This is, what I am showing with the derivation. – flaudemus Mar 04 '19 at 16:19
  • @user101043: The claim that a microcanonical ensemble is in equilibrium always sounded unsatisfactory to me, because I would say that only two (or more) systems can be in equilibrium with each other. How can a system then be in equilibrium with itself, what is the meaning of this phrase? My answer is, as said above, to say, individual parts of the system are in equilibrium. – flaudemus Mar 04 '19 at 16:20
  • Thanks for your comment. I think this question probably has more to do with the pedagogy and how almost everyone is led to believe that microcanonical ensemble maximizes entropy. Most educators make the same arguments. Stat-mech is not an easy topic. It doesn't help when it is not taught correctly. – user101043 Mar 04 '19 at 16:27
  • @user101043: I mended my answer, and would be curious if you now agree with the presentation. – flaudemus Mar 04 '19 at 16:30
  • Thanks. I added an edit 3. Looks like the rabbit hole is getting deeper :) – user101043 Mar 06 '19 at 21:19
2

What does it mean to say that a microcanonical ensemble is one that maximizes entropy? With respect to what variable?

There are different entropies involved: thermodynamic (a function of $E,V,N$) and information-theoretical (a function of probabilities, also sometimes called the Gibbs entropy). These are not the same thing.

The general rule is: the thermodynamic entropy of some macroscopic state X is the maximum possible value of the information-entropy functional over all probability distributions that obey the constraints implied by state X.

The concept of the microcanonical ensemble can be understood as "the best" probability description of a system under these constraints: known and fixed energy, volume, and number of particles $E,V,N$.

Information entropy is a function of the set of probabilities of all possible states. If the space of states is discrete and has $N$ distinct states, the information-entropy functional can be defined as

$$ I[\{p_k\}] = - \sum_{k=1}^N p_k \ln p_k. $$

(This functional was studied by Claude Shannon and Edwin T. Jaynes, who gave arguments for why it should be this function and not some other.)

For a system with known and fixed $E,V,N$, all the states in the sum above must be possible; in other words, all states $k$ have the same energy, volume, and number of particles. Other states are not included in the sum.

Different probability assignments for those states, while all compatible with the constraints implied by $E,V,N$, may give different values of $I$. The uniform distribution $p_k^* = 1/N$ gives the maximum possible value of $I$. This maximum value then gives the thermodynamic entropy of the system in the macroscopic state $E,V,N$, according to the formula

$$ S(E,V,N) = k_B I [ \{ p_k^*\} ] = k_B \ln N. $$
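
A quick numerical check of this maximization (an added sketch; the number of states and the random trial distributions are arbitrary choices):

```python
import random
from math import log

def I(p):
    # Information entropy I[{p_k}] = -sum_k p_k ln p_k
    return -sum(pk * log(pk) for pk in p if pk > 0)

N = 50                       # number of accessible states (arbitrary)
uniform = [1.0 / N] * N
print(I(uniform), log(N))    # equal: the maximum value is ln N

# Any other normalized distribution over the same N states scores lower
random.seed(1)
for _ in range(1000):
    w = [random.random() for _ in range(N)]
    total = sum(w)
    p = [wk / total for wk in w]
    assert I(p) <= I(uniform) + 1e-12
```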

  • First half of your answer is not helpful to my question. Regarding the second half, we assume the same distribution $p_k^* = 1/N$ for canonical and grand canonical ensembles too. Are you suggesting that, out of all these ensembles, microcanonical is the only one that maximizes entropy over possible distributions, with that choice of $p_k$? (after all, we contrast micro and canonical by saying: micro is the one that maximizes entropy, while canonical is the one that minimizes Helmholtz free energy). – user101043 Mar 11 '19 at 01:33
  • In the first half of my answer I explain that you need to have at least two entropies, thermodynamic entropy alone does not obey any sort of maximum principle you suggest. If that is not helpful, I suggest you restate your question and restrict the kind of maximization you are interested in. – Ján Lalinský Mar 11 '19 at 02:03
  • Regarding the other ensembles, the principle of maximum information entropy applies as well, but the entropy there is maximal for a different probability distribution. For example, in the canonical ensemble, the maximizing probability distribution has probabilities of states given by an exponential function of the energy of those states. – Ján Lalinský Mar 11 '19 at 02:07
1

This is a good example of how different definitions of entropy may lead to confusion (see Is information entropy the same as thermodynamic entropy?). There are also some nuances that require distinguishing between statistical physics (the microscopic description of systems of many particles) and thermodynamics (a phenomenological approach).

Thermodynamic entropy (also known as Gibbs I) is a function of state, that is, of macroscopic variables, like $S(N,V,E)$. It makes no sense to talk about its maximization for a closed system (which would correspond to a microcanonical ensemble in statistical physics), although it is indeed meaningful to discuss its maximization as a function of parameters when we discuss contact between two systems, specifically between a system and a thermostat. In fact, it is often introduced by considering the conditions of thermodynamic equilibrium between two objects.

Boltzmann-Einstein-Planck entropy (sometimes referred to simply as Boltzmann entropy, in which case it can be confused with the negative of the Boltzmann H-function) is given by the well-known expression $S_{BEP}=k_B\log W$. When considering equilibrium between objects on the microscopic level, this can be shown to be equivalent to the thermodynamic entropy discussed above.

Gibbs II entropy is the Shannon information entropy adapted to the ensembles used in statistical mechanics: $$ S_G=-\int \rho(\mathbf{q},\mathbf{p},t)\log\left[\rho(\mathbf{q},\mathbf{p},t)\right] \prod_{i=1}^N\text{d}q_i \text{d}p_i. $$ It is meaningful to talk about maximizing this entropy with respect to the probability distribution, in which case the uniform probability of occupation over all accessible microstates gives us the BEP entropy. It is in this sense that the microcanonical ensemble maximizes the entropy.

As an aside, one has to note that Jaynes developed an approach to statistical mechanics based on such a maximization procedure, in which case it is simply postulated that the entropy must be at its maximum, given the values of the parameters.

Finally, it is necessary to point out that a single thermodynamic system, starting from a non-equilibrium state, would not maximize its Gibbs II entropy, as can be shown using the Liouville theorem (see What maximizes entropy?). However, under the ergodicity assumption the system would, in the course of its evolution, visit all the available microstates, and the time average over the evolution becomes equivalent to the ensemble average over many systems starting from all possible initial conditions, i.e., to the maximum of entropy.

Roger V.
  • 58,522
0

So I understand it as two different claims.

One is the claim of the ensemble: in this context we are considering all distributions of particle positions and momenta that have the same $(N, V, E)$. This is the part that gets dicey when we start talking about the canonical ensemble and the like: temperature is very hard to define for general systems, so you are implicitly appealing to a thermal equilibrium that has already happened. So the one claim is that “we should analyze this complex system by considering all possible configurations that have this number of particles, volume, and energy, as part of one unified whole that we call its microcanonical ensemble.”

The second is the claim of the state that the ensemble takes on: the claim is that in this overall microcanonical ensemble there may be many different “distinguishable” states to us, but to the laws of physics those are no longer distinct things: since they are at the same energy, the laws of physics do not favor one state over another. So the claim is that the right way to view this microcanonical ensemble is to “coarse-grain” it over all of the observable factors, so that each combination of those factors comprises a “macrostate” whose central distinguishing feature is the volume of the phase space—the number of microstates—that it encompasses. And the principle of the maximization of entropy states that since the physics can’t tell the difference between the microstates, the macrostate with the most microstates wins; and the size of this macrostate is being denoted in additive measure by $S$. Claim two is, “In that ensemble, the system naturally relaxes towards the state of maximum entropy, simply because our knowledge of the state of the system naturally gets more and more uncertain, so that it is as if nature ended up simply choosing points of the phase space at random with uniform probability, not caring about any other variables that we are paying attention to.”

So, as an example of the sort of reasoning we are trying to enable: you have a bigger system $AB$ that is comprised of two smaller systems $A$ and $B$, and you study it in the microcanonical ensemble: $AB$ cannot exchange energy with its environment, but $A$ and $B$ can exchange energy with each other. If you studied them in isolation, you would find that each one individually, in its own microcanonical ensemble, has some entropy $S_A(E)$ or $S_B(E)$. The overall system therefore has an entropy $$S_{AB}(E) = S_A(E_A) + S_B(E-E_A).$$ In other words, the $(N,V,E)$ ensemble for $AB$ consists of, for all values of $E_A$, all pairs of states $(s_A, s_B)$ where $s_A$ is in the ensemble for $A$ having energy $E_A$ and $s_B$ is in the ensemble for $B$ having energy $E_B$.

The claim that this is maximized is a claim about the internal variable $E_A$, and it says that the system must come to a state where $${\partial S_A\over\partial E} = {\partial S_B\over\partial E}.$$ In slightly more detail, it says that an energy packet will flow from $B$ to $A$ if $\partial S_A/\partial E > \partial S_B/\partial E$, which, if we interpret this as heat flowing from hot to cold, means that $\partial S/\partial E$ is some sort of measure of coldness, with energy always flowing to the colder thing.

So it is helpful to be able to compare numbers from different microcanonical ensembles when those ensembles are really one family of ensembles parameterized by variables like total energy, volume, etc., because we can then form bigger isolated systems out of the numbers we have for the smaller ones.
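
To see numerically why a randomly chosen member of the big ensemble almost surely sits near the equal-coldness split, here is a sketch with a hypothetical pair of Einstein solids standing in for $A$ and $B$:

```python
from math import comb

def omega(n_osc, q):
    # Multiplicity of an Einstein-solid toy subsystem
    return comb(q + n_osc - 1, q)

N_A = N_B = 1000   # hypothetical sizes for A and B
E = 2000           # total quanta shared between A and B

w = [omega(N_A, eA) * omega(N_B, E - eA) for eA in range(E + 1)]
peak = max(range(E + 1), key=lambda eA: w[eA])

# Fraction of ALL microstates whose E_A lies within 10% of the peak split
window = sum(w[eA] for eA in range(int(0.9 * peak), int(1.1 * peak) + 1))
print(peak, window / sum(w))   # already nearly 1 at this modest size
```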

CR Drost
  • 37,682
  • I'm sorry, you're making a very similar argument to the first answer, and to the traditional way of explaining it in the majority of stat-mech texts and classes, which is wrong in my opinion; I have made an edit to my question to explain why. If the bigger system $AB$ is what the microcanonical ensemble represents, then $S$ does not change. If we call $A$ the ensemble, then it's not microcanonical, since $E_A$ is fluctuating and not constant. – user101043 Mar 04 '19 at 17:37
  • @user101043 You're making a mistake, I think, in that you are treating a microcanonical ensemble as a state. It is a set of states, each of which has potentially a different entropy. The set of states for the bigger system $AB$ can be calculated from the states for the subsystems, no? For example for an ideal gas the microcanonical ensemble surely contains states where all the gas particles are in one half of the volume, but it is our assumption that if we just choose a state at random these states are vanishingly unlikely overall. – CR Drost Mar 04 '19 at 19:43
0

The OP may be misinterpreting the claims made in textbooks, in particular the textbooks that are referenced (Sethna, Schroeder, and Kardar). These books do not make the claim addressed by the OP; namely, they do not claim that "a microcanonical ensemble is one that maximizes entropy." The OP's problems can be solved simply by understanding (1) the microcanonical ensemble with fixed $E$ necessarily has fixed entropy $S$ and fixed phase volume $\Omega$, and (2) the composite system of two microcanonical systems brought together is not a microcanonical ensemble itself.

The latter composite system is isolated with fixed $E$, similar to the microcanonical ensemble. However, the composite system has internal constraints that a microcanonical system does not have: namely, the two subsystems are known to have fixed energies $E_A$ and $E_B=E-E_A$. This is why the composite system has a different entropy than a corresponding microcanonical system of energy $E$ that lacks the internal constraints. And this is also why the entropy of the composite system can be maximized over $E_A$.

None of the three references that the OP lists (i.e., Sethna, Schroeder, and Kardar) claim that the composite system is a microcanonical ensemble.

ratsalad
  • 353