
I am writing a simulation that includes chemical reactions and I think I am getting some things wrong with entropy and Gibbs free energy.

I was calculating the overall entropy of my system at a particular point in time. There is a map of molecule abundances. My naive approach was to just use the frequencies of molecule abundances. Say I have a 1D map of molecular abundances $M$ in mol. I would calculate entropy $S$ like: $$ p_i = M_i \left({\sum_{j} M_j}\right)^{-1} \quad , \quad S = -R \sum_{i} p_i \ln p_i $$ where $R$ is the gas constant. At first sight this number looks good. A Kronecker-delta-like distribution (such as $M = [0.0, 0.0, 5.0, 0.0, 0.0]$) gives $S=0.0$, and when I allow molecules to diffuse (e.g. $M = [0.5, 1.0, 2.0, 1.0, 0.5]$) this number increases.
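For concreteness, here is a minimal Python sketch of this naive calculation (the function name and sample arrays are mine, just for illustration):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def mixing_entropy(M: np.ndarray) -> float:
    """Entropy from abundance frequencies: S = -R * sum_i p_i ln p_i."""
    p = M / M.sum()          # frequencies p_i = M_i / sum_j M_j
    p = p[p > 0]             # p ln p -> 0 as p -> 0, so zero bins contribute nothing
    return -R * float(np.sum(p * np.log(p)))

print(mixing_entropy(np.array([0.0, 0.0, 5.0, 0.0, 0.0])))  # 0.0 (delta spike)
print(mixing_entropy(np.array([0.5, 1.0, 2.0, 1.0, 0.5])))  # larger after diffusion
```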

Now, these molecules also have internal energies (representing energies of chemical bonds). If there is only one molecule species, and $E$ is the internal energy of one mol of this molecule, the internal energy of this system would be: $U = E \sum_{i} M_i$.

Here is what confuses me: $U$ grows with the number of molecules (seems intuitive), but $S$ does not. So, when I calculate the Gibbs free energy of the whole system, $G = U - T S$ ($T$ being absolute temperature), the relative weight of $S$ depends on the size of my system. I.e. the more molecules I add to my system, the less important $S$ will be overall.
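To make the scaling issue explicit, here is a small numeric sketch (the molar energy `E` and temperature `T` are made-up values): doubling all abundances doubles $U$ but leaves the frequency-based $S$ unchanged.

```python
import numpy as np

R = 8.314                      # gas constant, J/(mol*K)
E, T = 50_000.0, 298.15        # hypothetical molar internal energy (J/mol) and temperature (K)
M = np.array([0.5, 1.0, 2.0, 1.0, 0.5])

for scale in (1.0, 2.0):       # double the whole system and compare
    Ms = scale * M
    p = Ms / Ms.sum()                # unchanged by scaling: frequencies stay the same
    S = -R * np.sum(p * np.log(p))   # so S stays the same
    U = E * Ms.sum()                 # doubles with the system
    print(f"scale={scale}: U={U:.0f} J, S={S:.2f} J/K, G={U - T * S:.0f} J")
```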

Is there some term that I should add to $S$ to reflect the actual number of molecules?

Edit for context: The simulation is a rather macroscopic view of the world. Each voxel has a number of molecules (in mol, for ease of use) and reaction velocities are based on Michaelis–Menten kinetics. Changes are calculated step by step, where each step is one time unit. The direction of any reaction is defined by its change in Gibbs free energy, $\Delta G = \Delta G_0 + RT \ln Q$, where $\Delta G_0$ is the standard free-energy change of the reaction and $Q$ is the reaction quotient.
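A minimal sketch of how such a direction check might look, assuming a hypothetical reaction $A + B \rightleftharpoons C$ with a made-up $\Delta G_0$:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def reaction_delta_g(dg0: float, q: float, t: float) -> float:
    """Delta G = Delta G0 + R*T*ln(Q); negative favors the forward direction."""
    return dg0 + R * t * math.log(q)

# Hypothetical reaction A + B <=> C with an assumed standard free-energy change:
dg0 = -20_000.0                 # J/mol (made-up value)
a, b, c = 2.0, 1.0, 0.5         # activities/concentrations (assumed)
q = c / (a * b)                 # reaction quotient for A + B <=> C
dg = reaction_delta_g(dg0, q, 298.15)
print("forward" if dg < 0 else "backward", f"(dG = {dg:.0f} J/mol)")
```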

  • Your $p = M {{\sum_{i} M_i}}^{-1}$ doesn't look correct. I think it should be $p_i = M_i \left({\sum_{j} M_j}\right)^{-1}$. – Thomas Fritsch Jan 09 '23 at 01:12

1 Answer


(Just to preface this: I don't do molecular dynamics or chemical engineering, this is just a comment on some of the statistical mechanics issues that come up. I hope someone else with more practical experience than I will answer this question as well!)

Is this a macroscopic simulation or a microscopic simulation? For a macroscopic simulation where we want to consider evolution in time and deal with number densities (as opposed to positions and velocities of individual particles), what you really want is:

  • $K$ intervals of length $dx$ which are locally in thermodynamic equilibrium (e.g. quantities like temperature can be sensibly defined; this is a restrictive assumption).
  • An extensive entropy for each interval in terms of the variables you want. The formula for the entropy of an ideal gas which sticks out in my mind is in terms of internal energy per particle, volume, and number density of particles: $S(u,L,n)$. This will look like the Sackur-Tetrode equation but in 1D. The 3D version of this equation is $S=nV(\log(u^{3/2})-\log(n)+\xi)$, so the 1D version should look like $S=nL(\log(u^{1/2})-\log(n)+\xi')$ for some different $\xi'$. It is extensive because the definition remains consistent if you divide the system into $K$ intervals: $S(u,L,n)=K\cdot S(u,L/K,n)$.
  • The total entropy in a nonequilibrium position (but which is locally in equilibrium) should be calculated as $S=\sum_{i=1}^K S(u_i, dx, n_i)$. This is what making a big deal about the extensivity of entropy in statistical mechanics affords us.

With this setup, we could write a simulation where $u_i$ is constant (constant internal energy), and where the number densities evolve from $n_i=[0,0,5,0,0]$ to $n_i=[1,1,1,1,1]$. We have:

$S_{\text{initial}}=S(u,dx,5)=5\,dx\left(\log(u^{1/2})-\log(5)+\xi'\right)$,

$S_{\text{final}}=5\,S(u,dx,1)=5\,dx\left(\log(u^{1/2})-\log(1)+\xi'\right)$,

$\Delta S=S_{\text{final}}-S_{\text{initial}}=5 dx\log(5)$

This is the correct change in entropy of an ideal 1D gas undergoing free expansion into vacuum.
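For what it's worth, here is a small numeric check of this example (the values of $u$, $dx$, and $\xi'$ are arbitrary, since they cancel in $\Delta S$):

```python
import numpy as np

def s_1d(u: float, length: float, n: float, xi: float) -> float:
    """1D Sackur-Tetrode-like entropy: S(u, L, n) = n*L*(log(u^(1/2)) - log(n) + xi')."""
    if n == 0:
        return 0.0                              # an empty interval contributes nothing
    return n * length * (0.5 * np.log(u) - np.log(n) + xi)

u, dx, xi = 1.7, 0.1, 3.0                       # arbitrary values; they cancel in the difference
s_initial = sum(s_1d(u, dx, n, xi) for n in [0, 0, 5, 0, 0])
s_final   = sum(s_1d(u, dx, n, xi) for n in [1, 1, 1, 1, 1])
print(s_final - s_initial)                      # 0.8047...
print(5 * dx * np.log(5))                       # same: 5*dx*log(5)
```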

From here you would have your work cut out for you in considering different chemical species, and I think you would have to specify what types of reactions you want to simulate.

David
  • Thanks. I think I am looking at it from a macroscopic level. Each voxel has a number of molecules (in mol, for ease of use) and reaction velocities are calculated by Michaelis–Menten kinetics. Changes are calculated step by step, where each step is one time unit. The direction of any reaction is defined by its change in Gibbs free energy as $\Delta G = \Delta G_0 + RT \ln Q$, where $\Delta G_0$ is the standard free-energy change of the reaction and $Q$ is the reaction quotient. – mRcSchwering Jan 08 '23 at 23:29
  • I just realized that $S(u, L, n)$ takes the molecule's internal energy as an argument. I guess this is not the same as the internal energy I was talking about (energy from chemical bonds)? I.e. I wouldn't expect the entropy of a molecule species to change because of an activated group (e.g. $\text{ATP} \rightleftharpoons \text{ADP} + \text{P}_i$) but rather because of a change in the number of molecules (1 ATP vs. 2 (ADP + Pi)). – mRcSchwering Jan 08 '23 at 23:47
  • That would be good information to include in the question! The takeaway though is that $S=-\sum_i p_i \log(p_i)$ should be used when describing entropy as a sum over microstates, but if you're concerned with macrostates such an approach is wrong (or at least difficult). – David Jan 09 '23 at 00:07
  • I think in my head I mixed up the Nernst equation with any notion of entropy. Now I see that the Sackur–Tetrode equation actually makes sense. It took me a while to understand it. – mRcSchwering Jan 10 '23 at 12:01