
Spontaneous symmetry breaking can be defined as follows (Teresa & Antonio, 1996; p. 89):

A physical system has a symmetry that is spontaneously broken if the interactions governing the dynamics of the system possess such a symmetry but the ground state of this system does not.

I am slightly confused about the meaning of 'ground state' in this definition in the context of condensed matter. In the case of the Heisenberg model the ground state can be taken as: $$\rho\propto \sum_n \left|GS_n\right>\left<GS_n\right|$$ where $\left|GS_n\right>$ is the $n$th ground state. For such a ground-state density matrix the $SU(2)$ symmetry of the Hamiltonian is preserved. Only when a symmetry-breaking field is applied do we get a density matrix that does not preserve the $SU(2)$ symmetry of the Hamiltonian - but in that case the Hamiltonian, strictly speaking, does not have this symmetry anyway.

Does this system therefore, strictly speaking, exhibit spontaneous symmetry breaking? And what is meant by the term 'ground state' in the above definition: does it refer to the individual $\left| GS_n\right>$ or to the density matrix $\rho$? Does it refer to before or after symmetry-breaking fields are applied?
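
As a concrete numerical sanity check (my own illustration, not from any cited source), here is a sketch for the smallest case, a ferromagnetic Heisenberg dimer $H=-\vec S_1\cdot\vec S_2$: the equal-weight mixture of the degenerate ground states commutes with all $SU(2)$ generators, while a single ground state such as $\left|\uparrow\uparrow\right>$ does not.

```python
import numpy as np

# Spin-1/2 operators (hbar = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

def total(op):
    # total-spin generator S^a_tot = S^a_1 + S^a_2 on the two-site Hilbert space
    return np.kron(op, I2) + np.kron(I2, op)

# Ferromagnetic Heisenberg dimer H = -S1.S2: the triplet is the ground multiplet
H = -(np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))
evals, evecs = np.linalg.eigh(H)
gs = evecs[:, np.isclose(evals, evals.min())]    # the 3 degenerate ground states
rho_mix = gs @ gs.conj().T / gs.shape[1]         # equal-weight mixture rho

# The mixture commutes with every SU(2) generator ...
comm_mix = max(np.linalg.norm(rho_mix @ S - S @ rho_mix)
               for S in (total(sx), total(sy), total(sz)))

# ... but a single ground state |up,up><up,up| does not
up_up = np.zeros(4, dtype=complex)
up_up[0] = 1.0                                   # index 0 = |up,up>
rho_single = np.outer(up_up, up_up.conj())
Sx_tot = total(sx)
comm_single = np.linalg.norm(rho_single @ Sx_tot - Sx_tot @ rho_single)

print(comm_mix, comm_single)  # ~0 and ~1: the mixture is symmetric, one state is not
```

So both readings of 'ground state' are in play: the symmetric mixture $\rho$ respects the symmetry, while any individual $\left|GS_n\right>$ generally does not.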

  • Related question: https://physics.stackexchange.com/q/373931/169288 . Even for a model with SSB, you can still find states without SSB in the ground-state subspace. – maplemaple Mar 29 '18 at 18:38

3 Answers


I would like to offer a contrasting answer to that of @tparker. I want to emphasize the fact that it is actually not necessary to introduce any symmetry-breaking field, provided you use the proper setup. (Of course, you can use symmetry-breaking fields, or suitable boundary conditions to achieve that, but I believe that it is conceptually interesting that you don't have to.)

I'll only discuss the case of classical systems, as I am much more familiar with the relevant mathematical framework.

The goal is to define a Gibbs measure for an infinite system. This is necessary, as only truly infinite systems undergo genuine phase transitions (of course, large finite systems can display approximate, smoothed-out "phase transitions"). The usual definition relying on the Boltzmann weight $e^{-\beta H}$ is useless in this case, as the energy of an infinite system is usually undefined.
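
For a finite system, by contrast, the Boltzmann weight is perfectly well-defined and can even be summed exactly for tiny lattices. The sketch below (a toy illustration of mine, using a $3\times 3$ periodic Ising model) makes the point raised in the question concrete: the finite-volume Gibbs measure is exactly symmetric, so $\langle m\rangle = 0$, even though at low temperature $\langle |m|\rangle$ is close to $1$.

```python
import itertools
import math

# Exact Gibbs averages p(s) = exp(-beta*H(s))/Z for a tiny 3x3 periodic Ising model
L, beta, J = 3, 1.0, 1.0

def energy(spins):
    # spins: tuple of 9 values in {-1, +1}, row-major; periodic boundary conditions
    E = 0.0
    for i in range(L):
        for j in range(L):
            s = spins[i * L + j]
            E -= J * s * spins[((i + 1) % L) * L + j]  # bond to the site below
            E -= J * s * spins[i * L + (j + 1) % L]    # bond to the site on the right
    return E

Z = m_avg = abs_m_avg = 0.0
for spins in itertools.product((-1, 1), repeat=L * L):
    w = math.exp(-beta * energy(spins))
    m = sum(spins) / (L * L)
    Z += w
    m_avg += w * m
    abs_m_avg += w * abs(m)

m_avg /= Z
abs_m_avg /= Z
print(m_avg, abs_m_avg)  # <m> vanishes by spin-flip symmetry; <|m|> is close to 1
```

The finite-volume measure never breaks the symmetry; that is precisely why the infinite-volume construction below is needed.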

The most efficient framework to solve this problem, the so-called Dobrushin-Lanford-Ruelle theory, works as follows. I discuss the case of the Ising model on $\mathbb{Z}^2$ for simplicity, but the approach is completely general. One says that a probability measure $\mu$ on the set of infinite configurations $\{-1,1\}^{\mathbb{Z}^2}$ is an infinite-volume Gibbs measure if, for any finite set $\Lambda\subset\mathbb{Z}^2$ and any configuration $\eta\in\{-1,1\}^{\mathbb{Z}^2\setminus\Lambda}$, the conditional probability of seeing a configuration $\sigma$ inside $\Lambda$, given that the configuration outside $\Lambda$ is given by $\eta$, is given by the finite-volume Gibbs measure in $\Lambda$ with boundary condition $\eta$. The latter is well-defined (through the usual Boltzmann weight), since $\Lambda$ is finite.

For models with compact spins (such as the Ising model or the Heisenberg model), existence of infinite-volume Gibbs measures is guaranteed. Uniqueness, however, does not hold in general. For the Ising model on $\mathbb{Z}^2$, for example, there exists a critical value $\beta_{\rm c}\in(0,\infty)$ of the inverse temperature such that there is a unique infinite-volume measure when $\beta\leq \beta_{\rm c}$, while there are infinitely many infinite-volume Gibbs measures when $\beta>\beta_{\rm c}$. It turns out that any infinite-volume Gibbs measure $\mu$ can be expressed as a convex combination of two of them: $\mu = \alpha \mu^+_\beta + (1-\alpha)\mu^-_\beta$, for some $0\leq\alpha\leq 1$. These two measures $\mu^+_\beta$ and $\mu^-_\beta$ thus contain all the relevant physics, and there are good reasons to consider them as the physically truly relevant ones. It turns out that the average magnetization under $\mu^+_\beta$ is equal to $m^*(\beta)>0$, the spontaneous magnetization (that is, the value of the magnetization that you would get, had you first added a magnetic field $h>0$ and then let $h$ decrease to $0$). Under $\mu^-_\beta$, on the other hand, the average magnetization is $-m^*(\beta)<0$. In this precise sense, there is spontaneous symmetry breaking, even though the procedure described above does not explicitly break the symmetry at any step.
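
For the square-lattice Ising model the spontaneous magnetization $m^*(\beta)$ is even known in closed form (the Onsager-Yang formula $m^*=\left(1-\sinh^{-4}(2\beta J)\right)^{1/8}$ for $\beta>\beta_{\rm c}$); a short sketch:

```python
import math

def spontaneous_magnetization(beta, J=1.0):
    """Onsager-Yang spontaneous magnetization m*(beta) of the zero-field
    square-lattice Ising model; zero at and above the critical temperature."""
    x = math.sinh(2 * beta * J)
    if x <= 1.0:  # equivalent to beta <= beta_c = log(1 + sqrt(2)) / (2 J)
        return 0.0
    return (1 - x ** -4) ** 0.125

beta_c = math.log(1 + math.sqrt(2)) / 2
print(beta_c)                          # ~0.4407
print(spontaneous_magnetization(0.5))  # ~0.911, well inside the ordered phase
print(spontaneous_magnetization(0.4))  # 0.0, above the critical temperature
```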

Let me now briefly link the above approach (still for the Ising model on $\mathbb{Z}^2$) with the one using an explicit symmetry breaking. A standard way to proceed in this case is to consider an increasing sequence of finite subsets $\Lambda_n\subset\mathbb{Z}^2$. We then consider the Gibbs measure $\mu^+_{\Lambda_n;\beta}$ associated to the Ising model in $\Lambda_n$, with $+$ boundary condition. One can then consider the (weak) limit of the probability measures $\mu^+_{\Lambda_n;\beta}$ as $\Lambda_n$ grows to cover $\mathbb{Z}^2$. It turns out that the limiting measure coincides with the infinite-volume Gibbs measure $\mu_\beta^+$ derived above. Of course, one recovers the measure $\mu^-_\beta$ by using a sequence with $-$ boundary condition.
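
This finite-volume construction is easy to simulate. The following is a rough Metropolis sketch (my own illustration; the lattice size, temperature, and sweep count are arbitrary choices) of the measure $\mu^+_{\Lambda;\beta}$: spins outside the box are frozen at $+1$, and the empirical magnetization inside comes out close to $+m^*(\beta)$ deep in the ordered phase.

```python
import math
import random

def plus_bc_magnetization(L=10, beta=1.0, sweeps=400, seed=0):
    """Metropolis dynamics for an L x L Ising box with '+' boundary condition."""
    rng = random.Random(seed)
    # (L+2) x (L+2) grid; the outer ring is frozen at +1 and never updated,
    # implementing the '+' boundary condition of mu^+_{Lambda;beta}
    s = [[1] * (L + 2) for _ in range(L + 2)]
    for i in range(1, L + 1):
        for j in range(1, L + 1):
            s[i][j] = rng.choice((-1, 1))  # random initial interior configuration
    for _ in range(sweeps):
        for i in range(1, L + 1):
            for j in range(1, L + 1):
                # energy cost of flipping spin (i, j), with J = 1
                dE = 2 * s[i][j] * (s[i-1][j] + s[i+1][j] + s[i][j-1] + s[i][j+1])
                if dE <= 0 or rng.random() < math.exp(-beta * dE):
                    s[i][j] = -s[i][j]
    interior = sum(s[i][j] for i in range(1, L + 1) for j in range(1, L + 1))
    return interior / L**2

m_plus = plus_bc_magnetization()
print(m_plus)  # close to +1 at beta = 1, deep in the ordered phase
```

Replacing the frozen ring by $-1$ gives the analogous estimate for $\mu^-_{\Lambda;\beta}$.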

So the two approaches yield the same result, but I insist that the former does not require explicit symmetry breaking. (Moreover, it provides a much more powerful framework.)

Yvan Velenik
  • Does the former formalism explain (or motivate) why real systems are always found in pure states rather than arbitrary Gibbs states? – tparker Mar 29 '18 at 22:38
  • @tparker : This issue is not trivial. You have to understand that a Gibbs state (pure or not) only describe the "local" behavior deep inside the system. This is due to the topology used when constructing the states or, if you prefer the approach via the thermodynamic limit, the fact that you are sending the boundary at infinity. In particular, you may get mixtures depending on how the system was prepared. – Yvan Velenik Mar 30 '18 at 07:20
  • The most trivial situation is if you took the thermodynamic limit using free or periodic boundary conditions. In this case, the limiting state is the mixture $\tfrac12\mu_\beta^++\tfrac12\mu_\beta^-$. You get a mixture because you do not know which phase is realized, both being equally likely. But there are other ways of getting a mixed state. – Yvan Velenik Mar 30 '18 at 07:20
  • As an easy example, consider the Ising model with Dobrushin boundary condition (i.e., $+$ on the upper half-plane and $-$ on the lower half-plane) in a sequence of growing squares centered at the origin. In that case, the sequence of finite systems you are considering will have an interface separating regions respectively occupied by the $+$ and $-$ phases. – Yvan Velenik Mar 30 '18 at 07:21
  • In the thermodynamic limit, the fluctuations of this interface (on $\mathbb{Z}^2$) become unbounded, which implies that the limiting Gibbs state is again the mixture $\tfrac12\mu_\beta^++\tfrac12\mu_\beta^-$. In this case, you know very well what the system is doing, but you still get a mixture, because the measurement you are performing occurs at a finite distance from $0$, and you don't know on which side of the interface it occurs. – Yvan Velenik Mar 30 '18 at 07:21
  • So, in this sense, there are natural situations in which mixtures occur. The formalism thus cannot tell you too much about what is likely to be observed or not. On the other hand, pure phases (or more generally, extremal Gibbs measures) have certain remarkable properties that set them apart from mixtures and make them more natural candidates to describe macroscopic systems. In particular, they are the only ones yielding deterministic predictions when measuring macroscopic quantities. – Yvan Velenik Mar 30 '18 at 07:21
  • That makes sense - since pure states respect cluster decomposition, the behavior deep in the bulk should in some sense be insensitive to fluctuations in the boundary conditions. – tparker Mar 30 '18 at 13:11

Three comments:

  1. It's true that at some point you need a tiny symmetry-breaking field. But it doesn't need to act on every site uniformly - even a field that acts on a single site is enough to break the symmetry. Realistically, you can't be expected to keep track of these tiny fields in a real system. So philosophically I suppose you could say that "symmetry breaking" doesn't happen in real life because the symmetry is never exact - but the point is that the system is unstable to tiny asymmetries that you can't realistically keep track of.

  2. You are assuming that the Hamiltonian doesn't change in time. But the point of SSB is that even a momentary symmetry-breaking field is enough to permanently break the symmetry. Once the system falls into a symmetry-breaking configuration, it gets "stuck" and can't "climb out" back into the symmetric configuration.

  3. It's true that the Gibbs ensemble is always symmetric. But the point of SSB is that in the symmetry-broken phase, the system isn't described by the Gibbs ensemble, but only by an asymmetric subset of the ensemble. So you find yourself in only one of the ground states, not in an equally weighted mixture of all of them. This is called "ergodicity breaking".
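
Ergodicity breaking as described in points 2 and 3 is easy to see in a simulation. The sketch below (my own toy illustration; lattice size and temperature are arbitrary choices) runs local Metropolis dynamics for a small low-temperature Ising model started in the all-up state: over the whole run the magnetization never changes sign, so only the "+" part of the symmetric Gibbs ensemble is ever sampled.

```python
import math
import random

def magnetization_trace(L=8, beta=0.8, sweeps=1000, seed=1):
    """Metropolis dynamics for an L x L periodic Ising model, started all-up;
    returns the per-sweep magnetization trace."""
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]  # start in the all-up ground state
    trace = []
    for _ in range(sweeps):
        for i in range(L):
            for j in range(L):
                nb = (s[(i - 1) % L][j] + s[(i + 1) % L][j]
                      + s[i][(j - 1) % L] + s[i][(j + 1) % L])
                dE = 2 * s[i][j] * nb  # energy cost of flipping spin (i, j), J = 1
                if dE <= 0 or rng.random() < math.exp(-beta * dE):
                    s[i][j] = -s[i][j]
        trace.append(sum(map(sum, s)) / L**2)
    return trace

trace = magnetization_trace()
print(min(trace))  # stays positive: the system never tunnels to the -m sector
```

At high temperature (say beta = 0.2) the same trace fluctuates around zero, and both sectors are visited freely.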

tparker

The book (Altland & Simons, 2010; p. 258, or p. 263 in the 2006 edition) states the following (quoted verbatim from the 2006 edition):

In spite of the undeniable existence of solids, magnets, and Bose condensates of definite phase, the notion of a ground state that does not share the full symmetry of the theory may appear paradoxical, or at least "unnatural". For example, even if any particular ground state of the "Mexican hat" potential shown in the figure above "breaks" the rotational symmetry, shouldn't all these states enter the partition sum with equal weight, such that the net outcome of the theory is again symmetric?

The book then goes on to discuss how symmetry-breaking fields produce symmetry breaking as an observable phenomenon. From this discussion I therefore gather the following two things:

  • Spontaneous symmetry breaking refers to the ground states $\left| GS_n \right>$ not possessing the same symmetry as the dynamics.
  • The reason we see spontaneous symmetry breaking is due to imperfections.
  • I disagree with that: if you sample a configuration at random according to the Gibbs state associated to, say, the 2d Ising model at low temperatures, then symmetry will be broken in the sense that the average magnetization $\frac1N\sum_i \sigma_i$ in the sample will be close to $\pm m^*(T)$, where $m^*(T)$ is a deterministic nonzero value (the spontaneous magnetization at temperature $T$). So typical configurations are not invariant under reversal of all the spins, even though the Hamiltonian does not prefer any type of spins. – Yvan Velenik Mar 29 '18 at 12:23
  • Of course, the Gibbs measure itself is still symmetric for a finite system: each of the two values $\pm m^*(\beta)$ occurs with the same probability. In the thermodynamic limit, however, there are infinitely many different Gibbs states, and the pure phases (the physically relevant ones) display either a spontaneous magnetization equal to $m^*(T)$ with probability $1$ or equal to $-m^*(T)$ with probability $1$. – Yvan Velenik Mar 29 '18 at 12:23
  • So, in the thermodynamic limit (which is in any case necessary to speak properly of phase transitions), the symmetry is broken at the level of the (relevant) Gibbs states themselves. – Yvan Velenik Mar 29 '18 at 12:25
  • @YvanVelenik "if you sample a configuration at random according to the Gibbs state associated to, say, the 2d Ising model at low temperatures, then symmetry will be broken" - this depends on the details of your sampling procedure. It's true for a local-update procedure like Markov-chain Monte Carlo, but not for certain nonlocal sampling procedures. The local nature of environmental perturbations is key for understanding symmetry breaking. – tparker Mar 29 '18 at 16:21
  • @tparker : I was talking about truly sampling from the Gibbs state, which has nothing to do with any algorithmic procedure. (Of course, in practice you would probably use some Markov chain that has the Gibbs state as invariant distribution.) Note that this use of the verb "sample" is the usual one in probability theory. – Yvan Velenik Mar 29 '18 at 17:15
  • @YvanVelenik Could you clarify your claim that "there are infinitely many different Gibbs states"? At fixed temperature $T$, why isn't there a single Gibbs state with PDF $p(s) = e^{-E(s)/T} / Z$? I believe you are using a different definition of "Gibbs state" than I am. – tparker Mar 29 '18 at 17:41
  • @tparker : I wrote an answer with more details. I hope this helps. I can give much more information, but this would quickly become technical. – Yvan Velenik Mar 29 '18 at 17:47
  • @YvanVelenik That's a great answer you wrote, thank you! Since this is a subtle point that the OP has probably not been exposed to, it's worth pointing out explicitly that the finite-system Gibbs measure $p(s) = e^{-\beta H(s)}/Z$ is not a valid probability distribution for an infinite system, so the definition of the "Gibbs measure" needs to be generalized appropriately. – tparker Mar 29 '18 at 18:04
  • @tparker : exactly. This is something that is very seldom discussed in physics classes (with good reasons, I guess). It is nevertheless a conceptually very interesting topic. There are actually so much more remarkable properties that you can extract from this formalism, it would deserve a longer answer, but the current one is already probably too long ;) . – Yvan Velenik Mar 29 '18 at 18:08