I have been told that few models in statistical mechanics can be solved exactly. In general, is this because the solutions are too difficult to obtain, or is our mathematics not sufficiently advanced and we don't know how to solve many of those models yet, or because an exact solution genuinely does not exist, i.e. it can be proven that a model does not admit an exact solution?
-
It's mostly for the same reason why most of the integrals don't have a closed form. They do have an exact value, but this exact value is not expressible in elementary functions. – Ruslan Jun 16 '20 at 12:12
-
You can’t even solve for the zeros of most quintic polynomials. In general, it is remarkable when something is solvable, not when it isn’t. – G. Smith Jun 16 '20 at 17:12
-
@Ruslan In the theory of differential equations one speaks of solving an equation by quadrature - expressing it in terms of integrals. And this is certainly considered an exact solution. – Roger V. Jun 17 '20 at 21:03
-
@Vadim well, the series solution of the three-body problem is also an exact solution. But I don't think one would seriously try to sum Sundman's series. – Ruslan Jun 17 '20 at 21:46
-
@Daphne Are you thinking about lattice models of critical phenomena like the ones discussed in Rodney Baxter's book Exactly Solved Models in Statistical Mechanics? If so, the models in that book mostly all have a property that models that are not exactly solved lack, namely an infinite-dimensional symmetry group. One needs to take the thermodynamic limit to look at critical phenomena, and the symmetries are what make that calculation tractable. – Will Orrick Jun 19 '20 at 07:40
3 Answers
Exact (non-)solvability is an issue that pops up in every area of physics. The fact that this is surprising is, I believe, a failure of the didactics of mathematics and science.
Why? Consider the following: you solve a simple physical problem and the answer is $\sqrt{2}$ meters. So what is the answer? How many meters? Have you solved the problem? If I do not give you a calculator or allow you to use the internet, you will probably not be able to actually give me a very precise answer, because "$\sqrt{2}$" does not refer to an exact number we are able to magically evaluate in our minds. It refers to a computation procedure by which we are able to get the number with high precision; we could use, e.g., the iterative Babylonian method $$a_0=1, \quad a_{n+1} = \frac{a_n}{2} + \frac{1}{a_n},$$ which after three iterations ($n=3$) gives an approximation valid to six significant digits. But have you solved the problem exactly? No, you have not. Will you ever solve it exactly? No, you will not. Does it matter? No, it does not, since you can solve the problem extremely quickly to a degree that is way more precise than any possible application of the model itself will be.
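For concreteness, here is a minimal Python sketch of that iteration (the function name and iteration count are my own illustrative choices, not part of the argument):

```python
# Babylonian iteration for sqrt(2): a_{n+1} = a_n/2 + 1/a_n, starting from a_0 = 1.
# The point is that "sqrt(2)" names a procedure, not a decimal we can read off directly.
def babylonian_sqrt2(n_iterations=3):
    a = 1.0
    for _ in range(n_iterations):
        a = a / 2 + 1 / a
    return a

print(babylonian_sqrt2(3))  # ~1.4142157, good to about six significant digits
print(2 ** 0.5)             # 1.4142135623730951, for comparison
```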
So when people refer to exact solvability they really mean "expressible as a closed-form reference to a standard core of functions with well-known properties and quickly converging computational approximations". This "standard core" includes rational functions, fractional powers, exponentials, logarithms, sines, cosines, ... Most of them can be understood either as natural extensions of integer addition, division, and multiplication (rational functions), as solutions of simple geometrical problems (sines, cosines), or as solutions of particular parametrized limits/simple differential equations (the exponential).
But there are other functions known as special functions such as elliptic integrals and Bessel functions that are sometimes understood as part of the "standard core" and sometimes not. If I express the solution of a problem as an exponential, it is an exact solution, but if it is an elliptic integral, it is not. Why is the reference to the circle and certain lengths within it (sine, cosine) more important than those within the ellipse (elliptic integral)?
When you dig deeper, you find out that the notion of exact solvability is largely conventional, and trying to formalize it will typically either exclude systems that are considered solvable or include systems that are not. So you can understand your question as "Why are most problems in physics not expressible as solutions of a rather arbitrarily chosen set of simple geometrical problems?" And the reason is that, well, there is no reason to believe they should be.
EDIT
There have been quite a few criticisms of the original answer, so I would like to clarify. Ultimately, this is a soft question where there is no rigorous and definite answer (you would lie to yourself if you were to pretend that there was), and every answer will be open to controversy (and that is fine). The examples I gave were not supposed to be a definite judgment on the topic but rather serve the purpose of challenging the concept of "exact solvability". Since I demonstrated that the concept is conventional, I wanted to finish on that point and not go into gnarly details. But perhaps I can address some of the issues raised in the comments.
The issue with $\sqrt{2}$:
I take the perspective of a physicist. If you give me a prediction that a phenomenon will have the answer $\sqrt{2}$ meters, then I take a measurement device and check that prediction. I check the prediction with a ruler, a tape measure, or a laser ranging device. Of course, you can construct $\sqrt{2}$ meters as the diagonal of a square with the tape measure, but if your measurement device is precise enough (such as the laser rangefinder), I can guarantee you that the approximate decimal representation is ultimately the better choice. The $\sqrt{2}$ factor can also be replaced by any constant outside the class of straightedge and compass constructions, such as $\pi$ or $e$, to make the argument clearer. The point is that once you think about "door-to-door" and "end-to-end" approaches, not abstract mathematical notions and "magical black boxes" such as calculators, the practical difference between "exactly solvable" and "accessible even though not exactly solvable" is not qualitative, it is quantitative.
Understood vs. understandable vs. accessible vs. exactly solvable:
I would like to stress that while there are significant overlaps between "understood", "understandable", "accessible", and "exactly solvable", these are certainly not the same. For instance, I would argue that the trajectories corresponding to a smooth Hamiltonian of a system with a low number of degrees of freedom are "accessible" and "fully understandable", at least if the functions in the Hamiltonian and the corresponding equations of motion do not take long to evaluate. On the other hand, such trajectories will rarely be "exactly solvable" in any sense we usually consider. And yes, the understandability and accessibility apply even when the Hamiltonian system in question is chaotic (actually, weakly chaotic Hamiltonians are "almost everywhere" in terms of measure on functional space). The reason is that the shadowing theorem guarantees that we are recovering some trajectory of the system by numerical integration, and by a fine sampling of the phase space we are able to recover all the scenarios it can undergo. Again, from the perspective of the physicist, you are able to understand where the system is unstable and what the time scale of the divergence of scenarios is, given your uncertainty about the initial data. The only difference between this and an exactly solvable system with an unstable manifold in phase space is that in the (weakly) chaotic system the instability plagues a non-zero volume of phase space and the instability of a chaotic orbit is a persistent property throughout its evolution.
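To make the "accessible even though not exactly solvable" point concrete, here is a minimal sketch of mine (not part of the original answer): it numerically integrates a low-dimensional smooth Hamiltonian system, the Hénon-Heiles Hamiltonian, chosen purely as a stand-in, and tracks how two nearby trajectories separate. It assumes SciPy is available; the initial conditions are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hamilton's equations for H = (px^2 + py^2)/2 + (x^2 + y^2)/2 + x^2*y - y^3/3.
def henon_heiles(t, s):
    x, y, px, py = s
    return [px, py, -x - 2 * x * y, -y - x * x + y * y]

s0 = [0.0, 0.1, 0.5, 0.0]          # some initial condition
s1 = [0.0, 0.1 + 1e-8, 0.5, 0.0]   # a nearby initial condition
t_eval = np.linspace(0, 100, 1001)
sol0 = solve_ivp(henon_heiles, (0, 100), s0, t_eval=t_eval, rtol=1e-10, atol=1e-12)
sol1 = solve_ivp(henon_heiles, (0, 100), s1, t_eval=t_eval, rtol=1e-10, atol=1e-12)

# Growth of the separation signals sensitivity to the initial data; whether this
# particular orbit is regular or chaotic depends on the energy, but either way
# the trajectory itself is cheap to compute, i.e. "accessible".
separation = np.linalg.norm(sol0.y - sol1.y, axis=0)
print(separation[::200])
```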
But consider Liouville-integrable Hamiltonians, which one would usually put in the bin "exactly solvable". Now let me construct a Hamiltonian such that it is integrable but its trajectories become "quite inaccessible" and certainly "not globally understandable" at some point. Consider the set $\{p_i\}_{i=1}^{N+3}$ of the first $N+3$ prime numbers ordered by size. Now consider the set of functions $\xi_i(x)$ defined by the recursive relation $$\xi_1(x) = F(p_3/p_2,p_2/p_1,p_1;x), \; \xi_{i+1} = F(p_{i+3}/p_{i+2},p_{i+2}/p_{i+1},p_{i+1}/p_i;\xi_i)$$ where $F(,,;)$ is the hypergeometric function. Now consider the Hamiltonian with $N$ degrees of freedom $$H_N(p_i,x^i) = \frac{1}{\sum_{i=1}^N \xi_i(x^i) } \left(\sum_{i} p_i^2 + \sum_{i=1}^N \xi_i(x^i)^2 \right)$$ For all $N$ the Hamiltonian can be considered exactly solvable (it has a parametrized separable solution), but there is a certain $N$ where the solution for a generic trajectory $x^i(t)$ becomes practically inaccessible and even ill-understood (my guess is that this point would be $N\sim500$). On the other hand, if this problem were very important, we would probably develop tools for better access to its solutions and for a better understanding. This is also why this question should be considered soft: what is considered understandable and accessible is also a question of what the scientific community has been considering a priority.
The case of the Ising model and statistical physics:
The OP asks about the solvability of models in statistical physics. The Ising model in 3 dimensions is one of the famous "unsolvable/unsolved" models in that field, which was mentioned by Kai in the comments and which also has an entire heritage question here at Physics SE. Amongst the answers I really like the statement of Ron Maimon:
"The only precise meaning I can see to the statement that a statistical model is solvable is saying that the computation of the correlation functions can be reduced in complexity from doing a full Monte-Carlo simulation."
This being said, the 3D Ising model can be considered "partially solved", since conformal bootstrap methods provide a less computationally demanding (and thus ultimately more precise) method of computing its critical exponents. But as I demonstrated in the paragraphs above, "understandability" and "accessibility" do not need to be strictly related to "exact solvability". The point about the 3D Ising model is that the generic line of attack for a numerical solution of the problem (direct Monte Carlo simulation) is largely inaccessible AND the computational problem cannot be greatly reduced by "exact solvability".
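For concreteness, the "generic line of attack" mentioned above, direct Monte Carlo simulation, looks roughly like the following minimal Metropolis sketch (mine, purely illustrative; the lattice size, temperature, and sweep count are arbitrary and far too small for real estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8                                 # tiny lattice, for illustration only
beta = 0.22                           # inverse temperature, roughly near the 3D critical point
spins = rng.choice([-1, 1], size=(L, L, L))

def sweep(spins, beta):
    """One Metropolis sweep: attempt L^3 single-spin flips."""
    n = spins.shape[0]
    for _ in range(n ** 3):
        i, j, k = rng.integers(0, n, size=3)
        # Sum of the six nearest neighbours with periodic boundary conditions.
        nn = (spins[(i + 1) % n, j, k] + spins[(i - 1) % n, j, k]
              + spins[i, (j + 1) % n, k] + spins[i, (j - 1) % n, k]
              + spins[i, j, (k + 1) % n] + spins[i, j, (k - 1) % n])
        dE = 2 * spins[i, j, k] * nn  # energy cost of flipping spin (i, j, k)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j, k] *= -1

for _ in range(200):                  # far too few sweeps for converged statistics
    sweep(spins, beta)
print("magnetization per spin:", spins.mean())
```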
This also brings us to an interesting realization: exact solvability, in its most generous definition, is simply "the ability to reduce the computation problem considerably as compared to the generic case". In that sense, it is tautological that exact solvability is non-generic.
However, we should also ask why certain classes of problems are "generically computationally inaccessible", so that a lack of solvability becomes a huge issue. We do not know the full answer, but a part of it definitely has to do with the number of degrees of freedom. More complex problems are harder to model, and systems with more degrees of freedom allow for a higher degree of complexity. Why should we assume that as the number of degrees of freedom becomes large the computation of certain statistical properties of the system becomes simple? Of course, the answer is that we should not assume that the large-$N$ limit will become simple; we should instead expect that the computational complexity of large-$N$ systems will be the generic case and simplifications the exception.

-
Interestingly, at some point trigonometric functions were also not very welcome as solutions of some kinds of equations (the ones for roots of polynomials). The result of the corresponding research is the Abel-Ruffini theorem. – Ruslan Jun 16 '20 at 14:46
-
I think this is not exactly correct and might be missing the point. My understanding of an "exact solution" is more like "we in principle know all possible details of the system under study", for example the exact solution of the 2D Ising model, versus the unsolved 3D Ising model. We can say that we know everything there is to know about the 2D Ising model, but there are still unknowns in regards to the 3D version – Kai Jun 16 '20 at 21:03
-
@Kai Indeed. $\sqrt 2$ is a pretty bad example for this question, because everything which is known about $0$ or $1$ is also known about $\sqrt 2$. $\sqrt 2$ is easy to draw, compare, measure or display in any given base. Chaitin's constant, Graham's number, Busy Beaver, https://math.stackexchange.com/a/1266604/386794 or any point in the chaotic part of a logistic map would be a much better example. – Eric Duminil Jun 17 '20 at 06:51
-
$\sqrt{2}$ is very much an exact number. It just isn't the ratio of 2 integers. Saying it isn't an exact number is like saying 0.5 isn't an exact number because you need a decimal point (or some fraction notation like $\frac{1}{2}$) to represent it. – chepner Jun 17 '20 at 11:45
-
It's also pretty easy to imagine: it's the diagonal of a unit square. I find that a lot easier to imagine than, say, 57. – chepner Jun 17 '20 at 11:46
-
@EricDuminil Why should the answer say anything about the uncertainty principle, chaos theory or the universe outside our light cone? Those were not part of the question. The question was simply why certain mathematical models of reality cannot be solved in a way that meets certain mathematical criteria. If you want uncertainty theory or stuff outside our light cone in your model then go and ask for a different model. But that is quite a different issue than whether or not solutions to this model take a certain form. – Dast Jun 17 '20 at 18:32
-
@EricDuminil Actually, the chaos thing might be a point, as that is something that comes out rather than being put into the model. All chaotic systems will not have exact solutions, but perhaps there are systems that are non-chaotic but not exactly solvable? – Dast Jun 17 '20 at 18:43
-
@Dast related question at Math.SE: Are there chaotic systems with explicit solutions? – Ruslan Jun 18 '20 at 12:56
-
I agree with Kai. I read the OP question as "Why do so few problems have a closed-form, analytic solution?" where a closed-form expression is one expressed using a finite number of standard operations, and an analytic solution is an exact expression (not statement). E.g., a closed-form solution to the Schrödinger equation for the helium atom has not been found. $\sqrt 2$ is absolutely a closed-form analytic solution. AP is referring to something akin to numeric computability and precision; $\sqrt 2$ can't be computed to infinite precision in finite steps. – DeusXMachina Jun 18 '20 at 14:19
-
@Kai Well, we only understand the two-dimensional Ising model in the case of zero external magnetic field. And even the zero-field susceptibility remains mysterious, despite the existence of a well-studied form-factor expansion and of efficient methods for computing long series. As an analogy, there are many known representations of the Riemann zeta function, and one can compute many things about it very efficiently, but we are still very far from knowing everything about it. – Will Orrick Jun 19 '20 at 07:47
-
I have addressed many of the points appearing in the comments in an extended edit.
I would also like to underline Ruslan's point that trigonometric functions have not always been welcome as "exact solutions" (and let's face it, if you can trade a trigonometric function or an exponential for a polynomial in a numerical computation, you still absolutely do so). And Will Orrick also has a good point: if the solution is given in terms of the Riemann zeta function, is it considered "exact" or not?
– Void Jun 19 '20 at 13:21
Try finding an analytical solution for the particle position $(x,y,z)$ at time $t$ when the motion is described by the Lorenz system: $$ {\begin{aligned}{\frac {\mathrm {d} x}{\mathrm {d} t}}&=\sigma (y-x),\\[6pt]{\frac {\mathrm {d} y}{\mathrm {d} t}}&=x(\rho -z)-y,\\[6pt]{\frac {\mathrm {d} z}{\mathrm {d} t}}&=xy-\beta z.\end{aligned}} $$ We can't do that. An analytical solution doesn't exist, because the system is chaotic. We can only solve the equations numerically and draw the particle position at each moment in time; what you get is the famous butterfly-shaped Lorenz attractor.
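As an illustration of the numerical route, here is a minimal sketch of mine (not part of the original answer); it assumes SciPy is available and uses the classic parameter values $\sigma = 10$, $\rho = 28$, $\beta = 8/3$:

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def lorenz(t, state):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0, 40, 4001)
sol = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-9)

# Position after 40 time units; nudging the initial condition by 1e-8
# gives a completely different answer, which is the hallmark of chaos.
print(sol.y[:, -1])
```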
But the numerical method does not shed light on the particle's exact behavior or give an exact prediction of where it will be after a time period $\Delta t$. You can make some estimates, of course, but only within a small time window and with large inaccuracies/errors. That's why weather prediction fails on large time scales, and sometimes on small ones too. The three-body problem in Newtonian mechanics is also chaotic and doesn't have a general solution either. So there are unpredictable systems everywhere in nature. Just remember the uncertainty principle.
EDIT
Thanks to @EricDuminil, who gave another, simpler way to see the chaotic behavior of systems: one just needs to iterate the logistic map equation for a couple of hundred iterations,
$$ x_{n+1}=r\,x_{n}\left(1-x_{n}\right) $$
and plot the $x$ values visited over all iterations as a function of the bifurcation parameter $r$. One then gets a bifurcation diagram like this:
We can see that $r$ values in the range $[2.4, 3.0]$ give a stable system, because the map settles onto a single point. For $r > 3.0$ the orbit starts period-doubling (2, 4, 8, ... points), and beyond roughly $r \approx 3.57$ the system becomes chaotic and the output unpredictable.
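Here is a minimal sketch (mine, not the answerer's) that reproduces this kind of bifurcation diagram; the parameter range and iteration counts are arbitrary choices:

```python
import numpy as np
import matplotlib.pyplot as plt

r_values = np.linspace(2.4, 4.0, 2000)
x = np.full_like(r_values, 0.5)

# Discard a transient so that only the long-time behaviour is recorded.
for _ in range(500):
    x = r_values * x * (1 - x)

points_r, points_x = [], []
for _ in range(200):
    x = r_values * x * (1 - x)
    points_r.append(r_values)
    points_x.append(x)

plt.plot(np.concatenate(points_r), np.concatenate(points_x), ',k', alpha=0.25)
plt.xlabel("bifurcation parameter r")
plt.ylabel("x values visited")
plt.show()
```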

-
Good answer. You could use the logistic map as a simple example for a chaotic system. Start with `x` between 0 and 1, and apply `x = 3.7*x*(1-x)` 100 times. Try it with `x=0.5`, `x=0.49999` or `x=0.4999999`. – Eric Duminil Jun 17 '20 at 07:55
-
Also, "An analytical solution doesn't exists". What do you mean by this? Do you mean there is some proof that no solution exists? Even the meaning of that is not that clear solution expressed as what? (At worst the "solution" is defined by the differential equations in question. Is the point that it is proven to not be expressible in terms of other known (special) functions?) – Marten Jun 18 '20 at 14:35
-
I mean there is no closed-form solution of the differential equation system for $x(t), y(t), z(t)$. No proof, of course, just my belief, but it is based on the numerical estimations. If there were an analytical solution to the equations defining the particle path, the numerical estimations would not show chaotic behavior or the Lorenz attractor, would they? – Agnius Vasiliauskas Jun 18 '20 at 22:01
"Exact solution" is a vaguely defined term, which may change its meaning depending on the context. In practice it usually means being able to express the answer in terms of a set of well-known functions (elementary functions or special functions) and straightforward operations on them (arithmetic operations, differentiation, integration, etc.). This also means that such a solution can be quickly evaluated for any values of the parameters. Based on this definition one can define several classes of problems:
- Problems exactly solvable in the above sense
- Problems that are not exactly solvable in the above sense, but for which the solution can be quickly obtained (e.g., numerically). Then we can simply define such a solution as a new special function and use it for more complex problems, as in the sketch after this list (in this context note that even the values of the basic functions, e.g., trigonometric functions, are evaluated only approximately).
- Problems that cannot be solved quickly -- such problems definitely exist. E.g., the field of quantum computing is about finding a way to make a certain class of such problems quickly solvable.
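As a concrete illustration of the second class above, here is a minimal sketch (mine; the integrand and the function name are arbitrary): a quantity defined only by a quickly convergent quadrature is wrapped once and then used like any other special function.

```python
import math
from functools import lru_cache
from scipy.integrate import quad

@lru_cache(maxsize=None)
def my_special_function(a):
    """A 'new special function' defined only by a numerical quadrature."""
    value, _error = quad(lambda t: math.exp(-a * t * t) / (1.0 + t), 0.0, 1.0)
    return value

# Evaluated on demand (and cached), exactly as one would use sin or exp.
print(my_special_function(1.0))
print(my_special_function(2.5))
```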
Exact models have historically played an important role in physics, as they allowed understanding phenomena, by which physicists usually mean reducing phenomena to combinations of simpler ones and/or intuitively understandable formulas. However, we are living in an age where our ability to reduce phenomena to simple mechanistic pictures, dependent on a few parameters, is reaching its limit. This is manifested by an alternative approach that aims to make predictions without ever attempting to understand the phenomena behind them: it goes under the name of machine learning.

-
4"basic functions, e.g., trigonometric functions" — you've already gone too far: even square roots are computed only approximately. Actually, why roots, even products of rationals are most often computed only approximately: we don't normally use exact arithmetic in numerical calculations, so have to cope with roundoff errors instead of keeping hundreds of digits in numerators and denominators. – Ruslan Jun 16 '20 at 14:27
-
2An "alternative approach that aims to make predictions without ever attempting to understand the phenomena behind". Well, this may be useful from an engineering point of view, but in my opinion there is no way such an approach can be considered science. – Yvan Velenik Jun 16 '20 at 15:00
-
@YvanVelenik the paper that I linked makes some good arguments: e.g., some systems are hypersensitive to parameters, whereas others may be too complex to be described by a number of parameters sufficiently small for a human to keep in mind - like biological systems. I don't know if machine learning really solves these problems... it is definitely not science in the traditional understanding of the word. – Roger V. Jun 16 '20 at 15:40
-
@Vadim: Well, for me science is wholly about understanding (even if you have to make drastic simplifications, consider toy models, and so on). But just having a black box spitting out numerical answers to questions we throw at it is not something I'd find satisfactory (while fully recognizing how useful this would be practically). Anyway... – Yvan Velenik Jun 16 '20 at 16:48
-
@YvanVelenik We have accepted the limits on science before, when we accepted that we could not measure things infinitely fast, that we cannot measure precisely the things that are very small, that we cannot write down the equations of motion for all the molecules in a gas. Now we are facing a situation where we cannot account for all the parameters. Perhaps it is my personal bias after several years in computational biology, but I do not think that this is the end of science. Neither do I think that data science is a solution to everything - even though it is frequently presented as such. – Roger V. Jun 16 '20 at 17:12