59

There is this famous Haag's theorem, which basically states that the interaction picture in QFT cannot exist. Yet everyone uses it to calculate almost everything in QFT, and it works beautifully.

  1. Why? More specifically, for particle physics: in which limit does the LSZ formula work?

  2. Can someone give me an example of a QFT calculation (of something measurable in current experiments, something really practical!) in which the interaction picture fails miserably due to Haag's theorem?

Qmechanic
  • 201,751
Rafael
  • 2,731

6 Answers

20

Every theorem is only as powerful as its assumptions (and propositions). The answers are clearly that

  1. The LSZ formula always works for the field theories where it's used.
  2. No actual calculation relevant to physics fails because of Haag's theorem. Haag's theorem is just a philosophy.

Haag's theorem is morally wrong because it studies the question of whether the operators in the interacting theory are "strictly" unitarily equivalent to those in the free theory: $$O_\mathrm{interacting} = U O_\mathrm{free} U^{-1}$$ Not surprisingly, Haag found that such a unitary equivalence doesn't exist: as we know, operators acquire anomalous dimensions from the interactions (and quantum effects), among other deviations from the classical intuition, and the naive algebra that is valid in the free theory simply no longer applies to the interacting theory.
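(As a reminder of what an anomalous dimension does, in a schematic scale-invariant setting - a generic textbook illustration, not a calculation from this thread: the interacting two-point function of an operator falls off with a corrected power,

$$\langle O(x)\,O(0)\rangle \;\sim\; \frac{1}{|x|^{2(\Delta_0+\gamma)}},$$

where $\Delta_0$ is the free-field scaling dimension and $\gamma$ is the anomalous dimension generated by the interactions - one concrete way in which the free-field operator algebra does not survive the interactions intact.)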

In particular, the addition of the interactions also modifies the commutation relations between the fields that "directly" create and annihilate the particles - at least the low-energy effective fields. For example, the quantum effects produce effective Lagrangians that contain higher-derivative terms, including new terms with time-derivatives, and the latter modify the canonical momenta and/or the canonical commutation relations. When one is rigorous, many things change when the interactions are added. Haag only assumed that "some" things change, so his results are inconsequential for physics.
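To illustrate the last point with a schematic toy term (the specific higher-derivative operator below is invented for illustration, not derived from any particular theory): suppose the quantum corrections generate a $\dot\phi^4$ term in the effective Lagrangian,

$$\mathcal{L}_{\rm eff}=\frac{1}{2}\dot\phi^{2}-\frac{1}{2}(\nabla\phi)^{2}-V(\phi)+\frac{c}{\Lambda^{4}}\,\dot\phi^{4}
\qquad\Longrightarrow\qquad
\pi\equiv\frac{\partial\mathcal{L}_{\rm eff}}{\partial\dot\phi}=\dot\phi+\frac{4c}{\Lambda^{4}}\,\dot\phi^{3},$$

so the operator obeying $[\phi(\mathbf{x},t),\pi(\mathbf{y},t)]=i\delta^{3}(\mathbf{x}-\mathbf{y})$ is no longer $\dot\phi$ itself, and the equal-time algebra written in terms of the field and its time derivative differs from its free-theory form.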

At any rate, this 1955 theorem is obsolete - much like most of the former discipline once known as algebraic or axiomatic quantum field theory. Be sure that interacting quantum field theories exist - lattice QCD is one specific way to define them - and it is equally true that the perturbative approximations of all the physical amplitudes may be calculated by the usual perturbative methods, with the extra philosophy and rigorous refinements given e.g. by the LSZ formalism you mentioned.

Haag's theorem was invented as an attempt to show that there was something wrong with one of the first loop diagrams people understood - the vacuum polarization graph. Mr Haag didn't like it. However, there is nothing wrong with that loop diagram - or any of the other loop diagrams that became the bulk of knowledge about particle physics in the subsequent decades. The developments in renormalization showed that the calculations, including the loops, are totally valid. The renormalization group made some further progress - it explained why the theories are universal and why the subtraction of infinities works. Haag's theorem became misleading and obsolete in the 1970s.

In particular, the LSZ formalism uses the "adiabatic hypothesis", the assumption that one may neglect the interactions between the particles in the asymptotically distant past. By slowly and continuously turning on the coupling constant, we may map the states of free particles to the states describing particles in the interacting Hilbert space. This is possible as long as all distances between the particles are large. However, this procedure wouldn't work for general configurations of nearby particles, so one can't promote this trick to a full-fledged "canonical" unitary equivalence between the free and interacting Hilbert spaces. There is clearly no "natural" or "unique" or "canonical" isomorphism of this kind because the free and interacting theories are physically inequivalent. When understood rationally, Haag's theorem is not saying anything beyond this self-evident proposition. However, such an isomorphism is not needed to calculate physically meaningful quantities such as scattering amplitudes.
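For concreteness, here is the textbook LSZ reduction formula for a real scalar field of physical mass $m$ and field-strength renormalization $Z$ (schematically, with smearing and other distributional subtleties suppressed). It refers only to vacuum correlation functions of the interacting field; no unitary map between the free and interacting Hilbert spaces ever enters, which is the precise sense in which LSZ sidesteps Haag's theorem:

$$\langle p_1\cdots p_n;\mathrm{out}\,|\,q_1\cdots q_m;\mathrm{in}\rangle
=\prod_{i=1}^{n}\int d^4x_i\,\frac{i\,e^{ip_i\cdot x_i}}{\sqrt{Z}}\,(\Box_{x_i}+m^2)
\;\prod_{j=1}^{m}\int d^4y_j\,\frac{i\,e^{-iq_j\cdot y_j}}{\sqrt{Z}}\,(\Box_{y_j}+m^2)\;
\langle\Omega|\,T\,\phi(x_1)\cdots\phi(x_n)\,\phi(y_1)\cdots\phi(y_m)\,|\Omega\rangle .$$

The formula holds in the on-shell limit $p_i^2\to m^2$, $q_j^2\to m^2$, where the one-particle poles of the correlator dominate - which also addresses question 1: LSZ works in the limit in which the external momenta sit on the physical mass shell and the particles are asymptotically well separated.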

At least from the viewpoint of physics as an empirical science, it should be clear that the actual calculations in QFT are good science - one that has made predictions and has passed tests comparing the predictions with experiments - while Haag's theorem is not because it hasn't predicted anything that has passed empirical tests. Haag's theorem tries to find problems with the fact that quantum field theory contains new effects such as renormalization that don't appear in quantum mechanics with a finite number of degrees of freedom. However, these extra effects of quantum field theory are real and essential and they lead to no inconsistencies.

Haag's theorem is not a tool to do particle physics; it's an excuse for someone who doesn't want to study particle physics. Like every theorem, it says "A implies B". Because we know that B is incorrect - perturbative QFT clearly works - it follows that the assumptions A aren't right.

AccidentalFourierTransform
  • 53,248
Luboš Motl
  • 179,018
  • Is $O$ a "fundamental field" operator, or were you thinking about composite operators too? – Rafael Jan 27 '11 at 14:06
  • 1
    Dear Rafael, it would be more natural if you asked the question exactly in the opposite way: did you mean just composite operators, or fundamental ones as well? The answer is both. None of them is unitarily equivalent in the interacting and free theories. For the composite operators, it's obvious because they're renormalized, acquire the anomalous dimensions, require one to choose a renormalization scheme, and so on. However, the unitary equivalence fails even for the free operators although a right field redefinition may guarantee the equivalence for a subset of operators. – Luboš Motl Jan 27 '11 at 15:08
  • I still feel that I haven't made it clear enough: the theorem itself is valid, but the interpretation that it causes any problem to QFT calculations is not valid. The conventional relationship between the Schrodinger and interaction picture is not rigorously correct, but the results calculated when all the methods of QFT are appreciated are correct and correspond to a fully consistent physical system. The relation between the Schrodinger and interaction picture is true at a classical level but one must properly calculate the corrections everywhere. – Luboš Motl Jan 27 '11 at 15:11
  • 16
    This answer contains serious misrepresentations not only of Haag's work and the motivation behind it but also of its significance for the foundations of QFT. For anyone interested in a neutral, informed perspective please check out this article Haag's Theorem and Its Implications for the Foundations of Quantum Field Theory –  Jan 29 '11 at 13:23
  • @space_cadet: I’m with you about controlling the tone of conversation, but here there is genuine disagreement which is not always a sign of someone being misinformed. In my mind also, when attempts to formalize QFT are at odds with a successful research program, which underlies most of modern physics, I’d say the reasonable attitude is to go back to the drawing board and improve those attempts. I believe this is also the approach Tim, who is probably the most informed person in this conversation, takes. Drawing deep philosophical conclusions from this technical issue seems to me a bit silly. –  Jan 29 '11 at 19:10
  • 4
    This is a little bit like the development of calculus, which underlies Newtonian mechanics. It took a long time, and was clearly a very valuable exercise for both mathematics and physics. But long before the subject was rigorously defined, it was clear that Newtonian mechanics was correct, even though the correct language for it did not yet exist. So, I think Haag's theorem demonstrates that we are at the same stage of development of QFT. –  Jan 29 '11 at 19:37
17

A comment on what Lubos wrote (it would seem that I need more reputation to post comments :-)

There are still people alive and doing research in AQFT, so maybe AQFT is obsolete for some of us, but it is not dead.

We still don't have a full mathematically rigorous understanding of QFT. But everybody is allowed to ignore this, of course, and use the computational tools of QFT that have proven their value. Haag's theorem tells us that we simply don't know why the computational tools of perturbative QFT work so well, but as far as concrete computations go, as long as you get the numbers right, this does not need to concern you. Therefore you won't find any calculations that did not work out because of Haag's theorem.

Any rigorous construction of an interacting 4D QFT will have to avoid Haag's theorem in one way or another, however. But whether the endeavor to do research in this direction is worth the effort is, of course, a matter of discretion.

Tim van Beek
  • 3,725
  • 1
  • Dear Tim, there are also people who believe that the Earth is flat. And by the way, I didn't write it was dead. I wrote it was obsolete. And in 2011, it is simply no longer true that we don't know why the perturbative QFT tools work so well. The Renormalization Group, which has nothing to do with AQFT or Haag's theorem, explains why it works so well. I have explained how Haag's theorem is avoided. It's a lesson in the history of physics without any implication for contemporary physics or future physics - please get used to this fact. – Luboš Motl Jan 27 '11 at 12:45
  • 14
    Dear Lubos, Haag's theorem assumes a certain mathematical framework; renormalization is about subtracting an "infinite constant" from the Hamiltonian, which is not a mathematically well defined operation. In this sense renormalization avoids Haag's theorem by employing a not (yet) well defined mathematical framework. You are of course entitled to state that renormalization explains how everything works, but from a formal, mathematical point of view there is a gap in our understanding of what renormalization is, mathematically (ignoring Connes-Kreimer here). – Tim van Beek Jan 27 '11 at 14:36
  • 7
    Dear Tim, renormalization is not just about the subtraction of a constant (which would be called the vacuum energy or cosmological constant). It is about the subtraction of many non-constant divergent terms that depend on fields - the counterterms. The calculations based on renormalization, in any particular QFT we use and need, are fully well-defined mathematically. They're just not well-defined within a particular AQFT-like framework but the problem is with the axioms and assumptions of AQFT, not with a renormalized quantum field theory. – Luboš Motl Jan 27 '11 at 15:14
  • 3
    Tim, the project of formalizing QFT is valuable, but the particular approach of AQFT seems like trying to fit a square peg into a round hole. This is demonstrated by its inability to reproduce many structures which are used in "practical" QFT all the time. I don't think it is because "practical" QFT is problematic in any way. It may simply be that AQFT is the wrong language in which to formalize the subject. I think more modern attempts try to formalize the notion of the renormalization group, which is a more natural way to think about QFT. –  Jan 27 '11 at 16:21
  • Dear Lubosh, You are so knowledgeable! Can you give your sincere opinion on the following question: Are we bound to do renormalizations perturbatively, or can we make the right subtractions in the total Hamiltonian before doing perturbative calculations? In other words, do you think we can invent another (Renormalized) Hamiltonian which will give the same final results in a natural way, without any renormalization? – Vladimir Kalitvianski Jan 27 '11 at 16:42
  • Dear Vladimir, no, this is not possible. If you want the "same" final results, then you're talking about the equivalent theory, and it's just a physical fact that the fields in which the Hamiltonian has a simple form are infinitely (in the infinite cutoff energy limit) renormalized relatively to the fields that create measurable particles. This is a physical fact about the theory that doesn't depend on any conventions. Also, some QFTs are consistent non-perturbatively as well (QCD is, QED is not). An alternative definition - e.g. lattice QCD - is helpful to establish the nonpert. existence. – Luboš Motl Feb 06 '11 at 19:41
13

Much has been said… but I think there's a crack that I can poke.

References

First off, let me list a few of the relevant references for this topic: this way we can set the basis of the discussion and agree on a certain 'common theme of knowledge' (I understand there may be dissent and disagreement on the choice given below, but I believe that the historical relevance of the works is self-evident — this list is by no means exhaustive, and I mean no disrespect for works I may have forgotten to list). There it goes,

  1. PCT, Spin and Statistics, and All That;
  2. Local Quantum Physics: Fields, Particles, Algebras (Theoretical and Mathematical Physics);
  3. Mathematical Theory of Quantum Fields (International Series of Monographs on Physics);
  4. Finite Quantum Electrodynamics;
  5. Perturbative Quantum Electrodynamics and Axiomatic Field Theory (Theoretical and Mathematical Physics);
  6. Quantum Field Theory I: Basics in Mathematics and Physics: A Bridge between Mathematicians and Physicists (v. 1), and Quantum Field Theory II: Quantum Electrodynamics: A Bridge between Mathematicians and Physicists;
  7. Quantum Field Theory (Mathematical Surveys and Monographs);
  8. Mathematical Aspects of Quantum Field Theory (Cambridge Studies in Advanced Mathematics);
  9. Quantum Field Theory for Mathematicians (Encyclopedia of Mathematics and its Applications);
  10. Mathematical Theory of Feynman Path Integrals: An Introduction (Lecture Notes in Mathematics), and White Noise Calculus and Fock Space (Lecture Notes in Mathematics).

Comments

With that out of the way, let me make some comments. With a bit of luck I won't digress too much to the point of losing the original path…

  1. Historically, there were two movements with separate names: Axiomatic (or Algebraic) Quantum Field Theory, and Constructive Quantum Field Theory. While the former is marked by the reference (1) above, the latter is summarized in Quantum Physics: A Functional Integral Point of View. Both of them faced similar obstacles during their times: the first incarnation of Wightman's Axioms did not allow for [spontaneous] symmetry breaking (a fact later corrected by Ray Streater); while Glimm & Jaffe's book still insists that a [Feynman] Path Integral is associated to a unique quantum theory (something we know not to be true based on the Path Integral's dependence on its parameters (aka, coupling constants): different sets of parameters yield distinct QFTs). In fact, this is the very reason why I did not mix these topics (Algebraic and Constructive QFTs) in the list of refs above.

  2. Haag's Theorem (and also the Haag-Kastler theorem), historically, belongs to the field of Algebraic/Axiomatic QFT. But, later, with the further developments led by Haag and his followers, this field naturally evolved into what is presently known as Local Quantum Physics.

  3. There are ways to rigorously define Euclidean QFTs on the lattice and then take their continuum (thermodynamic) limit. But, of course, I can't remember (or retrieve from my archives) the reference. IIRC, it was a group from Boston University… but, more importantly, I know who will be able to remember this: Pedro (lqpman ;-). In any case, B. Simon has done a bunch of work in this area (namely $P(\phi)_2$ Euclidean QFT and its relation to Statistical Mechanics), and it's not difficult to convince oneself that rigorous QFTs can be defined appropriately (in different ways, using different techniques).

  4. There are several ways to think of a QFT (or Gauge Theory, as you wish) and, as such, to formulate it in a variety of rigorous ways. Let me present one which I think is somewhat intuitive and more straightforward. In every single 'real life scenario' you have a maximum available energy, i.e., a UV-cutoff (call it $\Lambda$). With this simple realization you can start to construct your ingredients more rigorously: cast your QFT on a lattice whose spacing is given by $a = 1/\Lambda$ (a minimal sketch of this construction is given right after this list). What I'm saying is that, in 'real life', we're always dealing with an Effective Field Theory of one form or another — and this is not altogether bad, for we can use its UV-scale to build the lattice on which we're going to define our theory. This is one way to do it… another way is to use this realization of a UV-cutoff but, rather than using it to define a latticized QFT, use it to define a Vertex Operator Algebra. The bottom line, in both cases, is that we're dealing with the problem of multiplying distributions (aka generalized functions) by defining some form of OPE: what lattices and VOAs do is define the particular "point splitting" that your theory "likes" (i.e., that makes your theory "well behaved"). So, what you are effectively doing (pardon the pun) is defining your QFT via a particular choice of "point splitting" (OPE), be it with the help of a lattice, be it via VOAs. At the end of the day, however, what matters is that you have, somehow, "discretized" your theory… and, as such, you can deal with it and circumvent Haag's theorem (which is only valid in the thermodynamic/continuum limit). This is what's behind the curtains.

  5. In some sense, this same discussion could be had regarding Statistical Mechanical models, e.g., the Potts Model, which is completely determined by its Transfer Matrix (analogous to the $S$ Matrix in QFT; an explicit example is given after this list). However, when you try to take this continuum/thermodynamic limit… things get complicated really quickly. Of course this is a 'technical' point, a 'mathematical detail', and so on and so forth… but I thought this was the whole point of this question… if I'm wrong, by all means, disregard these nitpicky points of mine. In any event, it is because of this limit of infinitely many degrees of freedom that QFT is more than just "S Matrix theory" or a naïve extension of Quantum Mechanics: as the saying goes, more is different. ;-)
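A minimal sketch, in code, of the lattice construction mentioned in item 4 (illustrative only: the two Euclidean dimensions, lattice size, and couplings are arbitrary choices of mine, and no claim is made about any specific construction in the references above). The point is simply that once the UV cutoff $\Lambda$ is encoded as a lattice spacing $a = 1/\Lambda$, the "path integral" becomes an ordinary finite-dimensional integral over the site variables, and the hypotheses of Haag's theorem (a continuum, infinite-volume theory) do not yet apply:

```python
import numpy as np

def lattice_action(phi, a, m2, lam):
    """Euclidean action of a real scalar phi^4 theory on a periodic 2D lattice.

    phi : 2D array of field values at the lattice sites
    a   : lattice spacing, identified with 1/Lambda (the UV cutoff)
    m2  : bare mass squared
    lam : bare quartic coupling
    """
    kinetic = 0.0
    for mu in (0, 1):  # forward finite difference in each lattice direction
        dphi = (np.roll(phi, -1, axis=mu) - phi) / a
        kinetic += 0.5 * np.sum(dphi ** 2)
    potential = np.sum(0.5 * m2 * phi ** 2 + (lam / 24.0) * phi ** 4)
    return a ** 2 * (kinetic + potential)  # a^d with d = 2

# With finitely many sites, the weight exp(-S) defines a finite-dimensional
# integral; the subtleties Haag's theorem points to only appear in the limits
# a -> 0 (continuum) and volume -> infinity.
rng = np.random.default_rng(0)
phi = rng.normal(size=(16, 16))
print(lattice_action(phi, a=0.1, m2=1.0, lam=0.5))
```

And to make the transfer-matrix analogy of item 5 explicit in the simplest case (the one-dimensional $q$-state Potts chain with $N$ sites and periodic boundary conditions; again a textbook illustration rather than a claim about the models the answer has in mind):

$$Z_N=\operatorname{Tr}\,T^{N},\qquad T_{ss'}=e^{\beta J\,\delta_{ss'}},\quad s,s'\in\{1,\dots,q\},\qquad
-\beta f=\lim_{N\to\infty}\frac{1}{N}\ln Z_N=\ln\!\left(e^{\beta J}+q-1\right).$$

The finite-$N$ object is completely elementary; all the subtlety lives in the $N\to\infty$ limit, which is exactly the point being made above about the thermodynamic/continuum limit in QFT.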

Anyway, this is getting long and it's getting late (i.e., I'm getting hungry ;-)… and my point wasn't to give a scholastic answer, but to frame this discussion into its appropriate path. I hope this helps a bit.

Ruslan
  • 28,862
Daniel
  • 4,197
  • 1
    Can you elaborate on the claim that every real-life situation has a maximum energy leading to a UV cutoff? I agree that this is true for the external lines, but why should it be true for the internal lines (which do need renormalisation)? Also, maximum energy does not imply maximum momentum for internal lines (aka virtual particles), since they are allowed to be off shell. – lalala Oct 15 '17 at 18:44
11

The original two questions deserve short clear answers. In reverse order:

2) "An example of a calculation which fails due to Haag's Theorem". Naive perturbation theory has to fail because of Haag's Theorem. And it does: when you compute the coefficients, they're mostly infinite.

1) "Why do QFT computations work, despite Haag's Theorem?" Because real QFT computations are not done in the interaction picture. Lattice QFT, as has been pointed out, doesn't use the interaction picture at all. Likewise, the LSZ formalism doesn't use the interaction picture. The only thing the interaction picture is used for is motivating an ansatz for the renormalized perturbation series. But when you switch to renormalized perturbation theory, you are actually abandoning the interaction picture, because you renormalize the field strengths.
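To spell out what "renormalize the field strengths" means in the simplest textbook case (standard $\phi^4$-theory conventions; a generic illustration rather than anything specific to this thread): one trades the bare field and couplings for renormalized ones plus counterterms,

$$\phi_0=Z^{1/2}\phi,\qquad
\mathcal{L}=\frac{1}{2}(\partial\phi_0)^2-\frac{1}{2}m_0^2\phi_0^2-\frac{\lambda_0}{4!}\phi_0^4
=\frac{1}{2}(\partial\phi)^2-\frac{1}{2}m^2\phi^2-\frac{\lambda}{4!}\phi^4
+\frac{1}{2}\delta_Z(\partial\phi)^2-\frac{1}{2}\delta_m\phi^2-\frac{\delta_\lambda}{4!}\phi^4,$$

with $\delta_Z=Z-1$, $\delta_m=m_0^2Z-m^2$, $\delta_\lambda=\lambda_0Z^2-\lambda$ fixed order by order. Since $Z\neq 1$ in the interacting theory, the field one actually expands in is not the naive interaction-picture field, which is one way to read the statement above.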

user1504
  • 16,358
  • But if the interaction picture just motivates an ansatz, what is the real justification for the renormalized perturbation series? Or are you saying we should just postulate renormalized perturbation theory? – Jia Yiyang Apr 27 '14 at 15:35
  • @JiaYiyang: Yes. If you have to work perturbatively, just postulate the renormalized perturbation series. If you've got a lattice theory or some CFT or other more fundamental definition, then the perturbation series you derive from it has to be the renormalized perturbation series. – user1504 May 02 '14 at 23:04
7

Lubosh wrote: "... perturbative QFT clearly works...". No, it fails miserably in the initial approximation ("bare" particles, no soft radiation predicted) and in the course of searching for the solutions by iteration (infinite corrections to the initial approximation). That is why there are so many questions about it!

What is comparable with experimental data is a renormalized and IR-resummed result (a "repaired solution"), which is quite different from the original solution. And even after that there are conceptual and mathematical difficulties in the theory. Besides, there are non-renormalizable theories where attempts to "repair the solutions on the go" fail hopelessly.

QFT, as a human invention, suffers from severe problems. It is very far from a desired state and needs repairing. Sometimes renormalization "works", but not always, and we are far from being able to state that "QFT has no problem". We should try other constructions. I disagree with Lubosh's statement that "this is not possible", especially since, with the help of renormalization and the summation of IR contributions, we move away from the initially wrong approximation and obtain reasonable results. I believe we may start from a better initial approximation, eliminate those problems, and arrive at the final results directly. Denying such opportunities is not wise, to say the least.

0

Haag's theorem, and the present discussion of it, touch on the difference between mathematics and physics. While mathematics is based on axioms and consists of theorems, physics is supposed to explain our experience within a restricted set of mathematics. According to this separation, Haag's theorem belongs to mathematics.

The reason is that the mathematics of physics is in principle discrete and finite. Have you ever measured something which is infinite? Have you met physical laws which apply at every scale, giving infinities a chance to appear? The naive quantum field theory formalism, cast in continuous space-time, leads to infinities only if we extrapolate the observed physical laws to infinitely short distances. The remedy for this problem is well known: the physical laws are applied only within the range where they are observed. To be more elegant, they are used in a somewhat wider range, and this widening is controlled by the short- and long-distance cutoffs, the reflection of our ignorance.
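To make this concrete with a standard one-loop example (the tadpole contribution to the self-energy in Euclidean $\phi^4$ theory with a hard momentum cutoff $\Lambda$; a generic textbook integral, not tied to any calculation discussed in this thread):

$$\frac{\lambda}{2}\int_{|k|\le\Lambda}\frac{d^4k}{(2\pi)^4}\,\frac{1}{k^2+m^2}
=\frac{\lambda}{32\pi^2}\left[\Lambda^2-m^2\ln\!\left(1+\frac{\Lambda^2}{m^2}\right)\right],$$

which is perfectly finite for every finite $\Lambda$; a divergence appears only if one insists on extrapolating the theory to infinitely short distances by sending $\Lambda\to\infty$.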

Convergence and limits are mathematical concepts; in physics they are used to keep the cutoffs out of the way and to hide the resulting distant scales in our equations. The convergence of the procedure of removing the short-distance cutoff - renormalizability - is a convenience rather than a necessity in physics: e.g., the non-asymptotically-free sectors of the Standard Model are presumably trivial and non-renormalizable, and space-time is supposed to show a granular structure down to the Planck scale.

What is needed in physics is a regulated quantum field theory which provides a mathematically well defined, contradiction-free platform for studying the observed phenomena with a flexible range of cutoffs. We have that, e.g. in lattice field theory.

Axiomatic and constructive field theory, and Haag's theorem in particular, have a very important role to play in physics: they draw attention to the precise limits of the physical sciences within the vast set of beautiful mathematical concepts.

Janos
  • 1