
After recently going through a short program of self-study in quantum mechanics, I was surprised to find a quote attributed to Feynman essentially saying he was extremely bothered by the computational process of renormalization: it's "dippy" that anyone should have to subtract one infinity from another in order to arrive at a finite answer.

What's he referring to there, in rough terms? And what's the latest in attempts to replace this computational procedure with something more physically plausible, so that the theory can have more meaning and less dippiness?

  • Yes, formally QFT is mathematically "consistent", but without renormalization and infrared effects summed, it has nonphysical solutions. It suffices to compare the calculation results with experiment. Regularized QFT gives physically and numerically wrong results. It is not a math problem but a physical-model one. I explained it in http://arxiv.org/abs/1110.3702 – Vladimir Kalitvianski Oct 25 '11 at 08:46
  • See also the citations of Dirac and Schwinger here https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B4Db4rFq72mLNzU4MWJhZTUtMTljOS00OGU0LWE1MzAtNjBhNzFlMmU0ZThk&hl=en_US – Vladimir Kalitvianski Oct 25 '11 at 08:55

4 Answers


Nonrelativistic QFT is consistent - all renormalizations are finite.

Relativistic QFT is consistent in 2 and 3 spacetime dimensions. There are various rigorous constructions of interacting local quantum field theories.

In 4D, the situation is different; not a single interacting relativistic local QFT in 4D is known. (But neither is there a no-go theorem that would forbid them.) The technical difficulties are much bigger than in 2D and 3D (where proofs are already highly nontrivial).

Nontrivial renormalization is needed in 3D and 4D. (In 2D, Wick ordering is sufficient, which simplifies things a lot.)

My tutorial paper ''Renormalization without infinities - a tutorial'' discusses renormalization, and how to avoid the divergences, at a much simpler level than quantum field theory.

Chapter B5: Divergences and renormalization of my theoretical physics FAQ discusses some of the questions that are more specific to quantum field theories. In particular, there is a Section ''Is there a rigorous interacting QFT in 4 dimensions?'' with references to the state of the art.


Just as your quote says, infinity turns up as part of the answer in almost any simple-minded calculation in quantum field theory that goes past the lowest level of approximation. This doesn't affect quantum mechanics, but it's been there in the mathematics of QFT since the late 1920s. Feynman, together with a few other people at the end of the 1940s, introduced a less simple-minded way to calculate in QFT that sort-of-bypasses the infinities, but no-one who is mathematically inclined could feel very comfortable with the way it was done at that time.

Many physicists, however, perhaps most, are content nowadays with the mathematics of what is called the renormalization group. The Wikipedia page will give you a taste of this, but I doubt anyone is going to be able to give you a short tutorial on the subject that will make you very happy. The renormalization group lets one organize the calculations rather more nicely, and it draws on lattice methods from classical statistical physics, which I think contributed to physicists feeling relatively comfortable with the mathematics.
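As a toy illustration (my sketch, not part of the original answer): for the 1D Ising chain the renormalization-group step can be done exactly. Summing out every other spin maps the nearest-neighbour coupling $K$ to $K' = \frac{1}{2}\ln\cosh 2K$, and iterating this exhibits the flow of couplings that the renormalization group organizes.

```python
import math

def decimate(K):
    """One exact block-spin (decimation) step for the 1D Ising chain:
    tracing over every other spin maps the coupling K -> K'."""
    return 0.5 * math.log(math.cosh(2.0 * K))

# Iterate the RG map and watch the coupling flow.
K = 2.0
flow = [K]
for _ in range(8):
    K = decimate(K)
    flow.append(K)

# The coupling shrinks monotonically toward 0: the 1D chain has no
# finite-temperature phase transition, so the only fixed points of
# the flow are K = 0 (trivial) and K = infinity.
```

In 2D and higher the analogous map cannot be done exactly, which is where the real machinery of the renormalization group comes in; but the bookkeeping is of this kind.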

There's definitely a group of people who think the current mathematics is still "dippy", but no-one has yet produced a serious alternative that is widely acknowledged. There are also less disgruntled efforts to improve the mathematics more or less incrementally; one strand uses Hopf algebras, quite abstract mathematics that no-one could call careless. One always wants improvement.

Peter Morgan
  • It's not all the mathematically inclined who are bothered, but only the smaller subset of them for whom rigorous logic, strict axioms, and careful proofs have the highest importance. – DarenW Oct 24 '11 at 22:29
  • Fair enough DarenW. Many, most, some, ...; I rarely get the logic of those right with a broad brush. Renormalization and its malcontents is a very convoluted story. – Peter Morgan Oct 24 '11 at 22:53
  • And there are still mathematical issues: no one is sure whether or not perturbation series converge, even if renormalization is consistent to an arbitrarily high $n^{th}$ order, and much of what we know about QFT is still from the perspective of perturbation theory. – Zo the Relativist Oct 25 '11 at 18:35
  • Dirac was notorious for not caring a bit about proofs. Or strict definitions, either. (The man who invented the Dirac delta function!) But he was very much bothered by renormalisation, and this was not changed by the renormalisation-group idea. This should be taken as qualifying @DarenW's comment. – joseph f. johnson Dec 31 '11 at 05:56

Feynman is referring to the problem of showing that Quantum Electrodynamics is mathematically consistent, which will be tricky, because it almost certainly isn't. The methods of Feynman and Schwinger showed that the perturbation theory of quantum electrodynamics is fully consistent, order by order, but the theory itself was convincingly argued to be no good by Landau. Landau's argument is that any charge is screened by virtual electrons and positrons, so that the bare charge is bigger than the charge you see. But in order to get a finite renormalized charge, the bare charge has to go to infinity at some small, nonzero, distance scale. The argument is not rigorous, but an exactly analogous thing can be seen to happen numerically in the Ising model.
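Landau's screening argument can be made quantitative at one loop. The sketch below is my illustration (standard one-loop QED running with a single charged fermion, not taken from the answer): the inverse coupling $1/\alpha$ falls logarithmically with energy, and the scale at which it would vanish is the Landau pole, where the bare charge blows up.

```python
import math

ALPHA_0 = 1.0 / 137.036   # measured fine-structure constant at low energy
M_E = 0.000511            # electron mass in GeV (the reference scale)

def inv_alpha(mu):
    """One-loop QED running of 1/alpha with energy scale mu:
    vacuum polarization screens the charge, so the effective
    coupling grows (1/alpha shrinks) at short distances."""
    return 1.0 / ALPHA_0 - (2.0 / (3.0 * math.pi)) * math.log(mu / M_E)

# The scale where 1/alpha would hit zero, i.e. the Landau pole.
# It is absurdly far above any physical scale, but finite.
landau_pole = M_E * math.exp(1.5 * math.pi / ALPHA_0)
```

The argument is not rigorous, as the answer says, but the one-loop formula already shows why a finite renormalized charge seems to force an infinite bare charge at a small, nonzero distance.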

The methods of Kadanoff, Wilson, Fisher and others make it clear that there is a path to defining (bosonic, real-action) quantum field theory which is completely fine. This method identifies the continuum limit of quantum fields with a second-order phase transition in the parameter space of a Euclidean lattice action, and all properties of the continuum limit are determined by tuning the parameters close enough to the transition.
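A deterministic toy version of this (my sketch, using the exactly solvable 1D Ising chain rather than a genuinely critical higher-dimensional model): as the coupling is tuned toward its critical value, the correlation length measured in lattice units diverges, which is exactly what allows a continuum limit to emerge from a lattice.

```python
import math

def correlation_length(K):
    """Exact correlation length, in lattice units, of the 1D Ising
    chain with nearest-neighbour coupling K = J/kT.  It diverges as
    K is tuned toward the (zero-temperature) critical point."""
    return -1.0 / math.log(math.tanh(K))

# Tuning toward criticality: the correlation length in lattice units
# blows up, so holding the physical correlation length fixed forces
# the lattice spacing to zero -- the continuum limit.
lengths = [correlation_length(K) for K in (0.5, 1.0, 2.0, 4.0)]
```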

This path, however, has not been made rigorous yet, and likely requires a few new mathematical ideas to prove that the limit exists. The new ideas are being formulated now, and there is some disagreement over what they are. What follows is my own opinion.

Free fields and measures

To define free field theory is trivial--- you pick every Fourier mode of the field to be a Gaussian random variable with variance equal to the inverse propagator. That's it. There's nothing more to it. (For abelian gauge theory you need to fix a gauge, but that's fine; nonabelian gauge theory is never free.)
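That recipe can be written out in a few lines. The sketch below is my own illustration (a free massive scalar on a small periodic 1D lattice, with the lattice propagator standing in for the continuum one): each Fourier mode is an independent Gaussian whose variance is the inverse propagator.

```python
import math
import random

def sample_free_field(N=32, m=1.0, seed=0):
    """Draw one configuration of a free scalar field on an N-site
    periodic 1D lattice: each Fourier mode is an independent
    Gaussian with variance 1/(k_hat^2 + m^2)."""
    rng = random.Random(seed)
    phi = [0.0] * N
    for n in range(N):
        k = 2.0 * math.pi * n / N
        khat2 = 2.0 * (1.0 - math.cos(k))          # lattice momentum squared
        sigma = math.sqrt(1.0 / (khat2 + m * m))   # mode standard deviation
        a = rng.gauss(0.0, sigma)                  # cosine amplitude
        b = rng.gauss(0.0, sigma)                  # sine amplitude
        for x in range(N):
            phi[x] += (a * math.cos(k * x) + b * math.sin(k * x)) / math.sqrt(N)
    return phi
```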

Already here there is a problem. Mathematicians do not allow random picking algorithms to define a measure on infinite-dimensional spaces, because if you are allowed to pick at random inside a set, every subset acquires a measure, and this contradicts the axiom of choice. Mathematicians want to keep choice, so they do not allow natural measure theory, and there's no reason to go along with this kind of muddleheadedness on their part.

The principle: If you have a stochastic algorithm which works to pick a random element of a set S, then this algorithm suffices to define a measure on S, and every subset U of S has measure, equal to the probability that the algorithm picks an element of U.

This principle fails within standard set-theoretical mathematics even for the most trivial random process: flipping coins to uniformly pick the digits of a random number between 0 and 1. The probability that this number lands in a "non-measurable set" is ill defined. This is nonsense, of course; there are no "non-measurable sets", and the picking process proves this. But in order to make the argument rigorous, you have to reject the axiom of choice, which constructs them. The random picking argument is called "random forcing" within mathematics, and outside of random forcing models, probability is convoluted and unnatural, because you have to deal with non-measurable sets.
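Here is that trivial random process spelled out (my sketch): flip fair coins for the binary digits of a number in $[0,1)$, and take the hit frequency of a subset as its measure. For any set you can actually describe, the frequency converges to the familiar Lebesgue measure.

```python
import random

def random_real(rng, bits=53):
    """Pick a real in [0, 1) by flipping 'bits' fair coins for its
    binary digits -- the random picking algorithm of the text."""
    x = 0.0
    for i in range(1, bits + 1):
        if rng.getrandbits(1):
            x += 2.0 ** -i
    return x

# Estimate the measure of U = [0, 1/3) as the probability that the
# picking algorithm lands in U; it converges to the Lebesgue measure 1/3.
rng = random.Random(1)
trials = 20000
hits = sum(random_real(rng) < 1.0 / 3.0 for _ in range(trials))
estimate = hits / trials
```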

Interacting fields

For interacting fields, the required result is that there are only finitely many repelling local directions in the space of actions near the second order transition under rescaling (renormalization group transformations). This theorem is difficult to prove rigorously. The heuristic arguments can be turned into proofs only in certain cases.

The construction of interacting theories in the literature is mostly restricted to resummations of perturbation theory, and is useless. Resumming perturbation series is in no way going to work for theories with nontrivial vacua (at least not the way people do it).

  • I'm tempted to upvote this, because I think it's awesome. However, I think this sort of answer is a complete "whoosh" for the questioner. It's not really an answer so much as a commentary or meditation for those who already know the basic mechanics of the calculations. – genneth Oct 25 '11 at 09:43
  • To continue the meditation a little, I'm currently coming up against non-separable Hilbert spaces in a sort-of-nonlinear approach to constructing interacting QFTs, which introduce some of the same troubles. The language here doesn't mesh very well with my ways of thinking, Ron, but it has an allusive attractiveness. +1 or not +1, TITQ. – Peter Morgan Oct 25 '11 at 11:41
  • I did not understand your problem of "picking", probably because of my illiteracy in mathematics. I just want to point out that those Fourier modes carry energy-momentum and are observable as such in experiments. These modes are populated in a discrete manner (occupation numbers are discrete). The interacting theory is about changes of these occupation numbers in the course of interactions. It is true that we cannot write down a simple theory containing only calculations; we get into trouble and look for the reason everywhere but in our physical models. – Vladimir Kalitvianski Oct 25 '11 at 15:42
  • @Peter: I know you have a weird idea that interacting fields are nonlinear on test functions. They aren't. Whether you give me +1 or -40, the ideas here are still correct. – Ron Maimon Oct 25 '11 at 20:12
  • @Vladimir: The notion of "picking" is "picking a real number at random". This is ill defined in modern set theory, because mathematicians chose a bad convention 100 years ago. Time to change. The idea you have that only the particle occupation representation, not the field Lagrangian, is physical is what gets you in trouble. To learn modern field theory, simulate the Ising model in 3d, and see what happens near the critical point--- you can define average fields and see the continuum limit emerge. – Ron Maimon Oct 25 '11 at 20:15
  • @RonMaimon: Ron, thank you for explaining the meaning of "picking". – Vladimir Kalitvianski Oct 25 '11 at 21:03
  • @Ron Sadness, but only one of many ideas. If there were only what is in my Q&As here, that'd not be good. I'm always looking at others, as I'm pretty sure you also are. The idea I'm referring to above is only of the last few days, and it'll likely fall apart as always. Gotta enjoy that. – Peter Morgan Oct 25 '11 at 21:11
  • @Peter: I am sorry--- I didn't say it properly--- it's not a bad idea, and I am just giving a knee-jerk reaction--- you shouldn't reject your own insights so quickly based on some stupid knee-jerk feeling of someone else's--- I am wrong a lot. I believe that the idea you had is that if you have a nonlinearity in the dependence on the test functions, then you could define nonlinear products of fields, because you could define a new nonlinear function on test functions corresponding to the pointwise product. I think this is not the right way only because the test functions are formal... – Ron Maimon Oct 25 '11 at 23:31
  • Their role is just to allow divergent pointwise limits, so long as the integral is finite. But the problem of defining products of fields remains--- the product is defined in a regularization, and as you take the continuum limit, you are changing the subtraction constants to define the product. This is similar to saying that the function on test functions is non-linear, but only if you interpret the test function as a regulator. The usual formalism is the operator product expansion. I think it is unwise to do regularization by saying smearing by test function, but if you do this,then maybe. – Ron Maimon Oct 25 '11 at 23:37
  • What is the downvote about? – Ron Maimon Dec 29 '11 at 18:35
  • -1 for missing the point of the question and throwing up a smoke-screen barrage. The query was about « mathematical consistency », and although that might mean different things to different people, you say nothing specific about it. – joseph f. johnson Dec 30 '11 at 03:11
  • I can be more specific still. a) As above, it is not an appropriate answer to the question, since nothing you mention addresses the issue of mathematical consistency. b) It shows no work, mere common-room ranting; there is no web reference to anything which would help the OP understand what you are boosting. c) It's wrong and seriously misleading to talk about random variables here at all. If not for all three, I would let it pass. Add a web reference at the OP's level (the 'wrongness' by itself wouldn't matter so much) to somewhere proving the renormalisation group is foundationally consistent. – joseph f. johnson Dec 30 '11 at 04:05
  • @Joseph f. Johnson: Ok, I see, thank you for the explanation. You should know that the problem of constructing no-quark QCD, or scalar field theory, is known to be equivalent to the problem of constructing a probability measure on continuous fields. The equivalence of random variables to quantum fields is a foundation of modern quantum field theory. The issue is obscured by mathematicians' conventions for measures, which make constructing even the trivial free measure nontrivial. There is no web reference, because this is my own approach, you are entitled to your opinion, but it is not valid. – Ron Maimon Dec 30 '11 at 09:38

The problem is not in subtracting one infinity from another; to be exact, there is no other infinity. What they call "another infinity" is a particle mass, $m_e$ for example. It was never infinite. In certain units it is even equal to unity. $m_e$ enters the kinetic term in the non-relativistic approximation.

It is a specific "development" of the theory (a self-action ansatz) that produces a large kinetic term $\propto \delta m$ in addition to the regular one. It was clear from the start that such a development was wrong. However, if one discards this additional large kinetic term (and hence part of the development's ideology), the results turn out better. To save the whole ideology, they started to call $m_e$ infinite and negative, so that together with the good-for-nothing large kinetic term it would "give" the right value $m_e$: $m_e+\delta m=m_e$. You see, calling the $m_e$ on the left-hand side infinite is a mental invention to legitimize this large kinetic term and its "self-action" ideology. The same goes for the "vacuum polarization" effect: it is a property of a bad (self-acting) theory or, better, a property of the people interpreting a bad theory's results.

They invented the notion of "bare" particles: particles "before" interaction. It is their bare masses and charges that are infinite. Not only infinite, but also cutoff-dependent, as if the bare particles knew something about the interaction cutoff. Such logic is completely fallacious.

The one-parametric renormalization group is the "liberty" to choose any cutoff value $\Lambda$ in the sum $m_{bare}(\Lambda)+\delta m(\Lambda)=m_e$, since the sum is forced to equal $m_e$ anyway (and similarly in the "expression" $e_{bare}(\Lambda)+\delta e(\Lambda)=e_{Physical}$, or whatever). In other words, whatever $\Lambda$ is, if one discards the term $\delta m(\Lambda)$ in the sum $m_e+\delta m(\Lambda)$, one recovers the right inertial properties of the particle. The $\Lambda$-independence of $m_e$ is called "universality". Now you understand what is dippy: discarding is not mathematics, not a calculation in the proper sense. It is replacing the theory's results with "what they should be".
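Whatever one thinks of the ideology, the bookkeeping itself is easy to display. In this toy sketch (my illustration; the logarithmic form mimics the leading QED self-energy, and the numbers are only for show) the bare mass is tuned at each cutoff so that the observable sum never changes:

```python
import math

M_PHYS = 0.511          # observed electron mass in MeV
ALPHA = 1.0 / 137.0     # toy coupling constant

def delta_m(cutoff):
    """Toy self-energy correction, logarithmically divergent in the
    cutoff, modelled on the leading QED behaviour."""
    return M_PHYS * ALPHA * math.log(cutoff / M_PHYS)

def m_bare(cutoff):
    """Bare mass chosen, cutoff by cutoff, so that
    m_bare(cutoff) + delta_m(cutoff) reproduces the observed mass."""
    return M_PHYS - delta_m(cutoff)

# The physical mass is the same for every cutoff ("universality"),
# while the bare mass alone drifts with the cutoff.
sums = [m_bare(L) + delta_m(L) for L in (10.0, 1e3, 1e6)]
```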

It has long been very difficult even to discuss alternative physical models, because of resistance from renormalizers. Renormalizers insist on the uniqueness of the QFT constructions and are very afraid of other ideas.