
It is generally accepted that the radius of convergence of perturbation series in quantum field theory is zero.

E.g. 't Hooft in "Quantum Field Theory for Elementary Particles":

"The only difficulty is that these expansions will at best be asymptotic expansions only; there is no reason to expect a finite radius of convergence.”

Or Jackiw in "The Unreasonable Effectiveness of Quantum Field Theory":

"Quantum field theoretic divergences arise in several ways. First of all, there is the lack of convergence of the perturbation series, which at best is an asymptotic series."

The main argument for this comes from Dyson (see F. J. Dyson, "Divergence of perturbation theory in quantum electrodynamics", Phys. Rev. 85 (1952) 631–632).

Now, in "Can We Make Sense Out of Quantum Chromodynamics?" t' Hooft introduced a transformation (now called t' Hooft transformation, or t' Hooft renormalization scheme) that replaces the coupling constant $g$ with $g_R$ such that $\beta(g_R) =a_1 g_R+ a_2 g_R^2$. In other words, the beta function for the new parameter $g_R$ has only two terms and not infinitely many as the series for $g$.

Isn't this in contradiction with the claim that the radius of convergence is zero? Or is the series for $g_R$ still divergent although it contains only two terms? If so, where can we observe the divergence after the 't Hooft transformation?

jak
  • Maybe you could review the asymptotic series definition? The whole point of an asymptotic series is that a truncation will be as accurate an approximant as you wish for sufficiently small g, even if it is outside the strict domain of convergence (here, null!). So, implicitly, the series in g is truncated to a finite order for g. Then, it is converted to a binomial in $g_R$, which is not a series, so how could you expect to see divergence of a series? No, you will not observe the θ singularity in perturb. th. – Cosmas Zachos Jun 07 '17 at 19:36
  • @CosmasZachos Thanks for your comment. I think I understand the definition of an asymptotic series. It yields a great approximation for the correct result up to some given order $\approx 1/g$. This is where non-perturbative effects become important and hence the series no longer converges. My problem is the following: We start with $\sum_n c_n \alpha^n $ which yields infinity if we take all terms into account. Now after the 't Hooft transformation the exact same sum has only two terms. I'm wondering, where the infinity is now hidden, if it still exists. – jak Jun 08 '17 at 08:12
  • You transform a truncated series to a binomial, not a divergent series to one. – Cosmas Zachos Jun 08 '17 at 10:10
  • @CosmasZachos As far as I know, we are not transforming a truncated series, but the complete infinite series. This infinite series is divergent and yet miraculously becomes a binomial... – jak Jun 09 '17 at 08:24
  • I suppose it is easier to visualize a zero of a finite polynomial, as opposed to a zero of an infinite divergent series. – Cosmas Zachos Jun 12 '17 at 14:38

2 Answers


There is no contradiction. Take an observable $O$ that depends on a coupling constant $x$: $$ O=f(x) $$ for some function $f$. Say we compute $O$ in perturbation theory, $$ f(x)=f_0+f_1x+\cdots $$

We may furthermore define a new "coupling constant" $$ \chi(x)\equiv f(x) $$ so that $O$ is exact: $$ O(\chi)=\chi $$

The two situations are perfectly consistent: $O$ is asymptotic when expressed in terms of $x$, and one-loop exact when expressed in terms of $\chi$. The trick is that $\chi$ is itself a function of $x$ which, when calculated in perturbation theory, is in fact asymptotic.
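To see the mechanism numerically, here is a minimal sketch (a standard toy integral, used purely for illustration and not part of the argument above): take $f(x)=\int_0^\infty \frac{e^{-t}}{1+xt}\,dt$, whose expansion $\sum_n n!\,(-x)^n$ has zero radius of convergence, and define $\chi\equiv f(x)$, so that $O(\chi)=\chi$ is exact while the series in $x$ diverges.

```python
# Toy illustration (Euler-type integral, not from the answer itself):
#   f(x) = \int_0^\infty e^{-t}/(1 + x t) dt
# has the divergent asymptotic expansion f(x) ~ sum_n n! (-x)^n.
# With chi := f(x), the observable O(chi) = chi is exact in the new coupling.
import math
import numpy as np
from scipy.integrate import quad

x = 0.2  # the "old" coupling constant

# Exact value of the observable; this is also the "new" coupling chi.
chi, _ = quad(lambda t: np.exp(-t) / (1.0 + x * t), 0.0, np.inf)
print(f"exact O = chi = {chi:.6f}")

# Partial sums of the asymptotic series in the old coupling x.
partial = 0.0
for n in range(21):
    partial += math.factorial(n) * (-x) ** n
    if n in (2, 5, 10, 15, 20):
        print(f"order {n:2d}: partial sum = {partial:+.4f}")
# The partial sums approach chi up to order ~ 1/x and then blow up:
# asymptotic in x, one-term exact in chi.
```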

In QFT, most (but not all) observables are asymptotic when expressed in terms of the usual coupling constants (e.g., the on-shell or $\overline{\mathrm{MS}}$ coupling constants). But it is perfectly consistent to define new "coupling constants" in terms of the old ones such that the observables become exact when expressed in terms of the former instead of the latter.

To quote Weinberg (Vol.II, §18.3),

In fact, it is always possible to choose [the new coupling constant $\tilde g$ as a function of the old one $g$] so that all the terms in $\tilde\beta(\tilde g)$ of higher than third order in $\tilde g$ vanish, so we can describe the asymptotic behaviour of $\tilde g(E)$ for $E\to\infty$ by inspection of the first two terms in the perturbation series for $\beta(g)$. But this is of little value, since we would need to carry our calculations to all orders to determine how $g$ depends on $\tilde g$, and without this we cannot use our knowledge of the asymptotic behavior of $\tilde g$ to say anything about the asymptotic behavior of $g$, or of physical quantities.

AccidentalFourierTransform

Just a small addendum to AFT's excellent answer. "'t Hooft's clever trick" here belongs to a very old subject in mathematics: the conjugation of dynamical systems to their linear approximation at a fixed point or, more generally, to some normal form. It is as old as the method of variation of constants in the solution of ODEs. There is a version of the theory for transformations, i.e., discrete dynamical systems $X_{n+1}=F(X_n)$, and also one for flows $\frac{dX}{dt}=F(X)$. I will only discuss the flow case.

The dependent variable $X$ is assumed to belong to a vector space $V$. Say zero is a fixed point, $F(0)=0$, and the function $F$ is defined and nice (analytic) around zero. The particular situation considered by 't Hooft is one-dimensional: $V=\mathbb{R}$, $t=\log \mu$, $X=x=g_D^2$, and $$ F(x)=-\beta_1 x^2+\beta_2 x^3+\beta_3 x^4+\cdots $$

To solve the conjugation problem, the first order of business is to construct the new variable $g_R^2=\psi(x)$ as a formal power series in the old one. This is not entirely trivial in the presence of zero eigenvalues and resonances. Typically, one can get rid of non-resonant monomials, but some monomials cannot be eliminated; I suppose this is the case for $\beta_2 x^3$. Then one needs to construct $\psi(x)$ as an actual function of $x$, not just a formal series, and that depends on the desired regularity. In the continuous setting the basic result is the Hartman-Grobman Theorem. In the $C^k$ setting, the main result is Sternberg's Theorem. In the analytic setting (having to do with the convergence of the formal power series) things are quite dicey because of small-divisor problems. Search for keywords like: Poincaré-Koenigs Theorem, Poincaré-Dulac normal forms, etc.
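For what it's worth, the formal first step can be carried out mechanically to any finite order. Below is a small sketch of my own (the truncation order and symbol names are illustrative choices, not part of the answer): it imposes the conjugation condition $\psi'(x)\,F(x) = G(\psi(x))$ with the target normal form $G(y)=-\beta_1 y^2+\beta_2 y^3$ and solves for the coefficients of $\psi$ order by order.

```python
# Order-by-order construction of the formal conjugation psi (illustrative
# sketch; working modulo x**N with a low truncation order).
import sympy as sp

N = 6  # keep terms up to x**(N-1)
x = sp.symbols('x')
b1, b2, b3, b4 = sp.symbols('beta1 beta2 beta3 beta4')
c2, c3, c4 = sp.symbols('c2 c3 c4')

F = -b1*x**2 + b2*x**3 + b3*x**4 + b4*x**5   # original flow dx/dt = F(x)
psi = x + c2*x**2 + c3*x**3 + c4*x**4        # ansatz for the new variable y = psi(x)
G = -b1*psi**2 + b2*psi**3                   # target: dy/dt = -beta1*y^2 + beta2*y^3

# Conjugation condition: psi'(x) * F(x) = G(psi(x)), order by order in x.
diff_poly = sp.expand(sp.diff(psi, x)*F - G)
eqs = [diff_poly.coeff(x, n) for n in range(2, N)]
eqs = [e for e in eqs if e != 0]             # drop trivially satisfied orders

print(sp.solve(eqs, [c3, c4], dict=True))
# One finds that c2 is left undetermined (the first two beta coefficients
# cannot be changed), while c3, c4, ... are fixed recursively in terms of
# the higher beta coefficients, beta1, and the free c2.
```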