
While studying renormalization and the renormalization group I felt that there wasn't any completely satisfying physical explanation that would justify those methods and the excellent results they produce. Looking for some clarity, I began to study Wilson's approach to renormalization. While I gained a lot of insight into how a QFT works, the role of quantum fluctuations, and so on, I could not find a direct, clear connection between the "standard" approach and the Wilsonian one. I'll try to be more specific:

To my understanding, the Wilsonian approach says, very roughly, the following: given a quantum field theory defined with a natural cutoff $\Lambda$ and quantized via path integrals (in Euclidean space-time) $$W=\int \mathscr D\phi_{\Lambda} \; e^{-S[\phi]}$$ it is possible to study the theory at a scale $\Lambda_N<\Lambda$ by integrating out, in an iterative fashion, the high-momentum modes of the field. Each round of integration can be viewed as a flow of the parameters of the Lagrangian, whose form is constrained only by symmetry principles. For example, starting from $$\mathscr L_0=a_0\,(\partial_{\mu}\phi)^2+b_0\,\phi^2+c_0\,\phi^4$$ we will end up with something like $$\mathscr L_N=\sum_n a_n\,(\partial_{\mu}\phi)^n+\sum_n b_n\,\phi^n+\sum_{n,m}c_{nm}\,(\partial_{\mu}\phi)^n\phi^m$$ where the new parameters $a_n$, $b_n$, $c_{nm}$ have evolved from the original parameters via relations that depend on the cutoff in some way. From dimensional analysis we understand that the operators corresponding to these parameters organize themselves into three categories — relevant, marginal and irrelevant — which correspond to super-renormalizable, renormalizable and non-renormalizable respectively. Then there is the discussion of fixed points and everything else needed to have a meaningful perturbative expansion, etc.
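To make the flow of a coupling under successive mode elimination concrete, here is a minimal numerical sketch. It truncates the full flow above to the single $\phi^4$ coupling and uses its standard one-loop beta function in $d=4$; the function names, step counts and starting value $0.1$ are all illustrative choices, not anything from the question.

```python
import math

# Toy Wilsonian flow: the one-loop beta function of the phi^4 coupling
# in d = 4, truncated to this single coupling (every other operator in
# L_N above is dropped).  An illustrative sketch, not the full flow.
def beta(lam):
    # d(lambda)/d(ln mu) at one loop for a lambda * phi^4 interaction
    return 3.0 * lam**2 / (16.0 * math.pi**2)

def integrate_out(lam_uv, t, steps=10000):
    """Euler-integrate the flow from the cutoff Lambda down to
    Lambda_N = Lambda * exp(-t); the coupling shrinks toward the IR."""
    lam, dt = lam_uv, t / steps
    for _ in range(steps):
        lam -= beta(lam) * dt  # flowing toward the IR decreases lambda
    return lam

# Start with lambda = 0.1 at the cutoff and integrate out 5 e-folds.
lam_ir = integrate_out(0.1, 5.0)
```

Since the one-loop beta function is positive, the coupling at $\Lambda_N$ comes out slightly smaller than the bare one, matching the qualitative picture of the flow described above.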

My question(s) is (are):

How do I fit into a single framework the Wilsonian approach, in which the relations are between the parameters at the scale $\Lambda_N$ and those of the Lagrangian $\mathscr L_0$, and the renormalization group flow describes those changes of scale, with the "standard" approach, in which we take $\Lambda\rightarrow+\infty$, relate the bare parameters of the theory $g_0^i$ to a set of parameters $g_i$ via renormalization prescriptions at a scale $\mu$, and then control how the theory behaves at different energy scales using the Callan-Symanzik equation?

How different are the relations between the parameters in the Wilsonian approach and those in the "standard" approach? Are they even comparable?

What is the meaning (especially in the Wilsonian approach) of sending $\Lambda$ to infinity, apart from getting rid of the non-renormalizable terms in the theory?

Does, in the standard approach, a renormalization prescription that experimentally fixes the parameters $g_i$ at a scale $\mu$ give essentially the same result as integrating from $\Lambda\rightarrow +\infty$ down to the scale $\mu$ in the Wilsonian approach?

I'm afraid I have some confusion here, any help would be appreciated!


1 Answer


I think the confusion is due to a lack of mathematically precise definitions: what is a quantum field theory? What is one trying to construct, and how? There are a lot of vague notions used in the physics literature — the partition function (which does not make much sense in infinite volume), the effective action, etc. — but the bottom line is the collection of all correlation functions of the theory. These (in the Euclidean setting) should be honest Schwartz distributions with singular support contained in the big diagonal (for an $n$-point function in $d$ dimensions this is the subspace of $\mathbb{R}^{nd}$ where some of the $n$ points coincide). The goal of the RG, Wilsonian or "standard", is to have such correlations converge in the sense of distributions when one removes the cut-off. To understand how this works in a precise manner, you can read the short article "QFT, RG, and all that, for mathematicians, in eleven pages" that I wrote recently.


A much more detailed account of what I tried to explain in the comments below is here: Wilsonian definition of renormalizability

  • Thank you for the answer! Just to clarify: I have some understanding of the goal of these approaches and of the problems arising from products of operator-valued distributions etc., but I was more interested in the interplay between the two approaches than in how to rigorously implement them. Still, it is perfectly possible that, since the article was, I'm afraid, beyond the scope (and level) of my questions and I didn't completely understand it, the answers are there and I simply couldn't see them. If you could shed some light on it, it would be most appreciated, thank you. – Fra May 20 '15 at 10:00
    The article I recommended is not beyond the scope of your questions above, it exactly addresses them. Also, it does not talk about products of operator valued distributions. Essentially, the RG a la Wilson is a map from the theory $T(\Lambda)$ (or collection of couplings) at the UV scale to the effective theory at the IR scale $T(\Lambda_N)=RG[T(\Lambda)]$. In the "standard" RG you want to fix $T(\Lambda_N)$ (or rather finitely many coordinates of $T(\Lambda_N)$) and pick $T(\Lambda)$ appropriately so that $RG[T(\Lambda)]$ converges in the $\Lambda\rightarrow\infty$ limit... – Abdelmalek Abdesselam May 20 '15 at 14:37
    ...this is a backwards shooting problem. If you have an ODE to solve, this means fixing the value at $t=0$ and trying to figure out where you should start at $t=-\infty$ in order to arrive where you want at time $t=0$. To do this, you need the ODE; otherwise this is all idle talk. The ODE in our context is Wilson's RG. – Abdelmalek Abdesselam May 20 '15 at 14:40
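The backwards shooting problem described in this comment can be sketched numerically: fix the effective coupling at the IR scale and search for the bare coupling at the UV cutoff that flows into it. The sketch below reuses the one-loop $\phi^4$ flow as a stand-in for "the ODE"; the bisection bracket, step counts and target value are illustrative assumptions.

```python
import math

# Backwards "shooting": fix the coupling at the IR end of the flow and
# bisect on the bare UV coupling until the flow lands on the target.
def flow_to_ir(lam_uv, t, steps=10000):
    """Euler-integrate the toy one-loop phi^4 flow over t e-folds."""
    lam, dt = lam_uv, t / steps
    for _ in range(steps):
        lam -= 3.0 * lam**2 / (16.0 * math.pi**2) * dt
    return lam

def shoot(lam_ir_target, t, tol=1e-10):
    """Bisection on lam_uv: flow_to_ir is monotone in the bare coupling,
    so the target IR value pins down a unique UV starting point."""
    lo, hi = 0.0, 10.0  # bracket for the bare coupling (assumed)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flow_to_ir(mid, t) < lam_ir_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Which bare coupling at the cutoff renormalizes to 0.1 after 5 e-folds?
lam_bare = shoot(0.1, 5.0)
```

The bare coupling found this way is slightly larger than the IR target, since the toy flow shrinks the coupling toward the infrared; "removing the cutoff" corresponds to repeating this tuning with ever larger $t$.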
    No problem. Another place where you can find a clear treatment of perturbative renormalization is this article by Salmhofer: www.physik.uni-leipzig.de/~salmhofer/hesselberg.pdf – Abdelmalek Abdesselam May 21 '15 at 15:22
    continuing on my previous comment with $t=0$ and $t=-\infty$ the constraints to satisfy are as follows. Suppose the list of all couplings is $g_1,g_2,\ldots$ and suppose $g_1,\ldots,g_r$ are the relevant/marginal ones whereas $g_{r+1},\ldots$ are the irrelevant ones. At $t=-\infty$ (the UV) you want to impose $g_{i}=0$ for $i>r$, i.e., you want to be on the bare surface. At $t=0$ (say the effective theory at "anthropic" scale) you want to fix $g_1,\ldots,g_r$ (or force them to converge when $\Lambda\rightarrow\infty$). The miracle is that all other couplings will converge at $t=0$. – Abdelmalek Abdesselam May 21 '15 at 15:29
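The "miracle" in this last comment — start on the bare surface at the UV, fix the relevant coupling at $t=0$, and the irrelevant couplings converge on their own as the cutoff is removed — can be illustrated with a two-coupling linear toy flow. Everything here (eigenvalues $\pm 1$, the $g_1^2$ source term, the value $0.1$) is made up purely for illustration.

```python
import math

# Toy flow with one relevant coupling g1 and one irrelevant coupling g2
# sourced by g1^2.  Time t runs from the UV (t = -T) to the IR (t = 0);
# larger T plays the role of a larger cutoff Lambda.
def flow_to_zero(T, steps=200000):
    dt = T / steps
    g1 = 0.1 * math.exp(-T)  # tuned so the relevant coupling hits 0.1 at t = 0
    g2 = 0.0                 # bare surface: irrelevant coupling switched off at the UV
    for _ in range(steps):
        g1 += g1 * dt              # dg1/dt = +g1   (relevant: grows toward the IR)
        g2 += (-g2 + g1**2) * dt   # dg2/dt = -g2 + g1^2 (irrelevant, sourced by g1)
    return g1, g2

# Removing the cutoff: g1(0) stays fixed by construction, while g2(0)
# approaches a finite limit instead of depending on T.
for T in (2.0, 5.0, 10.0):
    g1_ir, g2_ir = flow_to_zero(T)
```

For this linear toy model the limit is computable by hand, $g_2(0)\to g_1(0)^2/3$, so one can check that the numerically integrated $g_2(0)$ stops moving as $T$ grows — the irrelevant coupling at the IR scale converges even though nothing was tuned for it at the UV.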