
I am wondering whether it is guaranteed that there are enough counterterms to renormalize a renormalizable theory (i.e. one in which every coupling constant has non-negative mass dimension). Through methods such as power counting, one finds that there are a certain number of divergent parameters, which must be absorbed into the free parameters of the Lagrangian using counterterms. However, in the examples I have seen, there just happen to be exactly as many counterterms as divergent parameters to make this possible.

For example, in $\phi^4$ theory (neglecting vacuum diagrams), there are two divergent numbers in the two-point function and one in the four-point function, which can be absorbed into the bare mass, the bare coupling constant, and the field strength renormalization. Similarly, there are four divergent numbers in QED, which are absorbed into the electron mass and charge, and the electron and photon field strength renormalizations.
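For reference, here is a sketch of the standard power counting that produces these numbers (for $\phi^4$ theory in four spacetime dimensions): a 1PI diagram with $E$ external legs has superficial degree of divergence

$$D = 4 - E,$$

independent of the number of loops. Only $E = 2$ (where $D = 2$: two divergent coefficients in a Taylor expansion in the external momentum, absorbed by the mass and field-strength counterterms) and $E = 4$ (where $D = 0$: one logarithmically divergent coefficient, absorbed by the coupling counterterm) are superficially divergent, matching the counting above.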

Given that the arguments to determine how many numbers are divergent are not entirely trivial, why should we expect that there will be enough free parameters to absorb them into, even in a renormalizable theory?

Ghorbalchov
    I guess superficial degree of divergence answers your question. In renormalisable theories, 1PI diagrams' divergence decreases with increasing external legs. – emir sezik May 25 '23 at 18:27

2 Answers


This is probably easiest to see using the Wilsonian effective action. To ensure that a counterterm is available, we should include it in the action to begin with.

(If we are discussing renormalizability in the Dyson sense, then we only allow relevant and marginal action terms, i.e. action terms with coupling constants of non-negative mass dimension, cf. e.g. this Phys.SE post.)

For examples, see e.g. this and this Phys.SE post.

Qmechanic

One way to define renormalizability is that a theory is renormalizable if it requires only a finite number of counterterms to cancel all divergences. This means that the theory has only a finite number of independent parameters that need to be fixed by experiments, and then it can make predictions for any other observable quantity. However, this definition is not very satisfactory, because it does not explain why some theories have this property and others do not.

Another way to approach the problem is to look at the mass dimensions of the coupling constants in the theory. A theory is renormalizable as long as the classical coupling constants have non-negative mass dimension. This means that the (dimensionless) effective couplings do not grow with energy and can be treated as small perturbations around a free theory. If the coupling constants have negative mass dimension, however, they become large at high energies and cannot be neglected. In that case, the theory needs an infinite number of counterterms to absorb all the divergences, and it loses its predictive power. This is what happens with gravity, for example.
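To make the dimensional counting explicit (a standard textbook estimate, stated here for a scalar interaction $g\,\phi^n$ in $d$ spacetime dimensions): since the field has mass dimension $[\phi] = (d-2)/2$, the coupling has dimension

$$[g] = d - n\,\frac{d-2}{2},$$

which in $d = 4$ gives $[g] = 4 - n$. Thus $\phi^4$ is marginal ($[g] = 0$) and renormalizable, while e.g. $\phi^6$ has $[g] = -2$ and is non-renormalizable, analogous to Newton's constant with $[G_N] = -2$.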

A more general and modern perspective is to view any quantum field theory as an effective field theory, which is valid only up to some energy scale. In this framework, renormalizability is not a fundamental property of a theory, but rather a measure of its range of validity. A renormalizable theory can be extended to arbitrarily high energies without introducing new degrees of freedom or parameters, while a non-renormalizable theory needs to be replaced by a more fundamental theory at some energy scale where new physics appears. In this sense, renormalizability is not a proof of consistency or completeness of a theory, but rather a sign of simplicity and elegance.

  • Thanks, I don't think this answers the question though. I am already considering the case of a renormalizable theory (non-negative mass dimension of coupling constant). The question is, how do we know there are enough free parameters in the Lagrangian to cancel the (finitely many) divergences? – Ghorbalchov May 27 '23 at 11:43