
I was wondering: what is the opinion in the hep community about the importance of the hierarchy problem? I'm still a student and I don't really understand why there is so much attention around this issue.

One-loop corrections to the Higgs mass are divergent; in cut-off regularization they are proportional to $\Lambda^2$ and therefore require a large fine-tuning between the parameters to make those corrections small. But this kind of problem does not appear in dimensional regularization.
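
For concreteness, the kind of correction I have in mind is, schematically (the precise coefficient and sign depend on which particle runs in the loop and on conventions),

$$\delta m_h^2 \;\sim\; \frac{g^2}{16\pi^2}\,\Lambda^2,$$

where $g$ stands for the relevant coupling (top Yukawa, gauge, or Higgs self-coupling), so that for $\Lambda$ near the Planck scale the bare mass parameter has to cancel this to enormous precision.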

People take the value of $\Lambda$ to be very large, with the argument that it should correspond to the energy scale at which our theory breaks down. I don't think we should treat the scale $\Lambda$ as some kind of physical cut-off scale of our model, as it is just a parameter used to regularize the integral, just like the $4+\epsilon$ dimensions in dimensional regularization are not a physical thing. Why do we attach a physical meaning to $\Lambda$? Not to mention the trouble with Lorentz invariance.

Maybe the hierarchy problem is an argument that the cut-off regularization scheme is simply not the right one to use?

Qmechanic
AAA
  • See answer here: https://physics.stackexchange.com/questions/617334/is-the-hierarchy-problem-definitely-a-problem/617338#617338 – MadMax Jul 28 '23 at 15:38

2 Answers


Whether you do your calculations using a cutoff regularization or dimensional regularization or another regularization is just a technical detail that has nothing to do with the existence of the hierarchy problem. Order by order, you will get the same results whatever your chosen regularization or scheme is.

The schemes and algorithms may differ in the precise point at which you subtract the unphysical infinite terms, and so on. Indeed, dimensional regularization removes power-law divergences from the start. But the hierarchy problem may be expressed in a way that is manifestly independent of these technicalities.

The hierarchy problem is the problem that one has to fine-tune actual physical parameters of a theory expressed at a high energy scale with a huge accuracy – with error margins smaller than $(E_{low}/E_{high})^k$ where $k$ is a positive power – in order for this high-energy theory to produce the low-energy scale and light objects at all.

If I formulate the problem in this way, it's clear that it doesn't matter what scheme you use to do the calculations. In particular, your miraculous "cure" based on dimensional regularization may hide the explicit $\Lambda^2$ in intermediate results, but it doesn't change anything about the dependence on the high-energy parameters.

What you would really need for a "cure" of the physical problem is to pretend that no high-energy-scale physics exists at all. But it does. It's clear that the Standard Model breaks down before we reach the Planck energy, and probably way before that. There have to be more detailed physical laws that operate at the GUT scale or the Planck scale, and those new laws have new parameters.

The low-energy parameters such as the LHC-measured Higgs mass of 125 GeV are complicated functions of the more fundamental high-energy parameters governing the GUT-scale or Planck-scale theory. And if you figure out what conditions are needed for the high-scale parameters to make the Higgs $10^{15}$ times lighter than the reduced Planck scale, you will see that they are unnaturally fine-tuned conditions requiring some dimensionful parameters to sit in some very precise ranges.
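
Schematically (the exact coefficients depend on the details of the high-scale theory), the relation looks like

$$m_h^2(\text{low}) \;=\; m_h^2(\text{high}) \;+\; c\,M^2, \qquad c=\mathcal{O}(1),\quad M\sim M_{\rm GUT}\ \text{or}\ M_{\rm Pl},$$

and getting $m_h\approx 125$ GeV out of this requires the two terms on the right-hand side to cancel with a relative accuracy of order $(m_h/M)^2$, i.e. roughly one part in $10^{30}$ in the mass-squared parameters (or "1 in 1 quadrillion" when counted in the masses themselves).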

More generally, it's very important to distinguish true physical insights and true physical problems from some artifacts depending on a formalism. One common misconception is the belief of some people that if the space is discretized, converted to a lattice, a spin network, or whatever, one cures the problem of non-renormalizability of theories such as gravity.

But this is a deep misunderstanding. The actual physical problem hiding under the "nonrenormalizability" label isn't the appearance of the symbol $\infty$, which is just a symbol that one should interpret rationally. We know that this $\infty$ as such isn't a problem because at the end, it gets subtracted in one way or another; it is unphysical.

The main physical problem is the need to specify infinitely many coupling constants – coefficients of the arbitrarily-high-order terms in the Lagrangian – to uniquely specify the theory. The cutoff approach makes this clear because there are many kinds of divergences, and each of these divergent expressions has to be "renamed" as a finite constant, producing a finite unspecified parameter along the way. But even if you avoid infinities and divergent terms from scratch, the unspecified parameters – the finite remainders of the infinite subtractions – are still there. A theory with infinitely many terms in the Lagrangian has infinitely many pieces of data that must be measured before one may predict anything: it remains unpredictive at any point.
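
Schematically, for gravity this tower of terms looks like

$$\mathcal{L} \;=\; \frac{M_{\rm Pl}^2}{2}\,R \;+\; c_1\,R^2 \;+\; c_2\,R_{\mu\nu}R^{\mu\nu} \;+\; \frac{c_3}{M_{\rm Pl}^2}\,R^3 \;+\;\dots,$$

with infinitely many coefficients $c_i$; whether you write them as finite remainders of subtracted infinities or as bare couplings on a lattice, you still need infinitely many measurements to pin them all down.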

In a similar way, the fine-tuning required of the high-energy parameters is a problem because, using Bayesian inference, one may argue that it was "highly unlikely" for the parameters to conspire in such a way that the high-energy physical laws produce e.g. the light Higgs boson. The degree of fine-tuning (parameterized by a small number) therefore translates into a small probability (given by the same small number) that the original theory (a class of theories with some parameters) agrees with the observations.
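
As a toy sketch of that Bayesian statement (my own illustration with made-up numbers, not a computation in any real model): draw the high-scale mass-squared parameter from a broad flat prior, add an order-one coefficient times $M^2$ as the quantum correction, and count how often the sum accidentally lands at the tiny observed value.

    import random

    # Toy sketch of the naturalness / Bayesian argument (illustrative numbers only).
    # The low-energy mass^2 is modeled as a bare high-scale mass^2 plus an
    # O(1) * M_HIGH^2 correction; we count how often it accidentally ends up tiny.
    M_HIGH = 2.0e16   # a GUT-like scale in GeV (an assumption for the illustration)
    M_LOW = 125.0     # the observed Higgs mass in GeV
    TRIALS = 10**6

    hits = 0
    for _ in range(TRIALS):
        m2_bare = random.uniform(-M_HIGH**2, M_HIGH**2)   # flat prior on the bare parameter
        m2_corr = random.uniform(0.5, 2.0) * M_HIGH**2    # an O(1) coefficient times M_HIGH^2
        if 0.0 < m2_bare + m2_corr < M_LOW**2:            # did a light Higgs come out?
            hits += 1

    print(f"fraction of priors yielding a light Higgs: {hits / TRIALS:.1e}")
    # The expected fraction is of order (M_LOW / M_HIGH)^2 ~ 1e-29, so the loop
    # essentially never finds a hit; this is exactly the fine-tuning problem.

With a flat prior the success fraction is of order $(m_h/M)^2$, which is the quantitative sense in which a light Higgs is "unlikely" under the naive measure.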

When this fine-tuning is of order $0.1$ or even $0.01$, it's probably OK. Physicists have different tastes regarding what degree of fine-tuning they're ready to tolerate. For example, many phenomenologists have thought that even a $0.1$-level fine-tuning is a problem – the little hierarchy problem – that justifies the production of hundreds of complicated papers. Many others disagree that the term "little hierarchy problem" deserves to be viewed as a real problem at all.

But pretty much everyone who understands the actual "flow of information" in quantum field theory calculations, as well as basic Bayesian inference, agrees that fine-tuning, and hence the hierarchy problem, is a genuine problem when it becomes too severe. The problem isn't necessarily an "inconsistency", but it does mean that there should exist an improved explanation of why the Higgs is so unnaturally light. The role of this explanation is to modify the naive Bayesian measure – with a uniform probability distribution for the parameters – that made the observed Higgs mass look very unlikely. Within a better conceptual framework, the prior probabilities are modified so that the small parameters observed at low energies are no longer unnatural, i.e. unlikely.

Symmetries such as supersymmetry and new physics near the electroweak scale are two major representatives of solutions to the hierarchy problem. They eliminate the huge "power law" dependence on the parameters describing the high-energy theory. One still has to explain why the parameters at the high energy scale are such that the Higgs is much lighter than the GUT scale, but the amount of fine-tuning needed to explain such a thing may be just "logarithmic", i.e. "one in $15\ln 10$", where 15 is the base-ten logarithm of the ratio of the mass scales. And this is of course a huge improvement over a fine-tuning at a precision of "1 in 1 quadrillion".
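
To put rough numbers on that comparison:

$$\frac{1}{15\,\ln 10}\;\approx\;\frac{1}{34.5}\;\approx\;0.03 \qquad\text{versus}\qquad 10^{-15}\ \left(\text{or } 10^{-30}\text{ in the mass-squared parameters}\right),$$

so trading the power-law sensitivity for a logarithmic one turns an absurd tuning into a mild few-percent one.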

Luboš Motl
  • "One common misconception is the belief of some people that if the space is discretized, converted to a lattice, a spin network, or whatever, one cures the problem of non-renormalizability of theories such as gravity." But once you have a natural lattice/spin network/etc. and this is actually the fundamental theory (and not a field thoery with infinitely many parameters to specify), then you can compute your observable values and the problem is really gone, isn't it? – Nikolaj-K May 20 '12 at 12:53
  • Lubos, just to play devil's advocate: "It's clear that the Standard Model breaks ... There have to be more detailed physical laws that operate at the GUT scale or the Planck scale and those new laws have new parameters." I agree that there is certainly new physics and thus new mass parameters above the weak scale, but is it obvious that it will affect the Higgs mass? For example, you could say that eventually the Planck mass will cause problems, but this naively only appears in inverse powers as $ \frac{1}{m_{pl}} $ (in perturbation theory), so it wouldn't lead to tuning of the Higgs mass. – DJBunk Jun 20 '12 at 15:22
  • Dear @Nick, there can't be any fundamental theory on a lattice or any other similar discrete background, that was really my point. You can't compute anything because all the coefficients of the non-renormalizable interactions in the continuum limit simply get translated to infinitely many terms you may construct on the lattice. And this ignorance about the infinitely many parameters is the problem, not the question whether they're hiding under the sign $\infty$. So no real problem can ever be solved by discretizing the spacetime. – Luboš Motl Jun 25 '12 at 19:08
  • In quantum field theory, the actual rule that may remove the infinitely many parameters is scale invariance. If one requires that the theory be scale-invariant in the short-distance limit, the ultimate UV, and that's pretty much true for almost all consistent QFTs, this determines all the infinitely many parameters up to a finite number of them. This is the actual source of the knowledge and the removal of the infinite ignorance, and it requires a continuum spacetime because lattices aren't self-similar and can't produce scale-invariant theories. – Luboš Motl Jun 25 '12 at 19:10
  • Dear @DJBunk, it's trivial to show that the Higgs mass is hugely affected by pretty much everything at the GUT or Planck scale unless one may show that the effect is canceled. Your $1/m_{Pl}$ coefficients appear in front of nonrenormalizable interactions induced by the Planck-scale physics, in this case. But the Higgs mass isn't a nonrenormalizable interaction. Quite on the contrary, it's a relevant term, with a positive power of mass, so $m_h^2 h^2/2$ gets corrected by terms like $M^2 h^2$ etc. where $M$ is of order the Planck mass; a correction of Planck-scale physics looks like this: huge. – Luboš Motl Jun 25 '12 at 19:13
  • @Luboš: But if we are doing perturbation theory with the Planck-induced nonrenormalizable interactions (suppressed by inverse powers of the Planck mass), won't all the terms in the effective action for a scalar field simply have inverse powers of the Planck mass then? Where do the positive powers of the Planck mass come from? They seem to be only able to come from the derivative interactions which give positive powers of the cutoff, which are the very terms that you are arguing aren't real in the above answer, no? Thanks for your time helping me clear this up. – DJBunk Jun 25 '12 at 20:01
  • Dear @DJBunk, by dimensional analysis, it's very clear that you will get positive powers of the Planck mass in front of $h^2$. An expression like $\Delta(m^2) = C\cdot h^2/m_{Planck}$ for the Planck-physics correction to the Higgs mass has the dimension of mass, so it's obviously not the right Lagrangian density, is it? So what mass scale do you put into $C$ to get the right units of $m^4$? It may only be the Planck scale itself because that's where the source of the correction is. You will get a positive power at the end. – Luboš Motl Jun 26 '12 at 05:57
  • Whether you view $\Lambda^2$ divergent terms - where $\Lambda$ is a cutoff - as real ones depends on your taste. In dimensional regularization, all power-law divergences may be set to zero and only the log divergences survive. But in the previous comment, I am not talking about $\Lambda^2$ related to a cutoff but $m_{Planck}^2$ related to a particular scale, the Planck scale, where some particular objects and interactions exist and contribute. These are damn real things and the sensitivity of the Higgs on those corrections is huge regardless of the chosen regularization. – Luboš Motl Jun 26 '12 at 06:01

It is not clear to me whether you are referring to the physical mass (the pole of the propagator) or to the renormalized mass (in a given renormalization scheme), which need not be anything measurable at all.
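
Schematically, the relation I have in mind (in a cut-off scheme, with signs and conventions suppressed) is

$$m_{\rm pole}^2 \;=\; m_0^2(\Lambda) \;+\; \Sigma\!\left(p^2=m_{\rm pole}^2;\,\Lambda\right),$$

where $m_0^2(\Lambda)$ is the coefficient of the quadratic term in the Lagrangian and $\Sigma$ is the self-energy; only the left-hand side is directly measurable.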

The physical mass does not depend on the energy scale at which you perform your experiments. However, the physical coefficients of the interaction terms (which are related to probability amplitudes) do depend on the energy scale at which you make the experiment, even when classically they don't. For example, one does not talk about the value of the electron's mass at 1 MeV or at 10 GeV, but one does talk about the value of the fine-structure constant $\alpha$ at 1 MeV or at 10 GeV. Right?
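
For instance, the standard one-loop QED running, keeping only the electron in the loop, reads

$$\alpha(\mu)\;=\;\frac{\alpha(m_e)}{1-\dfrac{2\,\alpha(m_e)}{3\pi}\,\ln\dfrac{\mu}{m_e}},$$

which is why one quotes $\alpha\approx 1/137$ at low energies but a larger value at high energies (about $1/128$ near the $Z$ mass once all charged fermions are included), while the electron's pole mass is a single scale-independent number.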

So, taking Lubos's definition of the hierarchy problem:

The hierarchy problem is the problem that one has to fine-tune actual physical parameters of a theory expressed at a high energy scale with a huge accuracy – with error margins smaller than $(E_{low}/E_{high})^k$ where $k$ is a positive power – in order for this high-energy theory to produce the low-energy scale and light objects at all.

I do not see what "physical parameters" (observables) one has to fine-tune because, in my opinion, the coefficient of the quadratic term in the Hamiltonian is not the physical mass (even if you view the theory à la Wilson, where this coefficient depends on an energy scale $\Lambda$, it does not represent a physical mass at that energy, contrary to what some people seem to think).

Maybe I'm wrong... Why?

Diego Mazón
  • The point is that there are Planck scales in the world, and the coefficient of the quadratic term has to be fine tuned to make the physical mass be what it is. It isn't tuned to a special point, either. I don't understand the confusion--- the cutoff is physical, it's where gravity kicks in to regulate the integrals. – Ron Maimon Jul 13 '12 at 20:35
  • Thank you. I know that the coefficient of the quadratic term has to be fine-tuned to cancel the contribution of the cut-off scale (the Planck energy or GUT scale or whatever) so that the physical mass is much lower than the cut-off scale. I also know that quartic interactions of scalar fields lead to quadratic dependences on the energy scale in the running of this coefficient. – Diego Mazón Jul 13 '12 at 21:03
  • So why is there still a question? – Ron Maimon Jul 13 '12 at 21:05
  • Sorry, I am new here and I don't know how to use these comments... Let me continue with my previous comment: my point is that the coefficient of the quadratic term is not physical because it is not the physical mass. Thus, I do not see the problem in fine-tuning a parameter that is not physical. However, I would see a problem if one had to fine-tune an observable to cancel the contribution of the cut-off scale. Is it clear? I think we are allowed to choose the coefficient of this term as we want. Thank you. – Diego Mazón Jul 13 '12 at 21:15