My current point of view on the Hierarchy Problem is in agreement with Michael's answer in this question. Or, in the words of Manohar's Introduction to EFT notes: using a cutoff regulator and evaluating the top loop to convince oneself that "the Higgs is quadratically sensitive to new physics because $\delta m \propto \Lambda^2$" is bogus.
Of course this tale encodes, in some sense, the truth, but the line of reasoning as usually stated is wrong. As a regulator, the $\Lambda^2$ dependence has little to no physical meaning: it is an unphysical artifact used to make sense of intermediate calculations. The cutoff argument becomes far more precise once $\Lambda$ is understood not as a regulator of divergences but as a physical cutoff in the Wilsonian sense. It then plays the part of an avatar of the high-energy physics.
In my opinion, the problem is best understood within dimensional regularization (contrary to what is sometimes stated). To see the problem there, we have to introduce the high-energy physics explicitly. Take the top loop itself as a stand-in. Then we have the finite (and meaningful) shift of the renormalized mass
$$ \Delta m_R^2 \propto m_t^2 \log(\cdots) $$
and there is our 'quadratic sensitivity': it is to the mass of the particle running in the loop. If there is higher-scale physics (which we know there is), that mass could be enormous (Planck, GUT or whatever), and there we have the Hierarchy Problem: the ratio $m_R^2/\Delta m_R^2$ comes out absurdly small.
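To put rough numbers on this (my own schematic estimate, not taken from the references above): suppose some heavy state of mass $M$ couples to the Higgs with an order-one coupling $y$. The same kind of loop then gives, ignoring the logarithm and all $O(1)$ factors,

$$ \Delta m_R^2 \sim \frac{y^2}{16\pi^2}\, M^2 , $$

so for $M \sim 10^{16}\,\mathrm{GeV}$ (a GUT-like scale) one gets $\Delta m_R^2 \sim 10^{30}\,\mathrm{GeV}^2$, to be compared with $m_H^2 \simeq (125\,\mathrm{GeV})^2 \simeq 1.6 \times 10^4\,\mathrm{GeV}^2$: a cancellation of roughly one part in $10^{26}$ between the bare parameter and the correction.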
There's just one thing that shakes my confidence in this point of view: Supersymmetry, Twin Higgs, and perhaps all theories that seek to solve the Hierarchy Problem through technical naturalness do so by cancelling precisely the top loop, be it through the superpartner loop (Supersymmetry) or through an effective interaction of a low-energy CCWZ theory (Twin Higgs).
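For concreteness, the cancellation I have in mind is the usual schematic supersymmetric one (my paraphrase, with $O(1)$ factors and the detailed log structure suppressed): the stop loops enter with the same Yukawa $y_t$ but opposite sign relative to the top loop, so the correction to the renormalized mass is controlled by the soft splitting,

$$ \Delta m_R^2 \;\sim\; \frac{y_t^2}{16\pi^2}\left(m_{\tilde t}^2 - m_t^2\right)\log(\cdots), $$

which stays small only as long as $m_{\tilde t}$ is not far above $m_t$.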
How does this guarantee that the same type of cancellation happens for the truly harmful, higher-scale physics? Or am I wrong and misunderstanding the references I cited?