63

Forgive me if this topic is too much in the realm of philosophy. John Baez has an interesting perspective on the relative importance of dimensionless constants, which he calls fundamental (like $\alpha$), versus dimensionful constants like $G$ or $c$ [ http://math.ucr.edu/home/baez/constants.html ]. What is the relative importance or significance of one class versus the other, and is this an area where physicists have real concerns or expend significant research effort?



5 Answers

78

First of all, the question you are asking is very important, and you may master it completely.

Dimensionful constants are those that have units - like $c, \hbar, G$, or even $k_{\rm Boltzmann}$ or $\epsilon_0$ in SI. The units - such as meter, kilogram, second, Ampere, Kelvin - have been chosen partially arbitrarily. They're results of random cultural accidents in the history of mankind. A second was originally chosen as 1/86,400 of a solar day, one meter as 1/40,000,000 of the average meridian, one kilogram as the mass of 1/1,000 of a cubic meter (one liter) of water or later the mass of a randomly chosen prototype, one Ampere so that $4\pi \epsilon_0 c^2$ is a simple power of 10 in SI units, and one Kelvin as 1/100 of the difference between the melting and boiling points of water.

Clearly, the circumference of the Earth, the solar day, a platinum prototype brick in a French castle, or phase transitions of water are not among the most "fundamental" features of the Universe. There are lots of other ways the units could be chosen. Someone could choose 1.75 meters - an average man's height - to be his unit of length (some people in history have even used their feet to measure distances) and he could still call it "one meter". It would be his meter. In those units, the numerical value of the speed of light would be different.

Products or ratios of powers of fundamental constants that are dimensionless don't have any units, by definition, which means they are independent of all the random cultural choices of units. So all civilizations in the Universe - despite the absence of any interactions between them in the past - will agree about the numerical value of the proton-electron mass ratio, which is about $1836.15$, remarkably close to $6\pi^5$ (the formula is just a teaser I noticed when I was 10!), and about the fine-structure constant, $\alpha\approx 1/137.036$, and so on.
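For the curious, here is a two-line numerical check of that teaser (the measured CODATA mass ratio is hard-coded; the comparison itself is the only point):

```python
import math

# Measured proton-to-electron mass ratio (CODATA value)
mass_ratio = 1836.15267

# The numerological teaser: 6 * pi^5
teaser = 6 * math.pi**5   # ~1836.118

print(f"6*pi^5         = {teaser:.5f}")
print(f"measured ratio = {mass_ratio:.5f}")
print(f"relative error = {abs(teaser - mass_ratio) / mass_ratio:.1e}")  # ~1.9e-5
```

The agreement is to about two parts in a hundred thousand - striking, but apparently just a coincidence, as the comments below explain.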

In the Standard Model of particle physics, there are about 19 such dimensionless parameters that "really" determine the character of physics. All other constants, such as $\hbar,c,G,k_{\rm Boltzmann},\epsilon_0$, depend on the choice of units; in fact, the number of independent units (meter, kilogram, second, Ampere, Kelvin) is exactly large enough that all of these constants may be set equal to one, which simplifies all the fundamental equations of physics in which they appear. By changing the value of $c$, one only changes social conventions (what the units mean), not the laws of physics.

The units where all these constants are numerically equal to 1 are called Planck units or natural units, and Max Planck understood already a century ago that this was the most natural choice. $c=1$ is set in any "mature" analysis that involves special relativity; $\hbar=1$ is used everywhere in "adult" quantum mechanics; $G=1$ or $8\pi G=1$ is sometimes used in research on gravity; $k_{\rm Boltzmann}=1$ is used whenever thermal phenomena are studied microscopically, at a professional level; $4\pi\epsilon_0$ is just an annoying factor that may be set to one (in 19th-century Gaussian units, such factors are actually set to one, with a different treatment of the $4\pi$). And instead of one mole in chemistry, physicists (researchers in a more fundamental discipline) simply count the molecules or atoms, knowing that a mole is just a package of $6.022\times 10^{23}$ atoms or molecules.
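To make the natural-units remark concrete, here is a minimal sketch that builds the Planck scales out of the SI values of $\hbar$, $c$, and $G$ (the constants are CODATA SI figures; any recent values give the same picture):

```python
import math

# SI values (CODATA)
hbar = 1.054571817e-34  # J s
c    = 2.99792458e8     # m / s
G    = 6.67430e-11      # m^3 / (kg s^2)

# The unique combinations of hbar, c, G with dimensions of length, time, mass
l_planck = math.sqrt(hbar * G / c**3)  # ~1.6e-35 m
t_planck = math.sqrt(hbar * G / c**5)  # ~5.4e-44 s
m_planck = math.sqrt(hbar * c / G)     # ~2.2e-8  kg

print(f"Planck length: {l_planck:.3e} m")
print(f"Planck time:   {t_planck:.3e} s")
print(f"Planck mass:   {m_planck:.3e} kg")
```

Measured in these units, $\hbar$, $c$, and $G$ are each numerically equal to one, which is exactly the simplification described above.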

The 19 (or 20?) actual dimensionless parameters of the Standard Model may be classified as the three gauge couplings $g_1,g_2,g_3$ of the $U(1)\times SU(2)\times SU(3)$ gauge group; the Higgs vacuum expectation value divided by the Planck mass (the only quantity that introduces a mass scale, and this mass scale only distinguishes different theories once we also take gravity into account); and the Yukawa couplings with the Higgs that determine the masses of the quarks and leptons and their mixing. One should also count the strong CP-angle of QCD and a few others.

Once you adopt a modified Standard Model that takes into account that the neutrinos are massive and oscillate, 19 is lifted to about 30. New physics of course inflates the number: the minimal supersymmetric model with soft SUSY breaking has about 105 parameters.

The original 19 parameters of the Standard Model may be expressed in terms of more "fundamental" parameters. For example, $\alpha$ of electromagnetism is not terribly fundamental in high-energy physics because electromagnetism and the weak interactions get unified at higher energies, so it is more natural to calculate $\alpha$ from the couplings $g_1,g_2$ of the $U(1)\times SU(2)$ gauge group. Also, these couplings $g_1,g_2,g_3$ "run": they depend on the energy scale, approximately logarithmically. Values such as $1/137$ for the fine-structure constant are the low-energy values, but the high-energy values are actually more fundamental, because the fundamental laws of physics are those that describe very short-distance physics, while long-distance (low-energy) physics is derived from them.
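As an illustration of both points - $\alpha$ being derived from the electroweak couplings, and the logarithmic running - here is a schematic sketch. The tree-level relation $1/e^2 = 1/g^2 + 1/g'^2$ is standard, but the numerical coupling values and the one-loop coefficient `b` below are rough, illustrative numbers, not a Standard Model computation:

```python
import math

def alpha_em(g, g_prime):
    """Tree-level electroweak relation 1/e^2 = 1/g^2 + 1/g'^2,
    then alpha = e^2 / (4 pi)."""
    e2 = 1.0 / (1.0 / g**2 + 1.0 / g_prime**2)
    return e2 / (4 * math.pi)

def inverse_alpha_run(inv_alpha0, mu0, mu, b):
    """Schematic one-loop running:
    1/alpha(mu) = 1/alpha(mu0) - (b / 2 pi) * ln(mu / mu0).
    The coefficient b depends on the particle content."""
    return inv_alpha0 - (b / (2 * math.pi)) * math.log(mu / mu0)

# Rough SU(2) and hypercharge couplings near the Z mass: g ~ 0.65, g' ~ 0.36
print(f"1/alpha ~ {1 / alpha_em(0.65, 0.36):.0f}")  # ballpark ~127, vs ~137 at low energy

# alpha grows slowly (logarithmically) with the energy scale; b here is made up
print(f"1/alpha one decade higher: {inverse_alpha_run(137.0, 1.0, 10.0, 2.0/3.0):.2f}")
```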

I mentioned that the number of dimensionless parameters increases if you add new physics such as SUSY with soft breaking. However, more complete, unifying theories - such as grand unified theories and especially string theory - also imply various relations between the previously independent constants, so they reduce the number of independent dimensionless parameters of the Universe. Grand unified theories basically set $g_1=g_2=g_3$ (with the right factor of $\sqrt{3/5}$ absorbed into $g_1$) at their characteristic "GUT" energy scale; they may also relate certain Yukawa couplings.

String theory is a perfectionist in this job. In principle, all dimensionless continuous constants may be calculated from any stabilized string vacuum - so all continuous uncertainty may be removed by string theory; one may actually prove that this is the case. There is nothing to adjust continuously in string theory. However, string theory comes with a large discrete class of stabilized vacua - at most countable, and possibly finite but large. Still, if there are $10^{500}$ stabilized semi-realistic stringy vacua, there are only 500 digits to adjust (and then you may predict everything to any accuracy, in principle) - while the Standard Model, with its 19 continuous parameters, has 19 times infinity digits to adjust according to experiments.

Luboš Motl
  • 179,018
  • I concede; this was a physics question, not a philosophy one. Many thanks. – Michael Luciuk Apr 11 '11 at 15:21
  • 2
    It might be interesting to note here that before Luboš was 10 years old, the coincidental similarity between the proton-to-electron mass ratio and the number $6\pi^5$ had already been noted and published, in possibly the shortest physics paper ever (one single sentence!), by Friedrich Lenz [Phys. Rev. 82, 554 (1951)]. – Paul Aug 12 '15 at 14:11
  • 2
    This is very interesting. I am sure that I would reject the paper if I were the referee. – Luboš Motl Aug 12 '15 at 15:00
9

Only dimensionless quantities are important. They are pure numbers, and there can't be any ambiguity about their value. This is not so with dimensionful quantities. E.g., if I tell you my speed $v$ relative to you is $0.5\, \rm speedons$, that doesn't give you much information, as I have the freedom to define my $\rm speedon$ unit any way I want. The only way I can give you some information is by giving you a dimensionless quantity like $v/c = 0.5$.

Now, what we need to make dimensionful quantities dimensionless is some reference scale (in the previous example it was $c$). We can in principle choose any scale we want, but usually it will be something from day-to-day experience. E.g., you choose the meter to be what it is so that the stuff you usually encounter (other people, houses, trees, etc.) is of order $\sim 1$ when measured in meters. This is how all our units originated. Naturally, there's nothing particularly special about humans and the scales they usually work with. We know there are lots of important scales as we go down to atomic and nuclear sizes. We also know there is a more important speed scale (namely, the ultra-relativistic regime $v/c \to 1$). And so on.

Still, we need to choose some units to work with to be able to compute anything, and it would be nice to choose units that don't suffer from the above-mentioned arbitrariness. It turns out we are in luck, because Nature has given us a few special constants. Each of them is related to some fundamental theory ($c$ to special relativity, $G$ to gravity, $\hbar$ to quantum mechanics, etc.). It would be silly not to exploit this generous gift. So we can talk about speeds being 0.9 (meaning $v/c$), an action of 20 ($=S/\hbar$), and so on. This system of units is called Planck units, and while it's not used in day-to-day life for obvious reasons, it's very useful whenever we deal with fundamental physics.
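A tiny sketch of what this buys you in practice (the sample speed and action below are arbitrary numbers, chosen only to reproduce the 0.9 and 20 quoted above):

```python
c    = 2.99792458e8     # m/s, speed of light (SI)
hbar = 1.054571817e-34  # J s, reduced Planck constant (SI)

v_si = 2.7e8            # an arbitrary example speed, in m/s
S_si = 2.1e-33          # an arbitrary example action, in J s

# Pure numbers that any civilization, with any homegrown units, would agree on:
print(f"v / c    = {v_si / c:.2f}")     # ~0.90
print(f"S / hbar = {S_si / hbar:.0f}")  # ~20
```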

Marek
  • 23,530
4

(...) is this an area where physicists have real concerns or expend significant research effort?

Interestingly, Paul Dirac did some research in cosmology based on the consideration of dimensionless combinations, built from fundamental physical quantities, that turn out to be of order unity. The combinations mix micro-physical quantities, like the electron charge, with cosmological parameters, like the Hubble constant. This is an example, extracted from the Coles/Lucchin cosmology book (Wiley, 2nd ed., 2002):

$ \frac{e^{4}H_{0}}{Gm_{p}m_{e}^{2}c^{3}} \simeq 1$

Assuming the validity of this relation has interesting implications: since $H_{0}$ evolves with time, one or more of the so-called fundamental constants that appear in the equation must vary in time too. This led to some attempts to build theories with different past values of the gravitational constant.
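As a quick sanity check, one can evaluate Dirac's combination numerically in Gaussian-cgs units (the value of $H_0$ below assumes roughly 70 km/s/Mpc; "of order unity" is meant relative to the $\sim 10^{39}$ size of the individual large numbers):

```python
# Gaussian-cgs values
e   = 4.80320425e-10          # esu, elementary charge
G   = 6.67430e-8              # cm^3 g^-1 s^-2
m_p = 1.67262192e-24          # g, proton mass
m_e = 9.1093837e-28           # g, electron mass
c   = 2.99792458e10           # cm/s
H0  = 70 * 1.0e5 / 3.0857e24  # ~70 km/s/Mpc in 1/s, i.e. ~2.3e-18

ratio = e**4 * H0 / (G * m_p * m_e**2 * c**3)
print(f"e^4 H0 / (G m_p m_e^2 c^3) = {ratio:.3f}")
# ~0.05 -- not exactly 1, but astonishingly close to unity compared with
# the ~10^39 scale of the individual dimensionless factors involved
```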

The theory is almost forgotten. It is still not fully clear whether he opened a Pandora's box of numerological speculation, or whether something with a deep, still unveiled physical meaning is hidden there. The current explanation for these numerical coincidences(?) is the Weak Anthropic Principle, which seems to me at least as speculative and philosophical as Dirac's original idea.

Here is a link to the full text of a 1974 paper by Dirac on the question: http://www.jstor.org/discover/10.2307/78591?uid=3737952&uid=2&uid=4&sid=21101428637013

1

The universe can be described within a formal mathematical framework; all physical quantities can therefore be described using equations that contain only dimensionless numbers. Now, given any set of equations, you are always free to introduce scaling variables that allow you to study certain scaling limits of the theory. The universe as we experience it can be accurately described as a degenerate scaling limit that requires introducing three scaling variables and then taking the scaling limit in the right order. That degenerate limit is what we call "classical physics".

Since we are not exactly at the scaling limit, the scaling variables are not actually at their limiting values (infinite or zero). But to obtain classical physics exactly, you do need to send these variables to their appropriate limits. Since we started out with almost zero knowledge of the laws of physics several centuries ago, we needed to find out how the universe works by doing experiments. But because we live almost at the scaling limit, certain relations between observables are very difficult to observe (exactly at the scaling limit you can end up with singular equations, and you then lose relations between physical variables). It then looks as if a complete description of the Universe requires a few independent physical variables that cannot be related to each other.

We then developed a mathematical formalism that imposes this incompatibility via the introduction of "dimensions". When we later learned how these supposedly incompatible quantities are actually related, we found the relations, with the scaling variables appearing as dimensionful constants which, when expressed in the old units, have a very large or small magnitude.
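One standard illustration of such a scaling limit (a textbook example, not necessarily the specific construction meant above): the relativistic energy of a free particle reduces to the Newtonian kinetic energy, plus a term that becomes singular, in the limit $p/(mc)\to 0$:

```python
import sympy as sp

p, m, c = sp.symbols('p m c', positive=True)

# Relativistic dispersion relation for a free particle
E = sp.sqrt(p**2 * c**2 + m**2 * c**4)

# Expand for small p (equivalently, large c): the "classical" scaling limit
print(sp.series(E, p, 0, 6))
# c**2*m + p**2/(2*m) - p**4/(8*c**2*m**3) + O(p**6)
```

The $p^2/2m$ term is ordinary classical mechanics, while the rest-energy term $mc^2$ diverges as the scaling variable $c\to\infty$: the relation between mass and energy becomes invisible in the strict classical limit, just as described above.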

Count Iblis
  • 10,114
-3

Speaking of the electron-proton mass ratio (which is about 1/1836), Luboš found it might be connected with $\pi$, and I think it is a kind of coupling constant in the hydrogen atom.

An atom has center-of-inertia variables and internal-motion variables. When an external force is applied to the atomic nucleus, the atom is accelerated as a whole and its internal motion can also be excited. The ratio $m_e/m_p$ determines the efficiency of "pumping" the internal degrees of freedom of an atom with an external force acting on the nucleus.
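A standard place where this ratio enters quantitatively (textbook physics, added here as an illustration): the hydrogen energy levels are multiplied by the reduced-mass factor $1/(1+m_e/m_p)$, so the finite proton mass shifts the Rydberg constant by roughly 0.05%:

```python
m_e_over_m_p = 1 / 1836.15267   # electron-to-proton mass ratio

# Reduced-mass factor mu / m_e = 1 / (1 + m_e/m_p) in the hydrogen spectrum
factor = 1 / (1 + m_e_over_m_p)
print(f"mu/m_e           = {factor:.6f}")     # ~0.999456
print(f"fractional shift = {1 - factor:.1e}")  # ~5.4e-4
```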

EDIT: Seeing so many downvotes, I changed my mind. I agree with Luboš: $m_p/m_e = 6\pi^5$ and has nothing to do with physics :-(.

  • Notwithstanding the truth value of that proposition, this isn't an answer. It's just a comment. Please post comments as such. By doing that, you'll save the time of lots of people who have to click the down-vote button ;) – Marek Apr 11 '11 at 16:48
  • 3
    It was a complement, not a comment. – Vladimir Kalitvianski Apr 11 '11 at 19:05