3

I'm doing a research project for my statistics class in high school, and I chose quantum mechanics as my subject. I narrowed it down to electron localization in an atom and the radial probability distribution. However, I can't find any data to support my claim that probability and statistics are very important in quantum mechanics. Is there any data suitable for me to analyse that would support this claim?

I introduced the double-slit experiment, Schrödinger's equation, the wave function, and superposition.
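Since the question mentions the radial probability distribution, here is a minimal sketch (not from the original post) of data the asker could generate and analyse: the hydrogen 1s radial probability density $P(r) = 4r^2 e^{-2r}$ with $r$ in Bohr radii, which integrates to 1 and peaks at exactly one Bohr radius.

```python
import numpy as np

# Radial probability density for the hydrogen 1s state,
# P(r) = 4 r^2 exp(-2 r), with r measured in Bohr radii (a0 = 1).
def radial_probability_1s(r):
    return 4.0 * r**2 * np.exp(-2.0 * r)

r = np.linspace(0.0, 20.0, 20001)
P = radial_probability_1s(r)

# It is a probability density, so it integrates to ~1 (trapezoid rule).
total = np.sum(0.5 * (P[1:] + P[:-1]) * np.diff(r))

# The most probable radius is exactly one Bohr radius.
r_peak = r[np.argmax(P)]
print(total, r_peak)
```

Sampling many radii from this density and making a histogram would give a data set whose statistics (mode, mean, spread) can be compared against the theoretical curve.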

Lenol
  • 735
  • 2
    What about the Born rule? Probability is directly built into modern QM from the get-go. – Stan Liou Dec 18 '13 at 15:41
  • I know nothing about the Born Rule. Do you have any ideas? – Lenol Dec 18 '13 at 15:44
  • 1
    @MYaman: The Born rule provides the wavefunction with the interpretation that its modulus-squared is a probability of a measurement result, as well as some more general statements about measurement and probability. Without the Born rule, there is no point to having a wavefunction, because it wouldn't mean anything physically. – Stan Liou Dec 18 '13 at 15:52
  • 1
    Title question (v1) seems more profound than question in main text. Please harmonize title and main text. – Qmechanic Dec 18 '13 at 16:48
  • Find some disintegration data. The discovery of the Higgs was totally probabilistic (there are many books about statistics applied to particle physics). Feynman diagrams and such also reveal that QM is probabilistic. – jinawee Dec 18 '13 at 16:50
  • Harmonized title with body question per @Qmechanic comment. – Alfred Centauri Dec 18 '13 at 18:13

3 Answers

4

I'm going to explain roughly what the Born rule is, following Stan Liou's comment.

One of the postulates of quantum mechanics relates a mathematical quantity, the wave function (or state $\psi$ in a Hilbert space $\mathcal{H}$), to a measurable entity: the probability of a given event happening.

The idea goes like this: suppose you want to measure a quantity $A$; it might be the energy, angular momentum, position, etc. You take the (linear) operator related to this observable, let's call it $\hat{A}$. From the postulates of QM we know that when you measure the observable $A$, you can only read certain values: the eigenvalues of the operator $\hat{A}$, which we will denote by $a_i$. What determines which eigenvalue $a_i$ your measurement is going to read? Probability, with the following "protocol":

At the beginning of the experiment your system is in a state $\psi\in\mathcal{H}$, which will be a linear combination of the basis states (functions) composed of the eigenstates of the operator $\hat{A}$ (those that, if you measure them, always yield the eigenvalue associated with that eigenstate, i.e. with probability 1): $$ \psi = \sum_i\alpha_i\phi_i $$ where $\alpha_i\in\mathbb{C}$ denotes the complex amplitude of the eigenstate $\phi_i$. The probability of measuring a given $a_i$ (the eigenvalue associated with the eigenstate $\phi_i$) in your experiment is then $|\alpha_i|^2$.
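A minimal numerical sketch of this rule (the amplitudes below are made up for illustration): compute the probabilities $|\alpha_i|^2$ from a normalized state and check that simulated measurement frequencies match them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical state: complex amplitudes alpha_i for three eigenstates phi_i.
alpha = np.array([0.6, -0.8j, 0.0])
alpha = alpha / np.linalg.norm(alpha)  # normalize so probabilities sum to 1

# Born rule: the probability of outcome a_i is |alpha_i|^2.
p = np.abs(alpha) ** 2

# Simulate many measurements and compare observed frequencies with p.
outcomes = rng.choice(len(alpha), size=100_000, p=p)
freq = np.bincount(outcomes, minlength=len(alpha)) / outcomes.size
print(p)     # [0.36, 0.64, 0.0]
print(freq)  # close to p for large samples
```

Note that the overall sign (or phase) of an amplitude does not affect its own probability; phases only matter when amplitudes are added, as the interference examples in the other answers show.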

iiqof
  • 772
4

I'd like to offer perhaps a slightly different viewpoint on your question, and maybe turn it around a little. Probability is hard. Very hard. Defining the foundations of probability and statistics so that they are altogether sound and rigorous is actually a work in progress; it is definitely not complete. Quantum mechanics, on the other hand, is easy. Very easy! I'm being "slightly" tongue in cheek here, of course, but what I'm basically getting at is this:

In quantum mechanics you can always ask Nature for the answer by doing an experiment and seeing what the outcome is.

So many philosophers and mathematicians who think very hard about the foundations of probability these days actually use real physical examples from quantum mechanics to give them insight into their thinking. The reason why this is a useful thing to do should be plausible from the other answers: quantum mechanical systems experimentally seem to be "probabilistic" at a very basic level. So it's natural that someone should think of them as models to inspire abstract mathematical ideas and definitions.

The physicists Richard Feynman and Albert Hibbs showed a great deal of foresight in the 1960s when they expressed the view, in their book "Quantum Mechanics and Path Integrals", that quantum mechanics represents a replacement of probability theory itself. I like to think that what they were getting at was something like what I am saying here: we should henceforth think of quantum mechanics as the real-world foundation whence to derive abstract probability notions.

So quantum mechanics is more basic because it is a concrete, real world behaviour whereon we can ground abstract mathematical notions.

For some references showing some of the open problems in defining "probability" rigorously, including examples from quantum mechanics used by philosophers, see how you go with the references below from one of my favourite websites: The Stanford Encyclopedia of Philosophy.

  1. Interpretations of Probability; and

  2. Chance versus Randomness; and

  3. Bayesian Epistemology

Do not worry if you do not understand everything. The interesting point is that probability and statistics are not as "cut and dried" as they are often presented in high school (if anyone tells you they're easy, you can just say to them "pants on fire!"), so it is a highly useful thing to do to use a real-world "model" like quantum mechanics to help philosophically and with questions of foundations.

As I used to say to my daughter when she was learning to read: sometimes, if you can't understand everything, let the words wash over you and see what sense your mind makes of them as it mulls. I do this all the time: I think I understand about 5% of what I read at first reading when reading journals in my field, and it would not be a stretch to say that I need to read papers on average of the order of twenty times to get the gist!

Just a head start: you will come across the "frequentist" and "subjectivist" notions of probability. The former is the idea of defining probabilities as frequencies in an experiment as the number of trials $\to\infty$. The latter, subjectivist notion is an "intuitive" measure of likelihood not gleaned from experiment and often assigned by things like symmetry: for example, we hold up the notion of a "fair die" for which, "by symmetry", all outcomes are equally likely, and we beget this notion abstractly, independently of any experiment, even before we know such a thing might be possible.
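The frequentist idea above can be sketched in a few lines: simulate rolls of a fair die and watch the relative frequency of one face wander toward $1/6$ as the number of trials grows (the seed below is arbitrary, chosen only to make the run reproducible).

```python
import numpy as np

rng = np.random.default_rng(42)

# Frequentist idea: define the probability of an outcome as the limit of
# its relative frequency as the number of trials goes to infinity.
for n in (100, 10_000, 1_000_000):
    rolls = rng.integers(1, 7, size=n)   # n simulated fair-die rolls
    freq_one = np.mean(rolls == 1)       # relative frequency of rolling a 1
    print(n, freq_one)                   # approaches 1/6 ≈ 0.1667
```

The subjectivist, by contrast, assigns $1/6$ before any rolling happens at all, purely from the symmetry of the die.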

3

Let me try to give you a kitchen-table explanation.

I can't help you with statistics vis-a-vis quantum mechanics, but probability is very basic.

The underlying "real stuff" in quantum mechanics is a set of numbers that, when squared, produce probabilities of seeing things. Typically, these numbers are complex, but they don't always have to be.

These numbers are called the amplitudes of the probabilities. They are called that because, just as the peak voltage of an electrical signal is called its amplitude, and its energy is proportional to the voltage squared, so in the quantum world probability is proportional to the amplitude squared.

As I said, these amplitudes are treated as the underlying realities of everything. They can be added to each other, because they are complex numbers. For example, if you have two such numbers, each one can represent a probability, but when you add them together, the result could be larger than each one by itself, or it could be zero, if one number is the negative of the other. So you see they can act like waves, either canceling or reinforcing each other.

If you think of tossing a die, with 6 numbered sides, the probability of each side (like 1) is 1/6. The probability that you get either a 1 or a 4 (two particular sides) is 1/6 + 1/6 = 1/3. However, in the quantum world, each side of the die has an amplitude, which is any complex number that when squared equals 1/6. (Technically, you multiply it by its "complex conjugate".) So, if you happen to have a die where the amplitude of 1 and the amplitude of 4 are opposites, the probability of (1 or 4) would be zero!
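The die example above can be sketched as a toy calculation (the opposite-phase amplitudes are hypothetical, following the answer's example): classically the probabilities add, but quantum mechanically the amplitudes add first, and opposite amplitudes cancel.

```python
import cmath

# Toy "quantum die": each face gets an amplitude whose squared
# magnitude is 1/6. Pick opposite amplitudes for faces 1 and 4.
a1 = cmath.sqrt(1 / 6)    # amplitude for face 1
a4 = -cmath.sqrt(1 / 6)   # amplitude for face 4: the negative of a1

# Classical die: probabilities of exclusive outcomes add.
p_classical = 1 / 6 + 1 / 6          # = 1/3

# Quantum interference: add the amplitudes, then square the magnitude.
p_quantum = abs(a1 + a4) ** 2        # = 0: the amplitudes cancel

print(p_classical, p_quantum)
```

Each face on its own still has probability $|a_i|^2 = 1/6$; it is only the combined outcome "1 or 4" that the interference wipes out.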

I still think the double-slit experiment is the best eye-opener for the relationship between probability and its amplitudes.

Mike Dunlavey
  • 17,055