I know that perhaps some will answer with "the double slit experiment" but what is it that makes the double slit experiment something probabilistic?
I cannot understand this, so please explain it to me as if I were a child.
If you measure a system, instead of giving you a definite value, quantum mechanics gives you probabilities for the different measurement outcomes.
In Newtonian mechanics we have a position $\vec r(t)$ of a point particle, and we can give a deterministic prediction of where we will find the particle at time $t_1$ by solving the equation of motion $$ m \ddot {\vec r}(t) = \vec F\big(\vec r(t), \dot {\vec r}(t), t\big). $$
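To make that contrast concrete, here is a minimal numerical sketch (the drag force and all parameters are my own illustrative choices, not part of the physics above): stepping the equation of motion forward from fixed initial conditions always yields the same trajectory, so the prediction is definite.

```python
import numpy as np

# Illustrative sketch: deterministic integration of m * r'' = F(r, v, t).
# Here F is gravity plus linear drag; the numbers are arbitrary.
m = 1.0                        # mass (kg)
g = np.array([0.0, -9.81])     # gravitational acceleration (m/s^2)
gamma = 0.1                    # drag coefficient (kg/s)

r = np.array([0.0, 0.0])       # initial position (m)
v = np.array([10.0, 10.0])     # initial velocity (m/s)
dt = 1e-3                      # time step (s)

# Symplectic Euler steps: the same initial conditions always give
# the same final position -- nothing probabilistic here.
for _ in range(2000):
    F = m * g - gamma * v
    v = v + (F / m) * dt
    r = r + v * dt

print("position at t = 2 s:", r)
```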
In quantum mechanics a particle is described by a field $\psi(\vec r, t)$ called the wave function. There is a deterministic equation of evolution, the Schrödinger equation, which tells you how this field evolves: $$ i \hbar \dot \psi = \left( - \frac{\hbar^2}{2m} \Delta + V(\vec r, t) \right) \psi. $$
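This evolution is just as deterministic as the Newtonian one. As a sketch (the one-dimensional harmonic potential and all numerical parameters are illustrative choices of mine), a standard split-step integration of the Schrödinger equation gives exactly the same $\psi$ every time it is run:

```python
import numpy as np

# Illustrative sketch: split-step Fourier evolution of a 1D wave packet.
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)     # angular wavenumbers

hbar, m, dt = 1.0, 1.0, 0.01

# Initial Gaussian wave packet with momentum k0
k0, sigma = 5.0, 1.0
psi = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

V = 0.5 * x**2                               # illustrative potential

# Half potential step, full kinetic step, half potential step, repeated.
for _ in range(500):
    psi *= np.exp(-0.5j * V * dt / hbar)
    psi = np.fft.ifft(np.exp(-0.5j * hbar * k**2 * dt / m) * np.fft.fft(psi))
    psi *= np.exp(-0.5j * V * dt / hbar)

# The norm stays 1 and rerunning reproduces psi exactly: deterministic.
print("norm:", np.sum(np.abs(psi)**2) * dx)
```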
However, when we measure the position of the particle, we don't get a specific prediction; rather, we get a probability distribution $$ p(\vec r, t) = \left| \psi(\vec r, t) \right|^2. $$
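The probability only enters at this point. A minimal sketch, assuming a Gaussian wave packet on a grid (my own illustrative setup), of how the Born rule turns $|\psi|^2$ into random measurement outcomes:

```python
import numpy as np

# Illustrative sketch: the Born rule as a sampling recipe.
N = 1000
x = np.linspace(-10, 10, N)
dx = x[1] - x[0]

sigma = 1.5
psi = np.exp(-x**2 / (4 * sigma**2)).astype(complex)   # Gaussian wave packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)            # normalize

p = np.abs(psi)**2 * dx        # probability for each grid cell
p /= p.sum()                   # guard against rounding error

rng = np.random.default_rng()
outcomes = rng.choice(x, size=10_000, p=p)  # 10,000 independent position "measurements"

# Each single outcome is unpredictable; only the histogram of many
# outcomes reproduces |psi|^2.
print("mean:", outcomes.mean(), "std:", outcomes.std())
```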
The measurement will return a specific value for the position, but we don't know which one, and the measurement will change the state of the system (unless it was already in a so-called eigenstate).
To our current knowledge there is no way of knowing the actual measurement result ahead of time. We believe this uncertainty is not due to a lack of knowledge about the system, but is a fundamental, intrinsic property of our physical world.
There are theories that try to explain this strange non-deterministic measurement in terms of the deterministic evolution equation. The fundamental idea behind this is called decoherence: we model the system and the measurement apparatus quantum mechanically, and include the state of the "environment". If we then don't care about the state of the environment after the measurement (which we can model by a calculational operation called tracing out the environment), we get exactly the probability distribution for the states of the system. However, it is not entirely clear how and why this is valid and describes our world.

One way to interpret it is that system, observer and environment become entangled in a way that guarantees consistent classical measurement results. This is the starting point for the Everett interpretation (also called the many-worlds interpretation), where we say that each term in the entangled superposition of classical measurement result and environment state will no longer interact with the others, and therefore constitutes a separate world evolution that splits off. Everything that could happen does happen in some of the worlds, and on average such a world line will see the classical probabilities.
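To make the "tracing out the environment" step concrete, here is a small toy model of my own (not part of the argument above): a system qubit in a superposition gets correlated with an environment qubit, and tracing out the environment leaves a diagonal density matrix, i.e. just classical probabilities with no interference terms.

```python
import numpy as np

# Illustrative sketch: decoherence via a partial trace over the environment.
# System starts in a*|0> + b*|1>; a measurement-like interaction correlates
# it with an environment qubit: a*|0>|E0> + b*|1>|E1>.
a, b = np.sqrt(0.3), np.sqrt(0.7)

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Entangled system-environment state (with |E0> = |0>, |E1> = |1>)
psi = a * np.kron(ket0, ket0) + b * np.kron(ket1, ket1)
rho = np.outer(psi, psi.conj())             # full 4x4 density matrix

# Partial trace over the environment: reshape to (sys, env, sys, env)
# and sum over the environment indices.
rho_sys = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(rho_sys)
# [[0.3 0. ]
#  [0.  0.7]]  -- no off-diagonal coherences, just classical probabilities.
```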
Another line of thought is so-called hidden-variable theories, in which the measurement results are indeed knowable in advance, but that knowledge is hidden in inaccessible degrees of freedom of the system. The Bell inequalities put hard bounds on the possible correlations of certain measurements for a large class of such theories (local hidden-variable theories), and experiments have ruled those out, because measurements on entangled systems violate the Bell inequalities.
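For concreteness, here is a small sketch of the CHSH form of a Bell inequality (the singlet state and the measurement angles are the standard textbook choices; the code itself is only illustrative): local hidden-variable theories must satisfy $|S| \le 2$, while the quantum prediction reaches $2\sqrt{2}$, and experiments agree with the quantum value.

```python
import numpy as np

# Illustrative sketch: the CHSH quantity for the spin singlet state.
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def spin(theta):
    """Spin measurement along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state (|01> - |10>) / sqrt(2)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Quantum correlation <A(a) x B(b)> in the singlet state."""
    op = np.kron(spin(a), spin(b))
    return np.real(singlet.conj() @ op @ singlet)

# Standard CHSH measurement angles
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # ~2.828 > 2: above the local hidden-variable bound
```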
The usual textbook answer is the Copenhagen interpretation, sometimes glossed as "shut up and calculate", which states that there is no hidden mystery and nothing more to know than what the mathematical formalism of quantum mechanics gives you: the theory works and makes useful predictions, and there is nothing more you can ask of a physical theory.
There are variants of standard quantum theory where this is the result of some deterministic process "behind the scenes". All such theories either fail in experiments or are not testable by experiment.
– John Doty Jul 01 '23 at 13:21