Is randomness due to our lack of knowledge, or is the behaviour of the universe truly random?
Or, in other words,
are the allegations by EPR about hidden variables in QM justifiable? What evidence could prove or disprove EPR?
This is a very general question, and can be answered from several perspectives. I shall try to give an overview so you can perhaps research the areas that interest you a bit more.
Firstly, the most fundamental interpretation of probability (as considered by most mathematicians) is Bayesian probability. This effectively states that probability measures the state of knowledge of an observer.
This view has interesting ties with physics, in particular quantum mechanics. One could consider the random outcome of a QM measurement (wavefunction collapse) from a frequentist approach, but it is often more appealing philosophically to consider it as a state of knowledge. (The famous thought experiment of Schrodinger's cat is a good example - until one opens the box, we can only say it is an "alive-dead" cat!)
Interestingly, Bayesian probability does not explicitly preclude determinism (or non-determinism). Our current understanding of quantum mechanics, however, does. In other words, even with perfect knowledge of the state of a system at a given time, we cannot predict the outcomes of future measurements. This most famously upset Albert Einstein, who spent many years of his life looking for a more fundamental deterministic theory - a so-called hidden-variables theory. Since then, however, we have learnt of Bell's theorem, which implies the non-existence of local hidden variables, suggesting that there is no more fundamental theory that "explains away" the non-determinism of QM. This is however a very contentious issue, and in any case it does not rule out non-local hidden-variable theories - the most famous of which is Bohm's interpretation.
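To make Bell's theorem a little more concrete, here is a minimal sketch of my own (using one standard textbook choice of measurement angles, not anything from the answer above) of the CHSH form of the argument: for the spin singlet state, quantum mechanics predicts a correlation $E = -\cos(\theta_a - \theta_b)$, and the resulting CHSH combination reaches $2\sqrt{2}$, exceeding the bound of 2 that any local hidden-variable model must obey.

```python
import numpy as np

# Quantum-mechanical correlation for the spin singlet state when the two
# spins are measured along directions separated by (theta_a - theta_b).
def E(theta_a, theta_b):
    return -np.cos(theta_a - theta_b)

# One standard choice of CHSH measurement angles (radians).
a, a_prime = 0.0, np.pi / 2
b, b_prime = np.pi / 4, 3 * np.pi / 4

# CHSH combination of the four correlations.
S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)

print(f"quantum |S| = {abs(S):.3f}")        # ~2.828, i.e. 2*sqrt(2)
print("local hidden-variable bound: |S| <= 2")
```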
In summary, this issue is far from settled, and creates a lot of contention between different groups of physicists as well as philosophers today.
There is a fundamental randomness in the universe, but we can often treat things as deterministic. For example, we can accurately predict the path of a projectile provided we know the initial velocity and the gravitational acceleration. However, every measurement has uncertainty due to the accuracy and precision of the instruments used to make it, and because our predictions are built from these measurements, they inherit that uncertainty.
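As a rough sketch of what that inherited uncertainty looks like in practice (my own illustrative numbers, not part of the original answer), here is a first-order error propagation through the ideal projectile-range formula $R = v^2 \sin(2\theta)/g$:

```python
import numpy as np

g = 9.81                      # m/s^2, gravitational acceleration
theta = np.radians(45.0)      # launch angle

v = 20.0                      # measured initial speed, m/s
dv = 0.5                      # instrument uncertainty in that speed, m/s

# Ideal range and its first-order uncertainty: dR = |dR/dv| * dv.
R = v**2 * np.sin(2 * theta) / g
dR = 2 * v * np.sin(2 * theta) / g * dv

print(f"Predicted range: {R:.1f} m +/- {dR:.1f} m")   # ~40.8 m +/- 2.0 m
```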
Uncertainty becomes a fundamental problem at extremely small scales. You should read up on the Uncertainty Principle for a detailed explanation, but I will attempt to put it simply. To make a measurement, you have to interact with the object. For example, to see in the dark you may use a torch: it shines light at objects, the light is scattered and reflected, and your eyes detect the reflected light. Here the light interacts with the object you are observing. At large scales this disturbance changes very little, but at extremely small scales the energy carried by the light is enough to change the system significantly. So the act of observing necessarily changes the system, and you can never measure something exactly. It is important to realise that this is a fundamental law of the universe, not just a limitation of our equipment. I recommend searching for the Dr Quantum videos - an animated series that explains these concepts. Because of these limitations, we have to model things like the position of a particle as a probability distribution.
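To get a feel for the scales involved, here is a rough numerical illustration (the numbers and the helper function are mine, not from the answer above) of the Heisenberg limit $\Delta x\,\Delta p \geq \hbar/2$, rewritten as a minimum velocity uncertainty $\Delta v \geq \hbar/(2m\,\Delta x)$:

```python
hbar = 1.054571817e-34  # reduced Planck constant, J*s

def min_velocity_uncertainty(mass_kg, dx_m):
    """Smallest velocity uncertainty allowed once position is known to within dx."""
    return hbar / (2 * mass_kg * dx_m)

# Electron localised to about an atomic radius (1e-10 m):
print(min_velocity_uncertainty(9.109e-31, 1e-10))  # ~5.8e5 m/s: enormous

# Baseball (0.145 kg) localised to a millimetre:
print(min_velocity_uncertainty(0.145, 1e-3))       # ~3.6e-31 m/s: negligible
```

The limit is enormous for an electron confined to an atom and utterly negligible for anything you can hold in your hand, which is why the randomness only becomes unavoidable at tiny scales.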
Another important point with regard to determinism is radioactive decay. We can predict very well how much of a radioactive substance will decay in a certain time - it is simply an exponential decay. However, if we extract a single atom, we have no idea when it will decay. This is completely random: the decay of an individual atom is indeterministic and essentially unaffected by environmental factors. Again, our models are reduced to probability.
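A quick simulation makes the contrast vivid (a sketch of my own, with an arbitrary mean lifetime): the fraction of a large sample surviving past a time $t$ matches $e^{-t/\tau}$ very closely, while individual decay times are all over the place.

```python
import numpy as np

rng = np.random.default_rng(0)

# Decay times are exponentially distributed with mean lifetime tau
# (tau chosen arbitrarily here, purely for illustration).
tau = 10.0
decay_times = rng.exponential(tau, size=100_000)

# The ensemble behaves predictably: the surviving fraction matches exp(-t/tau).
t = 5.0
print(np.mean(decay_times > t), np.exp(-t / tau))   # both ~0.61

# A single atom does not: each draw is completely unpredictable.
print(rng.exponential(tau, size=5))                  # five wildly different decay times
```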
I think there is another level at which this question is being asked. Randomness of a symbol string means there is no formal data-compression algorithm that reduces the string to some shorter form. The shortest description of a string, meaning the shortest program that outputs the string and then halts when run on a Turing machine, is its Kolmogorov complexity. So to emulate the string there exists some Turing machine, with a "tape" and a program, and the complexity of the string can be no greater (in length, bits, etc.) than the description of that machine, provided the machine is guaranteed to halt. A set of $n$ coin tosses will produce $N = 2^n$ possible binary configurations. For large $n$, only a small fraction of these $N$ binary strings can be generated by a machine significantly shorter than the string itself. The Kolmogorov complexity is related to Chaitin's halting probability, which is itself not computable in general, but which does give a bound on the number of strings that are "halting."
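As a very crude, hands-on proxy (my own illustration; a general-purpose compressor only gives an upper bound on Kolmogorov complexity, which is itself uncomputable), you can compare how well zlib compresses a random string against a highly patterned one of the same length:

```python
import os
import zlib

random_bytes = os.urandom(10_000)       # 10,000 "random" bytes
patterned_bytes = b"ab" * 5_000         # highly regular string of the same length

# Random data is essentially incompressible (output is ~10,000 bytes or more,
# once the compressor's own overhead is included).
print(len(zlib.compress(random_bytes)))

# The patterned string collapses to a few dozen bytes.
print(len(zlib.compress(patterned_bytes)))
```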
This leads to the undefinability of randomness. To define randomness you would need some algorithm that can decide whether a string is random; that is, given a string $S$, there would have to exist a Turing machine which determines $\mathrm{Rand}(S) = T \vee F$. However, no such machine is mathematically possible. This means randomness is not computable.
OK, what if we were to use some sort of metaphysical observation (not a measurement), and then use this observation to choose when to measure, thus perhaps partially aligning the collapse with some higher order? I would assume the wavefunction would still collapse according to the same "random" order, but perhaps, because of the partial alignment, we could somehow influence the outcome of said measurement.