6

Can radioactive decay be used as a source of true randomness? And do we know whether radioactive decay is truly random?

Edit: here is an example of a true random number generator built using radioactive decay: http://www.instructables.com/id/Arduino-True-Random-Number-Generator/ Would this be truly random if the answers to the first two questions are yes?

Qmechanic
  • 201,751
  • yes, yes, and http://physics.stackexchange.com/q/105107/ – Martin Beckett Apr 22 '15 at 23:05
  • The standard algorithm is to compare the time between decays (c) and (b) with that between (b) and (a). $t_{cb} > t_{ab}$ implies one value for your bit, the other sense the other. Discard any that are too close to call or any triplets featuring one or more times near your resolution. – dmckee --- ex-moderator kitten Apr 22 '15 at 23:16
  • No, there is no such thing as randomness in this universe. Radioactive decay is not an exception; it is just another physical law that we cannot yet grasp with our technology and knowledge. Remember that 1000 years ago many more things were also random to us, but today they are not. If there were such a thing as randomness, this universe would never have become real. Everything has an order and a duty. – Furkan Gözükara Sep 07 '16 at 14:08
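The bit-extraction scheme described in the comment above (compare successive inter-decay intervals and discard ambiguous triplets) can be sketched as follows. This is a minimal illustration, not the linked Instructables code: the timestamps would come from a real detector, but here a Poisson process is simulated with exponential inter-arrival times, and the `resolution` cutoff is a hypothetical parameter.

```python
import random

def decay_bits(timestamps, resolution=1e-6):
    """Extract bits by comparing the interval (b, c) with the
    interval (a, b) for non-overlapping triplets (a, b, c).
    Triplets too close to call, or with an interval near the
    timer resolution, are discarded, as the comment suggests."""
    bits = []
    for i in range(0, len(timestamps) - 2, 3):
        a, b, c = timestamps[i:i + 3]
        t_ab, t_bc = b - a, c - b
        # Skip ambiguous or resolution-limited triplets
        if abs(t_bc - t_ab) < resolution or min(t_ab, t_bc) < resolution:
            continue
        bits.append(1 if t_bc > t_ab else 0)
    return bits

# Simulated radioactive decay: exponential inter-arrival times
random.seed(0)
t, times = 0.0, []
for _ in range(10000):
    t += random.expovariate(1.0)  # mean interval of 1 time unit
    times.append(t)

bits = decay_bits(times)
print(len(bits), sum(bits) / len(bits))  # fraction of 1s should sit near 0.5
```

Because the two intervals in each triplet are exchangeable, each kept bit is 1 with probability 1/2 regardless of the (unknown) decay rate, which is why the scheme removes bias without calibration.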

1 Answer

8

Your question drives at the definition of "true randomness", which is a deep question and not altogether resolved. But in short, in modern physics we believe the answer is yes. Indeed, there is a whole body of knowledge around Bell's theorem and the untenability of counterfactual definiteness (the notion that the outcome of a quantum measurement exists before the measurement is made), so we believe that in principle we cannot foretell, in any way, exactly when a radioactive decay event will happen.

Many philosophers and mathematicians who deal with foundational questions about notions of randomness and probability theory go even further than this: they look to modern quantum mechanics as a model for what randomness truly is, and for help in formulating notions and definitions of randomness. You can get a feel for this from the Stanford Encyclopedia of Philosophy (a most excellent resource), particularly the pages:

  1. Chance versus Randomness

  2. Interpretations of Probability; and

  3. Bayesian Epistemology.

You'll quickly see that rigorous underpinnings of probability and statistics are a work in progress.

One definition of true randomness is the following: can we foretell the times at which decays will happen such that there is any nonzero correlation between the predicted and observed times? If the answer is no, then the sequence is random. One can make this notion rigorous through Kolmogorov complexity. Thus, informally, we talk about randomness as the futility of foresight: we can have no foresight into true randomness.

So you have a whole sequence of numbers encoding the time differences between successive events at your radioactive decay detector. We believe that, on average, there is no way of describing this sequence that is shorter than simply listing the time-difference series itself: the mean ratio of the complexity $K(X)$ of the observed sequence $X$ to the length $L(X)$ of the raw sequence approaches unity as the sequence length approaches infinity.
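Kolmogorov complexity itself is uncomputable, but the compressed length of a sequence gives an upper bound on it, so a general-purpose compressor can illustrate the incompressibility claim above. The sketch below uses `zlib` as a crude stand-in for $K(X)$: a random byte stream (here `os.urandom` substitutes for detector output) should yield a ratio near 1, while a highly patterned stream compresses to a small fraction of its length.

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed length / raw length: a computable upper-bound
    proxy for K(X)/L(X), which is itself uncomputable."""
    return len(zlib.compress(data, 9)) / len(data)

random_bytes = os.urandom(100_000)    # stand-in for decay-timing data
structured = b"0123456789" * 10_000   # highly patterned sequence

print(compression_ratio(random_bytes))  # close to (even slightly above) 1
print(compression_ratio(structured))    # far below 1
```

As the comments below note, this is only a one-sided test: a short sequence from a truly random source can happen to compress well, so incompressibility is a statement about the average behaviour over long sequences, not a certificate for any particular finite output.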

  • Kolmogorov Complexity is irrelevant. Assuming your measurements have finite precision there is a certain probability of obtaining [1,1,1,1,1,1,1,1,1,1] for the first ten intervals, which obviously has low Kolmogorov Complexity, but was still perfectly unpredictable. – Hugh Allen Apr 23 '15 at 06:01
  • It goes the other way too - you can have high Kolmogorov Complexity with low or no randomness, for example if you are reading from a table of "random" numbers, I can perfectly predict what you will say if I have a copy of the table or can see over your shoulder. There is no fundamentally unpredictable quantum process going on. – Hugh Allen Apr 23 '15 at 06:19
  • @HughAllen Not irrelevant. What your comment shows is that you need to look at an average notion of complexity. See end of answer. – Selene Routley Apr 23 '15 at 06:24
  • OK, your edit makes a difference. A truly random sequence will be incompressible on average due to the pigeon-hole principle. But if you want to know if a particular finite sequence was taken from a "truly random" source, statistics can give you a hint but never prove anything. It just seems like a bad way to define randomness. If I told you I had generated a random bit and it was a 1, would you be able to tell if it was "truly random"? The only definition that makes a bit of sense is for nobody to have been able to predict that it would be a 1. – Hugh Allen Apr 23 '15 at 06:48
  • @HughAllen Like I said: true randomness is a subtle and deep concept and if you've not read the Stanford Philosophy articles before (although it sounds as though you may have) you might enjoy them. People are very glib about "randomness" as though it's universally understood and it's not at all. The question that often amuses me is "Why can't QM be modelled by classical statistics" with the clear undertext that classical statistics would be easier to understand. My reaction to that is that QM is "easy" in the sense that one can always ask Nature for the answer by doing an experiment .... – Selene Routley Apr 23 '15 at 06:58
  • .....we don't have that luxury for probability theory! – Selene Routley Apr 23 '15 at 06:59