
Assume I have a radioactive sample composed of $N_0$ atoms of some type A. I know that if I measure, at time $t$, the number of atoms that have not yet decayed, this number will be given by

$$ N(t) = N_0 \exp\left({-t/T}\right) $$

where $T$ is the mean decay time and $N_0$ is the initial number of atoms of type A in the sample. However, suppose I have a detector that allows me to detect every radioactive emission from the sample and to measure the exact time at which each emission happens. I will continue this experiment until I have enough data to plot a histogram: on the $y$ axis I put the number of emissions detected and on the $x$ axis the time at which they were detected. What shape do I expect for this histogram?

I thought I would find a uniform distribution, since the exact time at which each atom of the sample decays is completely random.

Another question is: what distribution will I find if I plot the temporal distances between detection times? For example, at $t_1$ I measure an emission and at $t_2$ another one; the temporal distance is $t_2-t_1$. I repeat this for each pair of consecutive detection times $t_i$, for $i$ from 1 to $n$, where $n$ is the number of emissions detected, and then plot the histogram of the resulting temporal distances.

Thanks.

Leonardo

3 Answers


The distribution of the number of atoms after time $t$ is given by the Poisson distribution, as Jon Custer already stated in his comment. This is a standard result in physics and you will find many references. The distribution of the times between two decays, however, is the exponential distribution, which is the (unique continuous) memoryless distribution.

What does this memoryless property mean? Suppose the probability of observing at least one decay during a time interval $dt$ is $P_1 = P(X\le dt)$. Further suppose that we have already waited a time $t_0$ and during that time no decay happened. The memoryless property means that the probability that we observe a decay during the interval $[t_0, t_0+dt]$ is also $P_1 = P(X\le dt)$. Hence, no matter how long we wait, as long as there is no decay the probability of observing one does not change. This is exactly what we expect from a random process such as radioactive decay. E.g. in a $\beta^-$ decay there is no "build-up" of the decay; the neutron suddenly decays into its constituents.
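A quick numerical illustration of the memoryless property (a minimal sketch; the mean waiting time `tau`, the interval `dt`, and the already-waited time `t0` are arbitrary assumed values):

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 1.0                                  # assumed mean waiting time between decays
samples = rng.exponential(tau, size=1_000_000)

dt = 0.1
p_fresh = np.mean(samples <= dt)           # P(X <= dt), starting the clock at zero

t0 = 2.0 * tau                             # we have already waited t0 with no decay
survivors = samples[samples > t0]
p_after_wait = np.mean(survivors <= t0 + dt)   # P(X <= t0 + dt | X > t0)

print(p_fresh, p_after_wait)               # the two estimates agree: memorylessness
```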

Semoi

Poissonian vs. pure death process
Contrary to the immediate intuition (reflected in the comments and the earlier version of my own answer), we are not dealing here with a Poisson process, but with a pure death process (so called as a particular case of the more general birth–death process). Both are Markovian processes, where the probability of the next event depends only on the parameters of the previous event. However, in a Poisson process this probability does not depend on the total number of previous events, whereas in the pure death process it does.

The probability of having $n$ non-decayed atoms is described by the following equations: $$ \dot{p}_n(t) = -\lambda n p_n(t) + \lambda (n+1)p_{n+1}(t),\quad n>0,\\ \dot{p}_0(t) = \lambda p_1(t). $$ The first term describes the reduction of the probability of having $n$ atoms due to the decay of one atom among $n$; the second term describes the increase of this probability via the decay of one atom among $n+1$. The difference with the Poisson process is the presence of the factors $n$ and $n+1$ in the rates, reflecting the fact that the probability of an atom decaying is proportional to the number of atoms.

The equations above are easily converted into an equation for the average number of atoms, $$ \langle N(t)\rangle = \sum_{n=0}^{N_0} n\, p_n(t): $$ multiplying the master equation by $n$ and summing over $n$ gives $\frac{d}{dt}\langle N\rangle = -\lambda \langle N\rangle$, with solution $$ \langle N(t)\rangle = N_0 e^{-\lambda t} = N_0 e^{-t/T}, $$ where $T = 1/\lambda$ is the mean decay time.
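As a cross-check, one can integrate this master equation numerically and verify that the mean follows the exponential law (a sketch; $N_0$, $\lambda$, and the time grid are arbitrary assumed values):

```python
import numpy as np
from scipy.integrate import solve_ivp

N0, lam = 50, 0.3                          # assumed initial atom number and decay rate

def master(t, p):
    # dp_n/dt = -lam*n*p_n + lam*(n+1)*p_{n+1}, with p_{N0+1} = 0
    n = np.arange(N0 + 1)
    dp = -lam * n * p
    dp[:-1] += lam * n[1:] * p[1:]
    return dp

p0 = np.zeros(N0 + 1)
p0[N0] = 1.0                               # start with exactly N0 atoms
ts = np.linspace(0.0, 10.0, 200)
sol = solve_ivp(master, (ts[0], ts[-1]), p0, t_eval=ts, rtol=1e-8, atol=1e-10)

mean_n = np.arange(N0 + 1) @ sol.y         # <N>(t) = sum_n n * p_n(t)
print(np.max(np.abs(mean_n - N0 * np.exp(-lam * ts))))   # small: matches N0*exp(-lam*t)
```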

Survival probability and transition probability
Thus, the survival probability, i.e., the probability that after a decay event at time $t'$, with $n$ undecayed atoms remaining, we do not observe another decay event till time $t$, is $$ S(t|n,t')=e^{-\lambda n(t-t')}, $$ whereas the probability density for the next decay to occur at time $t$ is $$ f(n-1, t|n, t') = -\frac{d}{dt}S(t|n,t') = \lambda n e^{-\lambda n(t-t')}. $$
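In other words, with $n$ undecayed atoms remaining, the waiting time to the next decay is exponentially distributed with rate $n\lambda$, i.e. it is the minimum of $n$ independent exponential lifetimes. A small sketch checking this (parameter values are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n = 0.2, 10                           # assumed decay rate and number of surviving atoms

# Next-decay waiting time = minimum of n independent exponential lifetimes
waits = rng.exponential(1.0 / lam, size=(100_000, n)).min(axis=1)

print(waits.mean(), 1.0 / (n * lam))       # both ≈ 1/(n*lam), as S(t|n,t') implies
```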

Joint probability density of multiple events
The joint probability density of decay events occurring at times $t_M>t_{M-1}>...>t_2>t_1$, lying in the interval $[t_0, t]$, conditioned on having $N_0$ atoms at $t_0$, is then given by $$ f(t, t_M, t_{M-1}, ..., t_2, t_1|N_0)=\\S(t|N_0-M, t_M)\,f(N_0-M, t_M|N_0-M+1, t_{M-1})\cdots f(N_0-2, t_2|N_0-1, t_1)\,f(N_0-1, t_1|N_0, t_0)=\\ S(t|N_0-M, t_M)\prod_{m=1}^M f(N_0-m, t_m|N_0-m+1, t_{m-1}) $$
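This product structure is exactly what a direct (Gillespie-type) simulation of the pure death process reproduces: starting from $N_0$ atoms, one repeatedly draws the next waiting time from an exponential with rate $\lambda n$ and decrements $n$. A sketch that also produces the two histograms the question asks about (parameter values are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
N0, lam = 1000, 0.1                  # assumed initial number of atoms and decay rate

def decay_times(N0, lam):
    """One realization of the pure death process: ordered times of all decays."""
    times, t, n = [], 0.0, N0
    while n > 0:
        t += rng.exponential(1.0 / (lam * n))   # waiting time ~ Exp(rate = lam*n)
        times.append(t)
        n -= 1
    return np.array(times)

ts = decay_times(N0, lam)
emission_hist, edges = np.histogram(ts, bins=50)   # histogram of emission times
gaps = np.diff(ts)                                 # temporal distances between detections
print(emission_hist[:5], emission_hist[-5:])       # counts fall off roughly exponentially
```

The histogram of emission times falls off exponentially (the first question), while the inter-arrival times `gaps` are close to exponential with rate $\lambda N_0$ at early times and become longer on average as the sample is depleted (the second question).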

References
As a general (although somewhat advanced) mathematical text on point processes and survival analysis I suggest Aalen et al., *Survival and Event History Analysis*.

Update
I add for completeness the solution for $p_n(t)$: $$ P(n, t|N_0) = \begin{cases} {N_0\choose n}e^{-n\lambda t}\left(1-e^{-\lambda t}\right)^{N_0-n}, & \text{for } n\leq N_0,\\ 0, & \text{otherwise}. \end{cases} $$
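This is the binomial distribution: each atom independently survives to time $t$ with probability $e^{-\lambda t}$. A quick check against a direct simulation (parameter values are arbitrary assumptions):

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(3)
N0, lam, t = 20, 0.5, 1.3                  # assumed parameters

# Each atom independently survives to time t with probability exp(-lam*t)
survivors = (rng.exponential(1.0 / lam, size=(200_000, N0)) > t).sum(axis=1)
empirical = np.bincount(survivors, minlength=N0 + 1) / len(survivors)
exact = binom.pmf(np.arange(N0 + 1), N0, np.exp(-lam * t))
print(np.max(np.abs(empirical - exact)))   # small sampling error only
```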

Roger V.

If your sampling time is comparable to the lifetime $T$ of your emitter, then the distribution of counts will not be uniform but exponential: in two little bins separated by $T$, the earlier bin will have more counts by a factor of $e$.

If you sample for a time $\delta t$ that’s short compared to the lifetime of the emitter, $\delta t \ll T$, there is still a slope $\frac{d}{dt}N(t) = - N(t)/T$ in the count rate. If you neglect the higher derivatives, you would expect each time bin to contain fewer counts than its predecessor by a fraction $\delta t / T$. However, whether you can distinguish between the count rates in adjacent bins depends on the absolute number of counts you collect. Thanks to Poisson statistics, two time bins which are both predicted to receive $N$ counts will actually receive $N\pm\sqrt N$. So if you wanted to distinguish two adjacent time bins having width $\delta t/T = 1\%$, the number of counts per bin required to do so with statistical confidence is something like $\left(\frac{1}{1\%}\right)^2 = 10^4$. If the number of decays detected in each time bin is small, it’ll be impossible to distinguish between the actual exponential decay and your guess of a uniform distribution.
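A small numerical illustration of this counting argument (a sketch; the $1\%$ bin-width difference is taken from the example above, everything else is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(4)
delta = 0.01                     # fractional count difference between adjacent bins, delta_t / T
N = int(1 / delta**2)            # ~1e4 counts, so that the difference delta*N exceeds sqrt(N)

a = rng.poisson(N, size=100_000)                 # counts in the earlier bin
b = rng.poisson(N * (1 - delta), size=100_000)   # counts in the later bin
print(np.mean(a > b))            # noticeably above 0.5 (about 0.76): the slope is just visible
```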

You are interested in the arrival of events within a single time bin, to which I say: make your time bins smaller and use my analysis.

The distribution of intervals between successive related events is also related to the Poisson distribution, though in a convolved sort of a way. Suppose that your events are occurring uniformly, at an average rate of $1/\tau$, but are independent and uncorrelated. Your first event starts a clock, which is no less arbitrary than any other starting point. If you wait until $\tau$ after your clock starts you expect to have observed $1\pm1$ further events; if you wait until $2\tau$ you expect to have observed $2\pm\sqrt2$ further events; if you wait until $10\tau$ you expect to have observed $10\pm\sqrt{10}$ further events. You can invert this and say that the probability that $10\tau$ (or more) elapses between two successive events is the same as the probability of drawing a zero from a Poisson distribution with mean $10$: small, but not negligible. The probability that $\tau$ (or more) elapses between successive events is the same as the probability of drawing zero from a Poisson distribution with mean $1$: about $1/e$.
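This is easy to check numerically (a sketch, with an arbitrary assumed rate: events are scattered uniformly and the gaps between neighbours are histogrammed):

```python
import numpy as np

rng = np.random.default_rng(5)
tau = 2.0                                   # assumed mean spacing between events
n_events = int(1e6 / tau)                   # events scattered uniformly over [0, 1e6]

gaps = np.diff(np.sort(rng.uniform(0.0, 1e6, size=n_events)))

print([round(np.mean(gaps > k * tau), 3) for k in range(1, 4)])
# ≈ [exp(-1), exp(-2), exp(-3)]: P(gap > k*tau) = P(Poisson(k) = 0)
```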

If you do the integral on this process, you learn that inter-arrival times are described by the exponential distribution.
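Explicitly, writing $s$ for the inter-arrival time, the zero-count Poisson probability gives the survival function, and differentiating it gives the exponential density:

$$ P(\text{gap} > s) = P\big(\mathrm{Poisson}(s/\tau) = 0\big) = e^{-s/\tau} \quad\Longrightarrow\quad f(s) = -\frac{d}{ds}e^{-s/\tau} = \frac{1}{\tau}e^{-s/\tau}. $$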

rob