If you already know the uncertainties of your measurement process, and you care about minimizing the uncertainties in the fitted parameters, then it makes no difference whether or not you repeat your measurement several times at the same t, with an important caveat (below). I can prove that this is the case for a linear fit. I've never done the calculation for a nonlinear fit as in your example, so it may not hold true there.
Proof for a linear fit
I'm sorry if this is too complicated. I can clarify anything in the comments, or you can just skip this part if you're not interested in it, of course ;-)
From Bevington, "Data Reduction and Error Analysis for the Physical Sciences": if you fit a function of the form
$$y(x) = \sum_{k=1}^{M} a_k f_k(x)$$
then each coefficient is a ratio of determinants (Cramer's rule applied to the weighted normal equations). For $M=3$,
$$a_1 = \frac{1}{\Delta}\begin{vmatrix} \sum_l \frac{y_l f_1(x_l)}{\sigma_l^2} & \sum_l \frac{f_1(x_l) f_2(x_l)}{\sigma_l^2} & \sum_l \frac{f_1(x_l) f_3(x_l)}{\sigma_l^2} \\ \sum_l \frac{y_l f_2(x_l)}{\sigma_l^2} & \sum_l \frac{f_2^2(x_l)}{\sigma_l^2} & \sum_l \frac{f_2(x_l) f_3(x_l)}{\sigma_l^2} \\ \sum_l \frac{y_l f_3(x_l)}{\sigma_l^2} & \sum_l \frac{f_2(x_l) f_3(x_l)}{\sigma_l^2} & \sum_l \frac{f_3^2(x_l)}{\sigma_l^2} \end{vmatrix}$$
and similarly for $a_2$ and $a_3$ (replace the second or third column by the column containing $y_l$), with
$$\Delta = \begin{vmatrix} \sum_l \frac{f_1^2(x_l)}{\sigma_l^2} & \sum_l \frac{f_1(x_l) f_2(x_l)}{\sigma_l^2} & \sum_l \frac{f_1(x_l) f_3(x_l)}{\sigma_l^2} \\ \sum_l \frac{f_1(x_l) f_2(x_l)}{\sigma_l^2} & \sum_l \frac{f_2^2(x_l)}{\sigma_l^2} & \sum_l \frac{f_2(x_l) f_3(x_l)}{\sigma_l^2} \\ \sum_l \frac{f_1(x_l) f_3(x_l)}{\sigma_l^2} & \sum_l \frac{f_2(x_l) f_3(x_l)}{\sigma_l^2} & \sum_l \frac{f_3^2(x_l)}{\sigma_l^2} \end{vmatrix}$$
(this is for $M=3$, but it's the same for larger $M$). The vertical bars denote the determinant of the matrix inside them.
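For the simplest case of two basis functions, $f_1(x)=1$ and $f_2(x)=x$, the coefficient formulas reduce to a 2×2 Cramer's rule. Here is a small Python sketch (my own toy example, not from the book) that evaluates them directly:

```python
# Weighted linear least squares for y(x) = a1*f1(x) + a2*f2(x),
# with f1(x) = 1 and f2(x) = x, solved via Cramer's rule.

def weighted_fit(xs, ys, sigmas):
    """Return (a1, a2) minimizing chi^2 = sum(((y - a1 - a2*x)/sigma)^2)."""
    w = [1.0 / s**2 for s in sigmas]
    # Entries of the symmetric normal matrix and right-hand side:
    a11 = sum(w)                                   # sum f1*f1/sigma^2
    a12 = sum(wi * x for wi, x in zip(w, xs))      # sum f1*f2/sigma^2
    a22 = sum(wi * x * x for wi, x in zip(w, xs))  # sum f2*f2/sigma^2
    b1 = sum(wi * y for wi, y in zip(w, ys))       # sum y*f1/sigma^2
    b2 = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    delta = a11 * a22 - a12 * a12                  # determinant of the matrix
    a1 = (b1 * a22 - a12 * b2) / delta             # Cramer's rule
    a2 = (a11 * b2 - b1 * a12) / delta
    return a1, a2

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]            # exactly y = 1 + 2x
sigmas = [0.5, 0.5, 0.5, 0.5]
print(weighted_fit(xs, ys, sigmas))  # -> (1.0, 2.0)
```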
The only important thing to notice here is that $y_l$ appears at most once in every term of $a_k$, and its exponent is only one (i.e., there's no $y_l^2$ or higher).
The variance of $a_k$ is
$$\sigma_{a_k}^2 = \sum_m \left(\frac{\partial a_k}{\partial y_m}\, \sigma_m\right)^2$$
but, since $a_k$ is linear in each $y_m$, differentiating removes the measured values from the final expression: the variance is built only out of sums and products of terms of the form $\sum_{l=1}^N \frac{f_i(x_l) f_j(x_l)}{\sigma_l^2}$. Now suppose the first $n$ points were all measured at the same $x_1$, with the same uncertainty $\sigma_1$. Each such sum splits as
$$\sum_{l=1}^{n} \frac{f_i(x_1) f_j(x_1)}{\sigma_1^2} + \sum_{l=n+1}^N \frac{f_i(x_l) f_j(x_l)}{\sigma_l^2} = n \frac{f_i(x_1) f_j(x_1)}{\sigma_1^2} + \sum_{l=n+1}^N \frac{f_i(x_l) f_j(x_l)}{\sigma_l^2}$$
This can be written as
$$\frac{f_i(x_1) f_j(x_1)}{(\sigma_1/\sqrt{n})^2} + \sum_{l=n+1}^N \frac{f_i(x_l) f_j(x_l)}{\sigma_l^2}$$
This last expression is exactly what you would get if you measured $n$ times at the same $x$, averaged those data points, and fed the single averaged point (with uncertainty $\sigma_1/\sqrt{n}$) to your fitting routine.
So you see that, in the linear case, there is no advantage in measuring repeatedly at the same point.
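This is easy to check numerically. The sketch below (my own toy example) uses the one-parameter model $y = ax$, for which the formulas above reduce to $a = \sum w_l x_l y_l / \sum w_l x_l^2$ and $\sigma_a = 1/\sqrt{\sum w_l x_l^2}$ with $w_l = 1/\sigma_l^2$; fitting $n$ repeated points at the same $x$ gives the same slope and uncertainty as fitting their average with $\sigma/\sqrt{n}$:

```python
# Repeated measurements at one x versus a single pre-averaged point
# with uncertainty sigma/sqrt(n), for the model y(x) = a*x.
from math import sqrt

def fit_slope(xs, ys, sigmas):
    """Weighted fit of y = a*x; returns (a, sigma_a)."""
    w = [1.0 / s**2 for s in sigmas]
    s_xx = sum(wi * x * x for wi, x in zip(w, xs))
    s_xy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    return s_xy / s_xx, 1.0 / sqrt(s_xx)

# Four repeats at x = 1 (plus one point at x = 3), sigma = 0.2 each:
repeats = fit_slope([1, 1, 1, 1, 3], [2.1, 1.9, 2.0, 2.0, 6.0], [0.2] * 5)

# Same data with the four repeats averaged into one point, sigma = 0.2/sqrt(4):
avg_y = (2.1 + 1.9 + 2.0 + 2.0) / 4
averaged = fit_slope([1, 3], [avg_y, 6.0], [0.1, 0.2])

print(repeats)   # same slope and same uncertainty...
print(averaged)  # ...in both cases (up to rounding)
```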
Discussion for a nonlinear fit
In the nonlinear case, as in your example, I'm not so sure if it's the same or not. I hope other users here can answer you in more detail.
In my (not so long) experience, though, I prefer to measure at different points, but (and this is the caveat) taking care to constrain the possible values of the parameters (the "wiggle room" the function has, so to speak). For example, if your function has two exponential decays with rates $B = 1/(1\,\mathrm{s})$ and $D = 1/(20\,\mathrm{s})$, you need several points around $t \approx 1\,\mathrm{s}$ and several around $t \approx 20\,\mathrm{s}$. For example, you could take 50 points from $0\,\mathrm{s}$ to $5\,\mathrm{s}$ and 50 from $10\,\mathrm{s}$ to $100\,\mathrm{s}$.
If you don't do this, your data may not be able to determine one of the parameters very well, simply because you have little information in the range where that parameter dominates.
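One way to make this quantitative without running a full nonlinear fit (this is my own sketch, with numbers chosen for illustration) is to propagate the known $\sigma$ through the linearized model, i.e. compute the Fisher-information estimate of the parameter covariance for $y(t) = e^{-Bt} + e^{-Dt}$. Comparing a design that only samples the fast decay against one that covers both time scales:

```python
# Fisher-information estimate of parameter uncertainties for the
# two-exponential model y(t) = exp(-B*t) + exp(-D*t),
# with B = 1/(1 s), D = 1/(20 s), and known sigma per point.
from math import exp, sqrt

B, D, SIGMA = 1.0, 1.0 / 20.0, 0.01

def param_sigmas(ts):
    """Approximate (sigma_B, sigma_D) for measurements at times ts."""
    # Fisher matrix F_ij = sum_l (dy/dp_i)(dy/dp_j) / sigma^2,
    # evaluated at the true parameters; covariance ~ F^-1.
    f_bb = f_bd = f_dd = 0.0
    for t in ts:
        g_b = -t * exp(-B * t)  # dy/dB
        g_d = -t * exp(-D * t)  # dy/dD
        f_bb += g_b * g_b / SIGMA**2
        f_bd += g_b * g_d / SIGMA**2
        f_dd += g_d * g_d / SIGMA**2
    det = f_bb * f_dd - f_bd**2
    return sqrt(f_dd / det), sqrt(f_bb / det)  # sqrt of diag(F^-1)

clustered = [0.1 * k for k in range(1, 21)]    # 20 points, all in 0-2 s
spread = ([0.5 * k for k in range(1, 11)] +
          [10.0 * k for k in range(1, 11)])    # 0-5 s plus 10-100 s

print(param_sigmas(clustered))
print(param_sigmas(spread))  # sigma_D is much smaller with the spread design
```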
Again, maybe someone can justify the above reasoning more rigorously, but qualitatively I think it is correct. And, as AdamRedwine wrote, all of this holds only when you already know your uncertainties.