I can think of a method that follows from a numerical Hilbert transformer I came up with for estimating the phase of Raman gains, a problem very like many handled in spectroscopy. I've never tried it quite in the way you want, so I can't give any guarantees. First, for background, see https://mathoverflow.net/a/69924/14510
In summary, this method maps a complex half plane onto the unit disk and so gets around the numerical headache of handling Cauchy principal values.
Your situation is slightly different. You want to find a function whose Laplace transform is holomorphic in the open right half plane: inverse Laplace transforms yield causal functions by definition, and holomorphicity in the right half plane means they are stable (i.e. bounded for all time: they either decay or boundedly oscillate as $t\rightarrow\infty$).
So, in your situation, what my method would do is map $\mathbb{P} = \left\{\left. z \in \mathbb{C} \right|\; {\rm Re}(z) \geq 0\right\}$ to the closed unit disk $\mathbb{D} = \left\{\left. z \in \mathbb{C} \right|\; |z| \leq 1\right\}$, and the mapping you need is:
$M : \mathbb{P} \rightarrow \mathbb{D};\; M(s) =\frac{\omega_0 - s}{\omega_0 + s}$
What my method relies upon is that all functions holomorphic on the closed unit disk have a convergent Taylor series there, i.e. $f(z) = \sum_{n=0}^\infty f_n \,z^n$; this is therefore the general form of the function you seek. Transforming the domain of this Taylor series with the inverse Möbius transformation, we find that the general form of your function has Laplace transform:
$F(s) = \sum\limits_{n=0}^\infty f_n \,\left(\frac{\omega_0 - s}{\omega_0 + s}\right)^n$
i.e. your frequency response is of the form
$F(i\,\omega) = \sum\limits_{n=0}^\infty f_n \,\left(\frac{\omega_0 - i \omega}{\omega_0 + i \omega}\right)^n$
This is now in a form that you can least-squares fit to a dataset with linear regression: truncate the series at a finite number of terms and then do a linear regression on the $f_n$.
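For concreteness, here is a minimal sketch of that regression (Python/NumPy; my own illustration rather than production code, and the synthetic resonance, $\omega_0 = 1$ and the truncation order $N = 30$ are assumptions you would swap for your own data and tuning):

```python
import numpy as np

def fit_moebius_series(omega, F_meas, omega0, N):
    """Least-squares fit of F(i w) = sum_n f_n ((w0 - i w)/(w0 + i w))^n."""
    z = (omega0 - 1j * omega) / (omega0 + 1j * omega)  # image of i*omega on the unit circle
    A = z[:, None] ** np.arange(N + 1)                 # design matrix: column n holds z^n
    f, *_ = np.linalg.lstsq(A, F_meas, rcond=None)     # complex linear least squares
    return f

# Toy usage: noise-free samples of a damped resonance F(s) = 1/(s^2 + 0.2 s + 1).
omega = np.linspace(-10.0, 10.0, 401)
F_meas = 1.0 / (1.0 - omega**2 + 0.2j * omega)
f = fit_moebius_series(omega, F_meas, omega0=1.0, N=30)
```

If your time-domain response is real, then $F(-i\,\omega) = \overline{F(i\,\omega)}$ and the fitted $f_n$ should come out (nearly) real; a large imaginary part is a handy sanity check that something is off.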
You will need to experiment with this heaps! The "black art" of this numerical brew is to choose the real parameter $\omega_0$ so that the "interesting bits" of the function you want to transform land on a "reasonable" portion of the unit circle. Suppose that your function can be thought of as having compact support: some finite frequency interval centred on nought, within which it does its interesting stuff and outside of which it is roughly nought. Then it is no good choosing $\omega_0$ so large that the image of this compact support is a few degrees of the unit circle centred on $z = 1$; likewise you might run into hardship if you choose $\omega_0$ too small, so that the support's complement is mapped to some tiny part of the unit circle centred around $z = -1$: if you make this mistake the algorithm may not recognise a zero at infinity.
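To put a number on this: on the imaginary axis the Möbius map has unit modulus, and

$M(i\,\omega) = \frac{\omega_0 - i\,\omega}{\omega_0 + i\,\omega} = e^{-2\,i\arctan(\omega/\omega_0)}$

so the band $|\omega| \leq \Omega$ lands on an arc of angular half-width $2\arctan(\Omega/\omega_0)$ about $z = 1$. Choosing $\omega_0 \approx \Omega$ gives the band and its complement half the circle each, which is a sensible place to start tuning from.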
Let's look at what this response would be like in the time domain. Inverting the Laplace transform above, we get a function of the form:
$f(t) = e^{-\omega_0\,t} \left(a_0 + a_1 t + a_2 t^2 + \cdots + a_n t^n\right) U(t)$
where $U$ is the Heaviside unit step function. So this is simply polynomial fitting, with the exponential function making things "behave well" as $t\rightarrow\infty$. Polynomial fitting can become numerically ill-conditioned, especially if you have too many terms, so something else you might want to try is fitting a function of the form:
$f(t) = e^{-\omega_0\,t} \left(a_0 + a_1 T_1\left(\frac{t}{\tau_m}\right) + a_2 T_2\left(\frac{t}{\tau_m}\right) + \cdots + a_n T_n\left(\frac{t}{\tau_m}\right)\right) U(t)$
where now $T_n$ is the $n^{th}$ Tschebyschev polynomial and $\tau_m$ is some cutoff time (after which the time response is effectively nought). So in the frequency domain you would fit:
$F(i\,\omega) = \sum\limits_{n=0}^N a_n \,\left.\mathcal{L}\left(U(t)\,e^{-\omega_0\,t} T_n\left(\frac{t}{\tau_m}\right)\right)\right|_{s = i \omega}$
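The transformed basis can be computed in closed form rather than by quadrature: expand $T_n(x) = \sum_k c_k x^k$ and use $\mathcal{L}\left(t^k e^{-\omega_0 t}\,U(t)\right) = k!/(s+\omega_0)^{k+1}$. A sketch (Python/NumPy again, my own illustration; `omega0`, `tau_m` and `N` are parameters you must choose):

```python
import numpy as np
from numpy.polynomial import chebyshev
from math import factorial

def cheb_basis_transform(omega, omega0, tau_m, N):
    """B[k, n] = L{U(t) e^(-omega0 t) T_n(t/tau_m)} evaluated at s = i*omega[k]."""
    s = 1j * omega
    B = np.empty((len(omega), N + 1), dtype=complex)
    for n in range(N + 1):
        c = chebyshev.cheb2poly([0] * n + [1])   # power-series coefficients of T_n
        col = np.zeros_like(s)
        for k, ck in enumerate(c):
            # L{t^k e^(-omega0 t)} = k!/(s + omega0)^(k+1), with t scaled by tau_m
            col += ck * factorial(k) / (tau_m**k * (s + omega0) ** (k + 1))
        B[:, n] = col
    return B

# Fit the a_n exactly as before, e.g.:
# a, *_ = np.linalg.lstsq(cheb_basis_transform(omega, 1.0, 5.0, 10), F_meas, rcond=None)
```

Keep $N$ modest (the $k!$ factors grow quickly); the Tschebyschev basis tames the conditioning rather than abolishing the problem.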
More generally, any rational function of frequency $F(i\,\omega)$ such that $F(s)$ has no poles in the right half plane will do (it will define a causal, stable function). This knowledge might get you a good fit, especially if your function has obvious resonances. However, fitting the coefficients to your data is now a nonlinear regression, and moreover you will need to check that the fitted poles always stay in the left half plane, as sketched below. The linear regression on basis functions above doesn't have this problem: the poles of the fitted function do not shift.
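A sketch of the rational alternative, with the stability check made explicit (Python/SciPy, my own illustration; the model orders and the starting guess are assumptions you must tune):

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, omega, F_meas, nb, na):
    """Real and imaginary residuals of a rational model b(s)/a(s), monic denominator."""
    b = params[:nb + 1]
    a = np.concatenate(([1.0], params[nb + 1:]))
    s = 1j * omega
    r = np.polyval(b, s) / np.polyval(a, s) - F_meas
    return np.concatenate([r.real, r.imag])   # least_squares needs real residuals

# Toy data: the same damped resonance as above, F(s) = 1/(s^2 + 0.2 s + 1).
omega = np.linspace(-10.0, 10.0, 401)
F_meas = 1.0 / (1.0 - omega**2 + 0.2j * omega)

nb, na = 0, 2                                 # numerator/denominator orders (a guess)
fit = least_squares(residuals, np.ones(nb + 1 + na), args=(omega, F_meas, nb, na))
poles = np.roots(np.concatenate(([1.0], fit.x[nb + 1:])))
if not np.all(poles.real < 0):
    print("Unstable fit: a pole has strayed into the right half plane.")
```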
As you are an experimental physicist, i.e. in my own definition someone who lives and breathes the concept of signal to noise ratio, I'm making this comment more for other readers: sometimes you can be too clever processing data, and you need to tread with care. Methods like the above, which seemingly should work by dint of some simple theory, can worsen, not improve, the quality of your data, because they try to draw meaning from something that can be pure noise and really ought to be disregarded. This is especially true of any kind of deconvolution (mentioned in other answers), which boosts the noise power at frequencies where the system's response is weak (e.g. trying to offset a lowpass filter by adding gain at high frequencies).
As for my original method: as far as I know it is still used by a major commercial optical systems simulation package and hasn't had too many complaints.