In undergraduate differential equations it is usual to use the Laplace transform to reduce a differential-equation problem to an algebraic one. The Laplace transform of a function $f(t)$, for $t \geq 0$, is defined by $\int_{0}^{\infty} f(t) e^{-st}\, dt$. How can one avoid looking at this definition as "magical"? How can it somehow be discovered from more basic definitions?
9 Answers
What is also very interesting is that the Laplace transform is nothing but the continuous version of a power series - see this insightful video lecture from MIT:
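A minimal sketch of that analogy (my own notation, in the spirit of the lecture): a power series $\sum_{n=0}^{\infty} a(n)\,x^n$ with $0<x<1$ becomes, when the discrete index $n$ is replaced by a continuous variable $t\geq 0$, $$\int_0^\infty a(t)\,x^t\,dt=\int_0^\infty a(t)\,e^{t\ln x}\,dt=\int_0^\infty a(t)\,e^{-st}\,dt, \qquad s=-\ln x>0,$$ so the Laplace transform of $a(t)$ plays exactly the role that the generating function $\sum a_n x^n$ plays for a sequence $(a_n)$.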

This answer is not exactly an answer to the original question, but this is for the benefit of MO user vonjd who wanted to know more details about the similarities between solving differential equations through Laplace transforms and solving recurrence relations using generating functions.
Since I was going to write it anyway, I figured I might as well post it here for anyone interested.
I will do an example of each, and this should be enough to show the similarities. In each case, we have a linear equation with constant coefficients; this is where both methods really shine, although they both can handle some variable coefficients more or less gracefully. Ultimately, the biggest challenge is to apply the inverse transform: always possible in the linear case, not so easy otherwise.
Differential Case
Take the function $y(t)=2e^{3t}-5e^{2t}$. It is a solution of the IVP: \begin{equation} y''-5y'+6y=0; \qquad y(0)=-3,\ y'(0)=-4. \end{equation} If we apply the Laplace transform to the equation, letting $Y(s)$ denote the transform of $y(t)$, we get $$ s^2Y(s)-sy(0)-y'(0)-5[sY(s)-y(0)]+6Y(s)=0.$$ Substitute the values of $y(0)$ and $y'(0)$, and solve to obtain: $$ Y(s)=\frac{11-3 s}{s^2-5s+6};$$ and apply partial fractions to get: $$ Y(s)= \frac{2}{s-3}+\frac{-5}{s-2}.$$ This is where you exclaim: "Wait a second! I recognize this: it is well known that $$\mathcal{L}[e^{at}]= \frac{1}{s-a},$$ so by linearity I recognize the function I started from."
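If you want to check the computation symbolically, here is a minimal SymPy sketch (the variable names are mine; it only reproduces the algebra above and then inverts the transform):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
Y = sp.symbols('Y')

# Transformed equation with y(0) = -3 and y'(0) = -4 substituted
eq = sp.Eq(s**2*Y - s*(-3) - (-4) - 5*(s*Y - (-3)) + 6*Y, 0)
Ysol = sp.solve(eq, Y)[0]

print(sp.simplify(Ysol))                         # (11 - 3*s)/(s**2 - 5*s + 6)
print(sp.apart(Ysol, s))                         # 2/(s - 3) - 5/(s - 2)
print(sp.inverse_laplace_transform(Ysol, s, t))  # 2*exp(3*t) - 5*exp(2*t), possibly times Heaviside(t)
```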
Recurrence Case
Let $(a_n)$ be the sequence defined for all $n\geq 0$ by $a_n=2(3^n)-5(2^n)$. It is a solution of the IVP: \begin{equation} a_{n+2}-5a_{n+1}+6a_n=0 \qquad a_0=-3,\ a_1=-4. \end{equation} Define the generating function $A(x)$ to be: $$ A(x)=\sum_{n=0}^{\infty} a_n\; x^n. $$ Multiplying each line of the recurrence by $x^{n+2}$ gives: $$ a_{n+2}\; x^{n+2}-5a_{n+1}\; x^{n+2}+6a_n\; x^{n+2}=0 $$ You can sum those lines for all $n\geq 0$, do a small change of index in each sum, and factor out relevant powers of $x$ to get $$ \sum_{n=2}^{\infty} a_n\; x^n-5x \sum_{n=1}^{\infty} a_n\; x^n+6x^2 \sum_{n=0}^{\infty} a_n\; x^n=0.$$ Or in other terms: $$ A(x)-a_1x-a_0-5x[A(x)-a_0]+6x^2A(x)=0.$$ Substituting $a_0$ and $a_1$ and solving for $A(x)$ then gives, with partial fractions: $$ A(x)=\frac{11 x-3}{6 x^2-5 x+1}=\frac{2}{1-3 x}+\frac{-5}{1-2 x}$$
Look familiar? It should! If you substitute $x=1/s$, you will recover $sY(s)$ from the differential example.
For generating functions, the key fact we need here is the sum of the geometric series: $$ \sum_{n=0}^{\infty} (ax)^n=\frac{1}{1-ax} \qquad (|ax|<1).$$ Thus, by linearity again, we recognize the sequence we started from in the expression for $A(x)$.
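The same kind of symbolic check works on the generating-function side (again, just a verification of the algebra above; the partial-fraction output may be written with $(2x-1)$ and $(3x-1)$ instead, which is the same thing up to signs):

```python
import sympy as sp

x = sp.symbols('x')
A = (11*x - 3)/(6*x**2 - 5*x + 1)

print(sp.apart(A, x))          # equivalent to 2/(1 - 3*x) - 5/(1 - 2*x)
print(sp.series(A, x, 0, 5))   # -3 - 4*x - 2*x**2 + 14*x**3 + 82*x**4 + O(x**5)
print([2*3**n - 5*2**n for n in range(5)])   # [-3, -4, -2, 14, 82]
```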
Closing Remarks
In both theories, there is the notion of the characteristic polynomial of a linear equation with constant coefficients. This polynomial ends up being the denominator of the Laplace transform, and the reversed polynomial $x^dp(1/x)$ is the denominator of the generating function. In both cases, multiple roots are handled very well by the theories and explain very naturally the appearance of otherwise "magical" solutions of the type $te^{\lambda t}$ or $n\,r^n$.
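For the record, the double-root case alluded to here comes down to two parallel identities (standard facts, stated without the convergence caveats): $$\mathcal{L}[t\,e^{\lambda t}]=\frac{1}{(s-\lambda)^2}, \qquad \sum_{n=0}^{\infty}(n+1)\,r^n x^n=\frac{1}{(1-rx)^2},$$ so a repeated root of the characteristic polynomial gives a repeated factor in the denominator, whose partial-fraction term inverts to $t e^{\lambda t}$ in the differential case and to (a shift of) $n r^n$ in the recurrence case.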
The biggest mystery to me is the historical perspective: did one technique pre-date the other, and were the connections actively exploited, or did both techniques develop independently for a while before the similarities were noticed?

- note that for analytic $f$ the Laplace transform works just as well as treating the differential equation as a recurrence relation in the derivatives $f(0),f'(0),f''(0),\dots$, since: $$y''-5y'+6y=0 \implies y^{(3)}-5y''+6y'=0 \implies \dots \implies y^{(n)}-5y^{(n-1)}+6y^{(n-2)}=0$$ – obataku Jul 21 '15 at 07:37
- now use the method of generating functions with a power series in $-s$: $$\sum_{n=0}^\infty f^{(n)}(0)\, s^{-n}=\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}\int_0^\infty x^n e^{-st}\, dt=\int_0^\infty f(t) e^{-st}\, dt$$ – obataku Jul 21 '15 at 07:39
- oops, I meant in $s^{-1}$, and it actually gives: $$\sum_{n=0}^\infty f^{(n)}(0)\, s^{-n}=-\sum_{n=0}^\infty\frac{f^{(n)}(0)}{n!}\int_0^\infty x^n e^{-st}\, dt=-\int_0^\infty f(t)e^{-st}\, dt$$ – obataku Jul 21 '15 at 21:51
- @oldrinb Can you show some more steps for how you work out the power series? It seems like you are using the definition of the Taylor series to move from one step to the next, but I am missing a lot of the steps that should probably be obvious. For example, where does $x^n$ come from? – alanwj Jan 31 '19 at 06:45
- @alanwj: sorry, I meant $t^n$, and I'm just using a simple integral or Laplace transform identity: $$\int_0^\infty t^n e^{-st}\, dt = \frac{n!}{s^{n+1}}$$ – obataku Apr 08 '19 at 02:23
- note that the interchange of summation and integration etc. requires care to be rigorous, but if you permit my handwaving and define the Laplace transform here in the 'usual' way, the Laplace transform approach seems to be equivalent to using the method of generating functions for sufficiently well-behaved analytic functions – obataku Apr 08 '19 at 02:33
I'd always understood the motivation to be that the Fourier transform gives you a function that can be analytically continued, and that analytically continuing it gives you the Laplace transform. Many of the "magical" properties of the Laplace transform therefore follow from similar properties of the Fourier transform, but you get some extra ones as well because you have the theory of analytic functions to draw on.
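To make that concrete, under the usual assumptions (here $f$ vanishes for $t<0$ and grows at most exponentially, and I use the convention $\mathcal{F}[g](\omega)=\int_{-\infty}^{\infty}g(t)e^{-i\omega t}\,dt$): $$\mathcal{L}[f](s)=\int_0^\infty f(t)\,e^{-st}\,dt=\int_{-\infty}^{\infty} f(t)\,e^{-\sigma t}\,e^{-i\omega t}\,dt=\mathcal{F}\!\left[f(t)e^{-\sigma t}\right](\omega),\qquad s=\sigma+i\omega.$$ For each fixed $\sigma$ large enough, the Laplace transform is the Fourier transform of an exponentially damped copy of $f$, and letting $\sigma$ vary is what produces the holomorphic function of $s$.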

- ... and, similarly, you can translate between Fourier and Taylor series by seeing $\sum_{n=0}^\infty c_n e^{in\theta}$ as $\sum_{n=0}^\infty c_n z^n$ for $z=e^{i\theta}$. – Emilio Pisanty Mar 18 '13 at 13:16
You can think of $\int_{-\infty}^{+\infty} f(t)\, g(t;s)\, dt$ as a decomposition of $f(t)$ in terms of the basis functions $g(t;s)$.
There are several nice things about choosing $g(t;s) = e^{-st}$ as the set of basis functions to use, the prime one being that $\frac{\partial g(t;s)}{\partial t} = -s\,e^{-st} = -s\,g(t;s)$, which is a neat property if you're dealing with derivatives.
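Spelled out, the reason this is convenient for derivatives is an integration by parts (assuming $y(t)e^{-st}\to 0$ as $t\to\infty$): $$\int_0^\infty y'(t)\,e^{-st}\,dt=\Big[y(t)e^{-st}\Big]_0^\infty+s\int_0^\infty y(t)\,e^{-st}\,dt=sY(s)-y(0),$$ which is exactly the rule used in the differential-equation example above.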

- I think that your 'decomposition' point of view is valid only if the various $g(t)$'s are orthogonal. For example, if this were literally correct, then one would expect the Laplace transform of an exponential to be some sort of delta function at a point, not a rational function with a pole. Perhaps there is a more general, philosophical sense in which it is still valid to think in those terms, though? – LSpice Nov 26 '13 at 19:13
Even if we don't use the Laplace transform as often as the Fourier transform, it is definitely a more subtle tool. The reason is that the FT analyses only functions $f:{\mathbb R}\rightarrow X$ that decay at $\pm\infty$. At best, $f$ can be a tempered distribution, which means that $f$ and its derivatives grow slowly at infinity; but let us think about functions. The LT instead deals with functions $f:(0,+\infty)\rightarrow X$ (the domain may be $(-\infty,0)$ as well, but not their union) whose growth at infinity is moderate (at most exponential). Then the transformed function $\hat f$ is defined and holomorphic on a half-plane $H$. This gives us the possibility of employing the tools of complex analysis. Also, $\hat f$ contains a lot of redundancy. Let me give an example (which applies to Dirichlet series as well): it may happen that $\hat f$ has a pole at some point $z_0$ on the boundary of $H$, by which I mean that $\hat f$ has a meromorphic extension to a slightly bigger half-plane than $H$. Then the residue calculus gives us information about the asymptotics of $f$ at $+\infty$. This applies to the Prime Number Theorem and to Dirichlet's theorem on arithmetic progressions.
The FT and LT are widely used in Partial Differential Equations. The Fourier transform is efficient for linear, constant-coefficient Cauchy problems. By this, I mean that the physical domain is ${\mathbb R}^d$ and there is a time variable $t$. Think of the Heat, Wave or Schroedinger equations. Then you apply Fourier in the space variables and obtain a linear ODE in $t$, $$\frac{d\hat u}{dt}=M(\xi)\,\hat u(t,\xi),$$ which you analyse easily.
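A standard illustration of this (my example, the heat equation on $\mathbb{R}^d$): Fourier transforming $u_t=\Delta u$ in the space variables gives $$\frac{d\hat u}{dt}(t,\xi)=-|\xi|^2\,\hat u(t,\xi)\quad\Longrightarrow\quad \hat u(t,\xi)=e^{-t|\xi|^2}\,\hat u(0,\xi),$$ an ODE in $t$ for each frequency $\xi$, solved explicitly and then inverted.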
Once the domain has a boundary, you need the Laplace transform, because you cannot reduce to such a simple object as an ODE. At best, you reduce your problem to a linear PDE in $(t,x_d)$, where $x_d$ is the coordinate normal to the boundary. This PDE is parametrized by the Fourier variable $\eta$ associated with the coordinates tangent to the boundary. In addition, you have a boundary condition at $x_d=0$ and initial data at $t=0$. Well-posedness for $t>0$ must then be attacked through a Laplace transform in time (one could do that in the case of the Cauchy problem as well, but there it is not so essential). The key words in the theory are incoming modes and the Lopatinskii condition.
For the interested readers, see my book in collaboration with S. Benzoni-Gavage, The Hyperbolic Initial-Boundary Value Problem, Oxford Univ. Press (2007).

There is in fact a very good paper from 2011 which addresses this question exactly:
The Laplace Transform: Motivating the Definition
by Howard Dwyer
Abstract:
Most undergraduate texts in ordinary differential equations (ODE) contain a chapter covering the Laplace transform which begins with the definition of the transform, followed by a sequence of theorems which establish the properties of the transform, followed by a number of examples. Many students accept the transform as a Gift From The Gods, but the better students will wonder how anyone could possibly have discovered/developed it. This article outlines a presentation, which offers a plausible (hopefully) progression of thoughts, which leads to integral transforms in general, and the Laplace transform in particular.

- For people arriving in the future, the article has been moved to https://faculty.saddleback.edu/site/fgonzalez/files/laplace-transform-motivation.pdf – Ruvi Lecamwasam Mar 01 '18 at 21:54
- @RuviLecamwasam, thanks from the future! How did you find it? The "Published" suggests that it's part of a journal, but going up one level (which itself takes some effort) just takes one to Frank Gonzalez's home page with various links to class and tutoring resources, and no indication (that I can see) that one can find this article somewhere subsidiary. – LSpice Apr 24 '18 at 16:44
Well, the Laplace transform (when you take the integral from $-\infty$ to $\infty$) is related to the Fourier transform via a factor of $-2i\pi$ in the argument. And the Fourier transform is understandable in the abstract framework of Pontryagin duality.
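Concretely (with the conventions I am assuming here): if $\hat f(\nu)=\int_{-\infty}^{\infty}f(t)\,e^{-2\pi i\nu t}\,dt$ and $F(s)=\int_{-\infty}^{\infty}f(t)\,e^{-st}\,dt$ is the two-sided Laplace transform, then $F(2\pi i\nu)=\hat f(\nu)$; the Fourier transform is the restriction of the bilateral Laplace transform to the imaginary axis, up to that rescaling of the argument.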
Basically, what physicists call the "time domain" and the "s-domain" of Fourier transforms are in fact a pair of locally compact abelian groups which are "dual" to each other (in a way similar to how a finite-dimensional vector space and its "dual" space are dual to each other). Check out wikipedia if you want to find out more about this:
https://en.wikipedia.org/wiki/Fourier_transform#Locally_compact_abelian_groups

This paper is a gem in showing the general idea behind the Laplace transform:
Discovering the Laplace Transform in Undergraduate Differential Equations
by Terrance J. Quinn and Sanjay Rai
The key hypothesis is that solutions to differential equations are combinations of exponential functions. The Laplace transform is a means of extracting the coefficients and exponents (and therefore the free parameters).
Highly recommended!
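A one-line illustration of that "extraction" point (standard partial-fraction reasoning, not quoted from the paper): if $y(t)=\sum_k c_k e^{\lambda_k t}$ with distinct $\lambda_k$, then $$Y(s)=\sum_k \frac{c_k}{s-\lambda_k},$$ so the exponents appear as the poles of $Y(s)$ and the coefficients as the residues at those poles; the transform turns "which exponentials, with which weights?" into a question about a rational function.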

- The link works when you have the access rights. The problem is that at the time of creation the journal was open - now it is behind a paywall. – vonjd Nov 24 '11 at 15:46
"Laplace transform is nothing else but the continous version of power series"
In fact, the Laplace transform's origins are in Heaviside's operational calculus, i.e., the theory of formal power series.

- http://ocw.mit.edu/courses/mathematics/18-03-differential-equations-spring-2010/video-lectures/lecture-19-introduction-to-the-laplace-transform/ I don't know how OCW handles their links, so maybe in the near future the link will become broken again. Anyway, very nice lecture and very nice motivation! Thank you very much for sharing. – Lucas Seco Nov 23 '11 at 03:32