
Given a function $f:\mathbb{R}^{+}\to\mathbb{C}$ satisfying suitable conditions (exponential decay at infinity, together with continuity and bounded variation, is good enough), its Mellin transform is defined by

$$M(f)(s)=\int_0^{\infty} f(y)\, y^{s}\, \frac{dy}{y},$$

and f(y) can be recovered by the Mellin inversion formula:

$$f(y)=\frac{1}{2\pi i}\int_{\sigma - i\infty}^{\sigma + i\infty} y^{-s}\, M(f)(s)\, ds.$$

This is a change of variable from the Fourier inversion formula, or the Laplace inversion formula, and can be proved in the same way. This is used all the time in analytic number theory (as well as many other subjects, I understand) -- for example, if $f(y)$ is the characteristic function of $[0,1]$ then its Mellin transform is $1/s$, and one recovers the fact (Perron's formula) that

$$\frac{1}{2\pi i}\int_{2-i\infty}^{2+i\infty} n^{-s}\,\frac{ds}{s}$$

is equal to $1$ if $0<n<1$, and is $0$ if $n>1$. (Note that there are technical issues which I am glossing over; one integrates over any vertical line with $\sigma>0$, and the integral is equal to $1/2$ if $n=1$.)
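To make Perron's formula concrete, here is a small numerical sketch (my own illustration, not part of the question): truncate the vertical line at height $T$ and integrate by a simple Riemann sum; the truncated integral approaches $1$, $0$, or $1/2$ in the three cases.

```python
import numpy as np

def perron(n, sigma=2.0, T=5000.0, dt=0.005):
    """Approximate (1/(2 pi i)) * integral of n^(-s)/s over Re(s) = sigma, |Im(s)| <= T."""
    t = np.arange(-T, T, dt)
    s = sigma + 1j * t
    integrand = n ** (-s) / s        # n^{-s}/s along the vertical line
    # ds = i dt, and the i cancels against the 1/(2 pi i) prefactor
    return (integrand.sum() * dt).real / (2 * np.pi)

print(perron(0.5))   # close to 1   (0 < n < 1)
print(perron(2.0))   # close to 0   (n > 1)
print(perron(1.0))   # close to 1/2 (n = 1)
```

The convergence is slow (the error decays like $1/T$), reflecting the fact that the characteristic function of $[0,1]$ is not smooth.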

I use these formulas frequently, but... I find myself having to look them up repeatedly, and I'd like to understand them more intuitively. Perron's formula can be proved using Cauchy's residue formula (shift the contour to $-\infty$ or $+\infty$ depending on whether $n>1$), but this argument doesn't prove the general Mellin inversion formula.

My question is:

What do the Mellin transform and the inversion formula mean? Morally, why are they true?

For example, why is the Mellin transform an integral over the positive reals, while the inverse transform is a contour integral in the complex plane?

I found some resources -- Wikipedia; this MO question is closely related, and the first video in particular is nice; and a proof is outlined in Iwaniec and Kowalski -- but I feel that there should be a more intuitive explanation than any I have come up with so far.

Frank Thorne
  • 4
    You could also take a look at section 5.1 of Montgomery and Vaughan's Multiplicative Number Theory. It has several specific examples of Mellin transform pairs, together with brief remarks on how the formulas are proved. I believe that the basic method really is the same as Perron's formula: if $y$ is small then you drag the contour integral to the right and get a contribution of 0, while if $y$ is large then you drag the contour to the left and pick up the residue of the integrand at $s=0$. – Greg Martin Nov 02 '11 at 23:15
  • 11
    Morally, why is the Fourier inversion formula true? Morally, why is the Laplace inversion formula true? – Gerry Myerson Nov 02 '11 at 23:44
  • 14
    Shouldn't this be something like Pontryagin duality for the multiplicative group of positive reals (maybe with some analytic continuation thrown in?). – Aaron Bergman Nov 03 '11 at 00:06
  • 3
    I agree with Greg here that Montgomery and Vaughan is a good pointer on why Mellin inversion works. But personally, the only justification I need is changing variables so that it becomes the same as Laplace inversion. This change of variables shows why one integral is over the positive reals while the other is a line integral in the complex plane. Of course, this all depends on how willing you are to believe why Laplace inversion is morally true, or Fourier inversion, for that matter. – Peter Humphries Nov 03 '11 at 00:50
  • 38
    I think of the Mellin transform as the equivalent gadget to the Fourier transform, but for the group of positive reals instead of the real line. The change of variables going from one transform to the other is either an exponential or log (depending on which way it's going) which corresponds to the usual isomorphism between the two groups. The function $t\mapsto t^s$ is a character on the positive reals which is analogous to the function $x\mapsto e^{2\pi i xy}$ which is a character on the real line. The Mellin transform is natural for studying multiplicative functions. – Matt Young Nov 03 '11 at 01:33

6 Answers

28

[Some next-day edits in response to comments] As counterpoint to other viewpoints, one can say that Mellin inversion is "simply" Fourier inversion in other coordinates. Depending on one's temperament, this "other coordinates" thing ranges from irrelevancy to substance... The question about moral imperatives for Fourier inversion is addressed a bit below.

[Added: the exponential map $x\mapsto e^{x}$ gives an isomorphism of the topological group of additive reals to multiplicative. Thus, the harmonic analysis on the two is necessarily "the same", even if the formulaic aspects look different. The occasional treatment of values (and derivatives) at $0$ for functions on the positive reals, as in "Laplace transforms", is a relative detail, which certainly has a corresponding discussion for Fourier transforms.]
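To spell out the change of coordinates just mentioned (using the normalization from the question), write $y=e^{u}$ and $s=\sigma+2\pi i\xi$; then

```latex
M(f)(\sigma + 2\pi i \xi)
  = \int_0^{\infty} f(y)\, y^{\sigma + 2\pi i \xi}\, \frac{dy}{y}
  = \int_{-\infty}^{\infty} \left[ f(e^{u})\, e^{\sigma u} \right] e^{2\pi i \xi u}\, du,
```

so along the line $\mathrm{Re}(s)=\sigma$ the Mellin transform of $f$ is the Fourier transform of $g_{\sigma}(u)=f(e^{u})\,e^{\sigma u}$, and Mellin inversion for $f$ is exactly Fourier inversion for $g_{\sigma}$.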

The specific riff in Perron's identity in analytic number theory amounts to (if one tolerates a change-of-coordinates) guessing/discerning an $L^1$ function on the line whose Fourier transform is (in what function space?!) the characteristic function of a half-line.

Since the char fcn of a half-line is not in $L^2$, and does not go to 0 at infinity, there are bound to be analytical issues... but these are technical, not conceptual.

[Added: the Fourier transform families $x^{\alpha-1}e^{-x}\chi_{x>0}$ and $(1+ix)^{-\alpha}$ (up to constants), where $\chi$ is the characteristic function, when translated to multiplicative coordinates, give one family approaching the desired "cut-off" effect of the Perron integral. There are other useful families, as well.]

To my taste, the delicacies/failures/technicalities of not-quite-easily-legal aspects of Fourier transforms are mostly crushed by simple ideas about Sobolev spaces and Schwartz' distributions... tho' these do not change the underlying realities. They only relieve us of some of the burden of misguided fussiness of some self-appointed guardians of a misunderstanding of the Cauchy-Weierstrass tradition.

[Added: surely such remarks will strike some readers as inappropriate poesy... but it is easy to be more blunt, if desired. Namely, in various common contexts there is a pointless, disproportionate emphasis on "rigor". Often, elementary analysis is the whipping-boy for this impulse, but also one can see elementary number theory made senselessly difficult in a similar fashion. Supposedly, the audience is being made aware of a "need/imperative" for care about delicate details. However, in practice, one can find oneself in the role of the Dilbertian "Mordac the Preventer (of information services)" [see wiki] proving things like the intermediate value theorem to calculus students: it is obviously true, first, or else one's meaning of "continuous" or "real numbers" needs adjustment; nevertheless, the traditional story is that this intuition must be delegitimized, and then a highly stylized substitute put in its place. What was the gain? Yes, something foundational, but time has passed, and we have only barely recovered, at some expense, what was obviously true at the outset.

On another hand, Bochner's irritation with "distributions theory" was that it was already clear to him that things worked like this, and he could already answer all the questions about generalized functions... so why be impressed with Schwartz' "mechanizing" it? For me, the answer is that Schwartz arranged a situation so that "any idiot" could use generalized functions, whereas previously it was an "art". Yes, sorta took the fun out of it... but maybe practical needs over-rule preservation of secret-society clubbiness?]

Why should there be Fourier inversion? (for example...) Well, we can say we want such a thing, because it diagonalizes the operator $d/dx$ on the line (and more complicated things can be said in more complicated situations).

Among other things, this renders "engineering math" possible... That is, one can understand and justify the almost-too-good-to-be-true ideas that seem "necessary" in applied situations... where I can't help but add "like modern number theory". :)

[Added: being somewhat an auto-didact, I was not aware until relatively late that "proof" was absolutely sacrosanct. To the point of fetishism? In fact, we do seem to collectively value insightful conjecture and not-quite-justifiable heuristics, and interesting unresolved ideas offer more chances for engagement than do settled, ironclad, finished discussions. For that matter, the moments that one intuits "the truth", and then begins looking for reasons, are arguably more memorable, more fun, than the moments at which one has dotted i's and crossed t's in the proof of a not-particularly-interesting lemma whose truth was fairly obvious all along. More ominous is the point that sometimes we can see that something is true and works despite being unable to "justify" it. Heaviside's work is an instance. Transatlantic telegraph worked fine despite...]

In other words: spectral decomposition and synthesis. Who couldn't love it?!

[Added: and what recourse do we have other than to hope that reasonable operators are diagonalizable, etc? Serre and Grothendieck (and Weil) knew for years that the Lefschetz fixed-point theorem should have an incarnation that would express zeta functions of varieties in terms of cohomology, before being able to make sense of this. Ngô (Loeser, Cluckers, et alteri)'s proof of the fundamental lemma in the number field case via model-theoretic transfer from the function field case is not something I'd want to have to "justify" to negativists!]

paul garrett
  • 17
    It is probably just me but... your answer strikes me a slightly too poetical. – Mariano Suárez-Álvarez Nov 04 '11 at 01:04
  • 4
    @Mariano S-A... Hahaha! Your "criticism" does gratify me, of course, ... tho' with a small worry in the back of my mind. :)

    As you/one may imagine, I was sincere. At the same time, yes, the school-math mind-set does prohibit having opinions/tastes.

    My real worry is that beginners accidentally alienate potential employers by being "too honest". That is, it is often the case that conformity is the implicitly-valued trait, despite other things being the advertised desiderata.

    I am glad (!?) I did not understand the state of things when I was younger... it would have been ... impossible. (Thx)

    – paul garrett Nov 04 '11 at 01:23
  • 8
    I can't even tell what most of your answer and essentially all of your comment have to do with the question, really. «They only relieve us of some of the burden of misguided fussiness of some self-appointed guardians of a misunderstanding of the Cauchy-Weierstrass tradition.» is a circumlocution for a demeaning comment at who/what, exactly? I have absolutely no problem with opinions or tastes —despite my school-math mind-set and my characteristic conformity, which you very perspicaciously managed to put to the fore, even I have been known to have a couple of both— but... – Mariano Suárez-Álvarez Nov 04 '11 at 01:45
  • 1
    ...I do have problems with obscurity. – Mariano Suárez-Álvarez Nov 04 '11 at 01:45
  • Is this called "analysts' pedantry strikes me as misguided", by any chance? – Yemon Choi Nov 04 '11 at 02:37
  • 1
    @paul garrett: I would be interested to read a write-up of your views on putting the "Fourier" back in "Fourier analysis"; but I don't think the cramped comment boxes of MathOverflow are an optimal place :) Certainly these boxes aren't a good place for discussion. – Yemon Choi Nov 04 '11 at 03:32
  • 1
    But on the other hand, I found the flourish enjoyable to read; indeed, the imprecise and imperfect, but iconoclastic is as worthy of attention as its counterparts. best regards-- – Suvrit Nov 13 '11 at 20:58
  • 2
    Yeesh. Paul, I agree with you that mathematics would be better off if the poetic and intuitive aspects of our work were not excised from our writings. But the claim that an emphasis on rigor is, if I may paraphrase you, the enemy of understanding, is I think wrong on its face. Rigor is really the act of undressing our intuitions; perhaps without it, one can get a feeling for a subject, but it is the detailed analysis of those feelings that transmutes them into true understanding. To give a topical example, can one imagine a serious investigation of Pontrjagin duality without rigor? And I... – Daniel Litt Nov 13 '11 at 23:18
  • 1
    (cont.) think there's no question that Pontrjagin duality truly clears up a lot of what is mysterious about Fourier analysis. – Daniel Litt Nov 13 '11 at 23:18
  • 1
    Paul: What do you mean about elementary number theory being made senselessly difficult? – KConrad Nov 14 '11 at 02:31
  • 12
    @daniel litt: I am not opposed to rigor itself, but to (what seems to me) a disproportionate interest in prohibition, rather than facilitation. A caricature of this is a scenario in which one proves that there is no Dirac delta "function", rather than the more useful building-up of a situation in which it is perfectly legitimate. Related to this (example and notion) is/are provocative examples (e.g., the first volume of Gelfand-et-alia on Generalized Functions, which I accidentally encountered before having any general understanding of distributions) compelling work to legitimize it. – paul garrett Nov 14 '11 at 14:25
  • 3
    @KConrad: About elementary number theory... of course, it can be a very nice entry point into mathematics other than calculus. But I have witnessed many instances in which deliberate suppression of the notions of group and ring made a mess. Discussing "congruences" in a "null context" is gruesome, I think. Also distressing to me are "elementary" arguments for quadratic reciprocity. In general, making an argument (much) more complicated while making it seemingly "elementary" seems to me to dis-serve everyone. (I'll save my rant about contrived "exercises" for another time...) – paul garrett Nov 14 '11 at 14:33
23

Thanks to everyone who answered! A (CW'ed) summary of some of what I learned:

In the first place, I now cheerfully second Greg Martin's recommendation of Chapter 5.1 of Montgomery and Vaughan. It is a rather "lowbrow", very readable treatment (though it doesn't prove Mellin inversion in complete generality).

Also, as Matt Young pointed out, for any complex $s$, the function $t\mapsto t^s$ is a character on $\mathbb{R}^{\times}_{+}$. This is a triviality, but the importance of this fact escaped me the first time. The invariant measure on $\mathbb{R}^{\times}_{+}$ is $\frac{dx}{x}$, and so the Fourier transform of a function $f$ defined on this group is exactly

$$\int_{x\in\mathbb{R}^{\times}_{+}} f(x)\, x^{s}\, \frac{dx}{x},$$

the Mellin transform. Once this is written down, the rest follows mechanically (from change of variables and Fourier inversion).
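As an illustration of this mechanical equivalence (my example: $f(x)=e^{-x}$, whose Mellin transform is $\Gamma(s)$), one can check numerically that the integral over the multiplicative group agrees with the substituted integral over the additive group:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def mellin_direct(s):
    # integral over the multiplicative group: int_0^inf e^{-x} x^s dx/x
    val, _ = quad(lambda x: np.exp(-x) * x ** (s - 1), 0, np.inf)
    return val

def mellin_via_log(s):
    # substitute x = e^u: int e^{-e^u} e^{s u} du over the additive group
    # (finite limits chosen so the neglected tails are negligible)
    val, _ = quad(lambda u: np.exp(-np.exp(u)) * np.exp(s * u), -60, 10)
    return val

for s in (0.5, 1.0, 2.5):
    print(s, mellin_direct(s), mellin_via_log(s), gamma(s))  # all three columns agree
```

Real values of $s$ are used to keep the quadrature non-oscillatory; on a vertical line $s=\sigma+it$ the same integrals compute $\Gamma(s)$ as well.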

Thanks to all!

Frank Thorne
  • Frank, why is this an answer and not a comment or edit to your own question? – KConrad Nov 14 '11 at 02:31
  • 6
    Is this bad etiquette or something? – Frank Thorne Nov 14 '11 at 03:58
  • Usually, it is considered helpful and nice to update the original question with what you learned from the various answers. Doing so helps others get a digested summary of the highlights of the various answers. – Suvrit Nov 15 '11 at 17:52
17

As others have pointed out, the Mellin inversion theorem is just the Fourier inversion theorem in disguise for the particular group $\mathbb{R}^{+}$ with invariant measure $\frac{dx}{x}$. The goal of the Fourier transform is to express a general function as a linear combination (i.e. integral) of the characters of the group, so that in this basis the operations of translations and all commuting operations will be diagonalized. For $\mathbb{R}^{+}$, these characters look like $x\mapsto x^{-s}$ (the minus sign because of the normalization you chose in the question), and they are unitary (take values in the circle) for imaginary $s$ -- the operation of multiplying characters is just addition in the $s$ variable, so in the inversion formula you have the measure $ds$. There's also this funny thing about how there are $s$ with positive real part -- this is because in the "physical space" $\mathbb{R}^{+}$ you're always talking about distributions which are compactly supported away from $0$ when you use this transform. Let's ignore that.

Since Mellin inversion is a disguised Fourier inversion, the real question is: why is the Fourier inversion formula on $\mathbb{R}$ true? To me the most convincing answer is the following: we can decompose a general function $f(x)=\int f(y)\,\delta(x-y)\,dy$ (this is the definition of $\delta$, but you have to take approximate delta-functions to make this rigorously work like a decomposition), so if we want to express a general function as a combination of the characters $x\mapsto e^{2\pi i \xi x}$, it suffices to consider the $\delta$ function

$$\delta(x)=\int u(\xi)\, e^{2\pi i \xi x}\, d\xi.$$

One interpretation of this formal idea is that the distributions $\delta(x-y)$ are just like your usual standard basis functions.

Now, observe that because $\delta(x)$ is invariant under multiplication by $e^{2\pi i \eta x}$ for any $\eta$, the distribution $u(\xi)$ is translation invariant, and therefore must be constant. After you find the constant, plugging $\delta(x)=C\int e^{2\pi i \xi x}\, d\xi$ into $f(x)=\int f(y)\,\delta(x-y)\,dy$ gives the Fourier inversion formula. Complete, rigorous proofs all follow more or less these lines, but there are many flavors of how you like to phrase it. Of course, we can write the whole argument with multiplicative characters as well.

Edit: The above argument assumes uniqueness of the representation, but one can also remark that if there is even a single function $f(x)$ for which $\int f(x)\,dx \neq 0$ and which can be realized as a linear combination $\int \hat{f}(\xi)\, e^{2\pi i \xi x}\, d\xi$, then by rescaling, renormalizing and taking a limit, we obtain $\delta(x) = C\lim_{\lambda\to\infty}\lambda f(\lambda x)$, leading formally to the formula $\delta(x) = C \int e^{2 \pi i \xi x}\, d\xi$. One common rigorous execution of this philosophy is performed by taking $f$ to be a Gaussian.
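A numerical sketch of the Gaussian regularization just mentioned (the normalization is my own choice): damping the divergent integral by $e^{-\pi\epsilon\xi^{2}}$ gives $\delta_{\epsilon}(x)=\int e^{-\pi\epsilon\xi^{2}}e^{2\pi i\xi x}\,d\xi=\epsilon^{-1/2}e^{-\pi x^{2}/\epsilon}$, an approximate identity whose peak height grows like $\epsilon^{-1/2}$ as $\epsilon\to 0$ while its total mass stays $1$.

```python
import numpy as np
from scipy.integrate import quad

def delta_eps(x, eps):
    # Gaussian-damped version of the divergent integral int e^{2 pi i xi x} d xi;
    # only the cosine part survives, since the sine part cancels by symmetry
    val, _ = quad(lambda xi: np.exp(-np.pi * eps * xi ** 2)
                             * np.cos(2 * np.pi * xi * x), -np.inf, np.inf)
    return val

for eps in (1.0, 0.1, 0.01):
    print(eps, delta_eps(0.0, eps), eps ** -0.5)  # peak height matches eps^{-1/2}
```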

Phil Isett
12

Two equations that encapsulate the properties of the Fourier and Mellin transforms:

\int^{\infty}_{-\infty}{\exp(2 \pi ifx)\exp(-2 \pi ify)df} = \delta(x-y)

\frac{1}{2\pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} x^{-s} y^{s} ds= \delta(\ln(x)-\ln(y))= y \delta(x-y).

The transformations from one equation to the other are obvious. The delta function results are intuitive and an extrapolation of the discrete case for the orthogonality relationships of the characters of character groups. The transform pairs, Plancherel and convolution theorems, and other relations are easy to derive from these two.

(Note that whereas e^{sz} is an eigenfunction for d/dz and so the Laplace/Fourier transforms are appropriate for devising an operator calculus for f(d/dz), z^s is an eigenfunction of zd/dz and so the Mellin transform is more appropriate for f(zd/dz).)

Ramanujan's Master Formula/Theorem (see Wikipedia, particularly the Hardy reference) gives a somewhat intuitive perspective on the Mellin transform as providing an "interpolation" of the coefficients of the Taylor series of certain classes of functions, as discussed in the intro of "Ramanujan's Master Theorem ..." by Olafsson and Pasquale. E.g.,

\int^{\infty}_{0}f(x)\frac{x^{s-1}}{(s-1)!} dx = g(-s) and

\frac{1}{2\pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} \frac{\pi}{\sin(\pi s)} g(-s) \frac{x^{-s}}{(-s)!} ds = \sum_{n=0}^{\infty} g(n) \frac{(-x)^{n}}{n!} = f(x)

for the transform pairs

f(x)=\exp(-x) and g(-s)= 1 (\sigma>0) and

f(x)=\frac{1}{1+x} and g(-s)= (-s)! (0<\sigma<1 and |x|<1)

f(x)=\exp(-x^2) and g(-s)= \cos(\pi\frac{ s}{2})\frac{(-s)!}{(-\frac{s}{2})!} = \frac{1}{2}\frac{(\frac{s}{2}-1)!}{(s-1)!} (\sigma>0).

From a similar perspective, the iconic Euler (Mellin) integral for the gamma function for Real(s) > 0

\displaystyle \int_0^\infty \frac{t^{s-1}}{(s-1)!} \; e^{-t\;p} \; dt = p^{-s}

provides the scaffolding for understanding and utilizing the interplay among the Mellin transform, its inverse, operator calculus, and interpolation.
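Euler's integral above is easy to test numerically (a sketch, restricted to real $s$ and $p$ to keep the quadrature simple):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def euler_integral(s, p):
    # int_0^inf  t^{s-1}/Gamma(s) * e^{-t p}  dt, which should equal p^{-s}
    val, _ = quad(lambda t: t ** (s - 1) / gamma(s) * np.exp(-t * p), 0, np.inf)
    return val

for s, p in ((2.5, 3.0), (0.5, 0.25)):
    print(euler_integral(s, p), p ** -s)  # the two columns agree
```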

A natural interpolation of the derivative as the fractional integroderivative of fractional calculus is obtained by using the Mellin transform to interpolate the op coefficients of the op e.g.f. \displaystyle e^{tD_x} \;, i.e., the shift op, for the integer powers of the derivative:

\displaystyle \int_0^\infty \frac{t^{s-1}}{(s-1)!} \; e^{-tD_x} \; dt \; H(x) g(x) = D_x^{-s} H(x) g(x) = \int_0^\infty \frac{t^{s-1}}{(s-1)!} \; e^{-tD_x} \; H(x) g(x)\; dt

\displaystyle = \int_0^\infty \frac{t^{s-1}}{(s-1)!} \; H(x-t) \; g(x-t) dt \; .

Then specifically acting on the power function for \displaystyle \alpha > -1

\displaystyle \int_0^\infty \frac{t^{s-1}}{(s-1)!} \; H(x-t) \; (x-t)^\alpha dt = \int_0^x \frac{t^{s-1}}{(s-1)!} \; (x-t)^\alpha \; dt

\displaystyle = \int_0^x \frac{t^{s-1}}{(s-1)!} \; \sum_{k \ge 0} (-1)^k \; x^{\alpha-k} \frac{\alpha!}{(\alpha-k)!} \; \frac{t^k}{k!} \; dt = \frac{1}{(s-1)!} \sum_{k \ge 0} (-1)^k \; x^{\alpha-k} \binom{\alpha}{k} \; \frac{t^{s+k}}{s+k} \; |_{t=0}^{x}

\displaystyle = x^{\alpha + s} \; (-s)! \; \sum_{k \ge 0} \; \binom{\alpha}{k} \; \frac{\sin(\pi (s+k))}{\pi (s+k)} = x^{\alpha +s} \frac{\alpha!}{(\alpha+s)!} \; = D_x^{-s} x^\alpha \; .

The last summation converges with no restriction on s. So, we see that the Mellin transform does indeed interpolate the coefficients of the e.g.f. generated by the binomial theorem expansion \displaystyle x^{\alpha-k} \frac{\alpha!}{(\alpha-k)!} to \displaystyle x^{\alpha+s} \frac{\alpha!}{(\alpha+s)!} to give an interpolation of the coefficients of the shift op D_x^k to D_x^{-s} consistent with fractional calculus.
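The end result D_x^{-s} x^\alpha = x^{\alpha+s}\,\alpha!/(\alpha+s)! can be verified numerically from the integral above (a sketch, with sample values of s, \alpha, x chosen by me):

```python
from scipy.integrate import quad
from scipy.special import gamma

def frac_integral(s, alpha, x):
    # Riemann-Liouville form:  int_0^x  t^{s-1}/Gamma(s) * (x-t)^alpha  dt
    val, _ = quad(lambda t: t ** (s - 1) / gamma(s) * (x - t) ** alpha, 0, x)
    return val

s, alpha, x = 0.5, 1.5, 2.0
expected = x ** (alpha + s) * gamma(alpha + 1) / gamma(alpha + s + 1)
print(frac_integral(s, alpha, x), expected)  # the two values agree
```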

The same method can be used to interpolate

\displaystyle (x \; D_x \;x)^n = x^n D_x^n x^n = x^n \; n!\; L_n(-:xD_x:) ,

where L denotes the Laguerre polynomials and (:xD_x:)^k = x^kD_x^k by definition, leading to

\int_0^\infty \frac{t^{s-1}}{(s-1)!} \; e^{-txD_xx} \; dt \; H(x) x^\alpha = (xD_xx)^{-s}\; x^\alpha = \int_0^\infty \frac{t^{s-1}}{(s-1)!} \; \frac{x^\alpha}{(1+xt)^{\alpha+1}} \; dt = x^{\alpha-s} \frac{(\alpha-s)!}{\alpha!} = x^{-s} D_x^{-s} x^{-s} \; x^\alpha

for 0 < Real(s) < \alpha +1 \; .

Or, one can give the analytic continuation for a Mellin transform related to a class of differential operators encompassing the Witt Lie algebra:

(x^{1+y}D_x)^{-s} \; x^\alpha = \int_0^\infty \frac{t^{s-1}}{(s-1)!} \; H[\frac{x}{(1+y\;t\;x^y)^{1/y}}] \frac{x^\alpha}{(1+y\;t\;x^y)^{\alpha/y}} \; dt

= H(y) \; x^{\alpha-sy} y^{-s} \frac{(-s+\alpha/y-1)!}{(\alpha/y-1)!} \;+ \; H(-y) \; x^{\alpha+s|y|} |y|^{-s} \frac{(\alpha/|y|)!}{(\alpha/|y|+s)!} \;.

A simple way to derive the formulas in your question is by looking at the inverse Mellin transform rep of the Dirac delta function. See my short note on the Inverse Mellin Transform and the Dirac Delta Function. See also some applications in Dirac's Delta Function and Riemann's Jump Function J(x) for the Primes and The Inverse Mellin Transform, Bell Polynomials, a Generalized Dobinski Relation, and the Confluent Hypergeometric Functions.

Edwards in Riemann's Zeta Function in Ch. 10 Fourier Analysis Sec. 10.1 Invariant Operators on R+ and Their Transforms gives a nice, more group-theoretic intro to the Mellin transform in line with other comments in this stream.

Tom Copeland
  • In a similar vein, look at f(x)=H(1-x)(1-x)^\alpha=H(1-x)\sum_{n=0}^{\infty }\frac{\alpha!}{(\alpha-n)!}\frac{(-x)^n}{n!} with H(x) the Heaviside step function, giving a fundamental result for fractional calculus, and for the iconic Ramanujan divergent series look at f(x)=\frac{1}{e^x-1}=\sum_{n=1}^{\infty }e^{-{nx}}\rightarrow \sum_{j=0}^{\infty }\left ( \sum_{n=1}^{\infty }n^j \right )\frac{(-x)^j}{j!}\rightarrow \sum_{j=0}^{\infty }\zeta (-j)\frac{(-x)^j}{j!} . – Tom Copeland Sep 12 '12 at 23:56
  • The Ramanujan divergent series relation to the Riemann zeta function can be generalized to more general Dirichlet series through f(x)= \sum_{n=0}^{\infty } \varphi (n)e^{-nx} \rightarrow \sum_{j=0}^{\infty}\left ( \sum_{n=1}^{\infty }\varphi (n)n^j \right )\frac{\left ( -x \right )^j}{j!}\rightarrow \sum_{j=0}^{\infty} D_{\varphi}\left ( -j \right ) \frac{\left ( -x \right )^j}{j!} – Tom Copeland Sep 27 '12 at 22:21
  • See also http://tcjpn.wordpress.com/2015/08/05/newton-gauss-interpolation-and-the-derivative-in-finite-differences/ – Tom Copeland Aug 07 '15 at 20:46
  • For y < 0, the Heaviside step function in the last integral is to be interpreted as setting the upper limit of the integral as the first zero from the origin of (1-|y|tx^{-|y|})^{\alpha/|y|}, i.e., the upper limit should be t = x^{|y|}/|y| . – Tom Copeland Sep 15 '15 at 01:13
  • Related: pg. 26 of "Multiple polylogarithms and mixed Tate motives" by Goncharov https://arxiv.org/abs/math/0103059. – Tom Copeland Jun 26 '17 at 16:12
  • Continuing applications: "Simplicity in AdS Perturbative Dynamics" by Ellis Ye Yuan https://arxiv.org/abs/1801.07283 p. 104 – Tom Copeland Jul 30 '18 at 20:58
  • For more operator interpolations, see "Mellin Interpolation of Differential Ops and Associated Infinigens and Appell Polynomials: The Ordered, Laguerre, and Scherk-Witt-Lie Diff Ops" at https://tcjpn.wordpress.com/2015/09/16/mellin-transform-interpolation-of-differential-operators/ – Tom Copeland Jul 30 '18 at 22:41
  • Recent application of this Mellin interpolation in "Zeta functions connecting multiple zeta values and poly-Bernoulli numbers" (Defs. 4.4 and 4.5, pg. 9) by Kanejko and Tsumura https://arxiv.org/abs/1811.07736 – Tom Copeland Nov 25 '19 at 16:27
  • See also the Mellin interpolation of the symmetric power sums of eigenvalues of a matrix in "On Functional Determinants of Laplacians in Polygons and Simplicial Complexes" by Erik Aurell and Per Salomonson. – Tom Copeland May 06 '20 at 02:27
  • Other examples of use of RMF at https://mathoverflow.net/questions/379428/ramanujans-master-formula-a-proof-and-relation-to-umbral-calculus?noredirect=1&lq=1 – Tom Copeland Feb 05 '21 at 21:23
  • A good overview of the mechanics of using the Mellin transform pair--encoding info on a function f(x) into singularities of its Mellin transform F(s)--while also covering the topics in this stream is "The Mellin Transform" by Jacqueline Bertrand, Pierre Bertrand, and Jean-Philippe Ovarlez (https://hal.archives-ouvertes.fr/hal-03152634/document). – Tom Copeland Oct 29 '22 at 23:38
  • I disagree with the brief remarks, oft repeated, on the history of the Mellin transform in the Bertrand et al. paper. Before Riemann, Euler used the Mellin transform (at least for real arguments) in integral reps of the Euler gamma and beta functions, and Mellin gave much credit to Pincherle for inspiring his development of the transform pair. – Tom Copeland Oct 29 '22 at 23:41
9

Have a look at Zagier's appendix: http://people.mpim-bonn.mpg.de/zagier/files/tex/MellinTransform/fulltext.pdf

It provides a nice description of the Mellin transform when f(x) is sufficiently smooth at x=0, and of rapid decay at infinity.

For example, assume f(x) = \sum_0^\infty a_n x^n, in some neighbourhood of the origin, and decays rapidly as x \to \infty, then its Mellin transform has meromorphic continuation to all of \mathbb{C} with simple poles of residue a_n at s=-n, n=0,1,2,3,\ldots. This is nicely explained in Zagier's appendix.

So, rate of decay issues aside, shifting the inverse Mellin transform to the left, i.e. letting \sigma \to -\infty, picks up the residues of the integrand at s=-n, i.e. a_n x^n, i.e. recovers the Taylor expansion about x=0 of f(x).
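For a concrete instance of this statement (my example): take f(x)=e^{-x}, so a_n=(-1)^n/n! and the Mellin transform is \Gamma(s); the residue of \Gamma(s) at s=-n is indeed (-1)^n/n!, which can be sanity-checked numerically by evaluating (s+n)\Gamma(s) near the pole:

```python
from math import factorial
from scipy.special import gamma

def gamma_residue(n, eps=1e-7):
    # estimate Res_{s=-n} Gamma(s) as (s + n) * Gamma(s) at s = -n + eps
    return eps * gamma(-n + eps)

for n in range(5):
    print(n, gamma_residue(n), (-1) ** n / factorial(n))  # columns agree
```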

Of course, it only applies to a limited class of functions f, but, in many practical examples, this reasoning gives one explanation of why the Mellin inversion formula is true, without resorting to Fourier inversion.

4

Another property: The (inverse) Mellin transform interchanges q-expansions of modular forms and Dirichlet L-series.