73

Defining $$\xi(s) := \pi^{-s/2}\ \Gamma\left(\frac{s}{2}\right)\ \zeta(s)$$ yields $\xi(s) = \xi(1 - s)$ (where $\zeta$ is the Riemann Zeta function).
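
A quick numerical sanity check of this symmetry, as a sketch assuming Python with the mpmath library (the helper `xi` below is just local shorthand, not a library routine):

```python
# Quick check that xi(s) = xi(1 - s) at an arbitrary complex point.
# Assumes the mpmath library; `xi` is a local helper, not a built-in.
from mpmath import mp, mpc, pi, gamma, zeta

mp.dps = 30  # working precision in decimal digits

def xi(s):
    """pi^(-s/2) * Gamma(s/2) * zeta(s), the completed zeta from the question."""
    return pi**(-s/2) * gamma(s/2) * zeta(s)

s = mpc('0.3', '4.7')   # any point away from the poles at s = 0 and s = 1
print(xi(s))
print(xi(1 - s))        # should agree with the previous value
```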

Is there any conceptual explanation - or intuition, even if it cannot be made into a proof - for this? Why of all functions does one have to put the Gamma-function there?

Whoever did this first probably had some reason to try out the Gamma-function. What was it?

(Best case scenario) Is there some uniform way of producing a factor out of a norm on the rationals which yields the other factors for the p-adic norms and the Gamma factor for the absolute value?

Peter Arndt
  • 12,033
  • 4
    Have you ever read Emil Artin's monograph about the gamma function? – Harry Gindi Dec 03 '09 at 14:20
  • 8
    They are both conceptually related to sums of powers. The $\zeta$ function itself is defined as a non-alternating sum of powers for $\Re(z)>1$, and as an alternating sum of powers (times a certain factor) for $\Re(z)\in(0,1)$. On the other hand, geometric shapes of the form $x^n+y^m=1$, called superellipses or Lamé curves, are also bounded sums of powers. But by integrating $y=\sqrt[m]{1-x^n}$ or $x=\sqrt[n]{1-y^m}$ on $(0,1)$ we get the multiplicative inverse of the binomial coefficient ${m+n\choose n}={m+n\choose m}$, which is obviously expressible in terms of the $\Gamma$ function. – Lucian Jun 01 '14 at 16:52
  • 4
    For any even function $f$ belonging to the Schwartz space, we have $\widetilde f (s) \zeta(s) = \widetilde{\hat f}(1-s) \zeta(1-s)$, where $\widetilde g$ is the Mellin transform of $g$. Taking $f(y) = e^{-\pi y^2}$ yields the result. – Watson Apr 02 '17 at 16:33
  • Note that this gamma formula does not complete the whole thing: in particular, you cannot derive the hallowed critical strip from it and the base sum-of-powers definition - which should make sense, as it suggests there is, in a sense, "too much complexity" and "I'm not gonna make it quiiite so eassie on youue" for the gamma function, alone, to capture. In that regard, perhaps, the gamma formula may not be as surprising as one may at first think. – The_Sympathizer Apr 18 '19 at 02:15

10 Answers

60

One way to get started is to look at the integral for the gamma function: $$\Gamma(s) = \int_0^\infty t^{s-1} e^{-t}\,dt$$ Substitute $t=nx$ in the integral to arrive at $$\frac{\Gamma(s)}{n^s} = \int_0^\infty e^{-nx}x^{s-1}\,dx$$ which we then sum over $n$ to get $$\Gamma(s)\zeta(s)=\int_0^\infty \frac{x^{s-1}}{e^x-1}\,dx$$ which already shows that there is some connection between the gamma and zeta functions, and it does in fact allow us to extend the definition of the zeta function into the critical strip.
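
For anyone who wants to see this identity numerically before worrying about analytic continuation, here is a minimal sketch assuming mpmath, valid in the region $\mathrm{Re}(s)>1$ where the integral converges:

```python
# Check Gamma(s) * zeta(s) = int_0^oo x^(s-1) / (e^x - 1) dx for Re(s) > 1.
# Assumes mpmath.
from mpmath import mp, mpc, quad, gamma, zeta, exp, inf

mp.dps = 25
s = mpc('2.5', '1.0')                                    # any s with Re(s) > 1
lhs = gamma(s) * zeta(s)
rhs = quad(lambda x: x**(s - 1) / (exp(x) - 1), [0, inf])
print(lhs)
print(rhs)                                               # should agree
```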

What comes next is far less obvious, but the idea is to introduce a branch cut for $x^{s-1}$ along the positive real axis, and to replace the above integral by one running from $+\infty$ along the bottom of the positive real axis, around the origin, and back to $+\infty$ along the top of the real axis. This introduces an extra factor $1-e^{2\pi i s}$. Now start expanding the circle around the origin, taking account of the poles of the integrand along the imaginary axis as we go, and end up with $$\zeta(s)=2^s\pi^{s-1}\sin\bigl(\tfrac12\pi s\bigr)\,\Gamma(1-s)\,\zeta(1-s).$$ From there, some cleanup still remains. As I said, this is not terribly intuitive, so it doesn't answer your question, but the first paragraph should at least give you a notion of how the gamma and zeta functions are interrelated.
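
A numerical spot-check of this asymmetric form of the functional equation, again a sketch assuming mpmath:

```python
# Spot-check zeta(s) = 2^s * pi^(s-1) * sin(pi s / 2) * Gamma(1 - s) * zeta(1 - s).
# Assumes mpmath.
from mpmath import mp, mpc, pi, sin, gamma, zeta

mp.dps = 25
s = mpc('0.3', '2.0')    # any point avoiding the poles of the factors
lhs = zeta(s)
rhs = 2**s * pi**(s - 1) * sin(pi * s / 2) * gamma(1 - s) * zeta(1 - s)
print(lhs)
print(rhs)               # should agree
```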

Harald Hanche-Olsen
  • 9,146
  • 3
  • 36
  • 49
  • 14
    So far, this answer has received four upvotes and two downvotes. I am curious about the reason for the downvotes: I thought they were intended for off topic or wrong answers, especially the sort of answers you want to discourage, and I can't see that this answer is either. Perhaps I should have left the second half out of it, since it does not contribute much to the why question, but to me, that doesn't seem sufficient reason for a downvote. – Harald Hanche-Olsen Dec 03 '09 at 15:24
  • 19
    I guess Riemann's opinions about the Riemann zeta function aren't good enough for some people? – Ryan Budney Dec 03 '09 at 22:18
  • 1
    @Ryan: Now don't let us overreact. I am puzzled, not angry or disappointed. – Harald Hanche-Olsen Dec 03 '09 at 23:01
  • 4
    I was one of the downvotes. I'll say, I don't think your answer is wrong or problematic, I just think there's a much better answer, which hasn't been written properly: "Q has a real prime." I'm not familiar enough with the subject to write a good answer like that, but I'm familiar enough with the subject to say I don't think an answer which leaves it out should be at the top. – Ben Webster Dec 03 '09 at 23:14
  • 5
    Ryan- Actually yes. I think Tate understands the Riemann zeta function a lot better than Riemann ever did, though of course, that involved a lot of standing on the shoulders of giants (specifically, Tate knew about class field theory, and Riemann never had a chance). – Ben Webster Dec 03 '09 at 23:18
  • 10
    @Ben Webster: though it is true that it is the real prime of Q that allows for the appearance of the gamma factor, the question was "Why the Gamma function?", not "Why is there another factor?". The real prime could be considered as a reason to have another factor, not a reason for that factor to be the gamma function. Harald's answer illustrates how the gamma function arises in the proof. – Rob Harron Dec 03 '09 at 23:40
  • 3
    @Ben: Thanks for the explanation. Now I only have to figure out what “Q has a real prime” means. (As is probably clear by now, I'm an analysis guy, not an algebraist.) – Harald Hanche-Olsen Dec 04 '09 at 00:17
  • 5
    @Harald: What Ben means about a real prime is that if you consider the set of equivalence classes of absolute values on Q there is one for each prime p, and the usual absolute value. This leads number theorists to consider the usual absolute value as an "infinite" prime. It is called real as it comes from the embedding of Q into R (whereas finite extensions of Q might embed into C, but not R, and hence have complex primes). The Riemann zeta function can be viewed as an Euler product of factors 1/(1-p^-s) and the gamma factor can be viewed as the factor coming from the infinite prime. – Rob Harron Dec 04 '09 at 01:07
  • 4
    @Harald: It means that you can embed the field Q of rational numbers into the p-adic field Q_p, for any prime number p, just as you can embed it into the field of real numbers R. The p-adic fields are just the other completions of Q that are there, in addition to R (completions with respect to other metrics). Given that, you can view the reals as another "prime number". This resembles adding the point at infinity to the affine line, to get the projective line. The "real prime" is that point at infinity. Formulas in number theory are supposed to include it on par with the ordinary primes. – Leonid Positselski Dec 04 '09 at 01:17
  • 3
    Again, thanks for the explanation. I should probably ask over at Leonid's question, but since this thread of comments is so long already – if I want to learn more, in order to understand Leonid's answer, but don't quite feel up to obtaining and reading John Tate's dissertation, is there a good place I can go? Bear in mind that I am not an algebraist, I just want to widen my horizon, not to understand every little detail in the argument. – Harald Hanche-Olsen Dec 04 '09 at 01:40
  • 1
    I guess Tate's dissertation has been published in "Algebraic Number Theory", edited by Cassels and Froelich. It must be well-written, but slightly complicated in that it deals with arbitrary number fields rather than just Q. For an introduction to p-adic numbers, I would suggest "P-adic numbers, p-adic analysis, and zeta-functions", by Neal Koblitz. – Leonid Positselski Dec 04 '09 at 01:48
  • 2
    Tate's thesis is rather well written. There's also the book of Ramakrishnan-Valenza "Fourier analysis on number fields" that develops the subject and goes over Tate's thesis. The title of the book is rather suggestive. Section 3.1 of Bump's book "Automorphic forms and representations" has an overview of Tate's thesis. – Rob Harron Dec 04 '09 at 02:00
  • 2
    Thanks for the answer! Indeed I came to my question from the other side, being aware that there should be a factor for the infinite prime but not understanding why it involves the Gamma-function. This gives me an idea how Riemann got there. Checkmark goes to Leonid though, bringing me the good news that we live in the best of possible worlds :-) – Peter Arndt Dec 04 '09 at 02:34
  • 1
    Hi Harald, sorry, that was meant to be more of a sympathetic light-hearted joke. But understatement doesn't translate through ASCII. – Ryan Budney Dec 04 '09 at 05:04
  • 2
    @Ryan: That's okay. I learned a lot from these comments anyway, and that's worth more than a few points of rep on MO. – Harald Hanche-Olsen Dec 04 '09 at 12:21
  • There is something wrong with this functional equation; the Gamma factor should be $\Gamma (s/2)$ and not $\Gamma (s)$. – Venkataramana Oct 25 '13 at 14:35
  • 2
    @Aakumadula I think it's correct. As I wrote, some more work is needed to get the functional equation on the form that is commonly seen. The functional equation $\sqrt\pi\Gamma(2z)=2^{2z-1}\Gamma(z)\Gamma(z+\frac12)$ helps in this regard. – Harald Hanche-Olsen Oct 25 '13 at 15:17
  • Related: http://math.stackexchange.com/questions/143449/riemanns-thinking-on-symmetrizing-the-zeta-functional-equation and http://mathoverflow.net/questions/58004/how-does-one-motivate-the-analytic-continuation-of-the-riemann-zeta-function – Tom Copeland Jan 05 '17 at 21:27
59

To the best of my understanding, the answer is yes, and this uniform way consists of some integration over the local field. This is explained in John Tate's dissertation. One starts with a certain smooth rapidly decreasing function, for which one takes the characteristic function of the p-adic integers in the nonarchimedean case and the function $e^{-|x|^2}$ for an archimedean field. This is multiplied by $|x|^s$ (approximately) and integrated against the Haar measure of the additive group of the field. This produces the $\Gamma$-factor for an archimedean field and $(1-p^{-s})^{-1}$ for a p-adic field.
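
To make the archimedean computation concrete: for $\mathbb R$ the integral is $\int_{\mathbb R^\times} e^{-\pi x^2}|x|^s\,dx/|x| = \pi^{-s/2}\Gamma(s/2)$, exactly the completing factor in the question. A minimal numerical sketch assuming mpmath (the $p$-adic integral producing $(1-p^{-s})^{-1}$ is not reproduced here):

```python
# Tate-style archimedean local integral:
#   int over R^x of e^(-pi x^2) |x|^s dx/|x|  =  pi^(-s/2) * Gamma(s/2),  Re(s) > 0.
# Assumes mpmath; by symmetry the integral over R^x is twice the one over (0, oo).
from mpmath import mp, mpc, quad, exp, pi, gamma, inf

mp.dps = 25
s = mpc('0.8', '3.0')
local_factor = 2 * quad(lambda x: exp(-pi * x**2) * x**(s - 1), [0, inf])
print(local_factor)
print(pi**(-s/2) * gamma(s/2))   # should agree
```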

  • 4
    That is the content of John Tate's PhD thesis. It is in Cassels & Froehlich - Algebraic Number Theory (Last chapter). In my opinion, this is still the best reference for this matter. – Marc Palm Feb 09 '11 at 10:45
  • 5
    The factor $\pi ^{-s/2}\Gamma (s/2)$ is the Mellin transform of $e^{-\pi x^2}$ which is its own Fourier transform. This yields a functional equation for the theta function whose Mellin transform is the Zeta function (together with the Gamma factor); therefore, this zeta function gets a functional equation. – Venkataramana Oct 25 '13 at 14:38
  • 7
    The expression $\pi^{-s/2} \Gamma(s/2)$ looks similar to the volume of the $s$-dimensional unit ball (say $s$ is an integer). Is this a coincidence? – mlbaker Jan 23 '15 at 07:02
  • 8
    @mlbaker: According to one of Quillen's manuscript in the Clay's math site, up to a factor of 2, the reciprocal of the expression $\pi^{-s/2}\Gamma(s/2)$ is the area of an $(n-1)$-dimensional unit sphere in ${\mathbb R}^n$. That is the archimedean factor of Riemann's completed zeta function. And, using the Euler product expansion of the zeta function, the local Euler factors $(1-p^{-s})^{-1}$ are the $p$-adic areas of the unit sphere in ${\mathbb Q}_p^n$ : these are the non-archimedean factors of the complete zeta function. Of course all of these are for positive integer values of $s$... – F Zaldivar Apr 18 '19 at 01:55
  • ... There are some extra comments by Quillen in his notes for non integer positive real values of $s$. All of these may be, some way or another, in Tate's thesis, but I don't have it at hand now. – F Zaldivar Apr 18 '19 at 01:55
  • 1
    For anyone else looking for Quillen's remarks, they are dated August 25, 1985 and are on pages 69, 70 of the corresponding notebook. These are PDF pages 13, 14 in the scan currently available here: http://www.claymath.org/library/Quillen/Working_papers/quillen%201985/1985-6.pdf – Oliver Nash Jun 08 '21 at 12:50
36

As has been explained above, the zeta function has a factor for each completion of $\mathbb{Q}$. The factor at $\mathbb{R}$ has to do with integrating $e^{- \pi x^2}$ and the factor at $\mathbb{Q}_p$ has to do with integrating the characteristic function of $\mathbb{Z}_p$.

Some people might wonder why these two functions were chosen. The answer is simple: they are both their own Fourier transforms.
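
A quick numerical check that $e^{-\pi x^2}$ really is its own Fourier transform, with the convention $\hat f(y)=\int_{\mathbb R} f(x)e^{-2\pi ixy}\,dx$; a sketch assuming mpmath:

```python
# Check numerically that f(x) = e^(-pi x^2) equals its own Fourier transform
# fhat(y) = int f(x) e^(-2 pi i x y) dx.  Assumes mpmath.
from mpmath import mp, mpf, quad, exp, pi, j, inf

mp.dps = 20

def f(x):
    return exp(-pi * x**2)

def fhat(y):
    return quad(lambda x: f(x) * exp(-2 * pi * j * x * y), [-inf, inf])

for y in [mpf('0'), mpf('0.7'), mpf('1.5')]:
    print(y, f(y), fhat(y))   # f(y) and fhat(y) agree (up to tiny imaginary noise)
```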

Also, I don't think anyone has recommended Terry Tao's expository post on this material yet. It is quite good.

David E Speyer
  • 150,821
  • 3
    Ah, Terry Tao's post is clearly the answer I was looking for. – Ben Webster Dec 04 '09 at 22:32
  • 11
    But there are a lot of functions which are their own Fourier transform, aren't there? Why this one in particular? What would happen with other functions with the same property? – Simon Henry Oct 26 '15 at 09:59
  • @SimonHenry : see KConrad's answer below (from 2019). Essentially this is because we know the zeros/poles of $\Gamma(s)$. – Watson Dec 14 '21 at 09:31
22

There may also be some interest in the point of the "local functional equation", namely, that in fact the Gamma function (with the power of $\pi$) is just one (optimized) possibility, and somehow making a suboptimal choice doesn't really matter:

For a Schwartz function $f$ on $\mathbb R$, let $\Gamma(f,s)=\int_{\mathbb R^\times} |x|^s\,f(x)\;{dx\over |x|}$. The usual Gamma factor is obtained by taking a Gaussian. The local functional equation (proven by changing variables in the defining integrals, in the range $0<{\rm Re}(s)<1$) is $$\Gamma(f,s)\cdot \Gamma(\hat{g},1-s)\;=\; \Gamma(\hat{f},1-s)\cdot \Gamma(g,s)$$ for any two Schwartz functions $f,g$. And Riemann's argument proves $$ \Gamma(f,s)\cdot \zeta(s) \;=\; \Gamma(\hat{f},1-s)\cdot \zeta(1-s) $$ for any Schwartz $f$.
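
The local functional equation is easy to spot-check numerically. Here is a sketch assuming mpmath, with $f$ the standard Gaussian (its own Fourier transform) and $g(x)=e^{-2\pi x^2}$, whose Fourier transform under $\hat g(y)=\int g(x)e^{-2\pi ixy}\,dx$ is $\hat g(y)=\tfrac1{\sqrt2}e^{-\pi y^2/2}$; since both are even, the integrals over $\mathbb R^\times$ below are computed as twice the integral over $(0,\infty)$:

```python
# Check the local functional equation
#   Gamma(f, s) * Gamma(ghat, 1-s) = Gamma(fhat, 1-s) * Gamma(g, s)
# for two even Schwartz functions whose Fourier transforms are known in closed form.
# Assumes mpmath.
from mpmath import mp, mpc, quad, exp, pi, sqrt, inf

mp.dps = 20

def Gam(func, s):
    # Gamma(func, s) = int over R^x of |x|^s func(x) dx/|x|, for even func
    return 2 * quad(lambda x: func(x) * x**(s - 1), [0, inf])

f    = lambda x: exp(-pi * x**2)                  # self-dual: fhat = f
fhat = f
g    = lambda x: exp(-2 * pi * x**2)
ghat = lambda y: exp(-pi * y**2 / 2) / sqrt(2)    # Fourier transform of g

s = mpc('0.4', '1.3')                             # any s with 0 < Re(s) < 1
print(Gam(f, s) * Gam(ghat, 1 - s))
print(Gam(fhat, 1 - s) * Gam(g, s))               # the two products should agree
```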

paul garrett
  • 22,571
  • 2
    So one can use any fixed point of the Fourier transform for $f$. Any idea what do we get for the appropriate Hermite functions? – Vít Tuček Sep 10 '14 at 16:41
  • 3
    If I'm thinking correctly, $\zeta(s)$ is formally $\Gamma(g,1-s)$ where $g$ is a sum of $\delta$ functions at the positive integers. Is there some sense in which $g = \hat{g}$ here? – David E Speyer Sep 10 '14 at 16:47
  • 3
    I guess that would be the Dirac comb. See http://mathoverflow.net/a/39187/6818 – Vít Tuček Sep 10 '14 at 16:53
  • 1
    For the record, the Hermite functions yield just a polynomial multiple of the standard Gamma factor. For example $H_{16}$ leads to $2027025 + 16 (-1 + s) s (274455 + 2 (-1 + s) s (27051 + 8 (-1 + s) s (127 + (-1 + s) s)))$ – Vít Tuček Sep 10 '14 at 22:46
  • @VítTuček, and, if one wants greater harmony with the normalization(s) of the Gaussian in common use for the various versions of Hermite polynomials, and using then the fact that $H_{n+1}(x){\rm Gaussian}(x)=(d/dx\pm c\cdot x)(H_n(x){\rm Gaussian}(x))$, so the polynomial factor can get normalized to $(s-1)(s-2)\cdots$ rather than a messier one due to a mismatch of normalizations... if one wants. – paul garrett Sep 11 '14 at 00:17
  • @paulgarrett: Right, but... Let's see what happens for $f(x) = xe^{-\pi x^2}$ which is an eigenfunction of $$\mathcal{F}(g)(s) = \int_\mathbb{R} g(x)e^{-2\pi \imath x s}\,dx$$ of eigenvalue $-\imath$. The Gamma factor is $$\Gamma(f,s) = \frac{1}{2\sqrt{\pi}} \pi^{-\frac{s}{2}} \Gamma\left(\frac{s+1}{2}\right).$$ – Vít Tuček Sep 11 '14 at 21:59
  • 1
    Starting from $\Gamma(f,s)\zeta(s) = \Gamma(\widehat{f},1-s)\zeta(1-s)$ and multiplying both sides by $\Gamma\left(\frac{s}{2}\right)\Gamma\left(\frac{1-s}{2}\right)$ I obtain, after cancellation of the classical functional equation for $\xi(s)$, the following equation $$ \Gamma\left(\frac{1-s}{2}\right)\Gamma\left(\frac{1+s}{2}\right) = -\imath \frac{\pi}{\sin(\pi s)}, $$ which can't be true since for real $s$ the left hand side is real whereas the right hand side is imaginary. Where did I make mistakes? – Vít Tuček Sep 11 '14 at 22:00
  • 1
    @VítTuček, ah, one little point is that the game is to integrate over the whole $\mathbb R^\times$, so using odd Schwartz function $xe^{-\pi x^2}$ simply produces $0$, avoiding the seeming paradox. (I'd wager that using the second Hermite polynomial produces no contradiction!) For quadratic imaginary fields, for example, the range of choices of equivariance under the circle action (thinking of Iwasawa-Tate set-up) gives many chances to cancel-and-be-zero, etc. – paul garrett Sep 11 '14 at 22:07
  • Bingo! The second Hermite function yields an innocent factor of $(1-2s)$. Thank you for clarification. – Vít Tuček Sep 11 '14 at 22:28
  • @VítTuček, :) ... – paul garrett Sep 11 '14 at 22:30
  • Yes, but it always simplifies to $$\zeta(s)=\frac{\Gamma(\hat{f},1-s)}{\Gamma(f,s)}\,\zeta(1-s)=2^s \pi^{s-1} \sin\left(\frac{\pi s}{2}\right) \Gamma(1-s)\,\zeta(1-s).$$ It also works with (at least some) distributions and non-Schwartz functions where perhaps the optimal choice is $f(x)=\delta(|x|-1)$ in which case $$\hat{f}(w)=\mathcal{F}_xf(x)=2 \cos (2 \pi w)$$ and $\Gamma(f,s)=2$. – Steven Clark Aug 26 '23 at 14:36
18

Multiple answers and comments have already pointed out that the conceptual role of $\pi^{-s/2}\Gamma(s/2)$ comes from the viewpoint of Iwasawa and Tate, which for $\text{Re}(s) > 1$ creates this function as $\int_{\mathbf R^\times} e^{-\pi x^2}|x|^s\,dx/|x|$, an integral over the multiplicative group $\mathbf R^\times$ of the function $e^{-\pi x^2}$ that is self-dual for the Fourier transform on the additive group $\mathbf R$ relative to the self-duality $\langle x,y\rangle = e^{2\pi ixy}$ or $\langle x,y\rangle = e^{-2\pi ixy}$ on $\mathbf R$. (If we use another self-duality of $\mathbf R$ then $e^{-ax^2}$ would be self-dual for some $a \not= \pi$ instead.)

It's also been said elsewhere on this page that there are many self-dual Schwartz functions on $\mathbf R$, or more specifically many even self-dual Schwartz functions on $\mathbf R$: for Schwartz $f$ on $\mathbf R$ and $\text{Re}(s) > 0$, we have $\int_{\mathbf R^\times} f(x)|x|^s\,dx/|x| = \int_{0}^\infty (f(x) + f(-x))x^s\,dx/x$ and this is $0$ when $f$ is odd, so we may as well assume $f$ is even since $f(x) + f(-x)$ is even anyway and we want to avoid the silly equation $0=0$ even if it is a valid equation.

For arbitrary Schwartz $f$ on $\mathbf R$, set $\Gamma_f(s) = \int_{0}^\infty f(x)x^s\,dx/x$, which is a mild modification of the function $\Gamma(f,s)$ in Paul Garrett's answer (his $\Gamma(f,s)$ is my $\Gamma_{f(x)+f(-x)}(s)$ by a formula I wrote in the previous paragraph). This function converges absolutely and is analytic for $\text{Re}(s) > 0$, and it extends meromorphically to $\mathbf C$ by repeated integration by parts (the same way the $\Gamma$-function can be extended to $\mathbf C$ from its integral definition for $\text{Re}(s) > 0$), and Tate's thesis shows there is a general functional equation $\Gamma_f(s)\zeta(s) = \Gamma_{\hat{f}}(1-s)\zeta(1-s)$ where $\hat{f}$ is the Fourier transform of $f$ (for the self-duality on $\mathbf R$ given by $\langle x,y\rangle = e^{-2\pi ixy}$), so if $f$ is self-dual then we get $$\Gamma_f(s)\zeta(s) = \Gamma_{f}(1-s)\zeta(1-s),$$ a very nice functional equation indeed, especially if we use even $f$ to avoid $0 = 0$.

All of what I wrote so far has appeared explicitly or implicitly in some of the other comments or answers. Since there are many self-dual even Schwartz functions $f$ on $\mathbf R$, what is it about the choice $f(x) = e^{-\pi x^2}$, leading to $\Gamma_f(s) = (1/2)\pi^{-s/2}\Gamma(s/2)$ (an extra $1/2$ on both sides of the functional equation can be cancelled) that is so nice? I have not seen the following property pointed out yet: with this choice of $f$ and familiarity with the $\Gamma$-function we know $\Gamma_f(s) \not= 0$ for $\text{Re}(s) > 1$ (in fact for $\text{Re}(s) > 0$), so therefore $\Gamma_f(s)\zeta(s) \not= 0$ for $\text{Re}(s) > 1$ from $\zeta(s)$ being nonvanishing there, and then by the functional equation $\Gamma_f(s)\zeta(s) \not= 0$ for $\text{Re}(s) < 0$, which means all zeros of $\Gamma_f(s)\zeta(s)$ have $0 \leq \text{Re}(s) \leq 1$. If you want to use a totally random even Schwartz function for $f$ in order to define a factor $\Gamma_f(s)$ that completes the Riemann zeta-function, you will get the nice-looking nontrivial functional equation displayed above but how are you going to use $\Gamma_f(s)\zeta(s)$ to analyze the location of zeros of $\zeta(s)$ (including discovering its trivial zeros, whether or not you consider those important) if you do not know where $\Gamma_f(s)$ has its zeros and poles?

So although there are many even Schwartz functions $f$ on $\mathbf R$ besides $e^{-\pi x^2}$ that you could use to get a nice functional equation by multiplying $\zeta(s)$ by $\Gamma_f(s)$, the reason that the choice $f(x) = e^{-\pi x^2}$ is so convenient is that we actually know the zeros and poles of $\Gamma_f(s) = (1/2)\pi^{-s/2}\Gamma(s/2)$: it has no zeros in $\mathbf C$ and it has simple poles at $0, -2, -4, \ldots$. For even self-dual Schwartz $f$ on $\mathbf R$ that are not simple modifications of $e^{-\pi x^2}$, how feasible is it to determine whether or not $\Gamma_f(s) \not= 0$ for $\text{Re}(s) > 1$ (or $\text{Re}(s) > 0$)? The method of meromorphically continuing $\Gamma_f(s)$ from the half-plane $\text{Re}(s) > 0$ where it is analytic to all of $\mathbf C$ shows that its only possible poles are at $0, -1, -2, -3, \ldots$ with orders at most $1$ and the residue at $s = -n$ is $(-1/n!)\int_0^\infty f^{(n+1)}(x)\,dx$, which by the Fundamental Theorem of Calculus is $(-1/n!)(f^{(n)}(\infty) - f^{(n)}(0)) = f^{(n)}(0)/n!$. Therefore you could determine the poles of $\Gamma_f$ by seeing when $f^{(n)}(0)$ is 0 and not 0, but how are you going to determine where the zeros of $\Gamma_f$ are or that there are no zeros? (EDIT: for even $f$, its odd-order derivatives vanish at $0$, so the residue at $-n$ vanishes when $n$ is odd, which means the poles of $\Gamma_f(s)$ can only be at $s = 0, -2, -4, -6, \ldots$. Those are all simple poles of $\pi^{-s/2}\Gamma(s/2)$, which has no zeros, so $G(s) := \Gamma_f(s)/(\pi^{-s/2}\Gamma(s/2))$ is an entire function. Thus $\Gamma_f(s) = G(s)\pi^{-s/2}\Gamma(s/2)$ with $G$ entire, so $\pi^{-s/2}\Gamma(s/2)$ is a "holomorphic gcd" of all $\Gamma_f(s)$ for even Schwartz functions $f$ on $\mathbf R$. The exponential factor $\pi^{-s/2}$ was kind of irrelevant to drag through the calculation since it has no zeros or poles, but it's traditionally seen alongside $\Gamma(s/2)$ so I used it. This addresses comments below by Will Sawin and Venkataramana.)

Example: the function $f(x) = 1/(e^{\pi x} + e^{-\pi x})$ is an even self-dual Schwartz function on $\mathbf R$. Can someone determine in a self-contained way (i.e., not using $\zeta(s)$) where $\Gamma_f(s)$ has its zeros on $\mathbf C$, or determine if it has no zeros?

Edit: Ignoring the wacky example just above, in some comments below I work out an example with $f(x)$ being a 4th degree Hermite polynomial times a Gaussian and find that $\Gamma_f(s)$ has two zeros with positive real part, at $s = (1\pm \sqrt{-2})/2$.
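
That example can be confirmed numerically. A sketch assuming mpmath: take $b=2\pi$ (so the self-dual Haar measure is ordinary Lebesgue measure) and $f(x)=(16b^2x^4-48bx^2+12)e^{-(b/2)x^2}$; the closed form worked out in the comments below is $\Gamma_f(s)=2(b/2)^{-s/2}(4s^2-4s+3)\Gamma(s/2)$, which vanishes at $s=(1\pm\sqrt{-2})/2$:

```python
# KConrad's Hermite-4 example: an even self-dual Schwartz function whose
# Gamma-factor Gamma_f(s) = int_0^oo f(x) x^(s-1) dx has zeros in Re(s) > 0.
# Assumes mpmath; b = 2*pi makes the self-dual Haar measure ordinary dx.
from mpmath import mp, mpc, quad, exp, gamma, sqrt, pi, inf

mp.dps = 20
b = 2 * pi

def f(x):
    return (16*b**2*x**4 - 48*b*x**2 + 12) * exp(-(b/2) * x**2)

def Gamma_f(s):
    return quad(lambda x: f(x) * x**(s - 1), [0, inf])

closed = lambda s: 2 * (b/2)**(-s/2) * (4*s**2 - 4*s + 3) * gamma(s/2)

s1 = mpc('0.5', '1.0')          # generic point: integral matches the closed form
print(Gamma_f(s1))
print(closed(s1))

s0 = mpc(1, sqrt(2)) / 2        # root of 4s^2 - 4s + 3, real part 1/2
print(Gamma_f(s0))              # ~ 0: this Gamma-factor vanishes, unlike Gamma(s/2)
```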

KConrad
  • 49,546
  • 2
    (+1) for emphasizing that self-duality is not special to the Gaussian. One could also add the following complete characterization of self-duality. Hermite functions (Gaussian times Hermite polynomials) form a Schauder basis of Schwartz space which is also an eigenbasis for the Fourier transform with four eigenvalues corresponding to the four roots of unity. I wonder if anyone computed $\Gamma_f$ for one of these more general $f$'s in the $\lambda=1$ eigenspace. – Abdelmalek Abdesselam Apr 17 '19 at 16:30
  • Indeed, in a paper that I've only seen cited in a 2009 paper by Lachaud about Connes' program, 80+ years ago Müntz wrote a paper emphasizing that the Gaussian can be replaced by any Schwartz function (not purely odd...) – paul garrett Apr 17 '19 at 21:38
  • Also, to be clear, self-duality is not necessary: local functional equation. And, one more point, Iwasawa talked about such things at the 1950 Congress, so perhaps "Iwasawa-Tate" theory would be a more accurate label. – paul garrett Apr 17 '19 at 21:41
  • @AbdelmalekAbdesselam the first time we get a self-dual function beyond a Gaussian using Hermite polynomials, we need the 4th Hermite polynomial. Of course we need to be clear about how we normalize that. Define the $n$th Hermite polynomial by $(d/dx)^{n}(e^{-x^2}) = (-1)^nH_n(x) e^{-x^2}$, so $H_n(x)$ has positive leading coefficient. Then $H_4(x) = 16x^4 - 48x^2 + 12$. A choice of self-duality $\langle x,y\rangle = e^{\pm ibxy}$ for a fixed sign on $i$, with $b > 0$, leads to a self-dual Haar measure on $\mathbf R$, which is $\sqrt{b/(2\pi)}\,dx$. Then we have (to be continued...) – KConrad Apr 17 '19 at 23:27
    $\int_{\mathbf R} e^{\pm i bxy}H_n(\sqrt{b}x)e^{-(b/2)x^2}\,\sqrt{b/(2\pi)}\,dx = (\pm i)^nH_n(\sqrt{b}y)e^{-(b/2)y^2}$. (I checked this numerically for a few $b$ and $y$ and it looks okay.) Letting $n = 4$, a self-dual function is $f(x) = (16b^2x^4 - 48bx^2 + 12)e^{-(b/2)x^2}$. I calculate for $\text{Re}(s) > 0$ that $\Gamma_f(s) = \int_0^\infty f(x) x^s\,dx/x$ is $(1/2)(b/2)^{-s/2}(64\Gamma(s/2 + 2) - 96\Gamma(s/2+1) + 12\Gamma(s/2))$. Thus the question becomes: does $64\Gamma(s/2 + 2) - 96\Gamma(s/2+1) + 12\Gamma(s/2)$ have zeros (esp. with real part greater than 0)? To be continued... – KConrad Apr 17 '19 at 23:33
  • 1
    Replacing $s$ with $2s$, does $64\Gamma(s+2) - 96\Gamma(s+1) + 12\Gamma(s)$ have zeros? Using the relation $\Gamma(s+1) = s\Gamma(s)$, we get $64\Gamma(s+2) - 96\Gamma(s+1) + 12\Gamma(s) = (64(s+1)s - 96s + 12)\Gamma(s) = 4(16s^2 - 8s + 3)\Gamma(s)$, and that quadratic has roots: $(1\pm \sqrt{-2})/4$, and the real part is $1/4$, which is positive. Going back to the original function $64\Gamma(s/2+2) - 96\Gamma(s/2+1) + 12\Gamma(s/2)$, this is $4(4s^2 - 4s + 3)\Gamma(s/2)$ and the quadratic factor has roots $(1\pm \sqrt{-2})/2$ with real part $1/2 > 0$. Thus this $\Gamma_f(s)$ has zeros. – KConrad Apr 17 '19 at 23:39
  • 2
    @KConrad We should be able to show that given any even self-dual Schwartz function, its Mellin transform has the form $ g( s (1-s)) \Gamma (s)$ for some holomorphic $g$. If $g$ has no zeroes, it is the exp of some function of $s (1-s)$, which if it is nonconstant has growth order 2, thus grows faster than the gamma function. If $g$ is constant, we can do an inverse Mellin transform. So the Gaussian/ Gamma pair is optimal in some precise sense... – Will Sawin Apr 18 '19 at 00:20
  • It is possible to prove that for any Schwarz function f on $\mathbb R$, $\pi ^{s/2}\Gamma (s/2)$ $divides$ the Mellin transform $\Gamma _f(s)$, in the sense that the ratio is holomorphic. This is done by showing that $f(x)-f(0)e^{\pi x^2}$ is $xg(x)$ where $g$ is a Schwarz function. In this way, one can see that the poles of $\Gamma (s/2)$ are the "worst". – Venkataramana Apr 18 '19 at 00:51
  • @Venkataramana watch out for typos: $\pi^{-s/2}$ and $e^{-\pi x^2}$ (and Schwartz). Also, your statement is incorrect for the Schwartz function $f(x) = xe^{-\pi x^2}$, for which $\Gamma_f(s) = (1/2)\pi^{-(s+1)/2}\Gamma((s+1)/2)$ is not divisible by $\pi^{-s/2}\Gamma(s/2)$. You meant to say "for any even Schwartz function $f$". I updated my answer to address this by the residue formula I already gave, as an alternative to your suggestion, which would be more conveniently written as $f(x) - f(0)e^{-\pi x^2} = x^2g(x^2)$ for a Schwartz function $g$. – KConrad Apr 18 '19 at 06:52
  • @WillSawin you meant $\Gamma(s/2)$, not $\Gamma(s)$. I inserted an edit in the last long paragraph of my answer that shows $\Gamma(s/2)$ is "optimal" in a divisibility sense. – KConrad Apr 18 '19 at 07:01
  • @KConrad: Thanks. The typos are terrible! You are right: $\Gamma _f(s)$ is divisible by $\pi ^{-s/2}\Gamma (s/2)$ only for even Schwartz class functions. – Venkataramana Apr 18 '19 at 07:46
  • 1
    The issue of finding a function whose integral is the gcd of integrals of all possible functions in some class reoccurs in the theory of (local) $L$-functions of (higher rank) automorphic forms. For the standard $L$-function, at non-archimedean places , newforms fill this role. – Will Sawin Apr 18 '19 at 12:37
  • 1
    @paulgarrett I found Lachaud's paper (MR2022610) and it was in 2003, not 2009. He says on the top of p. 182 that for all Schwartz $f$, the Mellin transform $\Gamma_{f(x)+f(-x)}(s)$ is nonvanishing for $\text{Re}(s) = 1/2$. Do you know such a general nonvanishing result? Determining where $\Gamma_f(s)$ is $0$ seems subtle. Moreover, by calculations I mentioned in earlier comments, if $F(x) = (8b^2x^4−24bx^2+6)e^{-(b/2)x^2}$ for either self-duality $\langle x,y\rangle = e^{\pm ibxy}$, then $\Gamma_{F(x) + F(-x)}(s) = \Gamma_{2F}(s)$ has zeros $1/2 \pm \sqrt{2}i/2$, contradicting that claim. – KConrad Apr 18 '19 at 20:47
  • 1
    @paulgarrett since you write that you have not seen Muntz's 1922 paper cited much, another place is Albeverio and Cebulla, "Müntz formula and zero free regions for the Riemann zeta function" (MR2285583). – KConrad Apr 18 '19 at 20:54
  • @KConrad, ah, thanks for the correction on the date. Also for the other reference. I know Albeverio's name from other things (receptiveness to non-standard analysis, solvable models in physics). And you are right that (as far as I know) it is difficult to prove that an entire function has no zeros... My soon-to-finish student Kim Klinger-Logan has a result about non-vanishing of certain things of the form h(s)-h(1-s) off the critical line, but her method seems not to touch non-vanishing results for a given function... Pity. :) – paul garrett Apr 18 '19 at 21:30
  • @KConrad, specifically, no, I do not necessarily believe a claim that "Gamma" functions attached to even Schwartz functions do not vanish on the critical line. I'd bet against, rather than for... – paul garrett Apr 18 '19 at 21:35
  • @KConrad, actually, upon reflection, it is possible that Lachaud's remark was a hasty and perhaps careless nod to a class of results provable by various operator-theory ideas, which would be unsurprising in the context of Connes' school, but careful hypotheses are necessary. – paul garrett Apr 18 '19 at 21:43
  • @WillSawin Will, could you expand on your remark about newforms? – Ilya Zakharevich Sep 17 '19 at 06:36
  • @IlyaZakharevich I just mean that if you take a modular form $f(\tau)$ and integrate $\int_0^\infty f(iy) y^s\,dy/y$, you'll get the standard $L$-function if $f$ is a newform, but if you have a different modular form with higher level at $p$ but the same Hecke eigenvalues, like $f(p \tau) - f(\tau/p)$, you'll see the $L$-function multiplied by some holomorphic function, introducing additional zeroes. This is the $p$-adic analogue of the archimedean phenomenon described here. – Will Sawin Sep 17 '19 at 12:18
14

"Why of all functions does one have to put the Gamma-function there?"

$\zeta(s)$ has trivial zeroes at $-2, -4, -6$, etc. $\zeta(1-s)$ thus has trivial zeroes at $s=3, 5, 7$, etc - a completely different set of zeroes.

To make a reflection formula where $\zeta(s)$ is somehow equal to $\zeta(1-s)$, you have to get rid of the two differing sets of trivial zeroes. Multiplying by the gamma is perfect for this since its poles will cancel out those zeroes. For example, $\Gamma(s/2)$ has poles at $s=0, -2, -4, -6$, etc. and should go with $\zeta(s)$. $\Gamma((1-s)/2)$ has poles at $s=1, 3, 5$, etc. and should go with $\zeta(1-s)$.
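
One can watch the cancellation numerically: as $s\to-2$, $\zeta(s)\to0$ and $\Gamma(s/2)$ blows up, while the product $\pi^{-s/2}\Gamma(s/2)\zeta(s)$ tends to the finite value $\xi(3)$ that the reflection formula demands. A sketch assuming mpmath (`xi` is a local helper):

```python
# Watch the pole/zero cancellation at the trivial zero s = -2:
# zeta(s) -> 0 and Gamma(s/2) -> oo, but pi^(-s/2)*Gamma(s/2)*zeta(s) stays finite
# and tends to xi(3), as xi(s) = xi(1-s) requires.  Assumes mpmath.
from mpmath import mp, mpf, pi, gamma, zeta

mp.dps = 30

def xi(s):
    return pi**(-s/2) * gamma(s/2) * zeta(s)

for eps in [mpf('1e-3'), mpf('1e-6'), mpf('1e-9')]:
    print(xi(-2 + eps))   # approaches a finite limit as eps -> 0
print(xi(3))              # the limit: xi(3) = pi^(-3/2) * Gamma(3/2) * zeta(3)
```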

It's possible to prove that gamma is the right choice, but Euler no doubt discovered that gamma is the right function through numerical experimentation - when he discovered the zeta reflection formula like 250 years ago.

Michael Hardy
  • 11,922
  • 11
  • 81
  • 119
Dr_Acula
  • 141
  • Nice, thank you! I can imagine that this was maybe the first hint, then leading to the connection pointed out by Harald Hanche-Olsen... – Peter Arndt Aug 02 '10 at 10:34
  • 3
    Peter: that was not the first hint. That the zeta-function even makes sense for negative numbers (in a rigorous sense) was first worked out by Riemann through his proof of analytic continuation. Before that there was not a known relation between zeta(s) and zeta(1-s) to motivate using the Gamma-function. Although Euler, long before Riemann, had derived a non-rigorous formula that is equivalent to the functional equation of the zeta-function just at integers, I don't think it was something that influenced Riemann's work which brought in the Gamma-function explicitly. – KConrad Mar 10 '11 at 05:45
  • 1
    I posted an MSE question asking about Riemann's thinking on symmetrizing the functional equation, http://math.stackexchange.com/questions/143449/riemanns-thinking-on-symmetrizing-the-zeta-functional-equation. – Tom Copeland May 10 '12 at 23:03
10

Whoever did this first probably had some reason to try out the Gamma-function. What was it?

The first one to do this was, precisely, Riemann in his famous (and 150 years old) paper: Ueber die Anzahl der Primzahlen unter einer gegebenen Grösse. There he proved the functional equation as well, with the method that Harald explained above.

Ricardo
  • 239
  • 1
  • 8
  • Related to http://mathoverflow.net/questions/58004/how-does-one-motivate-the-analytic-continuation-of-the-riemann-zeta-function. – Tom Copeland May 19 '12 at 15:33
  • 4
    Actually, not quite. Define $\eta(z)=(1-2^{1-z})\zeta(z)=\sum_{n\ge 1}\frac{(-1)^{n+1}}{n^z}$. This sum converges (conditionally) when $\operatorname{Re} z>0$, thus $\eta$ is defined in the same half-plane (modulo considerations for $\operatorname{Re} z=1$). The functional equation for $\zeta$ leads to a functional equation for $\eta$. The latter makes sense without complex analysis since $\eta(s)$ and $\eta(1-s)$ are both defined if $0<s<1$. This functional equation was already published by Euler! See: 1) E. Landau: Euler und die Funktionalgleichung der Riemannschen Zetafunktion. 2) A. Weil: Prehistory of the zeta-func. – M Mueger Nov 11 '15 at 22:01
  • See also http://math.stackexchange.com/questions/143449/riemanns-thinking-on-symmetrizing-the-zeta-functional-equation – Tom Copeland Jan 05 '17 at 21:21
3

The Gamma function arises when we repeatedly differentiate an Appell sequence. An example of Appell polynomials is the Bernoulli polynomials. When we differentiate them, the factors accumulate:

$$B_n'(x)=nB_{n-1}(x)$$

$$B_n''(x)=n(n-1)B_{n-2}(x)$$

$$B_n'''(x)=n(n-1)(n-2)B_{n-3}(x)$$

They are just another name for the Hurwitz zeta function:

$$B_n(x) = -n \zeta(1-n,x)$$

Thus, for $f(s,q)=\zeta(s,-q)$

$$\frac\partial{\partial q}f(s,q)= s f(s+1,q)$$

$$\frac{\partial^2}{\partial q^2}f(s,q)= s(s+1) f(s+2,q)$$

$$\frac{\partial^3}{\partial q^3}f(s,q)= s(s+1)(s+2) f(s+3,q)$$

Since the Riemann zeta function is the Hurwitz zeta function evaluated at $q=1$, the expression you give is essentially a repeated derivative of the Hurwitz zeta function, with the factor $\pi^{-s}$ appearing if we normalize the Hurwitz zeta function by stretching it horizontally by a factor of $\pi$.

Repeated derivatives of the Hurwitz zeta function are, in turn, nothing more than the polygamma function.
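
These identities are easy to spot-check numerically: $B_n(x)=-n\zeta(1-n,x)$, and the standard fact $\psi^{(n)}(q)=(-1)^{n+1}n!\,\zeta(n+1,q)$, which is the precise sense in which repeated $q$-derivatives of the Hurwitz zeta function are polygamma values. A sketch assuming mpmath:

```python
# Spot-check  B_n(x) = -n * zeta(1-n, x)               (Bernoulli vs Hurwitz zeta)
# and         psi^(n)(q) = (-1)^(n+1) n! zeta(n+1, q)  (polygamma vs Hurwitz zeta).
# Assumes mpmath.
from mpmath import mp, mpf, bernpoly, zeta, polygamma, factorial

mp.dps = 25

x = mpf('0.3')
for n in [2, 3, 4, 5]:
    print(bernpoly(n, x), -n * zeta(1 - n, x))    # the two values should match

q = mpf('0.7')
for n in [1, 2, 3]:
    print(polygamma(n, q), (-1)**(n + 1) * factorial(n) * zeta(n + 1, q))
```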


For instance, here is the function $-1/x$:

[plot of the function $-1/x$]

If we add infinitely many similar functions with a shift of pi/2 each in both directions, we get $\tan x$. But if we do the same only in one direction, we get "incomplete tangent":

http://storage7.static.itmages.ru/i/14/0910/h_1410326921_7988832_91f3fd7d7d.png

The yellow one is $\operatorname{pg}(x)=\frac 1\pi \psi (\frac x\pi)$, the blue one is $\operatorname{cpg}(x)=-\frac 1\pi \psi (1-\frac x\pi)$. They obey $\operatorname{cpg}(x)+\operatorname{pg}(x)=-\cot(x)$.

Now if we differentiate cpg(x) we get:

$$(\operatorname{cpg}(x))^{(s-1)}=\pi^{-s}\Gamma(s)\zeta(s,1-\frac x\pi)$$

Compare it with your formula:

$$\xi(2s) = \pi^{-s}\Gamma\left(s\right)\zeta(2s)$$
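
Both the cotangent identity and the derivative formula above can be spot-checked numerically; here is a sketch assuming mpmath, with `pg` and `cpg` written as local helpers and the derivative formula tested at the integer value $s=3$:

```python
# Spot-check cpg(x) + pg(x) = -cot(x) and, at the integer value s = 3,
#   d^(s-1)/dx^(s-1) cpg(x) = pi^(-s) * Gamma(s) * zeta(s, 1 - x/pi).
# Assumes mpmath; pg and cpg are the answer's functions written as local helpers.
from mpmath import mp, mpf, pi, psi, cot, diff, gamma, zeta

mp.dps = 25

pg  = lambda x:  psi(0, x / pi) / pi
cpg = lambda x: -psi(0, 1 - x / pi) / pi

x = mpf('0.4')
print(cpg(x) + pg(x))
print(-cot(x))                                      # should agree

s = 3
print(diff(cpg, x, s - 1))                          # numerical second derivative of cpg
print(pi**(-s) * gamma(s) * zeta(s, 1 - x / pi))    # right-hand side of the formula
```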

Glorfindel
  • 2,743
Anixx
  • 9,302
3

I'm not sure of the history of the gamma factor, though I would suggest that no one "tried it out", but rather it simply arose in trying to prove the functional equation. Riemann was the first to prove the functional equation, and his proof essentially follows that in Harald Hanche-Olsen's answer, which makes my explanation plausible. Alternatively, the functional equation of the zeta function comes out of the functional equation of a theta series, and the Mellin transform of a theta series gives rise to a Gamma function. This latter explanation arises more naturally for modular forms: the L-function of a modular form is also completed by a gamma factor to obtain a functional equation; in this case, the completed L-function is simply the Mellin transform of the modular form itself.

Furthermore, as Leonid Positselski answers, it is indeed true that Tate's thesis provides a uniform way of obtaining the gamma factors at infinity in the same manner as one obtains the local L-factors at finite places.

More generally, there is a recipe, given an arbitrary motive, for the expected gamma factors that should give a functional equation for the motivic L-functions. These are due to Deligne and Serre (I believe) and are determined by the Hodge structure of the motive (see Deligne's Corvallis article "Valeurs de fonctions L..."). This shows that there's a uniform way of obtaining the gamma factors as one varies the L-function one is studying, an orthogonal question to the one Leonid Positselski answered.

Rob Harron
  • 4,777
  • 2
  • 24
  • 35
-2

Although there is already an answer of mine, I want to add another answer.

This is TL;DR.

Short answer. This is because the logarithmic function lacks factorials in its Taylor expansion.

Medium answer. Riemann's functional equation links exponential and trigonometric functions with logarithms and inverse trigonometric functions. It contains everything you need to make an exponential from a logarithm.

Long answer.

This is the Taylor series for the logarithm:

$$\ln(z+1)=z-\frac{z^2}{2}+\frac{z^3}{3}-\frac{z^4}{4}+\frac{z^5}{5}-\frac{z^6}{6}+\frac{z^7}{7}-\frac{z^8}{8}+\frac{z^9}{9}-\frac{z^{10}}{10}+O\left(z^{11}\right)$$

This is the Taylor series for the exponential:

$$\exp (z)-1=z+\frac{z^2}{2!}+\frac{z^3}{3!}+\frac{z^4}{4!}+\frac{z^5}{5!}+\frac{z^6}{6!}+\frac{z^7}{7!}+\frac{z^8}{8!}+\frac{z^9}{9!}+\frac{z^{10}}{10!}+O\left(z^{11}\right)$$

What should we add to the former to get the latter? Well, we have to add the factorial and remove the index from the denominator.

Consider an algebraic element $\omega_+$ (not a real number) on which a "standard part" function $\operatorname{st}$ is defined in such a way that $\operatorname{st} \omega_+^n=B_n^*$, where $B_n^*$ are the Bernoulli numbers (with $B_1^*=1/2$), or more generally, $\operatorname{st}\omega_+^x=-x\zeta(1-x)$.

Now consider the function

$$\frac{z}{2\pi} \log \left(\frac{\omega _+-\frac{z}{2 \pi }}{\omega _++\frac{z}{2 \pi }}\right)$$ Its Taylor series is

$$-\frac{z^2}{2 \left(\pi ^2 \omega _+\right)}-\frac{z^4}{24 \left(\pi ^4 \omega _+^3\right)}-\frac{z^6}{160 \left(\pi ^6 \omega _+^5\right)}-\frac{z^8}{896 \left(\pi ^8 \omega _+^7\right)}-\frac{z^{10}}{4608 \left(\pi ^{10} \omega _+^9\right)}+O\left(z^{11}\right)$$

Following Riemann's functional equation and our definition, we have:

$$\operatorname{st}\omega_+^{-x}=\operatorname{st}\frac{-\omega_+^{x+1} 2^x\pi^{x+1}}{\sin(\pi x/2)\Gamma(x)(x+1)}$$

So we can substitute the negative powers of $\omega_+$ with positive powers without changing the standard part of the whole expression.

The non-zero terms are

$$\frac{2 \left(-\frac{1}{2 \pi }\right)^n \left(-\omega _+\right){}^{1-n}}{n-1}$$

and after substitution we have

$$\frac{\omega _+^n \sec \left(\frac{\pi n}{2}\right)}{\Gamma (n+1)}$$

The resulting series is

$$\frac{1}{2} \omega _+^2 z^2+\frac{1}{24} \omega _+^4 z^4-\frac{1}{720} \omega _+^6 z^6+\frac{\omega _+^8 z^8}{40320}-\frac{\omega _+^{10} z^{10}}{3628800}+O\left(z^{11}\right)$$

oh, wait... isn't that similar to

$$\cos \left(\omega _+ z\right)=1-\frac{1}{2} \omega _+^2 z^2+\frac{1}{24} \omega _+^4 z^4-\frac{1}{720} \omega _+^6 z^6+\frac{\omega _+^8 z^8}{40320}-\frac{\omega _+^{10} z^{10}}{3628800}+O\left(z^{11}\right)$$

Well, we got:

$$\operatorname{st}\frac{z}{2 \pi } \log \left(\frac{\omega _+-\frac{z}{2 \pi }}{\omega _++\frac{z}{2 \pi }}\right)=\operatorname{st}(\cos \left(\omega _+ z\right)-1)$$

In a similar way one can establish other impressive relations:

$$\operatorname{st}(\exp \left(\omega _+ z\right)-\omega _+ z-1)=\operatorname{st}\frac{i z}{2 \pi } \log \left(\frac{\omega _+-\frac{i z}{2 \pi }}{\omega _++\frac{i z}{2 \pi }}\right)$$

$$\operatorname{st}\cos \left(\omega _+ z\right)=\operatorname{st}\frac{ z}{2 \pi } \log \left(\frac{\omega _+-\frac{ z}{2 \pi }}{\omega _-+\frac{ z}{2 \pi }}\right)$$

$$\operatorname{st}\cosh \left(\omega _+ z\right)=\operatorname{st}\frac{i z}{2 \pi } \log \left(\frac{\omega _+-\frac{i z}{2 \pi }}{\omega _-+\frac{i z}{2 \pi }}\right)$$

(where $\omega_-=\omega_+-1$).

In other words, Riemann's functional equation is a direct bridge that connects exponential function to logarithm, trigonometric functions to inverse trigonometric, transforming each term of the series separately.

Anixx
  • 9,302
  • 5
    Is there any literature on this? I don't understand any of what is written here. – Todd Trimble Oct 31 '16 at 06:25
  • 1
    No, all this comes from this user https://mathoverflow.net/questions/215762/non-standard-numbers-and-exponential-form-of-zeta-function – reuns Apr 18 '17 at 01:51