37

I have seen the functional equation and its proof for the Riemann zeta function many times, but the books usually start with, e.g., a tricky change of variables in a Gamma-function integral, or other steps that seem unmotivated (at least to me!).

My question is, how does one really motivate the functional equation of the zeta function? Can one see there is some hidden symmetry before finding/proving it? For example, I think $\Gamma(s+1)=s\Gamma(s)$ for $s>1$ "motivates" the analytic continuation of the Gamma function.

Manfred Weis
36min
  • Is your question about analytic continuation or the functional equation? – Qiaochu Yuan Mar 09 '11 at 22:37
  • Both, I guess. I feel like they are very related to each other. – 36min Mar 09 '11 at 23:30
  • Euler's work on summing divergent series may be viewed as leading one to believe that a continuation and functional equation might exist. – Stopple Mar 09 '11 at 23:44
  • You might be interested in some of the answers to http://mathoverflow.net/questions/2040/why-are-functional-equations-important . – Qiaochu Yuan Mar 10 '11 at 01:55
  • Stopple: is there any evidence that Riemann knew the paper of Euler in which something equivalent to the functional equation (at integers) was non-rigorously derived? – KConrad Mar 10 '11 at 05:48
  • Sorry, this is too lazy to be an answer, so it's only a comment; I can't remember the details and I haven't got time to work it out right now (maybe in Titchmarsh's old book?), BUT:

    I think I remember you can replace $n^{-s}$ with an integral $\int_n^{n+1} \ldots$ involving the function $x-n = x-[x]$, where $[x]$ is the unique integer satisfying $[x] \leq x < [x]+1$. This changes a sum into an integral, and we know integrals are often easier than sums.

    But now $x-[x]$ is periodic (a sawtooth function). Thus, it is totally natural and obvious to expand it as a Fourier series...

    – Zen Harper Mar 10 '11 at 06:16
  • ...of course this may well be basically the same thing that KConrad and Daniel Parry talk about in their answers; anyway, "with hindsight", this should make the zeta function seem less mysterious; I think Hardy or Littlewood said "a periodic function should always be expanded as a Fourier series"! – Zen Harper Mar 10 '11 at 06:20
  • Since analytic continuation in general was well-known and widely used, wouldn't it be natural to try to use it whenever you see an analytic function? – Gerald Edgar Mar 10 '11 at 14:44
  • @KConrad - Ayoub's paper 'Euler and the Zeta Function', American Mathematical Monthly, v. 81 (1974), pp. 1067-1086, says "A. Weil remarks that the external evidence supports strongly the view that Riemann was very familiar with Euler's contributions." I looked for such a comment in "Number Theory: An approach through history..." (1984) but found only "For a hundred years after their discovery, Euler's functional equations were utterly forgotten." (p. 276) – Stopple Mar 10 '11 at 16:43
  • (continued) In Weil's 'Two lectures on number theory' (1974), he writes (p. 101) "..in 1849, there were two entirely independent publications by two very respectable mathematicians, both giving the functional equation of the L-series [$L(s,\chi_{-4})$]. One of them, Schlömilch, published it as an exercise for advanced students in the Archiv der Mathematik und Physik; the other, Malmquist, included it, also without proof, in a paper in Crelle's Journal, … with a remark that "he seemed to remember having seen something of that kind in Euler". – Stopple Mar 10 '11 at 19:03
  • Supporting Stopple's comment, I'd recommend Hirzebruch's 2007 Euler lecture, which deals with Euler's divergent series summations at the beginning: http://www.mathnet.ru/php/presentation.phtml?option_lang=eng&presentid=149 – Christian Nassau May 23 '12 at 08:02
  • The motivation I had in my mind regarding the analytic continuation of $\zeta$ is very much along the lines of what is answered below by prof. Tao. But I would like to add one more thing : by collecting the odd and even terms one has $(1- 2^{1-s}) \zeta(s) = - \sum_{n=1}^{\infty} (-1)^n n^{-s},$ and now the analytic continuation (as well as the nature of the singularity at $1$) of $\zeta$ can be seen to be motivated. Not sure of an "underlying symmetry", but this observation can make the continuation seem natural. – Aditya Guha Roy Jan 07 '21 at 17:25
  • @AdityaGuhaRoy, yep, if you peruse https://oeis.org/A131758, you'll see there are a number of starting points for analytic continuation, including the Bernoulli, Eulerian, Euler, Genocchi, and zag numbers and polylogarithms. (Find "Genocchi" on the page). Also note the relations to the Bose-Einstein and Fermi-Dirac distributions. – Tom Copeland Jan 08 '21 at 16:29
  • Thank you for the references. I did not guess the equation all by myself. I think I read it somewhere and then it was fairly easy to establish it. It gives us some insight. – Aditya Guha Roy Jan 08 '21 at 16:46
  • And also the Hermite numbers/moments via their connection to the Jacobi theta function and gaussian distribution. – Tom Copeland Jan 08 '21 at 19:28

8 Answers

38

You do not try to motivate it! Even Riemann didn't see a nice argument right away.

Riemann's first proof of the functional equation used a contour integral and led him to a yucky functional equation expressing $\zeta(1-s)$ in terms of $\zeta(s)$ multiplied by things like $\Gamma(s)$ and $\cos(\pi{s}/2)$. Only after finding this functional equation did Riemann observe that the functional equation he found could be expressed more elegantly as $Z(1-s) = Z(s)$, where $Z(s) = \pi^{-s/2}\Gamma(s/2)\zeta(s)$. Then he gave a proof of this more symmetric functional equation for $Z(s)$ using a transformation formula for the theta-function $\theta(t) = \sum_{n \in {\mathbf Z}} e^{-\pi{n^2}t},$ which is $$\theta(1/t) = \sqrt{t}\theta(t).$$ In a sense that transformation formula for $\theta(t)$ is equivalent to the functional equation for $Z(s)$. The transformation formula for $\theta(t)$ is itself a consequence of the Poisson summation formula and also reflects the fact that $\theta(t)$ is a modular form of weight 1/2.
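To make the transformation formula concrete, here is a small numerical sketch (my addition, not part of the original answer): truncate the theta sum and compare both sides. The truncation point $N=60$ is an arbitrary choice that already gives full double precision for the values of $t$ used here.

```python
import math

def theta(t, N=60):
    """Truncated Jacobi theta: sum over n in Z of exp(-pi n^2 t), for t > 0."""
    return 1 + 2 * sum(math.exp(-math.pi * n * n * t) for n in range(1, N + 1))

# check the modular transformation theta(1/t) = sqrt(t) * theta(t)
for t in (0.37, 2.5):
    assert abs(theta(1 / t) - math.sqrt(t) * theta(t)) < 1e-12
```

The rapid Gaussian decay of the terms is what makes the check (and the theta-function proof itself) so effective: either $t$ or $1/t$ is large, so one side always converges very quickly.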

Instead of trying to motivate the technique of analytically continuing the Riemann zeta-function I think it's more important to emphasize what concepts are needed to prove it: Poisson summation and a connection with modular forms (for $\theta(t)$). These ideas are needed for the analytic continuation of most other Dirichlet series with Euler product which generalize $\zeta(s)$, so an awareness that the method of Riemann continues to work in other cases by suitably jazzing up Poisson summation and a link to modular forms will leave the reader/student with an appreciation for what goes into the proof.

This proof is not intuitive and I think it's a good illustration of von Neumann's comment that in mathematics we don't understand things, but rather we just get used to them.

KConrad
  • The analytic continuation is proved very quickly by Riemann. It was probably a standard trick for him:

    1. to write $n^{-s}$ as an integral of a function of the form $f(x)e^{-nx}$;

    2. to sum the geometric series within the integral;

    3. to replace the integration contour from $0$ to infinity by some loop from infinity to itself, turning once around $0$;

    4. to move the obtained contour.

    – ACL Mar 10 '11 at 07:51
  • The functional equation does not make sense without the analytic continuation at least into the critical strip, because the critical line $\operatorname{Re} s = 1/2$ is the line of reflection. – Cloudscape Aug 08 '21 at 12:21
  • @AlgebraicsAnonymous sure, but I am not sure what point you are trying to make with that comment to my answer. The title of the post and the body of the post are focusing on different questions. I am responding to the content of the post, which is asking for a motivation for the functional equation. – KConrad Aug 09 '21 at 16:45
  • @KConrad Obviously, I realised that this shortcoming exists, and therefore commented in order to add information :-D – Cloudscape Aug 09 '21 at 19:06
27

(1) Titchmarsh points out in his book on the zeta function (section 2.3) that if you blindly apply the Poisson summation formula to the function $f(x)=|x|^s$, you get the functional equation of the Riemann zeta function immediately, and gives a reference to a paper of Mordell where this procedure is justified.

(2) However, if I had to motivate this in a (introductory graduate) class, with students knowledgeable about complex analysis, I would do as follows:

-- We want to count primes up to $x$ (after all, this is what Riemann had in mind);

-- Writing the number of primes as a sum of some function $g(n)$ for $n$ up to $x$, it is fairly natural (though perhaps with hindsight) to use harmonic analysis to go further; it is here also rather natural to use characters of the positive reals, and hence one gets a Mellin integral on a vertical line with large enough real part ($\mathrm{Re}(s)=\sigma>1$, say), where the dependency on $x$ is entirely in $x^s$, and the logarithmic derivative of the zeta function comes out;

-- A naive attempt to estimate from this fails, since $|x^s|=x^{\sigma}$ is worse than the trivial bound for the number of primes up to $x$; it is therefore natural to try to move the contour of integration to the left to reduce the size of $\sigma$;

-- Hence whether something can be done along these lines is immediately related to the possibility of performing analytic continuation of the (logarithmic derivative) of the zeta function;

-- In Riemann's time, I have the feeling that people like him would then just shrug and say, "Ok, let's continue $\zeta(s)$ as much as we can", and would find a way of doing so. Most attempts succeed at also getting the functional equation, and it becomes natural to try to understand the latter. But there are other problems where the basic strategy is the same, and one gets a limited range of analytic continuation (and there is no functional equation), while still being able to get an asymptotic formula for the sum of original interest. [Indeed, in some sense, this is what happened to Riemann: to actually count primes, one needs to continue the logarithmic derivative, and if the zeros of $\zeta(s)$ are badly located, the strategy fails -- and he did not prove the Prime Number Theorem.]

20

Riemann's analytic continuations and derivation of the functional equations for $\zeta$ and $\xi$ seem quite natural and intuitive from the perspective of basic complex analysis.

Riemann in the second equation of his classic paper On the Number of Prime Numbers less than a Given Quantity (1859) writes down the Laplace (1749-1827) transform

$$\int_{0}^{+\infty}e^{-nx}x^{s-1}dx=\frac{(s-1)!}{n^s},$$ valid for $\mathrm{Re}(s)>0.$

With $n=1$ this is the iconic Euler (1707-1783) integral representation of the gamma function, and noting that

$(s-1)!=\frac{\pi}{\sin(\pi s)}\frac{1}{(-s)!}$ from the symmetric relation $\frac{\sin(\pi s)}{\pi s}=\frac{1}{s!\,(-s)!},$

this can be rewritten as

$$\frac{\sin(\pi s)}{\pi}\int_{0}^{\infty}e^{-x}x^{s-1}dx=\frac{1}{(-s)!},$$

suggesting quite naturally to someone as familiar with analytic continuation as Riemann that

$$\frac{-1}{2\pi i}\int_{+\infty}^{+\infty}e^{-x}(-x)^{s-1}dx=\frac{1}{(-s)!},$$

valid for all $s$, where the line integral is blown up around the positive real axis into the complex plane to sandwich it with a branch cut for $x>0$ and to loop the origin in the positive sense from positive infinity to positive infinity. Deflating the contour back to the real axis introduces a factor $-e^{i\pi s}+e^{-i\pi s}=-2i\sin(\pi s)$. (This special contour is now called the Hankel contour after Hermann Hankel (1839-1873), who became a student of Riemann in 1860 and published this integral for the reciprocal gamma fct. in his habilitation of 1863. Most likely Riemann introduced him to this maneuver.)

Riemann in his third equation observes that

$$(s-1)!\zeta(s)=(s-1)!\sum_{n=1}^{\infty }\frac{1}{n^s}=\int_{0}^{+\infty}\sum_{n=1}^{\infty }e^{-nx}x^{s-1}dx=\int_{0}^{+\infty}\frac{1}{e^x-1}x^{s-1}dx$$

and then immediately writes down as his fourth equality the analytic continuation

$$2\sin(\pi s)(s-1)!\zeta(s)=i\int_{+\infty}^{+\infty}\frac{(-x)^{s-1}}{e^x-1}dx,$$

valid for all $s$, which can be rewritten as

$$\frac{\zeta(s)}{(-s)!}=\frac{-1}{2\pi i}\int_{+\infty}^{+\infty}\frac{(-x)^{s-1}}{e^x-1}dx$$

as naturally suggested by the analytic continuation of the reciprocal of gamma above.

For $m=0,1,2, ...,$ this gives

$$\zeta(-m)=\frac{(-1)^{m}}{2\pi i}\oint_{|z|=1}\frac{m!}{z^{m+1}}\frac{1}{e^z-1}dz=\frac{(-1)^{m}}{m+1}\frac{1}{2\pi i}\oint_{|z|=1}\frac{(m+1)!}{z^{m+2}}\frac{z}{e^z-1}dz$$

from which you can see, if you are familiar with the exponential generating fct. (e.g.f.) for the Bernoulli numbers, that the integral vanishes for positive even $m$. Euler published the e.g.f. in 1740 (MSE-Q144436), and Riemann certainly was familiar with these numbers and states that his fourth equality implies the vanishing of $\zeta(-m)$ for positive even $m$ (but gives no explicit proof). He certainly was also aware of Euler's heuristic functional eqn. for integer values of the zeta fct., and Edwards in Riemann's Zeta Function (pg. 12, Dover ed.) even speculates that ".. it may well have been this problem of deriving (2) [Euler's formula for $\zeta(2n)$ for positive $n$] anew which led Riemann to the discovery of the functional equation ...."
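As a sanity check on this bookkeeping (my sketch, of course not in Riemann): compute the Bernoulli numbers by their standard recurrence, read off $\zeta(-m)=-B_{m+1}/(m+1)$ for $m\ge1$ (with the convention $B_1=-1/2$), and the trivial zeros at the negative even integers come out exactly in rational arithmetic.

```python
from fractions import Fraction
from math import comb

def bernoulli(N):
    """B_0..B_N from the recurrence sum_{j=0}^{m} C(m+1, j) B_j = 0 for m >= 1
    (convention B_1 = -1/2), i.e. the coefficients of z/(e^z - 1)."""
    B = [Fraction(1)]
    for m in range(1, N + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

B = bernoulli(12)

def zeta_neg(m):
    """zeta(-m) = -B_{m+1}/(m+1), valid for m >= 1."""
    return -B[m + 1] / Fraction(m + 1)

assert zeta_neg(1) == Fraction(-1, 12)                      # the famous -1/12
assert zeta_neg(2) == 0 and zeta_neg(4) == 0 and zeta_neg(6) == 0  # trivial zeros
```

The vanishing for even $m\ge2$ is exactly the vanishing of the odd Bernoulli numbers $B_3, B_5, \dots$, i.e. the evenness of $\frac{z}{e^z-1}+\frac{z}{2}$.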

Riemann then proceeds to derive the functional eqn. for zeta from his equality by using the singularities of $\frac{1}{e^z-1}$ to obtain basically

$$\zeta(s)=2(2\pi)^{s-1}\Gamma(1-s)\sin(\tfrac12\pi s)\zeta(1-s),$$

and says three lines later essentially that it may be expressed symmetrically about $s=1/2$ as

$$\xi(s) = \pi^{-s/2}\ \Gamma\left(\frac{s}{2}\right)\ \zeta(s)=\xi(1-s).$$

Riemann then says, "This property of the function [$\xi(s)=\xi(1-s)$] induced me to introduce, in place of $(s-1)!$, the integral $(s/2-1)!$ into the general term of the series $\sum \frac{1}{n^s}$, whereby one obtains a very convenient expression for the function $\zeta(s)$." And then he proceeds to introduce what Edwards calls a second proof of the functional eqn. using the Jacobi theta function.

Edwards wonders:

"Since the second proof renders the first proof wholly unnecessary, one may ask why Riemann included the first proof at all. Perhaps the first proof shows the argument by which he originally discovered the functional equation or perhaps it exhibits some properties which were important in his understanding of it."

I wonder whether, as his ideas evolved before he wrote the paper, he first constructed $\xi(s)$ by noticing that multiplying $\zeta(s)$ by $\Gamma(\frac{s}{2})$ introduces a simple pole at $s=0$, thereby reflecting the pole of $\zeta(s)$ at $s=1$ through the line $s=1/2$, and that the other simple poles of $\Gamma(\frac{s}{2})$ are removed by the zeros on the real line of the zeta function. The $\pi^{-s/2}$ can easily be determined as a normalization by an entire function $c^s$, where $c$ is a constant, using the complex conjugate symmetry of the gamma and zeta fct. about the real axis. Riemann had fine physical intuition and would have thought holistically in terms of the zeros of a function (see Euler's proof of the Basel problem) and its poles, the importance of which he certainly stressed.

Let's extend the reasoning above for the Jacobi theta function

$$\vartheta (0;ix^2)=\sum_{n=-\infty}^{\infty }\exp(-\pi n^{2}x^2).$$

Viewing a modified Mellin transform as an interpolation of Taylor series coefficients (MO-Q79868), it's easy to guess (note the zeros of the coefficients) that

$$\int^{\infty}_{0}\exp(-x^2)\frac{x^{s-1}}{(s-1)!} dx = \cos(\pi\frac{ s}{2})\frac{(-s)!}{(-\frac{s}{2})!} = \frac{1}{2}\frac{(\frac{s}{2}-1)!}{(s-1)!},$$

and, therefore,

$$\int^{\infty}_{0}\exp(-\pi (n x)^2)x^{s-1} dx = \frac{1}{2}\pi^{-s/2}(\frac{s}{2}-1)! \frac{1}{n^s}.$$

By now you should be able to complete the line of reasoning to obtain, for $\mathrm{Re}(s)>1,$

$$\xi(s)=\int_{0^+}^{\infty }[\vartheta (0;ix^2)-1]x^{s-1}dx=\pi^{-s/2}\ \Gamma\left(\frac{s}{2}\right)\ \zeta(s).$$

Do an analytic continuation as done for the gamma function in MSE-Q13956 to obtain, for $0<\mathrm{Re}(s)<1$,

$$\xi(s)=\int_{0^+}^{\infty }[\vartheta (0;ix^2)-(1+\frac{1}{x})]x^{s-1}dx.$$

Then use symmetries of the Mellin transform and the fact that $\xi(s)=\xi(1-s)$ (as explained in MSE-Q28737) to obtain the functional equation

$$\vartheta (0;ix^2)=\frac{1}{x}\vartheta (0;\frac{i}{x^{2}}).$$


Update (Jan. 5, 2021):

This perhaps answers Edwards' question and provides a simple path to the functional identity.

Euler initially acquired a reputation with solving the Basel problem to establish the value of $\zeta(2)$ and furthermore the identities

$$\frac{2}{(2\pi)^{2n}}\:(2n-1)!\:\zeta(2n)=(-1)^{n+1}\frac{B_{2n}}{2n}=(-1)^{n}\zeta(1-2n).$$

As noted above, Riemann incorporated the e.g.f. for the Bernoulli numbers in his normalized Mellin/Laplace transform for the zeta function. It's not a great leap of faith to believe that Riemann (of the eponymous surfaces and derivative) grasped the interpolating property of the Mellin transform. Given that assumption, he could have easily noted and perhaps initially formulated the Mellin integral representation from

$$b_m = D^m_{z=0} \; e^{b.z} = D^m_{z=0} \; \frac{z}{e^z-1}$$

$$ = (-1)^{m} \; \frac{1}{2\pi i}\oint_{|z|=1}\frac{m!}{z^{m+1}} \; \frac{z}{e^z-1} \; dz $$

$$= (-1)^{m} \; \frac{1}{2\pi i}\oint_{|z|=1}\frac{m!}{z^{m+1}} \; e^{b.z} \; dz$$

$$ =(-1)^{m} \frac{1}{2\pi i} \; \oint_{|z|=1}\frac{m!}{z^{m+1}} \; [1 - \frac{z}{2}+ \sum_{n \geq 2} \; \cos(\frac{\pi n}{2}) (-2) \; (2\pi)^{-n} \; n! \; \zeta(n) \; \frac{z^n}{n!}] \; dz $$

$$ =(-1)^{m} \frac{1}{2\pi i} \; \oint_{|z|=1}\frac{m!}{z^{m+1}} \; [1 - \frac{z}{2}+ \sum_{n \geq 2} \; (-n) \;\zeta(1-n) \; \frac{z^n}{n!}] \; dz . $$

The Hankel deformation maneuvers above could be reversed to obtain the Mellin transforms equivalent to these contour integrals (see this MO_A), but this would not change the equivalence of the coefficients in the two e.g.f.s for the Bernoulli numbers, so from the interpolating property of the Mellin transform (and therefore the Cauchy integrals), Riemann could have simply analytically continued $n$ to $1-s$ to surmise the target identity

$$ \cos(\frac{\pi n}{2}) \; 2 \; (2\pi)^{-n} \; n! \; \zeta(n) \; |_{n \to 1-s} = n \;\zeta(1-n) \; |_{n \to 1-s},$$

giving the functional reflection identity

$$\cos(\frac{\pi (1-s)}{2}) \; 2 \; (2\pi)^{s-1} \; (1-s)! \; \zeta(1-s) = (1-s) \;\zeta(s) , $$

or

$$2 \; (2\pi)^{s-1} \; \sin(\frac{\pi s}{2}) \; (-s)! \; \zeta(1-s) = \zeta(s) . $$
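This last identity can be sanity-checked with only classical values (my sketch, not part of the derivation): at $s=-1$ the left side involves just $\sin(\pi s/2)=-1$, $(-s)!=\Gamma(2)=1$, and $\zeta(1-s)=\zeta(2)=\pi^2/6$, while the right side is $\zeta(-1)=-1/12$.

```python
import math

# check 2 (2 pi)^(s-1) sin(pi s/2) (-s)! zeta(1-s) = zeta(s) at s = -1,
# where everything is classical: Gamma(2) = 1, zeta(2) = pi^2/6, zeta(-1) = -1/12
s = -1
lhs = 2 * (2 * math.pi) ** (s - 1) * math.sin(math.pi * s / 2) \
      * math.gamma(1 - s) * (math.pi ** 2 / 6)
assert abs(lhs - (-1 / 12)) < 1e-12
```

In exact terms: $2\cdot\frac{1}{4\pi^2}\cdot(-1)\cdot 1\cdot\frac{\pi^2}{6}=-\frac{1}{12}$, recovering Euler's value for $\zeta(-1)$.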


Edit 1/24/21: A fairly simple fourth way to derive the functional equation involving Fourier series and the core Poisson summation distribution, yet avoiding the theta function, is given in my answer to this recent MO-Q.

Tom Copeland
  • Riemann introduces the notation $\xi(s)$ for a different but related function. – Tom Copeland May 19 '12 at 15:49
  • The Euler integral for gamma is also quite natural from the perspective of http://mathoverflow.net/questions/79868/what-does-mellin-inversion-really-mean/79925#79925 – Tom Copeland Jul 05 '12 at 09:34
  • link to the paper is broken. see hsm's chat for relevant apologies – VicAche May 15 '15 at 12:37
  • http://www.claymath.org/sites/default/files/ezeta.pdf is what you want to link to – VicAche May 15 '15 at 12:38
  • http://www.maths.tcd.ie/pub/HistMath/People/Riemann/Zeta/ – Tom Copeland May 15 '15 at 12:43
  • See also papers on the local Riemann hypothesis by Bump, by Coffey, by Srednicki and "Local zeta values ..." by Manin (http://arxiv.org/pdf/1407.4969.pdf). – Tom Copeland Oct 09 '16 at 00:18
  • Good historical sketch in "ASPECTS OF ZETA-FUNCTION THEORY IN THE MATHEMATICAL WORKS OF ADOLF HURWITZ" by NICOLA OSWALD, JORN STEUDING https://arxiv.org/abs/1506.00856 – Tom Copeland May 20 '20 at 03:14
  • Enticingly, on pg. 20 of “Analogs of Dirichlet L-functions in chromatic homotopy theory,” Zhang states that a suggested Brown-Comenetz duality for a J-homomorphism is similar to the functional equation of the Riemann ζ-function. https://web.math.rochester.edu/people/faculty/doug/otherpapers/zhang-dirichlet2.pdf – Tom Copeland Jan 06 '21 at 15:45
  • Beside Euler's work on the Bernoulli numbers and power sums (see Nassau's links to Hirzebruch), there is the Abel-Plana summation formula presented by Plana on G.A.A. (1820), "Sur une nouvelle expression analytique des nombres Bernoulliens, propre à exprimer en termes finis la formule générale pour la sommation des suites," which also can be used to characterize zeta functions. Hard to believe Riemann may not have known of all this, especially since at least Newton's time ideas had been disseminated through personal correspondence and society meetings perhaps more than journals. – Tom Copeland Jan 07 '21 at 02:34
  • https://oeis.org/A131758 lists a number of starting points for analytic continuation, including the Bernoulli, Eulerian, Euler, Genocchi, ordered Bell/Fubini, and zag numbers and polylogarithms. (Find "Genocchi" on the page). Also note the relations to the Bose-Einstein and Fermi-Dirac distributions and the Todd operator. – Tom Copeland Jan 08 '21 at 16:36
  • See also the well-known relation to Dirac delta function/operator combs nicely described in "A Correspondence Principle" by Hughes and Ninham https://riviste.fupress.net/index.php/subs/article/view/41 – Tom Copeland Jan 09 '21 at 19:28
  • For yet another method of analytic conitinuation, see https://mathoverflow.net/questions/380142/intuitive-explanation-why-shadow-operator-frac-ded-1-connects-logarithms/380189#380189 – Tom Copeland Jan 10 '21 at 16:52
  • Butzer and Jansche in "A direct approach to the Mellin transform" give a similar approach to Riemann's paper, with brief notes on the history of the direct and inverse Mellin transform. – Tom Copeland Dec 13 '22 at 00:19
18

One way to motivate the analytic (or meromorphic) continuation of the Riemann zeta function $$ \zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}, \quad \mathrm{Re} s > 1$$ is to look at the continuous analogue $$ \frac{1}{s-1} = \int_1^\infty \frac{1}{t^s}\ dt, \quad \mathrm{Re} s > 1$$ which clearly extends meromorphically to the whole complex plane. So one now just has to understand the analyticity properties of the residual $$ \int_1^\infty \frac{1}{t^s}\ dt - \sum_{n=1}^\infty \frac{1}{n^s}, \quad \mathrm{Re} s > 1.$$ For instance, using the Riemann sum type quadrature $$ \int_n^{n+1} \frac{1}{t^s}\ dt = \frac{1}{n^s} + \int_n^{n+1} \frac{1}{t^s} - \frac{1}{n^s}\ dt$$ one can write this residual as $$ \sum_{n=1}^\infty \int_n^{n+1} \frac{1}{t^s} - \frac{1}{n^s}\ dt;$$ since $\frac{1}{t^s} - \frac{1}{n^s} = O_s( \frac{1}{n^{\mathrm{Re} s+1}} )$, it is a routine application of the Fubini and Morera theorems to establish analytic continuation of the residual to the half-plane $\mathrm{Re} s > 0$. Similarly, by using the trapezoidal rule type quadrature $$ \int_n^{n+1} \frac{1}{t^s}\ dt = \frac{1}{2} \frac{1}{n^s} + \frac{1}{2} \frac{1}{(n+1)^s} + \int_n^{n+1} \frac{1}{t^s} - \frac{1}{n^s} - (t-n) (\frac{1}{(n+1)^s} - \frac{1}{n^s})\ dt$$ we can write the residual as $$ -\frac{1}{2} + \sum_{n=1}^\infty \int_n^{n+1} \frac{1}{t^s} - \frac{1}{n^s} - (t-n) (\frac{1}{(n+1)^s} - \frac{1}{n^s})\ dt.$$ From Taylor's theorem with remainder the integrand here is $O_s( \frac{1}{n^{\mathrm{Re} s + 2}} )$, so now we obtain analytic continuation to the strip $\mathrm{Re} s > -1$. One can keep going in this fashion using the Euler-Maclaurin formula, as mentioned in Ustinov's answer, to extend the range of meromorphic continuation to the rest of the complex plane.
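This recipe is short enough to run as written; the sketch below is my transcription of the trapezoidal version (truncation $N$ is an arbitrary choice; the per-interval quadrature errors decay like $n^{-\mathrm{Re}\,s-2}$, so truncation is harmless at the test points used).

```python
import math

def zeta(s, N=200_000):
    """Continuation of zeta to Re(s) > -1, s != 1, via the trapezoidal-rule
    quadrature above: the integral over [1, oo) contributes 1/(s-1), the
    endpoint terms contribute 1/2, and the per-interval quadrature errors
    decay like n^(-Re(s)-2), so their sum converges absolutely."""
    err = sum(((n + 1) ** (1 - s) - n ** (1 - s)) / (1 - s)
              - 0.5 * (n ** (-s) + (n + 1) ** (-s)) for n in range(1, N + 1))
    return 1 / (s - 1) + 0.5 - err

# agrees with the Dirichlet series where that converges ...
assert abs(zeta(2) - math.pi ** 2 / 6) < 1e-12
# ... and now also makes sense inside the critical strip
assert abs(zeta(0.5) - (-1.4603545088095868)) < 1e-8
```

Adding further Euler-Maclaurin correction terms would speed up the decay and push the region of validity further left, exactly as described above.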

Ultimately, meromorphic continuation in this case is a reflection of the natural numbers being so evenly spaced asymptotically that one can estimate sums over the natural numbers with reasonable accuracy in terms of sums over the half-line $[1,+\infty)$, where the error terms can be made as convergent as one wishes. One can also use the Poisson summation formula to compare the sum and integral, which leads into the more traditional proof of meromorphic continuation based on theta functions etc..

Terry Tao
  • How does this answer the OP's question? I.e., "My question is, how does one really motivate the functional equation of the zeta function?" – Tom Copeland Jan 07 '21 at 15:28
  • @TomCopeland in that case I think one can take the following as a motivation to "look for" an analytic continuation. By collecting the odd and even terms one has $(1- 2^{1-s}) \zeta (s) = - \sum_{n = 1}^{\infty} (-1)^n n^{-s}.$ – Aditya Guha Roy Jan 07 '21 at 17:19
  • @AdityaGuhaRoy: Since Newton (integer binomial coefficient to generalized) and Euler (factorial to gamma, integer binomial to beta function, Bernoulii to zeta) at least, it would be SOP to analytically continue (AC) per se from the discrete to the right-half plane and then to the full complex plane. There are a # of ways to AC and express zeta, even a series globally convergent for $s \neq 1$ (see Wiki, e.g.), but few directly motivate the standard form for the functional equation (FE) expressed at the end of my answer. How does your rep or Terry's enable one to intuit the FE? . – Tom Copeland Jan 07 '21 at 18:29
  • (cont.) Your rep amounts to switching from the Bernoulli numbers to the Euler (not Eulerian) numbers, from which one could, I suppose, tease out the FE in a fashion similar to the one I illustrate for the Bernoullis, but as it stands I don't directly see the connection to the FE. – Tom Copeland Jan 07 '21 at 18:30
  • A simple analogy: $f(n) = (-n)^2$ is simply analytically continued to $f(x) = x^2$ for the right-half real line, but this is not a statement of reflection symmetry as is the functional equation $ x^2 = f(x) = f(-x) = (- x)^2$, extending to the full real line. The zeta functional equation amounts to a statement about reflection symmetry through $Real(s) = 1/2$ for the Landau-Riemann Xi function $\xi(s) = \xi(1-s)$ as Riemann essentially shows in his seminal paper. – Tom Copeland Jan 07 '21 at 19:26
  • The OP asks about both the analytic continuation and the functional equation (see the title of the question and also the OP's clarification in his Mar 9 2011 comment). This answer addresses the former. By using Poisson summation instead of Euler-Maclaurin as discussed at the end of my answer one can also discover the functional equation. In my opinion Tate's adelic explanation of the functional equation is the most satisfying, see e.g., my blog post https://terrytao.wordpress.com/2008/07/27/tates-proof-of-the-functional-equation/ – Terry Tao Jan 07 '21 at 20:33
  • Yes, I usually read the comments and other answers and was aware the OP (and perhaps others) was somewhat confused about the distinction between an AC and a functional symmetry equation, and, upon Yuan's prompting, chose to address both in a comment, yet he uses "functional equation" twice in the text, giving context to the necessarily briefer titular question and emphasis on the FE. Evidently, he had not yet clarified in his mind the distinctions among, e.g., $ n!$ and $s!$ and $(s+1)! =(s+1)s!$ and $1/(\pi (-s)!) = \sin(\pi s) (s-1)!$. – Tom Copeland Jan 08 '21 at 02:45
  • As you surely know, there are many ways to AC Euler and Bernoulli's observations and Titchmarsh gives seven proofs of the FE. I was more interested in the FE from the historical and analytic perspective a la Hirzebruch (see Nassau's ref) in terms of relations to the Bernoulli and Euler #s. I understand how number theorists might be more interested in the intimately related prime factorization route and modular theta functions a la Dirichlet, Malmsten, Schlomilch, Eisenstein, and Hurwitz (see my Hurwitz ref). – Tom Copeland Jan 08 '21 at 02:46
  • To me, Poisson, Muntz, and Abel-Plana summations are all manifestations of the action of Dirac delta function/operator combs via convolutions. All fruitful perspectives for a generalist interested in the crossroads, as were Riemann and Hirzebruch. // Thanks for the ref. I'll see if I can absorb some of it. – Tom Copeland Jan 08 '21 at 02:46
  • Informative and interesting, but I think Abhyankar's approach to motivating proofs at a level appropriate for one familiar with undergraduate calculus and complex analysis rather than adeles and ideles is better here, of course. – Tom Copeland Jan 08 '21 at 03:12
  • @TomCopeland I think prof. Tao's answer clearly gives a way to feel the analytic continuation intuitive and natural, because it relates the zeta function to its continuous analog, which is better understood. The equation mentioned above by me can also be taken as a starting point once again, because one side of it, $- \sum_{n \ge 1} (-1)^n n^{-s}$, is well understood. – Aditya Guha Roy Jan 08 '21 at 16:45
  • The algebraic number theory in your blog post seems to be in part a generalization of the symmetry property of tbe Jacobi theta function via that of the Mellin transform I present in the MSE-A I link to in my answer, a property, naturally, the Dirac delta function shares. https://math.stackexchange.com/questions/28737/does-the-functional-equation-f1-r-rfr-have-any-nontrivial-solutions-besi/145159#145159 – Tom Copeland Jan 09 '21 at 01:27
  • Sorry, I don't quite see how to apply the "Poisson summation formula instead of Euler-Maclaurin to compare the sum and integral"... could you elaborate on this please? – D.R. Feb 27 '22 at 05:47
8

When I speak about intuition behind the proof of the functional equation, I am talking about proofs similar to this one: http://www.math.harvard.edu/~elkies/M259.02/zeta1.pdf

As far as I can tell, this idea was originally formulated out of necessity. Riemann needed it for the prime number theorem. However the intuition becomes more natural if one accepts both of the following facts:

1) The Cahen-Mellin integral transforms a Dirichlet series (something hard to work with) into a Fourier series/polynomial (which we usually are more familiar with).

2) Applying this transform to $\zeta(2s)$, we get the Jacobi theta function $\theta(y)$, which has tons of special structure to it.

So in a sense it is the classic scheme of "I can't work with A so I will transform it to B and then transform it back." When you transform the $\zeta(2s)$ you are getting a Fourier series (which by itself has a ton of facts about it) but it also has a really useful functional equation to it.

KConrad
Daniel Parry
5

Here is one motivation. By elementary algebraic manipulation, we have

$$1 - 2 + 3 - 4 + \cdots = \frac{1}{4};$$

see, e.g., the Wikipedia article on $1 - 2 + 3 - 4 + \cdots$.

But (see here) we have

$$(1 - 2 \cdot 2)(1 + 2 + 3 + 4 + \cdots) = 1 - 2 + 3 - 4 + \cdots,$$

and therefore

$$1 + 2 + 3 + 4 + \cdots = - \frac{1}{12}.$$

Of course, to do all of this, one has to ignore all those rules that pesky analysis professors tell you about. But the last identity is just so cool that one feels compelled to try to prove it rigorously.
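One rigorous reading of these manipulations (my gloss, not part of the answer) is Abel summation: for $|x|<1$ the series $\sum_{n\ge0}(-1)^n(n+1)x^n$ converges to $1/(1+x)^2$, which tends to $1/4$ as $x\to1^-$. A quick numerical sketch:

```python
def abel_partial(x, N=100_000):
    """Partial sum of sum_{n>=0} (-1)^n (n+1) x^n, which equals 1/(1+x)^2
    for |x| < 1; Abel summation sends x -> 1- to assign the value 1/4."""
    return sum((-1) ** n * (n + 1) * x ** n for n in range(N))

# the truncated sums match the closed form, and creep toward 1/4 as x -> 1-
for x in (0.9, 0.99, 0.999):
    assert abs(abel_partial(x) - 1 / (1 + x) ** 2) < 1e-9
assert abs(abel_partial(0.999) - 0.25) < 1e-3
```

The divergent sum $1+2+3+\cdots$ is not Abel summable, which is one reason its value $-1/12$ requires the heavier machinery of analytic continuation.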

Frank Thorne
  • I feel compelled to emphasize the following. Just like conditionally convergent series are not commutative (for any $x\in\mathbb R$, there exists a rearrangement of $1-\frac12+\frac13-\frac14+\cdots$ with sum $x$), divergent series are not even associative. The most basic example is $0=0+0+\dots=(1-1)+(1-1)+\dots=1+(-1+1)+(-1+1)+\dots=1+0+0+\dots=1$. But (continued) – Theo Johnson-Freyd Mar 10 '11 at 01:58
  • (continuation) a deeper example is closely related to your $1-2+3-4+\dots=\frac14$. Namely, the same argument gives $s=1-1+1-1+\dots=\frac12$ --- $s+s(\text{shifted})=1$, so $s=\frac12$. But $t=1-1+0+1-1+0+1-1+0+\dots$ satisfies $t+t(\text{shifted})+t(\text{shifted twice}) = 1$, so $t=\frac13$. This also illustrates that "associativity" is actually a continuous property. $a+(b+c)$ is the addition where $b$ and $c$ are infinitely close together compared to their distance to $a$, but there are other additions like $a\quad +\; b\;+\;c$, or $a\;+\;b\quad +\quad c$. – Theo Johnson-Freyd Mar 10 '11 at 02:00
  • 1
    Anyway, I bring this up to emphasize that one must be very careful with "elementary algebraic manipulations" like whichever argument you use to conclude that $\sum (-1)^n(n+1) = \frac14$. – Theo Johnson-Freyd Mar 10 '11 at 02:01
  • 1
    I wonder if it's possible to prove that $1 + 2 + 3 + \cdots$ is also equal to $-1/14$, or $\pi$, or $e$, or zero, or $\sqrt{-163}$, or ... – Frank Thorne Mar 10 '11 at 21:20
  • 1
    Hmm, the question as well as this answer is pretty old. It's worth noting here anyway that the question asks for the motivation for considering the "functional equation", which concerns the relation of zeta at negative and positive arguments, not for the motivation to consider the relation between the alternating and the non-alternating series. – Gottfried Helms Jan 08 '16 at 11:21
  • I like your use of the words 'elementary' and 'rigorously' ;) – shane.orourke Jan 06 '21 at 07:49
3

The Euler–Maclaurin formula gives the analytic continuation of the Riemann zeta function to the half-plane ${\rm Re}\, s>-n$ for any $n$. It also evaluates $\zeta(s)$ at the negative integers $s<0$, where Bernoulli numbers appear, just as they do at the positive even integers. This is a reason to look for a symmetry.
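Concretely, applying Euler–Maclaurin to $f(x) = x^{-s}$ gives, for any even truncation depth $2m$,
$$\zeta(s) = \sum_{n=1}^{N-1} n^{-s} + \frac{N^{1-s}}{s-1} + \frac{N^{-s}}{2} + \sum_{k=1}^{m} \frac{B_{2k}}{(2k)!}\, s(s+1)\cdots(s+2k-2)\, N^{-s-2k+1} + \text{error},$$
where the error is small for $\Re(s) > 1 - 2m$. A numerical sketch (my own code, not from the answer; the defaults $N=10$, $m=4$ are ad hoc):

```python
from fractions import Fraction
from math import factorial

# Bernoulli numbers B_2, B_4, B_6, B_8
B = [Fraction(1, 6), Fraction(-1, 30), Fraction(1, 42), Fraction(-1, 30)]

def zeta_em(s, N=10, m=4):
    """Euler-Maclaurin approximation of zeta(s), valid for Re(s) > 1 - 2m."""
    total = sum(n ** (-s) for n in range(1, N))
    total += N ** (1 - s) / (s - 1) + N ** (-s) / 2
    for k in range(1, m + 1):
        poch = 1.0
        for j in range(2 * k - 1):           # s (s+1) ... (s+2k-2)
            poch *= s + j
        total += float(B[k - 1]) / factorial(2 * k) * poch * N ** (-s - 2 * k + 1)
    return total

print(zeta_em(2))    # close to pi^2/6
print(zeta_em(-1))   # close to -1/12
print(zeta_em(-3))   # close to 1/120
```

At the negative integers the correction series terminates (a Pochhammer factor vanishes), and the values $-B_{2k}$-flavored rationals such as $-\tfrac1{12}$ and $\tfrac1{120}$ drop out exactly; comparing them with $\zeta(2k) = \frac{(2\pi)^{2k}|B_{2k}|}{2(2k)!}$ is what suggests a symmetry between $s$ and $1-s$.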

2

The analytic continuation of the Riemann zeta function doesn't depend on knowing the functional equation at all. It can be done, and motivated, much more simply.

Consider the Hurwitz zeta function, $\zeta(s, q) = \sum_{n = 0}^{\infty} (q + n)^{-s}$. The Riemann zeta function is of course the $q = 1$ case of this.

Actually, it will be slightly more convenient to work with the related functions $F_s(q) = -s \sum_{n = 0}^{\infty} (q + n)^{-s - 1} = -s\zeta(s + 1, q) = \frac{d}{dq} \zeta(s, q)$. Clearly, once we know all the functions $F_s$, we can recover $\zeta(s, q)$ as $-F_{s - 1}(q)/(s - 1)$.

The series defining $F_s$ converges for $\Re(s) > 0$, so there is no difficulty interpreting things in that regime. How do we extend to further $s$?

Note firstly that $\frac{d}{dq} F_s(q) = -s F_{s + 1}(q)$. This determines $F_s$ up to an additive constant once we already know $F_{s + 1}$.

But how do we determine the additive constant? Well, since $F_s = \frac{d}{dq} \zeta(s, q)$, the average value of $F_s$ on any unit interval from $q$ to $q + 1$ is to be $\zeta(s, q + 1) - \zeta(s, q) = -q^{-s}$. Thus, we simply choose the unique additive constant that makes this true.

In this way, we can determine each $F_s$ from $F_{s + 1}$. Iterating downward from the $\Re(s) > 0$ regime, we find that $F_s$ has been defined for all $s$. And thus so has the Hurwitz zeta function $\zeta(s, q)$, and thus so has the Riemann zeta function $\zeta(s)$.

It's that simple.
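To make the recursion concrete, here is a small numerical sketch (my own code; all names, truncations, and tolerances are ad hoc choices, not part of the answer). It performs one step down: building $F_{-1/2}$ from the convergent series for $F_{1/2}$, fixing the additive constant by the averaging condition, and recovering $\zeta(1/2) \approx -1.46035$.

```python
def F(s, q, N=1000):
    """F_s(q) = -s * sum_{n>=0} (q+n)^(-s-1), convergent for Re(s) > 0.
    The tail past N is estimated by integral + half-term (Euler-Maclaurin)."""
    head = sum((q + n) ** (-s - 1) for n in range(N))
    tail = (q + N) ** (-s) / s + (q + N) ** (-s - 1) / 2
    return -s * (head + tail)

def simpson(f, a, b, n=40):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    total = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                              for i in range(1, n))
    return total * h / 3

s = -0.5  # we construct F_s from the convergent F_{s+1}

# Since d/dq F_s = -s F_{s+1}:  F_s(q) = F_s(1) - s * Int_1^q F_{s+1}(t) dt.
inner = lambda q: simpson(lambda t: F(s + 1, t), 1.0, q)

# The additive constant is fixed by requiring the average of F_s over [1, 2]
# to be -1^{-s} = -1:   Int_1^2 F_s = F_s(1) - s * Int_1^2 inner(q) dq = -1.
F_s_at_1 = -1.0 + s * simpson(inner, 1.0, 2.0)

# Finally, zeta(sigma) = -F_{sigma-1}(1) / (sigma - 1) with sigma = 1/2:
zeta_half = -F_s_at_1 / (0.5 - 1)
print(zeta_half)  # close to zeta(1/2) = -1.4603545...
```

Each further step down (to $F_{-3/2}$, $F_{-5/2}$, ...) just repeats the same integrate-and-fix-the-constant move.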

Note also that essentially the same idea as used here can also give us the gamma function (indeed, the second derivative of the logarithm of the gamma function is $\zeta(2, q)$).

More generally, any time we have a condition "the average value of $F$ over the range from $q$ to $q + 1$ is $-f(q)$", there is at most one solution with the property that a sufficiently high derivative of it vanishes in the limit as its input is translated by addition of a large natural number. (Technically, what I mean by an $N$-th order derivative vanishing asymptotically is that its integral over any fixed-size $N$-dimensional box vanishes asymptotically as the box is translated; i.e., what I am saying here is actually a statement about finite differences and not infinitesimal rates of change per se.)

This solution exists just in case applying sufficiently many differentiations (technically, finite differences) to $f$ makes the series $\sum_{n = 0}^{\infty} f^{(m)}(q + n)$ converge, in which case the solution for $F$ is given by taking this latter function of $q$ and re-integrating back up $m - 1$ times, with a suitable choice of additive constant at each stage.