8

At least according to the answer to this question, $\zeta(1) = \gamma$ (once regularized, of course).

Let me rephrase that by stating that:

$$ \sigma(\zeta(1)) = \gamma $$ Here, $\sigma(x)$ is the 'summation function': it assigns a value to any series $x$ using Borel, Abel, Ramanujan, Euler, Cesàro, or any other summation method that works, i.e. one that makes the divergent series summable. The $\sigma$-function 'chooses' whichever summation method suits $x$ best, so as to assign a finite constant to it. We assume that different summation methods never assign different values to the same $x$ (here I call upon this question).
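
To make this concrete, here is a minimal numerical sketch in Python (an added illustration, not part of the original question): two of the methods named above, Cesàro and Abel, agree on Grandi's series $1 - 1 + 1 - \cdots$.

```python
# Grandi's series 1 - 1 + 1 - ... under two classical summation methods.

# Cesaro: average the first N partial sums; the mean tends to 1/2.
N = 100_000
partial, total = 0, 0
for n in range(N):
    partial += (-1) ** n
    total += partial
print(total / N)  # 0.5

# Abel: sum (-1)^n x^n = 1/(1+x) for |x| < 1, then let x -> 1 from below.
for x in (0.9, 0.99, 0.999):
    print(sum((-x) ** n for n in range(100_000)))  # approaches 0.5
```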

Furthermore, we write $C$ for a convergent series and $D$ for a divergent one.

What would $\sigma(C + D)$ be? Is it $\sigma(C) + \sigma(D)$? And what would, for example, $\sigma(\zeta(1)^3 + \zeta(2))$ be?

So, to summarize my question: Could you please explain the properties of the $\sigma$-function to me, in relation to $C$ and $D$?

Thanks a lot in advance.

P.S. A bonus question: What do you think of the 'summation function'? Is it useful, or just mathematical bogus? Or has it already been defined (even more) properly?

Max Muller
  • 4,505
  • 1
    Have you read http://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/ ? – Qiaochu Yuan Jun 11 '10 at 20:40
  • No, I haven't, but it looks very interesting (and relevant)! Thanks, mister Yuan. – Max Muller Jun 11 '10 at 20:55
  • 6
    You should read the first chapter of Hardy's book "Divergent Series". I think you're going to find that defining what it means for $\sigma$ to "choose a summation method that fits $x$ best" will be very slippery... – David Hansen Jun 11 '10 at 21:22
  • Hm I guess so, mister Hansen, I still have a lot to learn ;). – Max Muller Jun 12 '10 at 15:18

4 Answers

7

Making sense of "picks a summation method that works" is very difficult, because for many series there are different reasonable choices. A standard method of summing bad series is "zeta-function regularization" --- for example, the method is popular in physics, because S. Hawking uses it to compute QFT on curved backgrounds. In its easiest form, let $\sum a_n$ be the series you want to sum: then you can consider the function $\zeta_a(s) = \sum a_n^{-s}$. When the sequence $a_n$ is positive and grows at least as fast as $n^\epsilon$ for some $\epsilon>0$, then $\zeta_a$ will converge in the far-right part of the complex plane. Now you can hope that it has a single-valued analytic continuation to $s = -1$.
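
A minimal numerical sketch of this recipe, assuming Python with the mpmath library: for $a_n = n$, $\zeta_a$ is Riemann's zeta function, convergent for $\operatorname{Re}(s) > 1$, and its analytic continuation to $s = -1$ yields the regularized value $-1/12$ for $1+2+3+\cdots$.

```python
from mpmath import mp, inf, nsum, zeta

mp.dps = 15

# For a_n = n, zeta_a(s) = sum n^{-s} is the Riemann zeta function;
# the series converges where Re(s) > 1 ...
print(nsum(lambda n: n ** -2, [1, inf]))  # 1.64493406684823 = zeta(2)

# ... and the analytic continuation to s = -1 assigns a value to
# the divergent series 1 + 2 + 3 + ...
print(zeta(-1))  # -0.0833333333333333 = -1/12
```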

However, this summation method will not satisfy the linearity that you want. One example: you can look up values for zeta functions of the form $\sum (an+b)^{-s}$ and see directly the failure of additivity.
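
For a concrete instance of this failure (again a sketch with mpmath, taking $a = b = 1$): linearity would predict that the regularized $\sum_{n\ge 1}(n+1)$ equals $\zeta(-1)+\zeta(0) = -\frac{7}{12}$, yet zeta regularization applied to $\sum (n+1)^{-s}$ directly gives the Hurwitz value $\zeta(-1,2) = \zeta(-1)-1 = -\frac{13}{12}$.

```python
from mpmath import zeta

# Additivity would predict: reg sum (n+1) = reg sum n + reg sum 1.
print(zeta(-1) + zeta(0))  # -0.583333333333333 = -7/12

# Zeta regularization of sum (n+1)^{-s} is the Hurwitz zeta zeta(s, 2)
# continued to s = -1, and it disagrees:
print(zeta(-1, 2))  # -1.08333333333333 = -13/12
```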


More generally, you should look at Hardy, Divergent Series. Among other statements in there are some no-go theorems, of the form: there is no function $\{\text{series}\} \to \{\text{numbers}\}$ that agrees with Cauchy convergence on convergent series and satisfies certain other natural requirements. (Unfortunately, I don't have the book with me, and I don't remember the exact versions of such a theorem.)

  • Ok, thanks a lot, mister Johnson-Freyd! You're not the first to recommend Hardy's book on divergent series to me. I'm not sure I'm able to understand it yet, though... (I'm still a high-school student). I was also wondering what would happen if we plug some converging series into the 'summation function'. Would the value of these series as an argument of this function be different from their actual evaluation? – Max Muller Jun 12 '10 at 21:08
  • I generally think that a "summation function" should be required to agree with Cauchy's summation function (classical convergence). But for something like zeta-function regularization, it does not: indeed, if $\sum a_n$ converges, then in particular $a_n \to 0$, and so $a_n^{-s} \to \infty$ for large $s$, and in particular $\sum a_n^{-s}$ diverges for large $s$. You can compare zeta-function summation with Abel summation in the regime that $n^\epsilon < a_n < n^\delta$ eventually, and in general they do not agree. Abel summation is an industry standard, and agrees with Cauchy. – Theo Johnson-Freyd Jun 14 '10 at 17:36
  • More on comparing these different methods: Another form of "zeta regularization" is for infinite products $\prod a_n$. The idea is that, formally, $\prod a_n = \exp\bigl(-\frac{d}{ds}\zeta_a(s)\bigr|_{s=0}\bigr)$, and so if $\zeta_a$ is regular near $s=0$, you can define the product. (It does not agree with zeta-regularized $\exp( \sum \log a_n)$.) If $a_n \sim n^\epsilon$ for $\epsilon > 1$, then $\prod a_n^{-1}$ converges in the Cauchy sense, and I don't remember if in fact $\prod a_n^{-1} = \bigl(\prod a_n\bigr)^{-1}$ where the RHS is zeta-regularized. I think not? – Theo Johnson-Freyd Jun 14 '10 at 17:41
  • Wait, actually, I'm talking crap. You still can't compare Abel and Zeta a priori. The rules for Abel summation are that $\sum a_n x^n$ should converge for $|x|<1$, and then take the limit as $x\to 1$. This is necessarily infinite if $a_n \to \infty$ are all positive. What you can try is an "Eulerian" summation (he had many) where you ask that $\sum a_n x^n$ converge for $|x|$ small, and ask that it have an analytic continuation to $x = 1$. Alternately you can try to get a grip on signed zeta-regularized sums. This is required for various "index" theorems. – Theo Johnson-Freyd Jun 14 '10 at 17:48
  • Mister Johnson-Freyd, thanks a lot. I realize now that I still have to learn a lot in order to produce any valuable research on this. Although I understand most of what you've written (and I very much respect the fact that you took the time to think about and answer my question 2 days after answering the original question), I want to understand everything in detail. After looking at the first couple of pages of Hardy's book on Divergent Series, I realized that I don't possess the required prerequisite knowledge to understand it. Most of my knowledge on 'higher' mathematics I (see new comment) – Max Muller Jun 14 '10 at 18:52
  • collected from scattered resources across the web. I bought the book 'Introductory Mathematics: Algebra and Analysis' by Smith to get acquainted with mathematics a bit better. What, in your opinion, is the best way to get to understand 'Divergent Series' and the notebooks of Ramanujan (who devised a very interesting summation method as well, summing $\zeta(1)=\gamma$)? Which set of books should I read, and in what order? I know there are some questions on this, but what do you think? A (single-variable) calculus book to start with? And then what? – Max Muller Jun 14 '10 at 19:04
  • I think I'll post this as a question to the whole community... – Max Muller Jun 14 '10 at 19:20
  • While I'm at it, I should mention that "Eulerian" summations (anything that involves analytic continuation deserves to be called "Eulerian", e.g. zeta regularization) are themselves problematic. You probably know that analytic "functions" generally are not single-valued; good examples are $\log$ and $\sqrt{}$, which are $\infty$- and $2$-valued, respectively. But there are other things. Euler, according to (my memory of) Hardy's book, tried to sum $\sum n!$, and considered the "function" $f(x)=\sum n!x^{-n}$. This function satisfies $f(\infty)=1$ and $f'(x)=x^{-1}f(x)-f(x)+1$ (cont). – Theo Johnson-Freyd Jun 14 '10 at 22:23
  • Notice that $f$ is necessarily not analytic at $\infty$, which is why I've put that point at $\infty$ rather than at $0$. Anyway, Euler succeeded in solving the differential equation with "initial" value $f(\infty) = 1$ in the far-positive regime and in the far-negative regime (I didn't say if $\infty$ was positive or negative). In each case, the function analytically continues to $x = 1$. But the values at $x = 1$ are different. The take-away is that sums like $\sum n!$ just grow too damn fast to have a single value: I had to put "function" in quotes because $\sum n!x^{-n}$ converges nowhere. – Theo Johnson-Freyd Jun 14 '10 at 22:27
  • For comparison, I hope you do/have done the following problem in a calculus class. Say you want to compute $\sum 1/n!$. Then consider $g(x) = \sum x^n/n!$. By easy estimates (compare with any geometric series, for example), this converges for any $x \in \mathbb C$. It satisfies $g'(x) = g(x)$ and $g(0) = 1$. By uniqueness of solutions to IVPs, we must have $g(x) = e^x$, and so in particular $g(1) = e$. (Then the next step is to prove that $e$ is irrational, because the sum converges too fast — see Proofs from THE BOOK by Aigner and Ziegler.) – Theo Johnson-Freyd Jun 14 '10 at 22:32
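
A quick self-contained check of that exercise, in plain Python for readers following along: the partial sums of $\sum 1/n!$ settle on $e = g(1)$ after a handful of terms.

```python
import math

# g(x) = sum x^n / n! solves g'(x) = g(x) with g(0) = 1, so g(1) = e.
total = 0.0
for n in range(20):
    total += 1.0 / math.factorial(n)
print(total)   # 2.718281828459045
print(math.e)  # 2.718281828459045
```
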
5

In fact you could have asked for more. Let $AC$ be the set of absolutely convergent series, and let $S:AC\rightarrow \mathbb{C}$ be the $\mathbb{C}$-algebra homomorphism that associates to a convergent series its sum.

Then we may ask for an extension $\sigma$ of $S$, defined on some subalgebra $D_1$ of the set $D$ of all series, that satisfies the following rules:

  • regularity: if $s\in D_1$ is converging, then $\sigma(s)=S(s)$,

  • invariance by translation: $\sigma(\sum_0^\infty a_n)=a_0+\sigma(\sum_1^\infty a_n)$,

  • linearity: $\sigma$ is $\mathbb{C}$-linear,

  • product: $\sigma$ is a homomorphism for multiplication.

Abelian summation methods satisfy these four rules. These methods associate to a divergent sum $\sum a_n$ a function, say $\sum a_n x^n$, and try to take its value at $x=1$ by some process related to analytic continuation. If you can read French, the first chapter of the book "Séries divergentes et théories asymptotiques", by J.-P. Ramis, is a nice introduction to these questions. The author surveys resummation methods for divergent series, from Leibniz to Écalle.
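
For instance, regularity, invariance by translation, and linearity already pin down Grandi's series (a standard computation): writing $s = \sum_0^\infty (-1)^n$, invariance by translation gives $\sigma(s) = 1 + \sigma(\sum_1^\infty (-1)^n)$, while linearity gives $\sigma(\sum_1^\infty (-1)^n) = -\sigma(s)$, hence

$$\sigma(s) = 1 - \sigma(s), \qquad \sigma(s) = \frac12,$$

in agreement with the Abelian value $\lim_{x\to 1^-} \sum_0^\infty (-1)^n x^n = \lim_{x\to 1^-} \frac{1}{1+x} = \frac12$.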

coudy
  • 18,537
  • 5
  • 74
  • 134
  • coudy, do you have some good references where this type of multiplicative functional has been studied? Thanks! – M.G. Jun 11 '10 at 23:02
3

Define $\tau(C+D):=\sigma(C)+\sigma(D)$. Then $\tau$ is a summation method for $C+D$, and by your assumption of uniqueness it follows that $\sigma(C+D)=\tau(C+D)=\sigma(C)+\sigma(D)$.
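
As a numerical sanity check in one concrete case (a Python sketch; `abel_partial` is an ad-hoc helper, not a standard API): take $C = \sum_{n\ge0} 2^{-n}$, which converges to $2$, and $D$ Grandi's series, with Abel sum $\frac12$. Abel evaluation of the termwise sum approaches $2 + \frac12$.

```python
# Abel-evaluate sum coeff(n) * x^n by truncation (ad-hoc helper).
def abel_partial(coeff, x, terms=50_000):
    return sum(coeff(n) * x ** n for n in range(terms))

# C: sum 2^-n = 2 (convergent).  D: Grandi's series, Abel sum 1/2.
def combined(n):
    return 2.0 ** -n + (-1) ** n

for x in (0.9, 0.99, 0.999):
    print(abel_partial(combined, x))  # approaches 2.5 as x -> 1
```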

M.G.
  • 6,683
  • Ok.. Thank you. Would this also imply that $\sigma(\zeta(1)^3)=\gamma^3$? – Max Muller Jun 12 '10 at 14:49
  • My impression is that such summation methods don't need to be consistent wrt. multiplication, i.e. that $\sigma(AB)=\sigma(A)\sigma(B)$ is not always valid. I am not even sure if they are consistent wrt. addition... In fact, the Laurent series of the Riemann zeta function at $s=1$ has first coefficients $c_{-1}=1, c_0=\gamma_0=\gamma, c_1=-\gamma_1$, thus the constant term of $\zeta(s)^2$ is $\gamma^2-2\gamma_1\neq\gamma^2$. – M.G. Jun 12 '10 at 16:05
  • It is in itself an interesting question whether regularization/summation methods can be made throughout consistent wrt. basic arithmetic operations, or even more generally wrt. continuous functions in several variables (with several divergent series to plug in). – M.G. Jun 12 '10 at 16:05
  • Yes, I think that's a very interesting question, and a good answer would have some research applications, I believe! By the way, is $\gamma_1=\gamma$? So the constant term of $\zeta(s)^2$ would be $\gamma^2-2\gamma$? We could extend this question by asking what the constant term of $\zeta(s)^n$ would be... – Max Muller Jun 12 '10 at 16:57
  • No, $\gamma_1\neq\gamma_0=\gamma$, see Stieltjes constants -> http://en.wikipedia.org/wiki/Stieltjes_constants – M.G. Jun 12 '10 at 17:44
0

I am currently working on a theory that assigns to divergent sums values from a set of "extended" numbers. Each extended number has a transfinite part and a standard (or regular) part. The standard part corresponds to the regularized value of the series or integral.

In this theory the regular part function is linear: for extended numbers $w$, $w_1$ and $w_2$ and a regular number $a$ the following holds:

$$\operatorname{reg} (w_1+w_2)=\operatorname{reg}w_1+\operatorname{reg}w_2$$

and

$$\operatorname{reg} (a w)=a\operatorname{reg}w$$

On the other hand, the regular part of a product of two transfinite numbers is usually not the product of their regular parts.

For instance,

$$\operatorname{reg} \int_0^\infty 1\, dx=0$$

but

$$\operatorname{reg} \left(\int_0^\infty 1\, dx\right)^2=-\frac1{12}$$

$$\operatorname{reg} \sum_{k=1}^\infty 1=-\frac12$$

but

$$\operatorname{reg} \left(\sum_{k=1}^\infty 1\right)^2=\frac16$$

Anixx
  • 9,302