Given an operator equation like $$i\frac{d}{dt}U(t,t_{0})=V_I(t)\,U(t,t_{0})\tag{1}$$
The Dyson series solution is $$\begin{aligned}U(t,t_{0})={}&1-i\int_{t_{0}}^{t}dt_{1}\,V_I(t_{1})+(-i)^{2}\int_{t_{0}}^{t}dt_{1}\int_{t_{0}}^{t_{1}}dt_{2}\,V_I(t_{1})V_I(t_{2})+\cdots\\&+(-i)^{n}\int_{t_{0}}^{t}dt_{1}\int_{t_{0}}^{t_{1}}dt_{2}\cdots\int_{t_{0}}^{t_{n-1}}dt_{n}\,V_I(t_{1})V_I(t_{2})\cdots V_I(t_{n})+\cdots.\end{aligned}\tag{2}$$
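(As I understand it, $(2)$ is obtained by first integrating $(1)$ into the equivalent integral equation $$U(t,t_{0})=1-i\int_{t_{0}}^{t}dt_{1}\,V_I(t_{1})\,U(t_{1},t_{0}),$$ and then substituting this equation into itself repeatedly.)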
Is this solution convergent? I have always been confused about this when learning QFT.
We know that we can use Feynman diagrams (the Dyson series) to calculate every order perturbatively, and even though each order becomes finite after renormalization, the series of these finite results is still a divergent asymptotic series. See also: the radius of convergence is $0$ for QED.
The most famous example is $0+0$-dimensional $\phi^4$ theory, where the partition function is $$Z(g)=\int_{-\infty}^{\infty}\frac{dx}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}x^2-gx^4\right)=\sum_{n}g^{n}I_{n}$$ with $$g^{n}I_{n}=\frac{(-g)^{n}}{n!}\int_{-\infty}^{\infty}\frac{dx}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}x^{4n}=\frac{(-g)^{n}}{n!}(4n-1)!!\sim\frac{1}{\sqrt{\pi n}}\left(-\frac{16gn}{e}\right)^{n}.$$ We see that no matter how small $g$ is, the sum diverges. But we know the original integral must be convergent: $$Z(g)=\int_{-\infty}^{\infty}\frac{dx}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}x^2-gx^4\right)=\frac{1}{4\sqrt{\pi g}}\,e^{\frac{1}{32g}}K_{1/4}\!\left(\frac{1}{32g}\right).$$ In this explicit case the divergence is easy to explain: we cannot exchange the order of integration and the infinite summation. Is this the root of the divergence of the Dyson series in general?
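Here is a quick numerical sketch I put together of this behaviour (the value $g=0.01$ and the term recursion $t_n=t_{n-1}\,(-g)(4n-1)(4n-3)/n$ are just my choices for illustration):

```python
# A minimal sketch: partial sums of the asymptotic series vs. the exact Z(g).
# Terms t_n = (-g)^n (4n-1)!!/n! are generated via t_n = t_{n-1} * (-g)(4n-1)(4n-3)/n.
import numpy as np
from scipy.integrate import quad

g = 0.01

# "Exact" Z(g) from direct numerical integration of the original integral
Z_exact, _ = quad(lambda x: np.exp(-0.5 * x**2 - g * x**4) / np.sqrt(2.0 * np.pi),
                  -np.inf, np.inf)
print(f"exact  Z({g}) = {Z_exact:.8f}")

term, partial = 1.0, 1.0   # n = 0 term: g^0 I_0 = 1
for n in range(1, 31):
    term *= -g * (4 * n - 1) * (4 * n - 3) / n   # ratio t_n / t_{n-1}
    partial += term
    print(f"n = {n:2d}   partial sum = {partial: .8e}")
```

For $g=0.01$ the partial sums approach the exact value until roughly $n\approx 7$ and then grow without bound, no matter how many more terms are added, which is exactly the asymptotic-series behaviour described above.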
From its definition via $(1)$, $U(t,t_{0})$ must be a finite quantity. Why, then, do we in general get a divergent series when we sandwich $U(t,t_{0})$ between states? The derivation of the Dyson series $(2)$ from $(1)$ seems exact, so where is the loophole, i.e. where are we being cheated in the derivation?
Certainly there are cases where we do get a finite result from $(2)$, e.g. $H_0=\frac{p^2}{2m}+\frac{1}{2}m\omega^2x^2$ with a linear external potential $V=-ex$ in quantum mechanics. So it seems there should exist some convergence criteria for $(2)$ which are not stated in textbooks.
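(My guess at why this particular case is fine: $V_I(t)=-e\,x_I(t)$ is linear in $x_I$, so the commutator $$[V_I(t_1),V_I(t_2)]=e^{2}\,[x_I(t_1),x_I(t_2)]=\frac{ie^{2}}{m\omega}\sin\omega(t_2-t_1)$$ is a c-number (with $\hbar=1$), the Magnus expansion for $U$ terminates, and the matrix elements of $U(t,t_{0})$ are entire functions of $e$. But I don't see how to turn this into a general criterion.)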
Another related question:
We know that for any $N\times N$ matrix $A$, the series $$\sum_{n=0}^\infty \frac{A^n}{n!}$$ converges in every component. So for any $N\times N$ matrix $A$, $$e^A=\sum_{n=0}^\infty \frac{A^n}{n!}$$ is always well-defined.
In principle $U(t,t_{0})$ can also be written formally as a time-ordered exponential: $$U(t,t_{0})=T\exp\left(-i\int_{t_{0}}^{t}dt_{1}\,V_I(t_{1})\right).$$ What about the convergence of $e^A=\sum_{n=0}^\infty \frac{A^n}{n!}$ when $A$ is an infinite-dimensional matrix, i.e. an operator? I never learned functional analysis, so I don't know the answer to this question.
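(The only partial statement I could find: if $A$ is a bounded operator on a Hilbert space, the series converges in operator norm, since $$\Bigl\|\sum_{n=0}^{\infty}\frac{A^{n}}{n!}\Bigr\|\le\sum_{n=0}^{\infty}\frac{\|A\|^{n}}{n!}=e^{\|A\|},$$ but the interaction Hamiltonians of QFT are typically unbounded, so I'm not sure this helps.)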