
Can a regularisation of the determinant be used to find the eigenvalues of the Hamiltonian in the normal infinite-dimensional setting of QM?

Edit: I failed to make myself clear. In finite dimensions, there is a function of $\lambda$ whose roots are the eigenvalues (or inverses of the eigenvalues) of a given operator $U$, namely, the characteristic polynomial $\det (I-\lambda U)$. Is there some way of regularising this determinant to do the same thing in infinite dimensions? In general? Or at least for unitary operators which describe the time evolution of a quantum mechanical system?

Link to a related question: What does a unitary transformation mean in the context of an evolution equation?

EDIT: Perhaps the question still is not clear. The question was, and still is: if you regularise $\det(I-\lambda U)$ as a complex-valued function of $\lambda$, for $U$ a unitary operator, will its zeroes be the values of $\lambda$ for which $I-\lambda U$ fails to be invertible, i.e., has a non-zero kernel? Or does regularising the determinant lose touch with that property of the finite-dimensional determinant?

  • Can you please clarify the question? Do you mean, can you discretize the Schrodinger eigenvalue problem to a finite-dimensional system, and then find the eigenvalues $E$ of the finite-dimensional matrix $M$ using $\det(M-EI)=0$, and recover the correct levels $E$ in the limit that the dimension of the system goes to infinity? – Ron Maimon Jan 03 '12 at 08:37
  • That would be the best possible answer, but I could settle for less. I'll try to rephrase the question, too. – joseph f. johnson Jan 03 '12 at 08:50
  • Then your question is not about the determinant at all, but about regulating the Schrodinger operators computationally. This is very easy to do, it is normal in computational studies, but I am not sure how much is rigorously known about the continuum limit. – Ron Maimon Jan 03 '12 at 12:35
  • I don't know why you say "regularizing". I gave you the "regularized" form below. I think you mean something different: "regularization" means putting the theory on a lattice, or breaking the continuum limit some other way, like doing an infinite sum with additional factors. You are asking whether the determinant converges after appropriate rescaling in the large-$L$, small-$\epsilon$ limit to a unique function of $\lambda$, and it might, for the appropriate class of potentials. Zeta-function regularization is often used. Convergence isn't necessary for PI. – Ron Maimon Jan 04 '12 at 18:12
  • what does « PI » stand for ? – joseph f. johnson Jan 04 '12 at 21:56
  • I don't know whether this makes a difference, except in grammar, but I did not say « regularising », I said « regularisation », and that is because that is the normal word for altering the definition of a divergent function (or integral) to make it convergent yet still agree with the original if the original were convergent, e.g. zeta function regularisation of the determinant. Lattice methods are one method of regularisation but not the only one. AFAIK, the zeta function regularisation doesn't work for the characteristic polynomial, and even if it did, it wouldn't have the eigenvalues as roots. – joseph f. johnson Jan 04 '12 at 22:12
  • PI="Path Integral", which is where you get such determinants naturally. In the path integral, you take a log which gets rid of the overall multiplicative constant, and if you then differentiate with respect to $\lambda$, you get the Green's function. Then you can use zeta function regularization, but the lattice regularization is fine too. You don't need to know the eigenvalues to write down the lattice-regulated operator, and the limiting procedure for the small energy eigenvalues should work, although it is annoying to find the class of potentials which work. – Ron Maimon Jan 05 '12 at 02:56

3 Answers

1

The continuum eigenvalues and eigenvectors of the Schrodinger operator are the limiting low-lying eigenvalues and eigenvectors of the discrete lattice approximations. Given a Schrodinger operator

$$ H= \sum_i A_i \partial_i^2 + V(x_1,\ldots,x_n) $$

where $V$ is of the appropriate class (smooth is too restrictive--- you can have delta functions too, and random potentials, but I don't know the best possible function class--- it might be any integrable potential, i.e., any potential at all. EDIT: of course it can't, as the $-1/r^n$ energy levels run away to become localized on top of the attractive spot. The correct condition on the potential is involved, but you can take it to be continuous for this discussion), you replace the $x$'s by a square lattice of spacing $\epsilon$ and of total size $L$ in each direction with periodic boundaries, and replace the $\partial_i^2$ by the lattice second difference

$$ (H_L \psi)(x) = \sum_i {A_i\over \epsilon^2} \left( \psi(x + \epsilon \hat e_i) - 2\psi(x) + \psi(x - \epsilon \hat e_i) \right) + V_L(x)\, \psi(x) $$

where $\hat e_i$ is the unit lattice vector in the $i$-th direction, $V_L(x)$ is the average of the continuum $V(x)$ over the $\epsilon$ box centered at $x$, and the discrete second derivative is the difference between the forward difference and the backward difference.

Then the low-lying eigenvalues of $H_L$, whose eigenvectors are approximately smooth, converge to the eigenvalues of $H$ in the continuum limit, and as for the high eigenvectors, who cares--- these are lattice artifacts. I am sure that it is possible to prove all this rigorously, although from the physical point of view, if it were not the case, the Schrodinger equation would be physically suspect.

You can see the convergence on a computer, if you simulate a discretized Schrodinger operator. You can prove the convergence of the discrete to the continuous propagator relatively easily from the path integral. For the individual eigenvalues and eigenvectors, things will be somewhat more involved. If you want a mathematical proof, I can try to sketch one.
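
A minimal numpy sketch of the numerical side, assuming a one-dimensional harmonic-oscillator potential $V(x) = x^2/2$, the choice $A = -1/2$ (units $\hbar = m = 1$), and sampling $V$ at the lattice sites rather than averaging over cells (all illustrative assumptions):

```python
import numpy as np

# Lattice Schrodinger operator in 1D: H_L = A * (discrete second derivative) + V,
# with A = -1/2 (units hbar = m = 1), V(x) = x^2/2, periodic box of size L, spacing eps.
L_box, N = 20.0, 400
eps = L_box / N
x = -L_box / 2 + eps * np.arange(N)          # lattice sites

# Discrete second derivative with periodic boundaries
lap = np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
lap[0, -1] = lap[-1, 0] = 1.0                # periodic wrap-around
lap /= eps**2

H_L = -0.5 * lap + np.diag(0.5 * x**2)       # lattice Hamiltonian

# The low-lying eigenvalues converge to the continuum levels E_n = n + 1/2
print(np.linalg.eigvalsh(H_L)[:5])           # approximately [0.5, 1.5, 2.5, 3.5, 4.5]
```

Shrinking $\epsilon$ and growing $L$ pushes the printed values toward the continuum levels $n + \tfrac{1}{2}$, while the top of the spectrum remains a lattice artifact.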

EDIT: Determinant formula

If you look at the characteristic polynomial of the finite-dimensional operator $H_L$,

$$\det(H_L - \lambda I),$$

you find a finite-degree polynomial in $\lambda$ whose zeros are the eigenvalues of $H_L$, and these converge to the eigenvalues of the continuum problem in the limit $\epsilon\rightarrow 0$, $L\rightarrow\infty$.
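
A sketch of the same statement numerically, reusing the lattice construction above and assuming numpy (the lattice size, scan range, and resolution are arbitrary illustrative choices): the sign changes of $\det(H_L - \lambda I)$ along the real $\lambda$ axis pick out the same low-lying eigenvalues that direct diagonalization gives.

```python
import numpy as np

def lattice_H(N=60, L_box=20.0):
    """Small 1D lattice Hamiltonian, built exactly as above (V(x) = x^2/2, A = -1/2)."""
    eps = L_box / N
    x = -L_box / 2 + eps * np.arange(N)
    lap = np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
    lap[0, -1] = lap[-1, 0] = 1.0
    return -0.5 * lap / eps**2 + np.diag(0.5 * x**2)

H_L = lattice_H()
I = np.eye(len(H_L))

# Scan lambda and record the *sign* of det(H_L - lambda*I); slogdet avoids overflow.
lams = np.linspace(0.0, 5.0, 2000)
signs = np.array([np.linalg.slogdet(H_L - lam * I)[0] for lam in lams])
zeros = lams[np.where(np.diff(signs) != 0)[0]]   # sign changes = zeros of the determinant

print(zeros)                          # roughly [0.5, 1.5, 2.5, 3.5, 4.5]
print(np.linalg.eigvalsh(H_L)[:5])    # the same low-lying eigenvalues of H_L
```

Only the sign of the determinant is tracked here, because its magnitude grows rapidly with the size of the lattice; this is the rescaling issue mentioned in the comments on the question.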

  • You mean $\det(H_L - \lambda I)$. This is a finite-degree polynomial in $\lambda$ whose roots are the eigenvalues of $H_L$. – Ron Maimon Jan 03 '12 at 23:16
  • you seem to be saying that it is the set of zeroes which has a limit? Not the polynomial? – joseph f. johnson Jan 03 '12 at 23:55
  • @Joseph: yes--- it is not obvious that the polynomial is converging, but for sure the zeros are. But when you get a certain set of zeros, you can write an Euler product formula and make an analytic function which has these zeros, and perhaps this gives a unique correct continuum notion of infinite dimensional determinant, I am not sure. I always think of it regulated. – Ron Maimon Jan 04 '12 at 07:06
  • It seems, then, that the answer to my question is « no », but you are not sure. – joseph f. johnson Jan 04 '12 at 08:27
  • @Joseph: The answer seems to be yes, I didn't work out the limit of the polynomial, but an analytic function is usually specified by its infinite set of zeros and some additional constraints, like a polynomial of infinite degree. I am not sure under what conditions the convergence is guaranteed, and convergence of the determinant is not necessary for any of the physics results, but the mathematical topic is Fredholm theory. – Ron Maimon Jan 04 '12 at 08:39
  • I should expand the last comment a little--- suppose the eigenvalues are bunched up, so that they accumulate at a finite value. Then it is impossible to have an analytic function which is singularity free in $\lambda$ which gives the limiting characteristic determinant, because analytic functions have well-spaced zeros. But such a bunching up requires the potential to be bounded at infinity, like an H-atom, so that the eigenvalues above the accumulation point become continuous, and you get a continuous line of zeros, like the reciprocal of a function with a cut. – Ron Maimon Jan 04 '12 at 15:08
  • This is quite interesting. It still seems that you are using the eigenvalues first and then constructing a function, one which really has nothing to do with the determinant or characteristic polynomial. But what I asked for was a regularisation of the determinant which could then be used to find the eigenvalues. For example, the zeta function regularisation makes sense even if you do not yet know the eigenvalues, and it is a function of lambda. And I asked for something that would work on a unitary operator, so obviously Fredholm determinants are not defined in this context. – joseph f. johnson Jan 04 '12 at 22:01
  • @Joseph: If you know the derivative of the log of the determinant function with respect to $\lambda$ then you know the determinant function by integrating and exponentiating. The derivative of $\log(\det(A-\lambda I))$ is the Green's function kernel used in Fredholm theory, $1/(A-\lambda I)$, and you can regulate this using zeta functions. – Ron Maimon Jan 05 '12 at 02:54
  • Well, it sounds like it is worth a try, but the question was: will this determinant have the property that its zeroes indicate the operator has a kernel? Without that property, it cannot be used to find eigenvalues the way the characteristic polynomial is for finite-dimensional operators.... – joseph f. johnson Jan 05 '12 at 19:58
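
The finite-dimensional version of the identity used in this exchange, $\frac{d}{d\lambda}\log\det(A-\lambda I) = -\operatorname{Tr}\left[(A-\lambda I)^{-1}\right]$, can be checked directly. A minimal sketch, assuming numpy and a small random symmetric matrix $A$ (an illustrative choice):

```python
import numpy as np

# Check: d/dlambda log det(A - lambda*I) = -Tr[(A - lambda*I)^(-1)]
# for a small random symmetric matrix A and lambda off its (real) spectrum.
rng = np.random.default_rng(1)
N = 5
B = rng.standard_normal((N, N))
A = (B + B.T) / 2                    # random real symmetric matrix
I = np.eye(N)

lam, h = 0.1 + 1.0j, 1e-6            # complex lambda, finite-difference step

# Central finite difference of log det, written as the log of a ratio of determinants
# (keeps the two nearby values on the same branch of the logarithm).
fd = np.log(np.linalg.det(A - (lam + h) * I) / np.linalg.det(A - (lam - h) * I)) / (2 * h)
resolvent_trace = -np.trace(np.linalg.inv(A - lam * I))

print(fd, resolvent_trace)           # the two agree to high precision
```

Integrating this trace in $\lambda$ and exponentiating recovers the determinant up to a multiplicative constant, which is the route described in the comment above.
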
1

As you have already mentioned, there are many ways to understand 'regularization', and it is not very often connected with a discrete limit; rather, these are dirty tricks to give a meaning to certain sums/integrals which are clearly divergent. Here the problem is different: we do not know a priori WHAT this divergent object should be. To have a divergence we have to have a limit, and we have no limit so far. So the question is about the definition rather than about the 'regularization', which might be necessary in later steps.

So, I may suggest a definition. We have an identity for finite-dimensional operators (let us assume $U$ unitary): $\det(I-\lambda U) = \exp(\operatorname{Tr}(\ln(I-\lambda U)))$. This is always correct, because $I-\lambda U$ is normal and therefore diagonalizable.

We can expand the $\ln$ in a Taylor series around $1$ (an ordinary Taylor series when $U$ is written in its eigenbasis; there are no problems with the radius of convergence coming from $U$ itself, as the moduli of all of $U$'s eigenvalues are 1) to obtain

$$ \det(I-\lambda U) = \exp\left(- \sum_{n=1}^{\infty} \frac{\lambda^n}{n} \operatorname{Tr}\, (U^n) \right) $$

Now we have an expression that explicitly contains a limit and is at the same time well defined for $U$ an operator on a finite-dimensional complex Hilbert space. Note that the appearance of the limit is a side-effect, not intentional. Now we can ask if this expression makes sense when our space becomes infinite-dimensional. There are theorems that state that if $U$ is bounded (that is, $\exists_{M>0}: \forall_{v \in V}\,\, ||U v||\leq M \, ||v|| $) and trace-class (so that the trace always exists and is finite), the above formula is well defined in the infinite-dimensional case. For unitary operators these requirements boil down to denseness of the range, which will be dense (at least for reasonable Hamiltonians generating this unitary transformation). So the above expression is well defined in an infinite-dimensional Hilbert space without nearly any significant additional hypotheses, nor any 'regularization'.

Now all we have to do is find the zeros of this well-defined 'zeta' function: $$ \zeta(\lambda) = \det(I-\lambda U) = \exp\left(- \sum_{n=1}^{\infty} \frac{\lambda^n}{n} \operatorname{Tr} (U^n) \right)$$ And, being honest, I haven't got the slightest idea how to do it! However, I am quite sure no one has ever proven that it cannot be done :) . I believe it would not be too difficult to start by proving that all the zeros lie on the unit circle (c'mon, we all knew that from the beginning!). Unfortunately I have no time nor ideas to deal with it now. Somebody?
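
The finite-dimensional identity itself is easy to check numerically. A minimal sketch, assuming numpy, a random unitary $U$ obtained from the QR decomposition of a complex Gaussian matrix, a particular $\lambda$ with $|\lambda| < 1$, and a truncation of the trace series at a finite order (all illustrative choices):

```python
import numpy as np

# Numerical check of det(I - lambda*U) = exp(-sum_{n>=1} lambda^n / n * Tr(U^n))
# for a random finite-dimensional unitary U and |lambda| < 1.
rng = np.random.default_rng(0)
N = 6
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
U, _ = np.linalg.qr(A)                       # QR of a complex Gaussian matrix gives a unitary U

lam = 0.3 + 0.4j                             # |lambda| = 0.5 < 1
lhs = np.linalg.det(np.eye(N) - lam * U)

n_max = 200                                  # truncate the series; terms decay like |lambda|^n / n
rhs = np.exp(-sum(lam**n / n * np.trace(np.linalg.matrix_power(U, n))
                  for n in range(1, n_max + 1)))

print(lhs, rhs)                              # the two agree to machine precision
```

In finite dimensions the left-hand side is just a polynomial in $\lambda$, so it is the series representation, convergent for $|\lambda|<1$, that has to carry the weight in the infinite-dimensional question.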

Terminus
1

Finding the roots of a polynomial of finite degree and of infinite degree are different problems. While a polynomial of finite degree always has roots (over the complex numbers), for infinite degree this is not the case; for example, $$\exp(\lambda)=\sum_{n=0}^{\infty}\frac{\lambda^n}{n!}=0$$ has no finite roots.