2

A particle is described by the wave function $\psi(x) = Be^{-2x}$ for $x<0$ and $\psi(x) = Ce^{4x}$ for $x>0$. For the wave function to be continuous at $x=0$, we need $B=C$; a wave function must be continuous to be valid.

However, another condition we were taught, and which I can find all over the internet, is that the first spatial derivative of the wave function must also be continuous. For this to hold at $x=0$, $B$ cannot equal $C$. Why, then, is this a valid wave function?
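To spell it out: continuity at $x=0$ requires
$$\psi(0^-) = B = C = \psi(0^+)\,,$$
while continuity of the derivative requires
$$\psi'(0^-) = -2B = 4C = \psi'(0^+)\,,$$
and both can only hold together if $B = C = 0$.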

Another problem: $\psi = \frac{iC}{3}(x-2)$ for $2\le x\le 5$, $\psi = -\frac{iC}{5}(x-10)$ for $5\le x\le 10$, and $\psi = 0$ elsewhere. Again, the derivative is discontinuous at $x=5$ since the two line segments have different slopes. Still, this example is treated as a valid wave function by the text (Solid State Electronic Devices, 7th ed., problems 2.6(c) and 2.7).
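Explicitly, at the joining point:
$$\psi(5^-) = \frac{iC}{3}(5-2) = iC = -\frac{iC}{5}(5-10) = \psi(5^+)\,,$$
so $\psi$ itself is continuous there, while
$$\psi'(5^-) = \frac{iC}{3} \neq -\frac{iC}{5} = \psi'(5^+)\,.$$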

Can we simply ignore isolated points of discontinuity?

Qmechanic
  • 201,751

3 Answers

5

The derivative of $\psi(x)$ is continuous only where there is no infinite discontinuity in the potential. Examples of situations where $\psi'(x)$ is not continuous include a $\delta(x)$ potential and both ends of an infinite well.

The quick argument follows by integrating the Schrödinger equation over a small region around the suspect point: \begin{align} -\frac{\hbar^2}{2m}\int_{-\epsilon}^\epsilon \psi''(x)\,dx &=-\frac{\hbar^2}{2m}\left(\psi'(\epsilon)-\psi'(-\epsilon) \right)\\ &= \int_{-\epsilon}^\epsilon \,dx\, (E - V(x))\,\psi(x)\, . \end{align} Thus, if the integrand on the right-hand side remains finite in the interval, the integral on the right goes to $0$ as $\epsilon\to 0$, hence so does the left-hand side, implying continuity of $\psi'$.

If, as stated, there is an infinite discontinuity in the integrand (e.g. a $\delta$ potential), the integral on the right may give a non-zero value even as $\epsilon\to 0$, which in turn gives a discontinuous $\psi'(x)$.
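To make the non-zero jump concrete, take the standard attractive delta potential $V(x) = -\lambda\,\delta(x)$ with strength $\lambda>0$ (a textbook example, not taken from the question). Integrating the Schrödinger equation across $x=0$ exactly as above gives
$$-\frac{\hbar^2}{2m}\left(\psi'(0^+)-\psi'(0^-)\right) = \lambda\,\psi(0)\:.$$
The normalizable solution $\psi(x)=\sqrt{\kappa}\,e^{-\kappa|x|}$ has $\psi'(0^\pm)=\mp\kappa\,\psi(0)$, so the jump condition fixes $\kappa = m\lambda/\hbar^2$ and hence the single bound state $E=-\frac{\hbar^2\kappa^2}{2m}=-\frac{m\lambda^2}{2\hbar^2}$: the wave function is continuous, but its derivative is not.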

ZeroTheHero
  • 45,515
  • If you suppose that $\psi’’$ exists you also have that $\psi’$ is continuous by basic calculus, independently of the equation and the potential. What is the logic in this argument? – Valter Moretti Sep 30 '23 at 05:59
  • @ValterMoretti it is the standard argument. See for instance Complement HI section 1b of the text by Cohen-Tannoudji et al. The book does not claim to make a rigorous argument but we can surely agree that if this argument is good enough for a Nobel-winning physicist it should be good enough for PSE. – ZeroTheHero Sep 30 '23 at 08:52
  • Sorry, first of all you are using an "Ipse dixit" argument. However, it is not an issue regarding rigour. As it stands it is pure nonsense, because one of the first things one learns while studying elementary calculus is that a differentiable function is continuous. If this argument has some meaningful sense (I expect that it is the case), it should be presented. Otherwise one is just asked to switch off his/her mind and accept everything. – Valter Moretti Sep 30 '23 at 09:53
  • By the way, I know why, mathematically speaking, functions representing quantum states must have that type of regularity. It is because they must stay in the domain of the unique self-adjoint extension of the Hamiltonian operator. What I am wondering is if these mathematical motivations (though physically very well motivated by the requirement that the Hamiltonian has to be properly self-adjoint, so it cannot be a "naive" differential operator), have also a physical interpretation. I cannot see this bridge, but it does not mean that it does not exist. – Valter Moretti Sep 30 '23 at 10:30
  • @ValterMoretti yes I realize this is justification by higher authority but certainly the (pretty canonical) argument is reasonably intuitive and does not require the machinery of functional analysis: it is a physics argument. The Schrodinger equation (which is 2nd order) has issues where the second derivative is not defined, yet at such points you can still extract valuable information by comparing the solution on the two sides of the problematic point. But you know all this already… – ZeroTheHero Sep 30 '23 at 14:00
  • Indeed, how can you motivate the fact that the second derivative does not exist? You started from a 2nd order equation and now you change your assumptions (?) The point is that the true Hamiltonian is not the differential operator one usually writes down, but its adjoint, which is not a differential operator and permits weak derivatives. – Valter Moretti Sep 30 '23 at 15:05
  • That is because physics requires that $H$ is selfadjoint, but no differential operator can be selfadjoint (at most they are essentially selfadjoint). Without this fundamental remark any physical "intuition" is arbitrary in my view and should not be used as it is not a justification. That is a subtle point of mathematical nature. Yes, even of physical nature, but relying on more basic postulates. For this reason I consider all those "arguments" deeply misleading. Even if they arise from a Nobel prize laureate... – Valter Moretti Sep 30 '23 at 15:07
  • @ValterMoretti You are allowed your opinion of course. The argument is a valuable and convenient shortcut as it allowed generations of physicists to precisely bypass the need for all the machinery you allude to. Of course as a mathematician you are free to declare any physical intuition “arbitrary” and ignore it, but I will take my cue from known masters, and I’m certainly glad you are there to critically look back at these kinds of issues. – ZeroTheHero Sep 30 '23 at 21:34
  • Obviously also you are allowed to have your opinion. In my view this is not an argument, just an a posteriori attempt to say something about a fact proved elsewhere. I do not think I am thinking as a mathematician on this issue, just as a person who desires to understand. I might add that I appreciated your comments several times and I am very sorry that we disagree on this occasion. But this is the way of life :-) bye bye – Valter Moretti Oct 01 '23 at 05:42
  • @ValterMoretti well we do have to agree to disagree. I also very much appreciate your comments (and much of your work), which does provide insight; how much would depend on each individual but I would never qualify it as “arbitrary”. g’day to you as well. – ZeroTheHero Oct 01 '23 at 12:52
  • I do not want to move to chat for various reasons. What I could do is to officially propose a question about the traditional answer to this issue, like yours (I received similar answers when I was a student). So I can specify what I find "arbitrary" in this traditional answer. – Valter Moretti Oct 01 '23 at 14:08
  • @ValterMoretti I would love that! I think this is an excellent idea and would be very valuable. – ZeroTheHero Oct 01 '23 at 14:09
2

As proposed by @ZeroTheHero in the long discussion below, and following my question *About the traditional explanation of the continuity of the first derivative of a 1D wavefunction*, I write down here the rigorous argument, popularly attributed to H. Weyl, proving the continuity of the first derivative.

Actually the original argument by Weyl was reworked by other people, and a relatively more recent source is Hellwig, Differential Operators in Mathematical Physics, Addison-Wesley 1964, Chapter 11. (Unfortunately, I do not have this book, and what I write below is a reconstruction of the argument extracted from an Italian theoretical-physics textbook.)

Let us consider a ''naive Hamiltonian operator'' (setting $\hbar = 2m = 1$ for brevity), $$H_0 := -\frac{d^2}{dx^2} + U(x) : C^\infty_0(\mathbb{R}) \to L^2(\mathbb{R}) \tag{0}$$ where $U$ is $C^\infty$ except for a finite number of points $x_k$, at which it has finite jump discontinuities: $U(x_k^+)\neq U(x_k^-)$, both one-sided limits being finite. (The Schwartz space ${\cal S}(\mathbb{R})$ can be used in place of $C_0^\infty(\mathbb{R})$, with the same result.)

According to several results (e.g., by Kato, after adding some integrability conditions on $U$ that are irrelevant here), $H_0$ is essentially selfadjoint, i.e., the adjoint $H_0^\dagger$ of $H_0$ is selfadjoint: $$H:= (H_0^\dagger)^\dagger = H_0^\dagger\:.$$

Since, in the standard mathematical-physics formulation of QM, observables are required to be properly selfadjoint operators, it is $H$ that is assumed to be the ''true observable''.

I stress that the domain $D(H)\subset L^2(\mathbb{R})$ is larger than the domain $C_0^\infty(\mathbb{R})$ of $H_0$ and contains functions which are not smooth. Indeed, unlike $H_0$, $H$ is not a differential operator.

However $H$ and its domain $D(H)$ are completely determined by $H_0$ and its domain $D(H_0):=C_0^\infty(\mathbb{R})$ through the definition of adjoint operator. In other words, physics is already embodied in $H_0$.

Nevertheless, the existence of a basis of (proper) eigenfunctions (under some standard hypotheses on $U$) is guaranteed for selfadjoint operators such as $H$, not for merely symmetric operators such as the original $H_0$.

So, when dealing with the eigenvector problem we should refer to $H$ and not $H_0$.

From this perspective, the ''correct'' Schroedinger equation is $$H\psi_E = E\psi_E$$ where $\psi_E \in D(H)$.

It turns out that (there are many theorems leading to this result)

$H$ has the same form as $H_0$ in (0), but the derivatives $\frac{d^2}{dx^2}$ are (second) weak derivatives and they coincide with standard derivatives when $x$ is not a discontinuity point of $U$.

Saying that $g\in L^2(\mathbb{R})$ is the weak derivative (aka distributional derivative) of $\psi \in L^2(\mathbb{R})$ means that $$\int f(x) g(x)\, dx = -\int \frac{df }{dx} \psi(x)\, dx\:, \quad \forall f\in C_0^\infty(\mathbb{R})\:.$$ Iterating, $g$ is the second weak derivative of $\psi$ when $$\int f(x) g(x)\, dx = \int \frac{d^2f }{dx^2} \psi(x)\, dx\:, \quad \forall f\in C_0^\infty(\mathbb{R})\:.$$
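For instance, $\psi(x) = e^{-|x|}$ admits the first weak derivative $g(x) = -\,\mathrm{sgn}(x)\,e^{-|x|} \in L^2(\mathbb{R})$, but no second weak derivative in $L^2(\mathbb{R})$: integrating by parts on each half line gives $$\int \frac{d^2 f}{dx^2}\, e^{-|x|}\, dx = \int f(x)\, e^{-|x|}\, dx - 2 f(0)\:, \quad \forall f \in C_0^\infty(\mathbb{R})\:,$$ and the evaluation $f \mapsto f(0)$ cannot be represented by an $L^2$ function (it is the distribution $2\delta$). Not by chance, $e^{-|x|}$ is, up to normalization and a choice of units, the bound state of the $\delta$ potential mentioned in the other answers.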

The Schroedinger equation for $\psi \in L^2(\mathbb{R})$, $$H\psi = E\psi\:,$$ therefore implies (actually is equivalent to) $$\int \psi(x)\frac{d^2f}{dx^2}\, dx = \int (U(x)-E)\, f(x)\, \psi(x)\, dx \:, \quad \forall f\in C_0^\infty(\mathbb{R})\:. \tag{1}$$

Here comes the Weyl result. It states that, if $U$ satisfies the said hypotheses, then

$\quad \quad\quad \quad$ $\psi$ is properly $C^2$ out of the discontinuity points of $U$. $\quad \quad\quad \quad$ [WEYL]

That is a remarkable result as, in principle, $\psi$ is only $L^2$ in this discussion.

The result above has two fundamental consequences; the second one is the result we want.

(A) If $\psi$ satisfies (1), then [WEYL] implies that it also satisfies the usual differential Schroedinger equation at the points $x\in \mathbb{R}$ where $U$ is continuous.

PROOF. If $f\in C_0^\infty(\mathbb{R})$ vanishes on an arbitrarily small neighborhood of the set of (isolated and finitely many) discontinuity points of $U$, then, since $\psi$ is $C^2$ where $f$ does not vanish, we can integrate by parts, obtaining

$$\int \left(\psi(x)\frac{d^2f}{dx^2} - \frac{d^2\psi}{dx^2}\, f(x) \right) dx = \int \frac{d}{dx}\left(\psi(x)\frac{df}{dx} - \frac{d\psi}{dx}\, f(x) \right) dx = \left[\psi(x)\frac{df}{dx} - \frac{d\psi}{dx}\, f(x)\right]_{x=a}^{x=b}=0\:,$$ where $a<0<b$ are chosen outside the support of $f$ (below and above it, respectively), so that $f$ and $\frac{df}{dx}$ vanish at both $a$ and $b$.

Therefore, combining (1) with the identity above (valid since $\psi$ is $C^2$ where $f$ does not vanish), we obtain $$\int \frac{d^2\psi}{dx^2}\,f(x)\, dx = \int (U(x)-E)\, f(x)\, \psi(x)\, dx\:.$$ Equivalently, $$\int \left(\frac{d^2\psi}{dx^2} - (U(x)-E)\, \psi(x)\right) f(x)\, dx= 0\:.$$ Since $\frac{d^2\psi}{dx^2} - (U(x)-E)\, \psi(x)$ is continuous outside the discontinuities of $U$, the arbitrariness of $f\in C_0^\infty(\mathbb{R})$ (among those vanishing near the discontinuities), via a standard argument of elementary calculus of variations, implies that, at every point $x$ where $U$ is continuous, $$-\frac{d^2\psi}{dx^2} + (U(x)-E)\, \psi(x)=0\:.$$

(B) Let $\psi$ be as in (A). Then $\psi \in C^1(\mathbb{R})$. In particular, it admits a continuous first derivative at the discontinuities of $U$.

PROOF. We only have to prove that $\psi$ is continuous and differentiable with continuous derivative at the (isolated) points where $U$ is discontinuous, since at the remaining points these facts are already established by [WEYL]. Let us assume that $x=0$ is a discontinuity of $U$ (where, as we know, there is a finite jump). From (A) we have that $$-\int_{-\infty}^{0_-} f(x)\frac{d^2\psi}{dx^2}\, dx + \int_{-\infty}^{0_-} U(x)\, f(x)\, \psi(x)\, dx = E \int_{-\infty}^{0_-}f(x)\, \psi(x)\, dx \:, \tag{2}$$ when the support of $f$ is sufficiently narrowed around $0$, so that it does not touch the other discontinuity points of $U$. Similarly, $$-\int^{+\infty}_{0_+} f(x)\frac{d^2\psi}{dx^2}\, dx + \int^{+\infty}_{0_+} U(x)\, f(x)\, \psi(x)\, dx = E \int^{+\infty}_{0_+}f(x)\, \psi(x)\, dx \:. \tag{3}$$

Summing both sides of the found identities we find $$\int_{-\infty}^{0_-} f(x)\frac{d^2\psi}{dx^2}\, dx + \int^{+\infty}_{0_+} f(x)\frac{d^2\psi}{dx^2}\, dx = \int (U(x)-E)\, f(x)\, \psi(x)\, dx\:, $$ where we used the fact that $\{0\}$ has zero measure and $f(x)\psi(x)(U(x)-E)$ is integrable around $x=0$, because $f\psi \in L^1$ and $U$ is bounded on the support of $f$ as it has just a finite jump. Taking (1) into account, the found identity can be rearranged to $$\int_{-\infty}^{0_-} f(x)\frac{d^2\psi}{dx^2}\, dx + \int^{+\infty}_{0_+} f(x)\frac{d^2\psi}{dx^2}\, dx = \int \frac{d^2 f}{dx^2}\, \psi(x)\, dx\:. $$

Using integration by parts in the left-hand side we find $$ \left(-\frac{d\psi}{dx}(0_+)+ \frac{d\psi}{dx}(0_-)\right)f(0) + (\psi(0_+)- \psi(0_-))\left.\frac{df}{dx}\right|_{x=0}+ \int_{-\infty}^{0_-} \psi(x)\frac{d^2f}{dx^2}\, dx + \int^{+\infty}_{0_+} \psi(x)\frac{d^2f}{dx^2}\, dx = \int \frac{d^2 f}{dx^2}\, \psi(x)\, dx\:. $$ Since $$ \int_{-\infty}^{0_-} \psi(x)\frac{d^2f}{dx^2}\, dx + \int^{+\infty}_{0_+} \psi(x)\frac{d^2f}{dx^2}\, dx = \int \frac{d^2 f}{dx^2}\, \psi(x)\, dx\:,$$ we are left with $$ \left(-\frac{d\psi}{dx}(0_+)+ \frac{d\psi}{dx}(0_-)\right)f(0) + (\psi(0_+)- \psi(0_-))\left.\frac{df}{dx}\right|_{x=0} = 0 \:.$$ As $f$ is arbitrary (with the said constraints regarding its support), we can choose first $f$ with $f(0)=0$ but $f'(0) \neq 0$, and next $f$ with $f'(0)=0$ but $f(0) \neq 0$, finding that $$\psi(0_+)= \psi(0_-)\quad \text{and} \quad \frac{d\psi}{dx}(0_+)= \frac{d\psi}{dx}(0_-)\:,$$ as wanted.
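For readers who prefer a numerical illustration of (B), here is a minimal sketch (my own addition, with arbitrary choices of box size, grid, and step height, in units $\hbar = 2m = 1$): it diagonalizes a finite-difference version of $-\frac{d^2}{dx^2}+U(x)$ with a finite jump of $U$ at $x=0$, and the lowest eigenfunction comes out with a continuous first derivative across the jump, while its second derivative jumps by approximately $(U(0^+)-U(0^-))\,\psi(0)$, as the pointwise Schroedinger equation requires.

```python
import numpy as np

# Finite-difference sketch (not part of the rigorous argument):
# diagonalize H = -d^2/dx^2 + U(x), with U having a finite jump at x = 0,
# and inspect the lowest eigenfunction near the jump.
L, N = 20.0, 2001                       # box [-L/2, L/2] with hard walls, N grid points
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]

U0 = 5.0                                # size of the finite jump (arbitrary)
U = np.where(x < 0.0, 0.0, U0)          # U = 0 for x < 0, U = U0 for x > 0

# -d^2/dx^2 as a tridiagonal matrix (Dirichlet conditions at the box ends)
D2 = (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)) / h**2
H = -D2 + np.diag(U)

E, vecs = np.linalg.eigh(H)
psi = vecs[:, 0] / np.sqrt(h)           # lowest eigenfunction, approximately L2-normalized

i0 = np.searchsorted(x, 0.0)            # grid index of the jump
dpsi = np.gradient(psi, h)              # numerical psi'
d2psi = np.gradient(dpsi, h)            # numerical psi''

print("E_0                      =", E[0])
print("psi'(0^-),  psi'(0^+)    =", dpsi[i0 - 5], dpsi[i0 + 5])   # close: no finite jump in psi'
print("psi''(0^+) - psi''(0^-)  =", d2psi[i0 + 5] - d2psi[i0 - 5])
print("(U(0^+) - U(0^-)) psi(0) =", U0 * psi[i0])                 # comparable to the jump above
```

The two one-sided values of $\psi'$ printed above agree up to discretization error, whereas the second derivative exhibits the expected finite jump.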

0

I remember this exercise well, because the lecturer of my course in quantum mechanics gave us this homework assignment without us knowing anything about distributions.

The reason that the two cases are different cannot be understood properly from physics textbooks. The difference between $H = - \frac{\hbar^2}{2m} \Delta + V(x)$ for a “nice” potential (e.g. a smooth, bounded function with bounded derivatives) and, say, the case where $V(x) = \lambda \, \delta(x)$ is quite subtle.

The crucial notion here is that of the domain of an operator. One necessary condition for a vector $\varphi \in \mathcal{H}$ to be in the domain of an operator $H$ is that $H \varphi$ also lies in $\mathcal{H}$. However, there may be additional conditions, such as boundary conditions. So, for example, you can have several mathematically distinct operators with the same operational prescription (say, $-\Delta$) that differ in the domains on which they are defined. One case that physicists are familiar with is the wave equation with Dirichlet and Neumann boundary conditions (which could model a closed or semi-open pipe, for example): the spectrum, i.e. the vibrational modes, will be different.
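A textbook illustration of "same prescription, different domain, different spectrum": for $-\frac{d^2}{dx^2}$ on the interval $[0,L]$,
$$\text{Dirichlet } (\psi(0)=\psi(L)=0):\quad \psi_n(x)=\sin\frac{n\pi x}{L}\,,\;\; \lambda_n=\left(\frac{n\pi}{L}\right)^2,\;\; n=1,2,\dots$$
$$\text{Neumann } (\psi'(0)=\psi'(L)=0):\quad \psi_n(x)=\cos\frac{n\pi x}{L}\,,\;\; \lambda_n=\left(\frac{n\pi}{L}\right)^2,\;\; n=0,1,2,\dots$$
The operator "formula" is identical; only the domain (the boundary conditions) differs, and the Neumann spectrum acquires the extra constant mode with eigenvalue $0$.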

So the difference between $H$ with a “nice” and a $\delta$ potential lies in the domains of the respective Schrödinger operators: for a “nice” potential, the domain of $H$ is the domain of $-\Delta$. In dimension one this domain is the Sobolev space $H^2(\mathbb{R})$: functions $\psi$ whose first derivative is absolutely continuous and whose second (weak) derivative lies in $L^2(\mathbb{R})$; in particular, such $\psi$ have a continuous first derivative.

The mathematical definition of the Schrödinger operator with a $\delta$-potential is more subtle: what one actually does is define the free Schrödinger operator $H_0 = - \frac{\hbar^2}{2m} \Delta$ on a domain that differs from that of $-\Delta$, and the jump of the derivative produced by the $\delta$-potential is built into the definition of that domain. This justifies the computation by ZeroTheHero.
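Concretely (using the convention $V(x)=\lambda\,\delta(x)$ from above), the modified domain consists, roughly speaking, of functions that are $H^2$ away from the origin, continuous at the origin, and satisfy the jump condition
$$\psi'(0^+)-\psi'(0^-)=\frac{2m\lambda}{\hbar^2}\,\psi(0)\:,$$
which is exactly the discontinuity obtained by integrating the Schrödinger equation across $x=0$.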

Max Lein
  • 915