
I'm slightly confused as to how to answer this question; could someone please help?

Consider a free particle in one dimension, described by the initial wave function $$\psi(x,0) = e^{ip_{0}x/\hbar}e^{-x^{2}/2\Delta^{2}}(\pi\Delta^2)^{-1/4}.$$

Find the time-evolved wavefunctions $\psi(x,t)$.

Now I know that since it is a free particle, the Hamiltonian operator is $$H = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2},$$ which yields energy eigenfunctions of the form $$\psi_E(x,t) = C_1e^{ikx}+C_2e^{-ikx},$$ where $k=\frac{\sqrt{2mE}}{\hbar}$, and time evolution under the Schrödinger equation gives $$\psi(x,t)=e^{-\frac{i}{\hbar}Ht}\psi(x,0).$$ The issue I face is the correct method for finding the solution, so that I can then calculate quantities such as the probability density $P(x,t)$, the mean, and the uncertainty (all of which are straightforward once I know $\psi(x,t)$).

In short: how do I express the initial state in terms of the energy eigenfunctions $\psi_E(x,t)$ so that I can find the time-evolved wavefunction?

Qmechanic

2 Answers


For a free particle, the energy/momentum eigenstates are of the form $e^{i k x}$. Going over to that basis is essentially doing a Fourier transform. Once you do that, you'll have the wavefunction in the momentum basis. After that, time-evolving that should be simple.

Hint: The Fourier transform of a Gaussian is another Gaussian, but the width inverts, in accordance with the Heisenberg uncertainty principle. The phase and the mean position transform into each other -- that part is a little more subtle, and you will need to work it out.
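As a sanity check on the hint, the width inversion can be verified numerically. This is just an illustrative sketch (assuming $\hbar = 1$ and $\Delta = 1$; the grid sizes are arbitrary choices):

```python
import numpy as np

# Verify numerically that the Fourier transform of a Gaussian is a
# Gaussian of inverted width (assumption: hbar = 1, Delta = 1).
N, L = 4096, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
Delta = 1.0
psi = np.exp(-x**2 / (2 * Delta**2)) / (np.pi * Delta**2) ** 0.25

# Momentum-space wavefunction phi(k), with unitary FT normalization
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dk = 2 * np.pi / L
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi)

# Widths of the position- and momentum-space probability densities
sigma_x = np.sqrt(np.sum(x**2 * np.abs(psi) ** 2) * dx)  # = Delta / sqrt(2)
sigma_k = np.sqrt(np.sum(k**2 * np.abs(phi) ** 2) * dk)  # = 1 / (sqrt(2) * Delta)
print(sigma_x * sigma_k)  # saturates the Heisenberg bound: 1/2
```

Widening the Gaussian in $x$ (larger $\Delta$) narrows it in $k$ and vice versa, while the product of widths stays pinned at the minimum-uncertainty value.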

Also have a look at http://en.wikipedia.org/wiki/Wave_packet.
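The whole recipe can also be carried out numerically as a cross-check of whatever closed form you derive. A sketch with illustrative values ($\hbar = m = \Delta = 1$, $p_0 = 1$, $t = 3$): transform to momentum space, multiply each component by the free-particle phase $e^{-i\hbar k^2 t/2m}$, and transform back.

```python
import numpy as np

# Free-particle time evolution of the Gaussian packet via FFT
# (assumptions: hbar = m = Delta = 1, p0 = 1, t = 3 -- all illustrative).
hbar = m = Delta = 1.0
p0, t = 1.0, 3.0
N, L = 8192, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

psi0 = (np.exp(1j * p0 * x / hbar)
        * np.exp(-x**2 / (2 * Delta**2))
        / (np.pi * Delta**2) ** 0.25)

# Evolve: multiply each momentum component by exp(-i E_k t / hbar),
# where E_k = hbar^2 k^2 / (2 m)
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * hbar * k**2 * t / (2 * m)))

P = np.abs(psi_t) ** 2                 # probability density P(x, t)
norm = np.sum(P) * dx                  # stays 1 (unitary evolution)
mean = np.sum(x * P) * dx              # drifts as p0 * t / m
sigma = np.sqrt(np.sum((x - mean) ** 2 * P) * dx)
# analytic width: (Delta / sqrt(2)) * sqrt(1 + (hbar * t / (m * Delta**2))**2)
```

The packet's centre moves at the group velocity $p_0/m$ while its width grows as $\Delta\sqrt{1+(\hbar t/m\Delta^2)^2}$, which is exactly the spreading you should recover analytically.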

Siva

Some broadly applicable background might be in order, since I remember this aspect of quantum mechanics not being stressed enough in most courses.

[What follows is very good to know, and very broadly applicable, but may be considered overkill for this particular problem. Caveat lector.]


What the OP lays out is exactly the motivation for finding how an initial wavefunction can be written as a sum of eigenfunctions of the Hamiltonian - once we have that representation, the Schrödinger equation plus linearity gives us the wavefunction for all time.

As Siva alludes to, this amounts to finding how a vector (our wavefunction) looks in a particular basis (the set of eigenfunctions of any Hermitian operator is guaranteed to be a basis). In general, one does this by taking inner products with the basis vectors, and the reasoning is as follows.

We know the set of vectors $\{\lvert \psi_E \rangle\}$ (yes, I'm using Dirac notation here - it's a good thing to get used to), where $E$ is an index ranging over (possibly discrete possibly continuous) energies, forms a basis for the space of all wavefunctions. Therefore, there must be complex numbers $c_E$ such that $$ \lvert \psi \rangle = \sum_E c_E \lvert \psi_E \rangle, $$ where $\lvert \psi \rangle$ is our initial wavefunction. If there are infinitely many energies, the sum has infinitely many terms. If there is a continuum of energies, it is an integral.1

Now the problem is clearly one of finding the coefficients $c_E$. To do that, we take inner products with the basis vectors, one by one, where presumably our energy basis is orthonormal. Pick a generic, unspecified basis element $\lvert \psi_{E'} \rangle$. Then we have $$ \langle \psi_{E'} \vert \psi \rangle = \sum_E c_E \langle \psi_{E'} \vert \psi_E \rangle = \sum_E c_E \delta_{E'E} = c_{E'}. $$ Whether the delta function is of the Kronecker or Dirac variety depends on whether the "sum" is a sum or an integral.
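In a finite-dimensional toy model this whole procedure is a few lines of linear algebra. The following sketch uses a hypothetical random 4-state system, not this problem's continuum, just to make the bookkeeping concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 4x4 Hermitian "Hamiltonian" (purely illustrative)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

E, V = np.linalg.eigh(H)   # columns of V form an orthonormal eigenbasis

# A random normalized state |psi>
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

c = V.conj().T @ psi       # c_E = <psi_E | psi>  (note the conjugated bra)
psi_rec = V @ c            # |psi> = sum_E c_E |psi_E>
print(np.allclose(psi_rec, psi))  # True: the coefficients reconstruct the state

# Time evolution is then diagonal: c_E(t) = c_E * exp(-1j * E * t / hbar)
```

The inner products with each basis vector pick out one coefficient apiece, exactly as the delta-function argument above says they must.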

Here, then, we have our formula for the coefficients, which reads (after removing the primes) $$ c_E = \langle \psi_E \vert \psi \rangle. $$ How does one go about evaluating this? At this point, it is okay to switch out of abstract vector notation and go into the position basis. We can do this with the somewhat cryptic yet awesome-sounding spectral resolution of the identity in, say, the position basis: $$ c_E = \langle \psi_E \vert I \vert \psi \rangle = \int_{-\infty}^\infty \langle \psi_E \vert x \rangle \langle x \vert \psi \rangle \ \mathrm{d}x. $$ Here $\langle x \vert \psi \rangle \equiv \psi(x)$ is just your wavefunction, expressed in more familiar terms.2 Furthermore, as you have hopefully been told, the correct inner product at play here introduces a complex conjugation if you switch the ordering, so $$ \langle \psi_E \vert x \rangle = \langle x \vert \psi_E \rangle^* \equiv \psi_E^*(x). $$
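Concretely for this problem, the coefficient integral can be evaluated on a grid and compared with the known closed form. A sketch with hypothetical simplifications ($\hbar = \Delta = 1$, $p_0 = 0$, and $\psi_E(x) = e^{ikx}/\sqrt{2\pi}$ as the delta-normalized free-particle eigenfunctions), for which the analytic answer is $c(k) = \pi^{-1/4}e^{-k^2/2}$:

```python
import numpy as np

# c_E = <psi_E|psi> = integral of psi_E*(x) psi(x) dx, done on a grid
# (assumptions: hbar = Delta = 1, p0 = 0, psi_E(x) = e^{ikx}/sqrt(2*pi))
x = np.linspace(-20.0, 20.0, 8000, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2) / np.pi**0.25            # initial Gaussian, p0 = 0

k = 0.7                                          # a sample wavenumber
psi_E_conj = np.exp(-1j * k * x) / np.sqrt(2 * np.pi)
c_k = np.sum(psi_E_conj * psi) * dx              # Riemann sum for the integral

# analytic result: c(k) = pi**(-1/4) * exp(-k**2 / 2)
print(abs(c_k - np.pi ** -0.25 * np.exp(-k**2 / 2)))  # ~ 0
```

Sweeping $k$ instead of fixing one value traces out the momentum-space Gaussian directly, which is the Fourier-transform picture from the other answer.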

You now have enough to evaluate the coefficients $c_E$ for any initial state, given any orthonormal basis arising from a Hamiltonian. Given the free-particle form of $\psi_E(x)$, you can see that this process is essentially a Fourier transform, so if you keep your wits about you, you don't even need to do any messy integrals at all. Furthermore, depending on what is ultimately desired, the position basis may not be the most suitable basis for this problem, but doing a few problems the hard way builds character if nothing else.


1 Math aside: Countable infinities are not a big deal, since one of the assumptions of quantum mechanics is that our vector space isn't just a fancy inner product space, but a really fancy Hilbert space. Then well-behaved linear combinations of wavefunctions, even countably infinitely many, will converge to perfectly well-defined wavefunctions. Justifying the integral is trickier, but it can be done.

2 Yes, this is the connection between Dirac notation and traditional "probability density as a function of space" notation students often learn first. Abstract kets become functions of position only when "bra-ed" with a generic position basis element.