6

I just started with Dirac notation, and I am a bit clueless, to say the least. I can see that Schrödinger's equation is given in terms of kets. Would I be correct in assuming that, if I were given a wavefunction, say $\Psi(x)=A\exp(-ikx)$, I could just write $\lvert \Psi\rangle =A\exp(-ikx)$?

Qmechanic
  • 201,751

6 Answers

12

The definition is $$ \psi(x)=\langle x| \psi\rangle, ~~~\leadsto \\ |\psi\rangle= \int dx ~~\psi(x) | x\rangle , ~~\leadsto \\ |\Psi\rangle= \int dx ~~ A e^{-ikx}| x\rangle . $$

Wavefunctions are coefficients of coordinate kets.


NB You may also then check $$\langle p|\Psi\rangle= \int dx ~~A e^{-ikx} \langle p|x\rangle ={A\over \sqrt{2\pi \hbar}} \int dx ~e^{-ix(k+p/\hbar)} \\ =A\sqrt{2\pi \over \hbar}\, \delta (k+p/\hbar). $$
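
If it helps to see this concretely, here is a small numerical sketch (my own addition, not part of the answer): it samples $\Psi(x)=Ae^{-ikx}$ on a finite grid and checks that its discrete Fourier transform is concentrated at a single momentum, $p=-\hbar k$, the discrete stand-in for the delta function above. All the grid parameters are arbitrary choices.

```python
import numpy as np

# Sample psi(x) = A exp(-i k x) on a grid and look at its momentum components.
hbar = 1.0
A, k = 1.0, 5.0
N, L = 1024, 16 * np.pi                 # grid points, box size (chosen so k*L/2pi is an integer)
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

psi = A * np.exp(-1j * k * x)           # the wavefunction <x|Psi>

# Discretized momentum-basis components <p|Psi>, up to normalization
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi * hbar)

print(p[np.argmax(np.abs(phi))])        # ~ -5.0, i.e. p = -hbar*k, as the delta function says
```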

Cosmas Zachos
  • 62,595
7

A ket is an abstract vector which describes the state of your system. A wavefunction is a representation of that vector in a particular basis, usually the position basis (although you will sometimes also hear about, e.g., "momentum-space wavefunctions"). The way the position-space wavefunction $\psi$ is defined is by letting $\psi_\alpha(x)$ denote the component of the vector corresponding to the state $\alpha$, in the direction of the position eigenvector with eigenvalue $x$. This is seen in the equation $$\psi_\alpha(x) = \langle x\vert\alpha\rangle$$

Therefore, it would be incorrect to say something like $\lvert\alpha\rangle=\psi_\alpha(x)$, since $\lvert\alpha\rangle$ is a vector in the state space, whereas $\psi_\alpha(x)$ is just some scalar complex number telling you the $\lvert x\rangle$ component of $\lvert\alpha\rangle$.
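
A finite-dimensional toy model may make the distinction more tangible; the sketch below (my own, with an arbitrarily chosen state) pretends "position" can only take the values $x=0,1,2$, so the state space is $\mathbb{C}^3$: the ket is the whole vector, while the wavefunction is merely its list of components $\langle x\vert\alpha\rangle$.

```python
import numpy as np

# Toy model: a 3-dimensional "position" basis |0>, |1>, |2>.
basis = np.eye(3, dtype=complex)             # columns play the role of |x>
alpha = np.array([0.6, 0.0, 0.8j])           # some normalized ket |alpha> (arbitrary choice)

# psi_alpha(x) = <x|alpha>: a plain complex number for each x, not a vector
psi = np.array([basis[:, x].conj() @ alpha for x in range(3)])
print(psi)                                   # [0.6+0.j  0.+0.j  0.+0.8j]
```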

Sandejo
  • 5,478
4

The relationship of the values that wave functions take to "kets" is the same as that of vector components to vectors, which makes sense, since "kets" are a type of vector, just one in a space that is more general and typically infinite-dimensional. In Euclidean space, you have a vector of the form

$$a_1 \mathbf{e}_1 + a_2 \mathbf{e}_2 + \cdots + a_n \mathbf{e}_n$$

where $\mathbf{e}_j$ are basis vectors, which have the property that combinations like the above generate every other vector in the space. The components are the coefficients $a_j$. Note that the relationship

$$j \mapsto a_j$$

is a function whose domain is the finite index set $\{ 1, 2, \cdots, n \}$ that indexes the dimensions of the space. This function is exactly the analogue of the wave function in the ket case. The only difference is that here the "index of dimensions" is a point in space, so there are as many dimensions as there are points in space, and we need to add up an uncountable number of terms, which is why we use an integral:

$$|\psi\rangle = \int_{P \in \mathbb{R}^3} [\psi(P)\ dV]\ |P\rangle .$$

And that's it.
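
To make the analogy explicit, here is the finite-dimensional version of that integral written out in code (a sketch of my own; the basis and components are arbitrary): the "function" $j \mapsto a_j$ and the basis vectors rebuild the vector, just as $\psi(P)$ and the kets $|P\rangle$ rebuild $|\psi\rangle$.

```python
import numpy as np

# Rebuild a vector from its components: v = a_1 e_1 + ... + a_n e_n.
e = np.eye(4)                                # basis vectors e_1, ..., e_4
a = np.array([2.0, -1.0, 0.5, 3.0])          # the components, i.e. the "function" j -> a_j

v = sum(a[j] * e[j] for j in range(4))
print(v)                                     # [ 2.  -1.   0.5  3. ]
```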

Now an _eigen_ket is something that only makes sense in the context of a particular linear operator we are interested in - a certain kind of ket-valued function of kets. If $\hat{A}$ is a linear operator, that is, a map that takes a ket as an input argument and outputs another ket, then if there's a particular ket $|\psi_E\rangle$ for which it acts like a scaling, i.e.

$$\hat{A}(|\psi_E\rangle) = a |\psi_E\rangle$$

for some number $a$, then we say $|\psi_E\rangle$ is an eigenket of the operator $\hat{A}$. If you think of kets as infinite-dimensional vectors, then eigenkets are those whose "direction" doesn't change when the operator is applied. There is no such thing as "an eigenket" by itself, without any reference to a particular linear operator that works this way when applied to it.
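
In a finite-dimensional setting this is just the familiar eigenvector condition; the sketch below (my own, with an arbitrarily chosen Hermitian matrix standing in for $\hat{A}$) checks it numerically.

```python
import numpy as np

# A 2x2 Hermitian matrix as a stand-in for the operator A-hat.
A_hat = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

a_vals, vecs = np.linalg.eigh(A_hat)            # eigenvalues and eigenvectors
a, psi_E = a_vals[0], vecs[:, 0]                # one eigenvalue / eigenket pair

# Applying the operator only rescales the eigenket; its "direction" is unchanged.
print(np.allclose(A_hat @ psi_E, a * psi_E))    # True
```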

It should also be mentioned that once you have a vector space, you can also consider alternative sets of basis vectors. These give different vector components, and the same goes for kets as well - which means there are different kinds of "wave function", such as a momentum-space wave function (the dimension index is the momentum value), or an energy-space wave function (the dimension index is the energy), and so forth.
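
The basis-change statement also has a simple finite-dimensional sketch (my own example, with an arbitrary Hermitian matrix playing the role of an energy operator): one and the same ket has two different component lists, one per basis.

```python
import numpy as np

# One ket, two bases, two "wave functions".
H = np.array([[1.0, 0.5],
              [0.5, 2.0]])                    # a Hermitian operator, say an energy
_, U = np.linalg.eigh(H)                      # columns of U are its eigenkets

psi = np.array([1.0, 1.0]) / np.sqrt(2)       # a fixed ket

print(psi)                # components in the original basis
print(U.conj().T @ psi)   # components in the eigenbasis of H: a different "wave function"
```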

1

Yeah, as a beginner you can basically do this with no problems, but a physicist would tend not to do that because it's mixing notations. $|Y\rangle$ is a state, i.e. a vector. $Y(x)$ is a function that eats a value of $x$ and spits out the value of the wave function $Y(x)$. So whereas $Y(x)$ "depends" on an input, $|Y\rangle$ does not, and the expression $|Y\rangle = Y(x)$ isn't "balanced" on both sides. A slightly more correct expression would be to write $$ \langle x | Y\rangle = \int dx' \delta(x-x')Y(x') =Y(x) $$ But really this is a minor quibble.
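
The sifting identity in that display can also be checked symbolically; here is a SymPy sketch of my own (with $Y$ left as an arbitrary undefined function).

```python
import sympy as sp

# Check: integral of delta(x - x') * Y(x') over x' gives Y(x).
x, xp = sp.symbols("x xp", real=True)
Y = sp.Function("Y")

result = sp.integrate(sp.DiracDelta(x - xp) * Y(xp), (xp, -sp.oo, sp.oo))
print(result)    # expected: Y(x)
```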

user1379857
  • 11,439
  • 7
    I wouldn't call this minor, and introducing the correct formalism from the start helps a lot in avoiding confusion later on. – KilianM Feb 11 '21 at 17:44
  • 2
    Depending on how one looks at it, a function is a vector, just as a ket is. In fact, the Hilbert space of kets $|\alpha\rangle$ is isomorphic to that of wavefunctions $\psi_\alpha(x)=\langle x|\alpha\rangle$. The difference is that kets keep the basis hidden, while in the wavefunctions it's explicitly chosen. – Ruslan Feb 16 '21 at 16:08
0

You are indeed confusing eigenkets and eigenfunctions. The eigenkets are the possible states your system can end up in after a measurement: they are the eigenvectors of the measurement operator (e.g. $\hat{H}$) you choose to apply to your state (your system). The wavefunction of a system can be expressed as a superposition of eigenkets (an integral in the continuous case, a series in the discrete case). They are both vectors in the Hilbert space you are working in, though.

Don't hesitate to ask if you need clarification.
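
For what it's worth, here is a finite-dimensional sketch of "the state is a superposition of eigenkets" (my own toy example, with an arbitrary Hermitian matrix as the measurement operator).

```python
import numpy as np

# Expand a state in the eigenkets of a measurement operator.
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])                    # a Hermitian measurement operator
E, kets = np.linalg.eigh(H)                   # its eigenvalues and eigenkets |n>

psi = np.array([1.0, 0.0], dtype=complex)     # the state of the system

c = kets.conj().T @ psi                       # coefficients c_n = <n|psi>
print(np.allclose(kets @ c, psi))             # True: |psi> = sum_n c_n |n>
print(np.abs(c) ** 2)                         # [0.5 0.5]: weights of the two eigenkets
```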

Urb
  • 2,608
0

It depends on the equation, that is, on $V(x)$. If it's a free particle ($V(x)=0$), with $\hbar=1$, then:

$$ \psi(x) = Ae^{-ikx}$$

is proportional to the momentum eigenstate:

$$ |{-k}\rangle$$

Then:

$$ \hat p\, |{-k}\rangle = -\hbar k\,|{-k}\rangle = -k\,|{-k}\rangle$$

is easier to deal with than:

$$ -i\frac{d}{dx}\psi(x)=-i\frac{d}{dx}Ae^{-ikx} = i^2kAe^{-ikx}=-k\,\psi(x)$$

and likewise for:

$$ \langle k'|k\rangle = \delta(k'-k)$$
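
The derivative computation above is also easy to verify symbolically; the sketch below is my own (SymPy, with $\hbar=1$).

```python
import sympy as sp

# Apply p = -i d/dx to A exp(-i k x) and read off the eigenvalue.
x, k, A = sp.symbols("x k A", real=True)
psi = A * sp.exp(-sp.I * k * x)

p_psi = -sp.I * sp.diff(psi, x)
print(sp.simplify(p_psi / psi))    # -k, the momentum eigenvalue
```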

JEB
  • 33,420