
The system I am studying is two molecules in close proximity, and we are interested in calculating the energy of the resulting induced dipole-dipole interaction. I want to physically understand what $\int \psi^{1*}_0 \psi^{2*}_0 H \psi^{1}_\mu \psi^{2}_\nu \, dv$ means in the context below.

To do so, we use perturbation theory. The two molecules have orthonormalized eigenfunctions $\psi^1_\mu$ (molecule 1) and $\psi^2_\nu$ (molecule 2), with energies $E_\mu$ and $E_\nu$, and both molecules are in the ground state, $\mu=\nu=0$. The dispersion energy as a second-order perturbation is given by:

$$ W_{00} = \sum_{\mu \nu}'\frac{(\int \psi^{1*}_0 \psi^{2*}_0 H \psi^{1}_\mu \psi^{2}_\nu dv )^2}{E_{00} - E_{\mu\nu}} $$

where $E_{\mu\nu}=E_{\mu}+E_{\nu}$ and the prime on $\sum'$ denotes summation excluding the term $\mu\nu=00$, because we are only interested in the induced dipole-dipole interactions involving the excited states.
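For concreteness, here is how I understand the second-order sum being evaluated, as a small numerical sketch on a made-up 3-level toy system (not the molecular problem; all energies and matrix elements below are invented for illustration):

```python
# Toy 3-level system: unperturbed energies E_n and the matrix elements
# V[m][n] = <m|V|n> of a perturbation in the unperturbed basis
# (Hermitian, real for simplicity).  All numbers are made up.
E0 = [0.0, 1.0, 2.5]

V = [[0.0, 0.2, 0.1],
     [0.2, 0.0, 0.3],
     [0.1, 0.3, 0.0]]

def second_order_correction(n, E0, V):
    """W_n = sum over m != n of |V[n][m]|^2 / (E0[n] - E0[m])."""
    return sum(V[n][m]**2 / (E0[n] - E0[m])
               for m in range(len(E0)) if m != n)

W0 = second_order_correction(0, E0, V)
print(W0)  # negative for the ground state: every denominator E_0 - E_m < 0
```

For the ground state every denominator is negative while the numerators are squares, which is why the dispersion correction $W_{00}$ comes out negative (attractive).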

The Hamiltonian operator here is given as

$$ H = \sum_{ik} e_i^1 e_k^2 \left[ \frac{\mathbf{r^1_i}\cdot \mathbf{r^2_k} }{R^3} -\frac{3}{R^5}(\mathbf{C} \cdot \mathbf{r_i^1})(\mathbf{C}\cdot \mathbf{r_k^2}) \right] $$

where $e_i^1$ is the charge of the $i$-th charge center of molecule 1, $\mathbf{r^1_i} = [x_i^1,y_i^1,z_i^1]$ is the location of the $i$-th charge center of molecule 1 with respect to a coordinate system centered at the center of mass of molecule 1, $\mathbf{r^2_k} = [x_k^2,y_k^2,z_k^2]$ is defined analogously for the $k$-th charge center of molecule 2, $\mathbf{C} = [X,Y,Z]$ is the vector locating the center of mass of molecule 2 in the coordinate system centered at molecule 1, and $R$ is the distance between the two molecules' centers of mass.
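Since $H$ contains no derivatives, it is just a function of the charge-center coordinates, and for fixed (classical) values of those coordinates it evaluates to a number. A minimal sketch of that evaluation, with a hypothetical helper name of my own (not from the text):

```python
# Evaluate H as an ordinary function of the charge-center coordinates.
# For fixed coordinates it returns a number; in the quantum problem these
# coordinates become operators and H acts on wave functions by multiplication.
def dipole_dipole_coupling(charges1, positions1, charges2, positions2, C):
    """sum_{ik} e_i^1 e_k^2 [ r_i.r_k / R^3 - 3 (C.r_i)(C.r_k) / R^5 ]"""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    R = dot(C, C) ** 0.5          # R = |C|, center-of-mass separation
    total = 0.0
    for e1, r1 in zip(charges1, positions1):
        for e2, r2 in zip(charges2, positions2):
            total += e1 * e2 * (dot(r1, r2) / R**3
                                - 3.0 * dot(C, r1) * dot(C, r2) / R**5)
    return total

# Two parallel unit dipoles along z, centers separated by R = 10 along x:
# the C-dependent term vanishes and H reduces to mu1*mu2/R^3 ~ 1e-3.
w = dipole_dipole_coupling([1.0, -1.0], [(0, 0, 0.5), (0, 0, -0.5)],
                           [1.0, -1.0], [(0, 0, 0.5), (0, 0, -0.5)],
                           (10.0, 0.0, 0.0))
print(w)
```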

My difficulty is that, given the form of $H$ above, it should give a scalar when it acts on the excited state vectors, i.e. $H | \mu \nu \rangle = E_{\text{dipole-dipole}}$. Thus, what does it mean mathematically and physically for us to compute $\langle 0_1 0_2 | H | \mu \nu \rangle$ as shown in $W_{00}$ above? Typically, I would interpret $\langle a | b \rangle$ as an inner product between vectors/functions $a$ and $b$, which still makes sense in a case like $\langle a | O | b \rangle$ where $O | b \rangle$ gives another vector. So, shouldn't an inner product like this only be possible between two vectors, not between a vector and a scalar?

I understand that this formula comes from time-independent second-order perturbation theory in general, and I have gone through the derivation on Wikipedia, but that does not help me physically interpret what this term means in this context. I understand that the entire term is a negative energy correction to the perturbation energy, but that also does not help. Another answer here said that, in general, these matrix elements can be interpreted as relating to the amplitude of going from the excited states to the ground state in the presence of the perturbation, which kind of makes sense given the intuition that the inner product measures the overlap between two things.

Another answer explained that $\langle e_i | A | e_j \rangle$ is the $ij$-th matrix element of the operator $A$, $A_{ij}$, in the (orthonormal, complete) basis set spanned by $\{ | e_i\rangle\}$, so the full operator is $A = \sum_{ij}A_{ij}|e_i \rangle \langle e_j|$. However, I don't see how this can be true unless $A|e_i\rangle$ equals another vector, not a scalar. Also, what would the physical interpretation of that matrix element be in this case, if $H | \mu \nu \rangle = E_{\text{dipole-dipole}}$ already?
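In a finite-dimensional setting that reconstruction claim is easy to check numerically. A small sketch, using the standard basis of $\mathbb{R}^3$ and a made-up symmetric matrix:

```python
# Check that A = sum_ij A_ij |e_i><e_j| rebuilds the operator from its
# matrix elements A_ij = <e_i|A|e_j> in an orthonormal basis
# (here the standard basis of R^3; the matrix entries are made up).
A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 0.5],
     [0.0, 0.5, 1.0]]

n = len(A)
basis = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def outer(u, v):
    """|u><v| as an n x n matrix."""
    return [[ui * vj for vj in v] for ui in u]

rebuilt = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        Aij = A[i][j]                    # matrix element <e_i|A|e_j>
        op = outer(basis[i], basis[j])   # rank-1 operator |e_i><e_j|
        for r in range(n):
            for c in range(n):
                rebuilt[r][c] += Aij * op[r][c]

print(rebuilt == A)
```

Each $|e_i\rangle\langle e_j|$ is itself an operator (a rank-1 matrix), so the sum is an operator, not a scalar, and it reproduces $A$ exactly.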

  • I'm confused by "My difficulty is that given the form of H above, it should give a scalar when it acts on the excited state vectors, i.e., ... " An operator acting on a state will yield a state not a scalar. Why do you think that acting with $H$ on $|\mu\nu\rangle$ yields just an $E$ and not some other state? And of course $A|e_i\rangle$ is equal to some vector, not a scalar, because it's the action of a linear operator (linear transformation) acting on a vector, which always yields a vector. – march Oct 19 '23 at 20:01
  • @march Just looking at the form for H given, it only involves the sums of dot products - thus should give a scalar, no? – McKinley Oct 19 '23 at 21:09
  • I mean, it should give a scalar in 3-space, but it's still an operator that acts on wave functions. In other words, it's a function of the coordinates $x_i^j$, $y_i^j$, etc., which are quantum mechanical operators. Since there are no derivatives in $H$, $H$ acts as multiplication when it acts on a position-space wave function, which means the wave function stays there. I suspect that you aren't realizing that real-space vectors like $\vec{r}$ don't live in the "same space" as the vector space of quantum mechanical states. – march Oct 19 '23 at 23:00
  • As an example, the position vector $\vec{r}$ is a vector in 3D space, but each component $x$, $y$, and $z$ are operators that act on wave functions(which are vectors in the Hilbert space of quantum states). The two different vector spaces don't exactly "talk" to each other in the way that you seem to think they do. – march Oct 19 '23 at 23:01
  • @march thank you for the helpful comments, given that I don't know what you are talking about - I suspect you are correct. I thought only eigenvalue equations act like what you described as in $H| \psi \rangle = k| \psi \rangle$? Are all operators like eigenvalue equations? Is there anywhere you could recommend that I could see how the vector spaces do "talk" to each other? – McKinley Oct 21 '23 at 05:25
  • It is hard to do quantum mechanics without having learned linear algebra, so perhaps you've forgotten some of the details of that. Any linear operator (read: linear transformation, or even matrix in the context of finite dimensional vector spaces), such as $H$, acts on a vector to make a new vector. For every (Hermitian) operator, there is a special set of vectors, which are the eigenvectors of $H$, such that $H|\psi\rangle = \lambda |\psi\rangle$, but in general, $H$ acting on an arbitrary vector makes some other vector which is not a scalar multiple of the original vector... – march Oct 21 '23 at 14:22
  • ...Since you are working with spatial wave functions, this math is much easier. The action of an operator like the $H$ you give is just to multiply the function by $H$. That is, $H\psi(x) = H(x)\psi(x)$. Since generally $H(x)\psi(x)\neq \lambda \psi(x)$, just leave it as $H(x)\psi(x)$, and then compute the integral. – march Oct 21 '23 at 14:22
  • @march I see, thank you. So, then how do you interpret the matrix element in that case? – McKinley Oct 24 '23 at 18:07
  • It depends on the context. Off-diagonal matrix elements can represent transition amplitudes between two different states (showing up in, for instance, stimulated and spontaneous emission rates that directly relate to the intensity of spectroscopic lines). In 2nd-order perturbation theory, they represent (to some extent) how much other states get mixed into the unperturbed states due to the perturbing Hamiltonian. More than that, it's tough to put a direct physical spin on their meaning. – march Oct 24 '23 at 18:32
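march's point about multiplicative operators can be made concrete on a grid. Sampling a wave function and letting a coordinate-dependent operator (here $V(x)=x^2$, an arbitrary choice for illustration) act pointwise, $V\psi$ is again a function of $x$, and it is a scalar multiple of $\psi$ only if $\psi$ happens to be an eigenfunction:

```python
# A multiplicative operator V(x) acting on a sampled wave function
# produces another function, generally NOT a scalar multiple of psi.
import math

xs = [0.1 * k for k in range(-20, 21)]   # grid on [-2, 2]
psi = [math.exp(-x**2) for x in xs]      # a Gaussian "state" (never zero)
V = [x**2 for x in xs]                   # the operator V(x) = x^2 on the grid

Vpsi = [v * p for v, p in zip(V, psi)]   # (V psi)(x) = x^2 psi(x)

# If psi were an eigenfunction of V, the pointwise ratio (V psi)/psi
# would be one constant lambda.  Here it ranges over all of [0, 4]:
ratios = [vp / p for vp, p in zip(Vpsi, psi)]
print(min(ratios), max(ratios))
```

Since the ratio is not constant, $V\psi$ is a genuinely different vector in the function space, and $\langle \phi | V | \psi \rangle$ is an ordinary inner product between two functions.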

0 Answers