
From S.M. Blinder's Introduction to QM book, p. 116:

In seeking an approximation to the ground state, we might first work out the solution in the absence of the $1/r_{12}$ term. In the Schrödinger equation thus simplified, we can separate the variables $r_{1}$ and $r_{2}$ to reduce the equation to two independent hydrogen-like problems. The ground-state wavefunction (not normalized) for this hypothetical helium atom would be:

$$\psi(r_1, r_2) = \psi_{1s}(r_1)\psi_{1s}(r_2) = e^{-Z(r_1 + r_2)}$$

Why is it only the product and not some linear combination of the two wavefunctions? I heard somewhere that it has something to do with "tensor product". Can someone provide a detailed explanation about this?

Reference:

  1. Blinder, S. M. Introduction to Quantum Mechanics: in Chemistry, Materials Science, and Biology; Elsevier, 2012. ISBN 978-0-08-048928-5.

NB: This question has also been asked on Chemistry Stack Exchange: Why is the electronic wavefunction of helium a product of the two 1s wavefunctions when ignoring electron-electron repulsion?

  • 1
    Without the interaction term, the Helium atom is effectively "the hydrogen atom twice". How do you find the ground state of 2 independent hydrogen atoms? Put both of them in the ground state (and ensure that Pauli Exclusion is satisfied). – By Symmetry Dec 12 '17 at 11:06
  • My question is WHY taking the "product" of the two wavefunctions is equivalent to "putting both of them in the ground state". – Pauling0304 Dec 12 '17 at 11:09
  • 4
    One of the categorical axioms of QM is that given independent subsystems $S_i$ with Hilbert space ${\cal H}_i$ and Hamiltonian $H_i$, then the full Hilbert space ${\cal H}= \otimes_i{\cal H}_i$ is the tensor product, and the full Hamiltonian $H= \sum_iH_i$ is the sum. – Qmechanic Dec 12 '17 at 11:23
  • Cross-dupe of https://chemistry.stackexchange.com/q/87255/564, which should have been migrated and merged here... – Tobias Kienzler Dec 13 '17 at 07:11
  • @Tobias I don't think it should be migrated. It is on-topic on [chemistry.se]. The cross post is unfortunate, but sometimes these things happen. – Martin - マーチン Dec 13 '17 at 08:16
  • @Martin-マーチン True, but OP should at least have mentioned so, to avoid users wasting time writing an answer that's already on the other site – Tobias Kienzler Dec 13 '17 at 11:48
  • @Tobias That is absolutely correct, and if we had caught that before any answers were given, we probably would have closed both questions until one magically disappears. Now it's too late for these shenanigans... I have included a disclaimer on chemistry.se and submitted a suggested edit here, to do the same. – Martin - マーチン Dec 13 '17 at 12:17

3 Answers


Because if

$$\require{cancel}\hat{H}=\hat{H}_{1}+\hat{H}_{2}=\left[-\frac{\hbar^{2}}{2m}\nabla_{1}^{2}-\frac{e^{2}}{4\pi\varepsilon_{0}}\frac{1}{r_{1}}\right]+\left[-\frac{\hbar^{2}}{2m}\nabla_{2}^{2}-\frac{e^{2}}{4\pi\varepsilon_{0}}\frac{1}{r_{2}}\right]+\cancel{\color{red}{\frac{e^{2}}{4\pi\varepsilon_{0}}\frac{1}{r_{12}}}}$$

is your Hamiltonian and $\psi_{0}$ is the ground state of $\hat{H}_{1,2}$, i.e.

$$\hat{H}_{1}\psi_{0}\left(\vec{r}_{1}\right)=E_{0}\psi_{0}\left(\vec{r}_{1}\right)$$

$$\hat{H}_{2}\psi_{0}\left(\vec{r}_{2}\right)=E_{0}\psi_{0}\left(\vec{r}_{2}\right)$$

then you can argue that

$$\hat{H}\psi_{0}\left(\vec{r}_{1}\right)\psi_{0}\left(\vec{r}_{2}\right)=\left[E_{0}+E_{0}\right]\psi_{0}\left(\vec{r}_{1}\right)\psi_{0}\left(\vec{r}_{2}\right)$$

In other words, $\psi_{0}\left(\vec{r}_{1}\right)\psi_{0}\left(\vec{r}_{2}\right)$ is a solution with energy $2E_{0}$, which is of course the lowest possible energy.
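
If you want to see this separability explicitly, here is a minimal symbolic sketch of my own (working in atomic units and using the $l=0$ radial form of each hydrogen-like Hamiltonian, with the nuclear charge written as $Z$ to match the trial wavefunction in the question, so $E_{0}=-Z^{2}/2$):

```python
import sympy as sp

r1, r2, Z = sp.symbols("r1 r2 Z", positive=True)

def H(f, r):
    # Radial (l = 0) hydrogen-like Hamiltonian in atomic units with
    # nuclear charge Z: -(1/2)(1/r^2) d/dr( r^2 d/dr ) - Z/r
    kinetic = -sp.Rational(1, 2) * sp.diff(r**2 * sp.diff(f, r), r) / r**2
    return kinetic - Z / r * f

psi = sp.exp(-Z * (r1 + r2))        # psi_1s(r1) * psi_1s(r2), unnormalized

Hpsi = H(psi, r1) + H(psi, r2)      # separable Hamiltonian, 1/r12 dropped
print(sp.simplify(Hpsi / psi))      # prints -Z**2, i.e. 2*E_0 with E_0 = -Z**2/2
```

The printed ratio is a constant, which is precisely the statement that the product is an eigenfunction of $\hat{H}_{1}+\hat{H}_{2}$ with energy $2E_{0}$.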

A common hand-waving argument is that this choice satisfies the fact that

$$P\left(A\cap B\right)=P\left(A\right)P\left(B\right)$$

for independent events $A$ and $B$. Here $A=\color{blue}{{\rm electron\:1\:is\:at\:}r_{1}}$ and $B=\color{blue}{{\rm electron\:2\:is\:at\:}r_{2}}$ so

$$P\left(A\right)=\left|\psi_{0}\left(\vec{r}_{1}\right)\right|^{2}$$

$$P\left(B\right)=\left|\psi_{0}\left(\vec{r}_{2}\right)\right|^{2}$$

$$P\left(A\cap B\right)=\left|\psi_{0}\left(\vec{r}_{1}\right)\psi_{0}\left(\vec{r}_{2}\right)\right|^{2}=\left|\psi_{0}\left(\vec{r}_{1}\right)\right|^{2}\cdot\left|\psi_{0}\left(\vec{r}_{2}\right)\right|^{2}=P\left(A\right)P\left(B\right)$$

in agreement with the probabilistic interpretation of quantum mechanics.

eranreches

Indeed, you could quite happily assume that the joint wavefunction is the sum of two $1s$ hydrogenic wavefunctions, say,
$$ \psi_\mathrm{sum} (r_1, r_2) = \psi_{1s}(r_1) + \psi_{1s}(r_2). $$
However, you get into trouble the minute you start wanting to use it. To start with, consider the probability that the first electron is in the interval $[a,b]$, with no restrictions on $r_2$, which should naturally be given by the integral over the entire space of $r_2$:
\begin{align} P_{(\rm sum)}(r_1\in[a,b]) & = \int_a^b \mathrm dr_1 \int \mathrm dr_2 |\psi_\mathrm{sum}(r_1,r_2)|^2 \\ & = \int_a^b \mathrm dr_1 \int \mathrm dr_2 |\psi_{1s}(r_1) + \psi_{1s}(r_2)|^2 \\ & = \int_a^b \mathrm dr_1 \int \mathrm dr_2 \left[|\psi_{1s}(r_1)|^2 + 2\mathrm{Re}\left(\psi_{1s}(r_1)\psi_{1s}(r_2)^*\right) + |\psi_{1s}(r_2)|^2 \right]. \end{align}
This gives a fixed contribution of $b-a$ from the $|\psi_{1s}(r_2)|^2$ term (which already tells you that this "probability" does not even have the right units), and a contribution of $\infty$ from the $\int_a^b \mathrm dr_1|\psi_{1s}(r_1)|^2$ term (which is what really interests us), because it gets integrated over the unbounded range $\int\mathrm dr_2$ with nothing to temper the $r_2$ dependence.

Consider, on the other hand, the behaviour of the tensor-product wavefunction for this observable:
\begin{align} P(r_1\in[a,b]) & = \int_a^b \mathrm dr_1 \int \mathrm dr_2 |\psi(r_1,r_2)|^2 \\ & = \int_a^b \mathrm dr_1 \int \mathrm dr_2 |\psi_{1s}(r_1)\psi_{1s}(r_2)|^2 \\ & = \int_a^b \mathrm dr_1|\psi_{1s}(r_1)|^2 \int \mathrm dr_2 |\psi_{1s}(r_2)|^2 \\ & = \int_a^b \mathrm dr_1|\psi_{1s}(r_1)|^2, \end{align}
i.e. exactly what you need it to be.
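
If it helps, here is a rough one-dimensional numerical sketch of that comparison (a toy model with $\psi_{1s}(r)\propto e^{-Zr}$ normalized on a 1D grid; the values of $Z$, the window $[a,b]$, and the grid are arbitrary illustrative choices, not anything fixed by the problem). The product marginal is independent of how far out you integrate $r_2$, while the "sum" version keeps growing with the size of the $r_2$ domain:

```python
import numpy as np

# Toy 1D illustration (not the full 3D helium problem): compare the marginal
# probability P(r1 in [a, b]) obtained from the product wavefunction
# psi(r1)*psi(r2) with the one obtained from the sum psi(r1) + psi(r2),
# as the r2 integration range is enlarged.

Z, a, b = 2.0, 0.2, 0.8                      # arbitrary illustrative values
for r_max in (10.0, 20.0, 40.0):             # enlarge the r2 "box"
    r = np.linspace(1e-4, r_max, 4000)
    dr = r[1] - r[0]
    psi = np.exp(-Z * r)
    psi /= np.sqrt(np.sum(psi**2) * dr)      # normalize the 1D orbital

    window = (r >= a) & (r <= b)

    # product: the r2 integral factors out and equals 1
    p_product = np.sum(psi[window]**2) * dr * np.sum(psi**2) * dr

    # sum: integrate |psi(r1) + psi(r2)|^2 over r1 in [a, b] and all r2
    p1, p2 = np.meshgrid(psi[window], psi, indexing="ij")
    p_sum = np.sum(np.abs(p1 + p2)**2) * dr * dr

    print(f"r_max = {r_max:5.1f}   product: {p_product:.4f}   sum: {p_sum:.4f}")
```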

OK, so that rules out one alternative, but it doesn't fully prove that the tensor product is the only working choice. However, it does provide a sharper criterion for what we need: we want the probability densities $|\psi_i(r_i)|^2$ to combine the way standard probability densities do when the underlying events are independent. In other words, the position-space probability density needs to be $$ |\psi(r_1,r_2)|^2 = |\psi_{1s}(r_1)\psi_{1s}(r_2)|^2. $$

By itself, that's not yet enough, but we can say more: we need this to work for all observables, not just for position-space measurements. This takes some doing, but one can show that once you carry out that general formalism, you wind up at the universal property of the tensor product, which constrains the resulting structure to be canonically isomorphic to the tensor product.

Emilio Pisanty

Qmechanic gave a hint to the answer in the comments. It is usually taken as a separate axiom of QM that if a quantum system A has identifiable subsystems B and C (two are chosen for simplicity; the argument extends easily to any finite number of subsystems), then the Hilbert space of A is the tensor product of the Hilbert spaces of B and C. This entails that:

$$\Psi_A = \psi_B \otimes \psi_C$$ at the level of normalizable (pure) quantum states and

$$H_A = H_B \otimes \hat{1}_C + \hat{1}_B \otimes H_C $$ at the level of Hamiltonians.

This description is consistent and leads to experimentally verifiable predictions for any multiparticle system (the simplest being the hydrogen atom). The axiom is amended for subsystems made of identical particles (for example, the two electrons in the three-particle helium atom), in which case the states and operators are multiplied by, or acted on with, symmetrization or antisymmetrization operators.
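
As a finite-dimensional illustration of this axiom (a sketch of my own, with small random Hermitian matrices standing in for $H_B$ and $H_C$; the dimensions 3 and 4 are arbitrary), one can build the composite Hamiltonian with Kronecker products and check that the tensor product of the two subsystem ground states is the ground state of the composite system, with energy equal to the sum of the subsystem energies:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n):
    """A random Hermitian matrix standing in for a subsystem Hamiltonian."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

H_B, H_C = random_hermitian(3), random_hermitian(4)

# H_A = H_B (x) 1_C + 1_B (x) H_C on the tensor-product space
H_A = np.kron(H_B, np.eye(4)) + np.kron(np.eye(3), H_C)

eB, vB = np.linalg.eigh(H_B)
eC, vC = np.linalg.eigh(H_C)

# The tensor product of the subsystem ground states is an eigenstate of H_A
# with energy eB[0] + eC[0], and that energy is the bottom of the spectrum.
ground = np.kron(vB[:, 0], vC[:, 0])
print(np.allclose(H_A @ ground, (eB[0] + eC[0]) * ground))    # True
print(np.isclose(np.linalg.eigvalsh(H_A)[0], eB[0] + eC[0]))  # True
```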

The answer by Emilio Pisanty should be read after mine.

DanielC