
In general, for two operators to be equal, all of their (matrix) elements must be equal:

$$A = B \Leftrightarrow \langle \phi_1|A| \phi_2\rangle=\langle \phi_1|B| \phi_2\rangle \quad \text{for all } |\phi_1\rangle, |\phi_2\rangle$$

However, I am asked to show that in complex vector spaces it is enough to require equality of the diagonal elements alone:

$$A = B \Leftrightarrow \langle \phi_1|A| \phi_1\rangle=\langle \phi_1|B| \phi_1\rangle \quad \text{for all } |\phi_1\rangle$$

In my attempt to show this I did the following:

$$ | \phi_1\rangle = | \psi_1\rangle + i| \psi_2\rangle \\ \langle \phi_1|A| \phi_1\rangle = (\langle \psi_1| - i \langle \psi_2|)A(| \psi_1\rangle + i| \psi_2\rangle) = (\langle \psi_1| - i \langle \psi_2|)B(| \psi_1\rangle + i| \psi_2\rangle) = \langle \phi_1|B| \phi_1\rangle$$

which when expanded out gave me $$ \langle \psi_1|A| \psi_1\rangle + i\langle \psi_1|A| \psi_2\rangle - i\langle \psi_2|A| \psi_1\rangle + \langle \psi_2|A| \psi_2\rangle = \langle \psi_1|B| \psi_1\rangle + i\langle \psi_1|B| \psi_2\rangle - i\langle \psi_2|B| \psi_1\rangle + \langle \psi_2|B| \psi_2\rangle $$

Cancelling the diagonal terms on either side (these are equal by the assumed identity applied to $|\psi_1\rangle$ and $|\psi_2\rangle$ individually) leaves me with:

$$ i\langle \psi_1|A| \psi_2\rangle - i\langle \psi_2|A| \psi_1\rangle = i\langle \psi_1|B| \psi_2\rangle - i\langle \psi_2|B| \psi_1\rangle $$
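As a sanity check, this expansion can be verified numerically. Below is a quick NumPy sketch; the matrix `A` and the vectors `psi1`, `psi2` are arbitrary random placeholders, and `np.vdot` supplies the complex conjugation on the bra:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random complex operator and component vectors |psi_1>, |psi_2>
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
psi1 = rng.normal(size=n) + 1j * rng.normal(size=n)
psi2 = rng.normal(size=n) + 1j * rng.normal(size=n)

def elem(bra, op, ket):
    """Matrix element <bra|op|ket>; np.vdot conjugates the bra."""
    return np.vdot(bra, op @ ket)

phi1 = psi1 + 1j * psi2  # |phi_1> = |psi_1> + i |psi_2>

lhs = elem(phi1, A, phi1)
rhs = (elem(psi1, A, psi1) + 1j * elem(psi1, A, psi2)
       - 1j * elem(psi2, A, psi1) + elem(psi2, A, psi2))
assert np.isclose(lhs, rhs)
```

Note the $-i$ on the $\langle\psi_2|$ terms: it comes from the antilinearity of the bra.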

In addition to this I constructed another equality by following these steps, but starting from:

$$\langle \phi_1|A^\dagger| \phi_1\rangle=\langle \phi_1|B^\dagger| \phi_1\rangle $$

and in doing so obtained:

$$ i\langle \psi_1|A^\dagger| \psi_2\rangle - i\langle \psi_2|A^\dagger| \psi_1\rangle = i\langle \psi_1|B^\dagger| \psi_2\rangle - i\langle \psi_2|B^\dagger| \psi_1\rangle $$

my plan was to combine the two equalities to produce $$\langle \psi_1|A| \psi_2\rangle=\langle \psi_1|B| \psi_2\rangle $$

as someone in the class mentioned they had luck with this method, but I'm stumped as to where to go from here, and unsure whether I've made a mistake along the way. Any help would be greatly appreciated; I've been racking my brain trying to think of something else to try.

Zolous
  • What you are looking for is called the polarization identity in Hilbert spaces (you have to use the complex one). – yuggib Oct 12 '15 at 07:50
  • I do not think that the polarization identity, though related, can be directly exploited, unless $A-B$ is Hermitian. I wrote a direct proof in my answer. – Valter Moretti Oct 12 '15 at 09:14
  • @ValterMoretti In your proof you essentially prove that two sesquilinear forms are equal if the related quadratic forms are equal; and that is immediate once the polarization identity is proved. – yuggib Oct 12 '15 at 09:25
  • $(x,y) = \langle x| A y\rangle$ is not sesquilinear because $(y,x) \neq \overline{(x,y)}$ unless $A \subset A^*$. That's my point... – Valter Moretti Oct 12 '15 at 09:29
  • We have a different definition of sesquilinear. For me sesquilinear is just linear in the right argument, antilinear in the left one. – yuggib Oct 12 '15 at 09:30
  • OK. But doesn't the polarization identity also require my property? – Valter Moretti Oct 12 '15 at 09:31
  • @ValterMoretti No, not really. – yuggib Oct 12 '15 at 09:32
  • I tried to prove it: it is necessary instead :). Try it yourself, already in the real case. By direct inspection you see that, if $(\cdot|\cdot)$ is $\mathbb R$-bilinear, $4(x|y)= (x+y|x+y) -(x-y|x-y)$ is equivalent to $(x|y)=(y|x)$. – Valter Moretti Oct 12 '15 at 09:39
  • @ValterMoretti The hermiticity of the sesquilinear form is not necessary for the polarization identity. That should be easily proved by an explicit computation, but sadly I do not have the time to do it now ;-) In addition, every Hermitian sesquilinear form has a real associated quadratic form, but in general $s(f,f)$ is complex; that should be taken into account when checking the identity. – yuggib Oct 12 '15 at 11:51
  • I tried it myself; you are right. Unlike the real case, where symmetry is necessary, the polarization identity does not need hermiticity: sesquilinearity alone is enough. The Wikipedia page you quoted assumes hermiticity instead; it should be corrected... – Valter Moretti Oct 12 '15 at 12:44
  • You are correct that sesquilinearity is sufficient. If you would like to see the explicit calculation, I proved a very similar theorem to this one here https://physics.stackexchange.com/a/691645/326392. – Joel Croteau Jan 30 '22 at 07:39
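The point settled in the comments above can be checked numerically: the real polarization combination only recovers the symmetric part of a bilinear form, while the complex version needs nothing beyond sesquilinearity. A NumPy sketch (the form `M`, the sesquilinear form built from `C`, and all test vectors are arbitrary choices for illustration):

```python
import numpy as np

# Real case: an antisymmetric bilinear form (x|y) = x^T M y.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
b = lambda u, v: u @ M @ v

# The real polarization combination gives 2[(x|y) + (y|x)], which equals
# 4(x|y) only if the form is symmetric; here it collapses to 0.
pol = b(x + y, x + y) - b(x - y, x - y)
assert np.isclose(pol, 2 * (b(x, y) + b(y, x)))
assert not np.isclose(pol, 4 * b(x, y))  # (x|y) = 1, so 4(x|y) = 4 != 0

# Complex case: sesquilinearity alone suffices, no hermiticity needed.
rng = np.random.default_rng(0)
n = 3
C = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # generic, not Hermitian
s = lambda u, v: np.vdot(u, C @ v)  # antilinear in u, linear in v
u = rng.normal(size=n) + 1j * rng.normal(size=n)
v = rng.normal(size=n) + 1j * rng.normal(size=n)
rec = sum((1j ** -m) * s(u + (1j ** m) * v, u + (1j ** m) * v)
          for m in range(4)) / 4
assert np.isclose(rec, s(u, v))
```

The complex formula used here is $s(u,v) = \frac14\sum_{m=0}^{3} i^{-m}\, s(u+i^m v,\, u+i^m v)$, with the convention that $s$ is linear in its right argument.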

1 Answer


Yes, your conjecture is true (there is no need to involve the adjoint operators in the proof). Indeed, in a complex Hilbert space $\cal H$ (more generally, a complex vector space equipped with a Hermitian scalar product) we have the following proposition.

Proposition. Let $A,B : D \to \cal H$ be a pair of linear operators defined on the dense subspace $D\subset \cal H$. Then $A=B$ if and only if $\langle x|Ax \rangle = \langle x|Bx \rangle$ for all $x\in D$.

Proof. It is enough to prove that $\langle x|Ax \rangle=0$ for all $x\in D$ implies $A=0$. To this end, first use $x= y\pm z$ and then $x= y\pm iz$ in the identity above, observing that $\langle y|Ay \rangle= 0$ and $\langle z|Az \rangle=0$. Linearity in the right entry and antilinearity in the left entry easily produce $\langle y|Az \rangle=0$ for all $y,z \in D$. Since $D$ is dense, there is a sequence $D \ni y_n \to Az$. Continuity of the scalar product immediately yields $||Az||^2 = \lim_{n\to +\infty} \langle y_n | Az\rangle =0$ for all $z\in D$. In other words, $A=0$. $\Box$
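The polarization step in this proof can be illustrated numerically in the finite-dimensional case: from the quadratic form $Q(x)=\langle x|Ax\rangle$ alone, every matrix element of $A$ is recoverable. A NumPy sketch with an arbitrary (non-Hermitian) random matrix standing in for $A$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # generic, not Hermitian

Q = lambda x: np.vdot(x, A @ x)  # only the "diagonal" data <x|Ax> is used below

# Polarization: <y|Az> = (1/4) * sum_{m=0}^{3} i^{-m} Q(y + i^m z).
# Applied to basis vectors, it reconstructs every entry <e_j|A e_k>.
E = np.eye(n)
recovered = np.empty((n, n), dtype=complex)
for j in range(n):
    for k in range(n):
        recovered[j, k] = sum((1j ** -m) * Q(E[j] + (1j ** m) * E[k])
                              for m in range(4)) / 4
assert np.allclose(recovered, A)
```

Since the map $Q \mapsto A$ is linear, $Q \equiv 0$ forces $A = 0$, which is exactly the statement used in the proof.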

In real Hilbert spaces the proposition is false. For instance, in $\mathbb R^n$, every antisymmetric matrix $A$ satisfies $\langle x|Ax \rangle =0$ for all $x \in \mathbb R^n$, even though $A\neq 0$ in general.
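The real counterexample, and the way it evaporates over $\mathbb C$, can be seen in a few lines of NumPy (the $2\times 2$ antisymmetric matrix and the test vectors are arbitrary illustrative choices):

```python
import numpy as np

# Antisymmetric real matrix: x^T A x = 0 for every real x, yet A != 0.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])

rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.normal(size=2)
    assert np.isclose(x @ A @ x, 0.0)

# Over the complex field the quadratic form is no longer identically zero:
z = np.array([1.0, 1j])
val = np.vdot(z, A @ z)  # conjugation on the bra makes the cross terms survive
assert not np.isclose(val, 0.0)
```

This is why the complex quadratic form $\langle x|Ax\rangle$ carries strictly more information than its real counterpart.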

  • Can you elaborate on how proving that $\langle x | Ax \rangle = 0$ for all $x \in D$ implies $A=0$ is sufficient to prove that $A = B$ iff $\langle x | Ax \rangle = \langle x | Bx \rangle$ for all $x \in D$? That is non-obvious to me. It is also not obvious to me that, for a general dense subspace $D \subset \mathcal H$, $| x \rangle \in D$ automatically implies that its components $| y \rangle$ and $| z \rangle$ are in $D$ as well. – Joel Croteau Jan 26 '22 at 22:12
  • For the first question, simply consider $C=A-B$. Saying that $\langle x|C x\rangle =0$ is equivalent to $\langle x| (A-B) x\rangle =0$, which, by linearity, is equivalent to $\langle x|A x\rangle = \langle x |B x\rangle$. Regarding the second issue, you do not need to use that $x\in D$ implies $y$ and $z$ do, but that if $y,z \in D$, then $y\pm iz \in D$. This is trivially true because $D$ is a subspace. – Valter Moretti Jan 27 '22 at 13:21