This is from the book Quantum Theory from First Principles: An Informational Approach, which I thought I'd give a read, since I found the authors' two papers on the same subject rather impenetrable. The book begins with a review of quantum theory from the mathematical perspective of Hilbert spaces and quantum information theory. The very first exercise in the book is this:
Now, the book does say it assumes a certain familiarity with Hilbert spaces, but it also contains a brief review of the concepts the authors consider important. And normally, when a book asks me to prove something, I assume it means using only the information already provided; after all, if I'm allowed to go online and look up any known property of Hilbert spaces to use in a proof, then just about any proof not worthy of a Nobel prize becomes trivial. However, the sum total of information the book has given us about Hilbert spaces prior to this point is this:
Obviously, getting from that to even understanding the notation used in the polarization identity here requires some prior knowledge, and I'm doing my best to remember my Dirac notation from undergraduate quantum mechanics. But I still don't see how we get from the identity they provide to a linear operator being completely specified by its expectation values on the set of all vectors of some fixed positive length. The solution provided for this exercise at the end of the chapter doesn't exactly help either:
Now, I will admit that I'm jumping in at the deep end here on subjects that are definitely more than a little rusty for me, but I think this explanation could really benefit from some elaboration. I believe I understand the second sentence: it is basically saying that since we are in a linear space, if we know some property of a particular vector, then we also know that property for any scalar multiple of that vector.
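If that reading is right, the scaling argument can be written in one line (this is my own reconstruction, not the book's notation): rescale any nonzero $|x\rangle$ onto the sphere of radius $s$, and the expectation just picks up the square of the scale factor:

```latex
% My reconstruction of the scaling step: for any |x> != 0, the vector
% |\lambda> = (s/||x||)|x> lies in S_s(H), and by (anti)linearity of the
% inner product in each argument,
\[
  \langle x|A|x\rangle
  = \frac{\|x\|^{2}}{s^{2}}
    \left\langle \tfrac{s}{\|x\|}\,x \,\middle|\, A \,\middle|\, \tfrac{s}{\|x\|}\,x \right\rangle
  = \frac{\|x\|^{2}}{s^{2}}\,\langle \lambda|A|\lambda\rangle .
\]
```

So the expectations on a single sphere of any fixed radius $s > 0$ already determine the diagonal element $\langle x|A|x\rangle$ for every vector $|x\rangle$.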
Where they lost me, though, was the first sentence. I almost wonder if the printers put the wrong equation in by mistake. How do we get from that elaborate sum in (2.6) to being able to express any matrix element $\langle y|A|x \rangle$ of $A$ in terms of its diagonal elements? How does that help us to completely specify $A$ for any $|\lambda \rangle$? And what does the "i.e." at the end mean? Is proving that $\langle \lambda|A|\lambda \rangle = 0$ for every $\lambda \in \mathbf{S}_s(\mathcal{H})$ iff $A = 0$ sufficient to completely specify $A$, or is that unrelated?
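For what it's worth, here is my best guess at what (2.6) is doing. I'm assuming it's the standard complex polarization identity in the physics convention (inner product linear in the ket); the book's version may differ in signs or phases:

```latex
% My guess at (2.6): the standard complex polarization identity,
% physics convention (bra antilinear, ket linear).
\[
  \langle y|A|x\rangle
  = \frac{1}{4}\sum_{k=0}^{3} i^{k}\,
    \langle x + i^{k} y \,|\, A \,|\, x + i^{k} y \rangle .
\]
% Expanding each diagonal term gives
%   <x + i^k y|A|x + i^k y>
%     = <x|A|x> + i^k <x|A|y> + i^{-k} <y|A|x> + <y|A|y>,
% and after multiplying by i^k and summing over k = 0..3, the identities
% sum_k i^k = 0 and sum_k (-1)^k = 0 kill every term except 4<y|A|x>.
```

If that's the right reading, then knowing the diagonal elements $\langle v|A|v\rangle$ for all $|v\rangle$ determines every off-diagonal element, which seems to be what "completely specified by its expectations" means; but I'd like confirmation that this is the intended argument.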
Am I missing something obvious here or is this just a very badly written exercise?
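In case it helps anyone answer: I did convince myself numerically that the polarization identity I have in mind at least holds. This is my own NumPy sketch, assuming the physics convention (bra conjugated, ket linear), not anything from the book:

```python
import numpy as np

# Numerical sanity check of the complex polarization identity
#   <y|A|x> = (1/4) * sum_{k=0}^{3} i^k <x + i^k y | A | x + i^k y>
# for an arbitrary (not necessarily Hermitian) operator A.

rng = np.random.default_rng(0)
d = 4

# A random complex operator and two random complex vectors.
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
x = rng.normal(size=d) + 1j * rng.normal(size=d)
y = rng.normal(size=d) + 1j * rng.normal(size=d)

def expectation(A, v):
    """Diagonal matrix element <v|A|v>; np.vdot conjugates its first argument."""
    return np.vdot(v, A @ v)

# Rebuild the off-diagonal element <y|A|x> from four diagonal expectations.
lhs = np.vdot(y, A @ x)
rhs = 0.25 * sum((1j ** k) * expectation(A, x + (1j ** k) * y) for k in range(4))

print(np.allclose(lhs, rhs))  # prints True
```

So at least with this convention, four diagonal expectations really do reconstruct any matrix element, for a generic non-Hermitian $A$.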