I strongly suspect that your question is somewhat ill-posed. You seem to be asking for a depth that may well exceed the boundaries of science itself.
But if you liked DanielSank's good answer, then I can hazard a personal opinion. I do not claim that this is the interpretation taken by standard physics, whatever that means.
It is, however, related to Einstein's disagreements with the Copenhagen views at the 1927 Solvay Conference. The issue is this: whatever it is that we designate as the wavefunction of the photon, this wavefunction can be macroscopically large at the time of detection. Nowadays we can think of bright laser light making a huge interference pattern, projected onto the entire wall of a lecture theatre. Manifestly macroscopically large. Yet when it is detected, a single atom on the wall may absorb the entire wavefunction's worth of one photon.
And it is a one-photon wavefunction, not multiple photons. The historical, textbook example of this is the G.I. Taylor experiment with feeble light, though we now know that attenuated light is not a true single-photon source, so it is insufficient as proof. Nonetheless, modern experiments with genuine single-photon sources show that, when there is only one photon in the system, you still get the interference pattern that multiple photons would have produced. Theoretically, we expect some correction terms when the photon occupation numbers get tremendously large, but I am not aware of any experimental verification of them. This is good news: instead of doing incredibly slow single-photon experiments, waiting for the interference pattern to build up to statistical significance, we can do quick experiments with bright lasers and get the same results.
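That statistical build-up can be illustrated with a small Monte Carlo sketch: each single-photon detection is one draw from the interference probability distribution, and the fringes emerge only in the accumulated histogram. All the numbers below (wavelength, slit separation, screen distance) are made-up illustrative values, not taken from any particular experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative double-slit parameters (my own choices, not from an experiment)
wavelength  = 500e-9   # m
slit_sep    = 50e-6    # m
screen_dist = 1.0      # m

# Far-field two-slit intensity (ignoring the single-slit envelope):
# I(x) proportional to cos^2(pi * slit_sep * x / (wavelength * screen_dist))
x = np.linspace(-0.05, 0.05, 2001)                 # screen positions, m
intensity = np.cos(np.pi * slit_sep * x / (wavelength * screen_dist))**2
prob = intensity / intensity.sum()                 # detection probability per position

def detect(n_photons):
    """Each 'photon' is one detection event drawn from the distribution."""
    return rng.choice(x, size=n_photons, p=prob)

few  = detect(100)        # looks like scattered dots
many = detect(100_000)    # histogram reproduces the fringes
hist, _ = np.histogram(many, bins=50, range=(-0.05, 0.05))
```

With 100 detections the pattern is barely visible; with 100,000 the fringe maxima and minima in `hist` are unmistakable, which is exactly the single-photon-at-a-time build-up described above.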
What Einstein was worked up about was this: how is the entire macroscopic system supposed to know that only one atom is going to absorb the photon? Is there a backward-propagating wave from the absorbing atom, telling all the other atoms that this photon is mine and not for you to absorb? It is very easy to see how that would conflict with Relativity's speed limit, or with causality. The 1927 Solvay Conference proceedings have now been sorted out and published as "Quantum Theory at the Crossroads", and it is a really nice read. You can see that, because the term "non-locality" had not yet been coined, the young Copenhagen upstarts simply did not understand what Einstein was talking about. Einstein had a 20-year head start on them all, not for nothing! They even ignored the sequel, EPR.
If you take the Copenhagen view, then there is some form of collapse of the photon wavefunction when the atom absorbs it, which contradicts other Copenhagen views holding that there is no collapse. Copenhagen is just such a mess. My own interpretation is that when a photon is absorbed, the entire wavefunction, across the entire universe, is removed at once. That is the rôle of the mathematical quantum operators that DanielSank's answer used. It is why we cannot use classical functions, and why the commutation relations of these quantum operators are of such great importance to QFT.
Note that this answer has very little to do with the photoelectric effect; your choice of experiment to discuss this is just sad. If you merely wanted the salient effects in the photoelectric experiment, you could get them with a quantum material and classical light. The only thing classical light cannot explain in the photoelectric effect is the absence of a time delay: a classical wave would need time to collect enough energy to kick out an electron. There is no such delay because photons really are quanta, so there is no need to wait; the quantum operator reaches across the universe to remove one photon's worth of energy, kicking an electron out immediately as the photon reaches the atom. If you want a better understanding of why the photon field must also be quantised, you should be thinking of other experiments.
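To see why the time delay matters, here is a back-of-the-envelope estimate of how long a classical wave would take to deliver one electron's worth of ejection energy to a single atom. The intensity, work function, and atomic cross-section below are rough illustrative values of my own choosing:

```python
eV = 1.602e-19                  # joules per electron-volt
work_function = 2.3 * eV        # ~2.3 eV, roughly sodium (illustrative)
intensity = 1e-6                # W/m^2: a very faint classical light source
cross_section = 1e-19           # m^2: rough atomic cross-section (assumption)

# Classically, the atom collects energy at the rate intensity * cross_section,
# so the wait before it has enough energy to eject an electron would be:
t_wait = work_function / (intensity * cross_section)   # ~3.7e6 s, over a month
```

Experimentally, photoemission at such faint intensities is still essentially prompt, and that is the one feature classical light cannot account for.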