This is a well-known discussion in Griffiths's Electrodynamics book. Cutting to the chase, the paradox between the apparent zero divergence of the vector field $\frac{\hat{\mathbf{r}}}{r^2}$ and the fact that its flux through any surface enclosing the origin is $4\pi$ is dealt with by introducing Dirac's delta. More specifically, using the two facts stated above, he arrives directly at the equation: \begin{equation} \nabla\cdot\dfrac{\hat{\mathbf{r}}}{r^2}=4\pi\delta^3(\mathbf{r}) \end{equation}
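Just to make the two facts concrete, here is a small symbolic sketch (using SymPy; the variable names are my own) that checks both sides of the paradox: the divergence vanishes identically away from the origin, while the flux through a sphere of any radius is $4\pi$.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)

# Cartesian components of r_hat / r^2 = r_vec / r^3
F = [x / r**3, y / r**3, z / r**3]

# Pointwise divergence, valid for r != 0: simplifies to 0
div = sum(sp.diff(Fi, v) for Fi, v in zip(F, (x, y, z)))
print(sp.simplify(div))  # 0

# Flux through a sphere of radius R: on the sphere, F . n_hat = 1/R^2
# and the area element is R^2 sin(theta) dtheta dphi, so R cancels out.
theta, phi, R = sp.symbols('theta phi R', positive=True)
flux = sp.integrate(
    sp.integrate((1 / R**2) * R**2 * sp.sin(theta), (theta, 0, sp.pi)),
    (phi, 0, 2 * sp.pi),
)
print(flux)  # 4*pi
```

So the symbolic computation reproduces exactly the tension the book points out: $0$ everywhere the expression is defined, yet $4\pi$ from the integral.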
Well, as far as Dirac's delta goes, it seems like pretty straightforward stuff. What I want to understand is the $4\pi$ part. We're trying to make sense of the divergence of the field, and he quite directly uses the flux.
In other words, at the singularity, he's using a result derived from the integral of the divergence, and not the divergence itself, to construct the desired equation. Shouldn't we be using something more intrinsically connected with the divergence in order to represent it explicitly?
I guess it's somewhat clear that I'm searching for the intuition behind Griffiths's argument. I have looked up some math-based proofs of the relation, but they don't add much as far as the physical understanding goes in this particular case. Anyway, I apologize in advance if the question comes across as rather naive, but nevertheless, there it is. Any genuine response will be welcome.