
I understand how the field lines enter and leave the Gaussian surface. But my concern is that the field isn't constant everywhere on the Gaussian surface, i.e., there isn't an exactly canceling $-\vec{E}\cdot d\vec{a}$ corresponding to every $\vec{E}\cdot d\vec{a}$. I understand the idea of how the enlargement of the area compensates for the reduction in the field, but I want it mathematically. If you could prove it for a point charge and an irregular surface, the job would be done.

I have, however, made an attempt to prove it in the image below, where I basically show that the flux through an infinitesimally small surface $d\vec{a}$ doesn't depend on $r$. But I am not satisfied with it. Please help. Ignore the proof if it isn't good enough or doesn't make sense; I am just a fresher, hence a noob. [Proof]1
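Roughly, what I was trying to establish is that the flux through an infinitesimal patch depends only on the solid angle $d\Omega$ it subtends at the charge, not on $r$ or on the tilt of the patch:

$$d\Phi=\vec{E}\cdot d\vec{a}=\frac{q}{4\pi\varepsilon_0}\frac{\hat{r}\cdot d\vec{a}}{r^2}=\frac{q}{4\pi\varepsilon_0}\,d\Omega$$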

1 Answer


Suppose we have a point charge $q$ at the origin $\vec{r}=0$. Then choose an arbitrary Gaussian surface $S$ enclosing a volume $V$. By definition of flux, the electric flux through the surface is

$$\Phi=\iint_S\vec{E}\cdot\vec{dS}$$

By the divergence theorem, this is equal to

$$\Phi=\iiint_V\nabla\cdot\vec{E}\ dV\tag{1}$$

Then, since we know the form of $\vec{E}$, namely

$$\vec{E}=\frac{1}{4\pi\varepsilon_0}\frac{q}{r^2}\hat{r}$$

we can directly compute its divergence:

$$\nabla\cdot\vec{E}=\frac{q}{4\pi\varepsilon_0}\nabla\cdot\left(\frac{\hat{r}}{r^2}\right)=\frac{q}{\varepsilon_0}\delta^3(\vec{r})\tag{2}$$

where in the last step I have used the mathematical identity$^1$

$$\nabla\cdot\left(\frac{\hat{r}}{r^2}\right)=4\pi\delta^3(\vec{r}).$$
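A quick sketch of why this identity holds (details are in the linked post$^1$): away from the origin the divergence vanishes identically, while the flux of $\hat{r}/r^2$ through any small sphere of radius $\varepsilon$ around the origin is $4\pi$, so all of the divergence is concentrated at the origin with total weight $4\pi$:

$$\nabla\cdot\left(\frac{\hat{r}}{r^2}\right)=\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\cdot\frac{1}{r^2}\right)=0\quad(\vec{r}\neq 0),\qquad\oint_{r=\varepsilon}\frac{\hat{r}}{r^2}\cdot d\vec{S}=\frac{1}{\varepsilon^2}\,4\pi\varepsilon^2=4\pi.$$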

Inserting $(2)$ in $(1)$ we have

$$\Phi=\iiint_V\frac{q}{\varepsilon_0}\ \delta^3(\vec{r})\ dV$$

And finally, by the basic property of the Dirac delta$^2$: if the surface encloses the charge, i.e., $\vec{r}=0\in V$, the last integral gives $\Phi=q/\varepsilon_0$, independent of the shape of $S$; if the surface does not enclose the charge, i.e., $\vec{r}=0\notin V$, the integral vanishes.


$^1$ Take a look at this Math.SE post for details.

$^2$ Here it is: $$\iiint_{V}\delta^3(\vec{r}-\vec{r}_0)\ dV=\begin{cases}0\quad\text{if }\vec{r}_0\notin V\\1\quad\text{if }\vec{r}_0\in V\end{cases}$$
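As a purely numerical sanity check (not part of the argument above), one can also integrate the point-charge field over an irregular, star-shaped closed surface and verify that the flux comes out to $q/\varepsilon_0$ regardless of the shape. Below is a minimal Python sketch; the `radius` function defining the bumpy surface is an arbitrary, made-up example, and any positive radius function enclosing the origin should work.

```python
import numpy as np

# Point charge at the origin; units chosen so that K = q / (4*pi*eps0) = 1,
# hence the expected flux is q / eps0 = 4*pi*K ~ 12.566.
K = 1.0

# Irregular (star-shaped) closed surface around the origin, given by a radius
# function R(theta, phi).  The bumps here are arbitrary -- any positive R works.
def radius(theta, phi):
    return 1.0 + 0.3 * np.sin(3 * theta) * np.cos(2 * phi) + 0.2 * np.cos(5 * phi)

n_t, n_p = 400, 800
theta = np.linspace(0.0, np.pi, n_t)
phi = np.linspace(0.0, 2.0 * np.pi, n_p, endpoint=False)
T, P = np.meshgrid(theta, phi, indexing="ij")
R = radius(T, P)

# Cartesian coordinates of the surface points, shape (3, n_t, n_p).
pts = np.stack([R * np.sin(T) * np.cos(P),
                R * np.sin(T) * np.sin(P),
                R * np.cos(T)])

# Tangent vectors dr/dtheta and dr/dphi via finite differences, then the
# outward vector area element dS = (r_theta x r_phi) dtheta dphi.
dt, dp = theta[1] - theta[0], phi[1] - phi[0]
r_t = np.gradient(pts, dt, axis=1)
r_p = np.gradient(pts, dp, axis=2)
dS = np.cross(r_t, r_p, axis=0)

# Point-charge field E = K * r_hat / r^2 = K * r_vec / |r|^3.
r3 = np.sum(pts**2, axis=0) ** 1.5
E = K * pts / r3

flux = np.sum(E * dS) * dt * dp
print(flux, 4.0 * np.pi)   # ~12.566 for any choice of radius()
```

Changing `radius` to any other positive function leaves the result at roughly $4\pi\approx 12.566$, which is the numerical counterpart of the shape-independence derived above.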

Urb
  • Amazing! Thank you. I'm totally satisfied with the answer. However, to accept an answer, I am waiting for someone to derive it from the very basics, i.e., without using the Dirac delta and the divergence theorem; I mean just from the $\int\vec{E}\cdot d\vec{a}$ part of the theory. The reason is that it will be relatively easy to understand for a larger audience (those who don't know the Dirac delta and divergence). Otherwise I don't have any issue with your answer; it's totally perfect. – AMISH GUPTA Oct 05 '20 at 14:10