You seem to be starting from the integral formulation of Faraday's law; i.e. your hypothesis is that for every "nice" 2-dimensional surface $S$ (i.e. a compact oriented smooth $2$-dimensional submanifold of $\Bbb{R}^3$ with boundary), we have
\begin{align}
\int_{\partial S}\mathbf{E}(\mathbf{r},t)\cdot d\mathbf{l}&=-\frac{d}{dt}\int_S\mathbf{B}(\mathbf{r},t)\cdot d\mathbf{S}
\end{align}
I'm not sure why your professor is trying to invoke time-varying surfaces; doing so only obscures the matter, because if we consider a smoothly varying family of surfaces $\Sigma_t$, then $\frac{d}{dt}\int_{\Sigma_t}\mathbf{B}(\mathbf{r},t)\cdot d\mathbf{S}$ picks up extra terms and becomes more complicated.
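For the record, here is a sketch of what those extra terms look like (take this as a hedged aside, with signs depending on orientation conventions): if $\mathbf{v}$ denotes the velocity of the points of $\Sigma_t$ and we use $\nabla\cdot\mathbf{B}=0$, the flux transport theorem gives
\begin{align}
\frac{d}{dt}\int_{\Sigma_t}\mathbf{B}(\mathbf{r},t)\cdot d\mathbf{S}&=\int_{\Sigma_t}\frac{\partial \mathbf{B}}{\partial t}(\mathbf{r},t)\cdot d\mathbf{S}-\int_{\partial\Sigma_t}(\mathbf{v}\times\mathbf{B})\cdot d\mathbf{l},
\end{align}
so a fixed surface is exactly the case where the boundary term drops out.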
So, fix such a nice surface $S$ for the rest of the discussion. Then, Leibniz's integral rule (one of the simplest versions will suffice here) tells us that the $\frac{d}{dt}$ can be brought inside the integral and converted into $\frac{\partial}{\partial t}$, i.e.
\begin{align}
\int_{\partial S}\mathbf{E}(\mathbf{r},t)\cdot d\mathbf{l}&=\int_{S}-\frac{\partial \mathbf{B}}{\partial t}(\mathbf{r},t)\cdot d\mathbf{S}
\end{align}
Using Stokes' theorem and rearranging, we get
\begin{align}
\int_S\left(\nabla\times\mathbf{E}+\frac{\partial \mathbf{B}}{\partial t}\right)\cdot d\mathbf{S}&= 0
\end{align}
Note that an integral of a function over a single surface being equal to zero tells us nothing about the function. However, in our case, the integral is zero for EVERY possible surface $S$. If the (continuous) integrand were nonzero at some point, we could integrate over a small flat disc around that point, oriented along the integrand, and get a strictly positive result, a contradiction. Therefore, the integrand must vanish identically, and thus
\begin{align}
\nabla \times \mathbf{E}&=-\frac{\partial \mathbf{B}}{\partial t}.
\end{align}
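As a quick sanity check (not part of the proof), one can verify the differential form symbolically for a concrete field pair. The fields $\mathbf{E}=(B_0y/2,\,-B_0x/2,\,0)$ and $\mathbf{B}=(0,0,B_0t)$ below are my own illustrative choice, checked with sympy:

```python
import sympy as sp

x, y, z, t, B0 = sp.symbols('x y z t B0')

# Hypothetical example fields (my choice, not from the derivation above):
E = sp.Matrix([B0*y/2, -B0*x/2, 0])   # electric field
B = sp.Matrix([0, 0, B0*t])           # magnetic field, linear in time

def curl(F):
    # Componentwise curl with respect to (x, y, z)
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

# Faraday's law in differential form: curl E = -dB/dt
assert sp.simplify(curl(E) + B.diff(t)) == sp.zeros(3, 1)
```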
As for why we can swap the $\frac{d}{dt}$ with the integral to get a partial derivative, just observe the following. Define the flux (a scalar-valued function of time)
\begin{align}
b(t)&=\int_S\mathbf{B}(\mathbf{r},t)\cdot d\mathbf{S}
\end{align}
Then, writing out a difference quotient and using linearity of the integral, we have
\begin{align}
\frac{b(t+h)-b(t)}{h}&=\int_S\frac{\mathbf{B}(\mathbf{r},t+h)-\mathbf{B}(\mathbf{r},t)}{h}\cdot d\mathbf{S}
\end{align}
Thus, taking the limit $h\to 0$, we get
\begin{align}
b'(t)&=\lim\limits_{h\to 0}\frac{b(t+h)-b(t)}{h}\\
&=\lim\limits_{h\to 0}\int_S\frac{\mathbf{B}(\mathbf{r},t+h)-\mathbf{B}(\mathbf{r},t)}{h}\cdot d\mathbf{S}\\
&=\int_S\lim_{h\to 0}\frac{\mathbf{B}(\mathbf{r},t+h)-\mathbf{B}(\mathbf{r},t)}{h}\cdot d\mathbf{S}\tag{$*$}\\
&=\int_S\frac{\partial \mathbf{B}}{\partial t}(\mathbf{r},t)\cdot d\mathbf{S}
\end{align}
In the step $(*)$, we of course have to justify interchanging the limit with the integral. For this, one has to make some "regularity assumptions" on $\mathbf{B}$, which for all intents and purposes you can assume are satisfied in any application to physics. The most common way to justify the exchange is the dominated convergence theorem, but this is a fairly advanced theorem of analysis. Having said this, if we assume $\mathbf{B}$ is continuously differentiable, then one can give a 'simple' proof of the interchange, assuming one knows the basic $\epsilon,\delta$ definition of limits.
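If the symbolic argument feels too abstract, the interchange can also be checked numerically for a concrete smooth field. In the sketch below (my own illustrative setup, not from the derivation above), $S$ is the unit square in the $z=0$ plane and $B_z(x,y,t)=\sin(t)\,e^{-(x^2+y^2)}$; a centred difference quotient of the flux is compared against the integral of $\partial B_z/\partial t$:

```python
import numpy as np

# Hypothetical smooth field: B_z(x, y, t) = sin(t) * exp(-(x^2 + y^2)).
# Surface S: the unit square [0,1]^2 in the z = 0 plane, so B . dS = B_z dx dy.
n = 200
xs = np.linspace(0.0, 1.0, n)
ys = np.linspace(0.0, 1.0, n)
dx = xs[1] - xs[0]
dy = ys[1] - ys[0]
X, Y = np.meshgrid(xs, ys)
G = np.exp(-(X**2 + Y**2))            # spatial profile of B_z

def flux(t):
    # Riemann-sum approximation of b(t) = integral of B_z over S
    return (np.sin(t) * G).sum() * dx * dy

t0, h = 0.7, 1e-5
lhs = (flux(t0 + h) - flux(t0 - h)) / (2 * h)   # b'(t0) by centred differences
rhs = (np.cos(t0) * G).sum() * dx * dy          # integral of dB_z/dt over S

assert abs(lhs - rhs) < 1e-8                    # the two sides agree
```

Since both sides are approximated with the same quadrature, the only discrepancy comes from the finite-difference step, which is tiny for smooth fields; this is exactly the content of differentiating under the integral sign.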