Let's not even talk about big bangs yet. Consider the simple non-linear ODE $\frac{dx}{dt}=-x^2$ with the condition $x(1)=1$. There is a unique maximal solution defined on a connected interval, which in this case is easily seen to be $x(t)=\frac{1}{t}$ for $t\in (0,\infty)$. Ouch. Even for such a simple-looking ODE, the non-linearity already implies that our solution blows up in a finite amount of time, and we can't continue 'backwards' beyond $t=0$. You as an observer living in the 'future', i.e. living in $(0,\infty)$, can no longer ask "what happened at $t=-1$?" The answer is that you can't say anything. Note that you can also cook up examples of ODEs for which solutions only exist on a finite interval of time $(t_1,t_2)$, and blow up as $t\to t_2^-$ or as $t\to t_1^+$.
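For completeness, here is the standard separation-of-variables computation behind that claim:
$$\frac{dx}{x^2}=-\,dt \;\Longrightarrow\; \frac{1}{x}=t-c \;\Longrightarrow\; x(t)=\frac{1}{t-c},$$
and $x(1)=1$ forces $c=0$. The solution through $t=1$ is therefore $x(t)=\frac{1}{t}$, defined only on $(0,\infty)$, and it blows up as $t\to 0^+$.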
The Einstein equations (which are PDEs, not merely ODEs) are a much bigger nonlinear mess. It is actually a general feature of nonlinear equations that solutions usually blow up in a finite amount of time. Of course, certain nonlinear equations do have global-in-time existence of solutions, but a priori there's no reason to expect them to have that nice property. For instance, in the FRW solution of Einstein's equations, the scale factor $a(t)$ vanishes as $t\to t_0$ (if you plug in some simple matter models you can even see this analytically), and after a bunch more calculations you can show this implies some of the curvature components blow up. What this says is that the Lorentzian metric cannot be extended in a $C^2$ sense. We can try to refine our notion of solution and singularity, but that would require a deep dive into the harshness of Sobolev spaces etc., and I don't want to open that can of worms here or now.
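To make this concrete with the simplest matter model (pressureless dust on flat spatial slices, placing the singularity at $t=0$): the Friedmann equation $\left(\frac{\dot a}{a}\right)^2=\frac{8\pi G}{3}\rho$ with $\rho\propto a^{-3}$ gives $a(t)\propto t^{2/3}$, so $a\to 0$ as $t\to 0^+$, and the Ricci scalar
$$R=6\left(\frac{\ddot a}{a}+\frac{\dot a^2}{a^2}\right)=\frac{4}{3t^2}\longrightarrow\infty \quad\text{as } t\to 0^+.$$
Since a curvature invariant is built out of two derivatives of the metric, this blow-up is what obstructs a $C^2$ extension.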
Anyway, my simple point is that it is very common to have ODEs whose solutions only exist for a finite amount of time, so your central claim,

> It seems to me that I could plug these into my differential equations and find out the state of the universe infinitely far back or infinitely in the future.

is just not true.
Edit:
@jensenpaull good point, and I was debating whether or not to elaborate on it originally, but since you asked, I’ll do so now. Are there functions, defined on a larger domain than $(0,\infty)$, which satisfy the ODE $\frac{dx}{dt}=-x^2$ and agree with $x(t)=\frac{1}{t}$ there? Absolutely! Away from the origin the ODE is solved by $x(t)=\frac{1}{t+C}$ (and also by $x\equiv 0$), where the constant $C$ may be chosen separately on each connected component of the domain. So we can keep $x(t)=\frac{1}{t}$ on $(0,\infty)$ and glue on whichever solution we like on $(-\infty,0)$, say $x(t)=\frac{1}{t+c}$ with $c\le 0$, or $x\equiv 0$. We have completely lost uniqueness. But why is this physically (and in some regards even mathematically) such a big deal?
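Just to make the loss of uniqueness explicit (this is purely illustrative), here are two such extensions sharing the same data at $t=1$:
$$x_1(t)=\frac{1}{t}\ \ (t\neq 0), \qquad x_2(t)=\begin{cases}\frac{1}{t}, & t>0,\\[4pt] 0, & t<0.\end{cases}$$
Both satisfy $\frac{dx}{dt}=-x^2$ at every point of $(-\infty,0)\cup(0,\infty)$ and both satisfy $x(1)=1$, yet $x_1(-1)=-1$ while $x_2(-1)=0$. The data at $t=1$ tell you nothing about $t=-1$.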
In Physics, we do experiments, and that means we only have access to things ‘here and now’ (let’s gloss over technical (but fundamental) issues and say we have the ability to gather perfect experimental data). One of the goals of Physics is to use this information to predict what happens in the future/past. But if we lose uniqueness, then our perfect initial conditions are still insufficient to nail down what exactly happened/will happen, which is a sign that we don’t know everything. We are talking about dynamics here, so our perfect knowledge ‘initially’ should be all that we require to settle existence and uniqueness of solutions (otherwise, our theory is not well-posed). So, anything which is not uniquely predicted by our initial conditions cannot in any sense be considered physically relevant. Btw, such ‘well-posedness’ questions (existence, uniqueness, and continuous dependence on the data, in a certain class of functions) are taken for granted in Physics, and occupy Mathematicians (heck, the Navier-Stokes Millennium problem is roughly speaking a question of well-posedness in a smooth setting). Dynamics is everywhere:
- Newton’s laws are 2nd order ODEs and require two initial conditions (position, velocity). From there, we turn on our ODE solver and see what the result is (see the small numerical sketch after this list).
- Maxwell’s electrodynamics: although in elementary E&M we simply solve various equations using symmetry, the fundamental idea is that these are (linear, coupled) evolution equations for a pair of vector fields, which means we prescribe certain initial conditions (and boundary conditions) and then solve.
- GR: initially, there was lots of confusion regarding what exactly a solution is. It wasn’t until the work of Choquet-Bruhat (and Geroch) that we finally understood the dynamical formulation of Einstein’s equations, and that we had a good well-posedness statement and a firm understanding of how the initial conditions (a 3-manifold, a Riemannian metric, and a symmetric $(0,2)$-tensor field which is to become the second fundamental form of the embedding, subject to the constraint equations) give rise to a unique maximal solution (which is globally hyperbolic).
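To make the ‘prescribe initial data, then turn on the ODE solver’ workflow concrete, here is a tiny numerical sketch (my own illustration; the mass, spring constant, and initial data are arbitrary placeholder choices): Newton’s second law for a harmonic oscillator, integrated forward from a prescribed position and velocity.

```python
# Newton's second law m x'' = -k x as an initial value problem:
# rewrite as a first-order system and integrate from (x0, v0).
# All numerical values here are arbitrary placeholder choices.
import numpy as np
from scipy.integrate import solve_ivp

m, k = 1.0, 1.0        # mass and spring constant
x0, v0 = 1.0, 0.0      # the two initial conditions Newton's law needs

def rhs(t, y):
    x, v = y
    return [v, -k * x / m]   # (dx/dt, dv/dt)

sol = solve_ivp(rhs, t_span=(0.0, 10.0), y0=[x0, v0],
                t_eval=np.linspace(0.0, 10.0, 6), rtol=1e-8, atol=1e-10)

for t, x in zip(sol.t, sol.y[0]):
    print(f"t = {t:5.2f}   x(t) = {x:+.4f}   exact cos(t) = {np.cos(t):+.4f}")
```

Exactly the same workflow applies to the nonlinear ODE at the top of this answer, except that there the solver necessarily gives up as it approaches $t=0$, because the true solution leaves every bounded set in finite time.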
So, my first reason for why we don’t continue past $t=0$ (though of course, the reasoning is not really specific to that ODE alone) is that dynamics should be uniquely predicted by initial conditions. Hence, it makes no physical sense to go beyond $t=0$. The second reason is that in physics, nothing is ‘truly infinite’, and if it is, then our interpretation is that we don’t yet have a complete understanding of what’s going on. So, rather than trying to fix our solution, we should fix our equations (e.g. maybe the ODE isn’t very physical). But before we throw out our equations, we may wonder: have we been too restrictive in our notion of solution? For instance, maybe it is too much to require solutions to be $C^1$. Could we instead require only weaker regularity, say $L^2=H^0$ or $H^1$? Well, $H^1$-regularity is indeed more natural for many Physical purposes (because $H^1$-regularity means ‘the energy stays finite’). However, for this solution, we can see that $\frac{1}{2}\int_0^{\infty}\left(|x(t)|^2+|\dot{x}(t)|^2\right)dt=\infty$. In fact, this is so bad that for any $\epsilon>0$, $\int_0^{\epsilon}[\dots]\,dt=\infty$, so the origin is a truly singular point at which even the energy blows up. So, there’s no physical sense in continuing past that point.
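Spelling out that last computation: with $x(t)=\frac{1}{t}$ we have $\dot{x}(t)=-\frac{1}{t^2}$, so for any $\epsilon>0$,
$$\frac{1}{2}\int_0^{\epsilon}\left(|x(t)|^2+|\dot{x}(t)|^2\right)dt=\frac{1}{2}\int_0^{\epsilon}\left(\frac{1}{t^2}+\frac{1}{t^4}\right)dt=\infty,$$
since already $\int_0^{\epsilon}t^{-2}\,dt$ diverges. The solution fails to be in $H^1$ on any neighbourhood of the origin, so even this weaker notion of solution doesn’t let us continue past it.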