The uncertainty principle is stated in most textbooks and articles as $$ \Delta E \, \Delta t \geq \frac{\hbar}{2}.$$ It can be derived in many ways and in many different settings, most of them involving commutation relations with appropriate operators.
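For concreteness, one such route (just an example, not the only one) is the Mandelstam–Tamm argument: apply the Robertson relation to a time-independent observable $A$ and the Hamiltonian $H$, use Ehrenfest's theorem, and define $\Delta t$ as the time over which $\langle A \rangle$ changes by one standard deviation:
$$ \Delta A \, \Delta E \geq \frac{1}{2}\left|\langle [A,H] \rangle\right| = \frac{\hbar}{2}\left|\frac{d\langle A\rangle}{dt}\right|, \qquad \Delta t \equiv \frac{\Delta A}{\left|d\langle A\rangle/dt\right|} \;\;\Longrightarrow\;\; \Delta E \, \Delta t \geq \frac{\hbar}{2}. $$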
This is often interpreted to mean that $\Delta E$ is the amount of energy that can be "borrowed" and $\Delta t$ is the time for which it can be borrowed. The uncertainty principle is then used to argue that if $\Delta t$ is large (that is, if the energy is borrowed for a long time), then $$ \Delta E \sim \frac{\hbar}{2\Delta t}$$ is small, making the effect negligible on large time scales.
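To put a rough number on "negligible" (my own illustration, taking $\Delta t = 1\ \mathrm{s}$ as an everyday time scale):
$$ \Delta E \sim \frac{\hbar}{2\Delta t} \approx \frac{1.05\times 10^{-34}\ \mathrm{J\,s}}{2 \times 1\ \mathrm{s}} \approx 5\times 10^{-35}\ \mathrm{J} \approx 3\times 10^{-16}\ \mathrm{eV}, $$
which is indeed far too small to notice.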
However, the uncertainty relation is an inequality. If $\Delta E$ really is the amount of energy we can borrow and $\Delta t$ really is the time for which we can borrow it, then for large $\Delta t$ the quantity $\hbar/(2\Delta t)$ is small, and all the relation tells us is $$\Delta E \geq \frac{\hbar}{2\Delta t}.$$
This puts no upper limit on $\Delta E$ at all; in fact, it gives only a lower limit. $\Delta E$ must be at least $\hbar/(2\Delta t)$, but it could also be orders of magnitude larger and still satisfy the uncertainty principle. In fact, $\Delta E$ and $\Delta t$ can both be arbitrarily large (even infinite) and still satisfy it.
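To make that concrete with the same numbers as above: for $\Delta t = 1\ \mathrm{s}$, the inequality is satisfied not only by $\Delta E \sim 5\times 10^{-35}\ \mathrm{J}$ but also by, say, $\Delta E = 1\ \mathrm{J}$, since $1\ \mathrm{J} \times 1\ \mathrm{s} \gg \hbar/2$. Nothing in the inequality itself rules out the macroscopic case.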
My question, then, is: since this argument is cited so frequently in textbooks, what justification is given for interpreting $\Delta E$ and $\Delta t$ as relating to energy nonconservation? Am I missing something? Is there a reason for the switch from $\geq$ to $\sim$ in the relation? Why don't we observe the arbitrarily large violations of energy conservation (in both magnitude and duration) that this interpretation seems to predict? Why are only the minimum-uncertainty cases used in the literature?
I'm assuming there is a reason, and trying to figure it out. Thanks in advance for clarification!