26

From Taylor's theorem, we know that a function of time $x(t)$ can be constructed at any time $t>0$ as $$x(t)=x(0)+\dot{x}(0)t+\ddot{x}(0)\frac{t^2}{2!}+\dddot{x}(0)\frac{t^3}{3!}+\cdots\tag{1}$$ provided we know an infinite number of initial conditions $x(0),\dot{x}(0),\ddot{x}(0),\dddot{x}(0),\ldots$ at $t=0$.

On the other hand, only two initial conditions, $x(0)$ and $\dot{x}(0)$, are required to obtain the function $x(t)$ by solving Newton's equation $$m\frac{d^2}{dt^2}x(t)=F(x,\dot{x},t).\tag{2}$$ I understand that (2) is a second-order ordinary differential equation, and hence solving it requires two initial conditions, $x(0)$ and $\dot{x}(0)$.

But how do we reconcile (2), which requires only two initial conditions, with (1), which requires an infinite amount of initial information to construct $x(t)$? How is it that the information from the higher-order derivatives at $t=0$ becomes redundant? My guess is that, because of the differential equation (2), the initial conditions in (1) are not all independent, but I'm not sure.

SRS
  • 26,333
  • See https://physics.stackexchange.com/q/399647/45664 for the same question applied to the wave equation, and a similar accepted answer. – user45664 Jun 26 '18 at 18:26

5 Answers

39

> On the other hand, only two initial conditions, $x(0)$ and $\dot x(0)$, are required to obtain the function $x(t)$ by solving Newton's equation

For notational simplicity, let

$$x_0 = x(0)$$ $$v_0 = \dot x(0)$$

and then write your equations as

$$x(t) = x_0 + v_0t + \ddot x(0)\frac{t^2}{2!} + \dddot x(0)\frac{t^3}{3!} + \cdots$$

$$m\ddot x(t) = F(x,\dot x,t)$$

Now, see that

$$\ddot x(0) = \frac{F(x_0,v_0,0)}{m}$$

$$\dddot x(0) = \frac{\dot F(x_0,v_0,0)}{m}$$

and so on, where $\dot F$ denotes the total time derivative of $F$ evaluated along the trajectory. Thus

$$x(t) = x_0 + v_0t + \frac{F(x_0,v_0,0)}{m}\frac{t^2}{2!} + \frac{\dot F(x_0,v_0,0)}{m}\frac{t^3}{3!} + \cdots$$

In other words, the initial values of the second- and higher-order time derivatives of $x(t)$ are all determined by $F(x,\dot x, t)$ together with $x_0$ and $v_0$.
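As an illustration of this bootstrapping (my sketch, not part of the original answer), here is how one might let sympy generate the higher Taylor coefficients from $F$, $x_0$, and $v_0$ alone, for an assumed damped, driven force $F = -kx - c\dot x + F_0\cos\omega t$:

```python
# A sketch: compute Taylor coefficients of x(t) from F, x_0, v_0 alone.
# The force law below is an illustrative assumption, not from the answer.
import sympy as sp

t, x, v = sp.symbols('t x v')
k, c, F0, w, m = sp.symbols('k c F_0 omega m', positive=True)
x0, v0 = sp.symbols('x_0 v_0')

F = -k*x - c*v + F0*sp.cos(w*t)    # assumed F(x, xdot, t)
f = F / m                          # xddot = f(x, v, t)

# d[n] holds d^n x/dt^n expressed as a function of (x, v, t); each new
# entry is the total time derivative of the previous one, with xdot
# replaced by v and xddot replaced by f (the equation of motion).
d = [x, v, f]
for n in range(3, 5):
    g = d[-1]
    d.append(g.diff(x)*v + g.diff(v)*f + g.diff(t))

# Fourth-order Taylor polynomial about t = 0, built from x_0 and v_0 only.
taylor4 = sum(dn.subs({x: x0, v: v0, t: 0}) * t**n / sp.factorial(n)
              for n, dn in enumerate(d))
print(sp.expand(taylor4))
```

Every printed coefficient is an expression in $x_0$, $v_0$, $m$, and the parameters of $F$, which is exactly the point: no further initial data enter.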

12

FGSUZ has given part of the answer in his comment, but he has not given full details.

Consider $\ddot{x}(t)=F(x,\dot{x},t)$ (absorbing the mass into $F$ for brevity). This expresses the second derivative in terms of lower-order quantities, so you can use it to eliminate the second derivative in favor of lower-order terms.

You can then take the time derivative of this equation. This gives the third-order time derivative of $x$ in terms of lower-order derivatives, and you can use the original equation and its derivatives to write everything in terms of at most the first derivative.

So, order by order, you can construct the Taylor expansion.

Now the general case may require you to deal with derivatives of $F(x,\dot{x},t)$. That is because you need the following (if I've recalled my calculus correctly).

$$\frac{d^3 x}{dt^3}=\dot{F}(x,\dot{x},t)= \frac{\partial F}{\partial x}\frac{dx}{dt} + \frac{\partial F}{\partial \dot{x}}\frac{d\dot{x}}{dt} +\frac{\partial F}{\partial t} = \frac{\partial F}{\partial x}\,\dot{x} + \frac{\partial F}{\partial \dot{x}}\,F +\frac{\partial F}{\partial t},$$

where the last equality substitutes the equation of motion $\frac{d\dot{x}}{dt}=\ddot{x}=F$.

This will often not be explicitly solvable. However, it too can be Taylor expanded in the same fashion, and at each order you keep only the corresponding order in the expansion of this equation.

So, order by order, you can construct the Taylor series. At each step you can use the equation of motion to remove all but the $x$, $\dot{x}$, and $t$ dependence. And so you will only need two initial conditions. Tedious, but possible.

The nice cases are those few where you can derive a simple formula that gives an easy recursion: for simple forms of $F$, the $(n+1)$th derivative may be a simple function of the $n$th derivative. In such cases the expansion is potentially useful in numerical solutions, since you can write the update in terms of the time step and a truncated Taylor series, as sketched below. Though, even then, there are often more efficient methods.
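For concreteness, a toy sketch of that numerical use (my example, assuming Hooke's law $F=-kx$, where the recursion $x^{(n+2)}=-\tfrac{k}{m}x^{(n)}$ closes the series):

```python
import math

def taylor_step(x, v, dt, k=1.0, m=1.0, order=8):
    """Advance (x, v) by dt with a truncated Taylor series, using the
    Hooke's-law recursion x^(n+2) = -(k/m) x^(n) for the coefficients."""
    d = [x, v]                          # d[n] = n-th derivative at step start
    for n in range(2, order + 1):
        d.append(-(k / m) * d[n - 2])
    new_x = sum(d[n] * dt**n / math.factorial(n) for n in range(order + 1))
    new_v = sum(d[n] * dt**(n - 1) / math.factorial(n - 1)
                for n in range(1, order + 1))
    return new_x, new_v

# Integrate x(0)=1, v(0)=0 for about one period; exact solution is cos(t).
x, v, dt, steps = 1.0, 0.0, 0.1, 63
for _ in range(steps):
    x, v = taylor_step(x, v, dt)
print(x, math.cos(steps * dt))   # the two values agree to high accuracy
```

Each step rebuilds the series afresh from the current $(x, v)$, which is exactly the sense in which two numbers suffice.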

  • "The nice cases are those few where you can derive a simple formula that gives an easy recursion." Nice insight! – SRS Jun 26 '18 at 07:10
9

A power series expansion does not hold for every function $f(t)$ or for every $t\in\mathbb{R}$, but only for real analytic functions, and only for $t$ within the radius of convergence. In particular, it cannot hold for functions in $C^2(\mathbb{R},\mathbb{R}^d)\smallsetminus C^3(\mathbb{R},\mathbb{R}^d)$, which do not even possess all the derivatives the series requires. Therefore it is not possible to define an arbitrary function by giving countably many real numbers $(x^{(n)}(0))_{n\in\mathbb{N}}$.

In particular, Newton's equation may have solutions in $C^2(\mathbb{R},\mathbb{R}^d)\smallsetminus C^3(\mathbb{R},\mathbb{R}^d)$ that therefore do not admit a power series expansion, or, more generally, solutions that are not real analytic for all times and therefore do not always admit a Taylor expansion. Nonetheless, these functions are uniquely determined by two real numbers, $x(0)$ and $\dot{x}(0)$, together with the requirement of being a solution of Newton's equation (i.e. they are also determined by $m$ and the functional form $F$ of the force).

When a solution of Newton's equation is real analytic, the values of the higher-order derivatives at zero are determined uniquely by the solution itself, and thus they too depend only on $x(0)$, $\dot{x}(0)$, $m$, and $F$; no further knowledge is required.
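A standard counterexample (a textbook fact, added here for illustration) shows how badly the power series can fail even for smooth functions:

$$f(t)=\begin{cases}e^{-1/t^{2}}, & t\neq 0,\\ 0, & t=0,\end{cases}\qquad f^{(n)}(0)=0\quad\text{for all }n,$$

so the Maclaurin series of $f$ converges everywhere, but to the zero function rather than to $f$: no list of derivatives at $t=0$, however long, reconstructs $f$.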

yuggib
  • 11,987
  • In pure mathematics, this is true. But this is physics. Any realistic physical system can be considered analytic, since not only is it impossible to make measurements to infinite precision, it is impossible even to define the quantities being measured to such precision. (Consider, for example, the volume of an object: beyond a certain precision, you can't even say where the boundary of the space filled by the object is.) – Paul Sinclair Jun 25 '18 at 16:34
  • @PaulSinclair I really do not see the point of your comment. The OP is asking about the mathematical properties of a function, and how it can be obtained either as the solution of a differential equation or as an expansion in a power series. If one should argue taking into account the limitations of accuracy in measurements, then one should just as well deny that the concept of a derivative is physical, since it involves a limit procedure with quantities becoming arbitrarily small. So even Newton's equation would not make sense as such, since it involves two derivatives. – yuggib Jun 25 '18 at 16:43
  • You did indeed completely miss the point. However, as I see that in editing I accidentally misphrased the second sentence, this is understandable. Mathematics is used in physics for modelling. Anywhere a non-analytic function is used to model a physical phenomenon, an analytic function could also be used with the same or better accuracy. Thus in physics it is entirely reasonable to assume all functions are analytic and not worry about the fiddly details that mathematicians must spend so much attention on. – Paul Sinclair Jun 25 '18 at 23:06
  • The solution to Newton's equation may not be analytic in the sense you define. However, a solution $x(t)$ of (2), e.g. $x(t)=A\cos(\omega t+\phi)$ for a harmonic oscillator, can always be constructed from a Taylor series of the form (1). So IMHO it's important to understand how the higher derivatives get automatically determined from (2). – SRS Jun 26 '18 at 07:07
  • @PaulSinclair It is not true in general that whenever a non-analytic function is used, it could be replaced by an analytic one from the physical point of view. It is often necessary from the physical perspective to use rough functions. An example that comes to mind is Brownian motion: you cannot explain the trajectory of a particle in Brownian motion as accurately using a smooth path as you can using a continuous nowhere-differentiable path. Not to mention the necessity of using distributions (which are usually not even functions) in classical electromagnetism and signal theory. – yuggib Jun 26 '18 at 07:43
  • @SRS The smoothness of the solution depends on the forces involved in the process. A harmonic oscillator has smooth solutions, but other potentials (e.g. the Newton/Coulomb potential) may have singularities, and so the solution may fail to be smooth. Nonetheless, once you have the solution, smooth or otherwise, you can compute its derivatives explicitly. Since the solution depends only on the parameters of the system (mass and force field) and on the initial conditions, the higher-order derivatives will also depend only on these. At least to me, it seems quite natural/straightforward. – yuggib Jun 26 '18 at 07:52
  • @PaulSinclair I see it exactly the other way around: because of measurement uncertainty, it doesn't make sense to even discuss analyticity, since there's no way to determine the higher derivatives and confirm they match the Taylor series. You can have a model that makes some prediction of a particular derivative based on a directly measurable value (often, 2nd-order differential equations), and this tells us that the function should be at least that often differentiable. Such a model may have an exact solution which happens to be analytic. But for some PDEs we actually know that the exact solutions develop discontinuities, even when you give an analytic initial condition! In that sense, they're fundamentally non-analytic. That doesn't prove that the "true physical function" is non-analytic (this is not a well-defined question), but it does prove that it doesn't make sense to just assume that everything is analytic. – leftaroundabout Jun 26 '18 at 12:07
4

Long story short, to get to the core of your question (I hope):

First, some functions are not equal to their Taylor series at $0$. But let's ignore that for this answer.

But, more importantly: the Taylor series representation has more degrees of freedom simply because not all functions are solutions of equation (2)! This should be rather obvious if you think about it: if I throw a ball, then to someone who knew no physics and had no real-world experience, its path could be anything; it could fly to Mars and return to me, it could vibrate between two points, it could trace your name in the air. Using (1) alone, you cannot discard these possibilities. But once you realize the ball follows Newton's equations, the possible paths are very limited.

JiK
  • 777
1

As an example, suppose we have Hooke's law, $F = -kx$. Writing the Taylor (technically Maclaurin, since it's centered at zero) series as

$$x(t) = \sum_{n=0}^{\infty}\frac{x^{(n)}(0)t^n}{n!}$$

where $x^{(n)}$ is the $n$th derivative of $x$, we have

$$x^{(2)}(t) = \sum_{n=2}^{\infty}\frac{x^{(n)}(0)t^{n-2}}{(n-2)!}$$

Shifting the index, this can be written as

$$x^{(2)}(t) = \sum_{n=0}^{\infty}\frac{x^{(n+2)}(0)t^{n}}{n!}$$

We can then write Hooke's law as

$$m\sum_{n=0}^{\infty}\frac{x^{(n+2)}(0)t^{n}}{n!} =-k \sum_{n=0}^{\infty}\frac{x^{(n)}(0)t^n}{n!}$$

Equating the coefficients of like powers of $t$, we have

$$m \frac{x^{(n+2)}(0)}{n!} =-k \frac{x^{(n)}(0)}{n!}$$

or

$$x^{(n+2)}(0) =-\frac{k}{m}x^{(n)}(0)$$

So given any $n$, we can find the $(n+2)$th coefficient in terms of the $n$th coefficient. This means that the even coefficients are determined by the 0th coefficient, and the odd coefficients are determined by the 1st coefficient. (The even powers correspond to a solution in terms of cosine, the odd powers correspond to a solution in terms of sine, and the general solution is a linear combination of the two.) This is known as a power-series solution of the ODE. In general, it won't be as simple as this. However, since the left-hand side of Newton's equation has only a second-order term and the right-hand side involves at most first derivatives, the $(n+2)$th coefficient can always be expressed in terms of the $n$th and $(n+1)$th coefficients, leaving the 0th and 1st coefficients as the initial conditions.

So, the key is that for two polynomials to be equal, the coefficients of corresponding powers must be equal, and this extends to Taylor series. This gives a recurrence relation expressing coefficients in terms of lower-order coefficients, and the infinite Taylor series collapses down to being determined by two coefficients.
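A quick check of this recurrence (my sketch, using sympy) confirms that the series it generates matches the expansion of the familiar closed-form solution $x(t)=x_0\cos\omega t+\frac{v_0}{\omega}\sin\omega t$ with $\omega=\sqrt{k/m}$:

```python
# Verify that the recursion x^(n+2)(0) = -(k/m) x^(n)(0) reproduces the
# Taylor expansion of x_0*cos(w t) + (v_0/w)*sin(w t), with w = sqrt(k/m).
import sympy as sp

t = sp.symbols('t')
k, m, x0, v0 = sp.symbols('k m x_0 v_0', positive=True)
w = sp.sqrt(k / m)

coeffs = [x0, v0]                       # x(0) and xdot(0)
for n in range(2, 8):
    coeffs.append(-(k / m) * coeffs[n - 2])
from_recursion = sum(cn * t**n / sp.factorial(n)
                     for n, cn in enumerate(coeffs))

exact = x0 * sp.cos(w * t) + (v0 / w) * sp.sin(w * t)
diff = from_recursion - sp.series(exact, t, 0, 8).removeO()
print(sp.simplify(diff))                # prints 0
```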