4

Natural phenomena (e.g. heat flow) and systems (e.g. electrical circuits) are usually described using differential equations. Why is that?

Also, people usually use "constant-coefficient linear differential equations" of low order (one or two, rarely three). Is this use (constant coefficients, linearity, low order) justified by adequacy to the modeled phenomena, or just by model simplification?

There is also a seemingly equivalent version used when dealing with discrete systems (i.e. the independent variable is discrete), called a "difference equation" instead of a "differential equation":

  • Constant-coefficient differential equation:

    $$\sum_{k=0}^N a_k \frac{d^ky(t)}{dt^k}= \sum_{k=0}^Mb_k \frac{d^kx(t)}{dt^k}$$

  • Constant-coefficient difference equation:

$$\sum_{k=0}^N a_k y[n-k]= \sum_{k=0}^Mb_k x[n-k]$$

I can't see how the difference is equivalent to the derivative. I know that this might not be a physics question, but any insights would be appreciated.

  • Your first question is really philosophy. Physicists observe phenomena and try to describe them; why they follow the description isn't something we answer. – Kyle Kanos Jul 29 '17 at 14:05
  • All models are wrong. Some models are useful. A model is more likely to be useful if we can mathematically analyze it. And for much of the history of science, analysis meant paper-and-pencil methods instead of just throwing the thing at a numerical method and hoping. – The Photon Jul 29 '17 at 14:10
  • @KyleKanos I guess so... Nonetheless, I think that there must be some general principle that makes physical quantities depend on each other and especially on their respective derivatives. – Learn_and_Share Jul 29 '17 at 14:12
  • @MedNait sure there could be, but it's not really something physics describes. That is the philosophers job, ultimately. – Kyle Kanos Jul 29 '17 at 14:13
  • I also discuss some of the ideas of discretization of real space in this answer of mine that might help with the last question of yours. – Kyle Kanos Jul 29 '17 at 18:07

5 Answers

5

Given that time and space are believed to be continuous, one would expect the equations governing changes in time and space to reflect this continuity.

In other words, we can make sense of the concept of two points arbitrarily close in space or two moments arbitrarily close in time, something that difference equations do not capture.

The precise form of the differential equation depends on the physical phenomena and is not restricted to equations with constant coefficients.

(As an aside: the great mathematician Henri Poincaré was troubled by the quantized nature of some quantities, energy in particular, and asked precisely whether this would imply rewriting the laws of physics in the form of difference equations.)

ZeroTheHero
  • 45,515
5

Well, Aristotle said that physics is the study of change, and he then related change to motion, of which he said the primary kind is physical motion. This means that, although there was no kinetic theory of heat in his time, he would not have been surprised to be told that heat (a form of change) was related to motion (of atoms).

Now, since Newton's and Leibniz's discovery of calculus, physical motion has been quantified and expressed through differential equations; this is, in part, conventional, as the same can be expressed through integral equations.

The kinds of equations that come up are justified both by model simplification and by adequacy. For example, the first differential equation that one generally comes across is $F=ma$, which is a second-order differential equation with a constant coefficient, in this case the mass. Generally we take mass to be constant, but it may vary, as in a rocket that is rapidly burning fuel, so that its mass is changing.

Why order two? Well, this comes from observation, recalling that motion is unaffected by position (all places are alike) and is also unaffected by the frame velocity (all constant frame velocities are alike); together this is Galilean relativity, and Newton's equation is the simplest possible with regard to these constraints. The real puzzle here is why motion is independent of the underlying frame velocity. It turns out, after Einstein, that it is not: there is an absolute speed, not zero (the minimal possible speed, as one might have suspected intuitively), but the speed of light, which is a maximal speed.

However, not all equations in physics are like this; for example, the equations of GR are notoriously non-linear, and furthermore they form a system of partial differential equations.

Difference equations are not the same as differential equations, as the former rely on a discrete change and the latter on an infinitesimal change, but there are analogous techniques for solving such equations.
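For instance (an illustration of that analogy, not part of the original answer), the same exponential ansatz works in both settings, $y(t) = e^{\lambda t}$ in the continuous case and $y[n] = \rho^n$ in the discrete one, each reducing the equation to a characteristic polynomial:

$$y'' - 3y' + 2y = 0 \;\overset{y = e^{\lambda t}}{\Longrightarrow}\; \lambda^2 - 3\lambda + 2 = 0, \qquad y[n] - 3y[n-1] + 2y[n-2] = 0 \;\overset{y[n] = \rho^n}{\Longrightarrow}\; \rho^2 - 3\rho + 2 = 0.$$

The roots are $1$ and $2$ in both cases, giving $y(t) = c_1 e^{t} + c_2 e^{2t}$ and $y[n] = c_1 + c_2\,2^n$ respectively: exponentials in $t$ correspond to geometric sequences in $n$.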

Mozibur Ullah
  • 12,994
2

First let me try to answer several points that may interest you, and then we will go onto the answer.

1) Not all PDEs modelling physical phenomena are linear. There are cases where we have non-linear dynamics and we simply neglect the non-linear terms, because non-linear PDEs cannot be solved analytically in most cases. There are also many situations where we can approximate our equations very well by the linear case. For example, for wave propagation we have the PDE

$${\partial_t^2 f(x,t)}=c^2\partial_x ^2f(x,t)$$

which is linear and can be solved. This represents a wave. But not all waves are described by this equation: there are also non-linear waves! For example, the cnoidal waves and solitons you can sometimes observe in fluids can be described by the Korteweg–de Vries equation:

$$\partial_t f(x,t)=6f(x,t)\,\partial_x f(x,t) - \partial_x^3 f(x,t)$$

This is a non-linear PDE that has solutions which are physically waves, but which are not solutions of the linear wave equation.
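As a concrete example (added for illustration, not in the original answer): substituting $f = -u$ turns the equation above into the common sign convention $u_t + 6uu_x + u_{xxx} = 0$, whose well-known single-soliton solution is

$$u(x,t) = \frac{c}{2}\,\operatorname{sech}^2\!\left(\frac{\sqrt{c}}{2}\,(x - ct)\right), \qquad c > 0,$$

a localized hump travelling at speed $c$ without changing shape; taller solitons travel faster, behaviour with no counterpart in the linear wave equation.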

So, linear equations can account for a lot of phenomena, but there are many cases where they do not describe the physical systems accurately.

2) Differential equations and difference equations are not the same stuff. This point is very, very important. For example, take the following differential equation:

$$\dot{x} = 4rx(1-x)$$

This is the logistic equation (note, by the way, that it is non-linear). In ecology, it represents the growth of a population with density $x$ and growth rate $4r$. This equation has two fixed points, $x=0$ (extinction) and $x=1$ (the population reaches its maximum value). It turns out that extinction is unstable, while $x=1$ is stable, so for every initial condition $x_0 > 0$ the system ends up at $x=1$, following a simple, smooth trajectory.
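The stability claim is a one-line check (a worked step added for completeness): writing $\dot{x} = g(x)$ with $g(x) = 4rx(1-x)$ and $r > 0$,

$$g'(x) = 4r(1 - 2x), \qquad g'(0) = 4r > 0 \quad (\text{unstable}), \qquad g'(1) = -4r < 0 \quad (\text{stable}).$$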

However, you may think that in biology it is better to work with discrete data: I measure the population every week, and the population at a given time depends simply on the last record. Then take the logistic map,

$$x_{n+1}=4rx_n(1-x_n)$$

where $x_n$ is the population at week $n$. It turns out that this map is chaotic for some values of $r$, so it jumps between different values of $x_n$ without apparent order, forever.

Note that the behaviour of these two systems is very, very different, even though they represent the same process. In biology, the continuous case is often preferred, since it describes the population dynamics better.
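To make the contrast tangible, here is a minimal Python sketch (my own illustration, not part of the original answer) comparing the closed-form ODE solution with the iterated map at $r = 1$, a value for which the map is chaotic:

    import math

    r, x0 = 1.0, 0.2   # at r = 1 the map x_{n+1} = 4 x_n (1 - x_n) is chaotic

    # Closed-form solution of the continuous logistic ODE dx/dt = 4 r x (1 - x);
    # it relaxes smoothly to the stable fixed point x = 1.
    def ode_solution(t, k=4 * r):
        e = math.exp(k * t)
        return x0 * e / (1.0 + x0 * (e - 1.0))

    # Iterate the discrete logistic map and compare at integer times.
    x = x0
    for n in range(10):
        print(f"n={n}:  map={x:.4f}   ode={ode_solution(n):.4f}")
        x = 4 * r * x * (1 - x)

The ODE column settles at $1.0000$ within a few steps, while the map column keeps wandering over $(0,1)$ indefinitely.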

You may think that the problem is that if you write $\dot{x}$ as differences, then the equation you get for your system is different. However, when you take finite differences in PDEs using the explicit Euler method, you can get divergent behaviour, depending on your integration step $h$ and other constants of the problem (even for simple cases such as $\dot{x}=-rx$). Even worse, if you work in 2-D space, for example, you need to select a discretization for the Laplacian. The centred difference is very common, but you also have to take into account the symmetries of your problem: in a problem with a non-conserved flux in the X direction you may want to use forward (one-sided) differences for X and centred differences for Y. If you use centred differences for X as well, the numerical solution will not be correct, meaning that the discrete map and the continuous system are not equivalent!
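That divergence is easy to reproduce. A minimal sketch (my own, assuming $r = 1$ for concreteness): the explicit Euler update for $\dot{x} = -rx$ is $x_{k+1} = (1 - rh)\,x_k$, which is stable only when $h < 2/r$:

    r, x0 = 1.0, 1.0

    def euler(h, steps):
        """Explicit Euler for dx/dt = -r x, i.e. x_{k+1} = (1 - r h) x_k."""
        x = x0
        for _ in range(steps):
            x += h * (-r * x)
        return x

    # The exact solution decays to 0; Euler decays only if |1 - r h| < 1 (h < 2/r).
    for h in (0.5, 1.9, 2.1):
        print(f"h={h}:  x(t=20) ~ {euler(h, int(20 / h)):.3e}")

With $h = 0.5$ the iterate decays like the true solution, with $h = 1.9$ it decays while flipping sign at each step, and with $h = 2.1$ it grows without bound: the discrete map is no longer equivalent to the continuous system.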

So, two conclusions:

a) When you directly exchange the continuous system for a discrete one, they are not exactly the same thing,

b) When you take discrete differences, depending on how you take them you may obtain a map which is not equivalent to your original continuous system.

So, are there equivalent versions of differential equations using difference equations? Well, for numerical computations, if you do things well, they are the same in the limit $\Delta t\rightarrow0$, $\Delta x\rightarrow0$. Outside this limit, are they valid? The answer is no.

3) Why PDEs? Well, using the information I have written above let me give my opinion on why PDEs and not any other description.

3a) They are intuitive. It is really easy to write these equations from an intuitive point of view. For example, speed: let's say I increase it at a constant rate, and that the friction with the air is proportional to the speed. Then the speed is going to decrease due to the friction. This is written simply as

$$\dot{v} = a - f(v)$$

Now you only have to select how the friction depends on the velocity. If $f(v)$ is a smooth, well-behaved function, and we assume that for low velocities the friction is not too high, then you can Taylor expand at $v=0$ and simply take the linear part, $f(v)\simeq b v$.
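With that choice, $b = f'(0)$, and the linearized model can be solved in closed form (a worked step added for clarity). For $v(0) = 0$,

$$\dot{v} = a - bv \quad\Longrightarrow\quad v(t) = \frac{a}{b}\left(1 - e^{-bt}\right),$$

so the speed saturates at the terminal value $a/b$, exactly the behaviour one expects intuitively.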

You see: you have a physical phenomenon that you can easily model with a differential equation. Notice that I didn't invoke Newton's laws or any other physical concept; a bit of "common sense" will do.

3b) We work with undetermined fields, on which we need to impose conditions. These conditions are usually set on how the function evolves. See what I said before: speed changes depending on... That is, by definition, a derivative. Conditions on derivatives lead to differential equations.

If you work, for example, with probabilities, you often face functional equations, because the conditions are set on the arguments of the functions and not on how they change. Or you can even have functional differential equations, which is the case for master equations.

3d) Well... you don't like PDEs? Not a problem: you can turn all of them into operators and try to solve them using algebra. When the problem is linear, this is simply the diagonalization of the operator. However, for non-linear problems... well, what do you know about non-linear algebra? Because I think that physicists, in general, are not very well trained in non-linear algebra.

Take, for example, the Schrödinger equation. Heisenberg found it before, written in operators. Few people at the time were familiar with operator methods, so when Schrödinger found his equation, everybody left that complicated and strange thing and went for the traditional calculus everybody knew. Even now, which is easier: to diagonalize the infinite-dimensional momentum-operator matrix, or to solve a first-order linear differential equation?

So basically many of the PDEs could be substituted by other things, but those are more complicated, or we are not trained in them.
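As a small illustration of point 3d (my construction, not the answer's): the linear boundary-value problem $-f'' = \lambda f$ with $f(0) = f(L) = 0$ can be replaced by a finite matrix and diagonalized, recovering the eigenvalues $(k\pi/L)^2$:

    import numpy as np

    L, N = np.pi, 200                 # domain [0, L] with N interior grid points
    h = L / (N + 1)

    # Centred-difference approximation of -d^2/dx^2 with Dirichlet boundaries:
    # a symmetric tridiagonal matrix acting on the interior values of f.
    A = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2

    eigvals = np.linalg.eigvalsh(A)   # "solving the ODE" becomes diagonalization
    print("numerical:", eigvals[:4])  # ~ [1, 4, 9, 16] since L = pi
    print("exact:    ", [(k * np.pi / L)**2 for k in range(1, 5)])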

  • Interesting thoughts. Shouldn't it be $f(v)\approx bv$ in your friction model? (I suppose also that $b$ is the value of $f'(v)$ at $0$.) What do you mean by "Conditions over derivatives lead to differential equations"? – Learn_and_Share Jul 29 '17 at 21:48
  • @MedNait Yes, it was an error. I have edited it. A differential equation is simply an equation in which you have to find a function, given some constraints on the derivatives. For example, for $\dot{f}=-af$, I want to find a function $f(t)$ such that its derivative is the same as the function, multiplied by a constant. I can also put constraints on the arguments of the function, as in $f(t) = f(t^2) - 3$. This is a functional equation. As in the case of differential equations, you don't know $f$, but the constraints are on the argument, not on the derivatives of the function. – Victor Buendía Jul 30 '17 at 08:25
  • Okay. But there isn't the causal relationship between conditions on derivatives and obtaining a differential equation that your statement "Conditions over derivatives lead to differential equations" implies! By conditions I thought you meant boundary conditions? Your comment suggests, though, that "conditions" means a relationship or equation. – Learn_and_Share Jul 30 '17 at 08:40
  • Mmmmm, I am not sure I understood your problem with that statement correctly. What I mean is that if you put constraints on derivatives, the function that fulfils the constraints is the solution of a differential equation given by those constraints. I am not talking about boundary conditions here. Of course, if you add them, you must have a solution of the differential equation compatible with the boundaries, restricting your answer even more. – Victor Buendía Jul 30 '17 at 09:39
  • Okay. Got it! What was misleading is the use of the word conditions (that made me think of boundary conditions), otherwise, I think we're on the same page. – Learn_and_Share Jul 30 '17 at 11:20
1

I can't see how the difference is equivalent to the derivative. ... any insights would be appreciated.

I doubt that "equivalent" is the appropriate word here, but consider a definition of the derivative of $f(x)$:

$$f'(x) \equiv \lim_{\Delta x \rightarrow 0} \frac{f(x) - f(x - \Delta x)}{\Delta x}$$

The fraction $\frac{f(x) - f(x - \Delta x)}{\Delta x}$ is an approximation of $f'(x)$ that becomes exact in the limit $\Delta x \rightarrow 0$.

In the discrete domain, we have a function of a discrete variable, $f[n]$, and the (non-zero) difference $\Delta n$ cannot be infinitesimal; it must be finite. Indeed, the smallest $|\Delta n|$ is $1$.

Thus, it seems reasonable that the discrete domain analog of the derivative, as defined above, is a finite difference (the 1st backward difference)

$$f'[n] \equiv f[n] - f[n - 1]$$
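A quick numerical sanity check (my own sketch, not part of the answer): sampling $f(x) = \sin x$ with spacing $h$, the first backward difference divided by $h$ tracks the true derivative $\cos x$:

    import math

    h = 0.1                                    # sample spacing
    f = [math.sin(n * h) for n in range(100)]  # samples of a continuous signal

    # The first backward difference, scaled by the spacing, approximates f'(x) = cos(x).
    for n in (1, 25, 50, 99):
        diff = (f[n] - f[n - 1]) / h
        print(f"n={n:2d}:  backward diff = {diff:.4f}   cos(nh) = {math.cos(n * h):.4f}")

(In the purely discrete setting above, $\Delta n = 1$ and no division is needed; the scaling by $h$ only enters when the samples come from a continuous signal.)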

To further reinforce this, we know that

$$\int_{-\infty}^x\,f'(\tau)\,\mathrm{d}\tau = f(x)$$

which is to say that integration and differentiation are inverse operations. In the discrete domain, we have

$$\sum_{m = -\infty}^n\,(f[m] - f[m-1]) = f[n]$$

and so the discrete domain analog of integration is summation.
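This identity is just a telescoping sum, which is worth writing out once:

$$\sum_{m=M}^{n}\left(f[m] - f[m-1]\right) = f[n] - f[M-1] \;\longrightarrow\; f[n] \quad \text{as } M \to -\infty,$$

provided $f[m] \to 0$ as $m \to -\infty$, mirroring the assumption $f(-\infty) = 0$ in the continuous case.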


Actually, I know the reason behind the use of difference equation but I don't get why or how $d^ky/dt^k$ is equivalent to $y[n−k]$

They aren't equivalent. For example, the 2nd backward difference is

$$f''[n] = f'[n] - f'[n-1] = f[n] - 2f[n-1] + f[n-2] \ne f[n-2]$$

However, an equation like

$$af''[n] + bf'[n] + cf[n] = 0$$

can be written as

$$\alpha f[n] + \beta f[n-1] + \gamma f[n-2] = 0$$
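Expanding the differences and collecting equal delays makes the correspondence explicit:

$$\alpha = a + b + c, \qquad \beta = -(2a + b), \qquad \gamma = a.$$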

  • In the last part of your answer, the expressions you gave don't actually explain why the $k^{th}$-order (continuous-time) derivative is not equivalent to $y[n-k]$, since they relate a discrete-time derivative to a discrete value. They do, however, show that these quantities are not the same.

    I see the point you made with the last two equations. This may explain the use of the difference as a substitute for the derivative: the direct equivalence does not hold, but a sum of derivatives and a sum of delayed functions can be equivalent!

    – Learn_and_Share Jul 31 '17 at 14:46
0

Adopt the following axiom:

The Tomography Axiom:
The unfolding of a system over sufficiently small bounded regions is determined by its configuration on the boundary of the region, as a sufficiently smooth function(-al) of the configuration.

I won't answer the question for field theory, but we can consider the question for mechanics. Here, the regions are intervals of time. A bounded region is a connected interval that can, without loss of generality, be identified with $[0,1]$.

The system is described mathematically by $f(t)$ over $t ∈ [0,1]$. By "sufficiently smooth" we will mean that $f(t) = F(f(0), f(1), t)$, where $F$ has continuous derivatives up to the second order (plus whatever else is needed to get the job done). Define $$G(a,b,t) = \frac{∂F}{∂t}(a,b,t).$$ Then $f'(t) = G(f(0), f(1), t)$. Invert the combined function $$(f(t),f'(t)) = (F(f(0), f(1), t), G(f(0), f(1), t))\\⇒\\(f(0),f(1)) = (H(f(t),f'(t),t),I(f(t),f'(t),t)).$$ There are extra conditions on $F$ required for this, which we'll assume already went into the "smoothness" condition for $F$ and the "sufficiently small" condition for the interval that's been relabeled $[0,1]$.

Then the result is two first order differential equations for $f(t)$: $$H(f(t), f'(t), t) = f(0), \quad I(f(t), f'(t), t) = f(1),$$ i.e. a second order law of motion for $f(t)$.
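A minimal worked instance (added for concreteness, not in the original answer): take $F(a,b,t) = a(1-t) + bt$, linear interpolation between the boundary values. Then $G = f(1) - f(0)$, the inversion gives

$$f(0) = f(t) - t\,f'(t), \qquad f(1) = f(t) + (1-t)\,f'(t),$$

and differentiating either relation with respect to $t$ yields $f''(t) = 0$: the free particle.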

Example: The Kepler Problem
$$m\frac{d\mathbf{r}}{dt} = \mathbf{p},\quad \frac{d\mathbf{p}}{dt} = -\frac{μ\,\mathbf{r}}{|\mathbf{r}|^3}, \quad (μ,m > 0).$$ Let $$\mathbf{r}_- = \mathbf{r}\left(t_-\right),\quad \mathbf{p}_- = \mathbf{p}\left(t_-\right),\\ \mathbf{r}_+ = \mathbf{r}\left(t_+\right),\quad \mathbf{p}_+ = \mathbf{p}\left(t_+\right), $$ where $t_- < t_+$. Then, $$\mathbf{p}_± = m\frac{\mathbf{r}_+ - \mathbf{r}_-}{Λ} ∓ \frac{μΛ}{Π}\frac{\mathbf{r}_±}{|\mathbf{r}_±|},$$ where $$Π = \mathbf{r}_+·\mathbf{r}_- + |\mathbf{r}_+||\mathbf{r}_-|.$$

The constants of motion $$\mathbf{L} = \mathbf{r}×\mathbf{p}, \quad \mathbf{e} = \frac{\mathbf{p}×\mathbf{L}}{μm} - \frac{\mathbf{r}}{|\mathbf{r}|}, \quad H = \frac{|\mathbf{p}|^2}{2m} - \frac{μ}{|\mathbf{r}|},$$ which are related by $$|\mathbf{e}|^2 = \frac{2H}{m}\left(\frac{|\mathbf{L}|}{μ}\right)^2 + 1,$$ can be written as $$\mathbf{L} = m\frac{\mathbf{r}_+×\mathbf{r}_-}{Λ},\quad H = \frac{mΔ^2}{2Λ^2} - \frac{μΣ}{Π} + \frac{μ^2Λ^2}{2mΠ^2} = \frac{1}{2m}\left(\frac{mΣ}{Λ} - \frac{μΛ}{Π}\right)^2 - \frac{mΠ}{Λ^2},\\ \mathbf{e} = \left(\frac{mΣΠ}{μΛ^2} - 1\right)\frac{|\mathbf{r}_+|\,\mathbf{r}_- + |\mathbf{r}_-|\,\mathbf{r}_+}{Π} - mΠ\frac{\mathbf{r}_+ + \mathbf{r}_-}{μΛ^2}, $$ where $$Σ = |\mathbf{r}_+| + |\mathbf{r}_-|, \quad Δ = |\mathbf{r}_+ - \mathbf{r}_-|.$$

The quantity $Λ$ is explicitly dependent on the time and serves as the de facto clock. There is also a constant of motion that is explicitly dependent on the time and determines the time of closest passage (or the most recent such time, in the cases $H < 0$ and $|\mathbf{e}| < 1$ where the orbit is periodic).

You can find what $Λ$ is by differentiating $\mathbf{p}_+$ and $\mathbf{p}_-$ with respect to $t = t_+$ or $t = t_-$ and matching up with the starting differential equations.

A "sufficiently small" bounded time interval, here, means $|t_+ - t_-|$ is less than half the period of the orbit, if it is periodic, because at half the period $Π = 0$, and $Π$ is in the denominator in some of the expressions above.

NinjaDarth
  • 1,944