
I posed a closely related question here, but it received a Tumbleweed badge. So I thought I would pose it from a different angle to see if I can elicit at least some thoughtful comments, if not answers.

The modeling of many physical systems utilizes the mathematical tools of calculus: the relationships between physical quantities are written in the form of differential equations.

Considering the time-dependent operations of integration and differentiation, the dynamics of a physical system may be expressed in one form or the other. A good example is Maxwell's equations, which are often written in both differential and integral forms.

Integral forms tend to express where the system has been up to where it is at present, while differential forms tend to express where a system is now and where it will be in the near future. So the two forms tend to imply a sense of causality.

So this brings me to my question: since we tend to observe a causal universe (at least at the macroscopic level), are integral forms a more natural approach to modeling systems?

I'm using the word 'natural' in the sense that the nature of the universe tends to work one way vs another. In this case I'm saying nature tends to integrate rather than differentiate to propagate change. We can write our equations in differential form, solve them and predict, and they are useful tools. But isn't mother nature's path one of integration?

I tend to believe this is so based on my experience simulating systems: simulating a system in integral form rather than differential form always seems to lead to better results.
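To make the claim concrete, here is a minimal numerical sketch (my own illustration, not code from the original post): stepping the toy system dx/dt = -x forward by explicit Euler integration accumulates only a small O(dt) error, while estimating the derivative from slightly noisy samples amplifies the measurement noise by a factor of 1/dt.

```python
import math
import random

# Toy example (an assumed illustration): the exact solution of
# dx/dt = -x on [0, 5] with x(0) = 1 is exp(-t).
dt, steps = 0.01, 500
exact = math.exp(-dt * steps)

# "Integral" simulation: accumulate the state by explicit Euler steps.
x = 1.0
for _ in range(steps):
    x += dt * (-x)                 # x(t+dt) = x(t) + dt * f(x(t))
euler_err = abs(x - exact)         # small discretization error, O(dt)

# "Differential" post-processing: finite-difference the (slightly noisy)
# samples; the noise term in the estimate is amplified by 1/dt.
random.seed(0)
noise = 1e-6
samples = [math.exp(-dt * k) + random.uniform(-noise, noise)
           for k in range(steps + 1)]
deriv0 = (samples[1] - samples[0]) / dt   # estimate of dx/dt at t=0 (true value: -1)
print(euler_err, abs(deriv0 + 1.0))
```

This does not prove anything about nature, of course; it only shows one well-known numerical reason why integrating tends to behave better than differentiating.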

docscience
  • I'm personally more interested in this, 'Simulating systems in an integral form rather than differential form always seems to lead to better results.' Can you give a specific example? – lemon May 12 '15 at 14:53
  • @lemon Constructing simulations using derivatives always leads to algebraic loops. Algebraic loops imply infinitely fast feedback of information, thus noncausal. Integral forms prevent this issue. – docscience May 12 '15 at 16:36
  • My research involves differentiating, so I'd say that the DEs are clearly the more natural approach. This then leads me to conclude that this is an opinion based question and, though interesting, not a good fit for the site. – Kyle Kanos May 12 '15 at 17:03
  • @KyleKanos So are you 'analyzing' or are you actually building simulations? Also the systems you are working with, do they involve feedback of any of the system states? Physical system models that involve feedback and that are built into simulation using differential forms always result in an algebraic loop. That's fact, not opinion. The only way to 'fix' the algebraic loop is to rewrite the system in the integral form or put in an arbitrary delay into the algebraic loop. But the second choice spoils the model. – docscience May 12 '15 at 18:36
  • 1
    Upon further review, you ultimately are introducing a false dilemma by forcing a choice upon us. Really, both equations are useful in doing the same thing (albeit differently) and neither are more fundamental than the other because we can transform from one to the other. – Kyle Kanos May 12 '15 at 18:58
  • @KyleKanos Now that's an opinion! I'm not forcing anything on anyone. Sorry I ruffled feathers. I'm seeing a connection between the forms and causality. I never said one is more useful than another. – docscience May 12 '15 at 19:07
  • Your question is asking which is more natural, integrals or derivatives--i.e., we have to pick one of the two (forcing a choice). I also did not say that you said one is more useful than another, read my statement again! – Kyle Kanos May 12 '15 at 19:12
  • @KyleKanos I explained what I intended by 'natural'. Sure, we can transform between the forms mathematically, but considering causality, does mother nature integrate or differentiate? – docscience May 12 '15 at 19:40
  • 2
    That's not a question physics can answer. Philosophers probably couldn't answer that one either too, so not sure where this question really belongs. – Kyle Kanos May 12 '15 at 20:02
  • I concur, asking what nature "really does" is not a physics question. It is not clear where the observable physical difference between "nature integrates" and "nature differentiates" would possibly lie, since both ways lead (without approximations) to the same physical results (integral and differential forms are equivalent, as you say). – ACuriousMind May 12 '15 at 21:27
  • My point is that they are not equivalent when you also consider causality. We interpret nature with theories of physics and model it with mathematics. So it's a valid question. – docscience May 12 '15 at 21:35
  • While philosophically interesting, I concur this is more an opinion-based question (since you hope to elicit "thoughtful comments", maybe you feel that way yourself). I think the distinction is somewhat arbitrary. You can't integrate the future - it hasn't happened yet. You arrived at the here and now because of integration of the differential equation governing everything that came before; but with imprecise initial conditions, you will have limited success propagating this into the future. I agree with Kyle - don't make us choose. Use what works for you. – Floris May 20 '15 at 02:29
  • @Floris thanks for your comments. I see both tools as very useful to the physicist or engineer, and I'm not begrudging the choice of one or the other. But as your examples show, 'mother nature' may be stuck with integration (please help me if I am still not seeing your and Kyle's point). When I create dynamic simulations I'm sort of playing mother nature in a virtual world. Sure, I can build my models using either form, but I always seem to get better results using integral forms. In the world of paper and pencil, either form will do, and often the differential form may prove more useful. – docscience May 20 '15 at 04:17
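The algebraic-loop point debated in the comments above can be sketched concretely (a toy construction of my own, not code from the thread): a memoryless feedback y = k·(u − y) references y at the same instant, so a simulator has no causal update order and must iterate the loop to a fixed point; placing an integrator in the loop instead yields an explicit update that uses only past values.

```python
# Toy illustration (assumed gain k and input u, not from the thread).
# Memoryless feedback: e = u - y, y = k * e  =>  y depends on itself at
# the same instant (an algebraic loop). A simulator must relax it to a
# fixed point:
k, u = 2.0, 1.0
y = 0.0
for _ in range(50):                    # damped fixed-point iteration
    y = 0.5 * y + 0.5 * k * (u - y)
print(y)                               # converges to k*u/(1+k)

# Integral form: put an integrator in the loop, dy/dt = k*(u - y).
# Each Euler step now uses only the past state -- an explicit, causal
# update, with no loop to solve.
dt, y_i = 0.001, 0.0
for _ in range(10000):
    y_i += dt * k * (u - y_i)
print(y_i)                             # the integrator drives the error to zero, y_i -> u
```

Note the two loops settle to different values (a static gain versus an integrator are different systems); the sketch is only about which formulation admits a causal, step-by-step update.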

1 Answer


There is a mathematical point that can be made here, one that in my opinion is related to a deeper understanding of what it means to solve a (partial) differential equation.

I will try to keep things simple, and consider only linear models.

Suppose that you have a space $X$ with some structure, for example a topology. We suppose that the state of our system is an element of that space, $x\in X$. The evolution of the state is given by a map $x(\cdot): I\to X$, where $I\subseteq \mathbb{R}$ represents the time interval. This is what "nature" offers at the most basic level, and it gives no a priori information about causality: nothing guarantees that there is a relation between $x(t_1)$ and $x(t_2)$ for $t_1<t_2$. Nevertheless, we usually observe one thing: the map $x(\cdot)$ is continuous with respect to time on the bounded intervals $I$ we are able to observe.

Given such a continuous map, we may ask what happens to its derivative, and we may define the derivative so as to preserve causality: $$\partial_t^- x(t)\lvert_{t=t_0}=\lim_{h\to 0^-}\frac{x(t_0+h)-x(t_0)}{h}\; ,$$ so the information that enters the derivative comes only "from the past". If the limit exists, we say that $x(t)$ is (left-)differentiable at $t_0$. Now suppose that we observe that, for every $t> t^-$, where $t^-$ is the minimum of our bounded interval, $\partial_t^-x(t)=A^- x(t)$ for some linear operator $A^-$ acting on $X$. Here you have your differential equation "from the left", which takes in only information about the past.

However, since the interval is bounded (and we can observe only bounded intervals of time), we can also define "a posteriori" the right derivative $\partial_t^+ x(t)$. Suppose that, again, for every $t< t^+$ we have $\partial_t^+ x(t)= A^+x(t)$, where $A^+$ is a linear operator that takes information "from the future". Since in most cases $A^-=A^+=A$, we can in that case infer (it is not provable, just an inference) that the differential equation $$\partial_t x(t)= Ax(t)$$ describes the state $x(t)$ at any time in a unique fashion (once its value at one point is fixed).
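The left derivative defined above can be checked numerically in a minimal sketch, assuming the scalar case $A = a$ (my toy choice of numbers): the difference quotient uses only past values of the trajectory, yet it already approaches $Ax(t)$.

```python
import math

# Assumed scalar system: x(t) = x0 * exp(a*t) solves dx/dt = a*x.
a, x0, t0 = -0.7, 2.0, 1.5
x = lambda t: x0 * math.exp(a * t)

# Left-sided difference quotients: t0 + h < t0, so only past ("causal")
# information about the trajectory is used.
errors = []
for h in (-1e-2, -1e-4, -1e-6):
    left_diff = (x(t0 + h) - x(t0)) / h
    errors.append(abs(left_diff - a * x(t0)))
print(errors)                          # shrinks as h -> 0^-
```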

So as you see, it is not necessary that the differential equation contain "information about the future": in fact we may restrict ourselves to the hypothesis that the system obeys $$\partial_t^-x(t)=A^- x(t)$$ for any $t> t^-$. It is observation that leads us to infer that the correct equation is actually $\partial_t x(t)= Ax(t)$.

Nevertheless, there is a difference between the differential and integral formulations of an equation, but it is more a semantic one. Consider the so-called Cauchy problem: $$\left\{\begin{aligned}&\partial_t x(t)= Ax(t)\\&x(0)=x_0\end{aligned}\right .\; .$$ For the equation to be satisfied, it is necessary that the derivative be defined everywhere and take values in the space $X$; in addition, $x(t)$ must lie in the domain $D(A)\subseteq X$ of $A$ for every $t$. So a solution, if it exists, will be of the type $x(t)\in C^0(I,D(A))\cap C^1(I,X)$, i.e. a differentiable map from an interval $I$ (that contains zero) to $D(A)$, whose derivative is continuous with values in $X$. This imposes restrictions on the map $x(t)$: it has to be quite regular (in mathematical terminology). From the point of view of numerical simulations, it is this required regularity, in my opinion, that gives the additional computational cost.

Are we able to formulate the equation in another way, one that admits more general solutions? A priori we need only that $x(t)\in C^0(I,X)$, i.e. that it be a continuous function. The answer is yes: we may write the integral equation $$x(t)=x_0 +\int_0^t Ax(s)ds\; .$$ Obviously it depends on the case, but it is often possible to find solutions of the equation in this form (especially for nonlinear systems) that need less regularity than before; for example, we may find solutions $x(t)\in C^0(I, X)$ for any $x_0\in X$. The weaker regularity requirement should, computationally, give better performance.
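The integral equation can also be attacked directly by Picard iteration; here is a sketch on an assumed toy scalar case $A = a$ (my choice of numbers), where each pass only ever requires the previous iterate to be continuous. The iterates converge to $x_0 e^{at}$, up to quadrature error.

```python
import math

# Assumed toy case: x(t) = x0 + \int_0^t a x(s) ds on [0, T], scalar a.
a, x0, T, n = 0.5, 1.0, 1.0, 200
ts = [T * k / n for k in range(n + 1)]
x = [x0] * (n + 1)                     # start from the constant function x0

for _ in range(20):                    # Picard passes
    integral, new = 0.0, [x0]
    for k in range(n):
        # trapezoidal accumulation of \int_0^t a x(s) ds
        integral += 0.5 * (a * x[k] + a * x[k + 1]) * (ts[k + 1] - ts[k])
        new.append(x0 + integral)
    x = new

print(abs(x[-1] - x0 * math.exp(a * T)))   # only quadrature error remains
```

Each Picard pass adds roughly one more term of the Taylor series of $e^{at}$, which is one standard way of seeing why the integral formulation tolerates rougher starting data.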

However, it is clear that a solution of the integral equation that is only $C^0(I, X)$ is not, strictly speaking, a solution of the differential equation; but any solution of the integral equation that is also $C^0(I,D(A))\cap C^1(I,X)$ is a solution of the differential equation. Conversely, every solution of the differential equation is also a solution of the integral one. So they are not exactly equivalent.

A concrete example may be the Schrödinger equation $\partial_t \psi = -iH\psi$ in $L^2$. Often $H$ is defined on a domain $D(H)$ smaller than the whole $L^2$; so the differential equation has solution only for maps $\psi(t)$ that are always in the domain $D(H)$. If, as usual, $H$ is self-adjoint, then the solution of the equation is written $\psi(t)=e^{-itH}\psi_0$, and this solves the integral equation for any $\psi_0\in L^2$. However, it solves the differential (usual) form only if $\psi_0 \in D(H)$.

yuggib
  • Excellent examination. So we know that nature (physical systems) tend towards minimizing energy. Does nature also minimize 'computational cost'? If that's so, then maybe that's the connection I'm trying to understand? – docscience May 14 '15 at 14:18
  • @docscience Well, I don't think that "computational cost" is something related to nature; it has more to do with our way of describing it. My comment was from an "entropy/information point of view": more regular structures need more information/rules to be defined, and thus probably more "computational cost" (I am not sure about that, just a supposition); while less regularity means less information and thus less cost. Since satisfying a differential equation is related to more regularity (that is a mathematical fact)... – yuggib May 14 '15 at 14:45
  • our way of modelling it needs more effort than dealing with an integral equation, which in general "contents itself" with less regular solutions. But I would not say that it is a feature of "nature", because the natural object is as it is, and a priori we do not know how regular it is; that is information we obtain by experimentation. Then we model our system according to the solution that best fits the observations. – yuggib May 14 '15 at 14:47