139

It is often said that "Differentiation is mechanics, integration is art." We have more or less mechanical rules in one direction but not in the other (e.g. the product rule is simple, whereas integration by parts and u-substitution are often tricky).

There are all kinds of anecdotes alluding to this fact (see e.g. this nice one from Feynman). Another consequence is that differentiation is readily automated within a CAS, whereas integration often is not.

My question
We know that there is a deep symmetry based on the Fundamental theorem of calculus, yet there seems to be another fundamental structural asymmetry. What is going on here...and why?

Thank you

EDIT
Some people asked for clarification, so I will try to give it. The main objection to the question is that asymmetry between two inverse operations is more the rule than the exception in mathematics, so people are not very surprised by this behaviour.

There is no doubt about that - but, and that is a big but, there is always a good reason for that kind of behaviour! E.g. multiplying prime numbers is obviously easier than factoring the result, since for the latter you have to search for the factors. There the reason for the asymmetry is clear from how the operation and its inverse are defined.

With symbolic differentiation and integration the case doesn't seem to be that clear cut - this is why there are so many good discussions taking place in this thread (which, by the way, pleases me very much). It is this "why" at the bottom of things that I am trying to understand.

Thank you all again!

vonjd
  • 5,875
  • 32
    differentiation increases entropy, integration reduces it, so physics answers the asymmetry (just kidding). – Suvrit May 29 '11 at 16:49
  • Because differentiation is easier than integration. – C.S. May 29 '11 at 16:52
  • 1
    @Suvrit: Nice one - ROFL :-) – vonjd May 29 '11 at 16:56
  • 22
    The fundamental theorem of calculus tells you that they're inverses. That's not necessarily a symmetry. There are plenty of examples of functions that are easy to compute whose inverses are hard to compute: http://en.wikipedia.org/wiki/One-way_function – Qiaochu Yuan May 29 '11 at 17:00
  • 3
    @Vonjd: A similar question was asked on MSE. This thread may be of interest: http://math.stackexchange.com/questions/20578/why-is-integration-so-much-harder-than-differentiation – Eric Naslund May 29 '11 at 17:24
  • 1
    Differentiation is a "linearizing" limit process: it results in an approximation to the original function that is a linear map. Integration does not necessarily result in a linear map from the original function - quite the opposite, most of the time. That's my 2 cents on the question. – The Mathemagician May 29 '11 at 19:12
  • 1
    @Qiaochu: I understand that but my question is why this asymmetry is the case (the underlying reason) with differentiation and integration. – vonjd May 29 '11 at 19:54
  • Your question isn't clear. What's this "other" asymmetry you're referring to? – Ryan Budney May 29 '11 at 20:01
  • @Ryan: That it is easy and doable one way and hard or sometimes not even doable the other way round. I am trying to understand the structural (deep) reason for that asymmetry. When you say it isn't clear, what are the alternative interpretations, or what is missing? – vonjd May 29 '11 at 20:07
  • 27
    I think it just amounts to the existence of the chain rule (including in its multivariable forms) for differentiation, which makes differentiation easy; if it wasn't for that, you'd call differentiation an art as well.

    I.e., we think of "nice" functions as those built up by compositions from some basic stock of primitive functions we understand well; the chain rule allows us to take our knowledge of the derivatives of the primitive functions and easily build up from this knowledge of the derivatives of all other "nice" functions, in this sense. Integration has no such chain rule, so it's hard.

    – Sridhar Ramesh May 29 '11 at 20:17
  • (Note that, for example, the product rule (mentioned in the original question) is just the instance of the chain rule as applied to the particular multivariable function of multiplication) – Sridhar Ramesh May 29 '11 at 20:28
  • I think you really ought to qualify what you mean by "doable", since Riemann integrals exist in most "standard" axiom systems for mathematics. IMO the primary reason for the apparent "deep" nature of your question is largely due to improper formulation. If you talk about "doable" in terms of some class of elementary functions, then you see what the problem is, as has been mentioned by several people. – Ryan Budney May 29 '11 at 21:32
  • 20
    Although it's usually said that integration is harder than differentiation, there are many senses in which the opposite is true. As linear operators in functional analysis, integration is often much better behaved than differentiation. When doing exact real arithmetic, integration over an interval is computable but differentiation is not. (Eg. see homepages.inf.ed.ac.uk/als/Research/lazy.ps.gz) – Dan Piponi May 29 '11 at 23:48
  • 2
    It seems to me that before we discuss differentiation and integration, could someone first explain why division is harder than multiplication? (Why should it be surprising that a process is easier to carry out than its inverse?) – Deane Yang May 30 '11 at 03:57
  • 2
    @Deane: Which one is the inverse? It's not surprising to me that one should be harder, but you can still ask why integration in particular. – Ryan Reich May 30 '11 at 04:06
  • 3
    Ryan, why is division harder than multiplication? – Deane Yang May 30 '11 at 04:47
  • 2
    @Deane, if that's a question you're interested in and think is appropriate for MO I encourage you to start a new thread. As-is, you seem to be derailing the discussion here. – Ryan Budney May 30 '11 at 06:05
  • Not meaning to pile on Deane here, but the question is more, "why is integration art and differentiation science?", rather than, "why is integration harder than differentiation?" The OP's second paragraph says more on this, and I came away thinking that the OP is more interested in the question of whether antidifferentiation algorithms exist or if not, why not. – Todd Trimble May 30 '11 at 11:38
  • 3
    @Deane: For the same reason that integration is harder than differentiation: because of how we express our functions/numbers. Elementary functions are perfectly suited for differentiation because we have a rule for each operation. Likewise, writing numbers in decimal (or any) base is well-suited for multiplication because we have distributivity in both factors. If you want to divide easily, write a number as its prime factorization (though then you can't add). – Ryan Reich May 30 '11 at 16:10
  • 1
    I concede that my comments were a bit too testy. Although I think the question is a valid and good one, I think it is a lot more subtle and difficult than people are accounting for. Terry Tao's answer below is pretty good but still feels incomplete to me. – Deane Yang May 30 '11 at 18:04
  • Ryan (Reich), I'm not sure I buy your argument. Do you have a different way to express functions/numbers that changes the situation you describe? – Deane Yang May 30 '11 at 18:06
  • @Deane: Thank you, I feel the same way: "[...] I think it is a lot more subtle and difficult than people are accounting for. Terry Tao's answer below is pretty good but still feels incomplete to me." - couldn't have said it better. – vonjd May 30 '11 at 18:15
  • 1
    I just got around to taking a closer look at Todd Trimble's answer. That (like Tao's) is a really good answer but again feels a little incomplete to me. In the end both his and Tao's answers seem to all come down to the fact that there are algorithmic ways to differentiate a function built out of old ones in standard ways (composition, product, quotient) but there are no analogous algorithms for integrals. This explains why differentiation is "easy", but I'm still not sure whether that explains why integration is "hard". I do like Todd's comparison to squaring and square-rooting. – Deane Yang May 30 '11 at 18:26
  • 2
    @Deane: Power series; works for any function you would otherwise have trouble integrating symbolically. See my comment to Thierry Zell's answer. For fractions: prime factorization, like I said. But no matter how you write things, you give up some convenience (ease of composition/identification of closed form, or ease of addition) – Ryan Reich May 30 '11 at 20:18
  • @Deane A bit tongue in cheek, but: division of real numbers is harder than multiplication because it's secretly still multiplication, just with the additional step of inverting. Multiplication by $x$ is a continuous function, but multiplication by $x^{-1}$ is no longer continuous near zero. But if you look at, say, the circle group, multiplication and division are clearly equally difficult! – Peter Luthy May 30 '11 at 21:33
  • 3
    Ryan (Reich), thanks for the reply. You're completely right about this, and I rescind my doubts. – Deane Yang May 30 '11 at 23:31

17 Answers

144

One relevant thing here is that you are referring to differentiating and integrating within the class of so-called elementary functions, which are built recursively from polynomials and the complex exponential and logarithmic functions by taking closure under the arithmetic operations and composition. Here one can argue by recursion to show that the derivative of an elementary function is elementary, but the antiderivatives might not be elementary. This should surprise one no more than the fact that the square of a rational number is rational, but the square root of a rational number might be irrational. (The analogy isn't completely idle, as shown by differential Galois theory.)
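
To see how the recursion goes, here is a minimal sketch in Python (the tuple encoding of expressions is made up for illustration and is not any particular CAS's representation). Every branch returns another expression in the same grammar, which is exactly the closure property: the derivative of an elementary function is elementary.

    # Toy expression grammar: ('x',), ('const', c), ('add', a, b), ('mul', a, b),
    # ('exp', a), ('log', a), ('pow', a, k) with k an integer constant.

    def d(e):
        """Differentiate expression e with respect to x, by recursion on structure."""
        op = e[0]
        if op == 'x':
            return ('const', 1)
        if op == 'const':
            return ('const', 0)
        if op == 'add':                      # linearity
            return ('add', d(e[1]), d(e[2]))
        if op == 'mul':                      # product rule
            return ('add', ('mul', d(e[1]), e[2]), ('mul', e[1], d(e[2])))
        if op == 'exp':                      # chain rule through exp
            return ('mul', d(e[1]), ('exp', e[1]))
        if op == 'log':                      # chain rule through log
            return ('mul', d(e[1]), ('pow', e[1], -1))
        if op == 'pow':                      # power rule plus chain rule
            return ('mul', ('const', e[2]), ('mul', ('pow', e[1], e[2] - 1), d(e[1])))
        raise ValueError(op)

    # Example: d(('mul', ('x',), ('exp', ('x',)))) is the derivative of x*exp(x),
    # and is again built from the same constructors.

There is, by design, no analogous recursion for antidifferentiation: the antiderivative of a product or of a composite is not determined by the antiderivatives of the pieces.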

In other words, the symmetry you refer to is really based on much wider classes of functions (e.g. continuous and continuously differentiable functions), far beyond the purview of the class of elementary functions.

But let's put that aside. The question might be: is there a mechanical procedure which will decide when an elementary function has an elementary antiderivative (and if it does, exhibit that antiderivative)? There is an almost-answer to this, the so-called Risch algorithm, which I believe is a basis for many symbolic integration packages. But see particularly the issues mentioned in the section "Decidability".
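
For a quick illustration in sympy (whose integrate routine, as far as I know, combines a partial implementation of Risch's method with other heuristics): differentiating an elementary function stays inside the elementary class, while integrating can leave it.

    import sympy as sp

    x = sp.symbols('x')

    sp.diff(sp.exp(-x**2), x)        # -2*x*exp(-x**2): still elementary
    sp.integrate(sp.exp(-x**2), x)   # sqrt(pi)*erf(x)/2: erf is not elementary,
                                     # i.e. no elementary antiderivative exists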

There is another interesting asymmetry: in first-order logic, derivatives are definable in the sense that given some expansion of the structure of real numbers, say for example the real numbers as an exponential field, the derivative of a definable function is again definable by a first-order formula. But in general there is no purely first-order construction of for example the Riemann integral (involving quantification over finer and finer meshes). I seem to recall that there are similar difficulties in getting a completely satisfactory notion of integration for recursively defined functions on the surreals, due in part to the incompleteness (i.e., the many holes) in the surreal number line.

Todd Trimble
  • 52,336
  • 14
    Let me add that I highly recommend Manuel Bronstein's book "Symbolic Integration I" (2005) on this topic. It is the closest you will find to a detailed account of the issues around implementing the Risch algorithm. – Andrés E. Caicedo May 29 '11 at 18:56
  • 1
    A similar question was asked in mathSE in February and I gave pretty much the same answer. At least the first part. Yours is much more complete. – lhf May 30 '11 at 01:10
  • In this CS MIT video lecture (at 3:56) this issue is briefly discussed in the terms of pattern matching. "If I'm trying to produce integrals […] more than one rule matches. […] I may get to explore different things. Also, the expressions become larger. […] There's no guarantee that [it] will terminate, because we will only terminate by accidental cancellation. So that's why integrals are complicated searches and hard to do." However, the things you write are way above my head, so I’m not sure whether this adds anything to the discussion. – Lenar Hoyt Aug 21 '13 at 18:32
  • @mcb I'm not sure how to pitch a response since you say what I wrote is above your head. I did watch a few minutes of the video. A key word is 'recursion': one can define formal differentiation on expressions that represent elementary functions by recursion, focusing on the last operation (addition, multiplication, composition, etc.) used to build an expression out of subexpressions, and recursively defining the formal derivative of the expression in terms of the subexpressions and their derivatives. These procedures are easy and are routinely taught to all calculus students. The Risch (cont.) – Todd Trimble Aug 29 '13 at 20:20
  • 2
    The Risch algorithm, which I have yet to thoroughly understand myself, is also a recursive procedure, but the recursive calls have a more complicated structure. Risch's paper is well worth having a look at: http://www.ams.org/journals/tran/1969-139-00/S0002-9947-1969-0237477-8/S0002-9947-1969-0237477-8.pdf – Todd Trimble Aug 29 '13 at 20:25
  • 1
    @ToddTrimble Thanks for the link. In the lecture the mechanical difficulty of integration is explained by the fact that the rules for differentiation produce expressions which cannot be uniquely matched with the inverse rules (for example whether a $+$ was produced by the linearity of differentiation or by the product rule). I’m not sure whether this is trivial or a fact that has not been considered here. – Lenar Hoyt Aug 29 '13 at 22:34
115

I want to try a different way of answering the question of why differentiation is somehow the "primary" operation and anti-differentiation the inverting operation. I'll try to make it as elementary as possible.

First let me say what I don't count as an answer. (This is not supposed to be a new contribution to the discussion -- just making my starting point explicit.) It's not enough to point out that differentiation obeys the product and chain rules and explicit differentiability of 1/x (thereby giving the quotient rule as well) and that anti-differentiation doesn't. Somehow one wants an answer that explains why we should have expected this in advance. I was going to say something about differentiation tending to simplify functions, but then realized that that's not really true: it may be true for polynomials but there are lots of functions for which it's false.

The small suggestion I wanted to make was to discretize the question and think about summation versus taking difference functions. Here the situation is slightly confusing because expressions like $\sum_{n=1}^Nf(n)$ tend to come up more often than expressions like $f(n)-f(n-1)$. But let's forget that and think about what it is that we have to do if we want to work out $\sum_{n=1}^Nf(n)$ explicitly. Usually we need to guess a function g and prove that $g(n)-g(n-1)=f(n)$ for every $n$, from which it follows by induction that $g(N)=\sum_{n=1}^Nf(n)+g(0)$. This (the finding of the function $g$) is a discrete analogue of anti-differentiating.

Looked at this way, to work out the sum we have to solve the functional equation $g(n)-g(n-1)=f(n)$ (where we are given $f$ and are required to solve for $g$). By contrast, when we work out the difference function we are solving the similar looking, but much easier, functional equation $g(n)=f(n)-f(n-1)$. It's much easier because the unknown function is involved only once: indeed, in a sense there's nothing to solve at all, but experience shows that we can usually simplify the right-hand side. Of course, with the first equation we can't simplify the left-hand side in a similar way because we don't know what $g$ is.

It's a bit like the difference between solving the equation $x^2+x=10$ and solving the equation $x=10^2+10$ (that is, the difference between algebra and arithmetic). So here is a real sense, in a closely analogous situation, where one operation is direct and the other is indirect and involves solving for something.
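
To make the contrast concrete, here is a toy check in sympy (the particular $f$ is arbitrary): the difference function is obtained by mere simplification, whereas the sum requires producing a $g$ and verifying the functional equation.

    import sympy as sp

    n, N = sp.symbols('n N', integer=True, positive=True)
    f = n**2                                   # an arbitrary example

    # Differencing is direct: just simplify f(n) - f(n-1).
    difference = sp.simplify(f - f.subs(n, n - 1))          # 2*n - 1

    # Summing is indirect: we must find a g with g(n) - g(n-1) = f(n).
    g = n * (n + 1) * (2 * n + 1) / 6                        # a guessed closed form
    assert sp.simplify(g - g.subs(n, n - 1) - f) == 0        # verify the equation
    # By induction, sum_{n=1}^{N} f(n) = g(N) - g(0):
    assert sp.simplify(sp.summation(f, (n, 1, N)) - (g.subs(n, N) - g.subs(n, 0))) == 0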

gowers
  • 28,729
  • 2
    Thank you, I really like these kinds of intuitions! My question though would be why is it the other way round numerically? WLOG solving $x^2+x=10$ for x is obviously also numerically harder than solving $x=10^2+10$. Here symbolic and numeric complexity coincides but in the case of differentiation/integration often not - why? – vonjd May 31 '11 at 08:32
  • 2
    The reason, I think, is that the analogy between the two situations isn't all that good. When you solve numerically you are trying to find a number. That coincides with what you are looking for when you solve a quadratic equation, whereas when you are antidifferentiating you are looking for a function, and numerical methods just give you function values. – gowers Jun 01 '11 at 21:15
  • 2
    Good, but can you provide a vision how this analogy could work in the case of differentiation? – Anixx May 03 '14 at 08:28
85

Differentiation is inherently a (micro-)local operation. Integration is inherently a global one.

EDIT: the reason that locality helps with symbolic differentiation of elementary functions (and is not present to help with symbolic integration of the same functions) is that the basic arithmetic operations used to build elementary functions are simpler locally than they are globally. In particular, multiplication and division become linear in the infinitesimal variables,

$$ (f + df) (g + dg) \approx fg + f dg + g df$$

$$ \frac{f+df}{g+dg} \approx \frac{f}{g} + \frac{g df - f dg}{g^2},$$

leading of course to the product and quotient rules which are two of the primary reasons why symbolic differentiation is so computable.
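
This local linearity is exactly what dual-number ("forward-mode") arithmetic implements. Here is a minimal sketch in Python (the Dual class and the example function are invented for illustration) in which the two expansions above are coded literally, and the product and quotient rules come out of purely local, O(1) arithmetic:

    class Dual:
        """Numbers f + df*eps with eps**2 = 0: a value and an infinitesimal part."""
        def __init__(self, val, d=0.0):
            self.val, self.d = val, d

        def __add__(self, other):
            return Dual(self.val + other.val, self.d + other.d)

        def __mul__(self, other):      # (f + df)(g + dg) = fg + (f dg + g df)
            return Dual(self.val * other.val,
                        self.val * other.d + self.d * other.val)

        def __truediv__(self, other):  # f/g + (g df - f dg)/g**2
            return Dual(self.val / other.val,
                        (other.val * self.d - self.val * other.d) / other.val**2)

    # Differentiate h(x) = x*x / (x + x*x) at x = 2 by seeding the infinitesimal part:
    x = Dual(2.0, 1.0)
    h = (x * x) / (x + x * x)
    print(h.val, h.d)    # 0.666..., 0.111...: value and derivative, locally computed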

Note that properties such as holomorphicity, mentioned in the comments, are not quite as local as differentiation, because in order to be holomorphic at a point, one must not only be complex differentiable at a point, but also complex differentiable on a neighbourhood around that point. (In the jargon of microlocal analysis, it is merely a local property rather than a microlocal one.)

Finally, the reason why the inverse of a local operation (differentiation) is global is because differentiation is not locally injective (constants have zero derivative). In order to eliminate this lack of injectivity, one needs to impose a global condition, such as a vanishing at one endpoint of the domain.

SECOND EDIT: Another way to see the relationship between locality and computational difficulty is to adopt a computational complexity perspective. A Newton quotient, being local, only requires O(1) operations to compute. On the other hand, a Riemann sum, being global, requires O(N) operations to compute, where N is the size of the partition. This helps explain why the former operation preserves the class of elementary (or bounded complexity) functions, while the latter does not.

(This is only if one works in the category of exact calculation. If one is instead interested in numerical calculation and is willing to tolerate small errors, then the situation becomes reversed: numerical integration, being more stable than numerical differentiation, usually has lower complexity thanks to tools such as quadrature. Here, one can turn the global nature of integration to one's advantage, by allowing one to largely ignore small-scale structure, assuming of course that the integrand enjoys some regularity.)
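
As a toy numerical illustration of that reversal (the error sizes in the comments are order-of-magnitude only, for these particular choices of step and noise): tiny noise in the samples is amplified by a difference quotient but averaged away by a trapezoidal sum.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 1001)                        # step h = 1e-3
    h = x[1] - x[0]
    f = np.sin(x) + 1e-6 * rng.standard_normal(x.size)     # noise of size ~1e-6

    # Numerical differentiation amplifies the noise by roughly 1/h:
    deriv_err = np.max(np.abs(np.diff(f) / h - np.cos(x[:-1])))      # ~1e-3

    # Numerical integration (trapezoidal rule, written out) averages it away:
    integ = h * (f[0] / 2 + f[1:-1].sum() + f[-1] / 2)
    integ_err = abs(integ - (1 - np.cos(1.0)))                       # ~1e-7

    print(deriv_err, integ_err)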

Terry Tao
  • 108,865
  • 31
  • 432
  • 517
  • 3
    I'm afraid I have to vote this down. How does the local-ness of differentiation make it more mechanical than integration? It seems to me (especially after the latest edit) that the question is about symbolic calculus, but your answer is about theoretical analysis. Though it would be interesting, if you had this in mind, to know how the global nature of integration influences, say, its effect on elementary functions. – Ryan Reich May 30 '11 at 15:05
  • 10
    Ryan --- he's saying that computing the derivative is simpler because it only depends on local information at a point. Integration depends on information about every point on a fixed interval, simultaneously. Thus differentiation is fundamentally simpler than integration, simply from the definitions. Doesn't it seem a lot easier to organize the cutting down of a forest one tree at a time than having to cut them all down simultaneously? – Peter Luthy May 30 '11 at 16:03
  • @Peter: In complex analysis, it is easy to prove that a holomorphic function is infinitely differentiable using Cauchy's integral formula, but very difficult to do so just from the definition of a derivative. The integral formula uses a path that is necessarily bounded away from the point at which you want to differentiate. So is differentiation still easier than integration just because it's local? – Ryan Reich May 30 '11 at 16:24
  • 1
    My guess is, Ryan finds this unsatisfactory because maybe the question is referring of the algorithmic nature of the problems of integration and differentiation (one being essentially trivial when starting from a formula, the other not so much) and it is not clear how Terry's answer relates to that. – Mariano Suárez-Álvarez May 30 '11 at 16:37
  • Yes, that's it. – Ryan Reich May 30 '11 at 16:44
  • 1
    I've expanded upon my original answer in response to the above comments. – Terry Tao May 30 '11 at 20:22
  • I'm un-downvoting but I'm still not sold on the local/global business. However, I'm curious if you could expand on your idea of elementary functions being those of "bounded complexity". Is there some way you can look at a function (like $e^{-x^2}$) and say that this O(N) complexity of doing Riemann sums will somehow push it over the line? – Ryan Reich May 30 '11 at 21:00
  • Unfortunately, rigorously proving lower bounds on circuit complexity is a notoriously difficult task. But at a heuristic level, once an operation takes an unbounded number of operations to compute by a "naive" method, and the amount of algebraic structure present does not seem strong enough to suggest a significantly "smarter" method than the naive method, then the default assumption would be (barring sporadic or otherwise rare coincidences) that the operation is truly complicated. This is not a proof of anything, but it does provide guidance as to what to expect with such inverse problems... – Terry Tao May 31 '11 at 00:53
  • @Terry: Just for the sake of intuition an oversimplification: Having a picture, basically you’re saying that it is easier to find a closed form for a sharpened version of it which is tantamount to finding a linearization of the function in every single point (but only those points respectively) - than finding a closed form for a blurred version of it because you have to include the information of the whole picture in every single point (which is a de-linearization in its nature, i.e. finding a non-linear function from all the surrounding linear approximations). Is that basically correct? – vonjd May 31 '11 at 07:58
  • @Terry: I would very much appreciate if you could give me a hint if I am on the right track with my last comment. The problem still bothers me a lot... Thank you! – vonjd Jun 09 '11 at 06:53
53

I hesitate to answer again, but I agree with comments by Deane Yang and others that so far the discussions haven't quite gotten to the bottom of things. (Not that I promise to succeed in doing so now, but let's see what happens.)

In a nutshell, you could say that differentiation succeeds largely because

  • The derivative is a functor.

That's a modern-day way of stating the chain rule. One way to make this precise is by defining the derivative as a functor from the category of pointed smooth manifolds to the category of vector spaces, taking $f: (M, x) \to (N, y)$ to the linear map $Df_x: T_x M \to T_y N$ between the tangent spaces. The chain rule for smooth maps is exactly the statement that $D$ is functorial.

Better yet, it's a product-preserving functor. This is nice because it allows you to get at derivatives of other algebraic operations; for example, the product of two functions $f, g: \mathbb{R} \to \mathbb{R}$ is a composite

$$\mathbb{R} \stackrel{\Delta}{\to} \mathbb{R} \times \mathbb{R} \stackrel{f \times g}{\to} \mathbb{R} \times \mathbb{R} \stackrel{\text{mult}}{\to} \mathbb{R}$$

and so to derive the product rule, you just have to know how to take the derivative of that last map $\text{mult}$, and the product-preserving functoriality can take care of the rest.

Notice by the way that this point of view meshes very well with how Terry Tao emphasized the local aspects of differentiation; presently, the locality was handled by working with pointed manifolds. One can take this a little further and claim "morally" that the derivative functor is represented by a very local object (i.e., an object with one point), sometimes called "the walking tangent vector". This would be something like

$$T = Spec(\mathbb{R}[x]/(x^2))$$

so that the tangent bundle of a manifold $M$ is the manifold of smooth maps $T \to M$. Suffice it to say that all this can be effectively formalized in synthetic differential geometry (SDG), where in fact the category of manifolds can be fully embedded in a suitable smooth topos $\mathcal{T}$, and the derivative functor becomes an internal hom-functor

$$(-)^T: \mathcal{T} \to \mathcal{T}$$

So in summary, the derivative functor is represented by a local object, and this representability can be regarded as "explaining" the product-preserving functoriality (insofar as representable functors preserve products).
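
To see the functoriality do some actual work, here is a small check in sympy (the choices of $f$ and $g$ are arbitrary): computing the derivative of the composite $\text{mult} \circ (f \times g) \circ \Delta$ as a product of Jacobians recovers the product rule.

    import sympy as sp

    x, u, v = sp.symbols('x u v')
    f, g = sp.sin(x), sp.exp(x)        # arbitrary smooth examples

    # The three maps in the composite  R --Delta--> R^2 --f x g--> R^2 --mult--> R
    J_delta = sp.Matrix([1, 1])                          # D(Delta) at x
    J_fxg   = sp.diag(sp.diff(f, x), sp.diff(g, x))      # D(f x g) at (x, x)
    J_mult  = sp.Matrix([[v, u]]).subs({u: f, v: g})     # D(mult) at (f(x), g(x))

    # Functoriality: the derivative of the composite is the composite of the derivatives.
    lhs = (J_mult * J_fxg * J_delta)[0, 0]
    assert sp.simplify(lhs - sp.diff(f * g, x)) == 0     # this is the product rule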

All this is to suggest that the motto "integration is inverse to differentiation" might be adopting a very misleadingly narrow view of what differentiation is. In the present context, I think we were considering primarily the case of real-valued functions on an interval, but this masks the fact that the derivative is something far more general and functorial and representable on a larger category.

I am having a hard time understanding integration in anything like these terms, so I'm thinking this could be suggestive of what's going on here.

Todd Trimble
  • 52,336
  • 9
    Now we need to involve $(1,\infty)$-groupoids somehow and we are set :) – Mariano Suárez-Álvarez May 30 '11 at 22:30
  • 24
    No, no, no; its with $(0,∞)$-groupoids that we are $\mathrm{Set}$ – Sridhar Ramesh May 30 '11 at 22:59
  • 2
    @Mariano: I'm not sure what to make of your comment. Is there anything unreasonable in my answer that you'd like to point out? (Same to whoever voted my answer down.) – Todd Trimble May 30 '11 at 23:10
  • 13
    A related point is that differentiation can be defined in much more general settings than real-variable ones (e.g. formal differentiation of algebraic maps on varieties over an arbitrary field), whereas integration is mostly restricted to real-variable (and complex-variable) settings. So whereas integration needs both analysis and algebra, differentiation needs only algebra, which helps explain why it is simpler. – Terry Tao May 31 '11 at 01:21
  • Very true, Terry. – Todd Trimble May 31 '11 at 01:50
  • 5
    Something that puzzles me a bit, when seeing the differential of a function $f:\mathbb{R}\to\mathbb{R}$ as a map of vector bundles $df:T\mathbb{R}\to T\mathbb{R}$ on the real line manifold, is that it already contains all the information about its "antiderivative": indeed it's defined by $(x,v)\mapsto d_x f(v):=(f(x),f'(x)\cdot v)$. That is, if somebody gives you the derivative of a function as a map of vector bundles on the real line, then you are trivially able to reconstruct the function. – Qfwfq Oct 02 '12 at 19:02
  • If I understand correctly, this answer 'explains' why differentiation is simple. 'Explains' is in quotation marks because what is really done is illustrating that the definition of the derivative can easily be extended to a functor. On the other hand, it still leaves open the question of why integration is hard. – O.R. Oct 31 '13 at 00:15
23

(I've split this off from my original answer, as it is unrelated to that answer.)

A counting (or entropy) argument can also be used to heuristically indicate why inverting differentiation should not be easy. One can empirically verify that if one differentiates an elementary function of complexity N (i.e. it takes N mathematical symbols in order to describe it), one usually obtains an elementary function of complexity greater than N. (Polynomials are an exception to this rule, but they are virtually the only such exception. Not coincidentally, polynomials are one of the rare subclasses of elementary functions for which integration is easy.) If differentiation was invertible within the class of elementary functions, this would suggest that if one integrated a typical elementary function of complexity at most N, one should get an elementary function of complexity strictly less than N. (This is unfortunately not a rigorous implication, because the notion of "typical" need not be preserved by differentiation or integration, but let us suspend this issue for the sake of the heuristic.) But there are significantly more functions in the former class than the latter (the number of functions of a given complexity grows exponentially in that complexity), a contradiction.
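
A crude version of this empirical check, using sympy's count_ops as a rough proxy for the number of symbols (the example functions are arbitrary):

    import sympy as sp

    x = sp.symbols('x')
    examples = [sp.exp(x) * sp.sin(x),
                sp.log(x + sp.sqrt(1 + x**2)),
                sp.atan(x) / x,
                sp.exp(x**2) * sp.cos(3 * x)]

    for f in examples:
        print(sp.count_ops(f), "->", sp.count_ops(sp.diff(f, x)))
    # In each case the derivative has at least as many operations as f, and
    # typically more (polynomials being the notable exception).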

Note though that the above argument is quite far from rigorous. There are certainly many operations (e.g. inverting a large square matrix) such that both the operation and its inverse (which, in the case of matrix inversion, is itself) map bounded complexity objects to bounded complexity objects, and typically increase complexity. However, it does suggest that in the absence of any particular reason to believe that inversion is easy, the default assumption should be that inversion is hard.

Terry Tao
  • 108,865
  • 31
  • 432
  • 517
20

Todd Trimble's (fantastic) answer pins down the limiting factor that makes integration harder in this setting. But I think this leaves the OP scratching his head: why do we take the point of view that differentiation is the basic operation and integration is the inverse? The answer is that people do take this point of view even though it makes integration the harder direction, and there is a good reason for it.

One could attempt to take the opposite point of view and assume integration is the "basic" operation and that differentiation is the opposite of integration; the basic question is then: given a function $g$, when do we have an $f$ such that $g(x)=C+\int_0^xf(t)dt$? The answer is: whenever $g$ is absolutely continuous. This property is quite a bit stronger than regular continuity and also quite a bit more difficult to check (unless $g$ were Lipschitz, say). Continuity allows you to check each point, but for absolute continuity one essentially has to look at all points in an interval simultaneously. Thus transferring theorems about integrals to theorems about derivatives is more delicate.

Even so, this is in many ways the "modern" view of the derivative in analysis, e.g. Sobolev spaces, weak solutions to differential equations, and so on. The derivative is such a poorly behaving operator that it is actually more convenient to take this approach.

Peter Luthy
  • 1,546
  • 1
    +1: I think you hit the nail on its head: "why do we take the point of view that differentiation is the basic operation and integration is the inverse?" Thank you for that! Then you write: "but for absolute continuity one essentially has to look at all points in an interval simultaneously." Two observations: 1. doing e.g. u-substitution (or any other integration-'rule') you don't actually 'check' for some continuity conditions. 2. finding a derivative means also finding a function which is valid for all points simultaneously, too. So the reason for the asymmetry is still not clear to me. – vonjd May 30 '11 at 06:10
  • 1
    1. u-sub is a statement about derivatives pushed into integration via the fundamental theorem of calculus: it's just a restatement of the chain rule. You can still prove a version of u-sub for absolutely continuous functions, but it takes more work. 2. You still need to check continuity at each point, but you can basically check the points one-by-one. But in any case, every absolutely continuous function is continuous, but plenty of continuous functions are not absolutely continuous. Indeed I would guess "most" continuous functions are not absolutely continuous. Isn't that an asymmetry? – Peter Luthy May 30 '11 at 06:41