3

This question was inspired by a question asked on MathOverflow about the "effectiveness of the (epsilon, delta) definition". Most mathematicians have a strong opinion about the need for such definitions: they are a must for mathematics as a discipline and, accordingly, a must for anyone who studies mathematics. But it happens (as in the current term) that I teach calculus to physics students, and here I cannot come up with a straightforward decision about the use of such definitions. The question is: to what extent are they also a "must" for a person who studies physics?

  • Not at all. I think I've never used that formalism in the context of physics; proofs in the mathematical sense aren't all that popular in most branches. – Danu Feb 23 '14 at 13:26
  • 8
    I agree with the math people -- those definitions are essential to what you mean by a limit and by continuity, as well as foundational for the whole field of topology, which is critical for several higher-level physics disciplines. Discounting them is silly. Difficulty level is irrelevant. – Zo the Relativist Feb 23 '14 at 13:45
  • 5
    Frankly, this definition isn't really all that hard if, when stating it, you also make a drawing denoting $\varepsilon$, $\delta$ and the neighborhood of the limit point. (I realized this when I first studied analysis at university and tried to understand all this epsilon-delta — it appeared really simple compared to non-rigorous handwaving.) – Ruslan Feb 23 '14 at 14:27
  • Possible duplicate: http://physics.stackexchange.com/q/234/2451 – Qmechanic Feb 23 '14 at 15:04
  • 1
    (IMO this is not a duplicate question.) At the very least epsilon and delta give some flavor of rigor, and that's valuable, even to engineers and experimentalists. But it's not essential for that audience. I think that if there's any chance that a student will make a career in physics, then yes, include it. Apocryphal story-- Student: How much math does a physics student need to know? Victor Weisskopf: More. – garyp Feb 23 '14 at 17:11
  • 4
    As a personal opinion, I need mathematical rigor and explicit, definite notation to really understand most concepts in physics. Not just for basic stuff like epsilon-delta and limits/continuity/..., but for nearly everything. I'm sure this differs from one person to another, but I can't construct a physical argument if I don't feel confident in the maths behind each step. I need to know how everything I do is mathematically grounded (so I can fall back on that when needed as well), and it's often far easier for me to come up with new ideas if I have the actual expressions in front of me. – Wouter Feb 23 '14 at 22:06
  • And as @Ruslan states, the epsilon-delta definitions really aren't that hard to understand if they're explained well. They're actually quite beautiful in the sense that they feel logical and intuitive. – Wouter Feb 23 '14 at 22:08

4 Answers

3

Physics needs math, and math needs the definition.

I learned calculus without the epsilon-delta definition of a limit, then learned the epsilon-delta definition. I think it makes sense to teach it that way -- understand the idea of a limit, and see some useful applications of it, then learn a rigorous definition. So my answer is: it's a "must" for physics, to be presented at the right time.
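For concreteness, here is the definition in question, with a small worked example of my own (the choice of $f$ and the bound are illustrative, not from the original answer). We say $\lim_{x\to a}f(x)=L$ if $$\forall\,\varepsilon>0\ \ \exists\,\delta>0:\quad 0<|x-a|<\delta\ \Longrightarrow\ |f(x)-L|<\varepsilon.$$ For instance, to show $\lim_{x\to 2}x^2=4$, take $\delta=\min(1,\varepsilon/5)$: then $|x-2|<\delta\le 1$ forces $|x+2|<5$, so $$|x^2-4|=|x-2|\,|x+2|<5\delta\le\varepsilon.$$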

Ruslan's comment above said it well:

Frankly, this definition isn't really all that hard if, when stating it, you also make a drawing denoting ε, δ and the neighborhood of the limit point. (I realized this when I first studied analysis at university and tried to understand all this epsilon-delta — it appeared really simple compared to non-rigorous handwaving.)

It just doesn't make sense to go through life without that clear understanding of what a limit really is.

Martin
  • 15,550
3

Science education should not only be about giving your students a small set of tools they will certainly use; it should also be about teaching them to find and understand such tools even when you are not around.

If you teach a subject to people who do not want to become experts, but who might need it somehow, you should try to teach it in a way that lets them understand the basic structure of the field, so that they can use its results and talk to its experts. The definitions of "limit" and "continuity" are at the very heart of some areas of mathematics, so they are essential for a basic understanding of that structure.

Put differently: while some future physicists may never come into contact with much mathematics or many mathematicians, others most certainly will (most theoreticians, especially in more mathematically inclined areas, one example being quantum information), and teaching them the right definitions will make it possible for them to talk to those mathematicians. Without proper definitions, you cannot read and really understand mathematics books or mathematicians' talks, because you do not speak their language.

Martin
  • 15,550
3

As a profession and as a society, it is most certainly not a must. For an individual, it depends on their situation. On absolutely any day of the week I'd rather students learn approximation theory, and learn it well, than learn the delta-epsilon definition. A useful skill is something; a little terminology and a definition are nothing in comparison. Newton was a great physicist, and he didn't need delta-epsilon, so we should be really clear about whether to teach it, why, and at what trade-offs and costs.

The why is, firstly, that the delta-epsilon definition is a historical artifact, and secondly that it is useful enough to know, and easy enough to teach and learn, that we keep teaching and learning it. Those are legitimate reasons.

For the historical-artifact part, let's be honest: most people couldn't figure out how to do calculus the way Newton and Leibniz did. Nowadays, of course, we could use something like nonstandard analysis, or synthetic differential geometry, or even just a carefully done approximation theory to get all the same solutions or predictions required to do science.

But even if it is a historical accident, it is one we would have fixed by now if not for two other things: the usefulness and the easiness. The usefulness is like that of learning a foreign language. If a physics student knew neither English, nor German, nor French, nor even Latin, each gap would make vast sections of the literature inaccessible and raise a barrier to the living experts. So it's useful to know the languages that make lots of historical and current physics resources available, and the exact same thing applies to limits and the delta-epsilon definition. Plus, you have teachers well versed in it, and even teachers experienced in teaching it. So you have high usefulness and high easiness.

Now, your link suggested that some people have trouble learning it. Maybe the people who had trouble but were interested in math became scientists or mathematics-education professionals, while those who were interested in math and had no trouble became scientists or mathematics professionals; and whether you like the definition could affect how well you teach it, or how effective you perceive it to be. Even straight physics education is strange: the interesting thing about the Hestenes test (the Force Concept Inventory) is that physics educators can believe their students are learning those concepts in general physics, and conclude as much from their own tests, while the inventory shows they aren't learning them then and there. But the question should be about physics students: do they have trouble? Yes. Are the alternatives better? Yes.

The first alternative is that the mathematicians who say delta-epsilon is essential are simply wrong, even for mathematics proper. You can define linear maps and norms, give a characterization of approximation, do all of that fully rigorously, and then assert that a derivative is a linear map that approximates your function well enough. Making definitions for 1d, then 2d, then 3d, then $\mathbb R^n$, then complex numbers, then manifolds is a bit of a waste of everyone's time, especially if the end result is that students have to take another course on approximation theory just to learn that they can approximate things. Focus on approximation done well and you will see that $2x_0$ approximates the slope of the secant of $f(x)=x^2$ near $x=x_0$.
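A minimal numerical sketch of that last claim (my illustration, not part of the original answer; the function and numbers are mine): the secant slope of $f(x)=x^2$ over $[x_0,x_0+h]$ is exactly $2x_0+h$, so $2x_0$ approximates it with error exactly $|h|$, an explicit rate with no limit in sight.

```python
# A sketch, not part of the original answer: secant slopes of f(x) = x^2.
def secant_slope(f, x0, h):
    """Slope of the secant line through (x0, f(x0)) and (x0 + h, f(x0 + h))."""
    return (f(x0 + h) - f(x0)) / h

f = lambda x: x ** 2
x0 = 3.0
for h in [1.0, 0.1, 0.01, 0.001]:
    s = secant_slope(f, x0, h)
    print(f"h={h:g}: secant slope = {s:.6f}, |error vs 2*x0| = {abs(s - 2 * x0):.6f}")
# The error is exactly h (up to floating point): a concrete convergence rate.
```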

Which gets to a problem in physics. If we have a continuous charge distribution, a continuous matter distribution, or a fluid, the idea is that the continuous function is an approximation to the more truthful discrete distribution. We don't need a limit for most things: at some point you are getting so close that further refinement is swamped by the uncertainty in your parameters, or by the fact that your system wasn't really continuous to begin with. Even if you had a field that genuinely is continuous, the sources have some uncertainty, or the constants or parameters do, so you often aren't solving one differential equation but looking at a whole family of them.

And physicists don't work with functions the way a mathematician does. Because we accept experimental results and have to make testable predictions, a physicist has parameters and experimentally determined values, all with uncertainty, and has to do real computations that terminate in a finite amount of time with a finite amount of effort. A limit is a hypothetical action where you feed an infinite sequence of inputs into a function; it doesn't correspond to what a physicist does as well as approximation theory would. A mathematician is interested in proofs and generality, and wants definitions that help find proofs and make them as general as possible. They are simply different goals. A physicist might not care much about the proofs, particularly if they rest on untrustworthy assumptions or are made overly general just to cover untrustworthy things.
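A sketch of the "swamped by uncertainty" point (the integrand, the 1% figure, and the grids are my choices): refine a midpoint-rule approximation of $\int_0^1\sin(kx)\,dx$ while the parameter $k$ is only known to about 1%. The discretization error falls like $1/n^2$ until it hits the floor set by the uncertainty in $k$, after which refining further buys nothing.

```python
# Sketch (my numbers): discretization error vs. parameter uncertainty.
import math

def midpoint(f, n):
    """Midpoint-rule approximation of the integral of f over [0, 1]."""
    return sum(f((i + 0.5) / n) for i in range(n)) / n

k_true = 2.0
k_measured = 2.0 * 1.01                    # the parameter carries ~1% error
exact = (1 - math.cos(k_true)) / k_true    # true value of the integral
for n in [4, 16, 64, 256, 1024]:
    approx = midpoint(lambda x: math.sin(k_measured * x), n)
    print(f"n={n:5d}: |error| = {abs(approx - exact):.2e}")
# The error plateaus near ~2e-3, the floor set by the uncertainty in k.
```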

Another answer mentions topology, which doesn't actually use limits and even has its own version of continuity. And again, if a mathematician teaches topology, it will be about general topologies and general proofs about them, rather than focusing on, say, manifolds and subsets of $\mathbb R^n$ with the usual topology.

So a physicist needs to make computations and approximations, and really should learn how to do that properly, and how to make models. Even a mathematical physicist has to produce things that eventually make actual predictions. Mathematicians want to make proofs: a mathematician wants definitions that make the same amount of proof effort yield the strongest possible theorem, and usually would rather build on old results simply because they are already there. Physicists want to make new models and extract testable predictions from existing ones. And each person can handle only so many concepts, so each concept comes at a human cost.

We also have to look at the ask in context. Mathematicians develop an ever-expanding amount of baggage and always claim that every piece of it is essential to understanding even the most basic thing. They come from a tradition of not truly doubting established lore, so of course they will hardly ever throw anything out, no matter how many problems it causes. For instance, they invented the whole field of measure theory because they didn't, and don't, want to give up the axiom of choice (even though, by construction, it can't possibly affect any computation, so it has no effect on physics).

Yet the piecewise polynomials are dense in the standard $L^2$ norm, so they approximate $L^2$ as a complete linear space just fine. If we stuck with the approximations, they would be easy to integrate and differentiate, we would still have arbitrarily good approximations, and we could even treat a sequence of polynomials as an object in its own right; we would never need to bring up measure or Lebesgue integration (which makes it odd to keep calling the space $L^2$, but that's history for you). Measure theory and Lebesgue integration were made to avoid dealing with sets that never come up in any experimental measurement and never come up in any computable prediction. Mathematicians themselves wanted to avoid those sets, but they want to pretend to have weird functions that then require weird ways to integrate them. And those functions aren't even defined pointwise, so mathematicians still end up with a class of functions (instead of individual functions) anyway; we could just use the piecewise polynomials.
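A sketch of the density claim (the target function and grids are mine, chosen only for illustration): piecewise-linear interpolants converge to a smooth $f$ in the $L^2$ norm, with no measure theory in sight.

```python
# Sketch (my example): piecewise-linear approximation converging in L^2.
import numpy as np

f = lambda x: np.exp(x) * np.sin(5 * x)    # an arbitrary smooth target

x_fine = np.linspace(0.0, 1.0, 20001)      # fine grid for the norm integral
for n in [4, 16, 64, 256]:
    knots = np.linspace(0.0, 1.0, n + 1)
    approx = np.interp(x_fine, knots, f(knots))        # piecewise-linear interpolant
    err = np.sqrt(np.mean((f(x_fine) - approx) ** 2))  # ~ L^2 norm on [0, 1]
    print(f"{n:4d} pieces: L2 error = {err:.2e}")
# For this smooth f the L2 error falls like 1/n^2: an explicit, usable rate.
```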

So it would be nice to be able to approximate things; piecewise polynomials work fine. It would be nice to approximate their slopes over small intervals, and there are many ways to do that easily. It would be nice to then have weak derivatives and weak solutions of differential equations, and again it isn't at all hard to do that, and we can do it all in a way that takes approximation by easily computable things as fundamental.
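To show that weak derivatives really are that easy to handle concretely, here is a sketch (the test function is my choice): numerically, $\operatorname{sign}(x)$ acts as the weak derivative of $|x|$, i.e. $\int u\,\varphi'\,dx=-\int v\,\varphi\,dx$ for smooth $\varphi$ vanishing at the endpoints.

```python
# Sketch (my example): sign(x) as the weak derivative of |x|.
import numpy as np

x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]
u = np.abs(x)                                     # the function
v = np.sign(x)                                    # its candidate weak derivative

phi = (1 - x**2) ** 2 * (2 + x)                   # smooth, phi(-1) = phi(1) = 0
phi_prime = (1 - x**2) * (1 - 8 * x - 5 * x**2)   # d(phi)/dx, computed by hand

lhs = np.sum(u * phi_prime) * dx                  # \int u phi' dx
rhs = -np.sum(v * phi) * dx                       # -\int v phi dx
print(f"{lhs:.6f} vs {rhs:.6f}")                  # both are close to -1/3
```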

If that's what physicists learned, we could all still be perfectly fine physicists, though there are living experts and historical resources that would then be harder to access, and it's easy to learn just one definition and one piece of terminology, so there isn't much harm in learning it. But if that is just the first ask, and the things asked for are 90% (or more) a waste of a physicist's time, then it of course makes sense to stop at some point, draw the line, and say "no more".

And delta-epsilon is a fine place to draw the line. It's a crutch for 1d that doesn't generalize to higher dimensions, and it needlessly divorces itself from approximation theory; let's be honest, that's because when something merely has a limit, you don't have to discuss how well any approximation approximates. It's pure laziness disguised as essential mathematics.

I've seen students trying to compute the electric field due to a continuous charge distribution struggle with carving space into pieces small enough that the vector from the charge to the field point is roughly constant over each piece, because they want some function of an independent variable $q$ running from $q_a$ to $q_b$ in the integral $$\vec{E}(\vec{A})=\frac{1}{4\pi\epsilon_0}\int\frac{(\vec{A}-\vec{B})\,dq(\vec{B})}{|\vec{A}-\vec{B}|^3}.$$

They clearly don't see that there are discrete charges located at discrete points, and that we have a finite sum over a finite number of charges, which we approximate by an integral because the integral is easier to compute. Maybe they can recite delta-epsilon perfectly, but that doesn't matter. When you have real polynomials you can say exactly how well they approximate, or give a conservative underestimate of how well they approximate; that's useful. Merely having a limit doesn't mean much. A mathematician wants to make a big deal about having a limit in order to get general results. They do that because they truly want theorems about functions that can't be computed, functions for which it is impossible, even in principle, to ever know how fast the approximations converge on each other.

Those theorems are not as useful as they are made to seem, since in the very end a physicist must make a testable prediction: you have to be able to say "I predict that the experimentally observed results will fall between this number and that number", and you have to be able to compute both those numbers. So you have to finish your computations, or approximate well enough that your bounds are sound. A result (a theorem) that applies to functions you can't work with isn't a real result that gets used when you make your predictions. And a physical theory is ultimately about making predictions that can be tested, so any part that leaves town before that happy day is just baggage: an extended warranty that won't be honored, a mail-in rebate that never arrives, a bad check. You can humor a mathematician that the check is good, as long as humoring them won't empower them so much that they can mess everything up for your physics students.
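A sketch of the sum-versus-integral point (the rod geometry, charge, and field point are my choices): the field of $n$ point charges spread along a rod is a finite vector sum, and the familiar line-charge integral is just the easy-to-evaluate stand-in for that sum.

```python
# Sketch (my setup): finite sum over point charges vs. the line-charge integral.
import numpy as np

eps0 = 8.8541878128e-12                  # vacuum permittivity (F/m)
Q, L = 1e-9, 1.0                         # total charge (C), rod length (m)
P = np.array([0.0, 0.2, 0.0])            # field point above the rod's center

def field_from_point_charges(n):
    """E at P from n equal charges evenly spaced on an x-axis segment."""
    xs = (np.arange(n) + 0.5) / n * L - L / 2
    dq = Q / n
    E = np.zeros(3)
    for xb in xs:
        r = P - np.array([xb, 0.0, 0.0])                # A - B in the formula above
        E += dq * r / (4 * np.pi * eps0 * np.linalg.norm(r) ** 3)
    return E

for n in [10, 100, 1000]:
    print(f"n={n:5d}: E_y = {field_from_point_charges(n)[1]:.4f} N/C")
# Closed form of the line-charge integral for this geometry, for comparison:
E_y = Q / (4 * np.pi * eps0 * P[1] * np.hypot(P[1], L / 2))
print(f"integral: E_y = {E_y:.4f} N/C")
```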

But let's be honest: it's politics at most universities that makes things go the way they do. There are plenty of qualified physicists who could teach younger physicists the mathematical tools they need to know. But the mathematics department wants the service credits for teaching mathematics service classes, and teaching the same service classes it always has, or the same ones your institution's "peer institutions" teach (you know your school has them), and insisting it has to be that way, is the most common tactic. And trying to change things is a risk, a political risk.

What's the reward for changing? Maybe you choose wisely and your students do well; then everyone copies you, and your advantage is short-lived unless strategically parlayed, so the credit you get will be small. If you change things badly, you will be blamed, so the change will probably only come from someone who can absorb the blame: perhaps a powerful dean with a big grant to implement the change, since if the grant is large enough it can be called a win regardless of the human cost to others.

I started by telling you the advantages of just teaching delta-epsilon (usefulness for reading the literature and talking to current practitioners, like learning a foreign language), and they are strong enough to justify learning it for many individuals. I mentioned all the rest merely to avoid overselling it. A physicist could instead learn very powerful tools in a concrete setting that, when fully developed, give stronger and more general results than they would likely get from the standard curriculum.

I'd like to make a final comment about rigor. There is no magic method; garbage in, garbage out applies to mathematics too. The things physicists will actually use to make predictions can be tested (though most mathematicians do not test them), and the rest can't be tested even in principle, because they are about things that can't be done. Which means all mathematicians can do is make assumptions about these time-sinks and hogs of brain real estate, and hope that if their unfounded assumptions about things-that-can't-be-done-even-in-principle contradict themselves, no one will find out, at least until they are dead. I bring this up because the false generality of a limit where you don't know how fast it converges exists exactly to make definitions for theorems that only apply to these non-things. Concrete definitions about convergence rates apply to real things.

You can bring up convergence rates to physicists in concrete and useful settings, let them see the pattern of how it works in all the other useful and concrete settings, and then discuss convergence results in general. To a physicist, that's generality. The kind of generality that actually "requires" a delta-epsilon (where you can't compute the delta, only assert that it merely "exists" for every epsilon) is a wild generalization that is only needed for functions where 1) you can't compute the delta, and 2) unfounded assumptions and axioms assert that the function merely "has" a delta for every epsilon; those functions literally cannot be the ones physicists use to do real physics. So to a physicist it is, at best, a false generality. The only real reasons to learn the delta-epsilon definition of a limit are to talk to mathematicians who like it and the historical ones (like learning a language). That said, talking about convergence rates in general might not sound much different in the classroom from what you were going to say about limits anyway, so there might be a perfect compromise.
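As a sketch of what "convergence rates in concrete settings" can look like in a classroom (the example is mine): estimate the order of a central-difference derivative empirically by halving the step size.

```python
# Sketch (my example): measuring a convergence rate instead of invoking a limit.
import math

f, df = math.sin, math.cos               # function and its exact derivative
x0 = 1.0

def central_difference(h):
    """Second-order approximation of f'(x0)."""
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

errors = [abs(central_difference(2.0 ** -k) - df(x0)) for k in range(1, 6)]
for e1, e2 in zip(errors, errors[1:]):
    print(f"observed order ~ {math.log2(e1 / e2):.2f}")   # tends to 2.00
```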

Timaeus
  • 25,523
  • I'm upvoting this, not because I necessarily agree with it, but because you've presented an argument which touches on various aspects of philosophy of mathematics. As Leopold Kronecker (who opposed Cantor's set theory) said, "God created the natural numbers, all else is the work of man." – CoolHandLouis Aug 08 '16 at 01:22
2

This is a generic problem physicists face: we can often make do with less rigor, but not always, and that causes some in the field not to know what they should know. As David Hilbert put it: "Physics is too hard for physicists." So, unfortunately, we do need to learn the rigorous definition of the limit, despite the fact that in most practical cases we won't use it.

Count Iblis
  • 10,114