39

The recent talks of Voevodsky (for example, http://www.math.ias.edu/~vladimir/Site3/Univalent_Foundations_files/2014_IAS.pdf), which describe subtle errors in proofs by him as well as by others, together with the famous essay by Jaffe and Quinn (http://www.ams.org/journals/bull/1993-29-01/S0273-0979-1993-00413-0/) and the responses to it (http://www.ams.org/journals/bull/1994-30-02/S0273-0979-1994-00503-8/), raise for me the following question:

What are some explicit examples of wrong or non-rigorous proofs that did damage to mathematics or to some significant part of it? Famous examples of non-rigorous proofs include Newton's development of calculus and the latter stages of the Italian school in algebraic geometry. Although these caused a lot of dismay and consternation, my impression is that they also inspired a lot of new work. Is it wrong for me to view it this way?

In particular, I'm told that people proved false theorems using Newton's approach to calculus. What are some examples of this and what damage did they do?

Deane Yang
    To me personally it appears that it's not unrigorous proofs themselves that are doing damage, but the community's lack of acceptance of the fact that they are unrigorous and there is value in doing better, and disinterest for corrections and improvements to the theoretical underpinning. I am not sure if I have good examples for this, though, as in all situations I am aware of, my own ignorance is just as much suspect of being the reason I never got to see the rigorous treatments. – darij grinberg Jul 18 '14 at 15:39
  • Thanks, darij. So you feel that there are theorems accepted by the community but whose proofs are known to be unrigorous, and no one seems interested in finding rigorous proofs? – Deane Yang Jul 18 '14 at 16:56
  • Yes, I feel this way, though I cannot rule out my own lack of understanding (papers often need to be read in the context of their references, which I am almost always too lazy for). – darij grinberg Jul 18 '14 at 17:17
  • For various reasons, including the one you cite, I'm wary about what one person says about a theorem or proof. I'm more interested in situations where there is a consensus among a group of well-regarded mathematicians that something is flawed. – Deane Yang Jul 18 '14 at 17:23
  • old answer, http://mathoverflow.net/questions/35468/widely-accepted-mathematical-results-that-were-later-shown-wrong/35679#35679 – Will Jagy Jul 18 '14 at 17:29
  • Thanks, Will. But did this do any damage? – Deane Yang Jul 18 '14 at 17:38
  • Deane, I don't know. There are many other examples at the same question, maybe some will be helpful. Mine was a gap that was fixed, most of those answers are things that remained wrong. – Will Jagy Jul 18 '14 at 17:42
  • All of the examples I know are either isolated theorems that had little impact on the rest of mathematics or important theorems where the first proofs were incomplete or wrong but the theorem is close enough to being right (largely because, I think, no one found any logical inconsistencies that would invalidate the theorem) that most if not all of the important consequences also remained essentially correct after the proof was repaired. I'm particularly curious about whether Newton's calculus is a counterexample to my belief about this. – Deane Yang Jul 18 '14 at 17:58
  • I agree with darij. To whatever extent "communities" exist in giving an imprimatur to a proof, they are at fault when there is underacknowledgement that the rigour is crufty. Gromov and Thurston are examples of quality mathematicians (from the 80s or so) who are terse with details, but their "communities" have largely accepted their work, in no small part due to the assistance of others (and sometimes quiz-sessions with the principals). That others wished to understand their work made putting the foundations firmer a necessity. Note that explaining one's intuition to others is a goal in itself. – NAME_IN_CAPS Jul 19 '14 at 03:26
  • NAME_IN_CAPS, I'm a little confused by what you're saying. Are Gromov and Thurston's "proofs" an example of what Darij is talking about or not? – Deane Yang Jul 19 '14 at 03:33
  • 1
    There are probably some good examples from premodern Euclidean geometry, such as naive proofs relating to infinities, or proofs of the parallel postulate. –  Jul 19 '14 at 15:40
  • 1
    Ben, that sounds pretty interesting. I would love to learn about an explicit example of that. – Deane Yang Jul 19 '14 at 15:58
  • 3
    For a discussion of mistakes of the "Italian" geometers see http://mathoverflow.net/questions/19420/what-mistakes-did-the-italian-algebraic-geometers-actually-make –  Jul 19 '14 at 16:13
  • 1
    quid, thanks! I had seen that already. That does seem rather damaging, but I'm wondering whether it was even worse than that. My impression is that Severi published several very interesting but wrong theorems, which must have misled others working in the subject. This definitely qualifies as negative impact. But I was also wondering if there are examples of people using Severi's work or ideas to compound the problem, publishing results that were wrong because they relied on Severi's work. – Deane Yang Jul 19 '14 at 16:18
  • The question is slightly leading. Any erroneous theorem or lemma anywhere leads to "0=1" elsewhere. It can be very time-consuming for others to discover/find these "defects" (there is some loose analogy to software engineering), so "damage" is an overwrought/dramatic word in this context. But, of course, "to err is human...", even for professional mathematicians. Agreed that fixing errors can lead to highly constructive new math/research etc. – vzn Jul 19 '14 at 19:55
  • 3
    It would be very relevant to include in your list Thurston's own response to Jaffe and Quinn: http://www.ams.org/journals/bull/1994-30-02/S0273-0979-1994-00502-6/ – Lee Mosher Jul 19 '14 at 20:01
  • This was a prescient remark of Thurston (from On Proof and Progress in Mathematics): "It is unlikely that the proof of the general geometrization conjecture will consist of pushing the [proof for Haken manifolds] further." – Joseph O'Rourke Jul 20 '14 at 13:29
  • Negative impact starts only when there is some "negative impact" – Victor Jul 27 '14 at 16:23

6 Answers

72

Since the question is specifically about damage:

I think that what really causes damage to a mathematical area is when an important result is claimed by someone prominent in the field, but the proof is never completely written. Younger researchers are then likely to spend a lot of time and energy "cleaning up the mess", for little credit.

Things are even worse when there is some freedom of interpretation of what might have been proven. A younger researcher might want to use the announced result for some other purpose, but they might use a version of the theorem that ends up not being the one that got proved.

When a proof is (widely) accepted to be wrong or non-rigorous, or when someone retracts the claim of having proved a given result, that's when things are getting better for a field.

  • 5
    André, thanks. This, I believe, is the same as what Jaffe and Quinn said, citing the work of people like Gromov, Sullivan, and Thurston. It appears to be difficult to establish or debunk this assertion, but I'm looking for an explicit example where this negative impact might have occurred. I have the impression that the contributions of Gromov, Sullivan, and Thurston have had the exact opposite effect of stimulating a lot of activity clarifying their ideas, and I'm looking for counterexamples to my belief. – Deane Yang Jul 19 '14 at 15:56
  • 11
    "I think that what really causes damage to a mathematical area is when an important result is claimed by someone prominent in the field, but the proof is never completely written. Younger researcher are then likely to spend a lot of time and energy `cleaning up the mess', for little credit." - I have given this +1 but only because I cannot give +100 – Yemon Choi Jul 19 '14 at 20:40
  • 1
    Yemon, can you cite a specific example where you believe this happened? – Deane Yang Jul 19 '14 at 21:35
  • 3
    @DeaneYang You may be experiencing some selection bias. The people who choose to work in areas with lots of non-rigorous claims are a subset of those who are not driven away by the lack of clarity about what has been proved. – S. Carnahan Jul 20 '14 at 03:02
  • 2
    @DeaneYang I don't think there is a contradiction. A lot of interesting math was discovered by thinking about these ideas, and a lot of young people's careers were also made difficult by working in a field with a lack of clarity in the ways that Andre Henriques alludes to. – user36931 Jul 20 '14 at 03:09
  • @S. Carnahan Many graduate students wouldn't necessarily be aware of these issues until they are very deeply invested in the subject. At that point, it can be difficult to switch to a different field. – user36931 Jul 20 '14 at 03:10
  • S. Carnahan, that's a good point. I'm willing to entertain rumors about this really happening. – Deane Yang Jul 20 '14 at 03:11
  • 4
    Let me add that I don't see evidence for the claim that people doing the "cleaning up" receive little credit. If "cleaning up" means filling in routine details, then it's generally acknowledged that the original proof is in fact a rigorous one. If, on the other hand, "cleaning up" requires the introduction of new ideas or techniques, then it appears to me that the people doing this are given a lot of credit and the fact that their work provides rigorous proofs of the "theorems" of a prominent mathematician makes it look only better and not worse. – Deane Yang Jul 20 '14 at 03:15
  • user36931, I agree. – Deane Yang Jul 20 '14 at 03:17
  • 20
    @Deane: I know an article that contains the 1st complete proof of an important theorem (proven by someone else; never published, never written up completely), whose abstract has: "There is no claim to originality in this approach. All of the results are the results of other people. The remaining mathematical errors, inconsistencies, and points of inelegance in these notes are mine and mine alone." The article (50pp) appeared in a conference proceedings. I'm sure it took the author an enormous amount of time & energy (which he could have used to prove other theorems)... for not enough credit. – André Henriques Jul 20 '14 at 14:39
  • 1
    @DeaneYang on reflection, my response was motivated more by cases where a big name proves a key case of a result, other people do work to extend to full generality and fill in the details, and only the big name gets mentioned by people who don't believe in the difference between primary and secondary sources – Yemon Choi Jul 20 '14 at 22:36
  • 3
    @DeaneYang Actually, as soon as I finished typing this, I thought of relatively recent examples which are closer to what I think you are asking about. A result was announced in a talk but has never been published, and it is now quoted and used in papers, but no one has written out details or tried to extend the result in its natural directions. In a second instance, the result was announced, and some notes based on the talk were circulated as a quasi-preprint, but a paper has not been published & I've heard that the claimed result may have been quietly retracted, while people are citing it – Yemon Choi Jul 20 '14 at 22:40
  • 12
    I know of some prominent researchers who adamantly refuse to acknowledge, beyond essentially belittling, those who correct and clarify their work. They insist that's obviously what they said the first time, this other person just read it wrong, etc. Not that everyone in the field agrees with such people or behaves this way. – zibadawa timmy Jul 21 '14 at 00:34
  • 1
    Come to think of it, I heard not that long ago about an entire field full of what you all are talking about, a field from which any sensible young person would flee, even if they love the subject. Forgive me. I'm getting old and my memory is terrible. – Deane Yang Jul 21 '14 at 03:24
  • 1
    @DeaneYang What field would that be? Might as well be open about it if it is true. – Steven Gubkin Aug 05 '14 at 21:01
  • 1
    I'm not sure that it's such a good idea to "be open about it"... It could be really controversial, and it's also very subjective. You can email DeaneYang if you're really curious to know what he had in mind. – André Henriques Aug 05 '14 at 21:07
  • 3
    @AndréHenriques I know I shouldn't have a discussion in the comment section, but I really do think that naming the field without naming any particular researchers is pretty important. It would at least alert vulnerable graduate students to the possibility that the field might be (currently!) dangerous. Of course, some ambitious graduate students might take this as a challenge to go in and clean everything up! – Steven Gubkin Aug 07 '14 at 19:02
  • Omg. What an excellent answer. This is a truly poisonous thing. – Jon Bannon Aug 10 '20 at 00:36
14

A proof being wrong can mean many things:

By 1932, when the Hungarian-American mathematician John von Neumann claimed to have proven that the probabilistic wave equation in quantum mechanics could have no “hidden variables” (that is, missing components, such as de Broglie’s particle with its well-defined trajectory), pilot-wave theory was so poorly regarded that most physicists believed von Neumann’s proof without even reading a translation.

More than 30 years would pass before von Neumann’s proof was shown to be false, but by then the damage was done. The physicist David Bohm resurrected pilot-wave theory in a modified form in 1952, with Einstein’s encouragement, and made clear that it did work, but it never caught on. (The theory is also known as de Broglie-Bohm theory, or Bohmian mechanics.)

The problem is the interpretation of what has actually been proved. I don't like the No Free Lunch theorems for optimization, because their assumptions are unrealistic and useless in practice; the theorem itself certainly feels true, but in a less trivial way than what is actually proved. And the conclusion is deeply flawed: it claims that there is no difference between a buggy implementation of a flawed heuristic and a correct implementation of a reasonable solution strategy. The conclusion should rather be that we should explicitly specify what our solution strategy is supposed to achieve, not just claim that it is a good black-box search strategy.
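
For reference, here is roughly what the Wolpert-Macready theorem actually asserts (I am paraphrasing the standard formulation, not quoting their paper): for finite sets $X$ and $Y$ and any two search algorithms $a_1$ and $a_2$ that never revisit a point, averaging uniformly over all objective functions makes their performance indistinguishable,

$$\sum_{f\colon X\to Y} P\bigl(d_m^y \mid f, m, a_1\bigr) \;=\; \sum_{f\colon X\to Y} P\bigl(d_m^y \mid f, m, a_2\bigr),$$

where $d_m^y$ is the sequence of cost values observed after $m$ evaluations. The mathematical content is a statement about a uniform average over all possible objective functions, which is precisely the assumption that is unrealistic in practice.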

  • This is not the story I have heard about the hidden variables result. What I heard was that von Neumann proved there could be no local hidden variable theory, and physicists not sensitive enough to adjectives ignored the "local" qualifier and went about saying for 30 years that von Neumann had proved there could be no hidden variable theories. Then Bohm finally, against significant headwinds, exhibited his example of a non-local hidden variable theory. I probably should be careful to look at the primary sources for this but don't have time. The curious should look at vN's original statement. – Jon Bannon Aug 10 '20 at 00:42
11

There are several examples of wrong proofs which were believed to be correct for some time, but I would not say that they "did damage to mathematics".

One of the most famous examples is Dulac's proof that a polynomial system of two differential equations in the plane has finitely many limit cycles. A gap was found 60 years later, and after substantial effort the proof was fixed. Now we have two different published proofs, both quite complicated. The story is told in great detail in several publications of Ilyashenko. His book
MR1133882 contains a complete proof as well as the history.
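
(To fix notation: the systems in question are the usual planar polynomial vector fields

$$\dot x = P(x,y), \qquad \dot y = Q(x,y),$$

with $P$ and $Q$ real polynomials, and Dulac's claim is that such a system has only finitely many limit cycles, i.e. isolated closed orbits. This is just the standard setup, not anything specific to Ilyashenko's account.)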

Another example from the same area is an upper estimate of the number of these limit cycles for quadratic systems. An incorrect proof was published by Landis and Petrovski, but soon retracted. The problem is not solved to this day, to the best of my knowledge.

There are many other examples. In the beginning of the 20th century some people believed that the Riemann Hypothesis had been proved by Stieltjes, who published an announcement. Stieltjes died at a young age and never published his proof.

If someone is really interested in the result, s/he will make every effort to understand the proof, and eventually things will be sorted out. If no one is seriously interested, there is no damage to mathematics anyway:-) Remember, huge efforts were made in the 19th century to make calculus rigorous. Many of Fourier's arguments were also doubtful. The best mathematicians of the 19th and 20th centuries made efforts to put Fourier analysis on a rigorous basis.

10

It has to be said that in the history of mathematics quite new profound ideas sometimes arise suddenly, so that tools, methods, and foundations are still lacking in the first phase of the new theory. This was the case for certain parts of Analysis in Cauchy's time, and it was also the case for the "Italian Geometry", which later grew into modern Algebraic Geometry. On such special occasions, I wouldn't say that non-rigorous proofs have a negative impact on the theory; on the contrary, they bring the attention of the mathematical community to it, and turn into a call for well-founded methods.

Pietro Majer
10

In particular, I'm told that people proved false theorems using Newton's approach to calculus.

I'm intrigued by this claim, but it seems very vague, both because there is no information about what the theorems might be and because it's not clear how widely accepted these false results are claimed to have been.

First off, I don't think it makes much sense to consider Newton in isolation from Leibniz. Newton didn't fully develop the calculus. His results were scattered in a variety of places and are not complete or systematic.

The rigor and logical validity of Newton's and Leibniz's calculus were vigorously debated from very early on. Bishop Berkeley published The Analyst in 1734, seven years after Newton's death. He claimed that although Newton's results were correct, they were derived through incorrect methods: "I have no Controversy about your Conclusions, but only about your Logic and Method." This would seem to suggest that if otherwise competent people arrived at incorrect and widely known results based on the Newton-Leibniz methods, it didn't happen during Newton's generation or the generation after that, since clearly a rabid critic like Berkeley would have made a big deal out of such a thing.

Blaszczyk et al. have an interesting revisionist take on the early history of the calculus. In their reading:

Leibniz’s heuristic law of continuity was implemented mathematically as Łoś's theorem and later as the transfer principle over the hyperreals ..., while Leibniz’s heuristic law of homogeneity... was implemented mathematically as the standard part function ...

If you buy this account, then Newton-Leibniz calculus had a fully formed, well-defined, and consistent set of methods that are isomorphic to some subset of the methods of NSA. For example, say we calculate, in the Leibniz style,

$$d(x^2)/dx=[(x+dx)^2-x^2]/dx=(2x\,dx+dx^2)/dx=2x+dx{}_{\ulcorner\!\urcorner}2x.$$

The symbol ${}_{\ulcorner\!\urcorner}$ is Leibniz's notation for what he called "adequality," which Blaszczyk argues is the same as NSA's standard-part relation; today we'd write the final step as $\operatorname{st}(2x+dx)=2x$. This is a completely valid calculation if interpreted in terms of NSA. Of course the notation ${}_{\ulcorner\!\urcorner}$ never caught on, and practitioners of the calculus traditionally just wrote $=$, which made the practice of discarding squares of infinitesimals seem logically suspect. But that doesn't mean that the methods were wrong, just that they were traditionally written in a way that may have obscured their correctness.
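
For comparison, the modern nonstandard-analysis definition that this calculation instantiates (this is a standard fact about NSA, not something specific to the Blaszczyk-Katz-Sherry paper) is

$$f'(x)=\operatorname{st}\!\left(\frac{f(x+dx)-f(x)}{dx}\right)\qquad\text{for any nonzero infinitesimal }dx,$$

so the Leibniz-style computation above is exactly the case $f(x)=x^2$, with the adequality step playing the role of the standard-part function $\operatorname{st}$.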

Blaszczyk, Katz, and Sherry, "Ten Misconceptions from the History of Analysis and Their Debunking," 2012, http://arxiv.org/abs/1202.4153

Michael Hardy
  • How does this answer the question? –  Jul 19 '14 at 14:44
  • 3
    @quid: It answers the Newton-related part of question in the negative: no lack of rigor, no incorrect proofs, no damage. –  Jul 19 '14 at 15:35
  • 1
    Thanks, Ben. Are you saying that Newton and Leibniz's approach to calculus did not lead to many harmful consequences, because people used a well-defined set of logically consistent axioms (which we now like to call nonstandard analysis) that we now know always produces correct results? – Deane Yang Jul 19 '14 at 15:52
  • Ben, thanks again. I posted my comment before seeing yours. – Deane Yang Jul 19 '14 at 15:53
  • 2
    Thanks for the reply. In my opinion it is still a considerable stretch to post this as an answer (a shorter version could be a comment). The point of the question is not Newton's approach to calculus; it merely got mentioned because the OP thought (perhaps wrongly) that there were examples of what they asked about (wrong and non-rigorous proofs) to be found there. –  Jul 19 '14 at 16:09
9

First, three possible areas of damage (though there are surely more):

  1. Subsequent results that make use of these "proofs" (especially when the claim is not true);

  2. Using the methods of the incorrect proofs, when, in fact, this is where the problem lies; and

  3. Causing others to lose trust in the institution of mathematics (e.g., questioning rigor more broadly).

Second, an explicit example: Du-Hwang's proof of the Gilbert-Pollak Conjecture, which was later shown to contain a serious gap. The go-to reference for a "proof" of it was a text by Ivanov and Tužilin, but since the error in the proof was discovered, those two have gone on to explain not only where the Du-Hwang proof went wrong, but also why attempts to patch it up have been unsuccessful. To this latter end, see their arXiv note here from February 2014.

For a related MO post, see here (where I believe the top comment is from Ivanov) and a link to the note mentioned above (which contains references for further reading).

More generally, one might reasonably expect that realizing a proof is wrong took some insight, and where there is insight, it seems quite possible that there will be inspiration for new work. Whether that work leads to newfound success is sure to vary on a case-by-case basis; I'm not sure that the error in the Du-Hwang proof has led to anything of great import at this time, though it has renewed a bit of interest in the area of Steiner minimal trees.

  • I'm interested in explicit examples of 1-3, but only where the new theorems proved were of some significance to the community. – Deane Yang Jul 18 '14 at 17:01