
What are some examples of serious mathematical theory-building around hypotheses that are believed or known to be false?

One interesting example, and the impetus for this question, is work in number theory based on the assumption that Siegel zeros exist. If there were such things, the Generalized Riemann Hypothesis would be false; since GRH is presumably true, it's unlikely that there are Siegel zeros. Still, lots of effort has gone into exploring the consequences of their existence, which have turned out to be numerous, interesting, surprising and so far self-consistent. The phenomena generated by the Siegel zero hypothesis are sometimes referred to as an "illusory world" or "parallel universe" sitting alongside that of ordinary number theory. (There's some further MO discussion e.g. here and here.)
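
For concreteness, here is the standard (roughly stated) formulation of the hypothesis in question: for a real primitive Dirichlet character $\chi$ modulo $q$, a Siegel (or Landau–Siegel, or exceptional) zero is a real zero $\beta$ of the associated $L$-function with

$$L(\beta,\chi)=0, \qquad \beta > 1-\frac{c}{\log q}$$

for a suitably small absolute constant $c>0$. GRH places all nontrivial zeros of $L(s,\chi)$ on the line $\operatorname{Re}(s)=\tfrac12$, so a real zero this close to $1$ would refute it.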

I'd like to hear about other examples like this. I'd be particularly grateful for references, especially those that discuss the motivations behind and benefits of undertaking such studies. I should clarify that I'm mainly interested in "illusory worlds" built on hypotheses that were believed to be false all along, rather than those which were originally believed true or plausible and only came to be disbelieved after the theory-building was done.

Further context: I'm a philosopher interested in counterfactual reasoning in mathematics. I'd like to better understand how, when and why mathematicians engage with counterfactual scenarios, especially those that are taken seriously for research purposes and whose study is viewed as useful and interesting. But I'd like to think this question might be stimulating for the broader MO community.

  • There is a bunch of work looking into Reinhardt and Berkeley cardinals over ZF. They are known not to exist over ZFC and are generally believed to be inconsistent over ZF as well. – Wojowu Jul 15 '21 at 11:48
  • Isn't every proof by contradiction an instance of the phenomenon you seek? – Joel David Hamkins Jul 15 '21 at 12:58
  • Also, proofs by induction are often cast as ruling out a minimal counterexample. One supposes, contrary to fact, that a number $n$ does not have the property, but that all smaller numbers do have it. – Joel David Hamkins Jul 15 '21 at 13:03
  • @JoelDavidHamkins It seems like what they are asking for is a bit socially broader than a single proof by contradiction. In the case of Siegel zeros there are many different papers, and they paint a seemingly consistent picture. That's in contrast to many situations where, even if we can't derive a contradiction, a lot would have to hold that sits wildly in tension with what we know. (See, for an example in the other direction, the "first case" of Fermat's Last Theorem before Wiles.) – JoshuaZ Jul 15 '21 at 13:21
  • An example which I am dimly aware of, but lack the expertise to write a proper answer on, would be various "putative" objects that arise in finite group theory en route to some proof by contradiction, such as "simple non-abelian groups of odd order" (https://en.wikipedia.org/wiki/Feit%E2%80%93Thompson_theorem). – Yemon Choi Jul 15 '21 at 13:23
  • Hi @JoelDavidHamkins! Thanks for stopping by. Yes, things like proofs by contradiction are certainly an example of counterfactual reasoning, but as @JoshuaZ says, I'm interested in cases that (a) involve more substantive theory-building and (b) seem to have some epistemic goals beyond just trying to derive a contradiction and falsify the hypothesis. – William D'Alessandro Jul 15 '21 at 13:30
  • @YemonChoi I believe that Greg Cherlin's proposal to undertake and simplify the classification of finite simple groups by looking at tameness and minimal Morley rank counterexamples may be an instance of this. – Joel David Hamkins Jul 15 '21 at 14:00
  • The hypothesis "there exists a field of one element" sort of qualifies. https://en.wikipedia.org/wiki/Field_with_one_element – Terry Tao Jul 15 '21 at 14:41
  • I would like this question if "believed to be false" were replaced by "expected to be disproved", to make clear that any such discussion is relative to a base theory, and to facilitate discussion with a variety of possible base theories. – Jul 15 '21 at 17:00
  • @JoelDavidHamkins Every proof of a negation, I would say :-) – David Roberts Jul 16 '21 at 01:48
  • I don't think there is a fundamental distinction between theories with hypotheses "believed to be false" and those without. Mathematics is primarily about provability, not truth. As long as a hypothesis cannot yet be disproved, understanding its consequences can be interesting. If it is "believed to be false", but disproving it is hard, then building theories on it is a way of tackling that problem (i.e., building toward a hoped-for contradiction). But, as mentioned in answers here, it can also pay off when no contradiction turns out to exist. – nanoman Jul 17 '21 at 05:35
  • Even based on provably false assumptions, some well-thought-out parallel-world theories may end up being drastically useful. Think of $\sqrt{-1}$. – Claude Chaunier Jul 18 '21 at 11:10
  • The question seems to assume that there is a philosophically significant difference between studying the consequences of something I think is false for a short time (in a proof by contradiction) and doing it for a long time (in a series of papers by many authors, perhaps also hoping for an eventual contradiction). There is clearly a psychological difference, but is it philosophically significant? – Fernando Jul 20 '21 at 21:14

6 Answers

52

Girolamo Saccheri, in his Euclides Vindicatus (1733), essentially discovered Hyperbolic Geometry by building a theory around the hypothesis that the angles of a triangle add up to less than 180°. This was widely believed to be impossible, since people at the time were convinced of the absolute nature of Euclidean Geometry.
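
In modern terms (a standard formulation via Gauss–Bonnet, not how Saccheri himself phrased it), the hypothesis becomes a theorem of hyperbolic geometry: in the hyperbolic plane of constant curvature $-1$, a geodesic triangle with angles $\alpha, \beta, \gamma$ satisfies

$$\operatorname{Area}(\triangle) = \pi - (\alpha + \beta + \gamma) > 0,$$

so the angle sum always falls short of 180°, by exactly the area of the triangle.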

43

Computational complexity theory involves investigating illusory worlds, since so many of its results depend on unanswered questions. A vivid example is Russell Impagliazzo's paper "A Personal View of Average-Case Complexity", where he discusses different hypotheses related to P vs. NP. He describes the situation in terms of five possible worlds, to which he gives colorful names: Algorithmica, Heuristica, Pessiland, Minicrypt and Cryptomania. By necessity, at most one of these worlds can be the real one, so at least four of them are illusory. He discusses the implications for computer algorithms in each possible world. Algorithmica is the world in which P = NP, while the other four consider different ways in which P != NP, with different consequences for applications such as cryptography.

arsmath
  • Another good example along these lines is the unique games conjecture. The computational complexity community has vacillated between believing and disbelieving this conjecture, and so they have investigated the consequences of the conjecture as well as the consequences of its being false. – Timothy Chow Jul 15 '21 at 22:35
30

The first mathematical objects to be studied despite being believed not to exist seem to be odd perfect numbers.

In 1496, Jacques Lefèvre stated that Euclid's rule gives all perfect numbers, thus implying that no odd perfect number exists.

Euler began the study of the properties of odd perfect numbers, showing that any such number must be of the form $q^\alpha N^2$ with $q$ prime, $q \nmid N$ and $q \equiv \alpha \equiv 1 \pmod 4$. Many later results are on the linked Wikipedia page. They are all properties of the numbers themselves, however, and the existence of an odd perfect number does not seem to have consequences elsewhere, unlike the existence of a Siegel zero.
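
As a small illustration of the underlying arithmetic, here is a minimal brute-force sketch (the sigma helper and the bound of $10^5$ are chosen purely for illustration), confirming that every perfect number below that bound is even, in line with Euclid's rule:

    # A number n is perfect when the sum of its divisors satisfies sigma(n) = 2n.
    def sigma(n: int) -> int:
        """Sum of all positive divisors of n, by trial division up to sqrt(n)."""
        total = 0
        d = 1
        while d * d <= n:
            if n % d == 0:
                total += d
                if d != n // d:
                    total += n // d
            d += 1
        return total

    BOUND = 10**5  # illustrative only; any odd perfect number is known to be vastly larger
    print([n for n in range(1, BOUND, 2) if sigma(n) == 2 * n])  # odd perfect numbers: []
    print([n for n in range(2, BOUND, 2) if sigma(n) == 2 * n])  # even: [6, 28, 496, 8128]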

Stopple
26

I have heard that Jack Silver's discovery of zero sharp ($0^\#$) was part of his attempt to show that measurable cardinals are inconsistent. Instead of finding the long-sought contradiction, however, he built a beautiful and elaborate theory, now much studied and extended.

Many set theorists today do not view this as an instance of counterfactual reasoning, since they think measurable cardinals are consistent with ZFC, but from Silver's point of view, he was developing the elaborate theory in an attempt to refute the measurability assumption.

Of course, in light of the incompleteness theorem, we know that Silver's view is at least as consistent as his opposition, since if ZFC is consistent, then it is consistent with ZFC to suppose that measurable cardinals are not consistent. So one cannot really criticise Silver's view as incoherent.

  • That reminds me of the story about Vopěnka's principle: that Vopěnka originally intended to demonstrate the absurdity of some large cardinal axioms, and introduced this "principle" with an eye to refuting it. But this backfired, and instead it has been studied enthusiastically. https://en.wikipedia.org/wiki/Vop%C4%9Bnka%27s_principle – Todd Trimble Jul 15 '21 at 14:05
21

I believe that there are many instances of this phenomenon in set theory, where an elaborate theory is developed over a period of years by many people, even though the theory is not viewed ultimately as true.

Examples would include:

  • The axiom of constructibility $V=L$. This is the hypothesis introduced by Kurt Gödel in order to prove the relative consistency with ZF of the axiom of choice and the continuum hypothesis. There are hundreds if not thousands of published papers developing the nature of set theory under this hypothesis, but at the same time, it is a standard view in set theory, especially large cardinal set theory, that this axiom is not ultimately "true" of the intended platonic set-theoretic realm. Maddy has written on this, explaining how it violates her maximization maxim. Meanwhile, I have argued that a multiverse perspective allows for a more forgiving perspective on this axiom (See Hamkins, Joel David, A multiverse perspective on the axiom of constructibility, Infinity and truth. LNS, NUS 25, 25-45 (2014). ZBL1321.03061.)

  • The inner model theories of large cardinals. This is an extremely active subject in set theory, developing the analogue of the constructible universe for the large cardinal context, with again hundreds of researchers and many papers, developing the intricate theories of these models. And yet, the most common standard view amongst large cardinal set theorists, even those undertaking the inner model theory, is that the actual (platonic) set-theoretic universe is not actually one of these canonical inner models. We study them, to learn about what things would be like in these inner models, since this allows us to prove relative consistency results, and gives us a deeper understanding of the large cardinal hypotheses in question.

  • In some parts of the inner-model analysis, the inner model theories are developed under an anti-large cardinal assumption, that there is no inner model with a certain kind of large cardinal. This assumption is not held ultimately as true, but it can be true in inner models, and can be viewed as a kind of inductive analysis.

  • The axiom $V=L(\mathbb{R})$, intensely studied in the context of determinacy. Set theorists study this theory especially under the assumption of AD+DC. Again, it is not part of the set-theoretic conception that this axiom is true, but rather only that it is true in that inner model, where the determinacy issues have certain very useful consequences.

In all these cases, set theorists have introduced and developed an ongoing elaborate foundational theory, which for philosophical reasons is not viewed ultimately as true.

Of course, a fundamental philosophical issue here is the difficulty of saying what it means for a foundational set theory to be "true."

  • I don't actually know what this means, but I am curious to know if you think the following quote from Wikipedia fits the bill: "Vopěnka's principle was originally intended as a joke: Vopěnka was apparently unenthusiastic about large cardinals and introduced his principle as a bogus large cardinal property, planning to show later that it was not consistent." – Steve Huntsman Jul 15 '21 at 14:31
  • Isn't there a qualitative difference between the example given by the OP (Siegel zeroes) and the various set-theoretic universes? It's fairly well understood how the universes relate to each other (in terms of consistency, building one from another by forcing, etc.), whereas it may just turn out that there are no Siegel zeroes. Phrased differently: if there are no Siegel zeroes, the whole subject will collapse, but if there are no measurable cardinals (say), set theory will experience a boom. – Andrej Bauer Jul 15 '21 at 14:34
  • @SteveHuntsman Yes, I find that to be an instance of the phenomenon (and Todd mentioned it on my other answer). And there are other large cardinal hypotheses that were born in the same manner, including the Berkeley cardinals. – Joel David Hamkins Jul 15 '21 at 14:40
  • @AndrejBauer If measurable cardinals are inconsistent, then huge parts of set theory will collapse, and I've heard set theorists say that if large cardinals are refuted, then they will feel as though they had understood nothing. But that isn't the right analogy here anyway. The hypothesis that is studied, but held to be false, is not the large cardinal assumption, but the inner-model assumption, that the set-theoretic universe is the canonical inner model. And yes, this has a different character than the Siegel case, because in set theory it is less clear what it means to say that a statement is "true". – Joel David Hamkins Jul 15 '21 at 14:44
  • @AndrejBauer I would put it the other way: if there are no Siegel zeroes, and this is proved, then great progress will be made in analytic number theory; if there are no measurable cardinals, then as Joel says ZFC-ists will have to throw out a lot of work! – David Roberts Jul 16 '21 at 03:38
  • Yes, on second thought, the best thing that could happen to number theorists would be the discovery of an even prime larger than 3. – Andrej Bauer Jul 16 '21 at 06:44
6

This is more of a theoretical/mathematical physics example, but it can happen in mathematical physics that a great deal of theory is built around hypotheses which are essentially known to be false from the beginning, or around objects which are known not to exist. Some examples:

1.) I know that you asked for hypotheses which were always known to be false, but Tait originally started to develop knot theory because Kelvin had hypothesised that atoms could be realised as knotted vortices in the ether. The concept of the ether was invalidated by the experiments of Michelson and Morley, which obviously also invalidated Kelvin's hypothesis about the physical basis of knot theory. The mathematics of knots has survived and flourished up to the present day, although this application did not work out.

2.) A huge number of theoretical and mathematical papers have been published on magnetic monopoles (including Dirac and 't Hooft–Polyakov monopoles), although the consensus seems to be that magnetic monopoles likely do not exist in the universe we live in. I am not sure whether Dirac regarded the hypothesis that monopoles exist as false "all along", but this is certainly possible, since magnetic monopoles are forbidden by the equations of classical electromagnetism as usually written, and Dirac was considering what would happen theoretically if one increased the amount of symmetry the equations have.
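
For reference, the key formula behind Dirac's analysis (stated here in the standard textbook form, in Gaussian units; this is a gloss, not Dirac's original notation): if even a single monopole of magnetic charge $g$ exists, then consistency of quantum mechanics forces every electric charge $e$ to satisfy

$$e\,g = \frac{n\hbar c}{2}, \qquad n \in \mathbb{Z},$$

so the counterfactual hypothesis has the striking consequence of explaining why electric charge is quantized.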

  • I don't think 1) qualifies. In his first big paper On Knots, https://www-biodiversitylibrary-org.ezproxy.cul.columbia.edu/item/126566#page/191/mode/1up, Tait said: "I was led to consideration of knots by Thomson's Theory of Vortex Atoms, and consequently the point of view I adopted was classifying knots by the number of crossings; or, what comes to the same thing, investigation of the essentially different modes of joining points in a plane, to form single closed plane curves with a given number of double points." He saw a physical motivation for knot theory, not a falsifiable physical basis. –  Jul 22 '21 at 12:14