38

I am looking for examples of mathematical theories which were introduced with a certain goal in mind, which failed to achieve this goal, but which nevertheless developed on their own and continued to be studied for other reasons.

Here is a prominent example I know of:

  • Lie theory: It is my understanding that Lie introduced Lie groups with the idea that they would help in solving differential equations (I guess, by consideration of the symmetries of these equations). While symmetry techniques for differential equations to some extent continue to be studied (see differential Galois theory), they remain far from the mainstream of DE research. But of course Lie theory is nonetheless now seen as a central topic in mathematics.

Are there some other examples along these lines?

Sam Hopkins
  • 22,785
  • 6
    Does the proof of Feit-Thompson really eschew representation theory? I don't know much about the area, but the linked wiki page gives quite the opposite impression. – lambda Sep 18 '19 at 22:34
  • @lambda: maybe you are right – Sam Hopkins Sep 18 '19 at 22:37
  • 34
    Non-Euclidean geometry was initially developed in hopes of deriving the parallel postulate from the other axioms of Euclidean geometry. Nowadays it describes two of the basic model geometries in Riemannian geometry - the sphere and hyperbolic space. – Terry Tao Sep 18 '19 at 22:38
  • 21
    Many parts of graph theory originated or were stimulated in attempts to tackle the 4-colour conjecture. Perhaps the chromatic polynomial is the principal example of something explicitly introduced for the 4CC, which played no role in its solution, but which then went on to be significant in the study of phase transitions in the Potts model. – Gordon Royle Sep 18 '19 at 23:45
  • 7
    I think there are plenty of results in differential equations which use Lie theory in a non-trivial way. Ask a physicist about the Schrödinger equation, the hydrogen atom and the representation theory of SO(3). – Thomas Rot Sep 19 '19 at 10:27
  • 3
Particularly relevant: "String Theory and Math: Why This Marriage May Last" by Mina Aganagic, https://arxiv.org/abs/1508.06642. Camps have been formed by supporters and detractors, fusillades fired, but no doubt string theory has invigorated large branches of math and given rise to offshoots. – Tom Copeland Sep 19 '19 at 17:20
@TerryTao, there is a long history in Wikipedia of failed attempts to prove the parallel postulate from the other axioms, but I'm not sure how much of a theory relevant to non-Euclidean geometry (NEG) there was until the postulate was taken as such and not approached as a theorem. Knot theory is knot theory regardless of any assumption on the nature of atoms, a motivation for its development and a failed application, but NEG isn't NEG until you throw out the parallel postulate (a "rad revolution"). Any refs illustrating a geometry comparable to a nascent NEG before Gauss, Bolyai, and Lobachevsky? – Tom Copeland Sep 20 '19 at 13:12
  • (Cont) In a similar vein, could I say Maxwell's theory of EM along with the failed attempt to verify the ether in the Michelson-Morley experiment constitute a special theory of relativity or rather that Einstein's special theory of relativity was a radical revolution in thought about the nature of space and time in reaction to this failure and consistent with Maxwell's theory? – Tom Copeland Sep 20 '19 at 13:45
  • 2
    The work of Giovanni Saccheri, establishing many of the foundational theorems of elliptic and hyperbolic geometry in a vain attempt to prove the parallel postulate via proof by contradiction, can definitely be viewed as a "nascent non-Euclidean geometry". – Terry Tao Sep 20 '19 at 13:56
  • 1
    Perhaps from a strictly logical perspective there is little difference between "trying to prove the parallel postulate via proof by contradiction" and "exploring the consequences of the negation of the parallel postulate as an axiom", but from the perspective of intent (which is the focus of the original question) the two paradigms are polar opposites. – Terry Tao Sep 20 '19 at 14:02
  • Thanks, I look forward to learning about Saccheri's contributions. – Tom Copeland Sep 20 '19 at 14:23
  • 1
    This reference may be a starting point: https://www.jstor.org/stable/27957056 . It seems that there is no conclusive evidence that Saccheri directly influenced the subsequent work of Gauss, Bolyai, and Lobachevsky, but there is a strong case for indirect influence, as his work was mentioned by other contemporary mathematicians such as Lambert and Legendre, who corresponded with Gauss and the others. – Terry Tao Sep 20 '19 at 14:25
  • 2
    As a further (but admittedly weaker) example, Ptolemy's theory of epicycles was initially introduced with the goal of being the foundational theory of the motion of heavenly bodies, but ultimately failed at this goal (though it was the most accurate theory available for a millennium until the work of Copernicus, Brahe, and Kepler). However the basic idea that any motion can be described using epicycles survives in the modern theory of Fourier series. Admittedly this is a weak case though because it is unlikely Fourier was heavily influenced by Ptolemy. – Terry Tao Sep 20 '19 at 14:36
@TerryTao, very nice work by Sister Mary. (You should convert the comments into an answer.) The MacTutor History of Mathematics archive has a quick overview for those who can't access JSTOR: http://www-groups.dcs.st-and.ac.uk/history/HistTopics/Non-Euclidean_geometry.html – Tom Copeland Sep 20 '19 at 18:40
  • 3
    If this topic wasn't limited to mathematics, the most prominent answer would have been alchemy, which formed the basis of all chemistry and a bit of other fields while being miserably unsuccessful. – polfosol Sep 21 '19 at 06:36
  • 1
Further to the comments by Tao and polfosol: the mathematics developed by astrologers, dating back to the Babylonians (even some integration) and onward through Brahe and Kepler, failed dismally in its intent to predict the future for innumerable dynasties, but it forms the basis of the calculus, modern astronomy, and celestial mechanics. Some basic combinatorics was developed for divination and to categorize personality traits, for example, in Asia (cf. "The roots of combinatorics" by N. L. Biggs in Historia Mathematica 6 (1979), 109–136). – Tom Copeland Sep 21 '19 at 20:54
  • 2
And perhaps mathematical theories of modern divination in the dismal science of economics have failed at their original intent but provided mathematical foundations for game theory and for evolutionary and sociobiology. – Tom Copeland Sep 21 '19 at 21:07
The theory of universal functions and morphisms has been inactive for a long time (and was never popular), but it's obvious that the time will come when it will be truly dynamic. – Wlod AA Nov 17 '20 at 12:11
The present question is related to https://mathoverflow.net/questions/375607/profound-but-not-popular-mathematical-topics-and-notions. – Wlod AA Nov 17 '20 at 12:13
The universal functions/morphisms could also serve as an answer to that other question. – Wlod AA Nov 17 '20 at 12:21

17 Answers

52

I quote at length from the Wikipedia article on the history of knot theory:

In 1867 after observing Scottish physicist Peter Tait's experiments involving smoke rings, Thomson came to the idea that atoms were knots of swirling vortices in the æther. Chemical elements would thus correspond to knots and links. Tait's experiments were inspired by a paper of Helmholtz's on vortex-rings in incompressible fluids. Thomson and Tait believed that an understanding and classification of all possible knots would explain why atoms absorb and emit light at only the discrete wavelengths that they do. For example, Thomson thought that sodium could be the Hopf link due to its two lines of spectra.

Tait subsequently began listing unique knots in the belief that he was creating a table of elements. He formulated what are now known as the Tait conjectures on alternating knots. (The conjectures were proved in the 1990s.) Tait's knot tables were subsequently improved upon by C. N. Little and Thomas Kirkman.

James Clerk Maxwell, a colleague and friend of Thomson's and Tait's, also developed a strong interest in knots. Maxwell studied Listing's work on knots. He re-interpreted Gauss' linking integral in terms of electromagnetic theory. In his formulation, the integral represented the work done by a charged particle moving along one component of the link under the influence of the magnetic field generated by an electric current along the other component. Maxwell also continued the study of smoke rings by considering three interacting rings.

When the luminiferous æther was not detected in the Michelson–Morley experiment, vortex theory became completely obsolete, and knot theory ceased to be of great scientific interest. Modern physics demonstrates that the discrete wavelengths depend on quantum energy levels.

Gerry Myerson
  • 39,024
29

"The modern study of knots grew out an attempt by three 19th-century Scottish physicists to apply knot theory to fundamental questions about the universe".

Nik Weaver
  • 42,041
  • 3
    You beat me by 16 seconds. – Gerry Myerson Sep 19 '19 at 04:15
  • 4
    @GerryMyerson well, there was a bit more typing in yours... – Nik Weaver Sep 19 '19 at 11:04
  • 7
    Not really, it was all cut'n'paste from Wikipedia. – Gerry Myerson Sep 19 '19 at 11:51
  • 3
Perhaps the same thing will happen with string theory; in the future it may be remembered as something which gave birth to a lot of interesting mathematics but without having direct physical relevance. – Hollis Williams Sep 20 '19 at 12:42
Great paper. (I admire Maxwell and Clifford even more now.) That must be the Kirkman of the celebrated Kirkman–Cayley numbers related to the number of distinct faces of the associahedra (see my comments to მამუკა ჯიბლაძე's answer to my MO-Q "The Guises of the Stasheff Polytopes, the Associahedra"), which are related to braid groups and so to "knots" and string theory; so maybe not so unsuccessful in describing fundamental physics after all, TBD. https://mathoverflow.net/questions/184803/guises-of-the-stasheff-polytopes-associahedra-for-the-coxeter-a-n-root-system – Tom Copeland Oct 05 '19 at 19:24
  • 1
    Links tend to break over time, so best to give the title and author of any linked paper: "Knot Theory’s Odd Origins" by Silver. – Tom Copeland Oct 05 '19 at 21:10
  • More on knot theory in physics in "Topology and physics-a historical essay" by C. Nash https://arxiv.org/abs/hep-th/9709135 – Tom Copeland Oct 09 '19 at 00:58
29

Motives and the standard conjectures were developed by Grothendieck to prove the last of the Weil conjectures. They failed at this: none of the standard conjectures has been proven (despite some progress, I would say we are no closer to proving the Weil conjectures via the standard conjectures today than when they were first formulated), and Deligne showed that Grothendieck's earlier invention of étale cohomology was perfectly sufficient to prove the Weil conjectures.

However, since that time different notions of motive were constructed, with different useful properties, in addition to Grothendieck's, and many of them have found applications in areas of algebraic geometry and number theory, with the first really big one being Voevodsky's Fields medal-winning proof of the Milnor conjecture.

Will Sawin
  • 135,926
22

(Converted from a comment to an answer as requested.)

Non-Euclidean geometry was initially developed in hopes of deriving the parallel postulate from the other axioms of Euclidean geometry, as can be seen in particular through the pioneering work of Saccheri in this area, who tried in vain to prove the parallel postulate by contradiction and ended up proving a large number of foundational results in what we would now call elliptic and hyperbolic geometry as a consequence. (See for instance this article of Fitzpatrick, or this MacTutor article on non-Euclidean geometry.)

Nowadays, the classical non-Euclidean geometries (the elliptic geometry of the sphere, and the hyperbolic geometry of hyperbolic space) play the important role of describing two of the basic model geometries in Riemannian geometry, namely the simply connected geometries of constant and isotropic positive or negative curvature respectively. (In two dimensions, where Riemann curvature is effectively a scalar quantity, these two geometries, together with Euclidean geometry, are the only models needed; in higher dimensions there are however other model geometries of interest, such as the remaining five Thurston geometries of the geometrisation conjecture in three dimensions.)

Terry Tao
  • 108,865
  • 31
  • 432
  • 517
  • 2
Great example of how a beautiful theory can be in the air until it finally sharply crystallizes in the minds of innovators looking at it from a radical angle. – Tom Copeland Sep 25 '19 at 13:40
  • MacTutor link: https://mathshistory.st-andrews.ac.uk/HistTopics/Non-Euclidean_geometry/ – Tom Copeland Oct 09 '21 at 01:24
  • 1
    "Saccheri and, before him, Abu ‘Alı Ibn al-Haytham and ‘Umar al-Khayy¯am (1048–1131) made similar studies. Of course, in all these works, the existence of hyperbolic geometry was purely hypothetical. The approaches of these authors consisted in assuming that such a geometry exists and in trying to deduce a contradiction." -- From a footnote in "On Klein’s So-called Non-Euclidean geometry" by Norbert A’Campo and Athanase Papadopoulos. – Tom Copeland Oct 09 '21 at 01:27
20

String Theory!

String Theory was born in the late 1960s and early 1970s in the context of the strong interactions inside atomic nuclei. The theory turned out to be unsuited to describing the strong force and was supplanted around 1973 by the rising Quantum Chromodynamics (our current best model of the strong interactions). Among the reasons for the failure was the mandatory presence of unwanted spin-2 particles...

Those particles are now interpreted as gravitons! And String Theory is now seen as a theory of quantum gravity, describing all the known forces (electromagnetic, weak, strong) and gravity at the same time! That's a pretty big afterlife!

Rexcirus
  • 131
  • 29
    The "is describing" in your final paragraph is a trifle optimistic. Nobody has yet produced a string theory that describes the standard model in 4 dimensions as a low-energy limit. – Robert Furber Sep 19 '19 at 15:32
  • 7
    Oh yes, those are the hidden terms and conditions of those words. – Rexcirus Sep 19 '19 at 15:47
  • @RobertFurber That's not true. The problem is rather that there are too many ways to produce the Standard Model in string theory. As an example, take a look at https://arxiv.org/abs/1903.00009 – Vigod Sep 28 '19 at 02:08
  • Can string theory be considered a "mathematical theory" when not even QFT has been put on solid rigorous mathematical grounds yet? Or is the situation with string theory different somehow? – Qfwfq Nov 17 '20 at 12:44
19

This is a copy of a copy of some history of the origins of free probability by Dan Voiculescu extracted from a response by Roland Speicher, a developer of the field, to an MO-Q:

This is from his article "Background and Outlook" in the Lecture Notes "Free Probability and Operator Algebras", see http://www.ems-ph.org/books/book.php?proj_nr=208

Just before starting in this new direction, I had worked with Mihai Pimsner, computing the K-theory of the reduced $C^*$-algebras of free groups. From the K-theory work I had acquired a taste for operator algebras associated with free groups and I became interested in a famous problem about the von Neumann algebras $L(\mathbb{F}_n)$ generated by the left regular representations of free groups, which appears in Kadison's Baton Rouge problem list. The problem, which may have already been known to Murray and von Neumann, is: are $L(\mathbb{F}_m)$ and $L(\mathbb{F}_n)$ non-isomorphic if $m \not= n$?

This is still an open problem. Fortunately, after trying in vain to solve it, I realized it was time to be more humble and to ask: is there anything I can do, which may be useful in connection with this problem? Since I had come across computations of norms and spectra of certain convolution operators on free groups (i.e., elements of $L(\mathbb{F}_n)$), I thought of finding ways to streamline some of these computations and perhaps be able to compute more complicated examples. This, of course, meant computing expectations of powers of such operators with respect to the von Neumann trace-state $\tau(T) = \langle T e_e,e_e\rangle$, $e_g$ being the canonical basis of the $l^2$-space.

The key remark I made was that if $T_1$, $T_2$ are convolution operators on $\mathbb{F}_m$ and $\mathbb{F}_n$ then the operator on $\mathbb{F}_{m+n} = \mathbb{F}_m \ast \mathbb{F}_n$ which is $T_1 + T_2$, has moments $\tau((T_1 + T_2)^p)$ which depend only on the moments $\tau(T_j^k)$, $j = 1, 2$ , but not on the actual $T_1$ and $T_2$. This was like the addition of independent random variables, only classical independence had to be replaced by a notion of free independence, which led to a free central limit theorem, a free analogue of the Gaussian functor, free convolution, an abstract existence theorem for one variable free cumulants, etc.

Tom Copeland
  • 9,937
17

The chromatic polynomial of a graph was originally introduced as part of an attempt to prove the four-color conjecture (now a theorem), but was unsuccessful in that goal. However, the chromatic polynomial continues to be studied to this day as an interesting algebraic invariant of a graph.

Timothy Chow
  • 78,129
13

Quaternion multiplication was introduced for use in physics, for purposes for which cross products of vectors came to be used instead and have been used ever since.

But today quaternions are used in computer graphics. I suspect they also have other applications.

Michael Hardy
  • 11,922
  • 11
  • 81
  • 119
  • 12
I don't think quaternions were unsuccessful; they successfully provide an algebraic realization of rotations in 3-space. It just happens that this approach was to a large extent superseded by vectors and linear algebra. – Kimball Sep 19 '19 at 09:36
  • 1
Quaternions are still extremely important in pure mathematics, but Hamilton's original purpose of using them in mechanics was taken over by the algebra of finite-dimensional vectors, which is extremely simple and streamlined. – Hollis Williams Sep 20 '19 at 12:46
  • 3
    Hypercomplex number systems in general can be included here: The dual quaternions are used to represent rigid body motions in computer graphics; the dual numbers are used to implement auto-diff (in forward mode only, and largely for pedagogical purposes); Clifford algebras are used to construct spinors in quantum physics. – wlad Sep 20 '19 at 20:13
  • 1
    @Tom Is there any published expository summary of the most important or most interesting uses of quaternions in pure mathematics? – Michael Hardy Sep 23 '19 at 04:13
It's a historical thing mainly; their historical impact is essential, as they were the first non-commutative division algebra to be written down explicitly, which freed mathematicians from the idea that such algebras had to be commutative. It's not really a question of applications to other areas of pure mathematics. They also turn up in spin geometry. – Hollis Williams Sep 23 '19 at 10:13
  • 3
    The unit quaternions form a (Lie) group isomorphic to $SU(2)$ and $Spin(3)$ which is ubiquitous in physics. – W. Edwin Clark Sep 24 '19 at 22:46
  • 1
    Witty lecture on the historical competition between the proponents of the algebra of quaternions vs. those for modern vector analysis: "A History of Vector Analysis" by Crowe (https://www.researchgate.net/publication/244957729_A_History_of_Vector_Analysis). – Tom Copeland Feb 27 '21 at 16:21
10

Logic and set theory were developed by Frege, Russell and Whitehead, Hilbert and others in the late 19th, early 20th centuries with the goal of providing a firm foundation for all of Mathematics. In this they failed miserably, but nevertheless they have continued to develop and to be studied for other reasons.

Gerry Myerson
  • 39,024
  • 20
    The current wording sounds like you’re suggesting “logic and set theory” failed miserably as a firm foundation for mathematics, which would be a pretty extreme claim (and I say that as a big proponent of non-set-theoretic foundations). Do you mean just that the specific systems Frege and Russell–Whitehead used failed as foundations? If so then that’s certainly true, but as far as I know they’re not studied much today except for historical interest. – Peter LeFanu Lumsdaine Sep 19 '19 at 12:53
  • 6
I would say rather "they failed in their project of reducing mathematics to logic". I'd also add that their work led directly to the development of modern computer science. – Chris Sunami Sep 19 '19 at 15:32
  • 6
    Maybe the word "firm" is ambiguous. If we interpret "firm" as "absolutely certain, unassailable, and indubitable" then they indeed failed, but with a weaker notion of "firm" (and also a suitable notion of what a "foundation" is supposed to be) then I think they succeeded. – Timothy Chow Sep 19 '19 at 16:02
  • 5
    I had a feeling that this answer would be a bit more controversial than some others, and perhaps I worded it in an unnecessarily inflammatory way. But I reckon that the idea was to show that arithmetic and other mathematical systems were complete and consistent, and that project was derailed by the incompleteness theorems. – Gerry Myerson Sep 19 '19 at 22:45
It's pretty common to have to weaken a notion in this way and then have a lot of success with that weaker notion: for example, 'weak solutions'. – Hollis Williams Sep 20 '19 at 12:44
  • 3
    @GerryMyerson: I guess this comes down to an interesting historical question: did most early logicians really view proving completeness+consistency of foundations (i.e. the “failed” goals) as an essential or major goal of the foundational project, or were their central motivations more in line with the aspects that succeeded (i.e. a formal system able to encode all mathematics, giving a clear consensus standard for proof correctness in principle)? My impression is more the latter, but I’m not enough of a historian to be certain. – Peter LeFanu Lumsdaine Sep 20 '19 at 21:47
  • 5
    @PeterLeFanuLumsdaine : Zermelo, Russell, et al. specifically stated that one of their main goals was to eliminate the antinomies, and they all saw that the path forward was to lay down a minimum number of assumptions with great precision. They also hoped to prove consistency (Hilbert's 2nd problem). So I think the answer to your question is "both," and their project did partially succeed. Even on the question of consistency, Gentzen's consistency proof of arithmetic can be viewed as a partial success. – Timothy Chow Sep 21 '19 at 02:43
  • 1
    Regarding your first comment in this thread, I don't think that your statement is accurate @PeterLeFanuLumsdaine. It is true that, for example, Principia Mathematica is studied primarily by a group of specialists today, but it is not the case "they’re not studied much today except for historical interest". You may take a look at this, this, this. –  Sep 21 '19 at 13:43
  • In particular see the second reference (in particular section 5), where the author writes (in footnote14), "It is likely that Gödel’s exposition of PM was not entirely faithful to the original; in particular, Gödel did not fully take into account that numerals, as signs for classes, in PM were incomplete symbols and not genuine terms. This issue, and the complications arising from it, cannot be explored in depth without providing a full reconstruction of PM, which cannot be attempted here." –  Sep 21 '19 at 13:46
  • 2
    @user170039: I’m well aware that those early systems are still studied, and the motivation isn’t only historical — but the papers you link, and all the work on them I’ve seen, clearly have the systems’ historical significance as a pretty major part of their motivation, to a far larger extent than is typical in mathematics. – Peter LeFanu Lumsdaine Sep 21 '19 at 18:50
9

Gauge theory might be another example at the border with physics. The original idea of deriving physics from gauge symmetries, and indeed the use of the term/prefix "gauge" (in German "Eich-") itself, goes back to a paper by Hermann Weyl in 1919 ("Eine neue Erweiterung der Relativitätstheorie"). In this paper he tried to unify electrodynamics and general relativity using this approach, by postulating that the notion of scale (or "gauge") might be a local symmetry. This was a total failure, as it contradicted several experiments.

It was only about a decade later that he and others picked up the idea again, applied it to electromagnetism and quantum physics (this time with the phase as the gauge) and made it work. And then of course in 1954 came Yang and Mills, and now Weyl's "failed idea" is at the core of the Standard Model of particle physics. However, the original goal of adding general relativity to the mix still hasn't been achieved.

mlk
  • 1,974
7

Ronald Fisher's theory of fiducial inference was introduced around 1930 or so (I think?), for the purpose of solving the Behrens–Fisher problem. It turned out that fiducial intervals for that problem did not have constant coverage rates, or in what then came to be standard terminology, they are not confidence intervals. That's not necessarily fatal in some contexts, since Bayesian credible intervals don't have constant coverage rates, but everyone understands that there are good reasons for that. Fisher wrote a paper saying that that criticism is unconvincing, and I wonder if anyone understands what Fisher was trying to say. Fisher was brilliant but irascible. (He was a very prolific author of research papers in statistical theory and in population genetics, a science of which he was one of the three major founders. I think he may have single-handedly founded the theory of design of experiments, but I'm not sure about that.)

However, fiducial methods seem to be undergoing some sort of revival:

https://statistics.fas.harvard.edu/event/4th-bayesian-fiducial-and-frequentist-conference-bff4

Michael Hardy
  • 11,922
  • 11
  • 81
  • 119
4

Continuing what was said by @GerryMyerson, the project of providing foundations for mathematics started by Frege was presented in a treatise called Grundgesetze der Arithmetik (Basic laws of arithmetic). The axioms of this treatise were proven inconsistent by Bertrand Russell in what we know today as Russell's paradox.

This paradox also affects naive set theory, understood as the theory comprising the following two axioms:

  1. Axiom of extensionality: $\forall x\,(x \in a \leftrightarrow x \in b) \rightarrow a = b$. That is, if two sets $a$ and $b$ have the same elements, then they are the same set.

  2. Axiom (scheme) of unrestricted comprehension: $\exists a\,\forall x\,(x \in a \leftrightarrow P(x))$, for each formula $P(x)$. That is, to each property $P$ there corresponds a set $a$ of exactly those $x$ satisfying $P$.

Naive set theory, thus understood, follows from Frege's axioms and seems to capture very well the notion of set. But since $x \notin x$ is a formula, the axiom scheme of unrestricted comprehension guarantees that there is a set $a = \{x : x \notin x\}$, and instantiating $x := a$ yields:

  3. $a \in a \leftrightarrow a \notin a$

Now, when we ask whether $a \in a$ or $a \notin a$, we obtain a contradiction in either case.

This paradox was solved by discarding this axiomatisation of set theory and, hence, Frege's axiomatics. But some logicians, mathematicians and philosophers have considered that perhaps this wasn't the right way to solve it. Instead of rejecting this naive set theory or Frege's theory, they propose to reject the principle of explosion, or ex contradictione sequitur quodlibet:

  4. $P \wedge \neg P \rightarrow Q$. That is, from a contradiction follows any formula or statement.

This research programme is often known as the paraconsistent programme, because it works with paraconsistent logics. A logic is said to be paraconsistent iff the logical thesis (4) is not valid in general. Hence, if a theory is inconsistent, it does not follow that anything whatsoever is derivable from it (which means that it may still be useful). You can find out more about this programme in:

You will find there (especially in the second link) a whole programme for researching inconsistent mathematical theories, which are generally considered of no mathematical interest. (You will also find that it is mainly philosophers who are working in this programme.)

Whether this programme is of any scientific value is for you to judge. I accept this probably wasn't the kind of answer you were looking for, but there is a chance you will find it interesting. I hope it helps in any case.

lfba
  • 121
4

It is perhaps a slight exaggeration, but the development of algebraic number theory (particularly the study of cyclotomic fields) was strongly motivated by attempts to prove Fermat's last theorem.

And everyone knows the end of that story ...

To quote Wiki:

Fermat's last theorem:

The unsolved problem stimulated the development of algebraic number theory in the 19th century and the proof of the modularity theorem in the 20th century.

Cyclotomic field:

The cyclotomic fields played a crucial role in the development of modern algebra and number theory because of their relation with Fermat's last theorem. It was in the process of his deep investigations of the arithmetic of these fields (for prime n) – and more precisely, because of the failure of unique factorization in their rings of integers – that Ernst Kummer first introduced the concept of an ideal number and proved his celebrated congruences.

WhatsUp
  • 3,232
  • 16
  • 22
4

I think Dirac's equation is a good example. Dirac was looking for a special-relativistic version of Schrödinger's equation. For the probabilistic interpretation to work, it had to have only first-order time derivatives, unlike the field equations known at the time.

He found a Lorentz invariant field equation with first-order derivatives, and it turned out to have enormous theoretical importance since it kicked off the study of relativistic field theories and Lie group representations in physics.

But the Dirac equation isn't a relativistic version of Schrödinger's equation. It can't describe multiparticle entanglement, it doesn't violate Bell's inequality, you can't build a quantum computer in it, etc. From a modern perspective it's just the massive, spin-½ counterpart to Maxwell's equations.

A version of the Dirac equation appears in the Lagrangian of quantum electrodynamics and the Standard Model. But it's right alongside a version of Maxwell's equations, complete with second-order derivatives, which turned out not to be a problem after all.

It's often still taught in introductory courses that Dirac's equation explained the electron's spin and magnetic moment, but both of those retrodictions were essentially accidental. Dirac's argument for spin ½ would imply that all fundamental particles must have half-integer spin, which doesn't appear to be the case; and Weinberg says "there is really nothing in Dirac's line of argument that leads unequivocally to this particular value for the magnetic moment" (The Quantum Theory of Fields, Vol. 1, p. 14).

benrg
  • 281
2

E.H. Moore's General Analysis was to be a unifying framework for (at least) analysis. This goal was never achieved, in part due to the rather complicated formalism used by Moore.

Nonetheless, a part of that theory, called Moore-Smith sequences or nets, survived and even thrived as a way of describing topology where it is known that sequences do not suffice.

Sam Sanders
  • 3,921
0

The typical oracle methods of computability theory (a.k.a. recursion theory) were shown to be insufficient to settle the P vs. NP problem by Baker, Gill and Solovay in 1975: they exhibited oracles $A$ and $B$ with $\mathrm{P}^A = \mathrm{NP}^A$ but $\mathrm{P}^B \neq \mathrm{NP}^B$, so no proof that relativizes can resolve the question.

Thus recursion theory became divorced from the problems of efficient computability and experienced a bit of a setback (not as many papers in Ann. Math. anymore, etc.).

Nevertheless it continued as the study of in-principle computability.

  • 8
Oracle methods predate the interest in, or even the formulation of, P vs. NP. The failure of oracle methods for that problem surely highlighted the distance between computability and efficient computation, but I don't think the two subjects were ever very married. –  Sep 18 '19 at 23:28
  • @MattF. Fair enough but people became more interested in efficient computability than in-principle computability, because of practical applications. – Bjørn Kjos-Hanssen Sep 19 '19 at 06:49
0

There is the ramified type theory of Whitehead and Russell, which at the time was seen as a great step forward; it is why they named their treatise Principia Mathematica, hoping to set mathematics on the same solid foundations as Newton did with his Principia for natural philosophy, that is, physics as it would be described now.

It's only with the flowering of a new field, computer science, that type theory came into its own ...

There is also Maxwell's theory, which, as was recognised in particular by Heaviside, was an enormous achievement, but it was generally ignored until a couple of decades later.

Mozibur Ullah
  • 2,222
  • 14
  • 21