
Recently, I've become interested in proof assistants such as Lean, Coq, and Isabelle, and in the drive from many mathematicians (Kevin Buzzard, Tom Hales, the Metamath community, etc.) to formalise all of mathematics in such a system. I am very sympathetic to some of the aims of this project, such as finding mistakes in published proofs and definitively verifying complex or controversial proofs [Kepler conjecture (Hales), four-colour theorem (Appel/Haken), Fermat's last theorem (Wiles), ...].

However, I'd like to understand more about the ultimate goals of this project, and the vision for achieving a computer formalisation of all of mathematics. I understand a good portion of core (undergraduate) mathematics has been formalised. Yet, the bulk of research-level mathematics remains unformalised, and it seems to me that we are currently creating new mathematics at a far greater rate than we are formalising mathematics. Are we always going to be "chasing our tails" here? What needs to change so that we can catch up and achieve a complete formalisation of mathematics?

Another issue is, of course, that new mathematics is being created all the time. If we achieve complete formalisation, how do we sustain it? Will there be "professional formalisers" whose job it is to formalise the recent results of mathematicians? Will mathematicians themselves be expected to formalise their results as they are proved? What role will proof assistants play in 20 (or 50, or 100) years?

  • "What role will proof assistants play in 20 (or 50, or 100) years?" Well, in order to answer this, I guess one should possess some kind of pre-cog ability :-) – Francesco Polizzi Dec 09 '20 at 07:31
  • This seems like a question about the practice of mathematics rather than about research mathematics per se. While there are certainly plenty of such questions here that have done well, this one seems too subject to opinion (or, as @FrancescoPolizzi suggests, knowledge of the future) really to belong here. – LSpice Dec 09 '20 at 18:17
  • One near-term motivation for formalization can be seen in Scholze's recent formalization challenge. He (and Dustin Clausen) have a result that is "logically involved" (of the form $\forall\exists\forall\exists\forall\exists$), and that could be the basis of a new approach to real functional analysis by applying the theorem in a "black box" way. The "black box" application means the details of the proof may get examined less than other foundational results. One should read the linked post for more details. – Mark Schultz-Wu Dec 09 '20 at 18:22
  • @FrancescoPolizzi A question about future expectations is perfectly reasonable, and particularly relevant in this case. Just not to be taken literally, possibly in good faith, as reading the future. – YCor Dec 09 '20 at 18:28
  • Who is talking about formalizing all of mathematics, and where? That assumption of the question seems dubious, and I don't see any references for that claim. –  Dec 09 '20 at 19:30
  • @MattF.: My initial reaction to the question was identical to yours, and I posted a nearly identical complaint as a comment. However, this actually seems to be what Buzzard is proposing in the lecture linked from my answer. I agree that the question would be improved by editing to make this more explicit. –  Dec 09 '20 at 19:35
  • @BenCrowell, that is odd since Buzzard begins elsewhere by saying "I want to digitise some parts of mathematics. Which parts?" http://wwwf.imperial.ac.uk/~buzzard/docs/buzzard_big_proof2019.pdf –  Dec 09 '20 at 19:48
  • @MattF.: My answer is a summary of an hour-long lecture, which I linked to. In that lecture, I think he tries his best to explain his goals for an audience of computer scientists. If you read my answer, I think you'll see that he's pretty explicit about proposing to digitally formalize all of mathematics. Feel free to watch the lecture or look at his other writings, and let me know if you think my answer mischaracterizes his thoughts. But quoting a single sentence from a powerpoint, without context, seems to me like a less reliable way to know what he's proposing. –  Dec 09 '20 at 19:55
  • I would suggest, as with several questions that have surfaced concerning what Kevin has said and written in a certain rhetorical vein, that we are best off waiting for him to turn up in person, when he can either row back or double down as he sees fit :) – Yemon Choi Dec 09 '20 at 20:02
  • @YCor: sorry, I do not completely understand the part about "good faith". Of course I was joking but, still, asking about the role of proof assistants in 100(!) years is a bit like asking to predict the weather one year in advance. A more reasonable question would be asking about the role of proof assistants in the next few years, but even the answer to this is not completely obvious (at least, to me). – Francesco Polizzi Dec 09 '20 at 20:07
  • @BenCrowell, regardless of that video, mathematics is more than proofs! It is also thinking, talking, teaching, publishing, hiring.... A reasonable question is “what is the role of formalizing proofs within our social activity of mathematics?” But once you ask the question that way it’s clear that even formalizing all proofs is still not formalizing all mathematics. –  Dec 09 '20 at 21:29
  • @MattF. : The closest thing to a statement about "formalizing all mathematics" is probably the QED manifesto. – Timothy Chow Dec 09 '20 at 23:46
  • I pushed the button on this one because the question did in fact succeed in evoking a lot of opinions, however well meaning they were. – Andrej Bauer Dec 10 '20 at 09:07
  • What's the point of digitising music? Vinyl works fine. But digitising music was a game changer -- now we have Spotify. What's the point in digitising books? What's the point in LaTeX? Typewriters worked fine. What's the point of digitising mathematics? We have no idea what will happen as a consequence. People will use it to do crazy things we can't even imagine right now. Ask the machine learning people what the point of digitising mathematics is. Ask the educators. Ask Scholze. People will figure out uses. We should do it because we can. Because nobody tried to do it yet. – Kevin Buzzard Dec 10 '20 at 09:52
  • Hi @KevinBuzzard - glad to see you joining in. As you know, I am generally sympathetic to/interested in what you and others are seeking to do with formalization, but in a tongue-in-cheek spirit I can't help remarking that your closing sentences and the mention of ML make me think of this scene from Jurassic Park ( https://www.youtube.com/watch?v=4PLvdmifDSk ) – Yemon Choi Dec 10 '20 at 18:42
  • Yemon, Michael Harris has also raised the issue as to whether this is a good or sensible idea, and whilst I have changed my mind about many issues here I still think that suggesting we don't do it is the point of view of a Luddite. Sure educators might use it to do the wrong things, sure ML people might try to use it to generate proofs which are of no use to humans, but also people will use it to enable us to think about mathematics in new ways, to make our lives easier, to increase human understanding. You can't stop progress. Formalisation will happen and it will inevitably bring benefits. – Kevin Buzzard Dec 11 '20 at 11:47
  • @YemonChoi: While, at a pop cultural level, I really like this "rockstar mathematician" in Jurassic Park who "hates being always right", I have to say that I find his purported scepticism a bit... well, let's say, flat. ;-) – Jochen Glueck Dec 13 '20 at 23:11
  • @JochenGlueck It was meant as a tongue-in-cheek response. FWIW, there are some (but not all) of Kevin's stated reasons for seeking to formalise as much maths as possible, which I think I agree with. What I don't agree with is Kevin's occasional line that "we should do it because we can". We should do it because we think it could have benefits A, B and C or because we think structural problems X, Y and Z need fixing. – Yemon Choi Dec 14 '20 at 02:03
  • For instance, although it might not be so relevant to discussions on this post, I think the issue of tacit identifications and unchecked details in literature are a real concern, which formalization might help to address. That to me is a worthwhile and defensible objective of formalising mathematics, even if it is an asymptote and not an "endgoal" in the sense of the OP. – Yemon Choi Dec 14 '20 at 02:06
  • @YemonChoi: Thank you for your response! Yes, I took your comment to be tongue-in-cheek, and my comment was meant to be so, too (I apologize in case it sounded more serious than I meant it to be). Anyway, I would be interested to better understand your objection against the claim "we should do it because we can": Is your main point that "we can do it" is not a good reason to do something, and one should only do things for good reasons (for instance, due to the fact that "doing things" requires resources)? [to be continued] – Jochen Glueck Dec 14 '20 at 18:41
  • [continuation] Or are you mainly concerned about potential negative consequences of research that is carried out without much thought about such consequences (as indicated by your Jurassic Park reference)? – Jochen Glueck Dec 14 '20 at 18:41
  • @JochenGlueck A bit of both, but mainly the latter. I also think that "we should do it because we can" is a bit of a cop-out since it avoids specifying intended outcomes. Even "we should do it because the act of formalizing will make us understand our existing assumptions better" counts as a justification, even though it is not promising new results or fixing identified gaps. – Yemon Choi Dec 14 '20 at 19:34
  • @YemonChoi: Thanks for the explanation! I'd say, there are several very interesting things to discuss about this; but this comment section is most likely not the right place for it. So I'll restrict myself to just thanking you again for your responses! – Jochen Glueck Dec 15 '20 at 22:31
  • @KevinBuzzard - I am reminded of a scifi story in which formalization is fraught: https://en.wikipedia.org/wiki/Luminous_(book) is summarized as "A pair of researchers find a defect in mathematics, where an assertion (X) leads to a contradictory assertion (not X) after a long but finite series of steps. Using a powerful virtual computer made of light beams (Luminous), they are able to map and shape the boundary between the near-side and far-side mathematics, leading to a showdown with real consequences in the physical universe." – Steve Huntsman Jan 22 '21 at 14:04

4 Answers


This answer is based only on my having watched this lecture by Kevin Buzzard, followed by my applying some skepticism and critical thinking.

Buzzard has had some experiences that make him not trust some of the results that people are claiming in mathematics papers. He tells the story of refereeing a paper and fighting (and winning) a long, drawn-out battle because he didn't believe the authors had proved what they said they had. He also says there are papers coming out in which a result is said to be proved, but there are 100 pages of details that have been left out and will be supplied in a forthcoming publication.

With, in my opinion, less justice, he complains that professional mathematicians "do math like Gauss and Euler," "don't know foundations," "don't know set theory," and decide what is correct mathematics based on a consensus of beliefs among the elders in the community. This seems absurd to me. I would consider a theorem by Gauss or Euler to be less reliable if it depended on ZFC. If someone told me that a theorem in analysis depended on choice, I would decide that that theorem was not really true but only true conditioned on AC. If ZFC is proved tomorrow to be inconsistent, I will still have great faith that the theorems of Gauss and Euler are correct within some more conservative foundational system.

However, if you believe at least some of his complaints about real uncertainty and doubt in published mathematical results, then it does make some sense to see if you can "formalize" these results in the sense of getting a computer to verify them all the way down to some foundation.

What seems to have been going on for a long time with computer theorem checkers is that because using them was so labor-intensive, people would try to code up the tiniest possible fragment of mathematics in order to achieve their goal. The trouble with this strategy seems to be the following. If you believe that a lot of math really is in doubt, then the doubtful parts are in areas where there is a big pyramid of math. At the apex you have Fermat's last theorem or something about perfectoid spaces (which I know nothing about, but he cites them as modern pieces of mathematical machinery with a lot of moving parts). The apex is supported by a whole bunch of more basic stuff, going all the way down through freshman calculus, etc. Therefore you can't do a tiny fragment of mathematics, because these aren't self-contained fragments.

So I think that Buzzard's argument is: (1) there are important, significant doubts about whether results deemed true by the elders really are true; (2) theorem provers are a good way of solving this problem; and (3) in order to carry out this solution, one needs to build up a comprehensive database of the beliefs of the elders (his characterization of Hales).

My own opinion is that 1 is probably true but not a threat to the enterprise of mathematics, 2 is speculative, and 3 is infeasible. Of course my opinion is worthless because I'm not even an amateur in this area. But note that even if I'm right, it doesn't mean that proof assistants and automated theorem checkers shouldn't be worked on at all -- it just means that Buzzard's grandiose dream isn't going to happen.

If we achieve complete formalisation, how do we sustain this? Will there be "professional formalisers" whose job it is to formalise the recent results of mathematicians?

Looking into my crystal ball, I predict that it will not be achieved, because it costs too much, and the necessary vast funding will not be allocated. It won't be allocated because not many mathematicians agree that it would be a worthwhile use of the resources. Systematic ongoing formalization will also not happen, for the same reasons.

Will mathematicians themselves be expected to formalise their results as they are proved?

No. By all accounts this work is tedious and extremely time-consuming. Mathematicians want to work with new ideas, not spend their time writing code to explain their work to a computer that is orders of magnitude dumber than the dumbest undergrad they've ever taught.

  • This brings up an interesting point. As mentioned, if the goal of formalization is to feel more secure about the cutting-edge of math by formalizing everything, that would require a huge change in mathematical practice and culture. On the other hand, if the goal is to root out bogus arguments, perhaps a different system could be designed: one which could be fed in the logical "shape" of an argument and try to detect places where the logic might fail. Notice that in the first picture, the onus is on the prover, and in the second it is on the doubter. – Sam Hopkins Dec 09 '20 at 19:03
  • How do you define "true" when you say that some theorem would not be true, but conditional on AC? – Asaf Karagila Dec 09 '20 at 19:14
  • @AsafKaragila: OK, I'll give it a shot, but it's going to be a definition with sociological elements. I mean that there would be some alternative foundation, call it F2, with the following properties. F2 does not imply the theorem we're talking about. F2 suffices to prove every theorem of analysis proved by Gauss and Euler. F2 is more conservative than ZFC (has models in ZFC). F2 does not imply full AC. Analysts agree that F2's axioms are intuitively true. –  Dec 09 '20 at 19:26
  • So, true is the theory you get from the reverse mathematics of Gauss + Euler + whatever fragments you have that don't require things like "the real numbers are not a countable union of countable sets" and other choicey consequences. – Asaf Karagila Dec 09 '20 at 19:28
  • @AsafKaragila: Sure, or maybe we should call it classical-analysis-true. Seems to me that there is a whole separate Q&A implied here, which you would be more qualified than I am both to ask and to answer :-) –  Dec 09 '20 at 19:32
  • I perceive some disharmony between "If someone told me that a theorem in analysis depended on choice, I would decide that that theorem was not really true but only true conditioned on AC." and the very next sentence about implications of inconsistency of ZFC. Surely you know that if ZFC is inconsistent, then ZF is inconsistent, too, and not only theorems conditioned on AC, but also theorems of ZF fall. What next? Shall we give up infinity, power set, or replacement, in the quest for the "conservative enough theory"? Which one of those three is the most dispensable one for Gauß and Euler? – Jirka Hanika Dec 09 '20 at 21:08
  • "1 is probably true but not a threat to the enterprise of mathematics". What the heck is the enterprise of mathematics then? – cody Dec 09 '20 at 21:28
  • @JirkaHanika: you could argue infinity is too strong, and not really self-evident since it gives the existence of an object which doesn't exist in the real, physical world. See Steve Simpson and others on "potential infinity" vs "actual infinity". – Jordan Mitchell Barrett Dec 09 '20 at 21:50
  • @cody: I think the point is that there are almost certainly errors in major proofs such as Fermat's last theorem, but none of these errors would actually have an impact on whether these results are true. – Jordan Mitchell Barrett Dec 09 '20 at 21:52
  • @AsafKaragila My preferred theory of analysis includes the Arzelà-Ascoli theorem and Tikhonov's theorem, but the negation of the well-ordering theorem. As you know, I don't really like analysis. – Carl-Fredrik Nyberg Brodda Dec 09 '20 at 21:55
  • @JirkaHanika: I'm not a specialist, and my answer is mainly just journalism reporting what Buzzard said, interspersed with inexpert opinions that you can feel free to ignore. But re your comment: -- In the unlikely event that ZFC were proved tomorrow to be inconsistent, I imagine that the methods and concepts of the proof would reveal big things to us about the foundations of mathematics that we previously hadn't thought of. Maybe based on these new ideas we'd find more satisfactory foundations. Those would clearly not just be ZFC with one axiom omitted, [...] –  Dec 09 '20 at 22:45
  • [...] but any mathematician looking at the axioms of ZFC tends to feel that if there's anything not intuitively true about them, it's choice. So maybe we would end up with a mathematics that was "less choicey." Or, as suggested by @JordanMitchellBarrett, maybe it would be more finitistic. –  Dec 09 '20 at 22:48
  • @BenCrowell : I don't think that Buzzard's point about "elders" has anything to do with ZFC specifically. If ZFC bothers you, replace it with something much weaker, such as RCA$_0$. As I see it, Buzzard is contrasting a situation in which all the details of a proof are truly available for public inspection, with a situation where many crucial details are locked up inaccessibly in the brains of a few experts. – Timothy Chow Dec 09 '20 at 23:06
  • 'Professional mathematicians "do math like Gauss and Euler"' - Wouldn't it be great if that were true? – Jonny Evans Dec 10 '20 at 05:50
  • @BenCrowell - I think that people tend to single out AC mainly because it was last to the party. That made it more visible than the rest of the ZF (while less visible than large cardinals). But I get your general point now. – Jirka Hanika Dec 10 '20 at 07:44
  • @JirkaHanika: Last to the party? It was the whole reason the party is here to begin with! People are "suspicious of choice" because it's "magic" and non-explicit. This approach leads you to doubt LEM, and then power set, infinity, and then Replacement (or in its equivalent form: induction). Anyway, AC is not where inconsistencies can even hide. It's Replacement, Infinity or Power Set. Those are the points where consistency strength jumps up. – Asaf Karagila Dec 11 '20 at 22:38
  • @AsafKaragila - Last to the popular party called ZFC. If set theory ever made me doubt LEM, the prize was to save unrestricted comprehension. Otherwise you and I are saying the exact same thing, I am happy to observe. – Jirka Hanika Dec 11 '20 at 22:59
  • @JirkaHanika In what sense was AC the last to the ZFC party? Surely that honor belongs to Replacement. – Timothy Chow Mar 22 '22 at 23:23
  • @TimothyChow - It's a complicated question. Independence of the AC is from 1922, but I'm not sure how quickly the AC received unconditional acceptance among set theorists (ZFC vs. ZF). By 1922, Replacement was already considered a firm member of ZF (by Fraenkel). I suppose you must have meant Regularity (aka. Foundation) missing from the Fraenkel formulation which was however established still during the twenties (pushing out Fraenkel's AR). Some much later developments dropped either AC or Foundation (AD in 1962, AFA in 1988) to gain very interesting "alternative foundations" within ZF. – Jirka Hanika Mar 23 '22 at 08:31
  • This part of the answer: "If someone told me that a theorem in analysis depended on choice, I would decide that that theorem was not really true but only true conditioned on AC. If ZFC is proved tomorrow to be inconsistent, I will still have great faith that the theorems of Gauss and Euler are correct within some more conservative foundational system." still strikes me as showing a poor grasp of the relative consistency results. – Jirka Hanika Mar 23 '22 at 08:37

Since you mentioned 100 years out, let me suggest that the end goal is to put ourselves out of business.

We just need to formalize enough mathematics so that AI programs will get the idea of what we're trying to do. Then they'll take over the job of doing mathematics research. We won't need to formalize any old math at that point because the AIs will rediscover everything worth rediscovering anyway.

[EDIT: For an example of some first baby steps in this direction, see Generative language modeling for automated theorem proving by Stanislas Polu and Ilya Sutskever.]

I am half joking here, but only half. I don't think that the only, or even the main, goal of formalizing is to increase the certainty of a proof from 99.9% to 99.9999999%. The main goal is to enable new approaches to doing research that don't currently exist.

For that goal, complete formalization isn't necessary, but for what it's worth, I also don't believe that "chasing our tail" will be a problem. Setting aside my pie-in-the-sky vision of putting mathematicians out of business, I think that once the tools get good enough, and the mathematical populace is sufficiently trained, new proofs will be formalized as a matter of course, just as papers are now written in $\LaTeX$ as a matter of course. As for the older papers, formalizing them will be assigned as exercises for students who are just learning the system.

Timothy Chow
  • Music has been thoroughly digitized, but that has not put all musicians out of business. –  Dec 09 '20 at 22:06
  • @MattF. : But in 100 years, perhaps we'll all have folded hands, that don't even play music. – Timothy Chow Dec 09 '20 at 22:56
  • It's not going to take 100 years... – Andrej Bauer Dec 10 '20 at 09:01
  • I find your second paragraph interesting because, speaking purely for myself and quite possibly at 180 degrees from what Kevin intends, I have the exact reverse priorities. (Actually to be more precise I'd settle for increasing the completeness of selected proofs from 80% to 95% ) – Yemon Choi Dec 10 '20 at 18:46
  • @YemonChoi : If there's a 20% chance of incorrectness of an important theorem then that sounds like a sociological problem that needs more than a formalization effort to address. – Timothy Chow Dec 10 '20 at 21:04
  • I said incompleteness rather than incorrectness, and I never mentioned "importance" :) What I think is true is that the literature can accumulate various statements or groups of statements which are known to "basically be true in the cases we worry about and we just stated them more generally because the argument should go through", which is fine when all the experts know where the solid ground is, but not so good as experts move on metaphorically or, eventually, literally – Yemon Choi Dec 10 '20 at 21:12
  • ... and then years later people may want to apply what is stated in print rather than "what was in the heads of the experts" – Yemon Choi Dec 10 '20 at 21:12
  • @YemonChoi : Thanks for the clarification. I'm not sure what would happen in your scenario if people were "forced" (or expected) to write down a formal proof. They might choose to formalize just a special case, because that's all they really need and because formalizing the general case requires a lot more work. In that case, we might still be struggling with the same problem of needing a more general proof ten years later, and not having it available. – Timothy Chow Dec 10 '20 at 22:59
  • @TimothyChow My understanding is that Kevin's Lean Math group has had to rewrite its category theory library several times because of this kind of thing. – Harry Gindi Jan 17 '21 at 17:44
  • Timothy Chow's answer may not necessarily reflect the goals of all (even a plurality?) of folks currently working on formalization, but it is surely the clear answer to "what are the most important implications of formalizing mathematics?" I expect that within my lifetime, mathematics research will transform into a very different business from what it is today. I don't think we'll be out of jobs, but our role in research will be fundamentally different. What exactly it will be, I'm not sure, except that it will continue to evolve well after the initial adoption of AI methods as they improve. – Tim Campion Feb 09 '21 at 22:51

Other answers address several aspects of your question, but to add a point not mentioned yet — you write:

it seems to me that we are currently creating new mathematics at a far greater rate than we are formalising mathematics. Are we always going to be "chasing our tails" here? What needs to change so that we can catch up and achieve a complete formalisation of mathematics?

Another issue is, of course, new mathematics is being created all the time. If we achieve complete formalisation, how do we sustain this? Will there be "professional formalisers" whose job it is to formalise the recent results of mathematicians? Will mathematicians themselves be expected to formalise their results as they are proved?

Certainly, at the moment, formalising maths is very laborious and time-consuming. But I think most people who work in formalisation hope that it won’t stay so bad! It’s been getting steadily better as proof assistants and their libraries have progressed — “better” both in terms of how quickly experienced users can cover ground, and in terms of the learning curve for new users. This improvement hasn’t perhaps been as fast as we’d like, or might have hoped — but it’s been steady, and continuing.

If usability continues to improve, then it’s reasonable to imagine a future proof assistant with a role comparable to LaTeX today. Typesetting our work in LaTeX takes a bit of work, and has a bit of a learning curve — but most mathematicians accept the time and effort as worthwhile for the payoff of good typesetting. A very few mathematicians outsource this to professional typesetters; a larger minority leave it to their co-authors when possible. But it’s part of the expectations of the field, and there’s no problem of “tail-chasing” — no question of “can we typeset mathematics as fast as we can produce it?”, or of requiring major grant support to get work typeset. This is one possible future that I and some others in formalisation envisage, and that it seems reasonable to hope for in a timescale of a few decades.

One likely difference from LaTeX is that this could well arrive in some subfields of maths sooner than others, since (so far) some topics seem much more amenable to formalisation.

  • I think Autoformalization is a keyword here https://link.springer.com/chapter/10.1007/978-3-030-53518-6_1 – 2080 Dec 13 '20 at 13:04

I've thought more about what would be required for a complete formalisation of mathematics. I believe this could only happen if mathematicians formalise their results as they go, i.e. all of them are actively using proof assistants as part of the research process. To this end, here are some properties that I think would be desirable in a proof system:

  1. Syntax that is "proof-y" rather than "program-y". I played around with Lean last night, on Kevin Buzzard's great natural number game, but it still felt like I was programming, rather than writing a proof. Yes, I know they're equivalent by the Curry-Howard isomorphism, but mathematicians don't want to learn a new programming language, they want to write proofs. Why couldn't we have syntax that looks like an actual, natural-language proof, e.g.

    Theorem:
    Let U, V be vector spaces.
    Let f: U -> V be a linear map.
    Then dim(U) = rank(f) + nullity(f).
    

    Proof: Fix v in V. ....

    Here, Let could be a command indicating that one is defining a new object, be / be a / be an indicates the type of said object, Then indicates you are done defining things, . indicates the end of a command, etc.

  2. A typing system that reflects mathematical intuition. I know very little type theory, but it seems like dependent types are necessary to the way mathematics is currently practiced. Other properties such as univalence, polymorphism, inheritance, abstract data types also seem necessary to reflect "real" mathematical practice. I know this might add complexity to a proof assistant; however, the proof assistant should be designed to meet mathematicians, rather than us having to meet the proof system.

  3. I work in logic, and there are many subfields (set theory, reverse mathematics, constructive math) where axioms can vary. So to formalise these areas, we would need a way to change the axioms as necessary. E.g. $\mathsf{ZFC}$ would be taken as the default set of axioms, but you could declare

    Axioms = ZF+AD.
    

    at the start to change the axioms for your proofs. Further, you would want to be able to treat axioms as objects in their own right.

  4. Many steps of reasoning will often be skipped in a mathematician's proof (as it is painstaking to write it all out in full). Therefore, I would want an "autofill" system, where one puts in the main steps of the proof (i.e. those that you would write in the natural language proof):

    (n + 1)^3
    ...
    n^3 + 3n^2 + 3n + 1
    

    and the proof assistant would search to find correct reasoning to fill the space between.

  5. One of the current issues is that not enough prerequisite material has been formalised for most research mathematicians to start using proof assistants for their work. I wonder if it's possible to have a "modular" approach, where mathematicians can take the theorems they want to use as "axioms", and proceed with formalising their work. Then, someone else can come along later and prove these "axioms" from more foundational principles. This would mean that mathematicians could start using the proof system right now, rather than in 20 years when all the necessary background has been formalised (and their research has moved on). (I suppose the relevant definitions still need to be formalised beforehand).
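As a proof of concept for this "modular" idea, here is a minimal sketch in Lean 4 syntax. (The statement of Fermat's last theorem below is written ad hoc for illustration; it is not taken from mathlib, and `flt` is a name I made up.)

```lean
-- Hypothetical sketch: postulate a deep theorem as an axiom so that work
-- depending on it can be formalised now. Someone could later replace the
-- `axiom` with a genuine proof without touching the downstream material.
axiom flt : ∀ n : ℕ, 2 < n → ∀ a b c : ℕ,
    0 < a → 0 < b → 0 < c → a ^ n + b ^ n ≠ c ^ n

-- A downstream result can invoke `flt` as if it were already proved:
example (a b c : ℕ) (ha : 0 < a) (hb : 0 < b) (hc : 0 < c) :
    a ^ 3 + b ^ 3 ≠ c ^ 3 :=
  flt 3 (by decide) a b c ha hb hc
```

The obvious danger, of course, is that an `axiom` stated slightly wrongly (or even inconsistently) silently poisons everything built on top of it, which is why the definitions, at least, need to be formalised with care.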

I don't know much about the different proof assistants, and whether there are any that have the above properties.
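For what it's worth, something close to the "autofill" in point 4 already exists in the form of tactics. A minimal sketch in Lean 4 with mathlib (`ring` is a real tactic that closes identities in commutative (semi)rings, such as the expansion above):

```lean
import Mathlib.Tactic.Ring

-- The intermediate algebraic steps between the two sides are found
-- automatically; none of them need to be written out by hand.
example (n : ℕ) : (n + 1) ^ 3 = n ^ 3 + 3 * n ^ 2 + 3 * n + 1 := by
  ring
```

Whether such automation can be pushed from routine algebra up to the "main steps" of a research-level proof is, as far as I can tell, still an open question.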

  • As an absolute non-expert, let me make some comments. 1. Natural language is ambiguous. Any attempt to make it unambiguous is bound to make the language more "mechanical". Sure, some basic things like "let" can be simplified, but quickly it will become less feasible. 4. You are asking for (what in Lean at least is called) tactics. Many such exist; for instance, there is one which will automatically prove any identity in the language of commutative rings. 5. This is already (to an extent) happening. However, the entry barrier is high, so almost everyone is first working out the basics. – Wojowu Dec 09 '20 at 22:51
  • Overall though, I do agree with the sentiment that there is some work left to be done to make proof assistants more mathematician-friendly. Kevin Buzzard is very vocal in expressing his frustration when first learning Lean and how the documentation was written "by computer scientists for computer scientists". One has to remember, though, that someone has to build those programs, and anyone with enough programming proficiency is more likely to come from a CS background than a math one. – Wojowu Dec 09 '20 at 22:55
  • In terms of proofs that look more like natural language, the proof assistants based on set theory tend to be more readable; see the Mizar examples in Freek Wiedijk's Notices article. Also, set-theoretic proof assistants arguably are closer to mathematicians' intuition. I discuss some of the objections to set-theoretic proof assistants in another MO answer. – Timothy Chow Dec 09 '20 at 23:35
  • Historical minutiae: When the BASIC programming language first appeared in 1964, defining a new object (variable) did in fact require the keyword "LET", for precisely the math-idiom reasons seen here. But over time that was considered burdensome, and the use of LET was made optional and then dropped (instead, the first time a new variable is seen it's assumed to be declared). https://www.dartmouth.edu/basicfifty/commands.html – Daniel R. Collins Dec 10 '20 at 03:33
  • I don't think being based on set theory has anything to do with what input syntax provers offer. Isabelle also has declarative proof syntax, which is probably still quite a bit away from what you are looking for, but certainly closer than most systems (see the examples here). – Manuel Eberl Dec 10 '20 at 07:48
  • Kevin Buzzard gives a similar list (and his opinion is probably more informed than mine) from 11 mins onwards in https://www.youtube.com/watch?v=q5-pykbfViA – Jordan Mitchell Barrett Dec 10 '20 at 08:48
  • Re 5., proof assistants will happily accept axioms; formalizing definitions might still be nontrivial, but e.g. Buzzard argues it can be “fast enough” (he formalized perfectoid spaces after all). You “just” need the axioms you state to be “true enough” as written and to disambiguate the on-paper notation correctly. – Blaisorblade Feb 08 '21 at 11:15
  • About #1 and #4, Naproche (natural language proof checker, page naproche.github.io) is a proof checker with a syntax that resembles natural proof, which to my knowledge uses an automated theorem prover to fill in the usual kind of smaller steps left implicit in the proof. – C7X Feb 16 '24 at 09:02