89

Ultrafinitism is (I believe) a philosophy of mathematics that is not only constructive, but does not admit the existence of arbitrarily large natural numbers. According to Wikipedia, it has been primarily studied by Alexander Esenin-Volpin. Doron Zeilberger has often expressed similar views on his opinions page.

Wikipedia also says that Troelstra said in 1988 that there were no satisfactory foundations for ultrafinitism. Is this still true? Even if so, are there any aspects of ultrafinitism that you can get your hands on coming from a purely classical perspective?

Edit: Neel Krishnaswami in his answer gave a link to a paper by Vladimir Sazonov (On Feasible Numbers) that seems to go a ways towards giving a formal foundation to ultrafinitism.

First, Sazonov references a result of Parikh's: extend Peano Arithmetic with a set variable $F$ and the axioms $0\in F$, $1\in F$, "$F$ is closed under $+$ and $\times$", and $N\notin F$, where $N$ is an exponential tower of $2^{1000}$ twos, while not extending the induction schema to formulas mentioning $F$. Parikh showed that this theory, though inconsistent in principle, is "almost consistent": any derivation of a contradiction from it must be of utterly infeasible length.
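Written out (my paraphrase of the axioms as stated above), the extension is

$$\mathrm{PA} \;+\; \bigl\{\, 0\in F,\;\; 1\in F,\;\; \forall x\,\forall y\,(x\in F \wedge y\in F \rightarrow x+y\in F \wedge x\cdot y\in F),\;\; N\notin F \,\bigr\}.$$

Intuitively, to prove $N\in F$ one would have to build $N$ up from $0$ and $1$ by explicit applications of $+$ and $\times$, and no feasibly long chain of such applications reaches an exponential tower of $2^{1000}$ twos; this is why only infeasibly long proofs can expose the contradiction.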

Then, he gives his own theory, wherein there is no cut rule and proofs that are too long are disallowed, and shows that the axiom $\forall x\ \log \log x < 10$ is consistent.
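To unpack that axiom (my arithmetic, reading $\log$ as the base-2 logarithm):

$$\forall x\;\log_2\log_2 x < 10 \quad\Longleftrightarrow\quad \forall x\;\; x < 2^{2^{10}} = 2^{1024},$$

so the theory asserts that every number is below $2^{1024}$; consistency here means that no allowed (cut-free, feasibly long) proof derives a contradiction from it.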

  • 12
    Edward Nelson is another well-known exponent of ultrafinitism. He has many essays about it on his webpage here: http://www.math.princeton.edu/~nelson/papers.html – Andy Putman Oct 30 '10 at 02:45
  • Thanks, I'll take a look! I'd looked a bit at his version of nonstandard analysis a while ago, but I didn't read any of his ultrafinitist writings. – Michael O'Connor Oct 30 '10 at 03:26
  • 2
    In the FOM (Foundations of Mathematics) email list there have been some interesting and productive (and some not so interesting) discussions on ultrafinitism that you may want to check out. The archives are public, and can be read at http://www.cs.nyu.edu/pipermail/fom/ – Andrés E. Caicedo Oct 30 '10 at 05:45
  • Nelson doesn't look too sane to me ;) – Aleksei Averchenko Oct 30 '10 at 12:37
  • 26
    Winking emoticons aside, it's probably good to have people like Nelson and Zeilberger around, to test received opinions and keep people on their toes. – Todd Trimble Oct 30 '10 at 12:53
  • 44
    I hope everyone has heard the story told by Harvey Friedman: "I have seen some ultrafinitists go so far as to challenge the existence of 2^100 as a natural number, in the sense of there being a series of 'points' of that length. There is the obvious 'draw the line' objection, asking where in 2^1, 2^2, 2^3, … , 2^100 do we stop having 'Platonistic reality'? Here this … is totally innocent, in that it can easily be replaced by 100 items (names) separated by commas. I raised just this objection with the (extreme) ultrafinitist Yesenin-Volpin during a lecture of his." (cont...) – Todd Trimble Oct 30 '10 at 22:37
  • 74
    "He asked me to be more specific. I then proceeded to start with 2^1 and asked him whether this is 'real' or something to that effect. He virtually immediately said yes. Then I asked about 2^2, and he again said yes, but with a perceptible delay. Then 2^3, and yes, but with more delay. This continued for a couple of more times, till it was obvious how he was handling this objection. Sure, he was prepared to always answer yes, but he was going to take 2^100 times as long to answer yes to 2^100 then he would to answering 2^1. There is no way that I could get very far with this." – Todd Trimble Oct 30 '10 at 22:38
  • 4
    I thought astronomers have seen quasars (whatever those are) whose distance from here is about 2^100 millimeters. – Zsbán Ambrus Nov 01 '10 at 10:41
  • Closer to home, the sun is about 2^100 kg. I don't know about Yesenin-Volpin, but the stuff of Vladimir Sazonov's that I've read uses 2^1000, not 2^100. This should be in excess of the number of planck-volume-by-planck-time units in all of observable space-time. – Daniel Mehkeri Nov 01 '10 at 17:06
  • 7
    @Zsban,Daniel: Those are real numbers, not integers. Perhaps we know their values to less than 100 bits of accuracy... – Gerald Edgar May 23 '11 at 21:05
  • 4
    @Gerald Edgar: though we don't know their exact values, the thing is even if you harnessed every planck volume in the observable universe to perform one operation per planck time, you still could not effect a computation of length 2^1000. Whereas 2^64 is already within the realm of current technology, so 2^100 is within the realm of possibility. – Daniel Mehkeri May 24 '11 at 02:05
  • 1
    @ToddTrimble: Regarding your comment: Could it rightly be said that for (some) ultrafinitists, $n \lt 2^{100}$ are 'real' numbers, and $m \ge 2^{100}$ are 'ideal' numbers (as in Hilbert's real/ideal distinction)? This would seemingly give some reasonable sense to Yesenin-Volpin's point of view.... – Thomas Benjamin Dec 18 '16 at 12:45
  • @ThomasBenjamin I am not at all up on ultrafinitist philosophy. Some self-proclaimed ultrafinitists like Zeilberger do use this type of language, though, referring to entities like $\sqrt{2}$ as ideal objects (see http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimPDF/real.pdf, page 5). But to what extent this line of thinking is philosophically coherent, I won't venture to say. I also have no idea where Zeilberger draws the line between real natural numbers and ideal ones. – Todd Trimble Dec 18 '16 at 15:58
  • 2
    @ToddTrimble: Parikh's paper, "Existence and Feasibility in Arithmetic" (JSL 36(3), 494-508), and Sazonov's paper "On Feasible Numbers", might provide a basis by which to understand such claims, and the Ultrafinitist claims of 'inconsistency' as well (this includes Nelson's claim of inconsistency also--maybe....). – Thomas Benjamin Dec 19 '16 at 08:39
  • @PyRulez The old link works fine. Your new link has a paywall. – Najib Idrissi Feb 25 '18 at 17:26
  • @NajibIdrissi are you sure? It opens up as empty on my computer. – Christopher King Feb 25 '18 at 17:26
  • @PyRulez I am sure, yes, I was able to download the file... Do you have a PS viewer installed? – Najib Idrissi Feb 25 '18 at 17:34
  • @Thomas Benjamin: Nelson's "claim" of inconsistency is perhaps better described as a feeling that an inconsistency is possible. And it seems to have dwindled over the years. – Tommy R. Jensen Sep 17 '19 at 13:04
  • 2
    @MichalGajda recently presented a paper at CCC in which each formula in a proof is annotated with a polynomial. – Paul Taylor Oct 07 '21 at 14:10

10 Answers

66

Wikipedia also says that Troelstra said in 1988 that there were no satisfactory foundations for ultrafinitism. Is this still true? Even if so, are there any aspects of ultrafinitism that you can get your hands on coming from a purely classical perspective?

There are no foundations for ultrafinitism that are as satisfactory as (say) intuitionistic logic is for constructivism. The reason is that the question of which logic is appropriate for ultrafinitism is still open, for not one but several different reasons.

First, from a traditional perspective -- whether classical or intuitionistic -- classical logic is the appropriate logic for finite collections (though not for Kuratowski-finite ones, which constructively need not have decidable equality). The idea is that a finite collection is surveyable: we can enumerate and look at each element of any finite collection in finite time. (For example, the elementary topos of finite sets is Boolean.) However, this is not faithful to the ultra-intuitionist idea that a sufficiently large set is impractical to survey.

So it shouldn't be surprising that more-or-less ultrafinitist logics arise from complexity theory, which identifies "practical" with "polynomial time". I know two strands of work on this. The first is Buss's work on $S^1_2$, a weak fragment of Peano Arithmetic whose induction schema is both restricted to formulas of bounded complexity and cast in the following "polynomial" form:

$$A(0) \land (\forall x.\;A(x/2) \Rightarrow A(x)) \Rightarrow \forall x.\;A(x)$$

Then any proof of a forall-exists statement has to be realized by a polynomial-time computable function. There is also a line of work, which I am not very familiar with, on bounded set theories based on Buss's logic.
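To get a feel for why this "halving" induction goes with feasibility, compare it to recursion that descends from $x$ to $\lfloor x/2 \rfloor$ (an illustrative sketch of mine, not part of Buss's formalism):

```python
# A minimal sketch: recursion in the shape of Buss's polynomial
# induction A(floor(x/2)) => A(x). Each call halves x, so the
# recursion depth is about log2(x) -- polynomial in the length
# of the binary representation of x, the resource that S^1_2 tracks.

def popcount(x: int) -> int:
    """Count the 1-bits of x by recursion on floor(x/2)."""
    if x == 0:
        return 0                          # base case: A(0)
    return popcount(x // 2) + (x % 2)     # step: A(x//2) => A(x)

print(popcount(2**100 + 5))  # prints 3, after only ~100 recursive calls
```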

The second is a descendant of Bellantoni and Cook's work on programming languages for polynomial time, and of Girard's work on linear logic. The Curry-Howard correspondence maps functional languages to logical systems, with types going to propositions, terms to proofs, and evaluation to proof normalization. So the complexity of a functional program corresponds, in some sense, to the practicality of cut-elimination for a logic.
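For flavour, here is a toy rendering of the Bellantoni-Cook discipline (my sketch in Python; the real system is a function algebra, not a programming language): every argument is classified as "normal" or "safe", recursion may only be driven by a normal argument, and recursive results may only be used in safe positions, where they can never feed a further recursion. That discipline pins the definable functions to exactly polynomial time:

```python
# Toy sketch of Bellantoni-Cook "safe recursion on notation".
# Convention (in comments only): arguments written before ";" are
# normal (may drive recursion), those after are safe (recursion
# results may only appear here).

def s0(y: int) -> int:          # safe successor: append bit 0
    return 2 * y

def s1(y: int) -> int:          # safe successor: append bit 1
    return 2 * y + 1

def concat(x: int, y: int) -> int:
    # concat(x; y): append the bits of x to y. Recursion runs on the
    # normal argument x; the recursive call's result is used only in
    # the safe slot of s0/s1, never to drive another recursion, so the
    # output length is |x| + |y| -- polynomial growth.
    if x == 0:
        return y
    return (s1 if x & 1 else s0)(concat(x >> 1, y))

# Exponentiation would need a recursive result back in a normal
# position (something exponentially long to recurse on) -- exactly
# what the safe/normal discipline forbids.
```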

IIRC, Girard subsequently showed that cut-elimination for a suitable version of affine logic takes polynomial time. Similarly, you can build set theories on top of affine logic. For example, Kazushige Terui has since described a set theory, Light Affine Set Theory, whose ambient logic is light affine logic, and in which the provably total functions are exactly the polytime functions. (Note that this means that for Peano numerals, multiplication is total but exponentiation is not --- so Peano and binary numerals are not isomorphic!)

These proof-theoretic questions arise because part of what makes the ultra-intuitionist conception of the numerals coherent is precisely the denial of large proofs. If you deny that large integers exist, then a proof that they exist which is itself larger than the biggest number you accept doesn't count! I enjoyed Vladimir Sazonov's paper "On Feasible Numbers", which explicitly studies this connection.

I should add that I am not a specialist in this area, and what I've written is just the fruits of my interest in the subject -- I have almost certainly overlooked important work, for which I apologize.

  • 4
    Thanks! That Sazonov paper (which is available outside the Springer paywall here: http://www.csc.liv.ac.uk/~sazonov/papers/lcc.ps) seems quite interesting. It makes me think the correct answer to my original question might be "yes"! – Michael O'Connor Oct 30 '10 at 17:53
  • 12
    Taking cut away is really brutal, though -- it's mathematics with no lemmas allowed! This is where linear logic shines: it lets you bound the size of the expansion from the abbreviation power of lemmas. – Neel Krishnaswami Oct 30 '10 at 18:21
  • I don't understand the "weaker induction principle" you mention. It seems that you can use it to prove that for all x: x=0. – Sune Jakobsen Oct 31 '10 at 08:51
  • 2
    @Sune: This is a flooring division, so $x/2$ gives the largest integer less than or equal to $\frac{x}{2}$. For example, $3/2 = 1$. In particular, this means that in general we only have $2 \times x/2 \leq x$. – Neel Krishnaswami Oct 31 '10 at 10:12
  • 12
    A clarification: Buss’s polynomial induction schema is equivalent to usual induction. It only gets weak when you severely restrict the class of formulas allowed in the schema, and this works for usual induction too. When the schemata are restricted to a class of formulas not closed under bounded quantification, the two schemata are no longer equivalent, hence both are used to axiomatize different theories of bounded arithmetic (such as $S^1_2$). However, what makes these theories weak is the restriction on formula complexity, not the form of the induction schema. – Emil Jeřábek Jun 07 '11 at 14:51
  • @MichaelO'Connor That link is dead. Is there another non-paywall link? – Christopher King Feb 25 '18 at 16:26
  • @PyRulez The link is not dead for me, but rather downloads a postscript (.ps) file. Perhaps you need to check your browser settings (or maybe others can verify that it doesn't work for everyone). – Robert Furber Feb 25 '18 at 18:42
  • 4
    Reliance on complexity appears strange. The usual notion of polynomial computability only makes sense as an observation of asymptotic behaviors. In other words, how fast computation happens within a range of input sizes taken from just an initial segment of natural numbers has no influence whatsoever on determining whether the computation is polynomial or not, or even bounded by a constant. This seems to clash with the basic philosophy of finitism. – Tommy R. Jensen Sep 17 '19 at 13:12
  • 2
    @TommyR.Jensen I'm no specialist in this area, but by my limited understanding it's possible to characterize polynomial-time algorithms in a way that should be "acceptable to ultrafinitists", without making explicit reference to asymptotic complexity. From a "traditional" perspective, the characterization can then be proved equivalent to the usual one. See e.g. Bellantoni and Cook, "New recursion-theoretic characterization of the polytime functions", 1992; or Hofmann, "Safe recursion with higher types and BCK-algebra", 2000, which generalizes it to higher types. – Robin Saunders Apr 02 '21 at 12:42
  • Anyway, I believe some (though perhaps not all) ultrafinitists would be happy using induction for a very limited class of statements, as Emil describes above. This could be enough to express that the asymptotic running time of an algorithm is polynomial - or at least, bounded by a given specific polynomial. – Robin Saunders Apr 02 '21 at 12:49
  • @RobinSaunders If ultrafinitists accept induction for certain predicates, though more limited than the usual infinite class of all well-formed predicates, then that would presumably pertain to only a finite collection of those predicates. Is there some way that we can hope to get to know about such a class, or will it be kept hidden within their circles? – Tommy R. Jensen Apr 03 '21 at 13:18
  • Not all ultrafinitists claim that there is a largest natural number, so in general there's no reason for the collection of formulae on which a particular ultrafinitist accepts induction to be finite. – Robin Saunders Apr 04 '21 at 14:39
23

I've been interested in this question for some time. I haven't put any serious thought into it, so all I can offer is a further question rather than an answer. (I'm interested in the answers that have already been given though.) My question is this. Is there a system of logic that will allow us to prove only statements that have physical meaning? I don't have a formal definition of "physically meaningful" so instead let me try to illustrate what I mean by an example or two.

Consider first the statement that the square root of 2 is irrational. What would be its physical meaning? A naive suggestion would be that if you drew an enormous grid of squares of side length one centimetre and then measured the distance between (0,0) and (n,n) for some n, then the result would never be an integer number of centimetres. But this isn't physically meaningful according to my nonexistent definition because you can't measure to infinite accuracy. However, the more finitistic statement that the square root of 2 can't be well approximated by rationals has at least some meaning: it tells us that if n isn't too large then there will be an appreciable difference between the distance from (0,0) to (n,n) and the nearest integer.
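To make "appreciable" quantitative (a standard estimate, added here for concreteness): if $m$ is the nearest integer to $n\sqrt{2}$, then $2n^2 - m^2$ is a nonzero integer, so

$$\bigl|n\sqrt{2} - m\bigr| \;=\; \frac{|2n^2 - m^2|}{n\sqrt{2} + m} \;\ge\; \frac{1}{n\sqrt{2} + m} \;>\; \frac{1}{4n}.$$

On a grid of centimetre squares, even for $n$ around a million the diagonal misses the nearest integer mark by more than two nanometres -- tiny, but a definite, in-principle-detectable gap rather than an appeal to infinite precision.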

As a second example, take the statement that the sum of the first n positive integers is n(n+1)/2. If n is too huge, then there is no hope of arranging a huge triangular array and counting how many points are in it. So one can't check this result experimentally once n is above a certain threshold (though there might be ingenious ways of checking it that are better than the obvious method). This shows that we can't apply unconstrained induction, but there could be a principle that said something like, "If you keep on going for as long as is practical, then the result will always hold."
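(The "obvious method", just to fix ideas, is the following kind of check -- a sketch I am adding, whose cost of roughly $n$ additions is exactly what makes it infeasible for huge $n$:)

```python
# Verify 1 + 2 + ... + n == n*(n+1)//2 by literally counting: the
# "experiment". It takes about n additions, so it is only meaningful
# for feasible n; for n ~ 2^100 it could never be carried out.

def check_triangular(n: int) -> bool:
    total = 0
    for k in range(1, n + 1):    # the brute-force experiment
        total += k
    return total == n * (n + 1) // 2

print(all(check_triangular(n) for n in range(200)))  # True
```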

One attitude one might take is that this would be to normal classical mathematics as the use of epsilons and deltas is to the mathematics of infinities and infinitesimals. One could try to argue that statements that appear to be about arbitrarily large integers or arbitrarily small real numbers (or indeed any real numbers to an arbitrary accuracy) are really idealizations that are a convenient way of talking about very large integers, very small real numbers and very accurate measurements.

If I try to develop this kind of idea I rapidly run into difficulties. For example, what is the status of the argument that proves that the sum of the first n integers is what it is because you can pair them off in a nice way? In general, if we have a classical proof that something will be the case for every n, what do we gain from saying (in some other system) that the conclusion of the proof holds only for every "feasible" n? Why not just say that the classical result is valid, and that this implies that all its "feasible manifestations" are valid?

Rather than continue with these amateur thoughts, I'd just like to ask whether similar ideas are out there in a better form. Incidentally, I'm not too fond of Zeilberger's proposal because he has a very arbitrary cutoff for the highest integer -- I'd prefer something that gets fuzzier as you get larger.

Edit: on looking at the Sazonov paper, I see that many of these thoughts are in the introduction, so that is probably a pretty good answer to my question. I'll see whether I find what he does satisfactory.

gowers
  • 28,729
  • 1
    In this system of logic, should the Poincare recurrence theorem be provable? In some way it contradicts the second law of thermodynamics... – Łukasz Grabowski Nov 04 '10 at 18:16
  • 2
    If I am not making a mistake, the mathematical philosophy you are referring to is called actualism. I also think finite model theory is relevant here; they use relational languages (no function symbols like $+$ and $\cdot$) to avoid forcing the models to be infinite. – Kaveh Nov 09 '10 at 23:51
  • 3
    Tim: "can't be well approximated by rationals". – Tommy R. Jensen Sep 17 '19 at 13:15
  • Cynically maybe; if a statement is true for all natural numbers in the classical sense, and they insist that it is only true up to some $N,$ then it seems a natural reaction to ask for a counterexample $>N.$ – Tommy R. Jensen Sep 17 '19 at 13:21
  • 1
    The second law of thermodynamics is only a "law" statistically. There is no physical mechanism preventing, for instance, a dumbbell-shaped container with two gases mixed together from spontaneously segregating itself; all the second law says is that, at the scales we can see, it's so unlikely that it's pretty much impossible. – Zemyla Sep 27 '19 at 00:56
  • 3
    @Zemyla : This statistical sense in which the second law of thermodynamics is correct is much like the sense in which the ultrafinitists are correct that very large numbers don't exist. You say that such a container of gases could unmix, but if you actually make the container and observe it, it never will. I don't have an ironclad proof of that, just as I don't have an ironclad proof that I'll never count to a googolplex, but it's true all the same. – Toby Bartels Jun 23 '21 at 18:31
13

Here is an ultrafinitist manifesto I co-wrote a few years ago:

It is absolutely not a finished paper (there are a few inaccuracies, and many parts are sloppy), but it contains a brief history of ultrafinitistic ideas from the early Greeks all the way to the present, a number of references, as well as a sketch of a programme towards a model theory and a proof theory of ultrafinitistic mathematics.

You may also want to google the FOM list on "ultrafinitism", there are a few posts by Podnieks, Sazonov, myself, and a few others pro and contra ultrafinitism.

David Roberts
  • 33,851
12

My personal opinion, as a non-specialist in this area, is that ultrafinitism cannot be formalized, for the same reason that the straw which breaks the camel's back cannot be formalized. We know that a healthy camel can carry one kilogram, but not 10 tons. So there must be some point at which the camel cannot carry another straw, but it is impossible to define. In practice, various things would start to go wrong with the camel as the limit is reached.

In the same way, we cannot list all of the elements of the 6th stage of the von Neumann hierarchy, which contains $2^{65536}$ elements: the sets that can be constructed from the empty set alone in ZF set theory up to a nesting depth of 6. However, it is not possible to say exactly what the bounds are for the representation of sets and numbers. There is some $n$ for which a truly random sequence of $2^n$ decimal digits cannot be written down. There are only finitely many atoms in the universe, one assumes, which means that there must be a bound on how large an integer can be written down. But things would just start to go badly wrong as we approach the limit. For example, we might run out of trees to make paper with, or we might run out of silicon to make memory chips with, and so forth.
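To spell out the count mentioned above (my arithmetic): each stage is the power set of the one before, so $|V_{k+1}| = 2^{|V_k|}$, giving

$$|V_0| = 0,\quad |V_1| = 1,\quad |V_2| = 2,\quad |V_3| = 4,\quad |V_4| = 16,\quad |V_5| = 65536,\quad |V_6| = 2^{65536}.$$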

My own personal solution to this problem is to divide all numbers into dark numbers, which include those real numbers that are truly random and therefore impossible to specify by a finite formula as one can for $\pi$; grey numbers, which are clearly impossible to represent with all of the atoms in the universe; and the bright numbers which we use every day, like 3. The problem with this classification is that there is no clear-cut boundary between the bright numbers and the grey numbers. And in my opinion, there never will be, just as we will never be able to formalize how many straws break the camel's back. The best approach, I think, is to acknowledge philosophically that there are limits, but not to worry about them in practice. Because in practice, we will never go over the limit.

10

One could take ETCS -- which is a finite, first-order axiomatisation of the category of sets -- and remove the axiom that guarantees the existence of a natural numbers object. In this setup one can prove the existence of sets of finite cardinality, but not the existence of a set with cardinality $\geq \aleph_0$. Moreover, one could weaken this to a finitist version of Palmgren's constructive, predicative version of ETCS, which would be a well-pointed $\Pi$-pretopos with enough projectives.

This latter version minus function sets (the '$\Pi$' in $\Pi$-pretopos) would perhaps be closer to Nelson's idea, because at one point he expresses doubts about the finiteness of $n^m := \{1,\ldots,n\}^{\{1,\ldots,m\}}$ for large $n$ and $m$. EDIT: I should say that in a formal setting this would translate to the unprovability of the statement "$n^m$ is finite", which would be the case in a model of a "finite set theory" without function sets.

Or one can just work with Nelson's arithmetic, which is the most ultrafinitist thing I know. For example, exponentiation is not a total function in his theory.

David Roberts
  • 33,851
9

I've always thought that assuming a formalist position (i.e., that mathematics is merely the manipulation of symbols) easily allows for an ultrafinitist position. The formalist may easily grant that $b=10^{10^{10^{10}}}$ is a formal number, in the sense that it is permissible in the grammar,

(e.g., $\log_{10}(10^{10^{10^{10}}})>10^{9^8}$ is TRUE)

without it being an ontological number (case in point: there is no string of characters which is $10^{10^{10}}$ characters long, the length of the would-be decimal representation of $b$).
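For the record, the displayed inequality checks out (my verification): $\log_{10}\bigl(10^{10^{10^{10}}}\bigr) = 10^{10^{10}}$, and since $10^{10} = 10{,}000{,}000{,}000 > 9^8 = 43{,}046{,}721$, we get $10^{10^{10}} > 10^{9^8}$ -- a comparison carried out entirely on strings of a few dozen characters, without ever "touching" $b$ itself.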

Similarly, axioms which fool us into thinking they are about infinity are happily read as finite strings, from which point they may be perfectly acceptable.

From this point of view, the entities which might otherwise be numbers, but are rejected here, are not those expressible in a few characters or even a few pages of characters, but those that no human will ever come close to expressing, calculating with, and so on.

I am not advocating formalism here, but it seems to make ultrafinitism philosophically defensible. As I have put it, it also makes ultrafinitism inconsequential, except as a philosophical point.

  • 1
    To manipulate symbols you need to accept something like finitism. Remember that Hilbert himself was fine with finitism; the goal of his formalism was to reduce the higher stuff to finitism. – Kaveh Sep 28 '11 at 11:20
  • 5
    An ultrafinitist would argue that any mathematics that is more consequential than a mere philosophical point is, pretty much by definition, ultrafinitist, and it is statements about very large numbers that are "inconsequential, except as a philosophical point." – pron Jan 08 '20 at 00:32
6

I would suggest the following axiomatization to my ultrafinitist friends. Let Nx mean "x is a natural number", Sxy mean "y succeeds x", and 0 denote zero. The Peano Axioms are:

1/ N0

2/ S is into N

3/ S is total on N (every number has a successor)

4/ S is a function

5/ S is one-to-one

6/ 0 is not in the image of S

7/ Induction

Remove Axiom 3. Then the models of these axioms are: the standard model (if it exists) and the initial segments. {0} for instance is a model.
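In first-order notation (my rendering of the list above, with $A$ ranging over formulas in the induction schema):

$$\begin{aligned}
&1.\ N0 \qquad 2.\ \forall x\,\forall y\,(Sxy \rightarrow Ny) \qquad 3.\ \forall x\,(Nx \rightarrow \exists y\,Sxy)\\
&4.\ \forall x\,\forall y\,\forall z\,(Sxy \wedge Sxz \rightarrow y=z) \qquad 5.\ \forall x\,\forall y\,\forall z\,(Sxz \wedge Syz \rightarrow x=y)\\
&6.\ \forall x\,\neg Sx0 \qquad 7.\ \bigl[A(0) \wedge \forall x\,\forall y\,(A(x) \wedge Sxy \rightarrow A(y))\bigr] \rightarrow \forall x\,(Nx \rightarrow A(x))
\end{aligned}$$

With Axiom 3 dropped, nothing forces N to have an infinite extension, which is why the initial segments become models.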

One can work either in second-order logic and define sequences as second-order entities (https://www.researchgate.net/publication/351561089_Arithmetic_Without_the_Successor_Axiom) or work in first-order logic and add sequences directly as first-order entities, with some additional axioms (https://www.researchgate.net/publication/354605078_A_Natural_First-Order_System_of_Arithmetic_Which_Proves_Its_Own_Consistency).

With sequences one can make the usual recursive definitions of addition, multiplication, and exponentiation, and then of towers of powers. The system will not, of course, be able to prove that any of them is total.

So the ultrafinitist who has any particular idea which numbers are permissible and which are not can simply add in the axioms he wants, such as

E/ "the product of 100 and 100 exist" and

F/ "a tower of 10 powers of 2 does not exist".

IMHO, these assumptions are not of any mathematical interest, since the system without Axiom 3 is already capable of proving many mathematical theorems (Quadratic Reciprocity, for instance), and adding axioms such as E or F only adds trivial capabilities to prove additional theorems. So it is better, mathematically at least (and IMHO philosophically), to be agnostic about the successor axiom, rather than an atheist or a theist.

abo
  • 1,844
  • Maybe statement E adds only trivial capabilities, but you would want to have some statements like it, such as the statement that the product of 2 and 2 exists (which is pretty relevant to, for example, quadratic reciprocity). – Toby Bartels Jun 23 '21 at 18:58
  • 2
    @TobyBartels. The system which I have described (without assumption E) is "downward" in the sense that if you know or you can prove that $n$ exists, every $m < n$ can be proven to exist. So Quadratic Reciprocity goes through without the assumption of E or "the product of 2 and 2 exists", because the theorem begins by "Let $p$ and $q$ be distinct odd primes." So every number less than $p$ or $q$ can be shown to exist, which suffices for the proof to go through. – abo Jun 24 '21 at 20:11
  • 1
    I meant that some of the statements that we think of as part of QR refer to $4$, as in ‘if $q \equiv 1 \pmod 4$, then …’. But I see your point; we don't have to accept that $4$ exists in order to state that as a hypothesis. – Toby Bartels Jun 25 '21 at 21:46
  • @TobyBartels. I'm not clear on your last sentence. If p and q are distinct odd primes, then 5 exists, so 4 exists. That is, the existence of 4 can be proven given the assumptions of QR. – abo Jun 26 '21 at 05:18
  • 1
    Yes, if you phrase it that way. The version labelled ‘Gauss's Statement’ at https://en.wikipedia.org/wiki/Quadratic_reciprocity#Statement_of_the_theorem doesn't require assuming that the primes are distinct. But I see that you did specify distinct primes in your phrasing; my mistake. – Toby Bartels Jun 28 '21 at 04:09
  • I think the end of this answer is a bit blithe (to put it mildly) about just how much is lost by dropping the totality of successor. While $\mathsf{E}$ does suffice to prove many $\Pi^0_1$ sentences, things get much harder as we move up the complexity hierarchy. E.g. in order to formulate a version of finite Ramsey's theorem which is provable in $\mathsf{E}$ we need to know ahead of time a bound for the classical finite Ramsey's theorem ("For all $n$, if $F(n)$ exists then there is some $k$ such that every $c:[k]^2\rightarrow 2$ has a homogeneous set of size $n$"). And things only get worse. – Noah Schweber Sep 15 '21 at 15:49
  • And there are $\Pi^0_1$ sentences provable in $\mathsf{PA}$ but not in $\mathsf{E}$ (semantically: not all models of $\mathsf{E}$ can be extended to models of $\mathsf{PA}$) - one such sentence being $\mathsf{Con(I\Sigma_1)}$, for example, which per the ("internalized") MRDP theorem can be recast as the unsolvability of a concrete (if horrible) Diophantine equation. – Noah Schweber Sep 15 '21 at 15:51
  • @NoahSchweber. Are you misreading what E is? E (at least here) is the assertion that 100 * 100 exists. – abo Sep 15 '21 at 17:02
  • @abo Sorry, I misread - I thought $\mathsf{E}$ was the name of the system you describe (consisting of axioms 1,2,4,5,6,7). My point is that contra what the end of your answer seems to suggest, that's actually an extremely weak system. – Noah Schweber Sep 15 '21 at 17:14
  • @NoahSchweber. I think you're also misreading the last paragraph. Anyway, if there were a problem with the strength of the system, it is probably that it is too strong rather than too weak. That is, while it eliminates one axiom (Axiom 3 here) which cannot claim to be logical (logically necessary), the system still does not just consist of logical assumptions. Or in any case that is what I believe. See https://mathoverflow.net/questions/287256/when-we-count-the-same-set-must-the-number-always-be-the-same – abo Sep 16 '21 at 16:57
4

There is an argument against Nelson's predicative arithmetic which basically says that the assumption that exponentiation is not total -- which is in some sense the whole reason to start predicative arithmetic -- implies the inconsistency of predicative arithmetic.

Glorfindel
  • 2,743
Stefan Geschke
  • 16,346
  • 3
    The link appears broken - do you have a stable one? – Noah Schweber Jan 10 '21 at 22:48
  • 2
    @NoahSchweber A Google result for mentions of that web address suggests it used to lead to "On The Untenability Of Nelson's Predicativism" by Stefan Iwan, which can be found at https://link.springer.com/content/pdf/10.1023/A:1005651027553.pdf. – Robin Saunders Sep 11 '21 at 21:14
3

This is an old question, and I'm far from a specialist, but I've recently been reading Dmytro Taranovsky's 2016 preprint Arithmetic with Limited Exponentiation, which seems to me highly relevant to this question.

In it, he constructs a hierarchy of theories with finitely many types $I_0, \ldots, I_n$, in which exponentiation is (essentially) an operation $I_{k+1} \to I_k$, taking us e.g. from the natural number $x$ to the type $2^x$ of binary strings of length $x$, which is not assumed to be finite. In this way, $(n+1)$-fold iteration of exponentiation, and the associated huge numbers, are undefined.

Nevertheless, Taranovsky shows that theories near the bottom of his hierarchy are enough to support a significant portion of ordinary mathematics. Not only that, but by failing to treat exponentiation as a function $\mathbb{N} \to \mathbb{N}$, each of the theories (but not their union) is interpretable straightforwardly in $I\Delta_0$, and therefore in Robinson arithmetic, which should be acceptable to all but the most hardcore ultrafinitists.
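Schematically (my paraphrase of the typing discipline described above): exponentiation lowers the type index,

$$x : I_{k+1} \;\vdash\; 2^x : I_k,$$

so starting from $x : I_n$, after $n$ exponentiations one lands in $I_0$, and there is no type left from which a further exponentiation could be applied -- an $(n+1)$-fold tower is simply ill-typed.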

2

There is a consistent logic which explicitly limits the computational complexity of valid statements, and is thus ultrafinitist: https://arxiv.org/abs/2106.13309

The idea is that only computationally bounded functions are total in practice, and the bound must be checked against the limits of our computing power.

Apparently, it can express any bounded Turing machine computation.

Michal Gajda
  • 129