114

"Everyone knows what a curve is, until he has studied enough mathematics to become confused through the countless number of possible exceptions."

Felix Klein

What notions are used but not clearly defined in modern mathematics?


To clarify further what the purpose of the question is, here is another quote, by M. Emerton:

"It is worth drawing out the idea that even in contemporary mathematics there are notions which (so far) escape rigorous definition, but which nevertheless have substantial mathematical content, and allow people to make computations and draw conclusions that are otherwise out of reach."

The question is about examples for such notions.

The question was asked by Kakaz

kakaz
  • 1,596
  • 1
    Used by whom? Defined where? – Yemon Choi Feb 25 '11 at 21:02
  • 14
    In mathematics, by mathematicians. Everything is clear? I suppose mathematics is still live nowadays... – kakaz Feb 25 '11 at 21:08
  • 9
    "Everything is well defined in modern mathematics" - We really don´t know that for sure yet (i.e. consistency of ZFC)... "Mathematics is more about correctness than about truth." -I would argue that it is more about the relative truth. Being about correctness is contains too much of a self-purpose... – M.G. Feb 26 '11 at 01:55
  • 4
    This question has a meta thread: http://tea.mathoverflow.net/discussion/968/notions-used-but-not-rigorously-defined/#Item_0 – JBL Feb 26 '11 at 01:57
  • @kakaz: I would appreciate if you were more specific about what a "notion" is, and what it means to "use" it. – Qiaochu Yuan Feb 26 '11 at 23:18
  • 6
    Qiaochu - I suppose that the natural-language meaning is enough for "notion" and "used". If you are in trouble, you may refer to nouns appearing more than 10 times in books from the LCC classification, class Q, subclass QA, from the last 60 years, as names referring to mathematical objects (that is, other than common names for things, people, animals or plants; by "thing" I mean any part of physical reality which may be observed). – kakaz Feb 27 '11 at 10:30
  • 3
    @kakaz: no, it is not. If you insist on the natural language interpretations I think this question is far too vague and will vote to close. – Qiaochu Yuan Feb 27 '11 at 12:37
  • 11
    @Qiaochu - I understand your position, but I think nobody has to fear vagueness in such a situation. It is just another soft question on MathOverflow. It is some fun. I know that you are professionals, but I am an amateur. I would like to play with mathematics. It is usually worth mentioning what we use without a proper definition, fighting between intuition and complicated formalism, and possibly why. I do not understand why a question which is obviously interesting and has the potential to broaden horizons for many people is so controversial. – kakaz Feb 27 '11 at 19:05
  • 4
    kakaz: what you're "playing" with isn't mathematics then, for mathematics has these notions well-defined. It's like asking what the smallest positive real number is: there simply isn't one. Similarly, it's not useful or horizon-broadening to talk about things which several generations of mathematicians have already thought carefully about and have discarded precisely because they are NOT interesting--being ill-defined and therefore impossible to deal with in a mathematical fashion.

    (contd.)

    – Adam Hughes Feb 27 '11 at 20:51
  • 2
    However, I do know that there are some notions which have several definitions, none of which is 100% standard, and perhaps if you were to ask the more specific question of "what terminology stands for multiple notions", you might get a better answer as well as a more satisfying one, for it assuredly would be a more meaningful response. – Adam Hughes Feb 27 '11 at 20:52
  • 2
    @Adam - sorry - I cannot understand your position - do you think you always read about clearly defined notions in the papers you read in mathematical journals, for example? Please take a look at the big list below... In my opinion it has eye-opening potential. Of course it may not be mathematics in your opinion... – kakaz Feb 27 '11 at 22:12
  • 32
    Closing this thread seems more like punishing someone for being an amateur rather than enhancing the quality of the site.

    Now and throughout history, I believe, a large percentage of the most interesting mathematics revolves precisely around those notions that are used but not (yet) clearly defined. A big list of such subjects seems extremely valuable to me.

    Vote to reopen.

    – Louigi Addario-Berry Mar 01 '11 at 16:01
  • 2
    @kakaz: There are only a couple of buttons available. I have more problems with most of the answers, to be honest. Instead of reopening this question I'd see more sense in breaking it up into 20 real questions, like "what is a motive", "what is a field with one element", etc., and what are the problems (if any) with making their definition precise. – Franz Lemmermeyer Mar 01 '11 at 16:02
  • I am not sure if it is a possible answer, so I leave it as a comment. The notion of weak $\omega$-categories admits many competing definitions. Everyone agrees that it is something with higher-dimensional morphisms, but there are a lot of possible axiom systems. – Philippe Gaucher May 27 '15 at 08:38
  • Maybe the notion of quasicrystal could qualify? (See this comment: https://mathoverflow.net/questions/34699/approaches-to-riemann-hypothesis-using-methods-outside-number-theory/34700#comment288366_34700) – Watson Apr 11 '20 at 20:04

32 Answers

116

Surprised nobody mentioned fractal yet. (Chaos has been mentioned but the connection is tenuous.)

No satisfactory definition of fractal exists. Mandelbrot tentatively defined a fractal as a set whose Hausdorff dimension is strictly larger than its topological dimension. But this leaves out many sets that most people agree are fractals, and it's hard to extend to other objects (like measures) that one also wants to consider as fractals.

Taylor defined a fractal as a set with coinciding Hausdorff and packing dimensions. His goal was to exclude overly irregular objects (for which different concepts of fractal dimension may differ), but according to his definition any smooth object is a fractal, while clearly fractal sets such as Bedford–McMullen carpets are left out.

In applied fields, a fractal is often defined as a set having some kind of similarity: small parts are similar to the whole set, perhaps in a statistical or approximate sense. While many fractals arising in practice do enjoy this feature, this is still a very vague definition.

Some authors consider any set or measure in Euclidean space to be a fractal, when the goal is to study properties typically associated with fractal sets, such as Hausdorff dimension.

At the end of the day, there is agreement that giving a universal definition of fractal is impossible, yet it is a useful concept to have around, and people know a fractal when they see it.
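
For illustration, here is a rough numerical sketch of how one of the dimension-based notions gets used in practice: it estimates the box-counting dimension of the middle-thirds Cantor set, which should come out near $\log 2/\log 3 \approx 0.63$, strictly between its topological dimension $0$ and $1$. The depth and range of scales below are arbitrary choices made only for the sketch.

```python
import math

DEPTH = 12  # resolution of the finite approximation to the Cantor set

def cantor_points(depth):
    """Integer codes p such that p / 3**depth is the left endpoint of a
    level-`depth` interval of the middle-thirds Cantor set."""
    pts = [0]
    for k in range(1, depth + 1):
        pts += [p + 2 * 3 ** (depth - k) for p in pts]
    return pts

def box_count(pts, depth, k):
    """Number of boxes of side length 3**(-k) that meet the approximation."""
    return len({p // 3 ** (depth - k) for p in pts})

pts = cantor_points(DEPTH)
scales = range(2, 11)
xs = [k * math.log(3) for k in scales]                      # log(1/eps)
ys = [math.log(box_count(pts, DEPTH, k)) for k in scales]   # log N(eps)

# least-squares slope of log N(eps) against log(1/eps) estimates the dimension
n = len(xs)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) \
        / (n * sum(x * x for x in xs) - sum(xs) ** 2)
print(f"estimated box dimension ~ {slope:.3f}; log 2 / log 3 = {math.log(2) / math.log(3):.3f}")
```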

  • The notion of a self-similar structure in general is an 'intuitive' one. It is given various precise definitions in specific contexts, but the general idea is usually left to the imagination. – Colin Reid Mar 02 '11 at 20:05
  • 1
    Could a natural definition of fractal be "not of integer dimension" (whatever definition of dimension you take)? – Andreas Rüdinger Sep 30 '11 at 20:26
  • 19
    @Andreas, there are sets of integer dimension which have all the hallmarks of fractality. For example, consider the four corner Cantor set, constructed by replacing the unit square $[0,1]^2$ by the four corner squares of side length $1/2$ and continuing inductively. This object is the epitome of a fractal (strictly self-similar, purely unrectifiable), and has Hausdorff and box counting dimension $1$. – Pablo Shmerkin Sep 30 '11 at 23:55
  • 7
    @Pablo (if you're still here, 7 years later), I think you want the corner squares of side length $1/4$, not $1/2$. – Gerry Myerson Aug 22 '18 at 23:10
  • 1
    @Gerry, certainly! – Pablo Shmerkin Sep 08 '18 at 12:44
  • Naive question from a non-mathematician: why can't a fractal simply be defined by the following two words: recursive geometry? (Are the terms "recursive" and/or "geometry" perhaps not exact enough, thus entailing a goal post moving game of recursive -fractal?- reification?) – Will Nov 26 '21 at 13:48
  • 2
    @Will The words "recursive" and "geometry" are far, far more nebulous than "fractal", so defining "fractal" as "recursive geometry" seems like a step in the wrong direction. – Carl-Fredrik Nyberg Brodda Nov 26 '21 at 18:52
  • 1
    @AndreasRüdinger This page contains a list of fractals, several with integer dimension. – user76284 Nov 04 '23 at 03:16
  • Set $A$ is a fractal $:\Leftrightarrow$ the Hausdorff dimension of $A$ is DIFFERENT from (hence larger than) the topological dimension of $A$. – Wlod AA Nov 16 '23 at 23:43
97

One of the most important contemporary mathematical concepts without a rigorous definition is quantum field theory (and related concepts, such as Feynman path integrals).

Note: As noted in the comments below, there is a branch of pure mathematics --- constructive field theory --- devoted to making rigorous sense of this problem via analytic methods. I should add that there is also a lot of research devoted to understanding various aspects of field theory via (higher) categorical points of view. But (as far as I understand), there remain important and interesting computations that physicists can make using quantum field theoretic methods which can't yet be put on a rigorous mathematical basis.

Emerton
  • 56,762
  • 4
    Do you mean Feynman path integrals? (as opposed to an integral attached to a single Feynman diagram) – S. Carnahan Feb 26 '11 at 06:54
  • 3
    I'd guess Emerton means Feynman path integrals, as the integral attached to a single Feynman diagram is pretty well-defined. – Kelly Davis Feb 26 '11 at 11:43
  • 4
    Maybe "quantization" is a related such concept. – Gil Kalai Feb 26 '11 at 16:33
  • 8
    It is not fair to say there is no rigorous definition of a Feynman path integral. It is a probability measure on the space of tempered distributions. The difficulty is in constructing nontrivial examples. This is what constructive field theory is about. – Abdelmalek Abdesselam Feb 26 '11 at 18:59
  • 1
    "Feynman integrals" clarified to "Feynman path integrals". Also, I added some elaboration on the issue of rigorous foundations (reflecting my limited understanding of the situation). – Emerton Feb 27 '11 at 00:09
  • 1
    Kevin Costello just published a very interesting book on those themes: http://www.ams.org/bookstore-getitem/item=surv-170 – Thomas Riepe Jun 02 '11 at 08:18
  • A little more elaboration on what I said in my comment above is in the MO answer I gave here https://mathoverflow.net/questions/260854/a-roadmap-to-hairers-theory-for-taming-infinities/260941#260941 – Abdelmalek Abdesselam Aug 23 '18 at 13:16
96

The field with one element, $F_1$.

Georges Elencwajg in http://mathoverflow.tqft.net/discussion/968/notions-used-but-not-rigorously-defined/#Item_0

kakaz
  • 1,596
  • 4
    If I remember correctly, there are many definitions of F1, but it is not correct to say that this means there is none ... – Martin Brandenburg Feb 26 '11 at 11:13
  • Yes, Martin is exactly right. For instance there is very good reason to believe that something as concrete as the sphere spectrum models $\mathbb{F}_1$ – Harry Gindi Feb 27 '11 at 15:23
  • The concept of a field is well-defined. The field with one element is too vacuous to be considered useful, so no one bothers to define it. But there is no uncertainty as to how to define it. – David Harris Feb 27 '11 at 17:59
  • 36
    @David: You're missing the point. The "field with one element" is neither a field nor has it one element (probably). It's a figure of speech to describe a more or less hypothetical object that should behave in some aspects like a field with one element should if it existed. – Johannes Hahn Feb 27 '11 at 23:29
  • 16
    @David: Wherever one sees "$\mathbb{F}_1$", one should think "universal base", not "field with one element". To elaborate, the idea of the "field with one element" is an algebro-geometric concept that works roughly as follows: The universal base in commutative algebra and algebraic geometry is $\operatorname{Spec}(\mathbb{Z})$. However, this is in many ways annoying, since we usually expect in geometry that the universal base be a point. At a technical level in alg. geom., there are a number of reasons why we would like our base to be a field, and so to prove things about (continued) – Harry Gindi Feb 28 '11 at 07:36
  • 16
    the category of all algebro-geometric objects (which necessarily is the category of appropriate objects over the base $\operatorname{Spec}(\mathbb{Z})$), we have to attack things indirectly by proving things in all characteristics (when such proofs are even valid!). The idea of $\mathbb{F}_1$ is to find an appropriate category of "generalized commutative rings" that has a deeper base than $\operatorname{Spec}(\mathbb{Z})$ and generalizes algebraic geometry in the classical case. One algebraic approach I've seen is Jim Borger's approach via $\lambda$-rings, which is on the arXiv (contd) – Harry Gindi Feb 28 '11 at 07:42
  • 16
    Another approach, using theory of $E_\infty$-ring spectra, is originally due, I think, to Jack Morava. This approach, called Spectral or "Brave New" algebraic geometry, is founded on the framework developed in the recent work of Toen-Vezzosi and Lurie. If you believe that the theory of $E_\infty$-ring spectra really generalizes the theory of commutative rings, then this seems natural, but conversely, the fact that the universal base $E_\infty$-ring spectrum indeed has all of the expected properties of $\mathbb{F}_1$ seems like convincing evidence that this is an (continued) – Harry Gindi Feb 28 '11 at 07:55
  • 16
    extremely natural extension of the classical theory. The major drawback of this theory is that the universal base $E_\infty$-ring spectrum is an object that exists only up to homotopy (it is a "virtual" field), and further, it happens to be a notoriously difficult object to compute with (for instance, computing its homotopy groups is by definition equivalent to computing the stable homotopy groups of spheres, which have only been computed to a stunningly low finite degree (somewhere between 30 and 40 are known, depending on who you ask)). – Harry Gindi Feb 28 '11 at 08:01
  • 9
    By the way, the reason I didn't say more about Jim Borger's approach is that I don't really understand it that well (the paper was a bit beyond my reach when I tried to read it last year), but I attended a talk recently that described how $\lambda$-structure and Witt ring approach is related to the "Brave New" algebra approach (it was quite complicated and involved the Adams spectral sequence and the chromatic spectral sequence, so I didn't really understand the technical details there either). There's also a connection between these two and Langlands-stuff. – Harry Gindi Feb 28 '11 at 08:40
  • 10
    Anyway, the point here is that $\mathbb{F}_1$ has an extremely deep and rich mathematical structure and is closely related to at least two major open problems, the stable homotopy groups of spheres and the Langlands program, and how those two problems relate to one another. I've heard some speculation (code for wishful thinking) that a better understanding of $\mathbb{F}_1$ could lead to a proof of the Riemann hypothesis by adapting the methods used by Deligne to prove the Riemann hypothesis over finite fields. – Harry Gindi Feb 28 '11 at 08:46
  • 3
    @David: There's a short survey here if you're interested in why it might be useful to have it as an example:

    http://arxiv.org/abs/0704.2030

    – Nick Loughlin Mar 01 '11 at 15:45
  • 3
    Good comments (by Harry) make a good answer! – Gil Kalai Mar 01 '11 at 17:53
  • 2
    @Harry: The "universal $E_\infty$-ring spectrum is the sphere spectrum, which is not only defined up to homotopy. But the most sensible way to deal with $E_\infty$-ring spectra is to look at their homotopy. Btw: The stable homotopy groups of spheres are computed into the 60s (although not too many people may have checked these computations). – Lennart Meier Jun 25 '13 at 17:24
  • @LennartMeier I wrote these comments several months before I read EKMM, I guess! – Harry Gindi Aug 11 '17 at 13:56
66

I have three (somewhat related) examples:

  1. The notion of explicit construction. Seeking explicit constructions to replace non-constructive existence proofs is an old endeavor. Computational complexity offers, in some cases, formal definitions (constructions that can be done in P or in polylog space), but these definitions are slightly controversial; a toy illustration of this complexity-theoretic reading appears after this list. In any case, people looked for explicit constructions before any explicit definition of the term "explicit construction" was known.

  2. The notion of effective bounds/proofs. There are many important problems about replacing a proof giving non-effective bounds with a proof giving effective bounds. Usually I can understand a specific such problem, but the general notion of effectiveness is not clear to me. (A famous example: effective proofs for the Thue–Siegel–Roth theorem.)

  3. Elementary proofs. I remember that finding an elementary proof of the prime number theorem was a major goal. I was told what this means many times, and on a few of those occasions I even understood. But the notion of an "elementary" proof in analytic number theory has remained quite vague to me.
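
To make the complexity-theoretic reading of "explicit" in item 1 concrete, here is a toy sketch built on the standard Ramsey-graph example (the tiny parameters and function names are my own choices for illustration): the probabilistic method shows that graphs on $n$ vertices with no clique or independent set of size about $2\log_2 n$ exist, and exhaustive search "constructs" one, but in exponential time, which is exactly the kind of procedure that definitions like "constructible in P" are meant to rule out.

```python
from itertools import combinations

def has_homogeneous_set(adj, n, s):
    """True if the graph (adjacency matrix `adj` on n vertices) contains
    a clique or an independent set of size s."""
    for verts in combinations(range(n), s):
        edges = [adj[u][v] for u, v in combinations(verts, 2)]
        if all(edges) or not any(edges):
            return True
    return False

def brute_force_ramsey_graph(n, s):
    """Search all 2**(n*(n-1)/2) graphs on n vertices for one with no
    clique or independent set of size s.  This succeeds for tiny n, but
    it takes exponential time, so it is not an 'explicit construction'
    in the complexity-theoretic sense."""
    pairs = list(combinations(range(n), 2))
    for code in range(2 ** len(pairs)):
        adj = [[0] * n for _ in range(n)]
        for i, (u, v) in enumerate(pairs):
            adj[u][v] = adj[v][u] = (code >> i) & 1
        if not has_homogeneous_set(adj, n, s):
            return adj
    return None

g = brute_force_ramsey_graph(5, 3)   # the 5-cycle works, so a graph is found
print("found:", g is not None)
```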

Vincent
  • 2,437
Gil Kalai
  • 24,218
  • 14
    It's good that these are vague, because it guarantees that we can always look for yet more explicit construction, yet more effective bounds and yet more elementary proofs! – darij grinberg Feb 26 '11 at 17:15
  • 2
    I always thought that explicit constructions had more to do with decidability than with computational complexity. Many constructions have branchings (if $a \neq 0$, divide by $a$, otherwise do something else...) that are not at all helpful if you cannot decide which branch you should follow. – Thierry Zell Feb 28 '11 at 00:12
  • 7
    How about ‘closed form’ (http://www.ams.org/mathscinet/search/publdoc.html?pg1=MR&s1=1699262)? – LSpice May 06 '11 at 06:38
  • blog post by Bill GASARCH about what is an elementary proof: http://blog.computationalcomplexity.org/2010/02/what-is-elementary-proof.html – Kaveh Jan 05 '12 at 22:57
  • 3
    I thought an elementary proof in analytic number theory means a proof that doesn't use complex analysis. However, "elementary" is overloaded so it means different things in different contexts. – Zsbán Ambrus Jun 21 '15 at 18:22
  • 2
    Effective is a clearly defined concept: there is a Turing Machine which (perhaps for some problem-connected input) will in finite time output the constant (Note: it's quite possible to have an effective upper bound for an uncomputable constant, even a sequence of effective upper bounds provably tending to the constant, which can be confusing).

    Explicit construction is not well-defined, though - it's always personal taste (You might rule out a brute-force search of log size as it doesn't give you any idea what the resulting object looks like: but this is P-time possible in a construction)

    – user36212 Jun 21 '15 at 20:01
61

I'm not sure how well this fits the bill, but in algebraic geometry and number theory, the notion of mixed motives is still undefined, although people have a fairly good idea of what properties they want the category of mixed motives to have.

Alex B.
  • 12,817
  • 7
    Indeed, they are so undefined that even the wiki link pops up an error ;) – M.G. Feb 26 '11 at 13:13
  • Thanks for alerting me to the broken link. I seem to never get the hang of these links. – Alex B. Feb 26 '11 at 14:25
  • 5
    One should mention that, thanks to Beilinson, Voevodsky, Déglise-Cisinski and others, we have a good candidate for the derived category of mixed motives over most bases. The definition of an abelian category of mixed motives still relies on Grothendieck's Standard Conjectures on algebraic cycles. – AFK Feb 26 '11 at 17:56
48

Not only is the notion of chaos not well-defined (cf. the answer of Gerry Myerson), but the same holds true for its opposite: there is no universally accepted definition of integrable system yet.

38

The notion of a $q$-analogue in enumerative combinatorics.
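
For instance, the Gaussian binomial coefficient $\binom{n}{k}_q$ is the standard $q$-analogue of the binomial coefficient: it specializes to $\binom{n}{k}$ at $q=1$, and for a prime power $q$ it counts the $k$-dimensional subspaces of $\mathbb{F}_q^n$. A rough computational sketch (the recurrence used is one of several equivalent ways to compute it):

```python
from math import comb

def poly_add(a, b):
    """Add polynomials in q given as coefficient lists (constant term first)."""
    size = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) for i in range(size)]

def q_binomial(n, k):
    """Gaussian binomial [n choose k]_q via the q-Pascal recurrence
    [n choose k]_q = [n-1 choose k-1]_q + q^k * [n-1 choose k]_q."""
    if k < 0 or k > n:
        return [0]
    if k == 0 or k == n:
        return [1]
    shifted = [0] * k + q_binomial(n - 1, k)   # multiply by q^k
    return poly_add(q_binomial(n - 1, k - 1), shifted)

coeffs = q_binomial(5, 2)
print("[5 choose 2]_q coefficients:", coeffs)      # 1 + q + 2q^2 + 2q^3 + 2q^4 + q^5 + q^6
print("at q = 1:", sum(coeffs), "=", comb(5, 2))   # specializes to the ordinary binomial
```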

  • 3
    I take issue with that, despite my understanding of $q$-calculus being feeble, at best. It seems to me that most other examples here "require" a definition, whereas a $q$-analogue is simply something that we can ask for, but need not define. We can ask what the $q$-analogue of Chu-Vandermonde's identity is (and quickly figure out the answer because it's not difficult), but we wouldn't assume its existence-statement and try to build on that. At any rate, my naive understanding of $q$-analogues is the replacement of the word "set" with (finite) "vector space". Perhaps it goes much deeper though. – the_fox Jul 30 '15 at 01:36
  • 4
    @the_fox There is a "cheap" way to define "$q$-analogue," which is any formula involving $q$ that specializes to the original formula when we set $q=1$ (or let $q\to1$). By this definition, existence is trivial, because we can throw in a $q$ in all kinds of dumb ways. But that's not what people have in mind by a $q$-analogue. A $q$-analogue should be "nice" in some way. But "nice" is not clearly defined. Sometimes there might be more than one $q$-analogue. And there are certainly many $q$-analogues where the $q$ is not just the size of a finite field. – Timothy Chow Nov 26 '21 at 21:03
36

The notion of canonicity (with respect to maps and objects) has thus far evaded attempts by mathematicians to formalize it. If I remember correctly, Bourbaki tried to give it a definition based on some ideas of Chevalley, but, at least to my knowledge, it was deleted from later drafts of the Elements because it was not a particularly useful notion (or perhaps it just didn't work out; there was a thread on MO, asked by Kevin Buzzard, about this particular section of Bourbaki, and you may find more details there). Jim Dolan more recently tried to give a definition of a canonical transformation between functors, but his notion is essentially that of a transformation that is natural when restricted to the core groupoid. However, this doesn't really capture all of the cases that we want, and I don't know of any serious attempt to make use of the notion.

Harry Gindi
  • 19,374
29

So-called stiff ODEs might qualify. In the literature one finds plenty of different attempts to define the notion of a stiff initial value problem for an ODE, some more, some less precise, and they all try to capture the phenomenon that the step size must decrease rapidly when numerically integrating some IVPs with explicit schemes, whereas some implicit schemes do very well without slowing down significantly. In fact, some authors use this as the definition of a stiff IVP.
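
A rough numerical sketch of the phenomenon (the model equation, step size and stiffness parameter below are arbitrary choices for illustration): on $y' = -\lambda\,(y-\cos t)$ with $\lambda = 50$ and step $h = 0.1$, explicit Euler blows up because $|1-h\lambda| > 1$, while backward Euler tracks the smooth slow solution with no step-size reduction.

```python
import math

lam, h, T = 50.0, 0.1, 5.0   # stiffness parameter, step size, final time
steps = int(T / h)

def explicit_euler():
    y, t = 0.0, 0.0
    for _ in range(steps):
        y += h * (-lam * (y - math.cos(t)))   # amplification factor 1 - h*lam = -4
        t += h
    return y

def implicit_euler():
    # Backward Euler; the implicit equation is linear in y, so solve it directly:
    # y_{n+1} = (y_n + h*lam*cos(t_{n+1})) / (1 + h*lam)
    y, t = 0.0, 0.0
    for _ in range(steps):
        t += h
        y = (y + h * lam * math.cos(t)) / (1.0 + h * lam)
    return y

print("explicit Euler at t=5:", explicit_euler())   # astronomically large
print("implicit Euler at t=5:", implicit_euler())   # close to cos(5) ~ 0.28
```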

user8707
  • 316
  • 1
    Some people say an equation is "stiff" if explicit methods require a very small step size to work, but the solution is still smooth. I think it is also "defined" as having components that vary on very different length scales. – Darsh Ranjan Feb 26 '11 at 18:04
  • 2
    I use the working definition "stiff problems are problems that explicit Runge-Kutta cannot efficiently solve"... – J. M. isn't a mathematician May 08 '11 at 18:53
29

There are several examples in set theory; the three I mention are related so I will include them in a single answer rather than three.

1) Large cardinal notion.

I have seen in print many times that there is no precise definition of what a large cardinal is, but I must disagree, since "weakly inaccessible cardinal" covers it. Of course, if you retreat to set theories without choice then there may be some room for discussion, but this is a technical point.

People seem to mean something different when they say that large cardinal is not defined. It looks to me like they mean that the word should be used in reference to significant sign posts within the large cardinal hierarchy (such as "weakly compact", "strong", but not "the third Mahlo above the second measurable") and, since "significant" is not well defined, then...

However, it seems clear that nowadays we are more interested in large cardinal notions rather than the large cardinals per se. To illustrate the difference, "$0^\sharp$ exists" is obviously a large cardinal notion, but I do not find it reasonable to call it (or $0^\sharp$) a large cardinal.

And "large cardinal notion" is not yet a precisely defined concept. A very interesting approximation to such a notion is based on the hierarchy of inner model operators studied by Steel and others. But their meaningful study requires somewhat strong background assumptions, and so many of the large cardinal notions at the level of $L$ or "just beyond" do not seem to be properly covered under this umbrella.

2) The core model.

This was mentioned by Henry Towsner. I do not think it is accurate that we were proving results about it without a precise definition. What happens is that all the results about it have additional assumptions beyond ZFC, and we would like to be able to remove them. More precisely, we cannot show its existence without additional assumptions, and these additional assumptions are also needed to establish its basic properties.

The core model is intended to capture the "right analogue" of $L$ based on the background universe. If the universe does not have much large cardinal structure, this analogue is $L$ itself. If there are no measurable cardinals in inner models, the analogue is the Dodd-Jensen core model, and the name comes from their work. Etc. In each situation we know what broad features we expect the core model to have (this is the "not clearly defined part"). Once in each situation we formalize these broad features, we can proceed, and part of the problem is in showing its existence.

Currently, we can only prove it under appropriate "anti-large cardinal assumptions", saying that the universe is not too large in some sense. One of the issues is that we want the core model to be a fine structural model, but we do not have a good inner model theory without anti-large cardinal assumptions. Another more serious issue is that as we climb through the large cardinal hierarchy, the properties we can expect of the core model become weaker. For example, if $0^\sharp$ does not exist, we have a full covering lemma. But this is not possible once we have measurables, due to Prikry forcing. We still have a version of it (weak covering), and this is one of the essential properties we expect.

(There are additional technical issues related to correctness.)

But it is fair to expect that as we continue developing inner model theory, we will find that our current notions are too restrictive. As a technical punchline, currently the most promising approach to a general notion seems to be in terms of Sargsyan's hod-models. But it looks to me this will only take us as far as determinacy or Universal Baireness can go.

3) Definable sets of reals.

We tend to say that descriptive set theory studies definable sets of reals as opposed to arbitrary such sets. This is a useful but not precise heuristic. It can be formalized in wildly different ways, depending on context. A first approximation to what we mean is "Borel", but this is too restrictive. Sometimes we use definability in terms of the projective hierarchy. Other times we say that a definable set is one that belongs to a natural model of ${\sf AD}^{+}$. But it is fair to say that these are just approximations to what we would really like to say.

Andrés E. Caicedo
  • 32,193
  • 5
  • 130
  • 233
  • 5
    Andres, regarding your proposal that weakly inaccessible cardinals cover all large cardinal notions, how about the notion by which $\theta$ is fairly big if $V_\theta \models$ ZFC? The least such cardinal is not weakly inaccessible, since it has cofinality $\omega$, but I would still regard this as a large cardinal notion. – Joel David Hamkins Feb 26 '11 at 21:11
  • Yes, this is in accordance with what I meant: It makes sense to think of this as a "large cardinal notion" (just as with most rungs of the ladder that is the consistency strength hierarchy) but I wouldn't call it a "large cardinal". – Andrés E. Caicedo Feb 26 '11 at 21:20
  • Maybe the definition of what a set itself is? (I am not a specialist in ZFC) – Duchamp Gérard H. E. May 27 '15 at 09:44
  • 1
    @Burak These are precisely the sets that are (first-order) definable from parameters in $(\mathbb R,\mathbb N,+,\times)$. – Andrés E. Caicedo Nov 19 '15 at 18:53
  • @AndrésCaicedo Sorry for deleting the comment. I realized after posting the comment that one side of what I had in mind is problematic. But I should add the question back so that your comment makes more sense. (I was simply asking the historical reason for the use of the phrase "definable sets of reals" for projective sets and whether it had anything to do with $\mathcal{L}_{\omega_1 \omega}$ definability in the field of real numbers.) – Burak Nov 19 '15 at 18:57
  • @AndrésE.Caicedo What I have most commonly seen is the claim that there is no precise definition of a large cardinal axiom. – Timothy Chow Nov 26 '21 at 02:38
25

For a number of years, different authors were using different definitions of "chaos", but I think that has settled down now.

"Quantum group" may be a good answer. If Wikipedia can be trusted on this issue, "In mathematics and theoretical physics, the term quantum group denotes various kinds of noncommutative algebra with additional structure. In general, a quantum group is some kind of Hopf algebra. There is no single, all-encompassing definition, but instead a family of broadly similar objects."

Gerry Myerson
  • 39,024
  • 7
    TBH defining what a "quantum group" is doesn't seem to me like the most pressing problem in quantum group theory. "Classical groups" were long undefined, yet not hindering their exploration. As long as we know how every single quantum group we need is defined... – darij grinberg Feb 26 '11 at 10:16
  • 5
    @Darij: Well, it would be nice to have an axiomatization that allowed us to prove things simultaneously about say, quantized enveloping algebras and quantized coordinate rings of algebraic groups. But I agree that it doesn't seem pressing... – Sheikraisinrollbank Mar 03 '11 at 02:38
20

In Leo Corry's book Modern Algebra and the Rise of Mathematical Structures, he chronicles how mathematicians have tried to give a formal definition of structure via lattice theory, Bourbaki's set theoretic structures, and category theory. At least according to Corry, the concept is still elusive and not really captured by any of the attempts.

  • The question asks for notions that "allow people to make computations and draw conclusions that are otherwise out of reach"; this one is a bit too metamathematical. –  Dec 05 '19 at 19:33
15

Today, there are 101 papers on MathSciNet using the notion of planar algebra, discovered by V. Jones in 1998.
The foundation of planar algebra theory is still waiting for a detailed proof with all the i's dotted and t's crossed.
See, for example, the post: What's the detailed proof of "the composition of planar tangles is well-defined"?

14

The set of equivalence classes of irreducible, smooth representations of a reductive $p$-adic group $G$ should be partitioned into finite subsets called $L$-packets. Each $L$-packet should correspond to a Langlands parameter, but since this correspondence remains conjectural, $L$-packets are not defined in general. In some important cases, one knows exactly what the $L$-packets are. For example, if $G$ is a general linear group, then the $L$-packets are singletons. For other groups, there are some properties that $L$-packets are believed to satisfy, but that's not a definition.

  • 2
    I believe though that the situation is in better shape now than it once was. The proof of the fundamental lemma should allow Arthur to complete his work on the stable trace formula for classical groups, and lead to a theory of local $A$-packets for classical groups. Since in the tempered case $A$-packets and $L$-packets coincide, and since the general description of $L$-packets reduces to the description of tempered $L$-packets for Levi's, my understanding is that this should also lead to a theory of local $L$-packets for classical groups. – Emerton Feb 27 '11 at 20:35
11

The notion of a noncommutative set is used intuitively as the noncommutative analogue of a set, just as von Neumann algebras or ${\rm C}^*$-algebras are for measurable or topological spaces. But unlike those notions of noncommutative topological or measurable space, which are well defined in the operator-algebras framework, the notion of noncommutative set is not (yet) (well-)defined. See the post: What's a noncommutative set?

11

To state that a mathematical assertion is morally correct or morally true seems to convey a significant amount of mathematical content. This may indicate to the reader/audience that the assertion has every right to be true, even if it may not yet be proven.

See Eugenia Cheng's article discussing morality in mathematics.

Mark S
  • 2,143
11

The concept of turbulence is still vaguely or ill defined, and it is applied to too many phenomena.

Examples from Is there a mathematically precise definition of turbulence for solutions of Navier-Stokes? and elsewhere:

  • In the Ptolemaic Landau–Hopf theory, turbulence is understood as a cascade of bifurcations from unstable equilibria via periodic solutions (the Hopf bifurcation) to quasiperiodic solutions with an arbitrarily large frequency basis.

  • According to Arnold and Khesin, in the 1960's most specialists in PDEs regarded the lack of global existence and uniqueness theorems for solutions of the 3D Navier–Stokes equation as the explanation of turbulence.

  • Kolmogorov suggested studying minimal attractors of the Navier-Stokes equations and formulated several conjectures as plausible explanations of turbulence. The weakest one says that the maximum of the dimensions of minimal attractors of the Navier–Stokes equations grows along with the Reynolds number Re.

  • In 1970 Ruelle and Takens formulated the conjecture that turbulence is the appearance of global attractors with sensitive dependence of motion on the initial conditions in the phase space of the Navier–Stokes equations. In spite of the vast popularity of their paper, even the existence of such attractors is still unknown. (A toy numerical illustration of this sensitive dependence appears after this list.)

  • Existence of energy cascades (e.g., big vortices feeding smaller vortices). This reflects the physical notion that mechanical energy injected into a fluid is generally on fairly large length and time scales, but this energy undergoes a "cascade" whereby it is transferred to successively smaller scales until it is finally dissipated (converted to thermal energy) on molecular scales.

  • Von Karman: “Turbulence is an irregular motion which in general makes its appearance in fluids, gaseous or liquid, when they flow past solid surfaces or even when neighboring streams of the same fluid flow past or over one another.”

  • Hinze: “Turbulent fluid motion is an irregular condition of the flow in which the various quantities show a random variation with time and space coordinates, so that statistically distinct average values can be discerned.”

  • Chapman: “Turbulence is any chaotic solution to the 3-D Navier–Stokes equations that is sensitive to initial data and which occurs as a result of successive instabilities of laminar flows as a bifurcation parameter is increased through a succession of values.”

  • Criteria listed in McDonough's notes:

    1. nonrepeatability (i.e., sensitivity to initial conditions);
    2. disorganized, chaotic, seemingly random behavior;
    3. extremely large range of length and time scales;
    4. enhanced diffusion (mixing) and dissipation (both of which are mediated by viscosity at molecular scales);
    5. three-dimensionality, time dependence and rotationality (hence, potential flow cannot be turbulent because it is by definition irrotational);
    6. intermittency in both space and time.
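
A toy numerical illustration of criterion 1 (sensitivity to initial conditions), using the Lorenz system as a stand-in for a fluid model; the parameters, step size and perturbation below are standard but arbitrary choices made only for this sketch:

```python
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(f, state, dt):
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

dt, n_steps = 0.01, 4000          # integrate up to t = 40
a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-8)        # perturb one coordinate by 10^-8
for _ in range(n_steps):
    a, b = rk4_step(lorenz, a, dt), rk4_step(lorenz, b, dt)

dist = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print("separation at t = 40:", dist)   # comparable to the attractor size, not to 10^-8
```
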
Thomas Kojar
  • 4,449
11

In proof theory, the notion of a "natural well-ordering" comes up, but isn't (perhaps can't be) defined formally.

In a similar vein, I'm told that inner model theorists were proving results about "the core model" for decades without having a precise definition of what it was.

11

The notion of a solution concept in game theory. Although the most famous example of such---Nash equilibrium---is rigorously defined, as are several others (correlated equilibrium, rationalizability, sequential equilibrium, etc.), there is no satisfactory general definition of the type of object of which these are tokens. Indeed, the purported definition that appears in this Wikipedia article is, in a sense, as far from informative as it could be without incurring a type mismatch.
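
By contrast, the individual tokens are completely precise and even mechanically checkable. For instance, a pure-strategy Nash equilibrium of a finite two-player game is just a cell of the payoff bimatrix from which neither player gains by deviating unilaterally; a rough sketch, with made-up payoff matrices (the usual prisoner's dilemma):

```python
from itertools import product

def pure_nash_equilibria(A, B):
    """All pure-strategy Nash equilibria of a bimatrix game:
    A[i][j] is the row player's payoff and B[i][j] the column player's
    when row plays i and column plays j.  (i, j) is an equilibrium iff
    neither player can gain by deviating unilaterally."""
    m, n = len(A), len(A[0])
    eqs = []
    for i, j in product(range(m), range(n)):
        row_best = all(A[i][j] >= A[k][j] for k in range(m))
        col_best = all(B[i][j] >= B[i][l] for l in range(n))
        if row_best and col_best:
            eqs.append((i, j))
    return eqs

# Prisoner's dilemma: strategy 0 = cooperate, 1 = defect.
A = [[3, 0], [5, 1]]   # row player's payoffs
B = [[3, 5], [0, 1]]   # column player's payoffs
print(pure_nash_equilibria(A, B))   # [(1, 1)]: mutual defection
```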

Adam Bjorndahl
  • 383
  • 3
  • 14
10

The natural numbers! We discuss them as if there were a "standard model" ${\bf N}=(\{0,1,2,\ldots\},+,\cdot,<)$ that (by incompleteness) doesn't happen to have a recursively enumerable first-order theory, and then act like that's fine because we all know what ${\bf N}$ is. Or do we?

Believing in a standard ${\bf N}$ doesn't seem much different than the belief that there's a standard universe of sets, where CH has a truth value that we don't happen to know. These days there is at least partial (not universal) acceptance (e.g. multiverse theory) that there isn't a unique set universe.

So are natural numbers different? Or is ${\bf N}$ itself not a well-defined concept? Why does anyone think every sentence in the language of arithmetic has a truth value? Even if one believes this for $\Pi_1^0$ sentences (every Turing machine halts or doesn't), why believe it at higher quantifier depth? Why believe it when there are set quantifiers?

Anyway even if we could somehow get our hands on the complete first-order theory of ${\bf N}$ (aka true arithmetic), that theory still (by Löwenheim–Skolem) has infinitely many models in any given "true" universe of sets.

none
  • 61
  • 9
    There isn't even any universal agreement as to whether zero is in $\bf N$. – Gerry Myerson Aug 22 '18 at 22:34
  • 5
    @Gerry: as countable sets with a (well-founded) linear order with successor, the N with 0 and the N without 0 are uniquely isomorphic, so unless you need the additive monoid of natural numbers rather than the additive semigroup, it won't make too much difference. – David Roberts Aug 23 '18 at 00:36
10

In response to Colin Tan's request (below), I have posted these remarks as the TCS StackExchange question "Do the undecidable attributes of P pose an obstruction to deciding P versus NP?"


That a mathematical idea be "clearly defined" is itself an idea that perhaps could be more clearly defined ... one candidate for a more rigorous assertion is that a mathematical intuition be formally decidable. Moreover, widespread intuitions that are eventually proved to be decidable versus undecidable have an illustrious history in mathematics.

These reflections lead to the suggestion this community wiki's question would be better-posed mathematically (and might perhaps be more useful too) if it were amended to read:

"What intuitions are commonly embraced and/or have proved to be broadly useful, but nonetheless are formally undecidable, in modern mathematics?"
One specific example that comes to mind is Emanuele Viola's theorem, with its implication that the set of Turing machines {M} associated to P has no decidable runtime ordering. Viola's proof of undecidability was eye-opening to me, and it has filled the valuable role of leading me to wonder "What else is out there?"

To show the utility of these reflections, Section 1.5.2 of Sanjeev Arora and Boaz Barak's well-respected textbook Computational Complexity: a Modern Approach is titled "Criticisms of P and some efforts to address them". I have often wished that Arora and Barak had written more on this theme. With the help of Viola's theorem, this wish becomes specific and rigorous: a section titled "What properties of P are not decidable in modern mathematics?"

No doubt many more examples of "undecidable intuitions of modern mathematics" could be posted, and it would be great fun to read other people's examples. However, it seems inappropriate to amend the topic of a community wiki in such a fundamental respect, and so I am posting this amended question as a suggested general "answer" instead.


Partially in response to Colin Tan's request (in the comments below), I have posted on TCS StackExchange the specific question "What is the proper role of verification in quantum sampling, simulation, and extended-Church-Turing (E-C-T) testing?".

More broadly, on Lance Fortnow's weblog, under the topic "75 Years of Computer Science", the question is raised

"Do there exist languages $L$ that are recognized solely by those Turing machines in $P$ whose runtime exponents are undecidable? Can examples of these machines and languages be finitely constructed?"

... but I am not (yet) prepared to post this as a MathOverflow and/or TCS StackExchange question. Thanks and appreciation are extended to Colin.

John Sidles
  • 1,359
  • I was about to post this comment myself when I saw it here. – Michael Hardy Mar 03 '11 at 18:06
  • (I.e. I was about to post this answer.) – Michael Hardy Mar 03 '11 at 18:06
  • 1
    @John, could you post your rephrased question as a separate question? I think it would be very worthwhile to see what responses the MO community has to your rephrased question. –  Mar 11 '11 at 05:47
  • @Colin, I am preparing to do precisely what you ask ... it turns out that these questions are to some extent addressed in Juris Hartmanis' article Feasible computations and provable complexity properties (1978) ... and it is taking a while to decide how to phrase this question in a well-posed yet stimulating way. – John Sidles May 16 '11 at 14:26
  • @Colin, I am making slow progress toward your request. – John Sidles Jun 01 '11 at 18:55
  • @Colin, the question is now formally posted on TCS StackExchange – John Sidles Jun 20 '11 at 18:37
  • The question seems to be http://cstheory.stackexchange.com/questions/7059/do-the-undecidable-attributes-of-p-pose-an-obstruction-to-deciding-p-versus-np . – LSpice Jul 12 '16 at 18:17
10

I asked about Defining variable, symbol, indeterminate and parameter previously on MO, and did not get any satisfying answers for all these concepts. The one exception is that of variable (and meta-variable) where Neel gave good pointers.

9

Infinitesimals are almost in this category.

Technically, calculus generally uses limits instead of infinitesimals. And there are logical systems (e.g. nonstandard analysis) in which genuine infinitesimals are rigorously defined. However, people find infinitesimals easier for intuition even in the context of standard analysis. This type of infinitesimal reasoning then generally needs to be transformed into standard proofs.
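
A minimal example of the kind of computation meant here, in Leibniz-style notation (heuristic as written; it becomes rigorous either via limits or by taking standard parts in nonstandard analysis):

$$\frac{d(x^2)}{dx}=\frac{(x+dx)^2-x^2}{dx}=\frac{2x\,dx+(dx)^2}{dx}=2x+dx\approx 2x,$$

where the last step discards the infinitesimal $dx$.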

David Harris
  • 3,397
  • 2
    For these to qualify, don't infinitesimals need to be "used but not clearly defined in modern mathematics"? I am not sure they satisfy both these conditions, unless by "used" you mean "as a pedagogical aid". (That's not to downplay Loeb measures etc etc but where they are used they are most assuredly "clearly defined".) – Yemon Choi Feb 28 '11 at 06:35
  • 1
    Yemon, I agree with you completely. I said they are "almost" in this category. They are not usually used, and if so they are often defined rigorously. But there is still a category of intuitive, non-rigorous uses of the infinitesimal concept. – David Harris Feb 28 '11 at 12:30
  • 7
    What about nonstandard analysis? – David Corwin Jun 02 '11 at 02:06
8

'Applied Mathematics' is a much-used term in modern mathematics, but I've yet to find a universally-agreed upon definition. Given its use as a major category ('pure' vs 'applied') and repository of sundry generalizations ('non-rigorous','relevant', 'not deep', 'critical to science', etc.), surely a precise definition is in order.

In the MSC, there is only one MSC code with this phrase (00A69). Based on this, maybe 'Applied Mathematics' is a field of inquiry which is not important.

Nilima Nigam
  • 1,161
  • 1
  • 13
  • 15
  • 2
    I too find the term "applied mathematics" confusing. I think part of the confusion I have with it is that by using the term there is the direct implication that all other branches of mathematics are non-applied, non-relevant. And that's certainly not the case -- a lot of non-"Applied Mathematics" has quite a few real-world applications. So I much prefer to just call people simply mathematicians and describe precisely what they do, rather than support the pure/applied division. – Ryan Budney Jun 18 '11 at 20:04
  • Agreed. It is more illuminating to describe the specifics of a mathematician's pursuits than the broad labels of 'applied/pure'. – Nilima Nigam Jun 18 '11 at 21:24
6

Left/right derived functors. If $F$ is an additive functor from a category $A$ to another category $B$, then the left/right derived functors of $F$ go from $A$ to... where? Not to $B$ certainly, because this would require global choice on $A$ or break canonicity.

There seem to be solutions nowadays, with the notions of derived categories and anafunctors. Unfortunately, there seems to be no introductory text yet which would systematically develop homological algebra in a clean way, without cheating and speculating over one's head. I am more than glad to be proven wrong...

PS. This might be what Harry Gindi is referring to.

  • After all in the applications the axiom of global choice can be avoided. This is some theorem in set theory which others may explain better than me. – Martin Brandenburg Feb 26 '11 at 11:16
  • Martin, do you mean the result that adding global choice is a conservative extension of ZFC? – arsmath Feb 26 '11 at 11:27
  • I think that Grothendieck's $\delta$-functors and their relatives, triangulated functors, give a language that allow us to speak of a derived functor as something defined up to 2-natural equivalence. – Leo Alonso Feb 26 '11 at 12:28
  • 5
    @Darij: You're in luck. The book Homotopy limit functors on model and homotopical categories by Dwyer-Hirschhorn-Kan-Smith gives the precisely correct abstract definition of a derived functor. One actually ends up with a theory very close to Lurie's $\infty$-categories but with less of the simplicial formalism. It's actually at the point that we can declare "case closed" on this particular question. – Harry Gindi Feb 26 '11 at 14:53
  • Let me clarify. It is a hundred-something page book that develops exactly the theory about which you're asking, essentially from scratch. Homotopical categories should be seen as the intersection of model categories, local-isomorphism-equipped-presheaf categories, and the classical Grothendieck-Verdier approach. – Harry Gindi Feb 26 '11 at 14:57
  • Grothendieck's derivators may provide an answer, but I'm no expert. See http://www.math.jussieu.fr/~maltsin/groth/Derivateursengl.html. – Jonathan Chiche Feb 26 '11 at 16:55
  • I fear Grothendieck's derivators are not going to provide an answer TO ME, as long as they are not properly written up. But thanks, Harry, for the link; it looks readable! – darij grinberg Feb 26 '11 at 17:07
  • 2
    Here's a clear and considerably shorter account by Kahn and Maltsiniotis that is at the same time more general than Dwyer-Hirschhorn-Kan-Smith. It clarifies the interrelations between various approaches and also has the advantage that it isn't written with a homotopic bias: http://www.math.jussieu.fr/~maltsin/ps/bkgmdef.pdf – Theo Buehler Feb 26 '11 at 18:11
  • It's also worth mentioning that Barwick and Kan proved more recently that a slight generalization of DHKS's homotopical categories (called relative categories) actually admits a model structure that models the full theory of $(\infty,1)$-categories (they proved that there exists a Quillen equivalence between relative categories and Rezk's model category of complete Segal spaces). This is a pleasant surprise, since it essentially says that every $(\infty,1)$-category arises up to homotopy as the (weak) localization of an ordinary $1$-category at a class of weak equivalences. – Harry Gindi Feb 28 '11 at 09:40
  • 1
    Is this a real example? If you formulate everything in Goedel-Bernays set theory, then you actually have global choice. – arsmath Mar 07 '11 at 15:32
  • But this seems to be more a hack than a solution. We would, first of all, expect these things to be canonical without having to bend our system of axioms... – darij grinberg Mar 07 '11 at 16:16
  • I have no idea why choice is relevant, but these days, you don't need 100 pages to give a precise definition of derived functors. If $F : A \to B$ is a functor between relative categories and $H_A : A \to hA$, $H_B : B \to hB$ are the functors to the homotopy categories, then the left derived functor of $F$ is the right Kan extension of $H_B F$ along $H_A$, $LF : hA \to hB$. Dually, the right derived functor of $F$ is the left Kan extension of $H_B F$ along $H_A$. See Riehl's Categorical Homotopy Theory. There's a natural $\infty$-categorical enhancement of the notion too. – Tim Campion Mar 09 '23 at 21:15
4

An author might sometimes concede in a paper that they are engaging in abuse of notation, to simplify or clarify a concept or to relate one concept to another without worrying about particularities of formal mathematical notation. I like the Wikipedia article and some of the examples therein.

Occasionally the concession by the author is just an affirmation that the author is not trying to be overly pedantic in their notation. But often, by its very nature, an admission that notation is being abused may nonetheless convey substantial mathematical content to the reader.

It's not clear if the concept of an "abuse of notation" of some piece of formal mathematics has itself been formalized.

Mark S
  • 2,143
3

In category theory, the concept of forgetful functor does not have a precise definition. I know of the following two candidate definitions, neither of which is fully adequate.

In the formalism of stuff, structure, property, every functor is considered a forgetful functor. While this makes a lot of sense from the perspective of that framework, it doesn't match how the term "forgetful functor" is used in practice.

One could also try to define a forgetful functor as a right adjoint, based on the idea that a "free" functor is left adjoint to a forgetful functor. This also doesn't work, since then a product formation functor $(A, B) \mapsto A \times B$ would be forgetful while a coproduct formation functor $(A, B) \mapsto A + B$ would not be, and this tension is clearly undesirable.

Tobias Fritz
  • 5,785
  • 1
    Another possibility is to take forgetful functors to be just the first-projection functors from the total category of a displayed category to its base. Actual usage in practice seems to vary somewhere between “precisely just these functors” and “anything isomorphic (sic!) to such a functor”. This is deliberately not invariant under equivalence, since the usage of “forgetful” in practice is not so. – Peter LeFanu Lumsdaine Nov 16 '23 at 19:19
1

There is no definition of what a set is.

badmf
  • 542
  • 7
    There are pretty useful axiomatic systems, though. Choose your favourite one, then assume it has a model. All the objects in this model will be your sets. You can work with them without caring which model they come from. For most branches of mathematics (excluding set theory, of course), this is a satisfactory point of view. As people have thought for a long time over these foundational issues, I am sure that this is as close to a "definition" as anything you will ever get. – Sebastian Goette Dec 05 '15 at 18:18
1

I could never find a definition of what exactly an L-function is that is both rigorous and universal, despite Selberg's introduction of the class bearing his name.

1

Some people claim to have defined the concept of "closed form" fully and precisely. Have they?

Michael Hardy
  • 11,922
  • 11
  • 81
  • 119
0

Notion of calculability:

A function of positive integers is calculable only if recursive.

"Calculable function" (in an objective sense), as used in the Church–Turing thesis: http://plato.stanford.edu/entries/church-turing/

Andrés E. Caicedo
  • 32,193
  • 5
  • 130
  • 233
kakaz
  • 1,596
  • In my view it is the most used and probably most important notion which is not clearly defined. But of course I have only a very limited view ... – kakaz Feb 26 '11 at 11:34
  • 23
    My view of this is the opposite: the beauty of the Church-Turing thesis is that it does give a precisely defined and widely agreed upon meaning to "effectively calculable". From a mathematical perspective, the thesis is itself a definition, which takes a somewhat vague notion (anything that can be computed by any systematic method or algorithm) and equates to it a rigorous mathematical concept (recursive functions). – Henry Cohn Feb 26 '11 at 15:01
  • @Henry - is the Busy Beaver function not a well-defined algorithm for a calculable function, for any given n? http://en.wikipedia.org/wiki/Busy_beaver#Non-computability_of_.CE.A3 We even know several of its values: 1, 6, 21, 107 ... But it is non-computable, as stated in the link above. – kakaz Feb 26 '11 at 16:47
  • 9
    I do think the busy beaver function isn't effectively calculable in the intuitive sense, as well as the technical sense. It's a well-defined function, but we have no algorithm for actually computing it. We do have an algorithm for proving lower bounds that will eventually converge to the true answer for any given case, but the convergence is incredibly slow (there is no computable upper bound on the time to convergence) and there is no way of knowing when convergence has happened. Being able to recognize when you have arrived at the answer seems like an essential property of algorithms. – Henry Cohn Feb 26 '11 at 17:10
  • Henry - of course there is an algorithm for computing it: for a given n, try every Turing machine smaller than n. It is a pretty simple algorithm. From the Wikipedia link: "theoretically, every finite sequence of Σ values, such as Σ(0), Σ(1), Σ(2), ..., Σ(n) for any given n, is (trivially) computable, even though the infinite sequence Σ is not computable (see computable function examples)." – kakaz Feb 26 '11 at 17:16
  • The Church–Turing thesis states that there are no computable functions which are not recursive. It is not a form of definition but a theorem which is - at present - an example of inductive reasoning in mathematics: en.wikipedia.org/wiki/Inductive_reasoning. We do not have any proof to say it is true, besides the examples of computable-function definitions we use, which are equivalent. – kakaz Feb 26 '11 at 17:19
  • 6
    kakaz: There is no algorithm for computing the busy beaver function, you are misunderstanding a side remark in that wikipedia link. You are also misunderstanding the Church-Turing thesis. When we use notions such as "effectively computable" we mean the formal notions, not some vague intuitions. As Henry is pointing out, the Church thesis, which is not mathematics, can be seen as a definition. – Andrés E. Caicedo Feb 26 '11 at 17:34
  • @Andreas - of course you may say that, by definition, computable function <=> recursive <=> effectively computable function. Of course I have no example of a computable function which is not recursive in that meaning. I agree. There is no proof that only recursive functions represent the notion of a calculable function in the intuitive meaning. This vague intuition is something meaningful, don't you think? Or perhaps I am wrong. – kakaz Feb 26 '11 at 17:49
  • @Andreas: I edited the original post to reflect your remarks. – kakaz Feb 26 '11 at 17:50
  • 2
    Note that the Church-Turing thesis could almost be regarded as a statement of physics - the laws of physics of this universe only permit computation of recursive functions, but it is conceivable that other possible universes could do better. I.e. it would be the statement recursive <=> physics-computable. I say "almost", however, because I don't think anyone has actually come up with a precise notion of physics-computable! – Harry Altman Feb 27 '11 at 00:43
  • @Harry - your claim that the Church–Turing thesis is a statement of physics is a very kind evasion of mathematics, in order to avoid an important problem which we cannot solve... – kakaz Feb 27 '11 at 09:13
-1

I think that the widely-used concept of concepts being clearly defined is not clearly defined.

For example: How could one decide whether a single "concept" is "clearly defined" or not? If one could, I would argue that then all of modern mathematics would be not clearly defined. Already the concept of a set seems to lack a precise definition, and even lack the possibility to be defined precisely. Being clearly defined therefore seems to me at best like a vague comparative notion. For example, we could say that we regard some concept as clearly defined if its definition is as clear as the definition of a set, whatever "as clear" means in this context...

B K
  • 1,890
  • 2
    This seems to me to be a poor answer, for the same reasons I gave in a comment on the answer by Buschi Sergio. –  Jun 21 '15 at 18:50