62

With which notation do you feel uncomfortable?

Ben McKay
  • 25,490
user2330
  • 1,310
  • 22
    I'm just crotchety, and don't like this type of question. – Theo Johnson-Freyd Mar 18 '10 at 16:18
  • 1
    At while back I was considering asking a similar question, but people quickly convinced me that it was a bad idea (http://tea.mathoverflow.net/discussion/100/on-lets-build-a-really-big-list-questions/#Item_9). The basic argument is that this question is about as subjective and argumentative as math gets. – Anton Geraschenko Mar 18 '10 at 17:38
  • 1
    I don't know, I like some of the answers. Although, it is in danger of a low signal-to-noise ratio. – Ilya Grigoriev Mar 18 '10 at 17:54
  • 5
    Closed. I know, I know, it's nice to have some questions like this to waste some time on, but we really don't need to have this at the top of the home page for several days.

    I'd recommend going to vote up this feature request http://meta.stackexchange.com/questions/3414/allow-moderators-to-sink-a-question if you happen to have meta.SE rep, and feel the same way.

    – Scott Morrison Mar 19 '10 at 06:13

22 Answers

205

There is a famous anecdote about Barry Mazur coming up with the worst notation possible at a seminar talk in order to annoy Serge Lang. Mazur defined $\Xi$ to be a complex number and considered the quotient of the conjugate of $\Xi$ and $\Xi$: $$\frac{\overline{\Xi}}{\Xi}.$$ This looks even better on a blackboard, since $\Xi$ is drawn as three horizontal lines.

  • 18
    The full story involves a t-shirt that Barry had commissioned and worn for the occasion... – Pete L. Clark Mar 18 '10 at 15:44
  • 7
    Crikey, I used that myself several times, including the quotient $\bar{\Xi}/\Xi$: I had a real reason to choose $\Xi$ for a complex number, though (at least it made sense to me): it is supposed to be the potential for a closed one-form $H$, so it made sense to me that you should just rotate two of the line segments in $H$ to alternate positions, because, you know, $H$ is curl-free... – Willie Wong Mar 18 '10 at 16:00
  • 10
    @Pete: Can you fill us in with the full story? – Kevin H. Lin Mar 18 '10 at 19:20
  • 31
    I should mention that I am a Mazur student, hence a designated keeper of the flame of Mazur lore (although there are about 50 other such people) but that I was not there to see the event myself. (Actually I never met Lang, to my regret.) The most authoritative description I have found is given by Paul Vojta here: http://www.ams.org/notices/200605/fea-lang.pdf – Pete L. Clark Mar 18 '10 at 21:06
  • 3
    In my own personal version of the story, Barry was wearing one copy of the t-shirt underneath a dress shirt, which he planned to unbutton at the correct moment. Rereading Vojta's account, it seems that I probably made up this detail myself. – Pete L. Clark Mar 18 '10 at 21:09
  • 12
    I had heard two additional versions of this story that differed from Vojta's in the ending, in the sense that Vojta's Serge didn't make a peep, and the other two Serges exploded. – S. Carnahan Mar 19 '10 at 05:06
  • 17
    @PeteL.Clark: About a year ago I was at a reception with Mazur and Rohrlich, and asked Mazur what had really happened during that lecture. What he described matched Vojta's description, except that Lang did not keep quiet. Instead, he said (calmly) to Mazur "You really seem to have a thing for horizontal lines". Since the intended reaction had not been produced, Mazur then opened the box containing the shirt and gave it to Serge. – KConrad Jul 14 '14 at 04:08
  • Amusingly, I saw a (partial) solution to this problem on a stone in Greece, which had a capital xi with two connecting NE-SW lines, like the leftmost letter in this picture, and incidentally also explained the connection between the upper- and lower-case forms. – Alex Shpilkin Apr 06 '18 at 21:51
  • 1
    @AlexShpilkin: Could you please fix the link to the picture? – José Hdz. Stgo. Jul 12 '20 at 20:37
156

My favorite example of bad notation is using $\sin^2(x)$ for $(\sin(x))^2$ and $\sin^{-1}(x)$ for $\arcsin(x)$, since this is basically the same notation used for two different things ($\sin^2(x)$ should mean $\sin(\sin(x))$ if $\sin^{-1}(x)$ means $\arcsin(x)$).

It might not be horrible, since it rarely leads to confusion, but it is inconsistent notation, which should be avoided in general.
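Spelled out, the two readings being mixed are $$\sin^2(x) = (\sin(x))^2 \quad\text{(power of the value)}, \qquad \sin^{-1}(x) = \arcsin(x) \quad\text{(functional inverse)},$$ whereas consistency would demand either $\sin^2(x) = \sin(\sin(x))$ (iteration) or $\sin^{-1}(x) = 1/\sin(x)$ (reciprocal), but not the mixture.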

  • 67
    I take issue with "it rarely leads to confusion". Among mathematicians that's true, but among calculus students is another story. – Mark Meckes Mar 18 '10 at 18:54
  • 19
    I refuse to use the notation $\sin^{-1}(x)$, except perhaps to complain about it, and use the notation $\arcsin(x)$. This means that anything I do with non-trivial powers or inverses of trig functions use a different convention to the rest of any work I do with functions, but it is better than the ugliness of a notation with a single function which is inconsistent. – Niel de Beaudrap Mar 18 '10 at 22:24
  • 41
    In my first semester of college I was marked wrong on an exam for writing $\sin^{-1}(x)$ on a question whose answer was $\arcsin(x)$. When I complained, the professor kept explaining why $1/\sin(x)$ was incorrect. He claimed never to have heard of this notation for arcsin, and relented only after I got a second opinion from another professor in the department. So it's not only calculus students who get confused by this! – Tom Church Mar 19 '10 at 01:40
  • @Tom Church: does one need an opinion from another professor to reckon that arcsin is the inverse of sin? :) – Delio Mugnolo Dec 01 '13 at 12:17
  • 15
    Actually, I take issue with both of these notations. :) As a dynamicist, I would indeed agree that $\sin^2$ should be the second iterate of sine. However, $\sin$ is not invertible, and hence $\sin^{-1}$ should not be used for the arcsine, which is only a specific branch of the inverse function. $(\sin|_{[-\pi/2,\pi/2]})^{-1}$ would be ok I suppose ... – Lasse Rempe Jan 26 '15 at 16:29
  • 11
    Why not writing $f^{\circ 2} = f \circ f$ and $f^{\circ -1}$? And $f^2$, $f^{-1}$ are defined pointwise as usual. – Martin Brandenburg Sep 13 '15 at 08:25
  • @MartinBrandenburg you are correct on that, since $\sin\circ\sin(x)$ simply means $\sin\big(\sin(x)\big)$ ..... which means $\sin^2(x)$? – Mr Pie Jan 27 '18 at 21:05
87

I personally hate the notation $x \mid y$, for "$x$ divides $y$". Of course, I'm used to reading it by now, but a general principle I follow and recommend is:

Never use a symmetric symbol to denote an asymmetric relation!

Marty
  • 13,104
  • 55
    How do you cope with wedge products? Or tensor products? or, gasp, multiplication in a non-commutative group? I hope I didn't just blow your mind. :p – Willie Wong Mar 18 '10 at 16:55
  • 21
    Graham, Knuth, and Patashnik, in "Concrete Mathematics", share your unease. They write x\y for "x divides y". – Michael Lugo Mar 18 '10 at 17:08
  • 57
    Regarding wedge, tensor, multiplication symbols -- I specifically said "relation", not "operation". And I stand by what I said - I cope just fine! :) – Marty Mar 18 '10 at 17:55
  • 3
    Writing x\y for "x divides y" seems much more horrible than "x|y". Maybe I'm just too used to the latter. – Anonymous Mar 18 '10 at 20:53
  • 22
    Yes, when I tell undergraduates that x divides y means y/x is an integer I can see them thinking, "No way, he's messing with us." Then I tell them that actually 0 divides 0 as well, and I can practically see the steam coming out of their ears. (Come to think of it, this gives a funny variation on the old precalculus brainteaser "What is zero over zero"? Answer: "Yes.") – Pete L. Clark Mar 18 '10 at 21:15
  • 33
    I also don't like the notation $x \backslash y$ for "x divides y". That seems likely to confuse students due to its resemblance to an operation. If I were to choose a symbol, I might use $x \triangleleft y$, to mean "x divides y". It preserves the vertical line, which we're all used to, and adds a directional component which I find appropriate. – Marty Mar 18 '10 at 22:18
  • 4
    One problem with $x \triangleleft y$ for "x divides y" is that the normality relation is reversed for groups: $y\mathbb{Z} \triangleleft x\mathbb{Z}$. – S. Carnahan Jan 09 '12 at 09:15
  • 7
    True, but many correspondences are inclusion-reversing, and the issue you bring up usually arises much later (when students learn about ideals). And it is consistent with cardinalities of subgroups: if $H$ is a subgroup of $G$ then the cardinality of $H$ divides that of $G$. Maybe that's a more important consistency for an elementary notation? – Marty Jan 09 '12 at 16:26
  • 11
    If $x$ divides $y$, why not write "$x$ divides $y$"? Well, we can do better: $x$ div. $y$. Why are young mathematicians so obsessed with notation? Math is writing sentences, English, not gluing symbols after symbols. A few shortcuts when it helps, OK, but when notation becomes a problem the best is to write in plain language. And it is often more comprehensible. Read the ancients and you will get what I mean. Cheers... – Patrick I-Z Dec 01 '13 at 02:01
  • 3
    The "$x$ divides $y$" relation isn't asymmetric, but antisymmetric =) – étale-cohomology Dec 31 '17 at 16:07
    I totally agree! I like to use $x \prec y$ for "$x$ divides $y$", since it resembles an order relation. Also, the symbol $\prec$ is sometimes used for preorders, and the divides relation can be generalized to rings as a preorder which becomes an order if we take the quotient by equivalence with respect to multiplication by units. Finally, I think this also gives good symbols for gcd and lcm, respectively $\curlywedge$ and $\curlyvee$, which is consistent with the usual join and meet symbols for lattices. – Pedro G. Mattos Jan 17 '21 at 16:28
  • @PeteL.Clark, re, how do you explain to your calculus students why "$0$ divides $0$" if "$x$ divides $y$" means that "$y/x$ is an integer"? (Of course I would be fine if you said "if" instead of "means", but you seem to have chosen the words carefully ….) – LSpice Aug 21 '23 at 16:26
  • 1
    @LSpice: This was a while ago, but: generally I give the formal definition of divides and then explain that $x \mid y \iff \frac{y}{x} \in \mathbb{Z}$...IF $x \neq 0$. I definitely aspire to be more careful with what I say and write in class than this particular comment. – Pete L. Clark Aug 23 '23 at 14:39
77

Mathematicians are really quite bad when it comes to notation. They should learn from programming-language people. Bad notation actually makes it difficult for students to understand the concepts. Here are some really bad ones:

  • Using $f(x)$ to denote both the value of $f$ at $x$ and the function $f$ itself. Because of this, students in programming classes cannot tell the difference between $f$ (the function) and $f(x)$ (the function applied to an argument).
  • When I was a student, nobody ever managed to explain to me why $dy/dx$ made sense. What is $dy$ and what is $dx$? They're not numbers, yet we divide them (I am just giving a student's perspective).
  • In Lagrangian mechanics and the calculus of variations people take the partial derivative of the Lagrangian $L$ with respect to $\dot q$, where $\dot q$ itself is the derivative of the position $q$ with respect to time. That's crazy.
  • The summation convention, e.g., that ${\Gamma^{ij}}_j$ actually means $\sum_j {\Gamma^{ij}}_j$, is useful but very hard to get used to (a short illustration follows this list).
  • In category theory I wish people sometimes used any notation as opposed to nameless arrows which are introduced in accompanying text as "the evident arrow".
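To make the summation convention concrete, here is a minimal sketch using NumPy's einsum (the array Gamma and its dimensions are made up purely for illustration; einsum just performs the sum implied by the repeated index):

    import numpy as np

    # Hypothetical rank-3 array standing in for Gamma^{ij}_k; the size 4 is arbitrary.
    Gamma = np.random.rand(4, 4, 4)

    # Summation convention: the repeated index j in Gamma^{ij}_j is summed over.
    contracted = np.einsum('ijj->i', Gamma)

    # The same contraction with the sum written out explicitly.
    explicit = np.array([sum(Gamma[i, j, j] for j in range(4)) for i in range(4)])

    assert np.allclose(contracted, explicit)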
Andrej Bauer
  • 47,834
  • 42
    Your first point is not a problem with the notation but with the users of the notation... – Mariano Suárez-Álvarez Mar 18 '10 at 15:23
  • 98
    The Leibniz notation dy/dx for the derivative is perhaps the most clever bit of notation ever invented in the whole history of mathematics. – Franz Lemmermeyer Mar 18 '10 at 15:51
  • 14
    As for $f(x)$, at least in algebra it has been largely abandoned in favor of the unambiguous lambda-type $x\mapsto f(x)$. As usual, it will probably take analysis people half a century to catch up... – darij grinberg Mar 18 '10 at 15:55
  • 45
    Einstein summation is indeed dumb, not because it is hard to get used to, but because it makes it impossible to refer to a single addend rather than to the whole sum (as in: "each of the addends is nonnegative, hence the sum is nonnegative"), forcing writers to leave out parts of their arguments just because they can't write it with Einstein summation. Although someone has proposed a sum sign crossed out to mean "this is NOT a sum". – darij grinberg Mar 18 '10 at 15:58
  • 7
    The second point also sort of makes sense if you either (a) take the historical perspective [there was a reason why that notation was invented] or (b) learn calculus from nonstandard analysis.

    The third point is only a problem if you do Lagrangian mechanics the "wrong" way. The right way, is of course, to define the Lagrangian as a functional on the jet space, then \dot{q} is a natural object.

    The fourth point is, in my opinion, the opposite of what is sought in this question: I consider it one of the best notational conventions invented in the 20th century.

    – Willie Wong Mar 18 '10 at 16:05
  • 5
    @darij: If you are in a situation where you need to refer to single addend, you should not use Einstein summation to begin with. Einstein summation only makes sense / was defined for the case of dealing with covariant objects. If you have to deal with individual terms, you are NOT treating the whole thing covariantly. Or you can use Penrose's abstract index notation, where the explicit components are referred to, and not summed, by different index labels (Greek vs Roman; Upper vs Lower; surrounded by brackets, etc.) – Willie Wong Mar 18 '10 at 16:09
  • 2
    Okay, Penrose looks like a good idea to me. Though I don't wish to have to tell a small index i from a small index iota... – darij grinberg Mar 18 '10 at 16:11
  • 6
    The thing with the "If you have to deal with individual terms" argument is... if I don't want to deal with individual terms, I can usually write the whole thing in an invariant way, which often makes it clearer (to the algebraist, at least) and less messy. – darij grinberg Mar 18 '10 at 16:14
  • 50
    The Einstein summation convention is beautiful. It essentially makes it impossible to write down something that isn't coordinate independent. Or rather it's Penrose's abstract index notation that does this. But as his notation is essentially identical to Einstein's, there's not much difference. http://en.wikipedia.org/wiki/Abstract_index_notation (@andrej I thought you'd appreciate that kind of "type safety".) (@darij The fact that you can't refer to individual components is a virtue. It encourages people to state and write proofs that are independent of choice of basis.) – Dan Piponi Mar 18 '10 at 16:14
  • 1
    Hmm. Penrose's abstract index notation makes me change my mind - it seems to give a very nice (and base-change invariant) semantics for the Einstein notation. – darij grinberg Mar 18 '10 at 16:19
  • 7
    Andrej, I protest at the generality of your first sentence. I don't think such a broad claim can be reasonably defended. – S. Carnahan Mar 18 '10 at 19:21
  • Oh my, that is a lot of comments to my answer. @Franz: I agree that $dy/dx$ was a great invention, I was just recollecting how as a math student I was unhappy with the notation because nobody could explain it in a satisfactory fashion. All I got was "well, imagine $dy$ and $dx$ are small, etc." – Andrej Bauer Mar 18 '10 at 19:42
  • 35
    In my first calculus class, a very big deal was made to make sure students understood that dy/dx was NOT a quotient; dy and dx had NO meaning whatsoever on their own, it was just notation. And then I got to differential equations, and on the first day the professor said "Now multiply by dx." The other students seemed perfectly OK with this, but it confused the heck out of me for a while. Maybe I was the only one who had actually believed the calculus professor that dx had no independent meaning. – Michael Benfield Mar 18 '10 at 19:45
  • 4
    @Scott: Yes, my claim is a bit provocative, I did not intend to upset anyone. Mathematical notation is loaded with tradition and conventions. This is not necessarily bad, as it adapts notation to humans and not vice versa. Sticking to old notation allows us to read papers a century old. But when it comes to correct, i.e., precise, notation, programming-language design wins over mathematicians. Correct notation is useful for teaching mathematics. A beginner can't tell apart a concept, its notation, and the conventions surrounding it. Why confuse him? – Andrej Bauer Mar 18 '10 at 19:49
  • 3
    Perhaps one should refrain from using dy/dx in a US-style introductory course in calculus. But as soon as you get to substitutions or differential equations or real analysis or the physicists' approach to infinitesimals, the symbol shows its power. I think Harold Edwards wrote a nice introduction to calculus based on differentials. And personally, I am fine with "imagine that dx and dy are really small" because I know that I could make everything precise using some mean value theorem - if I wanted to -)

    Oh, and I don't think anyone is upset - we're just enjoying a civilised discussion.

    – Franz Lemmermeyer Mar 18 '10 at 20:33
  • 9
    @Andrej: I wish you had posted separate answers for each point you make. They are all great, but it would be interesting to discuss the last point without being drowned by parallel discussions of Einstein and Leibniz notations. – François G. Dorais Mar 18 '10 at 22:52
  • 9
    $dy/dx$ is good notation; that its rationale is not explicitly explained in calculus texts is a crime. – Michael Hardy Jun 13 '10 at 00:04
  • And yet, and yet -- see this very long discussion at the nForum on differentials and variables and some of the attendant subtleties: http://nforum.mathforge.org/discussion/5402/what-is-a-variable/ – Todd Trimble Dec 01 '13 at 01:36
  • 1
    Your first point is not a problem of notation, but of misunderstanding the semantics. If everyone remembered the phrasing "Let $f$ be the function given by $f(x)=\ldots$" there would be no confusion. – Lior Silberman Aug 06 '16 at 00:33
  • 2
    Upvoted for the first one; although I disagree with the rest, I care about that one a lot. As a Calculus instructor, I believe that textbook writers should be fined every time they do this. (And instructors should be fined every time that they tell students that $dx$ and $dy$ have no meaning by themselves.) – Toby Bartels Feb 27 '17 at 14:39
  • 5
    Once you introduce differential forms, I find that Leibniz notation stops making sense. If $d^2 = 0$, why is it that $\frac{d^2y}{dx^2} \neq 0$? A derivative is not a quotient of forms, hence it shouldn't look like one. – Carl Patenaude Poulin Mar 08 '17 at 22:37
  • 4
    @CarlPatenaudePoulin, if you're OK with the subtraction notation for additive torsors, and (hence?) the division notation for multiplicative torsors, then in some sense a derivative is a quotient of forms; $f = \frac{\mathrm dy}{\mathrm dx}$ is the function for which $\mathrm dy$ equals $f\,\mathrm dx$. (Your point about $\mathrm d^2 = 0$ is very well taken, though.) – LSpice Jul 12 '17 at 21:22
  • 2
    @CarlPatenaudePoulin there are two ways to interpret $d^2$: either as in the De Rham complex where $d^2=0$ or, when there's an underlying connection (flat or not), $d$ is the covariant derivative $\nabla$ and extends to all tensors. Then $d^2\neq 0$ and I claim that's how $d^2 y$ should be interpreted in Leibniz notation. A hint that this might be a correct interpretation comes from Leibniz, Euler etc, when they write "assume $dx$ constant" (i.e. flat section) to arrive at the equation $\frac{d}{dx}\frac{d}{dx} y = \frac{d^2 y}{dx^2}$. See my answer https://hsm.stackexchange.com/a/7857/3462 – Michael Bächtold Mar 21 '22 at 08:47
  • 1
    @AndrejBauer I don't think the claim itself is hugely provocative but one might wish to be careful about referring to something as being ''retarded'' as it could be considered offensive, although I appreciate English might not be your first language. – Hollis Williams Apr 01 '22 at 01:29
  • @HollisWilliams: Excuse me, but where did I use the word "retarted"? I am not saying I didn't, I just don't see where I used it. – Andrej Bauer Apr 01 '22 at 07:55
  • 1
    Sorry, I referred to the wrong name by accident, it was darij grinberg who used the word. – Hollis Williams Apr 01 '22 at 18:55
72

I never liked the notation ${\mathbb Z}_p$ for the ring of residue classes modulo $p$. At one point, it confused the hell out of me, and this confusion is easily avoided by writing $C_p$, $C(p)$ or ${\mathbb Z}/p$.
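(For the record, the clash is that in number theory ${\mathbb Z}_p$ standardly denotes the ring of $p$-adic integers, $${\mathbb Z}_p = \varprojlim_n {\mathbb Z}/p^n{\mathbb Z},$$ which is a very different ring from ${\mathbb Z}/p{\mathbb Z}$.)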

Michael Hardy
  • 11,922
  • 17
    Yep. $\mathbb{Z}_p$ is the ring of $p$-adic integers. But it took me some time to learn that too... – darij grinberg Mar 18 '10 at 16:06
  • 1
    There is also the notation $\mathbb{F}_p$, which I think is quite standard (for $p$ either prime or a prime power, that is). – Harald Hanche-Olsen Mar 18 '10 at 16:22
  • 28
    There is a slight ambiguity here. My personal convention is that C_p and C(p) refer to the cyclic groups with no mention of their ring structure, and it is Z/pZ (or even better, F_p) which refers to the actual ring. – Qiaochu Yuan Mar 18 '10 at 17:01
  • 25
    Please don't confuse the residue-class ring ${\mathbb Z}_q$ (aka ${\mathbb Z}/q{\mathbb Z}$) with a Galois field of $q$ elements, conveniently denoted by ${\mathbb F}_q$ (or $GF(q)$). It is one of the most popular fallacies of our students to assume that both symbols denote the same mathematical object, even if $q$ is not a prime. – MRA Mar 18 '10 at 17:01
  • 17
    You can never go wrong with $\mathbb{Z}/p\mathbb{Z}$ or $\mathbb{Z}/\mathfrak{p}$. – Harry Gindi Mar 18 '10 at 18:46
  • 7
    Z_q is occasionally used to refer to W(F_q), an unramified extension of the p-adic integers Z_p (although I am not particularly fond of this convention). My impression was that C_p refers to an abstract cyclic group of order p, while the cyclic group Z/p has a specified generator. – S. Carnahan Mar 18 '10 at 19:15
  • 3
    I once went to a group theory class in which the teacher wrote Z_2. Little did she know, I had just been to a class on p-adics ;) – David Corwin Mar 19 '10 at 02:15
  • 3
    $\mathbb Z/p$ is the worst of both worlds, in that it makes no sense. – Jim Conant Oct 22 '11 at 02:19
  • 2
    Roquette uses the notation A/p for A/pA in his article on class field towers. It made sense to me. – Franz Lemmermeyer Oct 22 '11 at 09:43
  • 31
    Jim, I disagree. It makes sense if we agree that it should make sense. It's unambiguous. And it follows a noble tradition of ignoring the distinction between elements of a ring and the principal ideals that they generate. – James Cranch Jan 08 '12 at 00:56
  • 4
    Presumably the "cyclic group" sense of ${\bf Z}_p$ originates in German, where the word cognate with "cyclic" is spelled with Z, not C (zyklisch(e)). – Noam D. Elkies Jan 08 '12 at 02:20
  • 5
    @Noam: yes, that's what I think, too. And if they'd simply write Z_p (without blackboard or boldface type for the Z) for the cyclic group with p elements then I wouldn't even mind. – Franz Lemmermeyer Jan 08 '12 at 09:09
  • 7
    I disagree: because $Z/p$ is a more basic and widespread notion than the p-adic integers, and the notation seems natural for both, IMHO $Z_p$ should stay with $Z/p$, and something different should be used for the p-adics. – Michael Jul 14 '14 at 01:37
  • 12
    I find the dislike of (and constant admonishment against using) $\mathbb{Z}_p$ for the finite abelian group or finite ring, and even more the insistence that it means the $p$-adics, pretty overblown. I agree with @Michael, and besides lots of respectable people have used it to mean the finite structure. I really feel that $\hat{\mathbb{Z}}_p$ for the local completion is a good way to go, but no doubt this is a losing battle -- sigh. – Todd Trimble Jul 14 '14 at 03:02
  • 2
    @MRA That's pretty much like insisting that $\Bbb Z$ is a group and not a ring. – Michał Masny Jan 23 '15 at 13:37
  • 7
    @MichalMasny I think MRA is referring to the difference between $\mathbb{Z}/p^2\mathbb{Z}$ and $\mathbb{F}_{p^2}$. These are not isomorphic, no matter what kind of object they are. – Linus Hamilton Mar 10 '15 at 21:44
  • 2
    I agree with Qiaochu Yuan here. Also, $\mathbb{Z}/n$ can be seen as a cyclic group with a chosen generator. – Martin Brandenburg Sep 13 '15 at 08:28
  • 2
    A slight refinement, which I find more agreeable, is to use $\mathbb{Z}_{/p}$ for $\mathbb{Z}/p\mathbb{Z}$. This avoids the problems mentioned above, is succinct but distinct, and is visually similar to the original ambiguous notation. – Douglas Lind Sep 19 '15 at 16:27
65

Physicists will hate me for this, but I never liked Einstein's summation convention, nor the famous bra ($\langle\phi|$) and ket ($|\psi\rangle$) notation. Both notations make easy things look unnecessarily complicated, and especially the bra-ket notation is no fun to use in LaTeX.

Harald Hanche-Olsen
  • 9,146
MRA
  • 217
  • 8
    What's complicated about bra-ket notation? One goes on the left, one goes on the right, and there's a natural way to pair one with the other that looks visually reasonable. – Qiaochu Yuan Mar 18 '10 at 18:45
  • 293
    Personally, I strongly dislike the misuse of the relation symbols $<$ and $>$ instead of the appropriate $\langle$ and $\rangle$ angle brackets. I dislike it so strongly, in fact, that I edited this answer. – Harald Hanche-Olsen Mar 18 '10 at 19:16
  • 31
    I would vote up the previous comment seven times if I could. – Mark Meckes Mar 18 '10 at 19:50
  • 15
    You can use LaTeX's \newcommand to make the typesetting easier.

    Type \newcommand{\bra}[1]{\langle #1|} at the top of your document, and \bra\psi will produce $\langle\psi|$.

    – user1504 Mar 18 '10 at 20:10
  • 24
    Dirac notation isn't really justifiable for generic vectors on their own. But it is so helpful for quantum information! It is so much easier to read/write standard basis vectors as $|a_1 a_2 \cdots a_n\rangle$ than as $\mathbf{e}_{a_1,a_2,\ldots,a_n}$ (or worse: $\mathbf{e}_{a_1} \otimes \cdots \otimes \mathbf{e}_{a_n}$). I would go further, and introduce this notation into introductions to probability. For expressing states of configuration spaces over distinguishable labels, it is quite good. – Niel de Beaudrap Mar 18 '10 at 22:34
  • 19
    I would say the Einstein summation notation makes complicated things look misleadingly easy. When I write that scalar curvature is $R=g^{ij}R^k_{ikj}$, is it clear how much is actually going on there? It does prevent you from writing anything coordinate-dependent, which is quite nice. – Tom Church Mar 19 '10 at 01:53
  • 2
    A. J., I would also observe that, per Harald's comment, one should never use look-alike symbols when a semantically meaningful command is available. That is to say, I think \newcommand\bra[1]{\langle#1\rvert} (where one writes explicitly \rvert in place of the pipe) is better. – LSpice Sep 08 '10 at 03:55
  • 7
    I would seriously dispute the claim that Dirac notation is "unnecessarily complicated": try expressing the relationship $\sum_{n=0}^\infty|n\rangle\langle n|=1$ in any other notation and what you'll find is trouble. – Emilio Pisanty Mar 04 '12 at 18:43
  • Why do you care if they hate you? You cannot be friend with everyone... – Patrick I-Z Dec 01 '13 at 02:04
  • 8
    @EmilioPisanty $\sum_{u \in \cal B} u \bar u = {\rm id}$. – Patrick I-Z Dec 01 '13 at 02:11
  • 8
    Am I, like, the only person who thinks the Einstein summation convention is awesome? Maybe because I was taught it in my first year as an undergrad? – lost1 Dec 30 '13 at 01:58
  • 2
    Physicists did show us mathematicians so many unexpected treasures I would be more careful when throwing anything coming from them out. – მამუკა ჯიბლაძე Jun 07 '15 at 05:11
  • 4
    Einstein's summation convention is probably the most powerful notational tool in mathematics! – Matthias Ludewig Oct 13 '15 at 21:13
44

The notation ]a,b[ for open intervals and its ilk. Sorry, Bourbaki.

  • 15
    The notation (a,b) sometimes leads to ambiguity with ordered pairs. I use (a,b) because I was raised that way, but I never mind ]a,b[. – Jonas Meyer Mar 19 '10 at 03:57
  • 49
    I like ]a,b[ because it does not have the ambiguity of (a,b) and it's very clear that a,b are excluded. – Martin Brandenburg Apr 16 '10 at 15:07
  • 32
    Personally, I have nothing against $]a,b[$; it just makes the `parenthesis (bracket) matcher' of my text editor blush. – Suvrit Oct 17 '10 at 08:47
  • 1
    plus, if you are quickly writing things on a blackboard, $(a,b)$ can easily be read as $[a,b]$ by students in the back of the room. – Delio Mugnolo Dec 01 '13 at 09:37
  • 42
    Imagining $a$ and $b$ on a real line, the notation $]a,b[$ looks to me more like $\mathbb R \setminus (a,b)$... – Y. Pei Feb 05 '15 at 18:48
  • 5
    By the way, many people use $(a,b)$ for the gcd of $a$ and $b$. Now that is confusing ... – Martin Brandenburg Sep 13 '15 at 08:21
  • @MartinBrandenburg, surely not so much so from the point of view of PIDs, where $(a, b)$ (as an ideal) is generated by $(a, b)$ (as an element)? – LSpice Oct 13 '15 at 20:23
  • 26
    Find all $(a,b)$ such that $(a,b)\in(a,b)$ – Thomas Rot Oct 14 '15 at 00:23
  • 1
    @LSpice: You are right. – Martin Brandenburg Nov 03 '15 at 07:02
  • 1
    @ThomasRot, except that your containment is ill typed: the left-hand side is (I presume) an ordered pair of the type of elements that are contained in the right-hand side. (It is probably possible, depending on your set-theoretic foundations, to have this work, but it's not the kind of question one would ask; much like the old canard about "Is $3$ an element of $(1, 2)$?", or whatever.) – LSpice Nov 03 '15 at 15:18
  • 24
    @LSpice: The first $(a,b)$ is an ordered pair, the second $(a,b)$ is the gcd, the third $(a,b)$ is an interval. I leave it to you to solve the exercise. – Thomas Rot Nov 03 '15 at 17:07
  • 5
    You have to be careful using this notation in (La)TeX. Writing {]a,b[} is simple hack, although \mathopen]a,b\mathclose[ (or \left]a,b\right[) is the really proper way to do it. (In practice, you'd make a macro for this, of course.) – Toby Bartels Feb 27 '17 at 14:47
  • @Suvrit as Toby says, it's fixed with "$\backslash$mathopen/mathclose" – YCor Dec 06 '18 at 08:07
  • @Ycor Well, it is "fixed", but at the expense of making the .tex code look even bulkier --- not sure that's a price worth paying.... – Suvrit Dec 06 '18 at 13:17
  • @Suvrit that's the price I'm paying, because the priority is the output (and it's trivial to make a macro defining "$\backslash$ob"' for "$\backslash$mathopen]"). – YCor Dec 06 '18 at 13:27
  • @YCor seems like a good personal choice, definitely. I prefer single letters / keys as long as possible (and also biased towards [], (), {} due to inertia I guess). – Suvrit Dec 06 '18 at 15:15
43

My candidate would be the (internal) direct sum of subspaces $U \oplus V$ in linear algebra. As an operator it is equivalent to the sum $U + V$, but with the side effect of implying that $U \cap V = \lbrace 0\rbrace$. Whenever I had a chance to teach linear algebra I found this terribly confusing for students.
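For reference, writing $X = U \oplus V$ for subspaces asserts two things at once: $$X = U + V \quad\text{and}\quad U \cap V = \lbrace 0\rbrace,$$ equivalently, that every $x \in X$ decomposes uniquely as $x = u + v$ with $u \in U$ and $v \in V$.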

Alon Amit
  • 6,414
  • 8
    Shouldn't it imply $U \cap V = \{0\}$? I guess that's another piece of bad notation: not all trivial things equal $\varnothing$. – François G. Dorais Mar 19 '10 at 00:10
  • 1
    @Francois: Sheesh, of course. Sorry. Fixed. – Alon Amit Mar 19 '10 at 00:20
  • 1
    This isn't confusing as long as you clearly distinguish interior and exterior direct sums. – Qiaochu Yuan Mar 19 '10 at 05:16
  • 6
    @Qiaochu: I'm really only talking about the interior case. It's then that the $\oplus$-sum is the same subspace as the sum, but with the implied additional condition on the subspaces. – Alon Amit Mar 19 '10 at 07:16
  • 1
    What would you do instead, write $U + V$ and note that $U \cap V = \{0\}$ on the side? I imagine that you might have to be more explicit (at least in introductory material) with arguments of the form ‹Because $U \cap V = \{0\}$, $U + V \cong U \oplus V$, so ….›. (I'm not claiming that this is a bad thing!) – Toby Bartels Feb 27 '17 at 14:43
42

I think composition of arrows $f:X\to Y$ and $g:Y\to Z$ should be written $fg$, not $gf$. First of all, it would make the notation $\hom(X,Y)\to\hom(Y,Z)\to \hom(X,Z)$ much more natural: $\hom(E,X)$ should be a left $\hom(E,E)$-module because $E$ is on the left :) Secondly, diagrams are written from left to right (even stronger: almost anything in the Western world is written left to right). And I think the strange $(-1)$ needed when shifting complexes is an effect of this twisted notation.
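As an illustration of the two orders (a minimal sketch in Python; the helper names compose and then are made up for the example), both build the same function, and the only difference is the direction in which the expression is read:

    # Standard "applicative" order: compose(g, f) means g after f, read right to left.
    def compose(g, f):
        return lambda x: g(f(x))

    # Diagrammatic order advocated above: then(f, g) means first f, then g, left to right.
    def then(f, g):
        return lambda x: g(f(x))

    f = lambda x: x + 1   # f : X -> Y
    g = lambda y: 2 * y   # g : Y -> Z

    assert compose(g, f)(3) == then(f, g)(3) == 8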

  • Do you want $f$ applied to $x$ to be written $xf$ as well then? I agree to some extent, but the weirdness of right-application of functions to elements rubs me the wrong way for this. – Mikael Vejdemo-Johansson Mar 18 '10 at 16:41
  • 1
    Ah, the good-old Reverse Polish Notation. As a joke/bet with a high school friend I tried to do all computations in RPN on paper for one of those inconsequential math contests where only the final answer mattered. I found it interesting that things I associate to an operator works quite well, but things I associate to a map doesn't. So if I mentally "apply f to x", then RPN works great. But if "f eats x and spits out y", then not so much. – Willie Wong Mar 18 '10 at 16:50
  • 12
    To some extent I agree. The problem is that the order in which composition is written in standard notation is the opposite of the temporal order, which by any standard is the "natural" one - first do this, then do this, then do this. In fact, I think it's reasonable to argue that the basis of the standard definition of function is the arrow of time. – Qiaochu Yuan Mar 18 '10 at 16:58
  • I am pretty sure someone wrote an algebra book that way, back in the '60s or so. But I cannot remember who. – Harald Hanche-Olsen Mar 18 '10 at 17:05
  • 7
    Harald- it was Herstein, the author of our favourite algebra book! – Maharana Mar 18 '10 at 17:41
  • 7
    Jacobson's book on Lie algebras also uses $xf$ to mean $f(x)$. It seems that at the time this notation was all the rage. I find it very confusing now. – José Figueroa-O'Farrill Mar 18 '10 at 18:34
  • 4
    Ah. I was clearly thinking of Jacobson's book. It's on my bookshelf even, but said bookshelf and me are separated by an ocean. – Harald Hanche-Olsen Mar 18 '10 at 19:19
  • 24
    $xf$ makes a lot of sense when you realize that $x: 1 \rightarrow X$ and $f: X \rightarrow Y$ are both functions. – Steven Gubkin Mar 19 '10 at 13:55
  • 9
    I believe the primary source of confusion here is that there is no reason whatsoever for the arrows to go from left to right. If by chance it would begin with something like $f:Y\leftarrow X$... – მამუკა ჯიბლაძე Jun 07 '15 at 05:17
  • 3
    That said - I think it was Freyd who tried to implant programmers' notation $f;g$ in place of $gf$. But failed. Of course. – მამუკა ჯიბლაძე Jun 07 '15 at 05:18
  • 3
    Recently I have written a survey article on $2$-categories (in german, though) and have found that the usual rule of composition is confusing as hell. I chose $f \ast g$ to denote the composition "first $f$, then $g$", and everything looks fine now. I don't like the notation $f;g$. – Martin Brandenburg Sep 13 '15 at 08:22
  • 2
    I don't like $f;g$ either and I use a custom made ">>" sign . – Gerrit Begher Oct 07 '15 at 09:17
  • 2
    @მამუკაჯიბლაძე, doesn't Dijkstra, or Knuth or some famous computer scientist, use precisely this notation? I remember trying to read some CS recently and having a terrible time wrapping my brain around this notation (but persisting because it was so evidently sensible). – LSpice Oct 13 '15 at 20:22
  • 2
    @LSpice Then again the fact is we read and write from left to right and there's not much one can do about it. Once your convention is $f(x)$ rather than $(x)f$, it is inevitable that $g(f(x))$ becomes $(gf)(x)$ rather than $(fg)(x)$... – მამუკა ჯიბლაძე Oct 13 '15 at 22:00
  • @მამუკაჯიბლაძე, I wasn't arguing for or against anything, just dimly recalling a bit of history. (In case it wasn't clear, I was responding to your "Jun 7 at 5:17" comment about $f : Y \leftarrow X$, not the 5:18 one that immediately follows.) – LSpice Oct 13 '15 at 22:05
  • 1
    @LSpice I see sorry for misunderstanding. Interesting, I never actually encountered the $\leftarrow$ one – მამუკა ჯიბლაძე Oct 13 '15 at 22:30
  • 3
    Very late to the party, but the contravariant Hom-functor Hom$(-,T)$ with the standard notation $f^{\ast}=$ Hom$(f,T)$ yields $f^{\ast}(g)=g\circ f$ as the desired notation for first $f$ then $g$. – Jochen Wengenroth Nov 17 '20 at 08:31
  • 1
    @LSpice Macauley2 uses arrows going to the left, maybe that was it? – Jules Lamers Dec 23 '21 at 21:45
  • @JulesLamers, I seem to remember it in the literature rather than in a specific programming language, but I could be wrong! – LSpice Dec 24 '21 at 04:07
32

Writing a finite field of size $q$ as $\mathrm{GF}(q)$ instead of as $\mathbf{F}_q$ always rubbed me the wrong way. I know where it comes from (Galois Field), and I think it is still widely used in computer science, and maybe in some allied areas of discrete math, but I still dislike it.

KConrad
  • 49,546
  • 11
    Well, maybe $\mathrm{GF}(q^{q^2})$ is still better than $\mathbb F_{q^{q^2}}$. Indices are nice until one has to nest three or more of them... – darij grinberg Mar 18 '10 at 20:10
  • 6
    In references authors may write exp($z$) instead of $e^z$ when the exponent could get particularly complicated, but when does this ever really happen for sizes of finite fields? If I find it necessary to write something like ${\mathbf F}_{q^{q^2}}$ again, I'll reconsider my opposition to writing GF. :) – KConrad Mar 18 '10 at 21:20
  • 5
    Best thing is when it's (re-)translated as "Galois-Feld" in German. – user717 Mar 18 '10 at 23:36
  • 3
    The worst thing about "Galois-Feld" is that it has apparently been the standard notion in German in the beginning of the 20th Century (at least, Witt uses it). -- @KConrad: "Contributions to the Theory of Finite Fields" by Oystein Ore (Transactions of the American Mathematical Society, Vol. 36, No. 2. (Apr., 1934), pp. 243-274) works with $\mathbb{F}_{p^{ff^{\prime}}}$, which he fortunately abbreviates. Otherwise, I think not even JSTOR's high scanning quality would suffice to recognize the symbols. – darij grinberg Mar 18 '10 at 23:54
  • Darij: I skimmed through Ore's paper (31 pages and you didn't say where to look!) and I didn't see any notation for finite fields whatsoever. Maybe I skimmed too quickly? On the bottom of p. 248 he writes $F_{p^{N'}}(x)$, but that's a polynomial, not a finite field. As for Witt's usage of GF many years ago, I don't mind if that's what was used a long time ago, but I would wish that in recent years opinion had universally settled on ${\mathbf F}_q$ as the conventional notation (when the size of the field is to be indicated at all). – KConrad Mar 19 '10 at 00:45
  • If we decide that the size of the field isn't necessary to specify, we could just write $\mathbf F$ for a general finite field. Do the GF($q$)-people write a general finite field as GF? That would look simply awful! – KConrad Mar 19 '10 at 00:46
  • Oh, I see I misunderstood the German business and you were talking about terminology rather than notation in German in the early 20th century. I've read Galois field as the name for finite field in English a few times and it always looked silly (like calling a finite group a Galois group... okay, that would clearly be worse). – KConrad Mar 19 '10 at 00:49
  • 1
    @darij: Are you sure? "Field" is "Körper" in German and always has been. I don't know why it's called "field" in English, but it's okay. Only when people then translate "Galois field" as "Galois-Feld", ... :-/ – user717 Mar 19 '10 at 12:03
  • @KConrad: Ore writes $K_{ff^{\prime}}$ for the field extension. – darij grinberg Mar 19 '10 at 14:25
  • 1
    @"Galois-Feld": Quoting Hazewinkel's "Witt vectors" survey: "In loc. cit., p. 153, Witt writes: “Es ist merkwürdig, dass diese Rangformel übereinstimmt mit der bekannten Gausschen Formel für die Anzahl der Primpolynome x^n + a_1x^(n-1) + ... an im Galoisfeld von q Elementen." – darij grinberg Mar 19 '10 at 14:26
  • Just verified this with the source. Yes, it's written exactly in that paper (which is the one where Witt introduces and proves the Poincaré-Birkhoff-Witt theorem). – darij grinberg Mar 19 '10 at 14:34
  • Arminus: In L.W. Reid's "The elements of the theory of algebraic numbers" (1910), which is on Google books, he uses "realm" instead of "field". On p. vi of the introduction he explains why and notes that field is a term others use. In 1905 there is E. V. Huntington's "Note on the definition of abstract groups and fields by sets of independent postulates" (Trans. AMS, Vol. 6 181--197) and Dickson's "Definitions of a group and a field by independent postulates" (pp. 198--204). – KConrad Mar 19 '10 at 15:26
  • @KConrad, I think that you've made an important point in passing. I always wonder at talks on the $p$-adics that start with “Let $K$ be a $p$-adic field with residue field $\mathbb F_q$ …”—and then never use $q$ again! – LSpice Sep 08 '10 at 04:00
  • 1
    @darijgrinberg: OK, what about $\mathbb{F}(q)$? – Martin Brandenburg Sep 13 '15 at 08:23
  • @MartinBrandenburg, a problem with that notation is that it makes it look like $\mathbb F$ is a function from prime powers (bigger than $1$) to fields, thus preventing the pleasant device @ KConrad mentions of using $\mathbb F$ for a finite field of unspecified size. – LSpice Jul 12 '17 at 21:50
  • I don't see what is wrong with $GF(q)$ or $GF(p^n)$, other than the fact that (apparently) some people dislike it. It has the advantage that "GF" is very specific and is difficult to confuse with anything else, whereas the letter $F$ often has other meanings. (I think we write $\sin(x), \cos(x)$ instead of $s(x), c(x)$ for a similar reason.) – Goldstern Jul 12 '17 at 22:56
  • @Goldstern, of course logically there is nothing wrong with $GF$, and $F$ alone is not widely used as a standard notation for a finite field anyway; people typically write a decorated version of $\mathbf F$ if they use an $F$-like symbol for the field. A problem I pointed out in an earlier comment with $GF(q)$ notation is that it is inherently incapable of being used without having to track with it the size of the field as part of the notation. Compare to "Let $G$ be a group" where we don't have to say within the notation how big $G$ is. Nobody does this with $GF(q)$ by writing a plain $GF$. – KConrad Jul 13 '17 at 13:36
31

I rather dislike the notation $$\int_{\Omega}f(x)\,\mu(dx)$$ myself. I realize that just as the integral sign is a generalized summation sign, the $dx$ in $\mu(dx)$ would stand for some small measurable set of which you take the measure, but it still rubs me the wrong way. Is it only because I was brought up with the $\int\cdots\,d\mu(x)$ notation? The latter nicely generalizes the notation for the Stieltjes integral at least.

Harald Hanche-Olsen
  • 9,146
  • 6
    I also used to dislike that notation, but I think it's only a matter of upbringing. It's useful, e.g., for dealing with things like Markov kernels: when you have $K:\Omega\times\mathcal{F}\to\mathbb{R}$, where $\mathcal{F}$ is a $\sigma$-algebra on $\Omega$ and for each $x$, $K(x,\cdot)$ is a measure, you can easily write things like $\int f(x)K(y,dx)$, which is much harder to write with the $\int f(x) d\mu(x)$ notation. – Mark Meckes Mar 18 '10 at 19:59
  • 2
    Also, whether $\int f(x) d\mu(x)$ nicely generalizes the Stieltjes integral notation or not depends on your viewpoint. If $\frac{d\mu}{dx} = h = \frac{d H}{dx}$, then indeed $\int f(x) d\mu(x) = \int f(x) dH(x)$. On the other hand, in that case $\mu=h$ (not $H$) as distributions, i.e. $\langle f,\mu \rangle = \langle f,h \rangle$. – Mark Meckes Mar 18 '10 at 20:05
  • 4
    That is one of the worst. Formally you have to write $(f\mu)(\Omega)$, but no one does it. It should be a form under the integral, so in principle you could think of the measure as a form and write $\int f\mu$, but no one does that either... Instead everyone writes something which makes no sense, like $\int f\mu(dx)$ or $\int f\,d\mu$... – Anton Petrunin Mar 20 '10 at 16:44
  • 3
    The $d$ is entirely superfluous in $\int \cdots d\mu(x)$ (which does not generalize the Stieltjes integral, because the measure in $\int \cdots dg(x)$ is $dg$, not $g$). You should either write $\int \cdots \mu(dx)$ or (if you're willing to defy convention) $\int \cdots \mu(x)$ (or $\int \cdots \mu$ if there is no need to use a dummy variable). You can see this in action at https://ncatlab.org/nlab/show/measure+space. – Toby Bartels Feb 27 '17 at 15:07
29

I get very frustrated when an author or speaker writes "Let $X\colon= A\sqcup B$..." to mean:

  1. $A$ and $B$ are disjoint sets (in whatever the appropriate universe is),
  2. and let $X\colon= A\cup B$.

If they just meant "form the disjoint union of $A$ and $B$" this would be fine. But I've seen speakers later use the fact that $A$ and $B$ are disjoint, which was never stated anywhere except as above. You should never hide an assumption implicitly in your notation.
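For comparison, a coproduct can always be formed even when the sets overlap; one standard encoding is $$A \sqcup B := (A\times\lbrace 0\rbrace) \cup (B\times\lbrace 1\rbrace),$$ and this construction says nothing about whether $A$ and $B$ themselves intersect; that is exactly why the disjointness assumption has to be stated explicitly rather than smuggled into the symbol.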

Tom Church
  • 8,136
  • 2
    Huh, I'm guilty here, but not really feeling I've sinned. I just read it as "Let X be the disjoint union of two sets A and B", and that seems to make the matter clear. – Scott Morrison Mar 19 '10 at 06:08
  • 4
    But the point is that there is no universally agreed upon definition of what the "disjoint union" of two nondisjoint sets is. Or is there? (I suppose if you forced me to give one, I would put $A \coprod B = A \times \{0\} \cup B \times \{1\}$, but surely not everyone agrees with this?) – Pete L. Clark Mar 19 '10 at 06:18
  • 13
    This is very similar to Alon Amit's post about writing internal direct sums using the more general direct sum notation without mention of the implicit assumptions. The problem is when the notation does not refer to the "disjoint union" of not-necessarily-disjoint sets, but rather to a union of two sets that are assumed implicitly to be disjoint. – Jonas Meyer Mar 19 '10 at 09:28
  • 39
    Oh for Pete's sake! :-D Seriously, Pete: what's the problem? The disjoint union is defined up to unique isomorphism by a universal property; the particular set-theoretic encoding is of no importance whatever. – Todd Trimble Nov 19 '12 at 02:41
  • 3
    I agree with Tom Church that the cited usage is inept. – Todd Trimble Nov 19 '12 at 02:44
  • 3
    I agree with Todd Trimble. Also, notice that there is no notion of "two sets are disjoint" for a category theorist. It only makes sense to say that two arrows $A \to X \leftarrow B$ are disjoint. – Martin Brandenburg Sep 13 '15 at 08:33
  • 1
    @ScottMorrison: this comment since it seems to me that Scott's question was not clearly enough addressed in the ensuing discussion; this may be clear to the discussants but certainly not to all casual readers: what Tom Church correctly points out is that if someone only writes $X :=A\sqcup B$, then both the following situations are consistent with this: (0) $A$ and $B$ are disjoint sets (this has a meaning in some contexts) in the set-theoretic foundations used and $X$ is their union, or (1) $A$ and $B$ are non-disjoint, and the speaker is operating in a formalization of mathematics [..] – Peter Heinig Aug 09 '17 at 08:47
  • 2
    [..] which offers you the operation of taking the 'disjointifying union' of two arbitrary sets (as Pete L. Clark points out, there is little consensus about such a gadget, yet it can be meaningfully defined in certain contexts), and the speaker applies said operation to get $X$, but this does not change $A$ and $B$ in any way of course, they are still 'lying around quietly' in the set-theoretic universe used, still being non-disjoint, and as Tom Church points out, in scenario (1), it is then incorrect to later use in the proof that $A$ and $B$ are disjoint. So, @ScottMorrison, [...] – Peter Heinig Aug 09 '17 at 08:51
  • 1
    [...] the point Tom Church is making (I think) is not so much whether what $X$ is is sufficiently made clear by the hypothetical speaker, rather that the speaker has intimated properties about $A$ and $B$ which cannot be affected by applying an operator $\sqcup$. An analogy would be to talk of 'local' and 'global' variables in programming: the notation $\sqcup$ is sometimes abused as if it would change the 'global value' of its two arguments. Also, relatedly, I think in such a discussion the similar notational practice $A\dot{\cup} B$ should be mentioned (and warned against) [...] – Peter Heinig Aug 09 '17 at 08:55
  • 1
    [...] which (or so is my perception) is less used as a 'disjointified-union operation', more like a 'union symbol with a built-in boolean-valued logical proposition "these two sets are disjoint"'. I do not have anything precise to say on $\dot{\cup}$, simply because there is nothing precise to say on it. In summary, neither $\sqcup$ nor $\dot{\cup}$ has a universally agreed-upon meaning, and one should briefly but exactly state what one means. – Peter Heinig Aug 09 '17 at 08:59
28

As Trevor Wooley used to always say in class, “Vinogradov's notation sucks…the constants away.”

For those who don't know, Vinogradov's notation in this context is $f(x)\ll g(x)$ meaning $f(x) = O(g(x))$. (If you prefer big-O notation, that is.)
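Unpacked, both notations assert the same bound: $$f(x) \ll g(x) \iff f(x) = O(g(x)) \iff \text{there is a constant } C>0 \text{ with } |f(x)| \le C\,|g(x)| \text{ for all sufficiently large } x,$$ and that implied constant $C$ is exactly what the notation "sucks away".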

LSpice
  • 11,423
Ben Weiss
  • 1,588
  • 18
    +1 for Trevor Wooley – Georges Elencwajg Mar 18 '10 at 20:55
  • 12
    To me it is the big-O notation that "sucks", while Vinogradov's notation makes sense: one writes $o(x)=O(x)$ but $O(x)\not=o(x)$. – Anton Petrunin Mar 18 '10 at 23:40
  • 16
    The above is not a problem caused by big-O notation, but a problem caused by failing to recognize that $O(x)$ is a set. The correct equivalent statements are that $o(x) \subseteq O(x)$ but $O(x) \not\subseteq o(x)$. [Of course, $o(x) \ne O(x)$ is also correct.] As equality is the most important relation we have, one should never write an equals sign unless one really means it! – Niel de Beaudrap Mar 19 '10 at 06:24
  • 29
    So $f(x) \ll f(x)$? That's terrible. Weak inequalities should have a horizontal line in their notations, like $\leq$ and $\subseteq$. – David E Speyer Mar 19 '10 at 07:12
  • 1
    @DavidSpeyer, I had an argument once with a graduate-school classmate who insisted on using $\subset$ for $\subseteq$. "Well, would you write $5 < 5$?" I asked him, and he insisted that, though mathematical convention forbade it, he found that a far more sensible notation. – LSpice Oct 13 '15 at 20:25
  • 8
    @LSpice I've actually taken to writing $\subseteq$ or $\subsetneq$ whenever the issue is important and not obvious from context. While I agree that $\subset$ should mean $\subsetneq$, there is not enough consensus on this point to be sure I am understood. – David E Speyer Oct 13 '15 at 20:40
  • 1
    @DavidSpeyer, I agree, and use exactly the same convention. – LSpice Oct 13 '15 at 20:54
  • 5
    To understand the reason for the notation $O(x)$, compare the statements $\cos(x) = 1 + O(x^2)$ and $\left(\cos(x)-1\right) \ll x^2$. – Lior Silberman Aug 06 '16 at 00:41
  • 1
    @David I always use $\subset $ when the inclusion happens to be proper, but it doesn't matter for the argument at hand. When it does, I write $\subsetneq $. – Andrés E. Caicedo Jul 13 '17 at 01:16
  • Since the opinion on this seems to be quite unanimous (ignoring the issue of set inclusion), I propose to use the notation described here: https://www.researchgate.net/publication/368356926_A_better_Vinogradov_notation for big-O relations (LaTeX code included). It conveniently fullfills all your proposed criteria, as well as reserving f << g for "f is much smaller than g", while still being backwards compatible with the Vinogradov notation. Ha! – AfterMath Feb 08 '23 at 16:06
21
  • The use of square brackets $\left[...\right]$ for anything. It's not bad per se, but unfortunately it is used both as a substitute for $\left(...\right)$ and as a notation for the floor function. And there are cases when it takes a while to figure out which of these is meant - I'm not making this up.

  • The word "character" meaning: a 1-dimensional representation, a representation, a trace form of a representation, a formal linear combination of representations, a formal linear combination of trace forms of representations.

  • The word "adjoint", and the corresponding notation $A\mapsto A^{\ast}$, having two completely unrelated meanings.

  • 4
    Do people really use squared brackets for the floor function? I thought it has its own symbol (one that makes a lot of sense at that). Also, if you want to talk about words that are abused, at least be outraged by "normal"! – Willie Wong Mar 18 '10 at 16:16
  • 26
    Indeed, the floor function should be written $\lfloor\cdot\rfloor$. Pet peeve of mine. – Harald Hanche-Olsen Mar 18 '10 at 16:24
  • 2
    Yes, squared brackets are still used in some parts of the world for floor, unfortunately. And as for "normal", it indeed belongs into the list, though it's not as bad as people claim; the different uses of "normal" mostly belong to different fields of mathematics, and thus it's not that easy to confuse them. Except "normal" for Hopf algebras vs. "normal" for commutative rings. – darij grinberg Mar 18 '10 at 16:33
  • 4
    @darij: Take a topological group $G$, now consider a normal subgroup $H$ ... – Gerald Edgar Mar 18 '10 at 18:54
  • Ah, normal spaces... I used to think that this notation was replaced by T_something long ago, but now I see that it isn't even in the T list. – darij grinberg Mar 18 '10 at 20:08
  • I just came across something very similar while answering another MO question. In stochastic calculus [X,Y] is standard notation for quadratic covariations. Reading through the discussion of Hormander's in Rogers & Williams (Diffusions, Markov Processes, and Martingales), I see that, at the same time, [X,Y] is used for the Lie Bracket of vector fields. Confusing, especially for processes in the tangent space of a manifold. Not necessarily bad notation though, just a conflict when combining different fields of maths. – George Lowther Mar 18 '10 at 20:26
  • 1
    Wasn't there a topology textbook that had two incompatible definitions of "perfectly normal"? (I think it was a translation and the original used different words for "perfect", but still...) – François G. Dorais Mar 19 '10 at 00:31
  • Even worse than the use of brackets for the floor function is their occasional use for the ceiling function …. (I have seen it!) – LSpice Sep 08 '10 at 04:03
  • 1
    @Francois: You're probably thinking of Kowalsky's "Topological Spaces," which has two incompatible definitions of "completely normal" (pages 61 and 93). I think the German original had "völlig normal" and "vollnormal". – Andreas Blass Mar 30 '11 at 16:13
  • @darijgrinberg : There are four incompatible conventions for normal vs T_4 (also for regular vs T_3 and related terms). To begin with, some people think that they have the same meaning, while some people think that they have different meanings. Besides that, those who think that they have the same meaning disagree on whether that meaning includes being Hausdorff, and those who think that they have different meanings (one that includes being Hausdorff and one that doesn't) disagree on which means which! – Toby Bartels Feb 27 '17 at 00:57
  • I sometimes find the distinction between square brackets and parentheses useful in probability and statistics. I use $P[x \leq a]$ to describe the probability of an event, e.g., that variable $x$ has its value less than or equal to $a$, while I use $P(x)$ to describe a function of $x$. – David G. Stork Nov 13 '17 at 19:08
17

My personal pet peeve of notation HAS to be algebraists writing functions on the right à la Herstein's "Topics in Algebra". I don't know why they do it when everyone else doesn't. I think one of them got up one day and decided they wanted to be cooler than everyone else, seriously...

Andrés E. Caicedo
  • 32,193
  • 3
    I wasn't around when that was first introduced, but my guess is that it developed from looking at functions in commutative diagrams. If f maps X to Y and g maps Y to Z, considering that we write that from left to right it could occur to someone that it would be more natural to let their composite from X to Z be written as fg. Then, to make f act first on the point x, you'd go further and put your elements on the left: (x)(fg) = ((x)f)g. – KConrad Mar 18 '10 at 18:56
  • 2
    Nearly all algebraists eventually stopped putting functions on the right (I really wonder exactly how widespread it ever was), but it still lives on unfortunately in books printed back when certain authors had adopted the strange rule. – KConrad Mar 18 '10 at 18:58
  • 38
    The rule isn't strange, but it is an example of “rationalization” efforts going too far. Like having 100 degrees (or “grads”) in a right angle, or my favourite, the 13 month calendar. The idea is quite clever, actually: 13 months, each 28 days long, add up to 364 days. The 365th day (and the 366th, in leap years) should be a universal holiday, and it should not be assigned a weekday name! Thus the calendar for every month would look the same – from Monday the 1st to Sunday the 28th. But thinking you can actually inflict this on society verges on lunacy. – Harald Hanche-Olsen Mar 18 '10 at 19:29
  • 51
    Come on, don't tell me that the "lunacy" pun in the context of the 13 months calendar was unintentional... – darij grinberg Mar 18 '10 at 20:11
  • 19
    The "functions on the right" style is alive and well... in many object oriented programming languages. Methods are functions applied to objects. – Niel de Beaudrap Mar 18 '10 at 22:07
  • 3
    @Niel: But only one object gets to be on the left, right? If the method takes more arguments, they get added on the right. – Harald Hanche-Olsen Mar 18 '10 at 23:50
  • @darij: No comment. – Harald Hanche-Olsen Mar 19 '10 at 00:08
  • I had always heard the convention originated in the calculus of permutations. If one writes permutations to the right of their arguments, then the action of the permutation follows the natural way to multiply them. I.e., $(fg)(x) = g(f(x))$, while $x(fg) = (xf)g$. Of course, one can get around that by writing the permutation as a superscript, which is also a "right action". – James Mar 19 '10 at 03:18
  • 1
    @Harald: this is true. Perhaps it is more correct to say that the "functions on the right" style is alive but crippled in OOP. But this is mostly because programming-language designers don't put a high priority on products and co-products as category theorists do. – Niel de Beaudrap Mar 19 '10 at 06:38
  • 5
    Some genius among the programmers of GAP must have deemed it a great idea to drag that unholy right function notation out of its well-deserved grave and force it upon users. To make things worse, $f(x)$ is written x^f in GAP, which means additional fun because of the way systems process the caret sign. And, of course, it is a consequence that group multiplication on $S_n$ in GAP is opposite to the rest of the world. Dear Mr. Cool, thanks for proving once again that open source software is not written for users. – darij grinberg Sep 12 '10 at 23:34
  • 3
    It's still rather common among many British group theorists, actually. – Jonathan Kiehlmann Dec 03 '10 at 17:09
  • 13
    I once had the pleasure of teaching a course ("applied modern algebra") from a textbook that used the "composition on the right" convention for binary relations, defined functions as a special case of binary relations but used composition on the left for them, and defined permutations as a special case of functions but used composition on the right for them. – Andreas Blass Mar 30 '11 at 16:22
  • 2
    When I was an undergraduate in honors abstract algebra, we were learning out of Herstein (a book I otherwise love), where we skipped chapter 1 and no one warned me about the functions-on-the-right notation. I kept getting permutation computations completely wrong. When you're dealing with beginners especially, it's critical to make this clear, since it's NOT standard throughout mathematics. I guess that's why it annoys me: I never really recovered from that gaffe. – The Mathemagician Mar 31 '11 at 20:00
17

The term "symplectic group" used to mean the group $U(n,{\mathbb H})$. It's as if people called $U(n)$ and $GL(n,{\mathbb R})$ by some single name.

Allen Knutson
  • 27,645
16

I don't like (but maybe for a bad reason) the notation $F\vdash G$ for "$F$ is left adjoint to $G$".

Any comments?
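For reference, the convention I believe is usually intended (and the one the comments below settle on) is that $F\dashv G$, with $F\colon\mathcal{C}\to\mathcal{D}$ left adjoint to $G\colon\mathcal{D}\to\mathcal{C}$, abbreviates the natural isomorphism $$\operatorname{Hom}_{\mathcal{D}}(FX,Y)\;\cong\;\operatorname{Hom}_{\mathcal{C}}(X,GY).$$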

user2330
  • 1,310
  • 2
    If we ignore the extra information (unit and counit), then adjointness is a binary relation. Presumably then you want to use the notation which places a symbol between F and G. What is wrong with $\dashv$? Would you prefer another symbol? – Andrej Bauer Mar 18 '10 at 15:24
  • 31
    Isn't it $F\dashv G$ for "F left adj. to G"? – Gerrit Begher Mar 18 '10 at 15:33
  • 10
    I think Garief's question suggests why this is bad notation: I doubt I could ever remember which way it is supposed to go without checking wikipedia. Either convention would make sense since F is on the left. Using a symmetric symbol here is definitely a bad idea, but when I need to abbreviate "F is left adjoint to G" I often write something like "F:D <---> C:G is an adjunction". – Sam Lichtenstein Mar 18 '10 at 17:09
  • 6
    The only way I can remember which one is which is by looking at $Hom(F(\cdot),\cdot) \cong Hom( \cdot ,G(\cdot))$ – Harry Gindi Mar 18 '10 at 18:49
  • I think a better symbol should have the left adjoint at the left (obviously) and should include a hint on whether the unit is $\operatorname{id} \to FG$ or $\operatorname{id} \to GF$. – Gerrit Begher Mar 18 '10 at 22:40
  • I propose $\frac{F\to 1}{\overline{1 \to G}}$. (If the LaTeX doesn't show up, there's a "(Re)process math" link on every MathOverflow page.) – darij grinberg Mar 18 '10 at 23:12
  • 6
    I recently realized that the mnemonic for the unit and counit is that the left (resp. right) adjoint occurs visually first when it occurs on the left (resp. right) side of the arrow. So if F is left adjoint to G then the unit and counit are 1 --> GF and FG --> 1. – Sam Lichtenstein Mar 18 '10 at 23:53
  • The obvious way to remember it is just that the left adjoint is on the left. And yes, it is dashv, not vdash. – მამუკა ჯიბლაძე Jun 07 '15 at 05:23
14

A cute idea for which I have yet to find supporters is D. G. Northcott's notation (used at least in [Northcott, D. G. A first course of homological algebra. Cambridge University Press, London, 1973. xi+206 pp. MR0323867]) for maps in a commutative diagram, which consists in listing the names of the objects sitting at the vertices along the way of the composition. Thus, if there is only one map in sight from $M$ to $N$, he writes it simply $MN$, so he has formulas looking like $$A'A(ABB'') = A'ABB'' = A'B'BB'' = 0.$$ He also writes maps on the right, so his $$xMN=0$$ means that the image of $x$ under the map from $M$ to $N$ is zero.

I would not say this is among the worst notations ever, though.

Ben McKay
  • 25,490
13

Students often have great difficulty when first confronted with the $o(\cdot)$ and $O(\cdot)$ notation. The term $o(x^3)$, e.g., does not denote a certain function evaluated at $x^3$, but a function of $x$, defined by the context, that converges to zero when divided by $x^3$.
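To make the convention explicit (this is the standard reading as far as I know; note that the limit point has to be supplied by the context): writing $f(x)=o(x^3)$ as $x\to a$ means $$\lim_{x\to a}\frac{f(x)}{x^3}=0,$$ while $f(x)=O(x^3)$ as $x\to a$ means that $|f(x)|\le C\,|x|^3$ for some constant $C$ on some neighbourhood of $a$. The "equality" sign is also one-directional: $x^4=o(x^3)$ as $x\to 0$, but one never writes $o(x^3)=x^4$.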

  • 6
    And for the little-oh notation in particular, one often forgets to specify the limit. There is a rather substantial difference between “$o(x^3)$ as $x\to\infty$” and “$o(x^3)$ as $x\to0$”, after all. – Harald Hanche-Olsen Mar 18 '10 at 19:36
  • I have always found Big/little o and omega notations quite obscure. In particular, I don't see a single justification for this abuse, given that one can express these concepts much more elegantly. It is another question why such alternative notations for asymptotics never prevailed. – M.G. Mar 18 '10 at 20:33
  • 4
    @ex-falso-quodlibet: How would you prefer to express these concepts? It is possible to express these concepts quite nicely as limits of ratios, but the big-O, little-o notation has the advantage of being fairly clear while expending less effort --- which is ultimately the goal of notational devices. – Niel de Beaudrap Mar 18 '10 at 22:16
  • I think if you use big-O and little-o notation you should specify what the constant depends on. Sometimes it's clear from context, but I think for students it's bad to have notation that can lead to mistakes in their proofs when they forget that a constant in one of their estimates depends on epsilon, etc. – Qiaochu Yuan Mar 19 '10 at 05:17
  • 9
    ex-falso-quodlibet Big O notation is useful because I can include it as a term in larger expressions. For example, $$\sum \log(1/(1-\chi(p)/p^s)) = \sum \left( \chi(p)/p^s + O(1/p^{2s}) \right) = \sum \chi(p)/p^s + O(1)$$ as $s \to 1^{+}$. Try writing that in your preferred notation and see which is more readable. – David E Speyer Mar 19 '10 at 07:19
  • 3
    I find the only thing wrong with the big O notation is the equality symbol: $f = O(g)$. It's really a reflexive transitive relation, so one should write $f \leq_O g$ or something. – Todd Trimble Feb 26 '13 at 01:35
  • 2
    @ToddTrimble Much belated, but I like the usage of $O()$ as a set of functions, and the notation $f\in O(g)$. It offers a much clearer intuition as to what's going on, and helps to prevent some of the most egregious mistakes that the equality notation tends to trigger. – Steven Stadnicki Jul 14 '14 at 02:45
  • 4
    @StevenStadnicki Very belated response, but: yes, that would be a very proper use of notation. In fact there's a notation pun that I like: if we think of Hardy fields that consist of germs of functions at infinity as valuation fields $K$, and remember that in algebraic geometry one often uses $O$ or $\mathcal{O}$ for the local valuation ring of germs that are bounded at the point (in this case $\infty$), then your $f \in O(g)$ literally means $f \in O g$ in the sense of principal fractional ideals (the same as $f \leq g$ in the ordered valuation group $K^\ast/O^\ast$). – Todd Trimble Oct 13 '15 at 22:15
11

I have struggled with '$dx$'. I've spent years trying to study every approach to calculus I could find in order to make sense of it: the limit definitions in my first book; vector calculus, with differentials as pullbacks of linear transformations or as flows/flux; differential forms from the bridge project; $k$-forms; nonstandard analysis, which enlarges $\mathbb{R}$ to give you infinitesimals (and unbounded numbers) with the same first-order properties and lets the integral be defined as a sum; constructive analysis, which uses a monad to take the closure of the rationals and produce the reals. But I am still just as confused as ever. I understand that the mathematical notation doesn't have a compositional semantics, but I still don't really get it. One of the problems is that, despite not really understanding it or having any abstract definition of it, I can still get correct answers, and I really hope this doesn't become a theme as I study more topics in mathematics.

muad
  • 1,402
  • 4
    Almost always, dx refers to the 1-form obtained by exterior differentiation from the function x on your space (say the graph of a function R-->R viewed as a subset of R^2, in which case x is the projection onto the first factor). It's not really mysterious. I don't know about nonstandard analysis, but I can imagine it would mean something else in such contexts. Although in my opinion, authors would do well to use a new symbol to denote something like "infinitesimal change in x" (assuming they can make mathematical sense of this) rather than overloading a symbol with a perfectly good meaning – Sam Lichtenstein Mar 18 '10 at 17:18
  • 2
    I'm voted -1 for this remark? Is it because it's too basic or boring a problem or why? :/ – muad Mar 18 '10 at 18:22
  • 6
    Probably you were downvoted because you did not really explicitly explain the confusion caused by dx. As somebody who uses differential forms a lot, I have never stumbled over a situation where dx was mysterious to me. – J Fabian Meier Mar 18 '10 at 18:56
  • 6
    There are definitely some annoying things about the $dx$ or $\frac d{dx}$ notation, but I think Newton's notation was worse, so $dx$ can't be the worst. – Douglas Zare Mar 18 '10 at 18:56
  • 22
    Then how about $dS=\sqrt{dx^2+dy^2}$ (<--some physicists love this) ?!... :-) – M.G. Mar 18 '10 at 20:26
  • 2
    I went through a similar period (see my comment on another answer). I think sometimes authors and instructors see all this as so obvious that it doesn't get explained to some students' satisfaction. It all worked itself out when I decided to stop thinking about it too much. dx is a very small change in x, and dy is the corresponding small change in y. Then dy/dx is the derivative. Not really, because we have to take limits, and if you want a rigorous interpretation of the notation obviously you'll need a different approach, but this helped me. – Michael Benfield Mar 18 '10 at 23:34
  • 2
    @efq : You can take $\sqrt{dx^2 + dy^2}$ perfectly literally. (Not so much $dS$, because then it's unclear what $S$ is.) I presume that you know how to evaluate a differential 1-form at a point $P$ and a tangent vector $v$ at $P$. Now given such $P$ and $v$ in $R^2$, and taking $x$ and $y$ to be the usual coordinates there, evaluate $dx$ at $P$ and $v$, evaluate $dy$ at $P$ and $v$, square the results, sum them, and take the principal square root. You have now evaluated $\sqrt{dx^2 + dy^2}$ at $P$ and $v$; the result is the length of $v$ (in the usual metric on $R^2$). – Toby Bartels Feb 27 '17 at 01:10
  • Any algebraic expression built out of $x$, $y$, $dx$, and $dy$ can be interpreted as a generalized differential $1$-form on $R^2$ in this way (modulo concerns that an expression might be undefined when evaluated at some points and vectors). You can also integrate such a thing along an oriented rectifiable curve: divide the curve into finitely many pieces, tag each piece with a point within it, and assign to each piece the vector from its starting point to its ending point (using the orientation). Evaluate the form on each piece using the given point and vector, and add to get a Riemann sum. – Toby Bartels Feb 27 '17 at 01:22
  • Most random expressions don't have interesting integrals; for example, the integral of $dx^2 + dy^2$ is always $0$, and the integral of $\sqrt{|dx| + |dy|}$ is always $\infty$ (on a curve of nonzero length). But the integral of $\sqrt{dx^2 + dy^2}$ is the length of the curve. (The orientation is irrelevant whenever $dx$ and $dy$ appear only as $|dx|$ and $|dy|$.) Working in an arbitrary manifold is not much harder; the only thing that is not straightforward is interpreting ‘the vector from its starting point to its ending point’. See https://ncatlab.org/nlab/show/cogerm+differential+form – Toby Bartels Feb 27 '17 at 13:04
5

$p < q$ as in "the forcing condition $p$ is stronger than $q$".

Haim
  • 870
  • 1
    Think of intervals: A smaller interval contains more information in the sense that it determines an arbitrary element with less error. Hence giving the smaller interval is the "stronger" condition. (If I remember correctly, there really is a forcing notion where this is literally true. The poset is chosen to be the Borel sets of [0,1] or something similar. Please correct me if that's wrong, I've never really had anything to do with forcing) – Johannes Hahn Dec 01 '13 at 01:02
  • 1
    I don't think it is an issue, since this is the appropriate ordering in the Boolean completion, where the weakest condition, 1, is the largest. Anyway, Matt Foreman avoids having to decide whether to use this or the other convention by writing $p\Vdash q $ and only using separative posets. – Andrés E. Caicedo Jul 12 '17 at 23:15
-2

I hate the shortcut $ab$ for $a\cdot b$. Everyone gets used to it, BUT it creates a very deep problem with all other notation; say, you can never be sure what $f(x+y)$ or $2\!\tfrac23$ might mean...

Also, in modern mathematics people do not multiply things all that often, so such a shortcut does not make much sense.

The shortcut $x^n$ is a really bad one as well. One cannot use upper indices after it. It would be easy to write $x^{\cdot n}$ instead.
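To spell out the ambiguity meant here: with juxtaposition, $f(x+y)$ can be parsed either as the product $f\cdot(x+y)$ or as the value of a function $f$ at $x+y$, and only context decides. Likewise $$2\tfrac23 = 2+\tfrac23 = \tfrac83 \quad\text{(mixed number)}, \qquad\text{whereas}\qquad 2\cdot\tfrac23 = \tfrac43 \quad\text{(product)};$$ writing the dot removes the guesswork.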

  • 10
    Writing $a\cdot b$ every time would make group presentation theorems a nightmare to read. – Anonymous Mar 18 '10 at 21:01
  • 6
    @Anonymous. No, it would not. – Anton Petrunin Mar 18 '10 at 21:35
  • 15
    At the risk of stating the obvious, I will venture that this convention exists for a simple reason. In a tremendous number of situations, it makes parsing a mathematical expression roughly equivalent in effort to parsing a written sentence in a European language; thus reducing an important task to one which had been previously solved. – Niel de Beaudrap Mar 18 '10 at 22:40
  • 11
    Can you elaborate what deep problems are created by this shortcut? – Andrea Ferretti Mar 19 '10 at 00:01
  • 1
    It is not quite true that "in modern mathematics people do not multiply things too often". One multiplies matrices all the time, for instance in linear algebra or mathematical physics, and the standard notation Ax for the value taken at x by a linear operator A is a historical consequence of it. Would you like reading all the time $\Delta\cdot u$ instead of $\Delta u$ for the Laplacian of $u$? – Delio Mugnolo Dec 01 '13 at 09:27
  • 1
  • @DelioM. Applying an operator is not exactly multiplication, but it is OK to use a dot if you want to think of it as multiplication. – Anton Petrunin Dec 01 '13 at 20:10
  • 7
    BTW, I am impressed by the number of negative votes --- I think it only shows that we are not ready to admit that the notation we use every day is the worst one. – Anton Petrunin Dec 01 '13 at 20:14
  • 2
    @Anton Petrunin: Well, given that linear operators are essentially infinite dimensional versions of matrices I do not see any difference. In view of the spectral theorem, normal operators are matrices (up to unitary transformations). – Delio Mugnolo Dec 01 '13 at 23:59
  • @DelioM. Read "Dialectics of Nature", then you will see no difference at all :) – Anton Petrunin Dec 02 '13 at 00:05
  • 5
    I think that every mathematician will agree that $2 \frac2 3$ means $\frac4 3$, and no mathematician will use it to mean $\frac8 3$. I love the idea of decorating a power with the operation (thus distinguishing $A^{\oplus n}$ from $A^{\otimes n}$, for example), though I'm not sure that I would be brave enough to apply it to exponentiation on real numbers, but how would this solve any problem with following upper indices? (Also, would you accept $e^x$, use the apocryphal notation $e^{\cdot x}$, or force it to be written as a function $\operatorname{exp}(x)$ or so?) – LSpice Oct 13 '15 at 20:34
  • 1
    @LSpice I use "$\cdot$" for multiplication all the time; it makes formulas easier to read, and now I can denote a constant as, say, $\mathit{dil}$. I am not ready for $e^{\cdot x}$, but I think it is the right notation. – Anton Petrunin Oct 15 '15 at 19:37
  • 1
    @AntonPetrunin As a student, I try to use the "shortcut" whenever possible, since using $\cdot$ for multiplication makes me get it confused with the dot product. Oh, and if you don't like the juxtaposition notation, what's your opinion on using parentheses to imply multiplication? – user3932000 Apr 09 '16 at 02:55
  • 2
    @LSpice In my opinion, the notation $2\frac23$ should be avoided if possible. All high school students (in Germany) learn that this means $\frac83$, so you confuse your (university) students if you use it any other way. – J Fabian Meier Feb 03 '19 at 09:57
  • 1
    @J.FabianMeier, I agree it should be avoided, precisely because I think all mathematicians agree it means one thing, and most high-school students think it means a different thing. I do not use this symbol, but my students generate it, and then confuse themselves between the two possible meanings. (The mathematical meaning occurs, for example, when a student blindly takes the derivative of $2x^{2/3}$ and, without being too careful about grouping or explicit operations, gets $2\frac2 3 x^{-1/3}$.) – LSpice Feb 03 '19 at 15:53
  • 4
    Mixed fractions have to be one of the worst ideas ever invented by elementary math curriculum designers. Just put a + in there, for pity's sake! – user76284 Mar 15 '20 at 06:32