
Reading the book Schaum's Outline of Engineering Mechanics: Statics, I came across something that makes no sense to me regarding significant figures:

[Fragment from Schaum's Outline of Engineering Mechanics: Statics]

I have searched and found that practically the same thing is said in another book, Fluid Mechanics DeMYSTiFied:

[Fragment from Fluid Mechanics DeMYSTiFied]


So, my question is: why, if the leading digit in an answer is 1, does it not count as a significant figure?

knzhou
Vinicius ACP

5 Answers

78

Significant figures are a shorthand to express how precisely you know a number. For example, if a number has two significant figures, then you know its value to roughly $1\%$.

I say roughly, because it depends on the number. For example, if you report $$L = 89 \, \text{cm}$$ then this implies roughly that you know it's between $88.5$ and $89.5$ cm. That is, you know its value to one part in $89$, which is roughly to $1\%$.

However, this gets less accurate the smaller the leading digit is. For example, for $$L = 34 \, \text{cm}$$ you only know it to one part in $34$, which is about $3\%$. And in the extreme case $$L = 11 \, \text{cm}$$ you only know it to one part in $11$, which is about $10\%$! So if the leading digit is a $1$, the relative uncertainty of your quantity is actually a lot higher than naively counting the significant figures would suggest. In fact, it's about the same as you would expect if you had one fewer significant figure. For that reason, $11$ has "one" significant figure.
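The arithmetic above can be checked with a few lines of Python (a minimal sketch using the same "one part in $N$" convention as the examples; the helper name is my own):

```python
# Implied relative uncertainty of a reported two-digit length:
# writing L = 89 cm implies the true value lies in [88.5, 89.5],
# a window one unit wide, i.e. "one part in 89".
def relative_uncertainty(reported):
    return 1.0 / reported  # implied window width / value

for L in (89, 34, 19, 11):
    print(f"L = {L} cm -> about {100.0 / L:.1f}% relative uncertainty")
```

For $L = 89$ this gives about $1.1\%$, while for $L = 11$ it is $9.1\%$, close to what one fewer significant figure would suggest.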

Yes, this rule is arbitrary, and it doesn't fully solve the problem. (Now instead of having a sharp cutoff between $L = 9$ cm and $L = 10$ cm, you have a sharp cutoff between $L = 19$ cm and $L = 20$ cm.) But significant figures are a bookkeeping tool, not something that really "exists". They're defined just so that they're useful for quick estimates. In physics, at least, when we start quibbling over this level of detail, we just abandon significant figures entirely and do proper error analysis from the start.

knzhou
  • It's worth noting that the sharp cutoff moves, but the degree of the cutoff is half as much by excluding 1 as a leading significant digit. – Kevin Dec 23 '19 at 20:05
  • @Kevin No, it really is the same. If you count $1$ as a significant digit, then e.g. two sig figs means "anywhere between 1% and 10% uncertainty". If you don't count $1$ as a significant digit, then two sig figs means "anywhere between 0.5% and 5% uncertainty". It's still an order of magnitude range, it's just now better centered about 1%. – knzhou Dec 23 '19 at 20:22
  • Ah - I see where you're coming from. I was viewing it as "the worst level of inaccuracy is X%" - and that algorithmic approach decreases X% from 10% to 5%. You're viewing it as "the order-of-magnitude difference is always a factor of 10, regardless of where the split is." – Kevin Dec 23 '19 at 20:31
  • @Kevin By that logic, a far better level of inaccuracy is obtained by subtracting four thousand from the number of digits, and calling that the number of significant figures. It's not a useful metric. – wizzwizz4 Dec 24 '19 at 14:54
  • This seems like a good answer to me. I might expand it a bit by saying something about error analysis and exactly how it sidesteps this problem. I would at least indicate that in physics, one usually quotes quantities as, e.g., $80 \pm 30 cm$ rather than relying on significant figures to tell the story. – Charles Hudgins Dec 25 '19 at 03:11
  • An example I liked to use back when I taught the subject in high school was to have students calculate the area of a square that is 5 ft on each side. Luckily, this "not counting 1 as a significant digit" doesn't change that the result is 30 ft². – Ben Hocking Dec 25 '19 at 11:23
  • I think, the key point is, when you define the cutoff between 0.9 and 1, then two significant digit means an error range from 1% to 10%. If you define the cutoff between 1.9 and 2, then two significant digits means 0.5% to 5% error. In the first case, the 1% that is suggested by "two significant digits" is actually only at the low end of the interval. In the second case, the actual error range can actually be greater or smaller than 1%. The ideal cutoff would be at $\sqrt{10} = 3.16$, so it would be a good idea to also exclude a leading 2 from the significant digit count. – cmaster - reinstate monica Dec 25 '19 at 22:27
15

This isn't an actual rule. And as some people point out in the comments, it's not even mentioned in the Wikipedia article on significant digits. The rule applies to $0$, not to $1$.

Simple counter-example: $10$. Would the authors claim that this number has no significant digits?

You can verify this by doing a search for "sig fig counter." All of them should tell you that the number in your question has 4 significant figures.

As others note, this boundary condition is clearly arbitrary. But it needs to be consistent across literature, or else confusion abounds when you're working with others. So I'd say ignore the rule.

  • Not only is this rule "a thing" but it was nearly universal during the slide rule era, though it has fallen out of fashion in the intervening decades. – dmckee --- ex-moderator kitten Dec 23 '19 at 19:11
  • Regarding the counter-example: Would you say zero has no significant digits? The number of such digits is only a mechanism to gauge how certain you are about that value. For example: if the actual value is 1.00001, but I can only measure hundredths and therefore see 1.00, I could say it's one with three significant digits. (Or according to those authors, two sig. digits). Actual error analysis will always be more robust, though. – Phlarx Dec 23 '19 at 21:11
  • @Phlarx Under the "traditional" definition, a trailing $0$ is not regarded as a significant figure, unless it's followed by a nonzero digit. – Aleksandr Hovhannisyan Dec 23 '19 at 21:55
  • Where does your "traditional definition" come from? I do distinguish between 20 and 20.0, for instance. – Blackhole Dec 23 '19 at 22:33
  • My education, for one. Are you familiar with the rules for significant figures? http://chemistry.bd.psu.edu/jircitano/sigfigs.html – Aleksandr Hovhannisyan Dec 23 '19 at 23:56
  • There is no universally agreed set of rules for significant figures. The problem being that the whole notion is a blunt instrument (though too important to simply do away with altogether) and all sets of rules have bad corner cases. Various industries do have standards documents, however, so if you work in those fields you can point to an authoritative source and say "This is how we do it". It's just that you won't find universal agreement. I find that chemists are much more unified in their approach than physicists. – dmckee --- ex-moderator kitten Dec 24 '19 at 00:47
  • Also, yes, there's a distinction between 20 an 20.0. I was talking about a number like 10, which has only 1 sig fig. – Aleksandr Hovhannisyan Dec 24 '19 at 00:48
  • BTW, the answer to your question about "10" is that on a slide rule that would be $1.00 \times 10^1$, so it "obviously" has two sig-figs, and for actual integer values the whole notion is misplaced. Context matters. – dmckee --- ex-moderator kitten Dec 24 '19 at 00:49
12

Truncating numbers to a certain precision is completely arbitrary. There's no reason not to make it more arbitrary.

It seems like someone didn't like the step in precision between 9.99 and 10.0 so they moved it to between 19.99 and 20.0.

In any field where results are clustered around a power of 10, doing this may be beneficial.

Jasen
  • Uh, no, it's not just "we simply moved it." The level of imprecision between 9.99 and 10.0 is twice what it is from 19.99 and 20.0. This rule tightens the allowed level of imprecision for a set amount of significant digits. – Kevin Dec 23 '19 at 20:03
  • but only because the numbers are twice as big; the step is still approximately a factor of 10. – Jasen Dec 23 '19 at 21:46
  • ... however arbitrary you might think it, it's partly just common sense. Is "100.0" really ten times more precise than "99.0" if measured with the same instrument, say? – Will Crawford Dec 24 '19 at 18:17
3

It's Experiment Time!

(I was starting to see both points of view on whether to drop the 1, and was curious if there was some objective way of tackling the problem... so I figured it might be a good opportunity for an experiment. For Science!)

Assumptions: Significant Digits are a way of signifying precision on a number - either from uncertainty of measurement or as the result of calculations on a measurement. If you multiply two measurements together, the result has the same number of significant digits as the lower of the two starting values (so 3.8714 x 2.14 has three significant digits, not seven like you'd get from plugging it into a calculator.)

That 'calculation' part is what I'd like to take advantage of. Because arguing significant digits on a number in a vacuum is just semantics. Seeing how the precision carries forward with actual operations gives an actual testable prediction. (In other words, this should remove any sort of 'cutoff' issue. If two numbers have X significant digits, then the multiplication of them should have an accuracy of roughly X significant digits - and the validity of how you determine what's a significant digit should translate accordingly.)
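The rounding rule in that assumption can be sketched in a few lines (a hypothetical `round_sig` helper of my own, not from the books quoted):

```python
import math

def round_sig(x, n):
    """Round x to n significant figures."""
    if x == 0:
        return 0.0
    # Position of the last kept digit relative to the decimal point.
    return round(x, n - 1 - math.floor(math.log10(abs(x))))

# 3.8714 (five sig figs) x 2.14 (three sig figs):
# the calculator gives 8.284796, but only three digits are kept.
product = 3.8714 * 2.14
print(round_sig(product, 3))  # 8.28
```

The same helper reproduces the later examples, e.g. rounding 15.889035 to four figures gives 15.89.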

Experimental Layout

Generate two high precision, Benford-compliant coefficients (I'm not actually sure Benford matters in this experiment, but I figured I shouldn't omit any possible complicating factors - and if we're talking physics, our measurements should fit Benford's Law.) Perform an operation like Multiplication on them. Then, round those same coefficients down to 4 digits after the decimal, and perform the same multiplication on those rounded values. Finally, check how many digits the two resulting values have in common.

Aka, check how well the imprecise 'measurement' version compares to the actual, hidden, high-precision calculation.

Now, in an ideal world, the value would be 5 matching (significant) digits. However, since we're just blindly checking whether digits match, we're going to have some that match by sheer luck.
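The layout above can be sketched in code (my own reconstruction of the procedure, not the original script; the helper names and trial count are assumptions):

```python
import math
import random

def benford_mantissa():
    # Mantissa in [1, 10) whose leading digit follows Benford's law:
    # its base-10 logarithm is uniform on [0, 1).
    return 10 ** random.random()

def shared_sig_digits(a, b):
    # How many leading significant digits two positive numbers share.
    # (If the orders of magnitude differ, count it as zero.)
    ea, eb = math.floor(math.log10(a)), math.floor(math.log10(b))
    if ea != eb:
        return 0
    da = f"{a / 10 ** ea:.9f}".replace(".", "")
    db = f"{b / 10 ** eb:.9f}".replace(".", "")
    count = 0
    for x, y in zip(da, db):
        if x != y:
            break
        count += 1
    return count

random.seed(1)
trials, fifth_digit_hits = 20_000, 0
for _ in range(trials):
    x, y = benford_mantissa(), benford_mantissa()
    exact = x * y
    # The 'measurement': each coefficient rounded to 4 digits after the decimal.
    approx = round(x, 4) * round(y, 4)
    if shared_sig_digits(exact, approx) >= 5:
        fifth_digit_hits += 1

print(f"5th significant digit matches in {100 * fifth_digit_hits / trials:.1f}% of trials")
```

Splitting the tally by the leading digits of the inputs and the result, as below, is a straightforward extension of this loop.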

Experimental Results For Multiplication

Digits Matching Where Result Doesn't Start With One
    ... and no input value starts with One:
            5th digit matches 89.7%
            6th matches 21.4%
    ... and one input value starts with One:
            5th digit matches 53.7%
            6th matches 5.57%
    ... and two input values start with One:
            5th digit matches 85.2%
            6th matches 11.1%
Digits Matching Where Result Starts With One:
    ... and no input value starts with One:
            5th digit matches 99.9+%
            6th matches 37.8%
    ... and one input value starts with One:
            5th digit matches 99.9+%
            6th matches 25.5%
    ... and two input values start with One:
            5th digit matches 95.0%
            6th matches 13.9%

Conclusions For Multiplication

First, if multiplying two numbers yields a result that starts with 1, you should probably count the 1 as a significant digit. In other words, if you multiply '4.245' x '3.743' and come up with '15.889035', you should probably leave it at '15.89'. If you add an additional digit and call it '15.889', you have a 38% chance of that final digit being correct... which probably isn't high enough to justify including it.

But multiply where just one of the inputs starts with 1, and it gets strange. Multiply '1.2513' x '5.8353', and realistically you don't have five significant digits in your result. According to the experiment, you've got four digits... and a 54% chance of being right on that fifth value. Well, if a 38% chance in the prior situation (multiplying two numbers and ending with a value starting with '1') of getting an 'extra' significant digit isn't acceptable, then it's probably fair to say the 54% chance in this situation is also too low to justify including the 5th digit.

So you might be tempted to say "Don't treat a leading 1 as significant as an input to a calculation"... except that multiplying 1.##### x 1.#### (two numbers that start with 1) gives you 85.2% accuracy on that fifth digit - which is pretty much the same level of accuracy as where none of the three numbers begin with a 1. So if 8.83 x 8.85 should have three significant digits, so should 1.83 x 1.85.

Final Conclusion: It's actually a deceptively difficult problem to figure out a good heuristic. Especially since there's a pretty big difference between a measurement of 1.045 that's fed into the input of a calculation, and the 1.045 that comes out as a result of a calculation. Which explains why there are multiple methods of handling leading 1's. (If I were forced to choose a Heuristic, it would be: don't count the leading '1' on any measurements performed, but count it for the output of any calculations.)

Kevin
2

Keeping track of "significant digits" is a heuristic for indicating approximately the precision of a number. It's not a substitute for a real uncertainty analysis, but it's good enough for many people and many purposes. When some people run up against the limitations of significant figures, they have enough background (or colleagues with enough background) to switch to a more serious error analysis. When other people run up against those same limitations, they try to "fix" the significant-digits approach by creating new ad-hoc rules like this one.

Let's suppose that you and I are independently analyzing the same data set. Each of us has measured the same quantity to two significant figures: your result is 0.48, and my result is 0.52. Since a healthy significant-figure analysis retains one least-significant digit whose value is only mostly trustworthy, it's not clear whether our measurements agree or not; that level of disagreement is interesting and we might end up discussing how to turn that into a three-significant-figure experiment, in case we've both correctly measured a "true" value closer to 0.498.

Now imagine a different universe where we both do the same experiment, but a different definition somewhere means that our "results" are different numerically by an exact factor of twenty. Your measurement in this universe is 9.6, and mine is 10.4. There's still an interesting tension between those numbers. But if I count the leading 1 as one of my two significant digits, I should report my result as "10", suggesting it is equally likely to be "9" or "11." If you report 9.6 and I report 10, the tension between our results is much less obvious. Also, it appears that my result is ten times less precise than yours. I shouldn't be able to change the precision of a number by doubling or halving it.

That's the logic for keeping track of a "guard digit" if a number happens to fall in the bottom part of a logarithmic decade. (The Particle Data Group keeps a "guard digit" if the first two significant digits are between 10 and 35.) But to explain this by saying that "a leading 1 isn't a significant digit," as your source does: that's terribly confusing. I'd find a book written by someone else and read the author you quote here with some caution.

@supercat reminds me in a comment that there is a compact convention for representing real uncertainties that has become popular in the literature in the past couple of decades: one writes the uncertainty in the last few digits in parentheses just after the number. For example, one might write $12.34(56)$ as a shorthand for $12.34\pm 0.56$. This approach is nice in the precision measurements business, where there are many significant figures. For example, the current Particle Data Group reference reports the electron mass (in energy units) as $0.510\ 998\ 950\ 00(15)\,\mathrm{ MeV}/c^2$, which is much easier to write and to parse than $0.510\ 998\ 950\ 00\,\mathrm{ MeV}/c^2 \pm 0.000\ 000\ 000\ 15 \,\mathrm{ MeV}/c^2$.
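The conversion into that shorthand is mechanical; here is a minimal sketch (the `paren_notation` helper is hypothetical and assumes the uncertainty is quoted to a fixed number of significant digits):

```python
import math

def paren_notation(value, uncertainty, digits=2):
    # Hypothetical helper: format value +/- uncertainty in compact
    # parenthesis notation, keeping `digits` significant digits of the
    # uncertainty, e.g. 12.34 +/- 0.56 -> "12.34(56)".
    place = math.floor(math.log10(uncertainty)) - (digits - 1)
    scaled = round(uncertainty / 10 ** place)
    decimals = max(0, -place)
    return f"{value:.{decimals}f}({scaled})"

print(paren_notation(12.34, 0.56))                   # 12.34(56)
print(paren_notation(0.51099895000, 0.00000000015))  # 0.51099895000(15)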

I haven't seen that approach much in material for introductory students, and I can think of a couple of reasons why. The "significant figure rules" are, for most people, the first time they learn that arithmetic is something you can do with numbers that are not exact. Many students are intellectually unprepared for that idea: they're ready to write 0.5 instead of 1/2, but they're vague on whether to decimalize 1/7 as 0.1 or as 0.1428571429, because the latter is how it comes out of the calculator. Furthermore, to use the parenthesis notation, you have to have some understanding of significant figures already. To combine my examples above, most people who aren't in the precision measurements business (where understanding the uncertainty may be more challenging than understanding the central value) would write 12.3(6) rather than keeping the guard digits in 12.34(56). But if you were to multiply that value by twenty, it would become 246.8(11.2). Whether to record it thus, or as 247(11), or as $250\pm10$, winds up raising the same issues about guard digits that started this question. While the ambiguity is moved from the central value to the uncertainty, so the stakes for misjudging are lower, explaining this to a person who is new to the idea of careful imprecision is a tall order.

rob
  • It's too bad no convention emerged to distinguish between values having differing levels of uncertainty in the last place, perhaps replacing the last digit with 0/2 or 1/2, or 0/4, 1/4, 2/4, or 3/4 so that the biggest change in expressed uncertainty between adjacent levels of precision would be a factor of 2.5 rather than a factor of ten. – supercat Dec 25 '19 at 16:40
  • @supercat There is such a convention. I've updated the answer. – rob Dec 25 '19 at 19:59
  • I'd not seen that convention. I do remember a rather ancient (probably 1970s) periodic table which marked some of the atomic masses with an asterisk indicating that they were +/- 4 in the last place, while other values were within +/- 1 in the last place. Is there any convention for distinguishing between values that are within 0.501ulp, 0.75ulp, or 1ulp? Also, another thing I've thought should be standardized is a means of indicating values that should be considered exact to arbitrary precision. If one has eight shelves of eight rows of eight columns of blocks, one doesn't have... – supercat Dec 25 '19 at 20:10
  • "500" blocks (one significant figure), but 512 exactly. – supercat Dec 25 '19 at 20:11
  • When sub-ULP precision matters, then you're doing a real uncertainty analysis rather than using significant digits as a shorthand. The most common way to indicate this is to add one or more guard digits when recording the uncertainty. Note that modern analysis is often done end-to-end using double-precision floating-point numbers on computers, which have about fifteen significant figures; most of that precision could be considered guard digits. For exact values, the reliable way to communicate them is an explanatory sentence. @supercat – rob Dec 25 '19 at 21:21