Inherent imprecision of floating point variables
Barry Margolin
barmar at think.com
Fri Jul 6 08:46:38 AEST 1990
In article <4186 at jato.Jpl.Nasa.Gov> kaleb at mars.UUCP (Kaleb Keithley) writes:
>In article <b3f.2688bfce at ibmpcug.co.uk> dylan at ibmpcug.CO.UK (Matthew Farwell) writes:
[A for-loop iterating by 0.1]
>-If its all to do with conversion routines, why doesn't this stop when f
>-reaches 10?
>Because (10.0 - 0.1) + 0.1 will never be exactly equal to 10.0.
Well, it does on my IEEE-compliant Symbolics Lisp Machine. But other
floating point formats may not have this property. And there are other
floating point computations that don't obey normal arithmetic axioms; for
instance,
10.0 * .1 == 1.0
.1+.1+.1+.1+.1+.1+.1+.1+.1+.1 == 1.0000001
This latter result (the sum is 1.0000001, not exactly 1.0) is why the loop fails to terminate. The
problem is that 1/10 is a repeating fraction in binary, so the internal
representation of .1 can't be exactly right. The multiplication happens to
round in such a way that the error is cancelled out, but the repeated
additions accumulate the errors because there's a rounding step after each
addition step. In fact, here's a smaller case that demonstrates this:
.1 + .1 == .2
.5 + .1 == .6
.5 + (.1 + .1) == .7
(.5 + .1) + .1 == .70000005
--
Barry Margolin, Thinking Machines Corp.
barmar at think.com
{uunet,harvard}!think!barmar