C Floating point arithmetic
Roy Smith
roy at phri.UUCP
Wed Nov 27 01:25:55 AEST 1985
> [...] in most cases where the loss of speed might really matter, double
> precision is usually needed anyway to get meaningful results. Some
> people, though, judge code more on how fast it runs than on whether it
> performs a useful function correctly.
	Bullcookies!  A lot of people (like me) work with primary data
that is accurate to only 2 or 3 significant digits.  It takes a hell
of a lot of roundoff error in the 7th significant digit to make any
difference in the accuracy of the final result.  Why should I pay (in
CPU time) for
digits 8-15 when I don't need them? Why do you think they make machines
with both single and double precision hardware to begin with?
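	To make the point concrete, here is a minimal sketch of the
arithmetic.  (It uses <float.h> and FLT_DIG/DBL_DIG from later ANSI C,
so it's an anachronism for this thread, and the specific values are
illustrative, not anyone's real data.)  A float carries roughly
FLT_DIG (typically 6) reliable decimal digits, so its roundoff sits
orders of magnitude below the 2-3 digit accuracy of the input:

#include <stdio.h>
#include <float.h>

int main(void)
{
    float measured = 3.14f;   /* datum known to ~3 significant digits */
    double exact = 3.14;
    float fsum = 0.0f;
    double dsum = 0.0;
    int i;

    /* FLT_DIG/DBL_DIG: decimal digits each type holds reliably */
    printf("float digits: %d, double digits: %d\n", FLT_DIG, DBL_DIG);

    /* run the same accumulation in both precisions */
    for (i = 0; i < 1000; i++) {
        fsum += measured / 7.0f;
        dsum += exact / 7.0;
    }

    /* the single/double discrepancy stays well below the
       3-digit accuracy of the input data */
    printf("float  sum: %.7g\n", fsum);
    printf("double sum: %.15g\n", dsum);
    printf("relative difference: %g\n", (dsum - (double)fsum) / dsum);
    return 0;
}

The relative difference it prints is on the order of 1e-6, while the
input is only good to 1e-3 -- the double-precision digits buy nothing
here.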
--
Roy Smith <allegra!phri!roy>
System Administrator, Public Health Research Institute
455 First Avenue, New York, NY 10016