double vs single precision
Ken Turkowski
ken@turtlevax.UUCP
Sat Jan 11 08:59:09 AEST 1986
In article <1333@brl-tgr.ARPA> tribble_acn%uta.csnet@csnet-relay.arpa (David Tribble) writes:
>For the last few weeks there has been on-going discussion of
>the merits and drawbacks (demerits?) of the precision the compiler
>sees fit to use for statements like-
> float a, b;
> a = a + 1.0; /* 1 */
> a = a + b; /* 2 */
>One argument that should be mentioned is that some compiler writers
>choose the criteria-
> 1. It keeps runtime code small, because only one floating
> point routine ($dadd) is required; a single-precision
> $fadd is not necessary.
Routine? What's wrong with using the single-word machine
instructions? There's no reason not to use floating-point hardware
now that it is so cheap and widely available.
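For concreteness, here's a sketch of the statements in question (the
$dadd/$fadd names are just the mnemonics from the quoted article, and
the 1.0f float-constant suffix is draft-ANSI syntax rather than K&R):

    #include <stdio.h>

    int main(void)
    {
        float a = 0.5, b = 0.25;

        a = a + 1.0;    /* 1.0 is a double constant: a is widened,
                           a double add ($dadd) is done, and the
                           result is narrowed back on assignment */
        a = a + b;      /* K&R rules still widen both floats; a
                           compiler allowed to evaluate float
                           expressions in single precision could
                           emit $fadd here instead */
        a = a + 1.0f;   /* with a float constant, the whole
                           expression can stay single precision */

        printf("%f\n", a);
        return 0;
    }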
> 2. It agrees with K&R's definition of 'the usual arithmetic
> conversions' for doing arithmetic operations.
Just because it agrees with K&R doesn't make it right. Nearly every
other programming language takes the position that if you want a
computation carried out at higher precision than any of its operands,
you cast one of the operands to the higher precision yourself.
The same should hold for chars and shorts as well as for floats and
longs. (One could cast a long to double for more precision, but not
to float.)
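Here's a sketch of that convention (the variables are illustrative;
note that the float sum loses the small addend, while the explicitly
requested double sum keeps it):

    #include <stdio.h>

    int main(void)
    {
        float  a = 1.0e-8, b = 1.0;
        short  s = 30000, t = 30000;

        double d = (double)a + b;   /* double-precision add, by
                                       explicit request */
        float  f = a + b;           /* otherwise the compiler would
                                       be free to stay in single
                                       precision */
        long   sum = (long)s + t;   /* widen before the add so the
                                       sum can't overflow a 16-bit
                                       int */

        printf("%.10f %.10f %ld\n", d, f, sum);
        return 0;
    }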
What's that? You say it breaks existing code? The easy solution to
that is:
#define char long
#define short long
#define int long
#define float double
--
Ken Turkowski @ CIMLINC, Menlo Park, CA
UUCP: {amd,decwrl,hplabs,seismo,spar}!turtlevax!ken
ARPA: turtlevax!ken@DECWRL.DEC.COM