Numeric ranges ("short", etc.)
Chris Gray
cg at myrias.UUCP
Tue Oct 1 05:54:54 AEST 1985
Instead of specifying the number of bits needed for a numeric type,
why not just specify the range of values needed?  This lets the compiler
use whatever representation is most efficient for the type, and it does
not even appear to force the compiler to use exactly that number of bits.
(That is a significant semantic difference: try implementing 13-bit
integers on a machine without a good set of bit-twiddling instructions.)
I use

    unsigned BLAH
    signed BLAH

to declare unsigned integers in the range 0 to BLAH and signed integers
in the range -BLAH to BLAH, respectively (ignoring the extra negative
value available on two's-complement machines).
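The notation above is not C, but the idea can be roughly approximated in
C by picking a typedef from the required range at compile time.  This is
only a sketch: the type name 'counter', the bound 100000, and the use of
an ANSI-style <limits.h> are assumptions for illustration.

    #include <limits.h>

    /* Need values 0..100000: choose the narrowest standard type that
     * can hold the whole range on this machine. */
    #if UINT_MAX >= 100000
    typedef unsigned int  counter;  /* plain ints are wide enough here */
    #else
    typedef unsigned long counter;  /* 16-bit ints: fall back to long */
    #endif

    counter page_count;             /* holds 0..100000 on any machine */

The range is stated once and the representation is free to differ from
machine to machine, which is the point of a range-based declaration.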
This method is portable, but it gives you no way of guaranteeing the
exact size of a number, and guaranteed sizes are what matter when you
are dealing with external hardware constraints.  However, using
the suggested practice of defining a type for each required size, e.g.
'int16' or 'int24' and then redefining those for each new implementation
of the language, will probably work.
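As a minimal sketch of that practice (the particular widths here are
assumptions; few machines have a true 24-bit type, so 'int24' usually
has to be the next size up):

    /* One implementation's definitions, assuming 16-bit shorts and
     * 32-bit longs; a port to another machine rewrites only these
     * typedefs. */
    typedef short int16;    /* at least 16 bits on this machine */
    typedef long  int24;    /* no 24-bit type, so use 32 bits */

    int16 device_status;    /* e.g. to match a 16-bit device register */

Keeping all such typedefs together in one header per target keeps the
porting work in a single, obvious place.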
Chris Gray ..!alberta!myrias!cg