ints vs. pointers
Thomas M. Breuel
breuel at harvard.ARPA
Sun Nov 11 16:34:00 AEST 1984
> I can remain silent no longer. In the debate about NULL vs non-NULL,
> people seem to be overlooking the fact that the number of actual "data"
> bits (as opposed to "noise" bits such as object size, etc., used on
> some CPUs) in a pointer MUST BE no bigger than the number of bits in an int!
> [Before the flames start I KNOW K&R's not a standard but it's the closest
> we've got.] I refer you to Appendix A, para 7.4 (page 189 in my edition):
>
> If two pointers of the same type are subtracted, the result is con-
> verted (by division by the length of the object) to an int representing
> the number of objects separating the pointed-to objects.
>
> This is clearly not necessarily possible if you have (say) 32-bit pointers
> (with ALL 32 bits being address) and 16-bit ints.
>
> John Mackin, Physiology Department, University of Sydney, Sydney, Australia
> ...!decvax!mulga!john.physiol.su
Sure, K&R writes that, but reality is different. There are 'C' compilers
in existence which have 16-bit ints and 32-bit pointers. Such compilers
have pointer subtraction return a long value, which usually gets
truncated to a short value, and which usually does not cause any problems,
since only rarely are related pointers more than 64 kbytes apart.
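To make the problem concrete, here is a small sketch of my own (the array
'big' and the function 'f' are made up for illustration), assuming a
compiler with 16-bit ints and 32-bit pointers:

    extern char big[];          /* assume this array is larger than 64K bytes */

    f()
    {
        char *base, *limit;
        long  wide;             /* what such a compiler actually returns      */
        int   narrow;           /* what K&R's Appendix A says the result is   */

        base   = big;
        limit  = big + 100000L; /* more than 64K bytes past base              */

        wide   = limit - base;  /* full 32-bit difference: 100000             */
        narrow = limit - base;  /* high-order bits silently lost              */
    }

As long as 'base' and 'limit' stay within 64K bytes of each other the
truncation is harmless, which is why such compilers get away with it.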
The most annoying feature of the integer data types in 'C' is that one
cannot rely on a specific integer data type having a specific precision,
in this case on a specific integer type having sufficient precision to
hold a pointer. When programming, I am generally not interested in
whether shorts use less space than longs, but rather in whether a short
is big enough to index into my array, or whether a long has sufficient
precision to hold the amount in my bank account (even a PDP short is,
unfortunately, sufficient for that).
A solution would be to re-define and augment integer data types as follows
(cf. K&R p.182):
    char    8 bits or # bits in a character, whichever is larger
    short   16 bits or larger
    long    32 bits or larger
    quad    64 bits or larger
    float   32 bits or larger
    double  64 bits or larger
    addr    same size as a pointer
    int     16 bits or larger, whatever the compiler writer likes
(Sorry, I couldn't resist slipping in 'quad'.)
This is the de facto standard anyhow (or do you expect to get a 9-bit
integer when you declare something 'short'?).
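In the meantime something close to this can be faked, one machine at a
time, with typedefs. A sketch (the header name and the type names are
mine; which built-in type satisfies each guarantee has to be chosen by
whoever writes the header for a given compiler):

    /*
     * hypothetical "sizes.h" for a 68000-style compiler with
     * 16-bit ints, 32-bit longs, and 32-bit pointers
     */
    typedef short   int16;      /* 16 bits or larger                           */
    typedef long    int32;      /* 32 bits or larger                           */
    typedef long    addr;       /* assumed as wide as a pointer on this machine */

Porting then means editing one header instead of chasing declarations
through the whole program.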
I think it is absurd to require the size of an integer to be large
enough to hold a pointer: the 68000 and 32016 are 16-bit microprocessors,
and the natural size for an integer on them is 16 bits. Nevertheless,
pointers are 32 bits on these machines. On the other hand, there is a
need for an integer type of the same size as a pointer.
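For instance (again a sketch of mine, using the 'addr' typedef from
above), hashing on a pointer can only be written portably if some
integer type is guaranteed to be at least as wide as a pointer:

    hash(p)
    char *p;
    {
        addr a;

        a = (addr) p;                   /* safe only because addr is pointer-sized */
        return ((int) (a & 01777));     /* low-order bits as the bucket number     */
    }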
Thomas.
(breuel at harvard)
--
Thomas M. Breuel
...allegra!harvard!breuel