sizeof(int) on 16-bit Atari ST.
Ray Butterworth
rbutterworth at watmath.waterloo.edu
Wed Nov 22 06:25:33 AEST 1989
In article <31505 at watmath.waterloo.edu> rbutterworth at watmath.waterloo.edu (Ray Butterworth) writes:
>e.g. #define SIGNBIT(x) (0x8000 & (x))
>makes a big assumption about int being 16 bits.
>But #define SIGNBIT(x) ( (~(( (unsigned int)(~0) )>>1)) & (x) )
>will work fine regardless of the size of int, and will generate
>the same machine code as the first macro when int is 16 bits.
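(For the record, on a 16-bit two's-complement machine the second
macro's mask folds at compile time like this:

    ~0                  ->  0xFFFF   (value -1)
    (unsigned int)(~0)  ->  0xFFFF   (value 65535)
    0xFFFF >> 1         ->  0x7FFF
    ~0x7FFF             ->  0x8000

so both macros really do come down to "0x8000 & (x)" there.)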
Thanks to Niels Jørgen Kruse (njk at diku.dk), who had to tell me twice
before I'd believe the obvious, and Karl Heuer (karl at haddock.isc.com),
who informed me that the above isn't quite as portable as I'd thought.
All of the operators involved are bitwise operators *except* for the cast.
Casting to (unsigned int) is an arithmetic conversion, and as such it
might change the bit pattern, in particular on 1's-complement and
sign-magnitude architectures.
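For instance, on a hypothetical 16-bit 1's-complement machine the
mask comes out all wrong:

    ~0                  ->  0xFFFF   (minus zero, i.e. value 0)
    (unsigned int)(~0)  ->  0x0000   (the conversion preserves the value)
    0x0000 >> 1         ->  0x0000
    ~0x0000             ->  0xFFFF

i.e. the "sign-bit mask" selects every bit in the word.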
The standard (3.1.2.5) requires that "(unsigned int)n" have the
same bit pattern as "n" if n is a non-negative int, so the cast
should of course be done to the 0, not to the ~0. i.e.
#define SIGNBIT(x) ( (~(( ~(unsigned int)0 )>>1)) & (x) )
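A quick sanity check of the corrected macro (my own test scaffolding,
not from the discussion above):

    #include <stdio.h>

    #define SIGNBIT(x) ( (~(( ~(unsigned int)0 )>>1)) & (x) )

    int main(void)
    {
        /* != 0 normalizes the unsigned mask result to 0 or 1 */
        printf("%d\n", SIGNBIT(-1) != 0);   /* prints 1: sign bit set */
        printf("%d\n", SIGNBIT(42) != 0);   /* prints 0: sign bit clear */
        return 0;
    }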
>Coding for portability may require a little extra effort,
>but it doesn't mean the result has to be any less efficient.
Maybe that "little" is an understatement.