bit patterns of all ones
Richard Kuhns
rjk at mrstve.UUCP
Sat Dec 27 01:14:07 AEST 1986
In article <1527 at hoptoad.uucp> gnu at hoptoad.uucp (John Gilmore) writes:
>Ken Ballou seems to have presented good and valid reasons derived from H&S
>that (unsigned)-1 must have all the bits on. So far nobody has refuted
>him. I think he's right -- the cast to unsigned *must*, by the C language
>definition, convert whatever bit pattern -1 has into all ones. This
>is no worse than casting -1 to float causing a change in its bit pattern --
>and it's for the same reason.
I don't understand. On a ones-complement machine, -1 is represented by
(number of bits in the word) - 1 ones followed by a zero. How does casting
this value to unsigned get rid of the zero? To wit:
00000001(binary) = 1 (decimal)
11111110(binary) = -1 (decimal, ones complement, signed)
If the second value above is cast to unsigned, we end up with 254 (decimal).
What does this have to do with a bit pattern of all ones?
--
Rich Kuhns {ihnp4, decvax, etc...}!pur-ee!pur-phy!mrstve!rjk