Signed char - What Foolishness Is This!

Brent Chapman chapman at cory.Berkeley.EDU
Tue Oct 21 10:09:44 AEST 1986

In article <14 at ems.UUCP> mark at ems.UUCP (Mark Colburn) writes:
>It is important to note that K&R define that:
>	char  8 or more bits
>	short 16 or more bits

Just _WHERE_ does K&R say this?  No place that I've ever seen...  The only
thing that I can figure is that you are inferring these "minimum" values
from the table of _sample_ type sizes on p. 34; this is not a good thing
to do.

Note to everyone: If you're going to quote from something, especially K&R,
_please_ check to make sure it says what you _think_ it says, and then 
include the page number of the info which supports your posting.

>Although these values may be implementation specific.  On my 68020 based
>machine, shorts are 16 bits.  When I need an 8 bit unsigned value (e.g. a byte)
>in my code (which happens quite frequently when you are writing software to
>support 8 bit CPU's) I use 'unsigned char'.
>I got myself into all sorts of trouble when I was first using C because I
>assumed that if an int is 16 bits, then a short must be 8.  Right?  Wrong!

Definitely wrong.  On p. 34, K&R say "The intent is that short and long 
should provide different lengths of integers _where practical_ [emphasis
mine -- Brent]; int will normally reflect the most "natural" size for a
particular machine.  As you can see, each compiler is free to interpret
short and long as appropriate for its own hardware.  About all you should
count on is that short is no longer than long."

Nowhere (that I'm aware of, anyway, and I looked carefully for it) does
K&R say that ints must be at least 16 bits, nor that chars must be at
least 8 bits.  I seem to recall hearing about some screwy machine 
whose "character size" and "most natural integer size" were both 12
bits; for that machine, types 'char', 'int', and 'short' were all 12 bits.

>Therefore, the only portable way to express a true byte (8-bit) value is with
>an 'unsigned int' declaration.  This may still get you into trouble when you
>are working on a compiler that uses characters that are more than 8 bits.

'unsigned int'?  Are you sure you don't mean 'unsigned char'?  But even if
you do, there's no guarantee that you get what you call a "true byte"; there's
nothing in K&R that outlaws a 7-bit char, for instance.  The definition of
char (again, on p. 34) is "a single byte, capable of holding one character
in the local character set".  Note that "byte" doesn't automatically mean
"8 bits".


chapman at	or	ucbvax!cory!chapman
Brent Chapman


More information about the Comp.lang.c mailing list