use of NULL
Doug Gwyn
gwyn at smoke.BRL.MIL
Sun Feb 19 11:23:13 AEST 1989
In article <965 at optilink.UUCP> cramer at optilink.UUCP (Clayton Cramer) writes:
>In article <9582 at smoke.BRL.MIL> gwyn at smoke.BRL.MIL (Doug Gwyn) writes:
>> In article <1340 at uwbull.uwbln.UUCP> ckl at uwbln.UUCP (Christoph Kuenkel) writes:
>> >its much more obvious to write 0, when I mean zero, ((char *) 0) when
>> >I mean a zero character pointer, etc. etc.
>> Using 0 instead of NULL is perfectly acceptable.
>No it isn't. Segmented architecture machines will have problems with
>that in large model. Microsoft defines NULL as 0L, not 0, in large
>model. Pushing an int 0 instead of a long 0 will screw you royally
>on the PC.
It is always non-portable to pass the macro NULL, or 0 or 0L for that
matter, as a pointer argument to a function without casting it to the
correct pointer type. Microsoft's definition appears to have been an
ill-considered attempt to make sloppily-written code have a greater
chance of working when compiled in their environment. However, such
code won't port correctly to other systems until the misusage of NULL
is fixed anyway.
There is nothing "magic" about NULL as defined by <stdio.h> etc. It's
just a mnemonic for 0, ((void*)0), or some similar expression. The C
language itself guarantees that the programmer can use unadorned 0 in
pointer comparisons, assignments, etc. Given the way some vendors
have botched the definition of NULL, it may even be advisable to avoid
using the NULL macro. It is never *necessary* to use it in portable C
programming.