Abandon NULL for (0)
Chris Torek
chris at mimsy.UUCP
Fri Oct 6 21:55:01 AEST 1989
In article <15571 at nswitgould.cs.uts.oz> karl_auer_%7801.801 at fidogate.fido.oz
(Karl Auer) writes:
>There is another good reason not to use '(0)' - in some
>implementations of C, pointers can have different sizes, requiring
>that NULL be sometimes defined as (0), sometimes as (0L) - as with
>almost all 80n86 implementations!
No, sorry; the above statement is false. The antecedent is correct
---in many systems, pointers of different types have different sizes
or formats. However, `0' is always a correct and proper source code
representation for the generic nil pointer, which must be converted
immediately into a specific nil pointer by cast, assignment, comparison,
or by being an argument to a function that has a prototype in scope.
(The first and last are both special cases of assignment.)
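To illustrate, here is a minimal sketch (the names are invented for
this example) showing each of those contexts; in every one, the
source-level 0 is converted to a nil pointer of type `char *':

    void take(char *);              /* prototype in scope */

    void example(void)
    {
        char *p = 0;                /* assignment */
        p = (char *)0;              /* cast */
        if (p == 0)                 /* comparison with a pointer */
            take(0);                /* argument to a prototyped function */
    }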
>Having a #define called NULL allows ... conditional #defines depending
>on memory model (the usual method).
IBM PC compiler vendors do this only because they are interested in
having incorrect source code compile to correct binary code, without
any work on the part of the user.
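For example, such a vendor's headers typically make NULL depend on the
memory model, roughly along these lines (the M_I86* macro names here
are only illustrative; real headers vary):

    /* compact, large, and huge models use 32-bit data pointers */
    #if defined(M_I86CM) || defined(M_I86LM) || defined(M_I86HM)
    #define NULL    0L
    #else       /* small and medium models: 16-bit data pointers */
    #define NULL    0
    #endif

With a definition like this, an unprototyped call such as g(NULL)
happens to pass an argument of the right size, even though the source
is still not strictly correct.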
(It should not be surprising that it is possible to make correct
binaries from incorrect sources. For instance, the following
program works on a VAX:
    short main[] = { 0, 0x50d4, 4 };    /* entry mask 0; clrl r0; ret */
This is a very machine-specific implementation of /bin/true.)
The incorrect source code in question is usually of the form

    void f() { g(NULL); }
    void g(p) char *p; { if (p == NULL) printf("hello\n"); }

Since no prototype for g() is in scope at the call, the bare 0 hiding
inside NULL is passed as an int, which need not have the size or
format of a `char *'.
This code can be fixed in one of two ways: provide a prototype for
g(), or apply a cast:

    void g(char *);
    void f() { g(NULL); }

or

    void f() { g((char *)NULL); }
Other, equivalent source code representations for this program can
also be devised (including using 0, (0), 0L, (1-1), etc. instead of
NULL).
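For completeness, here is the prototype version of the example
assembled into a full program (just a sketch; NULL comes from
<stdio.h>):

    #include <stdio.h>

    void g(char *);         /* prototype: the 0 in NULL converts to (char *)0 */

    void f() { g(NULL); }   /* 0, (0), 0L, or (1-1) would work as well */

    void g(p) char *p; { if (p == NULL) printf("hello\n"); }

    int main() { f(); return 0; }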
--
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain: chris at cs.umd.edu Path: uunet!mimsy!chris