use of NULL
Chris Torek
chris at mimsy.UUCP
Sun Feb 19 02:29:08 AEST 1989
>In article <9582 at smoke.BRL.MIL> gwyn at smoke.BRL.MIL (Doug Gwyn) writes:
>>Using 0 instead of NULL is perfectly acceptable.
In article <965 at optilink.UUCP> cramer at optilink.UUCP (Clayton Cramer)
substitutes too many `>'s (I patched the references line) and writes:
>No it isn't. Segmented architecture machines will have problems with
>that in large model. Microsoft defines NULL as 0L, not 0, in large
>model. Pushing an int 0 instead of a long 0 will screw you royally
>on the PC.
Doug Gwyn is right; you are reading things into his posting that
are not there. Using 0 wherever NULL is strictly legal will always
work. Never mind the fact that, by trickery, Microsoft C defines NULL
in a way that works for INCORRECT calls in large model (but *not* for
medium or compact models), as a concession to bad programmers' wrong
programs.
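(To make `strictly legal' concrete, here is a small sketch of my own, not
part of the original exchange: in assignments and comparisons the compiler
knows which pointer type is involved, so a plain 0 is converted to the
right kind of nil pointer, whatever that pointer's width.)

        char *p = 0;            /* legal: 0 becomes a (char *) nil pointer   */
        double *q = 0;          /* legal: 0 becomes a (double *) nil pointer */

        if (p == 0 && q == 0)   /* legal: the 0s are converted here as well  */
                ;               /* nothing to do */

The trouble spots are argument lists of un-prototyped functions, where the
compiler has no pointer type to convert to; that is what the rerun below is
about.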
Time for another rerun....
From: chris at mimsy.UUCP (Chris Torek)
Newsgroups: comp.lang.c
Subject: Why NULL is 0
Summary: you have seen this before, but this one is for reference
Date: 9 Mar 88 02:26:10 GMT
(You may wish to save this, keeping it handy to show to anyone who
claims `#define NULL 0 is wrong, it should be #define NULL <xyzzy>'.
I intend to do so, at any rate.)
Let us begin by postulating the existence of a machine and a compiler
for that machine. This machine, which I will call a `Prime', or
sometimes `PR1ME', for obscure reasons such as the fact that it
exists, has two kinds of pointers. `Character pointers', or objects
of type (char *), are 48 bits wide. All other pointers, such as
(int *) and (double *), are 32 bits wide.
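(A quick sketch of my own, assuming a C compiler for such a machine and
8-bit bytes, just to make the two widths concrete:)

        #include <stdio.h>

        int main(void)
        {
                /* On the hypothetical `Prime' this would print "6 4":
                 * 48-bit (char *) versus 32-bit (double *) pointers.
                 */
                printf("%d %d\n", (int)sizeof(char *), (int)sizeof(double *));
                return 0;
        }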
Now suppose we have the following C code:
main()
{
        f1(NULL);               /* wrong */
        f2(NULL);               /* wrong */
        exit(0);
}

f1(cp) char *cp; { if (cp != NULL) *cp = 'a'; }
f2(dp) double *dp; { if (dp != NULL) *dp = 2.2; }
There are two lines marked `wrong'. Now suppose we were to define NULL
as 0. Clearly both calls are then wrong: both pass `(int)0', when the
first should be a 48 bit (char *) nil pointer and the second a 32 bit
(double *) nil pointer.
Someone claims we can fix that by defining NULL as (char *)0. Suppose
we do. Then the first call is correct, but the second now passes a
48 bit (char *) nil pointer instead of a 32 bit (double *) nil pointer.
So much for that solution.
Ah, I hear another. We should define NULL as (void *)0. Suppose we
do. Then at least one call is not correct, because one should pass
a 32 bit value and one a 48 bit value. If (void *) is 48 bits, the
second is wrong; if it is 32 bits, the first is wrong.
Obviously there is no solution. Or is there? Suppose we change
the calls themselves, rather than the definition of NULL:
main()
{
        f1((char *)0);
        f2((double *)0);
        exit(0);
}
Now both calls are correct, because the first passes a 48 bit (char *)
nil pointer, and the second a 32 bit (double *) nil pointer. And
if we define NULL with
#define NULL 0
we can then replace the two `0's with `NULL's:
main()
{
        f1((char *)NULL);
        f2((double *)NULL);
        exit(0);
}
The preprocessor changes both NULLs to 0s, and the code remains
correct.
On a machine such as the hypothetical `Prime', there is no single
definition of NULL that will make uncasted, un-prototyped arguments
correct in all cases. The C language provides a reasonable means
of making the arguments correct, but it is not via `#define'.
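(A closing note of my own, not part of Torek's article: ANSI C function
prototypes give the compiler the same type information the casts supply,
so with prototypes in scope a plain NULL or 0 argument is converted to the
correct kind of nil pointer automatically. A minimal sketch:)

        #define NULL 0                  /* as in the article */

        void f1(char *cp);              /* prototypes declare the pointer types */
        void f2(double *dp);

        int main(void)
        {
                f1(NULL);               /* correct: converted to (char *)0   */
                f2(NULL);               /* correct: converted to (double *)0 */
                return 0;
        }

        void f1(char *cp) { if (cp != NULL) *cp = 'a'; }
        void f2(double *dp) { if (dp != NULL) *dp = 2.2; }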
--
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain: chris at mimsy.umd.edu Path: uunet!mimsy!chris