data validation (was re self-modifying code)
Doug Gwyn
gwyn at brl-smoke.ARPA
Fri Jul 29 17:45:35 AEST 1988
In article <3084 at geac.UUCP> daveb at geac.UUCP (David Collier-Brown) writes:
> Subroutines should not check the number or type of input
> arguments, but assume they have been called correctly.
> What the Multicians are saying is exactly what Guy says: input
>routines validate input as part of their purpose in life. Other
>routines assume the data is valid, and don't put in checks unless
>they have to deal with "versioned" structures.
> Depending on the hardware or compiler to catch invalid data by
>trapping on its use has been a known bad practice since well before
>Unix...
Yes. The basic problem is that errors detected at unanticipated
points within the bowels of a program will not be handled intelligently.
(In theory it would almost be possible to provide reasonable recovery
from every such possible error, but in practice life is too short.)
On the other hand, unless the code is developed under some formal
verification system, there is a non-negligible chance that a high-
level oversight will permit a low-level routine to be invoked
improperly. Rather than behaving randomly, a suitable compromise
is to perform simple, CHEAP plausibility tests in the low-level
routines. For example, check that a pointer is non-null before
dereferencing it, or check that a count is nonnegative. Low-level
routines should try to behave as sanely as is reasonably possible.
I usually code up such verifications under control of assert(),
and turn them all off after the whole system has been thoroughly
shaken out. Some people recommend leaving the tests enabled
forever, as inexpensive insurance. Good run-time error handling
for a production release of a system should not rely on recovery
from such low-level interface errors anyway.