core files under SV
Conor P. Cahill
cpcahil at virtech.uucp
Tue Nov 21 00:02:25 AEST 1989
In article <3991 at sbcs.sunysb.edu>, brnstnd at stealth.acf.nyu.edu (Dan Bernstein) writes:
> If you use double-forking to avoid ten lines of child-handling code, you
> risk a (very improbable) race condition. And the extra fork wastes more
> time than a signal handler. And you may run out of processes.
What race condition? Perhaps you mean the case where the grandchild aborts
before the intermediate child has exited? That is no problem: once the child
exits, the grandchild (zombie or not) is inherited by init, which reaps it.
Older forks were very inefficient, but with the System V.3 fork the kernel does
not make a complete copy of your process; it only flags the pages of the
current process as "copy on write" and copies the page table entries for the
new process. This makes fork() a much faster system call, by orders of
magnitude I would guess.
Yes, you may run out of processes, but if you are so near your process limit
that a double fork to perform a core dump pushes you into it, the limit needs
to be raised. The two extra processes are so short-lived as to be almost
unnoticeable (that is, of course, unless your process is an 8 meg process and
you have slow disk drives).
The double fork serves two purposes. The first, and primary, one is to let you
continue processing while the child process dumps the core. The second is
zombie control: if you want to do this without worrying about too many zombies
attacking you when you do it too often in the same process, the easiest
solution is the double fork. Of course, if you will never be executing any
children, you could instead just do a signal(SIGCLD, SIG_IGN);
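A minimal sketch of the double-fork idea (the function name dump_core_snapshot
is mine, for illustration only): the parent forks a child, the child forks a
grandchild and exits at once, the parent waits only for that short-lived
child, and the grandchild, now inherited by init, calls abort() to produce the
core file from its copy-on-write image of the parent.

/* Sketch: fork twice so the grandchild can dump core via abort()
 * while the original process keeps running and never has a zombie
 * to clean up. */
#include <sys/types.h>
#include <sys/wait.h>
#include <stdlib.h>
#include <unistd.h>

void dump_core_snapshot(void)
{
    pid_t child = fork();

    if (child == 0) {
        /* child: fork the grandchild, then exit right away */
        if (fork() == 0) {
            /* grandchild: its parent has already exited, so init
             * inherits it and reaps it -- the original process never
             * sees a zombie. */
            abort();        /* SIGABRT produces the core file */
        }
        _exit(0);
    }

    if (child > 0)
        waitpid(child, (int *)0, 0);   /* the child exits almost at once */
}

If the process never runs any other children, the alternative mentioned above,
signal(SIGCLD, SIG_IGN), lets System V reap children automatically and a
single fork would do.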
--
+-----------------------------------------------------------------------+
| Conor P. Cahill              uunet!virtech!cpcahil         703-430-9247 |
| Virtual Technologies Inc., P. O. Box 876, Sterling, VA 22170 |
+-----------------------------------------------------------------------+