Too Many Files Open Error During Shared Memory Attach
Sean Landis
scl at unislc.uucp
Sat Jun 8 08:28:06 AEST 1991
staggers at casbah.acns.nwu.edu (Ken Staggers) writes:
>If anybody knows the answer to the problem below, please respond as soon
>as possible....thanks a plenty!
> ... Lots deleted ...
The man page for shmop(2) indicates:

     [EMFILE]       The number of shared memory segments attached
                    to the calling process would exceed the
                    system-imposed limit.
If you look in /usr/include/sys/errno.h:
...
#define EMFILE 24 /* Too many open files */
...
That string is what perror() prints. What the error really means here is
that your process has run into a tunable kernel parameter called SHMSEG,
the per-process limit on attached shared memory segments. Attachments are
inherited across fork(), so the forked child that tries to do one more
attach fails.
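To watch the limit bite, here is a minimal sketch (mine, not from the
original post) that attaches private segments until shmat() fails; when
errno is EMFILE, perror() prints the misleading "Too many open files":

#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    int i;

    for (i = 0; ; i++) {
        /* One small private segment per iteration. */
        int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);

        if (id == -1) {
            perror("shmget");
            return 1;
        }
        if (shmat(id, (void *) 0, 0) == (void *) -1) {
            perror("shmat");   /* EMFILE -> "Too many open files" */
            printf("shmat failed after %d attaches\n", i);
            shmctl(id, IPC_RMID, (struct shmid_ds *) 0);
            return 0;
        }
        /* Mark for removal now; the kernel deletes the segment
           once the last attach goes away (here: at exit). */
        shmctl(id, IPC_RMID, (struct shmid_ds *) 0);
    }
}

Each segment is marked IPC_RMID right after the attach, so nothing is
left lying around in the kernel when the process exits.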
Maybe you could do the attach after the exec()? I know that removes some
of the object-orientedness from your code, but UNIX processes have some
startup code in them anyway.
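Something along these lines might do it (a sketch only; "worker" is a
made-up helper program, and the shmid travels as a command-line
argument). Because the parent never attaches before the exec, the new
image starts with a clean attach count and does its own shmat():

/* parent.c -- create the segment but attach only after the exec. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>

int main(void)
{
    char arg[32];
    int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);

    if (id == -1) {
        perror("shmget");
        return 1;
    }
    sprintf(arg, "%d", id);

    if (fork() == 0) {
        execl("./worker", "worker", arg, (char *) 0);
        perror("execl");        /* only reached if the exec fails */
        _exit(1);
    }
    wait((int *) 0);
    shmctl(id, IPC_RMID, (struct shmid_ds *) 0);   /* clean up */
    return 0;
}

/* worker.c -- the exec'd image does its own attach at startup. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(int argc, char **argv)
{
    void *p;

    if (argc < 2)
        return 1;
    p = shmat(atoi(argv[1]), (void *) 0, 0);
    if (p == (void *) -1) {
        perror("shmat");
        return 1;
    }
    /* ... use the segment ... */
    return 0;
}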
To get a picture of what is going on, use the ipcs(1) command to get
interprocess communication status.
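For instance, ipcs -m lists the shared memory segments on the system,
and on many System V derivatives ipcs -ma also shows NATTCH, the current
number of attaches per segment, which makes a process that is piling up
attachments easy to spot. (Exact flags and output layout vary from
system to system.)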
Hope this helps,
Sean
--
Sean C. Landis | {hpda, sun, uplherc}!unislc!scl
Unisys Open Systems Group | unislc!scl at cs.utah.edu
320 North 2200 West B2D01 | (801) 594-3988
Salt Lake City, Utah 84116 | (801) 594-3827 Fax