NOFILES bug in System V.3
Walter_James_DeReu at cup.portal.com
Tue Nov 22 12:27:45 AEST 1988
I've lost the posting, but a System V.3 problem was reported earlier
which occurred when NOFILES exceeded 20, files were opened via system
calls (bypassing fopen()), and the stdio routines were used. The _bufendtab
array is indexed by the UNIX file descriptor but dimensioned under the
assumption that file descriptors won't exceed 20, and memory following
that array gets corrupted.
I thought I could work around this by ensuring that fopen'd files had
file descriptors < 20 while using larger file descriptors for files being
accessed via direct calls to open(), read(), and write(). This almost
worked, but I encountered a problem when I called sscanf() with a
file open as file descriptor 20.
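That arrangement can be sketched as follows. The function name open_high is my own, and the cutoff of 20 simply matches the stock stdio table size discussed above; F_DUPFD duplicates onto the lowest free descriptor at or above its third argument:

```c
#include <fcntl.h>
#include <unistd.h>

/*
 * Open a file for direct read()/write() use, then move its
 * descriptor to 20 or above so the low descriptors stay free
 * for fopen().  Returns the new descriptor, or -1 on error.
 */
int open_high(const char *path, int flags)
{
	int fd, high;

	fd = open(path, flags);
	if (fd < 0)
		return -1;
	if (fd >= 20)		/* already out of stdio's range */
		return fd;
	high = fcntl(fd, F_DUPFD, 20);	/* lowest free fd >= 20 */
	close(fd);
	return high;
}
```

With descriptors managed this way, fopen() keeps handing out descriptors below 20 while bulk I/O runs above them -- until, as described next, something lands exactly on 20.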
I don't have source, but after much time in sdb I believe I know what is
happening. Sscanf() apparently dummies up a FILE struct and then calls
_doscan() to do the real work. It uses the string being scanned as the
"buffer" in the FILE struct and sets the file descriptor to 20. When
_doscan() reaches the end of the string, it calls _filbuf() to read the
next block from the file into the "buffer". If file descriptor 20 isn't
an open file, the read fails, _doscan() returns, and everything is fine.
But if a file is open for reading as file descriptor 20, it will be
read into the "buffer" -- which is really the string being scanned.
Furthermore, the number of bytes read into the "buffer" is _bufendtab[20]
minus the address of the buffer. In my case, this was 0 - 0x7ffffsomething,
with an unsigned result of about 2 billion characters! This does
a bit of damage to the stack.
This can be demonstrated with the following program:
#include <stdio.h>
#include <string.h>

int
main()
{
	char buf1[100];
	char buf2[10];
	char buf3[100];
	int i;

	strcpy(buf1, "Buffer 1");
	strcpy(buf2, "01/02/03");
	strcpy(buf3, "Buffer 3");

	/* Duplicate stdin until descriptor 20 is open for reading. */
	while (dup(0) != 20)
		;

	/* _doscan() "refills" buf2 by reading from descriptor 20. */
	sscanf(buf2, "%d/%d/%d", &i, &i, &i);

	printf("Buf1: %s\nBuf 2: %s\nBuf 3: %s\n", buf1, buf2, buf3);
	return 0;
}
You must have NOFILES configured for more than 20 (or the while-dup
will loop forever). On the two machines I have tested (AT&T 3B15
and an 80386 running Interactive UNIX) the sscanf() will cause a
read from stdin. If you type a fairly long string, sscanf() will
trash both buf2 and buf3 before returning. If you type a REALLY
long string or redirect your input to a large file, a core dump ensues.
I hesitate to call this a bug because I believe AT&T has documented that
you can't use stdio in programs which use more than 20 files. On the other
hand, certain standard UNIX programs (such as the C preprocessor) dump
core if they inherit 15 or so open files and are given a task to perform
which requires them to open more than 5 files (such as compiling a program
with deeply nested #includes).
I am currently working around this by arranging for /dev/null to be
opened as file descriptor 20. This keeps my buffer from getting trashed
but is hardly elegant. Does anyone have a better solution?
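A minimal sketch of that workaround follows; park_fd20 is a name of my choosing, and dup2() is used to force /dev/null onto descriptor 20 so the dummied-up FILE reads 0 bytes instead of clobbering the caller's string:

```c
#include <fcntl.h>
#include <unistd.h>

/*
 * Ensure descriptor 20 is open on /dev/null, so that a read()
 * through the bogus FILE struct returns end-of-file at once.
 * Returns 0 on success, -1 on error.
 */
int park_fd20(void)
{
	int fd = open("/dev/null", O_RDONLY);

	if (fd < 0)
		return -1;
	if (fd != 20) {
		if (dup2(fd, 20) < 0) {	/* dup2() forces the target fd */
			close(fd);
			return -1;
		}
		close(fd);
	}
	return 0;
}
```

Called once at startup, this pins descriptor 20 for the life of the process -- which is exactly the inelegance complained of above, since it burns a descriptor and silently replaces anything already open there.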
More information about the Comp.bugs.sys5 mailing list