Memory Disks
utzoo!decvax!ucbvax!hpvax!sri-unix!mike at BRL
Sat Jan 16 08:02:10 AEST 1982
In response to the letter from Randy King -
We at BRL have been a very good customer of Dataram, with 3 BULK MOS and
1 BULK CORE memory systems installed (across a base of 5 machines).
However, we have found them significantly more useful for filesystem
disks than for swapping. Let me describe one of our configurations:
BRL-BMD (3/29): 11/70, with 3 80-Mbyte Massbus disks, 4096 blocks of
BULK CORE for the system root, 4096 of BULK MOS for /tmp, 1 RK05 for
swapping, 1 RK05 for users, 1.5 Mbytes of 450ns CacheBus MOS.
On this system, swapping runs 2 - 15 swaps/sec, with an average for a
loaded system (30 - 35 users) of about 7/sec. Clearly the 15/sec is
the maximum of an RK05, but... On the other hand, I sometimes
see 75 - 200 blocks/sec of I/O for the root or /tmp disks. (BTW, we
have 65 blocks of kernel buffer cache, and kernel inode caching).
The effects of this configuration on system response have been significant.
UNIXes are I/O-hungry creatures. When our next Massbus Controller & 300-Mbyte
disks come in a few weeks, I expect performance to increase even further.
The load here is pretty evenly split: AOS, secretaries, DBMS stuff,
F4P & F77 compiles and tests, lots of "C" compiles, 10-15 ED or EMACS users,
etc, etc, etc.
On our remaining systems, we are using the BULK MOS just for /tmp,
which makes compilers and editors respond a lot better. Having fast
random-access memory for mere swap space is unlikely to help anything
except systems with an overabundant I/O load, or an 18-bit address
space (like 34s and 45s).
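For anyone wondering what "using the BULK MOS just for /tmp" amounts to
operationally, the sketch below shows the idea in the old V7 idiom; the
device name /dev/bulk0 is made up for illustration, and in real life
/etc/rc just runs /etc/mount after an mkfs has been done on the device
once.

	/*
	 * Illustration only (device name is hypothetical): attach a
	 * bulk-memory block device on /tmp, as /etc/mount would do it.
	 * mount(2) here is the V6/V7 three-argument call: special file,
	 * mount point, and a read/write flag (0 = read/write).
	 */
	main()
	{
		if (mount("/dev/bulk0", "/tmp", 0) < 0) {
			perror("mount /dev/bulk0 on /tmp");
			exit(1);
		}
		exit(0);
	}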
As soon as the dual V6/V7 filesystem code is finished, we plan to
make an attempt to have "exec" sense when pieces of files (or whole files)
happen to be contiguous, and do direct DMA of the contiguous portions
into memory. (There will be NO way to write files that way; it will
be left to the good graces of ALLOC, and standalone disk compactors).
On a system which may be doing 5 - 8 exec's/sec, fetching the individual
pieces of programs is quite a drain on the I/O system. I realize that
this sounds distasteful, but I think it is the only good way
to approach the problem.
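To make that concrete, here is roughly the test "exec" would need,
written as a sketch in the old style rather than the code we will
actually install; assume the list of physical block numbers has already
been collected from the kernel's block-map routine (bmap() in V7):

	/*
	 * Sketch only: given the physical disk block numbers a file
	 * occupies (one per logical block, as bmap() would hand back),
	 * decide whether the file lies in a single contiguous run, so
	 * exec could pull it in with one physio/DMA transfer instead of
	 * one buffered read per block.
	 */
	typedef long	daddr_t;	/* disk block address, as in V7 */

	int
	contiguous(blks, nblk)
	daddr_t *blks;
	int nblk;
	{
		register int bn;

		for (bn = 1; bn < nblk; bn++)
			if (blks[bn] != blks[bn-1] + 1)
				return(0);	/* scattered; read it block by block */
		return(1);			/* one DMA covers the whole file */
	}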
(On a similar subject, Bill Lindemann implemented direct DMA filesystem
I/O, and found that the cost of doing the PHYSIO was about the same
as going through the buffer cache, and copying out to user space. I
use this as additional evidence that the exec stuff will make a real
difference, as we will only have to do 1 physio for the program load,
and save all the block copies).
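For the curious, the difference between the two paths can be seen from
user level with something like the sketch below (V7-style, and the RP
device names are just typical ones, not a particular machine of ours):
the block-device read goes through the buffer cache and is copied out
to user space, while the raw-device read goes through physio and DMAs
straight into the user's buffer. Time the two loops with time(1) and
compare.

	/*
	 * Sketch of the two read paths, V7 style (modern systems would
	 * want <fcntl.h> and O_RDONLY instead of the bare 0):
	 * /dev/rp0  - block device, read through the buffer cache and
	 *             copied out to user space;
	 * /dev/rrp0 - raw device, read via physio, which locks the user
	 *             buffer and DMAs into it directly.
	 */
	char	buf[512];

	main()
	{
		register int i, fd;

		fd = open("/dev/rp0", 0);	/* buffered path */
		for (i = 0; i < 1000; i++)
			read(fd, buf, 512);
		close(fd);

		fd = open("/dev/rrp0", 0);	/* raw path: physio + direct DMA */
		for (i = 0; i < 1000; i++)
			read(fd, buf, 512);
		close(fd);
		exit(0);
	}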
PDP-11's aren't dead yet!
-Mike