Dump(8) speedups and the mass driver
Chris Torek
chris at umcp-cs.UUCP
Thu Aug 22 19:48:55 AEST 1985
A while back I posted a mass (pardon the pun) of 4.2BSD kernel +
dump(8) hacks (something I call the "mass driver") for speeding up
dumps. Well, when I got in yesterday, around 7PM, our big 785 had
just gone down for a level 0 dump. I decided to time things, and
send out some "hard data" on the effectiveness of the mass driver.
Here is how the disks are set up, right now:
Filesystem    kbytes    used   avail capacity  Mounted on
/dev/hp0a       7421    6445     233    97%    /
/dev/hp1g     120791   80891   27820    74%    /foonman
/dev/hp2c     236031  194710   17717    92%    /ful
/dev/hp3a     179423  150545   10935    93%    /oldusr
/dev/hp3b     179423  148005   13475    92%    /g
/dev/hp4a     179423  157349    4131    97%    /usr
/dev/hp4d      15019    7469    6048    55%    /usr/spool
/dev/hp5c     389247  317269   33053    91%    /u
(I've deleted the entries for disk partitions that we don't dump, e.g.,
/tmp.) "bc" tells me that this sums to 1,062,683K---just a bit over
one gigabyte.
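(In case you want to check that figure without bc, here is a trivial
C sketch that just sums the "used" column from the table above;
nothing in it beyond those numbers.)

/* Sum the "used" column (kbytes) from the df output above. */
#include <stdio.h>

int main(void)
{
        static long used[] = { 6445, 80891, 194710, 150545,
                               148005, 157349, 7469, 317269 };
        long total = 0;
        int i;

        for (i = 0; i < (int)(sizeof(used) / sizeof(used[0])); i++)
                total += used[i];
        printf("%ld kbytes\n", total);  /* prints 1062683 */
        return 0;
}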
We dumped all of that to two tape drives in under two hours.
------[suspenseful pause; commercial break; etc.]------
First, I need to describe our configuration. It looks approximately
like this:
        <===================SBI===================>
             |||            |||            |||
             RH0            RH1            RH2
           |  |  |          |||             |
        RM05  | TU77   4 Fuji Eagles      TU78
           2 RP06s
(The '|||'s are intended to indicate higher bandwidth (-:. "RH"
is DECish for a MASSBUS. RH1 is not a real MASSBUS, but rather
an Emulex controller that emulates one.) Two weeks ago I observed
another level 0 dump in progress. We had been using a rather
suboptimal approach; it turns out that dumping the RM05 to the TU78
while dumping one of the Eagles to the TU77 uses up too much of
RH0's bandwidth, or something; in any case we have reordered the
dumps to run from the Eagles to the TU78 and from other drives to
the TU77. It helps.
Anyway, onward! to more timing data. I timed three tapes written
on the TU77, dumping from /usr (one of the Eagle partitions). The
total write time (not counting the 1 minute 20 seconds for rewind)
was in each case under 5 minutes (I recall 4:30, 4:40, and 4:45, I
think, but I didn't write these down). A TU77 has a maximum forward
speed of 125 inches per second, which works out to 3 minutes 50.4
seconds to write an entire 2400 foot reel. 4:40 gives an average
forward write speed of 102.857 inches per second, which is not bad.
Unfortunately, by the time I thought of timing these we had already
started the last full reel for the TU78, so I don't have numbers
for it; however, it seemed to write a full reel in a little under
twice as long as the TU77, so I'd estimate 9 minutes per tape.
Since the TU78 was running at 6250 bpi, this is not bad; it works
out to about twice the data transfer rate of the TU77.
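(Here is that arithmetic as a small C sketch, for anyone who wants to
play with the numbers. The 125 in/s, 2400-foot reel, 4:40 write time,
and 6250 bpi are all from the text above; the 1600 bpi density for
the TU77 and the exact 9-minute TU78 reel time are assumptions, since
the TU78 time was only estimated.)

/*
 * Back-of-the-envelope tape timings.  Speeds and the 4:40 reel time
 * are from the text; the TU77's 1600 bpi density and the 9-minute
 * TU78 reel time are assumptions used only for the comparison.
 */
#include <stdio.h>

int main(void)
{
        double reel_in = 2400.0 * 12.0;         /* reel length, inches */
        double tu77_ips = 125.0;                /* TU77 max forward speed */
        double best = reel_in / tu77_ips;       /* 230.4 s = 3:50.4 */
        double actual = 4 * 60 + 40;            /* observed ~4:40 per reel */
        double eff_ips = reel_in / actual;      /* ~102.9 in/s average */

        /* density ratio over reel-time ratio gives the throughput ratio */
        double tu78_time = 9 * 60;              /* assumed ~9 min per reel */
        double ratio = (6250.0 / 1600.0) / (tu78_time / actual);

        printf("best case %.1f s, average %.1f in/s\n", best, eff_ips);
        printf("TU78/TU77 transfer rate ratio ~%.1f\n", ratio);
        return 0;
}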
In any case, the total time for the dump, including loading tapes
and other small delays, was 1 hour 56 minutes from start to finish.
This compares quite well to the 4.1BSD days of six hour dumps for
two RP06s (~250M dumped on just a TU77), or our pre-mass-driver
4.2BSD days of four or five hour dumps for the configuration listed
above (but somewhat less full). Further improvement is unlikely,
short of additional tape drives.
------[another break, of sorts]------
For those who have stuck with me to the end of this (admittedly
long and getting wordier by the moment :-) ) article, here's a
small ugly kernel hack that should be installed after the mass
driver, to make old executables with a bug in stdio work. In
sys/sys/sys_inode.c, find ino_stat() and change the two lines
reading
        else if ((ip->i_mode&IFMT) == IFCHR)
                sb->st_blksize = MAXBSIZE;
to
        else if ((ip->i_mode&IFMT) == IFCHR)
#if MAXBSIZE > 8192     /* XXX required for old binaries */
                sb->st_blksize = 8192;
#else
                sb->st_blksize = MAXBSIZE;
#endif
This generally shows up when doing things like
        grep e /usr/dict/words >/dev/null
as random weird errors, core dumps, and the like (stdio is tromping
on whatever data occurs after _sobuf in the executable).
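(Roughly what goes wrong, for the curious: the old stdio buffers
stdout in a fixed-size static array, _sobuf, but takes its buffer
size from fstat()'s st_blksize, so a character device reporting a
block size larger than the compiled-in array makes it run off the
end. The sketch below only shows the shape of the problem; it is not
the actual stdio source, and OLD_BUFSIZ is an illustrative guess at
what the old binaries expect.)

/*
 * Illustration only, not the real stdio source.  This just reports
 * what st_blksize a character device claims; an old binary sizes its
 * stdio I/O from that value but has only OLD_BUFSIZ bytes of _sobuf,
 * so a larger value means it scribbles past the end of the buffer.
 */
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>

#define OLD_BUFSIZ 8192         /* illustrative compiled-in buffer size */

int main(void)
{
        struct stat sb;
        int fd = open("/dev/null", O_WRONLY);

        if (fd < 0 || fstat(fd, &sb) < 0) {
                perror("/dev/null");
                return 1;
        }
        printf("st_blksize = %ld (old stdio buffer holds %d)\n",
            (long)sb.st_blksize, OLD_BUFSIZ);
        if (sb.st_blksize > OLD_BUFSIZ)
                printf("an old binary would overrun _sobuf here\n");
        return 0;
}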
--
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 4251)
UUCP: seismo!umcp-cs!chris
CSNet: chris at umcp-cs ARPA: chris at maryland