fragmentation
rbriber at poly1.nist.gov
Wed Jul 4 11:13:11 AEST 1990
pdi!shoshana at sgi.com asks about file fragmentation:
The AT&T filesystem was subject to serious fragmentation, i.e., the longer
a file had been around (assuming it grew gradually) the more
fragmented it became (its blocks were scattered all over the disk) and the
slower its access time was. Over time, the entire operating system
could be seen to gradually slow down. The only cure for this was to
back the whole thing up onto tape, remake the filesystem and
restore.
Along came the Berkeley filesystem, with cylinder groups and other
niceties. With the Berkeley fs, files were completely rewritten
to disk when a size increase would have fragmented them - in other
words, the old blocks were deallocated and new, contiguous blocks
(in the cylinder group sense) were allocated. This system sustained
performance over time, but slowed noticeably when the disk became
more than 90% full.
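The behavior described above can be illustrated with a toy sketch (this is not SGI's or Berkeley's actual allocator; the block lists, the next-free-block allocator, and the seeks() heuristic are all invented for illustration). A file that grows one block at a time while another file is also allocating ends up with its blocks interleaved, while a Berkeley-style reallocation on growth keeps them contiguous:

```python
# Toy model of fragmentation, NOT a real filesystem allocator.

def seeks(blocks):
    """Count transitions between non-adjacent blocks (a rough seek proxy)."""
    return sum(1 for a, b in zip(blocks, blocks[1:]) if b != a + 1)

disk = iter(range(10_000))          # trivial next-free-block allocator

# AT&T-style: our file grows a block at a time while another file also
# allocates, so our blocks end up interleaved ("all over the disk").
ours, other = [], []
for _ in range(100):
    ours.append(next(disk))
    other.append(next(disk))

# Berkeley-style: on growth, deallocate the old blocks and grab a fresh
# contiguous run for the whole file.
start = next(disk)
contiguous = list(range(start, start + 100))

print(seeks(ours))        # 99: every block-to-block step is a seek
print(seeks(contiguous))  # 0: fully contiguous
```

The sketch also hints at why a nearly full disk hurts the Berkeley scheme: when free space is scarce, there may be no contiguous run left to reallocate into, so the allocator has to fall back to scattered blocks.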
So here are my questions:
1) Assuming the above is correct, which scheme does SGI use?
This is a question I am also interested in, and I haven't read or heard
anything about it on SGI machines. Is file fragmentation a problem, and
if it is, what are the options for correcting it?
--
----------------------------------------------------------------------------
| Adios Amoebas, | "I've tried and I've tried and I'm still mystified, |
| Robert Briber | I can't do it anymore and I'm not satisfied." |
| 224/B210 NIST | --Elvis |
| Gaithersburg, MD |------------------------------------------------------|
| 20899 USA | rbriber at poly1.nist.gov (Internet) |
|(301) 975-6775(voice)| rbriber at enh.nist.gov (Internet) |
|(301) 975-2128 (fax) | rbriber at nbsenh (Bitnet) |
----------------------------------------------------------------------------
More information about the Comp.sys.sgi mailing list