Filesystem fragmentation / performance
Shoshana Abrass
shoshana at pdi.UUCP
Wed Jul 4 04:22:25 AEST 1990
People here are confused about SGI's filesystem, and I'm hoping
someone out there can clear things up. As I understand it, the old
AT&T filesystem was subject to serious fragmentation, i.e., the
longer a file had been around (assuming it grew gradually), the more
fragmented it became (its blocks were scattered all over the disk)
and the slower its access time was. Over time, the entire operating
system could be seen to slow down gradually. The only cure for this
was to back the whole thing up onto tape, remake the filesystem, and
restore.
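
To make sure I'm describing the mechanism I mean, here is a toy C
sketch of a free-list allocator in the old System V style. This is
purely my own illustration (the block counts and the two-file churn
pattern are made up), not AT&T's or SGI's actual code: freed blocks
go back onto a simple list, so a file that grows one block at a time
inherits whatever scattered blocks happen to be at the head of that
list.

    /*
     * Toy model of an old System V style free-list allocator.
     * Illustration only -- not anyone's real kernel code.
     */
    #include <stdio.h>

    #define NBLOCKS 64

    static int freelist[NBLOCKS];   /* stack of free block numbers */
    static int nfree;

    static int balloc(void) { return freelist[--nfree]; }  /* pop head */
    static void bfree(int b) { freelist[nfree++] = b; }    /* push head */

    int main(void)
    {
        int i, a[8];

        /* start with every block free, lowest-numbered block on top */
        for (i = NBLOCKS - 1; i >= 0; i--)
            bfree(i);

        /* two files grow in lockstep, so each one's blocks interleave */
        for (i = 0; i < 8; i++) {
            a[i] = balloc();        /* file A gets blocks 0, 2, 4, ... */
            (void) balloc();        /* file B gets blocks 1, 3, 5, ... */
        }

        /* remove file A: its scattered blocks go back onto the list */
        for (i = 0; i < 8; i++)
            bfree(a[i]);

        /* a third file now grows gradually and inherits the scatter */
        printf("blocks handed to the new file:");
        for (i = 0; i < 8; i++)
            printf(" %d", balloc());
        printf("\n");
        return 0;
    }

Running it, the new file gets blocks 14 12 10 8 6 4 2 0 - scattered
across the "disk" and in reverse order - which is the kind of layout
I mean by fragmentation, and also the sort of thing my question 2
below is getting at.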
Along came the Berkeley filesystem, with cylinder groups and other
niceties. With the Berkeley fs, files were completely rewritten
to disk when a size increase would have fragmented them - in other
words, the old blocks were deallocated and new, contiguous blocks
(in the cylinder group sense) were allocated. This system sustained
performance over time, but slowed noticeably when the disk became
more than 90% full.
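
In the same spirit, here is a made-up sketch of why any allocator
that wants contiguous (cylinder-group-local) blocks has a harder
time as the disk fills: if the in-use blocks are scattered at
random, long free extents all but vanish at high utilization, which
is presumably why the Berkeley fs reserves a few percent of free
space (the minfree figure). The block counts and utilization levels
here are invented for illustration; this is not the real FFS
allocator.

    /*
     * Count how many runs of RUN or more contiguous free blocks remain
     * as a randomly occupied "disk" fills up. Illustration only.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define NBLOCKS 100000
    #define RUN     4               /* contiguous run we'd like to find */

    static char used[NBLOCKS];

    int main(void)
    {
        int pct;

        srand(1);
        for (pct = 50; pct <= 95; pct += 5) {
            int i, runs = 0, cur = 0;

            /* mark roughly pct% of the blocks as allocated, at random */
            for (i = 0; i < NBLOCKS; i++)
                used[i] = (rand() % 100) < pct;

            /* count maximal free extents at least RUN blocks long */
            for (i = 0; i <= NBLOCKS; i++) {
                if (i < NBLOCKS && !used[i])
                    cur++;
                else {
                    if (cur >= RUN)
                        runs++;
                    cur = 0;
                }
            }
            printf("%2d%% full: %d free extents of %d+ blocks\n",
                   pct, runs, RUN);
        }
        return 0;
    }

On a quick run here the count drops from thousands of usable extents
at 50% full to essentially none by 95%, which at least matches the
"slows down past 90% full" behavior people describe.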
So here are my questions:
1) Assuming the above is correct, which scheme does SGI use?
2) In the AT&T system, if your use pattern were to create a lot of
files - then remove them all - then make a bunch more - would you
still see a gradual slowdown in performance? I.e., would you see it
even if you didn't have files that increased in size over time?
3) Can aging slow the Berkeley fs and, if so, how can it be fixed?
As you might guess, some of our PIs (Personal Irises) are apparently
slower than other PIs with identical hardware, and we're looking for
a reason. Can anyone think of age-related problems other than
fragmentation?
Thanks for any help - I'll summarize significant replies that don't
get posted.
-shoshana
Shoshana Abrass
pdi!shoshana at sgi.com
--------------- Disclaimer necessitated by mailpath: ----------------
I don't work for sgi, I just work downstream.
---------------------------------------------------------------------