File system performance
Piercarlo Grandi
pcg at cs.aber.ac.uk
Tue Nov 13 06:04:02 AEST 1990
On 10 Nov 90 21:43:05 GMT, david at twg.com (David S. Herron) said:
[ ... on the BSD FFS and its ESIX incarnation ... ]
david> That 10% limit is a heuristic invented when Berkeley invented FFS
david> designed to help keep fragmentation down.
No, it is designed to prevent the hashed quadratic search for a cylinder
group with free blocks from being repeated too many times. It has
*nothing* to do with fragmentation, only with the distribution of free
blocks.
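To see why the reserve matters, here is a hypothetical sketch (not the
actual BSD source; the probe formula and counts are assumptions for
illustration) of an FFS-style cylinder-group search: starting from a
preferred group, probe with quadratically growing offsets, falling back
to a linear scan. The fuller the disk, the more probes before a group
with free blocks turns up.

```python
def find_free_cg(free, preferred):
    """free: per-cylinder-group free-block counts; returns (group, probes).

    Illustrative only: the real FFS allocator differs in detail.
    """
    n = len(free)
    for i in range(n):
        cg = (preferred + i * i) % n   # assumed quadratic probe sequence
        if free[cg] > 0:
            return cg, i + 1
    # fall back to a linear scan, as the real allocator ultimately must
    for cg in range(n):
        if free[cg] > 0:
            return cg, n + cg + 1
    raise OSError("file system full")

# With ample free space the first probe usually succeeds:
print(find_free_cg([5, 5, 5, 5], 0))   # group 0 on the first probe
# With almost every group full, many probes are needed:
print(find_free_cg([0, 0, 0, 3], 0))
```

Keeping ~10% of blocks in reserve keeps the early probes likely to
succeed, which is exactly the distribution-of-free-blocks point above.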
[ ... on defragmenting the BSD FFS online ... ]
david> Theoretically, yes. It would work best if the disk were
david> `unmounted' first which is easiest to do if the system were
david> brought to single user. It would also require writing software
david> which would first sort the free list, then reorder all the data
david> blocks into as contiguous an order as possible. The method is
david> left as an exercise to the reader.
But it would be pointless. The BSD FFS already keeps the blocks in
nearly optimal order, without requiring reorganization. I doubt very
much that in an FFS partition with sufficient free space (>10%)
reorganization would buy much extra throughput, whether the
reorganization is online or offline.
Again: the issue we are discussing here is *scattering*, not
fragmentation. In the BSD FFS, fragmentation is the splitting of large
blocks into small fragments for file tails.
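The arithmetic behind that distinction can be sketched as follows (the
8 KB block and 1 KB fragment sizes are typical values, assumed here,
not universal): a file occupies full blocks plus, for its tail, a run
of smaller fragments, so "fragmentation" in FFS terms means block
splitting, not files scattered across the disk.

```python
BSIZE = 8192   # assumed FFS block size
FSIZE = 1024   # assumed fragment size (BSIZE / 8)

def tail_layout(size):
    """Return (full_blocks, tail_fragments) for a file of `size` bytes."""
    full = size // BSIZE
    frags = -((size % BSIZE) // -FSIZE)   # ceiling division for the tail
    return full, frags

print(tail_layout(20000))   # 2 full blocks + 4 fragments for the 3616-byte tail
print(tail_layout(8192))    # exactly 1 block, no fragments
```

Storing tails in fragments wastes at most one fragment's worth of
space per file instead of most of a block, at no cost in contiguity.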
--
Piercarlo Grandi | ARPA: pcg%uk.ac.aber.cs at nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg at cs.aber.ac.uk