unix question: files per directory
Blair P. Houghton
bph@buengc.BU.EDU
Sat Apr 15 20:19:01 AEST 1989
In article <127 at dg.dg.com> rec@dg.UUCP (Robert Cousins) writes:
>
>This brings up one of the major physical limitations of the System V
>file system: if you can have 2 ^ 24 blocks, and only 2 ^ 16 discrete
>files, then to harness the entire file system space, each file will
>(on average) have to be 2 ^ 8 blocks long or 128 K. Since we know that
>about 85% of all files on most unix systems are less than 8K and about
>half are under 1K, I personally feel that the 16 bit inode number is
>a severe handicap.
>
>Robert Cousins
>
>Speaking for myself alone.
I'll stand behind you, big guy. I just hacked up a program to check out my
filesizes, and I'll be damned if I didn't think my thing was real big...
On the system I checked (the only one where I'm remotely "typical" :),
I have 854 files, probably two dozen of them zero-length (the result of
some automated VLSI-data-file processing). The mean is 10.2k, stdev is 60k
(warped by a few megabyte-monsters), and the median is 992 bytes (do
you also guess people's weight? :) Of these 854 files of mine, 84% are
under 8000 bytes, and a paltry eight exceed the 128k "manufacturer's
suggested inode load" you compute above.
For another machine:
1740 files
Median 1304 bytes
Mean 7752
StDev 28857
77% < 8kB
And only 4 (that's FOUR) over the 128k optimal mean.
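The quick hack itself isn't worth posting, but for the curious, something
like this sketch gets the same numbers. (This is a reconstruction, not the
original program; the `filestats` name and the use of column 5 of `ls -l`
for the size are my assumptions -- that column varies between systems.)

```shell
# filestats: print count, mean, stdev, median, and the fraction of
# files under 8000 bytes, given file sizes in bytes as arguments.
filestats() {
    printf '%s\n' "$@" | sort -n | awk '
    { s[NR] = $1; sum += $1; if ($1 < 8000) small++ }
    END {
        mean = sum / NR
        for (i = 1; i <= NR; i++) dev += (s[i] - mean) ^ 2
        # input arrives sorted, so the median is just the middle entry
        med = (NR % 2) ? s[(NR + 1) / 2] : (s[NR/2] + s[NR/2 + 1]) / 2
        printf "%d files, mean %.0f, stdev %.0f, median %.0f, %.0f%% under 8k\n",
            NR, mean, sqrt(dev / NR), med, 100 * small / NR
    }'
}

# Feed it real sizes with something like (size column may differ on
# your system):
#   filestats `find $HOME -type f -exec ls -ld {} + | awk '{print $5}'`
```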
Hrmph. And I thought I was more malevolent than that. At least the
sysadmins can't accuse me of being a rogue drain on the resources...
Consider that a "block" can be 1, 2, 4 kB or more, and you're talking some
BIIIG files we have to generate to be efficient with those block numbers.
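Spelling out that arithmetic (a throwaway sketch, nothing System-V-specific
in it): 2^24 blocks over 2^16 inodes is 2^8 = 256 blocks per file, so the
break-even mean file size scales directly with the block size.

```shell
# 2^24 blocks / 2^16 inodes = 256 blocks per file; the "optimal mean"
# file size then grows with the block size you pick.
for bs in 512 1024 2048 4096; do
    echo "${bs}-byte blocks: $((256 * bs / 1024))K mean file size"
done
```

At 4 kB blocks you'd need a megabyte per file, on average, to use it all.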
--Blair
"...gon' go lick my wounded ego...
and ponder ways to make file
systems more efficient, or at least
more crowded. ;-)"
More information about the Comp.unix.questions mailing list