Computational complexity of rm & ls
Leslie Mikesell
les at chinet.chi.il.us
Mon Mar 13 07:39:08 AEST 1989
In article <9000014 at m.cs.uiuc.edu> wsmith at m.cs.uiuc.edu writes:
>What brought this up in the first place was an attempt to decide whether
>it was better to have 10000 or more files in one directory, to have a
>fraction of the files in several directories, or at the extreme case,
>to have 100 of the files in each of 100 directories if I wanted to be
>able to open() individual files quickly.
The optimal maximum size probably varies with the OS version. I've
been told that directory access becomes much less efficient once the
directory spills out of its direct blocks into indirect blocks (around
300 files? With the classic 16-byte entries and 512-byte blocks, the
ten direct blocks hold about 320 entries, which fits that figure.)
Under SysV, directories never shrink by themselves, so access efficiency
depends on how many files have ever been in the directory at once
rather than how many are there now. If you have a scheme for
determining which of 100 directories a particular file is stored
under, that calculation is almost certain to be faster than a linear
search past many hundreds of extra entries in a single directory.
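
A minimal sketch of such a bucketing scheme (my illustration, not from
the original post), assuming subdirectories named 00 through 99 under
some base directory and a simple additive hash; the names, constants,
and paths are only examples:

    /*
     * Hash a file name into one of NBUCKETS subdirectories ("00".."99")
     * so each directory stays small enough to search quickly.
     */
    #include <stdio.h>

    #define NBUCKETS 100

    /* Cheap additive hash over the file name; any stable hash will do. */
    static unsigned bucket(const char *name)
    {
        unsigned h = 0;
        while (*name)
            h = h * 31 + (unsigned char)*name++;
        return h % NBUCKETS;
    }

    /* Build "basedir/NN/name" into path; caller supplies the buffer. */
    static void bucket_path(char *path, size_t len,
                            const char *basedir, const char *name)
    {
        snprintf(path, len, "%s/%02u/%s", basedir, bucket(name), name);
    }

    int main(void)
    {
        char path[1024];

        bucket_path(path, sizeof path, "/usr/spool/data", "somefile.dat");
        printf("%s\n", path);  /* e.g. /usr/spool/data/42/somefile.dat */
        return 0;
    }

The point is that computing the bucket is a few dozen instructions,
while scanning a directory of 10000 entries means reading and comparing
thousands of names on every open().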
Les Mikesell