Computational complexity of rm & ls
wsmith at m.cs.uiuc.edu
Thu Mar 9 09:46:00 AEST 1989
I did a test of the computational complexity of the ls and rm programs.
I created directories containing a large number of files (large = 1000-2000).
Then I timed ls commands in those directories. The system time appeared to
grow quadratically with the number of files, and the user time roughly as
n log n. (This is on an Encore running 4.2 Berkeley Unix.) When I tried it
under 4.3 Berkeley Unix on a VAX, the system time appeared to be linear in
the number of files and the user time n log n. The 4.3 version is obviously better.
How do other versions of unix perform on large directories?
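For what it's worth, here is a minimal sketch of the per-entry work a listing
does when it stats each file (as ls -l must): opendir/readdir plus one stat
per name. This is my own illustration, not the BSD ls source. The point is
that if the kernel looks each name up by scanning the directory linearly,
n stat calls cost O(n^2) system time in total, while the n log n user time
is consistent with ls sorting the names.

    /* Sketch of an ls-like scan; illustrative, not the real ls source. */
    #include <stdio.h>
    #include <dirent.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : ".";
        char buf[2048];
        struct dirent *de;
        struct stat st;
        DIR *dp = opendir(path);

        if (dp == NULL) {
            perror(path);
            return 1;
        }
        while ((de = readdir(dp)) != NULL) {
            /* Each stat() is a fresh name lookup in the directory; with a
             * linear in-kernel search, n lookups cost O(n^2) system time.
             * A real ls also sorts the names (about n log n user time),
             * which is omitted here. */
            sprintf(buf, "%s/%s", path, de->d_name);
            if (stat(buf, &st) == 0)
                printf("%8ld %s\n", (long) st.st_size, de->d_name);
        }
        closedir(dp);
        return 0;
    }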
To clean up my mess, I removed the files in two steps: first deleting half
with `rm *a` and then deleting the rest with `rm *`. The 4.3 Unix appeared
to be quadratic in system time for this operation, while the 4.2 Unix was
cubic or worse in the amount of system time required!
(I'm only measuring the curve at 2 points, so it is hard to get a good
estimate of the complexity.)
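For reference, after the shell expands the glob, rm amounts to a loop of
unlink() calls over its argument list; the sketch below is illustrative, not
the real rm source. Even if rm itself is linear in the number of arguments,
each unlink() makes the kernel search the directory for that name again, so
a linear directory search turns the whole job quadratic in system time.

    /* Sketch of what "rm file1 file2 ..." boils down to. */
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        int i, status = 0;

        for (i = 1; i < argc; i++) {
            /* Each unlink() re-searches the directory for the name; with
             * a linear search that is O(n) per call, O(n^2) for n files,
             * no matter how clever rm itself is. */
            if (unlink(argv[i]) < 0) {
                perror(argv[i]);
                status = 1;
            }
        }
        return status;
    }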
Surely rm can do better than cubic, can't it? "rm *" seems to be a common
enough idiom that rm could be optimized to handle that case better. I would
like rm to be linear in system time. Is that possible?
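A rough, self-contained harness along these lines could reproduce the
measurement at more than two points (the file names and default count here
are arbitrary, and this is not the program I used): it creates n empty files
in the current directory, then reports the system time charged for unlinking
them, via getrusage().

    /* Create n files, then time how much system time n unlinks take. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/time.h>
    #include <sys/resource.h>

    int main(int argc, char **argv)
    {
        int n = argc > 1 ? atoi(argv[1]) : 1000;
        char name[64];
        struct rusage before, after;
        double sys;
        int i, fd;

        /* Populate a scratch directory with n empty files. */
        for (i = 0; i < n; i++) {
            sprintf(name, "f%06d", i);
            if ((fd = open(name, O_WRONLY | O_CREAT, 0644)) >= 0)
                close(fd);
        }

        getrusage(RUSAGE_SELF, &before);
        for (i = 0; i < n; i++) {
            sprintf(name, "f%06d", i);
            unlink(name);
        }
        getrusage(RUSAGE_SELF, &after);

        sys = (after.ru_stime.tv_sec - before.ru_stime.tv_sec)
            + (after.ru_stime.tv_usec - before.ru_stime.tv_usec) / 1e6;
        printf("n = %d, system time for unlinks = %.3f s\n", n, sys);
        return 0;
    }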
Bill Smith
wsmith at cs.uiuc.edu
uiucdcs!wsmith