Value of microefficiency (was: Re: Optimal ...)
Chris Torek
chris at mimsy.UUCP
Sat Jul 9 01:22:44 AEST 1988
In article <3693 at rpp386.UUCP> jfh at rpp386.UUCP (John F. Haugh II) writes:
>if i write a sort routine with O(n*ln n) and you write one O(n**2),
>how much ``micro optimization'' is it going to take to outperform my
>poorly implemented, but superior time complexity, algorithm? if your
>idea of a really swell sort algorithm is a bubble sort, no matter how
>much optimizing you perform you are going to lose.
>
>even a highly optimized bad algorithm is still a bad algorithm.
This is true, but be careful: theoretical average performance on
theoretical average data is not the same as actual performance on
actual data. For instance:
swap(a, m, n):
    t <- a[m]; a[m] <- a[n]; a[n] <- t;

bsort(a, n):
    do
        sorted <- true;
        for i in [0..n-1) do
            if a[i] > a[i+1] then
                swap(a, i, i+1);
                sorted <- false;
            fi;
        rof;
    until sorted;
versus a quicksort of the form
qsort(a, l, h):
    if l >= h then return; fi;
    i <- l; j <- h;
    do
        while a[i] < a[h] do i++;
        while --j > i and a[j] >= a[h] do null;
    while (i < j and [swap(a, i, j); true]);
    if i < h then
        swap(a, i, h);
        qsort(a, i + 1, h);
    fi;
    qsort(a, l, i - 1);
In the `usual' case, qsort wins by great leaps.  But what if the
array is already sorted?  Then bsort finishes in a single O(n) pass,
while this qsort keeps choosing the largest remaining element as its
partition element and slides down to O(n**2).  (There is a fix for
qsort, involving choosing a median partition element, but then
suppose that n<4: bsort will still win, on sheer simplicity.)
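
For anyone who wants to time the two routines on real data, here is
one possible C rendering of the pseudocode above.  It is only a
sketch: `int' elements are assumed, and the quicksort is named
qsort2 to keep clear of the library qsort().

static void swap(int *a, int m, int n)
{
    int t = a[m];

    a[m] = a[n];
    a[n] = t;
}

/* bubble sort: keep making passes until a pass does no exchange */
static void bsort(int *a, int n)
{
    int i, sorted;

    do {
        sorted = 1;
        for (i = 0; i < n - 1; i++)
            if (a[i] > a[i + 1]) {
                swap(a, i, i + 1);
                sorted = 0;
            }
    } while (!sorted);
}

/* quicksort with a[h] as the partition element, as in the pseudocode */
static void qsort2(int *a, int l, int h)
{
    int i, j;

    if (l >= h)
        return;
    i = l;
    j = h;
    do {
        while (a[i] < a[h])                /* scan up for one >= pivot */
            i++;
        while (--j > i && a[j] >= a[h])    /* scan down for one < pivot */
            /* nothing */;
    } while (i < j && (swap(a, i, j), 1));
    if (i < h) {
        swap(a, i, h);                     /* pivot into its final place */
        qsort2(a, i + 1, h);
    }
    qsort2(a, l, i - 1);
}

A whole array is sorted with bsort(a, n) or qsort2(a, 0, n - 1).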
------------------------------------------------------
Always consider actual data when analysing algorithms.
------------------------------------------------------
In practice, I have used a routine which either does an insertion
sort or a quicksort, depending on how scrambled the data are
expected to be.  Profiling showed that this worked quite well.
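
The general shape of such a hybrid is sketched below.  The sortem()
and isort() names, the mostly_sorted hint, and the cutoff of 16
elements are illustrative assumptions, not the routine that was
actually profiled; qsort2() is the one from the sketch above.

/* insertion sort: roughly O(n) on data that is already nearly in order */
static void isort(int *a, int n)
{
    int i, j, t;

    for (i = 1; i < n; i++) {
        t = a[i];
        for (j = i; j > 0 && a[j - 1] > t; j--)
            a[j] = a[j - 1];
        a[j] = t;
    }
}

/* dispatcher sketch: pick the cheap sort when little is out of place */
static void sortem(int *a, int n, int mostly_sorted)
{
    if (mostly_sorted || n < 16)
        isort(a, n);             /* small or nearly-sorted input */
    else
        qsort2(a, 0, n - 1);     /* the usual divide-and-conquer case */
}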
--
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain: chris at mimsy.umd.edu Path: uunet!mimsy!chris