Analysis of algorithms (was Value of microefficiency)
David Collier-Brown
daveb at geac.UUCP
Mon Jul 18 22:58:39 AEST 1988
> In article <12369 at mimsy.UUCP> chris at mimsy.UUCP (Chris Torek) writes:
>>This is true, but be careful: theoretical average performance on
>>theoretical average data is not the same as actual performance on
>>actual data. For instance: [bsort -vs- qsort]
From article <165 at quintus.UUCP>, by ok at quintus.uucp (Richard A. O'Keefe):
> There is no substitute for checking the literature. According to all
> three of Knuth V3, Mehlhorn, and Sedgewick, the average case of quick-
> sort does about 1.4 times ...
A side point here: the analysis of the behavior of algorithms
tends to consider only the worst case, with occasional attention to
a limiting case when the data set is small. (The good authors
tend to do a much broader analysis; see below.)
What is not often considered is an analysis of the shape of the
performance curve under differing circumstances. The only ones I've
seen were in a paper on sort behavior, which contrasted
                                              x
          .   .                             x
          .                               x
          .           with              x
          .                           x
          .                        x x
and concluded that there really **were** advantages to using
algorithm "." in certain cases.
Part of the reason is that an "O" calculation, challenging though it
may be, is a fairly contained piece of work. A broader analysis
requires a lot of plain, ordinary, non-mathematical drudgery:
counting comparisons in programs and simulating performance across
a whole family of data points. Knuth tends to do it, because he
writes out implementations in MIX, but it is otherwise not all
that common...
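To make the drudgery concrete, something like the following throwaway
C program is what I have in mind (a rough sketch only -- the particular
sorts, sizes, and data shapes are mine, chosen purely for illustration):
count comparisons for a straight insertion sort and a textbook quicksort
over a small family of sizes and two input shapes, then look at the
curves that come out.

#include <stdio.h>
#include <stdlib.h>

/* Global comparison counter, shared by both sorts (illustrative only). */
static long ncmp;

static int cmp(int a, int b)
{
    ncmp++;
    return (a > b) - (a < b);
}

/* Straight insertion sort: O(n^2) worst case, nearly linear on
   already-sorted input. */
static void insertion_sort(int *a, int n)
{
    int i, j, v;
    for (i = 1; i < n; i++) {
        v = a[i];
        for (j = i; j > 0 && cmp(a[j - 1], v) > 0; j--)
            a[j] = a[j - 1];
        a[j] = v;
    }
}

/* Textbook quicksort, last-element pivot: ~1.4 n lg n comparisons on
   average, but quadratic on already-sorted input. */
static void quick_sort(int *a, int lo, int hi)
{
    int i, j, pivot, t;
    if (lo >= hi)
        return;
    pivot = a[hi];
    i = lo;
    for (j = lo; j < hi; j++) {
        if (cmp(a[j], pivot) < 0) {
            t = a[i]; a[i] = a[j]; a[j] = t;
            i++;
        }
    }
    t = a[i]; a[i] = a[hi]; a[hi] = t;
    quick_sort(a, lo, i - 1);
    quick_sort(a, i + 1, hi);
}

/* Fill a[] with one "shape" of data: 0 = random, 1 = already sorted. */
static void fill(int *a, int n, int shape)
{
    int i;
    for (i = 0; i < n; i++)
        a[i] = shape ? i : rand();
}

int main(void)
{
    int sizes[] = { 100, 400, 1600, 6400 };
    int shape, s;

    printf("%8s %8s %12s %12s\n", "n", "shape", "insertion", "quicksort");
    for (shape = 0; shape < 2; shape++) {
        for (s = 0; s < 4; s++) {
            int n = sizes[s];
            int *a = malloc(n * sizeof *a);
            int *b = malloc(n * sizeof *b);
            long ci, cq;
            int i;

            fill(a, n, shape);
            for (i = 0; i < n; i++)
                b[i] = a[i];

            ncmp = 0; insertion_sort(a, n);      ci = ncmp;
            ncmp = 0; quick_sort(b, 0, n - 1);   cq = ncmp;

            printf("%8d %8s %12ld %12ld\n",
                   n, shape ? "sorted" : "random", ci, cq);
            free(a); free(b);
        }
    }
    return 0;
}

On random input quicksort does far fewer comparisons as n grows; on
already-sorted input this naive last-element-pivot quicksort goes
quadratic while insertion sort does n-1 comparisons -- exactly the
kind of crossover that a single worst-case "O" figure never shows.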
--dave
--
David Collier-Brown. {mnetor yunexus utgpu}!geac!daveb
Geac Computers Ltd., | Computer science loses its
350 Steelcase Road, | memory, if not its mind,
Markham, Ontario. | every six months.