XLF optimizer unreliable, inappropriate for benchmarks
George Seibel
seibel at cgl.ucsf.edu
Thu Sep 20 13:24:44 AEST 1990
In article <384 at nwnexus.WA.COM> golder at nwnexus.WA.COM (Warren Jones) writes:
>
>However, until the XLF optimizer can be trusted, realistic
>benchmarks should be compiled with optimization OFF. It
>doesn't matter how fast the machine is if the results are wrong.
Nonsense. Realistic benchmarks are expected to produce answers
that can be validated. I'm not interested in unoptimized benchmarks;
if I'm buying an optimizer, I want to know how well it works. *IF*
a particular benchmark can't pass because of an optimizer bug, then
compile at a lower level of optimization, but say so.
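Something like this is all it takes -- a toy sketch, with an invented
reference value, just to show the idea of a self-checking benchmark:

      program bench
c     Toy self-check: compare the computed answer against a known
c     reference value, within a tolerance.  The value here is
c     invented for illustration; a real suite would check the
c     numbers the benchmark is actually supposed to produce.
      double precision result, refval, tol
      parameter (refval = 2.718281828459045d0, tol = 1.0d-12)
c     ... the real computation would go here ...
      result = exp(1.0d0)
      if (abs(result - refval) .le. tol) then
         write (*,*) 'validation PASSED'
      else
         write (*,*) 'validation FAILED:', result, refval
      end if
      end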
>I'm including a short fortran program that demonstrates an
>optimizer bug that one of our applications people discovered.
>For the time being, I will not use the optimizer, period.
>It's not worth the grief.
I've found optimizer bugs on just about every machine I've spent
serious time on. They happen; that's the price you pay
for the extra speed that the optimizer provides, and that's why
software should have validation suites. If you don't validate your
code whenever the compiler, libs, O/S, hardware, or your own code
changes, you take your chances. This is true on every machine, although
it may be more true on some than on others. I generally split up
large applications into numerically intensive portions and "all the rest",
and only optimize the numerically intensive part. This provides a
lot of protection against optimizer bugs.
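A sketch of what that split can look like -- the file name, the
routine, and the xlf invocations in the comments are illustrative
only, not taken from any real application:

c     kernel.f -- the numerically intensive part, isolated in its
c     own source file so it alone gets optimized, e.g.
c         xlf -O -c kernel.f
c     while the driver, I/O, and validation code in main.f are
c     compiled without -O:
c         xlf -c main.f
c         xlf -o bench main.o kernel.o
      subroutine axpyk (n, a, x, y)
      integer n, i
      double precision a, x(n), y(n)
c     y = y + a*x over n elements; a typical inner kernel
      do 10 i = 1, n
         y(i) = y(i) + a * x(i)
   10 continue
      return
      end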
George Seibel, UCSF
seibel at cgl.ucsf.edu