strange floating point exception interrupt behavior
Steve Dempsey
sdempsey at UCSD.EDU
Thu Jul 26 10:22:26 AEST 1990
The following discussion pertains to a 4D/25TG running 3.2.1 and a
4D/340VGX running 3.3.
Recently I have been doing a performance analysis of a number cruncher
program that runs much more slowly on IRISes than one would expect.
I fired up gr_osview and ran the program, expecting to see lots of
system calls or swapping, and an indication of where the cpu time was being
wasted. What I saw was something quite strange! The cpu was spending
99% of its time in user mode, just like any decent number cruncher should.
The shock came from the interrupt rate, which went from a background level
of 200-400 per second up to ~20K per second (~35K on the 340VGX!).
Ultimately, I discovered that the extra interrupts were occurring whenever
floating point operations resulted in underflow. This behavior can be
demonstrated by compiling and running this code:
#include <math.h>	/* on other systems MINDOUBLE may come from <values.h> */

main()
{
    double x, y, z;
    int i;

    y = MINDOUBLE;	/* smallest positive double */
    z = 0.5;
    i = 10000000;
    while (i--) x = y * z;	/* each product is a denormal */
}
Both C and Fortran versions of this code produce the same results.
I tried similar tests, forcing overflows and divide-by-zero, but no extra
interrupts were found for these floating exceptions.
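The underflow-versus-overflow contrast can be checked in software on a C99 system with the <fenv.h> status flags. This is only a sketch of the flag behavior, not the IRIX trap behavior; DBL_MIN from <float.h> stands in for MINDOUBLE, and the squared product is used because a merely halved DBL_MIN is exactly representable as a denormal, which need not raise the flag under IEEE 754's default (tiny and inexact) rule.

    #include <fenv.h>
    #include <float.h>
    #include <stdio.h>

    /* Returns nonzero if the multiply raised the underflow flag.
       DBL_MIN * DBL_MIN is both tiny and inexact, so IEEE 754
       requires the underflow (and inexact) flags to be set. */
    static int raises_underflow(void)
    {
        volatile double y = DBL_MIN, x;

        feclearexcept(FE_ALL_EXCEPT);
        x = y * y;              /* underflows toward zero */
        (void)x;
        return fetestexcept(FE_UNDERFLOW) != 0;
    }

    int main(void)
    {
        printf("underflow raised: %d\n", raises_underflow());
        return 0;
    }

The same harness with FE_OVERFLOW or FE_DIVBYZERO (and DBL_MAX * DBL_MAX or 1.0 / 0.0) shows those exceptions setting their own flags, so the asymmetry reported above is in how the hardware delivers the events, not in whether they occur.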
Can anybody explain what's so special about underflows, and why I get
interrupts even though the floating point exception interrupts are not enabled?
--------------------------------------------------------------------------------
Steve Dempsey (619) 534-0208
Dept. of Chemistry Computer Facility, 0314 INTERNET: sdempsey at ucsd.edu
University of California, San Diego BITNET: sdempsey at ucsd
La Jolla, CA 92093-0314 UUCP: ucsd!sdempsey