4.2 scheduler (and CPU utilization)

Guy Harris guy at sun.uucp
Sat Dec 7 16:26:17 AEST 1985


> to set the record straight, a properly configured and tuned MVS system
> running flat out will spend between 5 and 15 percent of its time in
> the kernel or related system programs averaged over a 15 minute
> interval.

You work for IBM, I don't, so I'll take your word for it.  But...

> a 4.2 bsd system properly configured and tuned will spend
> about 25 to 35 percent of its time executing in the kernel.  if you
> are willing to live with 45 percent system time averaged over 15
> minutes, your system is overloaded for good response.

Well, I don't know what the averaging period was for the system they were
talking about in the Berkeley paper.  However, I've seen systems running a
"normal" load showing anywhere from a 90% user / 1-10% system split to a
30% user / 60% system split ("instantaneous" figures from "vmstat").
Another person here whose opinion I respect says that the 45% figure is
reasonable.

> for a 4M 780, the 15 minute load average rising above 10 is a warning that
> system limits are being reached.  any time the 1 minute load average is
> over 14 for more than 30 seconds, the system is beginning to thrash.
> kernel character echo at 9600 baud is noticeably slower and almost
> stops when it reaches about 20.

gorodish$ uptime
  8:14pm  up 10 days,  7:57,  1 users,  load average: 20.88, 20.45, 19.71

At that time, kernel character echo was not noticeably slower than when the
system was unloaded.  Furthermore, the "uptime" command came back within a
couple of seconds after I hit the return.

The load average is just an average of the length of the run queue over a
particular period of time.  It may *indicate* how loaded the system is, but
its numerical value alone doesn't directly tell you anything.  In this
particular case, I kicked off 20 programs doing a "branch-to-self".  The
load average of ~20 should not be surprising.  Kernel character echo is done
at interrupt level, so user-mode activity should not greatly affect it (it
may affect interrupt latency, or toss kernel code or data from a cache or
toss page table entries from a translation lookaside buffer - admittedly,
the latter two don't apply on a Sun).

Your noting the memory size of the VAX in question, and your reference to
"system" limits rather than "CPU" limits, indicate that you're thinking of a
particular job mix.  If you pile on more jobs of some sort that consume
memory, then your paging rate will go up and cause more I/O traffic.  If the
system is spending enough time in the disk driver interrupt routine, then
yes, it could lock out terminal interrupts and slow down echoing.

If, however, you have a bunch of jobs FFTing or inverting matrices or doing
ray-tracing or some other sort of compute-intensive work, and they have
enough physical memory so that they do little or no paging, you should see
minimal impact on interrupt-driven activities (such as echoing) and it
shouldn't totally destroy interactive activities (such as relatively quick
commands) - the UNIX scheduler does give *some* weight to patterns of CPU
usage, after all.  (Yes, screen editors aren't happy, but screen editors do
eat a lot of CPU, and that's the scarce resource in this example.)

	Guy Harris



More information about the Comp.unix.wizards mailing list