4.2 scheduler (and CPU utilization)

Herb Chong herbie at polaris.UUCP
Mon Dec 9 09:27:05 AEST 1985


In article <3062@sun.uucp> guy@sun.uucp (Guy Harris) writes:
>Well, I don't know what the averaging period was for the system they were
>talking about in the Berkeley paper.  However, I've seen systems running a
>"normal" load showing anywhere between a 90% user/ 1-10% system split to a
>30% user / 60% system split ("instantaneous" figures from "vmstat").
>Another person here whose opinion I respect says that the 45% figure is
>reasonable.

okay, i admit i didn't indicate what work was being run.  try it with
10 vi sessions and 8 troffs and leave 2 for trivial commands and you
get what i got.  the load average is not a good indicator of the work
being done on a system in general, but for a given process mix, it is.

BTW, i once started up 20 programs that each allocate 3Mbytes of
arrays and access them randomly.  i tried an uptime and it never came
back even though i waited about 10 minutes.  since i was running
single user when i was "benchmarking" the entire system, i knew that
those processes were the only other processes running besides my own.
character echo stopped completely, and i never saw any output after i
typed in uptime.  this special case drove our 4M 780 running 4.2 into
thrashing once more than 4 of the programs were running.  running
about 10 long CPU and IO bucholz benchmark scripts did the same, but
that was because benchmark I/O was competing with paging I/O.

the profile that we ran on the system later under "live" conditions
indicated that 20% of the time was spent in namei and about 35% of the
time in paging routines.  under the conditions we wanted to use the
vax, adding more memory would have helped a lot but then fairly quickly
we would have become CPU bound in namei.  hopefully, 4.3 bsd addresses
this problem successfully.

our vax was a heavily loaded machine doing what vaxes are not
particularly good at when there's not enough memory.  under
controlled conditions, i forced the 780 into thrashing; for the
workload we usually ran, a load average of 14 was the tipping point.
more than 15 minutes at that level and the system more or less rolled
over permanently unless we started renicing processes, effectively
suspending them.  the load control
system posted a while back to net.sources attempts the same kind of
thing automatically on your processor and I/O hogs.  the conclusion i
drew from my analysis of our system was that more memory and more CPU
was required.  i recommended an upgrade to a 785 and adding between 4M
and 12M of memory.

a different system ran student programs that were CPU bound and tended
to fork many processes.  it rolled over at a load average of about 30.
it also was a 4M 780.  delays in kernel character echo became
noticeable at about load average 25 or so.  i don't pretend to be an
expert on the implementation of any of the kernel stuff, but i do know
about system modelling and analysis and have spent quite a bit of time
analyzing both MVS and 4.2bsd systems trying to squeeze as much as
possible out of them.

Herb Chong...

I'm still user-friendly -- I don't byte, I nybble....

VNET,BITNET,NETNORTH,EARN: HERBIE AT YKTVMH
UUCP:  {allegra|cbosgd|cmcl2|decvax|ihnp4|seismo}!philabs!polaris!herbie
CSNET: herbie.yktvmh@ibm-sj.csnet
ARPA:  herbie.yktvmh.ibm-sj.csnet@csnet-relay.arpa
========================================================================
DISCLAIMER:  what you just read was produced by pouring lukewarm
tea for 42 seconds onto 9 people chained to 6 Ouija boards.



More information about the Comp.unix.wizards mailing list