Csh confusion: #define HZ 100?
Rob Warnock
rpw3 at redwood.UUCP
Wed Dec 12 16:53:38 AEST 1984
+---------------
| From: guy at rlgvax.uucp
| True, except humans don't have to deal with the math most of the time; that's
| what computers are for. 100 seems to be an inappropriate frequency, as
| you spend too much time servicing clock interrupts. (One could imagine
| a UNIX system which didn't run the system clock at any set frequency, but
| just set it to interrupt when the next "interesting" event happened, like
| quantum expiration, a "callout" timeout coming due, etc..)
| Guy Harris
| {seismo,ihnp4,allegra}!rlgvax!guy
+---------------
We (mainly Dave Yost, with my sniping and "+1"s) actually did something
similar to this back at Fortune Systems. All internal short-term time
intervals (used for callouts, etc.) were converted to fixed-point
rational arithmetic -- 32 bits with the binary point 20 bits from the
right, so the maximum interval was (signed) 2048 seconds, with a
resolution of approximately one microsecond (2**(-20) sec.). This was
originally done because the disk driver wanted to be able to do overlapped
seeks on "buffered-stepper" ST506-type drives, and needed fine-grained
callouts for predicting when to "re-connect" to wait for the final "ready"
from the drive. (A similar strategy used back on TOPS-10 for DECtapes was
called "dead reckoning". Yes, you could do overlapped seeks on DECtape!)
The hardware clock was driven off a Z80B-CTC (which also supplied baud
rates to certain devices) and was the highest (non-fatal) interrupt
level, but ALL scheduling and callouts (and "pseudo-DMA" completions)
took place at the lowest level via a hardware "doorbell" interrupt.
All other I/O devices were at intermediate levels. Because of this,
actual hardware clock interrupt code was VERY short: decrement a count
of micro-ticks-to-go, and trigger a scheduler interrupt if negative.
(It was allowed to underflow so that real time would not be lost; any
underflow was accounted for when the low-level code actually got to run.)
Even in critical sections, the clock interrupt was left enabled ("spl7()"
actually meant "spl5()"), so as not to lose real time.
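The two-level scheme above can be sketched as follows, assuming a flag variable standing in for the hardware "doorbell" interrupt. All names are illustrative, not the actual Fortune Systems code:

```c
#include <stdint.h>

static int32_t ticks_to_go;     /* micro-ticks until next event */
static int doorbell_pending;    /* stands in for the doorbell interrupt */

/* Hardware clock, highest non-fatal level: VERY short, as described.
 * The count is allowed to underflow so no real time is lost. */
void hardclock(void)
{
    if (--ticks_to_go < 0)
        doorbell_pending = 1;   /* trigger the scheduler interrupt */
}

/* Soft clock, lowest level: credits any ticks that accrued while
 * it was pending, then re-arms for the next event.  Returns the
 * overrun so callers can see how late it ran. */
int32_t softclock(int32_t next_interval)
{
    int32_t overrun = -ticks_to_go;
    doorbell_pending = 0;
    ticks_to_go = next_interval - overrun;
    return overrun;
}
```

The key property is that `hardclock()` touches one counter and one flag, so raising the hardware rate costs almost nothing, while all the expensive work happens at the lowest priority where it can be preempted by real I/O.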
The "lbolt" scheduler was just another callout event. Since you always
know what the next event is (it's the first one in the callout chain),
it's easy to have the (soft) clock interrupt only when needed, rather
than at a fixed interval.
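A sorted callout chain of the kind described makes "interrupt only when needed" straightforward: the head of the chain is always the next interesting time, so that is what you arm the (soft) clock for. A minimal sketch, with hypothetical names:

```c
#include <stddef.h>

struct callout {
    struct callout *c_next;
    long            c_time;        /* absolute expiry, micro-ticks */
    void          (*c_func)(void); /* what to run at expiry */
};

static struct callout *callout_head;

/* Insert in expiry order; return the time to arm the clock for,
 * which is always the head of the chain. */
long callout_insert(struct callout *c)
{
    struct callout **pp = &callout_head;

    while (*pp != NULL && (*pp)->c_time <= c->c_time)
        pp = &(*pp)->c_next;
    c->c_next = *pp;
    *pp = c;
    return callout_head->c_time;
}
```

Since the chain stays sorted, expiry is just popping the head, and the "lbolt" scheduler falls out naturally as one more periodic entry in the same chain.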
Given the above, and given the very small overhead of the hardware
clock interrupt, we found that the hardware clock rate didn't affect
system overhead nearly as much as before. Rates as high as 900 Hz were
used during certain debugging sessions, with little effect on performance.
(Of course, even one or two percent is worth SOMETHING, so I believe
the standard rate used in the field is 50 or 60 or 90 or something low
like that.)
Rob Warnock
Systems Architecture Consultant
UUCP: {ihnp4,ucbvax!dual}!fortune!redwood!rpw3
DDD: (415)572-2607
Envoy: rob.warnock/kingfisher
USPS: 510 Trinidad Ln, Foster City, CA 94404