The performance implications of the ISA bus
Doug Pintar
dougp at ico.isc.com
Wed Dec 12 09:58:39 AEST 1990
In article <18871 at yunexus.YorkU.CA> rreiner at yunexus.YorkU.CA (Richard Reiner) writes:
>
>Thanks, Piercarlo Grandi, for your clarifying analysis of ISA bus +
>disk issues. I wonder if I could ask you one or two questions.
>
>>just using two ESDI controllers, one per each disk, will give
>>tremendous improvements [because of multi-threaded operation]
>
>What about using SCSI equipment? Do there exist SCSI host adaptors
>for the ISA bus which support multi-threaded operation?
>
>And what about track-buffering ESDI controllers? Would their
>advantages go away if they were used in the setup you suggest (since
>you claim that one would get effectively near-zero seek times anyway)?
>
The comments below are intended to relate to ISC Unix, but most will
apply in the general case (HPDD stuff notwithstanding) -- DLP
First, the use of two ESDI controllers will swamp the system before giving
you much advantage. Remember, standard AT controllers interrupt the system
once per SECTOR. The interrupt code must then push or pull 256 16-bit words
to/from the controller. Given an ESDI raw transfer rate of 800 KB/sec (not
unreasonable for large blocks), that's about 1600 interrupts per second
(800 KB/sec divided by 512-byte sectors), each with a (not real fast, due
to bus delays) 256-word PIO transfer. Try getting two
of those going at once and the system drags down REAL fast. I've tried it on
a 20 MHz 386 and found at most a 50% improvement in aggregate throughput
using 2 ESDI controllers simultaneously. At that point, you've got 100% of
the CPU dedicated to doing I/O and none to user code...
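To make the per-sector cost concrete, here's a rough sketch in C of what
each of those interrupts has to do.  This is NOT the actual ISC driver --
the port address and the inw() primitive are just illustrative stand-ins:

  /* Illustrative only: one AT-controller interrupt moves one 512-byte
   * sector as 256 16-bit programmed-I/O reads across the ISA bus. */
  #define WD_DATA    0x1F0   /* conventional AT task-file data register */
  #define SECT_WORDS 256

  extern unsigned short inw(unsigned short port);  /* port-read primitive */

  void at_disk_intr(unsigned short *buf)
  {
      int i;

      /* The CPU itself carries every word; no DMA, no overlap. */
      for (i = 0; i < SECT_WORDS; i++)
          buf[i] = inw(WD_DATA);

      /* At 800 KB/sec this handler runs ~1600 times a second per
       * controller -- double that with two controllers going. */
  }

Multiply that loop out to 1600 (or 3200) times a second and it's easy to
see where the user cycles went.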
Two drives on a single AT-compatible controller will gain you something
in latency-reduction, as the HPDD does some cute tricks to overlap seeks.
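The general flavor of the trick (a sketch only -- not the HPDD's actual
code, and all the names here are made up for illustration):

  /* While drive 0 transfers, start drive 1's heads moving with a
   * no-data SEEK, so its request finds them already on-cylinder. */
  struct request { int cyl; /* ... */ };

  extern struct request *queue_head(int drive);
  extern void issue_read_write(int drive, struct request *r);
  extern void issue_seek(int drive, int cyl);

  void start_next_io(void)
  {
      struct request *r0 = queue_head(0);
      struct request *r1 = queue_head(1);

      if (r0) {
          issue_read_write(0, r0);        /* data moves on drive 0 */
          if (r1)
              issue_seek(1, r1->cyl);     /* overlapped seek on drive 1 */
      }
  }

Only one data transfer can be in flight at a time, but the second drive's
seek time hides behind it.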
Bus-mastering DMA SCSI adapters, like the Adaptec 154x (ISA) or 1640 (MCA)
provide MUCH better throughput. They ARE multi-threaded, and the HPDD will
try to keep commands outstanding on each drive it can use. The major win is
that the entire transfer is controlled by the adapter, with host intervention
only when a transfer is complete. You get lots more USER cycles this way!
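In outline, the multi-threaded model looks something like this -- again a
sketch with made-up names, NOT the real 154x mailbox interface:

  /* Keep one command outstanding per target; the host only runs on
   * completion interrupts while the board masters the bus itself. */
  #define NTARGETS 7                           /* IDs 0-6; host is 7 */

  struct ccb { int target; /* ... */ };

  extern struct ccb *next_request_for(int target);
  extern void adapter_submit(struct ccb *c);   /* board DMAs the data */

  static struct ccb *active[NTARGETS];

  void kick_queues(void)
  {
      int t;

      for (t = 0; t < NTARGETS; t++)
          if (active[t] == 0 && (active[t] = next_request_for(t)) != 0)
              adapter_submit(active[t]);
  }

  void scsi_done_intr(int target)              /* ONE interrupt per xfer */
  {
      active[target] = 0;
      kick_queues();                           /* keep every spindle busy */
  }

Compare that single completion interrupt against the 1600-per-second PIO
case above.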
The limiting factor here is how fast you can get transfers happening between
the bus and memory. This varies from motherboard to motherboard and is
unrelated to bus speed or processor speed. You normally want to tune the
SCSI adapter to have no more than a 50% on-bus duty cycle, or you start
losing floppy bytes (and, in the worst case, refresh!). On Compaq and
Micronics motherboards, you can go at 5.7 MB/sec bursts. Some motherboards
can go at 6.7 and others will go up to 8. Your max rate will be about half
this, given the 50% bus duty cycle limit. Arbitration for the SCSI bus can
limit this even more if you've got a bunch of drives trying to multiplex data
through a slow pipe to memory. I found that I couldn't get much over 1.7
MB/sec using 3 simultaneous SCSI drives on a Compaq. Going to more drives
actually slowed things down due to extra connections and releases of the SCSI
bus. I would imagine I'd see a big improvement if I could get the transfer
rate up to the 8 MB/sec burst rate.
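Running the numbers (usable rate = burst rate x duty cycle):

  5.7 MB/sec burst x 0.5 = ~2.85 MB/sec usable  (Compaq, Micronics)
  6.7 MB/sec burst x 0.5 = ~3.35 MB/sec usable
  8.0 MB/sec burst x 0.5 =  4.0  MB/sec usable

The 1.7 MB/sec I measured is well under even the 2.85 ceiling, which gives
you an idea what the extra connect/disconnect traffic costs.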
I'm still not convinced that caching controllers are a big win over a large
Unix buffer cache. I usually use 1-2 MB of cache, and a couple-MB RAMdisk
for /tmp if I have the memory available. Using system memory as a cache is
LOTS faster than going over the bus to cache on a controller, and I trust the
Unix disk updater more than some unknown algorithm used in a controller.
At least when you shut Unix down with a normal controller, you know you can
really power the system down. With some controllers, there's an unknown
latency time before the final 'sync' and write of the superblock actually
gets out there. Could get ugly.
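With a dumb controller the old ritual before hitting the big red switch
still works (trivial sketch; sync() is standard, the repetition is pure
folklore):

  #include <unistd.h>

  int main(void)
  {
      /* sync() only *schedules* the buffered writes; saying it three
       * times is old shell folklore to buy the updater time.  With a
       * write-back caching controller there's a second, invisible
       * stage after this that you can't wait on at all. */
      sync(); sync(); sync();
      return 0;
  }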
As usual, should any opinion of mine be caught or killed, ISC will disavow
any knowledge of me...
Doug Pintar