RISC System/6000 and general filesystem performance
Samuel G. Fulcomer nac1280
fulcomer at jvncf.csc.org
Sat Feb 24 05:04:54 AEST 1990
In article <51507 at sgi.sgi.com> markb at denali.sgi.com (Mark Bradley) writes:
>In article <1660 at aber-cs.UUCP>, pcg at aber-cs.UUCP (Piercarlo Grandi) writes:
>> In article <EMV.90Feb20220637 at duby.math.lsa.umich.edu> emv at math.lsa.umich.edu (Edward Vielmetti) writes:
>>
>> [ disk options on the RS/6000 ]
>>
>> 23ms, 1.3MB/sec transfer is wimpy for a fast machine.
>> In this configuration the machine is going to be seriously
>> i/o bound, without a doubt.
>>
>> Pah. The bottleneck is the filesystem, unless you do asynch io via a raw
>> device. You cannot get more than 600KB per second out of the filesystem in
>> ...
>> The problem is software, not hardware.
>
>Pah, indeed. I am measuring >6 MB/sec. through our filesystem today, albeit
>not with SCSI. Our SCSI (synchronous) is only a bit over 2 MB/sec. on a ...
How many extents/sec is that, Mark?
Really, the use a system is going to get is what determines its hardware
and software performance requirements. A machine with an i/o bottleneck is
only going to be bound if that bottleneck is saturated. I might be perfectly
happy with a diskless RS/6000 if I were only using it for computation with
little i/o.
In talking about filesystem software efficiency one has to distinguish
among the various functional components of the code. Filesystem layout
(physical and abstract data structure efficiency) is what interacts with
physical disk speeds. The rest of the filesystem code is just cpu revs.
I'd rather see average access time cut in half than see transfer rate
doubled, simply because most fs code is going to be seeking like mad under
normal use.
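To put rough numbers on that, here's a back-of-envelope calculation in C.
The 23ms access time and 1.3MB/sec transfer rate are the figures quoted
above; the 8KB block size and the one-seek-per-block model are my own
(hypothetical) assumptions:

    /* Effective throughput for random reads: one average-access-time
     * seek plus one media transfer per block.
     */
    #include <stdio.h>

    static double eff_rate(double access_ms, double xfer_mb, double blk_kb)
    {
        double xfer_ms = (blk_kb / 1024.0) / xfer_mb * 1000.0;         /* ms per block */
        return (blk_kb / 1024.0) / ((access_ms + xfer_ms) / 1000.0);   /* MB/sec */
    }

    int main(void)
    {
        printf("baseline (23ms, 1.3MB/s): %.3f MB/sec\n", eff_rate(23.0, 1.3, 8.0));
        printf("access time halved:       %.3f MB/sec\n", eff_rate(11.5, 1.3, 8.0));
        printf("transfer rate doubled:    %.3f MB/sec\n", eff_rate(23.0, 2.6, 8.0));
        return 0;
    }

In this model, halving the access time buys roughly a 66% improvement while
doubling the transfer rate buys roughly 12%, because the seek dominates
each operation.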
IMHO, how can software possibly be the bottleneck? Do you mean to say that
the fs code is sucking up so many cpu revs that the disk is twiddling its
thumbs? On what, a Z80? Granted, this can happen on a 68020 NFS server pretty
easily, but we are talking primarily about new machines, neh?
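One crude way to answer that on a given machine: compare cpu time against
wall-clock time over a big sequential read. A minimal sketch, with "bigfile"
standing in for whatever large file you have handy:

    /* If user+sys cpu time comes close to elapsed time, the fs code
     * really is the bottleneck; if elapsed time dwarfs it, the disk
     * is what's twiddling its thumbs.
     */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/time.h>
    #include <sys/resource.h>

    int main(void)
    {
        static char buf[63 * 1024];
        struct rusage ru;
        struct timeval t0, t1;
        int fd = open("bigfile", O_RDONLY);

        if (fd < 0)
            return 1;
        gettimeofday(&t0, (struct timezone *)0);
        while (read(fd, buf, sizeof buf) > 0)
            ;                               /* sequential read to EOF */
        gettimeofday(&t1, (struct timezone *)0);
        getrusage(RUSAGE_SELF, &ru);
        printf("elapsed %ld us, user %ld us, sys %ld us\n",
               (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec),
               ru.ru_utime.tv_sec * 1000000L + ru.ru_utime.tv_usec,
               ru.ru_stime.tv_sec * 1000000L + ru.ru_stime.tv_usec);
        close(fd);
        return 0;
    }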
It's useless to mention benchmarks without qualifying them. Which of the
figures mentioned were the result of mixed (moderately sized, say 63k) reads
and writes among multiple open files?
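For concreteness, the sort of run I mean would look something like the
sketch below; the file count, op count, and file names are invented for
illustration, and you'd wrap the main loop in timing calls (or just run it
under time(1)) to get a figure:

    /* Mixed 63k reads and writes spread round-robin across several
     * open files; a workload shape, not a polished benchmark.
     */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    #define NFILES 4
    #define NOPS   256
    #define BLK    (63 * 1024)

    int main(void)
    {
        static char buf[BLK];
        int fd[NFILES], i, op;
        char name[32];

        for (i = 0; i < NFILES; i++) {
            sprintf(name, "bench.%d", i);
            if ((fd[i] = open(name, O_RDWR | O_CREAT, 0644)) < 0)
                return 1;
        }
        for (op = 0; op < NOPS; op++) {
            i = op % NFILES;
            if (op & 1) {                   /* alternate reads... */
                lseek(fd[i], 0L, 0);        /* 0 == SEEK_SET */
                read(fd[i], buf, BLK);
            } else {                        /* ...and writes */
                write(fd[i], buf, BLK);
            }
        }
        for (i = 0; i < NFILES; i++)
            close(fd[i]);
        return 0;
    }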
sgf at cfm.brown.edu