UNIX domain sockets throughput
Sandeep Mehta
sxm at philabs.philips.com
Thu Aug 17 22:56:08 AEST 1989
Unless I'm missing something really obvious, I can't figure out why there
is a singularity in throughput using UNIX domain sockets (see below). I'm
using them under SunOS 4.0 and there's nothing special about the setup.
The messages are n bytes long each way, so 2n bytes are used in the
round-trip throughput calculation. The loops were timed with the time-of-day
clock for 1000 iterations each, and about 4 runs were done for each
message size. The TOD clock, although capable of microsecond timing, can
only yield a true resolution of 10 msec, because the Intersil 7170 is run
in 100 Hz mode (actually I think every other interrupt is dropped, so it
may be 50 Hz?). So I could only time all the iterations together and
average; otherwise the standard deviations were higher than the mean
(i.e., the area under the tail of the distribution was very high). The
tests were done between a 3/60 and a 3/260 running in multi-user mode, at
different times of day, with no special loading attempted. The client and
server processes were running (almost) in sync.
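
For concreteness, the core of the test looks roughly like the sketch
below (a minimal version; NITER and MSGSIZE are illustrative, a
socketpair() stands in for the actual client/server connection over an
AF_UNIX socket, and most error handling is omitted):

/*
 * Minimal sketch of the timing loop.  NITER and MSGSIZE are
 * illustrative; the child simply echoes each message back.  A robust
 * version would also loop on short write()s.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/socket.h>

#define NITER   1000            /* iterations per run */
#define MSGSIZE 1024            /* message size n, in bytes */

static int
readn(int fd, char *p, int n)   /* read exactly n bytes; stream
                                   sockets may return short counts */
{
    int got, left = n;
    while (left > 0) {
        if ((got = read(fd, p, left)) <= 0)
            return -1;
        p += got;
        left -= got;
    }
    return n;
}

int
main()
{
    int sv[2], i;
    char buf[MSGSIZE];
    struct timeval t0, t1;
    double secs, kbps;

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
        perror("socketpair");
        return 1;
    }
    if (fork() == 0) {                  /* child: echo server */
        for (i = 0; i < NITER; i++) {
            readn(sv[1], buf, MSGSIZE);
            write(sv[1], buf, MSGSIZE);
        }
        return 0;
    }
    gettimeofday(&t0, (struct timezone *)0);
    for (i = 0; i < NITER; i++) {       /* one round trip = 2n bytes */
        write(sv[0], buf, MSGSIZE);
        readn(sv[0], buf, MSGSIZE);
    }
    gettimeofday(&t1, (struct timezone *)0);
    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    /* Kb/s here taken as kilobytes/sec, to match the table below. */
    kbps = (2.0 * MSGSIZE * NITER) / secs / 1024.0;
    printf("%d-byte messages: %.0f Kb/s\n", MSGSIZE, kbps);
    return 0;
}
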
Round-trip message (bytes)    Throughput (Kb/s)
          32                         36
         512                        502
        1024                        842
        1300                        717   ---
        1400                        676    |  singularity
        1460                        669    |
        1492                        651   ---
        1536                        906
        2032                       1127
        2984                       1230
        4096                       1402
My question: what buffer sizes are involved underneath? Is there more
than buffer management that contributes to this? I've waded through some
include files with no luck. Don't BSD UNIX versions provide 2032-byte
buffers to all connections, or is it only to INET type connections?
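
Rather than digging through include files, I suppose one could just ask
the socket what the defaults are; something like the sketch below should
show what the kernel hands a fresh socket (illustrative only; on newer
systems the length argument to getsockopt() is a socklen_t):

/*
 * Sketch: query the default send/receive buffer sizes on a fresh
 * AF_UNIX stream socket with getsockopt().  The numbers printed are
 * whatever this kernel reports.
 */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

int
main()
{
    int s, sndbuf, rcvbuf;
    int len = sizeof(int);      /* socklen_t on newer systems */

    if ((s = socket(AF_UNIX, SOCK_STREAM, 0)) < 0) {
        perror("socket");
        return 1;
    }
    if (getsockopt(s, SOL_SOCKET, SO_SNDBUF, (char *)&sndbuf, &len) < 0)
        perror("getsockopt SO_SNDBUF");
    len = sizeof(int);
    if (getsockopt(s, SOL_SOCKET, SO_RCVBUF, (char *)&rcvbuf, &len) < 0)
        perror("getsockopt SO_RCVBUF");
    printf("default SO_SNDBUF = %d, SO_RCVBUF = %d\n", sndbuf, rcvbuf);
    return 0;
}

And setsockopt() with the same options would let one vary the buffer
sizes to see whether the dip near 1492 bytes moves with them.
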
Since I'm at a loss for answers, any help would be appreciated.
Thanks in advance.
sandeep
--
Sandeep Mehta ...to be or not to bop ?
uunet!philabs!bebop!sxm sxm at philabs.philips.com