SUMMARY: distributed file system (esp NFS) performance
paul at cosc.canterbury.ac.nz
Mon Oct 8 07:30:00 AEST 1990
A few weeks ago I sent out a request for information on distributed file
system simulation models and workload characterisation, and for
information on NFS performance in particular. Here is a summary of the
responses I received - thanks to all who replied.
I will post something when the honours project is complete.
(I used the acronym TOCS in the original posting - it stands for ACM
Transactions on Computer Systems.)
Legato and Auspex
-----------------
Both produce products which are "NFS speedup devices". Legato has some
tools and information available from a mail server (send a message
containing "send index" to request at legato.com). One tool is the NFS
benchmark program nhfsstone. The Prestoserve 2.0 release notes can be
retrieved with "send prestoserve release_notes_2.0". I also retrieved
NFS.perf.slides.ps by sending the request:
send prestoserve NFS.perf.slides.ps
The description of this file in the index is:
This file contains slides from a talk by Bob Lyon on NFS performance, last
given at the Western Regional Sun Users Group in Sunnyvale CA on July 20,
1990. Various problem areas and their associated solutions are presented,
including networking and inter-networking issues, CPUs, controllers,
disks, clients, and server read and write caches.
Apparently Auspex has a number of technical reports that might be of
interest, but I have been unable to get any information on them (if
anybody can tell me anything about them, I would appreciate it).
Comments
--------
Some comments I thought were worth including:
1. My understanding of these matters is that a crucial point is the
synchronous NFS writes (a consequence of the stateless server design).
Legato's product gets its performance by writing to a battery-backed RAM
buffer first, enabling a much quicker write acknowledgement. Other NFS
server vendors have bent this write policy to gain performance at a
certain risk to file system consistency. (A rough sketch of the latency
difference appears after these comments.)
2. It sounds like a steady-state model is the objective. Now, a model able
to account for minor and major network storms would really be something...
3. I think characterizing client configuration is a very hard problem.
It's straightforward to describe in words what a user does, e.g. what
applications they run, what their duty cycle is, how many windows they
typically have open, etc. It's also straightforward to use "nfsstat" to
find out what NFS load they're generating. But the problem of how to
convert from one to the other continues to elude me.
4. How to estimate cache effectiveness in a simulation model (i.e. the hit
rate) based on the number of requests and some locality estimate, rather
than on a full reference trace?
I believe this is also a very difficult problem. One unknown is the
"locality of reference" of different client workloads. If clients request
the same disk sectors over and over, even a small cache will have a high
hit rate. If on the other hand client requests vary a lot, no amount of
cache will do much good. (A toy simulation of this appears after these
comments.)
A second variable that's likely to trip you up is that the virtual memory
subsystem in SunOS 4.1 is slightly different from the one in SunOS 4.0,
which in turn is *vastly* different from the one in SunOS 3.x. Cache
performance results could change completely depending on which OS virtual
memory algorithm you're running under.
5. Here are a few more interesting questions:
- what is the benefit of larger caches on the server? on the client?
Given that I have money to buy X megabytes of memory, where should I
place them?
- what are the pros and cons of local disks vs. central disks, considering
the performance ratio of SCSI (local) vs. SMD+network+congestion (at the
server)?
6. (BTW, the presence/number of biods on the client end should probably
be included in your model; the first sketch after these comments includes
this as a parallelism parameter.)
7. jstampfl at iliad.West.Sun.COM supplied some NFS benchmark information
for various client/server combinations.
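To illustrate comments 1 and 6, here is a rough back-of-envelope sketch
(in Python; my own illustration, not from any of the responses). The
latency figures, write counts, and biod counts are invented assumptions,
not measurements of any product:

# Toy model of client-visible NFS write cost. The client must wait for
# each write to be acknowledged. A plain server acknowledges only once
# the data is on disk; an NVRAM-buffered server in the style of comment
# 1 can acknowledge as soon as the data reaches battery-backed RAM.
import math

DISK_ACK_MS = 20.0    # assumed synchronous disk write latency (invented)
NVRAM_ACK_MS = 0.5    # assumed battery-backed RAM latency (invented)

def total_time_ms(n_writes, ack_latency_ms, n_biods=1):
    # With several biods the client keeps several writes outstanding
    # at once, so acknowledgement waits overlap (comment 6).
    return math.ceil(n_writes / float(n_biods)) * ack_latency_ms

if __name__ == "__main__":
    for biods in (1, 4):
        print("disk acks,  %d biod(s): %8.1f ms"
              % (biods, total_time_ms(1000, DISK_ACK_MS, biods)))
        print("NVRAM acks, %d biod(s): %8.1f ms"
              % (biods, total_time_ms(1000, NVRAM_ACK_MS, biods)))

The buffered server must still get acknowledged data to disk eventually;
the consistency risk mentioned in comment 1 concerns acknowledged but
unwritten data when the buffer is not battery-backed.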
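For the cache question in comment 4, here is a toy simulation in the same
spirit: instead of a full reference trace, block requests are drawn from a
Zipf-like popularity distribution, so a single "skew" parameter stands in
for the locality estimate. All block counts, cache sizes, and skew values
are arbitrary assumptions:

import random

def lru_hit_rate(n_requests, n_blocks, cache_size, skew, seed=1):
    # Simulate an LRU block cache fed requests drawn from a Zipf-like
    # distribution: the block of popularity rank r is requested with
    # probability proportional to 1/r**skew. Larger skew means more
    # locality of reference; skew near 0 is nearly uniform.
    rng = random.Random(seed)
    weights = [1.0 / (rank ** skew) for rank in range(1, n_blocks + 1)]
    requests = rng.choices(range(n_blocks), weights=weights, k=n_requests)
    cache, hits = [], 0            # least recently used block kept first
    for block in requests:
        if block in cache:
            hits += 1
            cache.remove(block)    # re-appended below as most recent
        elif len(cache) >= cache_size:
            cache.pop(0)           # evict the least recently used block
        cache.append(block)
    return hits / float(n_requests)

if __name__ == "__main__":
    for skew in (0.25, 0.75, 1.25):
        rate = lru_hit_rate(20000, 1000, cache_size=50, skew=skew)
        print("skew %.2f -> hit rate %.2f" % (skew, rate))

Even this toy makes the point of the comment: the simulated hit rate
swings widely with the locality parameter, so any hit-rate estimate is
only as good as the locality estimate behind it.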
References (alphabetical order on author)
-----------------------------------------
Roberta Bodnarchuk,
"Modelling Workload in Distributed File System",
MSc thesis, Dept. Computational Science, University of Saskatchewan,
Supervisor Prof. Rick Bunt.
(Available as a research report at the end of the month from:
College of Engineering
Dept. of Computational Science,
University of Saskatchewan,
Saskatoon, Saskatchewan,
Canada, S7N 0W0)
@inproceedings(hamacher:87,
author="Carl Hamacker",
title="{Local Networks For Future Requirements}",
booktitle="Proceedings of the CPIS Edmonton Conference",
year="1987",
pages="200--206",
month="November")
@article(howard:88,
author="John Howard and Michael Kazar and Sherri Menees and David Nichols
and M. Satyanarayanan and R. Sidebotham and M. West",
title="{Scale and Performance in a Distributed File System}",
journal="{ACM Transactions on Computer Systems}",
year="1988",
volume="6",
number="1",
pages="51--81",
month="February")
Chet Juszczak, "Improving the Performance and Correctness of an NFS
Server", in Proceedings of the 1989 Winter USENIX Technical Conference,
San Diego, 1989.
S. Keshav and D. Anderson, "A Workload Model for a Distributed File
System", 19th Annual Pittsburgh Conference on Simulation and Modeling,
1988.
@InProceedings{icdcs10:p212,
crossref = "icdcs10",
author = "Makaroff, D. J. and Eager, D. L.",
title = "Disk Cache Performance for Distributed Systems",
booktitle = icdcs10t,
year = 1990,
pages = "212--219"
}
(where icdcs10 is:
@Proceedings{icdcs10,
title = "10th Int.\ Conf.\ on Distributed Computing Systems",
year = 1990,
publisher = "IEEE Computer Society Press",
organization = "IEEE",
address = "Paris (France)",
month = "May--June"
})
Joseph Moran, Russel Sandberg, Don Coleman, Jonathan Kepecs, Bob Lyon,
"Breaking Through the NFS Performance Barrier", presented at the
European UNIX Users Group (EUUG) conference.
(The sender noted: I don't recall exactly where the second paper was
published. It may have been in the UNIFORUM proceedings (1990).)
David Nichols, "Multiprocessing in a Network of Workstations", PhD thesis
tech report CMU-CS-90-107, CS Dept, CMU, Pittsburgh, PA 15213.
(Contains a discrete-event simulation of the Andrew File System).
@inproceedings(ousterhout:85,
author="John Osterhout and Herve Da Costa and David Harrison
and John A. Kunze and Mike Kupfer and James G. Thompson",
title="{A Trace-Driven Analysis of the UNIX 4.2 BSD File System}",
booktitle="{Proceedings of the Tenthn ACM Symposium on
Operating Principles}",
year="1985",
pages="15--24",
month="December")
@article(ousterhout:88,
author="John Ousterhout and Andrew Cherenson and Frederick Douglis and
Michael Nelson and Brent Welch",
title="{The Sprite Network Operating System}",
journal="{IEEE Computer}",
year="1988",
month="February")
@article(ousterhout:89,
author="John Ousterhout and Fred Douglis",
title="{Beating the I/O Bottleneck: A Case for Log-Structured File
Systems}",
journal="{Operating Systems Review}",
year="1989",
month="January")
@mastersthesis(pasheva:89,
author="Elina S. Pasheva",
title="{A Hierarchical Distributed File Cache}",
school="{Department of Electrical Engineering,
University of Toronto, Computer Group}",
type="{MASc thesis}",
year="1989",
month="August")
@article(saltzer:84,
author="J. H. Saltzer and D. P. Reed and D. D. Clark",
title="{End-To-End Argument in System Design}",
journal="{ACM Transactions on Computer Systems}",
year="1984",
volume="2",
number="4",
month="November")
@article(smith:85,
author="Alan Jay Smith",
title="{Disk Cache-Miss Ratio Analysis and Design Considerations}",
journal="{ACM Transactions on Computer Systems}",
year="1985",
volume="3",
number="3",
pages="161--203",
month="August")
@inproceedings{srinivasan:spritely,
author = "V. Srinivasan and Jeffrey C. Mogul",
title = "{Spritely NFS: Experiments with Cache-Consistency
Protocols}",
booktitle = sosp12,
year = 1989,
pages = "45--57",
comment = "Describes the addition of the Sprite file cache
consistency protocol to an existing NFS system, allowing a fair
comparison of the two. The mixture is called Spritely NFS, and usually
performed better than NFS, with the additional advantage of providing
full file cache consistency.",
keyword = "Sprite, distributed file system, file caching, NFS"
}
(where sosp12 is
@string{sosp12 = "Proceedings of the 12th {ACM} Symposium on Operating
Systems Principles"})