Quotas
Barry Shein
bzs at bu-cs.BU.EDU
Thu Jul 13 02:12:16 AEST 1989
Well, the current quota system does have a notion of soft and hard
limits, which seems to be trying to address the need for temporary
storage. (I assume you mean something like being able to get a big core
dump for debugging and then deleting it when done; a badly set up
quota system can drive you to tears in that regard.) You might have to
get more specific on why that doesn't answer this need.
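Roughly, the soft/hard semantics amount to something like the sketch
below. The struct and field names are invented for illustration (not
the real dqblk interface), and the units are just blocks:

    #include <stdio.h>

    struct my_quota {
        long blocks_used;     /* current usage */
        long soft_limit;      /* may be exceeded, but only for a grace period */
        long hard_limit;      /* may never be exceeded */
        long grace_expires;   /* time by which the soft overage must be cured */
    };

    /* 0 = write may proceed, -1 = refuse it */
    int check_write(struct my_quota *q, long new_blocks, long now)
    {
        long total = q->blocks_used + new_blocks;

        if (total > q->hard_limit)
            return -1;                 /* hard limit is an absolute ceiling */
        if (total > q->soft_limit && now > q->grace_expires)
            return -1;                 /* over soft limit and grace ran out */
        return 0;                      /* under soft limit, or still in grace */
    }

    int main()
    {
        struct my_quota q = { 900, 1000, 1500, 100 };
        printf("big core dump allowed now? %s\n",
               check_write(&q, 400, 50) == 0 ? "yes" : "no");
        return 0;
    }

The point of the grace period is exactly the core-dump case: you can
go over the soft limit for a while, as long as you clean up before it
expires.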
As to multiple partitions, there is some logic to being able to say
something like "a total of X bytes on all partitions" rather than
breaking out partitions. It would have to be optional, since on some
systems the whole point is to keep the stuff off of certain partitions
(e.g. on one system I managed we had an entire disk drive devoted to
nothing but temp files, since that was what this group needed; they
generated these huge checkpoint files during their array inversions etc.)
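The check itself is trivial once you decide to sum across partitions.
A toy sketch, where the usage table just stands in for whatever
per-filesystem quota records the real system keeps:

    #include <stdio.h>

    #define NPARTS 3

    /* per-partition usage for one user, in blocks (made-up numbers) */
    long usage[NPARTS] = { 1200, 300, 4500 };

    int over_global_quota(long global_limit)
    {
        long total = 0;
        int p;

        for (p = 0; p < NPARTS; p++)
            total += usage[p];
        return total > global_limit;
    }

    int main()
    {
        printf("over the all-partitions quota? %s\n",
               over_global_quota(5000) ? "yes" : "no");
        return 0;
    }

The hard part isn't the arithmetic, it's deciding whether the limit is
per-partition, global, or both.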
I would agree that somewhere in here the quota system is serving two
masters (how much space versus where that space is being used.) That
usually makes a software system feel clunky.
The sum(all quotas) >= free_space problem is a real sticky issue; I'm
not at all sure *any* solution exists. I've certainly never heard of
an O/S (which had a quota system) that wasn't as helpless in the face
of that problem as Unix.
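Just to make the overcommit concrete, a toy calculation with made-up
numbers:

    #include <stdio.h>

    int main()
    {
        /* per-user quotas in blocks, and the partition size (all invented) */
        long quotas[] = { 20000, 20000, 50000, 10000 };
        long fs_blocks = 80000;
        long sum = 0;
        int i;

        for (i = 0; i < 4; i++)
            sum += quotas[i];

        printf("promised %ld blocks on an %ld block partition (%.2fx overcommitted)\n",
               sum, fs_blocks, (double)sum / (double)fs_blocks);
        return 0;
    }

Every site I know of runs overcommitted like this and just hopes the
users don't all cash in their quotas at once.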
I agree that it does seem dumb that a disk can sit there for months
with 100MB or more free and the quota system will never let me use
it. There's certainly no profit to having it empty and spinning under
the head all day.
Perhaps what is needed (everyone will hate this) is yet another file
bit which says "this is a temporary file": such a file is not counted
against the quota system, and if the file system nears filling and the
file is not open, it gets deleted (uh oh, the Grim File Reaper
returns!) If it is open you may get fussed at with messages, and/or if
a reap doesn't reclaim enough space, the process gets killed and the
files reaped again (yes, this is all taken from ITS and other PDP-10
systems.) Most of that could be locally decided.
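For concreteness, here is a very rough sketch of one reaping pass.
Nothing like the temporary-file bit exists today, so is_temp() is pure
hand-waving, and a real reaper would walk the whole filesystem, skip
files that are currently open, and order its victims by some policy
like the age/size weighting sketched further down:

    #include <stdio.h>
    #include <unistd.h>
    #include <dirent.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    #define WANT_BLOCKS 5000L   /* reap until this many 512-byte blocks come back */

    static int is_temp(struct stat *st)
    {
        return 0;   /* placeholder for the proposed "temporary" file bit */
    }

    /* one reaping pass over a single directory */
    long reap_dir(char *dir)
    {
        DIR *d;
        struct dirent *e;
        struct stat st;
        char path[1024];
        long reclaimed = 0;

        if ((d = opendir(dir)) == NULL)
            return 0;
        while (reclaimed < WANT_BLOCKS && (e = readdir(d)) != NULL) {
            sprintf(path, "%s/%s", dir, e->d_name);
            if (stat(path, &st) < 0 || !S_ISREG(st.st_mode))
                continue;
            if (!is_temp(&st))
                continue;                  /* ordinary files are never touched */
            reclaimed += st.st_size / 512;
            unlink(path);                  /* the reap itself */
        }
        closedir(d);
        return reclaimed;
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            printf("reclaimed %ld blocks from %s\n", reap_dir(argv[1]), argv[1]);
        return 0;
    }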
That means I can create unquota'd files at will but they might go away
randomly. They might live for months, they might live for minutes. The
df command and some sense of how busy the system is give me some idea;
if I really need guaranteed file space and this unpredictability
doesn't cut it, I'd better make other arrangements. But if there were
space I certainly could create a large core file for debugging, or a
temporary data file etc., with no problem.
I'd make a tmp file like that because I couldn't otherwise create it
without going over my quota (hence, I couldn't create it at all.)
There's probably no advantage to making a temp file when I could make
a real file (plus or minus how they might get accounted for in a
chargeback scheme, which is a local policy issue.)
/tmp is similar except there's no discipline like this for reaping it
and most systems clear it on re-boot (these temp files would not be
cleared unless the system needed the room.)
Obviously there are policy issues involved: whether you reap all such
files or only those you happen to find first (e.g. in inode order)
until some threshold of desirable free space is reached, whether you
weight age against size, etc. Again, policy. If the system maintains
the information upon which such policies could be locally implemented
then its job is done (i.e. mechanism, not policy.)
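As one example of a locally tunable policy, a reap-ordering score that
favors big, long-unused files; the weighting here is arbitrary, which
is exactly the sort of thing a site would decide for itself:

    #include <stdio.h>
    #include <time.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    /* higher score = reap first: big files untouched for a long time */
    double reap_score(struct stat *st, time_t now)
    {
        double age_days = (double)(now - st->st_atime) / 86400.0;
        double size_kb  = (double)st->st_size / 1024.0;

        return size_kb * (1.0 + age_days);
    }

    int main()
    {
        struct stat st;

        if (stat("/tmp", &st) == 0)
            printf("score for /tmp itself: %.1f\n", reap_score(&st, time(NULL)));
        return 0;
    }

The reaper from the earlier sketch would sort its candidates by this
score and work down the list until enough space came back.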
Granted, various hostile attacks on such a system are possible
(various tricks), but at least they take quite a bit of effort, and
ultimately you could always track down an abuser (they're using a lot
of disk space) and rip their face off. Abuse should be relatively rare
once policies are clarified, and the holes can usually be closed.
Worth a thought.
--
-Barry Shein
Software Tool & Die, Purveyors to the Trade
1330 Beacon Street, Brookline, MA 02146, (617) 739-0202
Internet: bzs at skuld.std.com
UUCP: encore!xylogics!skuld!bzs or uunet!skuld!bzs