compress -d
Admin
root at hawkmoon.MN.ORG
Sun Aug 7 17:16:42 AEST 1988
Has anyone seen this problem? I'm running Xenix 2.1.3 on a Unisys PC/IT with
2.5M of memory. Frequently, when rnews is running, a compress -d is spawned to
uncompress the incoming batched news file (e.g., "rnews 88040188933" or
something like that). But the compress -d frequently accumulates more than 300
minutes of CPU time. I have even tried this by hand on a news file, and it had
used over 180 minutes before I killed it. Is this caused by bad batches of news
(we ran out of disk space a little while ago), or is something wrong with
compress itself? Is there anything I can do to fix it? I don't know how long
the compresses would run without my killing them; either I have killed them or
the machine has crashed before I have seen a long-running compress finish. Of
course, I'm not keeping track of every process; this is an observation via
"ps -fe" every so often. This apparent anomaly makes receiving news *real* slow!
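One way to narrow down the "bad batch" theory without burning hours of CPU: a
file written by compress(1) always begins with the two magic bytes 0x1f 0x9d, so
a batch that fails this check was mangled before compress ever saw it (a
truncated-but-valid header can still slip through). A minimal sketch, assuming
the standard od(1) utility; the file names here are made up for illustration:

```shell
# Sanity-check that a file at least starts like compress(1) output.
# compress's magic number is the two bytes 0x1f 0x9d.
looks_like_compress() {
    # Dump the first two bytes as hex and strip whitespace for comparison.
    magic=$(od -An -tx1 -N2 "$1" | tr -d ' \n')
    [ "$magic" = "1f9d" ]
}

# Demonstration on a fabricated two-byte header (octal 037 235 = hex 1f 9d):
printf '\037\235' > /tmp/demo.Z
if looks_like_compress /tmp/demo.Z; then
    echo "/tmp/demo.Z: looks like compress data"
else
    echo "/tmp/demo.Z: not compress data"
fi
```

Running this against the queued batches before rnews gets to them would at
least separate "garbage file" from "compress bug" as the cause.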
--
Derek Terveer root at hawkmoon.MN.ORG
w(612)681-6986
h(612)688-0667