Compressed Batching w/Restricted uux

Greg Noel greg at ncr-sd.UUCP
Thu Nov 28 17:22:04 AEST 1985


In article <224 at bty.UUCP> yost at bty.UUCP (Brian Yost) offers some code
to allow rnews to automatically do its own decompression.

The idea is a good one; I believe it should be part of the next release
of netnews.  However, the scheme as offered does an unnecessary amount
of copying.  If the standard input is a pipe (or an equivalent stream
such as an Ethernet link -- a common case), reset_stdin() will copy the
data to a file, Brian's code will then have uncompress copy that into a
\second/ file, and only then is the result read back into rnews.

An improved method would be to (a) use a pipe to get the data from
uncompress to rnews, and (b) use the (undocumented, I know) -n option
of uncompress so that it won't try to read and check the magic number
on the input.  The pain of this is that the first read of the input
must avoid stdio, so that only one byte (or two -- you should check
that second magic byte as well) is consumed rather than a full stdio
buffer.  Then there will be \no/ file copying -- uncompress will read
the standard input and write the pipe while rnews reads the pipe.
With reasonable pipe efficiency, this will never be translated into
physical writes on the disk and should run faster.
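
To make the plumbing concrete, here's a rough sketch of how rnews
might set this up (untested and written from memory; the name
maybe_uncompress and the error conventions are mine, not Brian's
code):

#include <unistd.h>

/*
 * Peek at the first two bytes of the standard input with read(2),
 * avoiding stdio, and if they are the compress(1) magic number
 * (\037\235), splice "uncompress -n" in between through a pipe.
 * No temporary file is ever written.
 *
 * Returns 1 if an uncompress child was spliced in, 0 if the input
 * was not compressed (the caller must account for the two bytes
 * already consumed), and -1 on error.
 */
int
maybe_uncompress(void)
{
    unsigned char magic[2];
    int fds[2];

    if (read(0, magic, 2) != 2)
        return -1;
    if (magic[0] != 0037 || magic[1] != 0235)
        return 0;               /* plain input; see below */
    if (pipe(fds) < 0)
        return -1;
    switch (fork()) {
    case -1:
        return -1;
    case 0:
        /* Child: inherits the compressed stream on fd 0;
         * its output becomes the write end of the pipe. */
        (void) close(fds[0]);
        (void) dup2(fds[1], 1);
        (void) close(fds[1]);
        /* -n: don't look for the magic number we already ate */
        (void) execlp("uncompress", "uncompress", "-n", (char *)0);
        _exit(1);
    default:
        /* Parent: read the uncompressed data off the pipe as
         * if it were the standard input. */
        (void) close(fds[1]);
        (void) dup2(fds[0], 0);
        (void) close(fds[0]);
        return 1;
    }
    /* NOTREACHED */
}

Note that the child inherits the original standard input, so
uncompress reads the compressed batch directly; rnews sees nothing
but clear text coming off the pipe.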

The only efficiency problem I can see is that the eventual stdio
reads would not fall on "natural" filesystem boundaries.  Does anyone
know a portable way of getting stdio to adjust its initial read so
that the following reads will be lined up?  I don't, and when I was
looking at implementing this, I couldn't decide whether the loss of
efficiency would offset the gain from eliminating the file shuffling.
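
The seekable case, at least, can dodge the question entirely: rewind
with lseek(2) before handing the descriptor back to stdio, and its
first read is naturally aligned.  A sketch (untested, and no answer
for pipes):

#include <sys/types.h>
#include <unistd.h>

/*
 * After the raw two-byte check decides the input is NOT compressed,
 * try to rewind so that stdio's first read starts at offset 0, on a
 * natural block boundary.  Returns 0 if the rewind worked; -1 means
 * the input is a pipe or network stream, where block alignment is
 * moot anyway but the two consumed bytes must be handed back some
 * other way (old stdio's ungetc() only promises one character of
 * pushback).
 */
int
realign_stdin(void)
{
    return lseek(0, (off_t)0, SEEK_SET) == 0 ? 0 : -1;
}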
-- 
-- Greg Noel, NCR Rancho Bernardo    Greg at ncr-sd.UUCP or Greg at nosc.ARPA


