GNU-tar vs dump(1)
Glenn Mackintosh
glenn at eecg.toronto.edu
Sun Jan 8 17:31:02 AEST 1989
In article <4601 at xenna.Encore.COM> bzs at Encore.COM (Barry Shein) writes:
>
>Another limitation of using tar (which, again, I don't know if gnutar
>attacked) is restoring device entries. This isn't always a problem
>since you usually got a working /dev/ from somewhere to start the
>restore but if there are other device entries which are normally
>dumped/restored this could be a consideration.
Well, looking at the doc on gnu tar, I note that it stores the major and
minor device numbers as well as the file type. I would infer from this that
it will regenerate the devices properly, but I haven't tried it.
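For what it's worth, recreating a device entry from those saved numbers
comes down to a mknod(2) call. A minimal sketch of the idea (the function
and argument names here are my own, not gnu tar's; the type flag and device
numbers would come out of the tar header):

    #include <sys/types.h>
    #include <sys/stat.h>

    /* Recreate a device special file from the type and device numbers
     * saved in the archive.  makedev() packs the major and minor
     * numbers back into a dev_t (on some systems it lives in
     * <sys/sysmacros.h> rather than <sys/types.h>).
     */
    int
    restore_device(char *name, int is_block, int maj, int min)
    {
        int mode = (is_block ? S_IFBLK : S_IFCHR) | 0600;

        return mknod(name, mode, makedev(maj, min));
    }

Of course you have to be root for the mknod to succeed, but that is true of
restoring /dev by any method.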
While I am at it, I might as well answer some other questions that people
have asked.
It has special options for doing incremental backups. On creation it will
put an entry for each directory that it works on, with a list of all files
that were in the directory and a flag indicating whether or not the file was
put into the tarfile. On extraction it can be made to remove files which it
does not find in this directory list (the assumption being that the file was
deleted). These lists can be useful when rebuilding a corrupted filesystem
from incremental backups.
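The extraction side of that amounts to reading the saved name list for a
directory and unlinking anything on disk that is not in it. A rough sketch
of the idea (I haven't checked how the list is actually encoded in the
tarfile, so take the representation here as illustrative):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <dirent.h>

    /* Remove entries in "dir" that do not appear in the name list the
     * archive recorded for that directory at dump time.  "names" holds
     * the recorded file names, "count" how many there are.
     */
    void
    prune_deleted(char *dir, char **names, int count)
    {
        DIR *dp = opendir(dir);
        struct dirent *de;
        char path[1024];
        int i, found;

        if (dp == NULL)
            return;
        while ((de = readdir(dp)) != NULL) {
            if (strcmp(de->d_name, ".") == 0 || strcmp(de->d_name, "..") == 0)
                continue;
            found = 0;
            for (i = 0; i < count; i++)
                if (strcmp(de->d_name, names[i]) == 0) {
                    found = 1;
                    break;
                }
            if (!found) {
                sprintf(path, "%s/%s", dir, de->d_name);
                unlink(path);   /* not in the dump: assume it was deleted */
            }
        }
        closedir(dp);
    }

Obviously you only want this behaviour when you explicitly ask for it, since
an incomplete directory list would happily delete live files.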
The version I have does not do anything special about large blocks of zeros
in a file (as would result from unallocated blocks in the file), but a
modification was posted to one of the gnu newsgroups which causes it not to
allocate blocks for them when they are extracted. This means that a restored
database file would not take any more space than the original did. However,
the tarfile itself will contain these potentially large blocks of useless
information. Since you can get it to make multi-volume archives, you could
actually put all of this on tape, even though it might cross one or even
several tape boundaries (and could take up quite a bit of time and tape in
the process).
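The posted modification presumably works along these lines: when a block
about to be written out is all zeros, seek past it instead of writing it, so
the filesystem never allocates it. A sketch of the technique:

    #include <unistd.h>
    #include <string.h>

    #define BLKSIZE 512         /* tar's block size */

    /* Write one block of extracted file data, seeking over all-zero
     * blocks so the filesystem leaves a hole rather than allocating
     * real disk blocks full of zeros.
     */
    void
    put_block(int fd, char *buf)
    {
        static char zeros[BLKSIZE];     /* all zeros by definition */

        if (memcmp(buf, zeros, BLKSIZE) == 0)
            lseek(fd, (off_t) BLKSIZE, SEEK_CUR);  /* leave a hole */
        else
            write(fd, buf, BLKSIZE);
    }

One wrinkle: if the file ends in a hole you have to force the length out
(write the final block, or ftruncate() to the right size), since seeking
alone does not extend the file.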
I don't see what they could do to get around this with the current tarfile
format, since the file record is just one large character block. They could
potentially add some magic field to indicate that the file had holes in it,
and put, before the contents of the file, another record containing a list
of offsets and sizes indicating how to rebuild the file. This would be a
fairly major incompatibility with older versions, though. Also, it would
mean that the file's inode would have to be scanned beforehand to find out
whether it contained unallocated blocks, in order to decide whether it
needed this new format (or it could just do it for every file, but that
seems unnecessary).
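To make the offset/size idea concrete, the extra record might look something
like this (a purely hypothetical layout, not anything gnu tar defines):

    /* Hypothetical "hole map" record preceding the file contents:
     * each entry gives the offset of a stretch of real data and its
     * length.  Anything between entries is a hole and never appears
     * in the tarfile at all.
     */
    struct sparse_entry {
        long offset;            /* where this chunk goes in the file */
        long numbytes;          /* how much data follows for it */
    };

    struct sparse_map {
        long realsize;          /* true file size, holes included */
        int  nentries;          /* number of entries following */
        struct sparse_entry entry[1];   /* variable-length list */
    };

On extraction you would lseek() to each offset and write numbytes of data
there; everything skipped over comes back as holes for free.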
Glenn Mackintosh
University of Toronto
-------------------------------------------------------------------------------
Include standard disclaimers here.
CSNET: glenn at eecg.toronto.edu
ARPA: glenn%eecg.toronto.edu at relay.cs.net
UUCP: UUNET!ai.toronto.edu!eecg.toronto.edu!glenn
CDNNET: glenn at eecg.toronto.cdn
BITNET: glenn at eecg.utoronto.bitnet (may not work from all sites)