dump/restore
Rudy.Nedved at h.cs.cmu.edu
Fri Nov 21 06:27:06 AEST 1986
Nancy,
When I worked on dump for 4.1BSD (I have not worked on
the 4.2 or 4.3 versions but believe the design is still the
same), the act of running several passes over the inode
list for a filesystem was prone to race conditions, and
the act of chasing down disk addresses without going
through the operating system was prone to problems.
The classic failure mode was when a very large file was
truncated at just the right time while another large file
was being created. Dump would be processing the disk
addresses from its copy of the inode. If the released
disk addresses were reused fast enough...you would get the
wrong data recorded on tape. If the file was large enough
and you were chasing down indirect pointers, and the
indirect pointer block had been replaced by random data...
you would get a large number of block read (bread)
failures.
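The race above can be sketched as a toy simulation (Python here for
brevity; the real dump is C and reads the raw disk device). The
`disk` dict, the block numbers, and the file names are all invented
for illustration, not taken from the dump source:

```python
# Toy model: the "disk" maps block numbers to their contents.
disk = {}

def create_file_a():
    # "File A" occupies blocks 10..13; return its (toy) inode.
    for b in range(10, 14):
        disk[b] = f"A-data-{b}"
    return {"blocks": [10, 11, 12, 13]}

def dump_file(inode_copy):
    # Dump follows raw disk addresses from its own copy of the
    # inode, bypassing the operating system's view of the file.
    return [disk[b] for b in inode_copy["blocks"]]

inode_a = create_file_a()
snapshot = dict(inode_a)        # dump takes its copy of the inode...

# ...meanwhile file A is truncated and its freed blocks are
# reallocated to a newly created file B:
for b in inode_a["blocks"]:
    disk[b] = f"B-data-{b}"

tape = dump_file(snapshot)      # stale addresses -> file B's data
# tape now holds B's contents recorded under file A's name
```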
The only solution that CMU came up with for running backups
was to run level 0 dumps on an inactive file system, so we
*knew* the data was valid, and to modify dump so that when we
ran higher level dumps on active filesystems...a bread failure
during the processing of an indirect block would cause the
rest of the file to be ignored.
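A minimal sketch of that modification, again as toy Python rather
than the actual C change to dump: on a bread failure while walking
an indirect block, give up on the rest of that file instead of
aborting the whole dump. All names here are hypothetical:

```python
# Toy disk: blocks 100-101 are still valid; the indirect block now
# also "points" at 102 and 103, which are garbage addresses.
disk = {100: "data-100", 101: "data-101"}

def bread(addr):
    # Simulated block read; an unknown address fails like bread did.
    if addr not in disk:
        raise IOError(f"bread: bad block {addr}")
    return disk[addr]

def dump_via_indirect(indirect_addrs):
    blocks = []
    for addr in indirect_addrs:
        try:
            blocks.append(bread(addr))
        except IOError:
            break  # the CMU change: skip the rest of this file,
                   # keep dumping the rest of the filesystem
    return blocks

tape = dump_via_indirect([100, 101, 102, 103])
```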
We seem to have gotten away with it, but we are relying
more and more on incrementals, which have a large potential
for recording the wrong data for a file. The only logic
that works is that old, stable files are backed up, and that
tends to be good enough for us.
-Rudy