Large file systems

Jeffrey Kegler jeffrey at algor2.UUCP
Thu May 11 23:57:47 AEST 1989


In article <675 at maxim.erbe.se> prc at erbe.se (Robert Claeson) writes:
=In article <735 at kcdev.UUCP>, gentry at kcdev.UUCP (Art Gentry) writes:
=
== Have always been a little nervous about multiple tape archives anyhow,
== Murphy says "if you need a file from tape #9, tape #8 will be corrupt". :-)
=
=Yes...

I have always considered the behavior of making all subsequent volumes
unreadable if a previous one is unreadable (lost, etc.) a serious bug.
Hence I never make multi-volume backups.

I am clumsy, and do a lot of file-system-crunching driver work, so I need
backups pretty often.  To date, I have had only one unsuccessful restore
out of dozens (my tape drive broke, and the restore will probably work when
the new one arrives).  The usual track record I see elsewhere is that one
in two restores fails.

My rules for backups:

1) Backup procedures should be unintelligent, in fact stupid.  Clever
selection of only the directories you will need is likely to miss one
crucial file somewhere.  Your backups should cover at least whole file
systems at a time, if not the universe.  Assume that whoever is doing the
backups is really dumb, not paying attention, or both.

Exception:  Special project backups of what you are working on at the
moment, if you have a fuller backup scheme in place, sufficient to prevent
catastrophic losses.

2) Never use incremental backups (files changed since the last backup).
The reliance on two restores increases the risk factor too much.

Exception:  Incrementals done for a little extra security where the basic
backup scheme is sufficient to prevent major losses.  In other words, where
you are not relying on the incremental for anything major.
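
For that exception case, a minimal sketch in Bourne shell (the timestamp
file /etc/last-full, the file system, and the tape device are all
assumptions; touch the timestamp whenever the full backup is made):

    # Archive only files modified since the last full backup.
    find /usr -newer /etc/last-full -type f -print |
    cpio -oc > /dev/rmt0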

3) Never do a backup onto multiple media where you are depending on the
contents of one volume to restore another.  In fact, where it makes sense
on the media (Bernoulli boxes, for example, or other random-access media
with capacity over 2 megabytes), I will break up a backup even within a
single physical volume, as sketched below.
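
For instance, given per-chunk file lists like the ones built under
Techniques below, each chunk can go to its own self-contained archive,
so no archive depends on any other (the mount point and file names are
assumptions):

    # One independent cpio archive per chunk on the same cartridge;
    # losing one archive file costs only that chunk.
    n=0
    for list in /tmp/chunk.*
    do
        n=`expr $n + 1`
        cpio -oc < $list > /mnt/backup.$n
    done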

4) Always do a verify pass over the backup volumes immediately after
creating them.
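
A minimal verify pass, assuming a cpio archive on /dev/rmt0 (the device
name is an assumption):

    # Read the entire archive back and list its contents; a read
    # error or truncated volume shows up now, not at restore time.
    cpio -icvt < /dev/rmt0 > /tmp/backup.toc || echo "verify FAILED"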

5) Use backup methods that conveniently allow you to restore a single file.
If the only easy way to restore stomps your entire file system, you are
creating some pretty nasty potential choices for the restorer.
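
cpio qualifies: it can pull back one file by pattern without disturbing
anything else (the path below is hypothetical):

    # Extract a single file, creating any needed directories (-d);
    # everything else on the volume is left alone.
    cpio -icvd 'usr/src/sys/driver.c' < /dev/rmt0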

6) Use backup methods that allow you the greatest range of restore options.
If you have a choice between media that only one drive can read and media
that two drives can read, guess which gives you better odds.
Remember, the circumstances under which you do restores are usually less
than optimal, and often bad beyond the imagination of the person doing the
backup.

No backup procedure wastes more time than one that will let you down
when you need it.

Techniques:  As long as the above rules are followed, anything that works
is OK.  I personally (and I may be behind the times) use ff to generate
lists of file names and sizes by file system, a shell script to break the
file name list into 1-megabyte or volume-sized chunks, as appropriate, and
then cpio.  I cpio -icvt them back and do a compare with the actual
contents of the file system, by name and file size (log files, of course,
will differ, and so will the set of temporary files).
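
A sketch of that pipeline in Bourne shell (find and ls stand in here for
ff, whose output format varies by system; the file system, the 1-megabyte
figure, and the tape device are assumptions):

    # 1. List every file on the file system as "size name".
    find /usr -type f -print |
    xargs ls -ld |
    awk '{ print $5, $NF }' > /tmp/list

    # 2. Break the list into chunks of about 1 megabyte of file
    #    data apiece; each chunk becomes an independent archive.
    awk '{ if (total + $1 > 1000000) { chunk++; total = 0 }
           total += $1
           print $2 > ("/tmp/chunk." chunk) }' /tmp/list

    # 3. Write one self-contained cpio archive per chunk volume.
    for list in /tmp/chunk.*
    do
        echo "mount next volume, then press return"; read x
        cpio -ocv < $list > /dev/rmt0
    done

    # 4. Verify: read each volume's table of contents back and
    #    compare names and sizes against the live file system
    #    (log and temporary files will legitimately differ).
    cpio -icvt < /dev/rmt0 > /tmp/toc
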
-- 

Jeffrey Kegler, President, Algorists,
jeffrey at algor2.UU.NET or uunet!algor2!jeffrey
1762 Wainwright DR, Reston VA 22090


