On Tue, 12 Oct 2004, David V. Corbin wrote:
>>> But the very top of the pile of reasons for data loss is human
>>> error, and a simple "recent copy" solves 99% of that.
> Unless of course, the error is not detected until the next overwrite.
> I typically handle 5-10 "disaster recovery" situations a year where
> the client has backup procedures in place. If you exclude crash-type
> failures [system running fine, then dead] and just look at the "I
> lost some information that I need back" cases, the backups are almost
> always worthless, since the rotation has cycled through the entire
> set in the time since the problem actually occurred.
Oh and I've made that very mistake many times, which is why
I've become personally very paranoid about it! (The propagated
backup error is what caused me to lose all my work in 1994.)
It's really hard to do backups right. Or rather, it's easy, but it
usually succumbs to "security guard syndrome": you get numb & lazy.
>>> and a simple "recent copy" solves 99% of that.
...assumes you actually have one! But the context here was a
mail archive, so a pyramidal set isn't needed for that part,
since the data only ever grows.
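
For append-only data like that, a plain mirror is enough; something
like this (host and paths invented for illustration):

    rsync -a /var/mail/archive/ backuphost:/backup/mail-archive/

with no --delete, so a source-side mistake can never clobber anything
already on the backup side.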
> Assuming an "rsync" approach is used, there would presumably be only
> one set. At a minimum, I like to see a pyramid set that is either a
> daily binary progression, or at least a calendar-unit pyramid.
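
(To make those terms concrete, as I read them:

    binary progression:  keep snapshots aged 1, 2, 4, 8, 16, 32... days
    calendar pyramid:    keep ~7 dailies, 4-5 weeklies, 12 monthlies

either way, coverage reaches far back in time while the number of sets
stays small.)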
I use rsync and cp -a to create rotating hard-link
full-snapshot backups on the university servers; each snapshot grows
the 400GB of homedir junk by only about 5%. That prevents "rm -r *"
type data loss; if there's a bad sector under the underlying inode's
data it's a total loss -- but that's what RAID5 and the dual-server
setup are for.
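
The mechanics are roughly the classic rotate-then-rsync trick; a
from-memory sketch (paths and rotation depth invented, not my actual
script):

    rm -rf /backup/snap.6                     # retire the oldest
    for i in 5 4 3 2 1 0; do                  # shift the rest down
        mv /backup/snap.$i /backup/snap.$((i+1)) 2>/dev/null
    done
    cp -al /backup/snap.1 /backup/snap.0      # hard-link copy: cheap
    rsync -a --delete /home/ /backup/snap.0/  # refresh the newest

rsync writes changed files into fresh inodes, so it breaks the hard
links exactly where it should; unchanged files stay shared across all
the snapshots.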
With the oh-so-clever script system I use now, rsync --delete
is very dangerous (I run it only manually), and just last week I
had to drag my entire music collection back down to the home
machine (duh). Right now I'm working on making the scripts smart
enough to punt the whole run if they see a discrepancy.
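
Something along these lines, say (a hypothetical guard, not what the
scripts actually do yet):

    # punt if the source looks suspiciously smaller than the last snapshot
    src=$(find /home -type f | wc -l)
    dst=$(find /backup/snap.0 -type f | wc -l)
    if [ "$src" -lt $((dst * 9 / 10)) ]; then
        echo "source shrank >10% since last run; punting" >&2
        exit 1
    fi

rsync's --max-delete=N flag is a blunter version of the same idea: it
caps how many files any one run is allowed to delete.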
They all require vigilance!