Before backing it all up, consider going in and cleaning out all your old
temp, log, old, new, bak, dmp, dma, and similar files. It's easy and can
clear out much more space than you might believe. After a year of doing
that routinely at home, I wrote a routine for work that does this
automatically on several of my company's computers. It took another year
to get it approved, but now I use it routinely on our remote systems as
well. It will return you several megs, at least, and will marginally
speed up access on your drives.
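For the Unix-inclined, here is a minimal sketch of that kind of cleanup -- not Ed's actual routine, just one way to do it. The extension list and the 30-day cutoff are assumptions; review the output, then swap -print for -delete once you trust it:

```shell
# Sketch only: list stale scratch files older than 30 days under a
# given directory. Extensions and the age cutoff are assumptions.
clean_scratch() {
    find "$1" -type f \
        \( -name '*.tmp' -o -name '*.log' -o -name '*.old' \
           -o -name '*.new' -o -name '*.bak' -o -name '*.dmp' \) \
        -mtime +30 -print    # change -print to -delete when confident
}
```

Running it as `clean_scratch /var/tmp` prints the candidates without touching anything, which makes it safe to schedule from cron for a dry run first.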
Cheers!
Ed
----- Original Message -----
From: "David Holland" <dholland(a)woh.rr.com>
To: "Classic Computer Talk" <cctalk(a)classiccmp.org>
Sent: Thursday, January 16, 2003 06:31 PM
Subject: Re: OT: Maxtor drive goes under
(Sorry to be x86 centric here - But)
A word of warning to those who see the integrated Promise controllers
built into a fairly current motherboard and think, "Hey, quick/cheap
way to do RAID."
I know a fellow who was using his on-board Promise controller and was
without his computer, and without access to his data, for over a week
when his MB went south. Fortunately it was under warranty, but it took
(IMHO) far too long for the company to replace it.
Software mirroring may cost you some performance, but at least you
can plug your drives into "any old" MB and start recovery.
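On Linux, for instance, a software mirror built with mdadm is readable on any motherboard that can see the drives. A sketch, with device names as pure assumptions (check yours before running anything):

```shell
# Sketch, assuming Linux software RAID via mdadm and two drives
# with partitions at /dev/hdb1 and /dev/hdd1. Device names are
# assumptions -- verify against your own hardware first.

# Create the RAID 1 mirror:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hdb1 /dev/hdd1

# After the motherboard dies, move the drives to another box,
# reassemble, and mount -- no matching RAID chip required:
mdadm --assemble /dev/md0 /dev/hdb1 /dev/hdd1
mount /dev/md0 /mnt/recovered
```

The point is the second half: reassembly depends only on the metadata mdadm wrote to the disks, not on any particular controller.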
As for backup systems, I'd be more interested in reasonably priced
removable storage hardware. Somewhere along the line, my system grew to
280GB. I'm trying to figure out two things:
1) Why the <expletive> are my drives so full?
2) How the <expletive> am I going to back this up? :-)
IMHO/YMMV/Std Disclaimers/Void where prohibited by law.
David
On Thu, 2003-01-16 at 14:26, Kent Borg wrote:
On Thu, Jan 16, 2003 at 10:20:34AM -0800, Sellam Ismail wrote:
Unless someone has a suggestion for something that's very easy to
use and either dumps backup data to a server or a ZIP disk or
something removable.
I've been thinking about backups of late.
In my recent story, the disk that died didn't take any data with it
because it was mirrored in a RAID 1 array. Cool, but I don't want
to get smug.
Backups are, in part, for protecting against hardware failure, and
RAID 1 protects well against a disk dying, but not against the whole
box being lost to flood, fire, theft, lightning, etc. That is not
complete hardware protection, but it is significant.
But that's not *all* backups are for: backups are also for "time
travel", for example, to help one recover from an errant "rm -rf ~".
RAID 1 clearly doesn't solve this problem, but it does make a formerly
insane approach possible: how about backing up the RAID array with
itself?
Use a (normally not mounted) partition to store historical information
about files. I haven't worked out the details of how to do this, but
if done it would result in a system that would be safe from the most
common sorts of hardware failures (a single disk dying) and software
failures (specific files deleted, corrupted, edited inadvisably). The
physical loss or destruction of the whole box or a low-level
scribbling of the disk (e.g., a wildly misapplied Linux dd command)
would still be a risk, but those risks are far smaller than the risks
of a single unprotected disk.
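One way to work out those details is rotating hard-link snapshots on the spare partition: unchanged files become hard links into the previous snapshot, so each snapshot only costs the space of what changed. A sketch, with all paths and names as assumptions:

```shell
#!/bin/sh
# Sketch of "backing up the array with itself" via hard-link snapshots.
# $1 = directory to protect, $2 = snapshot area (the normally unmounted
# spare partition, mounted for the run), $3 = optional snapshot name.
snapshot() {
    src="$1"
    dest="$2"
    stamp="${3:-$(date +%Y%m%d-%H%M%S)}"
    if [ -e "$dest/latest" ]; then
        # Changed files are copied; unchanged files are hard-linked
        # into the previous snapshot, so they cost no extra space.
        rsync -a --delete --link-dest="$dest/latest" "$src/" "$dest/$stamp/"
    else
        rsync -a "$src/" "$dest/$stamp/"   # first snapshot: plain copy
    fi
    rm -f "$dest/latest"
    ln -s "$stamp" "$dest/latest"          # point "latest" at the new one
}
```

"Time travel" is then just browsing: mount the spare partition read-only and copy the wanted file out of that day's snapshot directory.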
For more robustness, if two physically separated computers can talk to
each other at decent speeds, maybe they could be mutual backups. That
would remove even more risks.
Note that using a disk to back up a disk (be it the same disk or a
different disk) only becomes sensible as disks get so big that they are
very difficult to back up via removable media and even difficult to
fill up! (A 120 GB disk for ~$120? 120 GB is a LOT
of space. My ~measly~ 60 GB disks are damn big.)
Anyone know of a good online backup system for Linux that would work as
I describe? A simple rsync isn't good enough; I want to be able to go
back in time and browse for old files.
-kb