On Tue, Mar 27, 2018 at 10:26:53PM -0300, Paul Berger via cctalk wrote:
On 2018-03-27 10:05 PM, Ali via cctalk wrote:
-------- Original message --------
From: Fred Cisin via cctalk <cctalk at classiccmp.org>
Date: 3/27/18 5:51 PM (GMT-08:00)
To: "General Discussion: On-Topic and Off-Topic Posts" <cctalk at
classiccmp.org>
Subject: RAID? Was: PATA hard disks, anyone?
How many drives would you need, to be able to set up a RAID, or hot
swappable RAUD (Redundant Array of Unreliable Drives), that could give
decent reliability with such drives?
10 -
Two sets of 5-drive RAID 6 volumes in a RAID 1 array.
You would then need to lose 5 drives before data failure is imminent; the 6th one will do
you in. If you haven't fixed a 50 percent failure rate by then, you deserve to lose your data.
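A quick sanity check of that arithmetic (my own sketch, assuming the
layout above: two 5-drive RAID 6 sets, each able to survive two failed
members, mirrored in RAID 1):

    from itertools import combinations

    SETS = [set(range(0, 5)), set(range(5, 10))]  # two 5-drive RAID 6 sets, mirrored

    def data_survives(failed):
        # Each RAID 6 set tolerates up to 2 failed members; the RAID 1
        # mirror survives as long as at least one set survives.
        return any(len(s & failed) <= 2 for s in SETS)

    for k in range(1, 11):
        outcomes = [data_survives(set(c)) for c in combinations(range(10), k)]
        print(k, "failures:", "always safe" if all(outcomes) else "can lose data")
    # Prints "always safe" for 1..5 failures and "can lose data" from 6 on
    # (6 failures only hurt when they split 3-and-3 across the two sets).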
Disclaimer: this is my totally unscientific, unprofessional, and biased estimate. My daily
activities have nothing to do with the IT industry. Proceed at your own peril.
Etc. Etc.
-Ali
To meet Fred's original criteria you would only need 4 to create a minimal
RAID 6 array. In theory a RAID 1 array (mirrored) of 4 or more disks could
also survive a second disk failure as long as one copy of each pair in
the array survives, but you are starting to play the odds, and I know of some
cases where people have lost data. You can improve the odds by having a hot
spare that automatically takes over for a failed disk. One of the most
important things is that the array manager has to have some way of notifying
you that there has been a failure so that you can take action; however, my
observation as a hardware support person is that even when there is error
notification it is often missed or ignored until subsequent failures kill
off the array. It also appears to be a fairly common notion that if you
have RAID there is no need to ever back up, but I assure you RAID is not
foolproof and arrays do fail.
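To put a rough number on "playing the odds" (a sketch, assuming a
striped set of mirrored pairs with N drives in total): once one drive
has failed, a second random failure only destroys data if it lands on
the partner of the first, i.e. with probability 1/(N-1).

    import random

    def second_failure_kills(n_drives, trials=100_000):
        # Monte Carlo check: fraction of trials where the second random
        # failure hits the mirror partner of the first failed drive.
        kills = 0
        for _ in range(trials):
            first = random.randrange(n_drives)
            partner = first ^ 1   # drives (0,1), (2,3), ... form the mirrored pairs
            second = random.choice([d for d in range(n_drives) if d != first])
            kills += (second == partner)
        return kills / trials

    for n in (4, 6, 10):
        print(n, "drives:", round(second_failure_kills(n), 3),
              "vs exact", round(1 / (n - 1), 3))
    # roughly 0.333, 0.2 and 0.111 -- better with more drives, but never
    # zero, hence the hot spare and, above all, working failure notification.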
Repeat 10 times after me: "RAID is NOT backup".
If you only have online backup, you don't have backup, you have
easy-to-erase/corrupt copies.
If you don't have offline offsite backup, you don't have backup, you have
copies that will die when your facility/house/datacenter burns down/gets
flooded/broken into and looted.
And yes, in a previous job I did data recovery from a machine that
sat in a flooded store. It was nicely light-brown (from the muck in the
water) up to about 2 cm below the tape drive, so the last backup tape survived.
It was missing about 24h of store sales data - which _did_ exist as paper
copies, but typing those in by hand ... yuck.
So we shipped the machine to the head office, removed the covers,
put it into a room together with some space heaters and fans blowing
directly on it, and left it for two weeks to dry out.
Then fired it up and managed to scrape all the database data off it
while hearing and seeing (in the system logs) the disks dying.
Why didn't they have offsite backups? Well, that was about 12 years ago
and at that time, having sufficiently fat datalinks between every store
(lots of them) and the head office was deemed just way too [obscenity]
expensive. We did have datalinks to all of them, so at least we got
realtime monitoring.
There are good reasons why part of my private backup strategy is
tapes sitting in a bank vault.
I'm also currently dumping an it-would-massively-suck-to-lose-this dataset
to mdisc BD media. There I'm reasonably confident about the long-term
survival of the media; what worries me is the long-term availability of
the _drives_. Ah well, if you care about the data, you'll eternally have
to forward-copy anyway.
One of the big problems with using large
disks to build arrays is that the number of accesses just to build the array may
put a serious dent in the spec'd number of accesses before error, or in some
cases even exceed it.
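For a rough sense of scale (my own assumed numbers, not from any
particular drive's spec sheet: 10 TB disks and the common consumer
rating of one unrecoverable read error per 1e14 bits read):

    # Back-of-envelope: expected unrecoverable read errors (UREs) from
    # reading every drive in an array end to end once, e.g. during an
    # initial build or a verify pass.
    DRIVE_TB = 10           # capacity per drive (assumed)
    N_DRIVES = 8            # drives in the array (assumed)
    URE_RATE = 1e-14        # errors per bit read (typical consumer rating)

    bits_per_drive = DRIVE_TB * 1e12 * 8
    total_bits = bits_per_drive * N_DRIVES
    print(f"bits read: {total_bits:.1e}, expected UREs: {total_bits * URE_RATE:.1f}")
    # 6.4e+14 bits read, ~6.4 expected UREs -- well past the point where
    # the rating says you should expect at least one error.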
That is actually becoming a problem, yes. Even more so for rebuilds - with
RAID5, you might encounter a second disk failure during the rebuild, at
which point you are ... in a bad place. Forget about RAID5, go straight
to RAID6.
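One concrete way the "second failure during rebuild" shows up is a
single unrecoverable read error on one of the surviving drives. A
sketch with the same assumed numbers as above (10 TB drives, 1e-14
UREs per bit read): a RAID5 rebuild has to read every surviving drive
without a single such error, while RAID6 with one dead drive still has
a layer of redundancy left to paper over one.

    import math

    DRIVE_TB = 10
    URE_RATE = 1e-14                      # errors per bit read (assumed)
    bits_per_drive = DRIVE_TB * 1e12 * 8

    def p_clean_rebuild(surviving_drives):
        # Probability of reading all surviving drives with zero UREs.
        total_bits = surviving_drives * bits_per_drive
        return math.exp(total_bits * math.log1p(-URE_RATE))

    for n in (4, 6, 8):                   # total drives in the original RAID5 array
        print(f"{n}-drive RAID5: {p_clean_rebuild(n - 1):.1%} chance of a URE-free rebuild")
    # ~9% for 4 drives, ~2% for 6, well under 1% for 8 with these numbers;
    # a RAID6 rebuild after a single failure can still correct such an
    # error, which is the point of going straight to RAID6.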
Kind regards,
Alex.
--
"Opportunity is missed by most people because it is dressed in overalls and
looks like work." -- Thomas A. Edison