CRCs are chosen for their immunity to pattern-sensitivity, among other
things, and ECCs, likewise, are chosen on the basis of their suitability for
the sector size, types of errors anticipated, etc. CRC-16, the one chosen
for FDC use, was chosen because it was already an established "standard."
There were, in fact, standard TTL devices commonly used to append CRC-16 to
data blocks in communication and disk/tape applications. There are a few
bitwise changes that will make CRC-16 yield a zero, but there's no
reason to believe that introducing a bitwise change at one place or another
will yield the correct data just because CRC-16 yields a zero.
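
If you want to see what that check actually amounts to, here's a little C
sketch of the CRC-16 the FDCs use (X^16 + X^12 + X^5 + 1, register preset
to all ones). It's purely illustrative -- not lifted from any controller's
logic, and a real FDC also folds the address-mark bytes into the CRC -- but
it makes the point: a zero remainder only says the data and CRC are
consistent, not that the data is right.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* CRC-16 as used by the common FDCs: polynomial x^16 + x^12 + x^5 + 1
   (0x1021), fed MSB first, register preset to 0xFFFF.  Sketch only. */
static uint16_t crc16_ccitt(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0xFFFF;
    while (len--) {
        crc ^= (uint16_t)(*buf++) << 8;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

int main(void)
{
    uint8_t sector[514];                  /* 512 data bytes + 2 CRC bytes */
    memset(sector, 0xE5, 512);            /* typical freshly-formatted fill */

    uint16_t crc = crc16_ccitt(sector, 512);
    sector[512] = (uint8_t)(crc >> 8);    /* CRC is appended high byte first */
    sector[513] = (uint8_t)(crc & 0xFF);

    /* Running the data plus its appended CRC through the same register
       leaves zero when nothing has changed ... */
    printf("intact sector:   %04X\n", crc16_ccitt(sector, 514));

    /* ... and flags a flipped bit ... */
    sector[100] ^= 0x01;
    printf("one bit flipped: %04X\n", crc16_ccitt(sector, 514));

    /* ... but zero only means "consistent".  Other corrupted patterns can
       also leave the register at zero, which is the point made above. */
    return 0;
}
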
The 32- and 56-bit "Fire" codes used in Winchester disk controllers back in
the late '70s were tailored for the sector sizes and error types
encountered in that industry and still were, initially, quite difficult to
implement. Current technology has come quite far in terms of error
recovery. In the '70s we were concerned with correcting the occasional
erroneous data block on a disk drive, one not recoverable even with many
retries. Unfortunately, popular processors were not fast enough to do
on-the-fly correction at rates faster than the rotation of the disk.
Consequently the statistics looked bad, since it was firmly established
that it was a "hard" error that couldn't be recovered by rereading.
Dick
----- Original Message -----
From: Pete Turnbull <pete(a)dunnington.u-net.com>
To: <classiccmp(a)classiccmp.org>
Sent: Friday, July 07, 2000 12:44 AM
Subject: Re: Re[2]: Tim's own version of the Catweasel/Compaticard/whatever
On Jul 6, 15:54, Dwight Elvey wrote:
> mann(a)pa.dec.com (Tim Mann) wrote:
> >
> > Another neat trick might be to notice when there is a CRC error and/or
> > a clock violation, and in that case backtrack to a recent past decision
> > where the second most likely alternative was close to the most likely,
> > try it the other way, and see if the result looks better. Obviously one
> > can't overdo that or you'll just generate random data with a CRC that
> > matches by chance, but since the CRC is 16 bits, I'd think it should be
> > OK to try a few different likely guesses to get it to match.
>
> CRC's are quite good at fixing a single small burst.
Dwight, I think you're confusing CRC (Cyclic Redundancy Check) with ECC
(Error Correction Code). CRC is very good at detecting errors, including
bursts of errors that might slip by simpler checks, but AFAIK tells you
next to nothing about where they occurred. ECC tells you enough to correct
small errors. I've not heard of anyone using CRCs for correction (not
directly, anyway).
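
As I read Tim's suggestion, it isn't CRC-based correction in the ECC sense
anyway: the decoder already knows which bit cells were marginal, and the
CRC is only acting as a referee between a couple of near-equal readings.
A rough sketch of the shape of it, in C -- every name here is invented for
the illustration, and crc16_ccitt() is assumed to be a routine that returns
zero over a good sector plus its appended CRC:

#include <stdint.h>
#include <stddef.h>

/* One marginal flux-timing decision: 'byte_index' and 'flip_mask' say
   which decoded bit would change if we re-decided it, 'margin' says how
   close the runner-up reading was (small == shaky). */
struct decision {
    size_t  byte_index;
    uint8_t flip_mask;
    int     margin;
};

/* Assumed: returns 0 over an intact sector plus its appended CRC. */
extern uint16_t crc16_ccitt(const uint8_t *buf, size_t len);

/* Re-decide the shakiest few cells, one at a time, and accept the first
   variant whose CRC comes out clean.  Assume the caller sorted shaky[]
   with the smallest margins first. */
int retry_with_backtracking(uint8_t *sector, size_t len,
                            struct decision *shaky, size_t n_shaky,
                            size_t max_tries)
{
    if (crc16_ccitt(sector, len) == 0)
        return 1;                               /* already good */

    if (max_tries > n_shaky)
        max_tries = n_shaky;

    for (size_t i = 0; i < max_tries; i++) {
        sector[shaky[i].byte_index] ^= shaky[i].flip_mask;
        if (crc16_ccitt(sector, len) == 0)
            return 1;                           /* plausible recovery */
        sector[shaky[i].byte_index] ^= shaky[i].flip_mask;  /* undo */
    }
    return 0;                                   /* still a hard error */
}

The cap on max_tries is the important bit: let it search long enough and
you will eventually hit a wrong pattern that happens to satisfy the 16-bit
check, which is exactly the overdoing-it Tim warns about.
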
> As I recall, CRC32 can fix a single error burst up to 12 bits long. The
> error correcting method is based on the cycle length of the original
> polynomial relative to the length of the data block. What this means is
> that if you have a burst longer than 12 bits, it is more likely that the
> errors will appear to be outside the data block than within the data
> block. Although disks use the V.41 polynomial (X^16 + X^12 + X^5 + 1),
> not CRC32. All errors that happen within a 12-bit window are 100%
> correctable.
Depends how large the data covered by the check is. For amounts of data
larger than a certain size (dependent on the number of check bits and the
algorithm used) there are several errors that will produce the same change
in the ECC or CRC. So the window size is meaningless unless you also
specify the data size and number of check bits.
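
To put rough numbers on that (from memory, so check me): a 16-bit check has
only 2^16 - 1 = 65535 distinct non-zero remainders, and the cycle length of
the V.41 polynomial (X^16 + X^12 + X^5 + 1) is 32767 bits. A 512-byte
sector plus its CRC is 512*8 + 16 = 4112 bits, well inside that, so every
single-bit error there gives a different remainder. Spread the same 16 bits
over, say, a 16KB block -- 131072-odd bit positions against at most 65535
remainders -- and different errors are bound to share a remainder, bursts
or not. Hence the window size only means something alongside the block size
and the number of check bits.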
--
Pete Peter Turnbull
Dept. of Computer Science
University of York