CRCs are chosen for their immunity to pattern sensitivity, among other
things, and ECCs likewise are chosen on the basis of their suitability for
the sector size, the types of errors anticipated, etc. CRC-16 was picked
for FDC use because it was already an established "standard."
There were, in fact, standard TTL devices commonly used to append CRC-16 to
data blocks in communication and disk/tape applications. There are a few
bitwise corrections that will make CRC-16 come out zero, but there's no
reason to believe that a bit change at one place or another has recovered
the correct data just because the CRC reads zero.
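For the curious, here's a minimal sketch of the check in C. It assumes the
CCITT polynomial (x^16 + x^12 + x^5 + 1) with the register preset to 0xFFFF,
which is the form the common FDC chips use; a real controller also folds the
preceding address-mark bytes into the calculation, which I've skipped here.
Run the data plus the two appended CRC bytes through it and the result is
zero when all is well.

    #include <stdint.h>
    #include <stddef.h>

    /* CRC-CCITT, x^16 + x^12 + x^5 + 1, MSB first, preset to 0xFFFF. */
    uint16_t crc16_ccitt(const uint8_t *buf, size_t len)
    {
        uint16_t crc = 0xFFFF;
        size_t i;
        int bit;

        for (i = 0; i < len; i++) {
            crc ^= (uint16_t)buf[i] << 8;      /* feed next byte, MSB first */
            for (bit = 0; bit < 8; bit++) {
                if (crc & 0x8000)
                    crc = (crc << 1) ^ 0x1021; /* shift out a 1: apply poly */
                else
                    crc <<= 1;                 /* shift out a 0: no feedback */
            }
        }
        return crc;   /* zero if buf ended with a valid appended CRC */
    }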
True, CRC-16 wasn't chosen for correctability, but if you make multiple
read passes over the data, spot a couple of "flaky" bits (ones that change
from read to read), and find the combination of 1's and 0's that makes the
CRC check, you're far ahead of someone with a hardware-only controller that
doesn't allow access to the raw data for that kind of "human judgement"
error correction.
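Here's a hypothetical sketch of that "human judgement" pass, built on the
crc16_ccitt() routine above (the helper name and interface are my own
invention). Hand it the sector image with its two CRC bytes still attached,
plus the positions of the bits that wobbled between reads, and it tries
every combination until the block checks. Per the caveat above, a zero CRC
is strong evidence, not proof, that you've recovered the real data.

    /* Try all 2^nbits settings of the flaky bits; return 1 when the whole
       block (data + 2 CRC bytes) CRCs to zero, 0 if no combination checks.
       Bit positions are absolute, MSB-first within each byte. */
    int fix_flaky_bits(uint8_t *sector, size_t len,
                       const size_t *bitpos, int nbits)
    {
        unsigned long combo;
        int i;

        for (combo = 0; combo < (1UL << nbits); combo++) {
            for (i = 0; i < nbits; i++) {
                uint8_t mask = (uint8_t)(1 << (7 - bitpos[i] % 8));
                if (combo & (1UL << i))
                    sector[bitpos[i] / 8] |= mask;           /* try a 1 */
                else
                    sector[bitpos[i] / 8] &= (uint8_t)~mask; /* or a 0  */
            }
            if (crc16_ccitt(sector, len) == 0)
                return 1;   /* this combination checks; sector holds it */
        }
        return 0;           /* no luck; the damage is worse than a few bits */
    }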
Remember, you have to know how to do it yourself before you can do it
on a computer!
Schematic and code this afternoon folks!
Tim.