A few things come to mind ...
First of all, what in the sampled bitstream tells you that a bit is
"flaky" or in any sense questionable? Since your sampler is working at a
harmonic of the theoretical data rate, you should be able to find
anomalies more easily than someone sampling at an unrelated frequency, but
these frequencies aren't terribly precise, both because of mechanical
variations and because even closely specified crystals still aren't
identical in frequency.
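(To make the question concrete: the sort of test I have in mind might look
like the minimal sketch below, in C. It assumes DD MFM at 250 kbit/s, so
that legal flux-transition spacings cluster near 16, 24, and 32 ticks of
your 4 MHz clock; the names, data, and tolerance are all illustrative
guesses, not anything from your actual setup.)

#include <stdio.h>
#include <stdlib.h>

static const int legal[] = { 16, 24, 32 };  /* 4, 6, 8 us in 4 MHz ticks */
#define TOL 2                               /* +/- ticks accepted        */

/* A spacing close enough to one of the legal MFM bins is "clean". */
static int spacing_is_clean(int ticks)
{
    for (int i = 0; i < 3; i++)
        if (abs(ticks - legal[i]) <= TOL)
            return 1;
    return 0;
}

int main(void)
{
    /* Stand-in for real sampler output: flux-transition spacings. */
    int spacing[] = { 16, 24, 20, 32, 17, 28, 24 };
    int n = sizeof spacing / sizeof spacing[0];

    for (int i = 0; i < n; i++)
        if (!spacing_is_clean(spacing[i]))
            printf("transition %d: %d ticks falls between bins\n",
                   i, spacing[i]);
    return 0;
}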
How can you tell that your error correction targets the bit that actually
caused the error? In every sample there are some bits whose inversion will
fix the CRC yet weren't necessarily wrong in the first place. Since you're
looking for read errors, not data that was written incorrectly, I assume
you have some means of making that decision? Do you simply plow through
the file, inverting every bit and retesting the CRC? How do you decide
where the bit boundaries really are? How do you interpret the chaos in the
write splices? (I imagine these are all handled as part of the same
problem, else they wouldn't be listed together.)
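(If it's the brute-force route, I'd picture something like the sketch
below. It uses the CCITT CRC-16, polynomial 0x1021 with the register
preset to 0xFFFF, that the standard FDCs generate, under which a good
sector, including its two stored CRC bytes, checks to zero. The sector
contents are toy values, and note that the search reports every flip that
works, not "the" fix.)

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* CCITT CRC-16: polynomial 0x1021, register preset to 0xFFFF. */
static uint16_t crc16_ccitt(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)buf[i] << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Plow through the sector, inverting each bit in turn; report every
 * flip that makes the block (data + stored CRC) check to zero,
 * since there may be more than one candidate. */
static void try_single_flips(uint8_t *sec, size_t len)
{
    for (size_t byte = 0; byte < len; byte++)
        for (int bit = 0; bit < 8; bit++) {
            sec[byte] ^= (uint8_t)(1u << bit);
            if (crc16_ccitt(sec, len) == 0)
                printf("flipping byte %zu, bit %d satisfies the CRC\n",
                       byte, bit);
            sec[byte] ^= (uint8_t)(1u << bit);   /* restore */
        }
}

int main(void)
{
    /* Toy sector: four data bytes, the matching CRC appended, then
     * one deliberately corrupted bit to stand in for a read error. */
    uint8_t sec[6] = { 0xDE, 0xAD, 0xBE, 0xEF, 0, 0 };
    uint16_t crc = crc16_ccitt(sec, 4);
    sec[4] = (uint8_t)(crc >> 8);
    sec[5] = (uint8_t)(crc & 0xFF);
    sec[2] ^= 0x20;
    try_single_flips(sec, 6);
    return 0;
}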
Do you operate your sampler in parallel with an FDC in order to ensure
that you've gotten a sample that's readable by the FDC? Have you tried
picking your sample clock off the FDC's PLL (if there is one) as opposed
to plowing in with your independently generated 4 MHz clock? You may find
that operating the two in parallel gives you more information than either
would give you separately.
Have you, in general, considered using a PLL, e.g. a 4046 or some similar
device, for generating a precise multiple of the detected data rate and
"tracking out" the mechanical speed variations of your drive? That may not
help in the long run, but I'll bet it would be educational at the least,
and potentially useful in easing some of the bit-boundary-related problems
I mentioned.
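(In software terms, the tracking I have in mind amounts to the sketch
below: each observed transition spacing nudges an estimated half-cell
period, nominally 8 ticks of your 4 MHz clock for DD MFM, so slow spindle
drift gets absorbed instead of piling up into bit-boundary mistakes. The
loop gain and sample values are made up for illustration.)

#include <stdio.h>

int main(void)
{
    double halfcell = 8.0;         /* nominal: 2 us at 4 MHz      */
    const double gain = 0.1;       /* loop gain: small but steady */
    double spacing[] = { 16.4, 24.7, 16.5, 33.0, 16.6, 24.9 };
    int n = sizeof spacing / sizeof spacing[0];

    for (int i = 0; i < n; i++) {
        /* Quantize to the nearest legal count of half-cells (2..4
         * for MFM), like a PLL's phase comparator choosing a window. */
        int cells = (int)(spacing[i] / halfcell + 0.5);
        if (cells < 2) cells = 2;
        if (cells > 4) cells = 4;
        /* Feed the residual phase error back into the period. */
        double err = spacing[i] / cells - halfcell;
        halfcell += gain * err;
        printf("%5.1f ticks -> %d half-cells, half-cell now %.3f\n",
               spacing[i], cells, halfcell);
    }
    return 0;
}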
I thought it would be a week, but it may be two, before I'm able to revive
my tangible interest in this problem. I do happen to have a couple of
modular PLL's built up that I can hook into a circuit without much
trouble, and thereby compare the digitally generated, independently
clocked bitstream against the phase-locked version. I have yet to
contemplate, in the meantime, what effects I might have to look for. Any
inputs based on your experience would be helpful.
Dick
----- Original Message -----
From: <CLASSICCMP@trailing-edge.com>
To: <classiccmp@classiccmp.org>
Sent: Friday, July 07, 2000 11:49 AM
Subject: Re: Re[2]: Tim's own version of the Catweasel/Compaticard/whatever
>CRC's are chosen for their immunity to pattern-sensitivity, among other
>things, and ECC's, likewise, are chosen on the basis of their suitability
>for the sector-size, types of errors anticipated, etc. CRC-16, the one
>chosen for FDC use, was chosen because it was already an established
>"standard." There were, in fact, standard TTL devices commonly used to
>append CRC-16 to data blocks in communication and disk/tape applications.
>There are a few bitwise corrections that will make CRC-16 yield a zero,
>but there's no reason to believe that introducing a bitwise change at one
>place or another will yield the correct data just because CRC-16 yields a
>zero.
True, the CRC-16 wasn't chosen for correctability, but if you make
multiple read passes over the data, spot a couple of "flaky" bits
(changing from read to read), and find a combination of 1's and 0's that
matches the CRC, you're far ahead of someone with a hardware-only
controller that doesn't allow access to the raw data for such "human
judgement" error correction.
Remember, you have to know how to do it yourself before you can do it
on a computer!
Schematic and code this afternoon folks!
Tim.
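(For what it's worth, the multi-pass trick Tim describes comes down to
something like the sketch below: take two or more passes, call any bit
that changes from read to read "flaky", and try every 0/1 assignment of
just those bits against the CCITT CRC-16, polynomial 0x1021 preset to
0xFFFF, under which a good sector including its stored CRC bytes checks
to zero. The sector contents are toy values.)

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define SECLEN   6                /* toy sector incl. 2 CRC bytes */
#define MAXFLAKY 16

/* CCITT CRC-16: polynomial 0x1021, register preset to 0xFFFF. */
static uint16_t crc16_ccitt(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)buf[i] << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

int main(void)
{
    /* Build a "good" sector: four data bytes plus the correct CRC. */
    uint8_t good[SECLEN] = { 0x41, 0x42, 0x43, 0x44, 0, 0 };
    uint16_t crc = crc16_ccitt(good, 4);
    good[4] = (uint8_t)(crc >> 8);
    good[5] = (uint8_t)(crc & 0xFF);

    /* Simulate two read passes that each misread one bit. */
    uint8_t pass1[SECLEN], pass2[SECLEN], trial[SECLEN];
    memcpy(pass1, good, SECLEN);
    memcpy(pass2, good, SECLEN);
    pass1[1] ^= 0x04;
    pass2[3] ^= 0x10;

    /* Any bit that changes from read to read is "flaky". */
    int fbyte[MAXFLAKY], fbit[MAXFLAKY], nflaky = 0;
    for (int i = 0; i < SECLEN; i++) {
        uint8_t diff = pass1[i] ^ pass2[i];
        for (int b = 0; b < 8 && nflaky < MAXFLAKY; b++)
            if (diff & (1u << b)) {
                fbyte[nflaky] = i;
                fbit[nflaky]  = b;
                nflaky++;
            }
    }

    /* Try every 0/1 assignment of just the flaky bits. */
    for (unsigned long mask = 0; mask < (1ul << nflaky); mask++) {
        memcpy(trial, pass1, SECLEN);
        for (int k = 0; k < nflaky; k++) {
            trial[fbyte[k]] &= (uint8_t)~(1u << fbit[k]);
            if (mask & (1ul << k))
                trial[fbyte[k]] |= (uint8_t)(1u << fbit[k]);
        }
        if (crc16_ccitt(trial, SECLEN) == 0)
            printf("assignment 0x%lx of %d flaky bits checks out\n",
                   mask, nflaky);
    }
    return 0;
}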