First of all, what, in the sampled bitstream, tells you that a bit is
"flaky" or in any sense questionable?
If it's different on multiple read passes, then it's flaky. If it's
different
when read on different drives, then it's flaky. But since this particular
circuit samples the data after it's gone through the AGC and discriminator
section in the drive, you can't look at an individual pulse and say that
it's flaky. A circuit which recorded the analog signal from the head
(or one of the preamp stages) would be far better for spotting flaky bits.
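The multiple-read-pass test above can be sketched in a few lines. This is
a minimal illustration, not Tim's actual analysis code; the function name
is made up, and it assumes the passes have already been aligned to the
same bit positions:

```python
def flaky_bits(passes):
    """Given equal-length bit lists from repeated reads of the same
    sector, return the set of positions where the passes disagree."""
    suspects = set()
    for pos, bits in enumerate(zip(*passes)):
        if len(set(bits)) > 1:      # not all passes read the same value
            suspects.add(pos)
    return suspects

# Three reads of the same 8 bits; positions 2 and 5 wobble.
reads = [
    [1, 0, 1, 1, 0, 0, 1, 0],
    [1, 0, 0, 1, 0, 1, 1, 0],
    [1, 0, 1, 1, 0, 0, 1, 0],
]
print(sorted(flaky_bits(reads)))
```

Any position flagged here is a candidate for the hand-fiddling described
later; bits that agree on every pass are left alone.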
Since your sampler is working at a harmonic
of the theoretical data rate, you should be able to find anomalies more
easily than someone sampling at some other frequency, but these frequencies
aren't terribly precise because of mechanical variations and the general
fact that even closely specified crystals still aren't identical in
frequency.
I don't think there's any magic that results from me working at a harmonic
of the nominal data frequency. I could be sampling at 14.353773 or
3.5798 or 4.00 MHz and it's all the same, because none of them are "locked"
in any way to the actual data rate.
How can you tell that your error correction flips the bit that actually
caused the error? In every sample there are some bits that will fix the
CRC but weren't necessarily wrong in the first place.
True, but if there's a flaky bit then it's more likely to be causing the
CRC error. If I go to the two flaky bits in a sector and fiddle them
by hand, and all of a sudden I match the CRC, then we're doing pretty well.
Keep in mind that even with multi-bit ECCs there are multiple ways you can
fiddle bits in the data section and still match up with the error-correcting
codes.
Since you're looking for read errors, not bits that were necessarily
written incorrectly, I assume you have some means of making a decision?
Do you simply plow through the file, inverting every bit
and retesting the CRC?
Again, I look for bits that read differently on different read passes, and
fiddle those by hand.
How do you decide where the bit boundaries really
are?
I've got a "software PLL". It synchronizes on both data and clock pulses,
and when it senses that it's seriously out of whack it can adjust more
rapidly than a traditional one-time-constant hardware PLL.
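A software PLL of this general shape can be sketched as follows. This is
an assumption about the approach, not Tim's code: it tracks the bit-cell
length from the intervals between flux transitions, and the two-gain trick
stands in for "adjust more rapidly when seriously out of whack":

```python
def pll_decode(intervals, nominal):
    """intervals: times between flux transitions (any consistent unit);
    nominal: expected bit-cell length.  Emits a 1 per transition and a
    0 for each empty cell in between."""
    cell = nominal
    bits = []
    for dt in intervals:
        n = max(1, round(dt / cell))        # whole cells in this gap
        bits.extend([0] * (n - 1) + [1])
        err = dt / n - cell                 # per-cell period error
        # seriously out of whack -> big gain; otherwise track gently
        gain = 0.5 if abs(err) > 0.1 * cell else 0.05
        cell += gain * err
    return bits
```

A hardware PLL with one time constant has to pick a single loop bandwidth;
doing it in software lets the loop gain depend on how far off lock it is.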
How do you interpret the chaos in the write-splices?
I pretty much ignore the chaos. I've developed some graphing techniques
that help me decide where the write-splices are for a particular low-level
data format. (Remember, I'm mainly concerned with hard-sectored formats
which vary a lot from one controller to the next. Many have *no* address
information recorded magnetically.)
Do you operate your sampler in parallel with an FDC in
order to ensure that
you've gotten a sample that's readable by the FDC?
No, mostly I'm looking at oddball hard-sectored formats that a normal
IBM3740-derived FDC chip can't handle. And if I had the real hard-sectored
random-logic controller, I wouldn't need my analyzer circuit :-).
Have you, in general, considered using a PLL (e.g. a 4046 or some similar
device) for generating a precise multiple of the detected data rate and
"tracking out" the mechanical influences of your drive on speed?
I thought about it, but I don't think it's necessary. My 8x oversampling
seems to be just fine for both FM and MFM data formats.
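To see why 8x oversampling suffices, note that MFM allows only three
transition spacings (1, 1.5, and 2 bit cells), so even with a sample or
two of jitter the spacings stay well separated. A minimal classifier,
with made-up names and an assumed 8 samples per nominal bit cell:

```python
def classify_mfm(samples, per_cell=8):
    """Classify transition spacings in an oversampled bitstream as the
    three legal MFM intervals (1, 1.5, or 2 bit cells)."""
    edges = [i for i in range(1, len(samples))
             if samples[i] != samples[i - 1]]
    gaps = [b - a for a, b in zip(edges, edges[1:])]
    # snap each gap to the nearest legal spacing
    return [min((1, 1.5, 2), key=lambda c: abs(g - c * per_cell))
            for g in gaps]
```

At 8 samples per cell the legal spacings land near 8, 12, and 16 samples,
4 samples apart, so speed wobble of a sample or so doesn't cause
misclassification. FM is even easier, with only two legal spacings.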
When I start making the descriptive web pages this weekend, I'll show some
graphs that indicate how I find write splice areas and track data rate
frequency from my analysis software. (I do no analysis in real time; it's
all done post-acquisition.)
Tim.