You write about this experiment with considerable confidence in your result,
considering that you haven't any conventional hardware for dealing with this
stuff. There must be something about your results that gives you the
confidence to proceed. What might that be? Are you getting verifiable
results, i.e. data that makes sense, like ASCII files, etc.?
Sampling and recording the analog signal might prove disappointing. The
data will be much harder to recognize in its analog form, particularly on
the inner tracks on a noisy diskette or drive. The AGC amp is normally a
simple one just compensating for the varying levels as the speed of the
media changes with respect to the head. That's why a stepwise change at
track 43 is tolerable. The AGC makes it average around the signal,
regardless of its level. Since the signal is subsequently differentiated in
order to detect the peaks, it's the AC processing that yields the data.
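A crude illustration of that last point (a Python sketch with made-up sample
values, not anything from a real drive): differentiate and look for zero
crossings of the derivative, and you find the peaks whether the signal is
strong or weak, so the absolute level hardly matters.

    # Sketch only: peak detection by differentiation.  Zero crossings of
    # the derivative mark the peaks and troughs regardless of amplitude,
    # which is why the AGC only has to keep the signal within the working
    # range of the differentiator.
    samples = [0.0, 0.4, 0.9, 1.0, 0.7, 0.2, -0.3, -0.8, -1.0, -0.6, -0.1, 0.3]
    deriv = [b - a for a, b in zip(samples, samples[1:])]
    peaks = [i for i in range(1, len(deriv))
             if (deriv[i - 1] > 0) != (deriv[i] > 0)]   # sign change = peak or trough
    print(peaks)                                        # -> [3, 8]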
----- Original Message -----
From: <CLASSICCMP(a)trailing-edge.com>
To: <classiccmp(a)classiccmp.org>
Sent: Friday, July 07, 2000 1:15 PM
Subject: Re: Re[2]: Tim's own version of the Catweasel/Compaticard/whatever
> >First of all, what, in the sampled bitstream tells you that a bit is
> >"flaky" or in any sense questionable?
> If it's different on multiple read passes, then it's flaky. If it's
> different when read on different drives, then it's flaky. But since this
> particular circuit samples the data after it's gone through the AGC and
> discriminator section in the drive, you can't look at an individual pulse
> and say that it's flaky. A circuit which recorded the analog signal from
> the head (or one of the preamp stages) would be far better for spotting
> flaky bits.
Yes, but difficult to make sense of unless you stick in a software AGC to
equalize the levels and amplitudes. You're probably looking at it at the
right place.
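What I mean by a software AGC is nothing fancier than dividing by a running
estimate of the envelope, along these lines (toy code, assuming you already
have the head signal as a list of samples):

    # Toy software AGC: normalize each sample by a slowly tracking
    # envelope estimate, so weak inner-track signals and strong
    # outer-track signals come out at roughly the same level.
    def software_agc(samples, attack=0.01, floor=1e-6):
        env = max(abs(samples[0]), floor)
        out = []
        for s in samples:
            env += attack * (abs(s) - env)      # slow envelope follower
            out.append(s / max(env, floor))     # normalize to the envelope
        return out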
> >Since your sampler is working at a harmonic of the theoretical data rate,
> >you should be able to find anomalies easier than one who's sampling at
> >some other frequency, but these frequencies aren't terribly precise
> >because of mechanical variations and the general fact that even closely
> >specified crystals still aren't identical in frequency.
> I don't think there's any magic that results from me working at a harmonic
> of the nominal data frequency. I could be sampling at 14.353773 or 3.5798
> or 4.00 MHz and it's all the same, because none of them are "locked" in
> any way to the actual data rate.
I don't know how much time you've spent with a logic analyzer, but I've
found that the closer your sample rate is to a harmonic of the master clock
in a mainly synchronous system, the easier it is to make sense of the
transitions. If you sample at 1% deviation between the two clocks, there
will be occasional frame slips, but they will be obvious because of their
size. If you sample at 5 MHz and the events were derived from an 8 MHz
clock, there will be lots of variation in pulses that ought to be the same.
You can get accustomed to it, but it isn't fun. I mentioned I have a
modular PLL circuit. Well, synchronizing my logic analyzer with the system
clock is what I use it for. My Tek 1240 has only 512 samples at its highest
rate, and therefore I sample at the coarsest granularity that gives me
meaningful results. The PLL helps make the otherwise strange pulse trains
look reasonable. It makes odd-width pulses stand out, which, I'd think,
would be helpful in your FD analyzer.
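To put a rough number on the 5 MHz vs. 8 MHz case (hypothetical figures,
just to show the effect): counting sample ticks across identical pulses
gives a spread of counts when the sample clock is unrelated to the source
clock, and a constant count when it's a harmonic.

    # Hypothetical illustration: a pulse four 8 MHz cycles wide (0.5 us),
    # measured by counting ticks of a 5 MHz sample clock versus a 16 MHz
    # one, at a few arbitrary phase offsets.  The unrelated clock spreads
    # identical pulses over two different counts; the harmonic does not.
    import math

    def width_in_ticks(width_us, sample_mhz, phase_us):
        start = math.floor(phase_us * sample_mhz)
        end = math.floor((phase_us + width_us) * sample_mhz)
        return end - start

    pulse_us = 4 / 8.0                              # four cycles of an 8 MHz clock
    for phase in (0.00, 0.07, 0.13, 0.19):          # arbitrary phase offsets, in us
        print(width_in_ticks(pulse_us, 5, phase),   # 5 MHz: prints 2 or 3
              width_in_ticks(pulse_us, 16, phase))  # 16 MHz: always 8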
> >How can you tell that your error correction is correcting the bit that
> >caused the error? There are, in every sample, some bits that will fix
> >the CRC that weren't necessarily wrong in the first place.
> True, but if there's a flaky bit then it's more likely to be causing the
> CRC error. If I go to the two flaky bits in a sector and fiddle them by
> hand, and all of a sudden I match the CRC, then we're doing pretty well.
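A sketch of that procedure in software (using the usual CRC-CCITT polynomial
from the IBM-format controllers; the read passes and the expected CRC would
come from the sampler, and the rest here is illustrative):

    # Sketch of the "fiddle the flaky bits" idea: bits that differ between
    # read passes are the flaky candidates; try flipping subsets of just
    # those bits until the sector CRC checks.  CRC-CCITT (x^16+x^12+x^5+1),
    # as used by the common IBM-format controllers.
    from itertools import combinations

    def crc16(data, crc=0xFFFF):
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                if crc & 0x8000:
                    crc = ((crc << 1) ^ 0x1021) & 0xFFFF
                else:
                    crc = (crc << 1) & 0xFFFF
        return crc

    def repair(reads, expected_crc):
        flaky = [i for i in range(len(reads[0]) * 8)
                 if len({(r[i // 8] >> (7 - i % 8)) & 1 for r in reads}) > 1]
        for n in range(len(flaky) + 1):
            for bits in combinations(flaky, n):     # try every subset of flaky bits
                data = bytearray(reads[0])
                for i in bits:
                    data[i // 8] ^= 0x80 >> (i % 8)
                if crc16(data) == expected_crc:
                    return bytes(data)
        return None                                 # flaky bits alone can't explain it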
Have you tried stopping the drive and repositioning the diskette, rotating
it, say, 90 degrees, to see if there's a systematic factor to some of the
weirdness you see? If you take four samples, each one at a fairly precise
phase shift, you should be able to correlate out the random (mechanically
generated?) features. To do that you simply add the four bitstreams
together and divide by four. That will attenuate the random signal in an
analog system. The averaging effect of the circuitry might just do the same
thing with the "flaky" bits.
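In software the same averaging is a one-liner per bit: sum the aligned
passes and take the majority, e.g. (a sketch, assuming the four passes are
already aligned sample-for-sample):

    # Majority vote over aligned read passes: a bit that shows up in most
    # of the passes is probably real; one that flickers in only one or two
    # of four is probably noise.  Assumes the passes are already aligned.
    def vote(passes):
        return [1 if sum(bits) * 2 > len(passes) else 0 for bits in zip(*passes)]

    reads = [[1, 0, 1, 1, 0, 1],
             [1, 0, 1, 0, 0, 1],
             [1, 0, 1, 1, 0, 1],
             [1, 1, 1, 1, 0, 1]]
    print(vote(reads))                              # -> [1, 0, 1, 1, 0, 1]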
Dick
> Keep in mind that even with more-bits ECC's there are also multiple ways
> you can fiddle bits in the data section and still match up with the error
> correcting codes.
> >Since you're looking for read errors not necessarily written incorrectly,
> >I assume you have some means for making a decision? Do you simply plow
> >through the file, inverting every bit and retesting the CRC?
> Again, I look for bits that read differently on different read passes,
> and fiddle those by hand.
> >How do you decide where the bit boundaries really are?
> I've got a "software PLL". It synchronizes on both data and clock pulses,
> and when it senses that it's seriously out of whack it can adjust more
> rapidly than a traditional one-time-constant hardware PLL.
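A guess at the general shape of such a software PLL, purely as illustration
(not the actual code): nudge the bit-cell estimate a little on every flux
transition, and harder when the observed interval is badly off.

    # Illustrative software PLL, not the real thing: "times" are flux
    # transition times in sample ticks, "cell" the current bit-cell
    # estimate.  Each observed interval pulls the estimate toward itself,
    # and pulls harder when the error is large, which is what a fixed
    # single-time-constant hardware PLL can't do.
    def soft_pll(times, cell, slow=0.05, fast=0.25):
        bits = []
        for dt in (b - a for a, b in zip(times, times[1:])):
            n = max(1, round(dt / cell))        # how many cells this interval spans
            err = dt / n - cell                 # per-cell period error
            gain = fast if abs(err) > 0.1 * cell else slow
            cell += gain * err                  # correct faster when badly out of whack
            bits.extend([1] + [0] * (n - 1))    # a transition, then n-1 empty cells
        return bits, cell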
> >How do you interpret the chaos in the write-splices?
> I pretty much ignore the chaos. I've developed some graphing techniques
> that help me decide where the write-splices are for a particular low-level
> data format. (Remember, I'm mainly concerned with hard-sectored formats
> which vary a lot from one controller to the next. Many have *no* address
> information recorded magnetically.)
> >Do you operate your sampler in parallel with an FDC in order to ensure
> >that you've gotten a sample that's readable by the FDC?
> No, mostly I'm looking at oddball hard-sectored formats that a normal
> IBM3740-derived FDC chip can't handle. And if I had the real hard-sectored
> random-logic controller, I wouldn't need my analyzer circuit :-).
> >Have you, in general, considered using a PLL e.g. 4046 or some similar
> >device for generating a precise multiple of the detected data rate and
> >"tracking-out" the mechanical influences of your drive on speed?
> I thought about it, but I don't think it's necessary. My 8x oversampling
> seems to be just fine for both FM and MFM data formats.
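A quick back-of-the-envelope check of why 8x is enough (my numbers, not
Tim's): with eight samples per nominal bit cell, the legal MFM intervals
fall on well-separated tick counts.

    # Back-of-the-envelope check: at 8 samples per nominal bit cell the
    # legal MFM flux intervals of 1, 1.5 and 2 bit cells land on roughly
    # 8, 12 and 16 ticks, so neighbouring interval classes sit 4 ticks
    # apart -- about +/-2 ticks of margin for speed variation and jitter.
    oversample = 8                                  # samples per bit cell
    mfm_intervals = (1.0, 1.5, 2.0)                 # legal MFM intervals, in bit cells
    ticks = [t * oversample for t in mfm_intervals]
    margins = [(b - a) / 2 for a, b in zip(ticks, ticks[1:])]
    print(ticks, margins)                           # -> [8.0, 12.0, 16.0] [2.0, 2.0]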
> When I start making the descriptive web pages this weekend, I'll show
> some graphs that indicate how I find write splice areas and track data
> rate frequency from my analysis software. (I do no analysis in real-time,
> it's all done post-acquisition.)
> Tim.