Please see my embedded remarks below.
Dick
----- Original Message -----
From: Tim Mann <mann@pa.dec.com>
To: <classiccmp@classiccmp.org>
Cc: allisonp <allisonp@world.std.com>
Sent: Wednesday, July 05, 2000 11:02 PM
Subject: Re: Tim's own version of the Catweasel/Compaticard/whatever
This is great; I was hoping for comments like this.
I'd suggest some factor less than 0.5; flux shift errors on floppies
rarely move a great amount unless the spindle bearings are rattling
loose.
For the particular 8" floppies I was trying to read, a few seemed to need
a factor of 0.6 or even 0.75 to be read. Many worked fine with 0.0.
Actually, based on the media and the expected recording rate, it's
possible to plug in a set of expected timing windows and add/subtract
a "precompensation" window amount based on adjacent bits. For
example, runs of adjacent ones or zeros (especially more than two
bits) tend to spread or compress relative to patterns of alternating
ones and zeros.
In reality, you should not be able to detect the precompensation
windows at all if they're properly applied. The purpose of the precomp
is to anticipate and pre-correct for peak shift due to crowding of the
bits. As I wrote before, since adjacent peaks are of opposite sign, in
cases where they are "too" close together, meaning that one pulse is
still decaying while the other begins to present itself (detectable by
examining the slope of the signal), the addition of the two values
subtracts somewhat linearly from the theoretical peaks, forcing the
peaks themselves to occur farther apart than they were written. The
correction is helped by the lower write current, which results in
detection of the increasing current in the head somewhat later than if
it had been written at full current, and secondly by the fact that the
data is written later at the leading end of a pulse train and earlier
at the trailing end, i.e. the crowded pulses go down slightly closer
together than nominal. The upshot of all this is that the peak
detector senses these peaks at their nominal locations even though
they were not written that way. Though it may appear that way, I doubt
that consistently readable diskettes have a great deal of error in the
positions of their pulses within the pulse train. It's important to
keep in mind, however, that it's the levels and not the edges that
carry the information; but for that, the FM data/clock separators
built from one-shots, with their wide error margins, would never have
worked at all.
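
To make that rule concrete, here is a minimal sketch of a precomp pass
over a list of transition times. The 125 ns figure and the decision
from measured neighbor spacing are purely illustrative; a real
controller decides early/late from the encoded bit pattern, usually
with a fixed pattern table.

    /* Sketch: write precompensation over nominal transition times.
     * t[] holds transition times in ns; shift_ns is the precomp
     * amount (e.g. 125 ns, illustrative).  A transition crowded by a
     * nearer neighbor on one side is written shifted toward that
     * neighbor, so read-back peak shift pushes it back to nominal. */
    void precompensate(double *t, int n, double shift_ns)
    {
        for (int i = 1; i < n - 1; i++) {
            double before = t[i] - t[i - 1]; /* gap to previous pulse */
            double after  = t[i + 1] - t[i]; /* gap to next pulse     */
            if (after < before)
                t[i] += shift_ns;  /* crowded from ahead: write late   */
            else if (before < after)
                t[i] -= shift_ns;  /* crowded from behind: write early */
            /* symmetric neighbors: leave it where it is */
        }
    }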
Further, with the whole "timing image" in memory it should be possible
to look at longer strings of transitions and do simple predictive
forecasting (a software PLL). Add to that the encoding form (FM,
MFM, M2FM, RLL, or GCR) and the history of previous bits, and it
should be straightforward enough to predict the likely next
transition(s), be they ones or zeros.
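
For what it's worth, a minimal sketch of such a software PLL over the
transition timings might look like the following. The names, the
nearest-cell quantization, and the slow period adjustment are my own
illustration, not any particular decoder's method.

    #include <stdio.h>

    /* intervals[]  : measured times between flux transitions (ticks)
     * nominal_cell : expected cell width in the same units; with the
     *                cell set to half the MFM bit cell, the counts
     *                come out 2, 3, or 4
     * gain         : small fraction (well under 1.0) controlling how
     *                fast the cell estimate chases each measurement */
    void decode_cells(const double *intervals, int n,
                      double nominal_cell, double gain)
    {
        double cell = nominal_cell;
        for (int i = 0; i < n; i++) {
            int ncells = (int)(intervals[i] / cell + 0.5);
            if (ncells < 1)
                ncells = 1;
            double error = intervals[i] - ncells * cell;
            cell += gain * (error / ncells);  /* track speed drift */
            printf("%d ", ncells);
        }
        printf("\n");
    }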
If you sample the data at a harmonic of the rate at which its
transitions were written, ever mindful of the way in which FDCs work,
you should be able to do accurate prediction/correction of the sampled
data in accordance with the model. It's more difficult if you write
the data at 500 kHz, precomp it at 6 MHz, and then recover it at
7 MHz; it just makes sense to use a harmonic of the frequencies
involved. It makes the data stream larger, perhaps, but the patterns,
such as they are, with their random error due to noise and their
systematic error due to the mechanical components' influence, should
be much more easily detectable. What's more, if you really want to
eliminate unwanted signal components, you could carefully make four
readings, rotating the spindle 90 degrees in the same direction
between insertions of the diskette, and then sum the data samples,
aligning to the index pulse each time for rough alignment and then to
the first ID address mark once it's found. That may help correlate out
the random error. It should also make the mechanically generated error
easily detectable.
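
A sketch of the summing step, assuming each revolution has already
been resampled to a common length and aligned to the index pulse (the
alignment and resampling, which are the hard part, are not shown; all
names and sizes are illustrative):

    #define N_SAMPLES 200000  /* samples per revolution, illustrative */

    /* Average several aligned reads of the same track.  Random noise
     * should fall off roughly as 1/sqrt(n_reads), while systematic
     * (mechanical) error survives and becomes easier to pick out. */
    void average_reads(const double reads[][N_SAMPLES], int n_reads,
                       double *out /* N_SAMPLES entries */)
    {
        for (int s = 0; s < N_SAMPLES; s++) {
            double sum = 0.0;
            for (int r = 0; r < n_reads; r++)
                sum += reads[r][s];
            out[s] = sum / n_reads;
        }
    }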
This sounds like just the thing to do. Do you have any references where
I could read up on this kind of algorithm? I've never studied signal
processing -- I have a mathematics and computer science background, not
engineering -- so I've been working by intuition up to this point.
(Hmm, looking at Tim Shoppa's later response gives me the keywords
"partial response maximum likelihood" to look for. That should help.)
Another neat trick might be to notice when there is a CRC error and/or
a clock violation, and in that case backtrack to a recent past decision
where the second most likely alternative was close to the most likely,
try it the other way, and see if the result looks better. Obviously you
can't overdo that or you'll just generate random data with a CRC that
matches by chance, but since the CRC is 16 bits, I'd think it should be
OK to try a few different likely guesses to get it to match.
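
A rough sketch of that check-and-retry loop, using the CRC-16-CCITT
polynomial that IBM-format floppies use. For simplicity it runs over
the decoded sector bytes only (the real CRC also covers the preceding
address-mark bytes), and the list of uncertain bit positions is an
assumed interface that the decoder would have to supply; in practice
you would re-run the data separation after changing a decision rather
than just flipping a decoded bit.

    #include <stdint.h>
    #include <stddef.h>

    /* CRC-16-CCITT, poly 0x1021, preset 0xFFFF, as used by standard
     * FDCs.  Run over the data plus its two CRC bytes, a good sector
     * yields zero. */
    static uint16_t crc16_ccitt(const uint8_t *buf, size_t len)
    {
        uint16_t crc = 0xFFFF;
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)buf[i] << 8;
            for (int b = 0; b < 8; b++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

    /* Try at most a handful of single-bit guesses so a 16-bit CRC
     * doesn't get matched by chance.  Returns 1 on success. */
    int try_fixup(uint8_t *sector, size_t len,
                  const size_t *uncertain_bit, int n_uncertain)
    {
        if (crc16_ccitt(sector, len) == 0)
            return 1;
        for (int i = 0; i < n_uncertain && i < 8; i++) {
            sector[uncertain_bit[i] / 8] ^= 0x80 >> (uncertain_bit[i] % 8);
            if (crc16_ccitt(sector, len) == 0)
                return 1;
            sector[uncertain_bit[i] / 8] ^= 0x80 >> (uncertain_bit[i] % 8);
        }
        return 0;
    }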
Tim Mann tim.mann@compaq.com
http://www.tim-mann.org
Compaq Computer Corporation, Systems Research Center, Palo Alto, CA