From: Richard Erlacher <richard(a)idcomm.com>
To: classiccmp(a)classiccmp.org <classiccmp(a)classiccmp.org>
Date: Wednesday, July 05, 2000 7:30 PM
Subject: Re: Tim's own version of the Catweasel/Compaticard/whatever
> You may be onto something, Tim, but I'd make one observation here.
> The signal on pin 2 of the 8" drive cable, though often driven with
> the 1793's TG43 signal, does not turn write precomp on and off, but,
> rather, reduces the write current to the heads. This reduces the
> amplitude of the signal driving the heads, hence reduces the overall
> amplitude of the recovered signal as well.

In many cases it's also used to alter write precomp. Nearly all
controllers apply some precomp (especially DD controllers), and for
the TG43 case they alter it to further compensate for the bit shift
caused by the closer spacing of the magnetic domains on the inner
tracks.
Bogus. The levels are dealt with in the read amps, with margin to
spare. What changing the write current really affects is read bit
shift (aka peak shift), which grows as the bit density goes up (inner
tracks are shorter than the outer ones).
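
For rough numbers (the radii below are my approximations of an 8"
disk's recording band, not exact figures): at a fixed data rate and
rotation speed the same bits per revolution must fit on a shorter
inner track, so the flux transitions crowd together roughly in
proportion to the radius ratio.

  #include <stdio.h>

  /* Rough illustration of why inner tracks need the TG43 treatment:
   * same bits per revolution, smaller circumference.  The radii are
   * assumed approximations, not measured values. */
  int main(void)
  {
      double r_outer = 3.8, r_inner = 2.2;    /* inches, assumed */
      printf("inner-track bit density ~%.1fx the outer track's\n",
             r_outer / r_inner);              /* circumference ratio */
      return 0;
  }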
> That same signal is used to enable write precompensation on some
> controllers, many of which use a less-than-ideal timebase to define
> the precompensation offsets imposed on the data stream.
This is true, or worse, some used one-shots. Generally the timebase
for the bit encoding was always a crystal with no worse than 200 ppm
error and less than 50 ppm drift. The typical system was usually
within 50 ppm of exact and drifted less than 25 ppm over temperature
extremes. Often the actual data rate was far lower than that
reference (usually 1/4 or 1/8th of it).
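
For perspective, here's what those ppm figures amount to against a
single bit cell (the 250 kbps / 2 us DD MFM cell is my assumed example
rate):

  #include <stdio.h>

  /* Timing error contributed by crystal tolerance per 2 us bit cell.
   * Even the worst-case figure is a fraction of a nanosecond, far
   * below media peak shift, so the encode-side timebase was never
   * the dominant error source. */
  int main(void)
  {
      const double cell_ns = 2000.0;               /* assumed DD MFM cell */
      const double ppm[] = { 200.0, 50.0, 25.0 };  /* figures quoted above */
      int i;

      for (i = 0; i < 3; i++)
          printf("%4.0f ppm -> %4.2f ns error per %.0f ns cell\n",
                 ppm[i], cell_ns * ppm[i] / 1e6, cell_ns);
      return 0;
  }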
> Do you think you could take a stab at swapping the timebase on your
> Catweasel board with a 32 MHz crystal? I think that would be VERY
> illuminating, particularly where these precomp/write-current-related
> effects are concerned, because phase noise introduced by the deviation
> of the Catweasel timebase from a harmonic of the data rate adds
> confusion.
There lies a conundrum: study the media and the magnetic domains
therein, or just get the data? A lower clock would be adequate for
getting the data. Further, while I was studying digital PLL state
machines I found a point where increasing the clock (for greater
resolution) produced sharply diminished improvement. Signal
processing theory (analog) suggests the same.
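
A quick calculation shows where the diminishing returns set in; the
few-hundred-ns peak shift figure and the 250 kbps data rate are my
assumed ballpark, not measurements:

  #include <stdio.h>

  /* Sample-clock quantization error versus a ballpark peak-shift
   * figure.  Once the half-step quantization error is small next to
   * the media's own bit shift, doubling the clock again buys little. */
  int main(void)
  {
      const double peak_shift_ns = 300.0;     /* assumed ballpark */
      double mhz;

      for (mhz = 8.0; mhz <= 64.0; mhz *= 2.0)
          printf("%4.0f MHz: +/-%5.1f ns quantization vs ~%.0f ns peak shift\n",
                 mhz, 0.5 * 1000.0 / mhz, peak_shift_ns);
      return 0;
  }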
From: Tim Mann <mann(a)pa.dec.com>:
> So, what's the heuristic? It's quite crude and oversimplified too,
> but it seems to work pretty well. The general idea is that if an
> interval is a bit off from what you were expecting it to be, multiply
> the error by some factor around 0.5 to 0.8 (you sometimes have to
> tune it for each disk if they are particularly bad), and add that to
> the next interval.
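
In code, that heuristic might look roughly like the sketch below; the
function name, the 2 us cell, and the round-to-nearest classification
are my fill-ins, not Tim's actual implementation.

  #include <math.h>

  /* Sketch of the error-feedback heuristic: classify each flux
   * interval as a whole number of bit cells, then carry a fraction of
   * the residual error into the next interval before classifying it. */
  int decode_interval(double raw_ns, double *carry_ns)
  {
      const double cell_ns = 2000.0;   /* nominal bit cell, assumed */
      const double factor  = 0.7;      /* within Tim's 0.5..0.8 range */
      double adjusted = raw_ns + *carry_ns;
      int ncells = (int)floor(adjusted / cell_ns + 0.5);

      if (ncells < 1)
          ncells = 1;
      /* residual bit shift, partially fed into the next interval */
      *carry_ns = factor * (adjusted - ncells * cell_ns);
      return ncells;
  }

The feedback works because a late-shifted transition lengthens one
interval and shortens the next by the same amount, so carrying part of
the error forward cancels much of it.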
I'd suggest some factor less than 0.5; flux shift errors on floppies
rarely move a great amount unless the spindle bearings are rattling
loose. Actually, based on the media and the expected recording rate
it's possible to plug in a set of expected timing windows and
add/subtract a "precompensation" window amount based on the adjacent
bits. For example, runs of adjacent ones or zeros (especially runs
longer than two bits) tend to spread or compress compared with
patterns like alternating ones and zeros.
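
A sketch of that window idea, with every constant (the 2 us cell, the
100 ns compensation step, and the spread/compress rule) an
illustrative assumption rather than a measured value:

  #include <math.h>

  /* Classify an MFM interval as 2, 3, or 4 bit cells, shifting the
   * expected window centers by a small pattern-dependent
   * "precompensation" amount based on the preceding interval. */
  int classify_mfm(double raw_ns, int prev_cells)
  {
      const double cell_ns = 2000.0;   /* assumed DD MFM bit cell */
      const double comp_ns = 100.0;    /* assumed compensation step */
      double best = 1e9;
      int n, pick = 2;

      for (n = 2; n <= 4; n++) {
          double expect = n * cell_ns;
          /* long runs tend to spread, tight alternations compress */
          expect += (prev_cells >= 3) ? comp_ns : -comp_ns;
          if (fabs(raw_ns - expect) < best) {
              best = fabs(raw_ns - expect);
              pick = n;
          }
      }
      return pick;
  }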
Further, with the whole "timing image" in memory it should be possible
to look at longer strings of transitions and do simple predictive
forecasting (a software PLL). Add to that the encoding form (FM, MFM,
M2FM, RLL, or GCR) and the history of previous bits, and it should be
straightforward enough to predict the likely next transition(s), be
they ones or zeros.
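
As a sketch of that predictive idea under assumed parameters (MFM with
legal 2/3/4-cell run lengths, a 2 us nominal cell, a 1/16 loop gain):

  /* First-order software PLL run over the whole timing image: track
   * the local bit-cell length with an exponential average so slow
   * spindle-speed drift is followed while short-term peak shift
   * averages out. */
  void track_cell_length(const double *interval_ns, int count,
                         double *cell_estimate)
  {
      double cell = 2000.0;            /* nominal DD MFM bit cell */
      int i;

      for (i = 0; i < count; i++) {
          int n = (int)(interval_ns[i] / cell + 0.5);
          if (n < 2) n = 2;            /* clamp to legal MFM runs */
          if (n > 4) n = 4;
          /* nudge the estimate toward the implied cell length */
          cell += (interval_ns[i] / n - cell) / 16.0;
          cell_estimate[i] = cell;
      }
  }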
It is serendipitous that the code you have effectively implements a
tracking filter (a type of PLL). Why? Many of the parameters of the
media, like peak shift and other behaviors, tend to average themselves
out and cancel. Most of this stuff is not rocket science; it does,
however, require seeing into the set of abstractions to make them
obvious.
Allison