Here you wade through the comments...
On Wed, 5 Jul 2000, Richard Erlacher wrote:
Please see comments embedded below.
Dick
----- Original Message -----
From: allisonp <allisonp@world.std.com>
To: Classic Computers <classiccmp@classiccmp.org>
Sent: Wednesday, July 05, 2000 7:15 PM
Subject: Re: Tim's own version of the Catweasel/Compaticard/whatever
From: Richard Erlacher <richard@idcomm.com>
To: classiccmp@classiccmp.org <classiccmp@classiccmp.org>
Date: Wednesday, July 05, 2000 7:30 PM
Subject: Re: Tim's own version of the Catweasel/Compaticard/whatever
You may be onto something, Tim, but I'd make one observation here. The
signal on pin 2 of the 8" drive cable, though often driven with the 1793's
TG43 signal, does not turn write precomp on and off, but, rather, reduces
the write current to the heads. This reduces the amplitude of the signal
In many cases it's also used to alter write precomp. Most all have some
precomp (especially DD controllers), and for the TG43 case they alter the
precomp to further compensate for bit shift due to the close magnetic
domains.
That's precisely what I said, isn't it? The only thing is that driving pin 2
(RWC) of the cable doesn't do anything on the controller unless you've
provided circuitry to do that. I did say the TG43 flag on the 179x is used
to enable precomp, right?
driving the heads, hence reduces the overall amplitude of the recovered
signal as well. That same signal is used to enable write precompensation on
Bogus. The levels are dealt with in the read amps, with margin as well.
What changing the write current really impacts is the read bit shift (aka
peak shift) as the bit density goes up (inner tracks are shorter than
outer).
Gee, if reducing the write current is done to reduce the signal amplitude
on the heads, I wonder why they say that . . .
It does, but the peak shift is what is of more interest, and the lower read
signal is less of a concern than mashed flux transitions.
some controllers, many of which use a less-than-ideal timebase to define
the precompensation offsets imposed on the data stream.
This is true, or worse, some used one-shots. Generally the time base for
the bit encoding was always a crystal with no worse than 200 ppm error
The commercial standard for crystal oscillators has been 100 ppm since back
in the mid-'70s. There were cheap ones at 1000 ppm, though, but most floppy
drives had no need for oscillators. That's where the one-shots lived.
I said crystals, not complete oscillators. Many of the cheap CPU clock
rocks were really low-accuracy parts. Like Tim said, in the world of
mechanical slop 200 ppm is nothing: spindle speed alone is allowed to
wander a percent or two, which is 10,000-20,000 ppm.
and less than 50 ppm drift. The typical system was usually within 50 ppm
of exact and drifted less than 25 ppm over temperature extremes. Often the
actual data rate was far lower than that reference (usually 1/4 or 1/8th).
The one-shots were often timed with 5% resistors and 10% capacitors. They
were temperature- and voltage-sensitive as all hell. Most of the caps were
not good quality, and the resistors, while within 5% of stated value, said
little about their thermal characteristics.
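For a back-of-the-envelope sense of the gap, take a one-shot whose pulse
width goes as R*C, built from those 5% and 10% parts, against a 200 ppm
crystal (illustrative numbers only, not anything from a datasheet):

#include <stdio.h>

/* Worst-case timing error of an RC one-shot (5% resistor, 10% capacitor)
   vs. a 200 ppm crystal.  Pulse width ~ R*C, so the tolerances multiply. */
int main(void)
{
    double r_tol = 0.05;                 /* 5% resistor */
    double c_tol = 0.10;                 /* 10% capacitor */
    double oneshot = (1.0 + r_tol) * (1.0 + c_tol) - 1.0;  /* both high */
    double xtal = 200e-6;                /* 200 ppm */

    printf("one-shot worst case: %+.1f%%\n", oneshot * 100.0); /* +15.5% */
    printf("crystal worst case:  %+.3f%%\n", xtal * 100.0);    /* +0.020% */
    printf("ratio: roughly %.0f to 1\n", oneshot / xtal);      /* ~775:1 */
    return 0;
}

And that's before the one-shot's temperature and supply sensitivity is
counted at all.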
Do you think you could take a stab at swapping the timebase on your
Catweasel board with a 32 MHz crystal? I think that would be VERY
illuminating, particularly where these precomp/write-current-related
effects are concerned, because phase noise introduced by the deviation of
the Catweasel timebase from a harmonic of the data rate adds confusion.
There lies a conundrum: study the media and the magnetic domains therein,
or get the data? A lower clock would be adequate for getting the data.
A clock as slow as 4 MHz would be quite adequate for reading, Allison, but
if you want the optimal relationship between write data and precomp,
ensuring the best likelihood of recovering the data, you need to have 16x
resolution as a minimum, and somewhere on the order of 12x as the interval
by which you precompensate. This can vary considerably with the drive, but
it's a typical value for 1980-generation heads and media. SMC and Western
Digital both made parts, rather late in the game, that performed these
functions digitally but used a 32x clock.
I'm quite aware of them; have been since the early '80s.
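To put numbers on those multipliers, here's what a few candidate sampling
clocks resolve against an 8" double-density stream; the 500 kbit/s rate
and 2 us bit cell are my assumed baseline, and the 16x/32x factors are the
ones from the discussion above:

#include <stdio.h>

/* Resolution of candidate sampling clocks against a 500 kbit/s MFM
   stream (2 us bit cell).  16x the data rate = 8 MHz, and so on. */
int main(void)
{
    double cell_ns = 2000.0;                    /* 2 us bit cell */
    double clock_mhz[] = { 4.0, 8.0, 16.0, 32.0 };

    for (int i = 0; i < 4; i++) {
        double step_ns = 1000.0 / clock_mhz[i]; /* one sample period */
        printf("%5.1f MHz: %6.2f ns/step, %4.0fx per cell, %.2f%% quantization\n",
               clock_mhz[i], step_ns, cell_ns / step_ns,
               100.0 * step_ns / cell_ns);
    }
    /* 8 MHz (16x) gives 125 ns steps and 6.25% quantization; a 32 MHz
       Catweasel clock gives 31.25 ns steps, well under typical precomp
       offsets of 125-250 ns. */
    return 0;
}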
Further, while I was studying digital PLL'd state machines I found a point
where increasing the clock (greater resolution) produced sharply reduced
improvement. Signal processing theory (analog) suggests the same.
That's true, and the knee to which you refer lies around 6% jitter. That's
16x the data rate: one step at 16x resolution is 1/16 of a cell, about
6.25%. The phase noise from a digital PLL is quite tolerable and the
tracking accurate and reasonably continuous at that level. Below that you
have capture and tracking error, and above that you're squandering
resources if there's no more compelling reason to have the frequency
available.
Yep.
From: Tim Mann <mann@pa.dec.com>:
> So, what's the heuristic? It's quite crude and oversimplified too, but
> seems to work pretty well. The general idea is that if an interval is
> a bit off from what you were expecting it to be, multiply the error by
> some factor around 0.5 to 0.8 (you sometimes have to tune it for each
> disk if they are particularly bad), and add that to the next interval
I'd suggest some factor less than 0.5; flux shift errors on floppies
rarely move a great amount unless the spindle bearings are rattling
loose. Actually, based on media and expected recording rate it's
possible to plug in a set of expected timing windows and add/subtract
a "precompensation" window amount based on adjacent bits. For example,
adjacent ones or zeros (especially runs of more than two bits) tend to
spread or compress relative to patterns of alternating ones and zeros.
A sketch of the idea follows.
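Something like this, say; the shape is Tim's heuristic with a smaller
feedback factor, the window snapping is the part I'm describing, and all
the names and numbers are mine, not anyone's shipping code:

#include <stdio.h>

#define FEEDBACK 0.4            /* less than 0.5, per the above */

/* Classify flux intervals against nominal MFM spacings (2, 3, or 4
   half-bit cells), feeding a fraction of each interval's error into
   the next one.  cell = nominal half-bit cell in raw sample ticks. */
void decode_intervals(const double *raw, int n, double cell)
{
    double carry = 0.0;                      /* error carried forward */

    for (int i = 0; i < n; i++) {
        double t = raw[i] + carry;
        int cells = (int)(t / cell + 0.5);   /* nearest whole count */
        if (cells < 2) cells = 2;            /* MFM legal spacings: 2-4 */
        if (cells > 4) cells = 4;
        double err = t - cells * cell;       /* residual vs. nominal */
        carry = FEEDBACK * err;
        printf("%6.1f ticks -> %d cells (err %+5.1f)\n", raw[i], cells, err);
    }
}

int main(void)
{
    /* made-up data: nominal cell of 20 ticks, with some peak shift */
    double raw[] = { 41.0, 58.5, 82.0, 39.0, 62.0, 79.5 };
    decode_intervals(raw, 6, 20.0);
    return 0;
}

The add/subtract-by-adjacent-bits window idea would go where the err term
is computed, biasing the window edges by the neighboring interval lengths.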
Further, with all the "timing image" in memory it should be possible to
look at longer strings of transitions and do simple predictive forecasting
(a software PLL); one way to do that is sketched below. Add to that the
encoding form (FM, MFM, M2FM, RLL or GCR) and previous bit history, and it
should be straightforward enough to predict the likely next transition(s),
be they one or zero.
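For instance, letting the cell-period estimate itself drift with the
observed flux rate, so slow spindle-speed variation averages out; the
ALPHA value and the sample numbers are assumptions for illustration:

#include <stdio.h>

#define ALPHA 0.05     /* small: track spindle drift, not bit noise */

/* Exponential average of the implied cell period -- a crude tracking
   filter.  interval is a raw flux interval, cells its decoded count. */
static double track_cell(double cell, double interval, int cells)
{
    double observed = interval / cells;       /* implied cell period */
    return cell + ALPHA * (observed - cell);
}

int main(void)
{
    double cell = 20.0;   /* starting estimate, raw ticks */
    /* spindle ~2% fast, so the true cell is 19.6 ticks */
    double raw[] = { 39.2, 58.8, 78.4, 39.2, 58.8 };
    int    len[] = { 2, 3, 4, 2, 3 };

    for (int i = 0; i < 5; i++) {
        cell = track_cell(cell, raw[i], len[i]);
        printf("after interval %d: cell = %.3f\n", i, cell);
    }
    return 0;             /* the estimate walks toward 19.6 */
}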
It is serendipitous that the code you have effectively accomplishes a
tracking filter (a type of PLL). Why? Many of the parameters of the media,
like peak shift and other behaviors, tend to average themselves out and
cancel. Most of this stuff is not rocket science; it does, however,
require seeing into the set of abstractions to make them obvious.
Allison