Keith wrote:
Without sounding too obvious here, the times between the pulses (which
more or less define the data) were grossly out of spec. The DD pulses
should nominally be 4us, 6us, and 8us apart before pre-write
compensation. Most good disks are slightly faster, and normal times
for these ranges are:
4us: 3.2-4.2us, many around 3.75us
6us: 5.5-6.2us
8us: 7.5-8.2us
(notice the margins between ranges are around 1-1.3us)
My original microcontroller implementation was 3.2-4.2, 5.2-6.2, and
7.2-8.2.
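
For illustration, the fixed-window scheme boils down to something like
this: a minimal C sketch using the microcontroller ranges above (the
function name and nanosecond units are made up here, not the actual
controller code).

    #include <stdio.h>

    /* Classify a measured flux-transition interval (in ns) into one of
     * the three DD buckets using fixed windows: 3.2-4.2us -> 4us cell,
     * 5.2-6.2us -> 6us cell, 7.2-8.2us -> 8us cell. Returns the nominal
     * cell time in us, or -1 if the pulse lands in a gap between ranges. */
    static int classify_interval_ns(unsigned t)
    {
        if (t >= 3200 && t <= 4200) return 4;
        if (t >= 5200 && t <= 6200) return 6;
        if (t >= 7200 && t <= 8200) return 8;
        return -1;  /* in the noise margin: no bucket fits */
    }

    int main(void)
    {
        unsigned samples[] = { 3750, 6100, 8050, 4700 };  /* ns */
        for (int i = 0; i < 4; i++)
            printf("%u ns -> %d us bucket\n", samples[i],
                   classify_interval_ns(samples[i]));
        return 0;
    }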
When my current FPGA controller had a problem, I'd notice it was right
on a boundary. So maybe pulses were coming in 3.1us apart instead of
3.2. Or maybe 4.3 instead of 4.2. So I
kept bumping the intervals apart, making a larger range of pulse times
acceptable --- the XOR sector checksums were passing, so I was likely
making the right choices. The bits were ending up in the right buckets.
But as I went through some of these disks, I ended up with the
difference between ranges (basically my noise margin) getting smaller
and smaller, some to the point where an incoming pulse time might fall
darn smack in the middle of the noise margin. Which bucket does THAT
one go into?
My approach has been very successful (easily 95%+), but it makes me
wonder about Phil's DiscFerret dynamic adaptive approach where a
sample of the incoming data defines the ranges.
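
I don't know DiscFerret's actual algorithm, but one plausible reading
of "a sample of the incoming data defines the ranges" is to histogram
a sample of intervals and put each bucket boundary at the quietest bin
between two peaks. A hypothetical C sketch (the names, bin size, and
search windows are all mine):

    #include <stddef.h>

    #define BIN_NS 100   /* 0.1us histogram bins */
    #define NBINS  100   /* covers 0-10us */

    /* Derive the two bucket boundaries from a sample of the track:
     * intervals below *lo_ns go in the 4us bucket, below *hi_ns in
     * the 6us bucket, and everything above in the 8us bucket. */
    static void adapt_thresholds(const unsigned *t_ns, size_t n,
                                 unsigned *lo_ns, unsigned *hi_ns)
    {
        unsigned hist[NBINS] = { 0 };
        for (size_t i = 0; i < n; i++) {
            size_t b = t_ns[i] / BIN_NS;
            if (b < NBINS)
                hist[b]++;
        }
        /* Put each boundary at the emptiest bin between adjacent
         * peaks, searching near the nominal gaps (4.5-5.5us and
         * 6.5-7.5us). */
        size_t best = 45;
        for (size_t b = 45; b <= 55; b++)
            if (hist[b] < hist[best]) best = b;
        *lo_ns = (unsigned)(best * BIN_NS);

        best = 65;
        for (size_t b = 65; b <= 75; b++)
            if (hist[b] < hist[best]) best = b;
        *hi_ns = (unsigned)(best * BIN_NS);
    }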
This is exactly why good floppy controller hardware uses a PLL for
data recovery, rather than one-shots, simple state machines, and the
other approaches that were taken to save money, board space, etc.
My experiments with floppy data recovery in software used a simple DPLL,
and I found that it tracked the data much better than any simple
threshold scheme.
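
Roughly, the technique looks like this; a simplified C sketch of a
flux-interval DPLL, not the code from those experiments, and the gain
and nominal cell values here are only for illustration:

    #include <stdio.h>

    /* Decode a stream of flux-transition intervals by tracking the
     * bit-cell period. At DD MFM rates the nominal cell is 2us, so
     * the 4/6/8us pulses span 2, 3, or 4 cells. */
    static void dpll_decode(const double *t_us, int n_pulses)
    {
        double cell = 2.0;           /* nominal DD MFM bit cell, us */
        const double gain = 0.05;    /* loop gain: how quickly we track */

        for (int i = 0; i < n_pulses; i++) {
            /* How many bit cells does this interval span? */
            int ncells = (int)(t_us[i] / cell + 0.5);
            if (ncells < 1)
                ncells = 1;

            /* Phase error: how far the pulse landed from the window
             * centre, then nudge the cell estimate toward it. */
            double err = t_us[i] - ncells * cell;
            cell += gain * err / ncells;

            printf("%.2fus -> %d cells (cell now %.3fus)\n",
                   t_us[i], ncells, cell);
        }
    }

Because the cell estimate is nudged a little on every pulse, the
decision windows follow slow drift instead of sitting at fixed
thresholds.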
The best data separators actually used PLLs with two feedback paths of
different characteristics: one to track longer-term variation due to
motor speed, and one to track short-term effects.
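
Sketched as a loop filter with two gains (my simplified reading of the
idea, not any particular chip's design; the kp and ki values are
arbitrary), the slow integral term absorbs long-term frequency error
from motor speed while the fast term rides out short-term jitter:

    /* Two-path loop filter for a software data separator. */
    typedef struct {
        double nominal;  /* nominal bit cell, us (2.0 for DD MFM) */
        double freq;     /* slow path: accumulated frequency correction */
        double phase;    /* fast path: replaced on every pulse */
    } dual_pll;

    /* Update the loop with one interval; returns the number of bit
     * cells the interval spans. */
    static int dual_pll_update(dual_pll *p, double interval_us)
    {
        const double kp = 0.2;    /* fast gain: short-term jitter */
        const double ki = 0.01;   /* slow gain: motor-speed drift */

        double cell = p->nominal + p->freq + p->phase;
        int n = (int)(interval_us / cell + 0.5);
        if (n < 1)
            n = 1;
        double err = (interval_us - n * cell) / n;  /* per-cell error */

        p->freq  += ki * err;   /* persists: long-term speed variation */
        p->phase  = kp * err;   /* transient: short-term effects */
        return n;
    }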
Eric