Jules Richardson wrote:
True, though
the hardware I'm actually thinking of using is my USB
hack-and-bodge floppy reader. FPGA, PICmicro, and a few buffers/level
translators.
OK, so you can probably throw something in there to switch heads via
software, presumably?
That's what the User I/O port is for :)
Six pins on the "extended Shugart" port, and (maybe) another eight (plus 5V
and ground) on a separate 10-pin connector.
True. An FDI
file containing the sample data isn't likely to be /that/
much bigger though.
No; the raw data file size is manageable I think - it's just the buffer
size that needs to be reasonably big.
Well given that the average hard drive size these days seems to be about
500GB, and even a CD-R will store 700MB, 100MB per image isn't that
unreasonable. It might take a while to process the data, though.
But it'll
take a bloody age transferring that much data over USB2 Full
Speed (1.2Mbps peak).
I think I did once put in a request for a SCSI version ;)
Urgh, I got the figure wrong. Pretend that decimal point isn't there.. USB2
Low Speed is 1.5Mbps, Full Speed is 12Mbps, High Speed is 480Mbps. Of course
if I'd said MBps (i.e. megabytes per second), it would have been almost right
as well... :)
In any case, you're probably looking at a megabyte or so per second, which
isn't too bad.
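To put rough numbers on it, a quick back-of-envelope in Python (raw line rates only, ignoring USB protocol overhead, so real throughput will be lower):

```python
# How long does a 100MB raw flux image take at each USB2 line rate?
IMAGE_BYTES = 100 * 1024 * 1024   # 100MB image

usb_rates_mbps = {"Low Speed": 1.5, "Full Speed": 12, "High Speed": 480}

transfer_secs = {
    name: IMAGE_BYTES / (mbps * 1_000_000 / 8)   # Mbps -> bytes/sec
    for name, mbps in usb_rates_mbps.items()
}
for name, secs in transfer_secs.items():
    print(f"{name:>10}: {secs:7.1f} s")
# Full Speed works out around 70 s; High Speed well under 2 s.
```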
If we assume we're reading a floppy disc, the most you'll get on a track is
(IIRC) about 120,000 flux transitions. If we say the drive is spinning at
360RPM, that's about 0.17 seconds to actually read the track, followed by
roughly the same again to transfer the data to the PC.
Being pessimistic, we're talking about half a second per track. That's 40
seconds per side, or 80 seconds for all 80 tracks, both sides.
If we think ST506 at 3600RPM and 5Mbps, it'll probably be a little quicker,
but probably not much better than a quarter-second per track.
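The same sums as a throwaway sketch -- the "transfer takes about as long as the read" factor is just the assumption above, not a measured figure:

```python
def track_time(rpm, transfer_factor=1.0):
    """Seconds per track: one revolution to read it, plus a transfer
    phase assumed to take about as long again (transfer_factor=1)."""
    return (60.0 / rpm) * (1.0 + transfer_factor)

print(f"floppy @ 360RPM:  {track_time(360):.2f} s/track")
print(f"ST506 @ 3600RPM:  {track_time(3600):.3f} s/track")
# Pessimistic half-second per track, 80 tracks, 2 sides:
print(f"whole floppy:     {0.5 * 80 * 2:.0f} s")
```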
MFM is a
(1,3)RLL code. That means you can have a minimum of 1 and a
maximum of 3 empty bit cells between flux transitions.
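A toy MFM encoder shows the (1,3) constraint falling straight out of the clock-bit rule (illustrative only, not a production codec):

```python
# MFM: each data bit becomes a clock cell plus a data cell, and the
# clock cell is only set between two zero data bits.
def mfm_encode(bits, prev=0):
    cells = []
    for b in bits:
        clock = 1 if (prev == 0 and b == 0) else 0
        cells += [clock, b]
        prev = b
    return cells

cells = mfm_encode([1, 0, 1, 1, 0, 0, 0, 1, 0])
# Count the zero cells between flux transitions (the 1 cells):
ones = [i for i, c in enumerate(cells) if c == 1]
gaps = [j - i - 1 for i, j in zip(ones, ones[1:])]
print(cells, gaps)   # every gap is between 1 and 3 empty cells
assert all(1 <= g <= 3 for g in gaps)
```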
Yeah, but that's for complete data, is it not? I'm not sure what happens
if given a damaged drive - given that ST506/412 is dumb, presumably it
just blindly spits out what it finds on the disk surface?
I'd expect so -- an ST506-type drive is basically a motor, speed controller,
and head amplifier on a PCB, attached to an HDA. Much the same sort of
hardware you'd find on the average floppy drive PCB.
It'd be nice
to be able to capture whatever the drive throws at this device, in the
hope of making some sense out of (or 'beyond') damaged sections in
software later. If the track data has 'holes' then the device needs to
be able to record the length of those holes in order to portray what's
on the disk accurately.
Good point.. every ST506 drive is going to have one or two (minimum, usually
about 10-20 IME) permanent media errors. The reader hardware is going to need
some way of dealing with these errors, even if that's just stepping the
resolution down and storing a less-accurate measurement of the dead-time.
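One possible byte format for that -- completely made up here, purely to show the idea -- still records the length of the dead-time even when a gap overflows the counter:

```python
def encode_intervals(ticks):
    """Encode flux-to-flux intervals (in clock ticks) as bytes.
    0x01..0xFF = interval length; 0x00 = 'add 255 and keep counting',
    so arbitrarily long dead-time is still recorded, just coarser."""
    out = bytearray()
    for t in ticks:
        while t > 255:
            out.append(0x00)   # overflow marker: +255 ticks of silence
            t -= 255
        out.append(t)
    return bytes(out)

def decode_intervals(data):
    out, acc = [], 0
    for b in data:
        if b == 0:
            acc += 255
        else:
            out.append(acc + b)
            acc = 0
    return out

raw = [40, 60, 5000, 45]          # the 5000-tick gap is a media "hole"
enc = encode_intervals(raw)
assert decode_intervals(enc) == raw
```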
SRAM is
relatively expensive though. SDRAM isn't, but then you have to
deal with refreshing and other "fun" things like that.
Yeah, I came to that conclusion way back when, too.
I suppose the alternative would be to use an off-the-shelf SDRAM controller,
and then buffer the writes inside the FPGA. If you're writing sequentially,
then you leave the SDRAM in "burst" mode and just keep writing. Do a partial
refresh every so often, and...
Yeah. I think what needs to happen is that you sample
a drive,
essentially turn it back into data bytes in software on a host machine,
bung some sector header voodoo around it, and *then* spit it back to the
destination drive using this device...
Well, if you can decode the data then it's probably fair to assume you can
re-encode it as well...
That leaves you with three images -- the raw disc image (say, a .FDI file),
the extracted sector data (an .IMA file if we're talking PC floppies, .ADF for
Amiga, etc.) and the re-encoded data (another .FDI file).
If you split the software up, you can go from FDI -> ADF and play with the ADF
on an emulator, change something on the disc image, then go from ADF -> FDI
again and write it back. For bonus points, you could save the read speed from
stage 1, use it to adjust the data rate for the binary -> MFM conversion, and
you can modify copy-protected discs while leaving speed-based protection
schemes alone (RNC Copylock, Speedlok, etc.)
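The split pipeline might look something like this -- every name and data structure here is a stand-in to show the three-image shape, not a real FDI/ADF library:

```python
# Throwaway stub codecs standing in for real flux decode/encode.
def decode_flux(fdi):
    """FDI -> (sector image, per-track read speed)."""
    return list(fdi["sectors"]), list(fdi["speed"])

def encode_flux(sectors, speed):
    """Sector image + measured speed -> new FDI."""
    return {"sectors": sectors, "speed": speed}

# Stage 1: raw read -> sector image, keeping the measured speed.
fdi_in = {"sectors": [b"boot", b"data"], "speed": [300.1, 299.8]}
adf, speed = decode_flux(fdi_in)

# ...edit the ADF on an emulator here...
adf[1] = b"DATA"

# Stage 2: re-encode, reusing the measured speed so timing-based
# protection (Copylock-style) survives the round trip.
fdi_out = encode_flux(adf, speed)
print(fdi_out["sectors"])
```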
For now, I'm more worried about ailing STxxx
drives and getting data off
them (and making sense of it) - writing back is more of a secondary
issue (but is probably an extra 10% logic on this device to support
writes, I suspect).
Already done that -- it's a finite state machine with index-pulse detection
hardware, which makes dealing with hard-sectored floppies a little easier.
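In software terms the trick is that the extra index hole on a hard-sectored disc sits midway between two sector holes, so it's the pulse flanked by two short gaps. A sketch, with made-up timings:

```python
def find_index(pulse_times, ratio=0.75):
    """Return pulses flanked by two gaps shorter than ratio * the
    nominal sector-to-sector spacing -- i.e. the extra index hole."""
    gaps = [b - a for a, b in zip(pulse_times, pulse_times[1:])]
    nominal = max(gaps)
    return [pulse_times[i] for i in range(1, len(pulse_times) - 1)
            if gaps[i - 1] < ratio * nominal
            and gaps[i] < ratio * nominal]

# 10ms between sector holes; index hole 5ms after one of them:
pulses = [0, 10, 20, 25, 30, 40, 50]
print(find_index(pulses))   # -> [25]
```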
Aside: I'm not sure if it's beneficial having
a number of buffers and
reading/writing multiple head data in parallel? I worry about running
STxxx drives for longer than necessary :-) Probably not viable just from
a RAM cost point of view, though...
Well, the idea was that you'd do two or three reads of each track at once,
meaning that you could use some form of "majority-voting" scheme to figure out
what the data was before it got scrambled.
This would be pretty good for discs where the adhesive that binds the
ferrite/whatever to the mylar disc had weakened -- you may only get one shot
at reading it, so make it count.
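The voting itself is cheap once the reads are lined up -- bitwise, a bit wins if it's set in at least two of the three reads (a sketch; real flux reads would need aligning to a common start point first):

```python
def majority3(a: bytes, b: bytes, c: bytes) -> bytes:
    """Bitwise 2-of-3 majority vote across three reads of a track."""
    return bytes((x & y) | (x & z) | (y & z)
                 for x, y, z in zip(a, b, c))

# Each read has a different single-bit dropout; the vote repairs both:
r1 = bytes([0b10110100, 0b11001100])
r2 = bytes([0b10110110, 0b11001000])
r3 = bytes([0b10110110, 0b11001100])
print(majority3(r1, r2, r3))   # -> b'\xb6\xcc'
```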
I'd still consider the ultimate to be some form of non-contact disc drive, but
that would involve custom-made FDD heads and other such "fun".
Thanks,
--
Phil.
classiccmp at philpem.me.uk
http://www.philpem.me.uk/