Philip Pemberton wrote:
Jules Richardson wrote:
True, though the hardware I'm actually thinking of using is my USB hack-and-bodge floppy reader. FPGA, PICmicro, and a few buffers/level translators.
OK, so presumably you can throw something in there to switch heads via software?
That's what the User I/O port is for :)
:-)
True. An FDI file containing the sample data isn't likely to be /that/ much bigger though.
No; the raw data file size is manageable I think - it's just the
buffer size that needs to be reasonably big.
Well, given that the average hard drive size these days seems to be about 500GB, and even a CD-R will store 700MB, 100MB per image isn't that unreasonable.
Well, I've got various hard disk byte-level 'raw' backups kicking around in
the 40-1024MB region; the increased size of a flux-transition image wouldn't
really make a significant amount of difference to the cost of storage.
It might take a while to process the data, though.
I really don't know - maybe some of the resident Catweasel experts can comment
there. I suppose to 'understand' an image we wouldn't be doing anything in
software that the drive controller doesn't normally do in hardware.
Urgh, I got the figure wrong. Pretend that decimal point isn't there... USB2 Low Speed is 1.5Mbps, Full Speed is 12Mbps, High Speed is 480Mbps. Of course if I'd said MBps (i.e. megabytes per second), it would have been almost right as well... :)
In any case, you're probably looking at a megabyte or so per second (12Mbps Full Speed is 1.5MB/s raw, and somewhat less once protocol overhead is taken off), which isn't too bad.
To be honest, the quicker the better - but I wouldn't be upset if it took an
hour or two to grab a drive image. Some of the dumps I have were transferred
at 9600 baud serial...
If we assume we're reading a floppy disc, the most you'll get on a track is (IIRC) about 120,000 flux transitions. If we say the drive is spinning at 360RPM, that's about 0.17 seconds to actually read the track, followed by roughly the same again to transfer the data to the PC. Being pessimistic, we're talking about half a second per track. That's 40 seconds per side, or 80 seconds for all 80 tracks, both sides.
If we're thinking ST506 at 3600RPM and 5Mbps, it'll probably be a little quicker, but not much better than a quarter-second per track.
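(Rough arithmetic for those figures, in Python; the ~1MB/s host rate and the 2 bytes per transition sample are assumptions carried over from the USB discussion above, not measured numbers:)

HOST_RATE = 1_000_000      # bytes/sec to the PC (assumed from thread)
SAMPLE_SZ = 2              # bytes stored per flux transition (assumed)

def per_track(rpm, transitions):
    """Seconds to read one revolution, then ship the data to the PC."""
    read = 60.0 / rpm
    ship = transitions * SAMPLE_SZ / HOST_RATE
    return read, ship

r, s = per_track(360, 120_000)     # floppy: 360RPM, ~120k transitions
print(f"floppy: {r:.2f}s read + {s:.2f}s transfer per track")
print(f"pessimistic 0.5s/track x 160 track-sides = {160 * 0.5:.0f}s")

r, s = per_track(3600, 83_000)     # ST506: 3600RPM, 5Mbps -> ~83k bit cells
print(f"ST506:  {r:.3f}s read + {s:.2f}s transfer per track")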
Can you read from the buffer and transfer to the PC at the same time as you're
populating the buffer by reading from the drive? Or is what you have
essentially a single microcontroller? One of the things I was trying to
achieve when I was looking at doing this in pure TTL was a degree of
parallelism - because the device doesn't attempt to understand the drive's
data stream (that's done later on the host PC), it can't benefit from re-seek
or re-read attempts anyway, so there's no requirement to ever pause buffer
fill and try again.
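(The kind of overlap I mean is classic double-buffering. A rough Python sketch of the idea -- read_track_chunk() and usb_send() are hypothetical stand-ins for whatever the real hardware interfaces turn out to be:)

import queue
import threading

free = queue.Queue()            # empty buffers waiting to be filled
full = queue.Queue()            # filled buffers waiting to be shipped
for _ in range(2):              # classic ping-pong: two buffers
    free.put(bytearray(64 * 1024))

def capture(read_track_chunk, chunks):
    """Fill buffers from the drive; only pauses if it runs out of
    free buffers because the USB side has fallen behind."""
    for _ in range(chunks):
        buf = free.get()
        read_track_chunk(buf)   # hypothetical drive-side read
        full.put(buf)
    full.put(None)              # end-of-capture marker

def transfer(usb_send):
    """Ship filled buffers to the PC and recycle them."""
    while (buf := full.get()) is not None:
        usb_send(buf)           # hypothetical USB-side send
        free.put(buf)

def run(read_track_chunk, usb_send, chunks):
    t = threading.Thread(target=transfer, args=(usb_send,))
    t.start()
    capture(read_track_chunk, chunks)
    t.join()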
MFM is a (1,3)RLL code. That means you can have a minimum of 1 and a maximum of 3 empty bit cells between flux transitions.
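(That constraint is easy to demonstrate with the textbook MFM encoding rule -- a clock transition is only written between two consecutive 0 data bits. A few lines of Python:)

def mfm_encode(bits):
    """Textbook MFM: each data bit becomes [clock cell, data cell]."""
    out, prev = [], 0
    for b in bits:
        clock = 1 if (prev == 0 and b == 0) else 0
        out += [clock, b]
        prev = b
    return out

channel = mfm_encode([1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1])
ones = [i for i, c in enumerate(channel) if c]       # transition positions
gaps = [b - a - 1 for a, b in zip(ones, ones[1:])]   # empty cells between
print(gaps)                              # -> [2, 2, 1, 3, 2, 1, 2]
assert all(1 <= g <= 3 for g in gaps)    # the (1,3) constraint holds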
Yeah, but that's for complete data, is it not? I'm not sure what
happens if given a damaged drive - given that ST506/412 is dumb,
presumably it just blindly spits out what it finds on the disk surface?
I'd expect so -- an ST506 type drive is basically a motor, speed controller, and head amplifier on a PCB, attached to an HDA. Basically the same type of hardware you'd find on the average floppy drive PCB.
Uh huh. So I do think that theoretically you could have a transition time the length of the track. Not likely, but possible. Between that extreme and 'normal' data there may well be transition times that are quite high, so they do need to be recorded (so that you can hopefully recover sectors of a damaged track beyond the bad spot).
It'd be nice to be able to capture whatever the drive throws at this device, in the hope of making some sense out of (or 'beyond') damaged sections in software later. If the track data has 'holes' then the device needs to be able to record the length of those holes in order to portray what's on the disk accurately.
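(One way a dumb capture device could manage that: count sample clocks between transitions, and emit an escape word whenever a fixed-width counter would overflow, so a hole of any length is still recorded exactly. A sketch -- the 16-bit record format here is invented purely for illustration:)

MAX_COUNT = 0xFFFE          # largest interval one record word can hold
OVERFLOW  = 0xFFFF          # escape code: "add MAX_COUNT and keep counting"

def encode_interval(ticks):
    """Encode one transition-to-transition interval as 16-bit words."""
    words = []
    while ticks > MAX_COUNT:
        words.append(OVERFLOW)
        ticks -= MAX_COUNT
    words.append(ticks)
    return words

def decode_intervals(words):
    total, out = 0, []
    for w in words:
        if w == OVERFLOW:
            total += MAX_COUNT      # long dead-time: keep accumulating
        else:
            out.append(total + w)
            total = 0
    return out

# A damaged region shows up as one huge interval, not as lost data:
stream = encode_interval(40) + encode_interval(500_000) + encode_interval(60)
print(decode_intervals(stream))     # -> [40, 500000, 60]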
Good point... every ST506 drive is going to have one or two (minimum, usually about 10-20 IME) permanent media errors.
For sure - or worse. I've still recovered useful data from drives that have
been 80% bad - sometimes it's worth it.
The reader hardware is going to need some way of dealing with these errors, even if that's just stepping the resolution down and storing a less-accurate measurement of the dead-time.
Hmm, so you're proposing that the reader actually tries to understand the storage mechanism? I was thinking that it was a dumb device, basically copying the transitions that it sees to the host PC, with the software on the PC doing the actual decoding (and potentially requesting re-read attempts at a later date).
SRAM is relatively expensive though. SDRAM isn't, but then you have to deal with refreshing and other "fun" things like that.
Yeah, I came to that conclusion way back when, too.
I suppose the alternative would be to use an off-the-shelf SDRAM
controller, and then buffer the writes inside the FPGA.
Part of me still thinks that the 'elegant' solution to this is perhaps a PCI card which bridges a 'raw' STxxx interface to an off-the-shelf compact PC motherboard. Use the PC's memory for the buffer (and refresh, of course), boot a cut-down OS from a CF card (and use the CF card for temporary image storage), and potentially allow the user to make use of whatever interface to the 'outside world' they wished (personally I'd likely use Ethernet, and run an FTP server & telnet control interface on the machine, etc.)
However, a) I'm not sure if PCI is actually fast enough for what's needed (i.e. whether the system could sample into memory from a drive quickly enough), and b) I'm not sure we have anyone 'on-tap' who knows how to build a PCI card anyway... :-)
Well, if you can decode the data then it's probably fair to assume you can re-encode it as well...
True.
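(Going by the textbook MFM rule again: once the channel bits are cell-aligned, decoding is just keeping the data cells, and re-encoding is re-running the encoder. A round-trip sketch in Python:)

def mfm_encode(bits):
    """Each data bit becomes [clock cell, data cell], as sketched earlier."""
    out, prev = [], 0
    for b in bits:
        out += [1 if (prev == 0 and b == 0) else 0, b]
        prev = b
    return out

def mfm_decode(channel):
    return channel[1::2]        # data bits live in the odd (data) cells

data = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1]
assert mfm_decode(mfm_encode(data)) == data   # round-trips cleanly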
Aside: I'm not sure if it's beneficial having a number of buffers and reading/writing multiple head data in parallel? I worry about running STxxx drives for longer than necessary :-) Probably not viable just from a RAM cost point of view, though...
Well, the idea was that you'd do two or three reads of each track at
once, meaning that you could use some form of "majority-voting" scheme
to figure out what the data was before it got scrambled.
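(The vote itself is just a bitwise 2-of-3 across aligned reads -- sketched below in Python. The genuinely hard part, glossed over here, is aligning real flux reads to the same length in the first place:)

def majority_vote(a_buf, b_buf, c_buf):
    """Byte-wise 2-of-3 majority vote across three aligned track reads."""
    return bytes(
        (a & b) | (b & c) | (a & c)   # classic bitwise majority function
        for a, b, c in zip(a_buf, b_buf, c_buf)
    )

# A bit flipped in any single read is outvoted by the other two:
good = bytes([0xA5, 0x0F, 0xFF])
bad  = bytes([0xA5, 0x0F, 0xFD])      # one bit error in the last byte
assert majority_vote(good, good, bad) == good
assert majority_vote(good, bad, good) == good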
That was a complete brain-fart on my part - I was momentarily thinking that STxxx gives you data from all heads at once in parallel, which would maybe make reading into multiple buffers a good idea. It doesn't, of course; you select a specific head and get only that head's data as the output stream...
This would be pretty good for discs where the adhesive that binds the ferrite/whatever to the mylar disc had weakened -- you may only get one shot at reading it, so make it count.
Yeah, that could work, subject to enough buffer space. (Incidentally, you've now got me worried about how much precomp affects the timing of all of this, or the fact that clock bits are supposed to sit at the bit-cell boundaries with the 'normal' data bits in the middle of the cell. I suppose it doesn't matter if the sampling resolution is high enough.)
I'd still consider the ultimate to be some form of non-contact disc drive, but that would involve custom-made FDD heads and other such "fun".
Well, for hard disks they're all flying heads anyway...
My ultimate would be something which recorded the signal at a more analogue level, I think, for potential cleaning up in software later (rather than using a digital interface which can't perform an intelligent analysis of the data stream). I think it's do-able for a floppy drive, and probably without a lot of modification to the donor drive itself.
(I've mused before on here about running the entire drive in a bath of
something which might reduce friction and the chances of damage due to
contamination - it'd be interesting purely from an experimentation point of
view, but I'm not sure if it'd ever be possible to prove that it actually helped)
cheers
Jules