I'm not aware of any other scheme that doesn't have the same problem. A real
hard drive saves on-the-fly as the write transitions are happening on the head.
When a track or head transition is happening, the MCU is free to drop the bit
stream and has all its cycles free. This gives it plenty of time to save the
current in-memory track and load a new one from the storage device. When staged
and reporting ready to the host, the MCU will be using most of its cycles and
memory to spin the oversampled track bitstream round and round. Even with
interrupt priority, this leaves very few cycles to try to save track changes on
the fly. So I suggest adding a super cap to the design, diode-ORing it in with
the main power, and running main power into an analog comparator on the MCU.
That way you can sense when the machine loses power and save the current track
both on power loss and track/head change.
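To make that concrete, here's a rough sketch of the comparator path in plain C.
The hardware hooks (comparator_init, save_current_track, the threshold value)
are placeholders I'm inventing for illustration, not anything from the real
firmware:

    /* Sketch only: the extern hooks below are hypothetical stand-ins
     * for the real MCU's comparator and storage routines. */

    #include <stdbool.h>
    #include <stdint.h>

    extern void comparator_init(uint16_t threshold_mv); /* arm analog comparator on main rail */
    extern void save_current_track(void);               /* flush in-memory track to storage   */

    static volatile bool track_dirty = false;           /* set by the write-gate handling path */

    void power_monitor_setup(void)
    {
        comparator_init(4500); /* trip somewhat below the nominal rail (made-up value) */
    }

    /* Comparator interrupt: main power has dropped below the threshold.
     * The super cap keeps the MCU alive long enough to finish the save. */
    void power_fail_isr(void)
    {
        if (track_dirty) {
            save_current_track();
            track_dirty = false;
        }
        /* From here on, just let the cap drain. */
    }

    /* Cylinder/head change: the MCU isn't streaming bits, so it has the
     * cycles to spare for a save. */
    void on_seek_or_head_change(void)
    {
        if (track_dirty) {
            save_current_track();
            track_dirty = false;
        }
    }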
-Alan
On November 4, 2013 at 1:39 PM Al Kossow <aek at bitsavers.org> wrote:
On 11/4/13 9:57 AM, Chuck Guzis wrote:
It would seem entirely reasonable to have firmware for a machine-specific
hard drive.
Is this just to reduce the storage requirements to just the sector payload?
The tradeoff then is the software and hardware resources for synthesis and
recovery of the various fields of a sector as opposed to just treating the
whole thing as a bitstream sitting in what looks like a recirculating shift
register.
The problem with the shift register approach is when do you save it? After
every assertion and negation of the write gate? Every time the cylinder/head
changes?
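For reference, a toy sketch of what that "recirculating shift register" view
amounts to in C (buffer size and names are made up for illustration):

    /* Toy model of the bitstream approach: the whole oversampled track,
     * headers, gaps and all, lives in one circular buffer that is clocked
     * out bit after bit and overwritten in place under write gate. */

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define TRACK_BITS (1u << 18)         /* example capacity, not a real geometry */

    static uint8_t track[TRACK_BITS / 8]; /* raw track image, no per-sector parsing */
    static size_t  bitpos;                /* current rotational position            */

    /* One bit-cell tick: emit the current bit, overwrite it when write
     * gate is asserted, then advance and wrap around the track. */
    static bool tick(bool write_gate, bool write_bit)
    {
        size_t  byte = bitpos >> 3;
        uint8_t mask = (uint8_t)(1u << (bitpos & 7));
        bool    out  = (track[byte] & mask) != 0;

        if (write_gate) {
            if (write_bit) track[byte] |= mask;
            else           track[byte] &= (uint8_t)~mask;
        }
        bitpos = (bitpos + 1) % TRACK_BITS;
        return out;
    }

Writes land straight in the buffer, which is exactly why the "when do you save
it?" question above has no free answer.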