Tony wrote:
I thought the whole point of MFM was to reduce the number of flux
transitions per (user) data bit. An FM bit cell _always_ has a clock
transition, and may have a data transition as well. MFM removes some of
the 'wasted' clock transitions.
Right. MFM inserts a clock transition only between two consecutive zero
user data bits.
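The two clocking rules can be sketched in a few lines of Python (a hypothetical illustration of the rules, not any actual controller's code): FM writes a clock transition in every bit cell, while MFM writes one only between two consecutive zero data bits. Assuming NRZI-style recording, each 1 in the channel-bit stream is one flux transition, so summing the stream counts transitions:

```python
def fm_encode(bits):
    # FM: every bit cell gets a clock transition (1), then the data bit.
    out = []
    for b in bits:
        out += [1, b]
    return out

def mfm_encode(bits):
    # MFM: clock transition only between two consecutive zero data bits.
    out = []
    prev = 1  # assume the preceding data bit was 1 (so no leading clock)
    for b in bits:
        out += [1 if (prev == 0 and b == 0) else 0, b]
        prev = b
    return out

data = [0, 1, 1, 0, 0, 0, 1, 0]
print(sum(fm_encode(data)))   # flux transitions under FM:  11
print(sum(mfm_encode(data)))  # flux transitions under MFM:  5
```

Same eight user bits, same sixteen channel-bit cells, but MFM needs far fewer transitions because only two clock transitions survive (the ones inside the 0-0-0 run).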
In the space that can accommodate eight flux transitions [*], the
different schemes pack different numbers of data bits:
  channel code       user data bits per 8         figure of merit
                     potential flux transitions
  FM                           4                       0.5
  MFM                          8                       1.0
  Apple 13-sector              5                       0.625
  Apple 16-sector              6                       0.75
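The figure-of-merit column is just the user data bits divided by the eight potential flux transitions; a throwaway snippet (names mine, not from any spec) makes the arithmetic explicit:

```python
# User data bits packed into a span of 8 potential flux transitions.
schemes = {"FM": 4, "MFM": 8, "Apple 13-sector": 5, "Apple 16-sector": 6}

# Figure of merit = user data bits per potential flux transition.
merit = {name: bits / 8 for name, bits in schemes.items()}
print(merit)
# {'FM': 0.5, 'MFM': 1.0, 'Apple 13-sector': 0.625, 'Apple 16-sector': 0.75}
```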
Dick wrote:
One interesting thing about the Apple GCR modulation format is that it
essentially was a "double-density" technique.
Tim wrote:
Eric said the same thing, and I disagree with you both. To me (and all
I said no such thing. I said that Apple used FM for the address fields,
and that the GCR they used for data fields was more efficient than FM, and
less efficient than MFM. There are other GCR/RLL codes that are more
efficient than MFM; some common ones have figures of merit around 1.5.
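For an RLL(d,k) code of rate R, the figure of merit in the sense used above works out to R*(d+1): the guaranteed d zero channel bits between ones let the channel clock run (d+1) times faster for the same minimum flux-transition spacing. A quick sanity check (treating FM as rate-1/2 RLL(0,1) and MFM as rate-1/2 RLL(1,3)) reproduces the table and the ~1.5 figure:

```python
def rll_merit(rate, d):
    # rate = user bits per channel bit; d = minimum run of zero
    # channel bits between ones (sets the density gain, d + 1).
    return rate * (d + 1)

print(rll_merit(0.5, 0))  # FM  as RLL(0,1): 0.5
print(rll_merit(0.5, 1))  # MFM as RLL(1,3): 1.0
print(rll_merit(0.5, 2))  # RLL(2,7): 1.5, a common code near that figure
```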
For modern hard drives, even RLL [**] has been superseded by PRML.
Eric
[*] Sometimes referred to as channel bits, which causes confusion with
user data bits. MFM adds further confusion: in the time/space that can
accommodate a maximum of eight flux transitions, it uses no more than
eight, but they may be separated by the minimum time, 1.5x that time, or
2x that time.
[**] Don't confuse RLL as a channel code with so-called "RLL drives". For
many years, all SCSI and IDE drives internally used RLL channel codes.