On Fri, 30 Nov 2018, Fred Cisin via cctalk wrote:
Well, ATA
drives at that time should have already had the capability to
remap bad blocks or whole tracks transparently in the firmware, although
Not even IDE.
Seagate ST4096 (ST506/412 MFM) 80MB formatted, which was still considered
a good size by those of us who weren't wealthy.
Sure! You did need a bad block list for such a drive though.
Of course the
ability to remap bad storage areas transparently is no
excuse for the OS not to handle them gracefully; back then we had not yet
reached the point where a hard drive with a bad block or a dozen was
considered broken, as it usually is nowadays.
Yes, they still came with a list of known bad blocks. Usually taped to the
drive. THIS one wasn't on the manufacturer's list, and neither SpeedStor nor
SpinRite could find it!
There were other ways to lock out a block besides filling it with a garbage
file, but that was easiest.
IIRC for MS-DOS the canonical way was to mark the containing cluster as
bad using a special code in the FAT (0xFFF7 on FAT16). Both `format' and
`chkdsk' were able to do that, as were some third-party tools. That
ensured that disk maintenance tools, such as `defrag', didn't reuse the
cluster for something else, as could happen if the cluster was merely
assigned to a real file.
And, I did try to tell the Microsoft people that the
OS "should recover
gracefully from hardware errors". In those words.
I found switching to Linux a reasonable solution to this kind of customer
service attitude. There you can fix an issue yourself or, if you don't
feel like it, you can hire someone to do it for you (or often just ask
kindly, as engineers usually feel responsible for code they have
committed, including any bugs). :)
Did 3.1
support running in real mode though (as opposed to switching
to real mode for DOS tasks only)? I honestly do not remember anymore,
and ISTR it was removed at one point. I am sure 3.0 did.
I believe that it did. I don't remember WHAT the program didn't like about
3.1, or whether there was a real reason rather than just an arbitrary limit.
I don't think that the Cordata's refusal to run on a 286 was based on a real
reason.
But, the Win 3.1 installation program(s) balked at anything without A20 and a
tiny bit of RAM above 100000h. I didn't have a problem with having a few
dedicated machines (an XT with the Cordata interface, an AT with an Eiconscript
card for PostScript and HP PCL, an AT with Win 3.0 for the font editor, a
machine for disk duplication (no-notch disks), order entry, accounting, and
lots of machines with lots of different floppy drive types). I also tested
every release of my programs on many variants of the platform (after I
discovered the hard way that the 286 had a longer pre-fetch buffer than the
8088!)
Hmm, interesting. I never tried any version of MS Windows on a PC/XT
class machine, and the least equipped 80286-based system I've used had at
least 1MiB of RAM and a chipset clever enough to remap a part of it above
1MiB. That memory was then made available via HIMEM.SYS.
What might be unknown to some is that apart from toggling the A20 mask
gate, HIMEM.SYS also switched on the so-called "unreal mode" on processors
that supported it. These were at least the 80486 and possibly the 80386
as well (but my memory has faded about it at this point), and certainly
not the 80286, as it didn't support segment sizes beyond 64kiB. This mode
gave real-mode programs access to the whole 4GiB 32-bit address space,
by setting data segment limits (sizes) to 4GiB.
This was possible by programming segment descriptors in protected
mode and then switching back to real mode without first resetting the
limits to the usual 64kiB value. This worked because, unlike in
protected mode, segment register writes made in real mode only updated
the cached segment base and not the limit stored in the corresponding
descriptor cache.
IIRC it was not possible for the code segment to use a 4GiB limit in
real mode, as it would malfunction (i.e. it would not work as per real-mode
expectations), so it was left at 64kiB.
According to Intel documentation, software was required to reset segment
sizes to 64kiB before switching back to real mode, so this was not an
officially supported mode of operation. MS Windows may or may not have
made use of this feature in its real mode of operation; I am not sure,
although I do believe HIMEM.SYS itself used it (otherwise why would
it set it up in the first place?).
I discovered it by accident in the early 1990s while experimenting with
some assembly programming (possibly by trying to read from beyond the end
of a segment using an address size override prefix, a word or doubleword
data quantity and an offset of 0xffff, and not seeing a trap or suchlike)
and could not explain the phenomenon, as it contradicted the x86 processor
manual I had. I only learnt later on about this unreal mode and that
HIMEM.SYS used to activate it.
I don't know if unreal mode has been retained in the x86 architecture
to this day; as I noted above, it was not officially supported. But then
some originally undocumented x86 features have become standardised at
some point, such as the second byte of the AAD and AAM instructions
actually being an immediate argument that can have a value other than 10.
Maciej