Joshua Alexander Dersch wrote:
> scheefj at netscape.net writes:
>
> > In the early-mid 80s a program was "well behaved" if it did its I/O
> > through DOS calls. Those programs would run on just about anything.
>
> Were there similar problems in the CP/M world? That is, was it
> commonplace for CP/M programs to bypass the CP/M BDOS calls and write
> directly to a specific machine's hardware? It seems CP/M developers
> were more disciplined in this respect, but maybe that is just because
> the CP/M arena had so many different pieces of hardware that it was
> the only way to do it? (Whereas with IBM, the PC was seen as more of
> a reference standard, even if it wasn't really that way in the
> beginning?)

MS-DOS and CP/M suffered from similar problems.
For CP/M, namely the 1.x and 2.x versions, the difficulty was that the
BIOS was not sufficient to abstract all the hardware that was available.
Typically, low-level floppy formatters had to be written to talk
directly to the FDC, because the BIOS jump table offered sector read
and write but no call along the lines of "format the floppy in drive
A:", and there were far too many different FDC boards to do it any
other way.
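
To make that concrete, here is a sketch in C for a period compiler such
as Hi-Tech C (the bdos(), inp() and outp() helpers mirror what those
compilers shipped, but treat the exact signatures as assumptions). The
portable path goes through the BDOS entry at 0005h; the formatter
cannot, because the CP/M 2.2 BIOS jump table ends at SECTRAN and has no
format entry, so it bangs a WD1793 at board-specific ports. The port
addresses below are invented for illustration.

    extern int bdos(int func, int arg);     /* BDOS entry at 0005h  */
    extern int inp(int port);               /* direct port I/O, as  */
    extern void outp(int port, int value);  /* shipped by period compilers */

    #define FDC_CMD   0x30   /* hypothetical WD1793 command register */
    #define FDC_STAT  0x30   /* status is read at the same address   */
    #define FDC_DATA  0x33   /* hypothetical WD1793 data register    */
    #define WD_WRITE_TRACK 0xF4
    #define WD_BUSY 0x01
    #define WD_DRQ  0x02

    /* Portable: print a '$'-terminated string via BDOS function 9. */
    void say(char *msg)
    {
        bdos(9, (int)msg);
    }

    /* Not portable: feed one raw track image to the FDC. Every FDC
     * board needed its own variant of exactly this loop. */
    void format_track(unsigned char *image, int len)
    {
        int i;
        outp(FDC_CMD, WD_WRITE_TRACK);
        for (i = 0; i < len; i++) {
            while (!(inp(FDC_STAT) & WD_DRQ))  /* wait for data request */
                ;
            outp(FDC_DATA, image[i]);
        }
        while (inp(FDC_STAT) & WD_BUSY)        /* wait for completion */
            ;
    }

Sector reads and writes could stay portable through BDOS and BIOS; only
operations missing from the jump table forced code like format_track(),
and that code died with the board it was written for.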
Interrupt and DMA handling was also lousy, thanks to DR abusing some of
the 8080 (and, even worse, Z80) RST vector locations for the BDOS and
CCP buffers and FCBs. Finally, there was no portable way to abstract
additional serial or parallel ports that did not fit into the IOBYTE
scheme. All of this improved somewhat with CP/M Plus, but it was still
not good enough to allow writing portable code.
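
To see why the IOBYTE scheme was a dead end: it is a single byte at
address 0003h holding four two-bit fields, one per logical device, so
there is room for exactly four devices with four physical assignments
each. A sketch, assuming a CP/M C compiler where an absolute pointer
reaches low memory:

    #define IOBYTE (*(unsigned char *)0x0003)

    #define CON_DEV(b) ((b)        & 3)  /* bits 0-1: console CON: */
    #define RDR_DEV(b) (((b) >> 2) & 3)  /* bits 2-3: reader  RDR: */
    #define PUN_DEV(b) (((b) >> 4) & 3)  /* bits 4-5: punch   PUN: */
    #define LST_DEV(b) (((b) >> 6) & 3)  /* bits 6-7: list    LST: */

    /* Route the console to CRT: (assignment 1), leaving the rest alone. */
    void console_to_crt(void)
    {
        IOBYTE = (IOBYTE & ~3) | 1;
    }

A fifth port, or a device that was neither console, reader, punch nor
list, simply had nowhere to live in that byte.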
MS-DOS was much better at abstracting hardware, because it had a
loadable driver concept. The difficulty was again the BIOS, which was
required to be "IBM compatible". Against the original IBM PC and XT
BIOS, Microsoft resorted to hacks: not only calling the official entry
points, but also silently jumping into unofficial locations, which
forced those locations to be fixed and supported in the later AT and
AT386 BIOSes, and particularly in the clone BIOSes.
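
For what the driver concept looked like from the user side, a minimal
CONFIG.SYS sketch (ANSI.SYS is the stock example; the second driver
name and its switches are made up for illustration):

    DEVICE=C:\DOS\ANSI.SYS
    DEVICE=C:\COMDRV\FASTCOM.SYS /PORT:2E8 /IRQ:5

Each DEVICE= line loaded a driver exporting the standard character or
block device interface, which is exactly the extensibility that CP/M
2.2 lacked.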
Unfortunately, IBM's hardware developers did not define reasonable
interfaces to the hardware interrupts in the BIOS in the first place,
but left that task to the OS. While it was possible to access serial
lines through BIOS software interrupts, these were polling interfaces
(compatibility with old CP/M?) that did not use the available interrupt
controllers. The effect was that some hardware vendors implemented such
interfaces differently and provided loadable drivers, but most
prominently, the vendors of serial communications software like
Procomm, pc-terminal etc. again wrote their own, less portable,
interrupt-driven direct-I/O routines, because serial I/O through the
BIOS was unusable for anything beyond a 110 baud TTY.
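
What those programs did instead looked roughly like the following, for
a 16-bit DOS compiler in the Turbo C mould (inportb, outportb, setvect
and the interrupt keyword are that compiler's names; treat them as
assumptions). The 8250 register offsets, COM1 at 0x3F8, IRQ4 and the
8259 PIC ports are the documented PC values. The BIOS offered none of
this: every INT 14h call polled the UART, so bytes were dropped at any
realistic line speed.

    #include <dos.h>

    #define COM1      0x3F8
    #define UART_DATA (COM1 + 0)  /* RX/TX holding register         */
    #define UART_IER  (COM1 + 1)  /* interrupt enable register      */
    #define UART_MCR  (COM1 + 4)  /* modem control; OUT2 gates IRQ  */
    #define UART_LSR  (COM1 + 5)  /* line status                    */
    #define PIC_CMD   0x20
    #define PIC_MASK  0x21
    #define EOI       0x20

    static volatile unsigned char rx_buf[256];
    static volatile unsigned char rx_head, rx_tail;

    /* IRQ4 handler: drain the UART into a ring buffer, ack the PIC. */
    static void interrupt com1_isr(void)
    {
        while (inportb(UART_LSR) & 0x01)       /* data ready?       */
            rx_buf[rx_head++] = inportb(UART_DATA);
        outportb(PIC_CMD, EOI);
    }

    void com1_start(void)
    {
        setvect(0x0C, com1_isr);               /* IRQ4 is INT 0Ch   */
        outportb(UART_MCR, 0x0B);              /* DTR + RTS + OUT2  */
        outportb(UART_IER, 0x01);              /* received-data IRQ */
        outportb(PIC_MASK, inportb(PIC_MASK) & ~0x10); /* unmask IRQ4 */
    }

    int com1_getc(void)                        /* -1 if buffer empty */
    {
        if (rx_head == rx_tail)
            return -1;
        return rx_buf[rx_tail++];
    }

Fast, but welded to the 8250 and the PC interrupt layout: exactly the
kind of code that made "100% IBM compatible" a checkbox on every clone.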
When 640K was found to be insufficient, numerous unportable solutions
came up to increase memory, from EMS to XMS, from filling the gaps
between video memory and extension ROMs to reusing the wraparound bug
of the 286, which allowed access to almost 64K beyond 1M without
entering protected mode. The A20 gate came from there, and when I once
disassembled and analyzed HIMEM.SYS, I found about a dozen different
ways to switch this one bit: through the keyboard controller, through
certain chipset ports, through some reserved memory cells, or by
issuing some obscure interrupt. Intel's LOADALL was also played out
there. Everything was done to work around the deficiencies of the
original design.
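
Two of the better-documented ways to flip that bit, continuing with the
Turbo C style helpers from the previous sketch (real HIMEM.SYS
additionally knew chipset-specific ports, INT 15h AX=2401h, and more):

    #include <dos.h>

    static void kbc_wait(void)          /* wait until the 8042 input */
    {                                   /* buffer is empty           */
        while (inportb(0x64) & 0x02)
            ;
    }

    void a20_enable_kbc(void)           /* the classic, slow way     */
    {
        kbc_wait();
        outportb(0x64, 0xD1);           /* "write output port" cmd   */
        kbc_wait();
        outportb(0x60, 0xDF);           /* A20 on, reset line high   */
        kbc_wait();
    }

    void a20_enable_fast(void)          /* PS/2-style "fast gate"    */
    {
        unsigned char v = inportb(0x92);
        outportb(0x92, (v | 0x02) & ~0x01); /* set A20 bit; never    */
    }                                       /* set bit 0 (CPU reset) */

That a dozen such incantations had to coexist in one driver, probed at
runtime, says everything about how unplanned the whole mechanism was.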
The point, in both cases, is this: the hardware designers did not
foresee how their hardware could or should be used by software, so they
basically implanted the bare chips, without even respecting IRQ and DMA
requirements; the OS developers did not foresee usable and extensible
interfaces to access and abstract the various hardware, and just hacked
something together that would somehow work; and finally, the
application developers found the base OS functions plain unusable and
reinvented the wheel, each one differently, leaving scorched earth for
others who needed similar functions: "thou shalt not use the printer
port for your dongle, I have it already."
--
Holger