Subject: Re: "CP/M compatible" vs. "MS-DOS Compatible" machines?
From: Holger Veit <holger.veit at iais.fraunhofer.de>
Date: Thu, 31 Jan 2008 09:53:03 +0100
To: General Discussion: On-Topic Posts Only <cctech at classiccmp.org>
thing about CP/M (and I'm talking about the 8-bit version
here) was that it imposed a file system and made disk I/O uniform--
128 byte sectors, regardless of how the information was actually
formatted onto a drive. CP/M was really primitive when it came to
console I/O, giving only about 3 functions for output and input each.
No cursor positioning or screen control; basic TTY style I/O. And,
while there was an IOBYTE facility to redirect I/O, implementation
was very nonuniform between vendors.
Things like termcap and the like were not needed. However, the console
I/O was not so good. Try printing a string containing '$' using the
print-string call. The other problem was passing 8-bit data when needed.
Termcap was very much needed, but not present. The net result was that almost
any software that required terminal control beyond backspace and CRLF had to
be tweaked manually.
In that sense, yes. Is it required of the OS? No. Does the OS need it? No.
CP/M was minimal, as in its day RAM was expensive in terms of cost, power
and space. Was it short-sighted? Yes. Would it have been implemented
right in 1975? No, as terminals were just starting to have basic
intelligence, and maybe 2-3 years later what would the requirements be
I remember having written code to fit in the
WordStar patch area to adapt to some obscure or not-so-obscure terminal
dozens of times. Termcap and terminfo under the various Unixes of the
time were a godsend then, even if some entries were plainly buggy - maybe
just estimated from some hearsay information.
Many others too, like VEDIT. It was a one-time task; not nice, but compact.
What's more interesting is there was nothing to prevent a termcap file,
and later improved CP/M work-alikes did exactly that and many more things.
The bottom line is: sans BIOS, where do you put termcap in an OS that's
As to Unix, it was the big-machine OS and was not going to run
on a 256KB floppy. It was only popular in academic and research
circles, and far from mainstream till the mid '80s.
The print string BDOS function was indeed an example
that was almost
immediately replaced by do-it-yourself routines, often by using the '\0'
delimiter (which then caused trouble with slow terminals that require
some delaying NUL bytes after a CRLF).
Or the +80H convention (end of line on high bit set).
mostly uniform; the problem was often that it wasn't even
implemented. This was a problem of allowing the BIOS spec to be
minimal, and it usually was.
The IOBYTE, besides, as you correctly remarked, not being implemented at all,
was outdated soon after CP/M was released for the Altair. I haven't seen
many paper tape readers and punches connected to CP/M systems for
serious business work. The IOBYTE did not take care of additional devices
beyond the 4 standard devices; it did not deal well with additional
serial or parallel ports for more terminals and printers. This resulted
in unofficial BIOS extensions to get such available hardware into the
boat, and again unportable programs to swap vectors into the BIOS to write
output to a second printer, for instance. Needless to say,
"well-written" software that used the IOBYTE failed to use these additional
devices - there was even software that insisted that PTP is dumb and AUX
is intelligent, so abusing these pseudo-devices for two printers
resulted in different behaviour.
Paper tape reader and punch were rare in most CP/M systems and often unimplemented.
I tend to implement the console and list fields (upper and lower two bits)
completely, to use the printer ports and second (and third) serial if
Andy Johnson-Laird's The Programmer's CP/M Handbook went far to extend
and clarify both what the BIOS can do and more.
I think no; they gave the hooks and basic requirements. It
was up to the BIOS developer to do a good job or just enough. I've
repeatedly posted that, if anything, CP/M prevents little and you can
do a great deal at the BIOS level to really deliver a better system.
The best way to illustrate this is to try a system with basic I/O and one
with fully interrupt-driven I/O. The first thing you notice is the
ability to type ahead, and the system feels more responsive.
Since the BIOS reduced the available space for BDOS and TPA - which
admittedly improved with CP/M+, which, however, IMHO came too late -
many vendors came up with a not so elaborate BIOS but rather tweaked the
sample code from DRI. It was plainly easier to add some custom program
to directly hack the non standard hardware than extend the BIOS with
useful, clever and portable features. What later became the GHz mania of
processors was at that time the "xxK TPA available" selling argument.
Actually I'm running systems with CP/M 2.2 that page the BIOS and
have multiple hard disks and a TPA in the 62-63K range. CP/M+ is not
required to achieve that. But both approaches need some kind of memory
paging of the RAM/ROM and a BIOS to support it.
The other issue is most developers didn't have a system with useful
interrupts, or were time-pressed enough not to feel that it was worth it.
By useful interrupts I mean: on most early S100 8080 machines, if you
pulled the interrupt line, the default was a vector to 38H, as that was
RST 7 (11111111b), which happened as a result of pull-up resistors.
That location is used by DDT for its trap. Early Z80s also did that.
Later 8080/8085 and Z80 systems implemented basic vectored interrupts,
so you could use RST 2 through 6, and that meant reasonable
interrupt drivers were possible. The Z80 SBCs like Ampro and others
that used the Z80 peripherals all had the Z80 vectored system, which
was powerful and a bit intimidating to those that had not used it.
As you can see, the whole interrupt thing was not CP/M as a limiting
factor, but hardware or implementor understanding.
For those that never used a really nice BIOS, try a VT180; it didn't
do two-sided disks, but those were just emerging at the time. It did
implement interrupts with ring buffers for I/O.
The other thing was DMA. On S100 it was a timing and bus nightmare
and took years to almost get right. Many of the single-board systems
omitted it, as it took space (an 8257 or later 8237 40-pin chip and a latch).
Programmed I/O works fine and made useful systems. However, it means the CPU
is locked up for the duration of the transfer and cannot respond to interrupts,
making for poor latency, as floppies are slow. Again, CP/M doesn't care
how the transfer happens, only that it does happen. So the first system
built with DMA was a real eye-opener. First, it allowed background
activities to run faster and smoother, like a line printer spooler.
Also, interrupts could be used by the disk system to say "ready"
or "ready with error". That's a lot of available CPU cycles.
The biggest area of change was that modem programs weren't pausing
for disk I/O; they could fill a big (say 16K) circular buffer, and
the CPU could be processing interrupts for I/O and the disk managing
transfers, rather than doing a lot of waiting in loops. It doesn't
take a lot more code, but the complexity and debugging are greater
due to the near-concurrent activities.
> it wasn't MS-DOS that was the great advance for the IBM PC
> platform, but rather the well-documented BIOS and I/O interfaces.
> Heck, PC-DOS 1.0 wasn't that different from CP/M-86--you still had to
> do your disk I/O through FCBs, just like CP/M. I believe, to this
> day, you can still issue your DOS calls by loading (CL) with the
> request number and calling TPA:0005.
CP/M-86 and MS DOS were initially designed to allow a simple 8080->8086
cross translator to run the whole set of already existing CP/M
applications without reinventing the wheel. Later DOS versions added
Xenix-compatible calls (equivalents of Unix raw I/O: open, close, read,
write, lseek, unlink), but often the well-understood FCB crap was still
used. The call to TPA:0005 is still present in contemporary MS-DOS
versions, as well as INT21h calls;
That's what I believe too, for DOS.
however today the "DOS box" under Windows
just prepares the environment
for such old DOS programs and uses virtualization to fake an existing
DOS. You can't trace an INT21h call any longer into an MSDOS.SYS or IO.SYS.
Yes, that's true after the NT flavor took over (NT, Win2K, XP and likely Vista).
For Win 3.1x it was DOS with a GUI, and for Win9x DOS was very much there.
I have DOS 7 and DOS 8, which are extractable from Win9x. Nothing worth
reporting there. :)