----- Original Message -----
From: ajp166 <ajp166(a)bellatlantic.net>
To: <classiccmp(a)classiccmp.org>
Sent: Friday, October 06, 2000 5:41 PM
Subject: Re: CP/M BIOS setup
From: Richard Erlacher <richard(a)idcomm.com>
>Allison suggests that the disk parameters are obscure and hard to locate,
>but CORTESI's book on CP/M, among others, provides a bit of software that
In short simple sentences: if you have not done this, you will be
surprised. Cortesi writes with the assumption that all BIOSes have the
structure and form he uses as an example. It is a defective assumption.
I'm not convinced that the structure of the BIOS matters at all if one uses
the BDOS to fetch or point to whatever he's after. I do believe it's fair
to adhere to the basic format and layout provided by DRI, however. No one
says it's the only way to skin the cat. His utilities return the correct
parameters, however, even on my Systems Group machine that is patterned
after the one used in MP/M. At some level that seems to have been kept
compatible.
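
For what it's worth, the fetch itself is trivial. As a sketch in C (the
bdos() wrapper and the names are my own invention, though the struct follows
the documented CP/M 2.2 DPB layout), BDOS function 31 hands back the logged
drive's DPB no matter how the BIOS underneath is organized:

    extern unsigned bdos(unsigned func, unsigned arg);  /* assumed wrapper */

    struct dpb {                 /* CP/M 2.2 Disk Parameter Block */
        unsigned spt;            /* 128-byte sectors per track    */
        unsigned char bsh, blm;  /* block shift factor and mask   */
        unsigned char exm;       /* extent mask                   */
        unsigned dsm;            /* highest allocation block      */
        unsigned drm;            /* highest directory entry       */
        unsigned char al0, al1;  /* directory allocation bits     */
        unsigned cks;            /* checksum vector size          */
        unsigned off;            /* reserved boot tracks          */
    };

    unsigned block_size(void)    /* allocation block size in bytes */
    {
        struct dpb *p = (struct dpb *)bdos(31, 0);  /* fn 31: get DPB */
        return 128U << p->bsh;   /* e.g. BSH = 4 means 2K blocks   */
    }

Since the address comes from the BDOS, it works regardless of how any
particular BIOS is laid out, which is exactly my point.
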
>In those cases, among which one finds the CCS example, wherein a simple,
>"dumb" BIOS is loaded into a 20K CP/M requiring a 32K memory in which to
>run, and subsequently used to run a "smarter" and more fully developed
Not really.
What do you mean, Allison? I can assure you that it works fine, as a larger
and smarter BIOS was certainly loaded in my setup, so it could handle the
printers (two of which required ETX/ACK, the others XON/XOFF), an 8" hard
disk interfaced via an XCOMP controller, and a couple of 5-1/4" drives via
a WD bridge, with PUN: and RDR: assigned to a mag cartridge tape.
>version of the BIOS together with the OS loaded into whatever memory it
>finds available, thereby making a 64K (actually 61K) CP/M quite attainable,
This is not news, nor significant.
>one would have to examine the autocommand that's loaded in the "dumb"
>system in order to find the image that's going to contain the "real McCoy"
>with the full-featured BIOS from which the parameters relating to the
>directory and data areas of the diskette can be extracted.
Presuming there was an autocommand.
>This strategy is particularly important in those rare cases where one has
>actually done what the CCS folks recommend and formatted the first two
>tracks of an 8" diskette single-density and the remainder at double
>density. Likewise, the remainder of the diskette can be two-sided. The reason
THEIR CCS, while a decent box, is far from being the be-all, say-all of the
BIOS world.
>What puzzles me is that, if this information is so readily available, why
>hasn't the entire process been automated already? I know there are lots of
Because it is not so readily available. You assume it is and proceed that
way, but once you get off that CCS box the world changes greatly. Look at
the 5.25" formats; look closely at the similar-but-not-the-same formats. For
example, I have 5 different 5.25" 781KB formats that are not even similar.
One uses 1024-byte sectors; another has sector 1 on side one, and side two
ends with sector 18 (512-byte!). There is one that numbers the cylinders
zero through 79 on one side and 80 through 159 on the second. The fifth is
like the first, save that the skew is applied at format time and not in the
bios.
Funny thing: the DPH and DPB are exactly the same for all of them.
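
Which figures: the DPB only describes the logical shape the BDOS sees. As a
sketch (hypothetical geometry, purely to show the idea), two such formats
can sit behind the very same DPB with nothing but a different track mapping
in the BIOS:

    /* Both present identical logical tracks to the BDOS; only this
     * BIOS-side mapping differs between the two formats. */
    void map_alternating(unsigned trk, unsigned *cyl, unsigned *head)
    {
        *cyl  = trk >> 1;               /* sides alternate, cylinder */
        *head = trk & 1;                /* by cylinder               */
    }

    void map_split(unsigned trk, unsigned *cyl, unsigned *head)
    {
        *head = (trk >= 80);            /* tracks 0-79 on side one,  */
        *cyl  = *head ? trk - 80 : trk; /* 80-159 on side two        */
    }
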
>should be done. There is always a bit of a guess as to whether 4K or 8K
>allocation blocks should be used when hooking up a hard disk. That's not
>an issue with floppies, however.
Yes it is. I have floppies that use 1K (most SD, though some are DD!), many
that use 2K, and even some nut case using 4K (not me, honest!).
Yes, but that's easily recognizable if you check the directory entries and
disk parameters. Remember, I propose extracting this information from the
physical medium, not from some speculation.
>> I have a fully documented CCS and it classifies as the early basic CP/M
>> bios of low to average functionality. It's robust but closer to a
>> minimal example.
>>
>True enough, but it's compatible with a front-panel and the software's
>written for an 8080, so you can use their FDC with an 8080 or 8085 as
>well as a Z80. Moreover, it's rock-solid. The fact that it uses a nearly
>vanilla-flavored CP/M doesn't detract either. I've run into absolutely no
>CP/M programs that won't run on it, while there are numerous utilities
>that won't work properly on the more modern MPM-targeted boards I got
>from Systems Group.
Sorta. It doesn't support type-ahead or circular interrupt-driven buffers
for fast serial devices, and it relies on CPU PIO. It's low end. The only
thing it does do is double density and SSSD 8" interchange (sometimes).
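
For reference, the sort of circular interrupt-driven buffer I mean is only
a little code. A sketch in C (names and size made up):

    #define RXSIZE 64                 /* power of two makes wrap cheap */

    static volatile unsigned char rxbuf[RXSIZE];
    static volatile unsigned char rxhead, rxtail;

    void rx_isr(unsigned char c)      /* called from the serial interrupt */
    {
        unsigned char next = (rxhead + 1) & (RXSIZE - 1);
        if (next != rxtail) {         /* drop the byte if buffer is full */
            rxbuf[rxhead] = c;
            rxhead = next;
        }
    }

    int const_ready(void)             /* BIOS CONST: type-ahead works   */
    {                                 /* because the ISR fills rxbuf    */
        return rxhead != rxtail;      /* while the CPU is off elsewhere */
    }

    unsigned char conin(void)         /* BIOS CONIN: wait, then dequeue */
    {
        unsigned char c;
        while (rxhead == rxtail)
            ;                         /* spin until the ISR delivers    */
        c = rxbuf[rxtail];
        rxtail = (rxtail + 1) & (RXSIZE - 1);
        return c;
    }
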
>What you refer to as skew is what I call the interleave, while a sector skew
No, I'm using the DRI term out of their books. I know it's interleave.
>is a difference in sector numbering from index, used by some systems
>(mostly early DEC actually, but some truly random-access systems as well) to
Actually DEC has a two-level one for the VT180: interleaved 512-byte
physical sectors, and interleave inside the sectors. It's one of three
formats that were used for that machine, though it was the least common.
>If it's not the stuff from DRI, it's not relevant, since it's not CP/M.
You're really looking to ignore progress, and even DRI-supplied mods?
>I'll admit that's a weakness, but for now, I'm happy to deal with CP/M
>only. AFAIK, DRI didn't issue any patches to v2.2. There were several
>enhanced
If you insist.
>That's true, BUT, when you have a two-stage boot, you can examine the
>second-layer boot system, and, in fact, have to in order to avoid getting
>tangled up in discrepancies between the boot tracks and the directory and
>data area.
OK, but what systems usually use a two-stage boot? Few and nearly none.
>> storage. You will have to figure out from that a lot of things that are
>> variable and can still end up as the same answer.
>In fact, I don't believe they have to be "figured out" at all. After all,
>the diskette is in the drive. You just have to look at it.
See my example of the 781k disks. Two of them would defy simple
inspection.
>It does get much more messy when you try to squeeze speed out of the
>system in ways the ultra-slow CPU doesn't let you appreciate, but when I
>said optimal, I meant for the technology of the time, which meant, at
>least to me, getting the most hard disk space to fit into the parameters
>the system would allow, without overly restricting either effective space
>utilization or directory space. That seems to have been the key tradeoff
>of the time ... allocation block size versus number of directory entries.
>One other
Not really. A big deal was made of it, as few had real-world experience and
were trying to scare up a few more bytes of the drive they had paid so
dearly for and then never filled more than 50%. The other half was that
hard disks were new things to have to deal directly with, so there was an
aura of mystery to setting the values. The only thing that ever concerned
me, and still does, is the ALLOC vector: for an 8MB logical drive with 4K
granularity there will be 256 bytes of RAM just for that; add 512 for a
host buffer and 128 for the directory buffer and local variables, and
you've eaten 896 bytes for the first drive, and about 256+ per drive after
that, and that is non-recoverable space. That is the only real problem. Add
that to a featureless base driver and it's an easy 2-3K of space for the
bios, more if it's a real bios.
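
Spelling that arithmetic out (just a sketch; the function and names are
mine):

    /* The ALV bitmap needs one bit per allocation block. */
    unsigned alv_bytes(unsigned long drive_bytes, unsigned blk_bytes)
    {
        unsigned long blocks = drive_bytes / blk_bytes;
        return (unsigned)((blocks + 7) / 8);  /* round up to whole bytes */
    }

    /* alv_bytes(8UL * 1024 * 1024, 4096) == 256, and with the 512-byte
     * host buffer and 128-byte directory buffer that is
     * 256 + 512 + 128 = 896 bytes gone for the first drive. */
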
>factor was switching heads rather than moving the head stack. The heads
>take at least 3 ms to move from track to track, plus 8 ms on the average
>to rotate half a rev, while switching heads took about 40-50 microseconds on
More than that, as the ST506 didn't do fast seeks (nor did the controllers
of the time). You had to wait for the ST225 for that. Stated average access
time is 178 ms for the ST506; the ST225 was a more reasonable 73, and the
Quantum D540 took that to a mere 57. The D540 could look faster, though,
with its 8 heads, as you didn't shuffle far, and the voice-coil actuator
was fairly fast.
The stepper-driven Rodime 204E (1982) was about as fast as that Quantum and
had 640 cylinders instead of the Q540's 512. I used one for years and am
still amazed at the performance, considering it had a stepper. The Quantum
drive was one of the last to be built in the full-height form, while the
ST506 (I have serial number 12-hundred-something down in the basement) came
out in '79, when nobody was using voice coils on that product type. The
specified track-to-track step rate for the ST506 (the complete tech ref
will be posted sometime soon, I hope) was 3 ms. It would do the job for
sure, in 3 ms. I checked many of them. The Tandon equivalent and the
Shugart equivalent both did the same. Others came later.
>the early Seagate ST506's. The trick, to me, was always finding a way to
>compute head, cylinder and sector from the CP/M sector number you were
>given by the BDOS without having to swallow up half a KByte in lookup
>tables.
Flogging. Once the drive was filled to about 30% and had been in use for a
while, you were moving around a lot, and there was little trickery that
helped at the drive level.
I didn't kill space in lookup tables. It was simple to me. The 4 heads and
the 16 512-byte sectors per track were handled as an SPT of 256 in the DPB,
so the BDOS would hand back a logical sector on a track with the head
number in there too. I treated the four sides as one logical track. A few
right and left shifts would give me the head (upper two bits), the physical
sector (middle 4 bits), and the logical sector index into the physical
sector (lower two bits). The CYLINDER was passed as the track by the BDOS.
Obviously it was quite compact.
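
In C, for clarity, the unpacking amounts to this (a sketch, with my names):

    /* 4 heads x 16 512-byte sectors x 4 CP/M records per sector = an SPT
     * of 256, so the BDOS sector number splits into three bit fields. */
    void split_sector(unsigned char sec,
                      unsigned *head, unsigned *phys, unsigned *rec)
    {
        *head = (sec >> 6) & 3;    /* upper two bits: head            */
        *phys = (sec >> 2) & 15;   /* middle four bits: sector        */
        *rec  = sec & 3;           /* low two bits: 128-byte record   */
    }
    /* The cylinder arrives separately, as the track given to SETTRK. */
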
So, what did you do when you wanted to use 17 512-byte sectors (commonly
used) and a 5-head drive, like the ones from Miniscribe that plagued us now
and again? Or, for that matter, the 6-head ST225, which was somewhat later?
How about 1K sectors? You could get 9 of them per track. People were after
capacity, even though they hadn't yet figured out how to waste it.
If you build like DRI, Cortesi or Laird said, you hit the wall every 16K,
as wherever you are, you're going back to the directory, wherever it
happened to be, and that took a long time.
I'm not sure it helped much, but since the early ('506-class) HDD's
stepped
at 3 ms regardless, and since the controllers didn't take advantage of
momentum, I put the logical zero track of every partition in the physical
middle of the corresponding region of the drive. That made the worst-case
directory-seek half as long. Once drives capable of buffering step commands
became available that stunt wasn't necessary. Avoiding the use of physical
track zero was an important trick, however, since almost every drive homed
to that track on power-on, and if anything went wrong, it took a "rest"
there. I had a lot less trouble with drives once I learned not to use
physical track zero for anything that mattered at all.
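
The stunt itself was trivial; as a sketch (my names, and assuming one
linear region of tracks per partition):

    /* Logical track 0 lands in the physical middle of the region, so the
     * worst-case seek back to the directory is halved. */
    unsigned phys_track(unsigned logical, unsigned start, unsigned len)
    {
        return start + (len / 2 + logical) % len;
    }
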
The first drive I had (still have; it turns 20 next summer) was a 506. It
was slow, the controllers were slow, and the only upside is some would
buffer a full sector for you, saving CPU time. I stopped using it when I
got my second hard disk, a D540; in real life it was much faster, and it
introduced me to the problem of partitions. I moved up as the drive was
available and offered speed; even with 6MHz Z80s the ST506 was ponderously
slow. Its slowness was exceeded only by an 8" Memorex 102 I gave away
working; CTRL-C was a real wait, and a kick to watch the head creep back.
If it truly "crept" back, it was probably because it was being stepped
too
fast and got lost, finally having to do a recal, which it did slowly.
Allison