the adjacent ones. The BDOS doesn't change, generally, though there may be
changes in the BIOS. By finding each sector of the BDOS, one learns about
the format of the boot tracks. My CCS system, for example, requires, at
least for the distributed boot EPROM, that the boot tracks be SSSD.
I have a fully documented CCS and it classifies as an early, basic CP/M
BIOS of low to average functionality. It's robust, but closer to a minimal
example.
The key parameters are the DPH, DPB and SKEW... also you need to know
how big the sector is and whether there is embedded skew within the sector.
Then you need to know the disk layout, things like what side/sector
numbering was used. For example, I've seen two-sided media where
sector one occurred on both sides and both sides were identically
formatted, and I've seen side one numbered 1 thru 9 and side two 10
through 17...
These parameters are all there on a boot diskette. It's just necessary
to find them.
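The two side/sector numbering schemes described above can be captured in one small mapping. A sketch (the 9-sector count and the function name are just illustrative; which scheme a given format actually used is exactly what has to be discovered):

```python
def phys_addr(logical, spt=9, continuous=False):
    """Map a 1-based logical sector number to (side, physical_sector_id).

    continuous=False: both sides number their sectors 1..spt, so sector
    one occurs on both sides.  continuous=True: side one is 1..spt and
    side two carries on from spt+1 (the 1-9 / 10-17 style above).
    """
    side = (logical - 1) // spt
    if continuous:
        return side, logical                  # IDs run straight through
    return side, (logical - 1) % spt + 1      # IDs restart on side two
```

Reading an unknown diskette means trying each candidate mapping until the directory makes sense.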
Not all, and they may be very hard to find. The DPH and DPB have pointers
to them as a result of the BIOS call to SELDSK. The SKEW, however, may
not be used in the SECTRAN call at all! Often the skew translate is
a table, but it can be calculated, and the SECTRAN call is applied at the
logical sector level, which doesn't work well for double density formats
whose sector sizes are larger than one logical sector. So skew in that
case will likely be buried in the raw read/write routine, or possibly even
applied at the logical sector level inside the physical sector.
So some things are not guaranteed and also not easily found.
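For the case where the skew translate is calculated rather than tabulated, the classic construction (this is the pattern the DRI 8" SSSD 26-sector, skew-6 table follows; other formats used other values or none at all) looks like:

```python
def build_xlt(nsec=26, skew=6, first=1):
    """Build a SECTRAN-style translate table: xlt[logical] = physical ID.

    Walk the track in steps of `skew`, and on collision advance one slot
    until a free physical sector is found.  Defaults reproduce the DRI
    8-inch SSSD table; `first` is the lowest physical sector ID.
    """
    used = [False] * nsec
    xlt = []
    pos = 0
    for _ in range(nsec):
        while used[pos]:                  # slot taken, slide forward
            pos = (pos + 1) % nsec
        used[pos] = True
        xlt.append(pos + first)
        pos = (pos + skew) % nsec
    return xlt
```

The first few entries come out 1, 7, 13, 19, 25, 5... matching the familiar DRI table.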
The Multidisk and Eset that I have are not for this purpose. They want
to be passed the information that I'm suggesting could be extracted.
Oh, like I said, it can be... but if you already know them it's easier, as
even with a 33 MHz Z180 you're going to flog away a while getting to the
same answer.
That's exactly the problem I'm trying to circumvent. The interleave, skew,
sector size, etc., are all accurately represented on the boot diskette.
Ah, no. Most boot sectors are not skewed and, like you observed, may
not be the same density or sector size.
BDOS is the BDOS, i.e. shouldn't be different on different boot diskettes,
Likely but not always true.
so long as the CP/M version is the same. Consequently,
it should be
There were patches, and the CP/M version can be misleading. Many of the
clones use the base 2.2 ID so apps will run normally, but most all are
written using Z80-unique instructions where DRI used only 8080.
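One rough way to spot such a clone is to scan the BDOS image for the Z80-only prefix bytes (0xED, 0xDD, 0xFD), which 8080 assemblers never emit as opcodes. A sketch, and purely a heuristic, since data bytes can alias these values:

```python
Z80_PREFIXES = {0xED, 0xDD, 0xFD}  # Z80 prefixes; not valid 8080 opcodes

def z80_hint(code: bytes) -> float:
    """Fraction of bytes that are Z80-only prefix values.

    A genuine 8080 binary should score near zero; Z80 code full of
    LDIR/IX/IY work scores noticeably higher.  Treat it as a hint,
    not proof -- embedded data can contain these bytes too.
    """
    if not code:
        return 0.0
    hits = sum(1 for b in code if b in Z80_PREFIXES)
    return hits / len(code)
```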
possible, having once determined the sector size, to extract,
automatically, the relative locations of sequential sectors of this known
file. Since we KNOW and RECOGNIZE the BDOS, shouldn't it be possible to
find its beginning,
BDOS is not part of the CP/M file system! It's in the boot tracks.
parameters from the system BIOS and verifying them against another known
file, e.g. PIP.COM, should provide the necessary information about the
directory and data areas of the diskette. Isn't that so?
You would be forced to do that, and heuristically that will be a PITA! PIP
is in the file system, whereas BDOS is out on the boot tracks. The boot
tracks in the CCS case are SSSD, while the file system tracks can be DSDD!
The BIOS entries for the DPH and DPB do not say if the disk is DSDD or
even if it's a floppy. They will tell you how many logical sectors per
track, if skew is used, if the directory is checked, the allocation size,
and the size of the area used for data storage. You will have to figure
out from that a lot of things that are variable and can still end up as
the same answer.
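Those DPB-derived figures can at least be pulled out mechanically once the DPB itself is located. A sketch of decoding the standard CP/M 2.2 15-byte layout (field order per the DRI alteration guide; the test values below are the classic 8" SSSD parameters):

```python
import struct

def decode_dpb(raw: bytes) -> dict:
    """Decode a 15-byte CP/M 2.2 Disk Parameter Block (little-endian),
    as pointed to by the DPH returned from the SELDSK call."""
    spt, bsh, blm, exm, dsm, drm, al0, al1, cks, off = struct.unpack(
        "<HBBBHHBBHH", raw)
    return {
        "logical_sectors_per_track": spt,
        "block_size": 128 << bsh,                  # allocation unit
        "directory_entries": drm + 1,
        "data_capacity": (dsm + 1) * (128 << bsh),
        "directory_checked": cks != 0,             # CKS=0 => fixed disk
        "reserved_tracks": off,                    # boot/system tracks
    }
```

What it can't tell you, as noted above, is the physical geometry those logical sectors map onto.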
> >Another item I've wanted for some time to automate is the setup of a
> >hard disk BIOS. Since it's dependent not so much on CP/M quirks but
> >often more on decisions made on the basis of folklore, I thought it
> >might be interesting to examine the process as a candidate for
> >automation.
It's been done, but the usual method is to hook the disk I/O routine and
load a mini hard disk BIOS in high memory. Teltek, Konan and a few others
did that. A better way is to provide slots that can be filled with the
address of the driver(s). The reason for the difficulty is the wide
assortment of controllers and the varied protocols used to talk to them.
If it was always IDE or SCSI it would be simpler.
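The slot idea is just a dispatch table the BIOS indirects through. In high-level terms it amounts to this sketch (the slot names and functions here are hypothetical, not from any real BIOS):

```python
# Slot table: a real BIOS would hold driver entry addresses and jump
# through them; here the slots hold plain callables.
slots = {"read": None, "write": None, "home": None}

def install_driver(name, fn):
    """Fill a slot with a driver routine (the 'address' in the real thing)."""
    if name not in slots:
        raise KeyError(f"no such slot: {name}")
    slots[name] = fn

def dispatch(name, *args):
    """Indirect through the slot, as the BIOS disk entry points would."""
    fn = slots[name]
    if fn is None:
        raise RuntimeError(f"slot {name!r} not installed")
    return fn(*args)
```

The payoff is that a new controller only needs its routines dropped into the slots; the rest of the BIOS is untouched.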
Well, I don't see hand-feeding a set of parameters that one has to
determine by guessing on the basis of lots of conflicting folklore as
particularly easy. Authors who wrote about the process, e.g. Laird and
Cortesi, seemed to
No folklore. There are detailed tables out there for every drive and disk
going, if one cares to look. What do you think Multidisk does/is?
equivocate considerably about this, and, while it's straightforward to
come up with a set of parameters that work, it's not easy to come up with
the optimal ones, at least where the HD is concerned. Both of the authors I
Optimal ones for a hard disk in the timeframe they wrote in was simple:
hard disks are FAST and Z80s (pre-1990) are NOT, so no amount of
optimization is possible. Actually, if you have banked memory, caching
is the solution, as it steps neatly around the problems. FYI: the
problem is that CP/M does a lot of relatively small transfers with
lots of references to the directory. The true limiter to performance is
not data rate but latency (mostly from shuffling the head). When Laird
wrote, a fast drive was a Quantum D540 (31 MB MFM 5.25" FH) with an
average latency of around 30 ms.
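The arithmetic behind that point: at MFM's roughly 5 Mbit/s raw rate, one 128-byte CP/M record moves in about 0.2 ms, so a ~30 ms average access time is over 99% of the cost of a typical transfer (numbers illustrative, not measured):

```python
RATE_BPS = 5_000_000      # MFM raw data rate, roughly 5 Mbit/s
LATENCY = 0.030           # ~30 ms average access, D540-era

def record_time(nbytes=128):
    """Seconds spent actually transferring one CP/M record's bits."""
    return nbytes * 8 / RATE_BPS

# Share of a one-record access spent waiting rather than transferring.
latency_share = LATENCY / (LATENCY + record_time())   # about 0.993
```

Which is why fewer head movements (caching the directory) beats a faster transfer path.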
is in hand, it's easy, certainly, but what should one do, given a known
bootable but otherwise undefined boot diskette? The reality of the data
present on a boot diskette defines all the parameters necessary to
recreate it, doesn't it?
No. The boot tracks are always written by a specialized utility like
SYSGEN (which is not generic code) that is always system-specific.
I get emails from people all the time, asking about how to build a boot
diskette for a system they can't boot because they don't have a BIOS on
the diskette for the I/O ports they use.
Most of those I converse with who have that set of questions have no
sources to work from and find 8080 or Z80 asm code scary to terrifying.
Often they don't know the ports in use nor what they mean. Rare is the one
with docs for their system at the time the question is launched. They
often think it's just like a PC, where DOS boots on anything if the disk
fits.
Likewise, I get frequent questions about how to formulate an optimal
configuration for a hard disk. While it
Like Laird said, and I'll say: _optimal_ for what? I'd never use the word
optimal. Again, in my experience most want a drop-in replacement like a
PC. Most do not code at that level or don't wish to try. Many don't have
the docs needed. So what they want is not optimal, just something that
works.
may not be a terrible thing, it is something many people, including
myself, though I've done it several times, find daunting. In the absence
of a rigorous method it's hard to find peace at the end of the task
because so many less-than-optimal solutions will work quite well. How's a
person to determine what's best?
Let's see: I have five systems with hard disks; all were added later, two
with code supplied. I find peace in the fact that they work and are
reliable. Only one have I applied rigorous and experimental methods to,
to the extreme, to see what was possible and effective... Occam's razor
won most often.
Here it is: hard disks and performance. Assume nothing about the hard
disk used; rare is the old drive/controller that can really help you. DMA
or a separate processor will help if the CPU is loaded or memory is short.
Caching at the track or cylinder level with an LRU method really helps if
you have space (64-128K is good). You will cache (call it a host buffer if
you like) anyway, as most hard disks have sector sizes larger than 128
bytes, requiring deblocking. Caching the directory separately from
the data-area cache really pays, as it saves head thrashing. Achieve the
above, or a subset, with direct and efficient code.
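A minimal sketch of that arrangement: a track-level LRU pool, with the directory tracks held in their own pool so data traffic can never evict them (the slot counts and the read_track callback are assumptions, not from any particular BIOS):

```python
from collections import OrderedDict

class TrackCache:
    """LRU cache of whole tracks, directory tracks pooled separately."""

    def __init__(self, read_track, dir_tracks, data_slots=8, dir_slots=2):
        self.read_track = read_track          # fn(track) -> bytes
        self.dir_tracks = set(dir_tracks)
        # One LRU pool for directory tracks, one for everything else.
        self.pools = {True: OrderedDict(), False: OrderedDict()}
        self.limits = {True: dir_slots, False: data_slots}

    def get(self, track):
        is_dir = track in self.dir_tracks
        pool = self.pools[is_dir]
        if track in pool:
            pool.move_to_end(track)           # mark most recently used
            return pool[track]
        data = self.read_track(track)         # miss: hit the drive
        pool[track] = data
        if len(pool) > self.limits[is_dir]:
            pool.popitem(last=False)          # evict least recently used
        return data
```

Deblocking falls out for free: logical 128-byte records are just slices of the cached track.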
I've tried this using an IDE drive (still working the code out), and most
decent drives over 100 MB have caching (the Quantum PRO AT series does).
Use it, as it isolates you from things like skew and all.
If you're using an old SA4000, forget all this, as making it work is
three quarters of the battle.
Allison