On Thursday 15 November 2007 01:15, Chuck Guzis wrote:
> On 15 Nov 2007 at 0:27, Roy J. Tellason wrote:
>
> The x80 CPUs didn't have any good way to dynamically relocate code
> and data and had a fairly small addressing space.
I guess that depended a lot on how you coded things. The biggest single
thing that jumps out at me is absolute addresses, and I guess it's
possible to avoid those to an extent. And if stuff was modular
enough you could make some small but effective modules that wouldn't take
up all that much room. Wasn't that what ZCPR was trying to do? At least
that's the impression I'm left with after having browsed some doc files,
and that wasn't recently...
> I'd call MP/M "static" relocation--once you loaded a program, you
> couldn't move it. MP/M used the simple scheme of assembling a
> program twice--the second origined 100H higher than the first.
> Compare the two, build a bitmap of the differences, and you've got a
> primitive relocation map. But once the program is loaded, the map is
> useless--you have no way of knowing what addresses have been
> generated during execution.
I seem to remember some stuff during my CP/M days that actually had a bitmap
of which locations needed to be fixed, though I'm darned if I can remember
just now what that was.
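
Something like this, I'd imagine -- a rough sketch in C of the scheme as
Chuck describes it (names and layout are mine, not the actual MP/M PRL
format; the caller is assumed to hand build_map a zeroed bitmap of
(len+7)/8 bytes):

#include <stdint.h>
#include <stddef.h>

/* Compare the two assemblies; any byte that differs must be the
   high byte of an absolute address, so flag it in the bitmap. */
void build_map(const uint8_t *img0, const uint8_t *img1,
               size_t len, uint8_t *map)
{
    for (size_t i = 0; i < len; i++)
        if (img0[i] != img1[i])
            map[i >> 3] |= 0x80 >> (i & 7);
}

/* Load-time fixup: bump every flagged byte by the load page. */
void relocate(uint8_t *img, size_t len, const uint8_t *map,
              uint8_t page)
{
    for (size_t i = 0; i < len; i++)
        if (map[i >> 3] & (0x80 >> (i & 7)))
            img[i] += page;
}

And as Chuck says, that's all you get: once the program starts computing
addresses of its own, the map tells you nothing about them.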
> To make the best use of available memory, you have to be able to
> move active programs to make room for other ones. If you *really*
> want to do things right, you can load and unload programs to disk.
> JRT Pascal did this pretty well--and it was p-code interpreted.
Wasn't JRT the one that got some really bad reviews in Byte or one of the
other magazines? It was some early Pascal compiler anyhow. I can't say I
ever encountered it or ran across it or talked with anybody who had used it.
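
The load/unload trick is easier to picture than I'd have guessed, as long
as callers hold a segment number rather than an address. A rough sketch in
C (my guess at the mechanism, not JRT's actual internals; codefile is
assumed opened elsewhere):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    long     file_off;  /* where the segment lives in the code file */
    uint16_t size;
    uint8_t *mem;       /* NULL while swapped out */
} Segment;

Segment seg[64];
FILE   *codefile;       /* assumed opened elsewhere */

/* P-code is read-only, so "swapping out" is just a free. */
void swap_out(int n)
{
    free(seg[n].mem);
    seg[n].mem = NULL;
}

/* Fault a segment back in from disk on its next use. */
uint8_t *swap_in(int n)
{
    if (seg[n].mem == NULL) {
        seg[n].mem = malloc(seg[n].size);
        fseek(codefile, seg[n].file_off, SEEK_SET);
        fread(seg[n].mem, 1, seg[n].size, codefile);
    }
    return seg[n].mem;
}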
> The other approaches were more hardware oriented--use bankswitched
> memory to change context, or multiple CPUs, one per user (e.g.
> Molecular). Neither takes the best advantage of available resources.
Molecular doesn't ring any bells offhand but I do recall a system (and almost
got a hold of one once) that had a system CPU card, one that ran the HD,
and one for each user, complete with serial ports for the user's terminal
and printer. And I think there was a system printer port as well. I think I
have a binder around here someplace that gets into that, but it's not handy
and I can't recall where it is offhand. (Too many books in boxes, not
nearly enough shelving to put them on.)
> P-code allows you to do something that compiling to native code
> doesn't--the ability to design your own machine. For example,
> instead of having your P-code instructions reference locations in
> memory, you can substitute an ordinal into a local descriptor table,
> where all of the good stuff about a variable is kept (e.g. string
> length, data type, dimension, location, etc.). When you need to
> move, you know where to find all of the stuff that needs adjustment.
> Your subroutines can be ordinals into a table of descriptors that
> may, for example, tell the interpreter that a subroutine isn't
> present in memory yet. To the programmer, it all looks seamless; no
> silly "CHAIN" statements--the interpreter does it all for you.
>
> And, as long as you don't pollute your P-code, it's portable to just
> about any platform.
This sounds pretty good...
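
If I follow it, a toy version might look like this in C (my layout and
names, nothing like the real UCSD or JRT internals):

#include <stdint.h>
#include <string.h>

typedef struct {
    uint8_t  type;      /* data type (integer, string, proc...) */
    uint16_t length;    /* string length / block size           */
    void    *where;     /* current location in memory           */
} Desc;

Desc table[256];        /* the local descriptor table */

/* A p-code instruction names its operand by ordinal, never by
   address, so this lookup is the only place addresses appear. */
void *operand(uint8_t ordinal)
{
    return table[ordinal].where;
}

/* Compacting memory: copy the block, then fix ONE field.  The
   p-code itself never changes, which is also why it stays clean
   and portable across machines. */
void move_block(uint8_t ordinal, void *dest)
{
    memmove(dest, table[ordinal].where, table[ordinal].length);
    table[ordinal].where = dest;
}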
I remember one of the floppies I got with my Osborne originally was
labeled "UCSD P-System" (or something pretty close to that). I vaguely
recall poking around with it once, but it had nothing at all to do with
CP/M, wasn't compatible with anything else at all, and at that point in
time I couldn't see the use of it. I probably still have it somewhere, and
some docs on it too.
--
Member of the toughest, meanest, deadliest, most unrelenting -- and
ablest -- form of life in this section of space, a critter that can
be killed but can't be tamed. --Robert A. Heinlein, "The Puppet Masters"
-
Information is more dangerous than cannon to a society ruled by lies.
--James M. Dakin