On 15 Nov 2007 at 0:27, Roy J. Tellason wrote:
> > The x80 CPUs didn't have any good way to dynamically relocate code
> > and data and had a fairly small addressing space.
> I guess that depended a lot on how you coded things. The biggest single
> thing that jumps out at me is absolute addresses, and I guess that it's
> possible to avoid those to an extent. And if stuff was modular enough
> you could make some small but effective modules that wouldn't take up
> all that much room. Wasn't that what ZCPR was trying to do? At least
> that's the impression I'm left with after having browsed some doc
> files, and that wasn't recently...
I'd call MP/M "static" relocation--once you loaded a program, you
couldn't move it. MP/M used the simple scheme of assembling a
program twice--the second copy with its origin offset by 100H from the
first. Compare the two images, build a bitmap of the differences, and
you've got a primitive relocation map. But once the program is loaded,
the map is useless--you have no way of knowing what addresses have
been generated during execution.
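
A minimal sketch in C of how that bitmap gets built and used (my own
illustration, not MP/M's actual code; the names are invented, and I'm
assuming 256-byte pages so only high-order address bytes differ):

#include <stdint.h>
#include <stddef.h>

/* Compare two assemblies of the same program, the second with its
   origin shifted by 100H (one 256-byte page).  Any byte that differs
   must be the high byte of an absolute address, so flag it. */
static void build_reloc_map(const uint8_t *img0,  /* assembled at base */
                            const uint8_t *img1,  /* at base + 100H    */
                            size_t len, uint8_t *bitmap)
{
    for (size_t i = 0; i < len; i++)
        if (img0[i] != img1[i])
            bitmap[i / 8] |= (uint8_t)(0x80 >> (i % 8));
}

/* At load time, add the page displacement to every flagged byte.
   'pages' is (load_address - assembled_address) >> 8. */
static void relocate(uint8_t *img, size_t len,
                     const uint8_t *bitmap, uint8_t pages)
{
    for (size_t i = 0; i < len; i++)
        if (bitmap[i / 8] & (0x80 >> (i % 8)))
            img[i] = (uint8_t)(img[i] + pages);
}

Once relocate() has run, of course, the bitmap tells you nothing about
addresses the program computes afterward--which is exactly why the
scheme is static.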
To make the best use of available memory, you have to be able to move
active programs to make room for others. If you *really* want to do
things right, you can swap programs out to disk and back in.
JRT Pascal did this pretty well--and it was P-code interpreted.
The other approaches were more hardware-oriented--use bank-switched
memory to change context, or multiple CPUs, one per user (e.g.
Molecular). Neither makes the best use of available resources.
P-code allows you to do something that compiling to native code
doesn't--the ability to design your own machine. For example,
instead of having your P-code instructions reference locations in
memory, you can substitute an ordinal into a local descriptor table,
where all of the good stuff about a variable is kept (e.g. string
length, data type, dimension, location, etc.). When you need to
move, you know where to find all of the stuff that needs adjustment.
Your subroutines can be ordinals into a table of descriptors that may,
for example, tell the interpreter that a subroutine isn't present in
memory yet. To the programmer, it all looks seamless; no silly
"CHAIN" statements--the interpreter does it all for you.
And, as long as you don't pollute your P-code, it's portable to just
about any platform.
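
To make the descriptor idea concrete, here's a rough C sketch (every
name and field here is my invention, not the actual system's layout):

#include <stdint.h>
#include <stdbool.h>

/* Everything worth knowing about a variable lives in its descriptor,
   not in the P-code.  Move the data, update 'location', and every
   P-code reference through the ordinal is still correct. */
typedef struct {
    uint8_t   type;        /* integer, string, array, ...          */
    uint16_t  length;      /* string length / element count        */
    uint16_t  dimension;   /* array dimension, if any              */
    void     *location;    /* current address; NULL if swapped out */
} var_desc;

/* Subroutines get the same treatment: an ordinal, never an address. */
typedef struct {
    bool      resident;    /* is the code in memory right now?     */
    uint16_t  disk_block;  /* where to fetch it from if it isn't   */
    void     *entry;       /* entry point once loaded              */
} proc_desc;

static var_desc  vars[256];
static proc_desc procs[128];

extern void load_proc(proc_desc *p);   /* hypothetical swapper */

/* Fetch a variable's current address through its descriptor. */
static void *resolve_var(uint8_t ordinal)
{
    return vars[ordinal].location;
}

/* CALL by ordinal: fault the routine in if it isn't resident.  The
   programmer never sees it--no CHAIN statements. */
static void *resolve_call(uint8_t ordinal)
{
    proc_desc *p = &procs[ordinal];
    if (!p->resident)
        load_proc(p);      /* sets entry and resident */
    return p->entry;
}

Since the P-code itself never holds a machine address, the interpreter
is free to compact or swap memory at any instruction boundary.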
> For the 8085? I'd be interested in seeing that, if that would turn out
> to be convenient. I've always sort of liked that chip for some odd
> reason. What else in terms of hardware did the system have that you
> ran this stuff on?
An 8085 running at (IIRC) about 4MHz, between 64 and 256KB of page-
mapped memory (1K pages). Console with keyboard and CRT controller
and 4 async comm terminals. The usual hard disk and floppies; a
printer. When it was moved to the 286, the runtime was reworked to
run under SCO and use special-firmware Beehive terminals.
In the compiler, we avoided writing a ton of 8085 assembly by
abstracting the compiling process into a "compiling machine" with its
own (fairly abstract) instruction set. You write the compiler in it,
then code a small interpreter to get it going and debugged, then
change the interpreted instructions into macros and generate code
directly. We wrote the macro processor in PL/M--and later in C.
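
A toy illustration of the two-phase trick (my own example--the real
compiling machine's instruction set was nothing this trivial):

#include <stdio.h>

/* Two compiling-machine ops for the example: EMIT writes a byte to
   the object stream; HALT stops. */
enum cm_op { CM_EMIT, CM_HALT };
struct cm_insn { enum cm_op op; int arg; };

/* Phase 1: run the compiler through a tiny interpreter, which is
   easy to trace and debug. */
static void interpret(const struct cm_insn *prog)
{
    for (;; prog++) {
        switch (prog->op) {
        case CM_EMIT: putchar(prog->arg); break;
        case CM_HALT: return;
        }
    }
}

/* Phase 2: once the ops are debugged, each becomes a macro that
   expands to the code directly--the interpreter loop disappears. */
#define EMIT(b) putchar(b)
#define HALT()  return 0

int main(void)
{
    /* The interpreted form of a two-instruction program... */
    const struct cm_insn prog[] = { { CM_EMIT, 'A' }, { CM_HALT, 0 } };
    interpret(prog);

    /* ...and the macro-expanded form of the same program. */
    EMIT('A');
    HALT();
}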
I'd like to take credit for having the inspiration, but I learned it
from a fellow who worked on the original IBM COMTRAN
project and
developed his own methodology for cranking out COBOL compilers very
quickly.
Cheers,
Chuck