On 11/05/10 19:13, "Chuck Guzis" <cclist at sydex.com> wrote:
> On 3 Nov 2010 at 20:47, Johnny Billquist wrote:
>> Ok. So, a program would think it addressed a memory space, which was
>> its own, and the addresses it used would in no way be related to the
>> actual physical memory it ended up referring to. I'd call that virtual
>> memory. Although, having to map the whole virtual memory as one chunk
>> to physical memory makes it a little more work, and less flexible than
>> having pages. And it pretty much prevents you from ever being able to
>> share memory in a reasonable way between processes.
> Well, not really. I refer you to the CDC 7000 SCOPE 2 operating
> system. There's a users' manual on bitsavers, but I suspect the
> design notebooks have long vanished from the face of the earth--so
> there's no documentation on the innards.
I tried to find any manuals on bitsavers, but I can't see anything about
a CDC 7000 there...

But while I can see that doing shared memory would be possible even with
a single mapping between virtual and physical address space, it would
mean you need to copy data between different locations at each context
switch, which would be rather heavy.
> At any rate, the CDC 7600 OS people had a peculiar problem. On the
> 6000 series of machines, PPUs are free-range; they have access to all
> of memory and at the time (say, 1968), comprised most of the
> operating system--there was almost no CPU code involved. You'd stick
> a request into your own location 1 and PP 0 would see it and detail
> off the work to the rest of the PPs. Very cool--you never gave up
> control of the CPU unless it was to yield to the job scheduler.
Gah. I have no idea what PPU means, nor PP.

But it sounds like what you describe now would not be virtual memory. If
each process has access to all of the memory, then you'd not have your
own address space. Instead you'd have to make sure you kept within your
boundaries. Hopefully the hardware can assist with that, but maybe not.
But that is still something else. It's basically just talking about
physical memory. But I might very well be totally misunderstanding
things here, since (as I said) I don't know what these acronyms really
mean.
> But this wasn't possible on the 7600, as each PP was assigned its own
> hard-wired slot in CPU memory and was unable to access anything but
> that. So the 7600 PPs were detailed off to I/O only. (Now, I'd call
> that memory-mapped I/O--you want to talk to a certain I/O
> processor, you communicate with it through a hardware-fixed location
> in memory.) Which left the CPU to handle OS tasks such as job
> scheduling and file management. A whole new can of worms, as SCM (the
> memory that a program could execute from) was very fast, but somewhat
> limited.
To me, the difference between shared memory I/O and memory mapped I/O is
about how the notification comes across between the subsystems. Is the
slave triggered by a write to the memory, or does the slave poll the
memory location? If the slave polls the memory location, then I'd call
it a shared memory design. If the slave gets triggered by a write to the
memory, then I'd call it memory mapped I/O. And what you describe here
could further be called I/O channels, I think, in IBM speak. Basically,
separate processors running their own code, which can do limited kinds
of stuff, mostly related to I/O functions for the main processor. Some
of these designs even allowed you to place the "program" to be run in
shared memory, and then kick off the I/O processor to do the work, and
it signalled back when it was done.
But I digress... :-)
> A small permanently-resident "kernel" to handle PP communication and
> task swapping was written, but job processing, file management, etc.
> were performed for each job with a sort of matryoshka doll setup of
> overlapping field lengths. In other words, a user program was
> completely enclosed within the record manager, which was completely
> enclosed within a buffer manager, which was completely enclosed within
> the job supervisor for that job. So all user memory was shared with
> successively higher privilege level tasks, differing only in where
> their respective location 0s were assigned in physical memory.
Ah, yes. That is also shared memory between different processes, but in
a somewhat limited hierarchical way. You could for all OSes say that any
process is always sharing its memory with the operating system. :-)
> The 7600 also had a bulk core "LCM" which couldn't be executed from,
> but served for swapping and data storage.
>
> As far as piecemeal swapping, I'll leave that for another time when I
> discuss the CDC Zodiac operating system (1970), something for which I
> suspect no documentation survives.
Sounds like fun...

	Johnny