On 8/31/2006 at 11:10 PM Jerome H. Fine wrote:
> Back in the early 1970's, CDC set up a facility in Toronto.
Yup, you guys had the STAR-65 (eventually sold for scrap). Most of the
FORTRAN compiler came from Toronto.
> One of the initial goals was to produce an operating system for the
> STAR-100, which I seem to remember had many of the advanced features
> found in a current high-quality OS such as VMS.
That was probably the RED system. What was finally adopted as the one for
the field was the one done at Lawrence Livermore in IMPL (a dialect of
LRLTRAN). Quite a bit different--but it made sense. LLL was the only
STAR-100 customer.
> The instruction set also included VECTOR instructions such as the
> ability to multiply up to 64K elements times a second set of 64K
> elements and place the products in a third set of 64K elements. Such
> an instruction would make use of 3 registers to specify each of the
> 3 sets of elements (the high-order 16 bits of each register was the
> count and the low-order 48 bits was the virtual memory address).
You forgot the control and/or sparse bit vectors that went along with the
operands. So a vector instruction could have 6 operands. If one used the
large page size (64Kwords), and positioned the operands so that the first
superword straddled a page boundary, it was possible to create a condition
wherein it was impossible for the instruction to have the necessary pages
in memory simultaneously.
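
To make the register format concrete, here's a rough sketch in modern
C of packing and unpacking such an operand descriptor. The field
layout follows the description above; the names and types are mine,
not CDC's, and this is illustration, not actual STAR-100 code:

    #include <assert.h>
    #include <stdint.h>

    /* Illustrative STAR-100-style vector operand descriptor:
     * high-order 16 bits = element count (up to 64K elements),
     * low-order 48 bits = virtual memory address (a *bit* address
     * on this machine). */
    typedef uint64_t vec_desc;

    #define ADDR_BITS 48
    #define ADDR_MASK ((UINT64_C(1) << ADDR_BITS) - 1)

    static vec_desc pack_desc(uint16_t count, uint64_t bit_addr)
    {
        assert((bit_addr & ~ADDR_MASK) == 0);  /* must fit in 48 bits */
        return ((uint64_t)count << ADDR_BITS) | bit_addr;
    }

    static uint16_t desc_count(vec_desc d) { return (uint16_t)(d >> ADDR_BITS); }
    static uint64_t desc_addr(vec_desc d)  { return d & ADDR_MASK; }

With a control vector and a sparse bit vector per operand, you can see
how a single instruction got to 6 operands in a hurry.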
Addressing was by bit, not byte or word. However, the boundary had to
agree with the operand size. So a bit vector could start on any
address, but a byte had to land on a byte boundary (lower 3 bits of
the address zero); a halfword on a halfword address, etc.
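
In modern terms that alignment rule is just a check of the low bits of
the bit address against the operand width. A sketch (the halfword and
word widths here are my assumption of 32 and 64 bits, for
illustration):

    #include <stdbool.h>
    #include <stdint.h>

    /* On a bit-addressed machine, an operand 2^k bits wide must
     * start on an address whose low k bits are zero: a byte
     * (8 bits) needs the low 3 bits zero, a 32-bit halfword the
     * low 5, a 64-bit word the low 6.  Bit vectors (width 1) can
     * start anywhere. */
    static bool aligned_for_width(uint64_t bit_addr, unsigned width_bits)
    {
        /* width_bits is assumed to be a power of two */
        return (bit_addr & (uint64_t)(width_bits - 1)) == 0;
    }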
And there were some very, very exotic instructions. (Remember Search
Masked Key Bit and the BCD arithmetic (128K max digits) instructions?)
> A standard OS function was the ability to associate (MAP) a file to
> an address range; the OS then managed the file for the user whenever
> any memory within the mapped range was referenced.
The problem was that mapped file I/O was far easier than the
conventional double-buffered sort of I/O, so the pager became the
vehicle for most file I/O, which didn't help performance much.
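
The same idea survives today as POSIX mmap(). A minimal modern
analogue (not the STAR OS interface, and the file name is
illustrative) looks like:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("data.bin", O_RDONLY);
        if (fd < 0) return 1;

        struct stat st;
        if (fstat(fd, &st) < 0) return 1;

        /* Map the whole file; the kernel pages it in on first touch,
         * exactly the fault-driven behavior described above. */
        unsigned char *p = mmap(NULL, st.st_size, PROT_READ,
                                MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) return 1;

        long sum = 0;
        for (off_t i = 0; i < st.st_size; i++)
            sum += p[i];               /* touching memory = file I/O */

        printf("checksum %ld\n", sum);
        munmap(p, st.st_size);
        close(fd);
        return 0;
    }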
> For example, if the user program referenced some code or data in
> virtual memory that was currently on disk and not in physical
> memory, a page fault occurred. Before the user program continued
> execution, the OS discarded a LEAST RECENTLY USED page, then read
> the newly referenced page into physical memory.
Initial versions of the operating system used simple demand paging.
Performance with many codes was dreadful. A program could help things
along by issuing OS ADVISE calls to notify the pager that a specific VM
access was coming up, but these turned out to be worthless. The OS
eventually migrated to a working set algorithm, which was much better.
(Remember the DEADBEEF kernel crash code?)
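
A toy sketch of the least-recently-used eviction described above
(nothing like the real STAR OS pager, just the policy; the frame
count and names are mine):

    #include <stdint.h>

    #define NFRAMES 8   /* toy memory: 8 page frames */

    static uint64_t frame_page[NFRAMES];   /* virtual page in each frame */
    static uint64_t frame_stamp[NFRAMES];  /* "time" of last touch */
    static uint64_t now;

    /* On a fault, evict the least recently used frame and
     * (notionally) read the faulting page into it.  A working-set
     * pager instead tracks each program's recently touched pages
     * and keeps that whole set resident, which is where the STAR
     * OS eventually ended up. */
    static int fault_in(uint64_t page)
    {
        int victim = 0;
        for (int i = 1; i < NFRAMES; i++)
            if (frame_stamp[i] < frame_stamp[victim])
                victim = i;
        /* ...write victim back if dirty, read `page` from disk... */
        frame_page[victim] = page;
        frame_stamp[victim] = ++now;
        return victim;
    }

    /* Every successful reference refreshes the stamp: */
    static void touch(int frame) { frame_stamp[frame] = ++now; }

The modern cousin of those ADVISE calls, for what it's worth, is
posix_madvise() with POSIX_MADV_WILLNEED.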
When the "know it alls" with the CYBER 180 OS design made their
presentation in Sunnyvale and described a demand pager, I stood up and
asked them where the he-double-matchsticks they'd had their heads for the
last few years. Seems they didn't realize that CDC already had a virtual
memory machine--and some folks who knew a thing or two about paging
algorithms. I was furious.
One very neat feature of the operating system was the "drop file",
essentially a collection and map of all changed pages (and I/O information)
of the program (called a "controllee") that was running. You could
interrupt your program and resume execution days or weeks later by simply
invoking the drop file instead of the original controllee.
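
Conceptually a drop file is a checkpoint: the changed pages plus
enough state to resume. A purely illustrative sketch of such a layout
in C (no relation whatsoever to the real on-disk format; every name
and field here is made up):

    #include <stdint.h>
    #include <stdio.h>

    struct drop_header {
        uint64_t npages;      /* number of dirty pages that follow */
        uint64_t resume_pc;   /* where execution picks up */
        /* ...register file, open-file/I/O state, etc. */
    };

    struct dirty_page {
        uint64_t vaddr;             /* where the page belongs */
        unsigned char data[4096];   /* page size is illustrative */
    };

    static int write_drop(FILE *f, const struct drop_header *h,
                          const struct dirty_page *pages)
    {
        if (fwrite(h, sizeof *h, 1, f) != 1)
            return -1;
        if (fwrite(pages, sizeof *pages, h->npages, f) != h->npages)
            return -1;
        return 0;
    }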
The OS included a message-passing facility. When your program was finished,
it passed the ASCII message "All Done" to its parent. There was no
specific "End of Job" system call.
> By the way, because the STAR-100 was so expensive, a baby called the
> PL-50 was produced which had the same instruction set and registers,
> but ran much slower.
We also had STAR-1B's at Sunnyvale for a time which probably ran slower
than 1/100th the speed of the 100. That wouldn't have been so bad, except
that they weren't all that reliable, either. It used the same stations as
the 100. It took the CE's almost 2 weeks to reduce the two of them to
junk. I still have a heat sink from one, filched out of a dumpster.
> All of this took place more than 30 years ago.
Seems like yesterday sometimes.
Cheers,
Chuck