Chuck Guzis wrote:
> I'm sure and I'd never seriously call it "memory-mapped I/O"--but
> sometimes our world seems akin to that of Humpty-Dumpty: "When I use
> a word it means just what I choose it to mean--neither more nor less"
Each adjective and noun can often mean many different things to
individuals with different backgrounds. The other day I had an
interesting discussion concerning the words "never" and "always".
While my definition for those words is 0% and 100%, respectively, it
soon became apparent that "never" might, in some circumstances,
include up to 10%, and "always" might be as low as 90%, depending on
the context. DoubleSpeak seems to come easily to many individuals
when being precise is not very important. Fortunately, as a
programmer, I learned very early to appreciate the lack of redundancy
in any language used to write code for a computer. Or maybe that
should be "UNFORTUNATELY".
> Uh-oh, here comes another story...
> After I left CDC and the STAR project in 1977, my past came back to
Interesting!! I worked for CDC in Toronto from around 1972 to 1977 on
the local PL-50 program. IIRC, the PL-50 was a much slower version of
the STAR-100, but with the same instruction set. Initially, the goal
was to write an operating system.

However, after almost 35 years, I probably don't have any STAR-100
manuals around. If I do, they are probably buried in the mountain of
old documentation. Chuck, do you have any of the manuals for the
instruction set? Might there be such a manual on bitsavers?
For reasons that are best forgotten, when the project was cancelled
just short of being finished, I ended up attempting to tidy up some
of the loose ends. I also made a comparison with the IBM virtual
memory implementation and ran some code on both the PL-50 and the IBM
370 (?? is that the correct IBM model around 1975?) to compare the
efficiency of the paging algorithm on each machine.
One rather interesting aspect of the paging algorithm on the STAR-100
made use of the hardware stack of pages, which was kept in LRU (Least
Recently Used) order in a list in memory. When the number of
available pages (either completely free or unaltered) fell below the
accepted threshold, the MOST LRU altered page (the altered page
nearest the least recently used end of the list) was written to its
disk backup, in anticipation that that page would still be the MOST
LRU when the time came to discard a page from physical memory. In
order to test the size of the threshold, statistics were kept on how
often that page was altered again (which meant that the disk I/O
operation had not been very useful). Throughput was significantly
increased, since the required new page (at a different virtual
address) could be read immediately, without the double wait time of
writing out an altered MOST LRU page before the replacement page
could be read.
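
In modern terms, the heuristic can be sketched as below. This is a
minimal reconstruction from the description above, not the actual
STAR-100 code; NPAGES, THRESHOLD, touch() and preclean() are
illustrative names, and doing a single pre-clean per reference is my
own simplifying assumption.

/* Sketch of the pre-cleaning heuristic described above, written from
 * the description rather than the real STAR-100 code.              */
#include <stdio.h>
#include <stdbool.h>

#define NPAGES    4   /* physical page frames (tiny, for demonstration) */
#define THRESHOLD 2   /* minimum number of free or unaltered frames     */

struct frame {
    int  vpage;       /* virtual page held, or -1 if free               */
    bool dirty;       /* altered since its disk copy was last written?  */
    bool precleaned;  /* written back in advance while still resident?  */
};

/* frames[0] is the LRU end of the list, frames[NPAGES-1] the MRU end.  */
static struct frame frames[NPAGES];
static int wasted_writes;  /* pre-cleaned pages that were altered again */
static int double_waits;   /* faults that had to write before reading   */

static int available(void)        /* frames that are free or unaltered  */
{
    int n = 0;
    for (int i = 0; i < NPAGES; i++)
        if (frames[i].vpage < 0 || !frames[i].dirty)
            n++;
    return n;
}

/* Write the MOST LRU altered page to its disk backup, betting that it
 * will still be near the LRU end when a frame has to be reclaimed.     */
static void preclean(void)
{
    for (int i = 0; i < NPAGES; i++)
        if (frames[i].vpage >= 0 && frames[i].dirty) {
            printf("  preclean: write vpage %d to disk\n", frames[i].vpage);
            frames[i].dirty = false;
            frames[i].precleaned = true;
            return;
        }
}

/* Move frame i to the MRU end, keeping the others in LRU order.        */
static void move_to_mru(int i)
{
    struct frame f = frames[i];
    for (int j = i; j < NPAGES - 1; j++)
        frames[j] = frames[j + 1];
    frames[NPAGES - 1] = f;
}

static void reference(int vpage, bool write)
{
    for (int i = 0; i < NPAGES; i++)
        if (frames[i].vpage == vpage) {          /* hit */
            if (write) {
                if (frames[i].precleaned) {      /* the early write-back */
                    wasted_writes++;             /* bought us nothing    */
                    frames[i].precleaned = false;
                }
                frames[i].dirty = true;
            }
            move_to_mru(i);
            return;
        }
    /* miss: reclaim the frame at the LRU end (index 0) */
    if (frames[0].vpage >= 0 && frames[0].dirty) {
        printf("  fault: write vpage %d, THEN read vpage %d (double wait)\n",
               frames[0].vpage, vpage);
        double_waits++;
    } else {
        printf("  fault: read vpage %d immediately\n", vpage);
    }
    frames[0] = (struct frame){ vpage, write, false };
    move_to_mru(0);
}

static void touch(int vpage, bool write)
{
    reference(vpage, write);
    if (available() < THRESHOLD)   /* too few clean/free pages left?    */
        preclean();
}

int main(void)
{
    for (int i = 0; i < NPAGES; i++)
        frames[i].vpage = -1;

    /* a small reference string: some writes, then faults on new pages  */
    int  refs[]   = { 1, 2, 3, 1, 4, 5, 2, 6 };
    bool writes[] = { 1, 1, 1, 1, 0, 0, 1, 0 };
    for (int k = 0; k < (int)(sizeof refs / sizeof refs[0]); k++) {
        printf("touch vpage %d (%s)\n", refs[k], writes[k] ? "write" : "read");
        touch(refs[k], writes[k]);
    }
    printf("wasted pre-clean writes: %d, double waits: %d\n",
           wasted_writes, double_waits);
    return 0;
}

Running the sketch on a small reference string shows both sides of
the trade-off: a pre-cleaned page that is altered again counts as a
wasted write, while a fault that finds the LRU page already clean
avoids the double wait of a write followed by a read.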
Jerome Fine