On Tuesday 13 November 2007 01:58, jim s wrote:
Roy J. Tellason wrote:
On Monday 12 November 2007 17:24, Al Kossow
wrote:
The I/O on a full-blown system is where a modern system might have emulation problems.
This is exactly where simulation has been hung up for years. The mass storage and terminal system is complex, with microcoded controllers for tape and disk and dedicated front-end processors for terminal/network I/O.
This sort of thing is exactly the area where I'm fuzziest when it comes
to any sort of a real understanding of big iron. Aside from seeing
references to such stuff from time to time, I really don't have a clue
as to why you'd _want_ something like a separate dedicated processor to
handle I/O, for one example.
The Multics system was designed to be scaled to very large numbers of
processors and thousands of users.
That's the OS. And then there's the hardware, with which some stuff scales well and some doesn't, and I'm still trying to figure out what the factors are that get involved in that.
It was recognized that the tasks handling disk I/O, which was very important to performance, need not all go through the processors in all cases.
Makes sense.
Multics was a virtual memory system. The nature of this sort of system was researched by students at MIT, and that work on enhancing performance showed that a lot of the activities could be carried out by subsystems dedicated to just the specific task of moving data to and from memory.
This is where I get a little puzzled sometimes. Like that BB2 I mentioned, which uses DMA to send a string of bytes to the disk controller chip rather than using the processor to do those transfers. Why isn't immediately apparent to me, since the processor isn't doing anything else at that point anyhow...
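For what it's worth, the contrast is easier to see side by side. Here's a minimal C sketch of the two approaches against a hypothetical memory-mapped disk controller -- every register name, address, and bit in it is invented for illustration, it's not the BB2's actual hardware:

/* Hedged sketch: hypothetical memory-mapped disk controller on a
   small 8-bit-era machine.  All addresses and bit definitions are
   made up for illustration. */
#include <stdint.h>
#include <stddef.h>

#define CTRL_STATUS  (*(volatile uint8_t *)0xF000)  /* status register */
#define CTRL_DATA    (*(volatile uint8_t *)0xF001)  /* data port       */
#define CTRL_DMA_LO  (*(volatile uint8_t *)0xF002)  /* DMA addr, low   */
#define CTRL_DMA_HI  (*(volatile uint8_t *)0xF003)  /* DMA addr, high  */
#define CTRL_COUNT   (*(volatile uint8_t *)0xF004)  /* byte count      */
#define CTRL_CMD     (*(volatile uint8_t *)0xF005)  /* command         */

#define ST_READY  0x01   /* controller can accept a byte */
#define CMD_GO    0x80   /* start DMA transfer           */

/* Programmed I/O: the CPU moves every byte itself, polling the
   status bit each time.  The per-byte loop overhead caps the
   transfer rate the CPU can sustain. */
static void write_pio(const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        while (!(CTRL_STATUS & ST_READY))
            ;                       /* spin until ready */
        CTRL_DATA = buf[i];
    }
}

/* DMA: the CPU just programs a source address and count, then the
   controller fetches bytes from memory itself, with no per-byte
   instruction overhead.  (Assumes a 16-bit address space.) */
static void write_dma(const uint8_t *buf, uint8_t len)
{
    uintptr_t a = (uintptr_t)buf;
    CTRL_DMA_LO = (uint8_t)(a & 0xFF);
    CTRL_DMA_HI = (uint8_t)(a >> 8);
    CTRL_COUNT  = len;
    CTRL_CMD    = CMD_GO;           /* controller takes it from here */
}

One common reason DMA wins even when the CPU is otherwise idle: the controller chip wants its bytes at a fixed rate set by the spinning disk, and a per-byte polling loop on a slow processor may not meet that deadline, while the DMA channel does it in hardware. On a multitasking system, the freed-up CPU time matters too.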
Early systems also had three-level storage systems which incorporated a swapping drum: by swapping user spaces to the drum, they got the performance of a large amount of core memory without having the actual core. Disk was sufficiently fast to allow the virtual memory approach to start to be used in a timesharing system, but not fast enough to match the drum's performance.
A lot of the early textbooks I was able to get my hands on (mostly the sorts
you could find in a public library) mentioned drum storage, but they never
really got into the performance and capacity comparisons between those and
disk. And then there was brief mention of systems that had multiple fixed heads for R/W rather than a single moving one -- I believe some of the DEC literature I got my hands on back then also talked about such devices, but I never saw hard numbers to be able to compare them. Never actually saw a drum or a system that used one, either...
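For a rough feel, here's a back-of-the-envelope comparison in C. The figures in it (a 3600 RPM fixed-head drum, a moving-head disk at 2400 RPM with a 50 ms average seek) are era-typical assumptions picked for illustration, not the specs of any particular machine:

/* Back-of-the-envelope access-time comparison.  The drum and disk
   parameters below are era-typical assumptions chosen for
   illustration, not measurements of any particular machine. */
#include <stdio.h>

int main(void)
{
    /* Fixed-head drum: no seek, so average access is just half a
       revolution of rotational latency. */
    double drum_rpm    = 3600.0;
    double drum_avg_ms = 0.5 * (60000.0 / drum_rpm);    /* ~8.3 ms */

    /* Moving-head disk: average seek plus half a revolution. */
    double disk_rpm     = 2400.0;
    double disk_seek_ms = 50.0;
    double disk_avg_ms  = disk_seek_ms
                        + 0.5 * (60000.0 / disk_rpm);   /* ~62.5 ms */

    printf("drum average access: %.1f ms\n", drum_avg_ms);
    printf("disk average access: %.1f ms\n", disk_avg_ms);
    printf("ratio: ~%.0fx\n", disk_avg_ms / drum_avg_ms);
    return 0;
}

On those assumptions the drum averages about 8 ms per access against roughly 60 ms for the disk -- call it 7-8x -- which is why it earned its slot between core and disk. A fixed-head disk splits the difference: no seek term, just the rotational one.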
By the time the system I worked on was configured, that threshold had been crossed, and the USL Multics system was configured with no drum and a large amount of memory (1 or 2 MB, I think) instead.
Speaking of crossing a threshold, I can still remember years ago, when the only computer I had was my Osborne Executive, and I was sitting there using Wordstar and hit the point where the file I was working on would no longer fit completely within memory. The transition was so completely seamless and transparent to me as the user that I was impressed. :-)
And on this current linux box, it continues... Though when I run out of swap, that's another story. It's still pretty seriously robust, though.
The communications front end handled a lot of the mundane tasks of interfacing with terminals, and transmitted streams of character data to and from the terminals rather than having the host deal with individual character transfers.
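To make that concrete, here's a minimal C sketch of the idea -- not actual Multics front-end code, and all the names and sizes in it are invented. The front end absorbs one interrupt per keystroke and hands the host one block per completed line:

/* Hedged sketch of why a comms front end pays off: the front end
   takes one interrupt per character, and the host only sees one
   transfer per complete line.  Names and sizes are invented. */
#include <stdint.h>

#define LINE_MAX 128

static uint8_t  line_buf[LINE_MAX];
static unsigned line_len;

/* Supplied elsewhere in this hypothetical front end. */
extern uint8_t uart_read_char(void);
extern void    send_block_to_host(const uint8_t *buf, unsigned len);

/* Runs in the front-end processor on every character interrupt --
   thousands of times a second across many terminals, none of which
   the host CPU ever sees. */
void on_char_interrupt(void)
{
    uint8_t c = uart_read_char();

    if (c == '\r') {
        /* End of line: one block transfer to the host instead of
           one interrupt per keystroke. */
        send_block_to_host(line_buf, line_len);
        line_len = 0;
    } else if (line_len < LINE_MAX) {
        line_buf[line_len++] = c;
    }
    /* else: buffer full, character dropped -- a real front end
       would flow-control the terminal instead. */
}

Echoing and erase/kill editing could be handled locally the same way, so the host was bothered a few times per line instead of once per character.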
And I really don't have that much of a
handle on how the architecture of
those sorts of machines differs from the micros I'm familiar with. Even
that bit of time I got to spend with the Heath H11 was very alien to the
rest of what I'm familiar with.
I realize that I'll never get as familiar with some of this stuff as some
of you guys that have actually worked with it, but can any of you point
me toward some resources that might let me understand some of it better
than I do now?
C. Gordon Bell's books are a good place to start. One oriented towards DEC systems is at
http://research.microsoft.com/~GBell/CGB%20Files/Computer%20Engineering%207809%20c.pdf
This book, which is online, covers all computers up to the time it was published and is very useful to read.
http://research.microsoft.com/~GBell/CGB%20Files/Computer%20Structures%20Principles%20and%20Examples%201982%20ng%20c.pdf
Looks like some reading material, there... :-)
Also on that site is a listing of computer companies he compiled, which is very useful if you can't quite think of the name of the computer company you have a part for. (Off the subject, but useful.)
http://research.microsoft.com/~GBell/CGB%20Files/91_US_minicomputer_companies_1960-1982+44_minisupers_superminis_scalables_1983-1995.htm
I find his web site useful in general to pore over for reading. The links above are from books he has had put online since going to work for Microsoft.
http://research.microsoft.com/~GBell/
I'll have a look there when the downloads are finished. Thanks for the
pointers...
--
Member of the toughest, meanest, deadliest, most unrelenting -- and
ablest -- form of life in this section of space, a critter that can
be killed but can't be tamed. --Robert A. Heinlein, "The Puppet Masters"
-
Information is more dangerous than cannon to a society ruled by lies. --James M Dakin