On 6/8/20 12:49 PM, ben via cctalk wrote:
But speed is relative. They changed computers where I was studying
electronics from an IBM-1130 to a VAX (something) in 1982.
I could use the IBM but not the VAX, because there would have been
too many users if that division had access to the VAX, and the
electronics section had PDP-8 computers anyway.
So I suspect any VAX with one user would be faster than REAL
world machines in the 1980s.
Ben.
PS: Virtual memory thrashing is what slows a computer down,
not, say, 10,000 monkeys typing Shakespeare on ASR-33s.
Well, one of the problems is the OS scheduling users without the ability
to manage resources.
There's nothing wrong with virtual memory, provided that it's managed
correctly. When the illusion of lots of memory causes the scheduler to
drastically over-commit resources, you get thrashing and nobody gets
anything done.
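
A quick sketch of what over-commitment looks like from a single
program's point of view (the sizes here are assumed, nothing
system-specific): allocate more memory than is physically present and
touch it at random, and once the touched set exceeds what's resident,
nearly every reference is a page fault.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Assumed: about 1 GiB of RAM, so 4 GiB is a deliberate
           over-commit. */
        size_t n = (size_t)4 << 30;
        char *p = malloc(n);
        if (p == NULL) { perror("malloc"); return 1; }

        /* Page-aligned random touches defeat any LRU guess the pager
           makes: each reference likely lands on a page that was just
           evicted. */
        for (long i = 0; i < 100000000L; i++)
            p[((size_t)rand() * 4096) % n] ^= 1;

        free(p);
        return 0;
    }

The same thing happens system-wide when the scheduler lets every
user's working set grow past physical memory at once: the CPU sits
mostly idle while the pager does all the work.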
Although there are exceptions. I recall that it was possible, using
large page sizes on the CDC STAR-100, to execute an instruction that
could never get started. The STAR had 512 KW (64-bit words) of memory
and a large page size was 64 KW. A typical vector instruction could
require six addresses for source, destination, and control vectors.
Put the starting address of any of these in the last 8 words of a page
and the hardware faulted preemptively for the next page. It was kind
of funny to watch; the
P-counter for the user never budged, but the pager was sucking up time
like crazy. I think someone eventually devised a check in the pager for
this case, but I'm not certain.
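
A toy model of that start-up livelock (the frame count and the FIFO
replacement policy here are assumptions for illustration, not the
STAR pager's actual behavior): six operand addresses near page
boundaries mean twelve pages must be resident at once, and if the
frame pool is smaller than that, every fault evicts a page the
instruction still needs.

    #include <stdbool.h>
    #include <stdio.h>

    #define FRAMES  8   /* resident page frames available (assumed) */
    #define NEEDED 12   /* 6 operand pages + 6 prefaulted next pages */

    int main(void)
    {
        int frames[FRAMES];
        for (int i = 0; i < FRAMES; i++) frames[i] = -1;

        int victim = 0, faults = 0;
        /* Real hardware retried forever; five attempts show the
           pattern. */
        for (int attempt = 0; attempt < 5; attempt++) {
            bool all_resident = true;
            /* The instruction re-walks all of its pages before it
               can issue. */
            for (int page = 0; page < NEEDED; page++) {
                bool resident = false;
                for (int f = 0; f < FRAMES; f++)
                    if (frames[f] == page) resident = true;
                if (!resident) {                    /* fault: load page, */
                    frames[victim] = page;          /* evicting one that */
                    victim = (victim + 1) % FRAMES; /* is still needed   */
                    faults++;
                    all_resident = false;
                }
            }
            printf("attempt %d: %s, %d faults so far\n", attempt,
                   all_resident ? "instruction issues" : "restarted",
                   faults);
        }
        return 0;
    }

Run it and the P-counter analogue never moves: every attempt ends in
"restarted" while the fault count climbs, which is exactly the pager
sucking up time like crazy.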
--Chuck