> On Jul 19, 2016, at 12:10 PM, Noel Chiappa <jnc at mercury.lcs.mit.edu> wrote:
>> From: Paul Koning

>> The article, as usual, talks about a whole bunch of things that are
>> much older than the author seems to know.
"The two most common things in the universe are hydrogen and stupidity." OK,
so technically it's ignorance, not stupidity, but in my book it's stupid to
not know when one's ignorant.
>> RISC, as a term, may come from IBM, but the concept goes back at least
>> as far as the CDC 6000 series.
> Hmm; perhaps. I always felt that RISC meant 'making the basic cycle time as
> fast as possible by finding the longest path through the logic - i.e. the
> limiting factor on the cycle time - and removing it (thereby making the
> instruction set less rich); then repeat'. (And there's also an aspect of
> moving complexity from the hardware to the compiler - i.e. optimizing system
> performance across the _entire_ system, not just across a limited subset
> like the hardware only.)
"Making the cycle time as fast as possible" certainly applies, in spades, to the
6600. The deeper you dig into its details, the more impressed you will be by the many
different ways in which it does things faster than you would expect to be possible. (For
example, how many other machines have divide logic -- not "reciprocal approximation
-- that divides N bit values in N/2 cycles?) Or context switching that requires just a
single block-memory transaction?
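(The N/2-cycle figure corresponds to a divider that retires two quotient bits
per cycle, i.e. a radix-4 scheme. A minimal Python sketch of that idea -- not
the 6600's actual circuit, just an illustration of why the step count halves:

```python
def divide_radix4(dividend, divisor, n):
    """Restoring radix-4 division sketch: each loop iteration consumes two
    dividend bits and produces two quotient bits, so an n-bit divide
    completes in n/2 steps (n even, divisor > 0, dividend < 2**n)."""
    quotient, remainder = 0, 0
    for i in range(n - 2, -2, -2):
        # Shift in the next two dividend bits (one radix-4 digit).
        remainder = (remainder << 2) | ((dividend >> i) & 0b11)
        # Pick the quotient digit 0..3; remainder < 4*divisor guarantees this.
        digit = remainder // divisor
        remainder -= digit * divisor
        quotient = (quotient << 2) | digit
    return quotient, remainder

# An 8-bit divide finishes in 4 iterations: 100 / 7 -> quotient 14, remainder 2
print(divide_radix4(100, 7, 8))
```

A real radix-4 divider would select the digit with comparators or an SRT
table rather than an integer divide, but the bits-per-cycle accounting is the
same.)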
> As I've previously discussed, RISC only makes (system-wide) sense in an
> environment in which memory bandwidth is plentiful (so that having programs
> contain more, simpler instructions makes sense) - does that apply to the
> CDC machines?
Yes: 32-way interleaving, 1 microsecond full memory cycle, 100 ns CPU cycle.
The Cybers are not memory bandwidth limited. Note that the 6600 has quite
advanced memory operation scheduling and queueing.
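(A back-of-the-envelope check of those numbers -- my arithmetic, not anything
claimed above:

```python
banks = 32                # 32-way interleaved core memory
bank_cycle_ns = 1000      # 1 microsecond full memory cycle per bank
cpu_minor_cycle_ns = 100  # 100 ns CPU cycle

# With the banks interleaved, each bank can start a new access every bank
# cycle, so peak throughput is one word per bank per bank cycle.
peak_words_per_us = banks * 1000 / bank_cycle_ns      # 32 words/us
# Even demanding one word every CPU cycle stays well under that peak.
cpu_demand_words_per_us = 1000 / cpu_minor_cycle_ns   # 10 words/us

print(peak_words_per_us, cpu_demand_words_per_us)     # 32.0 10.0
```

So the interleaved memory can, in principle, deliver roughly 3x the words the
CPU can consume -- plentiful bandwidth in exactly the sense Noel describes.)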
>> Pipelining, to the CDC 7600.
> Didn't STRETCH have pipelining? Too busy/lazy to check...
Could be. I meant to apply "goes back at least as far as" here as well.
>> And if you equate RISC to load/store with simple regular instruction
>> patterns, you can probably go all the way back to the earliest computers
> Well, I'm not at all sure that load-store is a good indicator for RISC -
> note that the PDP-10 is load-store... But anyway, moving on.
No, but I said "load/store with simple regular instruction patterns". On
reconsideration, I think I'll retract what I said, though. Early machines
tended to be single-address but not load/store; rather, you'd find operations
like "add memory to register". A CDC 6000, though, is clearly strictly
load/store.
> One of the books about Turing argues that the ACE can be seen as a RISC
> machine (it's not just that it's load-store; its overall architectural
> philosophy is all about maximizing instruction rates).
I think a lot of machine designers, though not all, were seriously interested
in making them go fast. For an example I'd point to the Dutch ARMAC, from
around 1956, a drum-main-memory machine with a one-track RAM buffer, allowing
the programmer to make things go much faster by arranging for bits of code
and associated data to be all in one track. When your basic machine has a
20 millisecond operation time because of the drum, the need to optimize
becomes rather obvious...
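(To put rough numbers on why track placement mattered: a hypothetical model,
where only the 20 ms drum figure comes from the text above and the buffer
access time is an assumed illustrative value:

```python
DRUM_MS = 20.0     # per-operation time when the operand must come off the drum
BUFFER_MS = 0.001  # assumed access time for the one-track RAM buffer (hypothetical)

def run_time_ms(ops, in_track_fraction):
    """Average run time for `ops` operations when `in_track_fraction` of
    them hit code/data laid out in the currently buffered track."""
    hit = in_track_fraction
    return ops * (hit * BUFFER_MS + (1 - hit) * DRUM_MS)

# 1000 operations: everything scattered on the drum vs. mostly in one track
print(run_time_ms(1000, 0.0))   # 20000.0 ms
print(run_time_ms(1000, 0.95))  # ~1000.95 ms
```

Even a 95% in-track hit rate cuts the run time by a factor of about 20 under
these assumptions, which is why programmers went to the trouble of packing
code and data into a single track.)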
paul