On Jul 20, 2016, at 9:56 AM, Noel Chiappa <jnc at mercury.lcs.mit.edu> wrote:
From: Paul Koning
> I always felt that RISC meant 'making the basic cycle time as fast as
> possible by finding the longest path through the logic - i.e. the
> limiting factor on the cycle time - and removing it (thereby making the
> instruction set less rich); then repeat'.
"Making the cycle time as fast as
possible" certainly applies, in
spades, to the 6600. The deeper you dig into its details, the more
impressed you will be by the many different ways in which it does things
faster than you would expect to be possible.
My formulation for RISC had two parts, though: not just minimizing the cycle
time, but doing so by doing things that (as a side-effect) make the
instruction set less capable. I'm not very familiar with the 6600 - does this
part apply too?
Depending on what you mean by "less capable", I don't know that I would
agree with that. For example, I doubt that anyone would argue MIPS isn't a RISC
architecture. Yet MIPS is certainly very capable, and it certainly has a rather large
instruction set. The key point is that those instructions are, by and large, conceptually
straightforward, and lend themselves to efficient (small cycle count, small transistor
count) implementation. Also, RISC does not use, or need, microcode.
In that sense, the 6000 series certainly qualifies. It has load/store, integer and floating-point
arithmetic on registers, boolean ops, and basic transfer of control instructions.
That's about it. And the implementation is certainly straightforward. A 6600 has a
fair number of gates, but that stems from its multiple functional units, memory
scheduling, and intense emphasis on speed, not from the inherent complexity of its
instruction set. A 6400, which is a single functional unit implementation of the same
instruction set, is a whole lot smaller.
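
To make "conceptually straightforward" concrete, here is a toy load/store machine
sketched in C. The opcodes and encoding are my own invention for illustration -
nothing here is the 6600's (or MIPS's) actual format - but it shows the essential
property: every instruction is a single simple register or memory operation, so
decode-and-execute is one plain switch with no microcode underneath.

    #include <stdint.h>
    #include <stdio.h>

    /* Invented opcodes for a load/store machine: loads, stores, register
       arithmetic, a boolean op, and a conditional branch - roughly the
       instruction categories listed above. */
    enum op { LOAD, STORE, ADD, AND, BRANCH_ZERO, HALT };

    struct insn {
        enum op op;
        int a, b, c;    /* register numbers; c doubles as an address or branch target */
    };

    int64_t reg[8];     /* register file */
    int64_t mem[256];   /* data memory */

    void run(const struct insn *prog)
    {
        for (int pc = 0; ; pc++) {
            const struct insn i = prog[pc];
            switch (i.op) {     /* one trivial case per instruction - no microcode */
            case LOAD:        reg[i.a] = mem[i.c];             break;
            case STORE:       mem[i.c] = reg[i.a];             break;
            case ADD:         reg[i.a] = reg[i.b] + reg[i.c];  break;
            case AND:         reg[i.a] = reg[i.b] & reg[i.c];  break;
            case BRANCH_ZERO: if (reg[i.a] == 0) pc = i.c - 1; break;
            case HALT:        return;
            }
        }
    }

    int main(void)
    {
        mem[0] = 40;
        mem[1] = 2;
        /* r1 = mem[0]; r2 = mem[1]; r3 = r1 + r2; mem[2] = r3 */
        const struct insn prog[] = {
            { LOAD,  1, 0, 0 },
            { LOAD,  2, 0, 1 },
            { ADD,   3, 1, 2 },
            { STORE, 3, 0, 2 },
            { HALT,  0, 0, 0 },
        };
        run(prog);
        printf("mem[2] = %lld\n", (long long)mem[2]);   /* prints mem[2] = 42 */
        return 0;
    }

The point is only that nothing in such an instruction set demands more than that
one switch per instruction; the 6600's gate count comes from overlapping many of
these simple operations at once, not from complexity hidden inside any one of them.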
I recommend the excellent (and rather short) book by Thornton, one of the 6600 designers:
http://bitsavers.trailing-edge.com/pdf/cdc/cyber/books/DesignOfAComputer_CD…
It will take you through the design all the way from transistor considerations and
circuit elements to the instruction and memory scheduling machinery.
> RISC only makes (system-wide) sense in an environment in which memory
> bandwidth is plentiful (so that having programs contain more, simpler
> instructions makes sense)
I should have pointed out that programs of that sort take not just more memory
bandwidth, but more memory to hold them. In this day of massive memories, no
biggie, but back in the core memory days, it was more of an issue.
I don't think that's necessarily all that big a delta. Again using MIPS as an
example, its program sizes are not that much larger than, say, the PDP-11's for
the same source code.
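
A rough, back-of-the-envelope illustration of why the delta stays small (my own
estimates, not measured compiler output): take a single assignment on globals and
compare the likely instruction sequences. The exact sequences depend on addressing
modes and the compiler, so treat the byte counts as ballpark only.

    /* "a = b + c;" on global ints - approximate code size, for illustration.
     *
     * PDP-11 (memory-to-memory, variable-length 16-bit words), roughly:
     *     MOV  b, a           ; opcode word + two address words = 6 bytes
     *     ADD  c, a           ; opcode word + two address words = 6 bytes
     *                         ; about 12 bytes total
     *
     * MIPS (load/store, fixed 32-bit instructions, globals reached via a
     * base register), roughly:
     *     lw   $t0, b         ; 4 bytes
     *     lw   $t1, c         ; 4 bytes
     *     addu $t0, $t0, $t1  ; 4 bytes
     *     sw   $t0, a         ; 4 bytes
     *                         ; about 16 bytes total
     *
     * More instructions on the load/store machine, but each one is small and
     * fixed-size, so the growth is a modest fraction, not a multiple.
     */
    int a, b, c;

    void add_globals(void)
    {
        a = b + c;
    }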
...
> I think a lot of machine designers, though not all, were seriously
> interested in making them go fast.
Again, RISC has two legs: not just making machines fast, but making them
fast by using techniques that, as a side-effect, make them inscrutable and
difficult to program. The concept was that they would not, in general, be
programmed in assembler - precisely because they were so finicky.
It is true that a few RISC architectures are not very scrutable. Itanium is a notorious
example, as are some other VLIW machines. But many RISC machines are much more sane. MIPS and
ARM certainly are no problem for any competent assembly language programmer. Alpha is a
bit harder but definitely doable too.
The burden on compiler optimizers tends to be higher. But I would argue that's true
for any high performance design: those have multiple pipelines, caches and prefetch, and
lots of other stuff that affects performance. The code optimizer has to know about these
things. (So does the assembly language programmer, if assembly is used for performance
rather than for esoteric bare metal stuff like boot or diagnostic code.) But, with the
possible exception of extremely bizarre designs like Itanium, that's all perfectly
doable.
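
As a concrete (and deliberately simplified) example of the sort of machine
knowledge that lands on the optimizer or the performance-minded programmer on any
cached, pipelined design, RISC or not: the two functions below do identical
arithmetic, but the loop order makes one cache-friendly and the other not. The
array size and names are just for illustration.

    #include <stddef.h>
    #include <stdio.h>

    #define N 1024
    static double m[N][N];

    /* Row-major walk: consecutive accesses are adjacent in memory, so each
       cache line that is fetched gets used in full. */
    double sum_row_major(void)
    {
        double s = 0.0;
        for (size_t i = 0; i < N; i++)
            for (size_t j = 0; j < N; j++)
                s += m[i][j];
        return s;
    }

    /* Column-major walk: same arithmetic and instruction count, but each
       access jumps a full row ahead, so it touches a new cache line every
       time and typically runs several times slower on a cached machine. */
    double sum_col_major(void)
    {
        double s = 0.0;
        for (size_t j = 0; j < N; j++)
            for (size_t i = 0; i < N; i++)
                s += m[i][j];
        return s;
    }

    int main(void)
    {
        /* m is zero-initialized; the interesting part is the access pattern,
           not the result. */
        printf("%f %f\n", sum_row_major(), sum_col_major());
        return 0;
    }

None of that is specific to RISC; it is simply the cost of any high performance
design with caches and deep pipelines, which is the point above.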
paul