One other factor is that RISC machines rely on simple
operations
carefully arranged by optimizing compilers (or, in some cases,
skillful programmers). A multi-step operation can be encoded in a
sequence of RISC operations run through an optimizing scheduler more
effectively than the equivalent sequence of steps inside the
micro-engine of a CISC processor.
Let's call them LOAD/STORE architectures.
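To make the scheduling point concrete, here is a hypothetical sketch
in C (not any particular ISA) of how a compiler for a load/store
machine might break a memory-to-memory update into separately
schedulable steps, issuing independent loads early so each hides the
other's latency:

#include <stdint.h>

/* CISC view: one memory-to-memory instruction per element,
 *   dst[i] = dst[i] + src[i]
 * Load/store view: the same work as explicit load/op/store steps
 * that the compiler's scheduler is free to reorder and interleave. */
void add_arrays(int32_t *dst, const int32_t *src, int n)
{
    for (int i = 0; i + 1 < n; i += 2) {
        int32_t a0 = dst[i];        /* load, issued early       */
        int32_t b0 = src[i];        /* load                     */
        int32_t a1 = dst[i + 1];    /* load, overlaps the above */
        int32_t b1 = src[i + 1];    /* load                     */
        a0 += b0;                   /* op                       */
        a1 += b1;                   /* op                       */
        dst[i]     = a0;            /* store                    */
        dst[i + 1] = a1;            /* store                    */
    }
    if (n & 1)                      /* odd element left over    */
        dst[n - 1] += src[n - 1];
}

A microcoded add-to-memory has to run its load/add/store sequence in
a fixed order inside the micro-engine; written out as separate
instructions, the scheduler is free to interleave two iterations as
above.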
Classic CPU designs like the PDP-1 might be better called RISC.
Back then you matched the CPU word length to the data you were using.
40 bits made a lot of sense for real computing, even if you
had no RAM at the time, just drum.
IBM set the standard for 8-bit bytes, 16- and 32-bit words, and
64-bit floating point. Things get complex because you have to pack
data to fit the standard-size boxes. Everything is a trade-off.
Why? Because the IBM 7030 Stretch (64 bits) was a flop.
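A small C illustration of that packing trade-off (the field sizes
are invented for the example, and the exact padding is
implementation-defined):

#include <stdio.h>

/* Naturally aligned fields: the compiler pads to standard box sizes. */
struct padded {
    unsigned char  flag;   /* 1 byte, then typically 3 of padding */
    unsigned int   value;  /* 4-byte box, wants 4-byte alignment  */
    unsigned short count;  /* 2-byte box, then 2 of tail padding  */
};

/* The same data squeezed into one 32-bit box with bit-fields. */
struct packed {
    unsigned flag  : 1;
    unsigned value : 21;
    unsigned count : 10;
};

int main(void)
{
    /* Commonly 12 vs 4 bytes: spend logic unpacking to save memory,
       or spend memory for simple aligned access. */
    printf("padded: %zu bytes, packed: %zu bytes\n",
           sizeof(struct padded), sizeof(struct packed));
    return 0;
}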
Save memory: CISC.
Use memory: RISC.
Simple memory: microprocessors.
My argument is that processor development is always built around
what memory you have available at the time.
How many Z80s can you think of that used core memory?
I think only one 8080A system ever used core memory, one from
BYTE magazine.
Improvements in memory were often improvements in logic as well,
as far as CPU design goes.
If CPUs were designed for high-level languages, why are there
no stack-based architectures around, like the ones built for
Pascal's P-code? (In the 1970s, yes, but not today.)
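For anyone who never met one, a P-code-style machine does everything
through an operand stack instead of registers. A minimal sketch in C
(the opcodes and encoding here are invented for illustration, not
real UCSD P-code):

#include <stdio.h>

/* Hypothetical opcodes, loosely in the spirit of P-code. */
enum { PUSH, ADD, MUL, PRINT, HALT };

/* Evaluate (2 + 3) * 4 entirely on the operand stack. */
int main(void)
{
    int code[] = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT };
    int stack[64];
    int sp = 0;   /* stack pointer   */
    int pc = 0;   /* program counter */

    for (;;) {
        switch (code[pc++]) {
        case PUSH:  stack[sp++] = code[pc++];         break;
        case ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case MUL:   sp--; stack[sp - 1] *= stack[sp]; break;
        case PRINT: printf("%d\n", stack[--sp]);      break;
        case HALT:  return 0;
        }
    }
}

Dense code and trivial to compile for, which was the 1970s appeal;
the catch is that every operand takes a trip through the stack,
which is exactly the memory traffic register machines avoid.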
The Z80 may be gone, but the 8080 can still be emulated with
bit slices. Did anyone ever use them?
Ben.