On 10/24/2018 07:01 AM, Noel Chiappa via cctalk wrote:
> An observation about RISC: I've opined before that the CISC->RISC
> transition was driven, in part, by the changing balance of CPU speed
> versus memory speed: with slow memory and fast CPUs, it makes sense to
> get as much execution bang out of every fetch buck (so complex
> instructions); but when memory bandwidth goes up, one needs a fast CPU
> to use it all (so simple instructions).
Maybe I need to finish my coffee before posting, but here goes anyway....
I thought memory and CPU speed used to be somewhat comparable
historically, and that such is NOT the case now.
As such, if memory is now the slow side, Noel's reasoning would favor
complex instructions, and I feel like the industry has probably ended up
going the wrong way.
Am I failing to take into account that the memory fetch buck is now
mostly being transacted out of L1 / L2 cache (hopefully not main memory)?
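
As a rough way to see how much of the fetch buck the caches actually
absorb, here is a minimal pointer-chasing sketch in C (the array sizes,
iteration count, and POSIX clock_gettime timing are arbitrary choices of
mine, not anything from Noel's post). It times dependent loads from a
cache-sized working set against a DRAM-sized one; the gap between the
two numbers is the CPU-versus-memory gap in question.

/* Rough sketch, not a definitive benchmark: time dependent loads from a
 * small (cache-resident) working set vs. a large (DRAM-resident) one.
 * Sizes and iteration counts are arbitrary; absolute numbers vary by
 * machine, but the ratio between the two runs is what matters. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static volatile size_t sink;   /* keeps the chase loop from being optimized away */

static double ns_per_load(size_t n, size_t iters)
{
    size_t *next = malloc(n * sizeof *next);
    if (!next) { perror("malloc"); exit(1); }

    for (size_t i = 0; i < n; i++)
        next[i] = i;
    /* Sattolo shuffle: one big random cycle, so the prefetcher can't help. */
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t p = 0;
    for (size_t i = 0; i < iters; i++)
        p = next[p];               /* each load depends on the previous one */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    sink = p;
    free(next);
    return ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / iters;
}

int main(void)
{
    /* ~8 KB working set (fits in L1) vs. ~256 MB (falls out to DRAM). */
    printf("cache-sized set: %6.1f ns/load\n", ns_per_load(1u << 10, 10000000));
    printf("DRAM-sized set:  %6.1f ns/load\n", ns_per_load(1u << 25, 10000000));
    return 0;
}

Built with something like "cc -O2 chase.c", the small set typically comes
in at a nanosecond or two per load and the large one tens of times slower,
though the exact figures will vary from machine to machine.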
Will someone show me a clue-by-four (but not hit me in the face with
it)? Please and thank you.
--
Grant. . . .
unix || die