On 10/24/2018 9:47 AM, Grant Taylor via cctalk wrote:
On 10/24/2018 07:01 AM, Noel Chiappa via cctalk wrote:
An observation about RISC: I've opined before that the CISC->RISC transition
was driven, in part, by the changing balance of CPU speed versus memory speed:
with slow memory and fast CPUs, it makes sense to get as much execution bang
out of every fetch buck (so complex instructions); but when memory bandwidth
goes up, one needs a fast CPU to use it all (so simple instructions).
Maybe I need to finish my coffee before posting, but here goes anyway....
I thought memory and CPU speed used to be somewhat comparable
historically. And that such is NOT the case now.
Statements made here may or may not reflect having morning coffee.
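To put rough numbers on the fetch-bandwidth argument quoted above, here is a
little back-of-envelope sketch in C. The instruction byte counts are made up
for illustration, not measured from any real machine:

/* Made-up numbers to illustrate bytes fetched per unit of work.
 * Assume one memory-to-memory add of two 32-bit operands:
 *   CISC: one variable-length instruction, ~6 bytes (hypothetical)
 *   RISC: load / load / add / store, four fixed 4-byte instructions
 * Data traffic (two reads + one write of 4 bytes) is the same either way,
 * so the difference is in the instruction bytes fetched. */
#include <stdio.h>

int main(void)
{
    const int data_bytes      = 3 * 4;   /* two loads + one store */
    const int cisc_insn_bytes = 6;       /* hypothetical encoding */
    const int risc_insn_bytes = 4 * 4;   /* four 32-bit instructions */

    printf("CISC: %d insn + %d data = %d bytes per add\n",
           cisc_insn_bytes, data_bytes, cisc_insn_bytes + data_bytes);
    printf("RISC: %d insn + %d data = %d bytes per add\n",
           risc_insn_bytes, data_bytes, risc_insn_bytes + data_bytes);
    /* With slow memory the denser encoding wins on bytes fetched;
     * once memory and caches get fast, decode simplicity matters more. */
    return 0;
}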
Back then it was throwing floating point numbers around; now it is
pixels at high speed. Regardless of the data, most of the time
(assuming simple hardware) you spend more time calculating the
effective address of the data than fetching the data itself.
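As a small illustration of that point (the struct and field names below are
invented for the example), consider what one indexed access compiles into:

/* One indexed access implies a fair amount of address arithmetic
 * before the single load that actually fetches the data. */
#include <stddef.h>

struct rec {
    double pad[3];
    double f[8];
};

double fetch(struct rec *a, size_t i, size_t j)
{
    /* The compiler turns a[i].f[j] into roughly:
     *   addr = (char *)a + i * sizeof(struct rec)   -- multiply + add
     *              + offsetof(struct rec, f)        -- add a constant
     *              + j * sizeof(double);            -- shift + add
     * i.e. several ALU operations to form one effective address,
     * followed by one memory read. */
    return a[i].f[j];
}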
A RISC machine may have more room to cache stuff, but insider
knowledge of how data gets from main memory to the ALUs was
only visible up until a few years ago.
We have the NEW Intel 800086, 20% faster on benchmarks, using the
C+++ MOO-GNU compiler. (Fine print: older code may have a 200% loss
of speed in some applications; re-compile with the latest (never
released to the public) software, written in Chinese.)*
I have no idea what is in a modern home computer, but I suspect
it still follows the same design as the IBM PC: a single CPU
with segmented memory and a bit of DMA here and there.
Computer Science models are from the transistor era of computing
but don't reflect the internal speeds of today's CPU chips.
To me they still reflect the vacuum-tube model of computing. Time to re-think
again.
Ben.
* If it were real fine print, I'd need a lawyer to read it.