On 2 Dec 2010 at 10:18, Richard wrote:
> I don't recall cache being discussed at all when I was an undergrad
> (1982-1986), but it came up in computer architecture courses when I
> was a grad student. Since then, cache (or memory latency in general)
> has become the dominating factor in high performance systems.
Memory latency issues are almost as old as stored-program computers.
Just look at the instruction set of an IBM 650 and the mental effort
needed to time instruction execution to drum rotation.
Mainframes have had instruction caches since the 1960s (and probably
before that), and a good programmer learned to use them. (Didn't the
650 have an option of a small core memory that could be used to
execute loops?)
Even in 1971 we were talking about "bubbles in the pipe" that memory
accesses could cause and what to do about them.
Core latency in general was addressed by interleaving, which only
helps consecutive accesses, so any non-consecutive access pattern
could have detrimental effects.
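A toy sketch of why interleaving rewards consecutive accesses (a hypothetical low-order-interleaved memory with a made-up bank count, not any specific machine): the bank is selected by the low address bits, so stride-1 accesses rotate through the banks and can overlap, while a stride equal to the bank count hammers a single bank that is still busy with its cycle time.

```python
# Toy model of low-order interleaved core memory. NUM_BANKS and the
# access patterns are illustrative assumptions, not real hardware specs.
NUM_BANKS = 4

def bank(addr):
    """Low-order interleaving: bank index comes from the low address bits."""
    return addr % NUM_BANKS

def same_bank_conflicts(addresses):
    """Count back-to-back accesses that return to the bank just used.

    In an interleaved memory, such an access must wait out the bank's
    cycle time instead of overlapping with it, so this is a rough
    proxy for lost performance."""
    return sum(1 for a, b in zip(addresses, addresses[1:])
               if bank(a) == bank(b))

sequential = list(range(16))                # stride 1: rotates through banks
strided = list(range(0, 64, NUM_BANKS))     # stride == bank count: one bank

print(same_bank_conflicts(sequential))      # 0: every access overlaps
print(same_bank_conflicts(strided))         # 15: fully serialized
```

The same arithmetic explains the classic pitfall of walking down a column of a row-major array whose row length is a multiple of the bank count.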
Something you don't see anymore is the penalty incurred by a program
repeatedly hitting the same bank of memory. On modern machines, that
might be considered a plus, but not so on much of the older hardware.
--Chuck