On 13 Nov 2007 at 16:56, Roy J. Tellason wrote:
Well, my admittedly limited experience with stuff is that the "main"
processor isn't doing much of anything while it's waiting for I/O to
complete anyhow, or at least that's what my perception has been. Now
I'll be the first to admit that my perceptions are very much skewed
toward the 8-bit micro and crudely-done DOS-based end of things, so
maybe I just need to get a good hard look at how some other stuff is
done.
Well, why have a multi-gigahertz CPU if you're not going to do
computationally intensive things with it? A lot of graphics and
multimedia is very computationally intensive in today's world, even
with gigaflop GPUs.
In the CDC 6600, I should probably say that Seymour Cray rigged
things to give the *appearance* of 10 PPUs. Core back then had a
1-microsecond cycle time, and the shared processor logic used it
interleaved by a factor of 10, so an effective cycle time of 100 nsec
was possible. The "10" PPUs all shared a common ALU; each had its own
1-microsecond 4K x 12-bit memory, P-counter, and accumulator, and
each took a turn in the "barrel", so that each appeared to be an
independent CPU.
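Just to make the barrel idea concrete, here's a toy sketch in C of
ten logical PPUs sharing one set of execution logic. The structure
layout and the trivial "add from core" step are my own invention for
illustration; this is nothing like real PPU code or a faithful
simulator.

/* Toy model of the PPU "barrel": ten logical PPUs share one set of
   execution logic.  Each slot has its own P-counter, accumulator and
   4K x 12-bit memory; the shared logic services one slot per 100 nsec
   minor cycle, so each PPU sees the full 1-microsecond cycle of its
   own core.  Illustration only, not a real 6600 simulator.          */

#include <stdio.h>
#include <stdint.h>

#define NUM_PPU  10
#define PPU_MEM  4096                 /* 4K twelve-bit words per PPU */

typedef struct {
    uint16_t p;                       /* P-counter                   */
    uint32_t a;                       /* accumulator                 */
    uint16_t mem[PPU_MEM];            /* private core                */
} ppu_state;

int main(void)
{
    ppu_state barrel[NUM_PPU] = {{ 0 }};

    for (int i = 0; i < NUM_PPU; i++)     /* a pretend operand each  */
        barrel[i].mem[0] = (uint16_t)(i + 1);

    /* One major cycle = ten minor cycles: the shared logic visits
       each slot once per rotation of the barrel.                    */
    for (long minor = 0; minor < 50; minor++) {
        ppu_state *pp = &barrel[minor % NUM_PPU];
        pp->a += pp->mem[pp->p & (PPU_MEM - 1)];  /* "add from core" */
        pp->p++;                                  /* bump P-counter  */
    }

    for (int i = 0; i < NUM_PPU; i++)
        printf("PPU %d: P=%u A=%u\n", i,
               (unsigned)barrel[i].p, (unsigned)barrel[i].a);
    return 0;
}

The point is simply that one fast set of shared logic plus ten slow,
private memories looks, from the outside, like ten independent
processors.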
Access to central memory (60 bits wide) by the PPUs was obtained
through what was called the "read-write pyramid", where up to 5 CPU
words could be in various stages of assembly or disassembly (five
12-bit PPU words make up one 60-bit central word). It was very
slick.
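The width conversion itself is easy to picture: five 12-bit PPU words
pack into one 60-bit central word, and a central word peels apart
into five PPU words. A rough C sketch of just that packing; the
staging across minor cycles that the real pyramid did isn't modeled:

/* Sketch of the 5:1 width conversion behind the "read-write pyramid":
   five 12-bit PPU words packed into, or split out of, one 60-bit
   central-memory word (held here in a uint64_t).                    */

#include <stdio.h>
#include <stdint.h>

/* Pack five 12-bit words into a 60-bit value. */
static uint64_t assemble60(const uint16_t ppu_word[5])
{
    uint64_t cm_word = 0;
    for (int i = 0; i < 5; i++)
        cm_word = (cm_word << 12) | (ppu_word[i] & 0x0FFF);
    return cm_word;
}

/* Split a 60-bit central-memory word back into five 12-bit words. */
static void disassemble60(uint64_t cm_word, uint16_t ppu_word[5])
{
    for (int i = 4; i >= 0; i--) {
        ppu_word[i] = (uint16_t)(cm_word & 0x0FFF);
        cm_word >>= 12;
    }
}

int main(void)
{
    uint16_t in[5] = { 01234, 02345, 03456, 04567, 05670 };  /* octal */
    uint16_t out[5];
    uint64_t cm = assemble60(in);

    printf("central word: %020llo\n", (unsigned long long)cm);
    disassemble60(cm, out);
    for (int i = 0; i < 5; i++)
        printf("PPU word %d: %04o\n", i, (unsigned)out[i]);
    return 0;
}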
And yes, in its day, the 6600 was considered to be a computational
"monster" that used a lot of the tricks we use today to speed up CPU
execution. Slow functional units were segmented (early pipelining);
there was a read-ahead cache for instructions, so it was possible to
keep small loops entirely in cache; and an elegant scheduling method
(the scoreboard) was used to control instruction issue. And the
instruction set itself was very simple; some refer to it as very
RISC-like.
All this with core and discrete transistors, yet.
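For the curious, the flavor of that scoreboard scheduling boils down
to hazard bookkeeping at instruction issue. Here's a deliberately
stripped-down C sketch of the issue checks (functional unit busy,
pending write to the destination register); the real scoreboard also
tracked operand readiness and deferred result write-back on
conflicts, none of which is modeled here:

/* Stripped-down scoreboard-style issue logic: an instruction may
   issue only if its functional unit is free and no earlier, still
   pending instruction writes the same destination register.         */

#include <stdbool.h>
#include <stdio.h>

#define NUM_FU   10   /* the 6600 had ten functional units           */
#define NUM_REG  24   /* 8 X + 8 A + 8 B registers                   */

typedef struct {
    int fu;           /* functional unit this instruction needs      */
    int dest;         /* destination register number                 */
} instr;

static bool fu_busy[NUM_FU];        /* structural-hazard bookkeeping */
static bool reg_pending[NUM_REG];   /* a pending write exists (WAW)  */

/* Issue succeeds only when both hazard checks pass. */
static bool try_issue(const instr *in)
{
    if (fu_busy[in->fu] || reg_pending[in->dest])
        return false;               /* stall: hold the instruction   */
    fu_busy[in->fu] = true;
    reg_pending[in->dest] = true;
    return true;
}

/* Called when a functional unit finishes and writes its result. */
static void complete(const instr *in)
{
    fu_busy[in->fu] = false;
    reg_pending[in->dest] = false;
}

int main(void)
{
    instr add1 = { 0, 5 };          /* uses FU 0, writes register 5  */
    instr add2 = { 0, 6 };          /* also wants FU 0: must wait    */

    printf("add1 issues: %d\n", try_issue(&add1));   /* 1            */
    printf("add2 issues: %d\n", try_issue(&add2));   /* 0: FU busy   */
    complete(&add1);
    printf("add2 issues: %d\n", try_issue(&add2));   /* 1            */
    return 0;
}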
Cheers,
Chuck