On Tuesday 13 November 2007 13:30, Chuck Guzis wrote:
> On 13 Nov 2007 at 1:21, Roy J. Tellason wrote:
>> I really don't have a clue as to why you'd _want_ something like a
>> separate dedicated processor to handle I/O, for one example. Or mass
>> storage. Or whatever. And I really don't have that much of a handle
>> on how the architecture of those sorts of machines differs from the
>> micros I'm familiar with. Even that bit of time I got to spend with
>> the Heath H11 was very alien to the rest of what I'm familiar with.
> I can't imagine why anyone *wouldn't* want such a thing.
Well, my admittedly limited experience with stuff is that the "main"
processor isn't doing much of anything while it's waiting for I/O to complete
anyhow, or at least that's what my perception has been. Now I'll be the
first to admit that my perceptions are very much skewed toward the 8-bit
micro and crudely-done DOS-based end of things, so maybe I just need to get
a good hard look at how some other stuff is done.
I know for example that when I was looking at that H11 setup we got a hold of
all sorts of software, one of which was "MUBASIC", which would supposedly
support 8 users. It was not at all clear to me at that point in time how it
was supposed to do that, or if it was even possible for it to do that with
the hardware we had there at the time, but this was seriously different from
much else of what I've ever encountered.
> Consider one of the big iron systems of the 60's and 70's--the CDC 6600.
Which is exactly the sort of thing I have absolutely zero experience with. :-)
> A CPU with 10 peripheral processors. Within these processors, much of
> what we would call the operating system is resident.
Actually a distributed OS sounds pretty good to me...
> The file system, job control and general I/O is handled without the
> intervention of the CPU, which is left to do what it does best--
> handle user computations with a minimum of interruption.
This is part of where things get fuzzy for me. What I see a CPU typically
dealing with in the contexts that I'm familiar with is a lot of what you say
in the above that it's not having to deal with. So aside from
computationally-intensive tasks (I guess what the books used to
call "scientific" computing rather than "business" computing back when that
distinction still meant something), what else is there for it to do?
> The PPUs handle the operator display, time (task switching/time of
> day, etc.) and the basic I/O activities, including loading the
> necessary programs to do so. PPUs can read and write CPU memory, but
> the reverse is not true--the CPU cannot interfere with the activities
> of the PPUs other than to post a request by writing to prearranged
> locations in memory. It's very secure.
Now there are some particulars that I wasn't aware of. Is that shared memory
stuff how it was done, mostly?
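A loose sketch of that "prearranged locations" handshake in C, as I picture it: the CPU may only write a request word at an agreed-upon spot, and the peripheral processor polls it and writes status back. Everything here -- the struct layout, field names, and request codes -- is invented for illustration; the real machine obviously used PPU code scanning central memory, not C structs.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical mailbox at a prearranged address in central memory. */
enum { REQ_NONE = 0, REQ_READ = 1, REQ_DONE = 2 };

struct mailbox {
    uint32_t op;     /* request code, posted by the CPU              */
    uint32_t block;  /* which disk block to read (illustrative)      */
    uint32_t status; /* written back by the PPU when it finishes     */
};

/* CPU side: the only thing it can do is write the request. */
void cpu_post_read(struct mailbox *mb, uint32_t block)
{
    mb->block = block;
    mb->op = REQ_READ;   /* setting op last "publishes" the request  */
}

/* PPU side: poll the mailbox; service any pending request. */
void ppu_poll(struct mailbox *mb)
{
    if (mb->op == REQ_READ) {
        /* ...the PPU would perform the physical I/O here... */
        mb->status = REQ_DONE;
        mb->op = REQ_NONE;   /* mailbox is free for the next request */
    }
}
```

Note the CPU never touches the PPU itself -- it can only leave a note and wait, which matches the "very secure" point above.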
> In any processor with pipelining, instruction look-ahead and caching,
> branch prediction and all of those other little gimmicks that make
> execution faster, the idea of interposing interrupts that bollix up
> all of the above is pure insanity.
Yep. I don't see how you can make them be all that useful without some
serious tradeoffs, but then most of today's processors are horribly kludged
in terms of architecture as far as I'm concerned. At least that's the way
I've felt ever since I bumped into "segment registers" and similar nonsense
back when. :-)
> For example, suppose I wanted to read a disk file. After opening the
> file, I issue a read request to the I/O subsystem, which performs the
> necessary mapping of my logical request to a physical disk drive and
> the process is begun. As long as I can remove data from the read
> buffer quickly enough, the processor handling my disk I/O will
> continue to read data and place it into the buffer.
Ok...
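That "as long as I can remove data quickly enough" condition is plain producer/consumer flow control: the I/O processor deposits into a buffer while there's room, and stalls when the consumer falls behind. A minimal single-threaded C sketch of the idea (names and sizes invented; no claim this resembles actual channel code):

```c
#include <assert.h>
#include <stddef.h>

#define RING_SIZE 8   /* tiny, just to show the stall behavior */

struct ring {
    unsigned char data[RING_SIZE];
    size_t head, tail, count;
};

/* Producer side (the I/O processor): returns 0 when the buffer is
   full, i.e. the consumer hasn't kept up and the transfer must wait. */
int ring_put(struct ring *r, unsigned char b)
{
    if (r->count == RING_SIZE) return 0;
    r->data[r->head] = b;
    r->head = (r->head + 1) % RING_SIZE;
    r->count++;
    return 1;
}

/* Consumer side (the user job): returns 0 when nothing is pending. */
int ring_get(struct ring *r, unsigned char *b)
{
    if (r->count == 0) return 0;
    *b = r->data[r->tail];
    r->tail = (r->tail + 1) % RING_SIZE;
    r->count--;
    return 1;
}
```

As long as `ring_get` keeps pace with `ring_put`, the producer never sees a full buffer and the transfer runs continuously.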
> If I wanted to copy from a disk to a tape, I'd set up two requests--a
> read and a write and, as long as I could move data quickly enough,
> the operation had the potential of being continuous, with two
> processors attached to my job handling the I/O. The amount of CPU
> time consumed is minimal.
Makes sense to me...
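The way I read that overlapped copy, it's essentially double buffering: while one buffer is being written to tape, the other is being refilled from disk, and the roles swap each pass. Here's a toy C version with plain arrays standing in for the two devices -- all names are hypothetical and the "overlap" is only simulated by ordering, since there's one thread of control here:

```c
#include <assert.h>
#include <string.h>

#define BUF_SIZE 4   /* deliberately small chunk size */

/* Simulated "disk PPU" read: fill buf from position pos, return the
   number of bytes actually delivered (0 at end of data). */
size_t disk_read(const unsigned char *disk, size_t disk_len,
                 size_t pos, unsigned char *buf)
{
    size_t n = disk_len - pos < BUF_SIZE ? disk_len - pos : BUF_SIZE;
    memcpy(buf, disk + pos, n);
    return n;
}

void copy_disk_to_tape(const unsigned char *disk, size_t disk_len,
                       unsigned char *tape)
{
    unsigned char bufs[2][BUF_SIZE];
    size_t pos = 0, out = 0;
    int cur = 0;

    size_t n = disk_read(disk, disk_len, pos, bufs[cur]);
    while (n > 0) {
        pos += n;
        /* On the real thing these two steps would proceed at the same
           time on two processors: refill one buffer... */
        size_t next = disk_read(disk, disk_len, pos, bufs[1 - cur]);
        /* ...while the "tape PPU" drains the other. */
        memcpy(tape + out, bufs[cur], n);
        out += n;
        cur = 1 - cur;
        n = next;
    }
}
```

With two real processors doing the two halves of each pass, the copy can stream continuously, which is the "potential of being continuous" point above.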
> A PPU will probably talk in turn to another specialized processor
> that's attached to a specific I/O device that further simplifies the
> interaction. Why should a PPU do all of the work to monitor the on-
> or off-line states of a bank of disk drives when the processor in the
> controller for the drives can do it?
We have that to some extent now, with processing power in printers, disk
drives, and so forth.
> We see this in modern PC's with GPUs--a processor that's distinct
> from the CPU that does a better job of handling a particular task
> than the CPU. It just makes sense. It's a shame we don't see any
> modern PCs with a standardized peripheral processor structure,
> particularly given the cost of a microcontroller nowadays.
The way it's being done these days seems to me to be too driven by specific
applications, and specific things that the designers had in mind when they
came up with some of this stuff. That's fine and dandy for when you have say
a processor that's dedicated to a job like running a printer, or a disk
drive, but not so much for other things that could be a little more
generalized.
--
Member of the toughest, meanest, deadliest, most unrelenting -- and
ablest -- form of life in this section of space, a critter that can
be killed but can't be tamed. --Robert A. Heinlein, "The Puppet Masters"
-
Information is more dangerous than cannon to a society ruled by lies. --James
M Dakin