Chuck Guzis wrote:
On 8/6/2006 at 11:04 AM Dave McGuire wrote:
I love hacking with Z80s. I consider myself very lucky that much of
my work these days is done in the embedded systems world, where Z80s
(albeit at 50MHz with 24-bit address buses) are alive and well. It's
neat to see how it has made the transition from mainstream
general-purpose processor to common embedded processor. (and I'll bet
Zilog sure is happy about it!)
And yet, to my eye, the Z80 architecture is about as bad as the 8086.
Basically, a lot of instructions grafted onto an 8008 model.
There were some architectures that held real promise back in the early days
that never went anywhere; the GI CP1600, for example. A nice, reasonably
orthogonal set of 16-bit registers with a straightforward instruction set.
Shirley you jest? GI's devices were seriously crippled in terms
of performance. We used to give the GI rep blank stares when
he came peddling product -- "You don't REALLY think we can use
these things for the types of products we design, do you?"
But I'll be the first to admit that GI was ham-handed about it--10 bit
instruction width; poor I/O performance, etc. Most MPU instruction sets
were cobbled together to fit within the constraints of silicon technology.
Most mainframes of the time had rather elegant, straightforward instruction
sets.
And what are the relative *quantities* involved? As well as $$$?
They were designed for different purposes.
There's a point of view that microprocessors were a devolution in the field
of computer science, and I have to admit that it has some merit. Before
the 8008/8080, who was even fooling with single-user mono-tasking operating
systems as a serious effort? With mainframes we had graphics, database
management, multi-user, multi-tasking, advanced architectures and all
manner of work being done in basic computer science. Along comes the
microprocessor and the siren song of easy money causes some very bright
people to spend great chunks of their professional lives on MS-DOS, Windows
and other already-invented-for-mainframes stuff. Right now, I figure we're
somewhere in OS development on micros about where we were with a Spectra 70
running VMOS.
The problem *there* is that people were coerced into trying to make
a device intended for one market serve another.
Imagine the MPU had NOT come along when it did. Would folks
have "coerced" mainframes to fit inside pinball machines?
Or, under the hood in the automobile? Or, in the microwave
oven?
It would be a silly misapplication of that technology. You
don't *need* (nor WANT) "graphics, database management, multi-user"
in an appliance. OTOH, you *do* want it to fit in a few cubic
inches, cost tens of dollars, run on a few watts and not require
a DEC technician on call -- nor a support agreement -- in case
your microwave refuses to boil water for you today.
The early (late 70's) "production" KRM's were Nova based.
By the time you added the scanner to it, you had something
the size of a WASHING MACHINE! No, the customer couldn't
just "buy one and bring it $HOME/$WORK to use" -- someone
went *with* it to get it running. And, was often "invited"
to revisit it several times each year :>
Replace the Nova with an MPU (actually, a couple since the
speech was then moved to a DSP) and suddenly you've got a
box that you can *mail* to a user -- and he/she can mail
*back* (if there is a problem). Total cost of ownership
drops tens of thousands of dollars.
So, the $50K 1976 box is $3K in 2006 (dollars NOT adjusted
for inflation!).
In another thread, the discussion is centering around Microkernels (been
there, done that on a mainframe) and the need to keep I/O drivers as part
of the kernel. Why? Why should the CPU even have access to I/O ports and
interrupts? Why doesn't every device have its own I/O processor with
access to CPU memory? Or better yet, why not a pool of communicating I/O
processors that can be allocated to the tasks at hand, leaving the CPU to
do what it does best?
Been there, done that. You'll find it gets very expensive,
very quickly (even using "cheap MPUs"). Those "device controllers"
still need an OS... and, thus, ways to protect who accesses
what parts of the code within that processor -- so the same
issues exist; they've just been moved to the I/O processor
instead of the "main" processor.
Unless, of course, you want to design a custom piece of silicon
to control a special type of device -- but that limits what
types of devices you can *have*; besides disks, tapes, etc.
will you have audio devices? a "BT device" (to encapsulate
the BT stack)? a "servo motor" device? a "compression force"
device? etc.
And, what about devices that are, by their very nature,
"ethereal devices"? (e.g., on very small MPUs, I often build a
"FPU device")
Is silicon now so expensive that we can't advance to
an idea that's more than 40 years old?
No. But you need to apply different techniques to address
those problems.
You're ignoring the fact that MPUs have forced *other*
approaches to problems that had been poorly (sloppily?)
solved in the mainframe world.
E.g., in the 70's, a classic paper on grapheme-phoneme
conversion was written (Elovitz, et al. NRL report).
The algorithm implemented therein was written in SNOBOL.
Used over a megabyte of "core" (VM, actually). I'm
sure the authors had no idea of how fast their code was.
Nor did they even *care* (apparently) about designing
the algorithm efficiently -- even if those changes
were TRIVIAL (e.g., if a "rule" handles 80% of the
input cases, wouldn't you apply THAT rule, first?)
But, try to put that algorithm into an MPU and suddenly
you realize, "hey, I'm not going to throw 1MB at this
trivial problem!" Instead, you end up with about 6KB
of code -- including the dataset.
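
For flavor, here's the rough *shape* of that small implementation.
The handful of rules below are made-up placeholders, NOT the
Elovitz/NRL rule set, and the frequency ordering is hypothetical;
the point is the structure -- a compact table scanned greedily, with
the rules that (supposedly) cover the most input tried first:

/* Sketch of table-driven grapheme->phoneme conversion.  The rules are
 * invented stand-ins, not the NRL set; the shape is what matters:
 * a small ordered table -- kilobytes, not megabytes -- with the
 * most-used rules tried first so the average lookup stays short.
 */
#include <stdio.h>
#include <string.h>

typedef struct { const char *graph; const char *phone; } rule_t;

static const rule_t rules[] = {     /* most frequently matched first */
    { "th", "TH" }, { "ch", "CH" }, { "ee", "IY" },
    { "a",  "AE" }, { "e",  "EH" },
    { "t",  "T"  }, { "h",  "HH" }, { "c",  "K"  },
};

static void to_phonemes(const char *word, char *out, size_t outlen)
{
    const size_t n = sizeof rules / sizeof rules[0];

    out[0] = '\0';
    while (*word) {
        size_t i;
        for (i = 0; i < n; i++) {
            size_t len = strlen(rules[i].graph);
            if (strncmp(word, rules[i].graph, len) == 0) {
                strncat(out, rules[i].phone, outlen - strlen(out) - 1);
                strncat(out, " ", outlen - strlen(out) - 1);
                word += len;
                break;
            }
        }
        if (i == n)                 /* no rule matched: skip the letter */
            word++;
    }
}

int main(void)
{
    char buf[64];
    to_phonemes("teach", buf, sizeof buf);
    printf("teach -> %s\n", buf);   /* "T EH AE CH" with these toy rules */
    return 0;
}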
I'm always amazed when I see a gcc process grow to tens
of MB just because there's a big *array* in the code.
Sure, it's nice to just pretend you have infinite VM.
But, couldn't a different approach compile said code
on a 1MB *floppy* system?
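
It doesn't answer the question about the compiler itself, but one
programmer-side dodge -- at least for tables with some structure to
them -- is to *compute* the array at start-up instead of spelling
every element out for the compiler to chew on. A trivial sketch,
using the familiar CRC-32 table as a stand-in:

/* Build the 1KB CRC-32 lookup table at start-up instead of handing
 * the compiler a 256-entry initializer (or, scaled up, a megabyte of
 * literals).  The source stays a few lines; the bulk data never
 * passes through the compiler at all.
 */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

static uint32_t crc_table[256];

static void crc32_init(void)
{
    for (uint32_t i = 0; i < 256; i++) {
        uint32_t c = i;
        for (int k = 0; k < 8; k++)
            c = (c & 1) ? 0xEDB88320u ^ (c >> 1) : c >> 1;
        crc_table[i] = c;
    }
}

static uint32_t crc32(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t c = 0xFFFFFFFFu;
    while (len--)
        c = crc_table[(c ^ *p++) & 0xFFu] ^ (c >> 8);
    return c ^ 0xFFFFFFFFu;
}

int main(void)
{
    crc32_init();
    /* prints CBF43926, the standard CRC-32 check value */
    printf("%08lX\n", (unsigned long)crc32("123456789", 9));
    return 0;
}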
And, without the "consumerish" devices that MPUs
have made possible, how much *real* work on real-time
would be done? Would the solutions be "ivory tower"
approaches to the problem (e.g., "maintain a HUGE
table of all processes, sorted by priority and
earliest deadline, etc.")? Or, would they be more
efficient approaches that fit into the resource-strapped
devices that are typically used to implement same?
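
By way of contrast, here's roughly what the "resource-strapped"
flavor of earliest-deadline scheduling boils down to (a sketch in C;
the task set and sizes are invented): a fixed table of the handful
of tasks a small device will ever have, and a linear scan for the
earliest deadline -- at eight entries the scan costs less than
maintaining any grand sorted structure.

/* Earliest-deadline-first selection, sized for a small MPU:
 * a fixed table of tasks and a linear scan, not a huge dynamically
 * sorted table of processes.  Purely illustrative.
 */
#include <stdio.h>
#include <stdint.h>

#define MAX_TASKS 8         /* all the tasks a small device will ever have */

typedef struct {
    uint8_t  ready;         /* 1 if runnable */
    uint32_t deadline;      /* absolute deadline, in ticks */
    void   (*run)(void);    /* the task body */
} task_t;

static void sample_adc(void) { puts("sample the ADC"); }
static void update_ui(void)  { puts("update the display"); }

static task_t tasks[MAX_TASKS] = {
    { 1, 150, update_ui  },
    { 1, 100, sample_adc },
    /* remaining slots zero-initialized: not ready */
};

/* Pick the ready task with the earliest deadline.  O(MAX_TASKS) per
 * call, which at this scale beats the bookkeeping of a sorted table. */
static task_t *edf_pick(void)
{
    task_t *best = NULL;
    for (int i = 0; i < MAX_TASKS; i++)
        if (tasks[i].ready &&
            (best == NULL || tasks[i].deadline < best->deadline))
            best = &tasks[i];
    return best;
}

int main(void)
{
    task_t *t = edf_pick();
    if (t) {
        t->run();           /* runs sample_adc: deadline 100 < 150 */
        t->ready = 0;       /* done until its next release */
    }
    return 0;
}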
Forgive the rant, but other than bringing computers to the masses, do
microcomputers represent a significant forward movement in the state of
the art?
Sure! Would MP3 players have ever left the "lab" if there
wasn't a *cheap* way of making them? Would they have just
been an interesting intellectual exercise?
Would digital cameras have come along to exploit image
compression technologies? Camera *phones*??
The "problem" with the MPU was putting it in a "computer".