On 13 Nov 2003 18:12, Tom Jennings wrote:
On Thu, 2003-11-13 at 13:37, Steve Thatcher wrote:
> being an engineer that worked in assembly language on many micros
> before and after the 8086, the segmented architecture was not
> that hard to handle and actually had many side benefits such as
> code size and more efficient use of the bus bandwidth. The PDP-11
> may have been better overall, but there is no comparison in terms
> of price, availability, and being able to get the job done for
> the millions of PC users.
Cool, CPU flame wars! :-)
YESSSSS!
First, you state: "...may have been better overall, but there is no
comparison in terms of price, availability, and being able to get the
job done for the millions of PC users". Intel's ability to 'get the job
done' is real, but hardly due to the 8080 legacy they dragged around.
I didn't find a lot of 8080 legacy. Sure, the registers were named in a
way to make 8080 people feel comfortable (HL became BX, BC became CX,
DE became DX, A became AL), and things like DAA were added to allow
automatic source translation, but that's it. It never got in my way when
programming the 8086.
Clearly, DEC completely ignored the chip-computer world until it was
far too late. (I was the one who ported MSDOS to the DEC Rainbow 100A, I
could tell you some terrible DEC stories... :-(
:)
I too wrote assembly on more minis and chips than I recall, and from a
programmer's point of view, Intel sucked/s. Segmentation had hardware
advantages for Intel, for bus and peripheral component compatibility,
The segmentation had _nothing_ to do with the bus, which was a
straight 20/16-bit address bus (20-bit memory, 16-bit I/O) and a
16-bit data bus.
and that's about it. Don't you remember address normalization and
comparison woes?
That's more a fault of Microsoft than of Intel.
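To spell out the woes Tom mentions, here is a quick C sketch of how
real-mode addresses work (the helper names are mine, not from any real
compiler library). Since a physical address is segment*16 + offset,
many different segment:offset pairs alias the same byte, so a naive
far-pointer compare can fail unless the pointers are normalized first:

/* Minimal sketch of real-mode 8086 address arithmetic. */
#include <stdint.h>
#include <stdio.h>

/* Physical address = segment * 16 + offset (20 bits on the 8086). */
static uint32_t linear(uint16_t seg, uint16_t off)
{
    return ((uint32_t)seg << 4) + off;
}

/* "Normalize" a far pointer so the offset is always 0..15; after
 * that, equal addresses really do compare equal field by field. */
static void normalize(uint16_t *seg, uint16_t *off)
{
    uint32_t lin = linear(*seg, *off);
    *seg = (uint16_t)(lin >> 4);
    *off = (uint16_t)(lin & 0xFu);
}

int main(void)
{
    uint16_t s1 = 0x1000, o1 = 0x0010;   /* 1000:0010 */
    uint16_t s2 = 0x1001, o2 = 0x0000;   /* 1001:0000 */

    /* Different-looking pairs, same physical byte: both print 10010. */
    printf("%05lX %05lX\n",
           (unsigned long)linear(s1, o1), (unsigned long)linear(s2, o2));

    normalize(&s1, &o1);
    normalize(&s2, &o2);
    printf("%04X:%04X %04X:%04X\n", s1, o1, s2, o2);   /* now identical */
    return 0;
}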
Memory allocation schemes? Small/Medium/Large model compiler "options"?
Now, that is something to blame on the compiler developers.
True, it was initiated by Intel, but I can't blame the
CPU or the CPU designers for that. In fact, I never
understood what these 'models' are good for, since they
just define special cases within the only model the
CPU knows.
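For what it's worth, the way I read the 'models' is that they are
nothing but default pointer widths the compiler picks for you; a rough
C sketch (my own type names, not any compiler's near/far keywords):

/* My own rough take on what the memory models boil down to:
 *   tiny/small : code and data each fit in one 64K segment,
 *                so 16-bit offsets ("near" pointers) are enough
 *   medium     : far code pointers, near data pointers
 *   compact    : near code pointers, far data pointers
 *   large/huge : far (segment:offset) pointers for everything
 * The CPU itself only ever computes segment * 16 + offset, whatever
 * the compiler calls it. */
#include <stdint.h>
#include <stdio.h>

typedef uint16_t near_ptr;   /* offset only; segment (DS or CS) is implied */

typedef struct {
    uint16_t off;            /* offset word first, segment word after, */
    uint16_t seg;            /* as the DOS-era compilers stored far pointers */
} far_ptr;

int main(void)
{
    /* The "model" just decides which of these is the default. */
    printf("near: %u bytes, far: %u bytes\n",
           (unsigned)sizeof(near_ptr), (unsigned)sizeof(far_ptr));
    return 0;
}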
Different and incompatible subroutine calls and returns? Segment:offset
performance penalties? Setting up DMA controllers to get around the 64K
absolute boundaries? Hardware address space kludges? Yuck.
Now, this again is not a foul from Intel, but rather from IBM.
I'm still mad about what the PC did to the nice x86 design.
Especially the usage of reserved interrupts for BIOS and DOS :(
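The DMA pain Tom brings up is, by the way, a PC board-level thing, not
a CPU thing: the 8237 DMA controller only counts within one 64K page,
because the upper address bits sit in a separate page register that is
not incremented during a transfer. So every driver had to do a check
along these lines (the function name is mine):

#include <stdint.h>
#include <stdbool.h>

/* A single 8237 transfer cannot cross a 64K physical page: the address
 * counter is only 16 bits wide and the page register holding the upper
 * bits stays fixed for the whole transfer. Drivers had to check (and,
 * if needed, split or double-buffer) like this: */
static bool dma_buffer_ok(uint32_t phys_addr, uint32_t len)
{
    return len != 0 && len <= 0x10000u &&
           (phys_addr >> 16) == ((phys_addr + len - 1) >> 16);
}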
I don't see where 'bus bandwidth' is affected; and if there was any
effect, it was to put the complexity into the software that had to
manage address space calcs. And code size efficiencies were
marginal at best; this thread I admit is outside my scope, and the realm
of many papers, related to the RISC vs. complex order code arguments of
yore.
That's the other thing I never understood, this RISC vs. CISC debate:
depending on which argument one uses, the '86 becomes CISC or RISC.
Good code could be written on it, for sure, but it wasn't "inherently"
efficient.
I thought so.
All CPUs suck, but some are worse than others. The nicest-looking chip
computer assembly code (in my opinion) is the Moto 6809, though I never
got to use it. The worst, besides PICs, is the Signetics 8x300, or maybe
the RCA 1802 (ugh).
Ok, now we're getting there... Signetics always had a soft spot for
odd designs (although the 2650 was the first CPU I ever owned),
but the 1802 design, to me, is also one of the nicest around.
Maybe today it would be called a DSP or something like it. The whole
setup was extremely I/O oriented. One thing I always wanted to design
was a dedicated I/O board for the Apple using an 1802 as controller.
To me the most important thing when it comes to programming a CPU is
to understand and _accept_ its design. Reading the opcode tables to
understand the way of operation the designers had in mind is like
reading a book and trying to figure out the motives of the characters
and what the author wanted to say... And as with such BIG, HEAVY,
content-loaded books, it's often quite trivial. Then programming
is not forcing some way (algorithm) onto the CPU, but rather suiting
the algorithm so it fits what the chip is meant to do.
I really dislike it when people want each new CPU to be like the
xyz (insert your favorite CPU), and C is nothing but a way
to do exactly the same. The result is the quite uniform CPU designs
we have today. Everything looks like a bastardized PDP-11 (well,
except for the 8051 mixup). It's like no CPU designer has the guts
any more to come up with something interesting and optimized.
Ok, maybe I'm doing the engineers wrong, and it's the middle
management (*1) that, like in the music industry, fears anything
special and tries to be as risk-free as possible.
The last design which had new ideas (to me) was the C16x series
from Siemens, and maybe the ARM - but then, both had already been
laid out in the mid '80s.
It's really getting boooooooooring. Out of all the modern stuff,
only USB has caught my full attention as a first-class playground.
Regards
H.
(Going back to sleep mode)
--
VCF Europa 5.0 on May 1-2, 2004 in Munich
http://www.vcfe.org/