Dave Dunfield wrote:
I'm not promoting the Octal side (indeed I much
prefer HEX), however
Zilog didn't "opt" for anything - they based their design and instruction
set decoding on the Intel 8080, which was laid out in a manner which
made sense with "Octal". And Intel DIDN'T use xsddsdsx, they DID use
xxdddsss - which made perfect sense from an Octal standpoint (which
is why so many people promoted the use of Octal with it).
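(As an aside, the alignment being argued about is easy to see if you
print an 8080 opcode in octal: MOV r1,r2 is 01 DDD SSS, with the
register codes B=0 C=1 D=2 E=3 H=4 L=5 M=6 A=7. A minimal C sketch,
purely for illustration -- the variable name is mine, not anything
from the period tools:

  #include <stdio.h>

  int main(void)
  {
      unsigned char mov_d_h = 0x54;   /* MOV D,H = 01 010 100 */
      printf("hex %02X  octal %03o\n", mov_d_h, mov_d_h);
      /* prints "hex 54  octal 124": 1 = MOV group, 2 = D, 4 = H */
      return 0;
  }

The hex byte hides the fields; the three octal digits *are* the fields.)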
But the Z80 isn't an 8085, nor is the 8085 an 8080 (granted, the
last two are much more closely related than the first two).
But the Z80 and the 8085 are both based on the 8080 architecture
and instruction set - so much so that they will both run the vast
majority of 8080 code.
But that's a fallacy. You have to tweak the code in almost
all cases (especially if you are designing embedded systems
and not "desktop applications"). So, a smarter approach is
to handle things at the *source* level instead of the *object*.
Is anyone really suggesting that it's an
"accident" that the Z80 happens to run 8080 code ... or did
Zilog begin with the 8080 instruction set definition (hence my point
that they (Zilog) did not make the decision on the bit arrangements
of the opcodes)?
But a Z80 *won't* run 808[05] code. Nor will a 64180 run
Z80 code. (and "Rabbits" don't run anything! :> ) Zilog made
a very conscious decision to make a "different 808x". Everything
from the pinouts to the peripherals available to the bus timing
is different (where did RST 5.5 go? etc.). Nor did they adhere to
Intel's mnemonics (copyrighted?) -- though converting from
one to the other is almost trivial with even the macroassemblers
available back then. Tek used a still different set of
opcodes in their tools, etc.
And, there is
no reason why xx ddd sss is any *better* than
xs dsd sdx or sd xxd ssd for an instruction encoding. *We*
used (split) octal because our MTOS supported hot patching
and it was convenient to "hand assemble" code patches on the
fly to fix bugs, etc. (gdb wasn't around for an 8080 in ~1976)
Doesn't this suggest that xxdddsss actually was *better* - since
you took advantage of the alignment with octal notation to make it
easier to hand-assemble...
But that's the point; it *doesn't* help you hand assemble.
1/4 of the opcode space is devoted to 8 bit moves. Yet,
I find 8 bit moves seldom used (at least in any of the
code that I've written/maintained). Sure, I may want to:
LD D,H
LD E,L
to save a *copy* of r.HL before indexing off of it
ADD HL,BC
But, how often do you LD C,H or LD L,B etc?
And, tracking 16 bit load/stores (LXI's, PUSH's, etc.)
requires memorizing an encoding for *4* possible arguments.
And who can recall the clever encoding of all of the
conditionals?
I.e. you end up committing the opcodes that you *use*
to memory and dig up the oddballs from a cheat sheet
when you need them (e.g., PCHL/XTHL, etc.).
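(To make that concrete: on the 8080, LXI rp is 00 RP 0001 and PUSH rp
is 11 RP 0101, with RP = 0..3 meaning BC/DE/HL/SP (PSW instead of SP
for PUSH/POP), and the conditional jumps are 11 CCC 010 with
CCC = NZ,Z,NC,C,PO,PE,P,M. A rough C sketch of how those fields pack --
the helper names are mine, just for illustration:

  #include <stdio.h>

  static unsigned char lxi(int rp)   { return 0x01 | (rp << 4);  }  /* 00RP0001 */
  static unsigned char push(int rp)  { return 0xC5 | (rp << 4);  }  /* 11RP0101 */
  static unsigned char jcc(int ccc)  { return 0xC2 | (ccc << 3); }  /* 11CCC010 */

  int main(void)
  {
      printf("LXI H  = %02X\n", lxi(2));    /* 21 */
      printf("PUSH B = %02X\n", push(0));   /* C5 */
      printf("JNZ    = %02X\n", jcc(0));    /* C2 */
      return 0;
  }

Regular, sure -- but it's the field *values* (which RP is which, which
CCC is which) that you still have to memorize or look up.)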
I'd rather have seen some of the opcode space spent on
more useful instructions -- short load-immediates
(where the argument is encoded in the first byte of
the opcode), etc.
That's my whole point. The fact that the instruction set happens to
align well with Octal notation is the main reason that a lot of people
used it. It's interesting to note that almost all of the Intel docs are
in hex or binary notation - but MITS, Heath and several others
thought that Octal was a better fit.
As noted earlier, I happen to be from the "hex" camp ... but I don't
think it's fair to dismiss the octal guys as "nuts" ... the use of octal
on the 8080 did have some benefit, and there were a lot of people
who went that route - to ignore or discount this does not present an
accurate depiction of the time period.
When I worked on 8085's, we let the MDS-800 handle the assemblies.
Burn a set of EPROMs (2 hours!), plug into the target and hope
you've stubbed enough places in the code so you could figure out
where it was based on examining odd display patterns, etc.
Stare at listings, burn another set of EPROMs after lunch.
Two turns of the crank in an 8 hour shift. Did the choice of
opcode assignments increase productivity??
When I was doing Z80-based designs (in the "split octal" world),
a helluva lot of energy was expended to support the "octal"
encoding -- rewriting the Zilog assembler to generate listings
in octal (INCLUDING displaying addresses in split octal!),
building run-time "monitors" to examine and patch code images
during execution, writing the associated software to do so, etc.
I'm convinced the fact that you could get a ten-key
keypad and an inexpensive LCD to display 6 digit SPLIT octal
values had more of an impact on the octal decision than
anything about the opcode encoding -- despite the fact that
it made USING the tools more difficult (since you can only
display a 16 bit value -- *address* -- in 6 digits, you have
to multiplex the display to let the user see the *data* at
that address :< ).
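(For anyone who hasn't run into it, "split octal" means each byte of
the 16-bit value gets its own 3-digit octal group, which is where the
6 digits come from. A tiny C sketch of the conversion -- the function
name is mine, just for illustration:

  #include <stdio.h>

  /* show a 16-bit address as split octal: high byte . low byte */
  static void split_octal(unsigned addr)
  {
      printf("%04X -> %03o.%03o\n", addr & 0xFFFFu,
             (addr >> 8) & 0xFFu, addr & 0xFFu);
  }

  int main(void)
  {
      split_octal(0xC3A5);    /* prints "C3A5 -> 303.245" */
      return 0;
  }

So the same 16-bit address that takes 4 hex digits takes 6 octal ones.)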
I recall getting an EM-180 and quickly distancing myself
from the octal vs. hex debate... I'll use *symbols* instead
of dicking around with bit groupings.
Now why some people chose octal for other processors, which
didn't have an architectural slant toward 8, is more of a mystery
to me....
I think the fact that 0..7 fits in a decimal representation
says a lot about the "why". :-( Amazing to consider how
much "resources" got wasted on silly (in hindsight) decisions.
Sort of like PC (and other) BIOS decisions placing silly
restrictions on the size of a disk or where the boot code
can be located, etc.