But the Z80
and the 8085 are both based on the 8080 architecture
and instruction set - so much so that they will both run the vast
majority of 8080 code.
But that's a fallacy. You have to tweak the code in almost
all cases (especially if you are designing embedded systems
and not "desktop applications"). So, a smarter approach is
to handle things at the *source* level instead of the *object*.
Context: Mid 70's - several guys I knew (myself included)
had little boards with wire-wrapped 8080's, some switches and
a few bytes of RAM ... Didn't matter how "smart" you were, you
worked at the machine level - hand assembling and entering
opcodes.
That's when I first observed the octal/hex split - some guys chose
Octal for the obvious advantages in hand-coding; I went hex due
to my background...
Is anyone
really suggesting that it's an
"accident" that the Z80 happens to run 8080 code ... or did
Zilog begin with the 8080 instruction set definition (hence my point
that they (Zilog) did not make the decision on the bit arrangements
of the opcoodes).
But a Z80 *won't* run 808[05] code. Nor will a 64180 run
Z80 code. (and "Rabbits" don't run anything! :> ) Zilog made
a very conscious decision to make a "different 808x". Everything
from the pinouts, peripherals available, bus timing, etc.
(where did RST 5.5 go? etc.). Nor did they adhere to
Intel's mnemonics (copyrighted?) -- though converting from
one to the other is almost trivial with even the macroassemblers
available back then. Tek used a still different set of
mnemonics in their tools, etc.
Although it's possible to create software that will run on an 8080
(or an 8085) and not a Z80 (I used to do this to really annoy a
friend who had a Cromemco Z80 system when I still used the
Altair 8080 :-) ... By and large, the Z80 is a superset of the
8080 - There are some flag differences which prevent it from
being a fully proper superset; however, these affected very little
"real" code.
Macroassemblers? My first 8080 design had only "reset", "Deposit",
"next" and "run" switches ... if you made a mistake entering an
opcode
you had to start all over again from 0000 0000 0000 0000.
(or 00 000 000 00 000 000 if you are from the Octal camp)
PS: RST 5.5 "went" the same place that SIM and RIM "went" ...
it stayed in the primordial nothingness which existed before
creation (RST 5.5 is an 8085 extension - the Z80 was based
on the 8080, never the 8085 - so RST 5.5 never existed as
far as the Z80 was concerned).
And, there is no reason why xx ddd sss is any *better*
than
xs dsd sdx or sd xxd ssd for an instruction encoding. *We*
used (split) octal because our MTOS supported hot patching
and it was convenient to "hand assemble" code patches on the
fly to fix bugs, etc. (gdb wasn't around for an 8080 in ~1976)
Doesn't this suggest that xxdddsss actually was *better* - since
you took advantage of the alignment with octal notation to make it
easier to hand-assemble...
But that's the point; it *doesn't* help you hand assemble.
1/4 of the opcode space is devoted to 8 bit moves. Yet,
I find 8 bit moves seldom used (at least in any of the
code that I've written/maintained). Sure, I may want to:
LD D,H
LD E,L
to save a *copy* of r.HL before indexing off of it
ADD HL,BC
But, how often do you LD C,H or LD L,B etc?
Actually, you did have to MOV things fairly often on the
8080 (dedicated registers and all - don't forget that in spite
of Intel always documenting it separately, 'M' was one of the
Octal register representations).
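To make the triplet alignment concrete, here's a quick sketch (Python, using the register numbering from the Intel databook, including 'M' = 6) of the 01 ddd sss move encoding - this is an illustration, not any tool from the period:

```python
# 8080 register triplets per the Intel databook; 'M' (memory via HL) is 6.
REG = {"B": 0, "C": 1, "D": 2, "E": 3, "H": 4, "L": 5, "M": 6, "A": 7}

def mov_opcode(dst, src):
    """Encode MOV dst,src as 01 ddd sss."""
    return 0o100 | (REG[dst] << 3) | REG[src]

# MOV D,H = 01 010 100 = octal 124 (hex 54) - the triplets read right off.
print(f"{mov_opcode('D', 'H'):03o}")
```

Printed in octal, the destination and source registers are simply the last two digits; printed in hex, they straddle a nibble boundary.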
But more importantly - It's not just MOV ... IIRC, the following
instructions on the 8080 have an octal field encoded:
"MOV", "MVI", "ADD", "ADC", "SUB",
"SBB", "INR", "DCR",
"ANA", "XRA", "ORA", "CMP",
"JZ", "JNZ", "JC", "JNC", "JP",
"JM", "JPE", "JPO",
"CZ", "CNZ", "CC", "CNC", "CP",
"CM", "CPE", "CPO",
"RC", "RNZ", "RC", "RNC", "RP",
"RM", "RPE", "RPO",
"RST"
As to how often I use these instructions - how about some hard
data instead of conjecture - I wrote a program to analyze 8080
source, and launched it against two of my earliest code examples,
namely - a small 8080 Monitor and BASIC interpreter that I wrote
for the UNB computer club in the 70's:
In the output below:
Directives - are any non-opcode source lines, EQU, ORG, DB etc.
Octal opcodes - are opcodes from the above list (with an octal field in them)
Non-octal opcodes - are all other opcodes (which don't have an octal field)
Filename : MONITOR.ASM
Total lines : 645
Comment/blank : 100
Directives : 39
Octal opcodes : 306
Non-octal opcodes: 200
Looks like 2/3 of the instructions encoded in my monitor have at least one
octal field representation.
Filename : BASIC.ASM
Total lines : 1975
Comment/blank : 386
Directives : 143
Octal opcodes : 952
Non-octal opcodes: 494
Looks like more than 1/2 of the instructions encoded in my BASIC have
at least one octal field.
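For the curious, the gist of such an analyzer is only a few lines. This is a simplified sketch (Python, with an assumed directive list and naive label handling), not the original program:

```python
# Classify 8080 assembler source lines (sketch, not the original analyzer).
OCTAL_OPS = {
    "MOV", "MVI", "ADD", "ADC", "SUB", "SBB", "INR", "DCR",
    "ANA", "XRA", "ORA", "CMP",
    "JZ", "JNZ", "JC", "JNC", "JP", "JM", "JPE", "JPO",
    "CZ", "CNZ", "CC", "CNC", "CP", "CM", "CPE", "CPO",
    "RZ", "RNZ", "RC", "RNC", "RP", "RM", "RPE", "RPO",
    "RST",
}
DIRECTIVES = {"EQU", "ORG", "DB", "DW", "DS", "END"}  # assumed set

def classify(line):
    text = line.split(";")[0].strip()        # drop comments
    if not text:
        return "comment/blank"
    tokens = text.replace(":", " ").split()  # naive label handling
    for tok in tokens[:2]:                   # optional label, then mnemonic
        t = tok.upper()
        if t in DIRECTIVES:
            return "directive"
        if t in OCTAL_OPS:
            return "octal"
    return "non-octal"
```

Run over a source file, tallying the return values gives counts like those above.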
Note that I have only used the octal fields depicted in the Intel databook
(mainly the registers, the conditional coding and the RST vector) - in
practice (and this is going way back now) ... Octal makes sense for
other parts of the opcode as well - for example, all register arithmetic
opcodes are encoded as 10 aaa sss
Where aaa is a triplet encoding the arithmetic operation and sss the source register.
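A small sketch (Python) makes the point: walking the middle triplet through 0..7 with the source field held at 0 (register B) generates all eight register-arithmetic opcodes:

```python
# 8080 register arithmetic: 10 aaa sss (aaa = operation, sss = source register).
OPS = ["ADD", "ADC", "SUB", "SBB", "ANA", "XRA", "ORA", "CMP"]

for aaa, name in enumerate(OPS):
    opcode = 0o200 | (aaa << 3)  # sss = 0, i.e. register B
    print(f"{name} B = {opcode:03o} octal ({opcode:02X} hex)")
```

In octal the operation is just the middle digit (200, 210, 220 ... 270); in hex the same opcodes are 80, 88, 90, 98, A0, A8, B0, B8 - a much less obvious pattern.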
And, tracking 16 bit load/stores (LXI's,
PUSH's, etc.)
requires memorizing an encoding for *4* possible arguments.
Yes - these are included in the "Non-octal" category above.
And who can recall the clever encoding of all of the
conditionals?
Tough if you think in hex (the condition field would be split across two
nibbles) ... but if you think in Octal, it's not so tough ... all conditions
are represented in the second triplet (from the right):
000 = 0 = NZ
001 = 1 = Z
010 = 2 = NC
011 = 3 = C
100 = 4 = PO
101 = 5 = PE
110 = 6 = P
111 = 7 = M
All transfer instructions are encoded: 11 ccc xxx
Where xxx is:
000 = Rcondition
010 = Jcondition
100 = Ccondition
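The same triplet view turns decoding a conditional transfer into a pair of table lookups - a sketch (Python, again just for illustration) of pulling the ccc and xxx fields out of an opcode byte:

```python
# Conditional transfers: 11 ccc xxx. The condition is the middle octal digit.
CONDS = ["NZ", "Z", "NC", "C", "PO", "PE", "P", "M"]
KINDS = {0o0: "R", 0o2: "J", 0o4: "C"}  # return / jump / call

def decode_conditional(byte):
    ccc = (byte >> 3) & 0o7
    xxx = byte & 0o7
    return KINDS[xxx] + CONDS[ccc]

# C2 hex is 302 in split octal: 11 000 010 -> J + NZ
print(decode_conditional(0xC2))  # JNZ
```

Read the byte in octal (302, 330, 364, ...) and the mnemonic falls out digit by digit; read it in hex and you're back to memorizing the table.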
As noted
earlier, I happen to be from the "hex" camp ... but I don't
think it's fair to dismiss the octal guys as "nuts" ... the use of octal
on the 8080 did have some benefit, and there were a lot of people
who went that route - to ignore or discount this does not present an
accurate depiction of the time period.
When I worked on 8085's, we let the MDS-800 handle the assemblies.
Burn a set of EPROMs (2 hours!), plug into the target and hope
you've stubbed enough places in the code so you could figure out
where it was based on examining odd display patterns, etc.
Stare at listings, burn another set of EPROMs after lunch.
Two turns of the crank in an 8 hour shift. Did the choice of
opcode assignments increase productivity??
Probably not - but I seem to recall that even though the 8080 was
a rather expensive chip to get, it still didn't always come with an
MDS-800 - I don't recall any of the guys in our "homebrew computer
club" having an MDS-800 (or any production computer for that
matter).
Some of the guys claimed that thinking of the opcode in Octal made
it much easier for them ... All I am saying is that I can see some
validity to their claim.
When I was doing Z80-based designs (in the "split
octal" world),
a helluva lot of energy was expended to support the "octal"
encoding -- rewriting the Zilog assembler to generate listings
in octal (INCLUDING displaying addresses in split octal!),
building run-time "monitors" to examine and patch code images
during execution, writing the associated software to do so, etc.
This would be because you didn't work in Octal (split or otherwise).
The same could be said for any encoding scheme - If some of your
users had demanded to use base 13 this too would have presented a
challenge to you - but at least the "ultimate answer" would work out
correctly: (6x9=42 in base 13 :-)
I'm convinced the fact that you could get a
ten-key
keypad and an inexpensive LCD to display 6 digit SPLIT octal
values had more of an impact on the octal decision than
anything about the opcode encoding -- despite the fact that
it made USING the tools more difficult (since you can only
display a 16 bit value -- *address* -- in 6 digits, you have
to multiplex the display to let the user see the *data* at
that address :< ).
That was certainly part of it - as was the fact that 7447's could
display Octal quite nicely, but didn't work so well with Hex.
But I would also say that a higher percentage of single-board
8080 machines used Octal - because it does make sense
with the instruction set.
I recall getting an EM-180 and quickly distancing
myself
from the octal vs. hex debate... I'll use *symbols* instead
of dicking around with bit groupings.
Why even bother with that ... why not just use mouse clicks
and "drag and drop".
In the days of wirewrap systems, homebrew front panels
and pencil and pad assemblers, bit groupings mattered.
One question though --- since you use symbols and don't
"dick around" with bit groupings ... why is the use of Hex
over Octal so important to you? ... wouldn't one be just as
good as the other to the high level view (like some detail
buried way down in the mouse driver that you don't need
to care about when you click)?
Now why some
people chose octal for other processors, which
didn't have an architectural slant toward 8 is more of a mystery
to me....
I think the fact that 0..7 fits in a decimal representation
says a lot about the "why". :-( Amazing to consider how
much "resources" got wasted on silly (in hindsight) decisions.
Sort of like PC (and other) BIOS decisions placing silly
restrictions on the size of a disk or where the boot code
can be located, etc.
I never saw it as a huge waste of "resources" - In the early
days, some guys liked Octal, some guys liked hex - I worked
mainly in hex, but I didn't have a great deal of trouble going
back and forth - you were happy just to find someone who
knew the language - the particular dialect that he chose didn't
seem all that important at the time.
Clearly the debate can rage on forever - If you like hex, and
consider it the "only game in town", then thats OK by me. I
happen to see advantages to both viewpoints.
Regards,
Dave
--
dave06a (at) dunfield (dot) com
Dave Dunfield
Firmware development services & tools: www.dunfield.com
Collector of vintage computing equipment:
http://www.parse.com/~ddunfield/museum/index.html