Dave Dunfield wrote:
But the Z80 and the 8085 are both based on the 8080
architecture
and instruction set - so much so that they will both run the vast
majority of 8080 code.
But that's a fallacy. You have to tweak the code in
almost
all cases (especially if you are designing embedded systems
and not "desktop applications"). So, a smarter approach is
to handle things at the *source* level instead of the *object* level.
Context: Mid 70's - several guys I knew (myself included)
had little boards with wire-wrapped 8080's, some switches and
Sure! I've still got a few Augat panels with "big" DRAM arrays
built out of 16Kx1 DRAMs that I hand wrapped (too risky to
do on perfboard to get a good solid Vcc/GND).
a few bytes of RAM ... Didn't matter how
"smart" you were, you
worked at the machine level - hand assembling and entering
opcodes.
That's when I first observed the octal/hex split - some guys chose
octal for the obvious advantages in hand-coding; I went hex due
to my background...
Dunno. In my case, it was converting i4004 assembly language
to i8085 assembly language. The i4004 listings were all in hex
(as were the i8085's), so the idea of using anything other
than hex to represent data, etc., never came up.
OTOH, with the Nova, octal seemed "normal".
But "split octal" on the Z80 was an abomination.
Is anyone really suggesting that it's an
"accident" that the Z80 happens to run 8080 code ... or did
Zilog begin with the 8080 instruction set definition (hence my point
that they (Zilog) did not make the decision on the bit arrangements
of the opcodes)?
But a Z80 *won't* run 808[05] code. Nor will a 64180 run
Z80 code. (and "Rabbits" don't run anything! :> ) Zilog made
a very conscious decision to make a "different 808x". Everything
from the pinouts, peripherals available, bus timing, etc.
(where did RST 5.5 go? etc.). Nor did they adhere to
Intel's mnemonics (copyrighted?) -- though converting from
one to the other is almost trivial with even the macroassemblers
available back then. Tek used a still different set of
mnemonics in their tools, etc.
Although it's possible to create software that will run on an 8080
(or an 8085) and not a Z80 (I used to do this to really annoy a
friend who had a Cromemco Z80 system when I still used the
Altair 8080 :-) ... By and large, the Z80 is a superset of the
8080 - there are some flag differences which prevent it from
being a fully proper superset; however, these affect very little
"real" code.
How do you define "real" code? If you're on a boat at sea and
your navigation system suddenly decides that this opcode should
not behave the way it "does", you might get a bit annoyed
when you can't find any of your lobster pots, etc. I think code
that controls the rudder of a several ton vessel moving at 20
knots is just as "real" as the code that draws a tic tac toe
board on a glass tty... :-(
Macroassemblers? My first 8080 design had only
"reset", "Deposit",
"next" and "run" switches ... if you made a mistake entering an
opcode
you had to start all over again from 0000 0000 0000 0000.
(or 00 000 000 00 000 000 if you are from the Octal camp)
PS: RST 5.5 "went" the same place that SIM and RIM "went" ...
it stayed in the primordial nothingness which existed before
creation (RST 5.5 is an 8085 extension - the Z80 was based
on the 8080, never the 8085 - so RST 5.5 never existed as
far as the Z80 was concerned).
And, there is no reason why xx ddd sss is any *better*
than
xs dsd sdx or sd xxd ssd for an instruction encoding. *We*
used (split) octal because our MTOS supported hot patching
and it was convenient to "hand assemble" code patches on the
fly to fix bugs, etc. (gdb wasn't around for an 8080 in ~1976)
Doesn't
this suggest that xxdddsss actually was *better* - since
you took advantage of the alignment with octal notation to make it
easier to hand-assemble...
But that's the point; it *doesn't* help you
hand assemble.
1/4 of the opcode space is devoted to 8 bit moves. Yet,
I find 8 bit moves seldom used (at least in any of the
code that I've written/maintained). Sure, I may want to:
LD D,H
LD E,L
to save a *copy* of r.HL before indexing off of it
ADD HL,BC
But, how often do you LD C,H or LD L,B etc?
Actually, you did have to MOV things fairly often on the
8080 (dedicated registers and all - don't forget that in spite
of Intel always documenting it separately, 'M' was one of the
Octal register representations).
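The MOV encoding under discussion can be sketched in a few lines of Python (a decoding sketch, not production code; the register codes are the standard Intel ones, with 'M' -- memory addressed through HL -- as code 6):

```python
# Sketch of the 8080 MOV encoding (01 ddd sss), using the octal
# register codes from the Intel databook; 'M' (memory via HL) is code 6.
REGS = ["B", "C", "D", "E", "H", "L", "M", "A"]   # codes 0..7

def decode_mov(opcode):
    """Decode a MOV r,r opcode (01 ddd sss) into its mnemonic."""
    assert 0x40 <= opcode <= 0x7F and opcode != 0x76   # 0x76 is HLT
    dst = (opcode >> 3) & 7
    src = opcode & 7
    return f"MOV {REGS[dst]},{REGS[src]}"

# MOV D,H is 0x54 = octal 124 = 01 010 100 -> dst 2 (D), src 4 (H)
print(decode_mov(0o124))   # MOV D,H
```

Written in octal, the destination and source triplets fall out directly; written in hex, the destination straddles the nibble boundary.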
Sure, but that's just two instructions -- mov a to memory and
move memory to a.
I looked through two 8085 product listings (12KB and 16KB).
Aside from MVI's and MOV A/M,M/A, the only other MOV was
MOV A,B or MOV A,D (followed by ORA C and ORA E, respectively).
*Maybe* that's just a matter of personal style -- but the
code reflects 3 or 4 different authors, so obviously we all
shared similar thoughts on how to use the processor's resources.
But more importantly - It's not just MOV ... IIRC,
the following
instructions on the 8080 have an octal field encoded:
"MOV", "MVI", "ADD", "ADC", "SUB",
"SBB", "INR", "DCR",
"ANA", "XRA", "ORA", "CMP",
"JZ", "JNZ", "JC", "JNC", "JP",
"JM", "JPE", "JPO",
"CZ", "CNZ", "CC", "CNC", "CP",
"CM", "CPE", "CPO",
"RC", "RNZ", "RC", "RNC", "RP",
"RM", "RPE", "RPO",
"RST"
Sure, and LXI, PUSH, POP, INX have a "quad" field.
What's your point? How does memorizing the mappings of
registers to 3 bit fields help you remember the mappings
of condition codes to *that* 3 bit field?
Bottom line, you end up having to just memorize the opcodes
that you use frequently -- and remember WHERE on the quick
reference card each of the other opcodes will be found.
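For reference, the "quad" (register-pair) field works the same way as the register triplets, just two bits wide -- a small sketch, with the pair codes and opcode templates taken from the Intel databook (LXI = 00 rp 0001, INX = 00 rp 0011, PUSH = 11 rp 0101):

```python
# Sketch of the 2-bit register-pair ("quad") field:
# rp = 0 -> BC, 1 -> DE, 2 -> HL, 3 -> SP (PSW for PUSH/POP).
PAIRS = ["B", "D", "H", "SP"]

def lxi_opcode(pair):
    """Build an LXI opcode: 00 rp 0001."""
    return 0x01 | (PAIRS.index(pair) << 4)

print(hex(lxi_opcode("SP")))   # 0x31 -> LXI SP
```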
As to how often I use these instructions - how about
some hard
data instead of conjecture - I wrote a program to analyze 8080
source, and launched it against two of my earliest code examples,
namely - a small 8080 Monitor and BASIC interpreter that I wrote
for the UNB computer club in the 70's:
In the output below:
Directives - are any non-opcode source lines, EQU, ORG, DB etc.
Octal opcodes - are opcodes from the above list (with an octal field in them)
Non-octal opcodes - are all other opcodes (which don't have an octal field)
Filename : MONITOR.ASM
Total lines : 645
Comment/blank : 100
Directives : 39
Octal opcodes : 200
Non-octal opcodes: 306
Looks like 2/3 of the instructions encoded in my monitor have at least one
octal field representation.
Filename : BASIC.ASM
Total lines : 1975
Comment/blank : 386
Directives : 143
Octal opcodes : 494
Non-octal opcodes: 952
Looks like more than 1/2 of the instructions encoded in my BASIC have
at least one octal field.
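The analyzer described was presumably something along these lines -- a hypothetical re-creation, not the original program; the directive list and the label handling are assumptions:

```python
# Hypothetical sketch of the 8080 source analyzer described above.
OCTAL_OPS = {   # mnemonics whose encodings contain a 3-bit (octal) field
    "MOV", "MVI", "ADD", "ADC", "SUB", "SBB", "INR", "DCR",
    "ANA", "XRA", "ORA", "CMP", "RST",
    "JZ", "JNZ", "JC", "JNC", "JP", "JM", "JPE", "JPO",
    "CZ", "CNZ", "CC", "CNC", "CP", "CM", "CPE", "CPO",
    "RZ", "RNZ", "RC", "RNC", "RP", "RM", "RPE", "RPO",
}
DIRECTIVES = {"EQU", "ORG", "DB", "DW", "DS", "END"}   # assumed list

def classify(lines):
    stats = {"comment/blank": 0, "directive": 0, "octal": 0, "non-octal": 0}
    for line in lines:
        code = line.split(";")[0].strip()        # drop comments
        if not code:
            stats["comment/blank"] += 1
            continue
        toks = code.upper().split()
        mnem = toks[0].rstrip(":")
        # first token may be a label; if so, try the second token
        if mnem not in OCTAL_OPS and mnem not in DIRECTIVES and len(toks) > 1:
            if toks[1] in OCTAL_OPS or toks[1] in DIRECTIVES:
                mnem = toks[1]
        if mnem in DIRECTIVES:
            stats["directive"] += 1
        elif mnem in OCTAL_OPS:
            stats["octal"] += 1
        else:
            stats["non-octal"] += 1
    return stats
```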
But, again, that doesn't *mean* anything. Every JMP/CALL has an
octal field. But, do you REMEMBER them as "JMP ALWAYS" and
"CALL ALWAYS" and thus synthesize the opcodes from a 5 bit
template with an "ALWAYS" condition? Or, do you just remember
that C3 is JMP and CD is CALL?
Note that I have only used the octal fields depicted in
the Intel databook
(mainly the registers, the condition coding and the RST vector) - in
practice (and this is going way back now) ... Octal makes sense for
other parts of the opcode as well - for example, all arithmetic opcodes
are encoded as 10 aaa rrr
Where aaa is a triplet encoding the arithmetic operation and rrr the source register.
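That arithmetic-group encoding can be sketched the same way (a decoding sketch; note the operation triplet occupies the middle of the byte, the source register the low three bits):

```python
# Sketch of the 8080 arithmetic-group encoding (10 aaa rrr): the middle
# triplet selects the operation, the low triplet the source register.
OPS  = ["ADD", "ADC", "SUB", "SBB", "ANA", "XRA", "ORA", "CMP"]  # codes 0..7
REGS = ["B", "C", "D", "E", "H", "L", "M", "A"]

def decode_alu(opcode):
    assert 0x80 <= opcode <= 0xBF
    return f"{OPS[(opcode >> 3) & 7]} {REGS[opcode & 7]}"

# ORA C is 0xB1 = octal 261 = 10 110 001
print(decode_alu(0o261))   # ORA C
```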
And you remember *those* encodings, as well??
I probably used 20-25 opcodes in 90% of what I wrote.
Aside from initial system startup (LXI SP, etc.), I can
probably disassemble most of what I'd written just by
keeping those 20 opcodes fresh in my mind (and, when
disassembling, your memory keeps getting refreshed
as you see certain code fragments over and over again).
And, tracking
16 bit load/stores (LXI's, PUSH's, etc.)
requires memorizing an encoding for *4* possible arguments.
Yes - these are included in the "Non-octal" category above.
And who can recall the clever encoding of all of
the
conditionals?
Tough if you think in hex (the condition field would be split across two
nibbles) ... but if you think in octal, it's not so tough ... all conditions
are represented in the second triplet (from the right):
000 = 0 = NZ
001 = 1 = Z
010 = 2 = NC
011 = 3 = C
100 = 4 = PO
101 = 5 = PE
110 = 6 = P
111 = 7 = M
All transfer instructions are encoded: 11 ccc xxx
Where xxx is:
000 = R<condition> (conditional return)
010 = J<condition> (conditional jump)
100 = C<condition> (conditional call)
Yeah, and my quick reference card gives them all nice
HEX values that I can dig up just as easily as you can
build an opcode using all of these little tables.
>> As noted earlier, I happen to be from the
"hex" camp ... but I don't
>> think it's fair to dismiss the octal guys as "nuts" ... the use of
octal
Hmmm... *I* don't recall calling anyone "nuts". Rather, this started
with my observation:
When I was developing Z80-based products, an ongoing *battle*
was the use of hex vs. "split octal" (e.g., 0xFFFF -> 0377 0377).
The octal camp claimed the Z80 was an "octal machine" (oh, really?)
and, for "proof", showed how so many of the opcodes could be
committed to memory just by noting the source & destination
register "codes" and packing them into an octal representation:
xx ddd sss (of course, I wonder how well their argument would
stand up if Zilog had opted to encode the register fields
as: xs dds dsx?? :> )
Octal? Hex? Just give me a symbolic debugger and let *it*
keep track of these minutae...
on the 8080 did have some benefit, and there were a
lot of people
who went that route - to ignore or discount this does not present an
accurate depiction of the time period.
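For concreteness, the "split octal" notation mentioned above (0xFFFF -> 0377 0377) just renders each byte of a 16-bit value separately in octal -- a minimal sketch:

```python
# Sketch of "split octal": a 16-bit word shown as two 8-bit octal
# bytes (000-377 each) rather than one 16-bit octal number.
def split_octal(word):
    return f"{word >> 8:03o} {word & 0xFF:03o}"

print(split_octal(0xFFFF))   # 377 377
print(split_octal(0xC380))   # 303 200
```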
When I worked on 8085's, we let the
MDS-800 handle the assemblies.
Burn a set of EPROMs (2 hours!), plug into the target and hope
you've stubbed enough places in the code so you could figure out
where it was based on examining odd display patterns, etc.
Stare at listings, burn another set of EPROMs after lunch.
Two turns of the crank in an 8 hour shift. Did the choice of
opcode assignments increase productivity??
Probably not - but I seem to recall that even though the 8080 was
a rather expensive chip to get, it still didn't always come with an
MDS-800 - I don't recall any of the guys in our "homebrew computer
club" having an MDS-800 (or any production computer for that
matter).
Sure. But, the rules applying to hobbyists are obviously
different than those applying to corporations trying to bring
products to market. You wouldn't suggest we NOT purchase
the MDS and, instead, get a bunch of SDK's wired to our
target hardware?
Some of the guys claimed that thinking of the opcode
in Octal made
it much easier for them ... All I am saying is that I can see some
validity to their claim.
And the last P in my comment (above) was:
Octal? Hex? Just give me a symbolic debugger and let *it*
keep track of these minutae...
I don't imagine those hobbyists *prefer* using hex/decimal
keypads and 7 segment displays to write their code?
When I was
doing Z80-based designs (in the "split octal" world),
a helluva lot of energy was expended to support the "octal"
encoding -- rewriting the Zilog assembler to generate listings
in octal (INCLUDING displaying addresses in split octal!),
building run-time "monitors" to examine and patch code images
during execution, writing the associated software to do so, etc.
This would be because you didn't work in octal (split or otherwise).
The same could be said for any encoding scheme - if some of your
users had demanded to use base 13, this too would have presented a
challenge to you - but at least the "ultimate answer" would work out
correctly: (6x9=42 in base 13 :-)
You're missing the point: it was a waste of time to invest
all that energy in half-*ssed tools instead of moving to
more "current" technology. If your goal is to bring a product
to market (NOT to deal with hobbyists), you devote your
resources to things that can measurably improve your
productivity. Throwing resources (software and hardware
development time and money) on tools that don't make a
dramatic increase in your productivity is just foolish.
Like writing code in assembler when you could just as readily
use a HLL.
I'm
convinced the fact that you could get a ten-key
keypad and an inexpensive LCD to display 6 digit SPLIT octal
values had more of an impact on the octal decision than
anything about the opcode encoding -- despite the fact that
it made USING the tools more difficult (since you can only
display a 16 bit value -- *address* -- in 6 digits, you have
to multiplex the display to let the user see the *data* at
that address :< ).
That was certainly part of it - as was the fact that 7447's could
display Octal quite nicely, but didn't work so well with Hex.
But I would also say that a higher percentage of single-board
8080 machines used Octal - because it does make sense
with the instruction set.
But, you're still dealing with the hobbyist world. Every product
*we* shipped was a "single board 808x/4004 machine"... yet, they
didn't "use octal" (and we shipped thousands and thousands
of machines).
If you're a hobbyist, your time is (often) worth nothing.
I've watched people disassemble video arcade pieces "by hand"
and reverse engineer all of the copy protection hacks in the
code.
Um, *why*?
Because they are curious and have decided that they can
AFFORD the time to engage in this activity. But a *business*
would never waste their time on this -- unless there was some
key IP that they were after, etc. (there are firms that expend
a great deal of resources on these sorts of activities!)
I recall
getting an EM-180 and quickly distancing myself
from the octal vs. hex debate... I'll use *symbols* instead
of dicking around with bit groupings.
Why even bother with that ... why not just use mouse clicks
and "drag and drop".
Mid 70's. No GUIs. No mice.
In the days of wirewrap systems, homebrew front
panels
and pencil and pad assemblers, bit groupings mattered.
One question though --- since you use symbols and don't
"dick around" with bit groupings ... why is the use of hex
over octal so important to you? ... wouldn't one be just as
good as the other to the high level view (like some detail
buried way down in the mouse driver that you don't need
to care about when you click)?
Hex vs. Octal is NOT important to me. Please reread my
original comment:
Octal? Hex? Just give me a symbolic debugger and let *it*
keep track of these minutae...
Now why some people chose octal for other processors,
which
didn't have an architectural slant toward 8 is more of a mystery
to me....
I think the fact that 0..7 fits in a decimal representation
says a lot about the "why". :-( Amazing to consider how
much "resources" got wasted on silly (in hindsight) decisions.
Sort of like PC (and other) BIOS decisions placing silly
restrictions on the size of a disk or where the boot code
can be located, etc.
I never saw it as a huge waste of "resources" - In the early
You didn't rewrite ("re-bug") WORKING development tools
in a production environment to add these "features".
If you've got an assembler, linkage editor, etc. that all
WORK but produce HEX listings, why rewrite them to spit out
listings in split octal? Including addresses (of opcodes,
arguments, link maps, etc.) Are you going to gain that much
in terms of productivity to offset the time spent doing this?
And the bugs that get introduced in the process?
Wouldn't it be better to just live with the hex listings
and purchase a hex keypad for your "run-time monitor"?
After all, you're only going to build a handful of those...
what's it going to cost you vs. a decimal/octal keypad -- an
extra $10??
Spend the resources that you "wasted" hacking the toolchain
to purchase another *real* development system. Or, a
symbolic debugger. Etc. Don't sell your adherence to
The Old Ways as a virtue -- concentrate on getting ahead
of the curve and your competition!
days, some guys liked Octal, some guys liked hex - I
worked
mainly in hex, but I didn't have a great deal of trouble going
back and forth - you were happy just to find someone who
knew the language - the particular dialect that he chose didn't
seem all that important at the time.
Clearly the debate can rage on forever - if you like hex, and
consider it the "only game in town", then that's OK by me. I
happen to see advantages to both viewpoints.