Microcode, which is a no-go for modern designs

Guy Sotomayor Jr ggs at shiresoft.com
Wed Jan 2 12:44:59 CST 2019



> On Jan 2, 2019, at 10:22 AM, Chuck Guzis via cctalk <cctalk at classiccmp.org> wrote:
> 
> On 1/2/19 8:02 AM, Jon Elson via cctalk wrote:
> 
>> Random logic instruction decode was a REAL issue in about 1960 - 1965,
>> when computers were built with discrete transistors.  The IBM 7090, for
>> instance, had 55,000 transistors on 11,000 circuit boards.  I don't know
>> how much of that was instruction decode, but I'll guess that a fair bit
>> was.  The IBM 360s benefited from microcode, allowing them to have a
>> much more complex and orthogonal instruction set with less logic.
>> 
>> But, once ICs were available, the control logic was less of a problem.
>> Microcode still made sense, though, as memory was so slow that performance
>> was dictated by memory cycle time, and the microcode did not slow the
>> system down.  Once fast cache became standard, eliminating
>> performance bottlenecks became important.  And, once we went from lots
>> of SSI chips to implement a CPU to one big chip, it was possible to
>> implement the control logic within the CPU chip efficiently.
> 
> I don't know--"microcode" in today's world is a very slippery term.   If
> you're talking about vertical microcode, then I'm inclined to agree with
> you.  But even ARM, which is held up as the golden example of
> microcode-less CPU design, is written in an HDL that is then compiled
> into a hardware design, at least for instruction decoding.  So ARM is
> a programmed implementation.  I suspect that if x86 microcode were
> written out in HDL and compiled, Intel could make the same claim.
> 
> I think of it as being akin to "interpreted" vs. "compiled"
> languages--the boundary can be rather fuzzy (e.g. "tokenizing",
> "p-code", "incremental compilation"... etc.)
> 
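
The interpreted-vs-compiled analogy is a good one: a vertical
micro-code engine really is just an interpreter loop running over the
macro-instruction stream.  A toy sketch in C (the three-instruction
“ISA” below is entirely made up):

#include <stdint.h>
#include <stdio.h>

/* A made-up three-instruction "macro" ISA, executed by an interpreter
   loop -- which is all a vertical-microcode engine really is. */
enum { OP_LOAD_IMM, OP_ADD, OP_HALT };

struct insn { uint8_t op, dst, src; int32_t imm; };

static void run(const struct insn *prog)
{
    int32_t reg[4] = {0};
    for (const struct insn *ip = prog; ; ip++) {
        switch (ip->op) {              /* the "micro-sequencer" dispatch */
        case OP_LOAD_IMM: reg[ip->dst] = ip->imm;        break;
        case OP_ADD:      reg[ip->dst] += reg[ip->src];  break;
        case OP_HALT:     printf("r0 = %d\n", reg[0]);   return;
        }
    }
}

int main(void)
{
    struct insn prog[] = {
        {OP_LOAD_IMM, 0, 0, 40},
        {OP_LOAD_IMM, 1, 0, 2},
        {OP_ADD,      0, 1, 0},
        {OP_HALT,     0, 0, 0},
    };
    run(prog);   /* prints: r0 = 42 */
    return 0;
}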

Remember that ARM licenses its ISA as well as implementations.
Some ARM licensees do their own implementations, and those *are*
microcoded.

There are a number of reasons for doing micro-code, and a number
of architectures use it, especially if the micro-code can be “patched”
(which AFAIK they all can now) to allow for fixing “bugs” once the
chip has been released.  If the bug is severe enough to force a recall
(remember the FDIV bug in the early Pentiums?), then the overhead
of having patchable micro-code will pay for itself many times over.
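
As a rough sketch of how patching can work (a toy model in C, not
any particular vendor’s mechanism): the micro-sequencer fetches from
a ROM, but a small patch RAM, loaded by firmware at boot, overrides
matching entries:

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define UCODE_SIZE  256
#define MAX_PATCHES 8

typedef uint64_t uword_t;               /* opaque micro-word */

static uword_t ucode_rom[UCODE_SIZE];   /* fixed at manufacture */

struct patch { uint16_t addr; uword_t word; bool valid; };
static struct patch patch_ram[MAX_PATCHES];  /* loaded at boot */

/* Fetch a micro-word: a valid patch RAM entry wins over the ROM. */
static uword_t ucode_fetch(uint16_t addr)
{
    for (int i = 0; i < MAX_PATCHES; i++)
        if (patch_ram[i].valid && patch_ram[i].addr == addr)
            return patch_ram[i].word;
    return ucode_rom[addr];
}

int main(void)
{
    ucode_rom[0x42] = 0xBAD;                           /* "buggy" ROM entry */
    patch_ram[0] = (struct patch){0x42, 0x600D, true}; /* field fix */
    printf("uword at 0x42: 0x%llx\n",
           (unsigned long long)ucode_fetch(0x42));     /* prints 0x600d */
    return 0;
}

The real mechanisms (match registers, signed patch blobs, etc.) are
more involved, but the economics are the same: a few kilobits of RAM
versus a recall.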

It is also important to note that today’s CPUs are not just a bare
CPU implementing an ISA.  They are embedded in an SoC with
potentially many other micro-controllers/CPUs that are not visible
to the programmer, and those are all “micro-coded” and control
various aspects of the SoC.  The last SoC that I worked on had
(in addition to the 8 ARM application CPUs, which are micro-coded
BTW) over 12 other micro-controllers (mostly ARM R5s) and
4 VLIW DSPs (not to mention several megabytes of SRAM that
is outside of the various caches).  After all, you have to do something
with 7 *billion* transistors.  ;-)

Also, recall that there are different forms of micro-code: horizontal
and vertical.  I think that IBM (in the S/360, S/370, S/390, z/Series)
uses the term micro-code for horizontal micro-code and millicode
for vertical micro-code (on the later machines, millicode is
essentially vertical micro-code written in a subset of the machine’s
own ISA).
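
Very loosely, and with all field names invented for illustration: a
horizontal micro-word dedicates a bit to each control line, so it is
wide but needs almost no decoding, while a vertical micro-word packs
encoded fields that have to be decoded back into those control lines
on every cycle:

#include <stdio.h>

/* Horizontal micro-word (sketch): one bit per control line -- wide,
   but the bits drive the hardware almost directly. */
struct h_uword {
    unsigned alu_add   : 1;
    unsigned alu_sub   : 1;
    unsigned reg_write : 1;
    unsigned mem_read  : 1;
    unsigned mem_write : 1;
    /* ...a real machine would have dozens more control lines... */
};

/* Vertical micro-word (sketch): narrow encoded fields that must be
   decoded back into the control lines above. */
struct v_uword {
    unsigned alu_op : 2;   /* 0=nop, 1=add, 2=sub */
    unsigned dst    : 3;   /* register number */
    unsigned mem_op : 2;   /* 0=none, 1=read, 2=write */
};

/* The decode step that vertical micro-code pays for on every cycle. */
static struct h_uword decode(struct v_uword v)
{
    struct h_uword h = {0};
    h.alu_add   = (v.alu_op == 1);
    h.alu_sub   = (v.alu_op == 2);
    h.reg_write = (v.alu_op != 0);
    h.mem_read  = (v.mem_op == 1);
    h.mem_write = (v.mem_op == 2);
    return h;
}

int main(void)
{
    struct v_uword v = {1, 3, 0};        /* "add into r3" */
    struct h_uword h = decode(v);
    printf("alu_add=%u reg_write=%u\n",
           (unsigned)h.alu_add, (unsigned)h.reg_write);
    return 0;
}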

TTFN - Guy


