Microcode, which is a no-go for modern designs
Chuck Guzis
cclist at sydex.com
Wed Jan 2 12:22:26 CST 2019
On 1/2/19 8:02 AM, Jon Elson via cctalk wrote:
> Random logic instruction decode was a REAL issue in about 1960 - 1965,
> when computers were built with discrete transistors. The IBM 7090, for
> instance, had 55,000 transistors on 11,000 circuit boards. I don't know
> how much of that was instruction decode, but I'll guess that a fair bit
> was. The IBM 360s benefited from microcode, allowing them to have a
> much more complex and orthogonal instruction set with less logic.
>
> But once ICs were available, the control logic was less of a problem.
> Microcode still made sense, though, because memory was so slow that
> performance was dictated by memory cycle time, and the microcode did
> not slow the system down. Once fast cache became standard, eliminating
> performance bottlenecks became important. And once we went from lots
> of SSI chips implementing a CPU to one big chip, it became possible to
> implement the control logic within the CPU chip efficiently.
I don't know--"microcode" in today's world is a very slippery term. If
you're talking about vertical microcode, then I'm inclined to agree with
you. But even ARM, which is held up as the golden example of
microcode-less CPU design, has its instruction decoding written in an
HDL and then compiled into a hardware design. So ARM, too, is a
programmed implementation. I suspect that if x86 microcode were
written out in HDL and compiled, Intel could make the same claim.
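To make that concrete, here is a toy sketch in C (the two-bit opcodes
and the control signals are invented for the example; this is not
ARM's or Intel's actual decode). Both decoders are "programs" in some
sense; the difference is that one would become combinational logic and
the other a control ROM:

#include <stdint.h>
#include <stdio.h>

struct ctrl {            /* control signals driven by the decoder */
    uint8_t alu_op;      /* which ALU operation to perform        */
    uint8_t reg_write;   /* write the result back to a register?  */
    uint8_t mem_read;    /* drive a memory read cycle?            */
};

/* "Hardwired" style: a pure function of the opcode bits.  This is
   roughly the shape of an HDL decode block before synthesis turns it
   into gates. */
static struct ctrl decode_hardwired(uint8_t opcode)
{
    switch (opcode & 0x3) {
    case 0x0: return (struct ctrl){ .alu_op = 1, .reg_write = 1 };   /* ADD  */
    case 0x1: return (struct ctrl){ .alu_op = 2, .reg_write = 1 };   /* SUB  */
    case 0x2: return (struct ctrl){ .mem_read = 1, .reg_write = 1 }; /* LOAD */
    default:  return (struct ctrl){ 0 };                             /* NOP  */
    }
}

/* "Microcoded" style: the same mapping, stored as data in a control
   ROM and looked up at run time. */
static const struct ctrl ucode_rom[4] = {
    { .alu_op = 1, .reg_write = 1 },   /* ADD  */
    { .alu_op = 2, .reg_write = 1 },   /* SUB  */
    { .mem_read = 1, .reg_write = 1 }, /* LOAD */
    { 0 },                             /* NOP  */
};

static struct ctrl decode_microcoded(uint8_t opcode)
{
    return ucode_rom[opcode & 0x3];
}

int main(void)
{
    /* sanity check: both decoders drive the same control signals */
    for (unsigned op = 0; op < 4; op++) {
        struct ctrl h = decode_hardwired((uint8_t)op);
        struct ctrl m = decode_microcoded((uint8_t)op);
        printf("opcode %u: decoders agree: %s\n", op,
               (h.alu_op == m.alu_op && h.reg_write == m.reg_write &&
                h.mem_read == m.mem_read) ? "yes" : "no");
    }
    return 0;
}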
I think of it as akin to "interpreted" vs. "compiled" languages--the
boundary can be rather fuzzy ("tokenizing", "p-code", "incremental
compilation", and so on).
--Chuck