It may be that compilable languages are defined for the purpose of
providing for microcode, but that would mean that the sequence of
microinstructions is generally not predictable from the source code of the
program thus translated.
Are you trying to claim that a microcode compiler is non-deterministic?
This seems like a dubious proposition, given that there are generally no
calls to a PRNG.
The only deliberately non-deterministic development tool I've ever used was
a Xilinx FPGA fitter. Somehow it seemed reminiscent of the bogosort
algorithm.
Maximisation of processor throughput and minimisation of microinstruction
count are at least half the purpose of microprogramming.
Sure. And the microcode compilers I've written and used are much better
at optimizing horizontal microcode than I have the time or patience to do
by hand.
For such optimisation to be effected, one must necessarily write directly
in microcode, either as raw bit and byte streams or coded as in an
assembly language.
No, it doesn't. Microcode almost always has a lot of data dependencies,
which constrain the legal schedules enough that a compiler can often do as
well as a human at optimizing it.
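To make that concrete, here is a rough C sketch of the packing test such a
compiler applies; the micro_op type and its fields are invented for
illustration, not taken from any particular machine.

    #include <stdbool.h>

    typedef struct {
        unsigned reads;   /* bitmask of registers read                */
        unsigned writes;  /* bitmask of registers written             */
        unsigned fields;  /* bitmask of microword control fields used */
    } micro_op;

    /* Two micro-operations may share one horizontal microword only if
     * they drive disjoint control fields and neither depends on the
     * other's result. */
    static bool can_pack(const micro_op *a, const micro_op *b)
    {
        if (a->fields & b->fields)
            return false;   /* control-field conflict */
        if ((a->writes & b->reads) || (b->writes & a->reads))
            return false;   /* read-after-write dependency */
        if (a->writes & b->writes)
            return false;   /* write-after-write conflict */
        return true;
    }

When the dependencies are dense, can_pack rejects most pairings, the space
of legal schedules is small, and a compiler can search it far more
systematically than a human will.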
In any case, the use of a language translator always results in a
reduction of processor throughput.
Compilers only get inefficient when:
1) The source language is semantically far-removed from the object
   language, or
2) There are sufficiently few data dependencies that there is a wide
   range of possible instruction scheduling options.
Compilers are only starting to get smart about interprocedural
optimization.
For a microcode compiler, neither of these conditions is met. The source
language is specifically created to be conceptually and semantically similar
to the operations available in the hardware, but with a lot of the detail
taken care of automatically (by default, though the defaults can be
overridden).
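To give a flavour of the detail involved, here is a hypothetical 32-bit
horizontal microword written as a C bitfield struct; the fields are invented
for illustration, and real microwords are usually much wider, with one field
per control point:

    /* Hypothetical horizontal microword layout (invented; purely
     * illustrative).  The source language lets the programmer say what
     * an operation does; the compiler fills in fields like these. */
    typedef struct {
        unsigned alu_op   : 4;   /* ALU function select           */
        unsigned a_src    : 3;   /* A-bus source select           */
        unsigned b_src    : 3;   /* B-bus source select           */
        unsigned dest     : 3;   /* result destination select     */
        unsigned mem_ctl  : 2;   /* memory read/write strobes     */
        unsigned cond     : 3;   /* branch condition select       */
        unsigned next_adr : 12;  /* next microinstruction address */
        unsigned spare    : 2;
    } microword;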
Recall that microcode involves the establishment of timing signals at
critical control points within electronic circuits and the selection of
data paths within those circuits. Given this fact, there seems little
reason to leave the efficiency of microcode up to the accuracy of
a language translator, which we all know to be generally less
accurate than the results obtained by a skilled human programmer.
I beg to differ. The output of microcode compilers I've dealt with has
been substantially more accurate than hand-written "assembly" microcode,
requiring far less debugging.
I would be grateful to learn from you of the tools you used in the
preparation of microcode.
I've written and used microcode compilers since the early '80s, all for
custom horizontally-microcoded machines. The compilers I've written have
been based on fairly straightforward utilization of lex and yacc (or flex
and bison).
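Structurally there isn't much to them: lex tokenizes the operation
mnemonics, yacc recognizes the parallel-operation syntax, and the semantic
actions accumulate control fields into the current word. As a hedged sketch
(the names are invented, not code from those compilers), an action typically
ends up calling something like this C helper:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t word;   /* microword currently being assembled     */
    static uint32_t used;   /* mask of bits some operation already drives */

    /* OR one named control field into the current word; complain if two
     * operations on the same source line try to drive the same field.
     * Field masks and shifts come from a machine description. */
    static void set_field(uint32_t mask, int shift, uint32_t value,
                          int lineno)
    {
        if (used & mask)
            fprintf(stderr, "line %d: control-field conflict\n", lineno);
        word |= ((uint32_t)value << shift) & mask;
        used |= mask;
    }

At the end of each source line the compiler emits word and clears both
word and used for the next microinstruction.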
All of the work I did was in graduate school in the early '90s, and to
date I have not seen a single job made available to a microprogrammer type.
I would really love to have an opportunity to perform this kind of work as
a job function.
Get a job at Intel, AMD, NS/Cyrix, or Rise. :-)
Microprogramming is almost a lost art. Things that used to be microprogrammed
are now implemented using RISC processors and C code. Of course, basic RISC
processor architecture (e.g., Stanford MIPS-X) is not that far removed from
vertical microcoding.
As for the i860, sure, it is not actually the equal of a Cray-1, but the
architecture is equal to that of the processor section of the Cray-1.
You must be using some strange definition of "equal to" with which I was
formerly unacquainted. You might just as easily claim that the architecture
of the VAX 11/780 is "equal to" that of the IBM 3033. After all, both have
sixteen 32-bit integer general registers.
And in fact, compared to the differences between a Cray 1 and an i860,
the differences between the VAX 11/780 and the IBM 3033 are minor. But
I've never heard anyone claim that they shared the same architecture.
The Cray 1 is a vector processor. The i860 is a scalar processor with a
special dual-dispatch mode that can only be used by code specially written (or
compiled) to take advantage of it, and in dual-dispatch mode it can only
dispatch exactly one integer and one floating point instruction per cycle, in
an aligned dual-word instruction pair. This is arguably too primitive to even
be called superscalar, let alone vector.
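To spell the pairing restriction out, here is a minimal C model of it; the
classifier functions are placeholders rather than real i860 decoding, and
the machine's actual slot-ordering rules are not modeled:

    #include <stdbool.h>
    #include <stdint.h>

    /* Placeholder classifiers; real code would decode opcode bits. */
    static bool is_fp_op(uint32_t insn)   { return (insn & 1u) != 0; }
    static bool is_core_op(uint32_t insn) { return !is_fp_op(insn); }

    /* In dual-dispatch mode, each 64-bit-aligned pair must contain
     * exactly one core (integer) and one floating-point instruction. */
    static bool valid_dual_pair(uint64_t addr, uint32_t i0, uint32_t i1)
    {
        if (addr & 7)
            return false;   /* pair must start on a 64-bit boundary */
        return (is_fp_op(i0) && is_core_op(i1)) ||
               (is_core_op(i0) && is_fp_op(i1));
    }

Nothing in that rule resembles a vector operation; it is a lockstep pairing
constraint, which is why even "superscalar" is a generous description.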
Further, I should like to know in what ways you deem the Cray-1 to differ
from the i860, particularly with regard to the processor section.
A comparison of the Cray 1 assembly language programmer's manual (I don't
recall the exact title) with the i860 manual would reveal that there is almost
no similarity whatsoever. The register sets are different; the memory
addressing is different; the specific instructions provided are different;
basically everything is different. As such, I don't even know where to begin
in answering your question. It would perhaps be more useful to enumerate the
areas in which they are similar; it would be a short list.
Have you actually used the i860?
Yes. I've written assembly code for it for signal and image processing.
In my spare time I wrote a program to compute Mandelbrot set images. I've
had somewhat more experience with the Cray XMP and YMP, although on those
I've generally relied on the C compiler under UNICOS, but I've studied the
assembly output and occasionally tweaked it. I haven't ever run any code
on a Cray-1, but the Cray-1 architecture is much closer to that of the
XMP than to that of the i860.
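For reference, the heart of any such program is the standard escape-time
iteration; this is a generic C sketch of the inner loop, not the original
i860 code:

    /* Count iterations until z = z*z + c escapes |z| > 2,
     * for c = cr + ci*i.  Purely illustrative. */
    static int mandel_iters(double cr, double ci, int max_iter)
    {
        double zr = 0.0, zi = 0.0;
        int n;
        for (n = 0; n < max_iter; n++) {
            double zr2 = zr * zr, zi2 = zi * zi;
            if (zr2 + zi2 > 4.0)
                break;              /* escaped: |z| exceeds 2 */
            zi = 2.0 * zr * zi + ci;
            zr = zr2 - zi2 + cr;
        }
        return n;
    }

Each pixel's loop is a chain of floating-point multiplies and adds, with
every pixel independent of the others, which is the kind of work the
i860's pipelined FP units were built for.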
It's been long enough since I used either the Cray or the i860 that I don't
recall the precise details. All my manuals for both are currently in storage.