On 19 Dec 2006 at 11:01, Billy Pettit wrote:
Most of the early compilers at CDC worked this way.
The intermediate
language was assembly and could be used to clean up the final code. It
could be saved as a separate file and also be used as subroutines. It was
a very fast way to create a compiler.
While there was a command-line option to use COMPASS to perform the
assembly pass, both RUN and FTN (at least prior to 4.0) by default
used their own "cheap and dirty" internal assemblers. I believe that
the deck in FTN was called FTNXAS, or some such. Very few headaches
there--I spent most of my aspirin debugging the COMMON/EQUIVALENCE
statement processor (a state machine made up of a pile of ASSIGNed
GOTOs) with very little commentary.
RUN did have a neat feature of allowing you to stack a mixed deck of
FORTRAN and COMPASS modules. The compiler would read the first card
and if it saw IDENT or some such, pass control to COMPASS, which
would, after assembly, pass control back to RUN.
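Something like this, in modern C terms (a sketch only, nothing like the
actual RUN code; the stub routines and the crude IDENT/END matching are
my own inventions):

#include <stdio.h>
#include <string.h>

/* Stand-ins for the real processors. */
static void compass_card(const char *card) { printf("COMPASS | %s", card); }
static void fortran_card(const char *card) { printf("RUN     | %s", card); }

int main(void)
{
    char card[82];              /* an 80-column card plus newline/NUL */
    int in_compass = 0;

    while (fgets(card, sizeof card, stdin)) {
        if (!in_compass && strncmp(card, "IDENT", 5) == 0)
            in_compass = 1;     /* hand control to the assembler */

        if (in_compass)
            compass_card(card);
        else
            fortran_card(card);

        /* The assembler keeps control until its END card (matched
           crudely here), then hands back to the compiler. */
        if (in_compass && strncmp(card, "END", 3) == 0)
            in_compass = 0;
    }
    return 0;
}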
Cheers,
Chuck
Then there was another method called interpreters, where the final machine
code never really existed. The compiler generated a list of pseudo-ops that
would be executed by a series of macros. It was fast to write, but incredibly
inefficient.
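A toy version of the scheme, sketched in C rather than the macro
assembler of the day (the op names and the little stack machine are
invented for illustration):

#include <stdio.h>

enum op { PUSH, ADD, MUL, PRINT, HALT };

struct pseudo { enum op op; int operand; };

/* Pseudo-op "program" for "print 2 + 3 * 4", as a compiler might emit it. */
static const struct pseudo program[] = {
    { PUSH, 2 }, { PUSH, 3 }, { PUSH, 4 },
    { MUL, 0 }, { ADD, 0 }, { PRINT, 0 }, { HALT, 0 },
};

int main(void)
{
    int stack[64], sp = 0;

    /* Each case plays the role of one hand-written macro. */
    for (const struct pseudo *p = program; ; p++) {
        switch (p->op) {
        case PUSH:  stack[sp++] = p->operand;        break;
        case ADD:   sp--; stack[sp-1] += stack[sp];  break;
        case MUL:   sp--; stack[sp-1] *= stack[sp];  break;
        case PRINT: printf("%d\n", stack[sp-1]);     break;
        case HALT:  return 0;
        }
    }
}

Every pseudo-op costs a full dispatch through the loop, which is where
the inefficiency came from.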
And there was a really fascinating one on the IBM 1401 that kept the
high-level language in core, and brought in sequential routines from the
tape unit. Each routine would perform one process on the source. At the
end, what remained in core was the machine language program. It was a true
single-pass compiler. But it worked in serial mode, bringing in each
routine in order (63 different ones, if I remember correctly) even if it
wasn't needed.
It was probably the slowest compiler I ever worked with, but the concept was
interesting.
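To make the shape of it concrete, a little sketch in C (the three toy
phases are my inventions; the real routines were tape overlays, each
doing one transformation on the source sitting in core):

#include <stdio.h>
#include <string.h>

static char core[256];                 /* stands in for 1401 core */

static void strip_comments(void)       /* phase 1: drop ";" comments */
{
    char *p = strchr(core, ';');
    if (p) *p = '\0';
}

static void fold_case(void)            /* phase 2: upper-case (ASCII) */
{
    for (char *p = core; *p; p++)
        if (*p >= 'a' && *p <= 'z')
            *p -= 'a' - 'A';
}

static void squeeze_blanks(void)       /* phase 3: collapse blank runs */
{
    char *src = core, *dst = core;
    int prev = 0;
    for (; *src; src++) {
        if (*src == ' ' && prev)
            continue;
        prev = (*src == ' ');
        *dst++ = *src;
    }
    *dst = '\0';
}

int main(void)
{
    void (*phase[])(void) = { strip_comments, fold_case, squeeze_blanks };

    strcpy(core, "print  total ; running sum");

    /* Serial mode: every phase runs in order, needed or not, just as
       every tape routine was read in turn. */
    for (size_t i = 0; i < sizeof phase / sizeof phase[0]; i++)
        phase[i]();

    puts(core);
    return 0;
}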
Billy