On Sat, 2006-12-16 at 13:35 -0500, Dave McGuire wrote:
On Dec 15, 2006, at 10:18 PM, Chris M wrote:
Is it a *recent* development of compilers that as an intermediate step
the source code will first be reduced to assembler mnemonics, before
being reduced to object code?
Absolutely not. The UNIX world, at least, has been doing it that
way for decades.
Right. Also, from the micro world, two words: Small C. I love the
approach. Among other benefits, it allows one to write something in C
to get the algorithm correct, and then rewrite and/or optimize the
generated assembly to speed up inner loops and frequently executed
code, as needed.
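
For anyone who wants to try that workflow today: most Unix C compilers
will stop after the assembly step when asked. A minimal sketch, with a
hypothetical file name (Small C itself simply emitted assembler source
as its output):

    /* sum.c -- get the algorithm right in C first */
    int sum(int *a, int n)
    {
        int s = 0;
        while (n-- > 0)
            s += *a++;
        return s;
    }

    cc -S sum.c     # emits sum.s, the generated assembly
                    # (hand-tune the inner loop in sum.s, then:)
    cc -c sum.s     # assemble the tuned version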
Peace,
Warren E. Wolfe
------------------------------------------------------------------------------
Most of the early compilers at CDC worked this way. The intermediate
language was assembly and could be used to clean up the final code. It
could be saved as a separate file and also be used as subroutines. It was
a very fast way to create a compiler.
Then there was another method, interpreters, where the final machine code
never really exists. The compiler generates a list of pseudo-ops that are
executed by a series of macros. It was fast to write, but incredibly
inefficient.
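
A minimal sketch of that dispatch scheme in C (the opcodes and the
program here are invented for illustration; the real systems did this
with machine-level macros):

    #include <stdio.h>

    enum op { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    int main(void)
    {
        /* the "compiled" program: pseudo-ops with inline operands */
        int prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        int stack[16], sp = 0, pc = 0;

        for (;;) {                       /* dispatch loop */
            switch (prog[pc++]) {
            case OP_PUSH:  stack[sp++] = prog[pc++];       break;
            case OP_ADD:   sp--; stack[sp-1] += stack[sp]; break;
            case OP_PRINT: printf("%d\n", stack[sp-1]);    break;
            case OP_HALT:  return 0;
            }
        }
    }

Every pseudo-op pays for a dispatch on top of its real work, which is
where the inefficiency comes from.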
And there was a really fascinating one on the IBM 1401 that kept the high
level language in core and brought in sequential routines from the tape
unit. Each routine would perform one process on the source. At the end,
what remained in core was the machine language program. It was a true
single-pass compiler. But it worked in serial mode, bringing in each
routine in order (63 different ones, if I remember correctly) even if it
wasn't needed.
It was probably the slowest compiler I ever worked with, but the concept was
interesting.
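
Reduced to its essence, that scheme looks something like this (a toy
sketch; the phase names are hypothetical, and on the real machine each
phase was an overlay loaded from tape):

    #define CORE_SIZE 4000   /* a 1401 commonly had 4K characters of core */
    static char core[CORE_SIZE];  /* source goes in, machine code comes out */

    typedef void (*phase_fn)(char *);

    static void strip_comments(char *buf)    { (void)buf; /* one rewrite */ }
    static void assign_storage(char *buf)    { (void)buf; /* another */ }
    static void emit_machine_code(char *buf) { (void)buf; /* final form */ }

    int main(void)
    {
        /* every phase runs in fixed order, needed or not -- the
           original used 63 of them */
        phase_fn phases[] = { strip_comments, assign_storage,
                              emit_machine_code };
        for (unsigned i = 0; i < sizeof phases / sizeof phases[0]; i++)
            phases[i](core);
        return 0;  /* what's left in core is the machine-language program */
    }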
Billy