On 15 Dec 2006 at 19:18, Chris M wrote:
> Is it a *recent* development of compilers that as an
> intermediate step the source code will first be
> reduced to assembler mnemonics, before being reduced
> to object code?
Those mnemonics aren't used by the compiler in many cases--they're
for the compiler writers and maintainers (and curious users who'd
like to take a peek). While I suppose looking at a binary dump of
the object code might reveal something, it's the hard way to answer
the question "Am I really generating the right code?"
Many compilers can generate code for the same assembler that the
programmers use, but if you have a single target, why parse ASCII
text if you don't have to? It just slows the compilation process
down. On the other hand, if
you're writing a compiler to generate native code on multiple
platforms, then using the standard assembler makes some sense. Saves
you from having to know about object file layout and such.
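The trade-off can be sketched in a few lines. Here both back ends walk
the same toy instruction stream; one writes ASCII mnemonics that an
assembler would have to re-parse, the other writes bytes directly. The
mnemonics and opcode encodings are invented for illustration, not any
real machine's:

```python
# The same toy "add two constants" sequence, emitted two ways.
# Mnemonics and byte encodings here are invented, not a real ISA.

instrs = [("LOAD", 2), ("LOAD", 3), ("ADD", None)]

def emit_asm(instrs):
    """Emit ASCII assembler text: portable across targets with a
    standard assembler, but someone has to parse it again."""
    lines = []
    for op, arg in instrs:
        lines.append(f"    {op}" if arg is None else f"    {op} {arg}")
    return "\n".join(lines)

OPCODES = {"LOAD": 0x01, "ADD": 0x02}   # invented encodings

def emit_object(instrs):
    """Emit raw bytes directly: faster, but the compiler now has to
    know the target's encodings and object file layout."""
    out = bytearray()
    for op, arg in instrs:
        out.append(OPCODES[op])
        if arg is not None:
            out.append(arg)
    return bytes(out)

print(emit_asm(instrs))
print(emit_object(instrs).hex())
```

The single-target compiler skips `emit_asm` entirely; the
multi-platform one keeps it and lets each system's assembler worry
about object files.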
Every compiler needs some sort of assembly pass, if for nothing more
than to satisfy forward references. Because the code being generated
is fairly restricted as to form, a pass-and-a-half assembly phase is
often more than enough.
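Satisfying forward references usually comes down to backpatching:
when a jump targets a label that hasn't been seen yet, emit a
placeholder, note the hole on a fix-up list, and patch it once the
label's address is known. A minimal sketch, with an invented one-byte
instruction layout (JMP = opcode 0x10 plus one address byte):

```python
# Backpatching sketch: one pass emits code with holes for forward
# jump targets; a fix-up list fills them in afterwards.
# Instruction layout is invented: JMP = 0x10 + 1 address byte.

code = bytearray()
fixups = []     # (hole_offset, label_name) pairs awaiting addresses
labels = {}     # label_name -> code offset

def emit_jmp(label):
    code.append(0x10)
    if label in labels:
        code.append(labels[label])         # backward ref: known now
    else:
        fixups.append((len(code), label))  # forward ref: leave a hole
        code.append(0x00)

def define_label(label):
    labels[label] = len(code)

def resolve():
    for offset, label in fixups:
        code[offset] = labels[label]       # patch the hole

emit_jmp("done")      # forward reference -- target not yet known
code.append(0x01)     # some other instruction
define_label("done")
code.append(0xFF)     # the instruction at the target
resolve()
print(code.hex())
```

That resolve step at the end is the "half" in a pass-and-a-half: a
sweep over the fix-up list rather than a full second pass over the
source.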
As to mnemonics, it's interesting that even in P-code
implementations, the instruction mnemonics are often one of the first
things specified in the design process. After all, you have to have
some way to talk about the instructions you're making up. If it's
native code you're compiling to, well, those are already made up for
you.
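One reason the mnemonics come first: once the names exist, the code
generator and the interpreter can both be written against them, before
any byte-level encoding is decided. A toy stack-machine sketch (these
mnemonics are invented for illustration, not real UCSD P-code):

```python
# Toy P-code-style stack machine: the mnemonics are the design,
# and everything else is written in terms of them.
# Invented mnemonics, not any real P-code instruction set.

def run(program):
    """Interpret a list of (mnemonic, operand) pairs."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown mnemonic: {op}")
    return stack.pop()

# (2 + 3) * 4, written in mnemonics before any encoding exists
prog = [("PUSH", 2), ("PUSH", 3), ("ADD", None),
        ("PUSH", 4), ("MUL", None)]
print(run(prog))  # -> 20
```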
Cheers,
Chuck