>> If the compiler produced code that was so incredibly obtuse and
>> broken that it took longer to execute an operation than an
>> interpreter, the compiler was a piece of crap.
> You've obviously never written an interpreter, have you?
I have (written interpreters, that is), and I still agree that if
compiled code is slower than interpreted code, something needs
improvement (I wouldn't quite go so far as to use language like
"broken" or "piece of crap"). As a trivial argument, the compiler
could simply generate the sequence of operations the interpreter would,
except without the overhead of the interpreter itself.
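
To make that trivial argument concrete, here is a minimal sketch (the
two-opcode "machine" and all the names in it are my own invention, not
anything from upthread): the interpreter pays a fetch-and-dispatch cost
on every operation, while a compiler can emit calls to the very same
operation routines as straight-line code, so it can't be slower.

  #include <stdio.h>

  /* A hypothetical two-opcode machine, purely for illustration. */
  enum { OP_INC, OP_DBL, OP_HALT };

  static long acc;                  /* single accumulator */

  static void op_inc(void) { acc += 1; }
  static void op_dbl(void) { acc *= 2; }

  /* Interpreter: every operation pays the fetch/decode/dispatch cost. */
  static void interpret(const unsigned char *prog)
  {
      for (;;) {
          switch (*prog++) {        /* the per-operation overhead lives here */
          case OP_INC:  op_inc(); break;
          case OP_DBL:  op_dbl(); break;
          case OP_HALT: return;
          }
      }
  }

  /* "Compiled" version of the same program (INC, DBL, INC): identical
   * operation routines, but the dispatch loop is gone - which is why
   * the compiled code shouldn't lose, modulo code size. */
  static void compiled(void)
  {
      op_inc();
      op_dbl();
      op_inc();
  }

  int main(void)
  {
      static const unsigned char prog[] = { OP_INC, OP_DBL, OP_INC, OP_HALT };

      acc = 0; interpret(prog); printf("interpreted: %ld\n", acc);
      acc = 0; compiled();      printf("compiled:    %ld\n", acc);
      return 0;
  }

Both paths compute the same result; the only difference is whether the
sequencing of operations is decided at run time or frozen in at
compile time.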
The only reasons I can see not to do this are (1) the compiler's writers
(or its run-time's writers) aren't as clever as the interpreter's writers
when it comes to generating fast sequences for certain operations - see
the floating-point stuff mentioned upthread for an example - and (2)
code size.
/~\ The ASCII der Mouse
\ / Ribbon Campaign
X Against HTML mouse at rodents.montreal.qc.ca
/ \ Email! 7D C8 61 52 5D E7 2D 39 4E F1 31 3E E8 B3 27 4B