The only time
I can see an interpreter having a clear advantage is
when it knows something about the runtime environment that the
compiler and [its] runtime can't know.
I'm having trouble thinking of an example of such a "something". Can
you cite one? It'd help me understand.
This thread is becoming tedious, and suffused with more noise than
information.
Think about the case, for example, where the primitives being modeled
are large and complex. The overhead of the interpreter is then swamped
by the actual processing. A compiler (or native code) has no advantage
in that case because (let's say) 90% of the processing is done inside
the primitive.
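
To make that concrete, here is a minimal sketch (everything in it is
made up for illustration): a toy dispatch loop whose one primitive is a
256x256 matrix multiply. Each instruction costs one switch, while the
primitive costs millions of multiply-adds, so compiling the loop away
buys essentially nothing.

    /* Toy interpreter where the primitive dominates the cost.
       All names here are hypothetical, for illustration only. */
    #include <stddef.h>

    #define N 256

    typedef enum { OP_MATMUL, OP_HALT } Op;

    static double a[N][N], b[N][N], c[N][N];

    /* The heavyweight primitive: roughly N^3 multiply-adds. */
    static void matmul(void) {
        for (size_t i = 0; i < N; i++)
            for (size_t j = 0; j < N; j++) {
                double sum = 0.0;
                for (size_t k = 0; k < N; k++)
                    sum += a[i][k] * b[k][j];
                c[i][j] = sum;
            }
    }

    /* The interpreter proper: one switch per instruction. The few
       cycles of dispatch are a rounding error next to matmul(). */
    static void run(const Op *prog) {
        for (size_t pc = 0; ; pc++) {
            switch (prog[pc]) {
            case OP_MATMUL: matmul(); break;
            case OP_HALT:   return;
            }
        }
    }

    int main(void) {
        Op prog[] = { OP_MATMUL, OP_MATMUL, OP_HALT };
        run(prog);
        return 0;
    }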
I think this whole 'concept' is too broad and generic for a simple
description. Just trust those who have written compilers and
interpreters. Interpreters can be surprisingly fast and can be more
appropriate choices than a compiler. Nothing is black and white.
-brad