On 22 Oct 2006 at 2:52, Jim Leonard wrote:
> You can't tell me any interpreter could possibly beat that. As der
> mouse said, even if the interpreter produced exactly the same code,
> there is the overhead of the interpreter itself.
No, you're right in the case of most math (although there have been
some really awful floating point packages; I'd expect that most
compilers and interpreters would be using NDP instructions by now).
But strings are a whole 'nuther story. Take, for example, a simple
string expression:
D$=A$+B$+C$
where "+" is the concatenation operator. If the compiler chooses to
implement ASCIIZ strings, where the length of each string isn't known
in advance, execution can really bog down. The code could get as bad
as:
<get length of A$ by scanning for the terminating null>
<get length of B$ by scanning for the terminating null>
<allocate temp1 long enough to hold both A$ and B$>
<move A$ to temp1>
<find the length of temp1>
<move B$ to temp1+length of A$>
<find the length of temp1>
<find the length of C$>
<allocate temp2 sufficient to hold temp1 and C$>
<move temp1 to temp2>
<find the length of temp2>
<move C$ to temp2+length of temp2>
<find the length of temp2>
<(re)allocate D$ sufficient to hold temp2>
<move temp2 to D$>
<free temp1>
<free temp2>
<garbage collect, if need be>
I have actually seen compiled code this awful. And a junior
programmer who knew nothing but the 'C' <string.h> library functions
would likely be strongly tempted to generate this kind of code. Most
'C' programmers aren't even aware of how inefficient null-terminated
strings are.
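For the curious, here's roughly what that <string.h>-only version
looks like in 'C'. It's just a sketch to make the scanning explicit;
concat3_naive is my own name for it, not anything from a real
compiler:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* The naive rendering of D$=A$+B$+C$: every strlen() and strcat()
   rescans for the terminating null, so the operands get walked
   several times over. */
char *concat3_naive(const char *a, const char *b, const char *c)
{
    size_t need = strlen(a) + strlen(b) + strlen(c) + 1; /* 3 scans */
    char *d = malloc(need);
    if (d == NULL)
        return NULL;
    strcpy(d, a);   /* scans a again while copying it           */
    strcat(d, b);   /* rescans d from the start to find its end */
    strcat(d, c);   /* rescans the now-longer d yet again       */
    return d;
}

int main(void)
{
    char *d = concat3_naive("Hello, ", "null-terminated ", "world");
    if (d != NULL) {
        puts(d);
        free(d);
    }
    return 0;
}

Count the passes: three strlen() scans up front, then a strcpy() and
two strcat() calls that each re-walk what's already been built.
That's the bog-down in miniature.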
Would this be the fault of the runtime or the compiler?
A smart interpreter that kept its strings as length-carrying
descriptors could run rings around the aforementioned compiler,
since the actual statement to be interpreted is likely quite small
in comparison to the size of the strings involved.
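By way of contrast, a length-carrying descriptor makes the same
concatenation scan-free: one allocation and three block moves.
Again just a sketch; str_desc and concat3_desc are illustrative
names, not from any particular interpreter:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Each string carries its length up front, so no null-scanning is
   ever needed: the length of a concatenation is just a sum. */
struct str_desc {
    size_t len;
    char  *data;
};

struct str_desc concat3_desc(struct str_desc a, struct str_desc b,
                             struct str_desc c)
{
    struct str_desc d;
    d.len  = a.len + b.len + c.len;   /* three adds, zero scans */
    d.data = malloc(d.len);
    if (d.data == NULL) {
        d.len = 0;
        return d;
    }
    memcpy(d.data,                 a.data, a.len);
    memcpy(d.data + a.len,         b.data, b.len);
    memcpy(d.data + a.len + b.len, c.data, c.len);
    return d;
}

int main(void)
{
    struct str_desc a = { 7,  "Hello, " };
    struct str_desc b = { 11, "descriptor " };
    struct str_desc c = { 5,  "world" };
    struct str_desc d = concat3_desc(a, b, c);
    if (d.data != NULL) {
        fwrite(d.data, 1, d.len, stdout);
        putchar('\n');
        free(d.data);
    }
    return 0;
}

The temp-string bookkeeping and garbage collection still exist in a
real interpreter, of course, but the repeated scanning disappears
entirely.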
Cheers,
Chuck