On 1 Dec 2010 at 20:59, Teo Zenios wrote:
Coding "optimizing" to me means getting the
job done with well
commented code in a manner that the average programmer can figure out,
maintain, and add to as needed. Sure some people can even do better
then a good compiler but will most people understand what is going on
with those special tricks and why they were used, plus can people add
to the code without screwing something up or it crashing if the
hardware is changed?
In my own experience, most programmers think about optimization only
when it matters; i.e., they work really hard on the "hot spots" and
leave everything else relatively straightforward.
Effectively using a compiler with an all-out optimizer requires a
fair bit of knowledge on the part of the user.
Consider the following (overly) simple example...
DO 100 I=1,N
100 C(I) = A(I)+B(I)
On a machine with pipelined instruction issue, a possible
optimization might be
SET I=1
LOAD A(1)               ; prime the pipeline with the first operands
LOAD B(1)
LABEL1:
IS I > N, EXIT IF TRUE
COMPUTE C               ; C(I) = A(I)+B(I), operands already loaded
INCREMENT I
STORE C(I-1)
LOAD A(I)               ; prefetch operands for the next iteration,
LOAD B(I)               ; overlapping the store and the branch
GOTO LABEL1
This potentially allows the CPU to run at issue rate with no
"bubbles" due to waiting on a storage access. In most cases this
will work just fine. However, change the loop index to a non-unit
stride--e.g., DO 100 I=1,N,100--and you run the risk of an address
fault when the last+1 set of operands for the loop is fetched.
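In rough C terms, the transformed loop looks something like the
sketch below (the function name and the explicit stride parameter
are just for illustration, not anything a compiler would emit).
Note that the final pass prefetches operands one stride beyond the
last element the loop actually uses; that speculative read is
exactly where the address fault can come from:

#include <stddef.h>

/* Sketch of the software-pipelined loop above: each pass stores the
   element just computed and prefetches the operands for the next one. */
void vec_add_prefetch(const double *a, const double *b, double *c,
                      size_t n, size_t stride)
{
    if (n == 0)
        return;

    size_t i = 0;
    double next_a = a[0];   /* prime the pipeline: LOAD A(1), LOAD B(1) */
    double next_b = b[0];

    while (i < n) {
        double sum = next_a + next_b;   /* COMPUTE C                    */
        i += stride;                    /* INCREMENT I                  */
        c[i - stride] = sum;            /* STORE C(I-1)                 */
        next_a = a[i];                  /* LOAD A(I), LOAD B(I): on the */
        next_b = b[i];                  /* final pass this reads past   */
    }                                   /* the last element ever used   */
}

With a stride of 1 the over-read is only one element past the end
and usually lands in mapped memory; with a stride of 100 it can land
well outside the arrays, which is the fault risk described above.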
Should you eliminate the prefetch optimization altogether or give the
user the option of selectively applying it?
The Microsoft issue, as I recall, was with dead-code elimination.
The C statement
while (C != 0);
could normally be eliminated if C was 0--unless C referred to a
memory location that was changed by some external influence, e.g. a
hardware status byte. The "volatile" keyword told the optimizer to
keep its hands off.
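A minimal sketch of that situation (the register address below is
made up purely for illustration):

#include <stdint.h>

/* Hypothetical memory-mapped hardware status byte.  Without the
   "volatile" qualifier the optimizer may read it once, decide it can
   never change, and delete the loop (or spin on it forever). */
#define STATUS_REG (*(volatile uint8_t *)0x40001000u)

void wait_for_device(void)
{
    /* "volatile" forces a fresh read of the status byte on every
       pass, so the busy-wait cannot be optimized away. */
    while (STATUS_REG != 0)
        ;
}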
The beauty of a good optimizer is that it allows one to write legible
code that still turns in good performance.
--Chuck