On 4/12/2011 5:55 PM, Mike Loewen wrote:
> Some years back, I was giving a presentation to our group, explaining
> why I had coded something in a particular way (because it was faster and
> more efficient). One of our so-called programmers raised their hand and
> asked, "Why don't you just run it on a faster processor?" I just stared
> at her.
We were asked at my local university not all that long ago, by an
otherwise bright CS professor, "What's the easiest way to speed up
your application?" We raised our hands and offered several ways of
optimizing the code. His answer was "just wait, and the processors will
get faster, so your application will run faster."
While easiest != best, I was surprised (shocked?) that this was being
taught to pretty impressionable youth.
I will say that faster processors let us get away with using higher
level scripting languages, which are easier to write, abstract away
unnecessary details, and get you to a working piece of code faster.
There's no need to write it in assembly or otherwise and have it execute
in .00004 seconds when a scripting language that runs the process in 4
seconds is perfectly acceptable. Yes, of course it's not scalable, but
maybe scalability isn't needed.
The first incarnation of my external Amiga floppy drive controller,
implemented with a microcontroller, used the manufacturer's flavor of
BASIC for a large portion of the code, with assembly where time-critical
functions were needed. I used assembly for the UART, and for the portion
of code that snags bits off the drive and jams them into a serial FRAM.
Parallax, the microcontroller manufacturer, had written their SX/B
(BASIC) compiler to translate the BASIC code to assembly, then used the
regular assembler to finish the job. At any point, you could hit a
keystroke and view the assembly that your BASIC code had yielded. You
could embed assembly inline with the rest of the BASIC code, and it all
worked pretty neatly.
Keith