On 25/10/11 3:01 PM, Chuck Guzis wrote:
On 25 Oct 2011 at 18:15, Dave Caroline wrote:
An easy cure is to give developers slow machines with limited memory
and slow disk subsystems. I do that to myself. I find it a good way to
discover the "Right Way" (TM).
I'm still in awe of the speed of the average personal computer.
When programming the analysis software for the PAL "cloning" project,
I was concerned about running nested loops inside an outer loop that
iterated 65K times (PAL16L8 chips, for example, have some tricky
aspects, such as tristate outputs and feedback lines), remembering how
long a simple 65K decrement-and-jump sequence took on a 4MHz Z80
(about a second, if memory serves).
Of course, it didn't matter. The analysis finishes near-instantaneously.
Running a truth table with 16K entries through Espresso also completes
in almost no time at all.
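For scale: a bare 16-bit decrement-and-test loop on a 4MHz Z80
(DEC BC / LD A,B / OR C / JR NZ) costs roughly 26 T-states per pass, so
65,536 iterations alone take about 0.4 s before the loop body does any
work, which makes "about a second" entirely plausible. Today the same
count is effectively free. Here is a purely illustrative sketch, not
the actual analysis code: walk every input combination, skip tristated
cases, and dump a truth table in Espresso's PLA input format. The
read_device() function below is a hypothetical stand-in for real
hardware access.

  /* Illustrative sketch only -- not the real PAL analysis code.
   * Walk all 2^16 input combinations for one output and emit a truth
   * table in Espresso's PLA format (.i/.o header, one row per term, .e). */
  #include <stdio.h>

  /* Hypothetical stand-in for driving the part and sampling one output pin.
   * Returns 0 or 1, or -1 to mean "output is tristated for this input". */
  static int read_device(unsigned inputs)
  {
      int p = 0;                          /* placeholder: parity of inputs */
      for (int b = 0; b < 16; b++)
          p ^= (inputs >> b) & 1;
      return p;
  }

  int main(void)
  {
      printf(".i 16\n.o 1\n");
      for (unsigned in = 0; in < 0x10000; in++) {   /* 65,536 combinations */
          int out = read_device(in);
          if (out < 0)
              continue;                             /* skip tristated cases */
          for (int b = 15; b >= 0; b--)             /* input bits, MSB first */
              putchar((in >> b) & 1 ? '1' : '0');
          printf(" %d\n", out);
      }
      printf(".e\n");
      return 0;
  }

Even with real hardware I/O inside the loop, the enumeration itself is
not where the time goes on modern machines.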
I'm clearly of the "old" mindset.
We know this because you did not optimise unnecessarily.
The problem today is at least two-dimensional. As Mouse lamented, there
is little frugality, and little understanding of how to use the great
excess of speed and space effectively; but the salt in the wound is a
widespread obsession with *micro*-optimisation... time and attention
wasted looking in the wrong directions.
This is clear from forums everywhere - for example, the person who
recently asked in a C/C++ channel which operator was "faster", < or <=;
or those who ask which integer datatype is "faster" in a SQL schema; or
those who refuse to contemplate languages with high-level abstractions
suited to the problem at hand.
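To make the first example concrete: an optimising compiler reduces both
forms below to essentially the same compare-and-branch (easy to check
with, say, gcc -O2 -S), so the question has no useful answer, while the
choice of algorithm, schema or abstraction can move the cost by orders
of magnitude. A throwaway sketch:

  /* Two loops that differ only in the "micro-optimised" comparison.
   * A modern optimising compiler emits essentially identical code for
   * both; compare the output of e.g. `gcc -O2 -S` to see for yourself. */
  #include <stdio.h>

  static long sum_lt(const int *a, int n)
  {
      long s = 0;
      for (int i = 0; i < n; i++)        /* strict less-than */
          s += a[i];
      return s;
  }

  static long sum_le(const int *a, int n)
  {
      long s = 0;
      for (int i = 0; i <= n - 1; i++)   /* less-than-or-equal */
          s += a[i];
      return s;
  }

  int main(void)
  {
      int data[1000];
      for (int i = 0; i < 1000; i++)
          data[i] = i;
      printf("%ld %ld\n", sum_lt(data, 1000), sum_le(data, 1000));
      return 0;
  }

The time spent asking would buy far more performance if pointed at the
algorithm or the schema instead.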
These questions, like Tony's "can't find 382.73 Ohm resistors", betray a
truly frightening lack of understanding of what is being done.
--T