> > An easy cure is to give developers slow machines with limited memory
> > and slow disk subsystems. I do that to myself. I find it a good way
> > to discover the "right way" TM
On Tue, 25 Oct 2011, Richard wrote:
> This is a recipe for expensive software. I've seen this idea floated
> verbally more than once, but any company that ever cared about
> performance or memory footprint (and there are plenty) never did
> this, and for good reason. It makes your software ridiculously
> expensive because you cripple the productivity of every person
> working on it. You're better off developing automated benchmarks
> against your software that test the performance or footprint sizes
> and having those benchmarks fail with a noisy report as soon as
> someone steps across the "that's too much" threshold.
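
For concreteness, a "noisy threshold" check of the sort Richard describes
might look roughly like this in C. The workload stub, the limits, and the
names are all invented for illustration; the point is only that the build
fails loudly the moment a budget is exceeded.

/* A rough sketch, not production code: fail loudly when the workload
   blows its time or memory budget. run_workload(), the limits, and the
   numbers are hypothetical. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define MAX_SECONDS   2.0                   /* "that's too much" time budget   */
#define MAX_FOOTPRINT (4UL * 1024 * 1024)   /* "that's too much" memory budget */

/* stand-in for the real code under test; returns peak bytes used */
static size_t run_workload(void)
{
    return 1024UL * 1024;                   /* pretend we peaked at 1 MB */
}

int main(void)
{
    clock_t start = clock();
    size_t peak = run_workload();
    double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;

    if (elapsed > MAX_SECONDS || peak > MAX_FOOTPRINT) {
        fprintf(stderr, "BENCHMARK FAILED: %.2f s (limit %.2f s), "
                        "%lu bytes (limit %lu bytes)\n",
                elapsed, MAX_SECONDS,
                (unsigned long)peak, (unsigned long)MAX_FOOTPRINT);
        return EXIT_FAILURE;                /* noisy report, failed build */
    }
    printf("benchmark OK: %.2f s, %lu bytes\n",
           elapsed, (unsigned long)peak);
    return EXIT_SUCCESS;
}
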
I have long held the position that the biggest problem with most modern
software, PARTICULARLY Microsoft, is that they have grossly inadequate
TESTING on slow, small, flaky hardware. When queried, they often state
that they see no need for that. "That's a hardware problem".
"Upgrade
your hardware". "I'm surprised that it will even RUN on such obsolete
junk (6 months old!)" There is an implication that YOU are inadequate if
you complain about performance on machines that are not TODAY's issue.
SOME hardware problems are rare but inevitable, such as disk I/O errors on
read and write. PROPERLY written software should be so crash proof that
it can report what happened and exit "gracefully" when they occur. But,
will that be written into the software if the "developer" has never
personally encountered a hardware error? Will the "developer" do an
adequate job of exception handling if they have never experienced running
the software at less than SIXTEEN TIMES the "required" capacity and speed?
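
To make "gracefully" concrete: a rough C sketch of that kind of handling
(the program and the names are invented here, not anything Microsoft
shipped) is mostly a matter of checking every read, write, and close,
reporting what actually happened, and getting out cleanly.

/* Sketch: treat disk I/O errors as expected events. Check every read,
   write, and close; say what failed; exit cleanly instead of crashing
   or silently losing data. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void die(const char *what, const char *name)
{
    fprintf(stderr, "%s %s: %s\n", what, name, strerror(errno));
    exit(EXIT_FAILURE);                     /* report and leave gracefully */
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
        return EXIT_FAILURE;
    }

    FILE *in = fopen(argv[1], "rb");
    if (!in) die("cannot open", argv[1]);

    FILE *out = fopen(argv[2], "wb");
    if (!out) die("cannot create", argv[2]);

    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, in)) > 0)
        if (fwrite(buf, 1, n, out) != n)    /* bad sector, disk full, ... */
            die("write error on", argv[2]);

    if (ferror(in))                         /* a read error, not just EOF */
        die("read error on", argv[1]);

    if (fclose(out) != 0)                   /* buffered writes can fail here */
        die("error closing", argv[2]);
    fclose(in);

    return EXIT_SUCCESS;
}
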
Yes, developer productivity is up with state of the art machines.
BUT, too many current "developer"s do not have a reasonable level of
experience with slow, small, or flaky hardware!
IFF there were adequate testing on lower grade hardware, then certainly
the bulk of the development could be done with high-end machines. BUT,
THERE IS NOT.
If there is not going to be multiple levels of hardware available to
facilitate proper testing, and an obsession that all machines in the
development process should be the same, then they would, indeed, ALL need
to be downgraded to the lowest level to get proper testing.
> If most of the people in the world aren't doing what you think is
> smart, then perhaps you should look deeper into the situation before
> declaring the rest of those people idiots. Maybe they know something
> you don't.
and maybe they don't.
Microsoft, for quite some time (since the days when Bob Wallace walked
out), has not had an appropriate testing protocol. My favorite example
was the implementation of write-caching in SMARTDRV (1991). They refused
to listen to Beta testers' bug reports, because "It works fine on OUR
computers", even when presented with an analysis showing that use
of the software on flaky hardware WOULD result in data loss that would, by
the very nature of the problem, end up being blamed on OTHER Microsoft
products. Accordingly, "disk compression" got an undeservedly bad
reputation, and MS-DOS 6.00 had to be upgraded AT NO CHARGE to
6.20. ALL due to failure to test on lower-grade hardware.
"Developer"s with experience on lower-grade hardware don't make THAT kind
of mistake.
There is a normal tendency to pick fan-boys for Beta testing. Microsoft
is not unique in that. I am not really surprised that I was dropped from
their beta test program after I submitted a well documented and thorough
bug report ABOUT THEIR BETA PROGRAM.
--
Grumpy Ol' Fred                cisin at xenosoft.com
While I agree strongly with heavy testing on under-featured hardware, I do
know of one case where the opposite condition exposed the bug - actually
poor programming practice. Back in the early years of IBM virtual memory
deployment, we received a program that the vendor claimed ran perfectly
during their tests. We would get results ranging from totally "off the
wall" answers to abends. When we dug into the dumps, we found a lot of
random values in various data areas.
Turns out that the developer was running on a system that was thrashing
its vm pages, so each memory request was receiving a zeroed memory page.
We had just upgraded to a system that was sized to support our growth
over 3-5 years. Most of our jobs didn't even page, and when the
application was loaded, whatever was in the memory page from the
previous jobstep was still present. As far as the OS was concerned, the
same "owner" got the memory back, so leaving the pages uncleared posed
no security risk.
The cause of the issue was that the programmer, seeing that he always
had zeros during early testing, only initialized NON-ZERO data elements.
It took us nearly a month to convince the vendor that there really was a
problem, with many post-mortem dumps snailmailed back and forth.
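
For anyone who has not run into this class of bug, here is a hypothetical
reconstruction in C (the original was not C, and the struct is invented).
Code that sets only the non-zero fields works by luck when pages arrive
zero-filled and falls apart when they arrive full of the previous job's
leftovers.

/* Hypothetical reconstruction of the bug described above. malloc()
   makes no promise about the contents of the memory it returns; on the
   developer's thrashing system the pages just happened to arrive
   zero-filled, so setting only the non-zero fields appeared to work. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct record {
    long total;     /* assumed to start at zero      */
    long count;     /* assumed to start at zero      */
    long scale;     /* the only field explicitly set */
};

int main(void)
{
    struct record *r = malloc(sizeof *r);
    if (!r)
        return EXIT_FAILURE;

    /* buggy version: initialize only the "NON-ZERO data elements" */
    r->scale = 100;

    /* correct version: clear (or fully initialize) everything first:
       memset(r, 0, sizeof *r);
       r->scale = 100;                                                  */

    /* on a recycled page, total and count hold whatever the previous
       jobstep left behind; "off the wall" answers follow               */
    printf("total=%ld count=%ld scale=%ld\n", r->total, r->count, r->scale);

    free(r);
    return EXIT_SUCCESS;
}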
--
larry bradford