On 6/12/2013 10:27 PM, Ian King wrote:
On 6/12/13 6:15 PM, "Fred Cisin" <cisin at xenosoft.com> wrote:
Properly planned testing might significantly reduce the amount of testing
needed. Badly designed testing does nothing but waste everybody's time.
The number of games of Solitaire played is not the right metric, unless that
is the primary intended use (maybe it was?).
Yes, you're right, but there are assumptions you're ignoring (presumably
for simplicity's sake). Inadequately staffed testing does the best it can
within the constraints (calendar) prescribed by the corporate masters.
One can plan testing until one is bleary-eyed, but if the product changes
weekly because the marketing weenies say it must (with no change in ship
dates), testing will not be able to plan meaningfully. I'm not referring
to changes in implementation of feature X: I'm talking about, "We GOTTA
have this new feature! EVERYONE wants it!" I'm not joking, and having
dredged up these painful memories from my days at [insert humongous
software company here] I probably won't sleep well tonight....
Two things I remember from back then:
A) We need it yesterday.
B) And this feature, the client just paid for it.
I don't mind if dev works on the fast machines, as long as when test
brings them a bug from a real-world, dog-slow machine, they don't respond
(wait for it): "It works fine on my machine." IMHO, mayhem is then
justified. -- Ian
Ben.