Posted by Stan Taylor on August 20, 2008
In a new post, Scott Barber reminds testers that there are often valid business reasons for decisions that run counter to the tester's view of what it takes to build quality software:
Most testers I meet simply have not been exposed to the virtually impossible business challenges regularly facing development projects that often lead to decisions that appear completely counter to a commitment to quality when taken out of context. The fact is that there are a huge number of factors influencing a software development project that, at any particular point in the project, may rightly take precedence over an individual tester's assessment of quality. Given their lack of exposure, it's no wonder testers seem to habitually take a "my team doesn't listen to me" point of view.
When I conduct job interviews with QA engineers, I often test the candidate's awareness of these factors by asking this question: "Can you name a time when you just had to put your foot down with regard to quality? For example, a time when you declared that the software couldn't ship due to quality concerns?"
It's a little bit of a trick question. The answer I hope to hear is: No; it's not my job to make those decisions; it's my job to provide risk assessment data to the decision makers who do have to make these tough calls. Secondly, if I'm doing my job correctly throughout the dev cycle, there should not be any surprises of this type. If a situation is developing that might result in such a confrontation, then I haven't done my job in monitoring it, trying to solve it, or at the very least keeping management in the loop on the building crisis so that they can make appropriate contingency plans. There's nothing management likes less than getting into a crisis situation with no warning.