Vision of Quality

Ministry of Testing

Case Study:

What do you do when you're asked to compromise your vision of quality for a product? I am the only tester on the team and regularly have to work long hours to achieve what I consider an acceptable amount of testing for the sprint. The product is a financial payment system that pays all the stars of a big TV company. The stakes are huge. The product is bug-ridden; I only have to sneeze and I find a bug. Sometimes testing has had to slip past the two-week sprint because I've found too many bugs, or am taking too rigorous an approach to testing. I have now been put on a performance improvement plan and told that I have to get ALL testing done by the end of the sprint. It seems they just want to push out more and more buggy releases and cut back on the amount of testing on the stories they release. I have a real professional problem with that. Am I being unreasonable?

-- Anonymous

Recommendation 1:

You're not unreasonable for being frustrated with the situation, but there also comes a point at which you need to take a step back and work within the sprint. The team and management need to understand what's really achievable in a two-week sprint and what that means for the product. You're a quality advocate with specialist knowledge, but quality shouldn't fall solely on your shoulders. Testing spilling over into the next sprint suggests the team needs to be discussing story sizes and the amount of work they're pulling into each sprint. It's not about how much work the devs can get through in a sprint; the team should be focusing on how much quality work they can successfully release together.

Do you have regular retros to discuss team dynamics, sprint successes and failures, and points of contention? Do the devs code-review one another's work before it's merged to the test branch? Do the devs write unit tests? Do the devs demo to you that the acceptance criteria have been met, so you can focus on more in-depth testing? When you raise bugs with the team, how are they triaged, and who makes the judgment call on whether to fix them?

-- Anonymous

Recommendation 2:

I'm in a similar situation: sole tester for a business that is constantly pushing hard to release everything at once without giving much thought to the resources available for testing, then complaining when the product is riddled with bugs (surprise, surprise). Are you involved earlier in the development process? Giving scenarios to the devs so they account for them up front, rather than you catching them during the testing phase, may help cut the bug count back. Also, do you know whether this is a dev problem, or whether it's the requirements that contain the bulk of the bugs?

-- Daniel

Recommendation 3:

I am/was in a similar situation: not enough testers, devs not willing to test, 1000 things that absolutely needed to be in the next version, and if anything took too long or a bug was missed, it was QA's fault. I got the help of my manager to get the message across that QA can't do everything, and definitely not all at the same time. We created a priority list that really helps. It gives the product team a way to define what is more important than what. It's also great for showing people how much is on our plate, and for making them understand that item 10 is probably not going to be finished this week. We let them choose how we spend our time, and it's clear that we then can't spend it on something else. We are transparent about the risks of reducing test time as well. It's not easy, but I'm getting better at letting other people accept the risks they took instead of making them mine.

-- AnnaB

Recommendation 4:

The first thing that jumps out at me here is: do the devs do any testing at all? It reads as if you're being used as a scapegoat for a larger problem.

-- Heatherr

Recommendation 5:

And I think you've hit the nail on the head there: faster doesn't mean better. I've been the sole tester for a team of devs, and ultimately there's only so much you can do. I often refer people to the classic triangle: quality, speed, and cheapness, and you can only pick two. If you want it quick and cheap, you sacrifice quality. If you want quality quickly, it's not going to be cheap. If you want quality cheaply, it's not going to be particularly quick. People forget that.

It really does sound like the problems you're experiencing are deeply rooted in how the team dynamic has been established. All estimates should reflect the full complexity of the work, not just the dev portion of it, and sprint planning should be mindful of that complexity. By excluding you they're being utterly naive and/or unreasonable; it's no wonder your workload is excessive.

The test environment should be as close to live as possible: if live deals with multiple data sets at a time, then the test environment should too. Why only test how it handles single data sets if that's not how it operates in real life? And if you're finding bugs in the process of testing the story, then you've found a bug that's relevant and in need of raising. The team can triage whether it's a now, next, or never fix, but not raising it just creates future issues.

I'm kinda glad they didn't split it off, but they're missing the point, which is that complexity and story size estimation should consider both implementation and testing. I'm usually Manchester-based, and 50k is decent, but it doesn't equate to getting a superhero. It really does sound like a conversation is needed to explain to the team (and whoever created that infuriating performance review for you) that faster means sacrificing quality if they're not doing appropriate sprint planning or assisting with basic-level testing. As has been said already, please know that these issues are not a reflection on you as a tester.

-- Anonymous
