What problem are you solving by automating the GUI?

Case Study:

Hello, I'm very curious how different people describe what specific problem they're trying to solve by automating the GUI in their various contexts. For instance:

  1. Is there a specific risk you're trying to address?

  2. What is that risk, and how do you use automation to manage it?

  3. Do you test by standardised formal test cases, and automate those?

Recommendation:

So, I take a different approach to automation than I do to manual testing. I'd say that the risk is the same, but the way of going about addressing it is different. I don't just convert my tests into scripts; I pick out the individual components that need to be addressed, and then I automate them. What I end up with is sometimes hundreds or more automated scripts that would make up one (or even part of one) traditional test case. A lot of this is to reduce the inherent risk of doing automation. I have a lot more thoughts if you'd like particulars, and this one talk might help point you in the right direction as well.

-- Max Saperstone
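
To make that decomposition concrete, here is a minimal sketch in Python/pytest against a hypothetical application (the base URL, endpoints, and seeded test user are purely illustrative assumptions): one traditional "register, log in, update profile" test case split into small, independently runnable checks.

```python
# Minimal sketch (hypothetical endpoints): one traditional "register,
# log in, update profile" test case split into small independent checks.
import requests
import pytest

BASE_URL = "https://example.test"  # hypothetical application under test


@pytest.fixture
def session():
    """Fresh HTTP session per check so the checks stay independent."""
    with requests.Session() as s:
        yield s


def test_registration_returns_created(session):
    # Checks only the registration component.
    resp = session.post(f"{BASE_URL}/api/users",
                        json={"email": "new.user@example.test", "password": "s3cret"})
    assert resp.status_code == 201


def test_login_returns_auth_token(session):
    # Checks only the login component (assumes a seeded test user).
    resp = session.post(f"{BASE_URL}/api/login",
                        json={"email": "existing.user@example.test", "password": "s3cret"})
    assert resp.status_code == 200
    assert "token" in resp.json()


def test_profile_update_is_persisted(session):
    # Checks only the profile-update component.
    session.post(f"{BASE_URL}/api/login",
                 json={"email": "existing.user@example.test", "password": "s3cret"})
    resp = session.put(f"{BASE_URL}/api/profile", json={"displayName": "New Name"})
    assert resp.status_code == 200
    assert session.get(f"{BASE_URL}/api/profile").json()["displayName"] == "New Name"
```

Each small check can pass, fail, or be skipped on its own, which keeps failures easy to diagnose and speaks to the maintenance risk mentioned above.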

Case Study cont...

As you pointed out, a tester will spot 1,000,000 things, whereas the automated check will only see what it's explicitly scripted to evaluate. But in addition, a tester might follow any of those observations down a new or modified path that leads to the discovery of a serious issue, which is quite impossible to do with automation. In that light, we almost go full circle back to my original question. What specific problem(s) are we trying to address by implementing automated checking? Is the problem that testers are bored of executing the same test cases? Why are they executing those same test cases over and over? Is the problem that there is a high likelihood that previous functionality is going to unexpectedly and randomly break? Doesn't that point to a much deeper issue that should be resolved ASAP at its core, rather than creating an expensive, extraneous apparatus?

In exactly the same way that your talk emphasizes trust in automated checking, doesn't a suite of checks run over and over by the test team indicate that we don't trust the developers or the stability of our application?

To clarify the very last paragraph... of course we don't trust that things will function 'as they should'; that's why we test. I mean, specifically, random things that are unrelated to what we would determine needs to be tested for a given work item/ticket/issue, based upon a study of the change, weighted with our knowledge of the domain and other contextual factors.

Recommendation:

So, a few thoughts on your response. I think my point wasn't to just blindly trust your tests, but instead to develop a method so you can verify their proper execution. And the tests, of course, then verify your software. I wouldn't say that things "randomly break", but I find that developers often aren't aware of the full implications of their changes. One thing I've seen (and implemented, and loved) is running only the related tests when some code is changed. Which brings me to the why. The obvious one is to ensure that the software is still working properly. But why automate? It's to find out faster. Have you seen the cost to fix a bug based on when it's found? Earlier = cheaper.
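
To make "running only the related tests" concrete, here is a minimal sketch (Python; the directory-to-marker mapping, area names, and base branch are hypothetical) that picks pytest markers based on which files changed.

```python
# Minimal sketch: choose which pytest markers to run based on the files that
# changed, assuming a hypothetical mapping from source directories to test areas.
import subprocess

# Hypothetical mapping: which test area covers which part of the codebase.
AREA_FOR_PATH = {
    "src/checkout/": "checkout",
    "src/accounts/": "accounts",
    "src/search/": "search",
}


def changed_files(base: str = "main") -> list[str]:
    """Files changed relative to the base branch, via `git diff --name-only`."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]


def markers_to_run(files: list[str]) -> set[str]:
    """Map changed files onto the test markers that cover them."""
    return {
        area
        for f in files
        for prefix, area in AREA_FOR_PATH.items()
        if f.startswith(prefix)
    }


if __name__ == "__main__":
    markers = markers_to_run(changed_files())
    if markers:
        # e.g. `pytest -m "accounts or checkout"` runs only the related tests.
        subprocess.run(["pytest", "-m", " or ".join(sorted(markers))], check=False)
    else:
        print("No mapped areas changed; run the full (or a smoke) suite instead.")
```

The same idea scales up into proper test-impact analysis, but even a crude mapping like this avoids re-running everything on every change.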

Automation empowers finding things earlier and faster. Even if you just have a few tests, you can speed up QA's process. I've seen organisations where, when QA gets the software, between 10 and 80% of the time the software won't come up, or is so incredibly broken that you can't even log in. QA then needs to roll back, wasting lots of time and potentially losing tests/data. Just having a few tests can fix this, and I've seen it work wonders. And of course, if you have a lot, you can save time on regression testing. Because, back to your earlier point and my initial one: yeah, stuff breaks that was previously working properly. And especially if you work in a regulated environment, it all needs to be retested.

-- Max Saperstone
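
As a concrete illustration of catching a "won't even come up" build before QA starts, here is a minimal smoke-check sketch (Python/pytest; the staging URL and login path are assumptions): run it on every build and reject the build if it goes red.

```python
# Minimal smoke-check sketch: fail fast if a build is too broken to test,
# assuming a hypothetical deployment URL and login page.
import requests

BASE_URL = "https://staging.example.test"  # hypothetical environment under test


def test_application_comes_up():
    # The landing page should at least respond successfully.
    resp = requests.get(BASE_URL, timeout=10)
    assert resp.status_code == 200


def test_login_page_is_reachable():
    # If we cannot even reach the login page, reject the build before QA starts.
    resp = requests.get(f"{BASE_URL}/login", timeout=10)
    assert resp.status_code == 200
    assert "login" in resp.text.lower()
```

Even two or three checks like this, run automatically on every delivery, can turn "QA can't even log in" from hours of wasted setup and rollback into an immediate red build.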

Case Study cont...

This is very helpful, and I also hope I'm not rambling too much. I think the more I try to understand these concepts and not take things at face value, the better a tester I'll be. From your last reply, I think I see a couple of answers:

  1. Automation serves as a method to verify the proper execution of test cases.

  2. Developers are often unaware of the full implications of their changes, so having a lot of automation might flag a bug somewhere totally unexpected that we otherwise might have missed.

  3. Developers can release very broken software, and automation decreases the feedback time needed to flag a bad build.

  4. Automation can decrease the execution time of a general testing suite aimed at finding regressions/bugs in pre-existing functionality.

The fundamental question I have is: is automation itself (time spent to write the framework or set up the tool, write all these scripts, maintain them, solve the multitudinous non-testing problems inherent in automating functionality in a non-human fashion, data and interdependency problems, maintenance and cost of infrastructure, etc.) actually helping us in these areas? Given our great observation, skill, intuition, and curiosity as human beings, would it be better if we executed context-based strategic regression testing ourselves? Is that difference of maybe a couple of minutes to identify a badly broken build worth the effort to automate test cases? Maybe a small handful? Is the chance that a developer made a horrible change that broke some other totally unrelated area worth spending the money to create and feed this automation machine? If that's happening a lot, isn't that fact itself indicating a potentially serious problem? Wouldn't we want to address that kind of mistake at a higher level, to try to prevent it from even happening? Is there a chance that testing as a strategic art and science is taking a back seat to automation, which is mostly a development practice, and therefore not necessarily testing much at all?

Again, on a mission to discover and learn here: these are the questions I'm wrestling with and do not have answers to. I feel there's a strong chance that if I had worked on a team that had a great automation solution, checking off those boxes from your talk, the answer would be clear. That's why I phrased the original question to try to better understand the specific problems we face as testers where we'd want to apply automation. So far, the problem seems to be that an application is so unstable that we need to constantly check apparently unrelated functionality for each ticket/change order to make sure it's not accidentally broken.

Sorry this is a huge rant. I just hope to learn and be a better tester, and these are my biggest questions right now.

Recommendation:

So, to answer your fundamental question: I'd say yes, but others would say no. Unfortunately, I think it's dependent on the quality of your work. Some folks/organizations write great automation, with low maintenance and low overhead, that can rapidly identify problem areas. This provides a tremendous ROI. Others do not, and spend much more time and money on automation than they should. They then claim that automation doesn't and can't work. I believe it's because they're following bad automation practices. I've seen and worked with dozens of organizations in the latter category, and been able to fix a lot of them. I believe it comes down to a lot of the same principles as writing good code. Some people can write good, maintainable code that provides value to their customer, and some can't. The problem on the automated testing side of things is that there are far more bad testers, and that overall gives automation a bad name.

-- Max Saperstone
