Why do UI test automation projects often fail? With many automated test frameworks, getting started with your first few tests is the easy part. Building a sustainable, flexible test suite that gives you valuable data on the quality of your applications can be daunting to say the least.
I’ve witnessed dozens of UI test automation projects over the years that make the same critical mistakes time and time again. I’ll walk through them and provide some practical steps to help you avoid these pitfalls.
What is success?
Let’s start by defining success as an automation suite that:
- Highlights regressions during development
- Gives you valuable information about release readiness
- Is ‘cheaper’ than the alternatives
So what pitfalls cause us to fall short of these criteria?
1. Burdened by unrealistic expectations
Think about what typically happens when teams start a new UI automation project. Following the usual getting-started guide, they tackle a key use-case with their first test. Success comes quickly, and if they have previously only automated testing of application logic, this single test additionally validates previously untested behaviour such as:
- Proving that the app and its dependencies can be built and deployed
- Exercising application startup and initialisation
- Confirming that the basic navigation flows work
And other complex interactions that can only be properly tested through the deployed UI. Plug this single test into your Continuous Integration system and you could argue you have immediately ticked all of our success criteria.
But don’t let this huge shot in the arm you get from the first test fool you into thinking you’ve cracked it. In truth, you won’t write another test case that delivers anywhere near the value of that first test. You’re into diminishing returns straight away.
Your first test case delivers the best return you will get from test automation
Yet people tend to get carried away. The engineer who kicked it all off just got a big ego boost. Don’t be surprised if they go on to write a ton of tests on top of an increasingly unstable automation suite. In truth, you haven’t figured out how to do test automation at scale yet. You haven’t finished your POC.
Consider the value of automation at every step
The Solution: Continually assess the value of your test suite as it grows. Don’t let early success corner you into going too fast. Don’t ruin the value delivered by your early test cases by introducing instability or by chasing arbitrary targets.
Be pragmatic. Not compromising on quality is one thing, but you should be prepared to compromise on how you evaluate that quality. Aspire to the test pyramid and use other kinds of testing to supplement your UI automation suite.
2. Separation of development and test
Who are your decision makers in relation to test automation? At any given point in a project, there is an optimal number of test cases that maximises the value you can get out of test automation. For a complex project it’s almost certainly not 100%.
As you start to build up your automation suite and learn to overcome problems around slowing feedback, scalability and growing complexity, you’ll face many decisions about what to automate and what not to. You may also face external pressures brought on by the early success described above, such as test automation bonus/performance objectives or pressure to recruit full-time test engineers.
Assessing the risk of gaps in your testing with those who understand the code base well is essential. If the decision-making happens separately from where the knowledge is, then you’re likely to be making the wrong decisions.
The Solution: If you incentivise people to write more UI tests, either through bonus targets or by making it their job, that is what they will do, regardless of whether it is the right thing. Instead, have your core engineering team lead test automation to ensure you make educated, value-based decisions about your test suite.
A non-developer influence is important, but embed that influence in your teams; don’t hand off testing to an external group. Knowledge of the code base helps when assessing the risk of choosing which types of testing to adopt.
3. It takes effort to interpret the results
50 tests run, 50 tests passed. What does that mean in terms of whether I’m ready to release my product or not? Anything that gets in the way of a business requirement being directly verified in a test case adds waste. If our test automation suite is to deliver value it needs to communicate information about our requirements that we can instantly understand. We don’t want to have to spend time mapping requirements to tests or vice versa.
The benefits of capturing BDD scenarios to define acceptance criteria and the scope of stories are well understood. But too often, when the story is signed off, those scenarios are forgotten or left to rot.
Always capture and maintain your project requirements
The Solution: Always capture your requirements and maintain them over time. Having a full set of requirements for your application is the starting point for knowing what your test results are telling you. Store your requirements in source control alongside your code. For any version of your application have the ability to produce a corresponding set of requirements.
With BDD your requirements are your test cases
Using BDD-style Gherkin syntax, that full set of requirements can start out as a complete manual regression test suite. Gradually automate these test cases using tools like SpecFlow. This will get your tests talking in the language of your business and lead to conversations about quality based on requirements.
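For example, a requirement captured as a Gherkin scenario might look like this (the feature and step names here are hypothetical, purely for illustration):

```gherkin
Feature: Account login
  As a registered user
  I want to log in with my credentials
  So that I can access my account

  Scenario: Successful login with valid credentials
    Given I am on the login page
    When I enter a valid username and password
    And I press the login button
    Then I should see my account dashboard
```

Kept in source control next to the code, a file like this reads as a manual test script from day one, and becomes an automated test case once a tool such as SpecFlow binds each step to automation code.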
4. Inadequate tooling
Depending on your target platform, you may have a wide or extremely limited range of possible UI automation frameworks to choose from. You can expect them to vary hugely in quality, although in my experience most tend to appear to be up to the task at first glance.
This only makes it harder to establish whether a given tool is worth your time. Typical problems with UI automation tooling include unreliable interactions (eg. occasionally missing a click action), a lack of features, inability to scale tests reliably or failure to return to a known good state.
The Solution: Don’t take early success to mean that your tooling is up to the job; how it helps you scale over time and its reliability under stress will be the true test.
Regardless of what options are available, set high standards for what you expect of a UI automation framework. If your test runs start to fall apart as you put the framework under more pressure, cut back your test suite to a set that it can handle comfortably, to maximise the value of your automation while you consider other options.
To make it all worthwhile you need to be investing time learning more about your own application, not enhancing or patching up a framework that isn’t up to scratch.
5. The tests are not properly maintained
Like any code base, your test suite will degrade over time if not taken care of. If you think that the quality of your test code doesn’t matter, then don’t be surprised if your tests become difficult to change, hard to interpret or lead to unreliable results.
Tests can fail for all sorts of reasons, not necessarily because the software under test is faulty. Poor quality tests can result in failures due to race conditions, invalid assumptions (eg. about dynamic content) or external dependency changes. How seriously should a team take these things? And how do you decide who deals with the problems?
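As a sketch of what fixing one common race condition looks like in practice, the helper below polls for a condition instead of sleeping for a fixed period. The `wait_until` function and the simulated page are hypothetical names, shown here in Python purely for illustration:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll a condition until it returns a truthy value or the timeout expires.

    Replaces fixed sleeps, which fail when the app is slower than expected
    (a race condition) and waste time when it is faster.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")

# Simulated page state, standing in for a real UI under test.
_page = {}

def load_page():
    _page["welcome_banner"] = "Welcome!"

load_page()
# Wait for the banner to appear rather than assuming it is already there.
banner = wait_until(lambda: _page.get("welcome_banner"))
```

A helper like this also makes the test’s assumptions explicit: the timeout documents how long the team considers an acceptable wait, rather than burying it in scattered sleep calls.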
Treat automated test failures as though they were application defects
The Solution: A test that isn’t trusted by the team adds no value; worse, it might be hiding a real application defect that goes unnoticed because it has ‘cried wolf’ so many times before. Assume the worst case for failures – treat them as application defects and address them with the same urgency.
Treat test code like production code
Aspire to the same standards for your test code as your production code. Consider your architecture, patterns and best practices. The Page Object pattern is one commonly used approach. Dependency Injection can help you write cross-platform test cases.
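A minimal sketch of the Page Object pattern, using a hypothetical fake driver (in Python, so the example stands alone). The driver is injected into each page object, which is also the hook Dependency Injection gives you for running the same test against different platforms:

```python
class FakeDriver:
    """Stand-in for a real WebDriver so this sketch is self-contained."""
    def __init__(self):
        self.fields = {}
        self.current_page = "login"

    def type(self, element_id, text):
        self.fields[element_id] = text

    def click(self, element_id):
        # Simulate a successful login when the form is filled in.
        if element_id == "submit" and self.fields.get("username"):
            self.current_page = "home"

class LoginPage:
    """Page Object: tests call intent-level methods, never raw selectors."""
    def __init__(self, driver):
        self.driver = driver  # injected, so platforms are swappable

    def login_as(self, username, password):
        self.driver.type("username", username)
        self.driver.type("password", password)
        self.driver.click("submit")
        return HomePage(self.driver)

class HomePage:
    def __init__(self, driver):
        self.driver = driver

    def is_displayed(self):
        return self.driver.current_page == "home"

driver = FakeDriver()
home = LoginPage(driver).login_as("alice", "secret")
```

Because the test only sees `login_as`, a change to the login screen’s markup means updating one page object, not every test that logs in.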
Consider quality checks such as pair programming and code review for your test code.
Conclusion: It doesn’t have to be perfect
Feeling as though you have to perfect the process can be a burden. Occasionally it is ok to:
- Have days of failing tests – But don’t let it get out of hand, always stay a small step away from a green run.
- Find there are more important things to do than to fix up test code – If something has to give, then it might have to be test code, but make that the exception, not the rule.
- Release with failing tests – But do this consciously, knowing exactly what the results are telling you, and having acted appropriately (eg. by filling in temporarily with manual testing).
Don’t be a slave to your tests
Don’t feel pressured into believing that your tests should always be passing, or that you can’t release without them. Try to foster a culture where your tests aren’t there for someone standing at the end of a production line holding a clipboard.
Make sure that they work for you, that they give you information from which you can make decisions. That is the real value in test automation.
More: Watch my talk on this content: Test Automation in Practice with Xamarin Test Cloud.