From CI to CD with Risk-Based Regression Testing

Teams that have adopted Continuous Integration to build their applications multiple times a day can struggle to find a testing strategy that gives them the confidence to move to Continuous Delivery. Typically they start out aiming to automate all tests, but this can be an unrealistic and unhelpful goal.

For a complex application, it is unlikely that automating all test cases for every build is the most effective approach. A risk-based mindset for your regression testing can help you to decide which gaps in your testing – if any – are acceptable.

Eliminating the endgame

Continuous Integration aims to greatly reduce the pain of integrating code by doing it often and in small increments. Continuous Delivery does the same for releasing software, by demanding that every successful build of your software should be a serious candidate for deployment to production.

Make sure that the only reason your next version hasn’t been released yet is a business reason, never a technical or software quality one.

Between application versions, we might create many different release candidates that we choose not to release to live users, perhaps because the target feature set is incomplete. As a development team, you should make sure that the only reason the next version isn’t yet live is a business one, not a technical one. By taking ownership of the quality of all your releases, you can ensure that you are ready to go as soon as the business gives the thumbs-up.

A key aim of any agile software release strategy is to eliminate the endgame – i.e. the effort it takes to get the software live after feature development is complete. This is often the driver that leads teams to assume that automating regression tests for everything on every build is the only sensible goal.

Don’t even try to test everything

While eliminating the endgame is an important consideration, it shouldn’t be pursued at any cost. Any test case that delays the feedback cycle, or that takes significant time to write or maintain, has to earn its place in the regression suite.

What if the risk of regression in some areas is so low that it doesn’t justify the effort of writing and maintaining those test cases? Then we could choose not to run them on every iteration, giving us less to do and helping us go faster and work more efficiently. So even if we could test everything, it’s unlikely that we should.

Assessing the risk

Any time we leave an element of the real system out of an automated test, we introduce the risk of a test that passes even though the system is broken. Take the integration test example sketched below, which sacrifices testing through the user interface for the benefit of fast feedback: there could be a defect in the user interface bindings and the test would still pass.
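
As a stand-in for that example, here is a minimal sketch in Kotlin of what such a test might look like. The BalancePresenter class, its formattedBalance method and the JUnit test are all hypothetical; the point is that the test exercises real formatting logic on the JVM without ever rendering a screen.

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Test

// Simplified production code: formats a (non-negative) balance in pence for display.
class BalancePresenter {
    fun formattedBalance(pence: Long): String =
        "£%d.%02d".format(pence / 100, pence % 100)
}

class BalancePresenterIntegrationTest {

    // Runs in milliseconds on the JVM; no device, emulator or UI rendering involved.
    // If the layout that displays this string is wired up incorrectly, this test
    // would still pass - that is the risk being traded for fast feedback.
    @Test
    fun `formats balance as pounds and pence`() {
        assertEquals("£12.34", BalancePresenter().formattedBalance(1234L))
    }
}
```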

So how big a risk is that? Running through the checklist below might help. If you find yourself answering no to many of the questions, you could consider the risk to be acceptable:

1. Is it trivial to write, run and maintain a test case here?

If so, just write the test and move on.

2. Is there any complexity?

Or are these simple one-liners, perhaps coordination/plumbing/glue code?

3. Has a regression in this area gone unnoticed before?

If so, it suggests that your testing is lacking in this area. I would always advise covering any defect fix with a test – ideally written test-first, to prove that the test catches the defect before you apply the fix (see the sketch after this checklist). Choose the type of testing that provides the fastest feedback, in this order: unit, integration, UI automation, then manual.

4. Has it changed recently, or is it likely to be subject to further changes?

A safety net of tests allows you to refactor or change code with confidence.

5. Would the likely end user impact be high-priority or critical?

Is this a vital area of the application? You are in danger of damaging the confidence of your team and stakeholders if key use cases fail regularly on your daily builds.

6. Would it be hard to identify and apply a fix if it did regress?

Or is the implementation so simple that you can only imagine a small number of ways in which it could regress, all of which would be quick to identify and correct?

7. Is it likely that any regression in this area would go unnoticed?

Would it be easily caught by some other part of your pre-release process – for example, exploratory testing or beta testing?
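
To illustrate question 3, here is a minimal, hypothetical sketch of covering a defect fix test-first. The scenario – a discount wrongly applied to orders below the qualifying threshold – and the DiscountCalculator class are invented for illustration; the test is written first, seen to fail against the buggy code, and only then is the fix (the threshold check) applied.

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Test

// Simplified production code after the fix: the reported defect was that the
// 10% discount was also applied to orders below the 10,000-pence threshold.
class DiscountCalculator {
    fun totalInPence(orderInPence: Long): Long =
        if (orderInPence >= 10_000L) (orderInPence * 90) / 100 else orderInPence
}

class DiscountCalculatorRegressionTest {

    // Written before the fix: it reproduces the reported defect and fails until
    // the threshold check exists, proving that it actually catches the bug.
    @Test
    fun `orders below the threshold are not discounted`() {
        assertEquals(9_999L, DiscountCalculator().totalInPence(9_999L))
    }
}
```

A unit test like this gives the fastest feedback; if the original defect had been in the UI bindings themselves, a slower UI automation test might be the only automated level that could have caught it.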

So how does our user interface example rate against the checklist above? A test through the UI would involve running on a real or simulated device, so it could be considered expensive. The complexity of the bindings is low, a break would be obvious to anyone using that view, and a bug would be quick to identify and fix. Moreover, we know that only a change to the elements involved in the binding could cause a regression, so we only need to retest when that area changes. On the other hand, if the UI were driven by complex view logic – say, a combination of animations – it might tip the balance the other way and become something you’d want to test through UI automation.
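
For comparison, the kind of binding being left uncovered might be no more than a line of glue code. This sketch reuses the hypothetical BalancePresenter from the first sketch above, with a stand-in view class in place of a real UI widget – the sort of plumbing that can only regress if someone edits that one line.

```kotlin
// Stand-in for a real UI widget; in a real app this would be a TextView, UILabel etc.
class BalanceView {
    var labelText: String = ""
}

// The one-line binding under discussion: simple glue code with no branching logic.
class BalanceScreen(
    private val view: BalanceView,
    private val presenter: BalancePresenter
) {
    fun showBalance(pence: Long) {
        view.labelText = presenter.formattedBalance(pence)
    }
}
```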

If you find that you don’t have a lot of code that meets these criteria, look at how you can refactor the code or the tests to simplify them and exclude the costly elements. And if you are still not convinced, you might decide that a small amount of endgame testing just before a release, to silence any doubt, is preferable to carrying that burden through daily development several times a day.

There has to be technical leadership in testing decisions

A common theme here is that technical input is crucial in understanding the trade-offs. Outside influences are important, but testing has to be led by those who know the technologies, the codebase and the history of the project well.

If your architects or senior developers are not involved in the decision-making process around your testing strategy, you risk making ill-informed decisions that restrict your potential as a team.

I’m not suggesting that you compromise on software quality, but if you can compromise on how you evaluate that quality, you might just get a boost to your productivity.
