Tempcover’s Testing Tactics

What is our testing mission statement?

  • Ensure that we deliver what the customer needs to the highest quality.
  • Plan, build, and execute tests so that we find issues as early as possible, building quality into our products from the start rather than late in the software development lifecycle, where doing so is far less effective.
  • Make sure that all software delivered has appropriate monitoring and checks/regression testing after it goes to production.

When should we test?

Firstly, we need to understand that testing is part of the software development process. It is not a separate function from “development” but a fundamental part of the whole software development lifecycle.

So before we even think about the development (of which testing is a part) of a product backlog item, there are a few things that need to be in place.

  • Full acceptance criteria have been set and agreed upon
  • Acceptance criteria are fully understood by the development team
  • The product backlog item is given an estimation of the effort required to fulfil the acceptance criteria. This score will take into account both dev and test effort.

What are acceptance criteria? Mike Cohn (one of the founders of the Scrum Alliance) defines acceptance criteria as “notes about what the PBI must do in order for the product owner to accept it as complete.”

Without clear and testable acceptance criteria we will always have issues, as a PBI will often mean different things to different people. This means everyone is likely to be aiming for a different end game for each PBI, which inevitably leads to scope creep and missing the objective of the change.
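Testable criteria translate directly into automatable checks. As a sketch, here is an invented acceptance criterion for a temporary-insurance quote form (the rule and the function are illustrative only, not Tempcover's actual business logic): "A policy duration must be between 1 hour and 28 days inclusive."

```python
# Hypothetical acceptance criterion (invented for illustration):
#   "A policy duration must be between 1 hour and 28 days inclusive."
# Because the criterion is precise, each clause becomes an explicit check.

MAX_HOURS = 28 * 24  # 28 days expressed in hours


def is_valid_duration(hours: int) -> bool:
    """Return True if the requested policy duration is acceptable."""
    return 1 <= hours <= MAX_HOURS


# Every boundary in the criterion gets its own check:
assert is_valid_duration(1)              # minimum boundary
assert is_valid_duration(MAX_HOURS)      # maximum boundary
assert not is_valid_duration(0)          # just below minimum
assert not is_valid_duration(MAX_HOURS + 1)  # just above maximum
```

A vague criterion like "durations should be sensible" offers nothing to assert against; the precise version above leaves no room for each team member to aim at a different end game.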

Now we know what the customer needs, do we wait until it’s been fully built before we start testing it?


No. Waiting until everything is built is an inefficient, slow way of testing. Testing smaller components as they are developed will:

  • Improve product quality
  • Enable faster, more regular releases
  • Make identifying and fixing issues easier, as you will be targeting smaller components
  • Reduce the risk of finding bugs just before release, helping to avoid release delays.

Let’s use a non-software example.

Say that your company builds cars. A customer comes to you with a design and specification for a car they would like you to build for them.

You would not build the whole car and take it for a quick test drive around the block before handing the keys to the customer. If the car does not start, or stops working after a short time, it would be difficult to identify which part of the car the issue is in. You may have to take large parts of the car apart to reach the part that needs fixing, and you risk affecting other “working” parts.

You would want to test each individual component before it even goes onto the car.

Take the oil pump as an example. It’s a piece of the engine that is hidden from the customer, cannot be accessed by the customer, and the customer may not even be aware that it exists. But without it, the engine would not work.

So once the oil pump is built, before it is attached to the engine you would:

  • Make sure that oil can be pumped into and out of it, checking with new oil, old oil, dirty oil, clean oil, hot oil, cold oil, and various grades and qualities of oil.
  • Maybe even test it with something that is not oil at all.
  • Leave it running for long periods.
  • And so on.

All of the above would make sure that the oil pump itself works as per its intended design before it gets attached to the rest of the engine, and you don’t move on to the next part until it is all working as intended.

This approach is also true of software development. Fully testing each new function before it is integrated into the overall system helps to improve overall quality, builds robustness, and helps to reduce the chance of issues arising late in the software development lifecycle.
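The oil-pump checks above can be sketched as a unit test in code. The `OilPump` class and its limits are invented purely to mirror the analogy; the point is that the component is exercised in isolation, including its boundaries and failure cases, before anything is bolted to it.

```python
# A minimal sketch of "test the component before it goes on the car".
# OilPump and its viscosity limit are invented for illustration.

class OilPump:
    MAX_VISCOSITY = 100  # hypothetical thickest oil the pump can move

    def pump(self, viscosity: int) -> bool:
        """Return True if the pump can move fluid of this viscosity."""
        if viscosity <= 0:
            # "Something that is not oil at all" - reject it outright.
            raise ValueError("viscosity must be positive")
        return viscosity <= self.MAX_VISCOSITY


# Exercise the component in isolation, before integration:
pump = OilPump()
assert pump.pump(10)        # clean, thin oil
assert pump.pump(100)       # thickest supported oil (boundary)
assert not pump.pump(150)   # too thick: pump declines rather than breaks
try:
    pump.pump(0)            # invalid input is rejected, not ignored
except ValueError:
    pass
```

Only once every check like this passes would the "pump" be attached to the rest of the "engine", exactly as in the car analogy.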

Once all the component parts are built and connected together, that’s when we hand them over to the customer/product owner so that they can do UAT (user acceptance testing). They will manually go through the software as an end user would. By this point the acceptance criteria have been confirmed as fulfilled, but even with the best planning there will still be times when what the customer asked for has been mistranslated and development has built something that is not what the customer actually wanted. And sometimes what sounds and looks good on paper may not work in the real world, and that will only be discovered in UAT.

How should we test?

Testing by its very nature is a very repetitive task.

  • Test something once
  • Test it again after bugs are fixed
  • Test again after 2nd bug is fixed
  • Test again after developer refactors their code
  • Test again in different environments
  • Test again after merging code with another team
  • Test again after another bug fix
  • Etc

This is why we aim to use automation as much as possible. Why repeat a task 100 times when you can get a computer to do it for you?

A computer will not get tired, will not get bored, will not get distracted, and will not have personal opinions. A computer will constantly, quickly, and accurately check exactly what you have told it to check. As humans we will get tired, and bored, we will slow down, we will make assumptions and we will miss things.
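The repeated runs listed above are exactly where automation pays off: the same suite re-runs identically after every bug fix, refactor, and merge. As a sketch, here is a tiny automated suite for an invented pricing helper (`add_ipt` and the 12% rate are assumptions for illustration, not a real Tempcover calculation):

```python
# A tiny automated test suite. One command re-runs every check,
# identically, after each fix, refactor, or merge.
# add_ipt and its 12% rate are invented for illustration.

def add_ipt(premium: float, rate: float = 0.12) -> float:
    """Add Insurance Premium Tax to a premium (rate is an assumed figure)."""
    return round(premium * (1 + rate), 2)


def test_standard_rate():
    assert add_ipt(100.0) == 112.0


def test_zero_premium():
    assert add_ipt(0.0) == 0.0


def test_rounding():
    # 19.99 * 1.12 = 22.3888, rounded to the nearest penny
    assert add_ipt(19.99) == 22.39


# In practice a test runner or CI job does this on every change:
for test in (test_standard_rate, test_zero_premium, test_rounding):
    test()
```

Whether the suite runs once or a thousand times, the computer executes exactly the same checks with exactly the same rigour, which a human repeating the task cannot promise.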

So automation is the only testing required to develop good quality software?


No. Automation is useful and an absolute requirement for accurate software with a short lead time. It does, however, have its limitations. Automation is great for confirming that the software fulfils the acceptance criteria, i.e. it does exactly what the customer wants it to do. But what about things that the software might do that the customer might not want it to do? Automation can cover this to some extent, but not fully: you won’t know when writing the automation how the software is going to look, feel, and behave once it has been integrated. This is where manual exploratory testing comes into the equation.

Exploratory testing is not just about wildly clicking, mashing the keyboard, or trying completely random things. It should be targeted testing. The tests should be created and executed based on existing business and system knowledge, and lessons learnt from previous projects. Testers should approach this with knowledge of common pitfall areas of both the system under test and similar systems worked on previously. It should be open and flexible. You might see the system react in a certain unexpected way while doing an exploratory test, which may make you think ‘OK, it did that when I entered that and clicked here, so what would happen if I try that after having previously done this?’. It’s that reactive, intuitive, flexible, and instinctive testing that a computer running automated steps is not going to be able to do.

Exploratory testing is the testing approach that will find the subtle issues that may only affect a handful of users but may have a severe impact on them. But it is the fixing of these subtle or low-frequency issues that will really drive up the quality of the final product.

It is also very important that exploratory testing is not endless. You don’t want to go down a rabbit hole from which there is no return. There will always be more testing that could have been done, but the product also needs to be delivered to market eventually. With strong automation and good-quality, targeted exploratory testing, the risk of severe issues reaching production is vastly reduced.

Who should test?

Testing is not solely the responsibility of the test team. Testers dictate what needs to be tested, how, why, and when, and they will do most of the testing, but everyone has a role to play. Developers, for example, are the perfect people to help create automation tests, which require code to be written for them to run. Developers can also save time and wasted effort by giving their work a brief check over before checking it in. As already mentioned, product owners should be performing UAT. End-of-sprint reviews can be used as an extended UAT phase where anyone in the company can observe, check, and offer opinions. So while it is ultimately the test team’s responsibility to ensure that appropriate and accurate testing has been done, they do not have to be the ones performing every click and button press.


To sum up, testing should:

  • Be done as early as possible
  • Be done in small chunks
  • Be automated as much as possible
  • Have involvement from everyone in the team
