It’s well established that test automation is a required practice if you want continuous, rapid delivery of your software product. However, it’s essential to do it properly, so that delivery becomes smoother and you get the maximum benefit out of it. Here are some areas to focus on to improve the quality of your automation.
- Watch out for “Flaky” tests
I often say, “Randomly failing tests are as good as not having any tests – in fact, they’re even worse.” Make sure the tests you write are reliable – i.e. you should be confident that, if a test fails, it’s due to a bug in the product and not in the test itself. No matter how much hard work we put into automating a lot of tests, and how quickly we get them in place, if the tests are not of good quality then they are effectively useless, as they cannot fulfill their purpose of providing accurate feedback.
Here are some of the common ways to immediately improve the reliability of your tests:
- Avoid static waits
- “Waits” or “Sleeps” are your enemy. Introducing a pause might seem the quickest win to fix a flaky test, but it just postpones the inevitable. For example, how can you say for certain how much time a page will take to load on different machines? Today, five seconds might be fine. Tomorrow, it might be ten. And after that…?
- Make the test more intelligent by looking for acknowledgements from the system. The most efficient way is to make your test event-driven; wait for your system to tell you when something has happened, rather than continually asking it. For example, you could listen for Windows registry or file change notifications, or get feedback from the Windows Event Log.
- As a last resort, prefer using retries over static waits. At least the test will continue once it has met the condition, rather than just waiting unnecessarily.
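To illustrate the retry idea, here is a minimal polling helper in Python (the names and defaults are my own – adapt them to your framework). It checks a condition repeatedly and lets the test resume the moment the condition becomes true, instead of sleeping for a fixed worst-case duration:

```python
import time

def wait_until(condition, timeout=10.0, interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` expires.

    The test resumes as soon as the condition is met, instead of always
    blocking for the worst-case duration as a static sleep would.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout}s")
        time.sleep(interval)
```

A call such as `wait_until(lambda: page.is_loaded(), timeout=30)` (where `page.is_loaded` is a hypothetical check from your own framework) replaces a blind thirty-second sleep and typically returns far sooner.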
- Entry-Exit rule
- Whichever state you ENTER (Setup) in the test, you should EXIT (Tear-down) the test in the same state.
- Always ensure that tests are well isolated from one another. This allows you to troubleshoot or run them independently.
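One way to apply the Entry-Exit rule is a set-up/tear-down wrapper that is guaranteed to run even when the test body fails. A sketch in Python using a context manager (test frameworks such as pytest offer fixtures for the same purpose; the scratch-directory scenario is illustrative):

```python
import os
import shutil
import tempfile
from contextlib import contextmanager

@contextmanager
def isolated_workspace():
    # ENTER (set-up): create a fresh scratch directory so the test
    # starts from a known state.
    workdir = tempfile.mkdtemp(prefix="test_ws_")
    try:
        yield workdir
    finally:
        # EXIT (tear-down): remove everything we created -- even if the
        # test failed -- so the next test sees exactly the same state.
        shutil.rmtree(workdir, ignore_errors=True)
```

Because the clean-up sits in a `finally` block, the test exits in the same state it entered regardless of the outcome, which is what keeps tests isolated from one another.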
- Identify and minimize dependencies
- This can be tricky, and needs good developer–automation coordination. Try to make the tests as independent of their surroundings as possible. A test should concentrate only on the piece of functionality it’s intended to verify. For example, consider the scenario where you want to verify that the web service in your application can receive and post messages properly. For that, you don’t have to write an end-to-end test involving the UI, database and so on; you can test it with WebClient Get/Post calls at the API level.
- Dependencies come in many forms: environment dependencies (e.g. SQL Server, operating system version/architecture), product component dependencies (e.g. Product A integrates with Product B), UI dependencies, third-party dependencies and so on. Categorizing the tests properly can help as well.
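To make the API-level idea concrete, here is a hedged sketch in Python using only the standard library (rather than .NET’s WebClient). The echo service below is a stand-in for your application’s web service; the point is that the test posts a message and checks the response at the API level, with no UI or database involved:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    """Stand-in service that echoes back any JSON posted to it. In a
    real suite this would be your application's web service endpoint."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep test output quiet

def post_json(url, payload):
    """Post `payload` as JSON and return (status code, decoded body)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status, json.loads(resp.read())
```

A test built this way runs in milliseconds and fails only when the service itself misbehaves, not when some unrelated UI element moves.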
- Short feedback loops
It’s very important that tests are quick enough to provide the feedback you need. Ideally, tests should run after every commit. If a developer has to wait hours for feedback on a change or bug fix, they are going to lose interest in the tests. And if your test build takes an hour to run, you can only run it a maximum of eight times in a working day, whereas developer commits will be much more frequent. So what can we do?
- Parallelize tests
- It’s not always possible to reduce test time by optimizing the tests themselves – they may already be good enough – and as you add more tests to the suite, the feedback loop is bound to grow. In that case it may be possible to split the tests, run them in parallel, then collect and merge the results. This is similar in concept to multithreading, but on a grander scale.
- Make sure the tests are easy enough to run locally if required.
- Again, don’t use static waits, as they can unnecessarily increase the total test time.
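As a toy illustration of the split–run–merge idea (real tools such as pytest-xdist do this across processes and machines), one possible Python sketch, assuming each test is an independent zero-argument callable:

```python
from concurrent.futures import ThreadPoolExecutor

def run_in_parallel(tests, max_workers=4):
    """Run independent test callables concurrently and merge the outcomes.

    `tests` maps a test name to a zero-argument callable that raises
    AssertionError on failure. Returns {name: "passed" | "failed"}.
    """
    def run_one(item):
        name, fn = item
        try:
            fn()
            return name, "passed"
        except AssertionError:
            return name, "failed"

    # Each worker picks up a test, runs it, and the results are merged
    # back into a single report -- the split/collect/merge step in code.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(run_one, tests.items()))
```

Note that this only works when the tests are well isolated from one another – which is exactly why the Entry-Exit rule above matters.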
- Layered approach of Automation
Consider sorting the tests into proper categories. It’s good to have different layers of tests, rather than a single build that always runs every test.
- Add tests at different levels according to the risk a particular build is intended to address. For example, it’s not necessary to run all the end-to-end tests on each commit, but it’s extremely important to run smoke tests with every dev build – and that’s why the smoke test build should be very light, like smoke.
- Generally we categorize tests into the following layers:
- Smoke Tests: These should be just a handful of light tests – enough to make sure the dev build hasn’t broken anything basic and is good enough to be picked up for further testing. They should run on every developer check-in.
- Component Acceptance Tests: These are mainly API tests. The intention is to verify the individual components separately. A lot of tests written during story automation fall into this category. Run them on successful Smoke Test builds.
- Component Integration Tests: These verify that the different components talk to each other properly. It’s entirely up to the team whether to merge the component and component integration suites – that part is optional, as long as all the individual components are covered.
- End-to-End Tests: Keep these few in number, as they are costly in terms of time and resources. Ideally they should exercise the real system and components, following the same steps a manual tester would. Running them overnight is probably enough.
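One lightweight way to express these layers is to tag each test and select by layer per build. Frameworks such as pytest do this with markers (e.g. selecting with `pytest -m smoke`); the registry below is purely illustrative, with made-up test names:

```python
# Illustrative suite registry: test name -> layer tags. Real frameworks
# express the same idea declaratively (e.g. pytest markers).
SUITE = {
    "test_login_page_loads":     {"smoke"},
    "test_api_create_order":     {"component"},
    "test_orders_reach_billing": {"integration"},
    "test_full_purchase_flow":   {"e2e"},
}

def select_tests(suite, layer):
    """Pick the tests that should run in the build for `layer`."""
    return sorted(name for name, tags in suite.items() if layer in tags)
```

The check-in build would run `select_tests(SUITE, "smoke")`, while the overnight build picks up the `e2e` layer – so each build carries only the weight its risk level warrants.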
- Choose the automation candidates wisely
- There is a lot to say on this topic, so for now I’ll leave it to common sense. Good candidates include a mix of high-risk features, frequently executed scenarios, tests that are difficult to execute manually, UATs, and so on.
- Coding best practices
- Good coding skills are as important for an automation engineer as for a full-time developer – an automation engineer DEVELOPS automation tests. So keep brushing up your programming skills and apply coding standards.
- The architecture of the test framework is just as important as the product architecture. It should follow good design principles, for example SOLID.
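As one example of applying a design principle (single responsibility) in a test framework, the widely used Page Object pattern keeps all locators for a screen in one class behind an intent-revealing API. A minimal Python sketch – the `driver` interface here (`type`/`click` methods) is an assumption for illustration, not a real Selenium API:

```python
class LoginPage:
    """Page Object: tests call `login(...)` and never touch locators
    directly, so a UI change is fixed in exactly one place."""

    # Locators live here, and only here (illustrative CSS selectors).
    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "#login-submit"

    def __init__(self, driver):
        # `driver` is any object offering type(selector, text) and
        # click(selector) -- an assumed interface for this sketch.
        self.driver = driver

    def login(self, username, password):
        self.driver.type(self.USERNAME, username)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)
        return self
```

A test then reads as `LoginPage(driver).login("alice", "s3cret")` – it expresses intent, while the page object owns the mechanics.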
- Peer Review
- Last but not least, get your code and tests reviewed, for continuous improvement and to benefit from others’ experience and knowledge.