A good test report not only helps the Test Engineer/Automation Engineer find a failure in the application, but also guides them with supporting details to locate that failure easily, which enables faster communication & quicker resolutions, especially in the DevOps world. It also helps to standardize your quality assurance process.
In this article, let us try to learn the qualities of an automation test report that contribute to the overall quality of the software application under test. We are not referring to any particular automation tool or its report format; we are simply appreciating qualities that already exist (including in reports you create manually) & exploring how they can be improved or better utilized in your everyday reporting structure.
What info should a good test report capture?
NOTE: We are not thinking only about the immediate beneficiaries of the reports, but about all the stakeholders of the project.
1. Number of Tests Passed/Failed
When you are doing your batch runs, this metric is a great demonstration of the overall success rate of the automated test suites & can feed into your daily/weekly quality management reports.
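As a minimal sketch, the pass/fail metric above can be computed from a list of test results; the result structure here is an assumption, not tied to any specific tool.

```python
# Sketch: summarizing batch-run results into a pass/fail metric.
# The dict keys ("name", "status") are illustrative assumptions.

def summarize(results):
    """Return counts and pass rate for a list of test results."""
    passed = sum(1 for r in results if r["status"] == "pass")
    failed = len(results) - passed
    rate = passed / len(results) * 100 if results else 0.0
    return {"passed": passed, "failed": failed, "pass_rate": round(rate, 1)}

results = [
    {"name": "login_test", "status": "pass"},
    {"name": "checkout_test", "status": "fail"},
    {"name": "search_test", "status": "pass"},
]
print(summarize(results))  # {'passed': 2, 'failed': 1, 'pass_rate': 66.7}
```

A daily/weekly report can then just track this one dictionary over time.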
2. Screenshots for every step
Your test failed at a particular step & it's good that the tool captures a screenshot for that step. But not just for failures: it is fantastic if even the successfully passed steps display a screenshot. This helps the test engineer understand the application & the business process quickly (if they would like to review them when free), build good knowledge of the test scripts when new to the team, & get up to speed on regression execution work 😊 I am someone who has done that in the past!
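One way to sketch this "screenshot on every step" idea is a small step runner; here `capture` is a stand-in for a real tool call such as Selenium's `driver.save_screenshot()`, so the sketch stays self-contained.

```python
# Sketch of a step runner that records a screenshot for every step,
# passed or failed. `capture` is a hypothetical stand-in for a real
# screenshot call (e.g. Selenium's driver.save_screenshot()).

def run_step(name, action, capture):
    try:
        action()
        status = "pass"
    except Exception as exc:
        status = f"fail: {exc}"
    # Capture happens regardless of the outcome.
    return {"step": name, "status": status, "screenshot": capture(name)}

capture = lambda name: f"screenshots/{name}.png"
print(run_step("open_login_page", lambda: None, capture))
```

The key point is that the capture call sits outside the try/except, so passing steps are photographed too.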
3. Smart identification – Warnings
We can't thank automation tools enough for how good they are at identifying the exact properties of the field you are using in your script. The tool will try its best to check all the possible properties & pass your test by being SMART before giving up and failing!
The warnings in your test report contribute directly to your script maintenance: as soon as you see them, capture those steps & field names and be ready for your script updates.
4. Incomplete tests in the execution
Your script fails, which is fine, but you are not happy about the pending incomplete tests in your test suite, which can sometimes be huge & require re-running the whole suite when you are back at work the next morning!
Try to analyse the dependencies in the business process & see whether any of the tests from the incomplete runs can be run independently & positioned elsewhere in your test suite if the probability of failure at that particular step is high!
5. Colour/themes in your report for different metrics
Knowingly or unknowingly, the human eye is biased towards certain favourite colours & always likes to see successes/failures in particular colours.
How good would it be to satisfy this criterion not just for Pass/Fail but by colour coding the other major metrics in the report as well?
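A tiny sketch of colour coding for an HTML report; the palette and statuses are purely illustrative assumptions.

```python
# Sketch: colour-coding report rows per status. The hex palette is
# an illustrative assumption, not from any particular tool.
COLOURS = {"pass": "#2e7d32", "fail": "#c62828", "warning": "#f9a825"}

def row_html(name, status):
    colour = COLOURS.get(status, "#616161")  # grey for unknown statuses
    return f'<tr><td>{name}</td><td style="color:{colour}">{status.upper()}</td></tr>'

print(row_html("login_test", "pass"))
```

Extending the same mapping to other metrics (warnings, incomplete runs) keeps the whole report visually consistent.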
6. Export option
So, you have your test report; how about exporting it in a certain format to share with your stakeholders as well as your team? (Ideally one they can read/edit once exported, according to their convenience.)
Examples: Excel, HTML, XML etc.
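As a sketch of such exports, the same results can be written to CSV (readable/editable in Excel) and XML with the Python standard library alone; the field names are assumptions.

```python
# Sketch: exporting one set of results to CSV and XML using only the
# standard library. Field names ("name", "status") are assumptions.
import csv
import io
import xml.etree.ElementTree as ET

results = [{"name": "login_test", "status": "pass"},
           {"name": "checkout_test", "status": "fail"}]

# CSV export - opens directly in Excel.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "status"])
writer.writeheader()
writer.writerows(results)
csv_text = buf.getvalue()

# XML export - one <test> element per result.
root = ET.Element("report")
for r in results:
    ET.SubElement(root, "test", name=r["name"], status=r["status"])
xml_text = ET.tostring(root, encoding="unicode")

print(csv_text)
print(xml_text)
```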
7. Report Filters
Everyone has their own way of looking at test results. Can we have the ability to filter the test report by the metric that gives you better visibility into the test analysis? I love filters 😉
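Filtering by any metric can be sketched as a generic filter over the report rows; keys and values here are invented for illustration.

```python
# Sketch: filtering a report by any metric. Keys/values are assumptions.
def filter_report(results, **criteria):
    """Keep rows matching every given key=value criterion."""
    return [r for r in results if all(r.get(k) == v for k, v in criteria.items())]

results = [
    {"name": "login_test", "status": "pass", "suite": "smoke"},
    {"name": "checkout_test", "status": "fail", "suite": "regression"},
    {"name": "search_test", "status": "fail", "suite": "smoke"},
]
print(filter_report(results, status="fail"))                  # two rows
print(filter_report(results, status="fail", suite="smoke"))   # one row
```

Because the criteria are keyword arguments, each stakeholder can slice the same report their own way.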
8. Auto suggestive error messages
Unknown errors during real-time execution of an application are very common, & we would absolutely love automation tools to come up with suggestive, descriptive error messages instead of just saying STEP FAILED or ERROR at the step for application failures like browser issues, window popups etc.
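One way to sketch this is a lookup from raw error names to human hints; the hint table below is illustrative and not taken from any particular tool.

```python
# Sketch: turning raw failures into suggestive messages. The error
# names echo common Selenium exceptions; the hints are assumptions.
HINTS = {
    "NoSuchElementException": "Element not found - check the locator or add a wait.",
    "TimeoutException": "Page took too long - raise the timeout or check the environment.",
    "UnexpectedAlertPresentException": "A popup appeared - handle or dismiss the alert first.",
}

def describe_failure(error_name, step):
    hint = HINTS.get(error_name, "Unclassified failure - inspect the screenshot and logs.")
    return f"Step '{step}' failed ({error_name}): {hint}"

print(describe_failure("TimeoutException", "open_dashboard"))
```

Even a small table like this beats a bare STEP FAILED line for the person triaging the run.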
9. Application Browser link to the failed steps
These days we mostly work with cloud applications such as Salesforce, & record links work amazingly well when dealing with issues.
Built-in variables in the automation tools that capture the browser link for the failed steps work miracles, even if the Test Engineer doesn't necessarily use these variables at certain steps & they may not always be useful. Makes sense?
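A sketch of attaching the current browser URL to a failed step; `get_url` is a hypothetical stand-in for something like Selenium's `driver.current_url`, and the Salesforce-style URL is invented.

```python
# Sketch: recording the browser link alongside a failure. `get_url`
# stands in for a real call such as Selenium's driver.current_url.
def record_failure(step, error, get_url):
    return {"step": step, "error": str(error), "url": get_url()}

# Hypothetical record link for illustration only.
get_url = lambda: "https://example.my.salesforce.com/lightning/r/Account/001/view"
entry = record_failure("save_account", "Save button disabled", get_url)
print(entry["url"])
```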
10. Deciding the order of execution of a test suite
Sometimes the business requires certain processes to be run on an urgent basis, & it's cool if you can run your test suite in a certain order, an order you can decide based on your comprehensive report.
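The ordering idea can be sketched as sorting the suite by a business priority map; the priorities and test names are invented.

```python
# Sketch: running a suite in a business-driven order. The priority
# map is an assumption derived from, say, last week's report.
priority = {"payment_flow": 0, "login_test": 1, "report_export": 2}

suite = ["report_export", "login_test", "payment_flow"]
# Unknown tests sink to the end via the default priority of 99.
ordered = sorted(suite, key=lambda t: priority.get(t, 99))
print(ordered)  # ['payment_flow', 'login_test', 'report_export']
```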
11. Time & Date metrics
Time & date are important metrics for any test automation report. I absolutely like seeing the exact time taken to run the test suite, BUT also the time taken for every step!
This helps in deciding whether manual testing is more appropriate for some critical business requirements of the application.
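Per-step timing can be sketched with the standard library alone; the steps below just sleep to simulate work.

```python
# Sketch: timing each step as well as the whole run. The sleeps
# stand in for real test actions.
import time

def timed_step(name, action):
    start = time.perf_counter()
    action()
    return {"step": name, "seconds": round(time.perf_counter() - start, 3)}

run_start = time.perf_counter()
report = [timed_step("load_page", lambda: time.sleep(0.01)),
          timed_step("submit_form", lambda: time.sleep(0.02))]
total = round(time.perf_counter() - run_start, 3)
print(report, total)
```

Seeing a single step dominate the total is exactly the signal that tells you where the suite (or the application) is slow.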
12. Ability to display the test steps in the report
Beyond major metrics like test scenarios, it's always great to see the step names in some form in the report, so you can quickly catch the issue & be able to replicate it on your own.
13. Ability to tag some test cases as critical in the suite & this being captured in the report
In regression testing, certain requirements are sometimes critical, so we tag them as such in the script; the report then shows those steps as critical & gives you a separate view to track the progress of such scripts. When the report is shared with the stakeholders, they can also focus on the items tagged as important/critical that they need.
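In pytest this is usually done with `@pytest.mark` markers; to keep the sketch self-contained, a plain flag on each test record plays the same role here.

```python
# Sketch: tagging tests as critical and reporting them separately.
# The "critical" flag mimics what a pytest marker would convey.
tests = [
    {"name": "payment_flow", "critical": True,  "status": "fail"},
    {"name": "profile_edit", "critical": False, "status": "pass"},
    {"name": "login_test",   "critical": True,  "status": "pass"},
]

critical_view = [t for t in tests if t["critical"]]
critical_failures = [t["name"] for t in critical_view if t["status"] == "fail"]
print(critical_failures)  # ['payment_flow']
```

The separate critical view is what stakeholders would open first.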
14. Graphical charts/Dashboards
Graphical representation of the overall test status & health of the application, and the ability to share these separately with the leadership team based on the artifacts they are interested in.
15. Test coverage/Traceability Matrix
At the end of the day, it's the test coverage that matters most to the team. Hence your report should capture a matrix/table mapping all the test cases to the requirements they cover, which determines the success of your regression testing.
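A traceability matrix can be sketched as a plain mapping from requirement IDs to tests; the IDs below are invented for illustration.

```python
# Sketch: a requirements-to-tests traceability matrix. REQ IDs and
# test names are invented assumptions.
matrix = {
    "REQ-101": ["login_test", "logout_test"],
    "REQ-102": ["checkout_test"],
    "REQ-103": [],  # no test yet - a coverage gap worth flagging
}

covered = [req for req, tests in matrix.items() if tests]
coverage = len(covered) / len(matrix) * 100
print(f"{coverage:.0f}% of requirements have at least one test")
```

The empty entry is the valuable part: gaps surface in the report instead of hiding until release.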
16. Marking the Defect Status (Open, Closed, Accepted, Rejected, Deferred, Non-reproducible etc.)
Once we get the defects, we refer to them based on certain criteria.
Open – The issue is not fixed yet
Closed – The issue got resolved (Application issue/Script maintenance) & we marked it as closed. (Feels good every time we do this!!!)
Accepted – Issue is accepted by the Developers as well as Testers, verified & taken into further investigation
Rejected – The issue could be due to some ad hoc changes in the application/browser & may not be relevant
Deferred – The issue exists but may not be of priority to get it fixed in the current release
Non–Reproducible – Unable to replicate the issue in the test environment
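The statuses above can be modelled so a report can group defects by them; this enum-based sketch is one possible shape, not a prescribed one.

```python
# Sketch: modelling the defect statuses listed above so a report
# can filter and group by them.
from enum import Enum

class DefectStatus(Enum):
    OPEN = "Open"
    CLOSED = "Closed"
    ACCEPTED = "Accepted"
    REJECTED = "Rejected"
    DEFERRED = "Deferred"
    NON_REPRODUCIBLE = "Non-reproducible"

# Hypothetical defect IDs for illustration.
defects = [("BUG-1", DefectStatus.OPEN), ("BUG-2", DefectStatus.CLOSED)]
open_defects = [d for d, s in defects if s is DefectStatus.OPEN]
print(open_defects)  # ['BUG-1']
```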
17. Links to previous execution reports
Project stakeholders would love having a centralized place to look up all the previous execution reports.
Hope you had a nice read & enjoyed some of my fun quirks on how I deal with automation test reports. I feel it's very important to think about the QA artifacts/metrics that can be utilized for the betterment of your project in terms of validation, reliability, user experience & the overall benefits of the test report. This article might also help future developers of automation tools decide how to encompass the full set of reporting features that benefit the test automation users of an organization, and even those looking to enhance their existing reporting formats.