Test Automation Forum

Welcome to TAF - Your favourite Knowledge Base for the latest Quality Engineering updates.

(Focused on Functional, Performance, Security and AI/ML Testing)

Brought to you by MOHS10 Technologies

Hari Krishna Para


In-depth testing of AI applications that use images

Introduction: Generally, MLOps (the methodology for developing ML-based applications) covers design, development, and operations phases. Wait, something important is missing... I hope by now you have spotted it: there is no testing phase in MLOps (for security, bias, performance, and so on). So here is the question: how are ML applications tested in order to make them Responsible AI (RAI)? Have you ever wondered how AI/ML-based applications are tested? If you are curious about that, this article is for you. In it, I discuss how we tested an AI-based plant diagnostic application to make it reliable, robust, and accurate.

Business case

The challenge was to test a plant diagnosis application that supports various crop types. It was developed for farmers and gardeners to diagnose infected crops, offer treatments for diseases and nutrient deficiencies, enable collaboration with other farmers, and so on. Plant disease recognition is done using AI image recognition technology (an artificial-intelligence-based neural network algorithm).

How AI application testing is different

Compared to regular software applications, developing AI-based applications is different: with AI-based applications, we work with both data and code. AI application development goes through steps such as data collection, data cleaning, feature engineering, model selection, and training and testing, and this is where it differs from the traditional software development process. With most AI models, the data is split into two sets: one to train the model and the other to test it. Once certain metrics are used to gauge the model's performance on the test data, the model is either validated or sent back to the previous stage for revision. Do you think this level of testing is sufficient for an application that will make decisions, solve problems, and become part of people's daily lives?
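The train/test split described above can be sketched in plain Python. This is a minimal illustration only; real pipelines typically use a library routine such as scikit-learn's train_test_split, and the sample data here is a stand-in:

```python
import random

def train_test_split(samples, test_ratio=0.2, seed=42):
    """Shuffle a dataset and split it into train and test subsets."""
    rng = random.Random(seed)          # fixed seed for a reproducible split
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]

# 100 labelled samples -> 80 for training, 20 held out for testing
train, test = train_test_split(range(100))
```

The model only ever sees the training portion; the held-out portion is what the performance metrics in the next section are computed on.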
Probably not! Let's continue reading.

How to test an AI app to ensure its reliability

There are several things we can do to make an AI model more reliable, such as making it more robust. To achieve this, we need to test AI models in different ways:

- Randomized testing: test the AI system to evaluate how the model performs on unseen data.
- Cross-validation techniques: evaluate the effectiveness of the model by repeating the metrics evaluation across several iterations of splits of the data. Examples: K-Fold cross-validation, bootstrap, LOOCV, etc.
- Test coverage: pseudo-oracle-based metamorphic testing, white-box coverage-based testing, layer-level coverage, and neuron-coverage-based testing.
- Test for bias: test the fairness of the ML model for any discriminatory behaviour based on specific attributes such as gender, race, etc.
- Test for agency: test for closeness to human behaviour, comparing two different models to evaluate dimensions of AI quality such as natural interaction and personality.
- Test for concept drift: continuously check for data drift, and hence model drift, which causes the deployed model to perform badly on newer data.
- Test for explainability: to enable testing the "transparency of choices" element, we need a comprehensive approach to testing models for explainability.
- Security testing: testing for adversarial attacks is a primary component of any AI/ML test. We should test for potential attacks on current training data. Examples: white-box and black-box attacks.
- Test for privacy: test at the model level for privacy attacks that make it possible to infer data, then check whether the inferred data has PII embedded inside it.
- Test for performance: check whether the system can handle different patterns of input load, including spike patterns such as an e-commerce site on Boxing Day.
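The K-Fold cross-validation mentioned above can be illustrated with a short, library-free sketch. This shows only the index bookkeeping; the model training and metric code that would run inside the loop is left out, and in practice a library class such as scikit-learn's KFold would be used:

```python
def k_fold_splits(n_samples, k):
    """Yield (train_indices, test_indices) for each of the k folds."""
    indices = list(range(n_samples))
    # distribute any remainder so every sample lands in exactly one test fold
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test_idx = indices[start:start + size]
        train_idx = indices[:start] + indices[start + size:]
        yield train_idx, test_idx
        start += size

# 10 samples, 5 folds: train on 8, evaluate on the held-out 2, five times over
folds = list(k_fold_splits(10, 5))
```

Averaging the evaluation metric over all k folds gives a more stable estimate of model quality than a single train/test split.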
How we tested the plant diagnosis application at our AI lab

In the process of testing the plant diagnosis application, we collected the data and model from our client in the required format. We tested the model using our strategic partner's commercial state-of-the-art testing product, AIensured. The results, with insights from both the data and the model's performance, were shared with the application owner. These are the key benefits we provided to our client:

- We generated corner cases (cases where the model fails to give the actual result) and retrained on them to increase the model's robustness.
- We used 11 attack-vector techniques, such as DeepFool, Universal Perturbation, Pixel Attack, and Spatial Transformation, to learn how robust the model is against security attacks.
- Model explainability, covering both white-box and black-box explanations, helped the client understand which portion of the image the model focuses on, and thus what caused misclassifications.
- To overcome the oracle problem (not having a defined expected output), we performed metamorphic testing using techniques such as rotation, shear, and brightness changes, which showed how the model performs under transformed inputs.
- Model quantization reduced the model size without losing accuracy, which allowed the client to deploy the model on low-end electronic devices as well.

The tests performed on the model are depicted in the graphics below.

Results: the bottom line is that after retraining the model with the generated corner cases, the model's performance increased by around 12%. The report we shared helped the client make the model explainable and ensured compliance with the required privacy governance. Above all, we made the model responsible, robust to security attacks, and better performing overall.

I hope this article was insightful!
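The metamorphic rotation check described above can be sketched as a toy example. The model, image format, and function names here are stand-ins for illustration, not the actual AIensured tooling; the idea is simply that a transformed input with a known relationship to the original gives us an expected output where no explicit oracle exists:

```python
def rotate_90(image):
    """Rotate a 2D pixel grid (a list of rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*image[::-1])]

def diagnosis_is_rotation_stable(model, image):
    """Metamorphic relation: the predicted disease class should not
    change when the same leaf photo is rotated."""
    return model(image) == model(rotate_90(image))

# stand-in "model": classifies by overall brightness, so it is
# rotation-invariant and should satisfy the relation
def toy_model(image):
    total = sum(sum(row) for row in image)
    return "healthy" if total > 2 else "infected"

leaf = [[0, 1], [1, 1]]
```

A real test suite would run such relations (rotation, shear, brightness) over many images and flag every input where the prediction flips, since each flip is a corner case worth retraining on.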
Please don't hesitate to contact me if you have any questions or suggestions. Happy learning!


In this age of hyper-automation, why is manual testing still a boon for enterprise app testing?

Introduction: Test automation has gained much attention recently. Many testers and developers use test automation to achieve speed, and organizations prefer automation to deliver services on time. Automated testing can reduce testing effort and is often seen as a replacement for manual testing. According to The QA Lead's "2020 Software Testing Trends: QA Technologies, Data, & Statistics" (24 Actionable Software Testing Trends and Statistics for 2020, theqalead.com), 78% of organizations use test automation for functional or regression testing. Its benefits include executing recurring tasks, identifying bugs more quickly, precision, and non-stop feedback, all of which save time and personnel and ultimately lead to a lower software testing budget.

However, manual testing still holds a prominent place in the quality assurance process. Automated testing has no decision-making capability, and by relying on it exclusively, testers lose chances to improve product quality through interacting with and observing the application during testing. So using both manual and automated testing, in different permutations and combinations, will greatly improve the production quality of the software.

Why choose manual testing in this age of hyper-automation?

A suite of automated tests looks impressive, but it can never fully replace manual testing. Manual testing is required for the initial verification of a system before anything can be automated, so it can never be replaced entirely. From the bar chart we can see that manual testing requires less training effort and fewer tools, which is its USP (unique selling point) compared to automation testing, but it requires more human resources, time, and infrastructure. Automation testing, on the other hand, requires more tools and training and slightly less infrastructure than manual testing, but far less time and fewer human resources.
So manual testing has its own advantages over automation testing. Let's see some reasons why automated testing can't fully replace manual testing:

- Manual testers can quickly reproduce customer-reported errors.
- Automation can't catch issues that humans aren't aware of.
- Automation is too expensive for small projects.
- Manual testers learn more about the user's perspective.
- Humans are creative and analytical.
- A whole class of testing simply must be manual: some scenarios can't be automated by their nature, for example mobile applications with a large amount of tapping interaction, or captcha verification.

The key advantage of manual testing over automation is its ability to handle complex and nuanced test scenarios, achieved through the manual creation and execution of tests.

Which scenarios need automation?

In general, tests that take a lot of time and effort to perform manually, and scenarios that are highly repeatable, are the most suitable for automation. Some of them are:

- Scenarios repeated on every build, e.g. smoke and sanity tests
- Scenarios repeated across different browsers and operating systems, e.g. compatibility testing
- Tests that are impossible to perform manually, e.g. performance testing
- Tests that have significant downtime between steps
- Scenarios that require multiple sets of test data to validate, e.g. data-driven tests
- Testing the non-functional aspects of an application, e.g. load and performance testing
- Test scenarios with low risk and stable code that is not likely to change often
- Test scenarios that are prone to human error

Which scenarios can't be automated?

These days humans interact with apps and products in multiple ways, broadly through touch and touch-less interaction. Here are some examples of test cases that cannot be automated:

- Using the camera feature of an app to take pictures in different lighting conditions.
- Performing negative testing (adopting a mindset of trying to break the application) to test the reliability of the application. Hackers are adopting newer techniques, and these scenarios have to be tested manually.
- Many interactions in touch-enabled applications cannot be automated.
- Testing the external features of hardware products, embedded systems, etc.
- Verifying whether a software product is accessible to people with disabilities (deaf, blind, cognitively impaired, etc.).
- Exploratory testing, which is based entirely on human experience, instinct, and observation while exploring the app as an end user. Nothing can compete with the human eye, so exploratory testing is best performed manually in any situation.
- Installation and setup testing, where the system needs to be tested with different hardware and software, such as loading CD-ROMs, memory disks, and tapes. Such systems also require manual testing.

As we can see, some tests should be performed manually, especially tests that focus on user interfaces and usability. Although we could automate almost everything, manual testing still provides an effective, high-quality check for bugs and defects.

The bottom line

It has been a fabulous experience sharing our thoughts on this topic. This article gives an overall view of why we still have manual testing despite the existence of hyper-automation. Automated testing requires coding and test maintenance, but on the plus side it is much faster and covers many more permutations. Manual testing, on the other hand, is slow, but since it handles more complex scenarios, it still survives in the market today. No matter how far test automation has evolved, you can't automate everything. Manual testing is still in use, and there are still cases where it is the best choice. So it's important to consider both manual and automation approaches when you design your QA strategy.
One of the key testing principles is that 100% test automation is impossible; manual testing is still necessary. So the final verdict is that automation won't replace manual testing, but neither will manual testing obviate automation.
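As a small illustration of the "data-driven test" scenario from the automation-candidates list above, here is one scripted check run against many data rows. The validator and its cases are hypothetical stand-ins; frameworks such as pytest offer the same pattern via parametrized tests:

```python
def is_valid_email(addr):
    """Toy validator standing in for the feature under test."""
    return "@" in addr and "." in addr.rsplit("@", 1)[-1]

# one scripted check, many data rows: the essence of data-driven testing
cases = [
    ("user@example.com", True),   # happy path
    ("no-at-sign", False),        # missing @
    ("user@host", False),         # domain without a dot
]

results = [is_valid_email(addr) == expected for addr, expected in cases]
```

Adding a new test case is just adding a data row, which is exactly why repeatable, data-heavy scenarios pay off under automation.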

Submit your article summary today!

Contact Form

Thank you for your interest in authoring an article for this forum. We are very excited about it!

Please provide a high-level summary of your topic in the form below. We will review it and reach out to you shortly to take it from there. Once your article is accepted for the forum, we will be glad to offer you some amazing Amazon gift coupons.

You can also reach out to us at info@testautomationforum.com