Test Automation Forum

Welcome to TAF - Your favourite Knowledge Base for the latest Quality Engineering updates.


(Focused on Functional, Performance, Security and AI/ML Testing)

Brought to you by MOHS10 Technologies

Full Automation Testing


Enabling continuous testing for an E-commerce application using Selenium hybrid framework

Introduction: E-commerce applications need to be available at all times to meet the needs of increasingly demanding customers. This means the software development process for such applications has to absorb constant change and churn in a timely manner. One of the biggest challenges in continuously delivering quality software is ensuring that the testing process can keep up with the pace of development. This is where automation tools come in. Although hundreds of commercial and open-source automation platforms are available in the market today, in this article we are going to discuss how a Selenium-based hybrid framework can be used to enable continuous testing in the development and maintenance of an e-commerce application.

Common challenges in testing an e-commerce application

Web applications are under immense pressure to offer a streamlined, satisfying digital customer experience while ensuring product quality. To achieve this, teams must shift left and enable continuous testing throughout the software development life cycle. However, this is easier said than done. Many e-commerce applications are complex, with a hybrid architecture consisting of both legacy and modern components. This can make it difficult to set up a continuous testing framework that covers the entire application.

Considering a Selenium-based hybrid framework

A typical hybrid framework is a combination of both Data-Driven and Keyword-Driven frameworks, built on top of Selenium. Selenium is a popular open-source web automation tool. It supports mainstream browsers like Google Chrome, Microsoft Edge, and Mozilla Firefox, works on all major operating systems, and its scripts can be written in various languages such as Java, Python, JavaScript, and C#. A framework, in essence, is a code structure that makes code maintenance easy and efficient.
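The data-driven plus keyword-driven combination can be sketched in a few lines. Below is a minimal Python sketch of the core idea; the keyword implementations, the example URL, and the test data are illustrative placeholders (in a real suite each keyword would wrap Selenium WebDriver calls, as the comments note).

```python
# Minimal sketch of a hybrid (keyword-driven + data-driven) framework core.
# Keywords are registered by name; the same steps run against many data rows.

KEYWORDS = {}

def keyword(name):
    """Register a function as a reusable keyword."""
    def decorator(func):
        KEYWORDS[name] = func
        return func
    return decorator

@keyword("open_page")
def open_page(ctx, url):
    ctx["current_url"] = url                      # real code: driver.get(url)
    return True

@keyword("enter_text")
def enter_text(ctx, field, value):
    ctx.setdefault("form", {})[field] = value     # real: find_element(...).send_keys(value)
    return True

@keyword("verify_title")
def verify_title(ctx, expected):
    return ctx.get("current_url", "").endswith(expected)

def run_test(steps, data_row):
    """Execute a list of (keyword, arg_names) steps against one data row."""
    ctx = {}
    for kw, arg_names in steps:
        args = [data_row[a] for a in arg_names]
        if not KEYWORDS[kw](ctx, *args):
            return "FAIL"
    return "PASS"

# Data-driven: the same keyword sequence runs against every row of test data.
login_steps = [("open_page", ["url"]),
               ("enter_text", ["field", "value"]),
               ("verify_title", ["expected"])]
test_data = [
    {"url": "https://shop.example/login", "field": "user",
     "value": "alice", "expected": "login"},
]
results = [run_test(login_steps, row) for row in test_data]
```

Because the keywords are registered centrally, a locator change touches one function instead of every script, which is where the maintenance savings come from.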
Using a hybrid framework leads to increased code reusability, higher portability, reduced script-maintenance cost, and better code readability. It is based on the principle of shift-left testing, which means that tests are run as early as possible in the software development process.

Why is a hybrid framework necessary for e-commerce applications?

A hybrid framework is particularly well suited for testing web applications built with Selenium and can be used with a number of popular Continuous Integration (CI) tools, such as Jenkins. It enables continuous testing for e-commerce applications, helps ensure product quality, prevents production issues, and reduces the regression-testing burden. By using it, e-commerce organizations can more easily set up a continuous testing framework that covers the entire application.

A hybrid framework also allows testers to choose the most appropriate tools and techniques for different testing tasks. This is particularly useful for e-commerce applications, which may have a wide range of features and functionalities to test, and it can reduce the time and effort required to test applications that must be released on tight timelines.

Steps to enable a hybrid framework for continuous testing of an e-commerce application:

Set up a continuous integration (CI) server: A CI server is a tool that automatically builds and tests your application every time you push code changes to version control. This helps you catch bugs and other issues early in the development process.

Choose a testing framework: There are various testing frameworks available for different programming languages. Some popular options include JUnit for Java, PyTest for Python, and Mocha for JavaScript. Choose a framework that is well suited to your application and the language you are using.

Write test cases: Test cases are the individual tests that your testing framework will run.
They should cover a range of scenarios and exercise the different features of your application. Aim for a good balance of unit tests, which test individual components of your application, and integration tests, which test how those components work together.

Set up test automation: Once you have written your test cases, you can automate them using your CI server. This means that every time you push code changes, the CI server runs your tests automatically to ensure the code works as expected.

Monitor test results: As your tests run, monitor the results to see which tests are passing and which are failing. This will help you identify issues in your code and fix them quickly.

By following these steps, you can set up a hybrid framework for continuous testing of e-commerce applications and ensure that it is working correctly.

Some additional points:

The framework should be designed to accelerate test automation.

Most importantly, develop a test plan that identifies the key functionalities of the e-commerce application that need to be tested continuously. These could include checkout, payment processing, order tracking, and user account management, among others. The plan should also map the testing resources so that the framework can be utilized efficiently.

Create positive and negative test scenarios with multiple sets of test data to obtain maximum coverage.

With a framework, tests complete faster, which is essential in a field where customers expect top-notch software delivered quickly.

It helps in implementing Continuous Integration (with a tool such as Jenkins) to check build quality as soon as any change is made to the code.

Jenkins keeps track of results and displays them as a trend graph, giving a clearer picture of how previous test runs have performed.

Using the framework, you can automatically record results and share them with the team once tests are completed.
Automated execution ensures the tests run more consistently.

Regularly review and update the testing process to ensure that it remains effective and efficient.

However, some e-commerce web applications can be difficult to automate due to their dynamic nature. They change their content and functionality based on the user's actions, so it is hard to test them the way you would test traditional web pages or apps. Automating dynamic web applications involves several challenges, including:

Identifying the elements on the page
Navigating across pages
Form submission
Handling popup windows
Waiting for page loads
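The last challenge, waiting for dynamic content, is usually handled with explicit waits. Here is a minimal Python sketch of the polling idea behind Selenium's WebDriverWait; the FakePage class is a stand-in so the sketch is self-contained (a real condition would be something like EC.presence_of_element_located((By.ID, "checkout-button"))).

```python
# Poll-based explicit wait, similar in spirit to Selenium's WebDriverWait.
import time

def wait_until(condition, timeout=5.0, poll=0.1):
    """Poll `condition` until it returns a truthy value or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Simulated dynamic element that "appears" only after a few polls,
# the way an AJAX-rendered button does on a dynamic e-commerce page.
class FakePage:
    def __init__(self, appears_after):
        self.calls = 0
        self.appears_after = appears_after

    def find_checkout_button(self):
        self.calls += 1
        return "button" if self.calls >= self.appears_after else None

page = FakePage(appears_after=3)
element = wait_until(page.find_checkout_button, timeout=2.0, poll=0.01)
```

Explicit waits like this avoid both brittle fixed sleeps and the flaky failures that come from querying an element before the page has finished rendering.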


Enabling 100% coverage for file upload and download using AutoIt in Selenium

Introduction: A web application is not limited to working entirely within the web. Sometimes the web needs to interact with the local system for uploading and downloading files. Automating this type of workflow is a bit complex with Selenium, as the scope of Selenium is limited to the web. Let's learn how AutoIt helps in enabling 100% coverage for file upload and download in Selenium.

Business case

Our client has a background-remover application that uses AI to remove the background of an image and reduce editing time, helping users add or remove an image's background as they wish.

Challenges for optimization of testing

Every release requires testing, and manual testing was time-consuming. The team had recently adopted an agile process in which functionality changes frequently, which made optimizing the whole testing process a challenge.

Our automation testing approach

Automation helped the customer verify different types of files within a short period. The QA team developed a test plan by integrating Selenium with the AutoIt platform for uploading images from the local system and validating the result. Automation helped the team complete tests faster, reducing overall testing effort by 25%. Using our internal Selenium-based hybrid framework, we provided the results and records in well-documented formats. We implemented shift-left by integrating the testing environment with Jenkins.

Testing tools used

Selenium: We used Selenium, an open-source automation testing tool for exercising web applications across different browsers and platforms. A wide range of programming languages, such as Java, Python, Ruby, C#, JavaScript, PHP, and Perl, is supported by Selenium. It supports a variety of operating systems (Windows, Mac, or Linux) and browsers such as Mozilla Firefox, Internet Explorer, Google Chrome, Safari, and Opera. Selenium can be integrated with tools such as TestNG and JUnit for managing test cases and generating reports.
It can be integrated with Maven, Jenkins, and Docker to achieve continuous testing. Selenium focuses on automating web-based applications.

Need for a third-party tool in Selenium

A web application is not limited to functioning entirely within the web. Sometimes the website needs to interact with the local system for uploading and downloading files. Automating this type of workflow can be complex with Selenium, as Selenium's scope is limited to the web browser itself. If you need to automate a workflow that goes from the browser to the desktop and back, then the AutoIt tool may be a solution to your problem.

What is AutoIt?

AutoIt is an open-source scripting language designed for automating the Windows GUI and general scripting. It combines simulated keystrokes, mouse movement, and window-control manipulation to automate tasks that are not possible with Selenium WebDriver alone. AutoIt is also very small, self-contained, and runs on all versions of Windows out of the box with no annoying "runtimes" required. An AutoIt automation script can be compiled into a compressed, stand-alone executable that runs on computers even if they do not have the AutoIt interpreter installed.

Language used

Java: Everyone is familiar with the term "Java". It is a high-level, class-based, object-oriented programming language designed to have as few implementation dependencies as possible. Many developers use Java for coding web applications, and it has been a popular choice among IT professionals for over two decades, with millions of Java applications in use today. It is owned by Oracle and runs on more than 3 billion devices. Java is a multi-platform, network-centric language that can serve as a platform in itself. It is a fast, secure, reliable programming language for coding everything from mobile apps and enterprise software to big-data applications and server-side technologies.
In the current job market, the demand for Java is significant. Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of the underlying computer architecture. The syntax of Java is similar to C and C++ but has fewer low-level facilities than either of them. The Java runtime provides dynamic capabilities (such as reflection and runtime code modification) that are typically not available in traditional compiled languages.

What is TestNG?

TestNG is an open-source test automation framework for Java, developed along the same lines as JUnit and NUnit. A few advanced and useful features make TestNG a more robust framework than its peers. The NG in TestNG stands for "Next Generation". Created by Cedric Beust, it is frequently used by developers and testers for test-case creation, owing to its ease of use with multiple annotations, grouping, dependencies, prioritization, and parameterization. Using TestNG, you can generate a proper report, easily see how many test cases passed, failed, or were skipped, and re-execute the failed test cases separately.

Why use TestNG in Selenium?

By default, Selenium tests do not generate test results in a proper format. Using TestNG with Selenium, we can generate structured test results.

Features of TestNG

Multiple Before and After annotation options
XML-based test configuration
Dependent methods
Groups/groups of groups
Data-driven testing
Multithreaded execution
Better reporting

Key benefits

The Selenium-based hybrid framework and its reusable methods allowed the QA team to reduce automation effort by 25%.
40% reduction in overall testing cost.
CI/CD with test automation helped identify errors and defects early.
Improved accuracy by around 96%.
Reduced testing time by 50%.
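TestNG's data-driven testing works by feeding one test method many rows of data via a @DataProvider. Since TestNG itself is Java-only, here is a small Python sketch of the same idea, ending with a passed/failed summary like a TestNG report; the discount rule and the data rows are invented purely for illustration.

```python
# Data-provider style testing: one check, many data rows, one summary report.

def apply_discount(total, code):
    """Toy system under test: 10% off with the (hypothetical) SAVE10 code."""
    return total * 0.9 if code == "SAVE10" else total

# These rows play the role of a TestNG @DataProvider.
data_provider = [
    (100.0, "SAVE10", 90.0),
    (100.0, "BOGUS",  100.0),
    (50.0,  "SAVE10", 45.0),
]

def run_suite(rows):
    """Run the check against every row and tally results like a TestNG report."""
    summary = {"passed": 0, "failed": 0}
    for total, code, expected in rows:
        if abs(apply_discount(total, code) - expected) < 1e-9:
            summary["passed"] += 1
        else:
            summary["failed"] += 1
    return summary

report = run_suite(data_provider)
```

The point is separation of concerns: adding a new scenario means adding a data row, not writing a new test method.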
Results

In recent times the web GUI has become far more complex, with several native third-party elements integrated, which makes it challenging for test automation engineers to achieve 100% automation coverage. This problem usually forces a QA organization to invest more by procuring additional third-party platforms to ensure better automation coverage. Using an open-source platform like Selenium, the automation approach above (AutoIt integrated with Selenium) helped our team improve overall project quality while reducing cost and time.
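For reference, the hand-off from the test to AutoIt typically works by launching a compiled AutoIt executable right after the upload button is clicked, so the native file dialog gets filled in outside the browser. The team's actual implementation was in Java; the Python sketch below shows the same flow, and the executable path, its argument convention, and the injectable runner are assumptions for illustration only.

```python
# Sketch: delegating a native Windows file-upload dialog to a compiled AutoIt script.
import subprocess

def build_upload_command(exe_path, file_path):
    """Compose the command that types `file_path` into the open file dialog."""
    # Assumption: the compiled AutoIt script accepts the file path as its argument.
    return [exe_path, file_path]

def trigger_native_upload(exe_path, file_path, runner=subprocess.run):
    # In a real test, driver.find_element(...).click() opens the OS dialog first;
    # the AutoIt executable then fills in the path and presses Open.
    cmd = build_upload_command(exe_path, file_path)
    return runner(cmd, check=True)

# Injecting a fake runner lets the flow be exercised without Windows or AutoIt.
recorded = []
trigger_native_upload(r"C:\tools\upload.exe", r"C:\images\photo.png",
                      runner=lambda cmd, check: recorded.append(cmd))
```

Keeping the command construction separate from the process launch also makes this desktop hand-off unit-testable, which matters because the real dialog only appears in an interactive Windows session.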


How is DevOps building the future of the software industry?

Introduction: The present century depends heavily on technology, and in IT and digital fields in particular, rapid changes are taking place in the form of new concepts and practices. DevOps has proven critical in speeding up the software development life cycle in the recent past and has been continuously gaining popularity globally. Patrick Debois helped launch the DevOps movement in 2009, and teams adopting it began seeing improvements over traditional software development practices.

What is DevOps?

DevOps is a portmanteau of "Development" and "Operations". It is a combination of cultural philosophies, practices, and tools that increases an organization's ability to deliver applications and services in highly demanding situations, applying continuous development and improving products faster than a traditional SDLC process allows. This enables the organization to serve a better-quality product to the client. There are two loops: the Handover Loop, covering Build, Test, and Release, and the Feedback Loop, covering Plan and Operations.

Where is DevOps used?

Companies like Amazon, Target, Etsy, Netflix, Google, and Walmart use this method, and as we know, these are very successful organizations that keep improving and delivering quality products day by day.

Pillars of DevOps

Pillars are the backbone of a structure, and the structure rests on them. In DevOps the pillars are:

Release management
Provisioning
Configuration management
System integration
Monitoring & operations

DevOps methodologies

DevOps is a direct application of agility, arising from the need for faster development and deployment and an increase in software delivery velocity. Advances in agile development highlighted the need for a more holistic approach to the software delivery life cycle, resulting in DevOps.
Here are some frameworks and technical terms used in DevOps:

Scrum – A framework in which people can solve complex problems while delivering a high-end product.
Kanban – A popular framework used to implement agile methodology alongside DevOps.
Scaled Agile Framework (SAFe) – A set of organizational and workflow patterns for implementing Agile at scale. SAFe is one of a growing number of frameworks that address the problems encountered when scaling beyond a single team.
Lean development – A translation of lean manufacturing principles and practices to software. Lean offers a conceptual framework, values, and principles, as well as best practices derived from experience, that support agile organizations.

Why go for DevOps and its practices?

Nowadays DevOps has a broad scope: it compresses the whole D-to-D process (development to deployment, along with monitoring and operating the software) into a nominal period. Here are a few trends that motivate adopting DevOps.

1. Container adoption in the DevOps strategy

One factor contributing to the rise of DevOps is the growing use of container technologies (packaging a software component together with its environment dependencies and configuration into an isolated unit). The use of Kubernetes and containerized services for networking, storage, and security will continue to rise. Containers are an excellent way to build applications that are scalable and quick to change, making it easier to add new features and solve customer issues to gain a competitive advantage.

2. Serverless computing is a new approach

The concept of serverless computing gives us a new and exciting approach to deploying software and other services. Serverless computing smooths DevOps operations and increases architectural scalability at minimal cost.
This approach also frees developers from server-maintenance work such as system updates and cloud monitoring.

3. The advancement of microservices architecture

Microservices architecture is gaining popularity in the IT industry in place of traditional monolithic architecture. DevOps practices and microservices architecture let decentralized teams innovate faster: each team maintains its own technology stack and standards, tracks its own performance metrics, and manages its own development and release cycles, shortening go-to-market time.

4. The rise of DevSecOps

Security, especially virtual security, has become vital in the post-pandemic era. Enterprises are constantly pushing to move faster, and security teams are struggling to keep quality analysis and testing in step. Plain DevOps was not sufficient for this, so DevSecOps entered the story to build security into products.

5. The adoption of low-code applications

Taking a low-code approach in DevOps practice is a game-changer for any team. The agility that low-code enables gives almost any organization a marked competitive advantage in terms of speed in a demanding software market. Low-code platforms also open the door to contributors with very minimal coding knowledge.

6. Application of artificial intelligence and machine learning in DevOps

Adapting the DevOps methodology is essential to keeping pace with changing workloads and environments. AI and machine learning can help DevOps teams enhance their performance by improving feedback loops and managing alerts. Developers also benefit from AI and ML through analysis of previous application performance and operational metrics, sentiment analysis, and predictions about build success and test completion.

7.
Automation

Automation is the backbone of every method of developing and testing software, traditional or agile, and it now sits at the core of DevOps strategies as well, because the speed of development-to-deployment, operations, and monitoring cannot be achieved without automating the life cycle. All aspects of DevOps and QA must be adaptable if they are to keep up with the ever-increasing demands of the future.

Leveraging DevOps to our advantage

DevOps has evolved tremendously in recent years and will continue to grow. It is rapidly moving beyond automation and powering emerging trends such as GitOps and Site Reliability Engineering (SRE).

AI

In-depth testing of AI applications that use images

Introduction: Generally, in MLOps (the methodology for developing ML-based applications) we have design, development, and operations phases. Wait, something important is missing... I hope by now you've got it: there is no dedicated testing phase in MLOps (for security, bias, performance, and so on). So here is the question: how are ML applications tested in order to make them Responsible AI (RAI)? Have you ever wondered how AI/ML-based applications are tested? If you are curious about that, this article is for you. In it, I'm going to discuss how we tested an AI-based plant diagnostic application to make it reliable, robust, and accurate.

Business case

The challenge was to test a plant diagnosis application that supports various crop types. It was developed for farmers and gardeners to diagnose infected crops, offer treatments for diseases and nutrient deficiencies, enable collaboration with other farmers, and so on. The plant disease recognition is done using AI image recognition technology (an artificial neural network algorithm).

How AI application testing is different

Developing AI-based applications differs from developing regular software: we work with data as well as code. AI application development goes through steps like data collection, data cleaning, feature engineering, model selection, training and testing, and so on, and this is where it diverges from the traditional software development process. With most AI models, the data is split into two sets, one to train the model and the other to test it. Once certain metrics gauge the model's performance on the test data, the model is either validated or sent back to the previous stage for revision. Do you think this level of testing is sufficient for an application that will make decisions, solve problems, and become part of people's daily lives?
Probably not! Let's continue reading.

How to test an AI app to ensure its reliability

There are several things we can do to make an AI model more reliable, such as making it more robust. To achieve this, we need to test AI models in different ways:

Randomized testing – Test the AI system to evaluate how the model performs on unseen data.
Cross-validation techniques – Evaluate the effectiveness of the model by repeating metric evaluation across several different splits of the data. Examples: K-Fold cross-validation, bootstrap, and LOOCV (leave-one-out cross-validation).
Test coverage – Pseudo-oracle-based metamorphic testing, white-box coverage-based testing, layer-level coverage, and neuron-coverage-based testing.
Test for bias – Test the fairness of the ML model for any discriminatory behavior based on specific attributes like gender, race, etc.
Test for agency – Test for closeness to human behavior, comparing different models on dimensions of AI quality like natural interaction and personality.
Test for concept drift – Continuously check for data drift, and hence model drift, which causes a deployed model to perform badly on newer data.
Test for explainability – To verify the "transparency of choices" element, we need a comprehensive approach to testing models for explainability.
Security testing – Security testing for adversarial attacks is a primary component of any AI/ML test. We should test for potential attacks on current training data. Examples: white-box and black-box attacks.
Test for privacy – Test at the model level for privacy attacks that make it possible to infer data, then check whether the inferred data has PII embedded inside it.
Test for performance – Check whether the system can handle different patterns of input load, including spikes such as an e-commerce site sees on Boxing Day.
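To make the metamorphic-testing idea from the list above concrete: under a label-preserving transformation (rotation, shear, brightness, etc.), the predicted class should not change, which gives us a test oracle even when no ground-truth output is defined. The "classifier" below is a deliberately trivial stand-in (it labels an image by its brighter half) so the sketch stays self-contained; a real test would call the model under test instead.

```python
# Sketch of a metamorphic test for an image classifier.

def brighten(image, delta):
    """Metamorphic transformation: raise every pixel value by `delta` (clipped)."""
    return [[min(255, px + delta) for px in row] for row in image]

def classify(image):
    """Toy classifier: 'top' if the top half is brighter than the bottom half."""
    h = len(image) // 2
    top = sum(sum(row) for row in image[:h])
    bottom = sum(sum(row) for row in image[h:])
    return "top" if top >= bottom else "bottom"

def metamorphic_check(image, transform):
    """The metamorphic relation: the prediction must survive the transformation."""
    return classify(image) == classify(transform(image))

image = [[200, 200], [10, 10]]                      # clearly top-bright
ok = metamorphic_check(image, lambda img: brighten(img, 20))
```

Any input for which the relation fails is exactly the kind of corner case worth adding back into the training set.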
How we tested the plant diagnosis application at our AI lab

In our process of testing the plant diagnosis application, we collected the data and model from our client in the required format. Using our strategic partner's commercial state-of-the-art testing product, AIensured, we tested the model, and the resulting insights from both the data and the model's performance were shared with the application owner. Following are the key benefits we provided to our client:

We generated corner cases (cases where the model fails to give the expected result) and retrained on them to increase robustness.
We used 11 attack-vector techniques, such as DeepFool, Universal Perturbation, Pixel Attack, and Spatial Transformation, to gauge how robust the model is against security attacks.
Model explainability, covering both white-box and black-box explanations, helped the client understand which portion of the image the model focuses on and what caused misclassifications.
To overcome the oracle problem (not having a defined expected output), we performed metamorphic testing using techniques like rotation, shear, and brightness changes, which showed how the model performs under label-preserving transformations.
Model quantization reduced the model size without losing accuracy, allowing the model to run on low-end electronic devices as well.

The list of tests performed on the model is depicted in the graphic below.

Results: The bottom line is that after retraining the model with the generated corner cases, its performance was found to improve by around 12%. The report we shared helped the client make their model explainable, ensured compliance with the required privacy governance, and above all made the model responsible, robust to security attacks, and better-performing overall. I hope this article was insightful!
Please don't hesitate to contact me if you have questions or suggestions. Happy learning!


Why is Codeless Automated Testing gaining popularity?

Do you want to become a scriptless automation test engineer? If yes, then this article is for you.

What is scriptless test automation?

It is a method of creating automated test scripts that does not require coding or programming skills, and it serves to reduce the time needed to create automated tests: testing performed without the requirement of writing code. Let's go a little deeper into scriptless testing and see how it makes automation simpler for testers.

Why is codeless automated testing gaining popularity?

In the current times of DevOps and Agile, speed continues to be the prime driver throughout the software engineering (CI/CD) process. To accelerate test automation, industry leaders in the Quality Engineering space often prefer testing tools and frameworks that require little to no code during test-script development: low-code, no-code, and codeless/scriptless (the terms are largely interchangeable). Tools using a low-code/scriptless approach allow test engineers to create test scripts without any previous coding experience. Purely code-based platforms are losing ground because of the initial time taken to develop the automation framework, the longer time to develop scripts, and the burden of test maintenance.

When you consider a codeless test automation platform (or any other testing tool, for that matter), keep in mind that the tool will never replace 100% of your manual testing. It makes perfect sense for some scenarios to be tested manually, where the most intelligent execution requires an expert's "human touch" for various environmental reasons. However, several types of tests are the right candidates for codeless scripting, e.g. scenarios that are tested repeatedly without much change in functionality.

What is the difference between script and scriptless automation?
Script: test steps are manually defined before execution (step 1 -> step 2 -> step 3 -> ...).

Scriptless: test steps are generated during test execution based on the available actions. So you can move from manual to automation with no code, which can increase your speed and test robustness.

Scriptless test automation can help you:

Increase test automation coverage.
Improve and enhance quality.
Create stable automation.
Accelerate quality delivery.
Avoid coding (or involve only low coding).
Think like the end user (the customer).
Make changes to tests easily.
Reduce testing delays (you can speed up test cycles by enabling your business testers).

Scriptless automated testing methods:

NLP (natural language processing)
Model-based testing
Image-based scriptless testing
Recording screenshots
Drag-and-drop-based object mapping
Keyword-driven testing
Object-driven testing
AI bots for test automation

Why do we require scriptless automation testing tools?

Test automation can't succeed without effective test automation tools, so it is useful to know which automation testing tools best fit user behavior. These tools let you automate tests instead of writing test scripts, and they typically:

Reduce the time spent on test maintenance with self-healing ML algorithms.
Deliver easy, better, and faster results (faster creation of test automation).
Cut down costs.
Let you search and automate in a few clicks.
Ship with a built-in hybrid framework.
Offer automated suggestions.
Let you focus more on testing.

To become a test automation engineer, coding is mandatory. Right? If you want to become a Selenium test engineer, then coding is required to write test scripts in a programming language like Python, Java, or JavaScript to automate web applications. But if you are not comfortable with programming and have no scripting knowledge, then the following scriptless/codeless automation testing tools are recommended.
What are the most popular automation testing tools available in the IT industry?

Today we see many new automation tools and platforms that record, replay, and create an entire automation test or project with little or no coding required. Enlisted below are some automation testing tools that are helpful to automation testers.

1. Katalon Studio: A freely available automation testing tool built with readily usable features; you just need to configure the software and use it for automation. It automates web, mobile, desktop, and API testing. Katalon also integrates with other tools like JIRA and Slack.

Katalon Studio website: https://www.katalon.com/

2. Appium: An open-source test automation framework for native, mobile-web, and hybrid applications on iOS, Android, and Windows desktop platforms. Appium is built on a client-server architecture: test scripts talk to the Appium server, which drives the automation on the device.

Appium website: https://appium.io/

3. TestingWhiz: A codeless automation testing tool for software, web, mobile, database, cloud, web services, and API testing. It provides global solutions for software companies and their web applications. Applications can be automated using record-and-play as well as drag-and-drop commands.

TestingWhiz website: https://www.testing-whiz.com/

4. Perfecto Scriptless: Automates web, mobile, and AI-driven testing of web applications. It offers fully AI-based maintenance and also supports cross-browser execution, cloud-based collaboration, scheduling and monitoring, intelligent reporting and debugging, and integrations with other software.

Perfecto Scriptless: https://www.perfecto.io/

5. Tosca: A license-based software automation testing tool for end-to-end application testing.
It is used for GUI, API, web application, and mobile application testing.

Tosca website: https://www.tricentis.com/resources/tosca-automate-ui/

Conclusion: Codeless testing is evolving at a very rapid pace, and new commercial platforms enter the market all the time. Hence, I would recommend choosing a platform that is futuristic and AI-powered. I would also give substantial weight to the analytics and reporting capabilities of a scriptless testing platform, as the future is all about analytics-based dashboards, auto-healing, and automated decision-making driven by data trends and AI/ML. I hope this article provided some food for thought on why to consider a scriptless testing platform. I will be happy to help in case you have any questions.


In this age of hyper-automation, why is manual testing still a boon for enterprise app testing?

Introduction: Test automation has gained much attention recently. Many testers and developers use test automation to achieve speed, and organizations prefer automation to deliver services on time. Automated testing can reduce testing effort and is sometimes seen as a replacement for manual testing. According to QA Lead's "2020 Software Testing Trends: QA Technologies, Data, & Statistics" (24 Actionable Software Testing Trends and Statistics for 2020 (theqalead.com)), 78% of organizations use test automation for functional or regression testing. Its benefits include executing recurring tasks, identifying bugs quicker, precision, and non-stop feedback, all of which save time and personnel and ultimately lead to a lower software testing budget. However, manual testing still holds a prominent place in the quality assurance process. Automated testing doesn't have decision-making capabilities; by relying on it exclusively, testers lose chances to improve product quality through interacting with and observing the application during testing. So, using both manual and automated testing in different permutations and combinations will greatly improve the production quality of the software. Why choose manual testing in this age of hyper-automation? A suite of automated tests looks impressive, but it can never fully replace manual testing. Manual testing is required for the initial verification of a system before anything can be automated, so it can never be replaced entirely. From the bar chart we can see that manual testing requires less training effort and fewer tools, which is its unique selling point (USP) compared to automation testing, but it requires more human resources, time, and infrastructure. Automation testing, by contrast, requires more tools and training and slightly less infrastructure than manual testing, but needs far less time and fewer human resources.
So, manual testing has its own advantages over automation testing. Let's see some reasons why automated testing can't fully replace manual testing:
- Manual testers can quickly reproduce customer-caught errors.
- Automation can't catch issues that humans aren't aware of.
- Automation is too expensive for small projects.
- Manual testers learn more about the user's perspective.
- Humans are creative and analytical.
- There is a whole class of testing that simply must be manual. Some scenarios can't be automated by their nature, for example mobile applications with a large amount of tapping interaction, or captcha verification.
The key advantage of manual testing over automation is its ability to handle complex and nuanced test scenarios, achieved through manual creation and execution of tests.
Which scenarios need automation? In general, tests that take a lot of time and effort to perform manually, and scenarios that are highly repeatable, are the most suitable for automation. Some of them are:
- Scenarios repeatable on each build, e.g. smoke and sanity tests
- Scenarios repeatable on different browsers and operating systems, e.g. compatibility testing
- Tests that are impossible to perform manually, e.g. performance testing
- Tests that have significant downtime between steps
- Scenarios that require multiple sets of test data to validate, e.g. data-driven tests
- Non-functional testing of an application, e.g. load and performance testing
- Test scenarios with low risk and stable code that is not likely to change often
- Test scenarios that are prone to human error
Which scenarios can't be automated? In current times humans interact with apps and products in multiple ways, broadly through touch and touch-less interfaces. Here are some examples of test cases that cannot be automated:
- Using the camera feature of an app to take pictures in different lighting conditions.
- Performing negative testing to test the reliability of the applications.
(A negative mindset to break the application.) Hackers are adopting newer techniques, and these scenarios have to be tested manually.
- Applications that are touch-enabled cannot be fully automated.
- Testing external features of hardware products, embedded systems, etc.
- Verifying whether a software product is accessible to people with disabilities (deaf, blind, cognitively impaired, etc.).
- Exploratory testing, which is completely based on human experience, instinct, and observation while exploring the app as an end user. Ideally, nothing can compete with the human eye, so it is best performed manually under any given situation.
- Installation and setup testing, where the system needs to be tested with different hardware and software such as CD-ROMs, memory disks, and tapes. Such systems also require manual testing.
As we can see, some tests should be performed manually, especially tests that focus on user interfaces and usability. Although we could try to automate literally everything, manual testing still provides an effective, quality check for bugs and improprieties.
The Bottom Line: It has been a fabulous experience sharing our thoughts on this topic. This article gives an overall view of why we still have manual testing despite the existence of hyper-automation. Automated testing requires coding and test maintenance, but on the plus side it is much faster and covers many more permutations. Manual testing, on the other hand, is slow, but since it handles more complex scenarios, it still survives in the market today. So, no matter how far test automation has evolved, you can't automate everything. Manual testing is still in use, and there are still cases where it is the best choice. It's important to consider both manual and automation approaches when you design your QA strategy.
One of the key testing principles is that 100% test automation is impossible; manual testing is still necessary. So, the final verdict is that automation won't replace manual testing, but neither will manual testing obviate automation.
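To illustrate the "multiple sets of test data" scenario called out above as a good automation candidate, here is a minimal data-driven check in pure Python; the discount rule and the data values are hypothetical, chosen only to show the pattern:

```python
# Hypothetical function under test: a 10% discount on orders of 100 or more.
def order_total(amount: float) -> float:
    return round(amount * 0.9, 2) if amount >= 100 else amount

# Data-driven cases: (input, expected) pairs covering the boundary and both sides.
CASES = [(50.0, 50.0), (99.99, 99.99), (100.0, 90.0), (250.0, 225.0)]

# The same flow runs once per data row; adding coverage means adding rows.
failures = [(amt, exp, order_total(amt)) for amt, exp in CASES if order_total(amt) != exp]
print("failures:", failures)  # an empty list means every data row passed
```

The point of the pattern is that the test logic is written once, and new data rows extend coverage without any new code.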


10 Most Recommended Tests for your AI/ML/DL Models in 2022

In the recent past there has been a spate of accidents involving AI and machine learning models in practice and deployment, so much so that there is an active database chronicling such incidents (https://incidentdatabase.ai/). At a time when AI is making strides in radical business transformation for enterprises, it is vital that we ensure seamless deployments of AI in real transformational scenarios, and to do that we must ensure quality, trustworthy, and responsible AI. A critical part of that is the focused effort to test AI/ML/DL models thoroughly. In a previous article, Why Current Testing Processes In AI/ML Are Not Enough?, we showed how existing techniques and processes are not sufficient. In this article we elucidate the complete set of tests required for an AI model. We enumerate and define each of these tests for AI/ML/DL models below.
1. Randomized Testing with Train-Test Split: As illustrated in Why Current Testing Processes In AI/ML Are Not Enough?, the current foundation of testing in the ML life cycle rests on the principle of splitting the data into training and test sets and computing metrics on the test data. Metrics vary from accuracy in classification to MSE in regression. The basic idea is to test how the model performs on unseen data.
2. Cross-Validation Techniques: This is an effective set of model evaluation techniques currently in vogue as part of the ML process. Here again the basic idea is to test how the model performs on unseen data, evaluating its effectiveness by iterating the metrics evaluation across several splits of the data.
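The train-test split and cross-validation ideas in points 1 and 2 can be sketched in a few lines of pure Python; the toy "model" (predicting the majority training label) and the data below are hypothetical, used only to show the mechanics of k-fold evaluation:

```python
import random
from statistics import mode

def k_fold_accuracy(xs, ys, k=5, seed=0):
    """Average accuracy of a majority-label baseline over k folds."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)          # randomize before splitting
    folds = [idx[i::k] for i in range(k)]     # k disjoint folds
    scores = []
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        majority = mode(ys[j] for j in train)  # "training" the toy model
        correct = sum(1 for j in test if ys[j] == majority)
        scores.append(correct / len(test))
    return sum(scores) / k                     # metric averaged across folds

# Hypothetical data: 80 items of class 1, 20 of class 0
ys = [1] * 80 + [0] * 20
xs = list(range(100))
print(k_fold_accuracy(xs, ys))  # 0.8 for the majority baseline
```

LOOCV is the same loop with k equal to the data size, and bootstrap replaces the disjoint folds with resampling with replacement.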
This can be done with any of the three techniques below:
- K-Fold Cross-Validation: The data is split into k parts; in each iteration one of the k parts becomes the test set and the remaining k-1 parts become the training set, and the metrics are averaged across iterations.
- LOOCV (Leave-One-Out Cross-Validation): An extreme form of k-fold cross-validation where a single data item is held out as the test set and the remaining n-1 items form the training set; the metrics are averaged over n (the size of the data) iterations.
- Bootstrap: A new data set of the same size is created from the existing data set by sampling with replacement, and metrics are evaluated over several such iterations.
The test techniques mentioned above are quite prevalent in today's AI/ML/DL deployments. However, as highlighted in https://medium.com/@srinivaspadmanabhuni/why-current-testing-processes-in-ai-ml-are-not-enough-f9a53b603ec6, they may not be enough to deal with corner cases, performance issues, security issues, privacy issues, transparency issues, and fairness/bias issues. Hence we need to expand the scope of testing to cover broader aspects of quality, trustworthy, and responsible AI. To set a benchmark for such a repertoire of tests, we refer to the quality dimensions of AI, in addition to the standard ones defined in ISO 25010, from the talk by Rik Marselis at https://www.slideshare.net/RikMarselis/testing-intelligent-machines-approaches-and-techniques-qatest-bilbao-2018. In addition to the standard ISO 25010 quality attributes, three additional quality attributes are proposed for testing AI/ML systems:
a. Intelligent Behaviour: Evaluating the intelligence of the system, including tests for the ability to learn, improvisation, transparency of choices, collaboration, and naturalness of interaction.
b. Morality: Evaluating the moral dimensions of the AI system.
This includes broad tests for ethics (including bias), privacy, and human friendliness.
c. Personality: Closely related to testing the humanness of the AI system; it includes tests for dimensions like mood, empathy, humour, and charisma.
In view of this discussion, it is vital that we evolve a testing strategy involving a comprehensive set of tests for AI/ML systems covering both these additional quality dimensions and the standard ISO 25010 ones. Let us look at some of the important tests we need to incorporate from this perspective.
3. Tests for Explainability: To enable testing for the "transparency of choices" element under Intelligent Behaviour above, we need a comprehensive approach to testing models for explainability. As discussed in https://medium.com/@srinivaspadmanabhuni/why-some-ml-models-required-to-have-explainability-fc190906a9c8, this is specifically required when AI/ML models are not interpretable, like neural networks. For interpretable models, it is fairly easy to get information on the rationale of an inference; for complex models like neural networks, however, we must test for the rationale behind any decision. This whole area is broadly referred to as XAI (Explainable AI), framed by DARPA at https://www.darpa.mil/program/explainable-artificial-intelligence. Explainability tests can be of two types:
- Model-Agnostic Tests: These tests do not take into account any specific details of the ML model and work independently of it, much like black-box testing. Examples include LIME.
- Model-Specific Tests: These tests take into account the specifics of the model under consideration. For example, for a CNN-like model you can use a technique like Grad-CAM to look transparently at the rationale of a decision.
4.
Security Testing for AI/ML Models: In the context of the ISO 25010 quality attributes, security, with its broad needs of confidentiality, integrity, and availability, is a vital attribute to test. For AI/ML, specific security needs arise from a new category of threats, namely adversarial attacks, which poison or perturb data to fool models. It is important that we include security testing for adversarial attacks as a primary component of any AI/ML test, including testing for potential attacks on the training data. Such tests can simulate both kinds of attacks below: White-box attacks: Here the attacker has knowledge of the parameters


Why current testing processes in AI/ML are not enough?

The current notion of quality assurance and testing in AI/ML pipelines is based on validation using a random set-aside portion of the data, on which the model is tested and metrics computed. Metrics like accuracy on this set-aside data, somewhat ambiguously termed "test data", are the usual rubric for evaluating the effectiveness of ML models. But this gives only a partial picture of model quality, and it is not sufficient to guarantee good performance on deployment. It is probably because of the terminology of "test data" that the bigger picture of testing is missed in the ML life cycle. Some additional validation mechanisms are also used to strengthen the evaluation process:
- K-fold cross-validation
- Bootstrap
- Leave-one-out cross-validation
However, all of the above, including randomized train-test splits, rest on the notion that testing the model on randomized unseen data is good enough validation. We feel that is an incomplete picture that cannot guarantee the overall performance of the model in the field. Here are a set of reasons why we need to think beyond current model evaluation and validation approaches to guarantee AI/ML model quality.
Random selection of the test set, including cross-validation based approaches, does not guarantee comprehensive coverage of input scenarios, especially corner cases, which are rare by nature. Even though cross-validation tries to cover the overall spectrum via the k-fold approach, a systematic way to understand and debug the model's performance across different input variations is not possible; hence, detecting which types of input variations are under-represented in the model is impossible with current approaches.
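A tiny simulation makes the corner-case argument concrete: with a purely random split, a rare input category can easily be absent from the test set entirely, so the model is never evaluated on it. The data, rarity level, and split fraction below are hypothetical:

```python
import random

def test_set_misses_corner_case(n=1000, rare=3, test_frac=0.2, seed=None):
    """Randomly split a dataset containing `rare` corner-case items;
    return True if the test set contains none of them."""
    rng = random.Random(seed)
    data = ["corner"] * rare + ["normal"] * (n - rare)
    rng.shuffle(data)
    test = data[: int(n * test_frac)]   # random 20% set-aside "test data"
    return "corner" not in test

# Over many trials, a large fraction of random splits never exercise
# the corner case at all (roughly 0.8**3, i.e. about half, here).
misses = sum(test_set_misses_corner_case(seed=s) for s in range(1000))
print(f"splits with zero corner cases in test data: {misses}/1000")
```

In other words, passing the randomized evaluation says nothing about behaviour on the rare inputs that were never sampled into the test set.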
Testing for security, an important non-functional IT requirement, is totally absent from current model evaluation approaches. Beyond application security, AI models themselves now need to be audited for AI-specific attacks; hence there is a need for comprehensive security testing of AI/ML models. In compliance-oriented sectors there is an increased push for generating explanations or rationale for AI/ML model decisions, so testing for explainability is a must for today's AI/ML models. Performance of AI/ML models must be tested independently of the system in which they are deployed, because specific deployment formats like TinyML need comprehensive validation of performance at the model level. Privacy and GDPR-imposed constraints on data and derived AI models are a huge set of desiderata for AI/ML applications, so testing AI/ML models for privacy breaches, attacks, and leaks is an important component of certifying and auditing AI models. Testing and assurance of fairness and bias in AI models is an important requirement to ensure they do not get recalled or rescinded. Testing data quality at the input level before data is fed to the ML process is vital, as many quality issues in models arise from poor input data. Finally, in several scenarios there is not sufficient data to test AI models; in those scenarios the data adequacy of the models needs to be tested, and if needed, mechanisms to augment test data must be made available. Overall, these desiderata point to the need for standalone frameworks, processes, and products for AI testing that can handle all the above tests for ML models of all types. To ensure trustworthy and responsible AI, a comprehensive set of tests covering all the points above is mandatory. — Dr.
Srinivas Padmanabhuni, testAIng.com. Note: The article has been republished here with prior approval from the author. About the Author: Dr. Srinivas Padmanabhuni is the CTO of TestAIng. He is a well-known personality in the field of Artificial Intelligence (AI) and is recognised for his significant contributions to AI. He holds a Ph.D. in Artificial Intelligence, speaks at several premier institutes and forums, and has authored several technical articles and books in AI/Data Science. About TestAIng (testAIng.com): testAIng.com (pronounced "tAI") is a leader in testing AI systems using state-of-the-art techniques, tools, and technologies. They have combined their deep experience in testing with AI to create a unique, one-of-its-kind proposition for testers who want to either use AI in their testing process or get their AI systems tested.

Performance testing

Performance Testing & Engineering trends of 2020

In this age of digital transformation, application performance has become a critical part of business. It is not just about page load or page speed; performance drives business revenue by ensuring the software can process transactions responsively for a better customer experience. Therefore, performance testing and engineering is emerging as a specialization within software testing, and testers are encouraged to adopt a multi-layered testing approach instead of traditional performance testing. In this article, we will cover at a high level some of the best practices that help testers uncover different aspects of performance testing and engineering. Let's start with the test strategy around performance testing; we will also touch on some recent industry trends in performance engineering in the later part of this article.
Building a Complete Test Strategy: Performance testing and engineering requires deep technical knowledge of the application and technology, followed by detailed planning and preparation before test execution. The BA/product owner, business owner, application developers, and production monitoring team are typical participants in the planning phase. During planning and strategy preparation, we must consider including both shift-left and shift-right load testing approaches.
a) Shift-left load testing: Shift-left testing means testing early, at every stage of development. It saves time and improves the overall quality of the application, since you don't have to wait until the end of the development process to performance-test the entire application.
b) Shift-right load testing: Shift-right testing has a big impact on overall application quality. Applying test practices in production means bringing real-world users and their experiences into the development process.
Right workload model and plan: The right workload model is always a challenge for the performance test engineer.
The test engineer needs to carefully define think time, pacing, critical transactions, and user load. An inaccurate workload misguides server optimization and can even delay project deployment.
Emphasis on baseline tests or one-user load tests: A one-user load test is important for baselining the application. It saves time, helps the tester find defects before the full load test, and isolates performance defects in an early round of testing (never forget to include a one-user load test in your plan).
Identifying the performance bottleneck: Identifying bottlenecks is always a challenge for the tester; it requires deep technical knowledge and expertise. A few techniques generally used by performance engineers:
1. Trend analysis: A technique commonly used to understand the behavior of the data related to the frequency of performance issues in order to find the bottleneck. Performance bottlenecks can be of various types: consistent, fluctuating, increasing, or decreasing. This analysis can be done on both client-side and server-side metrics.
2. Correlation: Evaluates the relationship between different performance metrics to find bottlenecks, for example comparing the user response-time graph with the CPU graph during the test to uncover performance issues or to guide subsequent analysis steps.
3. Comparison: As the name suggests, here we compare the performance metrics of the baseline test with the metrics of an unacceptable performance test. This helps in finding the causes of bottlenecks.
4. Elimination: Using the elimination technique, we remove certain components during the test to identify the real bottleneck, e.g. removing the web server and load testing the application server directly.
Creating the report: Effective reporting of test results is one of the key tasks for a performance tester. The final performance test report provides detailed analysis and recommendations to the project team and the business.
All the stakeholders make their critical go/no-go decisions based on this report. The test report must include: findings/observations, a detailed description of the performance test results, recommendations, defect details, and sign-offs.
Conclusion: Performance testing requires deep technical skills across different technologies, and the success of a performance test depends on how well we plan and execute. I hope this article helps outline a few critical aspects of successful performance test planning and execution. Performance test engineers should consider the above techniques in order to achieve significant performance improvements in the application under test (AUT). I wanted to keep this article as a starting point summarizing the latest performance engineering trends, so keep checking this space for more articles on performance testing and engineering shortly.
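As a footnote to the workload-model discussion above, Little's Law gives a quick sanity check when sizing user load from think time and response time. A minimal sketch with hypothetical target numbers:

```python
# Little's Law applied to load-test workload modelling:
#   concurrent_users = throughput * (response_time + think_time)
# All numbers below are hypothetical, for illustration only.

def required_users(throughput_tps: float, response_time_s: float, think_time_s: float) -> float:
    """Concurrent virtual users needed to sustain a target throughput."""
    return throughput_tps * (response_time_s + think_time_s)

# Target: 50 transactions/second, 2 s response time, 8 s think time
users = required_users(50.0, 2.0, 8.0)
print(users)  # 500.0 virtual users
```

If the computed user count disagrees badly with what the script is configured to run, the workload model (think time, pacing, or user load) is probably wrong, which is exactly the misguided-optimization risk described above.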

Test data management using AI-powered synthetic data generators

Today's Agile/DevOps setups need the ability to go faster, and the availability of a huge amount of diversified test data can be critical to the success of a test automation effort. In this article we will discuss synthetic test data, its importance and applications, the various options available today for generating cheap and adequate test datasets using modern Test Data Management platforms, and most importantly, how the power of AI/ML is being leveraged in this space.
Introduction: More than 45% of the global population now has access to social media, mobility, analytics, and cloud-based applications. Software testing needs several combinations of datasets to ensure the software product does its job flawlessly on end users' systems and devices. Testing without adequate and diversified test data can lead to defects or flaws in software, which can even be a disaster. Below are a few examples of how testing can be misleading or go dangerously wrong due to a lack of adequate test data:
- e-Commerce apps slowing down or even crashing during the annual sale season
- Unfortunate air accidents that occurred in the past due to software malfunctioning on wrong sensor data
The testing approach in Agile/DevOps, based on the "test early and test often" (shift-left) philosophy, demands large sets of production-like data in the desired formats in the initial development phase of the software product. During this phase, test engineers adopt various methods or leverage traditional utilities like spreadsheets to generate test data for their test scripts. Below are a few common ways of generating test data:
- Manual creation of data files (spreadsheets, CSVs, audio/video files, etc.)
- Using SQL statements/stored procedures
- Getting a copy/dump of source data (risky if the data is confidential)
- Leveraging an automated data generator/Test Data Manager (TDM)
Test data is required in different formats not only for functional testing but throughout the development life cycle:
- Functional testing (unit, integration, and system testing)
- Performance testing (load testing with thousands of concurrent users)
- Security testing (adequate user profile data)
- Reliability testing (testing with negative data)
- Configuration/compatibility testing (localization, internationalization, etc.)
What is Synthetic Data? As the name suggests, synthetic means something created artificially; in our context it is test data created artificially by a data generator. Data created by real customers or end users, like user ID, password, name, age, sex, photo, address, telephone number, and email ID, are examples of real data. Such data can be far more complex and vast in domains like healthcare, automotive, digital, and social media, and it is not always practical to have a high volume of this diversified data during the testing phase, so we have to create it either manually or using a tool, as discussed in the introduction above. The set of images below is a good example of synthetic data created by an AI-powered algorithm; the images look amazingly like real people, but these people don't actually exist. Source: https://www.thispersondoesnotexist.com/ We'll discuss how these AI-powered models generate high volumes of synthetic data in the later part of this article. First, let's discuss why we need high volumes of synthetic data.
Importance of production-like synthetic data: Production data (e.g. users' profile data in a banking application) is secured and can't be accessed for testing purposes.
Hence, real-like, anonymized test data has to be created for testing. Below are a few reasons why we can't use real data and have to rely on synthetic data:
- Data usage restrictions or data protection standards: Real data might be protected under regulatory restrictions, e.g. GDPR (EU data privacy law), export-controlled data, PII, and so on. Synthetic data can replicate, mimic, or mask the real data format to overcome this challenge.
- No real data exists: When we develop an application from scratch (e.g. in emerging areas like autonomous vehicles), we need a good amount of test data, and synthetic data is a big help from a testing standpoint.
- Cost effectiveness: Generating synthetic data through an AI-powered data generation model is considerably more cost-effective and efficient than manual or other methods.
- Testing AI/ML based applications: AI/ML models need a humongous amount of data to train and to test their accuracy. Synthetic data is used here because real data is expensive and consumes time and effort.
How does an AI-powered synthetic data generator work? AI models leverage deep neural networks with additional privacy logic to generate an unlimited amount of synthetic data that complies with global standards like GDPR and CCPA. Most modern synthetic data generators have user-friendly GUIs, and with a few clicks the platform lets you generate an unlimited amount of highly realistic but completely anonymous synthetic data. This AI-generated synthetic data looks very much like your actual customer data, is remarkably accurate, and becomes a great alternative to your privacy-sensitive data. Let's look at the AI model at the heart of a modern AI-powered synthetic data generator: the Generative Adversarial Network (GAN).
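Before looking at GANs, it helps to see the simplest form of synthetic data generation, a rule-based generator of the kind a TDM tool might offer. This is an illustrative sketch only; the field names and value pools are hypothetical, and a real platform would use much richer rules:

```python
import random
import string

# Hypothetical value pools for illustration.
FIRST_NAMES = ["Asha", "Ravi", "Meera", "John", "Li"]
LAST_NAMES = ["Sharma", "Patel", "Smith", "Chen", "Rao"]

def synthetic_profile(rng: random.Random) -> dict:
    """Generate one anonymous, production-like user profile."""
    first = rng.choice(FIRST_NAMES)
    last = rng.choice(LAST_NAMES)
    return {
        "name": f"{first} {last}",
        "age": rng.randint(18, 80),
        "email": f"{first.lower()}.{last.lower()}@example.com",
        "user_id": "".join(rng.choices(string.ascii_lowercase + string.digits, k=8)),
    }

rng = random.Random(42)  # seeded, so the generated test data is reproducible
profiles = [synthetic_profile(rng) for _ in range(3)]
for p in profiles:
    print(p)
```

Rule-based generators like this are cheap and privacy-safe, but the statistical relationships between fields are whatever you hard-code; GANs, described next, instead learn those relationships from real data.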
GANs are modern machine learning models, built with deep learning methods, that create new data closely resembling the input data. GANs can be used to solve complex problems like:
- Creating large volumes of synthetic data for banking applications, or for any domain where getting real test data is challenging (IoT, autonomous vehicle data, etc.)
- Creating new images, videos, and audio from a few relevant samples
- Composing new music without playing any musical instruments
- Enhancing image quality without using any external artifacts
- Converting grayscale images or videos to color images and videos and


How to reduce test script rework effort and increase test development speed noticeably

Do you feel that you are wasting too much time on new script development and maintenance? If yes, then let's talk about it.
Introduction: In this fast-moving world we are always trying to save as much time as possible, because time is money. We invest a lot of time in new script development and maintenance, and that becomes a big challenge as our script base expands to cover more and more features of the application under test (AUT). This pushes the testing team to consider every possible option to save time and rework effort by adopting different models, tools, and approaches. Here are a few common mistakes I have noticed people make in my past experience:
- Creating many Java classes that are really not required; unnecessary classes increase maintenance.
- Writing logic inside the test script, which increases rework/maintenance effort.
- Improper team coordination when developing functions, which leads to code redundancy.
- Inadequate functions for reusability.
- No proper project architecture.
- Absence of standard guidelines for script development and maintenance.
All of the above lead to project issues that impact delivery timelines, and accommodating any new change will always be a big challenge.
Advantages of reusability in test automation: Before I explain how to design a test automation framework that enables adequate reusability, I would like to outline the benefits of reusability:
- Higher adaptability of scripts to new changes/features
- Lower maintenance cost, i.e. faster return on investment (ROI)
- Faster go-to-market time
- Enabling the test-early, test-often approach (Agile/DevOps)
- Higher confidence in releases (fewer defects)
- Lower QA cost
- Better customer experience
Given so many advantages of reusability, the obvious question is: how do we enable reusability in our framework? Let me explain it step by step.
In this article I'm not going to talk about any specific automation tool or technology; I will keep to a generic approach that can be applied to almost all kinds of projects.
Important factors to keep in mind at project kick-off: Below is a list of factors to consider during the project kick-off period that will help drive down maintenance/rework effort throughout the lifecycle:
- Application functional knowledge
- Project architecture
- Framework approach
- Reusable action component class (Selenium based)
- Reusable business component class (application based)
- Generic functions class
- Excel integration functions class (non-Selenium based)
- Debugging during script development
- Maintaining a single object repository
- Considering a data-driven approach to enable execution of scripts with different sets of test data
- Team coordination and project documentation
Now let's talk about the above points in a bit more detail.
Application functional knowledge: This is a key item and should not be ignored if you really want to save effort. The automation engineer must go through as many test scenarios as possible to understand the functionality. Then they can plan for roughly 80% reusable functions, which will save development effort and reduce maintenance work.
Project architecture: A good software architecture is extremely important for a software project. It creates a solid foundation, makes your platform scalable, improves performance, reduces cost, and avoids code duplication. I've provided a sample project architecture below, but you can customize it per your business requirements.
Framework approach

There are many types of framework available for starting an automation project, but it adds more value if you go through the business requirements at the beginning and then pick the right framework for your project; this will really help you move much faster.

Reusable action component (Selenium based)

This is a function class containing only individual actions or functions that are not specific to any page; common actions can be added here. For example, when a single function is capable of handling multiple objects, keep that function here: several objects in the footer of your HTML page can all be handled by one function.

Reusable business component (application based)

This is a function class containing small workflows or business transactions, such as logging into the application, creating an order, cancelling an order and so on. Each transaction or business component may contain many actions. We call these components from the test script; this increases script development speed and reduces maintenance effort by increasing reusability.

Generic functions

This is a function class containing functions that can be reused in other projects as well, such as password encryption/decryption, capturing date and time, parsing XML, etc.

Excel integration functions

This class should contain functions specific to Excel and the data driven framework. It also helps enable reusability across other projects.

Debugging the script development

This is also very important; we should know how to debug our code, as it really helps speed up development and fix defects quickly. Since debugging varies from IDE to IDE, learn it based on your IDE selection.
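The action/business layering above can be sketched in Python. Everything here is illustrative: the class names, locators and the stub driver are assumptions for the sketch, not part of any specific framework. In a real run you would pass in a Selenium WebDriver instead of the stub.

```python
# Minimal sketch of the layered reusable-component idea.

class ActionComponent:
    """Page-agnostic single actions (the Selenium-based layer)."""

    def __init__(self, driver):
        self.driver = driver  # duck-typed: WebDriver or a test stub

    def click(self, locator):
        self.driver.find_element(*locator).click()

    def type_text(self, locator, text):
        element = self.driver.find_element(*locator)
        element.clear()
        element.send_keys(text)


class LoginComponent:
    """Application-based business transaction built only from actions."""

    USERNAME = ("id", "username")
    PASSWORD = ("id", "password")
    SUBMIT = ("id", "login-btn")

    def __init__(self, actions):
        self.actions = actions

    def login(self, user, password):
        # Test scripts call login(); they never touch raw Selenium calls.
        self.actions.type_text(self.USERNAME, user)
        self.actions.type_text(self.PASSWORD, password)
        self.actions.click(self.SUBMIT)


class _StubElement:
    """Tiny test double standing in for a real WebElement."""

    def __init__(self, log, locator):
        self.log, self.locator = log, locator

    def click(self):
        self.log.append(("click", self.locator))

    def clear(self):
        self.log.append(("clear", self.locator))

    def send_keys(self, text):
        self.log.append(("send_keys", self.locator, text))


class _StubDriver:
    """Tiny test double standing in for a real WebDriver."""

    def __init__(self):
        self.log = []

    def find_element(self, by, value):
        return _StubElement(self.log, (by, value))


driver = _StubDriver()
LoginComponent(ActionComponent(driver)).login("alice", "secret")
print(len(driver.log))  # 5 recorded interactions: clear+type x2, click
```

Because test scripts only call business components, a UI change touches one locator and one component, not every script.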
Maintain a single object repository

A single repository means maintaining only one file, which minimizes duplicate object properties; it should carry meaningful comments for all page objects.

Consider a data driven approach to enable execution of scripts with different sets of test data

Implement a data driven approach if a business transaction demands executing the same flow with different sets of test data. Don't forget to add conditions for additional flows in order to get better coverage from the same test script.

Team coordination and project documentation

Coordination among team members is very important. The project plan and function documents should be properly discussed within the team to avoid code redundancy and to ensure the coding guidelines are followed by every test engineer in the team.
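A hedged sketch of the last two ideas: a single object repository kept in one place, and a data driven loop feeding one script multiple rows of test data. The file contents, names and keys below are illustrative; many teams keep the repository and test data in separate JSON/properties/Excel files rather than inline strings.

```python
import csv
import io
import json

# --- Single object repository: one file, one source of truth ---
# Illustrative JSON; in a real project this would live in its own file
# (e.g. an objects.json under the repository folder), with a meaningful
# comment per page object.
REPOSITORY_JSON = """
{
  "login.username": ["id", "username"],
  "login.password": ["id", "password"],
  "login.submit":   ["id", "login-btn"]
}
"""
repository = {name: tuple(loc)
              for name, loc in json.loads(REPOSITORY_JSON).items()}

# --- Data driven approach: same flow, different sets of test data ---
# Illustrative CSV; in a real project this would be a test data sheet.
TEST_DATA_CSV = """user,password,expected
alice,secret,success
bob,wrong,failure
"""

results = []
for row in csv.DictReader(io.StringIO(TEST_DATA_CSV)):
    # One script, many data rows: the flow stays identical, only the
    # inputs and the expectation change per iteration.
    results.append((row["user"], row["expected"]))

print(repository["login.submit"])  # ('id', 'login-btn')
print(results)
```

Scripts look up locators by logical name (`"login.submit"`), so a changed locator is fixed once in the repository rather than in every script.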


How To Set Up Effective Continuous Testing In DevOps?

Have you ever wondered how the ongoing COVID-19 pandemic has forced enterprises to realign their strategies for survival in these uncertain times? A sudden fall in customer demand had a huge impact on the financial health of companies and forced them to be more adaptive than ever. DevOps comes to the rescue in this scenario, as it is conceptualized primarily to enable an adaptive software release process while making no compromise on customer experience.

Sources say that by 2021 around 55% of the world's population (more than 4 billion people) will have access to the internet, and around 37 billion devices (smartphones, IoT, wearables, etc.) will be connected to it. 5G has already started finding its place in a few countries, and this will accelerate in 2021 and beyond. These numbers represent the market situation at a high level. And thanks to the COVID-19 pandemic, the world is connected digitally better than ever, as everything goes virtual: online classes, working from home, webinars and so on.

The above data also signifies what the impact would be if software fails due to bad design/coding, improper testing, security vulnerabilities, bad performance and so on. As per the "Software Fail Watch" report published by Tricentis in 2018, the loss due to software failure was estimated at around USD 1.715 trillion, impacting around 3.7 billion people worldwide.

Why DevOps?

Software development tasks in DevOps are continuous (coding, building, testing, etc.) because DevOps is designed around adaptiveness and speed, the two most critical factors for success. To start with this topic, I would like to discuss an old depiction of the PPP graph (Process, People and Product), as in the chart below. As the chart shows, your software product's success depends primarily on two parameters: the process and the people who are developing the product.
The organization can achieve better product quality by continuously improving these two parameters; competitors will likewise try their best to bring maturity to both factors to succeed in the market. If you examine DevOps carefully, it uses this principle at its core, and of course Agile plays a complementary role in the DevOps space. We also need to keep in mind that modern techniques like AI/ML are influencing the 'People' part of the chart, specifically by bringing more intelligence to the process.

Shift-Left, Shift-Right And Continuous Testing

Continuous testing in the build pipeline is an integral part of the overall DevOps setup; it executes the test cases in parallel, as I'll explain shortly. We are going to cover functional testing, performance testing and security testing under continuous testing. As you know, "shift-left" is a buzzword in DevOps that refers to starting QA automation tasks (functional, performance and security testing) as early as possible in the development process; in Agile language we also call it "in-sprint" automation.

The chart above illustrates the different pieces of the continuous test automation process in DevOps. As the illustration shows, test automation rigor starts much earlier in the lifecycle (i.e. shift-left), and in a mature DevOps model the testing metrics are collected at each phase and fed into an analytics platform (which can also be AI/ML based) for better predictability and continuous quality improvement. These tools/platforms can be integrated so that they talk to each other smoothly and enable touchless test executions. Also note that some enterprises adopt a shift-right approach and test their application/product even in the production environment, in order to evaluate its actual behavior and performance in real time.
As continuous code delivery matures and Agile teams compress sprint cycles, the testing window also shrinks, leaving minimal or zero room for rework. The idea is to have all automation scripts ready before the build is available, so that test execution can start as soon as a build is deployed. This also enables adequate test executions as the work product moves towards its final release. Let's discuss each of these testing areas one by one.

Continuous Functional Test Automation (CFTA)

What is your favorite automation tool? We will discuss several automation tools and their particular advantages in upcoming articles, but for now let's try to understand how a functional test automation tool fits into the DevOps build pipeline. As DevOps focuses on shortening the SDLC in order to release high quality software, continuous test automation becomes critical to the overall success of the product. Moreover, over the last few years test automation tools have evolved to become smarter, more autonomous and more universal, enabling strong testing intelligence by infusing AI and ML. AI/ML techniques like Natural Language Processing (NLP) are very helpful for smart recognition of objects, which in turn brings additional speed to automation by minimizing the rework caused by code/UI changes. In this way, continuous testing tools can provide quick and timely feedback on business risks through their analytics capabilities.

I've listed a few popular commercial tools/providers that fit into the continuous test automation space:

- Eggplant
- Tricentis
- Parasoft
- Micro Focus
- ACCELQ
- mabl
- Broadcom
- Sauce Labs
- Perforce Software
- IBM
- FrogLogic
- Ranorex
- SmartBear Software
- Cyara
- Worksoft
- Experitest

If you are an expert in any of these tools, it would be great to have you share your story in this forum. Don't hesitate to contact me if you think you have a great story for our readers.
Continuous Performance Testing

Gone are the days of doing performance testing much later in the SDLC, i.e. after system integration testing (SIT) and just before going live. It is an annoying experience to have several critical performance issues detected during performance testing when go-live is planned only a few days later; this sometimes puts your go-live in jeopardy, and the rework can be very costly as well. As Application Performance also determines success or failure of your product's
