Continuous Delivery is the practice of releasing software functionality frequently and on an ongoing basis without compromising quality. Agile practices and test automation are the fundamental building blocks for realizing the goals of continuous delivery. Traditional software delivery frameworks were based on a linear waterfall model, where software was tested by QA (Quality Assurance) teams only after large software modules had been fully developed by the development teams. The time for QA teams to assure the quality of the software was typically a few weeks to months. The test strategy in traditional waterfall models relied mostly on manual testing with very little test automation, and what automation existed was focused on the end-to-end testing tier. This approach is not optimal for continuous delivery.
This document focuses on the role of test automation and a strategy for maximizing test automation in order to position an organization to realize the goals of continuous delivery. Non-functional test strategy is not discussed in this document.
Functional Testing: There are typically three types of automated functional tests in a software testing life cycle: unit tests, integration tests and end-to-end (E2E) tests. In traditional waterfall-based software development, test automation was expensive because teams focused on the wrong levels of test automation. What we see in most traditional applications is lots of E2E tests, which are mostly manual, followed by a few integration tests and even fewer unit tests. The following diagram illustrates the transformation that fundamentally needs to occur in testing strategy to provide optimal results and ROI.
Unit Tests: Unit tests are automated tests written to verify the most basic unit-level functionality within a software application or service. Every function, or unit of code, implemented in an application needs to be tested to ensure that when certain inputs are provided to that function, the expected output is produced. Developers are responsible for implementing these tests, and it is important that unit tests are written at the same time any new functionality or change is implemented.
Unit tests provide a fast and easy way to validate code changes. This helps drive refactoring and continuous integration, and ensures that code is continuously tested at the most basic level. It is a standard industry best practice that a good unit test should have no external dependencies and should run on its own; any external dependency should be mocked by the developer writing the test. The purpose of unit testing is to ensure the internal consistency of the code; it is not intended to verify system integration or the code's interaction with outside modules.
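A minimal sketch of this practice in Python, using the standard library's `unittest.mock` to stub out an external dependency. The names here (`PriceService`, the `get_rate` tax client) are hypothetical, purely for illustration:

```python
# A unit test that isolates the code under test by mocking its one
# external dependency. PriceService and its tax client are hypothetical.
from unittest.mock import Mock

class PriceService:
    """Computes a final price; depends on an external tax-rate client."""
    def __init__(self, tax_client):
        self.tax_client = tax_client

    def total(self, subtotal):
        rate = self.tax_client.get_rate()   # external dependency
        return round(subtotal * (1 + rate), 2)

def test_total_applies_tax_rate():
    # Mock the external dependency so the test runs in complete isolation.
    tax_client = Mock()
    tax_client.get_rate.return_value = 0.07
    service = PriceService(tax_client)
    assert service.total(100.0) == 107.0
    tax_client.get_rate.assert_called_once()
```

Because the tax client is mocked, this test exercises only the internal logic of `total()` and stays fast and deterministic regardless of the real tax service.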
Integration Tests: Integration tests are implemented to ensure proper integration of the sub-systems within an application. Outside dependencies such as file systems, databases and external APIs are part of integration tests. As such, integration tests are more extensive and complex than unit tests.
To design a robust integration test suite, create a logical dependency map of the modules that comprise the software application or system. Each module's integration with other modules then becomes an integration test or suite, depending on the complexity of the modules involved.
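As a sketch of the contrast with unit tests: here a hypothetical `UserRepository` module is exercised against a real (in-memory) SQLite database rather than a mock, verifying the integration between the application code and the database sub-system:

```python
# Integration-test sketch: the repository talks to a real database engine
# (in-memory SQLite) instead of a mock. UserRepository is hypothetical.
import sqlite3

class UserRepository:
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")

    def add(self, name):
        cur = self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        self.conn.commit()
        return cur.lastrowid

    def find(self, user_id):
        row = self.conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
        return row[0] if row else None

def test_repository_round_trip():
    conn = sqlite3.connect(":memory:")   # real database, no mocking
    repo = UserRepository(conn)
    user_id = repo.add("alice")
    assert repo.find(user_id) == "alice"
```

The test is slower and more involved than a unit test, but it catches problems (SQL errors, schema mismatches) that a mocked unit test never could.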
End-to-end Tests: End-to-end tests simulate user interaction with the system and ensure that complete workflows behave as expected. End-to-end tests are the most complex of the three types and are more expensive to write and maintain than integration and unit tests.
Strategy for Functional Test Automation
Goals: Defining the goals of test automation is a critical first step before driving automation. Answering some of the following questions will help set such goals: What should be the minimum unit test coverage percentage for an application? How many integration points does my application have, and do we have automated integration test scenarios for them? What are the most critical E2E scenarios we should have? Do we have E2Es categorized by criticality? What percentage of our testing is manual vs. automated today? How quickly should regression testing of an application complete?
Test Pyramid Breakdown: The test pyramid in the continuous delivery model indicates that as you move up from the bottom of the pyramid, the complexity of the tests increases. At the same time, the number of tests at the bottom of the pyramid should be greater than the number at the top. A standard industry best practice is a 70/20/10 split between unit, integration and E2E tests. That means if your application has 100 total tests, 70 of them should be unit tests, 20 should be integration tests and 10 should be end-to-end tests.
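The rule of thumb above can be applied to a suite of any size; this tiny helper (illustrative only, not a standard API) turns a total test count into per-tier targets:

```python
# Apply the 70/20/10 test-pyramid rule of thumb to a suite of a given size.
def pyramid_split(total_tests, ratios=(0.70, 0.20, 0.10)):
    """Return (unit, integration, e2e) target counts for a suite."""
    return tuple(round(total_tests * r) for r in ratios)

print(pyramid_split(100))   # -> (70, 20, 10)
```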
Application analysis and tool selection: Based on the type of application, appropriate tools and test programming languages should be selected to ensure proper implementation of the test suite.
Getting to Continuous Delivery
The holy grail of Continuous Delivery is being able to deploy functionality to production on the same day the corresponding user story is accepted by the product owner. This goal cannot be realized unless test automation becomes the cornerstone of software development.
The following is a high-level test automation strategy that we recommend for organizations that want to improve their software delivery cycle times.
Existing brown-field project(s) with an unknown/immature test automation state:
One of the most common questions we get is: “Our legacy or old applications have no test automation. Do we stop active development and dedicate time to go back and write tests?” Here is our opinion and a proven way of attacking brown-field applications.
New Feature Development:
- All new features are required to have unit tests for new components and at least one E2E or integration test that acts as an acceptance test for the feature.
- Unit tests should be reviewed with Dev/Tech Leads in code review sessions to familiarize the team with testing best practices and expectations.
- To start, no minimum unit test coverage is required, but eventually new features should be required to have 80% coverage or better, and teams should have to explain why a feature is delivered with less than 100%.
- As testing proficiency increases, unit and integration testing of new features should also extend to any affected components that integrate with the new feature. This serves to grow the suite at a reasonable pace.
- All bug fixes should be delivered with, at minimum, one unit test that verifies the expected behavior and the fix.
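One way to enforce the eventual 80% bar is a coverage gate in the build. As a sketch, assuming a Python project using pytest with the pytest-cov plugin (other stacks have equivalent flags, e.g. JaCoCo rules for Java):

```shell
# Fail the build when unit-test coverage drops below 80%.
# Assumes pytest + pytest-cov; "myapp" and "tests/unit" are placeholder paths.
pytest tests/unit --cov=myapp --cov-fail-under=80
```

Running this as a CI step turns the coverage expectation from a guideline into an automatic quality gate.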
Existing Code Testing As Resources Permit:
- Focus on business-level E2E tests that verify core business functionality; these tests do NOT have to be extremely granular or cover every scenario.
- These tests should be the regression safety net providing some assurance that new deployments of the application function as expected without introducing regression issues.
Green-field or new projects should adhere to more stringent test automation rules.
New Feature Development:
- Delivered with a minimum of 70% unit coverage and integration/E2E tests where appropriate to verify the feature meets acceptance criteria.
- Features should be code reviewed with the Dev/Tech lead at minimum, and tests should be reviewed as the first part of the code review session to demonstrate that the code works as the developer intended. (This is a great communication tool for developers to show how they intend the feature to work.)
- As the project matures, quality gates should be designed to reject builds that lower overall code coverage at the unit and possibly integration level.
- Bug fixes should be minimal thanks to the above quality measures, but any bug discovered should immediately get an associated test that re-creates the issue, and then code should be written to fix it.
- Coverage metrics and best practices should be discussed in all development department meetings as a point of emphasis and team pride that code is being delivered with high quality.
- If possible, install information radiators (displays) that give teams real-time feedback on the quality of their code.
Case Study: See How Mednax Automated Processes to Increase the Pace of Innovation
Oteemo created a blueprint for Mednax to build a robust DevSecOps practice that would accelerate their cloud adoption and development lifecycle. See how Oteemo helped Mednax automate its ability to accelerate deployment time from months to minutes.