by Samuel Brown | Nov 02, 2017
As a consultant, I often engage with clients who want to achieve Continuous Delivery (CD) but can’t understand why they have difficulty shipping new features. Even with all the latest and greatest tools to support them, something is lost in translation.
Often, what seems to be lost is a basic understanding of the stages of CD, and why each of them plays an important role in helping development teams achieve high-velocity and high-quality delivery. In this post, we are going to cover the stages of CD and why each of them is critical to ensuring success in delivering features to your customers.
Before discussing the individual stages, I want to highlight an often overlooked aspect of CD: fast feedback. The entire point of continuous integration and delivery is to ship high-quality software as quickly as possible. Continuous Delivery is only possible if we know very early when a commit of code will NOT make it to the end of the pipeline so that we can fix it and make it ready for release. The sooner we get the required feedback, the sooner we can fix the problem and get the pipeline moving again.
Fast feedback is the main reason that we divide a build pipeline into discrete stages. This allows us to get feedback on whether any given part of the pipeline is healthy or not, as quickly as possible. Faster stages should be planned early in the pipeline so that if a failure occurs, we can react quickly. Now, onwards to the stages of CD!
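The fail-fast ordering described above can be sketched as a simple stage runner. This is an illustrative sketch, not a real CI server API; the stage names and check functions are made-up examples:

```python
# Illustrative sketch of a fail-fast pipeline: run the cheapest
# stages first and stop at the first failure so feedback is quick.
# Stage names and check functions are hypothetical examples.

def run_pipeline(stages):
    """Run stages in order; return (passed, failed_stage)."""
    for name, check in stages:
        if not check():
            return False, name  # stop immediately: fast feedback
    return True, None

stages = [
    ("build", lambda: True),         # seconds: compile/assemble
    ("unit-tests", lambda: True),    # seconds to minutes
    ("integration", lambda: False),  # minutes: external resources
    ("e2e", lambda: True),           # slowest: full environment
]

passed, failed_stage = run_pipeline(stages)
```

Because the stages are ordered cheapest-first, a failure in unit tests is reported in seconds rather than after a long E2E run.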
The stages described below are viewed from the perspective of running them on a Continuous Integration server like Jenkins, Bamboo, TeamCity, etc. The actions taken during each step will most likely be triggered via a build script, but we will not be diving that deep here.
Instead, each stage is described conceptually, showing you how to approach it in order to achieve the desired outcome while getting the quick feedback required to maintain the health of your application code.
The first stage should always be the build or assembly of your application. For compiled languages, this means pulling in the dependencies and compiling your application to ensure there are no errors when building it into an executable application. In the case of non-compiled languages, this first step should pull in all dependencies to assemble your application into its executable form.
This might seem like an unnecessary step that should rarely fail, but often failures occur due to mismatches in dependency versions. To avoid this, my simple advice is to ALWAYS pin (lock) the versions of your dependencies. It might seem convenient to let automatic upgrades occur to patch vulnerabilities or other issues with your dependencies, but consistency should be preserved from the developer workstation to the build server.
Anything that is built/tested on your developer machine should be exactly reproducible by the build server. This can prevent time lost to chasing inconsistencies and investigating failures.
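As an illustration of pinning, a Python project's dependency file might look like the following (the package versions are made-up examples). Open-ended ranges like `>=` are exactly what allows the workstation and the build server to drift apart:

```text
# requirements.txt: every dependency pinned to an exact version
requests==2.25.1      # not: requests>=2.25
urllib3==1.26.4       # pin transitive dependencies too
```

Most ecosystems have an equivalent mechanism (lock files, fixed version ranges); the point is that the build server resolves exactly the versions the developer tested against.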
Unit testing seems to be the stage that everyone “knows” they need but never quite gets around to implementing in any depth. While the stage itself is easy to set up, running whatever tests your organization deems necessary, unit testing is only as useful as the coverage it provides. It is far too easy to adopt an attitude of “we’ll test later”.
This mindset can become ingrained, resulting in little to no unit testing. In my experience, thorough unit testing can cut the time later spent fixing bugs to a third of what it would otherwise be. At first, your team might find testing unnatural and burdensome, but no one said changing culture was easy! I’ve found that providing examples of the different types of tests speeds adoption, and that enforcing unit-test requirements shows the team that management is committed to providing adequate time to write tests.
Unit tests should execute quickly because they require no outside resources like filesystems or databases. They should be viewed as your first line of defense against defects because a hole in coverage here could result in a bug in production. I can’t stress enough how important unit testing is to the overall health and maintainability of an application.
Also, in my personal experience, developers who have the time and space to write good unit tests are generally happier and prouder of their work and produce better code.
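A minimal example of the kind of fast, dependency-free unit test described above. The function under test is a made-up example; the point is that nothing here touches a filesystem, network, or database:

```python
# Hypothetical pure function under test: no filesystem, network,
# or database involved, so the tests run in milliseconds.
def apply_discount(price, percent):
    """Return price reduced by percent, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Fast, isolated unit tests: plain assertions, no outside resources.
assert apply_discount(100.0, 25) == 75.0
assert apply_discount(19.99, 0) == 19.99
try:
    apply_discount(100.0, 150)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for invalid percent")
```

Tests of this shape can run on every commit in the first minutes of the pipeline, which is what makes them the first line of defense.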
Code quality means different things to different organizations. We define code quality as analyzing your code for security vulnerabilities and comparing it against a set of coding rules meant to keep it readable and maintainable according to team standards. Code analysis can, and should, be run at multiple points: on the developer workstation and through a branch build of checked-in code.
When the code is finally merged into the master branch, there should be few, if any, surprises or unresolved issues. Unfortunately, issues detected by code analysis are often left to build up over time and then ignored due to a perceived misconfiguration of rules or false positives.
Like any good analysis, your rules should be customized and agreed upon prior to use so that it is detecting real issues. The most practical way to use the Code Quality Analysis stage is to implement it in developer branch builds to catch issues early before they are merged.
You should then also implement analysis on the master branch build to ensure that these merges did not introduce any new issues. This stage should be viewed as a quality control gate that ensures your code adheres to standards and is free of automatically detected vulnerabilities or security issues.
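To make the mechanism concrete, here is a toy static-analysis rule written with Python's standard-library `ast` module. A real pipeline would run a dedicated tool (a linter, SonarQube, or similar); the rule here, flagging bare `except:` clauses, is just one illustrative team standard:

```python
import ast

# Toy static-analysis rule: flag bare `except:` clauses, which
# silently swallow every error. Real pipelines would run a full
# analysis tool; this only illustrates the mechanism.
def find_bare_excepts(source):
    """Return line numbers of bare except clauses in source code."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]

sample = """\
try:
    risky()
except:
    pass
"""

violations = find_bare_excepts(sample)  # line numbers of violations
```

Running checks like this on branch builds, and failing the build on violations, is what turns the stage into a genuine quality gate rather than a report nobody reads.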
Integration testing is where I tend to see teams take the biggest shortcuts and miss an opportunity to bulletproof their application. It is often skipped because of the high cost of setting up tests that simulate real application behavior. An example would be standing up a small database to test database connection settings and the application’s SQL for CRUD operations.
Creating the database, the appropriate test data, and all of the scenarios to make this a realistic test is very time-consuming. However, it can be one of the most rewarding things you can do to make sure the application works correctly when interfacing with external resources.
Implementing the above tests gives you the ability to upgrade database versions, change DB drivers, and optimize SQL statements, all with the comfort of knowing that your automated integration-testing regression suite will catch any issues.
If possible, keep the external resources as portable and local to the test code as possible to reduce dependencies on shared systems. You can implement the same type of testing for communication with external APIs or other outside systems.
These tests will quickly detect issues indicating that communicating with the external system may be broken. Unit tests will not catch these types of issues and End-to-End tests (discussed next) will not always catch these problems either.
Second only to unit testing, implementing a solid set of integration tests is one of the best things you can do to ensure the long-term stability of your application.
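The “portable, local” advice above is illustrated well by SQLite’s in-memory database, which lets an integration test exercise real SQL through a real driver without depending on a shared server. The schema and data here are made-up examples:

```python
import sqlite3

# Integration test against a real but local, in-memory database:
# the SQL and driver are exercised for real, with no shared server.
def count_active_users(conn):
    """Application code under test: real SQL against a real DB."""
    cur = conn.execute("SELECT COUNT(*) FROM users WHERE active = 1")
    return cur.fetchone()[0]

def make_test_db():
    """Stand up a throwaway database with known test data."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, active INTEGER)")
    conn.executemany(
        "INSERT INTO users VALUES (?, ?)",
        [("ada", 1), ("bob", 0), ("eve", 1)],
    )
    return conn

conn = make_test_db()
assert count_active_users(conn) == 2
conn.close()
```

Because the database is created and destroyed inside the test, the suite stays fast and can run on any CI node without coordinating access to a shared environment.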
End-to-End (E2E) or acceptance testing is the stage where I tend to see teams spend the majority of their resources, and it is also the stage that takes the longest to run on the CI server. Unfortunately, the time invested here does not always yield a good return.
Because E2E tests by nature need a full environment and are generally executing complex scenarios, they tend to be very brittle. This means that they are difficult to maintain and often result in false positives when they fail. From an outside observer’s perspective, E2E tests seem to be the most important type of tests because they exercise a real system as a user would. Unfortunately, there are too many underlying scenarios that get missed by these tests.
They provide a false sense of security. For example, how would you simulate a database failure in an E2E test? How would you ensure that a background process updated the correct value in your database for your application to read via an E2E test? Many scenarios are simply too complex to test at that level and are much better served (and faster) at the unit or integration testing level.
In my experience, the best E2E tests cover a small set of simple, stable, business-critical user workflows.
Things like logging in, changing a password, and navigating the application are all excellent candidates for E2E testing. Additionally, actions that represent critical business functionality, such as executing a trade in a stock application, are good candidates as well. An example of a poor use of E2E tests is ensuring that all of your form validation rules execute properly.
This is because those tests will need to be updated every time the form changes or the validation rules are revised; this is, once again, better done via unit or integration tests. There is certainly value in testing at this level, but the bulk of your testing time should be invested in unit and integration testing.
If your application has survived the previous stages and batteries of tests then hopefully you can be confident that you have a releasable application. Your process may require your teams to run manual, performance, or other types of exploratory testing prior to production release, but the candidate artifact you have created should be stored so that it can continue to move towards production.
Versioning your release artifact and storing it gives you a library from which you can pull and deploy any version of your application when it is deemed ready. I often use the build number from my CI tool as the last (patch) number in the semantic version. This provides another level of traceability: you can go back and review the build log for that version if an issue surfaces later. It is a good idea to agree on a version-naming strategy that helps your team trace the history of each build.
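For example, the version-naming strategy described here, with the CI build number as the semver patch component, might look like this in a build script. The `BUILD_NUMBER` environment variable is a hypothetical CI-provided value, not a universal standard:

```python
import os

# Compose a traceable version string: major/minor are set by the
# team, and the patch component is taken from the CI build number
# so any artifact maps back to the build log that produced it.
# BUILD_NUMBER is a hypothetical CI-provided environment variable.
MAJOR, MINOR = 2, 4

def release_version(build_number=None):
    """Return e.g. '2.4.137' for CI build #137."""
    if build_number is None:
        build_number = os.environ.get("BUILD_NUMBER", "0")
    return f"{MAJOR}.{MINOR}.{int(build_number)}"
```

Tagging the stored artifact with this version is what lets anyone pull build 137 months later and find its exact build log.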
Lastly, I would recommend using a binary storage manager to store your artifacts. They are expressly designed to streamline the creation and retrieval of versioned artifacts via automation so that your other systems can interact with them to deploy your application.
The Continuous Delivery and DevOps landscapes are covered with tools, approaches, and ideas to help teams deliver software quickly, but often it is a good idea to review the fundamentals of application delivery. Focusing on tuning and tweaking the above stages in your CD pipeline to provide a solid safety net of tests and issue-free software will give you and your teams confidence to move faster in delivery while ensuring quality.
Some of the above ideas may seem simple but can revolve more around culture change than anything else. Try to start introducing concepts slowly, gain buy-in, show wins and you’ll be on your way to delivering solid features to your customers faster than you thought possible.