by Mike Santangelo | Aug 31, 2016
But there’s a problem with Jenkins. Not a process-breaking, “oh-no-all-is-lost” type of problem, but a problem nonetheless: build executors. For any organization that uses Jenkins more than the bare minimum, build executors can become a bottleneck very quickly. There’s nothing worse than kicking off a job, only to realize you’re queued behind jobs that take hours.
Of course, you can stand up a massive bare-metal server with enough juice for hundreds of executors, or spin up an x1.32xlarge instance in AWS and do the same. But, as we all know, there are practical problems with either solution. Big-iron servers need a place to live, lots of power, and someone who can physically access them in an emergency; that’s not even mentioning the cost of a really big bare-metal server. Meanwhile, running AWS’s x1.32xlarge in perpetuity can put a massive dent in your organization’s IT budget.
You could also, if the mood struck you, stand up smaller servers just to use as executor nodes, tying them into the main Jenkins server to expand capacity without breaking the bank. But think of the wasted resources: servers, whether bare metal or AWS instances, sitting unused for days on end. These days, IT efficiency is almost as important as productivity for most organizations.
So you ask: well then, Mike, what’s the solution? Jenkins dynamic executors. Enabled by the Amazon EC2 plugin for Jenkins, dynamic executors solve the problem of executor bottlenecks quickly and easily, whether your Jenkins server is bare metal or an AWS instance.
I won’t take time here to walk through setting up the plugin, its associated “clouds”, or the AMIs it uses; the plugin’s documentation does an admirable job of making all of that clear. What I will talk about is its ability to expand and contract your pool of Jenkins executor nodes with demand, and how great that can be for your organization.
How nice would it be if you could reserve the master’s executors and send everyone else to peripheral nodes? With this plugin and labels, that becomes a viable solution. You can set up your AMIs to launch only for jobs carrying certain labels. For example, you could configure an AMI to spin up a brand-new executor node whenever a job carries the labels “dev && <dept-x>”. Then, when the department-x dev team needs to run jobs in the dev environment, they label them accordingly and get their own executor nodes to run on. Best of all, your organization sets the upper limit on the number of instances that AMI can launch: if the team overloads one new executor node, the plugin will spin up a second one, and so on up to the maximum you specify.
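The matching behind this lives inside Jenkins and the plugin, but the core idea, matching a job’s required labels against the labels configured on each AMI template, can be sketched in a few lines of Python. This is purely illustrative; the template names and function names below are made up for the example, not anything from the plugin’s actual API:

```python
# Illustrative sketch of label-based provisioning (not the plugin's real code).
# An AMI template is configured with a set of labels; a job declares the labels
# it requires. Only a template whose labels cover the job's labels is launched.

def satisfies(template_labels, required_labels):
    """A job requiring 'dev && dept-x' needs a template carrying both labels."""
    return required_labels <= template_labels

# Hypothetical templates an admin might configure in the EC2 "cloud" settings.
AMI_TEMPLATES = {
    "dept-x-dev-ami": {"dev", "dept-x"},
    "generic-overflow-ami": {"overflow"},
}

def pick_template(required_labels):
    """Return the first AMI template whose labels cover the job's labels."""
    for name, labels in AMI_TEMPLATES.items():
        if satisfies(labels, required_labels):
            return name
    return None  # no template matches; the job waits for a static executor

print(pick_template({"dev", "dept-x"}))  # dept-x-dev-ami
print(pick_template({"prod"}))           # None
```

The department-x job lands on its own AMI template, while a job with labels no template covers simply stays in the queue for the static executors.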
Here’s another example. Say you don’t care whether your customers use the master’s build executors or not; you just want to be sure you can expand capacity so no one waits too long for a job to execute. The administrator sets up an AMI that launches whenever the master’s executor nodes are overwhelmed. Just as in the first example, it will stand up as many peripheral executor nodes as you allow.
Now for the best part: each AMI gets an idle timeout value. Since this is AWS, an instance that’s up for a minute gets charged for a full hour, so we might as well keep these nodes up as long as we can in case they’re still needed. (This, by the way, plays into the one downside of this process, which I’ll cover in a minute.) So we can set the idle timeout to 45 minutes or so; if a node goes unused for that long, the plugin tears it down automatically. That way you don’t have servers sitting out there putting money in Amazon’s pocket for capacity your organization isn’t using.
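The hourly-billing arithmetic is worth making concrete. Under EC2’s per-hour billing (as it worked at the time of writing), any partial hour is billed in full, so a generous idle timeout often costs nothing extra within the same billed hour, while it’s the always-on node that quietly racks up charges. A quick sketch, using a hypothetical hourly price:

```python
import math

HOURLY_RATE_CENTS = 10  # hypothetical on-demand price for an executor instance

def billed_hours(uptime_minutes):
    """EC2 per-hour billing (2016-era): any partial hour is billed in full."""
    return math.ceil(uptime_minutes / 60)

def cost_cents(uptime_minutes):
    return billed_hours(uptime_minutes) * HOURLY_RATE_CENTS

# A node that ran jobs for 10 minutes, then sat idle:
print(cost_cents(10 + 5))    # 10  -> torn down after 5 idle minutes: one billed hour
print(cost_cents(10 + 45))   # 10  -> 45-minute idle timeout: still one billed hour
print(cost_cents(24 * 60))   # 240 -> left up all day "just in case": 24 billed hours
```

Tearing a node down five minutes after its last job saves nothing over a 45-minute timeout here, since both fall inside the same billed hour, but the longer timeout keeps the node warm for the next job.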
Nothing is perfect, so there must be a downside. Spinning up a new AMI, installing the Jenkins agent, and getting a new executor node ready for use does take time. There will be a delay as each new executor node is stood up, and that could be an issue if you have impatient customers. Of course, you can always point out that a 5–10 minute wait beats a two-hour wait for the main build executors; your mileage may vary.
Once a dynamic node has been spun up, it stays up until it reaches the idle timeout. If an idle node is already available, Jenkins will use it before spinning up a new one, which means the spin-up lead time disappears for that job.
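Putting the pieces together, the provisioning decision described above, prefer an idle matching node, otherwise launch a new instance, but never exceed the configured cap, can be sketched like this. Again, this is an illustrative model in Python, not the plugin’s internals:

```python
# Illustrative provisioning decision: prefer an idle matching node, otherwise
# launch a new instance, but never exceed the configured instance cap.

def assign_executor(job_labels, idle_nodes, running_count, instance_cap):
    """idle_nodes maps node name -> label set for already-provisioned nodes."""
    for name, labels in idle_nodes.items():
        if job_labels <= labels:
            return ("reuse", name)   # no spin-up delay for this job
    if running_count < instance_cap:
        return ("launch", None)     # new node: expect a few minutes' wait
    return ("queue", None)          # at the cap: the job waits in the queue

print(assign_executor({"dev"}, {"node-1": {"dev", "dept-x"}}, 1, 2))
# ('reuse', 'node-1')
print(assign_executor({"dev"}, {}, 1, 2))  # ('launch', None)
print(assign_executor({"dev"}, {}, 2, 2))  # ('queue', None)
```

The three outcomes line up with the behavior described in this post: a warm idle node is free and instant, a fresh launch trades a short wait for capacity, and the cap keeps your AWS bill bounded.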
So there you have it: dynamic Jenkins executor nodes. I’ve found this amazingly useful in our current organization, and I think most people can find some sort of use for it as well.