Don’t Ignore the [Connection] Pools, but Mind the Queues

May 19, 2023 | By Brice Dardel

With the continued growth of and transition to microservices, it’s important to ensure that the time and money spent re-engineering systems into modern, cloud-based solutions lead to tangible benefits for the organization. In this multi-part series, we’ll look at different components and pitfalls to consider when modernizing to microservices.

In this blog, we’ll look at connection pools and queues. This is a bonus piece from our series on Modernizing to Microservices. Head over to our blog to view the complete series, or find the links at the end of this article.

The Consequences of Misconfigured Connection Pools

Connection pools are everywhere, and misconfiguring them has serious consequences. A particularly dangerous problem arises when you don’t realize that your connection pool is too small, because nothing fails outright: actions are put in a queue that only empties as the pool frees up, so latency grows silently while everything appears healthy, as the sketch just below illustrates.
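To see why this is so easy to miss, here is a minimal Java sketch (the class name, task count, and timings are ours, purely for illustration). A fixed pool built with Executors.newFixedThreadPool is backed by an unbounded queue, so submissions never fail – the work just waits:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class HiddenQueueDemo {
    public static void main(String[] args) {
        // newFixedThreadPool backs the pool with an *unbounded*
        // LinkedBlockingQueue: submit() never rejects, it just queues.
        ExecutorService pool = Executors.newFixedThreadPool(2);

        for (int i = 0; i < 100; i++) {
            final int task = i;
            pool.submit(() -> {
                // Simulate slow work; with only 2 threads, tasks 3..100
                // sit in the queue and latency grows silently.
                try {
                    Thread.sleep(1_000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                System.out.println("finished task " + task);
            });
        }
        pool.shutdown();
    }
}
```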

These pools need to be reviewed and tested independently, with a close eye on the default configuration properties. Examples of pools to be mindful of:

  1. The inbound connection pool – how many requests can your service handle concurrently before new ones start waiting? (See the first sketch after this list.)
  2. The default thread pool – how many threads can your application actually use? In Java, for example, parallel streams and CompletableFutures both fall back to shared default thread pools that you should customize to your usage pattern. (See the second sketch below.)
  3. Our favorite: the connection pool for outbound calls to downstream services. Most developers assume that a call made through an HTTP client executes immediately and synchronously. But that’s not necessarily true: connection requests to downstream services might end up in a queue themselves. For example, the default CloseableHttpClient in a Spring Boot application caps connections at five per route and 20 in total – for the whole service! Delaying downstream processing is, in general, a bad choice. Our recommendation is to raise those pool limits high enough that downstream processing is never delayed, without worrying about overwhelming the downstream services. It is their responsibility to scale up, and throttling outbound connections will only hide the problem. If holding many open connections is an issue on the caller’s side, then the caller should scale up – or revisit the initial architecture choices and consider reactive programming or a message queue. (See the third sketch below.)
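For the inbound pool (item 1), the knobs live in the web server itself. In a Spring Boot service on embedded Tomcat, for instance, the worker-thread cap and the accept queue are exposed as configuration properties; the values below are illustrative, not recommendations:

```
# application.properties (Spring Boot 2.3+; values are illustrative)
# Worker threads serving requests concurrently:
server.tomcat.threads.max=200
# Connections allowed to queue when all worker threads are busy:
server.tomcat.accept-count=100
```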
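For the default thread pool (item 2), a common fix in Java is to pass an explicit executor instead of letting CompletableFuture.supplyAsync fall back to ForkJoinPool.commonPool(), which is shared with parallel streams and sized to the CPU count rather than to your workload. A minimal sketch – the pool size of 50 and the method names are illustrative assumptions:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CustomPools {
    // By default, CompletableFuture.supplyAsync and parallel streams share
    // ForkJoinPool.commonPool(). A dedicated executor keeps slow, blocking
    // I/O calls from starving everything else that uses the common pool.
    private static final ExecutorService IO_POOL = Executors.newFixedThreadPool(50);

    public static CompletableFuture<String> fetchAsync() {
        // Explicit executor: sized for our usage pattern, not the CPU count.
        return CompletableFuture.supplyAsync(CustomPools::callDownstreamService, IO_POOL);
    }

    private static String callDownstreamService() {
        // Placeholder for a blocking HTTP call to a downstream service.
        return "response";
    }
}
```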
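And for the outbound pool (item 3), the defaults can be raised when building the client. A sketch assuming Apache HttpClient 4.x behind Spring’s RestTemplate; the limits of 200 total and 50 per route are illustrative – size them for your own downstream services:

```java
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

public class OutboundHttpConfig {

    public static RestTemplate pooledRestTemplate() {
        // Raise the defaults so outbound calls are not silently queued
        // behind a handful of pooled connections.
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(200);           // illustrative: total connections, all routes
        cm.setDefaultMaxPerRoute(50);  // illustrative: connections per downstream host

        CloseableHttpClient client = HttpClients.custom()
                .setConnectionManager(cm)
                .build();

        return new RestTemplate(new HttpComponentsClientHttpRequestFactory(client));
    }
}
```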

Need to catch up? Previously, lessons included:

Part 1: The Importance of Starting with the Team

Part 2: Defining Ownership

Part 3: Process Management and Production Capacity

Part 4: Reserving Capacity for Innovation

Part 5: Microservices Communication Patterns

Part 6: Using Shadow Release Strategy

Part 7: Performance Testing Microservices

Part 8: Memory Configuration Between Java and Kubernetes

Part 9: Prioritizing Testing within Microservices

Part 10: Distributed Systems

Bonus: Kubernetes Health Endpoints to Achieve Self-Healing

Ready to modernize your organization’s microservices? Oteemo is a leader in cloud-native application development. Learn more: https://oteemo.com/cloud-native-application-development/
