Think Before You NodePort in Kubernetes

December 12, 2017 | By Nate Abele

When first starting with Kubernetes, new users quickly find out about exposing their applications via the built-in Service resource. Two options are provided for Services intended for external use: NodePort and LoadBalancer. The NodePort Service type can often seem the more appealing of the two, for a variety of reasons, including:

  • Many demos and getting-started guides use it.
  • Cloud load balancers cost money, and every LoadBalancer Kubernetes Service creates a separate one by default in supported cloud environments.
  • There are no built-in cloud load balancers for Kubernetes in bare-metal environments (unless Packet has done something clever that I haven’t heard about yet), and physical load balancers aren’t free either.

Despite these advantages, there are some strong reasons why NodePort may not be your best choice. Before you just set type: NodePort in your Service manifest and start handing out URLs with extra colons in them, consider the following.
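
For reference, a NodePort Service manifest looks roughly like the following sketch; the name, labels, and port numbers are invented purely for illustration:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app                 # hypothetical application name
    spec:
      type: NodePort
      selector:
        app: my-app
      ports:
      - port: 80                   # port the Service exposes inside the cluster
        targetPort: 8080           # port the Pods actually listen on
        # no nodePort is given, so Kubernetes picks one from the NodePort range
        # (30000-32767 by default) and opens it on every node in the cluster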

NodePort Punches a Gaping Hole in Your Cluster Security

NodePort, by design, bypasses almost all network security in Kubernetes. NetworkPolicy resources can currently only control NodePorts by allowing or disallowing all traffic on them. In any kind of cluster intended for real use, that leaves two outcomes: either applications can't be fronted by NodePort Services at all, or they accept traffic from anywhere. This problem is severe enough that Sysdig actually provides an example of a rule to detect and alert on outgoing traffic from NodePort Services. To fix this, you need to either put a network filter in front of all the nodes, or move everything that accesses the application inside your cluster so those NodePort Services can become NetworkPolicy-enforced ClusterIP Services. The latter option might not even be possible, particularly if you are providing an application to the public.
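
As a rough sketch of that second option, here is what a NetworkPolicy-enforced ClusterIP Service might look like; every name and label below is hypothetical, and the policy only does anything if your network plugin actually enforces NetworkPolicy:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app                 # hypothetical
    spec:
      type: ClusterIP              # reachable only from inside the cluster
      selector:
        app: my-app
      ports:
      - port: 80
        targetPort: 8080
    ---
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: my-app-allow-frontend  # hypothetical
    spec:
      podSelector:
        matchLabels:
          app: my-app              # the Pods being protected
      ingress:
      - from:
        - podSelector:
            matchLabels:
              role: frontend       # only Pods carrying this label may connect
        ports:
        - protocol: TCP
          port: 8080               # the Pods' own port, not the Service port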

In addition, if a NodePort Service is advertised to the public, it may serve as an invitation to black hats to scan and probe for the ones that aren't — and since every NodePort Service is exposed on all of your nodes, your security and monitoring can't afford to miss or dismiss anything.

NodePort Is a Pain for You

When Kubernetes creates a NodePort Service, it allocates a port from a range specified in the flags that define your Kubernetes cluster (by default, ports 30000-32767). Because this range sits far above the standard ports used by services like HTTP, HTTPS, and SSH, those well-known ports cannot be used. By design, a NodePort Service cannot expose standard low-numbered ports like 80 and 443, or even 8080 and 8443.

If you use dynamic allocation, you also don't know in advance what the port number will be, which means you have to examine the Service you just created to see what port it's exposing, and on most hosts you then need to open that port in the firewall. Alternatively, you could pre-emptively open the whole NodePort range in the firewall, but every time you make a request like this, the Network Administrator's Guild adds a tally mark next to your name on a big Naughty List at Network Admin Headquarters. You could conceivably also just turn the firewall off, but if you make that request, the Guild will airdrop hungry wolves to your location. None of these are very good options (and the last one involves somebody cleaning up after the wolves).

A port in the NodePort range can be specified manually, but that means maintaining a list of non-standard ports, cross-referenced with the applications they map to. If the port conflicts with one already in use on a host (for example, because a process has bound it dynamically to communicate with something), the entire NodePort Service will fail to be created. If you have a lot of cluster nodes, this can be a pain to debug, to put it mildly. Plus, if you want the exposed application to be highly available, everything contacting the application has to know all of your node addresses, or at least more than one. Wasn't the point of using Kubernetes to make things easier and more automated?
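
If you do go the manual route, pinning the port looks something like this; the number is just an example and has to fall inside the configured NodePort range:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app                 # hypothetical
    spec:
      type: NodePort
      selector:
        app: my-app
      ports:
      - port: 80
        targetPort: 8080
        nodePort: 30080            # must be inside the NodePort range and free on
                                   # every node, or the Service fails to be created

And now the string "30080" has to live in that cross-reference list, in your firewall rules, and in everyone's bookmarks.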

NodePort Is a Pain for Others

Is your application intended to be accessed from outside the network trust zone where your cluster resides? High ports can be problematic, especially when the people trying to access the application aren't the people who control the network.

I remember an install using a Kubernetes deployment product that was intended to be used in the customer's VPC on AWS and exposed its dashboard and authentication applications on static NodePorts. Unfortunately, the customer's network policy blocked outbound HTTP/S requests on ports other than 80 and 443 — not the smoothest experience, to say the least. It is probably not a coincidence that said product quickly evolved to use Ingress exposing HTTPS on port 443, with path-based routing for those apps (see the info on Ingress in the next section).

I was also once conducting a training session on Kubernetes basics and used NodePort services with the “guestbook” example Kubernetes app in a lab exercise.  When the attendees got to that exercise, none of them could successfully connect to their guestbook from the classroom LAN, because the training facility network blocked outbound HTTP connections to non-standard ports.

There Are Better Ways to Do It

If you haven’t yet looked at the Ingress resource in Kubernetes, you really should. Ingress resources use an Ingress controller (the nginx one is common but not by any means the only choice) and an external load balancer or public IP to enable path-based routing of external requests to internal Services. With a single point of entry to expose and secure, you have a lot less to worry about:

  • one load balancer
  • one network filter
  • one HA-enabling IP address for clients to use
  • a mechanism you can use for routing external requests in any environment (not just the clouds supported by LoadBalancer Services)

As a bonus, you get simpler TLS management! You also aren’t exposing internal details about your cluster to anybody curious enough to fire up nmap.
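
To give a feel for it, a path-based Ingress might look roughly like the following; the hostname, TLS Secret, and backend Service names are invented, and this uses the extensions/v1beta1 API current at the time of writing together with the widely used (but controller-specific) kubernetes.io/ingress.class annotation:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: my-apps                           # hypothetical
      annotations:
        kubernetes.io/ingress.class: nginx    # which controller should handle this Ingress
    spec:
      tls:
      - hosts:
        - example.com
        secretName: example-com-tls           # TLS terminated in one place
      rules:
      - host: example.com
        http:
          paths:
          - path: /guestbook
            backend:
              serviceName: guestbook          # plain ClusterIP Services behind the Ingress
              servicePort: 80
          - path: /dashboard
            backend:
              serviceName: dashboard
              servicePort: 80

The backends stay as ordinary ClusterIP Services, so NetworkPolicy applies to them normally, and the only thing exposed to the outside world is the Ingress controller itself.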

Even if you don’t set up Ingress, at least consider putting a real load balancer in front of your NodePort Services before opening them up to the world — or if you have a BGP-capable routing device in your network, Google very recently released an alpha-stage bare-metal load balancer that, once installed in your cluster, will load-balance using BGP. You will have the same single-point-of-inspection and HA benefits as using an Ingress; you just won’t be able to manage request routing through the Kubernetes API. If your application can’t be L7-routed, you may not be able to use Ingress anyway — in which case, half a loaf is better than none.
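
If you go the bare-metal load balancer route (or turn out to be in a supported cloud after all), the Service side of it is the easy part; a hypothetical sketch:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app                 # hypothetical
    spec:
      type: LoadBalancer           # the external IP is allocated by the cloud provider or
                                   # the in-cluster bare-metal load balancer, instead of a
                                   # high-numbered port being opened on every node
      selector:
        app: my-app
      ports:
      - port: 443
        targetPort: 8443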

In Conclusion

NodePort Services are easy to create but hard to secure, hard to manage, and not especially friendly to others. Don’t do it! Everyone will thank you.

5 Comments

  1. k8snoob

    What are you talking about? You have to expose ports on a node, otherwise you can't route anything to the cluster!

    • Chris Cooney

      I suspect the title was hyperbolic – the point here is to not deploy a suite of services as NodePort types, instead favouring an Ingress and ClusterIP services. This is sound advice.

    • Michael Schmid

      I agree with k8snoob,
      I just did a tutorial from kubernetes.io where they use Ingress together with 2 services that they expose via NodePort. So what’s the problem? o_O

  2. Vineet Shivhare

    What about UDP? How can we expose it, since AWS doesn't have an LB for UDP?

  3. Victor

    Can you load balance a service like MySQL, DNS (BIND9), or others different from HTTP/S?
