by Cloud Native Application Development Team | Dec 12, 2017
Despite these advantages, there are some strong reasons why NodePort may not be your best choice. Before you just set type: NodePort in your Service manifest and start handing out URLs with extra colons in them, consider the following.
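For reference, here is roughly what declaring one looks like. This is a hedged sketch; the `my-app` name, labels, and port numbers are all hypothetical:

```yaml
# Hypothetical example: a minimal NodePort Service.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort       # exposes the Service on a port of every node
  selector:
    app: my-app
  ports:
    - port: 8080       # cluster-internal port
      targetPort: 8080 # container port on the backing Pods
      # no nodePort given, so Kubernetes picks one from the
      # NodePort range at creation time
```

Clients then reach the app at `http://<any-node-ip>:<allocated-port>`, hence the URLs with extra colons.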
NodePort, by design, bypasses almost all network security in Kubernetes. NetworkPolicy resources can currently only control NodePorts by allowing or disallowing all traffic on them. In any kind of cluster intended to actually be used, applications either can’t be fronted by NodePort Services, or they receive traffic from anywhere. This problem is severe enough that Sysdig actually provides an example of a rule to detect and alert on outgoing traffic from NodePort Services. To fix this you need to either put a network filter in front of all the nodes, or move everything that accesses the application inside your cluster so those NodePort Services can become NetworkPolicy-enforced ClusterIP Services. The latter option might not even be possible, particularly if you are providing an application to the public.
In addition, if even one NodePort-ranged Service is advertised to the public, it serves as an invitation for black-hats to scan and probe the whole range for Services that aren’t advertised. And since all of your Services will be exposed on all nodes, your security and monitoring can’t afford to miss or dismiss anything.
When Kubernetes creates a NodePort Service, it allocates a port from a range specified by a flag passed to the Kubernetes API server. (By default, this range is ports 30000-32767.) Because this range sits far above the well-known ports, the standard ports for services such as HTTP, HTTPS, and SSH cannot be used. By design, a NodePort Service cannot expose standard low-numbered ports like 80 and 443, or even 8080 and 8443.
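The range comes from the API server's configuration. Assuming you run the API server directly, the relevant flag (shown with its default value) is:

```
# kube-apiserver flag defining the NodePort allocation range
# (default shown); changing it means restarting the API server
kube-apiserver --service-node-port-range=30000-32767 ...
```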
If using dynamic allocation, you also don’t know in advance what the port number will be, which means the Service you just created will have to be examined to see what port it’s exposing, and on most hosts, you need to then open that port in the firewall. Alternatively, you could pre-emptively open the whole NodePort range in the firewall, but every time you make a request like this, the Network Administrator’s Guild adds a tally mark next to your name on a big Naughty List at Network Admin Headquarters. You could conceivably also just turn the firewall off, but if you make this request, the Guild will airdrop hungry wolves to your location. None of these are very good options (and the last one involves somebody cleaning up after the wolves).
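Concretely, discovering and then opening a dynamically allocated port might look like this. The Service name `my-app`, the port number, and the use of firewalld are all assumptions about your environment:

```
# Ask Kubernetes which port it picked (hypothetical Service "my-app")
kubectl get service my-app -o jsonpath='{.spec.ports[0].nodePort}'

# Then open that port on every node; with firewalld, for example:
firewall-cmd --permanent --add-port=31742/tcp   # port from the step above
firewall-cmd --reload
```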
A port in the NodePort range can be specified manually, but this would mean the creation of a list of non-standard ports, cross-referenced with the applications they map to. If there is a conflict with an existing port on a host (for example if a process has bound that port dynamically to communicate with something), the entire NodePort Service will fail to be created. If you have a lot of cluster nodes this can be a pain to debug, to put it mildly. Plus, if you want the exposed application to be highly available, everything contacting the application has to know all of your node addresses, or at least more than one. Wasn’t the point of using Kubernetes to make things easier and more automated?
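Pinning the port looks like the fragment below (numbers hypothetical); note that `nodePort` must fall inside the configured range, and the whole Service fails to be created if the port is unavailable:

```yaml
# Fragment of a Service spec pinning a specific NodePort.
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30080  # must be inside the NodePort range; if it
                       # conflicts with an existing binding on a host,
                       # Service creation fails
```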
Is your application intended to be accessed from outside the network trust zone where your cluster resides? High ports can be problematic, especially if the people trying to access it aren’t the people who control the network.
I remember installing a Kubernetes deployment product, intended to run in the customer’s VPC on AWS, that exposed its dashboard and authentication applications on static NodePorts. Unfortunately, the customer’s network policy blocked outbound HTTP/S requests on ports other than 80 and 443 — not the smoothest experience, to say the least. It is probably not a coincidence that said product quickly evolved to use Ingress exposing HTTPS on port 443, with path-based routing for those apps (see the info on Ingress in the next section).
I was also once conducting a training session on Kubernetes basics and used NodePort services with the “guestbook” example Kubernetes app in a lab exercise. When the attendees got to that exercise, none of them could successfully connect to their guestbook from the classroom LAN, because the training facility network blocked outbound HTTP connections to non-standard ports.
If you haven’t yet looked at the Ingress resource in Kubernetes, you really should. Ingress resources use an Ingress controller (the nginx one is common, but by no means the only choice) and an external load balancer or public IP to enable path-based routing of external requests to internal Services. With a single point of entry to expose and secure, you have a lot less to worry about.
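A sketch of what that looks like, using the current `networking.k8s.io/v1` Ingress API (which postdates this article); the host, paths, Service names, and secret name are all hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps
spec:
  tls:
    - hosts: [apps.example.com]
      secretName: apps-tls        # one cert, managed in one place
  rules:
    - host: apps.example.com
      http:
        paths:
          - path: /guestbook
            pathType: Prefix
            backend:
              service:
                name: guestbook   # plain ClusterIP Service
                port:
                  number: 80
          - path: /dashboard
            pathType: Prefix
            backend:
              service:
                name: dashboard
                port:
                  number: 80
```

Both backends stay as ClusterIP Services, reachable only through the single Ingress entry point on port 443.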
As a bonus, you get simpler TLS management! You also aren’t exposing internal details about your cluster to anybody curious enough to fire up nmap.
Even if you don’t set up Ingress, at least consider putting a real load balancer in front of your NodePort Services before opening them up to the world — or if you have a BGP-capable routing device in your network, Google very recently released an alpha-stage bare-metal load balancer that, once installed in your cluster, will load-balance using BGP. You will have the same single-point-of-inspection and HA benefits as using an Ingress, you just won’t be able to manage request routing through the Kubernetes API. If your application can’t be L7-routed, you may not be able to use Ingress anyway — in which case, half a loaf is better than none.
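Assuming a bare-metal, BGP-based load balancer such as MetalLB (which appears to be the project described above) is installed in the cluster, exposing an app is just a standard `LoadBalancer` Service; names and ports here are hypothetical:

```yaml
# Hypothetical example: with a bare-metal load balancer installed,
# a LoadBalancer Service gets a real external IP on a standard
# port, and no NodePorts are handed out to clients.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 443        # standard port, finally
      targetPort: 8443
```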
NodePort Services are easy to create but hard to secure, hard to manage, and not especially friendly to others. Don’t do it! Everyone will thank you.