by Chris McGrath | Jan 16, 2020
This blog post is the 3rd in a 4-part series with the goal of thoroughly explaining how Kubernetes Ingress works:
In the previous posts, I purposefully left out some advanced implementation options because I wanted to keep an easy-to-follow flow of related topics and help you build a strong foundational mental model. In this post, I'll build on that foundation and open your mind to the range of options for:
Option 1: NodePort/LoadBalancer Service with externalTrafficPolicy: “Cluster”
You can add this setting to the YAML configuration of Kubernetes Services of type NodePort and LoadBalancer. If you don't specify anything, externalTrafficPolicy defaults to "Cluster" (leaving the field unset has the same effect). Setting it to "Cluster" load-balances traffic evenly across all nodes.
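As a minimal sketch, a NodePort Service with the default policy spelled out explicitly might look like this (the name, namespace, selector, and ports are illustrative placeholders):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  externalTrafficPolicy: Cluster   # the default; traffic is spread across every node
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443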
Option 2: NodePort/LoadBalancer Service with externalTrafficPolicy: “Local”
You can add this setting to the YAML configuration of Kubernetes Services of type NodePort and LoadBalancer. "Local" refers to the fact that a node will only forward incoming NodePort traffic to a pod that exists locally on that node. Setting it to "Local" load-balances traffic across only the subset of nodes that have Ingress Controller pods running on them.
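A similar sketch for a LoadBalancer Service, again with placeholder names, where only the policy differs:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # only nodes running an Ingress Controller pod receive traffic
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: https
      port: 443
      targetPort: 443
A side benefit of "Local" is that the client's source IP is preserved, since traffic is never forwarded (and SNAT'd) to a second node.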
Option 3: Pod directly exposed on the LAN using hostNetwork: true
hostNetwork: true gives a pod the same IP address as the host node and directly exposes the pod on the LAN. If the pod is listening for traffic on port 443, the host node will be listening for traffic on port 443, and traffic that comes in on port 443 of the host node will be mapped to the pod. This bypasses Kubernetes Services and Container Network Interfaces altogether. Note: If you enable this setting, you'll also need to set the Ingress Controller pod's dnsPolicy spec field to ClusterFirstWithHostNet so that it can still resolve in-cluster DNS names.
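As a rough sketch, the relevant part of an Ingress Controller Deployment's pod template would look something like this (the controller name and image are placeholders, not a specific controller's official manifest):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  selector:
    matchLabels:
      app: nginx-ingress-controller
  template:
    metadata:
      labels:
        app: nginx-ingress-controller
    spec:
      hostNetwork: true                     # pod shares the node's IP and ports
      dnsPolicy: ClusterFirstWithHostNet    # still resolve in-cluster DNS names
      containers:
        - name: controller
          image: nginx-ingress-controller-image:tag   # placeholder image reference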
Option 4: Use an Inlets Server to expose a Cluster IP Ingress Controller Service on the Internet or a LAN:
Note: Inlets, by CNCF Ambassador Alex Ellis, is not a Kubernetes Native solution; however, it happens to work great with Kubernetes.
Inlets is a universal solution to ingress that allows you to forward traffic arriving at an Inlets Server (hosted on the public Internet or a LAN) to a service hosted on an otherwise unreachable private internal network.
Option 1: Buy a Public TLS Certificate and manually create Kubernetes TLS Secrets:
This option exists, but overall it's not recommended, mainly for the reason mentioned in the previous article: the Kubernetes API standardized on the PEM format for HTTPS/TLS certs, but historically there are over 15 formats that HTTPS/TLS certs can exist in. If you end up with a binary-encoded TLS cert, you'll probably have to figure out a bespoke method to convert it to a PEM text-encoded TLS cert. Even if you have the mywebsite.com.key and mywebsite.com.crt files in the correct PEM text formatting, you could still mess up the manual creation of a Kubernetes Secret from files.
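If you do go this route, the Secret itself is just a kubernetes.io/tls Secret containing base64-encoded PEM data; a sketch, with placeholder name, namespace, and data values:
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: mywebsite-tls
  namespace: default
data:
  tls.crt: <base64 of mywebsite.com.crt, PEM text>
  tls.key: <base64 of mywebsite.com.key, PEM text>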
Option 2: Use Ansible to automate management of a Private CA, Certs, and Kubernetes Secrets Generation:
Ansible automation can make the management of almost anything trivial. If your Kubernetes cluster is strictly for development or internal company use only, you can choose to shift complexity away from Kubernetes tooling to Ansible automation. If your team already has Ansible knowledge, it may make more sense to leverage Ansible than to learn a Kubernetes-specific piece of tooling, which usually ends up being relatively more complex.
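A rough sketch of that approach, assuming the community.crypto and kubernetes.core Ansible collections are installed and a private CA key/cert already exist on the control machine (all paths, names, and namespaces below are illustrative placeholders):
- hosts: localhost
  connection: local
  tasks:
    - name: Generate a private key for the internal site
      community.crypto.openssl_privatekey:
        path: /tmp/grafana.mydomain.com.key

    - name: Generate a certificate signing request
      community.crypto.openssl_csr:
        path: /tmp/grafana.mydomain.com.csr
        privatekey_path: /tmp/grafana.mydomain.com.key
        common_name: grafana.mydomain.com

    - name: Sign the CSR with the private CA
      community.crypto.x509_certificate:
        path: /tmp/grafana.mydomain.com.crt
        csr_path: /tmp/grafana.mydomain.com.csr
        provider: ownca
        ownca_path: /tmp/internal-ca.crt
        ownca_privatekey_path: /tmp/internal-ca.key

    - name: Create the Kubernetes TLS Secret from the PEM files
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Secret
          type: kubernetes.io/tls
          metadata:
            name: grafana-tls
            namespace: monitoring
          data:
            tls.crt: "{{ lookup('file', '/tmp/grafana.mydomain.com.crt') | b64encode }}"
            tls.key: "{{ lookup('file', '/tmp/grafana.mydomain.com.key') | b64encode }}"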
Option 3: Use Jetstack’s Cert-Manager to automate Kubernetes TLS Secrets:
Cert-Manager is a Kubernetes Operator (a software bot) by Jetstack that can automatically talk to a TLS Certificate Authority's automation to provision Kubernetes TLS Secrets and rotate them before they expire. (Generally, this is the best option to use.)
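As a hedged sketch of how that looks in recent cert-manager releases (the issuer name, email, and ingress class are placeholders; older releases used a v1alpha2 API group):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@mydomain.com
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
An Ingress object can then request a certificate by carrying the cert-manager.io/cluster-issuer: letsencrypt-prod annotation along with a tls: section; cert-manager creates and renews the referenced Kubernetes TLS Secret for you.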
For Options 1, 2, and 3, HTTPS works as follows:
The Ingress Controller is able to configure itself using an Ingress Object and Kubernetes TLS Secret existing in another namespace. Traffic for https://grafana.mydomain.com is encrypted over the Internet and the LAN. HTTPS terminates at the Ingress Controller and is then cleartext within the Kubernetes Cluster.
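Concretely, the Ingress object is what ties the hostname to the TLS Secret; a minimal sketch using the networking.k8s.io/v1 API (the backend Service name, port, and namespace are assumptions):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: monitoring
spec:
  tls:
    - hosts:
        - grafana.mydomain.com
      secretName: grafana-tls   # the Kubernetes TLS Secret from Option 1, 2, or 3
  rules:
    - host: grafana.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grafana
                port:
                  number: 3000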
Option 4: Terminate HTTPS at the Load Balancer:
AWS’s Classic Elastic Load Balancers are used by default on AWS for Kubernetes Services of type LoadBalancer. These allow you to specify an annotation on the Ingress Controller’s LB Service that references ACM (AWS Certificate Manager). The declarative configuration will then provision a Public or Private IP LB that can terminate HTTPS for your site. Inlets Caddy Server is another variation of this, and there’s a WAF variation that I’ll cover in the next article as well.
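A sketch of what those annotations look like on the controller's LB Service (the ACM ARN is a placeholder, and the exact set of supported annotations can vary by Kubernetes version):
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:123456789012:certificate/placeholder-id"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"   # HTTPS terminates at the ELB; plain HTTP continues to the nodes
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: https
      port: 443
      targetPort: 80   # decrypted traffic is forwarded to the controller's HTTP port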
Cons: (These apply to AWS cELB, not Inlets)
Option 5: Terminate HTTPS only at the destination pod:
This is the only option that offers end-to-end HTTPS encryption with no middleman. In many cases this is considered overkill; however, hosting HashiCorp Vault in a cluster isn't all that uncommon, and in that scenario it's definitely not overkill: there's a kubectl plugin called ksniff that makes it easy to take a tcpdump of a pod and load it into Wireshark running on an admin laptop. Terminating HTTPS at the Vault pod can mitigate traffic-sniffing attacks.
Assigning a dedicated Layer 4 LB to the Kubernetes HashiCorp Vault Service is one way of doing this, but LBs cost money, so another option is to reuse an Ingress Controller's Layer 4 LB and configure the Ingress Controller to act as a Layer 4 proxy for certain domains. (This assumes that the Ingress Controller is the only spot terminating HTTPS.) The Nginx Ingress Controller can do this using --enable-ssl-passthrough; doing so causes a small latency hit because it works by introducing a virtual L4 LB in front of its L7 LB logic.
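As a sketch, with the controller started with the --enable-ssl-passthrough flag, an Ingress for Vault could look like this (the host, Service name, and port are assumptions):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vault
  namespace: vault
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"   # proxy the TLS stream at L4 instead of terminating it here
spec:
  rules:
    - host: vault.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: vault
                port:
                  number: 8200
Note that there's no tls: section: the Vault pod presents its own certificate and terminates HTTPS itself.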
Option 6: Terminate HTTPS in multiple places including the destination pod:
Terminating HTTPS in multiple places is another way of offering end-to-end HTTPS encryption; it just means that you'll have trusted middlemen (entities that terminate and then re-encrypt traffic). Web Application Firewalls (WAFs) and Service Meshes are common variants of this. The idea is that your HTTPS traffic could get terminated by a WAF, re-encrypted by the WAF so it can be terminated by an Ingress Controller, then re-encrypted by the Ingress Controller so it can be terminated a final time at the pod level by an Istio sidecar proxy container, and finally forwarded in cleartext to the destination container over the pod's localhost.
Option 1: Cloud Public LB (Service of type LoadBalancer)
Note: The HA Cloud Load Balancer could be
A Layer 4 Network LB (aware of IP/Ports) or
A Layer 7 Application LB (aware of HTTP URLs, Paths, and HTTPS TLS Certificates)
(The options available depend on the Cloud Provider used)
Option 2: On-Premises: Manually configure DIY LBs to point to a Service of type NodePort
A small/medium business (SMB) or home lab could be configured with a DMZ LAN to host the Kubernetes cluster, with its trusted LAN behind the DMZ. website1.com and website2.com could CNAME to a Dynamic DNS address of a home/SMB IGW router; that router could then port-forward traffic coming in on ports 443 and 80 of its WAN IP to a private IP in the DMZ. The kube-apiserver wouldn't be accessible from the public Internet, but would be accessible from the home/SMB/corporate LAN, and if the cluster were ever compromised, the home/SMB/corporate LAN would be safe.
Option 3: On-Premises MetalLB
In the first article of the series, I pointed out that Kubernetes nodes double as routers. Well, they can also pull triple duty and act as their own Layer 4 Load Balancers. MetalLB is software that allows Kubernetes nodes to do just that. It's not universally compatible (it doesn't work with all OSes, Kubernetes implementations, cloud providers, and CNIs), but it does work with my 2 favorite CNIs, Canal and Cilium. It's also worth mentioning that externalTrafficPolicy: Cluster and Local are both supported.
MetalLB comes in 2 modes: Layer 2 mode and BGP mode.
MetalLB Layer 2 mode works like this:
One of your nodes is elected to be the leader; that node reconfigures its Ethernet port so it can hold multiple MAC/IP addresses. All incoming traffic is directed to the single node acting as the LB. If that node fails, then within about 10 seconds MetalLB will come up on a different node with the same IP address, so although there is no true HA, there is self-healing.
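For reference, older MetalLB releases are configured through a ConfigMap like the sketch below (newer releases use CRDs instead; the address range is an illustrative LAN example):
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    address-pools:
      - name: default
        protocol: layer2
        addresses:
          - 192.168.1.240-192.168.1.250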
MetalLB BGP mode:
Your nodes act like routers and talk to your Internet Gateway Router using the BGP routing protocol, which allows for true HA load balancing. In this mode, every node acts as a router, an LB, and a node. BGP mode makes provisioning a public IP a bit easier, but the initial setup is complex and full of limitations and gotchas.
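The equivalent legacy ConfigMap for BGP mode adds a peers section pointing at the router (the ASNs and addresses below are illustrative placeholders):
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    peers:
      - peer-address: 192.168.1.1   # the Internet Gateway Router speaking BGP
        peer-asn: 64501
        my-asn: 64500
    address-pools:
      - name: default
        protocol: bgp
        addresses:
          - 192.168.1.240/28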
Option 4: Minikube (No Load Balancer, hostNetwork: true)
You may be wondering how an ingress controller would work on minikube where a single node cluster is running in a VirtualBox VM.
By default, Minikube is provisioned with 2 network adapters. The first is for internet access and uses NAT to form a one-way network boundary; the second allows two-way traffic flow but maintains the network boundary by only allowing communication with your laptop.
You can use the following 2 commands to see what IPs your minikube got provisioned with:
Bash(Laptop running minikube)# kubectl get pod -l=k8s-app=kube-proxy -n=kube-system -o yaml | egrep -hi "hostNet|hostIP"
Bash(minikube)# minikube ip
Bash(Laptop running minikube)# minikube addons enable ingress
This will deploy an ingress controller pod to the kube-system namespace. It will be configured with "hostNetwork: true", which allows it to listen for traffic coming in on ports 80 and 443 of the Minikube VM's IP address, 192.168.99.101 in my case. (Remember the Minikube VM has 2 IPs; we need to use the IP that allows incoming traffic from the laptop.)
After enabling the nginx-ingress-controller deployment on Minikube, to get it to work I deployed a Service and an Ingress YAML object that referenced the URL mywebsite.laptop (a rough sketch of the Ingress appears at the end of this section). I then edited my hosts file to map that URL to the Ingress Controller's IP address that allows incoming traffic.
Bash(Laptop running minikube)# sudo nano /etc/hosts
Once that was done, the website hosted in Minikube was visible by browsing to http://mywebsite.laptop from the laptop.
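For reference, a rough sketch of what such an Ingress object could look like (the backend Service name and port are assumptions; the hosts-file entry simply maps mywebsite.laptop to the Minikube VM's reachable IP):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mywebsite
spec:
  rules:
    - host: mywebsite.laptop
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: mywebsite
                port:
                  number: 80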
In addition to all of the above permutations of how you can architect and configure ingress into the cluster, if you look at the official docs for Ingress Controllers, you'll see there's a plethora of flavors to choose from when it comes to the Ingress Controller (self-configuring L7 LB) itself.
Unfortunately, I don’t have enough hands-on experience with multiple Ingress Controllers to offer a meaningful deep-dive comparison of the different options; however, there are already a few articles that have done a decent job comparing different Ingress Controllers:
If you want to learn more about the different Ingress Controllers available, I suggest you read those links. What I'll be sharing in this section of the article is practical advice on:
What’s the best Ingress Controller to start with:
There are so many that it's hard to know where to start. For people new to Kubernetes, the Nginx Ingress Controller maintained by the Kubernetes project is probably the best place to start because it's well documented and easy to use. There are, however, 2 big gotchas to be aware of if you choose to use the Nginx Ingress Controller:
One is offered by the maintainers of Nginx and the other by the maintainers of Kubernetes, so it's safe to say that both versions are equally official. That being said, the one offered by the Kubernetes maintainers is more popular and thus recommended for beginners. (It has 3 times as many GitHub stars and forks; it's also worth mentioning that Bitnami offers a hardened fork of this variant.)
When I was first learning Kubernetes, I wasn't aware that there were 2 versions. Now I realize that when advice I found on Stack Overflow or a Grafana graph I tried to import wasn't working, it was because the two controllers' configuration syntaxes and exported Prometheus metrics are incompatible with each other.
Here are links to the configs + a readme of the key differences:
Good strategies for kicking the tires on another Ingress Controller:
Once you feel comfortable managing 1 Ingress Controller, and before you start looking into other alternatives, I highly recommend you figure out how to manage multiple deployments of the same Ingress Controller within a Kubernetes cluster. I hinted that this was possible in an earlier diagram, and now I'll take some time to really elaborate on that diagram and why it's so critical to figure out how to manage multiple instances of Ingress Controllers:
Let’s start with what’s going on here:
Let's say that we have 3 Nginx Ingress Controllers running in our cluster, all running version 0.21.0 and configured exactly the same way. The main difference between them is that:
The Public and Staging Websites' Ingress objects have annotations that specifically reference their respective Ingress Controllers:
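Something like the following sketch, using the 2020-era kubernetes.io/ingress.class annotation (the class names are placeholders and must match the --ingress-class flag each controller is started with; newer clusters use the ingressClassName field and IngressClass resources instead):
# Public website's Ingress (rules omitted for brevity)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-website
  annotations:
    kubernetes.io/ingress.class: "nginx-public"
---
# Staging website's Ingress (rules omitted for brevity)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: staging-website
  annotations:
    kubernetes.io/ingress.class: "nginx-staging"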
The result is that the public site is only accessible over the Public LB, the staging site is only accessible over the Staging LB, and all other sites hosted on the cluster like Grafana are only accessible over the Private LB.
So why bother with multiple ingress controllers with multiple classes? What benefits do we get?
When does it make sense to look into other Ingress Controllers:
(At first, this will seem like an unrelated aside but bear with me, it’ll tie into Ingress Controllers)
Kubernetes has tooling applications that you can install in your cluster that shift the level of effort and complexity required to implement advanced concepts from insane to still really hard:
The advanced topics I’m referring to are things like:
In my opinion, other Ingress Controllers exist because they make these advanced concepts possible or integrate well with them. The Traefik Ingress Controller, for example, has built-in support for traffic mirroring, automatic HTTPS certificate provisioning, and authentication proxies. An Nginx Ingress Controller could do the same, but not with built-in support; it would have to delegate to additional Kubernetes applications like Jetstack's Cert-Manager, Keycloak (OIDC provider), and Keycloak Gatekeeper (OIDC auth proxy). Ambassador can do traffic mirroring, act as an API Gateway, and integrates well with the Istio Service Mesh.
Let’s circle back to “When does it make sense to look into other Ingress Controllers?”
Only when you have a hard requirement that the Nginx Ingress Controller can't solve. Example: having an API Gateway might be a minimum requirement for production go-live; if that's the case, then an Ingress Controller like Kong or Ambassador is worth looking into. Whenever I'm on a project involving Kubernetes, there's always an overabundance of work to do, and it's always worth separating "must have for production go-live" from "looks cool and helps enable nice-to-have features."