In this post we will walk through creating and using wild-card certificates in a Kubernetes cluster with cert-manager and nginx-ingress. This article is intended for people with some base understanding of Kubernetes, cert-manager, and Nginx. That being said, cert-manager is an awesome tool that automatically negotiates SSL certificates on our behalf using external providers like Let’s Encrypt. The nginx-ingress-controller is a tool that lets you configure an HTTP load balancer to expose your Kubernetes services outside of your cluster.

We will be installing nginx-ingress and cert-manager with a Kubernetes package management tool called helm. Once these tools are installed we will create a cluster-issuer for cert-manager and a wild-card certificate. The wild-card certificate secret will only be created in a single namespace, so we will then use a tool called kubed to replicate the secret across multiple namespaces. To spice things up, instead of manually annotating the secret for kubed, we will create a Kubernetes cronjob to annotate it for us.

Install Helm

First things first, let’s install the Helm client tool.

For mac:

$ brew install kubernetes-helm

Other Operating Systems refer to: https://docs.helm.sh/using_helm/#installing-helm

Since I have RBAC enabled on my cluster, we must deploy a service account and cluster role binding for the tiller pod to use.

$ cat <<EOF | kubectl create -f -
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
EOF

Deploy the tiller pod in your cluster:

helm init --service-account tiller

Install Nginx Ingress Controller

Using helm we will install an nginx ingress controller so that external traffic can be forwarded to the correct Kubernetes resources in our cluster.

$ helm install --name nginx stable/nginx-ingress

Install Cert-manager

Next we will install cert-manager. Cert-manager is an awesome tool that reaches out to external certificate providers like Let’s Encrypt and generates valid TLS certificates for us.

NOTE: Be sure to configure the correct permissions to run cert-manager in the cloud provider you’re using; for more info refer to: https://github.com/jetstack/cert-manager/blob/master/docs/reference/issuers/acme/dns01.rst
$ helm install --name cert-manager stable/cert-manager

Make sure everything is up and running

List installed charts

helm ls

Expected Output:


NAME        REVISION   UPDATED                STATUS        CHART             NAMESPACE
cert-manager   1     Tue Sep 4 23:20:02 2018  DEPLOYED   cert-manager-v0.4.1    default
nginx          1     Tue Sep 4 23:19:54 2018  DEPLOYED   nginx-ingress-0.26.0   default

Make sure pods are up and running

kubectl get pods

Expected Output:


NAME                                                READY     STATUS       RESTARTS        AGE
cert-manager-756d6d885d-6nxz7                        1/1       Running         0           48s
nginx-nginx-ingress-controller-5b79ff75b6-p4zzn      1/1       Running         0           57s
nginx-nginx-ingress-default-backend-db4db5d6d-7c9bp  1/1       Running         0           57s

Make sure load balancer has been deployed for nginx

kubectl get svc -l component=controller

Expected Output:

NAME                              TYPE             CLUSTER-IP         EXTERNAL-IP                                                 PORT(S)                     AGE
nginx-nginx-ingress-controller   LoadBalancer       <Some I.P>    <SomelongPublicDNSEntryForYourLoadbalancer.comamazonaws.com>  80:30050/TCP,443:31903/TCP     2m
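For Let’s Encrypt validation to matter in practice, a wild-card DNS record for your domain should point at this load balancer. Assuming you created a CNAME for *.<YOUR-DOMAIN> targeting the EXTERNAL-IP hostname above (the record itself is up to you and your DNS provider), a quick sanity check looks like:

```shell
# Any subdomain should resolve to the nginx load balancer
dig +short anything.<YOUR-DOMAIN>
```

If this returns the load balancer’s DNS name or IPs, traffic for every subdomain will land on nginx.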

Install Cluster-Issuers

In order for cert-manager to be able to create certificates we must configure a ClusterIssuer. This tells cert-manager information about what certificate provider we would like to use, and what domain validation technique to use.

$ cat <<EOF | kubectl create -f -
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: 'test@test.com'
    privateKeySecretRef:
      name: letsencrypt-prod
    dns01:
      providers:
      - name: aws
        route53:
          region: us-east-1
EOF

Install Wild-Card Certificate

Next let’s create a Certificate resource. This tells cert-manager to ask the letsencrypt-prod cluster-issuer for a wild-card certificate from Let’s Encrypt, using the DNS-01 challenge for domain validation (wild-card certificates can only be validated over DNS). The resulting certificate will be stored in a secret named wildcard-certificate.

$ cat <<EOF | kubectl create -f -
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: wildcard-certificate
spec:
  acme:
    config:
    - dns01:
        provider: aws
      domains:
      - '*.<YOUR-DOMAIN>'
  secretName: wildcard-certificate
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-prod
  commonName: '*.<YOUR-DOMAIN>'
EOF

Next, let’s tail the cert-manager logs to see what is going on (substitute your own pod name from the output earlier).

kubectl logs -f cert-manager-756d6d885d-6nxz7

If everything has been configured correctly, you should see cert-manager pick up the wild-card certificate request, reach out to Let’s Encrypt, validate your domain, and finally obtain a certificate. Once the certificate has been issued by Let’s Encrypt, you should see a secret called wildcard-certificate.
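Another way to follow progress, rather than tailing logs, is to query the Certificate resource itself; cert-manager registers it as a CRD, so the usual kubectl verbs apply (names here match the manifest above):

```shell
# Check whether the certificate has been issued and inspect its events
kubectl get certificate wildcard-certificate
kubectl describe certificate wildcard-certificate
```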

NOTE: Again, if you see a “not authorized” or “permission denied” error, make sure cert-manager’s cloud provider permissions are configured correctly.
kubectl get secrets

Expected Output:

NAME                             TYPE                                     DATA       AGE
cert-manager-token-z6c5f          kubernetes.io/service-account-token       3         10m
default-token-24pf2               kubernetes.io/service-account-token       3         10m
letsencrypt-prod                  Opaque                                    1         10m
nginx-nginx-ingress-token-7nk2t   kubernetes.io/service-account-token       3         10m
wildcard-certificate              kubernetes.io/tls                         2          3m

Install Kubed

So far we have created a secret containing the wild-card certificate in only one namespace (default). If we wish to deploy ingresses that use this TLS certificate in other namespaces, we must somehow replicate this secret into them. We will do this with the help of a tool called kubed.

$ helm repo add appscode https://charts.appscode.com/stable/
$ helm repo update
$ helm install appscode/kubed --name kubed --namespace kube-system \
--set apiserver.enabled=false \
--set config.clusterName=prod_cluster

Make sure kubed is up and running

$ kubectl get pods --namespace kube-system -l app=kubed

Expected Output:

NAME                           READY    STATUS     RESTARTS     AGE
kubed-kubed-79d887dd5d-mxfgq    1/1     Running       0          1m

Annotate Secrets

Now that kubed is up and running, let’s annotate the secret we want to replicate across multiple namespaces.

$ kubectl annotate secret wildcard-certificate kubed.appscode.com/sync="app=kubed"

Expected Output:

secret "wildcard-certificate" annotated

Create a Labeled Namespace

Next we will create a new namespace. This new namespace will have a label on it to let kubed know we want the secret replicated here.

$ cat <<EOF | kubectl create -f -
---
apiVersion: v1
kind: Namespace
metadata:
  name: demo-namespace
  labels:
    app: kubed
EOF

Make sure the namespace was created:

$ kubectl get namespaces

Expected Output:

NAME             STATUS    AGE
default          Active    1h
demo-namespace   Active    8s
kube-public      Active    1h
kube-system      Active    1h

Kubed picks up that a new namespace has been created with the label app=kubed, so it replicates the annotated secret into it. Make sure the secret has been added to the new namespace:

kubectl get secrets --namespace demo-namespace

Expected Output:

NAME                   TYPE                                  DATA      AGE
default-token-264z2    kubernetes.io/service-account-token   3         19s
wildcard-certificate   kubernetes.io/tls                     2         19s

Create Cronjob

So far so good. We have:

* deployed nginx-ingress, cert-manager, and kubed
* created a cluster-issuer and a wild-card certificate
* waited for Let’s Encrypt to issue us a TLS certificate
* annotated the new secret to replicate it across multiple namespaces

However, what if we wanted to automate this process? As you noticed, cert-manager had to wait for Let’s Encrypt to validate the domain before creating the secret with the TLS certificate, so the secret cannot be annotated at deploy time. Let’s write a Kubernetes cronjob to annotate the secret for us once it appears.

Bash

Let’s throw together a quick bash script to annotate the secrets for us.

Our script expects a list containing the names of the secrets we want to annotate.

#!/bin/bash

# Split the comma-separated SECRETS_LIST env var into a bash array
IFS=', ' read -r -a SECRETS_LIST <<< "${SECRETS_LIST}"

# Check if a given secret exists
# Parameters: ${1}=Namespace ${2}=Secret Name
secret_exists() {
  /usr/local/bin/kubectl get secrets -n "${1}" | \
    grep "${2}" \
    &>/dev/null
  echo "$?"
}

# Check if the kubed sync annotation is on a secret
# Parameters: ${1}=Namespace ${2}=Secret Name
annotation_exists() {
  /usr/local/bin/kubectl get secrets -n "${1}" "${2}" -o=jsonpath='{.metadata.annotations}' | \
    grep 'kubed.appscode.com/sync:app=kubed' \
    &>/dev/null
  echo "$?"
}

# Add the kubed sync annotation to a secret
# Parameters: ${1}=Namespace ${2}=Secret Name
add_annotation() {
  /usr/local/bin/kubectl annotate secret -n "${1}" "${2}" kubed.appscode.com/sync="app=kubed" \
    &>/dev/null
  echo "$?"
}

echo "${SECRETS_LIST[@]}"
COUNTER=0
for item in "${SECRETS_LIST[@]}"
do
  echo "Working on: ${item}"
  echo "Checking if secret exists..."
  secret_res=$(secret_exists "${SECRETS_NAMESPACE}" "${item}")
  if [ "${secret_res}" = 0 ]; then
    echo "Secret exists!"
    echo "Checking if annotation exists..."
    anno_res=$(annotation_exists "${SECRETS_NAMESPACE}" "${item}")
    echo "result=${anno_res}"
    if [ "${anno_res}" = 0 ]; then
      echo "Annotation exists!"
      let COUNTER=COUNTER+1
    else
      echo "Annotation does not exist"
      echo "Adding annotation..."
      add_anno_res=$(add_annotation "${SECRETS_NAMESPACE}" "${item}")
    fi
  else
    echo "Secret does not exist"
  fi
done

if [ "${COUNTER}" = "${#SECRETS_LIST[*]}" ]; then
  echo "All secrets are annotated"
fi
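The list parsing at the top of the script can be sanity-checked on its own (the secret names below are made up):

```shell
# Same parsing as cronjob.sh: an IFS of ', ' splits on commas and spaces
IFS=', ' read -r -a NAMES <<< "wildcard-certificate,another-secret"
echo "${#NAMES[@]}"   # 2
echo "${NAMES[0]}"    # wildcard-certificate
echo "${NAMES[1]}"    # another-secret
```

To run the whole script outside the cluster, set the same variables the cronjob provides: `SECRETS_LIST="wildcard-certificate" SECRETS_NAMESPACE="default" ./cronjob.sh`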

Configmap

We will mount this bash script into our cronjob using a configmap, so let’s go ahead and copy the code above into a file called cronjob.sh and then create a configmap.

$ kubectl create configmap wild-card-certs-cronjob --from-file=cronjob.sh

Expected Output

configmap/wild-card-certs-cronjob created

Cronjob

Our cronjob will use a Docker image that already has kubectl installed.

$ cat <<EOF | kubectl create -f -
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: wild-card-certs-cronjob
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccount: wild-card-certs-cronjob
          serviceAccountName: wild-card-certs-cronjob
          containers:
          - name: wild-card-certs-cronjob
            image: dtzar/helm-kubectl
            command: ["/bin/bash"]
            args:
            - /cronjob/cronjob.sh
            env:
            - name: SECRETS_LIST
              value: "wildcard-certificate"
            - name: SECRETS_NAMESPACE
              value: "default"
            volumeMounts:
            - name: cronjob
              mountPath: /cronjob
          restartPolicy: OnFailure
          volumes:
          - name: cronjob
            configMap:
              name: wild-card-certs-cronjob
              defaultMode: 0777
EOF

Expected Output:

cronjob.batch/wild-card-certs-cronjob created
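Because the schedule is */1 * * * *, a job fires every minute. You can watch the jobs come and go (job names are generated, so yours will differ from the placeholder below):

```shell
kubectl get cronjob wild-card-certs-cronjob
kubectl get jobs --watch
# Once a job completes, read its output
kubectl logs job/<GENERATED-JOB-NAME>
```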

Service Account, ClusterRole, ClusterRoleBinding

Since we are using RBAC in our cluster, we must create a service account, cluster role, and cluster role binding for our cronjob to use. This will allow our cronjob to manipulate secrets and cronjobs within our cluster.

$ cat <<EOF | kubectl create -f -
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    name: wild-card-certs-cronjob
  name: wild-card-certs-cronjob
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  labels:
    name: wild-card-certs-cronjob
  name: wild-card-certs-cronjob
rules:
- apiGroups: ['*']
  resources: ['secrets', 'cronjobs']
  verbs: ['*']
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  labels:
    name: wild-card-certs-cronjob
  name: wild-card-certs-cronjob
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: wild-card-certs-cronjob
subjects:
- kind: ServiceAccount
  name: wild-card-certs-cronjob
  namespace: default
EOF
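To confirm the binding took effect, you can ask the API server what the service account is allowed to do. This uses kubectl’s impersonation flag, so your own credentials must permit impersonation (cluster-admin does):

```shell
# Both should answer "yes" once the cluster role binding exists
kubectl auth can-i get secrets --as=system:serviceaccount:default:wild-card-certs-cronjob
kubectl auth can-i patch secrets --as=system:serviceaccount:default:wild-card-certs-cronjob
```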

Use Wild-card Secret in ingress

Let’s plop this secret into an ingress and see it in action!

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: <YOURINGRESS>
  namespace: demo-namespace
spec:
  rules:
  - host: <YOURSERVICE.YOURDOMAIN>
    http:
      paths:
      - backend:
          serviceName: <YourSERVICE>
          servicePort: 80
        path: /
  tls:
  - hosts:
    - <YOURSERVICE.YOURDOMAIN>
    secretName: wildcard-certificate

If everything works as expected, you should now be able to visit your website and see that it is being served over HTTPS! To further inspect the certificate from the client side, we can use openssl to fetch and decode it:

echo | \
openssl s_client -showcerts -servername <YOURSERVICE.YOURDOMAIN> \
-connect <YOURSERVICE.YOURDOMAIN>:443 2>/dev/null | \
openssl x509 -inform pem -noout -text
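The same certificate can also be pulled straight out of the Kubernetes secret, without a TLS handshake (note the escaped dot in the jsonpath key; older macOS base64 uses -D instead of -d):

```shell
# Decode tls.crt from the secret and print the subject and validity window
kubectl get secret wildcard-certificate -o jsonpath='{.data.tls\.crt}' | \
  base64 -d | \
  openssl x509 -noout -subject -dates
```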

Conclusion

We have just learned how to automate the negotiation and creation of wild-card certificates using cert-manager, and how to create an ingress into our cluster using nginx. Finally, in order to replicate the secrets created by cert-manager into multiple namespaces, we used a tool called kubed. Armed with this knowledge we should be able to easily deploy applications protected by SSL!

References:

https://rimusz.net/lets-encrypt-wildcard-certs-in-kubernetes/