by Rahul Surasinghe | Oct 09, 2019
As Kubernetes and Red Hat's distribution of it, OpenShift, gain popularity, we at Oteemo are tasked with solving some of the challenges of integrating third-party tools with the platform. One of our most interesting and challenging assignments was integrating VMware's NSX-T networking software with Red Hat's OpenShift platform. Our hope is that you walk away from this blog with prerequisites you can implement to successfully install OpenShift with NSX-T integration.
NSX is a software-defined network (SDN) from VMware. NSX-T is the offering of NSX that supports multiple virtualization and container platforms (e.g. KVM, Docker, OpenShift). This SDN can be integrated with Red Hat's OpenShift Container Platform (OCP). However, integration between these two platforms isn't as easy as filling in a few parameters in a hosts file. We therefore highly recommend reading the official NSX-T integration with OpenShift documentation from VMware and Red Hat respectively; otherwise, the following blog may read like a foreign language.[1][2]
As the official documentation shows, the integration process has many manual steps. Steps such as tagging the second vNIC with the cluster name and VM name are prone to human error. Furthermore, troubleshooting such an error isn't trivial; speaking from experience, it takes a meticulous walk back through every installation step to find the root cause. These manual steps are tedious and often lead to frustration, because something will go awry, and watching an OpenShift installation fail at the 40-minute mark is demoralizing.
This blog therefore focuses on automating those manual steps, so you can be confident that an OpenShift installation with NSX-T will run seamlessly and successfully. In fact, we at Oteemo were able to spin up OpenShift clusters successfully and repeatably within 35 minutes, and we hope to help you achieve this as well.
There are a few prerequisites that must be met on NSX-T, and they determine whether the OpenShift installation succeeds.
These are the NSX-T prerequisites:[2]
This blog focuses on automating prerequisites 3 and 5. That said, the approach used to automate prerequisite 3 applies symmetrically to the other prerequisites (1, 2, and 4).
By implementing this automation, you and your team will trade time spent troubleshooting for swiftly standing up an OpenShift cluster with NSX-T integration.
We automated the creation of the IP block, also called a pod CIDR block (for clarity, we will refer to it as an IP block), using Ansible. It can also be automated with Python, specifically with the requests module, but there is more overhead in that approach, so we will stick with Ansible to create the IP block.
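For comparison, a bare-bones requests-based version of the same call might look like the sketch below. All names here (manager address, certificate paths, block name, CIDR) are hypothetical placeholders, and the Ansible role that follows handles idempotence more cleanly:

```python
# create_ip_block_sketch.py -- hypothetical sketch of creating an IP block
# with the requests module; placeholder values throughout.
import json
import requests

resp = requests.post(
    'https://nsx-manager.example.com/api/v1/pools/ip-blocks',   # placeholder NSX Manager
    cert=('/etc/nsx/nsx-cert.pem', '/etc/nsx/nsx-key.pem'),     # client cert/key pair
    verify=False,                                               # matches validate_certs: no
    headers={'Content-Type': 'application/json'},
    data=json.dumps({
        'display_name': 'ocp-pod-ip-block',   # example name
        'cidr': '10.4.0.0/16',                # example pod CIDR
    }),
)
resp.raise_for_status()
print(resp.json())
```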
In order to create the IP block using Ansible, we need to create five files:

1. create_ipblock.yml – The Ansible playbook that calls the role whose tasks live under ../roles/create_ipblock/tasks:

```yaml
# create_ipblock.yml
---
- name: "Automate IP Block Creation"
  hosts: localhost
  connection: local
  gather_facts: no
  roles:
    - create_ipblock
```
2. ../roles/create_ipblock/tasks/main.yml – The main task file that the playbook [1] calls. It gathers IP block information, checks whether the IP block already exists, renders a JSON body [5] from the Jinja2 template [4], and creates the IP block:

```yaml
---
# Query NSX-T for the existing IP blocks.
- name: Get IP Block Information
  uri:
    url: "https://{{ hostname }}/api/v1/pools/ip-blocks"
    method: GET
    return_content: yes
    client_cert: "{{ cert_path }}"
    client_key: "{{ key_path }}"
    force_basic_auth: yes
    validate_certs: no
  register: nsx_facts

# Skip creation when a block with the same display name already exists.
- name: Don't create if IP Block already exists
  set_fact:
    create_ipblock: false
  when: ip_block_pods_name == item.display_name
  with_items: "{{ nsx_facts.json.results }}"

- debug:
    var: create_ipblock

- name: Create the IP Block
  block:
    # Render the JSON request body from the Jinja2 template.
    - name: Create json from template
      template:
        src: ip_block.json.j2
        dest: /tmp/ip_block.json

    # POST the rendered body to NSX-T; 201 indicates successful creation.
    - name: Create IP Block
      uri:
        url: "https://{{ hostname }}/api/v1/pools/ip-blocks"
        method: POST
        return_content: yes
        client_cert: "{{ cert_path }}"
        client_key: "{{ key_path }}"
        force_basic_auth: yes
        validate_certs: no
        status_code: 201
        body: "{{ lookup('file', '/tmp/ip_block.json') }}"
        body_format: json
      register: nsx_block

    - name: Creation results
      debug:
        var: nsx_block
  when: create_ipblock | default(true) | bool   # default to creating when unset
```
3. ../answerfile.yml – The variables file that supplies the NSX Manager hostname, certificate paths, and the ip_block_pods_* values consumed by the tasks and template (see below).
4. ip_block.json.j2 – The Jinja2 template that renders the JSON body for the POST to https://{{ hostname }}/api/v1/pools/ip-blocks:

```json
{
  "display_name": "{{ ip_block_pods_name }}",
  "description": "{{ ip_block_pods_desc }}",
  "tags": [
    {
      "{{ ip_block_pods_cluster_scope }}": "{{ ip_block_pods_cluster_name }}"
    }
  ],
  "cidr": "{{ ip_block_pods_cidr }}"
}
```
5. ip_block.json – The rendered JSON body, written to /tmp/ip_block.json by the template task [4] and sent in the POST request.
In order to create a unique IP block under the IPAM section in NSX-T, you must edit answerfile.yml, which contains the variables. The variables that need to be set are:

- ip_block_pods_name
- ip_block_pods_desc
- ip_block_pods_cluster_scope
- ip_block_pods_cluster_name
- ip_block_pods_cidr
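For orientation, here is a minimal sketch of what answerfile.yml might look like. All values below (hostname, certificate paths, names, CIDR) are hypothetical examples, not values from our engagement:

```yaml
# answerfile.yml -- example values only; adjust for your environment
hostname: nsx-manager.example.com          # NSX Manager address (assumed variable)
cert_path: /etc/nsx/nsx-cert.pem           # client certificate for NSX API auth
key_path: /etc/nsx/nsx-key.pem             # matching private key
ip_block_pods_name: ocp-pod-ip-block
ip_block_pods_desc: "IP block for OpenShift pod networking"
ip_block_pods_cluster_scope: ncp/cluster
ip_block_pods_cluster_name: dev
ip_block_pods_cidr: 10.4.0.0/16
```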
After making changes to your variables file, creating the IP block is as easy as running the playbook:

ansible-playbook create_ipblock.yml
Tagging the logical ports associated with the second vNIC is essential for a successful OpenShift installation with NSX-T integration. The tags allow the NSX Container Plugin (NCP) to recognize which port is the parent VIF for all of the pods running on an OpenShift node.[2] They also allow the NSX node agents to propagate the tags to the NCP, which in turn surfaces them on the corresponding NSX-T resources as new OpenShift resources are created.[2] In other words, when an OpenShift project is created, an associated Tier-1 router is created in NSX-T carrying the tags that were applied to the logical ports. This is integral to making NSX-T work properly with OpenShift.
Note: VMware dropped the 'T' from the names of the NSX Container Plugin (NCP) and the NSX node agents.
In order to tag all the logical ports associated with the second vNIC, we use Python (i.e. tag_vnic.py); the process is involved enough that it would be very difficult to do in Ansible. The tags in question are two key/value pairs:

```python
tags = [{'ncp/node_name': 'node_name'}, {'ncp/cluster': 'cluster_name'}]
```
As you can see, tags is a list of dictionaries. tags will therefore be a key in the JSON body of the PUT request, with these dictionaries as its value. The keys 'ncp/node_name' and 'ncp/cluster' are mandatory; their values should be, respectively, the name of the VM that owns the second vNIC's logical port and the name of the cluster.
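For illustration, the tags portion of that PUT body would look roughly like the snippet below; note that the raw NSX-T API represents each tag as a scope/tag pair, and the node name (dev-m-001) and cluster name (dev) are just example values:

```json
{
  "tags": [
    { "scope": "ncp/node_name", "tag": "dev-m-001" },
    { "scope": "ncp/cluster", "tag": "dev" }
  ]
}
```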
Note: This script was created by adapting the nsx_cleanup.py script authored by Yasen Simeonov from VMware.[4] Links to his GitHub repo and to the Oteemo GitHub repo containing the tagging code are provided below.[4]
The script, tag_vnic.py, imports four important modules:

- optparse
- requests
- itertools
- json
The script runs four methods (a minimal sketch of the final tagging step follows this list):

1. get_logical_ports_for_second_vnic(self) – retrieves the logical ports so those belonging to each node's second vNIC can be identified.
2. generate_node_names(self) – builds the expected node (VM) names from self.cluster using the <cluster>-<m/c/i>-### naming convention (e.g. dev-m-001).
3. get_lport_attachment_id(self, nodes) – looks up the lport_attachment_id for each node's logical port.
4. tag_logical_ports_of_second_vnic(self, filter_nodes, json_ports) – applies the two tags to each matching logical port.
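To make the flow concrete, here is a minimal, hypothetical sketch of the tagging step using the requests module against the NSX-T Manager API. It is not the full tag_vnic.py: the manager address, certificate paths, and port ID are placeholders, and error handling is kept to a minimum:

```python
# tag_port_sketch.py -- minimal sketch of tagging one logical port.
# Assumptions: NSX-T Manager v1 API, client-certificate auth, and
# placeholder paths/addresses; this is not the full tag_vnic.py.
import json
import requests

NSX_MGR = 'https://nsx-manager.example.com'                 # placeholder manager address
CERT = ('/etc/nsx/nsx-cert.pem', '/etc/nsx/nsx-key.pem')    # client cert/key pair


def tag_logical_port(port_id, node_name, cluster_name):
    url = NSX_MGR + '/api/v1/logical-ports/' + port_id

    # GET the port first: the NSX-T API requires the current _revision
    # to be echoed back on updates, so we modify the fetched object.
    port = requests.get(url, cert=CERT, verify=False).json()

    # The raw API expects each tag as a scope/tag pair.
    port['tags'] = [
        {'scope': 'ncp/node_name', 'tag': node_name},
        {'scope': 'ncp/cluster', 'tag': cluster_name},
    ]

    # PUT the modified port definition back to apply the tags.
    resp = requests.put(url, cert=CERT, verify=False,
                        headers={'Content-Type': 'application/json'},
                        data=json.dumps(port))
    resp.raise_for_status()
    return resp.json()


if __name__ == '__main__':
    # Example: tag the port backing the second vNIC of node dev-m-001.
    tag_logical_port('replace-with-port-id', 'dev-m-001', 'dev')
```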
To run the script:

python2.7 tag_vnic.py --nsx-cert=<path_to_nsx_cert> --key=<path_to_key> --mgr-ip=<https://nsx_manager_ip> --cluster=<cluster_name>
In our engagement, we noticed that documentation on the integration between NSX-T and OpenShift is sparse. Red Hat's official documentation, for example, reads nearly word-for-word like VMware's. So if you run into trouble, there isn't much material to consult for a solution. Furthermore, most troubleshooting answers are locked behind Red Hat's forums, which require a paid subscription, and most of those solutions target earlier versions of OpenShift and NSX-T, so they may not be useful on later versions of the platforms.

Finally, both platforms are under active development, so new compatibility issues keep arising. One such issue Oteemo faced is that NSX-T exposes two different APIs through its UI: the Simple UI uses the new declarative API, while the Advanced Settings UI uses the older imperative API. The Simple UI and its declarative API were released in NSX-T 2.4[5] to help users configure and create NSX-T resources easily with "bare minimum user input."[5] The declarative API also adheres to infrastructure as code, allowing users to leverage automation frameworks such as Ansible or scripting languages such as Python.[5] As great as this is, VMware neglected to state in its NSX-T integration with OpenShift documentation that the GUI uses one API or the other depending on whether the user is in the Simple UI or the Advanced Settings UI. This caused many OpenShift installations with NSX-T integration to fail, because both APIs were being used in the same installation run. It wasn't until we received official VMware support and read a forum post that we figured this out.[6]

As for which API one should use for an installation: it depends. The new API is still under active development, and at the time of this publication there are no official Ansible modules for NSX-T, although there is an official VMware GitHub repo where they are actively developing them.[7] In our engagement, we leveraged the old API because our client had already created NSX-T resources using the Advanced Settings UI. Best practice, however, is to leverage the new declarative API, because the old one will be phased out. In that case, make sure to create any OpenShift-cluster-dependent NSX-T resources using the Simple UI, otherwise the installation will fail, and make sure the Python script that automates these prerequisites uses the new API as well.
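As a rough illustration of the split (paths are indicative of NSX-T 2.4-era releases; verify them against your version's API reference), the same kind of resource is reached through different base paths depending on which API created it:

```
# Older imperative (Management) API -- what the Advanced Settings UI drives
GET https://<nsx-manager>/api/v1/pools/ip-blocks

# Newer declarative (Policy) API -- what the Simple UI drives
GET https://<nsx-manager>/policy/api/v1/infra/ip-blocks
```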
By following these steps, you will be able to automate the NSX-T prerequisites that are required for OpenShift and NSX-T integration. This saves a great deal of time over manually entering these items in the NSX-T web UI and prevents human errors such as mistyping a tag name. Have a great and seamless OpenShift install with NSX-T integration!