On-Premise Setup of Kubernetes Cluster using KubeSpray (Offline Mode) – PART 1

Today, most organizations are moving to managed services like EKS (Elastic Kubernetes Service) and AKS (Azure Kubernetes Service) for easier handling of the Kubernetes cluster. With managed Kubernetes we do not have to take care of our master nodes; the cloud provider is responsible for the master nodes and for the lifecycle of the worker nodes, freeing up our time. We just need to deploy our microservices over the worker nodes. You can pay extra to achieve an uptime SLA of 99.95%, and automatic node repair keeps the cluster healthy and reduces the chances of downtime. This is good in many cases, but it can become expensive; AKS, for example, charges $0.10 per cluster per hour for the paid uptime SLA. On EKS you have to install upgrades for the VPC CNI yourself and also install the Calico CNI if you want it, and there is no IDE extension for developing EKS code. Above all, it creates a dependency on a particular cloud provider.

To avoid the dependency on any cloud provider, we have to create a vanilla Kubernetes cluster. This means we have to take care of all the components, that is, all the master and worker nodes of the cluster, by ourselves.

Here we had a scenario in which one of our clients required a Kubernetes cluster to be set up on on-premises servers, under the condition of no Internet connectivity. So I chose to perform the setup of the Kubernetes cluster via Kubespray.

Why Kubespray?

Kubespray is a composition of Ansible playbooks, inventory, provisioning tools, and domain knowledge for generic OS/Kubernetes cluster configuration management tasks. Kubespray provides a highly available cluster, is composable (choice of the network plugin, for instance), supports the most popular Linux distributions, and has continuous integration tests.

Creating a cluster


● Minimum required version of Kubernetes is v1.22
● Ansible v2.11+, Jinja 2.11+, and python-netaddr are installed on the machine that will run Ansible commands.
● The target servers must have access to the Internet in order to pull docker images. Otherwise, additional configuration is required. Check out the offline_environment.md file.
● The target servers are configured to allow IPv4 forwarding (see the sketch after this list).
● If using IPv6 for pods and services, the target servers are configured to allow IPv6 forwarding.
● The firewalls are not managed; you have to implement your own rules the way you are used to. In order to avoid any issues during deployment, you should disable your firewall.
● If Kubespray is run from a non-root user account, the correct privilege escalation method should be configured on the target servers. Then the ansible_become flag or the command parameters --become or -b should be specified.
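
For the IPv4 forwarding requirement, here is a minimal sketch of checking and persistently enabling it on a target server; the file name under /etc/sysctl.d/ is just an example:

# Check the current setting (1 means forwarding is enabled)
sysctl net.ipv4.ip_forward

# Enable it persistently (example file name; any file under /etc/sysctl.d/ works)
echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/99-kubernetes.conf
sudo sysctl --system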

Setup K8s Cluster using Kubespray (Offline Mode):

Prerequisite:

● All required binaries should be downloaded beforehand (for offline use).
● All binaries are pushed to a whitelisted storage reachable from the K8s servers, following a defined directory structure.
● Ansible installed on the jump host.
● Passwordless SSH connectivity between the jump host and all K8s nodes (see the sketch after this list).
● Python installed on all K8s nodes.
● ACR connectivity between all K8s nodes and the chosen container registry (ACR).
● YUM proxy setup from the K8s nodes to Nexus.
● Get the Kubespray Ansible Role over Jump host.
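
For the passwordless connectivity prerequisite, here is a minimal sketch of key-based SSH from the jump host; the user name and node IPs below are placeholders for your environment:

# Generate a key pair on the jump host (skip if one already exists)
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa

# Copy the public key to every K8s node (placeholder user and IPs)
for node in 10.10.1.3 10.10.1.4 10.10.1.5; do
  ssh-copy-id ansible@"$node"
done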

After completing all the prerequisites, we can continue further.

1) First of all, we need to pull the required code from the official GitHub repository.

git clone https://github.com/kubernetes-sigs/kubespray.git
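
Optionally, instead of working from the master branch, you may want to pin to a released branch so that the binary versions referenced in the download defaults stay consistent with what you have mirrored. This is only a sketch; the branch name below is an example, pick the release matching your target Kubernetes version:

# Example only: check out a stable release branch instead of master
git -C kubespray checkout release-2.20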

As we are setting up an offline Kubernetes cluster, we have to make the required changes in the file below and point all the download URLs to the location where the offline binaries are hosted.

vi kubespray/roles/download/defaults/main.yml
(Screenshots: previous download file, updated download file, and the Nexus view.)

We have to modify all the required URLs as per the directory structure available on Nexus.
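
To illustrate, a few overridden entries in roles/download/defaults/main.yml might look like the following. The Nexus host name and repository path are placeholders, and the exact variable names can differ slightly between Kubespray versions:

# Hypothetical Nexus raw repository hosting the offline binaries
kubeadm_download_url: "http://nexus.example.local/repository/k8s-raw/kubeadm/{{ kubeadm_version }}/kubeadm"
kubectl_download_url: "http://nexus.example.local/repository/k8s-raw/kubectl/{{ kube_version }}/kubectl"
kubelet_download_url: "http://nexus.example.local/repository/k8s-raw/kubelet/{{ kube_version }}/kubelet"
calicoctl_download_url: "http://nexus.example.local/repository/k8s-raw/calicoctl/{{ calico_ctl_version }}/calicoctl"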

2) After modifying all the binary URLs, we will provide root privileges to all the tasks mentioned in the cluster.yml file.

cd kubespray/
vi cluster.yml
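
Providing root privileges here means making sure the plays in cluster.yml escalate to root. A minimal sketch of what an edited play header could look like; the exact play layout varies between Kubespray versions, and the relevant part is the become line:

- hosts: k8s_cluster:etcd
  gather_facts: false
  become: true          # run all tasks in this play as root
  roles:
    - { role: kubespray-defaults }
    - { role: bootstrap-os, tags: bootstrap-os }

Alternatively, privilege escalation can be left to the --become flag that we pass to ansible-playbook in step 7.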

3) After editing cluster.yml, we have to disable SELinux on each Kubernetes node. We have to run the below command over each server.

sudo vi /etc/selinux/config

Here we have to set the SELINUX parameter to

SELINUX=disabled
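
If you prefer not to edit the file by hand, here is a sketch of the same change done non-interactively on each node; the sed pattern assumes the default SELINUX=enforcing line is present:

# Switch SELinux to permissive for the running system and disable it persistently
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config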

4) After the previous step, we have to turn off swap memory. For that, we need to edit the /etc/fstab file on each K8s node. We have to run the below command over each server.

sudo vi /etc/fstab

Here we have to comment out the line containing the swap entry, for example:

#/dev/mapper/rootVG-swapLV swap swap defaults 0 0
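
The same step can be scripted. Here is a minimal sketch that turns swap off immediately and comments out any swap entry in /etc/fstab (the device name shown above is specific to that particular server):

# Disable swap for the running system
sudo swapoff -a

# Comment out every uncommented swap entry so it stays off after reboot
sudo sed -i '/\sswap\s/ s/^\([^#]\)/#\1/' /etc/fstab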

5) After setting up all the K8s nodes, we have to restart them. We have to run this command over all the servers.

sudo init 6

6) Now we will come back to our jump host and run the below
commands one by one.

a) Copy ‘inventory/sample’ as ‘inventory/mycluster’:

cp -rfp inventory/sample inventory/mycluster

b) Update Ansible inventory file with inventory builder:

declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
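
For the three example IPs above, the generated inventory/mycluster/hosts.yaml should roughly resemble the following. The node names and group assignments are what the inventory builder picks by default and can be adjusted by hand; depending on the Kubespray version, the control-plane group may be named kube-master instead of kube_control_plane:

all:
  hosts:
    node1:
      ansible_host: 10.10.1.3
      ip: 10.10.1.3
      access_ip: 10.10.1.3
    node2:
      ansible_host: 10.10.1.4
      ip: 10.10.1.4
      access_ip: 10.10.1.4
    node3:
      ansible_host: 10.10.1.5
      ip: 10.10.1.5
      access_ip: 10.10.1.5
  children:
    kube_control_plane:
      hosts:
        node1:
        node2:
    kube_node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}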

c) Review and change parameters:

cat inventory/mycluster/group_vars/all/all.yml
cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
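
A few of the variables worth reviewing in group_vars/k8s_cluster/k8s-cluster.yml before an offline deployment; the values shown are illustrative defaults, not recommendations, and must match the binaries you mirrored:

kube_version: v1.23.7                     # Kubernetes version to deploy (must match the offline binaries)
kube_network_plugin: calico               # network plugin (calico, flannel, cilium, ...)
kube_service_addresses: 10.233.0.0/18     # service CIDR
kube_pods_subnet: 10.233.64.0/18          # pod CIDR
container_manager: containerd             # container runtime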

7) Deploy Kubespray with Ansible Playbook

ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml

Note: It will take 25-30 mins to run the complete playbook.

8) After the completion of the playbook, we can run the below command on the K8s master node to check whether all nodes are connected and running.

sudo /usr/local/bin/kubectl get nodes -o wide

It will give you the details of your cluster nodes.
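
The sudo in the command above is needed because the admin kubeconfig generated by the deployment lives under /etc/kubernetes/ and is readable only by root. If you want to run kubectl as a regular user on the master node, a common sketch (assuming the kubeadm-style layout that Kubespray uses, with the admin config at /etc/kubernetes/admin.conf) is:

mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
/usr/local/bin/kubectl get nodes -o wide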

Conclusion

This is how we can set up a Kubernetes cluster via Kubespray. With all that set, we still need to add monitoring, tracing, logging, and all the associated operations for troubleshooting. Now that we have reached the end of the blog: do you think I could have done something differently? Do comment. Also, if you face any sort of issue in setting up the Kubernetes cluster, drop a comment. I'll be back with Part 2 of the blog series, where I'll discuss setting up the Metrics Server and Redis.

Blog Pundits: Mehul Sharma and Sandeep Rawat

OpsTree is an End-to-End DevOps Solution Provider.


Author: Rishabh Sharma


4 thoughts on “On-Premise Setup of Kubernetes Cluster using KubeSpray (Offline Mode) – PART 1”

  1. Thank you for the guide. Could you please tell me where I can find all those binaries?

    1. Hi Supawat,
      All these binaries are publicly available.
      You can get them from the web, depending on your required version, or from their specific GitHub repositories.
      For instance, here is the URL for Calico releases, from which you can download the version matching your requirement:
      https://github.com/projectcalico/calico/releases

  2. Hi Rishabh Sharma, have you tried installing any Kubernetes version between v1.22.3 and v1.26.2 using Kubespray on RHEL 8.4? Need your inputs.
