Kubernetes Diary – Software LoadBalancer

Problem Statement..?

Most of us who have used Kubernetes with a public cloud have also created a cloud load balancer. Ever thought about how this can be achieved in a private data center? The easiest way would be to use a NodePort service and expose our services with it. In this blog, however, we won’t take the easy way out. Well, at least not the easiest way. We are going to talk about ways to achieve the same goal of a software load balancer in a private data center with some interesting tools.

Basic understanding first..!

Q. What makes it possible to automatically attach an external load-balancing solution (like AWS ELB) from the underlying cloud provider (like AWS) just by creating a Service object of type: LoadBalancer in Kubernetes, as shown below?

Example:

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
  - port: 8765
    targetPort: 9376
  type: LoadBalancer

Solution:

It’s the Kubernetes "cloud-controller-manager". This is where the magic happens. Developing the Kubernetes core while simultaneously integrating it with every cloud platform it might run on is not an easy task, and it is not practical either, since the Kubernetes project and the cloud platforms evolve at different paces. To overcome these real-world issues, a daemon called the Cloud Controller Manager (CCM) was introduced that embeds cloud-specific control loops in the Kubernetes setup. The CCM can be linked to any cloud provider as long as two conditions are satisfied: the cloud provider implements the cloud provider interface (CPI), and the core CCM package supports that provider (a rough sketch of how the CCM is wired in follows the list below). But, as of now, this is true for very few providers:

List: providers.go

  • AWS
  • Azure
  • GCE
  • OpenStack
  • vSphere
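
As a rough, hedged sketch of how the CCM is wired in (the flag values and file paths are illustrative and depend on your distribution and Kubernetes version), the legacy in-tree path and the external CCM path look roughly like this:

# Legacy in-tree provider: the controller-manager itself runs the cloud control loops
kube-controller-manager --cloud-provider=aws --cloud-config=/etc/kubernetes/cloud.conf ...

# External CCM: core components defer cloud logic to a separate cloud-controller-manager
kubelet --cloud-provider=external ...
kube-controller-manager --cloud-provider=external ...
# plus the provider's cloud-controller-manager running in the cluster (e.g. as a Deployment or DaemonSet)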

(Diagram: the architecture of a Kubernetes cluster with the cloud controller manager (CCM).)
(Diagram: the architecture of a Kubernetes cluster without the cloud controller manager (CCM).)

My case is the one without CCM, since my data center (OpenNebula) doesn’t fall under the supported category, nor does it provide any custom CCM support the way DigitalOcean does. To read more, look at the digitalocean-cloud-controller-manager page.

So how do we create a LoadBalancer-type Service object if our provider has no custom CCM (cloud-controller-manager) support?

Luckily, we have two very promising solutions available: MetalLB and Porter.

MetalLB:

Enter MetalLB, which can provide a virtual load balancer in two modes: BGP and Layer 2 (ARP).

The latter is simpler because it works with almost any Layer 2 network without further configuration. In ARP mode, MetalLB is quite simple to configure: we just have to give it a range of IPs to use and we are good to go.

The deployment manifests are available here. To configure the IP addresses, we use a ConfigMap.

metallb-configmap.yml

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.12.0.200-10.12.0.220

kubectl apply -f metallb-configmap.yml

We also need to generate a secret to secure the communication between MetalLB components. The following command generates the Kubernetes Secret YAML:

kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" -o yaml --dry-run=client > metallb-secret.yaml
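
The generated manifest then has to be applied, along with the MetalLB deployment manifests:

kubectl apply -f metallb-secret.yaml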

Once everything is deployed you should see your pods inside the metallb-system namespace:

NAME                          READY   STATUS    RESTARTS   AGE
controller-57f648cb96-tvr9q   1/1     Running   0          3d6h
speaker-uj78g                 1/1     Running   0          3d6h
speaker-y7iu6                 1/1     Running   0          3d6h
speaker-ko09j                 1/1     Running   0          3d6h
speaker-de43w                 1/1     Running   0          3d6h
speaker-gt654                 1/1     Running   0          3d6h
speaker-asd32                 1/1     Running   0          3d6h
speaker-a43de                 1/1     Running   0          3d6h
speaker-df54r                 1/1     Running   0          3d6h
speaker-lo78h                 1/1     Running   0          3d6h
speaker-hj879                 1/1     Running   0          3d6h

Woohoo!! Congratulations, it’s all set and ready to be tested.

Try creating any Kubernetes Service with type: LoadBalancer and it will be assigned an external IP (a minimal sketch is shown below). But this is not all. We might also have to do some NAT, since the external IP range in the manifest above (10.12.0.200-10.12.0.220) is within a private network. This can be done in either of two ways: if there is a NAT option in our cloud provider’s (local data center) management UI, we can simply do the mapping there; otherwise, we can log in to our router and write the NAT rules there.
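
As a minimal sketch (the Service name, selector, and ports here are illustrative, not taken from the setup above), requesting and verifying an external IP from MetalLB looks like this:

example-lb-service.yml

apiVersion: v1
kind: Service
metadata:
  name: example-lb        # hypothetical name, for illustration only
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080

kubectl apply -f example-lb-service.yml
kubectl get svc example-lb
# EXTERNAL-IP should show an address from the 10.12.0.200-10.12.0.220 pool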

Testing phase

I carried out testing to make sure there would be no performance penalties.

Infra Configuration:

  • 3 Kubernetes worker nodes, with BGP configured on the edge router
  • MetalLB
  • NGINX ingress controller
  • ExternalDNS

Workload:

Two web applications were deployed: one stateful, with a database; the other stateless, roughly simulating our workloads. Their traffic was exposed through the NGINX ingress, and the NGINX ingress Service was set to type LoadBalancer with a MetalLB IP attached (a sketch of that change follows below). I used locust.io to simulate traffic to the web applications.
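
As a hedged sketch (the Service name and namespace assume a standard ingress-nginx installation and may differ in your cluster), switching the ingress controller’s Service to type LoadBalancer is a one-line patch:

kubectl patch svc ingress-nginx-controller -n ingress-nginx \
  -p '{"spec": {"type": "LoadBalancer"}}'
# MetalLB then assigns an address from the configured pool as the EXTERNAL-IP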

The goal was to see if taking a node down would cause downtime or network instability.

Test procedure followed:

The traffic simulated 10,000 users in parallel against a pool of 3 nodes. Nodes were taken down one by one as per the testing procedure. We observed that the traffic was largely unaffected, except for a few increases in latency while the database was being rescheduled. Then we introduced artificial latency with NetEm on the nodes (a sketch follows below) and had an interesting finding: MetalLB essentially monitors the Ready status of the node, and when the node’s health status fails, MetalLB takes it out of the pool. When a heavily loaded node keeps falling in and out of Ready status, which is quite common in our cluster, MetalLB will not help a great deal. But it does resolve the main issue of site instability.
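
For reference, a minimal NetEm sketch (the interface name and delay value are illustrative) for injecting artificial latency on a node:

# add 200ms of delay on the node's primary interface (eth0 assumed)
tc qdisc add dev eth0 root netem delay 200ms
# remove the delay once the test run is finished
tc qdisc del dev eth0 root netem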

Porter:

Core Features

  • ECMP routing load balancing
  • BGP dynamic routing configuration
  • VIP management
  • LoadBalancerIP assignment in Kubernetes services
  • Installation with Helm Chart
  • Dynamic BGP server configuration through CRD
  • Dynamic BGP peer configuration through CRD

Deployment Architecture

(Diagram: Porter deployment architecture.)

Read more: https://github.com/kubesphere/porter

Similarity/difference between these two:

Apparently, Porter and MetalLB are quite similar: both implement the LoadBalancer Service type and are built to support bare-metal Kubernetes clusters.

Summary:

I have personally tested MetalLB in production environments in various data centers, one such being Alibaba (UAE region), and even on a public cloud like AWS. It’s amazing. Mostly, in my environment, I have an ingress that routes all the external traffic within my cluster, with an external IP attached to it, roughly as sketched below.
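
As a hedged sketch of that setup (the Service name again assumes a standard ingress-nginx install, and the address must fall inside the MetalLB pool), a specific pool IP can be pinned to the ingress controller via spec.loadBalancerIP:

ingress-lb-service.yml

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # assumed ingress controller Service name
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: 10.12.0.200      # must lie within the configured MetalLB address pool
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https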

For more interesting Kubernetes updates and problem statements, follow me on:

Thank you all..!

