{"id":4361,"date":"2020-10-13T17:15:10","date_gmt":"2020-10-13T11:45:10","guid":{"rendered":"https:\/\/opstree.com\/blog\/\/?p=4361"},"modified":"2026-02-18T15:12:56","modified_gmt":"2026-02-18T09:42:56","slug":"kubernetes-diary-software-loadbalancer","status":"publish","type":"post","link":"https:\/\/opstree.com\/blog\/2020\/10\/13\/kubernetes-diary-software-loadbalancer\/","title":{"rendered":"Kubernetes Diary &#8211; Software LoadBalancer"},"content":{"rendered":"\r\n<h2 class=\"wp-block-heading\">Problem Statement..?<\/h2>\r\n\r\n\r\n\r\n<p>Most of us who have used Kubernetes on a public cloud have also created a cloud load balancer. Have you ever wondered how this can be achieved in a private data center? The easiest way would be to use the concept of NodePort and expose our services with it. In this blog, however, we won&#8217;t take the easy way out. Well, at least not the easiest way. We are going to talk about ways to achieve the same goal of a software load balancer in a private data center with some interesting tools.<\/p>\r\n\r\n\r\n\r\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" class=\"\" src=\"https:\/\/collabnix.com\/wp-content\/uploads\/2019\/08\/metallb-1024x560.png\" alt=\"Kubernetes Cluster on Bare Metal System Made Possible using MetalLB\" width=\"494\" height=\"270\" \/><\/figure>\r\n<p><!--more--><\/p>\r\n\r\n\r\n\r\n\r\n\r\n<h2 class=\"wp-block-heading\">Basic understanding first..!<\/h2>\r\n\r\n\r\n\r\n<p><strong>Q. 
<\/strong>What makes it possible to automatically attach an external load-balancing solution (like <strong>AWS ELB<\/strong>) to an underlying cloud provider (like <strong>AWS<\/strong>) with a service object of <em><strong>type: LoadBalancer<\/strong><\/em> in Kubernetes, as shown below?<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<p><strong>Example:<\/strong><\/p>\r\n<p>\r\n\r\n<\/p>\r\n<pre class=\"wp-block-syntaxhighlighter-code\">apiVersion: v1\r\nkind: Service\r\nmetadata:\r\n  name: example-service\r\nspec:\r\n  selector:\r\n    app: example\r\n  ports:\r\n  - port: 8765\r\n    targetPort: 9376\r\n  type: LoadBalancer<\/pre>\r\n<p>\r\n\r\n<\/p>\r\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container\">\r\n<div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\r\n<h5 class=\"wp-block-heading\"><strong>Solution:<\/strong><\/h5>\r\n\r\n\r\n\r\n<p>It&#8217;s the Kubernetes &#8220;<em><strong>cloud-controller-manager<\/strong><\/em>&#8221;. This is where the magic happens. Developing the Kubernetes core while simultaneously integrating it with every cloud platform it might run on is no easy task. Moreover, it is not practical, since the Kubernetes project and the cloud platforms develop at different paces. To overcome such real-world issues, a daemon called the <strong>Cloud Controller Manager (<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/architecture\/cloud-controller\/\" target=\"_blank\" rel=\"noreferrer noopener\">CCM<\/a>)<\/strong> was introduced, which embeds cloud-specific control loops in the Kubernetes setup. <strong>CCM<\/strong> can be linked to any cloud provider as long as two conditions are satisfied: the cloud provider has a <strong>CloudProviderInterface<\/strong> (<a href=\"https:\/\/github.com\/kubernetes\/cloud-provider\/blob\/master\/cloud.go\" target=\"_blank\" rel=\"noreferrer noopener\">CPI<\/a>), and the core CCM package has support for the said cloud provider. 
But, as of now, this is true for only a few providers:<\/p>\r\n\r\n\r\n\r\n<p>List: <a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/blob\/master\/cmd\/cloud-controller-manager\/providers.go\" target=\"_blank\" rel=\"noreferrer noopener\">providers.go<\/a><\/p>\r\n\r\n\r\n\r\n<ul>\r\n<li>AWS<\/li>\r\n<li>Azure<\/li>\r\n<li>GCE<\/li>\r\n<li>OpenStack<\/li>\r\n<li>vSphere<\/li>\r\n<\/ul>\r\n<\/div>\r\n<\/div><\/div>\r\n<p>\r\n\r\n<\/p>\r\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container\">\r\n<div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\r\n<h5 class=\"wp-block-heading\">Architecture with the cloud controller manager (CCM):<\/h5>\r\n\r\n\r\n\r\n<div class=\"wp-block-image\">\r\n<figure class=\"aligncenter is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/i.imgur.com\/eZK0HZ7.png\" alt=\"\" width=\"652\" height=\"281\" \/><\/figure>\r\n<\/div>\r\n\r\n\r\n\r\n<h5 class=\"wp-block-heading\">The architecture of a Kubernetes cluster without the cloud controller manager (CCM):<\/h5>\r\n\r\n\r\n\r\n<div class=\"wp-block-image\">\r\n<figure class=\"aligncenter is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/i.imgur.com\/yzrbLX9.png\" alt=\"\" width=\"767\" height=\"312\" \/><\/figure>\r\n<\/div>\r\n\r\n\r\n\r\n<p>My case is the one without CCM, since my datacenter (OpenNebula) doesn&#8217;t fall under the supported category, nor does it provide any custom CCM support the way <strong>DigitalOcean<\/strong> does. 
To read more, look at the <a href=\"https:\/\/github.com\/digitalocean\/digitalocean-cloud-controller-manager\" target=\"_blank\" rel=\"noreferrer noopener\">digitalocean-cloud-controller-manager<\/a> page.<\/p>\r\n<\/div>\r\n<\/div><\/div>\r\n<p>\r\n\r\n<\/p>\r\n<h2 class=\"wp-block-heading\">So how do we create a LoadBalancer type service object if we don&#8217;t have custom CCM (<strong>cloud-controller-manager<\/strong>) support?<\/h2>\r\n<p>\r\n\r\n<\/p>\r\n<p>Luckily, we have two very promising solutions available:<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<ul>\r\n<li><a href=\"https:\/\/metallb.universe.tf\/\" target=\"_blank\" rel=\"noreferrer noopener\">MetalLB<\/a><\/li>\r\n<li><a href=\"https:\/\/github.com\/kubesphere\/porter\" target=\"_blank\" rel=\"noreferrer noopener\">Porter<\/a><\/li>\r\n<\/ul>\r\n<p>\r\n\r\n<\/p>\r\n<h2 class=\"wp-block-heading\">MetalLB:<\/h2>\r\n<p>\r\n\r\n<\/p>\r\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/banzaicloud.com\/blog\/load-balancing-on-prem\/metallb-l2.gif\" alt=\"\" \/><\/figure>\r\n<p>\r\n\r\n<\/p>\r\n<p>Enter\u00a0<code>MetalLB<\/code>,\u00a0which can provide a virtual load balancer in two modes:<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<ul>\r\n<li><a href=\"https:\/\/metallb.universe.tf\/configuration\/#bgp-configuration\" target=\"_blank\" rel=\"noreferrer noopener\">BGP<\/a><\/li>\r\n<li><a href=\"https:\/\/metallb.universe.tf\/configuration\/#layer-2-configuration\" target=\"_blank\" rel=\"noreferrer noopener\">ARP<\/a><\/li>\r\n<\/ul>\r\n<p>\r\n\r\n<\/p>\r\n<p>The latter is simpler because it works with almost any layer 2 network without further configuration. In ARP mode, MetalLB is quite simple to configure: we just have to give it a range of IPs to use and we are good to go.<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<p>The deployment manifests are available <a href=\"https:\/\/github.com\/iiamvishalraj\/metallb\/blob\/main\/metallb.yaml\" target=\"_blank\" rel=\"noreferrer noopener\">here<\/a>. 
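<\/p>\r\n<p>Assuming the manifest has been downloaded locally as <code>metallb.yaml<\/code> (the filename here is just an assumption), deploying it and checking the resulting components is a matter of two commands:<\/p>\r\n<pre class=\"wp-block-syntaxhighlighter-code\">kubectl apply -f metallb.yaml\r\nkubectl get pods -n metallb-system<\/pre>\r\n<p>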
To configure the IP addresses, we need to go\u00a0<a href=\"https:\/\/metallb.universe.tf\/configuration\/\" target=\"_blank\" rel=\"noreferrer noopener\">with a\u00a0<em>ConfigMap<\/em><\/a>.<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<pre class=\"wp-block-verse\">metallb-configmap.yml<\/pre>\r\n<p>\r\n\r\n<\/p>\r\n<pre class=\"wp-block-syntaxhighlighter-code\">apiVersion: v1\r\nkind: ConfigMap\r\nmetadata:\r\n  namespace: metallb-system\r\n  name: config\r\ndata:\r\n  config: |\r\n    address-pools:\r\n    - name: default\r\n      protocol: layer2\r\n      addresses:\r\n      - 10.12.0.200-10.12.0.220<\/pre>\r\n<p>\r\n\r\n<\/p>\r\n<pre class=\"wp-block-code\"><code>kubectl apply -f metallb-configmap.yml<\/code><\/pre>\r\n<p>\r\n\r\n<\/p>\r\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container\">\r\n<div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\r\n<p>We will also need to generate a secret to <strong>secure<\/strong> the communication between MetalLB components. This can be done with\u00a0<a href=\"https:\/\/github.com\/iiamvishalraj\/metallb\/blob\/main\/generate-secret.sh\" target=\"_blank\" rel=\"noreferrer noopener\">this<\/a> command,\u00a0which generates the Kubernetes secret YAML:<\/p>\r\n<\/div>\r\n<\/div><\/div>\r\n<p>\r\n\r\n<\/p>\r\n<pre class=\"wp-block-syntaxhighlighter-code\">kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey=\"$(openssl rand -base64 128)\" -o yaml --dry-run=client &gt; metallb-secret.yaml\r\n<\/pre>\r\n<p>\r\n\r\n<\/p>\r\n<p>Once everything is deployed, you should see your pods inside the\u00a0<code>metallb-system<\/code>\u00a0namespace:<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<pre class=\"wp-block-syntaxhighlighter-code\">NAME                          READY   STATUS    RESTARTS   AGE\r\ncontroller-57f648cb96-tvr9q   1\/1     Running   0          3d6h\r\nspeaker-uj78g                 1\/1     Running   0          3d6h\r\nspeaker-y7iu6                 1\/1     Running   0 
         3d6h\r\nspeaker-ko09j                 1\/1     Running   0          3d6h\r\nspeaker-de43w                 1\/1     Running   0          3d6h\r\nspeaker-gt654                 1\/1     Running   0          3d6h\r\nspeaker-asd32                 1\/1     Running   0          3d6h\r\nspeaker-a43de                 1\/1     Running   0          3d6h\r\nspeaker-df54r                 1\/1     Running   0          3d6h\r\nspeaker-lo78h                 1\/1     Running   0          3d6h\r\nspeaker-hj879                 1\/1     Running   0          3d6h<\/pre>\r\n<p>\r\n\r\n<\/p>\r\n<p>Woohoo! Congratulations, it&#8217;s all set and ready to be tested.<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/virtualthoughts.blob.core.windows.net\/uploads\/2019\/10\/MetalLB.png\" alt=\"K8s, MetalLB and Pihole \u2013 Virtual Thoughts\" \/><\/figure>\r\n<p>\r\n\r\n<\/p>\r\n<p>Try creating any Kubernetes service with type: LoadBalancer and it will be assigned an ExternalIP. But this is not all. We might further have to do some <strong>NATting<\/strong>, since the ExternalIP range (10.12.0.200-10.12.0.220) in the manifest above is within a private network. 
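<\/p>\r\n<p>If the edge router happens to be a Linux box, such a mapping can be sketched with an iptables DNAT rule; the public address and ports below are placeholders for illustration, not values from this setup:<\/p>\r\n<pre class=\"wp-block-syntaxhighlighter-code\"># Forward TCP 80 arriving on the public address to the MetalLB ExternalIP\r\niptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 80 -j DNAT --to-destination 10.12.0.200:80\r\n# Rewrite the source so return traffic flows back through the router\r\niptables -t nat -A POSTROUTING -d 10.12.0.200 -p tcp --dport 80 -j MASQUERADE<\/pre>\r\n<p>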
This can be done in either of two ways: if there is a NAT service option in our cloud provider&#8217;s (local data center) management UI, we can simply do the mapping there; otherwise, we can log in to our router and write the NAT rules there.<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<h2 class=\"wp-block-heading\"><strong>Testing phase<\/strong><\/h2>\r\n<p>\r\n\r\n<\/p>\r\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/www.objectif-libre.com\/wp-content\/uploads\/2019\/05\/archi-300x223.png\" alt=\"What you need to know about MetalLB - Objectif Libre\" \/><\/figure>\r\n<p>\r\n\r\n<\/p>\r\n<p>I carried out testing to make sure there would be no performance penalties.<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<h5 class=\"wp-block-heading\">Infra Configuration:<\/h5>\r\n<p>\r\n\r\n<\/p>\r\n<ul>\r\n<li>3 Kubernetes worker nodes, set up with BGP configuration on the edge router<\/li>\r\n<li>MetalLB<\/li>\r\n<li>NGINX Ingress<\/li>\r\n<li>ExternalDNS<\/li>\r\n<\/ul>\r\n<p>\r\n\r\n<\/p>\r\n<h5 class=\"wp-block-heading\">Workload:<\/h5>\r\n<p>\r\n\r\n<\/p>\r\n<p>Two web applications were deployed: one stateful, with a database; the other stateless, roughly simulating our workloads. Their traffic was exposed through the NGINX ingress, and the NGINX ingress service was set to type LoadBalancer with a MetalLB IP attached. 
I used <a href=\"https:\/\/locust.io\/\" target=\"_blank\" rel=\"noreferrer noopener\">locust.io<\/a> to simulate traffic to the web applications.<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<p>The goal was to see if taking a node down would cause downtime or network instability.<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<h5 class=\"wp-block-heading\">Test procedure followed:<\/h5>\r\n<p>\r\n\r\n<\/p>\r\n<div class=\"wp-block-image\">\r\n<figure class=\"alignright\"><img loading=\"lazy\" decoding=\"async\" class=\"\" src=\"https:\/\/uploads-ssl.webflow.com\/5ea88781be15eaddffecc8d7\/5ee9ec2b18b46500f0aff078_Jirle8n2GRHAdzpohvchobbgX7ywIAHM8g5nRLr-sAOvdE5eEV31_MZdPaF58hEzp-wJni2jVhw2TrvjQvQ19CHCdhN_48iwIhJ12Nm63P5S-hFuzDYAf00eL5HFIxqpQuy9DJmC.jpeg\" alt=\"\" width=\"606\" height=\"377\" \/><\/figure>\r\n<figure class=\"alignright\">The traffic simulated 10,000 users in parallel against a pool of <strong>3 nodes<\/strong>. Nodes were taken down one by one as per the testing procedure. We observed that the traffic was largely unaffected, except for a few spikes in latency as the <strong>database<\/strong> was rescheduled. Then we created artificial latency with <a href=\"https:\/\/man7.org\/linux\/man-pages\/man8\/tc-netem.8.html\" target=\"_blank\" rel=\"noreferrer noopener\">NetEm<\/a> on the nodes and had an interesting finding: MetalLB essentially monitors the Ready status of each node, and when a node&#8217;s health status fails, MetalLB takes it out of the pool. When a heavily loaded node is falling in and out of Ready status, which is quite common in our cluster, MetalLB will not help a great deal. 
But it does resolve the main issue of site instability.<\/figure>\r\n<\/div>\r\n<p>\r\n\r\n<\/p>\r\n<h2 class=\"wp-block-heading\">Porter:<\/h2>\r\n<p>\r\n\r\n<\/p>\r\n<div class=\"wp-block-coblocks-gallery-stacked alignfull\">\r\n<ul class=\"coblocks-gallery has-fullwidth-images\">\r\n<li class=\"coblocks-gallery--item\">\r\n<figure class=\"coblocks-gallery--figure\"><img decoding=\"async\" class=\"has-shadow-none\" src=\"https:\/\/github.com\/kubesphere\/porter\/raw\/master\/doc\/img\/porter-logo.png\" alt=\"logo\" data-id=\"\" data-imglink=\"\" \/><\/figure>\r\n<\/li>\r\n<\/ul>\r\n<\/div>\r\n<p>\r\n\r\n<\/p>\r\n<h5 class=\"wp-block-heading\">Core Features<\/h5>\r\n<p>\r\n\r\n<\/p>\r\n<ul>\r\n<li>ECMP routing load balancing<\/li>\r\n<li>BGP dynamic routing configuration<\/li>\r\n<li>VIP management<\/li>\r\n<li>LoadBalancerIP assignment in Kubernetes services<\/li>\r\n<li>Installation with Helm Chart<\/li>\r\n<li>Dynamic BGP server configuration through CRD<\/li>\r\n<li>Dynamic BGP peer configuration through CRD<\/li>\r\n<\/ul>\r\n<p>\r\n\r\n<\/p>\r\n<h5 class=\"wp-block-heading\">Deployment Architecture<\/h5>\r\n<p>\r\n\r\n<\/p>\r\n<figure class=\"wp-block-image is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/github.com\/kubesphere\/porter\/raw\/master\/doc\/img\/porter-deployment.png\" alt=\"porter deployment\" width=\"574\" height=\"371\" \/><\/figure>\r\n<p>\r\n\r\n<\/p>\r\n<p>Read more: <a href=\"https:\/\/github.com\/kubesphere\/porter\" target=\"_blank\" rel=\"noopener\">https:\/\/github.com\/kubesphere\/porter<\/a><\/p>\r\n<p>\r\n\r\n<\/p>\r\n<h3 class=\"wp-block-heading\">Similarities and differences between the two:<\/h3>\r\n<p>\r\n\r\n<\/p>\r\n<p>On the face of it, Porter and MetalLB are similar: both act as service load balancers, and both support <strong>baremetal<\/strong> Kubernetes clusters.<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<h2 class=\"wp-block-heading\">Summary:<\/h2>\r\n<p>\r\n\r\n<\/p>\r\n<p>I have personally tested MetalLB in 
the production environment in various datacenters, one of them being <strong>Alibaba (UAE region)<\/strong>, and even on a public cloud like <strong>AWS<\/strong>. It&#8217;s amazing. Mostly, in my environment, I have an ingress that routes all the external traffic within my cluster, with an ExternalIP attached to it.<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<p>For more interesting Kubernetes updates and problem statements, follow me on:<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container\">\r\n<div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\r\n<p><a href=\"https:\/\/www.linkedin.com\/in\/vishal-raj-01baa2197\/\" target=\"_blank\" rel=\"noreferrer noopener\">LinkedIn<\/a><\/p>\r\n<\/div>\r\n<\/div><\/div>\r\n<p>\r\n\r\n<\/p>\r\n<p>Thank you all..!<\/p>\r\n<p>Opstree is an end-to-end DevOps solution provider.<\/p>\r\n<p><a class=\"wp-block-button__link\" title=\"https:\/\/www.opstree.com\/contact-us\" href=\"https:\/\/www.opstree.com\/contact-us\" target=\"_blank\" rel=\"noopener\">CONTACT US<\/a><\/p>\r\n<p><\/p>\r\n<p>&nbsp;<\/p>","protected":false},"excerpt":{"rendered":"<p>Problem Statement..? Most of us who have used Kubernetes on a public cloud have also created a cloud load balancer. Have you ever wondered how this can be achieved in a private data center? The easiest way would be to use the concept of NodePort and expose our services with it. 
In this blog, however, &hellip; <a href=\"https:\/\/opstree.com\/blog\/2020\/10\/13\/kubernetes-diary-software-loadbalancer\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Kubernetes Diary &#8211; Software LoadBalancer&#8221;<\/span><\/a><\/p>\n","protected":false},"author":181194697,"featured_media":29900,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_coblocks_attr":"","_coblocks_dimensions":"","_coblocks_responsive_height":"","_coblocks_accordion_ie_support":"","jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","enabled":false},"version":2}},"categories":[28070474],"tags":[768739309,96903315,219611],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/opstree.com\/blog\/wp-content\/uploads\/2025\/11\/DevSecOps-1.jpg","jetpack_likes_enabled":true,"jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/pfDBOm-18l","jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/posts\/4361"}],"collection":[{"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/users\/181194697"}],"replies":[{"embeddable":true,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/comments?post=4361"}],"version-history":[{"count":28,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/posts\/4361\/revisions"}],"predecessor-version":[{"id":30834,"href":"https:
\/\/opstree.com\/blog\/wp-json\/wp\/v2\/posts\/4361\/revisions\/30834"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/media\/29900"}],"wp:attachment":[{"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/media?parent=4361"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/categories?post=4361"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/tags?post=4361"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}