Active-Active Infrastructure using Terraform and Jenkins on Microsoft Azure

In this blog, we will create an active-active infrastructure on Microsoft Azure using Terraform and Jenkins.

Prime Reasons to have an active-active set-up of your infrastructure

Disaster Recovery:

Disaster recovery (DR) is an organization’s method of regaining access to and functionality of its IT infrastructure after events such as a natural disaster, a cyberattack, or business disruptions like those seen during the COVID-19 pandemic.

  • Ensure business resilience
    No matter what happens, a good DR plan can ensure that the business can return to full operations rapidly, without losing data or transactions.
  • Maintain competitiveness
    Customer loyalty is rare; when a business goes offline, customers turn to competitors for the goods or services they need. A DR plan prevents this.
  • Avoid data loss
    The longer a business’s systems are down, the greater the risk that data will be lost. A robust DR plan minimizes this risk.
  • Maintain reputation
    A business that has trouble resuming operations after an outage can suffer brand damage. For that reason, a solid DR plan is critical.

Continue reading “Active-Active Infrastructure using Terraform and Jenkins on Microsoft Azure”

What is SRE (Site Reliability Engineer)

Before we dive deep into the SRE world, let’s talk about where SRE comes from. The concept of SRE originated in 2003 with Ben Treynor Sloss. In 2003, when the cloud wasn’t yet a thing, Google was one of the most prominent web companies, with a massive, distributed infrastructure. It faced several challenges simultaneously: keeping the trust and reputation of its services, providing a smooth user experience with minimal downtime and latency, managing dozens of sprawling data centers, and so on. Google needed to rely heavily on automation and formulated strategies that led it to implement automation at scale. Small companies at that time could bear the loss of a few hours of downtime, but a giant like Google could not, as it was at the frontier of user experience. Come to think of it, building a team that could help ensure the application’s availability and reliability was an obvious outcome.

Continue reading “What is SRE (Site Reliability Engineer)”

Kafka within EFK Monitoring

Introduction

Today’s world is completely internet driven. Whether it is shopping, banking, or entertainment, almost everything is available with a single click.

From a DevOps perspective, modern e-commerce and enterprise applications are usually built using a microservices architecture. Instead of running one large monolithic application, the system is divided into smaller, independent services. This approach improves scalability, manageability, and operational efficiency.

However, managing a distributed system also increases complexity. One of the most critical requirements for maintaining microservices is effective monitoring and log management.

A commonly used monitoring and logging stack is the EFK stack, which includes Elasticsearch, Fluentd, and Kibana. In many production environments, Kafka is also introduced into this stack to handle log ingestion more reliably.

Kafka is an open-source event streaming platform and is widely used across organizations for handling high-throughput data streams.

This naturally raises an important question.

Why should Kafka be used along with the EFK stack?

In this blog, we will explore why Kafka is introduced, what benefits it brings, and how it integrates with the EFK stack.

Let’s get started.

Why Kafka Is Needed in the EFK Stack

While traveling, we often see crossroads controlled by traffic lights or traffic police. At a junction where traffic flows from multiple directions, these controls ensure smooth movement by allowing traffic from one direction while holding others temporarily.

In technical terms, traffic is regulated by buffering and controlled flow.

Kafka plays a very similar role in log management.

Imagine hundreds of applications sending logs directly to Elasticsearch. During peak traffic, Elasticsearch may become overwhelmed. Scaling Elasticsearch during heavy ingestion is not always a good solution because frequent scaling and re-sharding can cause instability.

Kafka solves this problem by acting as a buffer layer. Instead of pushing logs directly to Elasticsearch, logs are first sent to Kafka. Kafka then delivers them in controlled, manageable batches to Elasticsearch.

High-Level Architecture Overview

The complete flow consists of the following blocks.

  • Application containers or instances

  • Kafka

  • Fluentd forwarder

  • Elasticsearch

  • Kibana

Each block is explained below with configurations.

Block 1 Application Logs and td-agent Configuration

This block represents application containers or EC2 instances where logs are generated. The td-agent service runs alongside the application to collect logs and forward them to Kafka.

td-agent is a stable distribution of Fluentd, packaged and maintained by Treasure Data; Fluentd itself is a CNCF (Cloud Native Computing Foundation) project. It is a data collection daemon that gathers logs from various sources and forwards them to destinations such as Kafka or Elasticsearch.

td-agent Configuration

Use the following configuration inside the td-agent configuration file.

Source Configuration

<source>
@type tail
read_from_head true
path <path_of_log_file>
tag <tag_name>
format json
keep_time_key true
time_format <time_format_of_logs>
pos_file <pos_file_location>
</source>

The source block defines how logs are collected.

  • path specifies the log file location

  • tag is a user-defined identifier for logs

  • format defines the log format such as json or text

  • keep_time_key preserves the original timestamp

  • time_format defines the timestamp pattern

  • pos_file tracks the read position of logs
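
For illustration, a filled-in source block might look like the following. The log path, tag, time format, and position file shown here are assumptions; replace them with values that match your application.

<source>
@type tail
read_from_head true
# example application log file (assumption)
path /var/log/myapp/app.log
tag myapp.access
format json
keep_time_key true
time_format %Y-%m-%dT%H:%M:%S%z
pos_file /var/log/td-agent/myapp.access.pos
</source>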

Match Configuration to Kafka

<match <tag_name>>
@type kafka_buffered
output_include_tag true
brokers <kafka_hostname:port>
default_topic <kafka_topic_name>
output_data_type json
buffer_type file
buffer_path <buffer_path_location>
buffer_chunk_limit 10m
buffer_queue_limit 256
buffer_queue_full_action drop_oldest_chunk
</match>

The match block defines where logs are sent.

  • kafka_buffered ensures reliable delivery

  • brokers defines Kafka host and port

  • default_topic is the Kafka topic for logs

  • buffer settings control local buffering and backpressure
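
As a sketch, a filled-in match block could look like this, assuming a single Kafka broker reachable at kafka01:9092 and a topic named app-logs (broker address, topic, and buffer path are all assumptions).

<match myapp.access>
@type kafka_buffered
output_include_tag true
# broker address and topic name are example values
brokers kafka01:9092
default_topic app-logs
output_data_type json
buffer_type file
buffer_path /var/log/td-agent/buffer/kafka
buffer_chunk_limit 10m
buffer_queue_limit 256
buffer_queue_full_action drop_oldest_chunk
</match>

With drop_oldest_chunk, td-agent discards the oldest buffered chunk instead of blocking when the local buffer fills up, which protects the application host at the cost of losing the oldest logs.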

Block 2 Kafka Setup

Kafka acts as the central buffering and streaming layer.

Kafka uses Zookeeper for cluster coordination, including broker metadata and controller election. In production setups, Zookeeper is usually deployed separately.

Download Kafka

wget http://mirror.fibergrid.in/apache/kafka/0.10.2.0/kafka_2.12-0.10.2.0.tgz

Extract the Package

tar -xzf kafka_2.12-0.10.2.0.tgz

Starting Zookeeper

Zookeeper must be started before Kafka.

Update JVM heap size in the shell profile.

vi .bashrc
export KAFKA_HEAP_OPTS="-Xmx500M -Xms500M"

The heap size should be approximately 50 percent of the available system memory.

Reload the configuration.

source .bashrc

Start Zookeeper in the background.

cd kafka_2.12-0.10.2.0
nohup bin/zookeeper-server-start.sh config/zookeeper.properties > ~/zookeeper-logs &

Starting Kafka

cd kafka_2.12-0.10.2.0
nohup bin/kafka-server-start.sh config/server.properties > ~/kafka-logs &
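
At this point you can optionally pre-create the log topic and confirm that the broker is reachable. The topic name and single-node replication settings below are assumptions; the scripts are the ones bundled with the 0.10.x download above.

# create the topic referenced in the td-agent match block (name is an assumption)
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic app-logs

# list topics and tail the topic to confirm logs are arriving
bin/kafka-topics.sh --list --zookeeper localhost:2181
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic app-logs --from-beginning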

Stopping Services

bin/kafka-server-stop.sh
bin/zookeeper-server-stop.sh

For advanced configurations, always refer to the official Kafka documentation.

Block 3 td-agent as Kafka Consumer and Elasticsearch Forwarder

At this stage, logs are available in Kafka topics. The next step is to pull logs from Kafka and send them to Elasticsearch.

Here, td-agent is configured as a Kafka consumer and forwarder.

Kafka Source Configuration

<source>
@type kafka_group
brokers <kafka_dns:port>
consumer_group <consumer_group_kafka>
topics <kafka_topic_name>
</source>

  • consumer_group ensures distributed consumption

  • each log record is consumed by only one consumer
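
Continuing the example values used earlier (broker kafka01:9092, topic app-logs, and a consumer group named efk-forwarder, all assumptions), the consumer source might look like this:

<source>
@type kafka_group
# broker, consumer group, and topic are example values
brokers kafka01:9092
consumer_group efk-forwarder
topics app-logs
</source>

Running more than one forwarder instance with the same consumer_group spreads the topic’s partitions across them, which is how the distributed consumption described above is achieved.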

Match Configuration to Elasticsearch

<match <kafka_topic_name>>
@type forest
subtype elasticsearch
<template>
host <elasticsearch_ip>
port <elasticsearch_port>
user <es_username>
password <es_password>
logstash_prefix <index_prefix>
logstash_format true
include_tag_key true
tag_key tag_name
</template>
</match>

Key concepts used here.

  • forest dynamically creates output instances per tag

  • logstash_prefix defines index naming in Elasticsearch

  • logs become visible in Kibana using this index
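
A filled-in version of this match block, using an assumed Elasticsearch endpoint, credentials, and index prefix, might look like this:

<match app-logs>
@type forest
subtype elasticsearch
<template>
# Elasticsearch address and credentials are example values
host 10.0.1.50
port 9200
user elastic
password changeme
logstash_prefix myapp
logstash_format true
include_tag_key true
tag_key tag_name
</template>
</match>

With logstash_format enabled, indices are created with a date suffix (for example myapp-2024.01.15), and these are the indices you point Kibana’s index pattern at.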

Block 4 Elasticsearch Setup

Elasticsearch acts as the storage and indexing layer.

Follow the official Elasticsearch documentation to install and configure Elasticsearch on Ubuntu or your preferred operating system.
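
Once Elasticsearch is running, a quick health check confirms that the cluster is reachable from the forwarder host. The address and credentials here are assumptions.

curl -u elastic:changeme http://10.0.1.50:9200/_cluster/health?pretty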

Block 5 Kibana Setup

Kibana provides visualization and search capabilities on top of Elasticsearch.

Install Kibana using the official documentation.

You can configure Nginx to expose Kibana on port 80 or 443 for easier access.
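
For example, a minimal Nginx server block that proxies port 80 to a local Kibana instance might look like the following. The server name and Kibana address are assumptions, and in production you would normally add TLS and authentication in front of it.

server {
    listen 80;
    server_name kibana.example.com;

    location / {
        # forward requests to Kibana's default port
        proxy_pass http://127.0.0.1:5601;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}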

Final Architecture Summary

With this setup, the complete EFK stack is integrated with Kafka.

  • Applications send logs to td-agent

  • td-agent pushes logs to Kafka

  • Kafka buffers and streams logs

  • td-agent forwarder consumes logs

  • Elasticsearch stores logs

  • Kibana visualizes logs

The same architecture can be used in standalone environments for learning or across multiple servers in production.

On-Premise Setup of Kubernetes Cluster using KubeSpray (Offline Mode) – PART 1

Today, most organizations are moving to managed services like EKS (Elastic Kubernetes Service) and AKS (Azure Kubernetes Service) for easier handling of the Kubernetes cluster. With managed Kubernetes, we do not have to take care of our master nodes; the cloud provider takes responsibility for the master and worker nodes, freeing up our time. We just need to deploy our microservices on the worker nodes. You can pay extra to achieve an uptime of 99.95%, and node repair ensures that a cluster remains healthy and reduces the chances of possible downtime. This is good in many cases, but it can become an expensive ordeal, as AKS costs $0.10 per cluster per hour. On EKS, you have to install upgrades for the VPC CNI yourself and also install the Calico CNI, and there is no IDE extension for developing EKS code. It also creates a dependency on a particular cloud provider.

To avoid depending on any particular cloud provider, we have to create a vanilla Kubernetes cluster. This means we have to take care of all the components – all the master and worker nodes of the cluster – by ourselves.

Here we had a scenario in which one of our clients required a Kubernetes cluster to be set up on on-premises servers, with no Internet connectivity. So I chose to perform the setup of the Kubernetes cluster via Kubespray.

Why Kubespray?

Kubespray is a composition of Ansible playbooks, inventory, provisioning tools, and domain knowledge for generic OS/Kubernetes cluster configuration management tasks. Kubespray provides a highly available cluster, is composable (choice of the network plugin, for instance), supports the most popular Linux distributions, and has continuous integration tests.

Continue reading “On-Premise Setup of Kubernetes Cluster using KubeSpray (Offline Mode) – PART 1”

Understanding the Ansible Helm Diff Plugin for Kubernetes Deployments

Introduction

Helm is one of the most important tools for managing Kubernetes resources. When we talk about managing Helm at scale, we need another tool through which we can drive Helm deployments. There are multiple options for managing Helm, but Ansible gives more flexibility. Beyond flexibility, Ansible offers many features and core Kubernetes modules through which we can manage Helm deployments.
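
As a quick preview of what the full post walks through, a minimal sketch of managing a Helm release from an Ansible playbook with the kubernetes.core.helm module might look like this. The release name, chart, and namespace are assumptions, and the kubernetes.core collection must be installed.

- name: Manage an nginx Helm release
  hosts: localhost
  connection: local
  tasks:
    - name: Install or upgrade the release (example values only)
      kubernetes.core.helm:
        name: my-nginx
        chart_ref: bitnami/nginx
        release_namespace: web
        create_namespace: true
        values:
          replicaCount: 2

Previewing changes before applying them, for example by running the playbook in check mode with --diff, is where the Helm diff plugin comes into the picture.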

With its wide variety of core Kubernetes modules, Ansible is useful not only for Helm deployments but also for managing Kubernetes resources directly and for running other kinds of commands.

Continue reading “Understanding the Ansible Helm Diff Plugin for Kubernetes Deployments”