Introduction
Today’s world is almost completely internet-driven. Whether it is shopping, banking, or entertainment, nearly everything is available with a single click.
From a DevOps perspective, modern e-commerce and enterprise applications are usually built using a microservices architecture. Instead of running one large monolithic application, the system is divided into smaller, independent services. This approach improves scalability, manageability, and operational efficiency.
However, managing a distributed system also increases complexity. One of the most critical requirements for maintaining microservices is effective monitoring and log management.
A commonly used monitoring and logging stack is the EFK stack, which includes Elasticsearch, Fluentd, and Kibana. In many production environments, Kafka is also introduced into this stack to handle log ingestion more reliably.
Kafka is an open-source event streaming platform and is widely used across organizations for handling high-throughput data streams.
This naturally raises an important question.
Why should Kafka be used along with the EFK stack?
In this blog, we will explore why Kafka is introduced, what benefits it brings, and how it integrates with the EFK stack.
Let’s get started.
Why Kafka Is Needed in the EFK Stack
While traveling, we often see crossroads controlled by traffic lights or traffic police. At a junction where traffic flows from multiple directions, these controls ensure smooth movement by allowing traffic from one direction while holding others temporarily.
In technical terms, traffic is regulated by buffering and controlled flow.
Kafka plays a very similar role in log management.
Imagine hundreds of applications sending logs directly to Elasticsearch. During peak traffic, Elasticsearch may become overwhelmed. Scaling Elasticsearch during heavy ingestion is not always a good solution because frequent scaling and re-sharding can cause instability.
Kafka solves this problem by acting as a buffer layer. Instead of pushing logs directly to Elasticsearch, logs are first sent to Kafka. Kafka then delivers them in controlled, manageable batches to Elasticsearch.
High-Level Architecture Overview
The complete flow consists of the following blocks.
- Application containers or instances
- Kafka
- Fluentd forwarder
- Elasticsearch
- Kibana
Each block is explained below with configurations.
Block 1: Application Logs and td-agent Configuration
This block represents application containers or EC2 instances where logs are generated. The td-agent service runs alongside the application to collect logs and forward them to Kafka.
td-agent is a stable, packaged distribution of Fluentd maintained by Treasure Data; Fluentd itself is a Cloud Native Computing Foundation project. It is a data collection daemon that gathers logs from various sources and forwards them to destinations such as Kafka or Elasticsearch.
td-agent Configuration
Use the following configuration inside the td-agent configuration file (typically /etc/td-agent/td-agent.conf).
Source Configuration
The source block defines how logs are collected. The key parameters are listed below, followed by a sample configuration.
- path specifies the log file location
- tag is a user-defined identifier attached to the collected logs
- format defines how each line is parsed, for example json for JSON logs or none for plain text
- keep_time_key preserves the original timestamp field in the record
- time_format defines the timestamp pattern
- pos_file tracks how far td-agent has read in the log file
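A minimal sketch of such a source block, assuming an application that writes JSON logs to /var/log/myapp/app.log (the path, tag, and time format here are illustrative, not part of the original setup):

```
<source>
  @type tail                                  # tail the application log file
  path /var/log/myapp/app.log                 # log file location (example path)
  pos_file /var/log/td-agent/myapp.log.pos    # remembers how far the file has been read
  tag myapp.access                            # user-defined identifier used for routing
  format json                                 # parse each line as JSON
  keep_time_key true                          # keep the original timestamp field in the record
  time_format %Y-%m-%dT%H:%M:%S%z             # pattern of the timestamp in the log
</source>
```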
Match Configuration to Kafka
The match block defines where the collected logs are sent; a sample configuration follows the list.
- kafka_buffered is the buffered Kafka output plugin, which buffers records locally and retries failed sends for reliable delivery
- brokers defines the Kafka host and port
- default_topic is the Kafka topic that receives the logs
- buffer settings control local buffering and backpressure
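A matching sketch using the kafka_buffered output from fluent-plugin-kafka; the broker address, topic name, and buffer path are placeholders:

```
<match myapp.**>
  @type kafka_buffered                        # buffered Kafka output plugin
  brokers kafka-broker-1:9092                 # Kafka host:port (comma-separated for a cluster)
  default_topic app-logs                      # topic that receives the log records
  output_data_type json                       # serialize records as JSON

  # local buffering and backpressure
  buffer_type file                            # persist buffered chunks on disk
  buffer_path /var/log/td-agent/buffer/kafka  # where buffered chunks are written
  flush_interval 10s                          # how often chunks are flushed to Kafka
  buffer_chunk_limit 8m                       # maximum size of a single chunk
  buffer_queue_limit 64                       # maximum number of chunks queued locally
</match>
```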
Block 2: Kafka Setup
Kafka acts as the central buffering and streaming layer.
Kafka uses Zookeeper for broker coordination, controller election, and cluster metadata. In production setups, Zookeeper is usually deployed as a separate ensemble rather than on the Kafka brokers themselves.
Download Kafka
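For example, fetching a release from the Apache archive (the version below is only an example; check the Kafka downloads page for a current release):

```bash
wget https://archive.apache.org/dist/kafka/3.6.1/kafka_2.13-3.6.1.tgz
```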
Extract the Package
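Assuming the archive downloaded above:

```bash
tar -xzf kafka_2.13-3.6.1.tgz
cd kafka_2.13-3.6.1
```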
Starting Zookeeper
Zookeeper must be started before Kafka.
Update JVM heap size in the shell profile.
The heap size should be approximately 50 percent of the available system memory.
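For example, in ~/.bashrc on a host with roughly 8 GB of memory (the value is an assumption; size it for your own machine):

```bash
# heap used by the Kafka and Zookeeper startup scripts
export KAFKA_HEAP_OPTS="-Xmx4g -Xms4g"
```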
Reload the configuration.
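Assuming the variable was added to ~/.bashrc:

```bash
source ~/.bashrc
```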
Start Zookeeper in the background.
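Using the scripts and default properties file bundled with the extracted Kafka package:

```bash
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
```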
Starting Kafka
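Once Zookeeper is up, Kafka can be started the same way. Pre-creating the log topic (the name matches the earlier placeholder) is optional but convenient:

```bash
bin/kafka-server-start.sh -daemon config/server.properties

# optional: create the topic that td-agent will write to (example name)
bin/kafka-topics.sh --create --topic app-logs \
  --bootstrap-server localhost:9092 --partitions 3 --replication-factor 1
```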
Stopping Services
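When shutting the stack down, stop Kafka before Zookeeper:

```bash
bin/kafka-server-stop.sh
bin/zookeeper-server-stop.sh
```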
For advanced configurations, always refer to the official Kafka documentation.
Block 3: td-agent as Kafka Consumer and Elasticsearch Forwarder
At this stage, logs are available in Kafka topics. The next step is to pull logs from Kafka and send them to Elasticsearch.
Here, td-agent is configured as a Kafka consumer and forwarder.
Kafka Source Configuration
- consumer_group lets multiple td-agent instances consume the topic as a single group (shown in the sketch below)
- each log record is consumed by only one consumer within the group
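A sketch of the consumer side using the kafka_group input from fluent-plugin-kafka; the broker, topic, and group names are placeholders matching the earlier examples:

```
<source>
  @type kafka_group                           # Kafka consumer input plugin
  brokers kafka-broker-1:9092                 # same brokers the producers write to
  consumer_group efk-td-agent                 # consumers in this group share the partitions
  topics app-logs                             # topic(s) to consume, comma-separated
  format json                                 # records were produced as JSON
</source>
```

Records consumed this way are tagged with the topic name, which is what the match block below routes on.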
Match Configuration to Elasticsearch
Key concepts used here (a sample match block follows the list):

- forest dynamically creates one output instance per tag
- logstash_prefix defines the index naming pattern in Elasticsearch
- logs become searchable in Kibana through the index built from this prefix
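A sketch of that match block, combining fluent-plugin-forest with the elasticsearch output; the host and index prefix are assumptions:

```
<match **>
  @type forest                                # create one output instance per tag
  subtype elasticsearch                       # each instance is an elasticsearch output
  <template>
    host elasticsearch.internal               # Elasticsearch host (placeholder)
    port 9200
    logstash_format true                      # write time-based, logstash-style indices
    logstash_prefix ${tag}                    # index prefix derived from the tag
    flush_interval 15s                        # how often records are flushed to Elasticsearch
  </template>
</match>
```

With logstash_format enabled, the resulting indices look like app-logs-YYYY.MM.DD, and that index pattern is what you later point Kibana at.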
Block 4: Elasticsearch Setup
Elasticsearch acts as the storage and indexing layer.
Follow the official Elasticsearch documentation to install and configure Elasticsearch on Ubuntu or your preferred operating system.
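As a quick illustration on Ubuntu using the Elastic APT repository (the 7.x repository path is only an example; use the repository and steps matching the version in the official guide):

```bash
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list
sudo apt-get update && sudo apt-get install -y elasticsearch
sudo systemctl enable --now elasticsearch
```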
Block 5: Kibana Setup
Kibana provides visualization and search capabilities on top of Elasticsearch.
Install Kibana using the official documentation.
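If the Elastic APT repository from the previous block is already configured, Kibana can be installed from the same source:

```bash
sudo apt-get install -y kibana
sudo systemctl enable --now kibana
```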
You can configure Nginx to expose Kibana on port 80 or 443 for easier access.
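A minimal Nginx server block for that, assuming Kibana is listening on its default port 5601 on the same host (the server_name is a placeholder):

```nginx
server {
    listen 80;
    server_name kibana.example.com;            # placeholder domain

    location / {
        proxy_pass http://127.0.0.1:5601;      # Kibana's default port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```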
Final Architecture Summary
With this setup, the complete EFK stack is integrated with Kafka.
- Applications send logs to td-agent
- td-agent pushes logs to Kafka
- Kafka buffers and streams logs
- td-agent forwarder consumes logs
- Elasticsearch stores logs
- Kibana visualizes logs
The same architecture can be used in standalone environments for learning or across multiple servers in production.