{"id":12234,"date":"2022-11-01T12:51:32","date_gmt":"2022-11-01T07:21:32","guid":{"rendered":"https:\/\/opstree.com\/blog\/\/?p=12234"},"modified":"2026-01-08T14:47:31","modified_gmt":"2026-01-08T09:17:31","slug":"kafka-within-efk-monitoring","status":"publish","type":"post","link":"https:\/\/opstree.com\/blog\/2022\/11\/01\/kafka-within-efk-monitoring\/","title":{"rendered":"Kafka within EFK Monitoring"},"content":{"rendered":"<h2 data-start=\"541\" data-end=\"556\">Introduction<\/h2>\n<p data-start=\"558\" data-end=\"705\">Today\u2019s world is completely internet driven. Whether it is shopping, banking, or entertainment, almost everything is available with a single click.<\/p>\n<p data-start=\"707\" data-end=\"1023\">From a DevOps perspective, modern e-commerce and enterprise applications are usually built using a microservices architecture. Instead of running one large monolithic application, the system is divided into smaller, independent services. This approach improves scalability, manageability, and operational efficiency.<\/p>\n<p data-start=\"1025\" data-end=\"1206\">However, managing a distributed system also increases complexity. One of the most critical requirements for maintaining microservices is <strong data-start=\"1162\" data-end=\"1205\">effective monitoring and log management<\/strong>.<\/p>\n<p data-start=\"1208\" data-end=\"1438\">A commonly used monitoring and logging stack is the <strong data-start=\"1260\" data-end=\"1273\">EFK stack<\/strong>, which includes Elasticsearch, Fluentd, and Kibana. 
In many production environments, Kafka is also introduced into this stack to handle log ingestion more reliably.<\/p>\n<p data-start=\"1440\" data-end=\"1571\">Kafka is an open-source event streaming platform and is widely used across organizations for handling high-throughput data streams.<\/p>\n<p data-start=\"1573\" data-end=\"1617\">This naturally raises an important question.<\/p>\n<p data-start=\"1619\" data-end=\"1669\">Why should Kafka be used along with the EFK stack?<\/p>\n<p data-start=\"1671\" data-end=\"1792\">In this blog, we will explore why Kafka is introduced, what benefits it brings, and how it integrates with the EFK stack.<\/p>\n<p data-start=\"1794\" data-end=\"1812\">Let\u2019s get started.<\/p>\n<h2 data-start=\"1819\" data-end=\"1858\">Why Kafka Is Needed in the EFK Stack<\/h2>\n<p data-start=\"1860\" data-end=\"2119\">While traveling, we often see crossroads controlled by traffic lights or traffic police. At a junction where traffic flows from multiple directions, these controls ensure smooth movement by allowing traffic from one direction while holding others temporarily.<\/p>\n<p data-start=\"2121\" data-end=\"2199\">In technical terms, traffic is regulated by <strong data-start=\"2165\" data-end=\"2198\">buffering and controlled flow<\/strong>.<\/p>\n<p data-start=\"2201\" data-end=\"2251\">Kafka plays a very similar role in log management.<\/p>\n<p data-start=\"2253\" data-end=\"2523\">Imagine hundreds of applications sending logs directly to Elasticsearch. During peak traffic, Elasticsearch may become overwhelmed. Scaling Elasticsearch during heavy ingestion is not always a good solution because frequent scaling and re-sharding can cause instability.<\/p>\n<p data-start=\"2525\" data-end=\"2741\">Kafka solves this problem by acting as a <strong data-start=\"2566\" data-end=\"2582\">buffer layer<\/strong>. Instead of pushing logs directly to Elasticsearch, logs are first sent to Kafka. 
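<\/p>
<p>The buffer-layer idea can be sketched in a few lines of Python. This is only a toy illustration of batched draining, not Kafka itself; Kafka adds durability, partitioning, and consumer groups on top of the same principle.<\/p>

```python
from collections import deque

# Toy model of the buffer layer: producers burst log lines into a
# queue, while the downstream store drains them in bounded batches.
log_buffer = deque()

def produce(lines):
    # Applications push logs as fast as they are generated.
    log_buffer.extend(lines)

def consume_batch(batch_size=3):
    # The store-side consumer takes at most batch_size records at a time.
    batch = []
    while log_buffer and len(batch) < batch_size:
        batch.append(log_buffer.popleft())
    return batch

produce(['log-%d' % i for i in range(7)])  # a burst of 7 log lines
batch_sizes = []
while log_buffer:
    batch_sizes.append(len(consume_batch()))
print(batch_sizes)  # the burst reaches the store as bounded batches: [3, 3, 1]
```

<p>In the real stack, td-agent plays the producer role and a second td-agent instance plays the consumer role, so Elasticsearch never sees the raw burst.<\/p>
<p>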
Kafka then delivers them in controlled, manageable batches to Elasticsearch.<img decoding=\"async\" class=\"wp-image-12248\" src=\"https:\/\/opstree.com\/blog\/\/wp-content\/uploads\/2022\/10\/screenshot-2022-10-14-at-5.58.12-pm.png?w=1024\" alt=\"\" width=\"800\" \/><\/p>\n<h2>High-Level Architecture Overview<\/h2>\n<p>The complete flow consists of the following blocks:<\/p>\n<ul>\n<li>Application containers or instances<\/li>\n<li>Kafka<\/li>\n<li>Fluentd forwarder<\/li>\n<li>Elasticsearch<\/li>\n<li>Kibana<\/li>\n<\/ul>\n<p>Each block is explained below with its configuration.<\/p>\n<h2>Block 1 Application Logs and td-agent Configuration<\/h2>\n<p>This block represents the application containers or EC2 instances where logs are generated. The <strong>td-agent<\/strong> service runs alongside the application to collect logs and forward them to Kafka.<\/p>\n<p>td-agent is the stable distribution of Fluentd packaged by Treasure Data; Fluentd itself is a Cloud Native Computing Foundation project. 
It is a data collection daemon that gathers logs from various sources and forwards them to destinations such as Kafka or Elasticsearch.<\/p>\n<h3>td-agent Configuration<\/h3>\n<p>Use the following configuration inside the td-agent configuration file (typically \/etc\/td-agent\/td-agent.conf).<\/p>\n<h3>Source Configuration<\/h3>\n<pre><code>&lt;source&gt;\n  @type tail\n  read_from_head true\n  path &lt;path_of_log_file&gt;\n  tag &lt;tag_name&gt;\n  format json\n  keep_time_key true\n  time_format &lt;time_format_of_logs&gt;\n  pos_file &lt;pos_file_location&gt;\n&lt;\/source&gt;\n<\/code><\/pre>\n<p>The source block defines how logs are collected:<\/p>\n<ul>\n<li>path specifies the log file location<\/li>\n<li>tag is a user-defined identifier attached to each log event<\/li>\n<li>format defines the log format, such as json or text<\/li>\n<li>keep_time_key preserves the original timestamp field<\/li>\n<li>time_format defines the timestamp pattern<\/li>\n<li>pos_file tracks the read position in the log file across restarts<\/li>\n<\/ul>\n<h3>Match Configuration to Kafka<\/h3>\n<pre><code>&lt;match &lt;tag_name&gt;&gt;\n  @type kafka_buffered\n  output_include_tag true\n  brokers &lt;kafka_hostname:port&gt;\n  default_topic &lt;kafka_topic_name&gt;\n  output_data_type json\n  buffer_type file\n  buffer_path &lt;buffer_path_location&gt;\n  buffer_chunk_limit 10m\n  buffer_queue_limit 256\n  buffer_queue_full_action drop_oldest_chunk\n&lt;\/match&gt;\n<\/code><\/pre>\n<p>The match block defines where logs are sent:<\/p>\n<ul>\n<li>kafka_buffered buffers events locally and delivers them to Kafka reliably (provided by the fluent-plugin-kafka plugin, which ships with recent td-agent releases)<\/li>\n<li>brokers defines the Kafka host and port<\/li>\n<li>default_topic is the Kafka topic the logs are published to<\/li>\n<li>the buffer settings control local buffering and backpressure; drop_oldest_chunk discards the oldest buffered chunk when the queue is full<\/li>\n<\/ul>\n<h2>Block 2 Kafka Setup<\/h2>\n<p>Kafka acts as the central buffering and streaming layer.<\/p>\n<p>The Kafka version used here relies on ZooKeeper for broker coordination and cluster metadata (newer Kafka releases can also run without ZooKeeper in KRaft mode). In production setups, ZooKeeper is usually deployed as a separate ensemble.<\/p>\n<h3>Download Kafka<\/h3>\n<pre><code>wget http:\/\/mirror.fibergrid.in\/apache\/kafka\/0.10.2.0\/kafka_2.12-0.10.2.0.tgz\n<\/code><\/pre>\n<p>If this mirror is unavailable, the same release should also be downloadable from the Apache archives at archive.apache.org\/dist\/kafka\/.<\/p>\n<h3>Extract the Package<\/h3>\n<div class=\"contain-inline-size rounded-2xl corner-superellipse\/1.1 relative bg-token-sidebar-surface-primary\">\n<div class=\"sticky top-[calc(--spacing(9)+var(--header-height))] @w-xl\/main:top-9\">\n<div class=\"absolute end-0 
bottom-0 flex h-9 items-center pe-2\"><\/div>\n<\/div>\n<div class=\"overflow-y-auto p-4\" dir=\"ltr\"><pre><code>tar -xzf kafka_2.12-0.10.2.0.tgz\n<\/code><\/pre><\/div>\n<\/div>\n<h3>Starting Zookeeper<\/h3>\n<p>Zookeeper must be started before Kafka.<\/p>\n<p>Update the JVM heap size in your shell profile:<\/p>\n<pre><code>vi ~\/.bashrc\n<\/code><\/pre>\n<pre><code>export KAFKA_HEAP_OPTS=\"-Xmx500M -Xms500M\"\n<\/code><\/pre>\n<p>The heap size should be approximately 50 percent of the available system memory.<\/p>\n<p>Reload the configuration:<\/p>\n<pre><code>source ~\/.bashrc\n<\/code><\/pre>\n<p>Start Zookeeper in the background:<\/p>\n<pre><code>cd kafka_2.12-0.10.2.0\nnohup bin\/zookeeper-server-start.sh config\/zookeeper.properties &gt; ~\/zookeeper-logs &amp;\n<\/code><\/pre>\n<h3>Starting Kafka<\/h3>\n<pre><code>cd kafka_2.12-0.10.2.0\nnohup bin\/kafka-server-start.sh config\/server.properties &gt; ~\/kafka-logs &amp;\n<\/code><\/pre>\n<h3>Stopping Services<\/h3>\n<pre><code>bin\/kafka-server-stop.sh\n<\/code><\/pre>\n<pre><code>bin\/zookeeper-server-stop.sh\n<\/code><\/pre>\n<p>For advanced configurations, always refer to the official Kafka documentation.<\/p>\n<h2 
data-end=\"5988\">Block 3 td-agent as Kafka Consumer and Elasticsearch Forwarder<\/h2>\n<p>At this stage, logs are available in Kafka topics. The next step is to pull logs from Kafka and send them to Elasticsearch.<\/p>\n<p>Here, td-agent is configured as a <strong>Kafka consumer and forwarder<\/strong>.<\/p>\n<h3>Kafka Source Configuration<\/h3>\n<pre><code>&lt;source&gt;\n  @type kafka_group\n  brokers &lt;kafka_dns:port&gt;\n  consumer_group &lt;consumer_group_kafka&gt;\n  topics &lt;kafka_topic_name&gt;\n&lt;\/source&gt;\n<\/code><\/pre>\n<ul>\n<li>consumer_group enables distributed consumption across multiple forwarder instances<\/li>\n<li>within a consumer group, each log record is consumed by exactly one consumer<\/li>\n<\/ul>\n<h3>Match Configuration to Elasticsearch<\/h3>\n<div class=\"contain-inline-size rounded-2xl corner-superellipse\/1.1 relative 
bg-token-sidebar-surface-primary\">\n<div class=\"overflow-y-auto p-4\" dir=\"ltr\"><pre><code>&lt;match &lt;kafka_topic_name&gt;&gt;\n  @type forest\n  subtype elasticsearch\n  &lt;template&gt;\n    host &lt;elasticsearch_ip&gt;\n    port &lt;elasticsearch_port&gt;\n    user &lt;es_username&gt;\n    password &lt;es_password&gt;\n    logstash_prefix &lt;index_prefix&gt;\n    logstash_format true\n    include_tag_key true\n    tag_key tag_name\n  &lt;\/template&gt;\n&lt;\/match&gt;\n<\/code><\/pre><\/div>\n<\/div>\n<p>Key concepts used here:<\/p>\n<ul>\n<li>forest dynamically creates an output instance per tag<\/li>\n<li>logstash_prefix defines the index naming in Elasticsearch<\/li>\n<li>logs become visible in Kibana through this index<\/li>\n<\/ul>\n<h2>Block 4 Elasticsearch Setup<\/h2>\n<p
data-end=\"7113\">Elasticsearch acts as the storage and indexing layer.<\/p>\n<p data-start=\"7115\" data-end=\"7247\">Follow the official Elasticsearch documentation to install and configure Elasticsearch on Ubuntu or your preferred operating system.<\/p>\n<h2 data-start=\"7254\" data-end=\"7277\">Block 5 Kibana Setup<\/h2>\n<p data-start=\"7279\" data-end=\"7357\">Kibana provides visualization and search capabilities on top of Elasticsearch.<\/p>\n<p data-start=\"7359\" data-end=\"7407\">Install Kibana using the official documentation.<\/p>\n<p data-start=\"7409\" data-end=\"7486\">You can configure Nginx to expose Kibana on port 80 or 443 for easier access.<\/p>\n<h2 data-start=\"7493\" data-end=\"7522\">Final Architecture Summary<\/h2>\n<p data-start=\"7524\" data-end=\"7589\">With this setup, the complete EFK stack is integrated with Kafka.<\/p>\n<ul data-start=\"7591\" data-end=\"7792\">\n<li data-start=\"7591\" data-end=\"7629\">\n<p data-start=\"7593\" data-end=\"7629\">Applications send logs to td-agent<\/p>\n<\/li>\n<li data-start=\"7630\" data-end=\"7663\">\n<p data-start=\"7632\" data-end=\"7663\">td-agent pushes logs to Kafka<\/p>\n<\/li>\n<li data-start=\"7664\" data-end=\"7698\">\n<p data-start=\"7666\" data-end=\"7698\">Kafka buffers and streams logs<\/p>\n<\/li>\n<li data-start=\"7699\" data-end=\"7735\">\n<p data-start=\"7701\" data-end=\"7735\">td-agent forwarder consumes logs<\/p>\n<\/li>\n<li data-start=\"7736\" data-end=\"7765\">\n<p data-start=\"7738\" data-end=\"7765\">Elasticsearch stores logs<\/p>\n<\/li>\n<li data-start=\"7766\" data-end=\"7792\">\n<p data-start=\"7768\" data-end=\"7792\">Kibana visualizes logs<\/p>\n<\/li>\n<\/ul>\n<p data-start=\"7794\" data-end=\"7909\">The same architecture can be used in standalone environments for learning or across multiple servers in production.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introduction Today\u2019s world is completely internet driven. 
Whether it is shopping, banking, or entertainment, almost everything is available with a single click. From a DevOps perspective, modern e-commerce and enterprise applications are usually built using a microservices architecture. Instead of running one large monolithic application, the system is divided into smaller, independent services. This approach &hellip; <a href=\"https:\/\/opstree.com\/blog\/2022\/11\/01\/kafka-within-efk-monitoring\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Kafka within EFK Monitoring&#8221;<\/span><\/a><\/p>\n","protected":false},"author":227078591,"featured_media":29900,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_coblocks_attr":"","_coblocks_dimensions":"","_coblocks_responsive_height":"","_coblocks_accordion_ie_support":"","jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","enabled":false},"version":2}},"categories":[28070474],"tags":[44070,768739308,676319247,5265318,207392,4996032],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/opstree.com\/blog\/wp-content\/uploads\/2025\/11\/DevSecOps-1.jpg","jetpack_likes_enabled":true,"jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/pfDBOm-3bk","jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/posts\/12234"}],"collection":[{"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:
\/\/opstree.com\/blog\/wp-json\/wp\/v2\/users\/227078591"}],"replies":[{"embeddable":true,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/comments?post=12234"}],"version-history":[{"count":26,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/posts\/12234\/revisions"}],"predecessor-version":[{"id":30324,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/posts\/12234\/revisions\/30324"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/media\/29900"}],"wp:attachment":[{"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/media?parent=12234"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/categories?post=12234"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/tags?post=12234"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}