{"id":1875,"date":"2019-12-10T14:08:10","date_gmt":"2019-12-10T08:38:10","guid":{"rendered":"https:\/\/opstree.com\/blog\/\/?p=1875"},"modified":"2025-11-23T12:14:05","modified_gmt":"2025-11-23T06:44:05","slug":"__trashed","status":"publish","type":"post","link":"https:\/\/opstree.com\/blog\/2019\/12\/10\/__trashed\/","title":{"rendered":"EFK 7.4.0 Stack on Kubernetes. (Part-1)"},"content":{"rendered":"\r\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1920\" height=\"1080\" class=\"wp-image-1880\" src=\"https:\/\/opstree.com\/blog\/\/wp-content\/uploads\/2019\/11\/efk2-1.png?w=1024\" alt=\"\" \/><\/figure>\r\n\r\n\r\n\r\n<p class=\"has-medium-font-size\"><strong>INTRODUCTION<\/strong><\/p>\r\n\r\n\r\n\r\n<p>The Elastic Stack is the next evolution of the EFK Stack.<\/p>\r\n\r\n\r\n\r\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1394\" height=\"730\" class=\"wp-image-1953\" src=\"https:\/\/opstree.com\/blog\/\/wp-content\/uploads\/2019\/11\/screen-shot-2019-11-25-at-4.52.49-pm.png?w=1024\" alt=\"\" \/><\/figure>\r\n\r\n\r\n\r\n<p>To achieve this, we will be using the EFK stack version 7.4.0 composed of <em><strong>Elastisearch, Fluentd, Kibana, Metricbeat, Hearbeat, APM-Server<\/strong>, and <strong>ElastAlert<\/strong><\/em> on a Kubernetes environment. 
This article series will walk through a standard Kubernetes deployment, which, in my opinion, gives a better overall understanding of each step of installation and configuration.<\/p>\r\n\r\n\r\n\r\n<div class=\"wp-block-group\">\r\n<div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\r\n<p class=\"has-medium-font-size\"><strong>PREREQUISITES<\/strong><\/p>\r\n\r\n\r\n\r\n<p>Before you begin with this guide, ensure you have the following available to you:<\/p>\r\n\r\n\r\n\r\n<ul>\r\n<li>A Kubernetes 1.10+ cluster with role-based access control (RBAC) enabled\r\n<ul>\r\n<li>Ensure your cluster has enough resources available to roll out the EFK stack, and if not, scale your cluster by adding worker nodes. We\u2019ll be deploying a 3-Pod Elasticsearch cluster for each of the master &amp; data roles (you can scale this down to 1 if necessary).<\/li>\r\n<\/ul>\r\n<ul>\r\n<li>Every worker node will also run a Fluentd &amp; Metricbeat Pod.<\/li>\r\n<li>There will also be a single Pod each of Kibana, Heartbeat, APM-Server &amp; ElastAlert.<\/li>\r\n<\/ul>\r\n<\/li>\r\n<li>The\u00a0<code>kubectl<\/code>\u00a0command-line tool installed on your local machine, configured to connect to your cluster. <br \/>Once you have these components set up, you\u2019re ready to begin with this guide.<\/li>\r\n<li>For the Elasticsearch cluster to store its data, create a StorageClass in your cloud provider. For an on-premise deployment, use NFS for the same purpose.<\/li>\r\n<li>Make sure you have applications running in your K8s cluster so you can see the EFK Stack functioning end to end.<\/li>\r\n<\/ul>\r\n\r\n\r\n\r\n<p>&nbsp;<\/p>\r\n<\/div>\r\n<\/div>\r\n\r\n\r\n\r\n<p class=\"has-medium-font-size\" style=\"text-align: left;\"><strong>Step 1 &#8211; Creating a Namespace<\/strong><\/p>\r\n<p class=\"has-medium-font-size\">\r\n\r\n<\/p>\r\n<p style=\"text-align: left;\">Before we start the deployment, we will create a namespace. 
Kubernetes lets you separate objects running in your cluster using a \u201cvirtual cluster\u201d abstraction called Namespaces. In this guide, we\u2019ll create a\u00a0<code>logging<\/code>\u00a0namespace into which we\u2019ll install the EFK stack &amp; its components.<br \/>To create the\u00a0<code>logging<\/code>\u00a0Namespace, use the YAML file below.<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<pre class=\"wp-block-syntaxhighlighter-code\">#logging-namespace.yaml\r\nkind: Namespace\r\napiVersion: v1\r\nmetadata:\r\n  name: logging<\/pre>\r\n<p>\r\n\r\n<\/p>\r\n<p class=\"has-medium-font-size\" style=\"text-align: left;\"><strong>Step 2 &#8211; Elasticsearch StatefulSet Cluster<\/strong><\/p>\r\n<p class=\"has-medium-font-size\">\r\n\r\n<\/p>\r\n<p style=\"text-align: left;\">To set up the monitoring stack, we will first deploy <strong>Elasticsearch<\/strong>, which will act as the <strong>database<\/strong> storing all the data (metrics, logs and traces). The database will be composed of three scalable nodes joined into a cluster, as recommended for production.<br \/><br \/>Here we will enable X-Pack authentication to make the stack more secure against potential attackers.<br \/><br \/>Also, we will be using a custom Docker image which has the <strong>repository-s3<\/strong> plugin and the required certificates installed. These will be required later for <strong>Snapshot Lifecycle Management (SLM)<\/strong>.<br \/><br \/><strong>Note: The same plugin can be used to take snapshots to both AWS S3 and Alibaba OSS.<\/strong><\/p>\r\n<p>\r\n\r\n<\/p>\r\n<p style=\"text-align: left;\">1. 
<em><strong>Build the Docker image from the Dockerfile below<\/strong><\/em><\/p>\r\n<p>\r\n\r\n<\/p>\r\n<pre class=\"wp-block-syntaxhighlighter-code\">FROM docker.elastic.co\/elasticsearch\/elasticsearch:7.4.0\r\nUSER root\r\nARG OSS_ACCESS_KEY_ID\r\nARG OSS_SECRET_ACCESS_KEY\r\nRUN elasticsearch-plugin install --batch repository-s3\r\nRUN elasticsearch-keystore create\r\nRUN echo $OSS_ACCESS_KEY_ID | \/usr\/share\/elasticsearch\/bin\/elasticsearch-keystore add --stdin s3.client.default.access_key\r\nRUN echo $OSS_SECRET_ACCESS_KEY | \/usr\/share\/elasticsearch\/bin\/elasticsearch-keystore add --stdin s3.client.default.secret_key\r\nRUN elasticsearch-certutil cert -out config\/elastic-certificates.p12 -pass \"\"\r\nRUN chown -R elasticsearch:root config\/<\/pre>\r\n<p>\r\n\r\n<\/p>\r\n<p style=\"text-align: left;\">Now let&#8217;s build the image and push it to your private container registry. Note that the image is tagged with the registry path so that the same name can be pushed.<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<pre class=\"wp-block-syntaxhighlighter-code\">docker build -t &lt;registry-path&gt;\/elasticsearch-s3oss:7.4.0 --build-arg OSS_ACCESS_KEY_ID=&lt;key&gt; --build-arg OSS_SECRET_ACCESS_KEY=&lt;ID&gt; .\r\n\r\ndocker push &lt;registry-path&gt;\/elasticsearch-s3oss:7.4.0<\/pre>\r\n<p>\r\n\r\n<\/p>\r\n<p style=\"text-align: left;\">2. <strong><em>Setup the ElasticSearch\u00a0<code>master<\/code>\u00a0node<\/em><\/strong>:<br \/><br \/>The first node type of the cluster we&#8217;re going to set up is the master, which is responsible for controlling the cluster.<br \/><br \/>As the first k8s object, we\u2019ll create a headless Kubernetes service (file\u00a0<code>elasticsearch-master-svc.yaml<\/code>) that will define a DNS domain for the 3 Pods. 
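<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<p style=\"text-align: left;\">Once the master Pods are running, you can verify the per-Pod DNS records from a throwaway Pod. This check is optional, and the <code>busybox<\/code> image and Pod name here are only illustrative.<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<pre class=\"wp-block-syntaxhighlighter-code\">$ kubectl run -it --rm dns-test -n logging --image=busybox:1.28 --restart=Never \\\r\n    -- nslookup elasticsearch-master\r\n\r\n# Each master Pod is also individually resolvable, e.g.\r\n# elasticsearch-master-0.elasticsearch-master.logging.svc.cluster.local<\/pre>\r\n<p>\r\n\r\n<\/p>\r\n<p style=\"text-align: left;\">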
A headless service does not perform load balancing or have a static IP.<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<pre class=\"wp-block-syntaxhighlighter-code\">#elasticsearch-master-svc.yaml\r\napiVersion: v1\r\nkind: Service\r\nmetadata:\r\n  namespace: logging\r\n  name: elasticsearch-master\r\n  labels:\r\n    app: elasticsearch\r\n    role: master\r\nspec:\r\n  clusterIP: None\r\n  selector:\r\n    app: elasticsearch\r\n    role: master\r\n  ports:\r\n    - port: 9200\r\n      name: http\r\n    - port: 9300\r\n      name: node-to-node<\/pre>\r\n<p>\r\n\r\n<\/p>\r\n<p style=\"text-align: left;\">Next comes a\u00a0<code>StatefulSet<\/code>\u00a0deployment for the master nodes (\u00a0<code>elasticsearch-master.yaml<\/code>\u00a0), which describes the running service (docker image, number of replicas, environment variables and volumes).<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<pre class=\"wp-block-syntaxhighlighter-code\">#elasticsearch-master.yaml\r\napiVersion: apps\/v1\r\nkind: StatefulSet\r\nmetadata:\r\n  namespace: logging\r\n  name: elasticsearch-master\r\n  labels:\r\n    app: elasticsearch\r\n    role: master\r\nspec:\r\n  serviceName: elasticsearch-master\r\n  replicas: 3\r\n  selector:\r\n    matchLabels:\r\n      app: elasticsearch\r\n      role: master\r\n  template:\r\n    metadata:\r\n      labels:\r\n        app: elasticsearch\r\n        role: master\r\n    spec:\r\n      affinity:\r\n        # Try to put each ES master node on a different node in the K8s cluster\r\n        podAntiAffinity:\r\n          preferredDuringSchedulingIgnoredDuringExecution:\r\n            - weight: 100\r\n              podAffinityTerm:\r\n                labelSelector:\r\n                  matchExpressions:\r\n                  - key: app\r\n                    operator: In\r\n                    values:\r\n                      - elasticsearch\r\n                  - key: role\r\n                    operator: In\r\n                    values:\r\n                      - master\r\n                topologyKey: kubernetes.io\/hostname\r\n      # spec.template.spec.initContainers\r\n      initContainers:\r\n        # Fix the permissions on the volume.\r\n        - name: fix-the-volume-permission\r\n          image: busybox\r\n          command: ['sh', '-c', 'chown -R 1000:1000 \/usr\/share\/elasticsearch\/data']\r\n          securityContext:\r\n            privileged: true\r\n          volumeMounts:\r\n            - name: data\r\n              mountPath: \/usr\/share\/elasticsearch\/data\r\n        # Increase the default vm.max_map_count to 262144\r\n        - name: increase-the-vm-max-map-count\r\n          image: busybox\r\n          command: ['sysctl', '-w', 'vm.max_map_count=262144']\r\n          securityContext:\r\n            privileged: true\r\n        # Increase the ulimit\r\n        - name: increase-the-ulimit\r\n          image: busybox\r\n          command: ['sh', '-c', 'ulimit -n 65536']\r\n          securityContext:\r\n            privileged: true\r\n\r\n      # spec.template.spec.containers\r\n      containers:\r\n        - name: elasticsearch\r\n          image: &lt;registry-path&gt;\/elasticsearch-s3oss:7.4.0\r\n          ports:\r\n            - containerPort: 9200\r\n              name: http\r\n            - containerPort: 9300\r\n              name: transport\r\n          resources:\r\n            requests:\r\n              cpu: 0.25\r\n            limits:\r\n              cpu: 1\r\n              memory: 1Gi\r\n          # spec.template.spec.containers[elasticsearch].env\r\n          env:\r\n            - name: network.host\r\n              value: \"0.0.0.0\"\r\n            - name: discovery.seed_hosts\r\n              value: \"elasticsearch-master.logging.svc.cluster.local\"\r\n            - name: cluster.initial_master_nodes\r\n              value: \"elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2\"\r\n            - name: ES_JAVA_OPTS\r\n              value: -Xms512m -Xmx512m\r\n            - name: node.master\r\n              value: \"true\"\r\n            - name: node.ingest\r\n              value: \"false\"\r\n            - name: node.data\r\n              value: \"false\"\r\n            - name: cluster.remote.connect\r\n              value: \"false\"\r\n            - name: cluster.name\r\n              value: prod\r\n            - name: node.name\r\n              valueFrom:\r\n                fieldRef:\r\n                  fieldPath: metadata.name\r\n            # parameters to enable x-pack security.\r\n            - name: xpack.security.enabled\r\n              value: \"true\"\r\n            - name: xpack.security.transport.ssl.enabled\r\n              value: \"true\"\r\n            - name: xpack.security.transport.ssl.verification_mode\r\n              value: \"certificate\"\r\n            - name: xpack.security.transport.ssl.keystore.path\r\n              value: elastic-certificates.p12\r\n            - name: xpack.security.transport.ssl.truststore.path\r\n              value: elastic-certificates.p12\r\n          # spec.template.spec.containers[elasticsearch].volumeMounts\r\n          volumeMounts:\r\n            - name: data\r\n              mountPath: \/usr\/share\/elasticsearch\/data\r\n\r\n      # use the secret if pulling the image from a private repository\r\n      imagePullSecrets:\r\n        - name: prod-repo-sec\r\n  # Here we are using the cloud storage class to store the data; make sure you have created the storage class as a prerequisite.\r\n  volumeClaimTemplates:\r\n  - metadata:\r\n      name: data\r\n    spec:\r\n      accessModes:\r\n      - ReadWriteOnce\r\n      storageClassName: elastic-cloud-disk\r\n      resources:\r\n        requests:\r\n          storage: 20Gi<\/pre>\r\n<p>\r\n\r\n<\/p>\r\n<p style=\"text-align: left;\">Now, apply these files to the K8s cluster to deploy the elasticsearch master nodes.<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<pre class=\"wp-block-syntaxhighlighter-code\">$ kubectl apply -f elasticsearch-master.yaml \\\r\n
            -f elasticsearch-master-svc.yaml<\/pre>\r\n<p>\r\n\r\n<\/p>\r\n<p style=\"text-align: left;\"><strong>3. <\/strong><em><strong>Setup the ElasticSearch\u00a0<code>data<\/code>\u00a0node<\/strong>:<\/em><\/p>\r\n<p>\r\n\r\n<\/p>\r\n<p style=\"text-align: left;\">The second node type of the cluster we&#8217;re going to set up is the data node, which is responsible for hosting the data and executing the queries (CRUD, search, aggregation).<br \/><br \/>Here too, we\u2019ll create a headless Kubernetes service called\u00a0<code>elasticsearch<\/code>\u00a0(file\u00a0<code>elasticsearch-data-svc.yaml<\/code>) that will define a DNS domain for the 3 Pods.<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<pre class=\"wp-block-syntaxhighlighter-code\">#elasticsearch-data-svc.yaml\r\napiVersion: v1\r\nkind: Service\r\nmetadata:\r\n  namespace: logging\r\n  name: elasticsearch\r\n  labels:\r\n    app: elasticsearch\r\n    role: data\r\nspec:\r\n  clusterIP: None\r\n  selector:\r\n    app: elasticsearch\r\n    role: data\r\n  ports:\r\n    - port: 9200\r\n      name: http\r\n    - port: 9300\r\n      name: node-to-node<\/pre>\r\n<p>\r\n\r\n<\/p>\r\n<p style=\"text-align: left;\">Next comes a\u00a0<code>StatefulSet<\/code>\u00a0deployment for the data nodes (\u00a0<code>elasticsearch-data.yaml<\/code>\u00a0), which describes the running service (docker image, number of replicas, environment variables and volumes).<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<pre class=\"wp-block-syntaxhighlighter-code\">#elasticsearch-data.yaml\r\napiVersion: apps\/v1\r\nkind: StatefulSet\r\nmetadata:\r\n  namespace: logging\r\n  name: elasticsearch-data\r\n  labels:\r\n    app: elasticsearch\r\n    role: data\r\nspec:\r\n  serviceName: elasticsearch-data\r\n  # This is the number of nodes that we want to run\r\n  replicas: 3\r\n  selector:\r\n    matchLabels:\r\n      app: elasticsearch\r\n      role: data\r\n  template:\r\n    metadata:\r\n      labels:\r\n        app: elasticsearch\r\n        role: data\r\n    spec:\r\n      affinity:\r\n        # Try to put each ES data node 
on a different node in the K8s cluster\r\n        podAntiAffinity:\r\n          preferredDuringSchedulingIgnoredDuringExecution:\r\n            - weight: 100\r\n              podAffinityTerm:\r\n                labelSelector:\r\n                  matchExpressions:\r\n                  - key: app\r\n                    operator: In\r\n                    values:\r\n                      - elasticsearch\r\n                  - key: role\r\n                    operator: In\r\n                    values:\r\n                      - data\r\n                topologyKey: kubernetes.io\/hostname\r\n      terminationGracePeriodSeconds: 300\r\n      # spec.template.spec.initContainers\r\n      initContainers:\r\n        # Fix the permissions on the volume.\r\n        - name: fix-the-volume-permission\r\n          image: busybox\r\n          command: ['sh', '-c', 'chown -R 1000:1000 \/usr\/share\/elasticsearch\/data']\r\n          securityContext:\r\n            privileged: true\r\n          volumeMounts:\r\n            - name: data\r\n              mountPath: \/usr\/share\/elasticsearch\/data\r\n        # Increase the default vm.max_map_count to 262144\r\n        - name: increase-the-vm-max-map-count\r\n          image: busybox\r\n          command: ['sysctl', '-w', 'vm.max_map_count=262144']\r\n          securityContext:\r\n            privileged: true\r\n        # Increase the ulimit\r\n        - name: increase-the-ulimit\r\n          image: busybox\r\n          command: ['sh', '-c', 'ulimit -n 65536']\r\n          securityContext:\r\n            privileged: true\r\n      # spec.template.spec.containers\r\n      containers:\r\n        - name: elasticsearch\r\n          image: &lt;registry-path&gt;\/elasticsearch-s3oss:7.4.0\r\n          imagePullPolicy: Always\r\n          ports:\r\n            - containerPort: 9200\r\n              name: http\r\n            - containerPort: 9300\r\n              name: transport\r\n          resources:\r\n            limits:\r\n              memory: 4Gi\r\n          # spec.template.spec.containers[elasticsearch].env\r\n          env:\r\n            - name: discovery.seed_hosts\r\n              value: \"elasticsearch-master.logging.svc.cluster.local\"\r\n            - name: ES_JAVA_OPTS\r\n              value: -Xms3g -Xmx3g\r\n            - name: node.master\r\n              value: \"false\"\r\n            - name: node.ingest\r\n              value: \"true\"\r\n            - name: node.data\r\n              value: \"true\"\r\n            - name: cluster.remote.connect\r\n              value: \"true\"\r\n            - name: cluster.name\r\n              value: prod\r\n            - name: node.name\r\n              valueFrom:\r\n                fieldRef:\r\n                  fieldPath: metadata.name\r\n            - name: xpack.security.enabled\r\n              value: \"true\"\r\n            - name: xpack.security.transport.ssl.enabled\r\n              value: \"true\"\r\n            - name: xpack.security.transport.ssl.verification_mode\r\n              value: \"certificate\"\r\n            - name: xpack.security.transport.ssl.keystore.path\r\n              value: elastic-certificates.p12\r\n            - name: xpack.security.transport.ssl.truststore.path\r\n              value: elastic-certificates.p12\r\n          # spec.template.spec.containers[elasticsearch].volumeMounts\r\n          volumeMounts:\r\n            - name: data\r\n              mountPath: \/usr\/share\/elasticsearch\/data\r\n\r\n      # use the secret if pulling the image from a private repository\r\n      imagePullSecrets:\r\n        - name: prod-repo-sec\r\n\r\n  # Here we are using the cloud storage class to store the data; make sure you have created the storage class as a prerequisite.\r\n  volumeClaimTemplates:\r\n  - metadata:\r\n      name: data\r\n    spec:\r\n      accessModes:\r\n      - ReadWriteOnce\r\n      storageClassName: elastic-cloud-disk\r\n      resources:\r\n        requests:\r\n          storage: 50Gi<\/pre>\r\n<p>\r\n\r\n<\/p>\r\n<p style=\"text-align: left;\">Now, apply these files to the K8s cluster to deploy the elasticsearch data nodes.<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<pre class=\"wp-block-syntaxhighlighter-code\">$ kubectl apply -f elasticsearch-data.yaml \\\r\n                -f elasticsearch-data-svc.yaml<\/pre>\r\n<p>\r\n\r\n<\/p>\r\n<p style=\"text-align: left;\">4. <em><strong>Generate the X-Pack passwords and store them in a k8s secret<\/strong>:<\/em><\/p>\r\n<p>\r\n\r\n<\/p>\r\n<p style=\"text-align: left;\">We enabled the X-Pack security module above to secure our cluster, so we need to initialize the passwords. Execute the following command, which runs the program\u00a0<code>bin\/elasticsearch-setup-passwords<\/code>\u00a0within the\u00a0<code>data<\/code>\u00a0node container (any node would work) to generate default users and passwords.<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<pre class=\"wp-block-syntaxhighlighter-code\">$ kubectl exec $(kubectl get pods -n logging | grep elasticsearch-data | sed -n 1p | awk '{print $1}') \\\r\n    -n logging \\\r\n    -- bin\/elasticsearch-setup-passwords auto -b\r\n\r\nChanged password for user apm_system\r\nPASSWORD apm_system = uF8k2KVwNokmHUomemBG\r\n\r\nChanged password for user kibana\r\nPASSWORD kibana = DBptcLh8hu26230mIYc3\r\n\r\nChanged password for user logstash_system\r\nPASSWORD logstash_system = SJFKuXncpNrkuSmVCaVS\r\n\r\nChanged password for user beats_system\r\nPASSWORD beats_system = FGgIkQ1ki7mPPB3d7ns7\r\n\r\nChanged password for user remote_monitoring_user\r\nPASSWORD remote_monitoring_user = EgFB3FOsORqOx2EuZNLZ\r\n\r\nChanged password for user elastic\r\nPASSWORD elastic = 3JW4tPdspoUHzQsfQyAI<\/pre>\r\n<p>\r\n\r\n<\/p>\r\n<p style=\"text-align: left;\">Note the\u00a0<code>elastic<\/code>\u00a0user password; we will add it to a k8s secret (<code>efk-pw-elastic<\/code>) which will be used by other stack components to connect to the elasticsearch data nodes for data 
ingestion.<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<pre class=\"wp-block-syntaxhighlighter-code\">$ kubectl create secret generic efk-pw-elastic \\\r\n    -n logging \\\r\n    --from-literal password=3JW4tPdspoUHzQsfQyAI<\/pre>\r\n<p>\r\n\r\n<\/p>\r\n<p style=\"text-align: left;\">The Kibana deployment below also reads the\u00a0<code>kibana<\/code>\u00a0user password from a secret named\u00a0<code>elasticsearch-pw-elastic<\/code>, so create that one from the generated\u00a0<code>kibana<\/code>\u00a0password as well.<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<pre class=\"wp-block-syntaxhighlighter-code\">$ kubectl create secret generic elasticsearch-pw-elastic \\\r\n    -n logging \\\r\n    --from-literal password=DBptcLh8hu26230mIYc3<\/pre>\r\n<p>\r\n\r\n<\/p>\r\n<p class=\"has-medium-font-size\" style=\"text-align: left;\"><strong>Step 3 &#8211; Kibana Setup<\/strong><\/p>\r\n<p class=\"has-medium-font-size\">\r\n\r\n<\/p>\r\n<p style=\"text-align: left;\">To launch Kibana on Kubernetes, we\u2019ll create a <strong>ConfigMap<\/strong> <code>kibana-configmap<\/code> to provide our deployment with a config file containing all the required properties, a <strong>Service<\/strong> called\u00a0<code>kibana<\/code>, a <strong>Deployment<\/strong> consisting of one Pod replica (you can scale the number of replicas depending on your production needs), and an <strong>Ingress<\/strong>, which routes outside traffic to the Service inside the cluster. You need an Ingress controller for this step.<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<pre class=\"wp-block-syntaxhighlighter-code\">#kibana-configmap.yaml\r\napiVersion: v1\r\nkind: ConfigMap\r\nmetadata:\r\n  name: kibana-configmap\r\n  namespace: logging\r\ndata:\r\n  kibana.yml: |\r\n    server.name: kibana\r\n    server.host: \"0\"\r\n    # Optionally can define dashboard id which will launch on main Kibana Page.\r\n    kibana.defaultAppId: \"dashboard\/781b10c0-09e2-11ea-98eb-c318232a6317\"\r\n    elasticsearch.hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']\r\n    elasticsearch.username: ${ELASTICSEARCH_USERNAME}\r\n    elasticsearch.password: ${ELASTICSEARCH_PASSWORD}\r\n---\r\n#kibana-service.yaml\r\napiVersion: v1\r\nkind: Service\r\nmetadata:\r\n  namespace: logging\r\n  name: kibana\r\n  labels:\r\n    app: kibana\r\nspec:\r\n  selector:\r\n    app: kibana\r\n  ports:\r\n    - port: 5601\r\n      name: http\r\n---\r\n#kibana-deployment.yaml\r\napiVersion: apps\/v1\r\nkind: Deployment\r\nmetadata:\r\n  namespace: logging\r\n  name: kibana\r\n  labels:\r\n    app: kibana\r\nspec:\r\n  replicas: 1\r\n  selector:\r\n    matchLabels:\r\n      app: kibana\r\n  template:\r\n    metadata:\r\n      labels:\r\n        app: kibana\r\n    spec:\r\n      containers:\r\n        - name: kibana\r\n          image: docker.elastic.co\/kibana\/kibana:7.4.0\r\n          ports:\r\n            - containerPort: 5601\r\n          env:\r\n            - name: SERVER_NAME\r\n              valueFrom:\r\n                fieldRef:\r\n                  fieldPath: metadata.name\r\n            - name: SERVER_HOST\r\n              value: \"0.0.0.0\"\r\n            - name: ELASTICSEARCH_HOSTS\r\n              value: http:\/\/elasticsearch.logging.svc.cluster.local:9200\r\n            - name: ELASTICSEARCH_USERNAME\r\n              value: kibana\r\n            - name: ELASTICSEARCH_PASSWORD\r\n              valueFrom:\r\n                secretKeyRef:\r\n                  name: elasticsearch-pw-elastic\r\n                  key: password\r\n            - name: XPACK_MONITORING_ELASTICSEARCH_USERNAME\r\n              value: elastic\r\n            - name: XPACK_MONITORING_ELASTICSEARCH_PASSWORD\r\n              valueFrom:\r\n                secretKeyRef:\r\n                  name: efk-pw-elastic\r\n                  key: password\r\n          volumeMounts:\r\n          - name: kibana-configmap\r\n            mountPath: \/usr\/share\/kibana\/config\r\n      volumes:\r\n      - name: kibana-configmap\r\n        configMap:\r\n          name: kibana-configmap\r\n---\r\n#kibana-ingress.yaml\r\napiVersion: extensions\/v1beta1\r\nkind: Ingress\r\nmetadata:\r\n  name: kibana\r\n  namespace: logging\r\n  annotations:\r\n    kubernetes.io\/ingress.class: \"nginx\"\r\nspec:\r\n  # Specify the tls secret.\r\n  tls:\r\n  - secretName: prod-secret\r\n    hosts:\r\n    - kibana.example.com\r\n\r\n  rules:\r\n  - host: kibana.example.com\r\n    http:\r\n      paths:\r\n      - path: \/\r\n        backend:\r\n          serviceName: kibana\r\n          servicePort: 5601\r\n<\/pre>\r\n<p>\r\n\r\n<\/p>\r\n<p style=\"text-align: left;\">Now, let&#8217;s apply these files to deploy Kibana to the K8s cluster.<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<pre class=\"wp-block-syntaxhighlighter-code\">$ kubectl apply  -f kibana-configmap.yaml \\\r\n                 -f kibana-service.yaml \\\r\n                 -f kibana-deployment.yaml \\\r\n                 -f kibana-ingress.yaml<\/pre>\r\n<p>\r\n\r\n<\/p>\r\n<p style=\"text-align: left;\">Now, open Kibana in your browser using the domain name we defined in the Ingress, or expose the kibana Service on a NodePort and access the dashboard that way.<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<div class=\"wp-block-image\">\r\n<figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"415\" height=\"276\" class=\"wp-image-1918\" src=\"https:\/\/opstree.com\/blog\/\/wp-content\/uploads\/2019\/11\/login.png?w=415\" alt=\"\" \/><\/figure>\r\n<\/div>\r\n<p>\r\n\r\n<\/p>\r\n<p style=\"text-align: left;\">Now, log in with the username\u00a0<code>elastic<\/code>\u00a0and the password generated earlier and stored in the\u00a0<code>efk-pw-elastic<\/code>\u00a0secret, and you will be redirected to the index page:<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<div class=\"wp-block-image\">\r\n<figure class=\"alignright size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1918\" height=\"904\" class=\"wp-image-1919\" src=\"https:\/\/opstree.com\/blog\/\/wp-content\/uploads\/2019\/11\/mainpage.png?w=1024\" alt=\"\" \/><\/figure>\r\n<\/div>\r\n<p>\r\n\r\n<\/p>\r\n<p style=\"text-align: left;\">Lastly, create a separate admin user with the superuser role to access the Kibana dashboard.<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<div class=\"wp-block-image\">\r\n<figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"475\" height=\"719\" class=\"wp-image-1920\" src=\"https:\/\/opstree.com\/blog\/\/wp-content\/uploads\/2019\/11\/useradd.png?w=475\" alt=\"\" 
\/><\/figure>\r\n<\/div>\r\n<p>\r\n\r\n<\/p>\r\n<p style=\"text-align: left;\">Finally, we are ready to use the\u00a0<strong>ElasticSearch + Kibana<\/strong>\u00a0stack, which will let us store and visualize our infrastructure and application data (metrics, logs and traces).<\/p>\r\n<p>\r\n\r\n<\/p>\r\n<h2 class=\"wp-block-heading\" id=\"nextsteps\">Next steps<\/h2>\r\n<p>\r\n\r\n<\/p>\r\n<p style=\"text-align: left;\">In the following article [<a href=\"https:\/\/opstree.com\/blog\/\/2019\/12\/17\/collect-logs-with-fluentd-in-k8s-part-2\/\" target=\"_blank\" rel=\"noreferrer noopener\">Collect Logs with Fluentd in K8s. (Part-2)<\/a>], we will learn how to install and configure Fluentd to collect the logs.<\/p>\r\n<p><\/p>","protected":false},"excerpt":{"rendered":"<p>INTRODUCTION The Elastic Stack is the next evolution of the EFK Stack. To achieve this, we will be using the EFK stack version 7.4.0 composed of Elasticsearch, Fluentd, Kibana, Metricbeat, Heartbeat, APM-Server, and ElastAlert on a Kubernetes environment. This article series will walk through a standard Kubernetes deployment, which, in my opinion, gives a better overall &hellip; <a href=\"https:\/\/opstree.com\/blog\/2019\/12\/10\/__trashed\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;EFK 7.4.0 Stack on Kubernetes. 
(Part-1)&#8221;<\/span><\/a><\/p>\n","protected":false},"author":173461938,"featured_media":29900,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_coblocks_attr":"","_coblocks_dimensions":"","_coblocks_responsive_height":"","_coblocks_accordion_ie_support":"","jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","enabled":false},"version":2}},"categories":[28070474,50684163,277419945],"tags":[1040049,768739310,768739311],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/opstree.com\/blog\/wp-content\/uploads\/2025\/11\/DevSecOps-1.jpg","jetpack_likes_enabled":true,"jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/pfDBOm-uf","jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/posts\/1875"}],"collection":[{"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/users\/173461938"}],"replies":[{"embeddable":true,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/comments?post=1875"}],"version-history":[{"count":27,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/posts\/1875\/revisions"}],"predecessor-version":[{"id":30034,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/posts\/1875\/revisions\/30034"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/media\/29900"}],"wp:attachment":[{"href":"https:\/\/opstree
.com\/blog\/wp-json\/wp\/v2\/media?parent=1875"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/categories?post=1875"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/tags?post=1875"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}