Ever since Prometheus took on the role of monitoring systems, it has been the undisputed open-source leader for monitoring and alerting in Kubernetes environments. While Prometheus offers some general guidance for achieving high availability, it has limitations when it comes to data retention, historical data retrieval, and multi-tenancy. This is where Thanos comes into play. In this blog post, we will discuss how to integrate Thanos with Prometheus in Kubernetes environments and why one should choose a particular approach. So let’s get started.
Source code quality analysis is a basic piece of the Continuous Integration process. Along with automated tests, it is the key component for delivering reliable software without numerous bugs, security vulnerabilities, or performance issues.
There are many open source as well as commercial tools available in the market for static code analysis, such as LGTM, PMD, Graudit, reshift, Codacy, and many more. One of the best static code analyzers you can find on the market is SonarQube.
When we say CI/CD as code, it should have modularity and reusability, which reduces integration problems and allows you to deliver software more rapidly.
A Jenkins Shared Library is the concept of keeping common pipeline code in a version control system so that it can be used by any number of pipelines just by referencing it. In fact, multiple teams can use the same library for their pipelines.
In our view, putting all pipeline functions in vars is the more practical approach. Although there is no other good way to do inheritance, and we wanted to use Jenkins Pipelines "the right way", it has turned out to be far more practical to use vars for global functions.
Practical Strategy
As we know, Jenkins Pipeline’s shared library support allows us to define and develop a set of shared pipeline helpers in a repository and provides a straightforward way of using those functions in a Jenkinsfile. This simple example illustrates how you can provide input to a pipeline with a simple YAML file so that you can centralize all of your pipelines into one library.
Directory Structure
You would have the following folder structure in a git repo:
└── vars
├── opstreePipeline.groovy
├── opstreeStatefulPipeline.groovy
├── opstreeStubsPipeline.groovy
└── pipelineConfig.groovy
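As a rough sketch of how such a global function might look (the stage names and steps here are illustrative assumptions, not the actual library code), vars/opstreePipeline.groovy could read pipeline.yaml and drive the build:

```groovy
// vars/opstreePipeline.groovy -- illustrative sketch, not the actual library code
def call() {
    node {
        stage('Checkout') {
            checkout scm
        }
        // readYaml is provided by the Pipeline Utility Steps plugin
        def config = readYaml file: 'pipeline.yaml'
        stage('Build') {
            echo "Building ${config.SERVICE_NAME} for ${config.ENVIRONMENT_NAME}"
            // ... actual build/test/deploy steps would go here ...
        }
    }
}
```

Because the function lives under vars, every pipeline that loads the library can call it as a global step named opstreePipeline.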
Setting up the Library in the Jenkins Console
This repo would be configured under Manage Jenkins > Configure System in the Global Pipeline Libraries section. In that section, Jenkins requires you to give the library a Name, for example opstree-library.
Pipeline.yaml
Let’s assume that the project repository has a pipeline.yaml file in the project root that provides input to the pipeline:
ENVIRONMENT_NAME: test
SERVICE_NAME: opstree-service
DB_PORT: 3079
REDIS_PORT: 6079
Jenkinsfile
Then, to utilize the shared pipeline library, the Jenkinsfile in the root of the project repo would look like this:
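A minimal Jenkinsfile using the library might look like the following sketch (the library name opstree-library matches the example above; the @master version tag is an assumption):

```groovy
// Jenkinsfile -- minimal sketch
@Library('opstree-library@master') _
opstreePipeline()
```

With the version pinned in the annotation, every project gets the same two-line Jenkinsfile and all pipeline logic stays in the shared library.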
Ideally, opstreePipeline() would just read the project type from pipeline.yaml and dynamically run the matching function, such as opstreeStatefulPipeline() or opstreeStubsPipeline(). Since Pipeline code is not exactly Groovy, this isn’t possible. So one of the drawbacks is that each project has to have a slightly different-looking Jenkinsfile. The solution is in progress! So, what do you think?
Most of us know Kafka as a durable, scalable, and fault-tolerant publish-subscribe messaging system. Recently I got a requirement to efficiently monitor and manage our Kafka cluster, and I started looking at different solutions. Kafka Manager is an open source tool introduced by Yahoo to manage and monitor an Apache Kafka cluster via a UI.
Before I share my experience of configuring Kafka Manager on Kubernetes, let’s go through its notable features.
As per their documentation on github below are the major features:
Clusters:
Manage multiple clusters.
Easy inspection of the cluster state.
Brokers:
Run preferred replica election.
Generate partition assignments with the option to select brokers to use
Run reassignment of a partition (based on generated assignments)
Topics:
Create a topic with optional topic configs (0.8.1.1 has different configs than 0.8.2+)
Delete topic (only supported on 0.8.2+ and remember set delete.topic.enable=true in broker config)
The topic list now indicates topics marked for deletion (only supported on 0.8.2+)
Batch generate partition assignments for multiple topics with the option to select brokers to use
Batch run reassignment of partition for multiple topics
Add partitions to an existing topic
Update config for an existing topic
Metrics:
Optionally filter out consumers that do not have ids/ owners/ & offsets/ directories in zookeeper.
Optionally enable JMX polling for broker level and topic level metrics.
Prerequisites of Kafka Manager:
We should have a running Apache Kafka with Apache Zookeeper.
Apache Zookeeper
Apache Kafka
Deployment on Kubernetes:
To deploy Kafka Manager on Kubernetes, we need to create a deployment file and a service file as given below.
After deployment, we should be able to access the Kafka Manager service via http://<node-ip>:8080. We have two files, kafka-manager-service.yaml and kafka-manager.yaml, to achieve the above-mentioned setup. Let’s have a brief description of the different attributes used in these files.
Deployment configuration file:
namespace: provides a namespace to isolate the application within Kubernetes.
replicas: the number of containers to spin up.
image: the path of the Docker image to be used.
containerPorts: the port on which you want to run your application.
environment: “ZK_HOSTS” provides the address of the already running Zookeeper.
Service configuration file: this file contains the details to create the Kafka Manager service on Kubernetes. For demo purposes, I have used the NodePort method to expose my service. As we are using Kubernetes as our underlying deployment platform, it is recommended not to use an external IP to access any service. We should either go with a LoadBalancer or use an Ingress (the recommended method) rather than exposing all microservices directly. To configure Ingress, please take a note from Kubernetes Ingress. Once we are able to access Kafka Manager, we can see screens similar to the ones below.
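As a rough sketch of the two manifests described above (the namespace, image name, and ZK_HOSTS value are placeholders you would adapt to your own cluster):

```yaml
# kafka-manager.yaml -- illustrative sketch of the deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-manager
  namespace: kafka            # placeholder namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-manager
  template:
    metadata:
      labels:
        app: kafka-manager
    spec:
      containers:
      - name: kafka-manager
        image: your-registry/kafka-manager:latest   # placeholder image
        ports:
        - containerPort: 8080
        env:
        - name: ZK_HOSTS
          value: "zookeeper:2181"   # address of the already running Zookeeper
---
# kafka-manager-service.yaml -- NodePort service, for demo purposes only
apiVersion: v1
kind: Service
metadata:
  name: kafka-manager
  namespace: kafka
spec:
  type: NodePort
  selector:
    app: kafka-manager
  ports:
  - port: 8080
    targetPort: 8080
```

Applying both with kubectl apply -f should bring up the UI on the assigned node port; for anything beyond a demo, swap the NodePort service for an Ingress as recommended above.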
Cluster Management
Topic List
Major Issues
To get broker level and topic level metrics we have to enable JMX polling.
What we would generally do is set the corresponding environment variable in the Kubernetes manifest, but somehow that does not work most of the time.
To resolve this, you need to update the JMX settings while creating your Docker image, as given below.
vim /opt/kafka/bin/kafka-run-class.sh
if [ -z "$KAFKA_JMX_OPTS" ]; then
#KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false "
KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=$HOSTNAME -Djava.net.preferIPv4Stack=true"
fi
Conclusion
Deploying Kafka Manager on Kubernetes makes for an easy setup and provides efficient manageability and all-time availability. Managing a Kafka cluster over the CLI is a tedious task, and Kafka Manager helps us focus more on the use of Kafka rather than investing our time in configuring and managing it. It becomes especially useful at the enterprise level, where system engineers can manage multiple Kafka clusters easily via the UI.