GitOps with Jenkins and Kubernetes

As tools like Kubernetes become essential for modern cloud-based infrastructure, there is great potential for cloud-native CI/CD. To realise it, a philosophical approach has emerged: GitOps. We discussed the important principles of GitOps in our previous blog, so in this blog we will see how to implement GitOps in our current DevOps processes and walk through a lightweight GitOps implementation. If you haven’t gone through our previous blog, you can take a look at it here.


One more reason to use Docker

Recently I was working on a project that involved Terraform and AWS. While working on it I was using my local machine for testing the Terraform code, and luckily everything was going fine. But when we actually tested it against the production environment, we ran into issues. As usual, we started digging into the problem and finally found the cause, which was quite a silly one 😜: the Terraform version on the production server and the Terraform version on my local development machine were not the same.

After wasting quite some time on this issue, I decided to come up with a solution so that it would never happen again.

But before jumping to the solution, let’s ask whether this problem is only related to Terraform, or whether we face similar issues in other scenarios as well.

Well, I guess we do face similar issues elsewhere, so let’s talk about some of those scenarios first.

Suppose you have to create a CI pipeline for a project, and you want the code to be reusable. The pipeline is ready and working fine for your project, and some time later you have to implement the same kind of pipeline for a different project. You can reuse the code, but you don’t know the exact versions of the tools the original CI pipeline was using, and that version drift will lead to errors.

Let’s take another example. Suppose you are developing something in a programming language; that utility or program will surely have some dependencies as well. Installing those dependencies on your local system can corrupt the system or the package manager you use for dependency management. A decent example is pip, the dependency manager for Python 😉.
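For instance, here is a minimal sketch of the container-based alternative, assuming a requirements.txt and a hypothetical main.py in the current directory:

# Install and run the Python dependencies inside a throwaway container
# instead of on the host (python:3.8 is just an example image)
docker run --rm -it \
    -v "${PWD}":/app \
    -w /app \
    python:3.8 \
    sh -c "pip install -r requirements.txt && python main.py"

The host system and its package manager stay untouched; only the container carries the dependencies.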

These are some of the scenarios we have actually faced, and they are what motivated us to write this blog.

The Solution

To resolve all of these problems we need just one thing: containers. I could say Docker as well, but containers and Docker are two different things.

But yes, for container management we use Docker.

So let’s go back to our first problem, the Terraform one. There are multiple ways to solve it, but we chose to solve it using Docker.

As Docker says

Build Once and Run Anywhere

Based on that statement, we created a Dockerfile for the required Terraform version and stored it alongside the code. Our Dockerfile looks like this:

FROM alpine:3.8

LABEL maintainer="OpsTree.com"

ENV TERRAFORM_VERSION=0.11.10

ARG BASE_URL=https://releases.hashicorp.com/terraform

# Install the pinned Terraform release and clean up the archive
RUN apk add --no-cache curl unzip bash \
    && curl -fsSL -o /tmp/terraform.zip ${BASE_URL}/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip \
    && unzip /tmp/terraform.zip -d /usr/bin/ \
    && rm -f /tmp/terraform.zip

# Create the non-root user that the container runs as
RUN adduser -D opstree

WORKDIR /opstree/terraform

USER opstree

In this Dockerfile, we pin the Terraform version that the code needs to run.
In a similar fashion, all the other problems listed above can be solved using Docker. We just have to create a Dockerfile with the exact dependencies that are needed, and that same file will work across environments and projects.
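A minimal sketch of how the team then consumes it (the image name and tag are just examples):

# Build the image from the Dockerfile kept alongside the code
docker build -t opstree/terraform:0.11.10 .

# Confirm the container ships the pinned Terraform version
docker run --rm opstree/terraform:0.11.10 terraform version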

To take it to the next level, you can also drop in a Makefile to make everyone’s life easier. For example:

IMAGE_TAG=latest

# Note: recipe lines in a Makefile must be indented with a tab character.

# Build the Terraform image from the Dockerfile kept next to the code
build-image:
    docker build -t opstree/terraform:${IMAGE_TAG} -f Dockerfile .

# Start a long-running container with the SSH keys, AWS credentials and the
# current directory mounted into the opstree user's home and workdir
run-container:
    docker run -itd --name terraform -v ~/.ssh:/home/opstree/.ssh -v ~/.aws:/home/opstree/.aws -v ${PWD}:/opstree/terraform opstree/terraform:${IMAGE_TAG}

plan-infra:
    docker exec -t terraform bash -c "terraform init && terraform plan"

create-infra:
    docker exec -t terraform bash -c "terraform apply -auto-approve"

destroy-infra:
    docker exec -t terraform bash -c "terraform destroy -auto-approve"
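A typical day-to-day flow with this Makefile would then be (assuming the Makefile and Dockerfile sit next to the Terraform code):

make build-image
make run-container
make plan-infra
# and once the plan looks good
make create-infra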

And trust me, after you make this utility available, the reactions of the people using it will be something like this:

Now I am assuming you will also try to apply Docker to as many scenarios as possible.

There are a few more scenarios yet to be explored to enhance the use of Docker; if you find one before I do, please let me know.

Thanks for reading. I’d really appreciate any and all feedback, so please leave a comment below if you have any.

Cheers till the next time.

Jenkins Pipeline Global Shared Libraries

When we say CI/CD as code, it should have modularity and reusability, which reduces integration problems and allows you to deliver software more rapidly.

A Jenkins Shared Library is the concept of keeping common pipeline code in a version control system so that it can be used by any number of pipelines just by referencing it. In fact, multiple teams can use the same library for their pipelines.

Our view is that putting all pipeline functions in vars is the more practical approach. There is no other good way to do inheritance; we wanted to use Jenkins Pipelines “the right way”, but it has turned out to be far more practical to use vars for global functions.
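For instance, a global function is just a Groovy file under vars/ exposing a call() method. The greet step below is hypothetical and only illustrates the pattern:

// vars/greet.groovy
def call(String name = 'world') {
    echo "Hello, ${name}!"
}

Any Jenkinsfile that loads the library can then invoke it as greet('OpsTree'), just like a built-in step.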

Practical Strategy
As we know, Jenkins Pipeline’s shared library support allows us to define and develop a set of shared pipeline helpers in a repository and provides a straightforward way of using those functions in a Jenkinsfile. This simple example illustrates how you can provide input to a pipeline with a simple YAML file, so that you can centralize all of your pipelines into one library.

Directory Structure

You would have the following folder structure in a git repo:

└── vars
    ├── opstreePipeline.groovy
    ├── opstreeStatefulPipeline.groovy
    ├── opstreeStubsPipeline.groovy
    └── pipelineConfig.groovy

Setting up the Library in the Jenkins Console

This repo is configured under Manage Jenkins → Configure System in the Global Pipeline Libraries section. In that section, Jenkins requires you to give the library a name, for example opstree-library.
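If you manage Jenkins with the Configuration as Code (JCasC) plugin instead of the console, the same library can be declared in YAML. This is only a sketch; the repository URL is hypothetical:

unclassified:
  globalLibraries:
    libraries:
      - name: "opstree-library"
        defaultVersion: "master"
        retriever:
          modernSCM:
            scm:
              git:
                remote: "https://github.com/opstree/jenkins-shared-library.git"   # hypothetical repo URL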

Pipeline.yaml

Let’s assume that the project repository has a pipeline.yaml file in the project root that provides input to the pipeline:

ENVIRONMENT_NAME: test
SERVICE_NAME: opstree-service
DB_PORT: 3079
REDIS_PORT: 6079

Jenkinsfile

Then, to utilize the shared pipeline library, the Jenkinsfile in the root of the project repo would look like:

@Library ('opstree-library@master') _
opstreePipeline()

PipelineConfig.groovy

So how does it all work? First, the following function is called to get all of the configuration data from the pipeline.yaml file:

def call() {
  // Read pipeline.yaml from the project workspace into a Map
  Map pipelineConfig = readYaml(file: "${WORKSPACE}/pipeline.yaml")
  return pipelineConfig
}

opstreePipeline.groovy

You can see the call to this function in opstreePipeline(), which is called by the Jenkinsfile.

def call() {
    node('Slave1') {

        stage('Checkout') {
            checkout scm
        }

        // Load the per-project configuration from pipeline.yaml
        def p = pipelineConfig()

        stage('Prerequisites') {
            serviceName = sh (
                    script: "echo ${p.SERVICE_NAME}|cut -d '-' -f 1",
                    returnStdout: true
                ).trim()
        }

        stage('Build & Test') {
            sh "mvn --version"
            sh "mvn -Ddb_port=${p.DB_PORT} -Dredis_port=${p.REDIS_PORT} clean install"
        }

        stage('Push Docker Image') {
            docker.withRegistry('https://registry-opstree.com', 'dockerhub') {
                sh "docker build -t opstree/${p.SERVICE_NAME}:${BUILD_NUMBER} ."
                sh "docker push opstree/${p.SERVICE_NAME}:${BUILD_NUMBER}"
            }
        }

        stage('Deploy') {
            echo "We are going to deploy ${p.SERVICE_NAME}"
            sh "kubectl set image deployment/${p.SERVICE_NAME} ${p.SERVICE_NAME}=opstree/${p.SERVICE_NAME}:${BUILD_NUMBER} -n ${p.ENVIRONMENT_NAME}"
            sh "kubectl rollout status deployment/${p.SERVICE_NAME} -n ${p.ENVIRONMENT_NAME}"
        }
    }
}

The logic is easy to follow here: the pipeline reads from pipeline.yaml which environment the developer wants to deploy to and which db_port needs to be used.

Benefits

The benefits of this approach are many; some of them are mentioned below:

  • Developers no longer need to worry about how to write Groovy code.
  • The structure of Pipeline.yaml is really flexible: entire data structures can be passed as input to the pipeline.
  • Code redundancy is reduced to a large extent.

Ideally, Jenkinsfiles could all just look the same, like this:

@Library ('opstree-library@master') _
opstreePipeline()

and opstreePipeline() would just read the project type from pipeline.yaml and dynamically run the matching function, such as opstreeStatefulPipeline() or opstreeStubsPipeline(). Since pipelines are not exactly Groovy, this isn’t possible, so one drawback is that each project has to have a slightly different-looking Jenkinsfile. A solution is in progress! So, what do you think?

Reference links: 
Image: Google image search (jenkins.io)

Docker-Compose As A Bundled Application

When Docker was released as a new containerization tool, it took the market by storm. With its lightweight images, multi-OS support, and ability to ship containers, its popularity only soared. I have been using it for more than six months now, and I can see why. Hypervisors, another kind of virtualization tool, have been hard on hardware: they require a lot of resources to run, which makes running applications on them far more expensive than running them in containers. This is the problem Docker solved, hence its popularity. The Docker engine just sits on the host OS and translates the instructions from an application to the underlying OS. It does not need an extra layer of a virtual OS, just the binaries and libraries of the application bundled in the image. Right? Now, hold on to that thought.

We have all been working with Docker and, by extension, with docker-compose. Why? Because it makes our job easy: we are spared from typing hundreds of ad-hoc commands in a terminal to set up a slightly (or very) complicated application with certain dependencies. We can just describe it in a `docker-compose.yml` file and our job is done. However, the problem arises when we have to share that compose file:

  • Other users might need to use the file in a different environment, so they will have to edit all the values pertaining to their needs manually and keep separate compose files for each environment.
  • Troubleshooting various configuration issues can be a tedious task since there is no single place where the configuration of the application is stored; changes have to be made directly in the file.
  • This also makes communication between the Dev and Ops teams trickier than it has to be, resulting in communication gaps and wasted time.

To get a clearer picture of the issue, we can have a look at the image below:

We have a compose file and configuration for each environment, and we make changes according to environment needs in the different compose files, which can be a long manual task depending on the size of the project.
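For example (a purely illustrative service and values, not part of the Spring3hibernate app discussed later), a QA copy of the compose file might differ from its PROD twin only in a few hard-coded values:

# docker-compose.qa.yml — the PROD copy duplicates everything below,
# changing only the image tag, published port and DB host
version: "3.7"
services:
  web:
    image: myorg/webapp:latest   # PROD pins a released tag instead
    ports:
      - "8081:80"                # PROD publishes port 80
    environment:
      DB_HOST: qa-db.internal    # PROD points at the production database

Every such difference then has to be maintained by hand in both files.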


All of this points to the fact that there is no way to bundle the applications that use these efficiently bundled Docker images. See the irony here? Well, there “was” no way, until there was. Enter docker-app. This relatively new tool is the answer to packaging docker-compose applications. I came across it when I was myself struggling to re-use, in another environment, a docker-compose application I had written. As soon as I read about it, I had to try it, which I did and loved. It made the task much easier, as it provides a template of the compose file and a key-value store for environment-dependent parameters.


Now we have an artifact with the extension ‘.dockerapp’. We can pass configuration values either through the CLI or through files (or both), and docker-app will render the compose file according to those values.

Let us now go through an example of how docker-app works. I am going to deploy a dummy application, Spring3hibernate, from the Opstree Github repository in a QA environment and later in PROD by making simple configuration changes.
Installing docker-app is easy, though there is one thing to keep in mind: it can be installed either as a plugin to the docker CLI or as a standalone CLI tool. I will be installing it as a standalone CLI tool on Linux. If you wish to install it as a docker CLI plugin and/or on another OS, visit the Github page: https://github.com/docker/app (also, please visit the Github page for the basics).
Before continuing, please ensure you have docker-CLI and docker-compose installed.
Please follow the steps below to install docker-app:

$ export OSTYPE="$(uname | tr A-Z a-z)"
$ curl -fsSL --output "/tmp/docker-app-${OSTYPE}.tar.gz" \
"https://github.com/docker/app/releases/download/v0.8.0/docker-app-${OSTYPE}.tar.gz"
$ tar xf "/tmp/docker-app-${OSTYPE}.tar.gz" -C /tmp/
$ install -b "/tmp/docker-app-standalone-${OSTYPE}" /usr/local/bin/docker-app
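A quick sanity check (assuming the install above succeeded) is to ask the standalone binary for its version:

$ docker-app version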

Create a new directory in your home directory; we’ll call it the app home:

$ cd ~
$ mkdir spring3hibernate-app
$ cd spring3hibernate-app/

Now, clone the app from Opstree Github repository. This app needs only mysql as a dependency.

$ git clone https://github.com/opstree/spring3hibernate.git

We need to update the database properties file and the nginx config file with the contents below, respectively:

$ vim ~/spring3hibernate-app/spring3hibernate/src/main/resources/database.properties

Replace the content there with the following:

database.driver=com.mysql.jdbc.Driver
database.url=jdbc:mysql://mysql:3306/employeedb
database.user=admin
database.password=password
hibernate.dialect=org.hibernate.dialect.MySQLDialect
hibernate.show_sql=true
hibernate.hbm2ddl.auto=update
upload.dir=c:/uploads

For nginx conf file:

$ vim ~/spring3hibernate-app/spring3hibernate/nginx/default.conf
server {
    listen       80;
    server_name  localhost;

    location / {
        stub_status on;
        proxy_pass http://springapp1:8080/;

    }
# redirect server error pages to the static page /50x.html
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

}

Move ‘default.conf’ to ~/spring3hibernate-app/spring3hibernate/nginx/conf/qa/, as we have a different conf file for PROD, which goes into ~/spring3hibernate-app/spring3hibernate/nginx/conf/prod/:

upstream s3hbackend {
    server springapp1:8080;
    server springapp2:8080;
}
server {
       listen 80;
       location / {
           stub_status on;
           proxy_pass http://s3hbackend;
       }
  
       # redirect server error pages to the static page /50x.html
       error_page   500 502 503 504  /50x.html;
       location = /50x.html {
           root   /usr/share/nginx/html;
       }

}

This is the configuration for the nginx load balancer. Remember it, we’ll use it later. Let’s create our docker-app now; make sure you are in the app home directory when executing this command:

$ docker-app init --single-file s3h

This will create a single file named s3h.dockerapp which will look like this: 

# This section contains your application metadata.
# Version of the application
version: 0.1.0
# Name of the application
name: s3h
# A short description of the application
description:
# List of application maintainers with name and email for each
maintainers:
  - name: ubuntu
    email:


---
# This section contains the Compose file that describes your application services.
version: "3.6"
services: {}


---
# This section contains the default values for your application parameters.

{}

As you can see, this file is divided into three parts: metadata, compose, and parameters. They are all in one file because we used the --single-file switch. We can split them into multiple files by running the docker-app split command in the app home directory; docker-app merge will put them back into one file. Now, for QA, we have the following configuration in the s3h.dockerapp file:

version: 0.1.0
name: s3h
description:
maintainers:
  - name: atbk5
    email: [email protected]


---
version: "3.7"
services:
  mysql:
    image: mysql:5.7
    container_name: mysql
    environment:
      MYSQL_ROOT_PASSWORD: ${mysql.env.rootpass}
      MYSQL_DATABASE: ${mysql.env.database}
      MYSQL_USER: ${mysql.env.user}
      MYSQL_PASSWORD: ${mysql.env.userpass}
    restart: always
    networks:
      - backend
    volumes:
      - db_data:/var/lib/mysql


  spring1:
    depends_on:
      - mysql
    build:
      context: ./spring3hibernate/
      dockerfile: Dockerfile
    container_name: springapp1
    restart: always
    networks:
      - backend
      - frontend


  spring2:
    depends_on:
      - mysql
    build:
      context: ./spring3hibernate/
      dockerfile: Dockerfile
    container_name: springapp2
    restart: always
    networks:
      - backend
      - frontend
    x-enabled: ${spring.app2}


  nginx:
    depends_on:
      - spring1
    image: nginx:alpine
    container_name: proxy
    restart: always
    networks:
      - frontend
    volumes:
      - ${nginx.conf}:/etc/nginx/conf.d
    ports:
      - ${nginx.port}:80
    x-enabled: ${nginx.status}


networks:
  frontend:
  backend:


volumes:
  db_data:


---
mysql:
  env:
    rootpass: password
    database: employeedb
    user: admin
    userpass: password
nginx:
  conf: /home/ubuntu/dockerApp/spring3hibernate/nginx/conf/qa
  port: 81
  status: true
spring:
  app2: false

As mentioned before, the first part contains the app metadata, the second part contains the actual compose file with lots of variables, and the last part contains the values of those variables. A special mention here is the x-enabled variable: docker-app provides functionality to temporarily disable a service using it. Now, try a few commands:

$ docker-app inspect

It will produce a summary of the whole app.

$ docker-app render

It will replace all variables with their values and produce a compose file.

$ docker-app render --set nginx.status="false"

It will remove nginx from the rendered compose file and hence from the deployment.

$ docker-app render | docker-compose -f - up

It will spin up all the containers according to the rendered compose file. We can see the application running on port 81 of our machine.
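As a quick smoke test (assuming nginx stayed enabled and bound to port 81 as parameterised above), you can hit the proxy from the host:

$ curl -I http://localhost:81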

$ docker-app --help

To check out more commands and play around a bit.
At this point, it is better to create two directories in the app home: qa and prod. Create a file in qa named qa-params.yml and another in prod named prod-params.yml. Copy all the parameters from the s3h.dockerapp file above into qa-params.yml (or not). More importantly, copy the parameter changes below into prod-params.yml:

mysql:
  env:
    rootpass: password
    database: employeedb
    user: admin
    userpass: password
nginx:
  conf: /home/ubuntu/dockerApp/spring3hibernate/nginx/conf/prod
  port: 80
  status: true
spring:
  app2: true

We are going to load balance springapp1 and springapp2 in the PROD environment, since we have enabled springapp2 using the x-enabled parameter. We have also changed the nginx conf bind path to the new conf file and the host port for nginx to 80 (for production). All quite easy. Run the command:

$ docker-app render --parameters-file ./prod/prod-params.yml

This command will produce a compose file ready for production deployment. Now run:

$ docker-app render --parameters-file ./prod/prod-params.yml | docker-compose -f - up

And production is deployed … Visit port 80 of your localhost to verify. What’s more exciting is that we can also share our docker-apps through Docker Hub: we can tag the app and push it to our remote registry as an image after logging in:

$ docker login

Provide your username and password for Docker Hub; create an account if you don’t have one yet.

$ docker-app push --tag atbk5/s3h.dockerapp:latest

If we wish to upload additional files as well, we will have to split our project using docker-app split and put the additional files in the directory before pushing. The additional files will go as attachments, which can be accessed later.

Conclusion

With the arrival of docker-app, our large, composite, containerized applications can also be shipped and re-used as images. That is cool. But there’s something cooler that we haven’t explored yet: deploying our docker-apps on Kubernetes, with the goal of seeing how far we can go in management and how optimal we can get in delivery with our applications. Let’s keep that as the topic for the next blog. Until then, have a nice one. 🙂

Image Source: https://reflectoring.io/externalize-configuration/

Can you integrate a GitHub Webhook with Privately Hosted Jenkins? No? Think again

Introduction

One of the most basic requirements of a CI implementation using Jenkins is to automatically trigger a Jenkins job after every commit. As you are already aware, there are two ways in which a Jenkins job can be triggered in an automated fashion:

  • Pull | PollSCM
  • Push | Webhook

It is a no-brainer that a push-based trigger is the most efficient way of triggering a Jenkins job; otherwise you would be unnecessarily hogging your resources with polling. One of the hurdles in implementing a push-based trigger is that your VCS and Jenkins server should be in the same network, or in simple terms, they should be able to talk to each other.

In a typical CI setup, there is a SaaS VCS, i.e. GitHub/GitLab, and a privately hosted Jenkins server, which makes push-based triggering of a Jenkins job seem impossible. Till a few days back I was under the same impression, until I found this awesome blog that talks about how you can integrate a webhook with your private Jenkins server.

In this blog, I’ll try to explain how I implemented the webhook relay. Most importantly, the reference blog covered the integration of Webhook Relay with GitHub; with GitLab there were still some unexplored areas, and I faced some challenges while doing the integration. This motivated me to write a blog so that people have a ready reference on how to integrate GitLab with Webhook Relay.

Overall Workflow

Step 1: Download WebHook Relay Agent on the local system

Copy and execute the command

curl -sSL https://storage.googleapis.com/webhookrelay/downloads/relay-linux-amd64 > relay && chmod +wx relay && sudo mv relay /usr/local/bin

Note: Webhook Relay and the Webhook Relay Agent are different things. Webhook Relay runs on a public IP and is triggered by GitLab, while the Webhook Relay Agent is a service that gets triggered by Webhook Relay.

Step 2: Create a Webhook Relay Account

After successfully signing up, we will land on the Webhook Relay home page.

Step 3: Setting up the Webhook Relay Agent.

We have to create an Access Token.
After navigating to Access Tokens, click on the Create Token button. We are then provided with a key and secret pair.
Copy and execute:

relay login -k token-key -s token-secret
 
 
If it prints a success message, our Webhook Relay agent is set up successfully.

Step 4: Create a GitLab Repository

We will keep our repository public to keep things simple and understandable. Let’s say our GitLab repository’s name is WebhookProject.

Step 5: Install GitLab and GitLab Hook Plugin.

Go to Manage Jenkins →  Manage Plugins → Available
 

Step 6: Create Jenkins Job

 
Configure the job: add the GitLab repository link
 
Now we’ll choose the build trigger option:
 
 
 
Save the job.

Step 7: Connecting GitLab Repository, Webhook Relay, and Webhook Relay Agent

The final and most important step is to connect the overall flow.

Start forwarding Webhooks to Jenkins

Open a terminal and type the command:

relay forward --bucket gitlab-jenkins http://localhost:8080/project/webhook-gitlab-test
Note: The bucket name can be anything
 
 
Note: Do not stop this process (Ctrl+C). Open a new terminal or a new tab for committing to GitLab.

The most critical part of the workflow is the link generated by the Webhook Relay Agent. Copy this link and paste it in the GitLab repository (WebhookProject) → Settings → Integrations.

Paste the link.
For the sake of simplicity, uncheck Enable SSL Verification and click the Add webhook button.
With that, all the major configuration is done. Now clone the GitLab repository and push commits to the remote repository.
Go to the Jenkins job and see that the build is triggered by the GitLab webhook.
To see the GitLab webhook relay logs, go to:
GitLab Repository → Settings → Integrations → webhook → Edit
 
 
To see the logs of the Webhook Relay Agent triggering Jenkins, go to:
Webhook Relay UI page → Relay Logs.

So now you know how to do webhook integration between your VCS and Jenkins even when they are not directly reachable from each other.
Can you integrate a GitHub webhook with privately hosted Jenkins? Yes.
Cheers Till Next Time!!!!