Docker-Compose As A Bundled Application

When Docker was released as a new containerization tool, it took the market by storm. With its lightweight images, multi-OS support, and ability to ship containers, its popularity only grew. Having used it for more than six months now, I can see why. Hypervisors, another kind of virtualization tool, are hard on hardware: they need a lot of resources to run, which makes applications far more expensive to operate than their containerized counterparts. This is the problem Docker solved, and hence its popularity. The Docker engine simply sits on the host OS and translates an application's instructions to the underlying OS. It needs no extra layer of virtual OS, just the application's binaries and libraries bundled into the image. Right? Now, hold on to that thought.

We have all been working with Docker, and by extension with docker-compose. Why? Because it makes our job easy: we are spared from typing hundreds of ad-hoc commands in the terminal to set up a slightly (or very) complicated application with certain dependencies. We can just describe it in a `docker-compose.yml` file and our job is done. However, the problem arises when we have to share that compose file:

  • Other users might need to use the file in a different environment, so they have to edit every environment-specific value manually and keep a separate compose file per environment (sketched just below).
  • Troubleshooting configuration issues becomes tedious, since there is no single place where the application's configuration lives; changes have to be made in each file.
  • This also makes communication between the Dev and Ops teams trickier than it has to be, resulting in communication gaps and wasted time.
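
To make the pain concrete, here is a minimal sketch (the file names, line numbers, and values are hypothetical): one near-identical compose file per environment, with the differences maintained by hand:

$ ls
docker-compose.qa.yml  docker-compose.prod.yml
$ diff docker-compose.qa.yml docker-compose.prod.yml
22c22
<       - "81:80"    # QA publishes nginx on host port 81
---
>       - "80:80"    # PROD publishes it on host port 80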

To get a clearer picture of the issue, we can have a look at the image below:

We keep a compose file plus configuration for each environment, and make environment-specific changes in the different compose files, which can be a long manual task depending on the size of the project.


All of this points to the fact that there is no good way to bundle an application whose Docker images are themselves so efficiently bundled. See the irony here? Well, there "was" no way, until there was. Enter `docker-app`. This relatively new tool is the answer to packaging docker-compose applications. I came across it when I was myself struggling to re-use, in another environment, a docker-compose application I had written. As soon as I read about it I had to try it, and I loved it: it made the task much easier by providing a template of the compose file plus a key-value store for environment-dependent parameters.


Now we have an artefact with the extension `.dockerapp`. We can pass configuration values through the CLI, through files, or both, and docker-app will render the compose file according to those values.

Let us now walk through an example of how docker-app works. I am going to deploy a dummy application, Spring3hibernate, from the Opstree Github repository in a QA environment, and later in PROD with only simple configuration changes.
Installing docker-app is easy, though there is one thing to keep in mind: it can be installed either as a plugin to the docker CLI or as a standalone CLI tool. I will be installing it as a standalone CLI tool on Linux. If you wish to install it as a docker CLI plugin and/or on another OS, visit the Github page: https://github.com/docker/app (also a good place for the basics).
Before continuing, please ensure you have the docker CLI and docker-compose installed.
Follow the steps below to install docker-app:

$ export OSTYPE="$(uname | tr A-Z a-z)"
$ curl -fsSL --output "/tmp/docker-app-${OSTYPE}.tar.gz" \
"https://github.com/docker/app/releases/download/v0.8.0/docker-app-${OSTYPE}.tar.gz"
$ tar xf "/tmp/docker-app-${OSTYPE}.tar.gz" -C /tmp/
$ sudo install -b "/tmp/docker-app-standalone-${OSTYPE}" /usr/local/bin/docker-app
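
As a quick sanity check that the install worked (the exact output format may vary between releases):

$ docker-app version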

Create a new directory in your home; we'll call it the app home:

$ cd ~
$ mkdir spring3hibernate-app
$ cd spring3hibernate-app/

Now, clone the app from the Opstree Github repository. This app needs only MySQL as a dependency.

$ git clone https://github.com/opstree/spring3hibernate.git

We need to update the database properties file and the nginx config file with the contents below, respectively:

$ vim ~/spring3hibernate-app/spring3hibernate/src/main/resources/database.properties

Replace the file's contents with the following (note that the JDBC URL points at the host `mysql`, which will be the name of the MySQL service in our compose file):

database.driver=com.mysql.jdbc.Driver
database.url=jdbc:mysql://mysql:3306/employeedb
database.user=admin
database.password=password
hibernate.dialect=org.hibernate.dialect.MySQLDialect
hibernate.show_sql=true
hibernate.hbm2ddl.auto=update
upload.dir=c:/uploads

For the nginx conf file:

$ vim ~/spring3hibernate-app/spring3hibernate/nginx/default.conf
server {
    listen       80;
    server_name  localhost;

    location / {
        stub_status on;
        proxy_pass http://springapp1:8080/;

    }
# redirect server error pages to the static page /50x.html
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

}

Move `default.conf` to ~/spring3hibernate-app/spring3hibernate/nginx/conf/qa/, as we have a different conf file for PROD, which goes to ~/spring3hibernate-app/spring3hibernate/nginx/conf/prod/.
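The moves themselves, as a sketch:

$ cd ~/spring3hibernate-app/spring3hibernate/nginx
$ mkdir -p conf/qa conf/prod
$ mv default.conf conf/qa/

The PROD conf file, also saved as default.conf (under conf/prod/), looks like this: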

upstream s3hbackend {
    server springapp1:8080;
    server springapp2:8080;
}
server {
       listen 80;
       location / {
           stub_status on;
           proxy_pass http://s3hbackend;
       }
  
       # redirect server error pages to the static page /50x.html
       error_page   500 502 503 504  /50x.html;
       location = /50x.html {
           root   /usr/share/nginx/html;
       }

}

This is the configuration for the nginx load balancer; remember it, we'll use it later. Let's create our docker-app now. Make sure you are in the app home directory when executing this command:

$ docker-app init --single-file s3h

This will create a single file named s3h.dockerapp, which will look like this:

# This section contains your application metadata.
# Version of the application
version: 0.1.0
# Name of the application
name: s3h
# A short description of the application
description:
# List of application maintainers with name and email for each
maintainers:
  - name: ubuntu
    email:


---
# This section contains the Compose file that describes your application services.
version: "3.6"
services: {}


---
# This section contains the default values for your application parameters.

{}

As you can see, this file is divided into three parts: metadata, compose, and parameters. They are all in one file because we used the `--single-file` switch. We can split them into multiple files with the `docker-app split` command (run in the app home directory), and `docker-app merge` will put them back into one file.
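As a sketch of that round trip (the file names inside the split directory are my assumption of the usual multi-file .dockerapp layout):

$ docker-app split s3h.dockerapp   # yields a s3h.dockerapp/ directory holding
                                   # metadata.yml, docker-compose.yml and parameters.yml
$ docker-app merge                 # folds the parts back into a single file

Now, for QA, we have the following configuration in the s3h.dockerapp file: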

version: 0.1.0
name: s3h
description:
maintainers:
  - name: atbk5
    email: [email protected]


---
version: "3.7"
services:
  mysql:
    image: mysql:5.7
    container_name: mysql
    environment:
      MYSQL_ROOT_PASSWORD: ${mysql.env.rootpass}
      MYSQL_DATABASE: ${mysql.env.database}
      MYSQL_USER: ${mysql.env.user}
      MYSQL_PASSWORD: ${mysql.env.userpass}
    restart: always
    networks:
      - backend
    volumes:
      - db_data:/var/lib/mysql


  spring1:
    depends_on:
      - mysql
    build:
      context: ./spring3hibernate/
      dockerfile: Dockerfile
    container_name: springapp1
    restart: always
    networks:
      - backend
      - frontend


  spring2:
    depends_on:
      - mysql
    build:
      context: ./spring3hibernate/
      dockerfile: Dockerfile
    container_name: springapp2
    restart: always
    networks:
      - backend
      - frontend
    x-enabled: ${spring.app2}


  nginx:
    depends_on:
      - spring1
    image: nginx:alpine
    container_name: proxy
    restart: always
    networks:
      - frontend
    volumes:
      - ${nginx.conf}:/etc/nginx/conf.d
    ports:
      - ${nginx.port}:80
    x-enabled: ${nginx.status}


networks:
  frontend:
  backend:


volumes:
  db_data:


---
mysql:
  env:
    rootpass: password
    database: employeedb
    user: admin
    userpass: password
nginx:
  conf: /home/ubuntu/dockerApp/spring3hibernate/nginx/conf/qa
  port: 81
  status: true
spring:
  app2: false

As mentioned before, the first part contains the app metadata, the second part contains the actual compose file with lots of variables, and the last part contains the values of those variables. A special mention goes to the `x-enabled` variable: docker-app provides the ability to temporarily disable a service through it. Now, try a few commands:

$ docker-app inspect

It will produce a summary of the whole app.

$ docker-app render

It will replace all the variables with their values and produce a compose file.

$ docker-app render --set nginx.status="false"

It will remove nginx from the rendered compose file, and hence from the deployment.

$ docker-app render | docker-compose -f - up

It will spin up all the containers according to the rendered compose file. We can see the application running on port 81 of our machine.
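
A quick check that nginx is answering on the published port (I am only assuming the proxied application responds at the root path):

$ curl -I http://localhost:81/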

$ docker-app --help

Use it to check out more commands and play around a bit.
At this point, it is better to create two directories in the app home, qa and prod, with a parameters file in each: qa/qa-params.yml and prod/prod-params.yml, set up as sketched below. You can copy all the parameters from the s3h.dockerapp file above into qa-params.yml (or not, since they are already the defaults).
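
A sketch of the layout:

$ cd ~/spring3hibernate-app
$ mkdir qa prod
$ vim qa/qa-params.yml     # optionally paste the default (QA) parameters here
$ vim prod/prod-params.yml # paste the PROD parameters shown below

More importantly, put the changed parameters below into prod/prod-params.yml: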

mysql:
  env:
    rootpass: password
    database: employeedb
    user: admin
    userpass: password
nginx:
  conf: /home/ubuntu/dockerApp/spring3hibernate/nginx/conf/prod
  port: 80
  status: true
spring:
  app2: true

We are going to load-balance springapp1 and springapp2 in the PROD environment, since we have enabled springapp2 using the `x-enabled` parameter. We have also pointed the nginx conf bind path at the new conf file and changed the nginx host port to 80 (for production). All so easily. Run:

$ docker-app render --parameters-file ./prod/prod-params.yml

This command will produce a compose file ready for production deployment. Now run:

$ docker-app render --parameters-file ./prod/prod-params.yml | docker-compose -f - up

And production is deployed … Visit port 80 of your localhost to verify. What's more exciting is that we can also share our docker-apps through Docker Hub: after logging in, we can tag the app and push it to our remote registry like an image:

$ docker login

Provide your username and password for Docker Hub; create an account if you don't have one yet.

$ docker-app push --tag atbk5/s3h.dockerapp:latest

If we wish to upload additional files as well, we have to split the project using docker-app split and place the additional files in the resulting directory before pushing. The additional files travel as attachments, which can be accessed later.
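Anyone can then consume the pushed app straight from the registry; assuming your docker-app version accepts registry references, that looks roughly like:

$ docker-app inspect atbk5/s3h.dockerapp:latest
$ docker-app render atbk5/s3h.dockerapp:latest | docker-compose -f - up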

Conclusion

With the arrival of docker-app, our large, composite, containerized applications can also be shipped and re-used as images. That is cool. But there is something cooler which we haven't explored yet: deploying our docker-apps on Kubernetes, with the goal of exploring how far we can go in management, and how optimal we can get in delivery, with our applications. Let's keep that as the topic for the next blog. Until then, have a nice one. 🙂

Image Source: https://reflectoring.io/externalize-configuration/

Let's Get Started With Packer

In this blog post, we will see how to get started with Packer: we will cover installation and writing a template that creates an AWS AMI. For a basic understanding of how Packer works, you can refer to our previous blog, "Intro To Packer".
Installation 
  1. The official method is to download Packer as a precompiled binary; Packer does not provide system packages, nor is there any plan to make it available as such:
$ curl -LO https://releases.hashicorp.com/packer/1.4.0/packer_1.4.0_linux_amd64.zip
  2. After downloading the binary, unzip it to the location where you want to keep it. If you want it installed so that it can be used by users system-wide, do not unzip it in user space:
$ sudo unzip packer_1.4.0_linux_amd64.zip -d /usr/local/packer
  3. After unzipping the package, the directory should contain a single binary program called packer.
  4. The final installation step is to make sure the directory you installed Packer to is on the PATH, so that it can be used from the command line. Append the line below to a shell startup file such as ~/.profile (/etc/environment is not a shell script, so export lines do not belong there), then source the file to let the change take effect:
export PATH="$PATH:/usr/local/packer"
$ source ~/.profile
  5. Verify the installation by firing the packer command, or simply check its version:
$ packer --version
You should see the version of Packer as output.
Once installed, running Packer is as simple as packer build, which takes the build file and runs the steps we provide within it. Let's get started with a simple build file.
 
Setting Up The Stage
 
As we are building an image for the AWS cloud, there are certain prerequisites to take care of.
You should have an IAM user with permission to create and destroy EC2 instances, create an AMI, create and destroy security groups, and so on. You can find a sample IAM policy for the Packer user in the sample minimum IAM user policy for Packer.
 
After setting up your IAM user for Packer, generate the access key ID and secret key and save them.
Having noted the keys, you can either use them directly in your template (which is not suggested), or configure them as environment variables or in the AWS CLI config on the machine where Packer is installed.
 
I have configured them via the AWS CLI config, so I did not have to define them in the variables section or the builder section. You can also pass your access keys as variables when running the packer build command, as shown below.
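
For reference, both options look roughly like this. The key values are placeholders, and the -var form only works if the template declares matching user variables (the httpd.json below does not):

# Option 1: environment variables, which Packer picks up automatically
$ export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
$ export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

# Option 2: pass the keys as user variables at build time
$ packer build -var 'aws_access_key=AKIA...' -var 'aws_secret_key=...' template.json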
Here we will be installing the Apache webserver in the image. I have named the template httpd.json and used an httpd.sh script under the provisioners section to install the webserver.
 
 
Below is the sample httpd.json file
 
{
    "variables": {
        "ami_id": "ami-0a574895390037a62",
        "app_name": "httpd"
    },

    "builders": [{
        "type": "amazon-ebs",
        "region": "ap-south-1",
        "vpc_id": "vpc-df95d4b7",
        "subnet_id": "subnet-175b2d7f",
        "source_ami": "{{user `ami_id`}}",
        "instance_type": "t2.micro",
        "ssh_username": "ubuntu",
        "ami_name": "PACKER-DEMO-{{user `app_name`}}",
        "tags": {
            "Name": "PACKER-DEMO-{{user `app_name`}}",
            "Env": "DEMO"
        }
    }],

    "provisioners": [{
        "type": "shell",
        "script": "httpd.sh"
    }]
}

 
Below is the simple httpd.sh:
 
#!/bin/bash

sudo apt-get update
# the builder logs in as "ubuntu", i.e. a Debian-based AMI, where the Apache package is apache2
sudo apt-get install -y apache2

 
First, validate your template by firing the command below:
packer validate httpd.json
 
You should get success as the output, or an error indicating the offending line number.
 
Now run packer build to build your image:
 
packer build httpd.json
 
After a successful build, you will get the AMI ID and a success message as output.
 
==> amazon-ebs: Prevalidating AMI Name: PACKER-DEMO-httpd
    amazon-ebs: Found Image ID: ami-0a574895390037a62
==> amazon-ebs: Creating temporary keypair: packer_5cd559df-84ce-ff8a-fa93-0c4477d988e4
==> amazon-ebs: Creating temporary security group for this instance: packer_5cd559e2-ea81-be15-b94a-c28493c0d3ff
==> amazon-ebs: Authorizing access to port 22 from [0.0.0.0/0] in the temporary security groups...
==> amazon-ebs: Launching a source AWS instance...
==> amazon-ebs: Adding tags to source instance
    amazon-ebs: Adding tag: "Name": "Packer Builder"
    amazon-ebs: Instance ID: i-06ed051a3435865c4
==> amazon-ebs: Waiting for instance (i-06ed051a3435865c4) to become ready...
==> amazon-ebs: Using ssh communicator to connect: *.*.*.*
==> amazon-ebs: Waiting for SSH to become available...
==> amazon-ebs: Connected to SSH!
==> amazon-ebs: Stopping the source instance...
    amazon-ebs: Stopping instance
==> amazon-ebs: Waiting for the instance to stop...
==> amazon-ebs: Creating AMI PACKER-DEMO-httpd from instance i-06ed051a3435865c4
    amazon-ebs: AMI: ami-0ce41081a3b649374
==> amazon-ebs: Waiting for AMI to become ready...
==> amazon-ebs: Adding tags to AMI (ami-...)...
==> amazon-ebs: Tagging snapshot: snap-0ee3ce80ec289ed24
==> amazon-ebs: Creating AMI tags
    amazon-ebs: Adding tag: "Name": "PACKER-DEMO-httpd"
    amazon-ebs: Adding tag: "Env": "DEMO"
==> amazon-ebs: Creating snapshot tags
==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: Cleaning up any extra volumes...
==> amazon-ebs: No volumes to clean up, skipping
==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Deleting temporary keypair...
Build 'amazon-ebs' finished.

==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:
ap-south-1: ami-...

 
A few things to keep in mind:
 
  • Packer does not create an image of a running instance. Instead, it spins up a temporary instance, creates the image from it, and after the image is created it destroys all the resources it had created along the way.
  • Though Packer makes it easy to take machine AMIs programmatically, purging older images should also be kept in mind, because AMIs are stored as snapshots (backed by S3) and can add to your cost; a cleanup sketch follows this list.
  • Though rollback becomes a lot easier with immutable infra, it can become a pain in the neck if you make frequent changes in production.
  • We cannot expect Packer to solve all our problems; its only job is to create an image. You have to decide when to create an image and what post-build actions need to be taken or deployed after image creation.
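
Purging, for instance, can be as simple as deregistering the AMI and deleting its backing snapshot with the AWS CLI (both IDs below are placeholders; look up the snapshot ID from the AMI's block device mapping first):

$ aws ec2 deregister-image --image-id ami-0123456789abcdef0
$ aws ec2 delete-snapshot --snapshot-id snap-0123456789abcdef0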
I hope the above setup helps you get started. Later we will discuss how to use Packer along with Ansible and Terraform to achieve immutable infra.
I appreciate any suggestions and comments, and any questions or doubts you face while implementing it.

Intro to Packer

Packer is an open-source tool developed by HashiCorp to create machine images for multiple platforms such as AWS, GCP, Azure, or even VMware. As the name suggests, it packs all your software, packages, and configuration while baking your machine images. Packer is perhaps the only tool on the market right now that focuses solely on creating machine images, giving us the ability to automate the machine-image creation process.

In this blog post, we will learn what Packer does and how it does it. Sounds interesting!

 
What Are Packer and Machine Images?
 
"Packer can be used to create identical machine images for multiple platforms from a single source configuration. Packer is lightweight, runs on every major operating system, and is highly performant, creating machine images for multiple platforms in parallel."
https://www.packer.io/intro/ 
It installs and configures software within your Packer-made images using configuration-management tools such as Ansible, Chef, or Puppet, or plain shell scripts. You can either include your scripts in the JSON template itself or source them from a file.
 
“A machine image is a single static unit that contains a pre-configured operating system and installed software which is used to quickly create new running machines. Machine image formats change for each platform. Some examples include AMIs for EC2, VMDK/VMX files for VMware, OVF exports for VirtualBox, etc”
https://www.packer.io/intro/ 
 
Why the Heck Should We Consider Learning Packer?
 
Consider the two scenarios below:
 
Scenario 1
 
Suppose you want immutable infrastructure in place. The key guideline behind an immutable infrastructure is that you never modify a running server; if a change is required, you instead completely replace the server with a new instance that contains the update.
The new server instance is created from an origin image that is built upon, or restored from, a previously defined server state. Version-control and tag your images for easy rollback and distribution. The image contains all the application code, runtime dependencies, and configuration: in essence, the state needed for the software to run as expected. You will want to minimize the time required to bake all your required stuff into an image, which is achievable if you maintain proper tags on your previous release images, so one of them can serve as the origin (golden) image when baking the new one. The entire process of baking and using images becomes outstandingly easy with Packer.
 
Scenario 2
 
If you have autoscaling in place, there will be a requirement to bring up new serviceable VMs as quickly as possible, but a few steps get in the way of having serviceable VMs in the least time:
  • OS boot 
  • OS configuration 
  • Configuration management with Ansible or Chef 
  • Setting up your application
With a pre-baked image in place, the time to scale up your VMs decreases drastically.
 
So How Does Packer Work?
 
Packer uses a JSON file as a template. It takes the template as input, rolls up a temporary VM based on the details provided, does the required configuration, and stops the VM. After stopping the VM, it creates the image and saves it under the name/tag provided in the template:
 
(Diagram: JSON template → Packer engine → EC2 AMI)
Basic Concepts of Packer
 
There are two things you need to know to get started with Packer:
  • Templates 
  • Sub-commands
 
Templates
 
There are four sections in a Packer template (a bare-bones skeleton follows this list): 
  • Variables (optional): an object of one or more key/value strings that define the user variables contained in the template. If not specified, no variables are defined.
  • Builders (required): an array of one or more objects defining the builders that will be used to create machine images for this template, and configuring each of those builders.
  • Provisioners (optional): an array of one or more objects defining the provisioners that will be used to install and configure software on the machines created by the builders.
  • Post-processors (optional): an array of one or more objects defining the post-processing steps to take with the built images. If not specified, no post-processing is done.
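
To make the shape concrete, here is a bare-bones skeleton with all four sections. Every value is a placeholder, and the manifest post-processor is just one example of the many available:

{
    "variables": {
        "app_name": "demo"
    },
    "builders": [{
        "type": "amazon-ebs",
        "region": "ap-south-1",
        "source_ami": "ami-xxxxxxxxxxxxxxxxx",
        "instance_type": "t2.micro",
        "ssh_username": "ubuntu",
        "ami_name": "DEMO-{{user `app_name`}}"
    }],
    "provisioners": [{
        "type": "shell",
        "inline": ["echo provisioning steps go here"]
    }],
    "post-processors": [{
        "type": "manifest",
        "output": "manifest.json"
    }]
}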
Sub-Commands
 
Like other Unix tools, packer takes sub-commands and options. There are three sub-commands (usage examples follow this list):
  • build: the packer build command takes a template and runs all the builds within it in order to generate a set of artefacts. The various builds specified within a template are executed in parallel unless otherwise specified, and the artefacts that are created are outputted at the end of the build.
  • validate: the packer validate command is used to validate the syntax and configuration of a template. The command returns a zero exit status on success and a non-zero exit status on failure; additionally, if a template doesn't validate, any error messages will be outputted.
  • inspect: the packer inspect command takes a template and outputs the various components the template defines. This can help you quickly learn about a template without having to dive into the JSON itself. The command tells you things like what variables a template accepts, the builders it defines, the provisioners it defines and the order they'll run, and more.
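
Taking the httpd.json template from the previous post as an example, the three sub-commands are invoked like this:

$ packer validate httpd.json   # exit status 0 if the template is well-formed
$ packer inspect httpd.json    # lists the variables, builders, and provisioners
$ packer build httpd.json      # runs the builds and prints the artefacts created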

Hope this blog helps you understand the basics of Packer. Having covered the basic understanding, we can now "Get Started With Packer".