Best Practices of Ansible Role

I have written many Ansible Roles over my career, but when I measure them against the best practices for writing an Ansible Role, half of them fall short.
When I started writing this blog, I had only limited knowledge of Ansible Roles and of the practices being followed, but reading more about Ansible Roles has helped me fill those gaps.
Without a proper understanding of the architecture of an Ansible Role, I was unable to take advantage of everything a role has to offer; earlier, I leaned on the "command" and "shell" modules for most of my tasks. In this blog, I discuss the best practices of writing an Ansible Role. Let's read about these in detail.

Docker Logging Driver

The docker logs command batch-retrieves the logs present at the time of execution, showing the information logged by a running container. The docker service logs command shows the information logged by all containers participating in a service. What is logged, and the format of the log, depends almost entirely on the container's endpoint command.

These logs are stored on the host at /var/lib/docker/containers/<container-id>/<container-id>-json.log, which makes them awkward to consume with a tool like Filebeat: the path changes every time a new container comes up with a new container id.
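A quick way to see this path for a running container on a host using the default json-file driver (the container name below is just a placeholder):

```
# Print the host-side path of a container's log file; note the container id in it
docker inspect --format '{{.LogPath}}' <container-name>
```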

So how do you monitor logs that end up in a different file for every container? This is where Docker logging drivers come in.

Docker includes multiple logging mechanisms to help you get information from running containers and services. These mechanisms are called logging drivers, and they are configured on the Docker daemon.

To configure the Docker daemon to default to a specific logging driver, set the value of log-driver to the name of the logging driver in the daemon.json file, which is located in /etc/docker/ on Linux hosts or C:\ProgramData\docker\config\ on Windows server hosts.

The default logging driver is json-file. The following example explicitly sets the default logging driver to syslog:

```
{
  "log-driver": "syslog"
}
```
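After editing daemon.json, restart the Docker daemon for the change to take effect, and then confirm the active default driver (the commands below assume a systemd-based Linux host):

```
sudo systemctl restart docker
docker info --format '{{.LoggingDriver}}'   # should print: syslog
```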

After configuring the log driver in the daemon.json file, you can define the log driver and the destination where you want to send the logs, for example Logstash or fluentd. You can define it either at run time on the execution command, as --log-driver=syslog --log-opt syslog-address=udp://logstash:5044, or, if you are using a docker-compose file, like this:

```
logging:
  driver: fluentd
  options:
    fluentd-address: "192.168.1.1:24224"
    tag: "{{ container_name }}"
```
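For context, here is a minimal sketch of where that logging block sits in a full compose file; the service name and image are placeholders of mine, not part of any particular setup:

```
version: "3"
services:
  web:                    # hypothetical service name
    image: nginx:latest   # hypothetical image
    logging:
      driver: fluentd
      options:
        fluentd-address: "192.168.1.1:24224"
        tag: "web"
```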

Once you have configured the log driver, it will send all the Docker logs to the configured destination. And if you now try to view the logs on the terminal using the docker logs command, you will get a message:

```
Error response from daemon: configured logging driver does not support reading
```

This is because all the logs are now being forwarded to the configured destination; only a few drivers, such as json-file and journald, support reading logs back with docker logs.

Let me give you an example of how I configured the fluentd logging driver, shipped the logs to Elasticsearch, and viewed them in Kibana. In this case I am configuring the logging driver at run time, with the Elasticsearch output plugin installed inside the fluentd image, rather than setting anything in daemon.json. So make sure that your containers are created inside the same Docker network where you will be configuring the logging driver.

Step 1: Create a docker network.

```
docker network create docker-net
```
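You can verify that the network exists before moving on:

```
docker network ls --filter name=docker-net
```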

Step 2: Create a container for Elasticsearch inside the Docker network.

```
docker run -itd --name elasticsearch -p 9200:9200 --network=docker-net elasticsearch:6.4.1
```
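Before wiring up fluentd, it is worth checking that Elasticsearch is actually up and reachable on the published port:

```
curl 'http://localhost:9200/_cluster/health?pretty'
```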

Step 3: Create the fluentd configuration, fluent.conf, which sets up the forward input that the logging driver will send to, along with the Elasticsearch output. This file is copied into the fluentd Docker image in the next step.

fluent.conf

```
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match *.**>
  @type copy

  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key app
    flush_interval 1s
    index_name fluentd
    type_name fluentd
  </store>

  <store>
    @type stdout
  </store>
</match>
```

This will create indices prefixed with fluentd (from logstash_prefix), and host is set to the name we gave the Elasticsearch container.

Step 4: Build the fluentd image and create a Docker container from it.

Dockerfile.fluent

```
FROM fluent/fluentd:latest
COPY fluent.conf /fluentd/etc/
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--no-rdoc", "--no-ri", "--version", "1.9.5"]
```

Here the Elasticsearch output plugin is installed and the configuration is baked into the fluentd image.

Now build the Docker image and create a container from it.

```
docker build -t fluent -f Dockerfile.fluent .
docker run -itd --name fluentd -p 24224:24224 --network=docker-net fluent
```
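Note that the fluentd container itself still logs through the default json-file driver, so docker logs works here and is a handy way to confirm that fluentd started cleanly and is listening on port 24224:

```
docker logs fluentd
```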

Step 5: Now create the container whose logs you want to see in Kibana, configuring the log driver at run time. In this example, I am creating an nginx container and pointing its log driver at fluentd.

```
docker run -itd --name nginx -p 80:80 --network=docker-net --log-driver=fluentd --log-opt fluentd-address=udp://:24224 opstree/nginx:server
```
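To generate some log lines and confirm they are reaching Elasticsearch, hit nginx a few times and then list the indices; a fluentd-* index should appear once the first events are flushed:

```
curl http://localhost:80/
curl 'http://localhost:9200/_cat/indices?v'
```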

Step 6: Finally, create a Docker container for Kibana inside the same network.

```
docker run -itd --name kibana -p 5601:5601 --network=docker-net kibana
```

Now you will be able to see the logs of the nginx container in Kibana by creating an index pattern fluentd-*.
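If you prefer to sanity-check from the terminal before opening the Kibana UI, you can also query the index directly:

```
curl 'http://localhost:9200/fluentd-*/_search?size=1&pretty'
```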

Types of logging drivers that can be used:

| Driver | Description |
| --- | --- |
| none | No logs are available for the container and docker logs does not return any output. |
| json-file | The logs are formatted as JSON. The default logging driver for Docker. |
| syslog | Writes logging messages to the syslog facility. The syslog daemon must be running on the host machine. |
| journald | Writes log messages to journald. The journald daemon must be running on the host machine. |
| gelf | Writes log messages to a Graylog Extended Log Format (GELF) endpoint such as Graylog or Logstash. |
| fluentd | Writes log messages to fluentd (forward input). The fluentd daemon must be running on the host machine. |
| awslogs | Writes log messages to Amazon CloudWatch Logs. |
| splunk | Writes log messages to Splunk using the HTTP Event Collector. |
| etwlogs | Writes log messages as Event Tracing for Windows (ETW) events. Only available on Windows platforms. |
| gcplogs | Writes log messages to Google Cloud Platform (GCP) Logging. |
| logentries | Writes log messages to Rapid7 Logentries. |
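As a final illustration, the driver can also be set per container to override the daemon default. For example, the none driver can drop a noisy container's logs entirely (the container name and command below are made up for the demo):

```
docker run -d --name noisy-app --log-driver=none alpine sh -c 'while true; do echo noise; sleep 1; done'
docker logs noisy-app   # fails: the none driver does not support reading
```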