Category: DevOps
Docker Logging Driver
The docker logs command batch-retrieves the logs present at the time of execution and shows the information logged by a running container. The docker service logs command shows the information logged by all containers participating in a service. The information that is logged, and its format, depend almost entirely on the container’s endpoint command.
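As a quick illustration, here is a minimal sketch of reading logs from the CLI; the container and service names are placeholders:
```
# follow the last 100 log lines of a single container, with timestamps
docker logs --follow --tail 100 --timestamps <container-name>

# aggregated logs from all tasks of a swarm service
docker service logs <service-name>
```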
These logs are stored under "/var/lib/docker/containers/", in a log file named after the container ID. That makes them awkward to pick up with Filebeat, because the file path changes every time a new container comes up with a new container ID.
So how do you monitor logs that end up in a different file every time? This is where Docker logging drivers come in.
Docker includes multiple logging mechanisms, called logging drivers, to help you get information from running containers and services. Logging drivers are configured on the Docker daemon.
To configure the Docker daemon to default to a specific logging driver, set the value of log-driver to the name of the logging driver in the daemon.json file, which is located in /etc/docker/ on Linux hosts or C:\ProgramData\docker\config\ on Windows server hosts.
The default logging driver is json-file. The following example explicitly sets the default logging driver to syslog:
{
  "log-driver": "syslog"
}
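After editing daemon.json, restart the Docker daemon so the change takes effect (it only applies to newly created containers), and then confirm which default driver is in use. A small sketch, assuming a systemd-based Linux host:
```
sudo systemctl restart docker
docker info --format '{{.LoggingDriver}}'
```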
After configuring the log driver in the daemon.json file, you can define the log driver and the destination where you want to send the logs, for example Logstash or fluentd. You can define it either at run time, e.g. "--log-driver=syslog --log-opt syslog-address=udp://logstash:5044", or, if you are using a docker-compose file, you can define it as:
```
logging:
  driver: fluentd
  options:
    fluentd-address: "192.168.1.1:24224"
    tag: "{{ container_name }}"
```
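The equivalent can also be passed on the command line when starting a single container. Here is a rough sketch of the same configuration with docker run; the address and the nginx image are just examples, and note that Docker's own log tag templates use the {{.Name}} / {{.ID}} form:
```
docker run -d \
  --log-driver=fluentd \
  --log-opt fluentd-address=192.168.1.1:24224 \
  --log-opt tag="{{.Name}}" \
  nginx
```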
Once you have configured the log driver, it will send all the Docker logs to the configured destination. If you then try to read the logs on the terminal using the docker logs command, you will get a message:
```
Error response from daemon: configured logging driver does not support reading
```
This is because all the logs are now being forwarded to the configured destination instead of being readable locally.
Let me give you an example of how I configured the fluentd logging driver, shipped the logs to Elasticsearch, and viewed them in Kibana. In this case I am configuring the logging driver at run time, by installing the Elasticsearch plugin inside the fluentd image rather than setting it in daemon.json. Make sure that your containers are created inside the same Docker network in which you will be configuring the logging driver.
Step 1: Create a docker network.
```
docker network create docker-net
```
Step 2: Create a container for elasticsearch inside a docker network.
```
docker run -itd --name elasticsearch -p 9200:9200 --network=docker-net elasticsearch:6.4.1
```
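Optionally, check that Elasticsearch came up and is reachable on the published port before moving on (it can take a few seconds to start):
```
curl http://localhost:9200
```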
Step 3: Create a fluentd configuration, fluent.conf, in which the forwarding to Elasticsearch is configured; this file is then copied into the fluentd Docker image.
fluent.conf
```
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key app
    flush_interval 1s
    index_name fluentd
  </store>
  <store>
    @type stdout
  </store>
</match>
```
This configuration creates an index with the fluentd prefix, and the host is set to elasticsearch, the name of the Elasticsearch container, which resolves because both containers are on the same Docker network.
Step 4: Build the fluentd image and create a docker container from that.
Dockerfile.fluent
```
FROM fluent/fluentd:latest
COPY fluent.conf /fluentd/etc/
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--no-rdoc", "--no-ri", "--version", "1.9.5"]
```
Here the Elasticsearch output plugin is installed and configured inside the fluentd image. Now build the Docker image and create a container from it.
```
docker build -t fluent -f Dockerfile.fluent .
docker run -itd --name fluentd -p 24224:24224 --network=docker-net fluent
```
Step 5: Now create the container whose logs you want to see in Kibana, configuring the log driver at run time. In this example, I am creating an nginx container and configuring the fluentd log driver for it.
```
docker run -itd --name nginx -p 80:80 --network=docker-net --log-driver=fluentd --log-opt fluentd-address=localhost:24224 opstree/nginx:server
```
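Hit the nginx container once or twice so that it actually produces access-log entries for fluentd to forward:
```
curl http://localhost:80
```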
Step 6: Finally you need to create a docker container for kibana inside the same network.
```
docker run -itd --name kibana -p 5601:5601 --network=docker-net kibana
```
Now you will be able to see the logs of the nginx container in Kibana by creating an index pattern fluentd-*.
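If the index pattern does not show up, you can confirm on the Elasticsearch side that the fluentd-* index was actually created:
```
curl "http://localhost:9200/_cat/indices?v"
```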
Types of logging drivers that can be used:
- none: No logs are available for the container and docker logs does not return any output.
- json-file: The logs are formatted as JSON. The default logging driver for Docker.
- syslog: Writes logging messages to the syslog facility. The syslog daemon must be running on the host machine.
- journald: Writes log messages to journald. The journald daemon must be running on the host machine.
- gelf: Writes log messages to a Graylog Extended Log Format (GELF) endpoint such as Graylog or Logstash.
- fluentd: Writes log messages to fluentd (forward input). The fluentd daemon must be running on the host machine.
- awslogs: Writes log messages to Amazon CloudWatch Logs.
- splunk: Writes log messages to splunk using the HTTP Event Collector.
- etwlogs: Writes log messages as Event Tracing for Windows (ETW) events. Only available on Windows platforms.
- gcplogs: Writes log messages to Google Cloud Platform (GCP) Logging.
- logentries: Writes log messages to Rapid7 Logentries.
Prometheus Overview and Setup
Overview
To monitor your services and infrastructure with Prometheus, each service needs to expose an endpoint, in the form of a port or URL, for example localhost:9090. The endpoint is an HTTP interface that exposes the metrics.
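For example, once the Prometheus server from the setup below is running, you can fetch its own metrics endpoint with curl and see the plain-text exposition format it scrapes from every target:
curl http://localhost:9090/metrics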
Some platforms, such as Kubernetes and SkyDNS, are directly instrumented for Prometheus, which means you don’t have to install any exporters to monitor them; Prometheus can scrape them directly.
One of the best things about Prometheus is that it stores metrics in a time-series database (TSDB), so you can analyze them with mathematical operations and queries. Prometheus uses its own embedded time-series storage engine rather than an external database, and keeps the monitoring data on local disk.
Pre-requisites
- A CentOS 7 or Ubuntu VM
- A non-root sudo user, preferably one named prometheus
Installing Prometheus Server
mkdir /opt/prometheus-setup
cd /opt/prometheus-setup
useradd prometheus
Use wget to download the latest build of the Prometheus server and time-series database from GitHub.
wget https://github.com/prometheus/prometheus/releases/download/v2.0.0/prometheus-2.0.0.linux-amd64.tar.gz
The Prometheus monitoring system consists of several components, each of which needs to be installed separately.
tar -xvzf /opt/prometheus-setup/prometheus-2.0.0.linux-amd64.tar.gz
mv prometheus-2.0.0.linux-amd64 prometheus
sudo mv prometheus/prometheus /usr/bin/
sudo chown prometheus:prometheus /usr/bin/prometheus
sudo chown -R prometheus:prometheus /opt/prometheus-setup/
mkdir /etc/prometheus
mv prometheus/prometheus.yml /etc/prometheus/
sudo chown -R prometheus:prometheus /etc/prometheus/
prometheus --version
You should see the following message on your screen:
prometheus, version 2.0.0 (branch: HEAD, revision: 0a74f98628a0463dddc90528220c94de5032d1a0)
build user: root@615b82cb36b6
build date: 20171108-07:11:59
go version: go1.9.2
sudo vi /etc/systemd/system/prometheus.service
[Unit]
Description=Prometheus
[Service]
User=prometheus
ExecStart=/usr/bin/prometheus --config.file /etc/prometheus/prometheus.yml --storage.tsdb.path /opt/prometheus-setup/
[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl start prometheus
systemctl enable prometheus
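You can verify that the service came up and that the web UI is listening on port 9090; the curl call below is just a quick reachability check:
systemctl status prometheus
curl -I http://localhost:9090/graph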
Installing Node Exporter
Node Exporter is a binary written in Go that monitors resources such as CPU, RAM and filesystem.
wget https://github.com/prometheus/node_exporter/releases/download/v0.15.1/node_exporter-0.15.1.linux-amd64.tar.gz
You can now use the tar command to extract node_exporter-0.15.1.linux-amd64.tar.gz:
tar -xvzf node_exporter-0.15.1.linux-amd64.tar.gz
mv node_exporter-0.15.1.linux-amd64 node-exporter
mv node-exporter/node_exporter /usr/bin/
Running Node Exporter as a Service
useradd prometheus
To make it easy to start and stop the Node Exporter, let us now convert it into a service. Use vi or any other text editor to create a unit configuration file called node_exporter.service.
sudo vi /etc/systemd/system/node_exporter.service
This file should contain the path of the node_exporter executable, and also specify which user should run the executable. Accordingly, add the following code:
[Unit]
Description=Node Exporter
[Service]
User=prometheus
ExecStart=/usr/bin/node_exporter
[Install]
WantedBy=default.target
sudo systemctl daemon-reload
sudo systemctl enable node_exporter.service
sudo systemctl start node_exporter.service
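Node Exporter listens on port 9100 by default, so a quick check that it is up and exposing metrics:
curl http://localhost:9100/metrics | head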
Starting Prometheus Server with a new node
vim /etc/prometheus/prometheus.yml
# my global configuration, which is applicable for all jobs in this file
global:
  scrape_interval: 15s     # Set the scrape interval to every 15 seconds. Default is every 1 minute. scrape_interval controls how often data is scraped from exporters.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute. The evaluation interval checks, at that point in time, whether there is any update on alerting rules.

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'. Here we define the rules file paths.
# rule_files:
#   - "node_rules.yml"
#   - "db_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape. In the scrape config we define our job definitions.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'node-exporter'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'
    # targets are the machines on which exporters are running and exposing data on a particular port.
    static_configs:
      - targets: ['localhost:9100']
systemctl restart prometheus
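After the restart, you can confirm that the new node-exporter job is being scraped, either in the web UI under Status > Targets or via the HTTP API:
curl http://localhost:9090/api/v1/targets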
Classless Inter Domain Routing Made Easy (Cont..)
Introduction :
Network Mask / Subnet Mask – in decimal: 255.0.0.0
Subnetting :
The default address layout, and how host bits are progressively borrowed for a subnet field:

| Network ID (N) | Network ID (N) | Host ID (H) | Host ID (H) |

| Network ID (N) | Network ID (N) | Subnet | Host ID (H) |

| Network ID (N) | Network ID (N) | Subnet | Subnet/Host |
CIDR : Classless Inter Domain Routing
| Network ID (N) | Host ID (H) |
CIDR – 200.200.200.200/24
CLASS C SUBNETTING :
To find the network a host belongs to, AND the IP address with the subnet mask octet by octet. For the address 200.10.20.20 with the default Class C mask 255.255.255.0:

| Network address      | 200      | 10       | 20       | 0        |
| IP address           | 200      | 10       | 20       | 20       |
| Subnet mask          | 255      | 255      | 255      | 0        |
| IP address (binary)  | 11001000 | 00001010 | 00010100 | 00010100 |
| Subnet mask (binary) | 11111111 | 11111111 | 11111111 | 00000000 |
| IP AND mask (binary) | 11001000 | 00001010 | 00010100 | 00000000 |
| Network address      | 200      | 10       | 20       | 0        |

To subnet 200.10.20.0/24, borrow the three high-order host bits of the last octet (11100000 = 224), which turns the /24 network into /27 subnets:

| Network address (decimal)  | 200      | 10       | 20       | 0        |
| Network address (binary)   | 11001000 | 00001010 | 00010100 | 00000000 |
| All three subnet bits set  | 11001000 | 00001010 | 00010100 | 11100000 |
| Highest subnet (decimal)   | 200      | 10       | 20       | 224      |

The eight /27 subnets of 200.10.20.0/24:

| 200 | 10 | 20 | 0/27   |
| 200 | 10 | 20 | 32/27  |
| 200 | 10 | 20 | 64/27  |
| 200 | 10 | 20 | 96/27  |
| 200 | 10 | 20 | 128/27 |
| 200 | 10 | 20 | 160/27 |
| 200 | 10 | 20 | 192/27 |
| 200 | 10 | 20 | 224/27 |

Within the first two subnets:

| 200 | 10 | 20 | 0/27  | network address of the first subnet    |
| 200 | 10 | 20 | 31/27 | broadcast address of the first subnet  |
| 200 | 10 | 20 | 32/27 | network address of the second subnet   |
| 200 | 10 | 20 | 33/27 | first host of the second subnet        |
| 200 | 10 | 20 | 62/27 | last host of the second subnet         |
| 200 | 10 | 20 | 63/27 | broadcast address of the second subnet |
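The subnet sizes above follow directly from the number of borrowed bits; a quick check of the arithmetic for the /27 case:
\[
\text{subnets} = 2^{27-24} = 8, \qquad \text{addresses per subnet} = 2^{32-27} = 32, \qquad \text{usable hosts} = 32 - 2 = 30
\]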
Conclusion :
Classless Inter Domain Routing Made Easy
- What is CIDR?
- How did CIDR come into the picture?
- What does CIDR do?
1. IP Addresses
IP Address –
Just as every person has an address, these devices also have an Internet Protocol (IP) address, also called a logical address. Like names, these addresses are unique for each device.
Structure of IP Address –
Each octet is read in binary: bits set to 1 contribute their place values and bits set to 0 contribute nothing, so, for example, the octet 11000000 works out to 128 + 64 = 192.
Types of IP Address –
- Assignment method
- Classes : 1) Classful 2) Classless
- Public / Private
- Version
Assignment Methods :
Classes :
- Classful
- Classless
Class A :
- The IP address begins with 0; the first bit will always be zero.
- 7 remaining bits in the network part: only 128 possible Class A networks.
- 24 bits in the local part: over 16 million hosts per Class A network.
- All Class A network parts are assigned or reserved.
| Network ID (N) | Host ID (H) | Host ID (H) | Host ID (H) |
Class B :
- The first two bits will always be one and zero (10).
- 14 bits in the network part: over 16,000 possible Class B networks.
- 16 bits in the local part: over 65,000 possible hosts.
| Network ID (N) | Network ID (N) | Host ID (H) | Host ID (H) |
Class C :
- The first three bits will always be one, one and zero (110).
- 21 bits in the network part: over 2 million possible Class C networks.
- 8 bits in the local part: only 256 possible hosts per Class C network.
| Network ID (N) | Network ID (N) | Network ID (N) | Host ID (H) |
Class D :
- The IP address begins with 1110.
- Used for multicasting, not for defining networks.
- Multicasting sends messages to a group of hosts, rather than just to one host (unicasting) or to all hosts (broadcasting); for example, sending a videoconference stream to a group of receivers.
- The all-OSPF-routers address (224.0.0.5) is used to send HELLO packets.
- The all-designated-routers address (224.0.0.6) is used to send OSPF routing information to designated routers on a network segment.
- The all-EIGRP-routers address (224.0.0.10) is used to send routing information to all EIGRP routers on a network segment.
Private/Public:
- Addresses beginning with 127 are reserved for loopback and internal testing, i.e. for self-testing that TCP/IP is working properly on the local host.
- XXX.0.0.0 is reserved for the network address.
- XXX.255.255.255 is reserved for broadcast.
- 0.0.0.0 (the first address) represents the local network and is used for default routing.
- 255.255.255.255 is the broadcast address.
For example, the broadcast address of the Class B network 150.150.0.0 is 150.150.255.255.