Linux Namespaces – Part 1

Overview

First of all, I would like to give credit to Docker, which motivated me to write this blog. I have been using Docker for more than six months, but I always wondered how things happen behind the scenes. So I started learning Docker in depth, and here I am, talking about namespaces, the core concept used by Docker.

Before talking about namespaces in Linux, it is important to understand what a namespace actually is.

Let’s take an example. We have two people with the same first name: Abhishek Dubey and Abhishek Rawat. We can still differentiate them on the basis of their surnames, Dubey and Rawat. So you can think of a surname as a namespace.

In Linux, namespaces are used to isolate objects from other objects, so that whatever happens in a namespace stays in that namespace and does not affect objects in other namespaces. For example, we can have objects of the same type in different namespaces, since they are isolated from each other.

In short, through isolation, namespaces limit how much we can see.

Now that you have a good conceptual idea of namespaces, let’s try to understand them in the context of the Linux operating system.

Linux Namespaces

Normally, Linux processes form a single hierarchy, with all processes descending from init. Privileged processes and services can trace or kill other processes anywhere in that hierarchy. Linux namespaces provide the ability to have many hierarchies of processes, each with its own “subtree”, such that processes in one subtree cannot access, or even know about, those in another.
A namespace wraps a global system resource (for example, PIDs) in an abstraction that makes it appear to the processes within the namespace that they have their own isolated instance of that resource.

In the above figure, we have a process with PID 1, the first PID, from which new PIDs are generated just like a tree. At PID 6 we create a subtree, which is actually a different namespace. In the new namespace, PID 6 becomes its own first and parent PID. The child processes of PID 6 cannot see the parent process or namespace, but the parent process can see the child PIDs of the subtree.

Let’s take the PID namespace as an example to understand this more clearly. Without namespaces, all processes descend hierarchically from the first PID, i.e. init. If we create a PID namespace and run a process in it, that process becomes the first PID in the new namespace. In this case, we wrap a global system resource (the PID). The process that creates the namespace remains in the parent namespace, while its child becomes the root of the new process tree.
This means that the processes within the new namespace cannot see the parent process, but the parent process can see the processes of the child namespace.
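You can see this behaviour with the unshare tool from util-linux. This is a minimal sketch; it assumes a kernel with unprivileged user namespaces enabled (otherwise, run it as root and drop the --user/--map-root-user flags):

```shell
# Create a new user + PID namespace and run a shell inside it.
# --fork makes unshare fork before exec, so the child shell becomes
# the first process (PID 1) of the new PID namespace.
unshare --user --map-root-user --fork --pid sh -c 'echo "PID inside the namespace: $$"'
```

From the parent namespace the same shell is visible under an ordinary, larger PID; from inside, it believes it is PID 1.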
I hope you now have a clear understanding of the namespace concept and the purpose namespaces serve in a Linux OS. The next blog of this series will talk about how we use namespaces to restrict the usage of system resources such as network, mounts, cgroups, and more.

    Forward and Reverse Proxy

    Overview

    Before talking about forward and reverse proxies, let’s talk about the meaning of proxy.
    Basically, a proxy means someone or something acting on behalf of someone else.
    In the technical realm, we are talking about one server acting on behalf of other servers.

    In this blog, we will talk about web proxies. Basically, there are two types of web proxies:

    • Forward Proxy
    • Reverse Proxy
    A forward proxy is used by the client (for example, a web browser), whereas a reverse proxy is used by the server (such as a web server).

    Forward Proxy

    In a forward proxy, the proxy retrieves data from another website on behalf of the original requester. For example, if an IP is blocked from visiting a particular website, the client can use a forward proxy to hide its real IP and visit the website easily.
    Let’s take another example to understand it more clearly. Suppose we have three machines:
    Client -> your computer, from which you send the request
    Proxy Site -> the proxy server, proxy.example.com
    Main Web Server -> the website you want to see
    Normally, the connection happens directly: Client -> Main Web Server.
    With a forward proxy in place, it becomes: Client -> Proxy Site -> Main Web Server.
    So here the proxy is talking to the main web server on behalf of the client.
    A forward proxy can also act as a cache server. For example, if some content is downloaded multiple times, the proxy can cache it, so that the next time another client requests the same content, the proxy serves the copy it previously stored.

     Reverse Proxy

    A reverse proxy is used by the server to distribute load and achieve high availability. A website may have multiple servers behind the reverse proxy. The reverse proxy takes requests from the client and forwards them to the web servers. Some tools for reverse proxying are Nginx and HAProxy.
    Let’s take an example similar to the forward proxy one:
    Client -> your computer, from which you send the request
    Proxy Site -> the proxy server, proxy.example.com
    Main Web Server -> the website you want to see
    Here it is better to restrict direct access to the Main Web Server and force requests to go through the Proxy Server first. So data is retrieved by the Proxy Server on behalf of the Client.
    • So the difference between a forward proxy and a reverse proxy is that with a reverse proxy the user doesn’t know they are accessing the Main Web Server, because the user only communicates with the Proxy Server.
    • The Main Web Server is invisible to the user; only the Reverse Proxy Server is visible. The user thinks they are communicating with the Main Web Server, but the Reverse Proxy Server is actually forwarding their requests to it.
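    To make this concrete, here is a minimal Nginx reverse-proxy sketch. The hostnames and upstream addresses are placeholders, not from a real deployment:

```nginx
# Hypothetical pool of two web servers hidden behind the proxy.
upstream backend {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 80;
    server_name www.example.com;

    location / {
        # Forward client requests to the pool; the client only ever
        # talks to this proxy, never to the backend servers directly.
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

    With this configuration, requests to www.example.com are load-balanced across the two backends, which matches the load-distribution and high-availability goals described above.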

    Prometheus Overview and Setup

    Overview

    Prometheus is an open-source monitoring solution that gathers time-series-based numerical data. The project was started at SoundCloud by ex-Google engineers.

    To monitor your services and infrastructure with Prometheus, your service needs to expose an endpoint in the form of a port or URL, for example localhost:9090. The endpoint is an HTTP interface that exposes the metrics.

    Some platforms, such as Kubernetes and SkyDNS, are directly instrumented, which means you don’t have to install any kind of exporter to monitor them; Prometheus can monitor them directly.

    One of the best things about Prometheus is that it uses a time-series database (TSDB), so you can use mathematical operations and queries to analyze the data. Prometheus ships with its own embedded time-series storage engine and keeps the monitoring data on local storage volumes.

    Pre-requisites

    • A CentOS 7 or Ubuntu VM
    • A non-root sudo user, preferably one named prometheus

    Installing Prometheus Server

    First, create a new directory to store all the files you download in this tutorial and move to it.

    mkdir /opt/prometheus-setup
    cd /opt/prometheus-setup
    Create a user named “prometheus”

    useradd prometheus

    Use wget to download the latest build of the Prometheus server and time-series database from GitHub.


    wget https://github.com/prometheus/prometheus/releases/download/v2.0.0/prometheus-2.0.0.linux-amd64.tar.gz
    
    The Prometheus monitoring system consists of several components, each of which needs to be installed separately.

    Use tar to extract prometheus-2.0.0.linux-amd64.tar.gz:

    tar -xvzf prometheus-2.0.0.linux-amd64.tar.gz
    
    Place the executables somewhere in your PATH for easy access.

    mv prometheus-2.0.0.linux-amd64  prometheus
    sudo mv  prometheus/prometheus  /usr/bin/
    sudo chown prometheus:prometheus /usr/bin/prometheus
    sudo chown -R prometheus:prometheus /opt/prometheus-setup/
    sudo mkdir /etc/prometheus
    sudo mv prometheus/prometheus.yml /etc/prometheus/
    sudo chown -R prometheus:prometheus /etc/prometheus/
    prometheus --version
      

    You should see the following message on your screen:

      prometheus, version 2.0.0 (branch: HEAD, revision: 0a74f98628a0463dddc90528220c94de5032d1a0)
      build user:       root@615b82cb36b6
      build date:       20171108-07:11:59
      go version:       go1.9.2
    Create a service for Prometheus 

    sudo vi /etc/systemd/system/prometheus.service
    [Unit]
    Description=Prometheus
    
    [Service]
    User=prometheus
    ExecStart=/usr/bin/prometheus --config.file /etc/prometheus/prometheus.yml --storage.tsdb.path /opt/prometheus-setup/
    
    [Install]
    WantedBy=multi-user.target
    systemctl daemon-reload
    
    systemctl start prometheus
    
    systemctl enable prometheus

    Installing Node Exporter


    Prometheus was developed for the purpose of monitoring web services. In order to monitor the metrics of your server, you should install a tool called Node Exporter. Node Exporter, as its name suggests, exports lots of metrics (such as disk I/O statistics, CPU load, memory usage, network statistics, and more) in a format Prometheus understands. Enter the /opt/prometheus-setup directory and use wget to download the latest build of Node Exporter, which is available on GitHub.

    Node Exporter is a binary written in Go that monitors resources such as CPU, RAM, and the filesystem.

    wget https://github.com/prometheus/node_exporter/releases/download/v0.15.1/node_exporter-0.15.1.linux-amd64.tar.gz
    

    You can now use the tar command to extract node_exporter-0.15.1.linux-amd64.tar.gz:

    tar -xvzf node_exporter-0.15.1.linux-amd64.tar.gz
    
    mv node_exporter-0.15.1.linux-amd64 node-exporter

    Move the binary into your PATH:

    sudo mv node-exporter/node_exporter /usr/bin/
    

    Running Node Exporter as a Service

    Create a user named “prometheus” on the machine on which you are going to run the Node Exporter service.

    useradd prometheus

    To make it easy to start and stop the Node Exporter, let us now convert it into a service. Use vi or any other text editor to create a unit configuration file called node_exporter.service.


    sudo vi /etc/systemd/system/node_exporter.service
    
    This file should contain the path of the node_exporter executable, and also specify which user should run the executable. Accordingly, add the following code:

    [Unit]
    Description=Node Exporter
    
    [Service]
    User=prometheus
    ExecStart=/usr/bin/node_exporter
    
    [Install]
    WantedBy=default.target

    Save the file and exit the text editor. Reload systemd so that it reads the configuration file you just created.


    sudo systemctl daemon-reload
    At this point, Node Exporter is available as a service which can be managed using the systemctl command. Enable it so that it starts automatically at boot time.

    sudo systemctl enable node_exporter.service
    You can now either reboot your server or use the following command to start the service manually:
    sudo systemctl start node_exporter.service
    Once it starts, use a browser to view Node Exporter’s web interface, which is available at http://your_server_ip:9100/metrics. You should see a page with a lot of text:

    Starting Prometheus Server with a new node

    Before you start Prometheus, you must first edit a configuration file for it called prometheus.yml.

    vim /etc/prometheus/prometheus.yml
    Copy the following code into the file.

    # Global configuration: these settings apply to all jobs in this file
    global:
      scrape_interval:     15s # Scrape targets every 15 seconds. The default is every 1 minute.
      evaluation_interval: 15s # Evaluate alerting and recording rules every 15 seconds. The default is every 1 minute.
    
    # Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
    # Rule file paths are defined here.
    #rule_files:
    #  - "node_rules.yml"
    #  - "db_rules.yml"
    
    # The scrape configuration, where the job definitions live.
    scrape_configs:
      # The job name is added as a label `job=<job_name>` to any time series scraped from this config.
      - job_name: 'node-exporter'
        # metrics_path defaults to '/metrics'
        # scheme defaults to 'http'.
        # Targets are the machines on which exporters run and expose data on a particular port.
        static_configs:
          - targets: ['localhost:9100']
    After adding the configuration to prometheus.yml, restart the service:

    systemctl restart prometheus

    This scrape_configs section defines a job called node-exporter and includes the URL of your Node Exporter’s web interface in its array of targets. The scrape_interval is set to 15 seconds so that Prometheus scrapes the metrics once every fifteen seconds. You could name your job anything you want, though the console templates bundled with Prometheus expect a job named node.
    Use a browser to visit Prometheus’s homepage, available at http://your_server_ip:9090. Then visit http://your_server_ip:9090/consoles/node.html to access the Node Console and click on your server, localhost:9100, to view its metrics.
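    Once the target is up, you can also run ad-hoc queries in the expression browser at http://your_server_ip:9090/graph; for example (metric names as exposed by Node Exporter 0.15.x; newer releases renamed several of them):

```
node_load1
rate(node_cpu{mode="idle"}[5m])
```

    The first query shows the 1-minute load average; the second shows the per-CPU idle rate over the last five minutes.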

    Logstash Timestamp

    Introduction

    A few days back I encountered a simple but painful issue. I am using ELK to parse my application logs and generate some meaningful views. Logstash was inserting my logs into Elasticsearch with the current timestamp, instead of the actual time of log generation.
    This made it a mess to generate graphs with correct time values in Kibana.
    So I dug around this and found a way to overcome the concern: I made some changes in my Logstash configuration to replace the default timestamp of Logstash with the actual timestamp of my logs.

    Logstash Filter

    Add the following piece of code to the filter plugin section of Logstash’s configuration file; it will make Logstash insert logs into Elasticsearch with the actual timestamp of your logs, instead of Logstash’s own timestamp (the current time).
     
    date {
      locale => "en"
      timezone => "GMT"
      match => [ "timestamp", "yyyy-MM-dd HH:mm:ss Z" ]
    }
    
    In my case, the timezone of my logs was GMT. You need to replace the pattern “yyyy-MM-dd HH:mm:ss Z” with the one corresponding to the actual timestamp format of your logs. Note that in the date filter’s pattern syntax, MM means months while mm means minutes.

    Description

    The date plugin overrides Logstash’s timestamp with the timestamp of your logs. Now you can easily adjust the timezone in Kibana, and it will show your logs at the correct time.
    (Note: Kibana adjusts UTC time to your browser’s timezone.)

    Classless Inter Domain Routing Made Easy (Cont..)

    Introduction :

    As we discussed IP addresses and their classes in the previous blog, we can now start with subnetting.

    Network Mask /Subnet Mask –

    As a mask covers something, a subnet mask “covers” the network portion of an address.
    An IP address is made up of two components: the network address and the host address. The IP address needs to be separated into these two parts, and this separation is done by the subnet mask. The host part of an IP address can be further divided into subnet and host addresses if more subnetworks are needed, and this is done by subnetting. It is called a subnet mask or network mask because it is used to identify the network address of an IP address by performing a bitwise AND between the address and the mask.
    A subnet mask is 32 bits long and is used to divide the network and host portions of an IP address.
    In a subnet mask, all the network bits are set to 1 and all the host bits are set to 0.
     
    Whenever we see an IP address, we can easily identify:
    what is the network part of that IP, and
    what is the host part of that IP.
     
    FORMAT:
    mmmmmmmm.mmmmmmmm.mmmmmmmm.mmmmmmmm
    (a continuous run of 1s followed by a continuous run of 0s)
    EXAMPLE:
    Class A network mask
    In binary: 11111111.00000000.00000000.00000000         – the first 8 bits are fixed
    In decimal: 255.0.0.0
    Let the given IP be 10.10.10.10.
    We can identify that it belongs to class A, so the subnet mask will be 255.0.0.0
    and the network address will be 10.0.0.0.
     
    Class B network mask
    In binary: 11111111.11111111.00000000.00000000           – the first 16 bits are fixed
    In decimal: 255.255.0.0
    Let the given IP be 150.150.150.150.
    It belongs to class B, so the subnet mask will be 255.255.0.0
    and the network address will be 150.150.0.0.
     
    Class C network mask
    In binary: 11111111.11111111.11111111.00000000           – the first 24 bits are fixed
    In decimal: 255.255.255.0
    Let the given IP be 200.10.10.10.
    It belongs to class C, so the subnet mask will be 255.255.255.0
    and the network address will be 200.10.10.0.

    Subnetting :

    The method of dividing a network into two or more networks is called subnetting.
    A subnetwork, or subnet, is a logical subdivision of an IP network.
    Subnetting provides:
    • Better security
    • Smaller collision and broadcast domains
    • Greater administrative control of each network
    Subnetting – why?
    Answer: the shortage of IP addresses.
    SOLUTIONS:
    1) Subnetting – divide a bigger network into smaller networks and reduce the wastage
    2) NAT – Network Address Translation
    3) Classless IP addressing – no bits are reserved for network and host
     
    Now the problem that arises is how to identify the class of an IP address.
    Let an IP be 10.10.10.10.
    In classful addressing we can say it is class A, but in classless addressing we check the subnet mask instead.
    If the mask is 255.255.255.0,
    we can say that the first 24 bits are masked for the network and the remaining 8 are for hosts.
    Bits borrowed from the host part are added to the network part:

    Network ID | Network ID | Host ID | Host ID         (no subnetting)
    Network ID | Network ID | Subnet  | Host ID         (one octet borrowed for the subnet)
    Network ID | Network ID | Subnet  | Subnet/Host     (part of an octet borrowed)
    For example:
    150.150.0.0 – class identifier / network address
    150.150.2.4 – host address (the IP given to a host)
    255.255.255.0 – subnet mask
    150.150.2.0 – subnet address

    CIDR : Classless Inter Domain Routing

    CIDR (Classless Inter-Domain Routing, sometimes called supernetting) is a way to allow more flexible allocation of Internet Protocol addresses than was possible with the original system of IP Address classes. As a result, the number of available Internet addresses was greatly increased, which along with widespread use of network address translation, has significantly extended the useful life of IPv4.
    Let an IP be 200.200.200.200:

    Network ID (first 24 bits) | Host ID (last 8 bits)

    The network mask tells us how many bits are masked (set to 1). Here the first 24 bits are masked:
    In decimal: 255.255.255.0
    In binary: 11111111.11111111.11111111.00000000
    The total number of 1s is 24, so we can say that 24 bits are masked.
     
    This network mask can be written in one more way, and that representation is called the CIDR method, or CIDR notation.

    CIDR – 200.200.200.200/24
    24 is the number of ones, i.e. the number of bits masked.
    This is basically the method ISPs (Internet Service Providers) use to allocate a block of addresses to a company or a home.
     
    EXAMPLE:
    190.10.20.30/28 – here 28 bits are masked, representing the network, and the remaining 4 bits represent the host.
    The “/” value represents how many bits are turned on (1s).

    CLASS C SUBNETTING :

     
    Determining the available host addresses:
     
    200.10.20.0 in binary: 11001000.00001010.00010100.00000000
    The host octet runs from 00000000 (1st address) through 11111110 (255th) to 11111111 (256th),
    i.e. 256 addresses in total, of which 2 are reserved (the network and broadcast addresses):

    2^N – 2 = 2^8 – 2 = 254      (N = 8 because we have 8 host bits; – 2 because 2 addresses are reserved)

    So 254 addresses are available here.
     
    FORMULAS:
     
    Number of subnets: (2^x) – 2      (x = number of bits borrowed)
    Number of hosts: (2^y) – 2        (y = number of zeros)
    Magic number or block size = total number of addresses: 256 – mask value
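    These formulas are easy to check with shell arithmetic:

```shell
# Number of hosts in a plain /24: 8 host bits.
echo "hosts:      $(( (1 << 8) - 2 ))"    # 2^8 - 2 = 254

# Number of subnets from borrowing 3 bits.
echo "subnets:    $(( (1 << 3) - 2 ))"    # 2^3 - 2 = 6

# Block size for a mask whose last octet is 224.
echo "block size: $(( 256 - 224 ))"       # 32
```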
    Let an IP address be 200.10.20.20/24, and suppose the number of subnets needed is 5.
     
    Network address:
    IP in binary:   11001000.00001010.00010100.00010100   (200.10.20.20)
    Mask in binary: 11111111.11111111.11111111.00000000   (255.255.255.0, as the total number of 1s is 24)
    ANDing the IP and the mask:
                    11001000.00001010.00010100.00000000   = 200.10.20.0
     
    As we need 5 subnets: 2^n – 2 >= 5, and n = 3 satisfies the condition.
    So we need to turn 3 of the host zeros into ones to create the subnets:
     
    Old mask: 11111111.11111111.11111111.00000000   (255.255.255.0)
    New mask: 11111111.11111111.11111111.11100000   (3 zeros changed to 3 ones)
    The last octet of the new mask is 11100000 = 224, so the new mask is 255.255.255.224 (/27).
    Subnet 0: 200.10.20.0/27
    Subnet 1: 200.10.20.32/27    (+32, the block size)
    Subnet 2: 200.10.20.64/27    (+32)
    Subnet 3: 200.10.20.96/27
    Subnet 4: 200.10.20.128/27
    Subnet 5: 200.10.20.160/27
    Subnet 6: 200.10.20.192/27
    Subnet 7: 200.10.20.224/27
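    The same subnet list can be generated with a small loop, stepping by the block size (256 – 224 = 32):

```shell
# Enumerate the /27 subnets of 200.10.20.0/24, block size 32.
n=0
while [ "$n" -le 224 ]; do
  echo "200.10.20.$n/27"
  n=$(( n + 32 ))
done
# prints 200.10.20.0/27 through 200.10.20.224/27 (8 subnets)
```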

    How to assign host addresses:

    Subnet 0: network address 200.10.20.0/27, subnet broadcast address 200.10.20.31
    Subnet 1 (+32, the block size): network address 200.10.20.32/27
        hosts: 200.10.20.33, 200.10.20.34, … , 200.10.20.62
        subnet broadcast address: 200.10.20.63
    So 200.10.20.33 and so on till 200.10.20.62 – 30 hosts can be assigned IP addresses in each /27 subnet.
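    For any one of these subnets, the usable range follows mechanically from the network address and the block size; sketched here for subnet 1 (200.10.20.32/27):

```shell
network=32   # last octet of the subnet's network address
block=32     # block size: 256 - 224

echo "first host:   200.10.20.$(( network + 1 ))"          # .33
echo "last host:    200.10.20.$(( network + block - 2 ))"  # .62
echo "broadcast:    200.10.20.$(( network + block - 1 ))"  # .63
echo "usable hosts: $(( block - 2 ))"                      # 30
```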

    Conclusion :

    As the world is moving rapidly towards digitalization, the use of IP addresses is also increasing. So, to decrease the wastage of IP addresses, the implementation of CIDR is important, as it allows more organizations and users to take advantage of IPv4.