NiFi and ZooKeeper Cluster Setup with Terraform

Recently, while trying to set up Apache NiFi in cluster mode manually, I faced the challenge of performing the same tasks on every node by hand. In addition, getting the cluster configuration right was not easy. In my last blog here, I covered the advantages of a NiFi cluster over a standalone setup and the manual steps to configure a NiFi cluster with an external ZooKeeper.
In this article, I will show you how to set up a three-node ZooKeeper and NiFi cluster with Terraform, which minimizes the steps we had to perform in the manual setup.

Apache NiFi is an open-source data integration and automation tool that enables the automation of data flow between different systems. NiFi provides a user-friendly interface to design, control, and manage the flow of data between various sources and destinations. The tool is particularly useful for handling data from different sources, applying transformations, and routing it to different systems in real time.

Advantages of Using Terraform for NiFi

Terraform allows you to define your NiFi cluster infrastructure as code, making it easy to version, share, and understand. This keeps your infrastructure consistent across environments and reduces the chances of configuration drift. As your NiFi cluster requirements evolve, Terraform also makes it simple to scale your infrastructure up or down by adjusting the configuration.
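As a minimal sketch of that scaling idea (the variable name and the "nifi-node-N" hostname pattern here are hypothetical, not taken from the actual setup), a single count value can drive both the number of nodes and the generated ZooKeeper configuration:

```hcl
# Hypothetical sketch: one variable drives the cluster size. The entries
# follow ZooKeeper's usual server.N=host:2888:3888 convention.
variable "node_count" {
  description = "Number of NiFi/ZooKeeper nodes in the cluster"
  type        = number
  default     = 3
}

locals {
  # One server.N line per node for zookeeper.properties; change
  # node_count and re-apply to scale the cluster up or down.
  zk_servers = [
    for i in range(var.node_count) :
    "server.${i + 1}=nifi-node-${i + 1}:2888:3888"
  ]
}

output "zookeeper_server_entries" {
  value = local.zk_servers
}
```

The same count can be fed into the VM resources via `count`, so the number of nodes and the ZooKeeper configuration always stay in sync.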

Setting Up an Apache NiFi Cluster with an External ZooKeeper

Continue reading “NiFi and ZooKeeper Cluster Setup with Terraform”

Deploying Azure Policy using Terraform Module

While working on Azure, you might come across a requirement that the resources being deployed must comply with the organization’s policies. For example, you might want to restrict what the owner of a resource group or management group can do by enforcing resource tagging, restricting deployment regions, allowing only approved virtual machine (VM) images, and more.

In this blog, we will try to resolve these issues by applying Azure policies. 

First, let’s get familiar with Azure Policy.

Azure Policy is a service designed to help you enforce rules and act based on each rule’s effect on your Azure resources. You can use it to create, assign, and manage policies. Azure Policy evaluates your resources for non-compliance with assigned policies and applies the assigned effect.
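As a hedged sketch of what this looks like in Terraform (the resource names, regions, and subscription scope here are illustrative, not taken from the module covered in the full post), a custom definition plus an assignment is enough to deny deployments outside approved regions:

```hcl
# Illustrative only: deny any resource created outside the approved regions.
data "azurerm_subscription" "current" {}

resource "azurerm_policy_definition" "allowed_locations" {
  name         = "allowed-locations"
  policy_type  = "Custom"
  mode         = "All"
  display_name = "Allowed locations"

  # If the resource's location is not in the approved list, deny it.
  policy_rule = jsonencode({
    if = {
      not = {
        field = "location"
        in    = ["eastus", "westeurope"]
      }
    }
    then = { effect = "deny" }
  })
}

resource "azurerm_subscription_policy_assignment" "allowed_locations" {
  name                 = "allowed-locations"
  policy_definition_id = azurerm_policy_definition.allowed_locations.id
  subscription_id      = data.azurerm_subscription.current.id
}
```

Once assigned, any deployment into a region outside the list fails with a policy violation rather than silently drifting from the organization’s standards.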

Continue reading “Deploying Azure Policy using Terraform Module”

Deploying Terraform IAC Using Azure DevOps Runtime Parameters

Introduction

While deploying the same Terraform code manually multiple times, you must have had thoughts like:

  • What if we could automate the whole deployment process and replace the tedious steps with a few clicks?
  • What if we could dynamically change the values in terraform.tfvars?
  • What if we could restrict the regions of deployment?
  • What if we could limit the VM types for better cost optimization?

In this article, we will address these problems and resolve them in a way that the same concepts can also be applied to similar requirements.

So… let’s get started!

First of all, we need to know what Terraform and Azure DevOps are.

Talking about Terraform: HashiCorp Terraform is an infrastructure-as-code tool that lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share. You can then use a consistent workflow to provision and manage all of your infrastructure throughout its lifecycle. Terraform can manage low-level components like compute, storage, and networking resources, as well as high-level components like DNS entries and SaaS features.

Terraform Workflow

Talking about Azure DevOps: Azure DevOps provides developer services that allow teams to plan work, collaborate on code development, and build and deploy applications. It supports a collaborative culture and a set of processes that bring together developers, project managers, and contributors to develop software, allowing organizations to create and improve products faster than they can with traditional software development approaches.

DevOps lifecycle in Azure DevOps

If you want to learn more about Azure DevOps, click here.

Prerequisites:

Whether we deploy our infrastructure to Azure or Amazon Web Services (AWS), all we need is the following checklist:

  • An active cloud subscription (Azure/AWS)
  • An Azure DevOps account
  • Terraform code to deploy, using terraform.tfvars
  • A Linux machine (VM or EC2) for the agent pool, or a Microsoft-hosted agent
  • A storage account for remote state (Azure Blob container or AWS S3)

Azure DevOps Pipeline

Let’s take a scenario in which we deploy a simple Terraform configuration for an Azure virtual machine using Azure DevOps pipelines.

Have a look at main.tf:

resource "azurerm_resource_group" "rg" {
  name     = "dev-${var.name}-rg"
  location = var.region
}

resource "azurerm_virtual_network" "vnet" {
  name                = "dev-${var.name}-vnet"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
}

resource "azurerm_subnet" "subnet" {
  name                 = "dev-${var.name}-subnet"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.0.0.0/24"]
}

resource "azurerm_network_interface" "nic" {
  name                = "dev-${var.name}-nic"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  ip_configuration {
    name                          = "dev-${var.name}-ip"
    subnet_id                     = azurerm_subnet.subnet.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_linux_virtual_machine" "vm" {
  name                  = "dev-${var.name}-vm"
  resource_group_name   = azurerm_resource_group.rg.name
  location              = azurerm_resource_group.rg.location
  size                  = var.vm_size
  admin_username        = "ubuntu"
  network_interface_ids = [
    azurerm_network_interface.nic.id,
  ]

  admin_ssh_key {
    username   = "ubuntu" # must match admin_username above
    public_key = file("~/.ssh/id_rsa.pub")
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = var.vm_storage_account_type
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = var.image_sku
    version   = "latest"
  }
}

Let’s have a look at the terraform.tfvars file.

name                    = "{vm}"
region                  = "{West Europe}"
vm_size                 = "{StandardF2}"
vm_storage_account_type = "{StandardLRS}"
image_sku               = "{16.04-LTS}"
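For the configuration above to be complete, main.tf also needs matching variable declarations. The original post does not show its variables.tf, so the following is an assumed sketch:

```hcl
# Assumed variables.tf for the main.tf above (not shown in the original post).
variable "name" {
  description = "Base name used in all resource names"
  type        = string
}

variable "region" {
  description = "Azure region to deploy into"
  type        = string
}

variable "vm_size" {
  description = "Azure VM size, e.g. Standard_F2"
  type        = string
}

variable "vm_storage_account_type" {
  description = "OS disk storage account type, e.g. Standard_LRS"
  type        = string
}

variable "image_sku" {
  description = "Ubuntu image SKU, e.g. 16.04-LTS"
  type        = string
}
```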

Pipeline Parameters

Let’s pass the following values dynamically using pipeline parameters.

  1. Name of VM and other resources.
  2. Regions of deployment.
  3. Size of VM.
  4. VM storage account type.
  5. VM image SKU

parameters:
  - name: name
    displayName: Name_of_Resource
    type: string
    default: application
  
  - name: region
    displayName: region
    type: string
    default: eastus
    values:
    - eastus
    - eastus2
    - northeurope
    - centralindia

  - name: vmSize
    displayName: VM_Size
    type: string
    default: D4s_v3
    values:
    - D2as_v4
    - DS2_v2
    - D4s_v3
    - DS3_v2
    - D8s_v3

  - name: vmStorageAccountType
    displayName: VM_Storage_Account_Type
    type: string
    default: Standard_LRS
    values:
    - Standard_LRS
    - StandardSSD_LRS
    - Premium_LRS
    - UltraSSD_LRS

  - name: imageSKU
    displayName: Image_SKU
    type: string
    default: 20.04-LTS
    values:
    - 16.04-LTS
    - 18.04-LTS
    - 20.04-LTS
    - 22.04-LTS

In these pipeline parameters, we also restrict the range of values by providing a list of allowed values for each parameter. This way, the user cannot go beyond these pre-defined values while executing the pipeline.

If you want to learn more about pipelines, click here.

Pipeline Steps:

In our pipeline, we will use the steps mentioned below.

1. Replacing Values

- bash: |
    sed -i "s/{vm}/${{ parameters.name }}/g" terraform.tfvars
    sed -i "s/{West Europe}/${{ parameters.region }}/g" terraform.tfvars
    sed -i "s/{StandardF2}/Standard_${{ parameters.vmSize }}/g" terraform.tfvars # Azure VM sizes need the Standard_ prefix
    sed -i "s/{StandardLRS}/${{ parameters.vmStorageAccountType }}/g" terraform.tfvars
    sed -i "s/{16.04-LTS}/${{ parameters.imageSKU }}/g" terraform.tfvars
    cat terraform.tfvars
  displayName: 'Replace Values'

This is the heart of our pipeline. In this step, we use sed to substitute the runtime pipeline parameter values into terraform.tfvars.

2. Terraform Tool Installer

- task: TerraformInstaller@0
  inputs:
    terraformVersion: 'latest'
  displayName: 'Install Terraform latest'

This step installs the latest version of Terraform for the pipeline.

3. Terraform Init

- task: TerraformTaskV3@3
  inputs:
    provider: 'azurerm'
    command: 'init'
    backendServiceArm: 'Opstree-PoCs (4c93adXXXXXXXXXXXXXXXXXXXXXX8f3c)'
    backendAzureRmResourceGroupName: 'jenkins_server'
    backendAzureRmStorageAccountName: 'asdfghjkasdf'
    backendAzureRmContainerName: 'backend'
    backendAzureRmKey: 'backend.tfstate'

This step initializes the Terraform working directory and configures the remote backend where the state file is stored.

4. Terraform Validate

- task: TerraformTaskV3@3
  displayName: 'Terraform : Validate'
  inputs:
    command: validate

In this step, we validate our Terraform configuration.

5. Terraform Plan

- task: TerraformTaskV3@3
  displayName: 'Terraform : Plan'
  inputs:
    provider: 'azurerm'
    command: 'plan'
    commandOptions: '-lock=false'
    environmentServiceNameAzureRM: 'Opstree-PoCs (4c9xxxxxxxxxxx3c)'

This step creates an execution plan, so you can review the changes Terraform will make before applying them.

6. Terraform Apply

- task: TerraformTaskV3@3
  inputs:
    provider: 'azurerm'
    command: 'apply'
    commandOptions: '-auto-approve'
    environmentServiceNameAzureRM: 'Opstree-PoCs (4c93xxxxxxxxf3c)'

This step executes the configuration and launches the VM instance. Normally, the apply command asks, “Do you want to perform these actions?”, and you must type yes and hit enter. To skip that prompt, we pass the “-auto-approve” argument.

[Also Read: Deploying Azure Policy using Terraform Module]

Upon saving and running our pipeline, we can choose our desired parameters in this way.


We will get a drop-down for each parameter whose value we restricted.

Conclusion

So far we’ve learned how to build a pipeline for our Terraform code using Azure DevOps Pipelines. Along with that, we’ve seen how to pass runtime parameters to dynamically set values in our terraform.tfvars file, and how to restrict or limit those values as per our requirements.

Content References: Reference 1, Reference 2
Image References: Image 1, Image 2

Active-Active Infrastructure using Terraform and Jenkins on Microsoft Azure

In this blog, we will create an active-active infrastructure on Microsoft Azure using Terraform and Jenkins.

Prime Reasons to Have an Active-Active Setup of Your Infrastructure

Disaster Recovery:

Disaster recovery (DR) is an organization’s method of regaining access to and functionality of its IT infrastructure after events like a natural disaster, a cyberattack, or business disruptions such as those during the COVID-19 pandemic.

  • Ensure business resilience
    No matter what happens, a good DR plan can ensure that the business can return to full operations rapidly, without losing data or transactions.
  • Maintain competitiveness
    Loyalty is rare and when a business goes offline, customers turn to competitors to get the goods or services they require. A DR plan prevents this.
  • Avoid data loss
    The longer a business’s systems are down, the greater the risk that data will be lost. A robust DR plan minimizes this risk.
  • Maintain reputation
    A business that has trouble resuming operations after an outage can suffer brand damage. For that reason, a solid DR plan is critical.
Continue reading “Active-Active Infrastructure using Terraform and Jenkins on Microsoft Azure”

Terraform Version Upgrade

Starting the blog with the question – What is Terraform?

It can be called a magic wand that creates infrastructure from the code you write.

In HashiCorp’s words, “Terraform is an open-source infrastructure-as-code software tool that enables you to safely and predictably create, change, and improve infrastructure.”

Continue reading “Terraform Version Upgrade”