“Hello, Tech Trailblazers! 🚀 Buckle up, because today we’re diving into the world of tech with a twist — imagine it’s narrated by Homer Simpson from The Simpsons. 🍩 So grab your donuts, channel your inner ‘D’oh!’ moments, and let’s get cracking! Or should I say, let’s ‘deploy’ into this adventure? 😏”
What is a Deployment Strategy?
💤 Boring Version: Deployment strategies ensure software updates are delivered with minimal disruption. Their importance lies in maintaining service reliability while introducing new features.
😂 Funny Version: Deployment strategies are like the dynamics in Game of Thrones: you need to seat a new king (release) on the throne without triggering a civil war (outages). It’s all about power shifts without chaos.
Deployment Strategy Evolution: A Comedic Take 🎭
1. Dino Tech Age 🦕 (1970s–1990s)
Deployment was as manual as assembling IKEA furniture but without instructions.
Engineers rebuilt the entire system every time, leading to countless “oops” moments.
System downtime? Oh, it was practically a vacation — sometimes lasting weeks!
Technology enablers? If you can call ancient mainframes and faxes “technology,” sure.
2. Script Kiddie Era 🤓 (1990s–2000s)
Deployment scripts were introduced, but they worked about as consistently as your New Year’s resolution.
Rollbacks? Hah, good luck with that! “If it breaks, we start over.”
At least virtual servers showed up, making the chaos a bit more manageable.
3. Netflix-and-Deploy Era 📺 (2010s–Present)
Enter the cool kids: Kubernetes, Docker, and “Canary” deployments (no actual birds involved).
Downtime became a thing of the past, and traffic management got smarter than your GPS.
However, now deployments require advanced YAML skills, and your wallet might shed a tear over the costs.
Now for the real questions of the day:
What if we could dynamically change the values in terraform.tfvars?
What if we could restrict the regions our deployments go to?
What if we could limit our VM types for better cost optimization?
In this article, we will take on these problems and resolve them in a way that lets the same concepts be applied to similar requirements.
Soo… Let’s Get Started!!!
First of all, we need to know what Terraform and Azure DevOps are.
Talking About Terraform: HashiCorp Terraform is an infrastructure-as-code tool that lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share. You can then use a consistent workflow to provision and manage all of your infrastructure throughout its lifecycle. Terraform can manage low-level components like compute, storage, and networking resources, as well as high-level components like DNS entries and SaaS features.
Talking about Azure DevOps: Azure DevOps provides developer services that allow teams to plan work, collaborate on code development, and build and deploy applications. It supports a collaborative culture and a set of processes that bring developers, project managers, and contributors together to develop software, letting organizations create and improve products faster than they can with traditional software development approaches.
DevOps lifecycle in Azure DevOps
If you want to learn more about Azure DevOps, click here.
Prerequisites:
Whether we are deploying our infrastructure into Microsoft Azure or Amazon Web Services (AWS), all we need is the following checklist:
An active cloud subscription (Azure/AWS)
An Azure DevOps account
Terraform code to deploy, parameterized with terraform.tfvars (a placeholder sketch follows this list)
A Linux machine (VM or EC2) for a self-hosted agent pool, or an Azure Microsoft-hosted agent
A storage account (Azure Blob Container or AWS S3), typically for the Terraform remote state
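To make the sed-based substitution shown later work, terraform.tfvars starts out filled with placeholder tokens. Here is a minimal sketch of what that file could look like before the pipeline touches it; the variable names are hypothetical, while the {token} values match the sed step shown further down:

    # terraform.tfvars -- hypothetical sketch; the {curly-brace} tokens are
    # placeholders that the pipeline's sed step replaces at runtime
    vm_name              = "{vm}"
    location             = "{West Europe}"
    vm_size              = "{StandardF2}"
    storage_account_type = "{StandardLRS}"
    image_sku            = "{16.04-LTS}"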
Azure DevOps Pipeline
Let’s take a scenario in which we will deploy a simple Terraform configuration for an Azure virtual machine using Azure DevOps pipelines.
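For illustration, the Terraform configuration for such a VM might declare variables along these lines, which terraform.tfvars then populates. This is a sketch under assumed names, not the article’s actual code:

    # variables.tf -- hypothetical variable declarations fed by terraform.tfvars
    variable "vm_name" {
      type        = string
      description = "Name of the Azure virtual machine"
    }
    variable "location" {
      type        = string
      description = "Azure region to deploy into, e.g. West Europe"
    }
    variable "vm_size" {
      type        = string
      description = "VM size, e.g. Standard_F2"
    }
    variable "storage_account_type" {
      type        = string
      description = "Managed disk type, e.g. Standard_LRS"
    }
    variable "image_sku" {
      type        = string
      description = "Ubuntu image SKU, e.g. 16.04-LTS"
    }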
In the pipeline’s parameters, we also restrict the range of accepted values by giving each parameter a list of allowed values. That way, the user cannot go beyond these pre-defined values while running the pipeline.
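Here is a minimal sketch of what those restricted runtime parameters could look like at the top of the pipeline YAML. The parameter names match the sed step below, while the defaults and allowed values are illustrative assumptions:

    # azure-pipelines.yml -- hypothetical runtime parameters with restricted values
    parameters:
      - name: name
        displayName: 'VM Name'
        type: string
        default: 'demo-vm'
      - name: region
        displayName: 'Region'
        type: string
        default: 'West Europe'
        values:
          - 'West Europe'
          - 'North Europe'
      - name: vmSize
        displayName: 'VM Size'
        type: string
        default: 'Standard_F2'
        values:
          - 'Standard_F2'
          - 'Standard_B2s'
      - name: vmStorageAccountType
        displayName: 'Disk Type'
        type: string
        default: 'Standard_LRS'
        values:
          - 'Standard_LRS'
          - 'Premium_LRS'
      - name: imageSKU
        displayName: 'Ubuntu SKU'
        type: string
        default: '16.04-LTS'
        values:
          - '16.04-LTS'
          - '18.04-LTS'

Any parameter that carries a values list renders as a drop-down on the run panel; name stays a free-text box here because it has no list.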
If you want to learn more about pipelines, click here.
Pipeline Steps:
In our pipeline, we will use the steps mentioned below.
1. Replacing Values
- bash: |
    # Substitute each placeholder token in terraform.tfvars with the
    # runtime parameter the user picked when queueing the pipeline
    sed -i "s/{vm}/${{ parameters.name }}/g" terraform.tfvars
    sed -i "s/{West Europe}/${{ parameters.region }}/g" terraform.tfvars
    sed -i "s/{StandardF2}/${{ parameters.vmSize }}/g" terraform.tfvars
    sed -i "s/{StandardLRS}/${{ parameters.vmStorageAccountType }}/g" terraform.tfvars
    sed -i "s/{16.04-LTS}/${{ parameters.imageSKU }}/g" terraform.tfvars
    # Print the result so the substituted values show up in the run logs
    cat terraform.tfvars
  displayName: 'Replace Values'
This is the heart of our pipeline. In this step, we use sed to inject the Azure pipeline runtime parameters into our terraform.tfvars file.
A later step then executes the configuration file and launches the VM instance. Normally, when you run the apply command, Terraform asks, “Do you want to perform these actions?”, and waits for you to type yes and hit enter. Since nobody is around to type yes in a pipeline, we pass the “-auto-approve” argument.
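That step isn’t reproduced inline here, but assuming the Terraform CLI is already installed on the agent, a minimal sketch could look like this:

    # Hypothetical apply step -- assumes terraform is on the agent's PATH
    # and the backend/credentials are already configured
    - bash: |
        terraform init                   # download providers and set up the backend
        terraform apply -auto-approve    # apply without the interactive "yes" prompt
      workingDirectory: '$(System.DefaultWorkingDirectory)'
      displayName: 'Terraform Init & Apply'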
Upon saving and running our pipeline, we can choose our desired parameters from the run panel.
We will get a drop-down for each parameter whose value we restricted.
Conclusion
So far, we’ve learned how to build a pipeline for our Terraform code using Azure DevOps Pipelines. Along with that, we’ve seen how to pass runtime parameters that dynamically feed values into our terraform.tfvars file, and how to restrict or limit those values as per our requirements.