
EKS Cluster AutoScaler

  • Dev Gautam
  • March 18, 2025
  • AWS, DevOps

The Amazon EKS AutoScaler automatically adjusts the number of worker nodes in an Amazon EKS cluster based on resource utilization and scaling demands.

Before we started using EKS Cluster Autoscaler, managing node scaling in our Kubernetes cluster was a constant struggle. Here’s what we were dealing with:

  • High Costs: We had to keep extra EC2 nodes running all the time to avoid pod scheduling issues, even when they weren't being used.
  • Pod Failures: If we tried to reduce the node count to save money, pods would often stay Pending due to a lack of resources.
  • Manual Work: Scaling nodes up or down was a manual task, time-consuming and error-prone.
  • Slow Deployments: During deployments or traffic spikes, our workloads would get stuck because there weren't enough resources.
  • Unpredictable Demand: Our workloads change frequently, but our cluster capacity doesn't, leading to overuse or underuse of nodes.

These challenges pushed us to explore Cluster Autoscaler, and it turned out to be a game-changer.

Overview

Setting up the Cluster Autoscaler on Amazon EKS using Helm can be tricky. Most guides cover the basic steps, but there’s an important tip that’s often overlooked. In this post, we’ll share this helpful trick and walk you through the entire process step by step.

Prerequisites

Before we dive into the deployment process, ensure you have the following:

  1. An active Amazon EKS cluster
  2. Helm installed on your local machine
  3. AWS CLI configured with appropriate permissions
  4. kubectl installed and configured to interact with your EKS cluster
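
Before moving on, a quick sanity check can save time. The commands below are a minimal sketch to confirm each prerequisite is in place (output will vary with your account and cluster):

aws sts get-caller-identity
helm version
kubectl get nodes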

Procedure

1. Add the Cluster Autoscaler Helm repository

  • First, add the official Cluster Autoscaler Helm repository:
helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm repo update
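
Optionally, you can confirm that the chart is now visible locally before installing it:

helm search repo autoscaler/cluster-autoscaler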

2. Create IAM policy for Cluster Autoscaler:

The Cluster Autoscaler needs specific permissions to interact with AWS services. Create an IAM policy with the required permissions:

Save the following policy document as cluster-autoscaler-policy.json:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup",
        "ec2:DescribeLaunchTemplateVersions"
      ],
      "Resource": "*",
      "Effect": "Allow"
    }
  ]
}

Then create the policy:

aws iam create-policy --policy-name AmazonEKSClusterAutoscalerPolicy --policy-document file://cluster-autoscaler-policy.json
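
If you want to confirm the policy exists (and grab its ARN for the next steps), one way to check is:

aws iam list-policies --query "Policies[?PolicyName=='AmazonEKSClusterAutoscalerPolicy'].Arn" --output text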

3. Create IAM OIDC Provider:

First, check if an OIDC provider is already associated:

aws eks describe-cluster --name <your-cluster-name> --query "cluster.identity.oidc.issuer" --output text

If this command returns an OIDC URL, you're good: you already have an OIDC provider. If it's empty or errors out, create one:

eksctl utils associate-iam-oidc-provider \
  --region=ap-south-1 \
  --cluster=testing \
  --approve
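
To double-check that the provider is now registered with IAM, you can list the OIDC providers in your account; the issuer URL returned by the describe-cluster command should appear in the output:

aws iam list-open-id-connect-providers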

4. Create IAM role for Cluster Autoscaler

This step allows Cluster Autoscaler to access EC2 and Auto Scaling resources securely. We create an IAM role and link it to a Kubernetes service account using IRSA (IAM Roles for Service Accounts).
Next, create an IAM role and attach the policy:

eksctl create iamserviceaccount \
  --cluster=<your-cluster-name> \
  --namespace=kube-system \
  --name=cluster-autoscaler \
  --attach-policy-arn=arn:aws:iam::<your-account-id>:policy/AmazonEKSClusterAutoscalerPolicy \
  --override-existing-serviceaccounts \
  --approve

This command does three things:

  • Creates a Kubernetes service account named cluster-autoscaler in the kube-system namespace
  • Creates an IAM role and attaches the Cluster Autoscaler policy
  • Links the IAM role to the service account using the OIDC identity provider
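
You can verify the link by inspecting the service account that eksctl created; the eks.amazonaws.com/role-arn annotation should point at the new IAM role:

kubectl get serviceaccount cluster-autoscaler -n kube-system -o yaml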


5. Deploy Cluster Autoscaler using Helm


Now, here's the trick that often goes unmentioned: when deploying the Cluster Autoscaler using Helm, you need to set the `awsRegion` value explicitly. This ensures that the autoscaler works correctly with your specific AWS region:

helm install cluster-autoscaler autoscaler/cluster-autoscaler \
  --namespace kube-system \
  --set autoDiscovery.clusterName=testing \
  --set awsRegion=ap-south-1 \
  --set rbac.serviceAccount.create=false \
  --set rbac.serviceAccount.name=cluster-autoscaler
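
One assumption behind autoDiscovery.clusterName is that your Auto Scaling groups carry the autoscaler discovery tags (k8s.io/cluster-autoscaler/enabled and k8s.io/cluster-autoscaler/<cluster-name>). Node groups created by eksctl are usually tagged automatically, but if yours are not, you can add the tags manually; the ASG name below is a placeholder:

aws autoscaling create-or-update-tags --tags \
  ResourceId=<your-asg-name>,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/enabled,Value=true,PropagateAtLaunch=false \
  ResourceId=<your-asg-name>,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/testing,Value=owned,PropagateAtLaunch=false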


6. Verify the deployment:


Check if the Cluster Autoscaler pod is running:

kubectl get pods -n kube-system | grep cluster-autoscaler
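
If the pod is running, the logs are the quickest way to confirm the autoscaler actually discovered your Auto Scaling groups. The deployment name depends on the chart and release name, so list it first:

kubectl get deployments -n kube-system | grep cluster-autoscaler
kubectl logs -n kube-system deployment/<autoscaler-deployment-name> --tail=50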


7. Check the Auto Scaling group policies for scale-out and scale-in on the nodes managed by the Autoscaler:


If they are not there, add these two policies for scale-out and scale-in:


aws autoscaling put-scaling-policy --policy-name ScaleOutPolicy --auto-scaling-group-name <your-auto-scaling-group-name> --scaling-adjustment 1 --adjustment-type ChangeInCapacity

aws autoscaling put-scaling-policy --policy-name ScaleInPolicy --auto-scaling-group-name <your-auto-scaling-group-name> --scaling-adjustment -1 --adjustment-type ChangeInCapacity
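
To confirm the policies were attached, you can describe the policies on the Auto Scaling group:

aws autoscaling describe-policies --auto-scaling-group-name <your-auto-scaling-group-name>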

With the EKS Cluster Autoscaler in place, we need to verify that it is working correctly, so we create a test deployment manifest for an nginx webserver and put the cluster under load.

These are the steps:

1. Check for active nodes:

kubectl get nodes

2. Create a test Deployment manifest to check the Autoscaler:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 20
  selector:
    matchLabels:
      app: nginx-webserver
  template:
    metadata:
      labels:
        app: nginx-webserver
    spec:
      containers:
        - name: nginx-container
          image: nginx:latest
          resources:
            requests:
              memory: "1Gi"
              cpu: 1

3. Apply this manifest and check whether the nodes scale out.

After deploying it, the node count grew from 2 to 10.
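
For reference, the apply and watch commands look roughly like this (the manifest filename is a placeholder):

kubectl apply -f webserver-deployment.yaml
kubectl get nodes --watch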

4. Now, to test scale-in, reduce the replica count from 20 to 2 and check whether the node count drops back down (from 10 to 2 in our case).

Apply this change and watch the nodes scale in.

As we can see, only 2 nodes remain because the load is low, so scale-in works successfully.
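
The same replica change can also be made without editing the manifest, using the deployment name from the example above:

kubectl scale deployment webserver --replicas=2
kubectl get nodes --watch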

Conclusion


This document guides you through setting up EKS Cluster Autoscaler to automatically scale your worker nodes based on pod requirements. By enabling OIDC, creating the necessary IAM role, and deploying the autoscaler, you ensure your cluster scales efficiently, reduces manual effort, and optimizes resource utilization.

Let the Cluster Autoscaler handle node scaling so you can focus on running your workloads, not managing infrastructure.
