EKS IPv4 Exhaustion

Problem Statement
Elastic Kubernetes Service (EKS) is widely adopted because it provides an upstream, certified conformant version of Kubernetes with backported security fixes, together with a managed experience for running performant, reliable and secure clusters. In a rapidly growing business, where the number of workloads deployed to EKS keeps increasing, Kubernetes administrators can run into a situation where new Pods created during scaling fail to initialize because they cannot get an IP address.

Background:
When a third-party networking plugin such as Calico, Cilium or Flannel is used, Node IPs and Pod IPs are assigned from different CIDRs: the Pod IP space comes from the network plugin's CIDR while the Node IP space comes from the VPC subnet, so Pods get IP addresses that are isolated from other services.
The situation is different when EKS is used with the AWS VPC CNI networking plugin, because this plugin assigns each Pod a private IPv4 or IPv6 address from your VPC. A Pod therefore has the same IP address inside the Pod as it does on the VPC network, which is intentional and eases communication between Pods and other AWS services.
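
To see this in practice, assuming you have kubectl access and $VPC_ID holds the ID of the cluster's VPC, you can compare Pod IPs with the VPC subnet ranges; with the VPC CNI they fall directly inside the node subnets:

# Pod IPs are taken straight from the VPC subnets when the AWS VPC CNI is in use
kubectl get pods -A -o wide

# Compare against the CIDR blocks of the subnets in the cluster VPC
aws ec2 describe-subnets --filters Name=vpc-id,Values=$VPC_ID --query 'Subnets[*].CidrBlock'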

Solution:
1. Enable IPv6: create the EKS cluster with the IPv6 option enabled (a short sketch of this option follows below).
2. Add secondary CIDR ranges to the existing EKS cluster.
We will discuss the second solution in detail and how we can achieve it via Terraform.
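
For reference, option 1 has to be chosen at cluster creation time, since the IP family of an existing cluster cannot be changed. A minimal sketch with the AWS CLI, where the cluster name, role ARN and subnet IDs are placeholders, could look like this:

# IPv6 can only be selected when the cluster is created, not afterwards
aws eks create-cluster \
  --name my-ipv6-cluster \
  --role-arn arn:aws:iam::111122223333:role/eks-cluster-role \
  --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb \
  --kubernetes-network-config ipFamily=ipv6

IPv6 clusters also have additional prerequisites (IPv6-enabled subnets and a recent VPC CNI version), which is why the second option is often preferred for existing clusters.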

Steps in Detail:

Create subnets with a new CIDR range

Considering our AWS Region as us-west-2:
1. To list all the Availability Zones in your AWS Region, run the following command:

aws ec2 describe-availability-zones --region us-west-2 --query 'AvailabilityZones[*].ZoneName'

2. Choose the Availability Zones where you want to add the subnets, and then assign those Availability Zones to variables. For example:

export AZ1=us-west-2a
export AZ2=us-west-2b
export AZ3=us-west-2c
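
The commands below also assume that $VPC_ID holds the ID of the cluster's VPC and that the 100.64.0.0/16 range has already been associated with that VPC as a secondary CIDR (a subnet can only be created from a CIDR block that belongs to the VPC). If it has not been associated yet, for example:

export VPC_ID=vpc-xxxxxxxxx

# Associate the new secondary CIDR range with the existing VPC
aws ec2 associate-vpc-cidr-block --vpc-id $VPC_ID --cidr-block 100.64.0.0/16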

3. To create new subnets in the VPC with the new CIDR range, run the following commands:

SUBNETA=$(aws ec2 create-subnet --cidr-block 100.64.0.0/19 --vpc-id $VPC_ID --availability-zone $AZ1 | jq -r .Subnet.SubnetId)
SUBNETB=$(aws ec2 create-subnet --cidr-block 100.64.32.0/19 --vpc-id $VPC_ID --availability-zone $AZ2 | jq -r .Subnet.SubnetId)
SUBNETC=$(aws ec2 create-subnet --cidr-block 100.64.64.0/19 --vpc-id $VPC_ID --availability-zone $AZ3 | jq -r .Subnet.SubnetId)
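
A quick way to confirm that the subnets were created with the expected ranges:

aws ec2 describe-subnets --subnet-ids $SUBNETA $SUBNETB $SUBNETC \
  --query 'Subnets[*].[SubnetId,CidrBlock,AvailabilityZone]' --output table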

4. (Optional) Add a name tag for your subnets by setting a key-value pair.

For example:

aws ec2 create-tags --resources $SUBNETA --tags Key=Name,Value=SubnetA
aws ec2 create-tags --resources $SUBNETB --tags Key=Name,Value=SubnetB
aws ec2 create-tags --resources $SUBNETC --tags Key=Name,Value=SubnetC

5. Associate your new subnets with a route table. To list all the route tables under the VPC, run the following command:

aws ec2 describe-route-tables --filters Name=vpc-id,Values=$VPC_ID | jq -r '.RouteTables[].RouteTableId'

export ROUTETABLE_ID=rtb-xxxxxxxxx

6. Associate the route table with all of the new subnets. For example:

aws ec2 associate-route-table --route-table-id $ROUTETABLE_ID --subnet-id $SUBNETA
aws ec2 associate-route-table --route-table-id $ROUTETABLE_ID --subnet-id $SUBNETB
aws ec2 associate-route-table --route-table-id $ROUTETABLE_ID --subnet-id $SUBNETC
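
Optionally, verify that all three new subnets now show up as associations on the route table:

aws ec2 describe-route-tables --route-table-ids $ROUTETABLE_ID \
  --query 'RouteTables[].Associations[].SubnetId'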

Configure the CNI plugin to use the newly created secondary CIDR via Terraform

The Terraform configuration takes two inputs:

var.eks_pod_subnet_ids: the subnet IDs created in the previous steps
var.availability_zones: the list of Availability Zones for which an ENIConfig has to be created
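
The Terraform itself is not reproduced here, but what it must configure on the cluster is well defined: enable custom networking on the aws-node DaemonSet and create one ENIConfig object per Availability Zone pointing at the new subnets. A minimal sketch of the equivalent steps with kubectl (the security group ID is a placeholder for your worker node security group) looks like this:

# Tell the VPC CNI to take Pod IPs from ENIConfig-defined subnets
kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
kubectl set env daemonset aws-node -n kube-system ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone

# One ENIConfig per Availability Zone, named after the zone and pointing at the new subnet
cat <<EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: $AZ1
spec:
  subnet: $SUBNETA
  securityGroups:
    - sg-xxxxxxxxx
EOF

Repeat the ENIConfig for $AZ2/$SUBNETB and $AZ3/$SUBNETC. In Terraform this is typically expressed as kubernetes_manifest (or kubectl provider) resources iterating over var.availability_zones and var.eks_pod_subnet_ids. Note that existing worker nodes have to be replaced (for example by rolling the node group) before Pods start receiving addresses from the new range.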

Summary:
With this approach, we can avoid running out of IPv4 addresses in our Kubernetes environment.

For more such technical blogs — cubensquare.com/blog
