Guidelines to Migrate from Self-Managed Kubernetes in AWS to Amazon EKS
Migration: yet another common term on the crowded streets of software architecture. Migration tasks in the software industry can be hectic, time-consuming, and painful, requiring multiple teams to engage, collaborate, and work towards the end goal of moving components to a newer environment. Though exhausting, every successful migration journey involves in-depth learning, effective knowledge sharing, and constructive collaboration, backed by a focused roadmap and plan.
In this blog, we will look at how we approached our major, challenging migration task: moving away from self-managed Kubernetes running on EC2 to EKS, the AWS managed Kubernetes service.
First Question: Why?
When this task was initially discussed, the first basic question on everyone's mind (developers, DevOps engineers, AWS architects, managers) was: why migrate?
Yes, the existing self-managed Kubernetes environment on EC2 was running without downtime, but the Kubernetes admin team observed many incidents that went unnoticed by other engineers:
1. The multi-master setup with 3 master nodes suffered CPU spikes, leaving 2 of the 3 nodes faulty.
2. During high-traffic events, the Calico networking component couldn't scale in proportion to the Kubernetes workloads.
3. Node autoscaling took a long time because the worker nodes were configured with older-generation AMIs.
4. The Kubernetes version was outdated, and a version upgrade felt risky.
5. No regular security patching was performed on the infrastructure components.
Best Fit Solution:
Move to a managed service model. Since our Kubernetes cluster was already set up on Amazon EC2 instances, an AWS-based solution was preferred, and we chose Elastic Kubernetes Service (EKS).
• Know your existing cluster:
o Current Kubernetes version, to check API compatibility
o Cluster provisioning method (kops, kubeadm, or other)
o Cluster add-ons
o Autoscaling configuration
o Kubernetes objects deployed in each namespace (DaemonSets, Deployments, StatefulSets, CronJobs, etc.)
o Volume information (PVs and PVCs)
o Network policies, cluster accessibility, and security group rules (ports, firewalls, routing)
o Kubernetes certificate management
o RBAC: how authentication and authorisation are handled
o High-availability configuration
o Worker node firewall configuration
o Namespace information and resource management (quotas)
o Workload deployment information
• How to build the EKS cluster? There are multiple ways (AWS-provided or third-party tooling) to create and manage an EKS cluster: EKS Blueprints, eksctl, the AWS Management Console, or the AWS CLI.
• EKS runs upstream Kubernetes, so like Kubernetes itself it has no built-in multi-tenant architecture; tenant isolation can instead be achieved by separating customers into namespaces.
• EKS add-on management: EKS Blueprints integrates well with Argo CD, which can be used to manage both workloads and add-ons. It automatically creates the required IAM roles and performs installation via Helm charts.
• Choose network adapters carefully. By default, AWS provides the VPC CNI plugin for networking. If you use a third-party CNI such as Calico, Cilium, Flannel, or Weave, you are responsible for maintaining it.
• Enable IPv6 for your cluster, or add a secondary CIDR, if your workloads are large and may run into IPv4 address exhaustion.
• Choose between managed node groups, self-managed node groups, and AWS Fargate for compute resources. Each has its own advantages and limitations depending on your use case.
• Service mesh analysis: service-to-service communication can be controlled efficiently using a service mesh. AWS recommends Istio or AWS App Mesh for working with EKS.
• EKS monitoring and logging: EKS control plane metrics can be scraped using Prometheus and visualised efficiently with Grafana, Datadog, or AppDynamics.
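To make the eksctl option above concrete, the cluster can be described declaratively and created with `eksctl create cluster -f cluster.yaml`. This is only a sketch; the cluster name, region, version, and node group sizes below are placeholder assumptions, not values from our setup:

```yaml
# cluster.yaml -- illustrative eksctl ClusterConfig (all values are placeholders)
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: demo-eks          # hypothetical cluster name
  region: us-east-1       # hypothetical region
  version: "1.29"         # pick a currently supported EKS version

managedNodeGroups:
  - name: workers
    instanceType: m5.large
    minSize: 2
    maxSize: 6
    desiredCapacity: 3
    privateNetworking: true   # keep worker nodes in private subnets
```

Keeping this file in version control makes the cluster reproducible in test and production environments.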
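For the namespace-based customer isolation mentioned above, a default-deny NetworkPolicy per tenant namespace is a common starting point. The namespace name here is hypothetical, and enforcing NetworkPolicy on EKS requires a CNI or policy engine that supports it (e.g. Calico or Cilium):

```yaml
# Block ingress from other namespaces; allow traffic within the tenant's own namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-tenant
  namespace: tenant-a        # hypothetical tenant namespace
spec:
  podSelector: {}            # applies to every pod in this namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}    # a bare podSelector matches only pods in the same namespace
```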
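As a sketch of the monitoring point above: the Kubernetes API server exposes a /metrics endpoint that an in-cluster Prometheus can scrape via service discovery. The job name is arbitrary, and the config assumes Prometheus runs inside the cluster with a service account permitted to reach the API server:

```yaml
# prometheus.yml fragment: scrape control plane metrics from the API server
scrape_configs:
  - job_name: kubernetes-apiservers        # arbitrary job name
    kubernetes_sd_configs:
      - role: endpoints
    scheme: https
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    relabel_configs:
      # Keep only the default/kubernetes:https endpoint, i.e. the API server itself
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
```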
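Most of the "know your existing cluster" checklist above can be gathered with kubectl. A minimal inventory sketch follows; it assumes kubectl is already configured against the existing self-managed cluster, so it cannot run anywhere else as-is:

```shell
#!/usr/bin/env sh
# Pre-migration inventory of the existing cluster.
# Assumes the current kubectl context points at the self-managed cluster.

kubectl version                                  # client/server versions (API compatibility)
kubectl get nodes -o wide                        # node count, OS images, kubelet versions
kubectl get deploy,ds,sts,cronjob -A             # workloads deployed per namespace
kubectl get pv,pvc -A                            # volume information
kubectl get networkpolicy -A                     # network policies in force
kubectl get clusterrolebinding,rolebinding -A    # RBAC: who is bound to what
kubectl get resourcequota,limitrange -A          # namespace quotas and limits
kubectl get ns                                   # namespace inventory
```

Capturing this output before the migration gives you a checklist to verify against once the same workloads are running on EKS.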
Migration Roadmap:
1. Build your own production-ready EKS cluster in a test environment.
2. Install and configure the primary and secondary add-ons.
3. Set up monitoring and alerting for the EKS cluster and its workloads.
4. Perform infrastructure load testing.
5. Derive a migration strategy for routing traffic to the new EKS cluster. Use a Route 53 weighted routing policy to control how much traffic reaches the new EKS cluster while the majority of requests are still served by the self-managed Kubernetes cluster.
6. Meet with the development teams to explain the EKS architecture and migration strategy.
7. Deploy services/workloads in the test environment.
8. Perform application functional and load/performance testing.
9. After sign-off, decide on a production date and shift traffic according to your migration strategy.
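The weighted routing in step 5 of the roadmap above can be sketched with the AWS CLI. The hosted zone ID, record name, and CNAME targets below are all hypothetical; shifting traffic gradually means re-running the command with updated weights (e.g. 90/10, then 50/50, then 0/100), and the command itself requires valid AWS credentials:

```shell
# Send ~10% of traffic to the new EKS ingress, keep ~90% on the old cluster.
# Z123EXAMPLE, app.example.com, and both CNAME targets are placeholders.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z123EXAMPLE \
  --change-batch '{
    "Changes": [
      {
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "app.example.com",
          "Type": "CNAME",
          "SetIdentifier": "eks-cluster",
          "Weight": 10,
          "TTL": 60,
          "ResourceRecords": [{"Value": "eks-ingress.example.com"}]
        }
      },
      {
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "app.example.com",
          "Type": "CNAME",
          "SetIdentifier": "self-managed-cluster",
          "Weight": 90,
          "TTL": 60,
          "ResourceRecords": [{"Value": "old-ingress.example.com"}]
        }
      }
    ]
  }'
```

Route 53 distributes queries in proportion to each record's weight, so setting the old cluster's weight to 0 at the end completes the cutover without a DNS record deletion.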
Follow us for more technical blogs — cubensquare.com/blog