
Ponniyin Selvan Characters And AWS Tools Comparison

Ponniyin Selvan is a Tamil historical novel written by Kalki Krishnamurthy, while AWS is a cloud computing platform provided by Amazon. However, if we were to draw comparisons based on certain characteristics, we could liken some Ponniyin Selvan characters to AWS tools as follows:

Aditha Karikalan – Amazon EC2: Aditha Karikalan was a great warrior and leader with a vast army at his disposal. Similarly, Amazon EC2 is a powerful and scalable cloud computing service that lets users launch and manage virtual servers.

Vandiyathevan – Amazon S3: Vandiyathevan was a messenger who travelled long distances to deliver messages to different people. Similarly, Amazon S3 is a simple storage service that lets users store and retrieve data from anywhere on the web.

Nandini – Amazon Rekognition: Nandini was a seductive and charming character with a way of manipulating people to get what she wanted. Similarly, Amazon Rekognition is a powerful image and video analysis tool that uses machine learning to recognize faces, objects, and scenes.

Arulmozhi Varman – Amazon DynamoDB: Arulmozhi Varman was a wise and intelligent ruler with a deep understanding of his kingdom and its people. Similarly, Amazon DynamoDB is a fast and flexible NoSQL database that can handle large amounts of data with ease.

Pazhuvettarayar – AWS CloudTrail: Pazhuvettarayar was a shrewd and cunning politician who was always trying to stay one step ahead of his enemies. Similarly, AWS CloudTrail provides a detailed record of user activity and API calls in the AWS environment, making it easier to identify and troubleshoot issues.

Poonguzhali – AWS Lambda: Poonguzhali was a resourceful and clever character who could find solutions to difficult problems. Similarly, AWS Lambda is a serverless computing service that lets users run code without having to worry about servers or infrastructure.
Kandanmaran – AWS Security Hub: Kandanmaran was a vigilant and cautious character, always on the lookout for threats to his kingdom. Similarly, AWS Security Hub helps users manage and prioritize security alerts and findings across their AWS accounts, making it easier to identify and remediate security risks.

Kundavai – Amazon CloudFront: Kundavai was a thoughtful and strategic thinker, always looking for ways to improve the well-being of her people. Similarly, Amazon CloudFront is a content delivery network that delivers content to customers faster by caching it at edge locations around the world.

Azhwarkadiyan – AWS Elastic Beanstalk: Azhwarkadiyan was a loyal and dependable friend, always ready to lend a helping hand. Similarly, AWS Elastic Beanstalk makes it easy to deploy, manage, and scale web applications without worrying about the underlying infrastructure.

These comparisons are purely for illustration and are not meant to be taken too seriously. Both Ponniyin Selvan and AWS are complex entities with their own unique characteristics and features, and each should be appreciated on its own terms.



Guidelines To Migrate From Self-Managed Kubernetes In AWS To Amazon EKS

Migration — yet another familiar term on the crowded streets of software architecture. Migration tasks in the software industry can be hectic, time-consuming, and painful, requiring multiple teams to engage, collaborate, and achieve the end goal of moving components to a newer environment. Exhausting as it is, every successful migration journey involves in-depth learning, effective knowledge sharing, and constructive collaboration with a focused roadmap and plan. In this blog, we will look at how we approached our most challenging migration task: moving away from self-managed Kubernetes running on EC2 to EKS, the AWS managed Kubernetes service.

First question: Why? When this task was initially discussed, the first question on everyone's mind (developer, DevOps engineer, AWS architect, manager) was: why migrate at all? Yes, the existing self-managed Kubernetes environment on EC2 was running without downtime, but the Kubernetes admin team was observing many incidents that went unnoticed by other engineers.

A few of the issues:
1. The multi-master setup with 3 master nodes faced CPU spikes, leaving 2 of the 3 nodes faulty.
2. During high-profile events, the networking component Calico could not scale in proportion to the Kubernetes workloads.
3. Node autoscaling took a long time because the worker nodes were configured with older-generation AMIs.
4. The Kubernetes version was outdated, and a version upgrade felt risky.
5. No regular security patching was done on the infrastructure components.

Best-fit solution: moving to a managed service model. As our Kubernetes cluster was already set up on Amazon EC2 instances, an AWS-based solution was preferred, and we chose Elastic Kubernetes Service (EKS).
Migration considerations:

• Know your existing cluster:
o Current Kubernetes version, to check API compatibility
o Cluster provisioning method (kops, kubeadm, or other)
o Cluster add-ons
o Autoscaling configurations
o Kubernetes objects deployed in namespaces — daemonsets, deployments, statefulsets, cronjobs, etc.
o Volume information — PVs and PVCs
o Network policies, cluster accessibility, and security group rules (ports, firewalls, routing)
o Kubernetes certificate management
o RBAC — how authentication and authorisation are handled
o High-availability configurations
o Worker node firewall configurations
o Namespace information and resource management (quotas)
o Workload deployment information
• How to build the EKS cluster? There are multiple ways (AWS-provided or third-party tooling) to create and manage an EKS cluster, such as EKS Blueprints, eksctl, the AWS Management Console, and the AWS CLI.
• EKS runs upstream Kubernetes, so like Kubernetes itself it does not natively support multi-tenant architecture, but this can be achieved by isolating customers using namespaces.
• EKS add-on management — EKS Blueprints integrates well with ArgoCD, which can be used to manage workloads and add-ons. It automatically creates the required IAM roles and performs installation via Helm charts.
• Choose network adapters carefully — AWS provides the AWS VPC CNI plugin for networking by default. If you use a third-party network CNI such as Calico, Cilium, Flannel, or Weave, you are responsible for its maintenance.
• Enable IPv6 for your cluster, or add a secondary CIDR, if your workloads are large and may run into IPv4 exhaustion.
• Choose between managed node groups, self-managed node groups, or AWS Fargate for compute resources. Each has its own advantages and limitations depending on your use case.
• Service mesh analysis — service-to-service communication can be controlled efficiently using a service mesh. AWS recommends Istio or AWS App Mesh for use with EKS.
• EKS monitoring and logging — EKS control plane metrics can be scraped using Prometheus and visualised efficiently using Grafana, Datadog, or AppDynamics.

Migration phases:
1. Build your own production-ready EKS cluster in a test environment.
2. Install and configure the primary and secondary add-ons.
3. Set up monitoring and alerting for the EKS cluster and workloads.
4. Perform infrastructure load testing — reference: https://aws.amazon.com/blogs/containers/load-testing-your-workload-running-on-amazon-eks-with-locust/
5. Derive a migration strategy for routing traffic to the new EKS cluster. Use a Route 53 weighted routing policy to control how much traffic reaches the new EKS cluster while the majority of requests are still served by the self-managed Kubernetes cluster.
6. Meet with the development teams to explain the EKS architecture and migration strategy.
7. Deploy services and workloads in the test environment.
8. Perform application functional and load/performance testing.
9. After sign-off, decide the production date and move traffic according to your migration strategy.
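The weighted-routing step above can be sketched with the AWS CLI. This is a minimal illustration, not our exact cutover script: the record name, load balancer hostnames, and hosted zone ID are hypothetical placeholders.

```shell
#!/bin/sh
# Sketch: send 10% of traffic to the new EKS cluster via a Route 53
# weighted routing policy, while 90% stays on the old cluster.
# All names and the zone ID below are hypothetical examples.
cat > change-batch.json <<'EOF'
{
  "Comment": "Shift 10% of traffic to the new EKS cluster",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "CNAME",
        "SetIdentifier": "self-managed-k8s",
        "Weight": 90,
        "TTL": 60,
        "ResourceRecords": [{"Value": "old-cluster-lb.example.com"}]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "CNAME",
        "SetIdentifier": "eks",
        "Weight": 10,
        "TTL": 60,
        "ResourceRecords": [{"Value": "eks-cluster-lb.example.com"}]
      }
    }
  ]
}
EOF

echo "Prepared weighted records (90/10 split):"
grep -E '"SetIdentifier"|"Weight"' change-batch.json

# Apply with the AWS CLI (requires credentials and a real hosted zone):
#   aws route53 change-resource-record-sets \
#       --hosted-zone-id Z123EXAMPLE \
#       --change-batch file://change-batch.json
```

Ratcheting the weights (90/10, then 50/50, then 0/100) gives a controlled rollout with an instant rollback path: re-run the same change batch with the old weights.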



TCP Connection Intermittent Failures

Problem Statement: Some of the TCP connections from instances in a private subnet to a specific destination through a NAT gateway succeed, but others fail or time out.

Causes: The cause of this problem might be one of the following:
• The destination endpoint is responding with fragmented TCP packets. NAT gateways do not support IP fragmentation for TCP or ICMP.
• The tcp_tw_recycle option is enabled on the remote server, which is known to cause issues when there are multiple connections from behind a NAT device.

What is tcp_tw_recycle? It is a Boolean setting that enables fast recycling of TIME_WAIT sockets. The default value is 0. When enabled, the kernel becomes more aggressive and makes assumptions about the timestamps used by remote hosts. It tracks the last timestamp used by each remote host and allows a socket to be reused if the timestamp has increased.

Solution: Verify whether the endpoint you are trying to connect to is responding with fragmented TCP packets by doing the following:
1. Use an instance in a public subnet with a public IP address to trigger a response from the specific endpoint large enough to cause fragmentation.
2. Use the tcpdump utility to verify that the endpoint is sending fragmented packets. Important: you must use an instance in a public subnet to perform these checks. You cannot use the instance from which the original connection was failing, nor an instance in a private subnet behind a NAT gateway or a NAT instance. Diagnostic tools that send or receive large ICMP packets will report packet loss; for example, the command ping -s 10000 example.com does not work behind a NAT gateway.
3. If the endpoint is sending fragmented TCP packets, you can use a NAT instance instead of a NAT gateway.

If you have access to the remote server, you can verify whether the tcp_tw_recycle option is enabled by doing the following:
1. From the server, run the following command.
cat /proc/sys/net/ipv4/tcp_tw_recycle

If the output is 1, then the tcp_tw_recycle option is enabled.
2. If tcp_tw_recycle is enabled, we recommend disabling it. If you need to reuse connections, tcp_tw_reuse is a safer option.

If you don't have access to the remote server, you can test by temporarily disabling the tcp_timestamps option on an instance in the private subnet and then connecting to the remote server again. If the connection succeeds, the earlier failures were likely caused by tcp_tw_recycle being enabled on the remote server. If possible, contact the owner of the remote server to verify whether this option is enabled and request that it be disabled.
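The checks above can be bundled into one small script. A minimal sketch, with one caveat worth knowing: tcp_tw_recycle was removed from the Linux kernel in version 4.12, so the file only exists on older kernels, and the script handles its absence.

```shell
#!/bin/sh
# Sketch: report the TIME_WAIT-related settings discussed above.
# tcp_tw_recycle was removed in Linux 4.12, so it may not exist here.
for opt in tcp_tw_recycle tcp_tw_reuse tcp_timestamps; do
    f=/proc/sys/net/ipv4/$opt
    if [ -f "$f" ]; then
        # 1 for tcp_tw_recycle means the problematic fast recycling is on
        echo "$opt = $(cat "$f")"
    else
        echo "$opt not present on this kernel"
    fi
done | tee tw-settings.txt

# On older kernels, disable recycling (requires root):
#   sysctl -w net.ipv4.tcp_tw_recycle=0
# If connections must be reused, the safer option is:
#   sysctl -w net.ipv4.tcp_tw_reuse=1
```

Note that sysctl -w changes are lost on reboot; to persist a setting, add it to /etc/sysctl.conf and run sysctl -p.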



Popular Load Balancers in AWS Explained Easily

Problem Statement: Amazon Web Services (AWS) offers several types of load balancers to distribute incoming network traffic across multiple resources, such as Amazon EC2 instances or containers. While designing an application and its infrastructure components, we reach a stage where we need to decide which load balancer to use. Here's an easy explanation of four common types:

Application Load Balancer (ALB): Think of the ALB as a smart traffic cop for web applications. It operates at the application layer (Layer 7) and can route traffic based on the content of the request, such as URL paths or headers. You can easily attach a Web Application Firewall (WAF) to protect against exploits. It is ideal for modern web applications, microservices, and API gateways.

By the way, what is Layer 7? There are seven layers in the OSI (Open Systems Interconnection) model. Layer 7 is the topmost one, the Application Layer, and directly interacts with user applications. It includes the HTTP, FTP, and SMTP protocols.

Network Load Balancer (NLB): The NLB is like a high-speed traffic router for TCP and UDP traffic. It operates at the transport layer (Layer 4), is highly scalable, and performs well with ultra-low latency. It is suited to handling massive numbers of connections, or to forwarding raw network packets.

Layer 4 – wait, another layer in the OSI model? Yes, it is called the Transport Layer. It ensures end-to-end communication and data integrity between two devices on a network. It includes the TCP and UDP protocols.

Classic Load Balancer (CLB): The CLB is the older generation and offers basic load-balancing capabilities. It balances traffic at both Layer 4 (TCP/UDP) and Layer 7 (HTTP/HTTPS). While still available, it is generally recommended to use the ALB or NLB instead for more advanced features and better performance.
Gateway Load Balancer (GWLB): The GWLB is primarily used where you need to distribute traffic across multiple network appliances, such as firewalls, intrusion detection systems (IDS), and other security or networking devices. It is highly available, with redundancy built in across multiple Availability Zones (AZs) to ensure fault tolerance. It helps improve network security by letting you integrate various security appliances and inspect traffic as it passes through. Just like the other load balancers in AWS, the GWLB uses target groups to direct traffic to specific resources — in this case, network appliances.

Suppose your network architecture includes multiple security appliances, such as firewalls and intrusion detection systems, that inspect incoming and outgoing traffic for threats. By placing a GWLB in front of these appliances, you can ensure that all traffic is evenly distributed across the security devices, helping you scale and secure your network effectively.

Remember, the choice of load balancer depends on your specific application's needs: the ALB is a popular choice for most modern web applications, the NLB for high performance and scalability, and the CLB for simple scenarios.

Summary: We hope you now have a clearer picture of the load balancers in AWS. If the protocol terms referred to here lost you, watch out for our upcoming blog on them.
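To make the ALB's Layer 7 routing concrete, here is a minimal sketch of a path-based listener rule using the AWS CLI. The listener and target group ARNs are hypothetical placeholders, shortened with "..." for readability.

```shell
#!/bin/sh
# Sketch: an ALB listener rule that forwards /api/* requests to a
# dedicated target group. This content-based routing is what makes
# the ALB a "smart traffic cop" at Layer 7.
cat > alb-conditions.json <<'EOF'
[
  {
    "Field": "path-pattern",
    "PathPatternConfig": { "Values": ["/api/*"] }
  }
]
EOF

echo "Rule condition: route requests matching /api/* to the API target group"
cat alb-conditions.json

# Apply with the AWS CLI (requires real ARNs and credentials):
#   aws elbv2 create-rule \
#       --listener-arn arn:aws:elasticloadbalancing:...:listener/app/my-alb/... \
#       --priority 10 \
#       --conditions file://alb-conditions.json \
#       --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/api/...
```

An NLB listener, by contrast, has no such conditions: it forwards every TCP or UDP connection on its port straight to a target group, which is exactly the Layer 4 vs Layer 7 distinction described above.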


DevOps & AWS Revolution: Sony Pictures’ Journey

The Digital Media Group (DMG) is a unit of Sony Pictures Technologies, which is part of Sony Pictures Entertainment, Inc. (SPE). SPE's global operations encompass motion picture production, acquisition, and distribution; television production, acquisition, and distribution; television networks; digital content creation and distribution; operation of studio facilities; and the development of new entertainment products, services, and technologies.

Sony Pictures and DevOps

Sony Pictures has embraced DevOps as a key part of its digital transformation. DevOps is a set of practices and tools that help organizations rapidly develop, test, and deploy software in a secure and reliable manner. By leveraging DevOps, Sony Pictures is able to accelerate the development and deployment of new products and services. Sony Pictures also uses Amazon Web Services (AWS) to help manage its infrastructure. AWS provides the computing power, storage, and networking capabilities that Sony Pictures needs to run its applications and services. With AWS, Sony Pictures can quickly scale up or down to meet its business needs.

Data Storage and Processing

Sony Pictures uses AWS to store and process its data and digital assets, ensuring that its content is secure and accessible. By leveraging Amazon S3, Sony Pictures can store large amounts of data in the cloud, allowing it to scale quickly and efficiently. AWS also enables Sony Pictures to process its data and digital assets quickly. With Amazon EC2, Sony Pictures can rapidly spin up instances to process its data, allowing it to launch new services and applications faster than ever before.

Benefits of DevOps and AWS

By using DevOps and AWS, Sony Pictures is able to quickly develop and deploy new products and services. This helps it stay competitive in the marketplace and respond quickly to customer needs. DevOps also helps ensure that its applications and services are secure and reliable.
AWS also helps Sony Pictures reduce costs. By leveraging the scalability of AWS, Sony Pictures can quickly scale up or down to meet its business needs without incurring additional costs, staying agile and responsive to customers.

Sony Pictures Technologies Develops a DevOps Solution with Stelligent to Create Always-Releasable Software

The Continuous Delivery solution brought DMG several benefits on AWS:
● More frequent, one-click releases
● Fewer internal constraints
● Higher levels of security
● Developers focus on value-adding features rather than running manual processes
● Elasticity, which reduces cost and idle resources

Working with Stelligent, DMG created a full-featured, automated cloud delivery system running on Amazon Web Services (AWS) infrastructure. The AWS components include the following:
● AWS CloudFormation for managing related AWS resources, provisioning them in an orderly and predictable fashion
● AWS OpsWorks for managing application stacks
● Amazon Virtual Private Cloud (VPC) for securely isolating cloud resources
● Amazon Elastic Compute Cloud (EC2) for compute instances
● Amazon Simple Storage Service (S3) for storage
● Amazon Route 53 for scalable and highly available Domain Name Service (DNS)
● AWS Identity and Access Management (IAM) for securely controlling users' access to AWS services and resources

Data Security and Compliance

Sony Pictures uses AWS to ensure that its data is secure and compliant with industry regulations. By leveraging Amazon RDS, Sony Pictures can store its data in a secure and compliant manner, meeting the requirements of its customers and partners. AWS also enables Sony Pictures to comply with industry regulations and standards such as HIPAA and GDPR, allowing it to protect its customers and partners.
Scalability and Efficiency

Sony Pictures uses AWS to quickly scale its infrastructure and launch new services and applications. By leveraging Amazon EC2, Sony Pictures can rapidly spin up instances to process its data, scaling quickly and efficiently. AWS also enables Sony Pictures to reduce costs and improve efficiency through cloud-based services such as Amazon S3, Amazon EC2, and Amazon RDS, allowing it to focus on its core business.

Conclusion

Sony Pictures is continuously improving its DevOps and AWS practices, leveraging the latest technologies and best practices to keep its applications and services secure and reliable. This helps it protect its customers and their data.

For more technical topics — follow us — cubensquare.com
